Monday, June 18, 2018

Where is the Edge, for Computing?

Edge computing is a subject of enormous interest for mobile service providers, some internet service providers and telcos, cloud computing and data center operators, and suppliers of enterprise software and devices, because it could create substantial new markets for communications services, solutions and computing facilities.

Of course, we have to define “edge,” and the answer can depend on the application. In some cases, such as an industrial sensor that helps decide when a piece of machinery is dangerously hot and must be shut down to prevent damage, the edge is the device itself.

In that case, communication with a remote network is not essential; the device must act autonomously.
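A minimal sketch of that device-local logic might look like the following. The threshold value and the shutdown hook are illustrative assumptions, not drawn from any particular product; the point is simply that the decision requires no network round trip at all.

```python
# Device-local (edge = the device) control logic: decide to shut down
# a machine based on a sensor reading, with no network dependency.
# MAX_SAFE_TEMP_C is an assumed, illustrative safety threshold.

MAX_SAFE_TEMP_C = 95.0

def on_temperature_reading(temp_c, shutdown):
    """Decide locally whether to shut the machine down.

    Returns True if a shutdown was triggered. Any cloud notification
    can happen afterward; it is not on the critical path.
    """
    if temp_c >= MAX_SAFE_TEMP_C:
        shutdown()
        return True
    return False
```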

In other cases, the edge might be someplace on the enterprise premises. In such cases, enterprise servers handle the processing load, and there is no need for wide area communications.

[Figure omitted; source: Industrial IoT Consortium]

The sweet spot for remote edge computing starts with use cases where required response times are measured in milliseconds but lie beyond the reach of a premises local area network, especially distributed networks of sensors intended to control the behavior of other processors in real time.

And latency, not bandwidth, is the value driver. In industrial settings, the value of monitoring for equipment failure often hinges on the ability to respond within seconds. In other cases, the value of notification (if not of a real-world response) is measured in minutes.

In other settings, such as autonomous vehicle support, response times in milliseconds might be necessary.
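Those tiers can be summarized in a simple decision rule. The boundaries below are rough assumptions drawn from the examples above (milliseconds, seconds, minutes), not industry standards:

```python
# Illustrative mapping from a use case's latency budget to where its
# compute should plausibly live. Tier boundaries are assumptions.

def placement_for(latency_budget_ms):
    if latency_budget_ms < 10:          # e.g. autonomous-vehicle control
        return "on-device or premises"
    if latency_budget_ms < 100:         # real-time sensor coordination
        return "remote edge"
    if latency_budget_ms < 60_000:      # monitoring measured in seconds
        return "regional cloud"
    return "any cloud region"           # notification measured in minutes
```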

And though it might seem frivolous, one clear consumer use case for millisecond response times is changing the channel on a 4K or 8K TV: users expect to press a button and immediately see full-motion, real-time content on the screen.

The issue in a 4K or 8K setting is that there is so much data to load that a noticeable, objectionable lag could occur if the content were not cached close to the edge of the delivery network.
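A back-of-envelope calculation shows why distance alone matters here. Light in fiber covers roughly 200 km per millisecond, so propagation delay scales directly with how far away the content sits; the distances below are illustrative assumptions:

```python
# Propagation-only round-trip time over fiber, ignoring queuing and
# processing. Light in fiber travels roughly 200 km per millisecond.

FIBER_KM_PER_MS = 200.0

def round_trip_ms(distance_km):
    """Round-trip propagation delay in milliseconds for one request."""
    return 2 * distance_km / FIBER_KM_PER_MS

# A cloud data center 2,000 km away adds about 20 ms per round trip,
# before any processing; an edge cache 20 km away adds about 0.2 ms.
```

A channel change involves several such round trips (request, manifest, initial segments), so the per-trip penalty compounds quickly when content is served from a distant data center.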

The point is that the new value comes from use cases where transmission delay--to and from a cloud data center--cannot be tolerated, and where data processing must happen locally, and fast.

For many other applications, notification times are less stringent and less latency-bound, so conventional cloud computing still works.

