Business author Will Willis (who was not talking about edge computing) coined the phrase "if you're not living on the edge, you're taking up too much space."
One of the complexities of "edge computing" is that, in a literal sense, every computing resource lives on the edge. But the clients of a particular resource might be located nearby or far away. And that is the real issue: "computing on the edge," not "living at the edge."
Every hyperscale data center, smartphone, smart speaker, personal computer, smartwatch and industrial sensor "lives on the edge." Each is fixed in space and connects over some access link to the core networks and internet. Everything is physically at the edge.
The exception, some might argue, is devices communicating over a satellite link. In those scenarios, the "access" link and the "wide area network" connection often are indistinguishable. We can argue about whether the repeater (the satellite) sits in the core or is simply another network edge.
But the issue is distributed computing operations: computing "somewhere else than where the device is." The "cloud," remember, is an abstraction, not a place.
“Edge computing places high-performance compute, storage and network resources as close as possible to end users and devices,” the State of the Edge report says. The advantages include reduced WAN transport costs and lower latency.
And latency is the key issue. Human-scale apps can tolerate response times as long as a few seconds; a delay is mostly an inconvenience.
Annoying, yes, but not life-threatening. By comparison, for applications that operate at machine scale, where latency is measured in microseconds, even small delays in data transmission can have severe, even fatal, consequences.
To be sure, debate exists about whether computing directly on an edge device can be considered "edge computing." Processing can and will occur wherever it is best placed for any given application, but the general understanding is that edge computing implies resources on the device edge that are as close as possible to the end user, both physically and logically.
The "infrastructure edge" refers to IT resources positioned on the network operator or service provider side of the last-mile network, such as at a cable headend or at the base of a cell tower.
In such cases, the primary building blocks are edge data centers, which are typically placed at five-mile to 10-mile intervals in urban and suburban environments.
The difference between device edge and infrastructure edge might be characterized as "zero hop" (computing on the device itself) versus "one short hop," with a total round-trip latency of a few milliseconds.
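To see why "one short hop" stays within a few milliseconds, it helps to run the propagation numbers. The sketch below is a back-of-the-envelope calculation, not a measurement: the fiber speed (~200 km per millisecond, roughly two-thirds the speed of light) is a standard rule of thumb, and the distances are illustrative assumptions.

```python
# Back-of-the-envelope propagation latency (illustrative, assumed figures).
# Light in fiber travels at roughly two-thirds of c, or ~200 km per ms.

FIBER_KM_PER_MS = 200.0  # one-way propagation, km of fiber per millisecond

def propagation_rtt_ms(distance_km: float) -> float:
    """Round-trip propagation delay over fiber, ignoring processing and queuing."""
    return 2 * distance_km / FIBER_KM_PER_MS

# "One short hop" to an edge data center ~16 km (10 miles) away:
edge_rtt = propagation_rtt_ms(16)      # ~0.16 ms round trip
# A distant cloud region ~2,000 km away:
cloud_rtt = propagation_rtt_ms(2000)   # ~20 ms round trip

print(f"edge: {edge_rtt:.2f} ms, cloud: {cloud_rtt:.2f} ms")
```

Even after adding processing and queuing overhead, the short-hop path leaves most of a few-millisecond budget intact, while the long-haul path consumes it on propagation alone.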
The infrastructure edge, in many cases, will include an access edge, directly connected to clients, as well as an aggregation edge two network hops from any end user. The aggregation edge both simplifies connection to other cloud resources and provides a more powerful analytical capability that is still close to the device edge.
The access edge might be thought of as providing real-time processing tasks, while the aggregation edge handles less-immediate but higher-order analytical tasks (such as trend analysis).
Individual edge data centers might typically feature 50 to 150 kilowatts of capacity. Hyperscale data centers can feature scores of megawatts, roughly three orders of magnitude, or about 1,000 times, more capacity.
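The "three orders of magnitude" claim can be checked with simple arithmetic. The edge figures (50 to 150 kW) come from the text; "scores of megawatts" is interpreted here, as an assumption, as 50 to 100 MW.

```python
import math

# Capacity figures: edge range from the text; hyperscale range is an
# assumed reading of "scores of megawatts" (50-100 MW), in kilowatts.
edge_kw = (50, 150)
hyperscale_kw = (50_000, 100_000)

lo = hyperscale_kw[0] / edge_kw[1]   # smallest ratio: ~333x
hi = hyperscale_kw[1] / edge_kw[0]   # largest ratio: ~2,000x

print(f"ratio spans ~{lo:.0f}x to ~{hi:.0f}x "
      f"({math.log10(lo):.1f} to {math.log10(hi):.1f} orders of magnitude)")
```

The span runs from a few hundred to a couple of thousand times, centered near 1,000, which is consistent with the rough "three orders of magnitude" characterization.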