Sunday, March 11, 2018

Ultra-Low Latency is the Defining Characteristic of the Next Era of Networks and Business Models

Eras now change so fast in the computing space that it is hard to come up with nomenclature that makes sense. It used to be easy.

In terms of platforms, we moved from mainframe to minicomputers to personal computers (and client-server) to mobile or ambient computing. Some might say the next evolution is “pervasive computing” based on widespread internet of things adoption.

In the area of core technology, we moved from vacuum tubes to transistors to integrated circuits to microprocessors and graphics processors, and now we seem headed for an era where artificial intelligence becomes the driver.

In terms of apps, we have moved from enterprise back office to desktop productivity to internet apps on smartphones. It is the recent era that seems most diverse. Some people might say we are now in the age of mobile computing. Others might say it is more about “internet computing” or “cloud computing” or “social computing.”

Recent “ages” or “eras” change so fast we probably are not talking about ages or eras at all. Within a span of seven years, for example, we might be said to have moved through the ages of web, search, social, cloud and mobile.

Obviously we are being too granular. An “age” or “era” should not be so short as to last only a year or two. That is a buzzword or a theme, certainly, but not an “age” or “era.”

At the risk of contributing to the confusion, we might be on the cusp of an era of ultra-low-latency networks, built to support new applications that require such performance; ultra-low latency is one of the defining performance characteristics of 5G.

But there are other big changes. A new edge computing infrastructure is believed necessary to support the ultra-low-latency apps for which 5G will provide the access. Inside the core network, new methods of creating virtual private networks (including network slicing) might also sometimes play a role in creating low-latency networks.

Most likely, the coming era will not be called anything like “ultra-low latency.” It simply is not catchy enough. “Era of IoT” or “ambient computing,” while not especially easy to grasp, might make more sense.

Still, the key change is latency performance, though most casual observers will point to the huge increases in bandwidth as the defining characteristic. But when Nokia Bell Labs consultants talk about “creating time,” they are speaking broadly about latency performance, not bandwidth.

And to the extent that 5G revenue upside comes from internet of things apps (sensor connections), the clearest upside comes in the ultra-low-latency areas, since much of the IoT sensor access requirement might be met by existing 4G or fixed connections (Wi-Fi, for example).

Many sensors will have essentially zero latency dependence, as they might report data only once a day or at other specified intervals.

Other applications, including autonomous vehicles, virtual or augmented reality, remote surgery or instant startup of 8K video content, might well require latency below 10 milliseconds.
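
One way to see why figures like that point toward edge computing is simple propagation arithmetic. The sketch below is a rough, illustrative calculation, not a measurement: the distances and the roughly five-microseconds-per-kilometer fiber propagation figure are assumptions, and it ignores radio access, queuing and processing delay entirely.

# Back-of-the-envelope look at why sub-10 ms budgets push compute toward the edge.
# The distances and the ~5 microsecond-per-km fiber figure are illustrative assumptions.

FIBER_DELAY_US_PER_KM = 5  # light in fiber covers roughly 200 km per millisecond

def round_trip_ms(distance_km: float) -> float:
    """Propagation-only round-trip time; ignores radio, queuing and processing delay."""
    return 2 * distance_km * FIBER_DELAY_US_PER_KM / 1000

for label, km in [("distant cloud region", 1500), ("metro data center", 150), ("edge site", 15)]:
    print(f"{label:>20}: ~{round_trip_ms(km):.2f} ms round trip")

# Approximate output:
# distant cloud region: ~15.00 ms round trip
#    metro data center: ~1.50 ms round trip
#            edge site: ~0.15 ms round trip

Even on this optimistic accounting, a server 1,500 kilometers away cannot meet a 10 millisecond budget, which is the basic argument for the edge computing build-out mentioned above.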


The implications are fairly clear: if the next big wave of revenue opportunity for service providers comes from internet of things (sensor) applications, not humans using smartphones, then ultra-low latency is the area where 5G is unique, since 5G is specifically built to deliver it.

So it is not bandwidth, as such, that clearly differentiates pre-5G networks from 5G and future networks. Latency performance is where the difference lies, even though the bandwidth increases will be significant.

Even advanced 4G, Wi-Fi and other networks will routinely operate in the hundreds of megabits per second to gigabit range, and most apps will not require that much bandwidth.

It will be some time until we come up with a universally accepted nomenclature for the coming era that includes 5G and edge computing. As has been the case in the past, we will be able to characterize the era by its components, its computing model, the devices used or its lead applications.

And, cumbersome though it might be, latency will figure in nearly every characterization, and possibly in all descriptors. Low latency really is the key when it comes to the business model, the changes in architecture and the changes in revenue model.

