Sunday, July 31, 2022

How Long for Internet to Achieve Ubiquitous 1,000-Fold Computational Increase?

Some believe the next-generation internet could require a three-order-of-magnitude (1,000 times) increase in computing power to support widespread artificial intelligence, 3D rendering, metaverse and distributed applications. 


What that will entail depends on how fast the new infrastructure has to be built. If we are able to upgrade infrastructure roughly on the past timetable, we would expect to see a 1,000-fold improvement in supported computation perhaps every couple of decades. 


That assumes we have pulled a number of levers beyond expected advances in processor power, processor architectures and declines in cost per compute cycle. Network architectures and appliances also have to change. Quite often, so do applications and end user demand. 


The mobile business, for example, has taken about three decades to achieve a 1,000-fold change in data speeds. We can assume raw compute changes faster, but even then, based strictly on Moore’s Law rates of improvement in computing power alone, it might still require two decades to achieve a 1,000-fold change. 
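As a rough sanity check (my arithmetic, not a figure from the source): a 1,000-fold increase is about ten doublings, so a Moore's-Law-style cadence of one doubling every two years lands at roughly two decades. A minimal sketch, assuming that two-year doubling period:

```python
import math

# Doublings needed for a 1,000-fold increase (2**10 = 1,024)
doublings = math.log2(1000)

# Assumed doubling period in years (an assumption, not a fixed law)
doubling_period_years = 2

years_to_1000x = doublings * doubling_period_years
print(round(doublings, 2))       # → 9.97
print(round(years_to_1000x, 1))  # → 19.9, i.e. roughly two decades
```

Stretch the doubling period to three years, as some expect for late-stage semiconductor scaling, and the same arithmetic gives about thirty years instead.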


source: Springer 


And all of that assumes underlying demand drives the pace of innovation. 


For digital infrastructure, a 1,000-fold increase in supplied computing capability might well require any number of changes. Chip density probably has to change in different ways. More use of application-specific processors seems likely. 


A revamping of cloud computing architecture towards the edge, to minimize latency, is almost certainly required. 


Rack density likely must change as well, as it is hard to envision a 1,000-fold increase in rack real estate over the next couple of decades. Nor does it seem likely that cooling and power requirements can simply scale linearly by 1,000 times. 


Persistent 3D virtual worlds would seem to be the driver for such demand.  


Low-latency apps such as persistent environments also should increase pressure to prioritize traffic, move computing closer to the actual end user location and possibly lead to new forms of content handling and computation to support such content. 


Where content delivery networks today operate to reduce latency, content computation networks would also be necessary, doing the local and fast processing needed to support immersive 3D experiences that also are persistent. 


How we supply enough fast compute to handle rendering, for example, could be a combination of device and edge computing architectures. 


Among the other issues is whether chip capabilities can scale fast enough to support such levels of compute intensity. 


So long as we have enough levers to pull, a 1,000-fold increase in computing availability within two or three decades is possible. Moore's Law suggests it is possible, assuming we can keep up the rate of change in a variety of ways, even if, at the physical level, Moore’s Law ceases to operate.  


But that also means fully immersive internet experiences, used by everybody, all the time, would have to be accompanied by business models to match. 


So in practical terms, perhaps some users and supported experiences will use 1,000 times more computational support. But it is unlikely that the full internet will have evolved to do so.

