Thursday, October 20, 2022

Can VR/AR or Metaverse Wait 2 Decades for the Compute/Connectivity Platform to be Built?

The Telecom Infra Project has formed a group to look at metaverse-ready networks. Whether one accepts the notion of "metaverse" or not, virtually everyone agrees that future experiences will include wider use of extended, augmented or virtual reality.


And that is certain to affect both computing and connectivity platforms, much as entertainment video and gaming have shaped network demands for latency and capacity.


The metaverse, or AR and VR on their own, will deliver immersive experiences that require better performance from both fixed and mobile networks, TIP says.


And therein lie many questions. If we assume both ultra-high bandwidth and ultra-low latency for the most-stringent applications, both "computing" and "connectivity" platforms will have to be adjusted in some ways.


Present thinking includes more use of edge computing and, probably, quality-assured bandwidth in some form. But it is not simply a matter of "what" will be required, but also "when" and "where" those resources will be required.


As always, any set of performance requirements might be satisfied in a number of ways. What blend of local versus remote computing will work? And how "local" is good enough? What mix of local distribution (Wi-Fi, Bluetooth, 5G and other) is feasible? When can--or should--remote resources be invoked?
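As one way to think about "how local is good enough," here is a minimal back-of-envelope sketch. The 20 ms motion-to-photon budget, the 15 ms reserved for processing and the access network, and the fiber propagation speed are illustrative assumptions, not figures from TIP:

```python
# Rough check of how far away rendering/compute can sit for a given
# round-trip latency budget. Illustrative assumptions only: light in
# fiber travels at roughly 200,000 km/s (about 200 km per millisecond,
# one way), and part of the budget is consumed off the wire.

FIBER_KM_PER_MS = 200.0  # approximate one-way fiber distance per millisecond

def max_edge_distance_km(budget_ms: float, non_propagation_ms: float) -> float:
    """Maximum one-way fiber distance, given a round-trip latency budget
    and the share of it consumed by processing, rendering and the
    access/air interface."""
    propagation_ms = budget_ms - non_propagation_ms  # round trip on fiber
    return (propagation_ms / 2.0) * FIBER_KM_PER_MS

# Example: a 20 ms motion-to-photon budget with 15 ms reserved for
# everything except fiber propagation leaves ~5 ms round trip on fiber,
# so the remote compute can sit at most ~500 km away.
print(max_edge_distance_km(20.0, 15.0))  # -> 500.0
```

On those assumptions, the compute can sit at most a few hundred kilometers away--a metro or regional edge site, not a distant hyperscale region.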


And can all that be done relying on Moore's Law rates of improvement in computing, Edholm's Law of access bandwidth growth or Nielsen's Law of internet access speed? If improvement must come faster than those historic rates, where are the levers to pull?


The issue really is timing. Left to its own internal logic, headline-speed service in most countries will reach terabits per second by perhaps 2050. The problem for metaverse or VR experience providers is that they might not be able to wait that long.


That means the top-end home broadband speed could be 85 Gbps to 100 Gbps by about 2030. 

source: NCTA  
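For a rough sense of how that projection compounds, here is a minimal sketch. The ~4 Gbps 2022 headline tier is a hypothetical baseline chosen for illustration; the NCTA chart may use different inputs:

```python
# Projecting headline (top marketed tier) home broadband speed under
# Nielsen's Law: roughly 50 percent growth per year.
# Hypothetical baseline for illustration: ~4 Gbps headline tier in 2022.

BASE_YEAR = 2022
BASE_SPEED_GBPS = 4.0
ANNUAL_GROWTH = 1.5  # Nielsen's Law: +50 percent per year

def headline_speed_gbps(year: int) -> float:
    """Projected headline speed in Gbps for a given year."""
    return BASE_SPEED_GBPS * ANNUAL_GROWTH ** (year - BASE_YEAR)

for year in (2024, 2026, 2028, 2030):
    print(f"{year}: ~{headline_speed_gbps(year):.0f} Gbps")

# Approximate output:
# 2024: ~9 Gbps
# 2026: ~20 Gbps
# 2028: ~46 Gbps
# 2030: ~103 Gbps   <- in the same ballpark as the 85-100 Gbps cited above
```

The point is not the precise numbers but how quickly a 50 percent annual rate compounds--and how sensitive the result is to the assumed starting tier.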


But most consumers will not be buying service at such rates. Perhaps fewer than 10 percent will do so. So what could developers expect as a baseline? 10 Gbps? Or 40 Gbps? And is that sufficient, all other things considered? 


And is access bandwidth the real hurdle? Intel argues that the metaverse will require computing resources 1,000 times greater than today's. Can Moore's Law rates of improvement supply that? Sure, given enough time.


As a rough estimate, vastly improved platforms--beyond Nielsen's Law rates of improvement--might be needed within a decade to support widespread VR/AR or metaverse use cases, however one wishes to frame the matter.


Though the average or typical consumer does not buy the "fastest possible" tier of service, the growth of headline-tier speed since the dial-up era has been remarkably consistent--linear on a logarithmic scale.


And that growth trend--50 percent per year speed increases, known as Nielsen's Law--has operated since the days of dial-up internet access.


The simple question is: "If the metaverse requires 1,000 times more computing power than we generally use at present, how do we get there within a decade?" Given enough time, the normal increases in computational power and access bandwidth would get us there, of course.


But the metaverse, or extensive AR and VR, might require that the digital infrastructure foundation already be in place before apps and environments can be created.


What that will entail depends on how fast the new infrastructure has to be built. If we are able to upgrade infrastructure roughly on the past timetable, we would expect to see a 1,000-fold improvement in computation support perhaps every couple of decades. 


That assumes we have pulled a number of levers beyond expected advances in processor power and processor architectures and declines in cost per compute cycle. Network architectures and appliances also have to change. Quite often, so do applications and end-user demand.


The mobile business, for example, has taken about three decades to achieve a 1,000-fold change in data speeds. We can assume raw compute changes faster, but even then, based strictly on Moore's Law rates of improvement in computing power, it might still require two decades to achieve a 1,000-fold change.


source: Springer 
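To make that arithmetic explicit, here is a minimal sketch of how long a 1,000-fold gain takes at different effective doubling periods. The two-year cadence is the conventional Moore's Law figure; the shorter periods are hypothetical accelerated cases:

```python
import math

# A 1,000-fold gain is roughly ten doublings (2 ** 10 = 1,024), so the
# time required is about ten times the effective doubling period.

def years_to_multiply(factor: float, doubling_period_years: float) -> float:
    """Years needed to grow by `factor` when capability doubles every
    `doubling_period_years` years."""
    return math.log2(factor) * doubling_period_years

for period in (2.0, 1.5, 1.0):  # 2.0 years ~ conventional Moore's Law cadence
    print(f"doubling every {period} years -> 1,000x in "
          f"~{years_to_multiply(1_000, period):.0f} years")

# Approximate output:
# doubling every 2.0 years -> 1,000x in ~20 years
# doubling every 1.5 years -> 1,000x in ~15 years
# doubling every 1.0 years -> 1,000x in ~10 years
```

In other words, compressing a 1,000-fold gain into a decade means roughly halving the effective doubling period, which is where levers beyond raw process scaling--architectures, accelerators, edge placement--would have to come in.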


And all of that assumes underlying demand drives the pace of innovation.


For digital infrastructure, a 1,000-fold increase in supplied computing capability might well require any number of changes. Chip density probably has to improve in new ways. More use of application-specific processors seems likely.


A revamping of cloud computing architecture towards the edge, to minimize latency, is almost certainly required. 


Rack density likely must change as well, as it is hard to envision a 1,000-fold increase in rack real estate over the next couple of decades. Nor does it seem likely that cooling and power requirements can simply scale up 1,000-fold.


So the timing of capital investment in excess of current requirements is really the issue. How soon? How much? What type?


The issue is how, and when, to accelerate rates of improvement. Can widespread use of AR/VR or the metaverse happen if we must wait two decades for the platform to be built?
