
Thursday, February 23, 2023

Can Compute Increase 1000 Times to Support Metaverse? What AI Processing Suggests

Metaverse at scale implies some fairly dramatic increases in computational resources and, to a lesser extent, bandwidth. 


Some believe the next-generation internet could require a three-order-of-magnitude (1,000 times) increase in computing power, to support lots of artificial intelligence, 3D rendering, metaverse and distributed applications. 


The issue is how that compares with historical increases in computational power. In the past, we would expect to see a 1,000-fold improvement in computation support perhaps every couple of decades. 


Will that be fast enough to support ubiquitous metaverse experiences? There are reasons for both optimism and concern. 


The mobile business, for example, has taken about three decades to achieve a 1,000-times change in data speeds. We can assume raw compute changes faster, but even then, based strictly on Moore’s Law rates of improvement in computing power alone, it might still require two decades to achieve a 1,000-times change. 


source: Springer 
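A quick back-of-the-envelope check shows why Moore's Law-style improvement puts a 1,000-fold gain roughly two decades out. The sketch below uses assumed doubling periods for illustration; they are not taken from the chart above.

import math

def years_to_multiple(target_multiple, doubling_period_years):
    # Years needed if capability doubles every doubling_period_years.
    # A 1,000x gain is roughly 2^10, so about ten doublings are required.
    return math.log2(target_multiple) * doubling_period_years

print(years_to_multiple(1_000, 2.0))   # ~19.9 years: roughly two decades at a two-year doubling
print(years_to_multiple(1_000, 1.5))   # ~14.9 years if the doubling cadence is faster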


For digital infrastructure, a 1,000-fold increase in supplied computing capability might well require any number of changes. Chip density probably has to change in different ways. More use of application-specific processors seems likely. 


A revamping of cloud computing architecture towards the edge, to minimize latency, is almost certainly required. 


Rack density likely must change as well, as it is hard to envision a 1,000-fold increase in rack real estate over the next couple of decades. Nor does it seem likely that cooling and power requirements can simply scale linearly by 1,000 times. 


Still, there is reason for optimism. Consider the advances in computational capability supporting artificial intelligence and generative AI, for use cases such as ChatGPT. 


source: Mindsync 


“We've accelerated and advanced AI processing by a million x over the last decade,” said Jensen Huang, Nvidia CEO. “Moore's Law, in its best days, would have delivered 100x in a decade.”


“We've made large language model processing a million times faster,” he said. “What would have taken a couple of months in the beginning, now it happens in about 10 days.”
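Put side by side, and assuming a simple compound rate over a ten-year window (an assumption for illustration, not Nvidia's framing), the two claims imply very different annual improvement rates.

million_x_per_year = 1_000_000 ** (1 / 10)   # ~3.98x per year implied by "a million x over the last decade"
moores_100x_per_year = 100 ** (1 / 10)       # ~1.58x per year implied by "100x in a decade"

print(f"AI processing: ~{million_x_per_year:.2f}x per year")
print(f"Moore's Law:   ~{moores_100x_per_year:.2f}x per year")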


In other words, vast increases in computational power might well hit the 1,000 times requirement, should it prove necessary. 


And improvements on a number of scales will enable such growth, beyond Moore’s Law and chip density. As it turns out, many parameters can be improved. 


source: OpenAI 
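The arithmetic behind the point that many parameters can be improved is multiplicative: several modest gains, applied together, compound into one large one. The factors below are assumptions chosen only to show how the math works, not forecasts.

levers = {
    "process and transistor density": 8,
    "domain-specific accelerators": 10,
    "software and algorithms": 5,
    "memory, interconnect and system design": 3,
}

total = 1
for factor in levers.values():
    total *= factor

print(f"combined gain: {total}x")   # 1,200x from four modest levers multiplied together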


“No AI in itself is an application,” Huang said. Preprocessing and post-processing often represent half or two-thirds of the overall workload, he pointed out. 

By accelerating the entire end-to-end pipeline, from data ingestion and preprocessing all the way through to post-processing, “we're able to accelerate the entire pipeline versus just accelerating half of the pipeline,” said Huang. 
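Huang's pipeline argument is essentially Amdahl's Law: the part of the workload left unaccelerated caps the overall gain. A minimal sketch with assumed figures (the 100x factor and the 50 percent split are illustrations, not Nvidia numbers):

def overall_speedup(accelerated_fraction, acceleration_factor):
    # Amdahl's Law: overall gain when only part of the workload is accelerated.
    return 1 / ((1 - accelerated_fraction) + accelerated_fraction / acceleration_factor)

print(overall_speedup(0.5, 100))   # ~1.98x: accelerate only the model, and the untouched half dominates
print(overall_speedup(1.0, 100))   # 100x: accelerate the whole pipeline, ingestion through post-processing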

The point is that metaverse requirements--even assuming a 1,000-fold increase in computational support within a decade or so--seem feasible, given what is happening with artificial intelligence processing gains.


Thursday, October 20, 2022

Can VR/AR or Metaverse Wait 2 Decades for the Compute/Connectivity Platform to be Built?

The Telecom Infra Project has formed a group to look at metaverse-ready networks. Whether one accepts the notion of “metaverse” or not, virtually everyone agrees that future experiences will include use of extended, augmented or virtual reality on a wider scale. 


And that is certain to affect both computing and connectivity platforms, in the same way that entertainment video and gaming have shaped network performance demands, in terms of latency performance and capacity. 


The metaverse or just AR and VR will deliver immersive experiences that will require better network performance, for both fixed and mobile networks, TIP says. 


And therein lie many questions. If we assume both ultra-high data bandwidth and ultra-low latency for the most-stringent applications, both “computing” and “connectivity” platforms will be adjusted in some ways. 


Present thinking includes more use of edge computing and probably quality-assured bandwidth in some form. But it is not simply a matter of "what" will be required, but also "when" and "where" resources will be required.


As always, any set of performance requirements might be satisfied in a number of ways. What blend of local versus remote computing will work? And how “local” is good enough? What mix of local distribution (Wi-Fi, bluetooth, 5G and other) is feasible? When can--or should--remote resources be invoked? 
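One way to frame the "how local is good enough" question is a simple latency budget. The figures below (fiber propagation speed, a fixed processing allowance and a 20-millisecond motion-to-photon target) are rough illustrative assumptions, not TIP requirements.

SPEED_IN_FIBER_KM_PER_MS = 200.0   # light in fiber covers roughly 200 km per millisecond

def round_trip_ms(distance_km, processing_ms=5.0):
    # Round-trip propagation plus an assumed fixed processing allowance.
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS + processing_ms

MOTION_TO_PHOTON_BUDGET_MS = 20.0  # commonly cited comfort target for VR (an assumption here)

for label, km in [("on-device", 0), ("metro edge", 50), ("regional cloud", 1_000), ("distant cloud", 4_000)]:
    rtt = round_trip_ms(km)
    verdict = "within budget" if rtt <= MOTION_TO_PHOTON_BUDGET_MS else "too slow"
    print(f"{label:15s} {rtt:6.1f} ms  {verdict}")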


And can all that be done relying on Moore’s Law rates of improvement, Edholm’s Law of access bandwidth improvement or Nielsen’s Law of internet access speed? If we must create improvements at faster rates than simply relying on historic rates of improvement, where are the levers to pull?


The issue really is timing. Left to its own internal logic, headline-speed services in most countries will reach terabits per second by perhaps 2050. The problem for metaverse or VR experience providers is that they might not be able to wait that long. 


That means the top-end home broadband speed could be 85 Gbps to 100 Gbps by about 2030. 

source: NCTA  


But most consumers will not be buying service at such rates. Perhaps fewer than 10 percent will do so. So what could developers expect as a baseline? 10 Gbps? Or 40 Gbps? And is that sufficient, all other things considered? 


And is access bandwidth the real hurdle? Intel argues that metaverse will require computing resources 1,000 times better than today. Can Moore’s Law rates of improvement supply that degree of improvement? Sure, given enough time. 


As a rough estimate, vastly improved platforms--beyond Nielsen’s Law rates of improvement--might be needed within a decade to support widespread use of VR/AR or metaverse use cases, however one wishes to frame the matter. 


Though the average or typical consumer does not buy the “fastest possible” tier of service, the growth of headline-tier speeds since the time of dial-up access has been remarkably consistent (linear on a logarithmic scale). 


And that growth trend--speed increases of about 50 percent per year, known as Nielsen’s Law--has operated since the days of dial-up internet access.
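At that rate, the time to any given multiple is easy to estimate, assuming the 50 percent annual pace simply continues.

import math

def years_to_multiple_at(annual_growth, target_multiple):
    # Years to reach target_multiple at a compound annual_growth rate (0.5 means 50 percent per year).
    return math.log(target_multiple) / math.log(1 + annual_growth)

print(years_to_multiple_at(0.50, 100))     # ~11.4 years for a 100x increase in headline speed
print(years_to_multiple_at(0.50, 1_000))   # ~17.0 years for a 1,000x increase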


The simple question is: “If the metaverse requires 1,000 times more computing power than we generally use at present, how do we get there within a decade?” Given enough time, the normal increases in computational power and access bandwidth would get us there, of course.
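To make the gap concrete: a 1,000-fold increase within ten years implies roughly a doubling every year, versus the classic Moore's Law doubling every two years. The ten-year window is the question's own assumption.

required_per_year = 1_000 ** (1 / 10)
print(round(required_per_year, 2))   # ~2.0x per year, i.e., a doubling every year rather than every two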


But metaverse or extensive AR and VR might require that the digital infrastructure foundation already be in place before apps and environments can be created. 


What that will entail depends on how fast the new infrastructure has to be built. If we are able to upgrade infrastructure roughly on the past timetable, we would expect to see a 1,000-fold improvement in computation support perhaps every couple of decades. 


That assumes we have pulled a number of levers beyond expected advances in processor power, processor architectures and declines in cost per compute cycle. Network architectures and appliances also have to change. Quite often, so do applications and end user demand. 


The mobile business, for example, has taken about three decades to achieve a 1,000-times change in data speeds. We can assume raw compute changes faster, but even then, based strictly on Moore’s Law rates of improvement in computing power alone, it might still require two decades to achieve a 1,000-times change. 


source: Springer 


And all of that assumes underlying demand drives the pace of innovation. 


For digital infrastructure, a 1,000-fold increase in supplied computing capability might well require any number of changes. Chip density probably has to change in different ways. More use of application-specific processors seems likely. 


A revamping of cloud computing architecture towards the edge, to minimize latency, is almost certainly required. 


Rack density likely must change as well, as it is hard to envision a 1,000-fold increase in rack real estate over the next couple of decades. Nor does it seem likely that cooling and power requirements can simply scale linearly by 1,000 times. 


So the timing of capital investment in excess of current requirements is really the issue. How soon? How much? What type?


The issue is how and when to accelerate rates of improvement. Can widespread use of AR/VR or the metaverse happen if we must wait two decades for the platform to be built?

Sunday, July 31, 2022

How Long for Internet to Achieve Ubiquitous 1,000-Fold Computational Increase?

Some believe the next-generation internet could require a three-order-of-magnitude (1,000 times) increase in computing power, to support lots of artificial intelligence, 3D rendering, metaverse and distributed applications. 


What that will entail depends on how fast the new infrastructure has to be built. If we are able to upgrade infrastructure roughly on the past timetable, we would expect to see a 1,000-fold improvement in computation support perhaps every couple of decades. 


That assumes we have pulled a number of levers beyond expected advances in processor power, processor architectures and declines in cost per compute cycle. Network architectures and appliances also have to change. Quite often, so do applications and end user demand. 


The mobile business, for example, has taken about three decades to achieve a 1,000-times change in data speeds. We can assume raw compute changes faster, but even then, based strictly on Moore’s Law rates of improvement in computing power alone, it might still require two decades to achieve a 1,000-times change. 


source: Springer 


And all of that assumes underlying demand drives the pace of innovation. 


For digital infrastructure, a 1,000-fold increase in supplied computing capability might well require any number of changes. Chip density probably has to change in different ways. More use of application-specific processors seems likely. 


A revamping of cloud computing architecture towards the edge, to minimize latency, is almost certainly required. 


Rack density likely must change as well, as it is hard to envision a 1,000-fold increase in rack real estate over the next couple of decades. Nor does it seem likely that cooling and power requirements can simply scale linearly by 1,000 times. 


Persistent 3D virtual worlds would seem to be the driver for such demand.  


Low-latency apps such as persistent environments also should increase pressure to prioritize traffic, move computing closer to the actual end user location and possibly lead to new forms of content handling and computation to support such content. 


Compared to today, where content delivery networks operate to reduce latency, content computation networks would also be necessary to handle the fast, local processing that supports immersive 3D experiences that are also persistent. 


How we supply enough fast compute to handle rendering, for example, could be a combination of device and edge computing architectures. 
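What such a combination might look like in practice is an open question; the toy placement rule below is purely a hypothetical sketch, and its task fields, thresholds and figures are invented for illustration.

from dataclasses import dataclass

@dataclass
class RenderTask:
    latency_budget_ms: float   # how quickly the rendered result must be delivered
    gflops_needed: float       # compute the task requires

def place(task, device_gflops_free, edge_rtt_ms):
    # Prefer the device; offload to the edge only when the device lacks headroom
    # and the edge is close enough to meet the latency budget.
    if task.gflops_needed <= device_gflops_free:
        return "device"
    if edge_rtt_ms < task.latency_budget_ms:
        return "edge"
    return "degrade quality"

print(place(RenderTask(20, 50), device_gflops_free=200, edge_rtt_ms=8))    # device
print(place(RenderTask(20, 500), device_gflops_free=200, edge_rtt_ms=8))   # edge
print(place(RenderTask(20, 500), device_gflops_free=200, edge_rtt_ms=35))  # degrade quality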


Among the other issues are whether chip capabilities can scale fast enough to support such levels of compute intensity. 


So long as we have enough levers to pull, a 1,000-fold increase in computing availability within two or three decades is possible. Moore's Law suggests as much, assuming we can keep up the rate of change in a variety of ways, even if, at the physical level, Moore’s Law ceases to operate.  


But that also means fully immersive internet experiences, used by everybody, all the time, would have to be accompanied by business models to match. 


So in practical terms, perhaps some users and supported experiences will use 1,000 times more computational support. But it is unlikely that the full internet will have evolved to do so.


Saturday, February 19, 2022

Can You Enjoy Metaverse Without Edge Computing?

Edge computing is certain to play a bigger role in our computing fabric as augmented reality, virtual reality and future Metaverse environments become possible. “Even at ultra-low latency, it makes little sense to stream (versus locally process) AR data given the speed at which a camera moves and new input data is received (i.e. literally the speed of light and from only a few feet away),” says Matthew Ball, EpyllionCo managing partner.


The conventional wisdom today is that multi-player games, to say nothing of more-immersive applications, do not work when total latency is greater than 150 milliseconds, and that user experience is impaired at latencies as low as 50 milliseconds, says Ball. 



source: Matthew Ball 
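Those thresholds translate directly into distance limits, since propagation in fiber is bounded by the speed of light. A rough calculation, ignoring processing, queuing and last-mile delays (which only make things worse):

SPEED_IN_FIBER_KM_PER_MS = 200.0   # approximate speed of light in fiber

def max_server_distance_km(latency_budget_ms):
    # One-way server distance at which round-trip propagation alone consumes the whole budget.
    return latency_budget_ms * SPEED_IN_FIBER_KM_PER_MS / 2

print(max_server_distance_km(150))   # ~15,000 km: the ceiling at which games stop working
print(max_server_distance_km(50))    # ~5,000 km: where experience already degrades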


Will the Metaverse require 1,000 times more computing power? Intel thinks so. And that implies we might be decades away from a ubiquitous and widely-accepted Metaverse that people actually use routinely. 


“Consider what is required to put two individuals in a social setting in an entirely virtual environment: convincing and detailed avatars with realistic clothing, hair and skin tones; all rendered in real time and based on sensor data capturing real world 3D objects, gestures, audio and much more; data transfer at super high bandwidths and extremely low latencies; and a persistent model of the environment, which may contain both real and simulated elements,” says Raja Koduri, Intel SVP and GM of Intel’s Accelerated Computing Systems and Graphics Group. “Now, imagine solving this problem at scale--for hundreds of millions of users simultaneously--and you will quickly realize that our computing, storage and networking infrastructure today is simply not enough to enable this vision.”


“We need several orders of magnitude more powerful computing capability, accessible at much lower latencies across a multitude of device form factors,” says Koduri. 


“Truly persistent and immersive computing, at scale and accessible by billions of humans in real time, will require even more: a 1,000-times increase in computational efficiency from today’s state of the art,” he notes. 


Wednesday, April 27, 2022

Metaverse is a Decade Away

Some technology transformations are so prodigious that it takes decades for mass adoption to happen. We might point to artificial intelligence or virtual reality as prime examples. Now we probably can add Web 3.0 and metaverse to that list. 


At a practical level, we might also point to the delay of “new use cases” developing during the 3G and 4G eras. That is likely to happen with 5G as well. Some futuristic apps predicted for 3G did not happen until 4G. Some will not happen until 5G. Likely, many will not mature until 6G. 


The simple fact is that the digital infrastructure will not support metaverse immersive apps, as envisioned, for some time. Latency performance is not there; compute density is not there; bandwidth is not there. 


In fact, it is possible to argue that metaverse is itself digital infrastructure, as much as it might also be viewed as an application supported by a range of other elements and capabilities, including web 3.0, blockchain and decentralized autonomous organizations, artificial intelligence, edge computing, fast access networks and high-performance computing. 


source: Constellation Research 


Scaling persistent, immersive, real-time computing globally to support the metaverse will require computational efficiency 1,000 times greater than today’s state of the art can offer, Intel has argued. 


To reduce latency, computing will have to move to the edge and access networks will have to be upgraded. 


All of that takes time, lots of capital investment and an evolution of business models and company cultures. Metaverse is coming, but it is not here today, and will take a decade or more to fully demonstrate its value. Major technology transformations are like that.


Generative AI Might Create the Next Digital Real Estate

To the extent that generative artificial intelligence could enable the creation of rivals to search, and improve search, it also creates mon...