Thursday, February 23, 2023

Can Compute Increase 1000 Times to Support Metaverse? What AI Processing Suggests

Metaverse at scale implies some fairly dramatic increases in computational resources and, to a lesser extent, bandwidth. 


Some believe the next-generation internet could require a three-order-of-magnitude (1,000 times) increase in computing power, to support lots of artificial intelligence, 3D rendering, metaverse and distributed applications. 


The issue is how that compares with historical increases in computational power. In the past, we would expect to see a 1,000-fold improvement in computation support perhaps every couple of decades. 


Will that be fast enough to support ubiquitous metaverse experiences? There are reasons for both optimism and concern. 


The mobile business, for example, has taken about three decades to achieve a 1,000-fold change in data speeds. We can assume raw compute changes faster, but even then, based strictly on Moore’s Law rates of improvement in computing power, it might still require two decades to achieve a 1,000-fold change. 
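As a rough sanity check, here is the back-of-the-envelope arithmetic, assuming capability doubles on a Moore's Law-like cadence (the doubling periods below are illustrative assumptions, not measured figures):

```python
import math

def years_to_multiple(target_multiple: float, doubling_period_years: float) -> float:
    """Years needed to reach a target multiple of today's capability,
    assuming capability doubles every `doubling_period_years` years."""
    doublings_needed = math.log2(target_multiple)  # 1,000x needs roughly 10 doublings (2^10 = 1,024)
    return doublings_needed * doubling_period_years

for period in (1.5, 2.0, 2.5):
    print(f"Doubling every {period} years: about {years_to_multiple(1000, period):.0f} years to reach 1,000x")
# Doubling every 1.5 years: about 15 years to reach 1,000x
# Doubling every 2.0 years: about 20 years to reach 1,000x
# Doubling every 2.5 years: about 25 years to reach 1,000x
```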


source: Springer 


For digital infrastructure, a 1,000-fold increase in supplied computing capability might well require any number of changes. Chip density probably has to change in different ways. More use of application-specific processors seems likely. 


A revamping of cloud computing architecture towards the edge, to minimize latency, is almost certainly required. 


Rack density likely must change as well, as it is hard to envision a 1,000-fold increase in rack real estate over the next couple of decades. Nor does it seem likely that cooling and power requirements can simply scale linearly by 1,000 times. 


Still, there is reason for optimism. Consider the advances in computational support for artificial intelligence and generative AI, underpinning use cases such as ChatGPT. 


source: Mindsync 


“We've accelerated and advanced AI processing by a million x over the last decade,” said Jensen Huang, Nvidia CEO. “Moore's Law, in its best days, would have delivered 100x in a decade.”


“We've made large language model processing a million times faster,” he said. “What would have taken a couple of months in the beginning, now it happens in about 10 days.”


In other words, vast increases in computational power might well hit the 1,000 times requirement, should it prove necessary. 


And improvements on a number of fronts, beyond Moore’s Law and chip density, will enable such growth. As it turns out, many parameters can be improved. 


source: OpenAI 


“No AI in itself is an application,” Huang said. Preprocessing and post-processing often represent half or two-thirds of the overall workload, he pointed out. 

By accelerating the entire end-to-end pipeline, from data ingestion and preprocessing all the way through to post-processing, “we're able to accelerate the entire pipeline versus just accelerating half of the pipeline,” said Huang. 
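Huang's argument is essentially Amdahl's Law: whatever part of the workload is not accelerated quickly becomes the bottleneck. A minimal sketch, using hypothetical workload splits and speedup factors rather than Nvidia's actual numbers:

```python
def end_to_end_speedup(accelerated_fraction: float, acceleration: float) -> float:
    """Amdahl's Law: overall speedup when only a fraction of the workload is accelerated."""
    remaining = 1.0 - accelerated_fraction
    return 1.0 / (remaining + accelerated_fraction / acceleration)

# Hypothetical split: pre- and post-processing account for half the total work.
# Accelerating only the model-inference half by 100x yields barely 2x overall...
print(round(end_to_end_speedup(accelerated_fraction=0.5, acceleration=100), 2))  # 1.98
# ...while accelerating the whole pipeline by 100x delivers the full 100x.
print(round(end_to_end_speedup(accelerated_fraction=1.0, acceleration=100), 2))  # 100.0
```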

The point is that metaverse requirements--even assuming a 1,000-fold increase in computational support within a decade or so--seem feasible, given what is happening with artificial intelligence processing gains.


Wednesday, February 22, 2023

As Multi-Gigabit Home Broadband Grows, How Much Do People Really Need?

Announcements about multi-gigabit home broadband upgrades have become so commonplace that we are no longer surprised by the advances. The new speeds so outstrip the usual recommendations for minimum app bandwidth that we must reckon speed claims as marketing platforms, not user requirements. 


In fact, the primary value of any home broadband connection is assuring minimum bandwidth for all the simultaneously connected devices at a location. The actual capacity required by any single app or device is quite low, in comparison to gigabit or multi-gigabit services. 


As observers always seem to note, web browsing, use of email and social media are low-bandwidth use cases, rarely actually requiring more than a couple of megabits per app. 


As always, entertainment video is the bandwidth hog, as high-definition TV might require up to 10 Mbps per stream. 4K and 8K streaming will require more bandwidth: up to 35 Mbps for 4K and perhaps 100 Mbps for 8K, per stream, when it is generally available. 


Online gaming might require a minimum of 10 Mbps. Work at home generally is a low-bandwidth requirement, with one exception: video conferencing or perhaps some remote work apps. Some recommend 10 Mbps to 20 Mbps in such cases. 


So the key variable for any specific home user is how many people, devices and apps are used concurrently, with the key bandwidth variable being video streaming beyond high definition content (4K, at the moment). 


But even in a home where two 4K streams are running (up to 70 Mbps), two online gaming sessions are happening at the same time (up to 20 Mbps) and perhaps three casual mobile phone sessions are underway (up to 6 Mbps), total bandwidth only amounts to perhaps 96 Mbps. 
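A trivial sketch of that concurrency arithmetic, using rough per-session figures consistent with the recommendations above (the session counts and rates are assumptions for illustration, not hard requirements):

```python
# (sessions, Mbps per session) -- rough rules of thumb
household = {
    "4K video streams": (2, 35),
    "online gaming sessions": (2, 10),
    "casual mobile phone sessions": (3, 2),
}

total_mbps = 0
for name, (count, mbps) in household.items():
    subtotal = count * mbps
    total_mbps += subtotal
    print(f"{count} x {name} at {mbps} Mbps = {subtotal} Mbps")
print(f"Peak concurrent demand: about {total_mbps} Mbps")  # about 96 Mbps
```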


Gigabit speeds are overkill, in that scenario. And yet we are moving to multi-gigabit services. We know more capacity will be needed over time. But right now, home broadband speed claims are in the realm of marketing platforms, as available bandwidth so outstrips user requirements. 


Many estimate that by 2025, the “average” home broadband user might still require less than 300 Mbps worth of capacity. Nielsen’s law of course predicts that the top available commercial speeds in 2025 will be about 10 Gbps. 
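Nielsen’s law posits roughly 50 percent annual growth in the top speed available to a high-end user. A quick sketch of that compounding, assuming (purely for illustration) a top commercial tier of about 1 Gbps in 2019:

```python
def nielsen_projection(base_gbps: float, base_year: int, target_year: int,
                       annual_growth: float = 0.5) -> float:
    """Project the top available speed under Nielsen's law (~50% growth per year)."""
    return base_gbps * (1 + annual_growth) ** (target_year - base_year)

print(f"{nielsen_projection(1.0, 2019, 2025):.1f} Gbps")  # about 11.4 Gbps, on the order of 10 Gbps
```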


source: NCTA 


That does not mean the “typical” customer will buy services at that rate. In fact, relatively few will do so (adoption in the single digits, probably). In 2025, perhaps half of customers might buy services operating at 1 Gbps, if present purchasing patterns continue to hold. 


Whatever the top “headline rate,” most customers buy services operating at 10 percent to 40 percent of that headline rate.     


Retail Private Labels and Connectivity Upstart Attack Strategies

One of the established ways new competitors enter existing markets resembles the way retail private brands do so. New suppliers in the connectivity business, for example, often offer a simple value proposition: “similar product, lower price.” 


In fact, attackers often must enter the market with products that more accurately are “basic product; mostly works; some hassle factor; lower price.” Think about Skype and other voice over IP products when they were first introduced. 


One could not call any public telephone network number. One had to use a headset, as ordinary telephones did not support VoIP. There was no big network effect: one could only talk with others who agreed to become part of a community of users. 


Over time, defects are eliminated, features are added and the upstart provider’s product begins to approximate most of the value of the legacy leader. Eventually, there is virtually no difference between the upstart’s product and the legacy leader’s product, in terms of features, functionality or ease of use. 


source: McKinsey 


Retail private brands also move through similar development cycles, eventually perhaps creating new brands with substantial brand awareness, perceived value and customer loyalty. 


Often, the adoption pattern is similar to that of upstart connectivity providers: target the value-driven customer segment first, then gradually add features to attract mainstream segments and finally the full range of customers. 


In many ways, the private label business also is similar to the connectivity business in that a “price attack” can eventually become a “quality attack.” Often, a new provider can deliver experiences or performance that are better than the legacy provider.


Monday, February 20, 2023

Does Early 5G Cause Growth, or Merely Reflect It?

This illustration of global 5G deployment makes a point. Places where technology is widely deployed also tend to be the places where new technology gets deployed first.


source: Ookla 


Generally speaking, such places also are where economic activity is strong, national and individual incomes are higher and where measures of wealth are higher as well. 


While it might be hoped that early 5G deployment has a causal relationship to economic growth, it might be more correct to say that early 5G deployment reflects already-existing economic growth. 


The oft-mentioned value of early 5G availability as a driver of economic results might be exactly the inverse of the truth. As the picture illustrates, 5G has been deployed early where nations and economies already were advanced. 


In other words, early 5G deployment reflects development, rather than creating it. 


Infrastructure tends to be correlated with economic results, to be sure. But it might be the case that pre-existing growth led to infrastructure creation, rather than the other way around.


Sunday, February 19, 2023

Data Center Architecture Now Hinges on Connectivity Foundation

Cloud computing requires data centers. Data centers require communications. Communications requires computing and data centers. That means each industry potentially can assume additional roles in the other parts of the ecosystem. That, in turn, perennially spurs hopes that connectivity providers can operate in the cloud computing or data center roles.


So Orange now talks about a strategy that builds on connectivity but adds other digital roles beyond it. Other telcos have similar hopes of shifting from being "telcos" to becoming "techcos." It never is completely clear what that means, but the implication always is that new lines of business and revenue are the result.


Hopes aside, connectivity services now have become an essential foundation for cloud computing and data center value.


Some adaptations of data center architecture in the cloud computing era have to do with connectivity architecture more than with the way servers are cooled, the types of processors and specialized chips used in servers or the way servers are clustered. 


A cluster is a large unit of deployment, involving hundreds of server cabinets with top of rack (TOR) switches aggregated on a set of cluster switches. Meta has rearchitected its switch and server network into “pods,” in an effort to recast the entire data center as one fabric. 


source: Meta 


The point is to create a modularized compute fabric able to function with lower-cost switches. That, in turn, means the fabric is not limited by the port density of the switches. It also enables lower-cost deployment as efficient standard units are the building blocks. 


That also leads to simpler management and operational supervision. 


And though much of that architecture addresses traffic within any single data center, the value of that internal fabric also hinges on the robustness of the connections between data centers and the rest of the internet ecosystem. 

source: Meta 


In fact, it is virtually impossible to describe a data center architecture without reference to wide area and local area connectivity, using either the older three-tier or newer spine-leaf models. The point is that data centers require connectivity as a fundamental part of the computing architecture. 

source: Ultimate Kronos Group 


That will likely be amplified as data centers move to support machine learning operations as well as higher-order artificial intelligence operations. 


Still, in the cloud computing era, no data center has much value unless it is connected by high-bandwidth optical fiber links to other data centers and internet access and transport networks. 


“From a connectivity perspective, these networks are heavily meshed fiber infrastructures to ensure that no one server is more than two network hops from each other,” says Corning.


The other change is a shift in the relative importance of east-west (server-to-server, inside the data center) and north-south (into and out of the data center, across the wider cloud) traffic. As important as intra-data-center traffic remains, communications across wide area networks assume new importance in the AI era. 


The traditional three-tier core-aggregation-access architecture connecting switches increasingly looks to be replaced by a two-layer spine-leaf design that reduces latency. In other words, in addition to what happens inside the data center, there will be changes outside the data center, in the connectivity network design. 
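A toy hop-count comparison shows why the flatter design helps: in a simplified model (which ignores oversubscription, routing policy and link speeds), any two leaf switches are at most two hops apart, versus four in a classic three-tier layout.

```python
from collections import deque

def shortest_hops(edges, src, dst):
    """Breadth-first search for the fewest switch-to-switch hops between two nodes."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nxt in graph[node] - seen:
            seen.add(nxt)
            queue.append((nxt, dist + 1))

# Classic three-tier: access switches uplink to aggregation switches, which uplink to a core.
three_tier = [("acc1", "agg1"), ("acc2", "agg1"), ("acc3", "agg2"), ("acc4", "agg2"),
              ("agg1", "core"), ("agg2", "core")]

# Two-layer spine-leaf: every leaf connects to every spine.
spine_leaf = [(leaf, spine) for leaf in ("leaf1", "leaf2", "leaf3", "leaf4")
              for spine in ("spine1", "spine2")]

print(shortest_hops(three_tier, "acc1", "acc4"))   # 4 hops: acc -> agg -> core -> agg -> acc
print(shortest_hops(spine_leaf, "leaf1", "leaf4")) # 2 hops: leaf -> spine -> leaf
```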


source: Commscope


In the above illustration, the Entrance Room (ER) is the entrance facility to the data center.


The Main Distribution Area (MDA) holds equipment such as routers, LAN and SAN switches. The Zone Distribution Area (ZDA) is a consolidation point for all the data center network cabling and switches.


The Equipment Distribution Area (EDA) is the main server area where the racks and cabinets are located.

 

So even if connectivity remains a separate business from cloud computing and the data center business, all are part of a single ecosystem these days. Cloud computing requires data centers, and data centers require good connectivity.


Why Outcomes Always Lag Major Technology Investments

We might as well get used to the idea that artificial intelligence, machine learning, AR, VR, metaverse and Web3 are not going to produce the expected advantages as fast as we invest in those technologies. 

The reason is simply that organizations cannot change as fast as technology does. So there will be a lag between investment and perceived outcomes.  

Martec's Law essentially argues that technology change happens faster than humans and organizations can change. That might explain why new technology sometimes takes decades to produce measurable change in organizational performance, or why a productivity gap exists.  
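One way to picture Martec's Law, using purely illustrative growth rates: technology capability compounds while organizational absorption improves more or less linearly, so the gap widens every year.

```python
# Illustrative assumptions only: 40% annual compounding of what is technically possible,
# 5% annual linear improvement in what an organization can absorb.
tech, org = 1.0, 1.0
for year in range(1, 11):
    tech *= 1.40
    org += 0.05
    print(f"year {year:2d}: technology {tech:5.1f}x, organization {org:4.2f}x, gap {tech / org:5.1f}x")
```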

source: Chiefmartec 

Since there simply is no way organizations can change fast enough to keep up with technology, the practical task is to decide which specific technologies to embrace. In some instances, a major reset is possible, but typically only through a fairly significant organizational change, such as spinning off parts of a business, or selling or acquiring assets. 


source: Chiefmartec 

Some might argue that the Covid-19 pandemic caused an acceleration of technology adoption, though some also argue that demand was essentially “pulled forward” in time. In that sense, the pandemic was a “cataclysmic” event that caused a sudden burst of technology adoption. 

source: chiefmartec

The main point is that managerial discretion is involved. Since firms cannot deploy all the new technologies, choices have to be made about which to pursue. Even when the right choices are made, however, outcomes might take a while to surface. That likely is going to happen with AI investments, much as measured productivity has lagged other major technology investments in the past. 

We might reasonably expect similar disappointment with other major trends including metaverse, AR, VR or Web3. Organizations cannot change as fast as the technology does.

Wednesday, February 15, 2023

IP and Cloud Have Business Model Implications

Sometimes we fail to appreciate just how much business models in the connectivity and computing businesses are related to, driven by or enabled by the ways we implement technology. The best example is the development of “over the top” or “at the edge” business models. 


When the global connectivity industry decided to adopt TCP/IP as its next generation network, it also embraced a layered approach to computing and network functions, where functions of one sort are encapsulated. That means internal functions within a layer can be changed without requiring changes in all the other layers and functions. 


That has led to the reality that application layer functions are separated from the details of network transport or computing. In other words, Meta and Google do not need the permission of any internet service provider to connect their users and customers. 
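A toy illustration of that layering principle (purely illustrative, not a faithful protocol stack; the field names are hypothetical): each layer wraps the payload handed down from the layer above without inspecting it, which is why a lower layer can change without the application noticing.

```python
def application_layer(message: str) -> dict:
    return {"layer": "application", "payload": message}

def transport_layer(segment: dict, dst_port: int = 443) -> dict:
    return {"layer": "transport", "dst_port": dst_port, "payload": segment}

def network_layer(packet: dict, dst_ip: str = "198.51.100.7") -> dict:
    return {"layer": "network", "dst_ip": dst_ip, "payload": packet}

# The network layer could swap IPv4 for IPv6, or take a different route entirely,
# without the application layer changing at all -- it only ever sees its own payload.
frame = network_layer(transport_layer(application_layer("hello")))
print(frame)
```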


Perhaps we might eventually come to see that the development of remote, cloud-based computing leads to similar connectivity business model changes. Practical examples already abound. E-commerce has disintermediated distributors, allowing buyers and sellers to conduct transactions directly. 


Eventually, we might come to see owners of digital infra as buyers and sellers in a cloud ecosystem enabled by marketplaces that displace older retail formats. 


Where one telco or one data center might have sold to customers within a specific geographic area, we might see more transactions mediated by a retail platform, rather than conducted directly between an infra supplier (of computing cycles, storage or connectivity) and its customers or users. 


In any number of retail areas, this already happens routinely. People can buy lodging, vehicle rentals, other forms of transportation, clothing, professional services, grocery and other consumer necessities or discretionary items using online marketplaces that aggregate buyers and sellers. 


So one wonders whether this also could be replicated in the connectivity or computing businesses at a wider level. And as with IP and layers, such developments would be linked to or enabled by changes in technology. 


Forty years ago, if asked to describe a mass market communications network, one would likely have talked about a network of class 4 voice switches at the core of the network, with class 5 switches for local distribution to customers. So the network was hierarchical. 


That structure also was closed: ownership of such assets was restricted, by law, to a single provider in any geography. Also, all apps and devices used on such networks had to be approved by the sole operator in each area. 


source: Revesoft 


The point is that there was a correspondence between technology form and business possibility. 


If asked about data networks, which would have been X.25 at that time, one would have described a series of core network switches (packet switching exchanges) that offload to local data circuit terminating equipment (functionally equivalent to a router). Think of the PSE as the functional equivalent of the class 4 switches. 


source: Revesoft 


Again, there was a correspondence between technology and business models. Data networks could not yet be created “at the edge of the network” by enterprises themselves, as ownership of the core network switches was necessary. Frame relay, which succeeded X.25, essentially followed the same model, as did ATM. 


The adoption of IP changed all that. In the IP era, the network itself could be owned and operated by an enterprise at the edges, without the need to buy a turned-up service from a connectivity provider. That is a radically different model. 


At a device level one might say the backbone network now is a mesh network of routers that take the place of the former class 4 switches, with flatter rather than hierarchical relationships between the backbone network elements. 


source: Knoldus 


Internet access is required, but aside from that, entities can build wide area networks completely from the edges of the network. That of course affects WAN service provider revenues. 


These days, we tend to abstract almost everything: servers, switches, routers, optical transport and all the network command and control, applications software and data centers. So the issue is whether that degree of abstraction, plus the relative ease of online marketing, sales, ordering, fulfillment and settlement, creates some new business model opportunities. 


source: SiteValley 


As has been the case for other industries and products, online sales that disintermediate direct sales by distributors would seem to be an obvious new business model opportunity. To be sure, digital infra firms always have used bilateral wholesale deals and sales agents and other distributors. 


What they have not generally done is contribute inventory to third party platforms as a major sales channel. Logically, that should change. 


Where, and How Much, Might Generative AI Displace Search?

Some observers point out that generative artificial intelligence poses some risk for operators of search engines, as both search and GenAI s...