Thursday, February 23, 2023

Access Networks are Getting More Expensive

Like it or not, both mobile and fixed network operators are facing higher infrastructure costs. Fixed network operators must deploy optical fiber access networks. Mobile operators have to deploy dense small cell networks.


In each case, network architectures and investments are driven by physics: networks must support more bandwidth, which requires capacious physical media or small cells or both.


This is a good illustration of the reason why small cells are the future of mobile networks. It is just physics: the available radio frequencies we have added over time keep moving higher in the spectrum. Lower-frequency signals travel farther before they are attenuated.


Higher-frequency signals are attenuated quite quickly. The only available new spectrum we really can add is in the high-band areas. Those signals support lots of bandwidth, but will not travel very far, so cell sizes must necessarily be smaller. 

source: NTT, SKT 


And while small cells do not cost nearly what macrocells do (an order of magnitude less, at the very least), networks will need many more sites. The basics of cell geometry are that one quadruples the number of sites every time the cell radius shrinks by 50 percent. 


So shrinking from a 10-km radius to a 5-km radius requires four times as many cells. Shrinking again from 5 km to 2.5 km requires another quadrupling of sites, and so forth. By the time one moves to cell radii of 100 meters or so, cell networks are quite dense. 
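A quick back-of-the-envelope sketch of that geometry, using a purely illustrative coverage area and radii (not any operator's actual figures):

```python
import math

def cells_needed(area_km2: float, radius_km: float) -> int:
    """Approximate number of circular cells of a given radius needed
    to cover an area, ignoring overlap, terrain and capacity planning."""
    cell_area = math.pi * radius_km ** 2
    return math.ceil(area_km2 / cell_area)

# Cover a hypothetical 1,000 square-kilometer service area with shrinking radii.
for radius in [10, 5, 2.5, 1.25]:
    print(f"radius {radius:>5} km -> ~{cells_needed(1000, radius):>4} cells")

# Each halving of the radius quarters the cell area, so the site count
# roughly quadruples at every step.
```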


And optical fiber backhaul often is required to support each such site. Beyond the cost of many times more cells, then, is the additional cost of fiber backhaul. 


So it should not be surprising that network cost is going up.


Competitors Worry About an End to EU Mandatory Wholesale Prices

European Union proposals to spur gigabit home broadband have gotten some opposition from non-dominant connectivity service providers, in large part because of feared changes in wholesale access rules viewed as boosting legacy provider revenues while raising the costs of new market entrants. 


That outcome is expected if the rules on mandatory wholesale pricing are removed or modified. 


To be sure, the proposed new rules would also focus on streamlining construction and permitting requirements, such as requiring owners of publicly-owned physical infrastructure, including ducts, poles and sewers, to make access to those facilities available to service providers. 


When the Commission talks about the need to “incentivize network investments,” competitors are wary of new rules allowing wholesale rates to float free of mandatory prices. When the Commission talks about the need to create “economies of scale for operators” and a better climate for cross-border investment, that signals an intent to reduce facilities deployment costs, which will help facilities providers the most. 


But it is the change in wholesale access pricing that will affect non-facilities-based competitors the most. 


In the U.S. market, for example, deregulation of the local access business entailed instituting generous wholesale discounts that incumbents were obligated to provide wholesale customers who wished to use their facilities. That created an immediate rush of service providers into the market able to buy end-to-end wholesale access at about a 30 percent discount, speeding market access but also eliminating the need to build facilities. 


When regulators decided to overturn those rules and allow market-based pricing, the wholesale-based competitive industry promptly began to recede. Wholesale access still happens, but at market-based rates, and generally for backhaul, trunking or services supporting business customers.


Can Compute Increase 1000 Times to Support Metaverse? What AI Processing Suggests

Metaverse at scale implies some fairly dramatic increases in computational resources and, to a lesser extent, bandwidth. 


Some believe the next-generation internet could require a three-order-of-magnitude (1,000 times) increase in computing power, to support lots of artificial intelligence, 3D rendering, metaverse and distributed applications. 


The issue is how that compares with historical increases in computational power. In the past, we would expect to see a 1,000-fold improvement in computation support perhaps every couple of decades. 


Will that be fast enough to support ubiquitous metaverse experiences? There are reasons for both optimism and concern. 


The mobile business, for example, has taken about three decades to achieve a 1,000 times change in data speeds. We can assume raw compute changes faster, but even then, based strictly on Moore’s Law rates of improvement in computing power alone, it might still require two decades to achieve a 1,000 times change. 
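That arithmetic is easy to sketch. Assuming an idealized Moore's Law pace (a doubling roughly every two years, an assumption rather than a measurement), a 1,000-fold gain is about 10 doublings:

```python
import math

def years_to_multiple(target_multiple: float, doubling_period_years: float) -> float:
    """Years needed to reach a target improvement multiple, assuming
    capability doubles once every doubling_period_years."""
    return math.log2(target_multiple) * doubling_period_years

# 1,000x is roughly 10 doublings (2**10 = 1,024).
print(f"{years_to_multiple(1000, 2):.1f} years at a two-year doubling pace")   # ~19.9
print(f"{years_to_multiple(1000, 1.5):.1f} years at a faster 18-month pace")   # ~14.9
```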


source: Springer 


For digital infrastructure, a 1,000-fold increase in supplied computing capability might well require any number of changes. Chip density probably has to change in different ways. More use of application-specific processors seems likely. 


A revamping of cloud computing architecture towards the edge, to minimize latency, is almost certainly required. 


Rack density likely must change as well, as it is hard to envision a 1,000-fold increase in rack real estate over the next couple of decades. Nor does it seem likely that cooling and power requirements can simply scale linearly by 1,000 times. 


Still, there is reason for optimism. Consider the advances in computational capability supporting artificial intelligence and generative AI, for use cases such as ChatGPT. 


source: Mindsync 


“We've accelerated and advanced AI processing by a million x over the last decade,” said Jensen Huang, Nvidia CEO. “Moore's Law, in its best days, would have delivered 100x in a decade.”


“We've made large language model processing a million times faster,” he said. “What would have taken a couple of months in the beginning, now it happens in about 10 days.”


In other words, vast increases in computational power might well hit the 1,000 times requirement, should it prove necessary. 


And improvements on a number of scales will enable such growth, beyond Moore’s Law and chip density. As it turns out, many parameters can be improved. 
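One way to see how gains beyond Moore's Law can add up is to multiply improvements across several dimensions at once. The factors below are hypothetical placeholders, not Nvidia's or anyone else's figures; the point is only that modest per-dimension gains compound multiplicatively:

```python
# Hypothetical decade-long improvement factors across independent dimensions.
gains = {
    "process and chip architecture": 30,
    "domain-specific accelerators": 25,
    "software and algorithms": 40,
    "scale-out across more chips": 35,
}

cumulative = 1.0
for dimension, factor in gains.items():
    cumulative *= factor
    print(f"{dimension:<32} x{factor:>3}  (cumulative x{cumulative:,.0f})")

# Four modest factors multiply out to roughly a million-fold aggregate gain.
```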


source: OpenAI 


“No AI in itself is an application,” Huang says. Preprocessing and post-processing often represent half or two-thirds of the overall workload, he pointed out. 

By accelerating the entire end-to-end pipeline, from data ingestion and preprocessing all the way to post-processing, “we're able to accelerate the entire pipeline versus just accelerating half of the pipeline,” said Huang. 
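Huang's pipeline point is essentially Amdahl's law: accelerating only part of a workload caps the overall gain. A minimal sketch, using made-up stage times in which pre- and post-processing are about two-thirds of the total:

```python
def pipeline_speedup(stage_times: dict, speedups: dict) -> float:
    """Overall speedup when only some pipeline stages are accelerated.
    stage_times: baseline time per stage (arbitrary units).
    speedups: per-stage acceleration factors (stages not listed stay at 1x)."""
    baseline = sum(stage_times.values())
    accelerated = sum(t / speedups.get(stage, 1.0) for stage, t in stage_times.items())
    return baseline / accelerated

stages = {"ingest and preprocess": 40, "model compute": 34, "post-process": 26}

partial = pipeline_speedup(stages, {"model compute": 50})
full = pipeline_speedup(stages, {s: 50 for s in stages})
print(f"compute only accelerated 50x: {partial:.1f}x overall")  # ~1.5x
print(f"whole pipeline accelerated 50x: {full:.0f}x overall")   # 50x
```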

The point is that metaverse requirements--even assuming a 1,000-fold increase in computational support within a decade or so--seem feasible, given what is happening with artificial intelligence processing gains.


Wednesday, February 22, 2023

As Multi-Gigabit Home Broadband Grows, How Much Do People Really Need?

Announcements about multi-gigabit home broadband upgrades have become so commonplace that we are no longer surprised by the advances. Available speeds so outstrip the usual recommendations for minimum app bandwidth that we must reckon speed claims as marketing platforms, not user requirements. 


In fact, the primary value of any home broadband connection is assuring minimum bandwidth for all the simultaneously-connected devices at a location. The actual capacity required by any single app or device is quite low, in comparison to gigabit or multi-gigabit services. 


As observers always seem to note, web browsing, use of email and social media are low-bandwidth use cases, rarely actually requiring more than a couple of megabits per app. 


As always, entertainment video is the bandwidth hog, as high-definition TV might require up to 10 Mbps per stream. 4K and 8K streaming will require more bandwidth: up to 35 Mbps for 4K and perhaps 100 Mbps for 8K, per stream, when it is generally available. 


Online gaming might require a minimum of 10 Mbps. Work at home generally is a low-bandwidth requirement, with one exception: video conferencing or perhaps some remote work apps. Some recommend 10 Mbps to 20 Mbps in such cases. 


So the key variable for any specific home user is how many people, devices and apps are used concurrently, with the key bandwidth variable being video streaming beyond high definition content (4K, at the moment). 


But even in a home where two 4K streams are running (up to 70 Mbps), two online gaming sessions are happening at the same time (up to 20 Mbps) and perhaps three casual mobile phone sessions are underway (up to 6 Mbps), total bandwidth only amounts to perhaps 96 Mbps. 
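A simple budget makes the arithmetic explicit, using the per-stream figures cited above (typical recommendations, not guarantees):

```python
# Concurrent household sessions: (description, count, Mbps per session)
sessions = [
    ("4K video stream", 2, 35),
    ("online gaming session", 2, 10),
    ("casual mobile phone session", 3, 2),
]

total_mbps = 0
for name, count, mbps in sessions:
    subtotal = count * mbps
    total_mbps += subtotal
    print(f"{count} x {name:<28} {subtotal:>3} Mbps")
print(f"{'total concurrent demand':<32} {total_mbps:>3} Mbps")  # 96 Mbps
```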


Gigabit speeds are overkill, in that scenario. And yet we are moving to multi-gigabit services. We know more capacity will be needed over time. But right now, home broadband speed claims are in the realm of marketing platforms, as available bandwidth so outstrips user requirements. 


Many estimate that by 2025, the “average” home broadband user might still require less than 300 Mbps worth of capacity. Nielsen’s law of course predicts that the top available commercial speeds in 2025 will be about 10 Gbps. 
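Nielsen's law posits roughly 50 percent annual growth in top-end headline connection speeds. A rough projection, assuming a 1 Gbps headline tier around 2019 as the starting point (an assumption for illustration):

```python
def nielsen_projection(start_gbps: float, start_year: int, end_year: int,
                       annual_growth: float = 0.5) -> float:
    """Project top-end broadband speed under Nielsen's law
    (headline speeds growing about 50 percent per year)."""
    return start_gbps * (1 + annual_growth) ** (end_year - start_year)

# Assumed baseline: roughly 1 Gbps headline offers around 2019.
print(f"{nielsen_projection(1, 2019, 2025):.1f} Gbps")  # ~11.4 Gbps, on the order of 10 Gbps
```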


source: NCTA 


That does not mean the “typical” customer will buy services at that rate. In fact, very few will do so (adoption in the single digits, probably). In 2025, perhaps half of customers might buy services operating at 1 Gbps, if present purchasing patterns continue to hold. 


Whatever the top “headline rate,” most customers buy services operating at 10 percent to 40 percent of that headline rate.     


Retail Private Labels and Connectivity Upstart Attack Strategies

One of the established ways new competitors enter existing markets is similar to the way retail private brands enter existing markets. New suppliers in the connectivity business, for example, often offer a simple value proposition: “similar product, lower price.” 


In fact, attackers often must enter the market with products that more accurately are “basic product; mostly works; some hassle factor; lower price.” Think about Skype and other voice over IP products when they were first introduced. 


One could not call any public telephone network number. One had to use headphones, as “telephones” did not support use of VoIP. There was no big network effect: one could only talk with others who agreed to become part of a community of users. 


Over time, defects are eliminated, features are added and the upstart provider’s product begins to approximate most of the value of the legacy leader. Eventually, there is virtually no difference between the upstart’s product and the legacy leader’s product, in terms of features, functionality or ease of use. 


source: McKinsey 


Retail private brands also move through similar development cycles, eventually perhaps creating new brands with substantial brand awareness, perceived value and customer loyalty. 


Often, the adoption pattern is similar to that of upstart connectivity providers: target the value-driven customer segment first, then gradually add features to attract mainstream segments and finally the full range of customers. 


In many ways, the private label business also is similar to the connectivity business in that a “price attack” can eventually become a “quality attack.” Often, a new provider can deliver experiences or performance that are better than the legacy provider.


Monday, February 20, 2023

Does Early 5G Cause Growth, or Merely Reflect It?

This map of global 5G deployment illustrates a point. Places where technology is widely deployed also tend to be the places where new technology gets deployed first.


source: Ookla 


Generally speaking, such places also are where economic activity is strong, national and individual incomes are higher and where measures of wealth are higher as well. 


While it might be hoped that early 5G deployment has a causal relationship to economic growth, it might be more correct to say that early 5G deployment reflects already-existing economic growth. 


The oft-mentioned value of early 5G availability as a driver of economic results might be exactly the inverse of truth. As the picture illustrates, 5G has been deployed early where nations and economies already were advanced. 


In other words, early 5G deployment reflects development, rather than creating it. 


Infrastructure tends to be correlated with economic results, to be sure. But it might be the case that pre-existing growth led to infrastructure creation, rather than the other way around.


Sunday, February 19, 2023

Data Center Architecture Now Hinges on Connectivity Foundation

Cloud computing requires data centers. Data centers require communications. Communications requires computing and data centers. That means each industry potentially can assume additional roles in the other parts of the ecosystem. That, in turn, perennially spurs hopes that connectivity providers can operate in the cloud computing or data center roles.


So Orange now talks about a strategy that builds on connectivity but adds other digital roles beyond it. Other telcos have similar hopes of shifting from being "telcos" to becoming "techcos." It never is completely clear what that means, but the implication always is that new lines of business and revenue are the result.


Hopes aside, connectivity services now have become an essential foundation for cloud computing and data center value.


Some adaptations of data center architecture in the cloud computing era have to do with connectivity architecture more than with the way servers are cooled, the types of processors and specialized chips used in servers or the way servers are clustered.


A cluster is a large unit of deployment, involving hundreds of server cabinets with top of rack (TOR) switches aggregated on a set of cluster switches. Meta has rearchitected its switch and server network into “pods,” in an effort to recast the entire data center as one fabric. 


source: Meta 


The point is to create a modularized compute fabric able to function with lower-cost switches. That, in turn, means the fabric is not limited by the port density of the switches. It also enables lower-cost deployment as efficient standard units are the building blocks. 
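A rough sketch of why fixed-port-count switches can still scale when arranged as a two-tier fabric; the port counts and 1:1 oversubscription ratio below are illustrative assumptions, not Meta's actual pod design:

```python
def two_tier_capacity(leaf_ports: int, spine_ports: int):
    """Approximate server capacity of a non-blocking two-tier leaf-spine fabric.
    Each leaf splits its ports evenly between server downlinks and spine uplinks
    (a 1:1 oversubscription assumption); every leaf connects to every spine."""
    downlinks_per_leaf = leaf_ports // 2
    uplinks_per_leaf = leaf_ports - downlinks_per_leaf
    max_spines = uplinks_per_leaf      # one uplink per spine from each leaf
    max_leaves = spine_ports           # one port per leaf on each spine
    server_ports = max_leaves * downlinks_per_leaf
    return max_spines, max_leaves, server_ports

spines, leaves, servers = two_tier_capacity(leaf_ports=48, spine_ports=64)
print(f"{spines} spines x {leaves} leaves -> ~{servers} server-facing ports")
```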


That also leads to simpler management tasks and operational supervision. 


And though much of that architecture refers to “north-south” communications within any single data center, the value of the north-south architecture also hinges on the robustness of the east-west connections between data centers and the rest of the internet ecosystem. 

source: Meta 


In fact, it is virtually impossible to describe a data center architecture without reference to wide area and local area connectivity, using either the older three-tier or newer spine-leaf models. The point is that data centers require connectivity as a fundamental part of the computing architecture. 

source: Ultimate Kronos Group 


That will likely be amplified as data centers move to support machine learning operations as well as higher-order artificial intelligence operations. 


Still, in the cloud computing era, no data center has much value unless it is connected by high-bandwidth optical fiber links to other data centers and internet access and transport networks. 


“From a connectivity perspective, these networks are heavily meshed fiber infrastructures to ensure that no one server is more than two network hops from each other,” says Corning.


The other change is a shift in importance of north-south (inside the data center) and east-west (across the cloud) connections and data movement. As important as north-south intra-data-center traffic remains, communications across wide area networks assumes new importance in the AI era. 


The traditional three-tier connectivity architecture between switches and servers increasingly looks to be replaced by a two-layer spine-leaf design that reduces latency. In other words, in addition to what happens inside the data center, there will be changes outside the data center, in the connectivity network design. 
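A toy hop count illustrates the latency argument; the counts are simplified worst cases, not measurements from any specific design:

```python
# Worst-case switch traversals between two servers in the same facility.
three_tier = ["access", "aggregation", "core", "aggregation", "access"]
spine_leaf = ["leaf", "spine", "leaf"]

print(f"three-tier: {len(three_tier)} switch hops ({' -> '.join(three_tier)})")
print(f"spine-leaf: {len(spine_leaf)} switch hops ({' -> '.join(spine_leaf)})")
# Fewer hops means fewer queuing and serialization points, which is the
# latency case for the two-layer design.
```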


source: Commscope


In the above illustration, the Entrance Room (ER) is the entrance facility to the data center.


The Main Distribution Area (MDA) holds equipment such as routers, LAN and SAN switches. The Zone Distribution Area (ZDA) is a consolidation point for all the data center network cabling and switches.


The Equipment Distribution Area (EDA) is the main server area where the racks and cabinets are located.

 

So connectivity may remain a separate business from cloud computing and the data center business, but all are part of a single ecosystem these days. Cloud computing requires data centers, and data centers require good connectivity.


Directv-Dish Merger Fails

Directv’s termination of its deal to merge with EchoStar, apparently because EchoStar bondholders did not approve, means EchoStar continue...