Tuesday, December 28, 2021

5G Uptake Will be an S Curve

Even if 5G networks could magically spring up fully deployed, with no construction obstacles, there would still be a lag between availability and customer acceptance. The reason is that not all customers are early adopters.


Early on, innovators and early adopters drive take rates. For them, the value of better performance is enough to create demand, even in the absence of compelling new use cases or applications. 


source: ResearchGate 


Novelty does not create demand for mainstream customers, who need a value proposition oriented around some practical value beyond bragging rights. Mainstream customers must see a solution to some existing problem.


In some cases, that problem might be “predictability of service charges” more than “speed” as such. “No overage charges” is a value people understand. In other cases the lure might be “no additional cost video streaming subscriptions.” In yet other cases the value might be the ability to “use all the features of my new phone.”


The point is that mainstream consumers need tangible benefits, and those benefits might not flow directly from “faster speed” claims. 


The concept of the S curve describes consumer adoption behavior and product life cycles, suggests how business strategy changes depending on where on any single S curve a product happens to be, and has implications for innovation and start-up strategy as well. 


source: Semantic Scholar 


Some say S curves explain overall market development, customer adoption, product usage by individual customers, sales productivity, developer productivity and sometimes investor interest. 


The S curve often is used to describe adoption rates of new services and technologies, including the notion of non-linear change rates and inflection points in consumer adoption.


In mathematics, the S curve is a sigmoid function. The Gompertz function, one such sigmoid, can be used to predict new technology adoption and is related to the Bass model.
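

To make the shapes concrete, here is a minimal sketch of the logistic and Gompertz forms (the parameter values are illustrative assumptions, not fits to any market data):

```python
import numpy as np

def logistic(t, k=1.0, m=0.0, r=1.0):
    """Classic sigmoid: adoption saturates at k, midpoint at t = m, growth rate r."""
    return k / (1.0 + np.exp(-r * (t - m)))

def gompertz(t, k=1.0, b=5.0, c=0.5):
    """Gompertz curve: also saturates at k, but rises asymmetrically,
    inflecting at roughly 37 percent of saturation rather than 50 percent."""
    return k * np.exp(-b * np.exp(-c * t))

years = np.arange(0, 21)                       # years since launch (illustrative)
adoption = gompertz(years, k=1.0, b=5.0, c=0.4)
print([round(float(a), 2) for a in adoption])  # fraction of eventual adopters reached
```

Fitting a curve like this to early adoption data is one way forecasters estimate where on the S curve a product sits and when the inflection point arrives.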


I’ve seen Gompertz used to describe the adoption of internet access, fiber to the home or mobile phone usage. It is often used in economic modeling and management consulting as well. 


The S curve also fits and explains consumer adoption of new technologies.


Why Some Users Find 5G Unsatisfying

5G value is an issue for some users who have bought it, especially in some markets where low-band spectrum has been the way 5G is mostly experienced. But there arguably are reasons why user experience could be challenged even in markets where mid-band spectrum underpins 5G experience.


One reason is the difference between what users do--and what the networks must support--on fixed and mobile networks. Fixed networks are multi-use networks. So the obvious value in a fixed network setting is "speed" or "bandwidth" to support multiple simultaneous users.


That is not the case on mobile networks, where accounts are set up on a one device, one user basis. Even when there are multiple users on a single account, those users do not "share" a local access connection. So the advantage of "speed" is different on a mobile network.


There is no "sharing" of a single connection. Also, fixed networks support screens of many sizes. Mobile networks mostly support very-small screen devices. That shapes bandwidth demand.


Apps typically used on large-screen or medium-screen devices further shape bandwidth demand. Entertainment devices such as 4K TVs will consume more bandwidth than standard-definition or high-definition viewing on very-small screens.
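

As a rough illustration of that point, the assumed bitrates below (typical ballpark figures, not measurements from any source cited here) show why a multi-user, large-screen household drives far more bandwidth demand than a single small-screen mobile user:

```python
# Assumed typical streaming bitrates in Mbps; actual figures vary by codec and service.
typical_bitrate_mbps = {
    "SD video on a phone": 2,
    "HD video on a tablet": 5,
    "4K video on a large-screen TV": 20,
}

# A multi-user fixed-network household aggregates these demands simultaneously;
# a single mobile user on a very-small screen does not.
household = ["4K video on a large-screen TV", "HD video on a tablet", "SD video on a phone"]
print(sum(typical_bitrate_mbps[u] for u in household), "Mbps aggregate household demand")
```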


Mobile-connected devices supporting augmented or virtual reality are the exception, but at the moment they are relatively rare. And even many of those use cases rely on a local Wi-Fi connection, not the mobile network.


Up to a point, bandwidth affects user experience. Just as surely, once a threshold is reached, additional bandwidth does not improve experience. Latency and jitter also matter, but users might not be able to discern such changes, or might wrongly attribute the lack of perceived improvement to "bandwidth" issues.


But if 4G provides any evidence, 5G value is going to change over the lifespan of the network. 


The initial value will be “speed,” even if user experience changes less than some expect, and even if the perceived value is largely the marketing claim that 5G delivers data faster, irrespective of any experienced difference.


The value after a decade will be new use cases and apps, for both consumers and businesses. But that will take time. And consumers might well find there is "not much difference" between 4G use cases and new 5G apps, because those new apps have not been created yet.


The betting early on is that many--perhaps most--of the new use cases will come from enterprise, not consumer uses. 


After a decade or so, we are likely to have discovered new consumer apps as well. It just is hard to say what those mass deployed use cases will be. Perhaps nobody predicted the emergence of ride sharing as an important 4G use case. 


Few predicted turn-by-turn navigation would be important. And though streaming video and audio were foreseen, even those apps do not rely so much on “speed” as the creation of easy-to-use and popular streaming apps.


In fact, the rise of “mobile-first” apps does not depend, strictly speaking, on bandwidth improvements brought by 4G, though faster speeds are an enabler. 


That would not be unusual for a next-generation mobile network, up to a point. If nothing else, coverage is an issue, early on. Even a better network does not help if it is not “generally available.”


Complicating matters is the rollout of 5G during the Covid pandemic and many restrictions on “out of home” and “on the go” usage. Working or learning remotely, many users likely spend most of their time connected to home Wi-Fi. So even if 5G is faster, the amount of time any single user might use it is far more limited than under normal circumstances. 


Still, faster speeds should help, up to a point, with existing applications: page loading on a 600-Mbps fixed network connection should provide some noticeable advantages compared to a 300-Mbps connection (especially in multi-user and simultaneous multi-device usage cases). 


Since 3G, the key user experience gain has been “faster mobile data access.” Sometimes that is tangible; but sometimes not so much.


An argument can be made that latency has even greater user experience impact on a mobile network. Beyond some relatively low point, additional speed might not improve user experience. We can debate what that threshold is, as it changes over time. 


If a consumer’s primary reason for buying 4G was a tethering experience closer to fixed network experience, the 4G advantage was immediately tangible. If the primary advantage sought was mobile web browsing experience similar to fixed network experience, then the advantage might well have been tangible. 


5G poses a bit of a tougher problem. When downstream 4G speeds are routinely in the 20 Mbps to 30 Mbps to 35 Mbps range, how much does experience change when 5G offers 165 Mbps? It should help, but how much?


It depends on what a user does on a phone. Web page loading will be faster, but how much faster? Ignore for the moment how the web page itself is authored (whether, and how well, it is optimized for mobile access). 


For fixed network access, faster access speeds have not necessarily meant that web pages are loading faster, for example. 


On mobile networks, connection speeds have improved, but mobile page load times have increased, according to data tracked by the Nielsen Norman Group.


source: Nielsen Norman Group 


Of course, page and landing page loading times are not a direct function of access speed; they are perhaps largely an artifact of remote server performance. So access speed is not the only, and perhaps not even the primary, determinant of user experience. 
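

A back-of-the-envelope model helps explain why. All parameter values below are assumptions for illustration, not measurements: page load time is modeled as transfer time plus protocol round trips plus server response.

```python
def page_load_seconds(page_mb, bandwidth_mbps, rtt_ms, round_trips=20, server_ms=300):
    """Crude model: transfer time + connection/protocol round trips + server processing.
    All parameter values are assumptions for illustration only."""
    transfer = (page_mb * 8) / bandwidth_mbps          # megabits over megabits-per-second
    handshakes = (rtt_ms / 1000.0) * round_trips       # time spent waiting on round trips
    return transfer + handshakes + server_ms / 1000.0  # plus server response time

# A ~3 MB page over the 4G and 5G downstream speeds discussed above:
print(round(page_load_seconds(3, 30, rtt_ms=50), 2), "seconds at 30 Mbps")
print(round(page_load_seconds(3, 165, rtt_ms=30), 2), "seconds at 165 Mbps")
```

In this sketch, a five-fold bandwidth increase shrinks only the transfer term; most of the load time sits in round trips and server work, which is consistent with the argument above.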


The build-out phase of a national next-generation network takes years, so coverage outside of urban cores will typically be an issue. In some markets, where low-band and millimeter wave frequencies have been the mainstay, users often might not find much mobile data performance difference.


Wednesday, December 22, 2021

Fixed Network Broadband Grows, Mostly in China

Even if mobile internet access remains the main way most humans use the internet, fixed broadband accounts continue to grow at a two-percent rate, according to Point Topic. 

source: Point Topic 


Where global fixed network broadband accounts number a bit over one billion, mobile broadband accounts are nearly seven billion, according to Ericsson. 

source: Ericsson 


And much of the increase in fixed broadband is happening in China. 


source: Point Topic 


"Middle Mile" Sometimes is "Muddled Mile"

Some seemingly define “middle mile” too loosely. Some projects talked about as middle mile might not have anything to do with middle mile infrastructure at all; often what is actually meant is local access.  


Terminology changes in the connectivity business over time. The term “middle mile” refers to the network segment between the core network backbone (the wide area network) and the local access network.


Think of this as what we used to refer to as the trunking network, or perhaps the distribution network. If the core network terminates at a class 4 switch or a colocation facility, then the “middle mile” is the transport network connecting the colo to the local access network (a central office or headend, for example).

 

Some illustrations tend to distort the network architecture, even when subject matter experts correctly understand the concept. In this illustration, which shows the way an enterprise user might see matters, the entire WAN is considered “middle mile,” not simply the connections between a colo site and the WAN. 

source: Telegeography


That is understandable if we conceive of the network the way an enterprise might: that “everything not part of my own network” (“my local area network”) is “in the cloud,” an abstraction. 


Even viewed that way, the middle mile is an abstraction. It is part of the network “cloud,” in the sense network architects have depicted it: all the network that is not owned by the enterprise. 


The point is that there is a difference between network terms such as “middle mile” as a description of network facilities and the use of the term (perhaps even incorrectly) as a matter of networking architecture. 


“WAN transport” is not “middle mile,” in terms of network function. But everything other than the enterprise LAN is “cloud” or “not owned by me” in terms of data architecture. In that sense, the term “middle mile” is unnecessary.


Of course, such terms matter more to service providers than to end users and customers. Even if “WAN” and “network core” refer to one part of a network, “access” to another, and middle mile, distribution, backhaul or trunking to a third, none of that matters to most enterprises or consumers buying connectivity. 


The term arguably matters most to retail internet service providers that need to reach internet points of presence, particularly when they must buy capacity on such networks to reach an internet PoP. 


Still, usage does change, over time. 


We used to define “broadband” as any data rate at 1.544 Mbps or faster. Now the definition is more flexible, and deliberately changes over time. The U.S. Federal Communications Commission defines “broadband” as a minimum downstream speed of 25 Mbps, with 3 Mbps upstream. 


In a mobile network, “backhaul” used to mean the trunk connection between a cell tower and a mobile switching center. These days, as networks are virtualized, we talk about fronthaul, mid-haul and backhaul. All those deployments occur within a local network, but “fronthaul” applies to the connection between a baseband site and a radio site, for example.


Mid-haul can refer to the connection between a baseband processing site and a controller. Backhaul then refers to the connection with a wide area network access point. 
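

One compact way to summarize that taxonomy, purely as a descriptive sketch whose endpoint names simply restate the definitions above (this is not any standard's data model):

```python
from dataclasses import dataclass

@dataclass
class Segment:
    name: str
    connects: tuple  # (endpoint A, endpoint B), per the definitions above

TRANSPORT_SEGMENTS = [
    Segment("fronthaul", ("radio site", "baseband site")),
    Segment("mid-haul", ("baseband processing site", "controller")),
    Segment("backhaul", ("local network", "wide area network access point")),
    Segment("middle mile", ("central office / headend / ISP data center",
                            "internet peering or colocation point")),
]

for s in TRANSPORT_SEGMENTS:
    print(f"{s.name}: {s.connects[0]} <-> {s.connects[1]}")
```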



So it is with “middle mile.” Classic fixed telco networks featured wide area networks for long-haul traffic, connecting tandem offices, for example. Connections between central offices, or from central offices to remote hubs, were part of the local trunking network. 


The local access network then ran from central offices or remote hubs to end user locations. We did not use the term “middle mile.”


In the context of  internet traffic, “middle mile” often refers to that portion of the network connecting internet traffic from an ISP’s servers to an internet traffic exchange point or peering location. 


Partly a physical concept and partly a business concept, the middle mile is the segment of a network between an internet peering point or colocation center and central offices, headends, or ISP data centers. 


Still, it is a somewhat-murky concept. Facebook, for example, sometimes refers to the middle mile as facilities linking its own data centers at distances of hundreds of miles. That is hard to reconcile with a definition focused on connections between internet peering locations and headends or ISP data centers or telco central offices. 


Others appear to use the term “middle mile” to refer to private networks of almost any distance that move traffic between a wide area network colo location and an ISP’s headend or data center. 


Traditional telco voice networks connecting central offices within a city or region might also be called “middle mile” instead of trunking networks. 


It might be easier to look at “middle mile” as a business concept, representing capacity costs or investments that are made to move traffic between an ISP headend and an internet traffic exchange point.


Tuesday, December 21, 2021

"Evergreen Stories" Mean Nil Chances of Change

Some storylines are “evergreen”: they are independent of current events and are not time sensitive. The good or bad news--depending on how one is affected by the evergreen stories--is that their unchanging nature means precisely that: the story does not change.

And that is likely the bad news for European mobile operators anxious to see more Europe-based supply of Open RAN infrastructure.

In a white paper, Deutsche Telekom, Orange, Telecom Italia (TIM), Telefónica and Vodafone want policymakers to make Open Radio Access Networks a strategic priority, arguing that (as has been argued for decades about other areas of innovation) Europe is “falling behind” the United States and Japan in developing O-RAN.

The problem is that this is an “evergreen” story. Europe has been seen as “falling behind” or “lagging” in many areas of technology innovation and sales leadership for some decades. Whether it is “only” two decades or as much as four decades is the issue.

Indeed, most observers might well be forced to agree that technology leadership in computing, digital apps and communications seems to be coming from China or the United States. Open RAN is simply another example of that trend.

At stake, the telcos say, are global vendor revenues across the Open RAN value chain: the largest segment accounts for 38 percent of total revenue, followed by RAN hardware (24 percent), cloud (18 percent), semiconductors (11 percent) and RAN software (nine percent).

For infrastructure supplies, the global market is said to be worth EUR36.1 billion by about 2026. That includes Open RAN hardware and software (EUR13.2 billion) and revenue from the broader RAN platform as well.

And that might be a large part of the problem. The ecosystem spans so many other areas where European suppliers are not leaders that “catching up” in a short time seems highly unlikely.

And though legitimate questions can be asked about how soon Open RAN becomes a substantial commercial reality, it is hard to dispute that--eventually--it will do so, as part of the broader move to cloud-native and virtualized telecom networks.

“Open RAN is coming regardless of what Europe decides,” says the white paper.

The study identified 13 major Open RAN players in Europe compared to 57 major Non-European players. However, many European players are at an early stage of development and have not yet secured commercial Open RAN contracts, while vendors from other regions are moving ahead in actual sales.

“European vendors are not even present in all Open RAN sub-categories (e.g. Cloud Hardware), and are outnumbered in almost all categories by Non-European players (e.g. 2 major European vs 9 major Non-European players in the semiconductor category),” the paper says.

That “Europe is falling behind” argument has been made for anywhere from two to four decades, whether the subject is levels of research and development spending, digital technology, economic growth or innovation in general.

Probably few--if any--observers would be optimistic about changing Open RAN supplier capabilities in a short period of time, if it could be done at all.

It’s simply an evergreen story.

Saturday, December 18, 2021

Home Broadband Speed Tests Using Wi-Fi-Connected Devices are Rubbish

Does your smartphone have an Ethernet port? Do you own spare Ethernet cables? Do you own a port converter to connect Ethernet to your smartphone?


And if you do run speed tests on your PC, do you use Wi-Fi or a direct Ethernet connection? Those questions matter because the answers essentially invalidate all the home broadband speed test data we see so often. 


Testing your smartphone’s “speed” when connected to Wi-Fi only tells you the bandwidth you are getting from that device, at that location, for the moment, over the Wi-Fi connection. It does not tell you the actual speed delivered to your home broadband location by the internet service provider. And the home broadband speed enabled by the ISP can be as much as 10 times higher than the measured speed on your Wi-Fi device. 


Methodology matters. 


The Central Iowa Broadband Internet Study, for example, conducted in the first half of 2021,  illustrates many issues faced by rural households as well as the testing methodology issue. 


 In rural areas studied, some 27.5 percent of internet users had some form of non-cabled access--satellite, fixed wireless or mobile. 


Area wide, 42 percent of download speed tests failed to reach 25 Mbps, the study says. The number of town/city respondents failing to meet the threshold was about 32 percent. In rural areas the percentage of tests delivering less than 25 Mbps was about 64 percent. 


But the study also suggests a big methodological problem: speeds delivered by the internet service provider likely were not tested. Instead, respondents likely used their Wi-Fi connections. And that can mean underreporting the actual speed of the connection by 10 times. 


To be sure, that same problem affects almost all consumer speed test data, as most such tests use Wi-Fi-connected devices. 


The point is that ISP-delivered speeds quite often are degraded by the performance of in-home Wi-Fi networks, older equipment or in-building obstructions. Actual speeds delivered by the internet service provider to a router are one matter. Actual speeds experienced by any Wi-Fi-connected device within the home are something else. 


source: CMIT Solutions 


One important caveat is that speed tests made by consumers using their Wi-Fi connections might not tell us too much that is useful about internet access speeds. In other words, consumers who say they do not get 25 Mbps on their Wi-Fi-connected devices could well be on access networks that actually are bringing speeds 10 times faster (250 Mbps) than reported. 


Of the respondents reporting they use a non-cabled connection (satellite, fixed wireless or mobile) for home broadband, 41 percent used a satellite provider. Some 30 percent used a fixed wireless provider and 29 percent reported using a mobile network. 


Only about 1.5 percent of survey respondents buying internet access reported they use a non-terrestrial provider for internet access. About 6.7 percent of survey respondents said “no internet service is available at their home.”


“The average download speed recorded was 80.7 Mbps, but the median download speed was just 34.0 Mbps,” the study reports. 


The median download speed for city/town respondents (101.6 Mbps) was three times higher than the median speed among rural respondents (34.0 Mbps), the study says. 


Keep in mind, however, that the speed tests likely were conducted over a local Wi-Fi connection, the study says. That matters, as speed actually delivered to the premises quite often is significantly higher--as much as an order of magnitude--than the Wi-Fi speed experienced by any single device within the home or business. 
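

A quick sketch shows how that methodological gap can distort the headline numbers. The sample values below are hypothetical, chosen only so their mean (80.7 Mbps) and median (34.0 Mbps) match the study's summary statistics, and the scaling factors are assumptions bounded by the "order of magnitude" figure cited above:

```python
from statistics import mean, median

# Hypothetical Wi-Fi speed-test results in Mbps, constructed only so that their
# mean (80.7) and median (34.0) match the study's reported summary statistics.
reported = [10, 15, 22, 28, 30, 38, 60, 100, 200, 304]

print("reported mean:", mean(reported), "Mbps   reported median:", median(reported), "Mbps")

# Wi-Fi tests can understate delivered speed by as much as an order of magnitude,
# per the discussion above; the factors below are assumptions, not measurements.
for factor in (2, 5, 10):
    scaled = [r * factor for r in reported]
    print(f"if Wi-Fi understates {factor}x -> estimated delivered median ~{median(scaled)} Mbps")
```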


Complain all you want about map inaccuracies. The divergence from "reality" caused by that source of error arguably pales in comparison with testing error that measures only Wi-Fi device performance, not the actual speeds delivered to any location by an ISP.


In fact, virtually all user-run speed tests diverge so much from delivered speeds that the reported figures are likely wrong--and undercounted--by as much as an order of magnitude.


Thursday, December 16, 2021

Only 28% of U.K. Customers Able to Buy FTTH Broadband Do So

Ofcom’s latest research shows the continuing lag between broadband supply and demand. In other words, it is one thing to make FTTH or gigabit-per-second internet access available. It is something else to entice customers to buy such services.


Fiber-to-home facilities now are available to more than eight million U.K. homes, or 28 percent of dwelling units. 


Meanwhile, gigabit-capable broadband is available to 13.7 million homes, or 47 percent of total homes. But take-up of gigabit speed services is still low, with around seven percent of FTTH  customers buying gigabit services, says Ofcom. 


source: Ofcom 


Fully 96 percent  of U.K. premises have access to 30 Mbps broadband connections. About 69 percent of locations able to buy 30 Mbps actually buy it, says Ofcom. Also, Ofcom notes that “94 percent of U.K. premises have access to an MNO (mobile network operator) FWA (fixed wireless access) service.” 


Mobile operators claim average download speeds up to 100 Mbps to 200 Mbps on their 5G fixed wireless services, Ofcom says. 


Satellite services add more potential coverage. “For example, Konnect states that its satellite covers around 75 percent of the U.K. and offers commercial services on a 24/7 basis direct to consumers with download speeds between 30 Mbps and 100 Mbps, with upload speeds averaging 3 Mbps.”


New low earth orbit satellite services such as Starlink also are coming. “Starlink indicates that users can currently expect to see 100 Mbps to 200 Mbps or greater download speeds and upload speeds of 10 Mbps to 20 Mbps, with latency of 20 milliseconds or lower in most locations,” says Ofcom. 


The point is that although we might think consumers would jump at the chance to buy either FTTH service or gigabit-per-second service, that is not the case. Only about 28 percent of households able to buy FTTH service do so, while just seven percent of households able to buy gigabit service do so. 
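

Working those percentages through, using the home counts and take rates quoted above (the arithmetic is purely illustrative):

```python
ftth_homes_passed = 8.0e6        # "more than eight million" U.K. homes passed by FTTH
gigabit_homes_passed = 13.7e6    # homes where gigabit-capable broadband is available

ftth_take_rate = 0.28            # share of FTTH-passed homes that buy FTTH service
gigabit_take_rate = 0.07         # share of gigabit-capable homes that buy gigabit tiers

print(f"~{ftth_homes_passed * ftth_take_rate / 1e6:.1f} million homes actually buy FTTH service")
print(f"~{gigabit_homes_passed * gigabit_take_rate / 1e6:.2f} million homes actually buy gigabit service")
```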


To a large extent, internet service providers are investing ahead of demand, rather than following consumer demand. That is one key reason why customer experience did not fall off a cliff when pandemic-related shutdowns happened. ISPs already had created excess supply. 


That is likely to be the trend virtually forever.


Wednesday, December 15, 2021

Deutsche Telekom Speeds Up FTTH; Cable Already Supplies Gigabit-Per-Second Service to Half of German Households

If Germany has about 40 million households, then Deutsche Telekom’s goal of connecting 10 million homes with fiber-to-home facilities by 2025 suggests coverage of about 25 percent of German homes with FTTH. 

source: IDATE   


Of course, physical media is one thing; bandwidth another. Vodafone's hybrid fiber coax network already covers at least 22 million German households with gigabit-per-second speeds, meaning more than half of German households can buy gigabit service. Cable gigabit households should reach 25 million homes soon.  


source: Viavi

Tuesday, December 14, 2021

What Exactly is Web3?


Juan Benet, Founder & CEO of Protocol Labs, talks about Web3.

Monday, December 13, 2021

Do Network Effects Still Drive Connectivity Business Moats?

Theodore Vail and Bob Metcalfe are among the entrepreneurs whose thinking has implicitly or explicitly relied on the notion of network effect, the increase in value or utility that happens when more people use a product or service. 


source: Medium 


James Currier and NFX argue there are several clearly different types of network effect, which in their estimation drive 70 percent of the value of technology companies. That is reason enough to understand the principle. 


Essentially, network effects create business moats: barriers to entry by rivals. But some may argue that “network effects” are overrated sources of advantage. 


Are network effects explainable some other way? Can “economies of scale” explain advantage? Are the supposed advantages of network effects explainable by something else?


Perhaps “platform” is a way of explaining the success of a business model otherwise considered to be anchored in network effects. “Even among the companies that have come to define the sector--Facebook, Amazon, Apple, Netflix and Google--only Facebook’s franchise was primarily built on network effects,” some argue. 


Might  “viral” status, “branding,” “switching costs,” critical mass or other advantages explain defensive moats? It might not be so clear.  


When the network itself--the number of people one can reach on a particular communications network, for example--drives value, that is an example of network effect, somewhat clearly.
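

The classic formulation associated with Metcalfe is that a network's potential value scales with the number of possible pairwise connections, n(n-1)/2, rather than linearly with the number of users (the broadcast, or Sarnoff, view). A toy comparison, purely illustrative:

```python
def sarnoff_value(n):
    """Broadcast-style value: proportional to the number of users."""
    return n

def metcalfe_value(n):
    """Network-effect value: proportional to the number of possible pairwise connections."""
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(f"{n} users -> linear value {sarnoff_value(n)}, possible connections {metcalfe_value(n)}")
```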


As an example of a business moat, Theodore Vail, the chairman of AT&T, said in 1908 that “no one has use for two telephone connections if he can reach all with whom he desires connection through one.” 


In the internet era, one might question the network effect in the connectivity business, since by definition every customer or user can reach any other lawful user regardless of the particular details of access network supply. 


As important as network effect might have been for monopolist AT&T, it is unclear whether such advantage still is possible in the internet era. Scale arguably continues to matter. But network effects? Unclear. 


Is There Really an Enterprise "Middle Mile?"

Terminology changes in the connectivity business over time. Consider the term “middle mile,” which has come into use over the past decade. The term refers to the network segment between the core network backbone (the wide area network) and the local access network. 

Think of this as what we used to refer to as the trunking network, or perhaps the distribution network. If the core network terminates at a class 4 switch or a colocation facility, then the “middle mile” is the transport network connecting the colo to the local access network (a central office or headend, for example).

 

Some illustrations tend to distort the network architecture, even when subject matter experts correctly understand the concept. In this illustration, which shows the way an enterprise user might see matters, the entire WAN is considered “middle mile,” not simply the connections between a colo site and the WAN. 

source: Telegeography


That is understandable if we conceive of the network the way an enterprise might: that “everything not part of my own network” (“my local area network”) is “in the cloud,” an abstraction. 


Even viewed that way, the middle mile is an abstraction. It is part of the network “cloud,” in the sense network architects have depicted it: all the network that is not owned by the enterprise. 


The point is that there is a difference between network terms such as “middle mile” as a description of network facilities and the use of the term (perhaps even incorrectly) as a matter of networking architecture. 


“WAN transport” is not “middle mile,” in terms of network function. But everything other than the enterprise LAN is “cloud” or “not owned by me” in terms of data architecture. In that sense, the term “middle mile” is unnecessary.

 

Sunday, December 12, 2021

How Do Network Effects Underpin Business Models?

Friday, December 10, 2021

How Big a Problem are "As Built" Maps?

If you know anything about outside plant operations, you know that lots of maps--especially “as built” maps--are not fully and accurately updated. They should be, but they perhaps are not. That can result in discrepancies between the way a service provider believes specific network locations are configured, and the way they actually exist. 


That is not to say malfeasance is never possible, but it is much more likely that maps are incorrect simply because, over time, not all the changes are reflected in official maps. To use a simple mobile analogy, coverage maps indicating data speeds will show one set of numbers in the winter, and a different set in the summer, where there are lots of deciduous trees. 


It also is possible fixed network data speeds will show one set of numbers at the hottest point of the summer and the coldest part of winter, or even different performance based on thermal effects across a day or a week.


Temperature affects both processor and cable performance, for example. 


Measurements for parts of the network that are newer might diverge from parts of the network that are older, even in the same neighborhoods. 


The point is that there are lots of reasons why end user data speeds are not what the maps suggest they should be.


Spain Connectivity Markets Remain Contestable

Spain’s communications regulator, the National Markets and Competition Commission, says consumer spending on bundled connectivity dipped slightly in the first two quarters of 2021. The report notes annualized declines of 2.5 euros in spending on quadruple-play packages and 2.8 euros on quintuple-play packages. 


source: CNMC 


As always, any number of reasons could explain such trends. Economic weakness exacerbated by the Covid pandemic could cause consumer spending to drop, though that would not directly explain price declines for these packages. 


Competition might have led to price declines for existing products. Though the top three firms have about 75 percent market share, the market does not seem to have the stable “rule of four” structure that inhibits price wars.


In fact, neither Spain’s fixed network broadband market nor its mobile market has yet reached the “rule of four” structure. That suggests market shares remain unstable; competitive share gains and losses remain possible. 


source: CNMC, Financial Times 


Has AI Use Reached an Inflection Point, or Not?

As always, we might well disagree about the latest statistics on AI usage. The proportion of U.S. employees who report using artificial inte...