Wednesday, December 22, 2021

"Middle Mile" Sometimes is "Muddled Mile"

Some seem to define “middle mile” too loosely. Some projects described as middle mile might have nothing to do with middle mile infrastructure at all; often what is actually meant is local access, not middle mile infrastructure.


Terminology in the connectivity business changes over time. “Middle mile” refers to the network segment between the core network backbone (the wide area network) and the local access network.


Think of this as what we used to refer to as the trunking network, or perhaps the distribution network. If the core network terminates at a class 4 switch or a colocation facility, then the “middle mile” is the transport network connecting the colo to the local access network (a central office or headend, for example).
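As a purely illustrative sketch (the endpoint labels below are assumptions chosen for this example, not taken from any standard), the distinction can be made concrete as a simple mapping of links to the segments just described:

```python
# Toy illustration of the three segments discussed above.
# Endpoint labels are assumptions for this example, not standard terminology.
SEGMENTS = {
    "core / WAN":   ("long-haul backbone node", "colocation facility"),
    "middle mile":  ("colocation facility", "central office or headend"),
    "local access": ("central office or headend", "end-user premises"),
}

def classify_link(a: str, b: str) -> str:
    """Name the segment whose endpoints match this link, else 'unknown'."""
    for name, endpoints in SEGMENTS.items():
        if {a, b} == set(endpoints):
            return name
    return "unknown"

# A colo-to-headend link is middle mile; a headend-to-home link is local access.
print(classify_link("colocation facility", "central office or headend"))  # middle mile
print(classify_link("central office or headend", "end-user premises"))    # local access
```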

 

Some illustrations tend to distort the network architecture, even when subject matter experts correctly understand the concept. In this illustration, which shows the way an enterprise user might see matters, the entire WAN is considered “middle mile,” not simply the connections between a colo site and the WAN. 

source: Telegeography


That is understandable if we conceive of the network the way an enterprise might: that “everything not part of my own network” (“my local area network”) is “in the cloud,” an abstraction. 


Even viewed that way, the middle mile is an abstraction. It is part of the network “cloud,” in the sense network architects have depicted it: all the network that is not owned by the enterprise. 


The point is that there is a difference between network terms such as “middle mile” as a description of network facilities and the use of the term (perhaps even incorrectly) as a matter of networking architecture. 


“WAN transport” is not “middle mile” in terms of network function. From a data-architecture standpoint, though, everything other than the enterprise LAN is “cloud,” or “not owned by me,” and in that sense the term middle mile is unnecessary.


Of course, such terms matter more to service providers than to end users and customers. The WAN or “network core” refers to one part of a network; access to another; and the middle mile, distribution, backhaul or trunking network to a third. None of that matters to most enterprises or consumers buying connectivity. 


The term arguably matters most to retail internet service providers that must buy capacity on such networks to reach an internet point of presence (PoP). 


Still, usage does change, over time. 


We used to define “broadband” as any data rate at 1.544 Mbps or faster. Now the definition is more flexible, and deliberately changes over time. The U.S. Federal Communications Commission defines “broadband” as a minimum downstream speed of 25 Mbps, with 3 Mbps upstream. 


In a mobile network, “backhaul” used to mean the trunk connection between a cell tower and a mobile switching center. These days, as networks are virtualized, we talk about fronthaul, mid-haul and backhaul. All those deployments occur within a local network, but “fronthaul” applies to the connection between a baseband site and a radio site, for example.


Mid-haul can refer to the connection between a baseband processing site and a controller. Backhaul then refers to the connection with a wide area network access point. 
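A rough way to summarize those three transport segments (the radio unit / distributed unit / centralized unit labels reflect common usage in disaggregated RAN discussions; treat the exact endpoint names as illustrative assumptions, not a specification citation):

```python
# Illustrative summary of transport segments in a disaggregated mobile network.
# Endpoint names (RU, DU, CU) follow common usage and are assumptions, not a spec citation.
ran_transport = {
    "fronthaul": ("radio site (RU)", "baseband site (DU)"),
    "mid-haul":  ("baseband site (DU)", "controller / centralized unit (CU)"),
    "backhaul":  ("centralized unit (CU)", "wide area network access point"),
}

for segment, (near_end, far_end) in ran_transport.items():
    print(f"{segment}: {near_end} <-> {far_end}")
```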



So it is with “middle mile.” Classic fixed telco networks featured wide area networks for long-haul traffic, connecting tandem offices, for example. Connections between central offices, or from central offices to remote hubs, were part of the local trunking network. 


The local access network then ran from central offices or remote hubs to end user locations. We did not use the term “middle mile.”


In the context of internet traffic, “middle mile” often refers to the portion of the network that carries traffic from an ISP’s servers to an internet traffic exchange point or peering location. 


Partly a physical concept and partly a business concept, the middle mile is the segment of a network between an internet peering point or colocation center and central offices, headends, or ISP data centers. 


Still, it is a somewhat-murky concept. Facebook, for example, sometimes refers to the middle mile as facilities linking its own data centers at distances of hundreds of miles. That is hard to reconcile with a definition focused on connections between internet peering locations and headends or ISP data centers or telco central offices. 


Others appear to use the term “middle mile” to refer to private networks of almost any distance that move traffic between a wide area network colo location and an ISP’s headend or data center. 


Traditional telco voice networks connecting central offices within a city or region might also be called “middle mile” instead of trunking networks. 


It might be easier to look at “middle mile” as a business concept, representing capacity costs or investments that are made to move traffic between an ISP headend and an internet traffic exchange point.


Tuesday, December 21, 2021

"Evergreen Stories" Mean Nil Chances of Change

Some storylines are “evergreen:” they are independent of current events and are not time sensitive. The good or bad news--depending on how one is affected by such stories--is that their unchanging nature means precisely that: the story does not change.

And that is likely the bad news for European mobile operators anxious to see more Europe-based supply of Open RAN infrastructure.

In a white paper, Deutsche Telekom, Orange, Telecom Italia (TIM), Telefónica and Vodafone want policymakers to make Open Radio Access Networks a strategic priority, arguing that (as has been argued for decades about other areas of innovation) Europe is “falling behind” the United States and Japan in developing O-RAN.

The problem is that this is an “evergreen” story. Europe has been seen as “falling behind” or “lagging” in many areas of technology innovation and sales leadership for some decades. Whether it is “only” two decades or as much as four decades is the issue.

Indeed, most observers might well be forced to agree that technology leadership in computing, digital apps and communications seems to be coming from China or the United States. Open RAN is simply another example of that trend.

At stake, the telcos say, are global vendor revenues across the Open RAN value chain, in which the largest segment accounts for 38 percent of total revenue, followed by RAN hardware (24 percent), cloud (18 percent), semiconductors (11 percent) and RAN software (nine percent).

For infrastructure supplies, the global market is said to be worth EUR36.1 billion by about 2026. That includes Open RAN hardware and software (EUR13.2 billion) and revenue from the broader RAN platform as well.
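As a quick consistency check on the figures cited (simple arithmetic on the numbers above, nothing drawn from the white paper beyond them), the segment shares sum to 100 percent, and the Open RAN hardware and software slice works out to a bit over a third of the projected 2026 total:

$$38 + 24 + 18 + 11 + 9 = 100 \qquad \frac{13.2}{36.1} \approx 37\%$$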

And that might be a large part of the problem. The ecosystem spans so many other areas where European suppliers are not leaders that “catching up” in a short time seems highly unlikely.

And though legitimate questions can be asked about how soon Open RAN becomes a substantial commercial reality, it is hard to dispute that--eventually--it will, as part of the broader move to cloud-native and virtualized telecom networks.

“Open RAN is coming regardless of what Europe decides,” says the white paper.

The study identified 13 major Open RAN players in Europe compared to 57 major Non-European players. However, many European players are at an early stage of development and have not yet secured commercial Open RAN contracts, while vendors from other regions are moving ahead in actual sales.

“European vendors are not even present in all Open RAN sub-categories (e.g. Cloud Hardware), and are outnumbered in almost all categories by Non-European players (e.g. 2 major European vs 9 major Non-European players in the semiconductor category),” the paper says.

That “Europe is falling behind” argument has been made for somewhere between two and four decades, whether the subject is research and development spending, digital technology, economic growth or innovation in general.

Probably few--if any--observers would be optimistic about changing Open RAN supplier capabilities in a short period of time, if it could be done at all.

It’s simply an evergreen story.

Saturday, December 18, 2021

Home Broadband Speed Tests Using Wi-Fi-Connected Devices Are Rubbish

Does your smartphone have an Ethernet port? Do you own spare Ethernet cables? Do you own a port converter to connect Ethernet to your smartphone?


And if you do run speed tests on your PC, do you use Wi-Fi or a direct Ethernet connection? All those questions matter because the answers essentially invalidate much of the home broadband speed test data we see so often. 


Testing your smartphone’s “speed” when connected to Wi-Fi only tells you the bandwidth you are getting from that device, at that location, for the moment, over the Wi-Fi connection. It does not tell you the actual speed delivered to your home broadband location by the internet service provider. And the home broadband speed enabled by the ISP can be as much as 10 times higher than the measured speed on your Wi-Fi device. 


Methodology matters. 


The Central Iowa Broadband Internet Study, for example, conducted in the first half of 2021,  illustrates many issues faced by rural households as well as the testing methodology issue. 


 In rural areas studied, some 27.5 percent of internet users had some form of non-cabled access--satellite, fixed wireless or mobile. 


Area wide, 42 percent of download speed tests failed to reach 25 Mbps, the study says. The number of town/city respondents failing to meet the threshold was about 32 percent. In rural areas the percentage of tests delivering less than 25 Mbps was about 64 percent. 
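The arithmetic behind such figures is simple. As a minimal sketch (the sample values below are invented for illustration, not drawn from the study):

```python
# Minimal sketch: share of download speed tests failing to reach a threshold.
# Sample values are invented for illustration; they are not the study's data.
download_tests_mbps = [12, 48, 3, 110, 22, 250, 18, 35, 8, 95]

THRESHOLD_MBPS = 25  # the FCC downstream "broadband" definition
failing = sum(1 for speed in download_tests_mbps if speed < THRESHOLD_MBPS)
share_failing = failing / len(download_tests_mbps)

print(f"{share_failing:.0%} of tests failed to reach {THRESHOLD_MBPS} Mbps")
```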


But the study also suggests a big methodological problem: speeds delivered by the internet service provider likely were not tested. Instead, respondents likely used their Wi-Fi connections. And that can mean understating the actual speed of the connection by as much as 10 times. 


To be sure, the same problem affects almost all consumer speed test data, as most such tests use Wi-Fi-connected devices. 


The point is that ISP-delivered speeds quite often are degraded by the performance of in-home Wi-Fi networks, older equipment or in-building obstructions. Actual speeds delivered by the internet service provider to a router are one matter. Actual speeds experienced by any Wi-Fi-connected device within the home are something else. 


source: CMIT Solutions 


One important caveat is that speed tests made by consumers using their Wi-Fi connections might not tell us too much that is useful about internet access speeds. In other words, consumers who say they do not get 25 Mbps on their Wi-Fi-connected devices could well be on access networks that actually are bringing speeds 10 times faster (250 Mbps) than reported. 


Of the respondents reporting they use a non-cabled network for home broadband, 41 percent used a satellite provider. Some 30 percent used a fixed wireless provider and 29 percent reported using a mobile network. 


Only about 1.5 percent of survey respondents buying internet access reported they use a non-terrestrial provider for internet access. About 6.7 percent of survey respondents said “no internet service is available at their home.”


“The average download speed recorded was 80.7 Mbps, but the median download speed was just 34.0 Mbps,” the study reports. 


The median download speed for city/town respondents (101.6 Mbps) was three times higher than the median speed among rural respondents (34.0 Mbps), the study says. 
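That gap between mean and median is what a right-skewed distribution produces: a handful of very fast connections pulls the average well above the typical result. A minimal illustration (invented numbers, not the survey's data):

```python
# Illustration: a few very fast connections pull the mean far above the median.
# Numbers are invented for illustration; they are not the survey's data.
from statistics import mean, median

speeds_mbps = [10, 15, 20, 25, 30, 35, 40, 200, 500, 940]

print(f"mean:   {mean(speeds_mbps):.1f} Mbps")    # pulled up by the fast tail
print(f"median: {median(speeds_mbps):.1f} Mbps")  # closer to the typical respondent
```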


Keep in mind, however, that the speed tests likely were conducted over a local Wi-Fi connection, the study says. That matters, as speed actually delivered to the premises quite often is significantly higher--as much as an order of magnitude--than the Wi-Fi speed experienced by any single device within the home or business. 


Complain all you want about map inaccuracies. The divergence from “reality” caused by that source of error arguably pales next to testing error that measures only Wi-Fi device performance, not the actual speeds delivered to a location by an ISP.


In fact, virtually all user-run speed tests diverge from the delivered speed by such a wide margin that the reported figures are likely understated by as much as an order of magnitude.


Thursday, December 16, 2021

Only 28% of U.K. Customers Able to Buy FTTH Broadband Do So

Ofcom’s latest research shows the continuing lag between broadband supply and demand. In other words, it is one thing to make FTTH or gigabit-per-second internet access available. It is something else to entice customers to buy such services.


Fiber-to-home facilities now are available to more than eight million U.K. homes, or 28 percent of dwelling units. 


Meanwhile, gigabit-capable broadband is available to 13.7 million homes, or 47 percent of total homes. But take-up of gigabit speed services is still low, with around seven percent of FTTH  customers buying gigabit services, says Ofcom. 
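Taken together, those coverage figures imply a base of roughly 29 million U.K. homes; this is simple arithmetic on the numbers above, not a figure Ofcom states:

$$\frac{8\ \text{million}}{0.28} \approx 28.6\ \text{million} \qquad \frac{13.7\ \text{million}}{0.47} \approx 29.1\ \text{million}$$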


source: Ofcom 


Fully 96 percent  of U.K. premises have access to 30 Mbps broadband connections. About 69 percent of locations able to buy 30 Mbps actually buy it, says Ofcom. Also, Ofcom notes that “94 percent of U.K. premises have access to an MNO (mobile network operator) FWA (fixed wireless access) service.” 


Mobile operators claim average download speeds up to 100 Mbps to 200 Mbps on their 5G fixed wireless services, Ofcom says. 


Satellite services add more potential coverage. “For example, Konnect states that its satellite covers around 75 percent of the U.K. and offers commercial services on a 24/7 basis direct to consumers with download speeds between 30 Mbps and 100 Mbps, with upload speeds averaging 3 Mbps.”


New low earth orbit satellite services such as Starlink also are coming. “Starlink indicates that users can currently expect to see 100 Mbps to 200 Mbps or greater download speeds and upload speeds of 10 Mbps to 20 Mbps with latency of 20 milliseconds or lower in most locations,” says Ofcom. 


The point is that although we might think consumers would jump at the chance to buy either FTTH service or gigabit-per-second service, that is not the case. Only about 28 percent of households able to buy FTTH service do so, while just seven percent of households able to buy gigabit service do so. 


To a large extent, internet service providers are investing ahead of demand, rather than following consumer demand. That is one key reason why customer experience did not fall off a cliff when pandemic-related shutdowns happened. ISPs already had created excess supply. 


That is likely to be the trend virtually forever.


Wednesday, December 15, 2021

Deutsche Telekom Speeds FTTH, Cable Already Supplies Gigabit Per Second Service to Half of German Households

If Germany has about 40 million households, then Deutsche Telekom’s goal of connecting 10 million homes with fiber-to-home facilities by 2025 suggests coverage of about 25 percent of German homes with FTTH. 

source: IDATE   


Of course, physical media is one thing; bandwidth another. Vodafone's hybrid fiber coax network already covers at least 22 million German households with gigabit-per-second speeds, meaning more than half of German households can buy gigabit service. Cable gigabit households should reach 25 million homes soon.  
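The rough arithmetic, assuming about 40 million German households:

$$\frac{10\ \text{million}}{40\ \text{million}} = 25\% \ \text{(planned FTTH coverage)} \qquad \frac{22\ \text{million}}{40\ \text{million}} = 55\% \ \text{(current cable gigabit coverage)}$$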


source: Viavi

Tuesday, December 14, 2021

What Exactly is Web3?


Juan Benet, Founder & CEO of Protocol Labs, talks about Web3.

Monday, December 13, 2021

Do Network Effects Still Drive Connectivity Business Moats?

Theodore Vail and Bob Metcalfe are among the entrepreneurs whose thinking has implicitly or explicitly relied on the notion of network effect, the increase in value or utility that happens when more people use a product or service. 


source: Medium 


James Currier and NFX argue there are several distinct types of network effect, which they say drive 70 percent of the value of technology companies. That is reason enough to understand the principle. 


Essentially, network effects create business moats: barriers to entry by rivals. But some may argue that “network effects” are overrated sources of advantage. 


Are network effects explainable some other way? Can “economies of scale” explain the advantage? Or are the supposed benefits of network effects really attributable to something else?


Perhaps “platform” is a way of explaining the success of a business model otherwise considered to be anchored in network effects. “Even among the companies that have come to define the sector--Facebook, Amazon, Apple, Netflix and Google--only Facebook’s franchise was primarily built on network effects,” some argue. 


Might  “viral” status, “branding,” “switching costs,” critical mass or other advantages explain defensive moats? It might not be so clear.  


When the network itself--the number of people one can reach on a particular communications network, for example--drives value, that is an example of network effect, somewhat clearly.
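That intuition is usually formalized as Metcalfe’s law: with n users, the number of possible pairwise connections--and, on this argument, the network’s potential value--grows roughly as the square of the user base:

$$\frac{n(n-1)}{2} \approx \frac{n^{2}}{2}$$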


As an example of a business moat, Theodore Vail, the chairman of AT&T, said in 1908 that “no one has use for two telephone connections if he can reach all with whom he desires connection through one.” 


In the connectivity business in the internet era, one might actually question the network effect to a large extent, since, by definition, every customer or user can reach any other lawful user without regard to the particular details of access network supply. 


As important as network effect might have been for monopolist AT&T, it is unclear whether such advantage still is possible in the internet era. Scale arguably continues to matter. But network effects? Unclear. 


Is There Really an Enterprise "Middle Mile?"

Terminology changes in the connectivity business, over time. Consider the term “middle mile,” which has come into use over the past decade. The term refers to the part of the network segment between the core network backbone (the wide area network) and the local access network. 

Think of this as what we used to refer to as the trunking network, or perhaps the distribution network. If the core network terminates at a class 4 switch or a colocation facility, then the “middle mile” is the transport network connecting the colo to the local access network (a central office or headend, for example).

 

Some illustrations tend to distort the network architecture, even when subject matter experts correctly understand the concept. In this illustration, which shows the way an enterprise user might see matters, the entire WAN is considered “middle mile,” not simply the connections between a colo site and the WAN. 

source: Telegeography


That is understandable if we conceive of the network the way an enterprise might: that “everything not part of my own network” (“my local area network”) is “in the cloud,” an abstraction. 


Even viewed that way, the middle mile is an abstraction. It is part of the network “cloud,” in the sense network architects have depicted it: all the network that is not owned by the enterprise. 


The point is that there is a difference between network terms such as “middle mile” as a description of network facilities and the use of the term (perhaps even incorrectly) as a matter of networking architecture. 


“WAN transport” is not “middle mile” in terms of network function. From a data-architecture standpoint, though, everything other than the enterprise LAN is “cloud,” or “not owned by me,” and in that sense the term middle mile is unnecessary.

 

Sunday, December 12, 2021

How Do Network Effects Underpin Business Models?

Friday, December 10, 2021

How Big a Problem are "As Built" Maps?

If you know anything about outside plant operations, you know that lots of maps--especially “as built” maps--are not fully and accurately updated. They should be, but they perhaps are not. That can result in discrepancies between the way a service provider believes specific network locations are configured, and the way they actually exist. 


That is not to say malfeasance is never possible, but it is much more likely that maps are incorrect simply because, over time, not all the changes are reflected in official maps. To use a simple mobile analogy, coverage maps indicating data speeds will show one set of numbers in the winter, and a different set in the summer, where there are lots of deciduous trees. 


It also is possible fixed network data speeds will show one set of numbers at the hottest point of the summer and another at the coldest part of winter, or even different performance based on thermal effects across a day or a week.


Temperature affects both processor and cable performance, for example. 


Measurements for parts of the network that are newer might diverge from parts of the network that are older, even in the same neighborhoods. 


The point is that there are lots of reasons why end user data speeds are not what the maps suggest they should be.


Spain Connectivity Markets Remain Contestable

Spain’s communications regulator, the National Markets and Competition Commission, says consumer spending on bundled connectivity dipped slightly in the first two quarters of 2021. The report notes annualized 2.5 euro declines in quadruple-play and 2.8 euro declines in spending on quintuple-play packages. 


source: CNMC 


As always, any number of reasons could explain such trends. Economic weakness exacerbated by the Covid pandemic could cause consumer spending to drop, though that does not directly explain price declines for these packages. 


Competition might have led to price declines for existing products. Though the top three firms have about 75 percent market share, the market does not seem to have the stable “rule of four” structure that inhibits price wars.


In fact, neither Spain’s fixed network broadband market nor its mobile market has yet reached the “rule of four” structure. That suggests market shares remain unstable: competitive share gains and losses remain possible. 


source: CNMC, Financial Times 


Edge Computing Partnerships Reveal Strategic Choices

Partnership is a funny word in the computing and connectivity industries. It typically is spun as a source of competitive advantage, and that arguably is true when a firm tries to add features and functionality outside its historic core business that are complementary to its core. 


Partnerships are often said to be advantageous when a firm wants to move out of its core and into an adjacency where it does not already have domain competence. The strategy often is to build volume and domain expertise to the point where a firm can source product features internally, rather than relying on a partner. 


When a firm partners in any area related to its core business, that is probably an indication of weakness, often the result of  financial limitations that prevent a firm from developing its own resources. 


That arguably is the case for cable operators looking at edge computing. A survey of cable operators by Heavy Reading found 16 percent of respondents planned to build at least some of their own infrastructure. But most respondents indicated their present thinking was to partner with one or more hyperscale computing as a service suppliers to create edge computing businesses.  

source: Light Reading 


At this point, as is true for many telcos as well, edge computing as a service is largely viewed as the domain of the hyperscalers, with some exceptions in regions where hyperscaler presence is undeveloped. The reliance on partnerships seems a realistic recognition that the actual computing as a service is outside the connectivity domain, and that hyperscalers have too many advantages to beat. 


Instead, in most cases, edge computing is seen as a product that can leverage connectivity provider real estate and connectivity assets, providing incremental revenue growth. There seems little belief that edge computing offers hope of a new role for connectivity providers as branded suppliers of computing as a service.


Anti-Trust Rarely Succeeds Permanently

Most mature markets feature a rule of three or a rule of four. “A stable competitive market never has more than three significant competitors,” BCG founder Bruce Henderson said in 1976. That often means the top-three providers have market share in the 70 percent to 90 percent range. 


The rule of four refers to the expected market share in a stable market, where leader market share is twice that of provider number two, and where the number-two supplier has double the share of the number-three provider. 


That creates a stable market share structure of 4:2:1. It arguably is stable because there is little incentive for either number one or number two to disrupt the market by attacking to gain share.
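In percentage terms, the 4:2:1 pattern works out to roughly 57 percent, 29 percent and 14 percent of the revenue held by the top three firms, consistent with the 40-20-10 and 35-17-8 total-market patterns discussed below:

$$\frac{4}{7} \approx 57\% \qquad \frac{2}{7} \approx 29\% \qquad \frac{1}{7} \approx 14\%$$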


All of that explains the periodic waves of anti-trust action we see in many markets. Though there seems to be virtually no interest in anti-trust action in the cloud computing “as a service” markets, some speculate it could eventually happen. 


The issue is that such regulatory action never lasts. Competitive markets will revert to the rule of three or rule of four structure again. Look at U.S. telecommunications, where the AT&T monopoly was broken up under a 1982 consent decree, replacing one AT&T with eight contestants: seven regional Bell companies plus a slimmed-down AT&T. 


What do we see some 40 years later? Essentially the rule of three. The rule of four is not yet in place, though. 


There also generally is a direct relationship between market share and profitability. Some note a similar relationship between return on sales and market share.


source: Marketing Science Institute


The point is that it is reasonable to expect that profits are directly related to market share, with a pattern where the leading three firms have something like a 40-20-10 share pattern, or perhaps 35-17-8 pattern.

Source: Reperio Capital


That pattern is--contrary to often-made claims--not a result of lack of competition, but instead evidence that competition exists. Competition means buyers gravitate to the perceived better products. That, in turn, leads to market share gains. 


At some point, it is in the self interest of contestants not to wage ruinous price wars. Such wars depress earnings and profit margins for all contestants, but rarely change the relative standings. The more-profitable leader can absorb the losses more easily than the less-profitable attackers in second or third place. 


Anti-trust action rarely, if ever, results in permanent change.


Will Mobile Operators Get 10% of MEC Revenue?

Security services, IoT and edge computing combined will amount to about $50 billion in annual service provider revenues by 2024, according to the IN Forum. If the compound annual growth rate for those services is 17.9 percent, then 2026 revenue might be about $70 billion globally. 

source: IN Forum 
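The implied compounding, treating 17.9 percent as a compound annual growth rate, is straightforward:

$$\$50\ \text{billion} \times (1.179)^{2} \approx \$69.5\ \text{billion} \approx \$70\ \text{billion}$$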


To be sure, much-larger “total market” forecasts are possible if one adds to service revenue the contributions of hardware, software and other infrastructure, application licenses, system integration revenues and private enterprise investments and operations spending on edge computing, IoT, security. 


Looking only at edge computing, and all revenue segments, as much as $250 billion in annual revenue might be possible in 2025. It is possible service provider revenues from edge computing in that year might amount only to $20 billion.

Where Might Telcos Have Advantages in Multi-Access Edge Computing?

If we agree that edge computing brings cloud services and capabilities including computing, storage and networking physically closer to the end-user, then we also might agree that edge computing value will be generated as computing and cloud services can be executed locally. 


A corollary is likely that the suppliers of brand-name cloud apps and computing as a service will have a big role in edge computing, as buyers will be looking for functionality provided by the name-brand apps. 

source: STL Partners 


Mobile and fixed network connectivity suppliers have viewed edge computing as a way to increase the value of their assets and services. Mobile operators see 5G private and public network access as an opportunity to support ultra-low latency use cases, for example. 


Other participants in the connectivity value chain (tower operators, data centers, system integrators and infrastructure providers) might also see opportunities. 


Local real estate also is an opportunity, in the form of space, cooling, security and power for edge servers. Such real estate can be provided at a cell tower, street cabinet, network aggregation point, a central office or internet exchange point, for example. 


source: STL Partners


More complicated are moves to supply the actual computing as a service function. In fact, recent moves by leading U.S. mobile operators to use hyperscalers as the suppliers of cloud computing to support their 5G network cores illustrate the advantages of not creating a custom computing function, even to support the internal operations of the 5G virtualized network. 

source: STL Partners 


Most telcos are looking at the brand-name hyperscalers to supply the computing platform, for example. That is not to say all telcos will do so. 


It is possible, in some regions, that connectivity providers will have greater opportunities to create general purpose edge computing infrastructures that have brand-name computing as a service suppliers present as tenants, much as hyperscalers are tenants at third party data centers. 


On the other hand, hyperscalers also will aim to supply on-the-premises edge computing facilities for enterprises, thus avoiding the need for much “in the metro area” real estate. 


It is too early to predict precisely which business models will flourish, as far as telco edge computing involvement is concerned.

Yes, Follow the Data. Even if it Does Not Fit Your Agenda

When people argue we need to “follow the science” that should be true in all cases, not only in cases where the data fits one’s political pr...