Wednesday, December 22, 2021

"Middle Mile" Sometimes is "Muddled Mile"

Some seemingly define middle mile too loosely. Some projects talked about as middle mile might not have anything to do with middle mile infrastructure at all. Often, what is actually meant is local access, not middle mile infrastructure.


Terminology in the connectivity business changes over time. The term “middle mile” refers to the network segment between the core network backbone (the wide area network) and the local access network.


Think of this as what we used to call the trunking network, or perhaps the distribution network. If the core network terminates at a class 4 switch or a colocation facility, then the “middle mile” is the transport network connecting the colo to the local access network (a central office or headend, for example).

 

Some illustrations tend to distort the network architecture, even when subject matter experts correctly understand the concept. In this illustration, which shows the way an enterprise user might see matters, the entire WAN is considered “middle mile,” not simply the connections between a colo site and the WAN. 

source: Telegeography


That is understandable if we conceive of the network the way an enterprise might: that “everything not part of my own network” (“my local area network”) is “in the cloud,” an abstraction. 


Even viewed that way, the middle mile is an abstraction. It is part of the network “cloud,” in the sense network architects have depicted it: all the network that is not owned by the enterprise. 


The point is that there is a difference between using a term such as “middle mile” to describe network facilities and using it (perhaps even incorrectly) to describe networking architecture.


“WAN transport” is not “middle mile,” in terms of network function. But everything other than the enterprise LAN is “cloud” or “not owned by me” in terms of data architecture. In that sense, though, the term middle mile is unnecessary.


Of course, such terms are more important for service providers than for end users and customers. Even if “WAN” and “network core” refer to specific parts of a network, “access” to a different part, and middle mile, distribution, backhaul or trunking network to a third part, none of that matters to most enterprises or consumers buying connectivity.


The term arguably matters most to retail internet service providers that need to reach internet points of presence, especially when they must buy capacity on such networks to reach an internet PoP.


Still, usage does change over time.


We used to define “broadband” as any data rate of 1.544 Mbps or faster. Now the definition is more flexible, and deliberately changes over time. The U.S. Federal Communications Commission currently defines “broadband” as a minimum downstream speed of 25 Mbps, with 3 Mbps upstream.
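
To make that threshold concrete, here is a minimal sketch in Python of checking a connection against the FCC 25/3 Mbps definition; the sample speed values are hypothetical.

```python
# Minimal sketch: test a connection against the FCC 25/3 Mbps broadband
# definition discussed above. Speed values below are hypothetical.

FCC_DOWN_MBPS = 25.0   # minimum downstream speed (Mbps)
FCC_UP_MBPS = 3.0      # minimum upstream speed (Mbps)

def is_broadband(down_mbps: float, up_mbps: float) -> bool:
    """Return True if the connection meets the FCC 25/3 Mbps definition."""
    return down_mbps >= FCC_DOWN_MBPS and up_mbps >= FCC_UP_MBPS

# A legacy T1 line (1.544 Mbps symmetrical) no longer qualifies,
# while a 100/10 Mbps connection does.
print(is_broadband(1.544, 1.544))   # False
print(is_broadband(100.0, 10.0))    # True
```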


In a mobile network, “backhaul” used to mean the trunk connection between a cell tower and a mobile switching center. These days, as networks are virtualized, we talk about fronthaul, mid-haul and backhaul. All of those segments sit within a local network; “fronthaul,” for example, refers to the connection between a radio site and a baseband site.


Mid-haul can refer to the connection between a baseband processing site and a controller. Backhaul then refers to the connection with a wide area network access point. 
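
A rough way to keep those segments straight is the small Python sketch below; the endpoint labels simply follow the descriptions in this post and are illustrative, since exact terminology varies by vendor and architecture.

```python
# Rough sketch of the virtualized RAN transport segments described above.
# Endpoint labels follow this post's descriptions and are illustrative only.

ran_transport_segments = {
    "fronthaul": ("radio site", "baseband site"),
    "mid-haul":  ("baseband processing site", "controller"),
    "backhaul":  ("controller", "wide area network access point"),
}

for segment, (a_end, b_end) in ran_transport_segments.items():
    print(f"{segment}: {a_end} <-> {b_end}")
```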



So it is with “middle mile.” Classic fixed telco networks featured wide area networks for long-haul traffic, connecting tandem offices, for example. Connections between central offices and tandem offices or remote hubs were part of the local trunking network.


The local access network then ran from central offices or remote hubs to end user locations. We did not use the term “middle mile.”


In the context of internet traffic, “middle mile” often refers to the portion of the network carrying traffic from an ISP’s servers to an internet traffic exchange point or peering location.


Partly a physical concept and partly a business concept, the middle mile is the segment of a network between an internet peering point or colocation center and central offices, headends or ISP data centers.


Still, it is a somewhat-murky concept. Facebook, for example, sometimes refers to the middle mile as facilities linking its own data centers at distances of hundreds of miles. That is hard to reconcile with a definition focused on connections between internet peering locations and headends or ISP data centers or telco central offices. 


Others appear to use the term “middle mile” to refer to private networks of almost any distance that move traffic between a wide area network colo location and an ISP’s headend or data center. 


Traditional telco voice networks connecting central offices within a city or region might also be called “middle mile” instead of trunking networks. 


It might be easier to look at “middle mile” as a business concept, representing capacity costs or investments that are made to move traffic between an ISP headend and an internet traffic exchange point.
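
Viewed that way, the middle mile becomes a cost line item: the capacity an ISP must buy or build to move its traffic between a headend and an exchange point. A minimal sketch of that arithmetic follows, using entirely hypothetical subscriber counts, per-subscriber traffic and pricing.

```python
# Minimal sketch of middle mile as a cost line item: the capacity needed to
# move traffic between an ISP headend and an internet exchange point.
# All figures below are hypothetical and for illustration only.

subscribers = 5_000
busy_hour_mbps_per_sub = 2.5        # assumed average busy-hour demand per subscriber
price_per_mbps_month = 1.50         # assumed middle mile capacity price, $/Mbps/month

required_capacity_mbps = subscribers * busy_hour_mbps_per_sub
monthly_cost = required_capacity_mbps * price_per_mbps_month

print(f"Required middle mile capacity: {required_capacity_mbps:,.0f} Mbps")
print(f"Estimated monthly capacity cost: ${monthly_cost:,.0f}")
```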


Tuesday, December 21, 2021

"Evergreen Stories" Mean Nil Chances of Change

Some storylines are “evergreen”: they are independent of current events and are not time sensitive. The good or bad news--depending on how one is affected by the evergreen stories--is that the unchanging nature means precisely that: the story does not change.

And that is likely the bad news for European mobile operators anxious to see more Europe-based supply of Open RAN infrastructure.

In a white paper, Deutsche Telekom, Orange, Telecom Italia (TIM), Telefónica and Vodafone urge policymakers to make Open Radio Access Networks a strategic priority, arguing (as has been argued for decades about other areas of innovation) that Europe is “falling behind” the United States and Japan in developing O-RAN.

The problem is that this is an “evergreen” story. Europe has been seen as “falling behind” or “lagging” in many areas of technology innovation and sales leadership for some decades. Whether it is “only” two decades or as much as four decades is the issue.

Indeed, most observers might well be forced to agree that technology leadership in computing, digital apps and communications seems to be coming from China or the United States. Open RAN is simply another example of that trend.

At stake, the telcos say, are global vendor revenues across the Open RAN value chain, where the largest segment accounts for 38 percent of total revenue, followed by RAN hardware (24 percent), cloud (18 percent), semiconductors (11 percent) and RAN software (nine percent).

For infrastructure supplies, the global market is said to be worth EUR36.1 billion by about 2026. That includes Open RAN hardware and software (EUR13.2 billion) and revenue from the broader RAN platform as well.

And that might be a large part of the problem. The ecosystem spans so many other areas where European suppliers are not leaders that “catching up” in a short time seems highly unlikely.

And though legitimate questions can be asked about how soon Open RAN becomes a substantial commercial reality, it is hard to dispute that--eventually--it will do so, as part of the broader move to cloud-native and virtualized telecom networks.

“Open RAN is coming regardless of what Europe decides,” says the white paper.

The study identified 13 major Open RAN players in Europe compared to 57 major Non-European players. However, many European players are at an early stage of development and have not yet secured commercial Open RAN contracts, while vendors from other regions are moving ahead in actual sales.

“European vendors are not even present in all Open RAN sub-categories (e.g. Cloud Hardware), and are outnumbered in almost all categories by Non-European players (e.g. 2 major European vs 9 major Non-European players in the semiconductor category),” the paper says.

That “Europe is falling behind” argument has been made for anywhere from two to four decades, whether the subject is levels of research and development spending, digital technology, economic growth or innovation in general.

Probably few--if any--observers would be optimistic about changing Open RAN supplier capabilities in a short period of time, if it could be done at all.

It’s simply an evergreen story.

Saturday, December 18, 2021

Home Broadband Speed Tests Using Wi-Fi-Connected Devices are Rubbish

Does your smartphone have an Ethernet port? Do you own spare Ethernet cables? Do you own a port converter to connect Ethernet to your smartphone?


And if you do run speed tests on your PC, do you use Wi-Fi or a direct Ethernet connection? Those questions matter because the answers essentially invalidate all the home broadband speed test data we see so often. 


Testing your smartphone’s “speed” when connected to Wi-Fi only tells you the bandwidth you are getting from that device, at that location, for the moment, over the Wi-Fi connection. It does not tell you the actual speed delivered to your home broadband location by the internet service provider. And the home broadband speed enabled by the ISP can be as much as 10 times higher than the measured speed on your Wi-Fi device. 
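
A back-of-the-envelope sketch of that gap is below; the numbers are hypothetical, and the 10x factor is simply the upper bound cited in this post.

```python
# Back-of-the-envelope sketch: a Wi-Fi speed test measures the device's
# wireless throughput, not the speed the ISP delivers to the router.
# Numbers are hypothetical; the 10x factor is the upper bound cited above.

provisioned_mbps = 250.0    # assumed speed the ISP delivers to the router
wifi_degradation = 0.10     # fraction an older device in a distant room might actually see

measured_on_device_mbps = provisioned_mbps * wifi_degradation

print(f"ISP-provisioned speed at the router: {provisioned_mbps:.0f} Mbps")
print(f"Speed test result on the Wi-Fi device: {measured_on_device_mbps:.0f} Mbps")
# The test reports 25 Mbps even though the access line delivers 250 Mbps.
```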


Methodology matters. 


The Central Iowa Broadband Internet Study, for example, conducted in the first half of 2021,  illustrates many issues faced by rural households as well as the testing methodology issue. 


 In rural areas studied, some 27.5 percent of internet users had some form of non-cabled access--satellite, fixed wireless or mobile. 


Area-wide, 42 percent of download speed tests failed to reach 25 Mbps, the study says. The share of town/city respondents failing to meet the threshold was about 32 percent. In rural areas, the share of tests delivering less than 25 Mbps was about 64 percent. 


But the study also suggests a big methodological problem: speeds delivered by the internet service provider likely were not tested. Instead, respondents likely used their Wi-Fi connections. And that can mean understating the actual speed of the connection by as much as 10 times. 


To be sure, the same problem affects almost all consumer speed test data, as most such tests use Wi-Fi-connected devices. 


The point is that ISP-delivered speeds quite often are degraded by the performance of in-home Wi-Fi networks, older equipment or in-building obstructions. Actual speeds delivered by the internet service provider to a router are one matter. Actual speeds experienced by any Wi-Fi-connected device within the home are something else. 


source: CMIT Solutions 


One important caveat is that speed tests made by consumers using their Wi-Fi connections might not tell us much that is useful about internet access speeds. In other words, consumers who say they do not get 25 Mbps on their Wi-Fi-connected devices could well be on access networks that actually deliver speeds 10 times faster (250 Mbps) than reported. 


Of the respondents reporting they use a non-cabled connection (satellite, fixed wireless or mobile) for home broadband, 41 percent used a satellite provider. Some 30 percent used a fixed wireless provider and 29 percent reported using a mobile network. 


Only about 1.5 percent of survey respondents buying internet access reported they use a non-terrestrial provider for internet access. About 6.7 percent of survey respondents said “no internet service is available at their home.”


“The average download speed recorded was 80.7 Mbps, but the median download speed was just 34.0 Mbps,” the study reports. 


The median download speed for city/town respondents (101.6 Mbps) was three times higher than the median speed among rural respondents (34.0 Mbps), the study says. 
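
That gap between the 80.7 Mbps average and the 34.0 Mbps median is what a right-skewed distribution produces: a minority of very fast connections pulls the mean up while the median stays put. A small illustration, using made-up test results:

```python
# Illustration of why the mean can sit far above the median: a few very
# fast connections skew the average upward. Sample values are made up.
from statistics import mean, median

download_tests_mbps = [12, 18, 25, 30, 34, 40, 55, 90, 300, 500]

print(f"mean:   {mean(download_tests_mbps):.1f} Mbps")    # pulled up by the two fast outliers
print(f"median: {median(download_tests_mbps):.1f} Mbps")  # middle of the distribution
```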


Keep in mind, however, that the speed tests likely were conducted over a local Wi-Fi connection, the study says. That matters, as speed actually delivered to the premises quite often is significantly higher--as much as an order of magnitude--than the Wi-Fi speed experienced by any single device within the home or business. 


Complain all you want about map inaccuracies. The divergence from “reality” caused by that source of error arguably pales in comparison with testing error that measures only Wi-Fi device performance, not the actual speeds delivered to any location by an ISP.


In fact, virtually all user speed tests miss the actual delivered speed by such a huge margin that the reported speeds are likely wrong--and understated--by as much as an order of magnitude.


Thursday, December 16, 2021

Only 28% of U.K. Customers Able to Buy FTTH Broadband Do So

Ofcom’s latest research shows the continuing lag between broadband supply and demand. In other words, it is one thing to make FTTH or gigabit-per-second internet access available. It is something else to entice customers to buy such services.


Fiber-to-home facilities now are available to more than eight million U.K. homes, or 28 percent of dwelling units. 


Meanwhile, gigabit-capable broadband is available to 13.7 million homes, or 47 percent of total homes. But take-up of gigabit speed services is still low, with around seven percent of FTTH  customers buying gigabit services, says Ofcom. 


source: Ofcom 


Fully 96 percent  of U.K. premises have access to 30 Mbps broadband connections. About 69 percent of locations able to buy 30 Mbps actually buy it, says Ofcom. Also, Ofcom notes that “94 percent of U.K. premises have access to an MNO (mobile network operator) FWA (fixed wireless access) service.” 


Mobile operators claim average download speeds of 100 Mbps to 200 Mbps on their 5G fixed wireless services, Ofcom says. 


Satellite services add more potential coverage. “For example, Konnect states that its satellite covers around 75 percent of the U.K. and offers commercial services on a 24/7 basis direct to consumers with download speeds between 30 Mbps and 100 Mbps, with upload speeds averaging 3 Mbps.”


New low earth orbit satellite services such as Starlink also are coming. “Starlink indicates that users can currently expect to see 100 Mbps to 200 Mbps or greater download speeds and upload speeds of 10 Mbps to 20 Mbps, with latency of 20 milliseconds or lower in most locations,” says Ofcom. 


The point is that although we might think consumers would jump at the chance to buy either FTTH service or gigabit-per-second service, that is not the case. Only about 28 percent of households able to buy FTTH service do so, while just seven percent of households able to buy gigabit service do so. 


To a large extent, internet service providers are investing ahead of demand, rather than following consumer demand. That is one key reason why customer experience did not fall off a cliff when pandemic-related shutdowns happened. ISPs already had created excess supply. 


That is likely to be the trend virtually forever.


Wednesday, December 15, 2021

Deutsche Telekom Speeds FTTH, Cable Already Supplies Gigabit-Per-Second Service to Half of German Households

If Germany has about 40 million households, then Deutsche Telekom’s goal of connecting 10 million homes with fiber-to-home facilities by 2025 suggests coverage of about 25 percent of German homes with FTTH. 

source: IDATE   


Of course, physical media is one thing; bandwidth another. Vodafone's hybrid fiber coax network already covers at least 22 million German households with gigabit-per-second speeds, meaning more than half of German households can buy gigabit service. Cable gigabit coverage should reach 25 million homes soon.  


source: Viavi
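
The arithmetic behind those coverage claims is simple enough; a small sketch using the household figures assumed in this post:

```python
# Simple coverage arithmetic using the household figures assumed in this post.

german_households_m = 40.0       # assumed total German households, millions
dt_ftth_goal_m = 10.0            # Deutsche Telekom FTTH homes-connected goal by 2025
hfc_gigabit_homes_m = 22.0       # households already covered by gigabit cable (HFC)

print(f"FTTH goal covers about {dt_ftth_goal_m / german_households_m:.0%} of households")
print(f"Gigabit HFC covers about {hfc_gigabit_homes_m / german_households_m:.0%} of households")
```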

Tuesday, December 14, 2021

What Exactly is Web3?


Juan Benet, Founder & CEO of Protocol Labs, talks about Web3.

Monday, December 13, 2021

Do Network Effects Still Drive Connectivity Business Moats?

Theodore Vail and Bob Metcalfe are among the entrepreneurs whose thinking has implicitly or explicitly relied on the notion of network effect, the increase in value or utility that happens when more people use a product or service. 


source: Medium 


James Currier and NfX argue there are several distinct types of network effect, which they say drive 70 percent of the value of technology companies. That is reason enough to understand the principle. 


Essentially, network effects create business moats: barriers to entry by rivals. But some may argue that “network effects” are overrated sources of advantage. 


Are network effects explainable some other way? Can “economies of scale,” for example, explain the advantage? Or are the supposed benefits of network effects better attributed to something else entirely?


Perhaps “platform” is a way of explaining the success of a business model otherwise considered to be anchored in network effects. “Even among the companies that have come to define the sector--Facebook, Amazon, Apple, Netflix and Google--only Facebook’s franchise was primarily built on network effects,” some argue. 


Might  “viral” status, “branding,” “switching costs,” critical mass or other advantages explain defensive moats? It might not be so clear.  


When the network itself--the number of people one can reach on a particular communications network, for example--drives value, that is fairly clearly an example of a network effect.
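
Bob Metcalfe's framing is usually reduced to a formula: with n users, there are n(n-1)/2 possible pairwise connections, so potential value grows roughly with the square of the user base. A quick sketch of how fast that count grows:

```python
# Quick sketch of the usual Metcalfe-style framing: with n users there are
# n * (n - 1) / 2 possible pairwise connections, so potential "value" grows
# roughly with the square of the network's size.

def possible_connections(n_users: int) -> int:
    return n_users * (n_users - 1) // 2

for n in (2, 10, 100, 1_000, 10_000):
    print(f"{n:>6} users -> {possible_connections(n):>12,} possible connections")
```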


As an example of a business moat, Theodore Vail, the chairman of AT&T, said in 1908 that “no one has use for two telephone connections if he can reach all with whom he desires connection through one.” 


In the connectivity business in the internet era, one might actually question the network effect to a large extent, since, by definition, every customer or user can reach any other lawful user without regard to the particular details of access network supply. 


As important as network effect might have been for monopolist AT&T, it is unclear whether such advantage still is possible in the internet era. Scale arguably continues to matter. But network effects? Unclear. 


It Will be Hard to Measure AI Impact on Knowledge Worker "Productivity"

There are over 100 million knowledge workers in the United States, and more than 1.25 billion knowledge workers globally, according to one A...