Sunday, January 8, 2023

Marketing Claims Aside, How Much Capacity Do Home Broadband Users Really Need?

How much internet access speed or usage allowance does a customer really need? It actually is hard to say. U.S. data suggests there are clearer answers about what customers expect to pay, which is about $50 a month, on average, even if some studies report much higher figures.  


source: Broadband Now 


Average prices are lower or higher than $50 a month, depending on what adjustments are made, such as adjusting for currency differences or cost of living differences between markets. Adjustments of that sort tend to show rather uniform global pricing of internet access, also adjusting for “quality” (speed, for example) differences. 


Also, any assessment should be based on service plans people actually buy, not posted retail prices for any particular tier of service. Bundle pricing adds another layer of complication. 


Internet service provider business models always require matching supply with demand, balancing deployment speed against cost. What is needed to market effectively against competitors also matters. 


Time to market does matter. “It took us 22 years to pass 17 million households with fiber: 22 years,” says Hans Vestberg, Verizon CEO. “That’s how hard it is.”


“We basically had 30 million households covered with fixed wireless access in less than one year,” he also notes. 


So there is the trade-off: rapid deployment of a lower-cost network versus slower deployment of a higher-capacity network; wide coverage now versus higher capacity later; lower capital investment versus higher. 


As typically is the case, wireless platforms can be provisioned faster than cabled networks, at lower cost. The Verizon data illustrates that fact. 


Cost also matters, as no internet service provider--especially those in competitive markets--can afford to spend unlimited sums on its infrastructure. Verizon and T-Mobile tout fixed wireless access in large part because they can afford to supply it, and can supply it fast, at lower cost than building fiber-to-home. 


But marketing also matters: internet service providers do compete on the basis of speeds and feeds; on price; on perceptions of quality; and on terms, conditions and value. 


In that regard, even as home broadband speeds continue to rise, marketing claims are a battleground. Cable executives, for example, make light of fixed wireless as they claim it will not scale the way hybrid fiber coax and fiber-to-home can. FWA proponents argue that the platform does not have to scale as fast as FTTH or HFC to provide value for segments of the customer base. 


For example, even households that buy the fastest tiers of service rarely have a “need” for all that capacity. According to a survey by HighSpeedInternet.com, survey respondents say the “perfect plan” features a “610 Mbps fiber connection for $49 per month.”


In the third quarter of 2022, about 15 percent of U.S. households bought service operating at 1 Gbps, while 55 percent purchased service running from 200 Mbps to 400 Mbps. 


source: OpenVault 

 

The point is that, no matter what they tell researchers, U.S. home broadband customers do not seem especially eager to buy gigabit services at the moment, or services running at about half that speed. 


Speed demands will keep climbing, of course. But it does not appear, based on history, that most consumers will switch to buying the fastest tiers of service, or the lowest tiers of service, either. Historically, U.S. consumers have purchased internet access costing about $50 a month, with performance “good enough” to satisfy needs.


In fact, one might make the argument that it is consumption (gigabytes consumed) that matters more than speed. Average data consumption stood at about 500 gigabytes per month in the third quarter of 2022, according to OpenVault. But the percentage of power users consuming a terabyte or more was growing fast: up about 18 percent, year over year, and representing about 14 percent of customer accounts. 


So speed claims are about marketing, as much as customer requirements. “It's turned into really a marketing game,” adds Kyle Malady, Verizon Communications EVP. ISPs compete on claimed speeds, even if there is little evidence most households require gigabit speeds at the moment. 


Beyond a certain point of provisioned capacity per user and device in any household, additional speed brings little, if any, benefit. Consumption allowances do matter, especially for households that rely on streaming for video entertainment. 


Nobody can give a convincing answer as to why gigabit-per-second or multi-gigabit-per-second networks are required, beyond noting that multi-user, multi-device households need a certain amount of capacity if all are using the ISP connection at the same time. 


No single application, for any single user and device, requires a gigabit connection. So the real math is how much total bandwidth, at any moment, is needed to support the expected number of users, apps and devices in simultaneous use. 
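
That math can be sketched simply. The per-application figures and headroom factor below are illustrative assumptions, not measured requirements:

```python
# Rough household bandwidth sizing: sum the per-stream demands of everything
# expected to run at the same time, then add headroom. All figures are
# illustrative assumptions.
APP_MBPS = {
    "4k_stream": 25,     # one 4K video stream
    "hd_stream": 8,      # one HD video stream
    "video_call": 4,     # one two-way video call
    "web_browsing": 2,   # general browsing, per active user
}

def peak_demand_mbps(concurrent_apps, headroom=1.25):
    """Peak simultaneous demand in Mbps, with a safety margin."""
    base = sum(APP_MBPS[app] * count for app, count in concurrent_apps.items())
    return base * headroom

# A busy four-person household at peak evening use:
household = {"4k_stream": 2, "hd_stream": 1, "video_call": 1, "web_browsing": 4}
print(round(peak_demand_mbps(household)))  # 88 -- well under 100 Mbps
```

Even doubling those assumed figures leaves such a household far below gigabit speeds, which is the point.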


For a single user or two, using one or two devices each, simultaneously, it is hard to see how a gigabit or faster connection is required. 


Some version of that argument--that a customer “does not need” a particular capability--is at the heart of much ISP marketing. ISPs whose platforms have some speed limitations point out that the limits do not matter for some customers, or that the price paid for higher-speed services does not provide value commensurate with cost.


Friday, January 6, 2023

Communications Regulation Has Obvious Implications for Political Freedom

Communications regulators generally argue--essentially--that “if it walks like a duck, and talks like a duck, it is a duck” when applying policy frameworks to different kinds of communication networks (general purpose public networks; broadcast TV and radio; satellite; cable TV and data networks, for example). 


In other respects, irrespective of technology, protocol or architecture, public networks are regulated one way; other networks often in different ways. Private local area networks such as Wi-Fi face few, if any, restrictions beyond power emission limits. Wide area networks likewise have few, if any, limits.


Data networks are lightly regulated, if regulated at all. The internet seems to exist in a different space, as governments retain the ability to block access, apps or services if they choose. 


In that regard, the internet is somewhat less regulated than public communications for phones, PCs and other devices, but more regulated than enterprise WANs or other local area private networks. 


Advocates for content freedom and governments often stand apart when it comes to internet regulation. But most of the movement over the last 20 years has been in the direction of “more regulation.” VoIP services increasingly have become regulated just like older forms of voice using the public switched telephone network, for example. 


Private actors often are free to impose their own restrictions as well. And the direction there also has been towards less freedom. 


So “technology is not destiny.” Regulators are free to make their decisions any way they choose. 


Still, all networks now are computer networks, even if the manner of use can differ. Broadcast TV, broadcast radio, mobile networks, satellite networks, low-power wide area networks and emergency networks, for example, often combine the right to use spectrum with a purpose-built network. 


Quite often, the applications supported by the network are vertically integrated, and the licensee controls who and what gets access to the network. In other cases, use of the network requires authorization (you are a customer of a mobile or satellite network and pay for access, for example). 


The internet--and internet-based applications--do not operate that way. Each application can set its own access rules for customers and users. 


Oddly enough, highly regulated networks such as the "phone network" and other public networks have historically not interfered with content, and are, in that sense, about as permissive as data networks or other media such as newspapers and magazines, in some countries.


Content regulation has at times been more stringent, at times less stringent, for broadcast TV, radio and cable TV networks. 


Again, oddly enough, it now appears that private actors are suppressing freedom as much as, if not more than, governments. These days, content restrictions are imposed less often by governments than by private firms. 


The point is that regulators have discretion and choice. So do private actors who use communications networks. They can make decisions that promote more or less freedom, in almost any sphere where data networks operate. Perhaps it is worth pointing out that societies and people might gain from more freedom, rather than less.


Taxing Hyperscalers to Fund ISP Networks has Losers, Including End Users

For every public policy decision, there are winners and losers. That is no different for proposals to tax a few hyperscalers to support home broadband networks. ISPs would gain; app providers would lose. Ultimately, so would users of internet-delivered apps and services.


Communications policy almost always is based on precedent and prior conceptions. All this is relevant when thinking about how public networks are funded, especially now that regulators are looking at unprecedented funding mechanisms, such as levying fees on third parties that are not “customers” of connectivity providers. 


It’s a bit like taxing appliance makers whose products create demand for electricity. Today, the electrical networks are common carriers, all the devices are private and the cost of using electricity is borne by the actual end user customers. 


But some regulators want to essentially tax device manufacturers for the amount of electricity use they generate. 


There are simpler solutions, such as charging customers based on their usage. That would have the possible added benefit of not disturbing the data communications regulatory framework. 


And that matters, at least for observers who care about freedom of expression. Data networks have always separated the movement of data from the content of data. Devices and software do not require the permission of the data infra owner to traverse the network, once access rights are paid for. 


The important point is that all networks now are computer networks. 


To be clear, some will argue that changes in how networks are built (architecture, media, protocols) do not matter. It is the function that matters, not the media. If a network is used for broadcast TV or radio, that is the crucial distinction, not whether broadcasting uses analog or digital modulation; particular protocols or radios. 


If a network is a public communications carrier, the types of switches, routers, cables, protocols and software used to operate that business do not matter. What is regulated is the function. 


The function of a public network is to allow paying customers to communicate with each other. Each account is an active node on the network, and pays to become a node (a customer and user of the network). 


Service providers are allowed to set policies that include usage volume and payment for other features. In principle, a connectivity provider may charge some customers more than others based on usage. 


But one element is quite different in the internet era. Connectivity providers have customers, but generally do not own the applications that customers use their networks to interact with. There is no business relationship between the access provider and the other application providers in their role as app providers. Every app provider is itself a customer of one local access provider or many access providers. 


Operators of different domains can charge each other for use of each other’s networks; that is where the inter-carrier settlements function comes into play. And volume does matter, in that regard. 


The point is that it is the networks that settle up on any discontinuities in traffic exchange. Arbitrage always is possible whenever traffic flows are unequal, and when rules are written in ways that create an arbitrage opportunity. The classic example is a call center, which features lots of inbound traffic, compared to outbound. 


So some might liken video streaming services to a form of arbitrage, in that video streaming creates highly unequal traffic flows: little outgoing traffic and lots of incoming, for the consumer of streaming content. 


But that also depends on where the servers delivering the content are located. In principle, traffic flows between connectivity domains might well balance out if streaming customers and server sites are distributed evenly. 
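
A toy model illustrates the point; the shares and traffic volumes below are hypothetical:

```python
# Toy model of cross-domain streaming traffic between two domains, A and B.
# Viewers pull content from servers in proportion to where servers sit, so
# flows balance when servers are distributed evenly. Figures are hypothetical.
def cross_flows(server_share_a, viewer_share_a, total_tb=1000):
    """Return (A-to-B, B-to-A) traffic in terabytes."""
    a_to_b = server_share_a * (1 - viewer_share_a) * total_tb
    b_to_a = (1 - server_share_a) * viewer_share_a * total_tb
    return a_to_b, b_to_a

# Servers concentrated in domain A: flows are lopsided.
print(cross_flows(server_share_a=0.75, viewer_share_a=0.5))  # (375.0, 125.0)

# Servers spread evenly: flows balance, and there is nothing to settle.
print(cross_flows(server_share_a=0.5, viewer_share_a=0.5))   # (250.0, 250.0)
```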


Historically, big networks and small networks also have different dynamics. When the media type is voice, for example, bigger networks will get more inbound traffic from smaller networks, while smaller networks should generate more outbound traffic to the larger networks. 


For streaming and other content, traffic flows on public networks might largely balance, since the biggest content firms build and operate their own private networks to handle the large amount of traffic within any single data center and between data centers. Actual distribution to retail customers (home broadband users of streaming video, for example) likewise is conditioned by the existence of server farms entirely located within a single domain (servers and users are all on one service provider’s network). 


The point is that inter-domain traffic flows, and any compensation that different domains might “owe” each other, are complicated matters, and arguably should apply only to domains and their traffic exchange. 


In other words, one might argue that traditional inter-carrier settlements, traffic peering and transit are sufficient to accommodate unequal traffic flows between the domains. 


Put another way, the imbalance internet service providers point to--a few hyperscale app providers sending much more traffic than they receive--“should” or “could” be settled between the access provider domains, as always has been done. 


If the argument goes beyond that, into notions of broadband cost recovery, then we arguably are dealing with something different. Going beyond inter-carrier settlements, such notions add a new idea: that traffic sources (content providers and streaming services) should pay for traffic demand generated by their traffic sinks (users and subscribers of streaming services).  


This is a new concept that conceptually is not required. If ISPs claim they cannot afford to build and operate their own access networks, they are free to change charging mechanisms for their own customers. Customers who use more can pay more. It’s simpler, arguably more fair and does not require new layers of business arrangements that conflict with the “permissionless” model.


Data networks (wide area and local area) all are essentially considered private, even when using some public network resources. Data networks using public network resources pay whatever the prevailing tariffs are, and that is that. Entities using data networks do not contribute, beyond that, to the building and operating of the public underlying networks. 


Public transport and access providers might argue that they cannot raise prices, or if they did, would simply drive customers to build their own private networks for WAN transport.


That obviously would not happen often in the access function. Local networks are expensive. But there already exists a mechanism for networks to deal with unequal traffic flows between access domains. 


So there is a clash here between private data networking and public communications models. What is new is that, in the past, the applications supported by the network were entirely owned by the network services provider. 


Now, the assumption is that almost none of the applications used by any ISP’s customers are owned by the ISP itself. So the business model has to be built on an ISP’s own data access customer payments. Application revenue largely does not factor into the business model. 


But that is the way private computer networks work. Cost is incurred to create the network. Revenue might be created when public network access and transport is required. But all those payments are made by an ISP’s local customers, even when the ISP bundles in access to other ISP domains required to construct the private network. 


“Permissionless” development and operation now is foundational for software design and computing networks. All networks now are computing networks, and all now rely on functional layers. 


The whole design allows changes and innovation at each functional layer without disturbing all the functions of the other layers. What we sometimes forget is that below the physical layer is layer 0, the networks of cables that create the physical pathways to carry data. 


Of course, any connectivity network must operate at several layers: physical, data link and network. But the “transport” layer functions tend to be embedded in edge devices. 


source: Comparitech 


To be sure, connectivity networks--especially access networks that sell home broadband and other connectivity services to businesses--must operate at many layers, including the modems used to support broadband access. 


So some might add, in addition to a “layer zero” network of cables, a layer eight for software and applications that run on networks. 

source: NetworkWalks 


Local area networks typically are less complex, but still use the layered architecture. The difference is that LANs (Wi-Fi, Ethernet or other) primarily rely on layers one to three of the model. 


source: Electricalfundablog 


“Permissionless” access and transport have sparked enormous innovation. That should remain the case. Additional taxes, which means higher costs, will not help that process. Other networks charge for usage. Public IP networks could do the same. Settlement policies between access domains already exist. And, to be clear, app domains can create facilities that do not cross access domains, if they choose. 


So ISPs can charge for usage if they choose. Unlimited usage could be priced higher; lower amounts of usage could still be sold in tiers. Problem essentially solved.
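
A usage-tier schedule of that sort is trivial to express; the tier boundaries and prices below are hypothetical:

```python
# Hypothetical usage-based billing tiers: pay for the bucket you fall into,
# with unlimited usage priced highest.
TIERS = [
    (500, 50),           # up to 500 GB per month: $50
    (1000, 70),          # up to 1 TB: $70
    (float("inf"), 90),  # unlimited: $90
]

def monthly_price(gb_used):
    """Return the monthly price for a given usage level."""
    for cap, price in TIERS:
        if gb_used <= cap:
            return price

print(monthly_price(400))   # 50
print(monthly_price(800))   # 70
print(monthly_price(2500))  # 90
```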


Wednesday, January 4, 2023

U.S. Home Broadband Actually is Neither Slow Nor Expensive

Critics of U.S. home broadband often claim that service is slow and expensive. Both opinions can be challenged. In fact, U.S. median home broadband speeds were among the fastest in the world in 2021 and climbed in 2022. 

source: Ookla 


“Price” sometimes is a bit more subtle. Though prices have declined in every speed category, some might still argue “prices are too high.”


For example, an analysis by Broadband Now shows that U.S. home broadband prices have fallen since 2016. 


Broadband Now says the average price for internet access in each speed bucket fell between the first quarter of 2016 and the fourth quarter of 2021:

  • The average price decreased by $8.80 or 14% for 25 – 99 Mbps.

  • The average price decreased by $32.35 or 33% for 100 – 199 Mbps.

  • The average price decreased by $34.39 or 35% for 200 – 499 Mbps.

  • The average price decreased by $59.22 or 42% for 500+ Mbps.
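
Since both the dollar declines and the percentage declines are given, the implied starting and ending prices can be backed out; a quick check (tier labels taken from the figures above):

```python
# Back out the implied Q1 2016 starting prices from the Broadband Now figures:
# starting price = dollar decline / percentage decline.
declines = {
    "25-99 Mbps":   (8.80, 0.14),
    "100-199 Mbps": (32.35, 0.33),
    "200-499 Mbps": (34.39, 0.35),
    "500+ Mbps":    (59.22, 0.42),
}

for tier, (dollars, pct) in declines.items():
    start = dollars / pct
    print(f"{tier}: roughly ${start:.0f} falling to ${start - dollars:.0f}")
```

The implied 2016 starting prices run from roughly $63 for the slowest bucket to about $141 for the 500+ Mbps bucket.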


The analysis is subtle because if there is a movement by customers from lower speeds to higher speeds, which clearly is happening, then “prices” might climb, though not for the same products. Customers are choosing to buy higher-priced, higher-performance products, instead of the lower-priced, lower-performance products they used to buy. 


Other studies show the same trend.  


Also, because of inflation, price levels rise over time. So virtually any product can be accused of “costing more” in 2022 than it cost in 1996. 


Some may intuitively feel this cannot be the full story when it comes to digital products, which keep getting better while prices either stay the same or decline. Such hedonic change applies to home broadband. 


Hedonic quality adjustment is a method used by economists to adjust prices whenever the characteristics of the products included in the consumer price index change because of innovation. Hedonic quality adjustment also is used when older products are improved and become new products. 
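
A stylized example of the idea (the numbers below are illustrative, not BLS methodology):

```python
# Stylized hedonic adjustment: divide the observed price by a quality index,
# so a faster product at the same sticker price registers as a price decline.
def quality_adjusted_price(price, speed_mbps, base_speed_mbps=100):
    """Price per unit of quality, with quality proxied by speed."""
    quality_index = speed_mbps / base_speed_mbps
    return price / quality_index

# Same $60 sticker price, but speed quadruples over the period:
print(quality_adjusted_price(60, 100))  # 60.0 in the base year
print(quality_adjusted_price(60, 400))  # 15.0 quality-adjusted
```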


That often has been the case for computing products, televisions, consumer electronics and--dare we note--broadband internet access services. 


Hedonically adjusted price indices for broadband internet access in the U.S. market then look like this:

Graph of PCU5173115173116


source: Bureau of Labor Statistics 

 

Quality improvements also are seen globally. 


Adjusting for currency and living cost differentials, however, broadband access prices globally are remarkably uniform. 


The 2019 average price of a broadband internet access connection--globally--was $72.92, down $0.12 from 2017 levels, according to comparison site Cable. Other comparisons say the average global price for a fixed connection is $67 a month. 


Looking at 95 countries globally with internet access speeds of at least 60 Mbps, U.S. prices were $62.74 a month, with the highest price being $100.42 in the United Arab Emirates and the lowest being $4.88 in Ukraine. 


According to comparethemarket.com, the United States is not the most affordable of 50 countries analyzed. On the other hand, the United States ranks fifth among 50 for downstream speeds. 


Another study by Deutsche Bank, looking at cities in a number of countries, with a modest 8 Mbps rate, found prices ranging from $50 to $52 a month. That still places major U.S. cities such as New York, San Francisco and Boston at the top of the price range for cities studied, but the figures do not appear to be adjusted for purchasing power parity, which attempts to account for how much a particular unit of currency buys in each country. 


The other normalization technique, used by the International Telecommunication Union, is to compare prices to gross national income per person. There are methodological issues in doing so, one can argue. Gross national income is not household income, and per-capita measures might not always be the best way to compare prices, income or other metrics. But at a high level, measuring prices as a percentage of income provides some relative measure of affordability. 
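
At a high level the calculation is simple; the price and income figures below are illustrative, not ITU data:

```python
# Price-to-income affordability: annual broadband spend as a share of
# gross national income per person. Figures are illustrative.
def affordability_pct(monthly_price, annual_gni_per_capita):
    """Annual spend as a percentage of per-capita GNI."""
    return 100 * (monthly_price * 12) / annual_gni_per_capita

# A $64-a-month plan against a $70,000 per-capita GNI:
print(round(affordability_pct(64, 70_000), 2))  # 1.1
```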


Looking at internet access prices using the PPP method, developed nation prices are around $35 to $40 a month. In absolute terms, developed nation prices are less than $30 a month. 


An analysis by NetCredit shows U.S. consumers spending about 0.16 percent of income on internet access, “making it the most affordable broadband in North America,” says NetCredit.


Methodology always matters. The average U.S. home broadband service costs about $64 a month. In fact, U.S. home broadband inflation-adjusted costs have declined since the mid-1990s, according to an analysis of U.S. Consumer Price Index data. 


U.S. home broadband is neither “slow” nor “expensive.”


Tuesday, January 3, 2023

How Many Video Streamers Have Sustainable Models?

Nobody can yet be sure how the TV business will reshape itself across broadcast, linear video and streaming, except to note that streaming viewership is growing, while broadcast and linear subscription TV are shrinking.


Eventually, viewing audience share matters. So fragmentation also matters. So does consumer preference. Each medium, long term, has to find its greatest value and its sustainable model. As much as viewership of streaming services has grown, the economics of the business have remained challenging for most providers.


So a consolidation of video streaming services always was inevitable. The economics of a direct-to-consumer business, as attractive as the idea always has seemed for programmers, simply do not scale for most content providers. Most networks do not have the audiences to operate DTC. Limited audience means limited advertising upside, and limited subscription revenue as well. 


At some point, most networks and content providers are going to have to become content suppliers to one or more streaming services, and get out of DTC. The advantage of the cable TV model was that smaller networks did not have to worry about distribution costs. 


The cable operators handled that. Moving to DTC means building a new distribution network, and investing in marketing as well. At least so far, that has proven daunting for most networks. 


In the linear model, smaller networks had a way to gain carriage (shelf space). Sometimes payments to operators worked. In other cases, “must-have” lead programming could be signed by a distributor only when lesser-viewed networks operated by the same content owner also were carried. 


As streaming providers rethink their business models, reducing investments in original content and adding advertising models, most content owners might eventually conclude that DTC simply does not work. 


Streaming is gaining viewing share, of course. The issue is how many entities will be able to survive long term, and what various competitors will have to do to sustain themselves. Up to this point, sports have been the enduring value of subscription video as pre-recorded content viewing has moved to streaming services. 


Even that could change as more sports rights are acquired by streamers. So some believe broadcast television's future is as a home for unscripted series. Major sports events and, for some, news also should contribute. 


Most consumers are eventually going to look at their actual viewing habits and conclude that the whole cable bundle comes down to a handful of networks or channels, with most of the pre-recorded content being watched on streaming services. 


In my own experience, live news and sports are the only forms of content being viewed on a linear service. Others might report reality TV as the key content type being watched. But that obviously shapes the value proposition of linear video, broadcast TV and streaming services, as well as the magnitude of possible monetization models.


Is Connectivity an Intangible Product?

Are connectivity products tangible or intangible? Are they products a customer can see and touch or are they services whose quality and experiences must be purchased first, before they can be evaluated? 


The answers matter, as intangible products require proxies for value, since the customer actually has no way of examining or testing the “product” before buying it. That is why brand is so important, or the quality of customer service. 

source: Simplicable 


Customers must use such proxies for value as a way of evaluating potential suppliers, and that arguably goes for connectivity service providers as well.


Some might argue connectivity is "tangible." Well, routes are quantifiable, so perhaps tangible in some sense. Addresses are tangible. Physical locations are tangible. Perhaps quoted capacity is tangible. Compute cycles are quantifiable.


But perhaps those attributes are similar to aircraft, crews and landing rights when people buy "air travel." Buyers might need compute cycles, access to certain buildings and locations and capacity at certain levels. But that is the equivalent of aircraft, departure frequencies, crews and landing rights.


No customer can really judge, before buying, the "quality" provided by one connectivity provider, compared to another, or one computing services host, compared to another. Computing and connectivity remain "intangible" products.


What Speed Tests Might, and Might Not, Indicate

What does this plot of speed tests conducted by U.K. consumers tell us? In principle, it only tells us that there are fewer tests on copper connections; about the same number of tests on hybrid fiber coax networks; declining tests on fiber-to-curb networks; while fiber-to-home customers conduct more tests.


Presumably, the number of tests is related to the number of accounts. But the number of tests also could be related to the number of trouble tickets or network issues. Most of us are prompted to test only when there is some obvious connectivity issue. 


But it also is possible that users on some of the latest networks (FTTH) are testing for other reasons, such as verifying that speeds are really faster than on the older networks. 

source: Think Broadband 


Also, since most such tests appear to be conducted from Wi-Fi-connected devices, the number of tests likely also reflects indoor Wi-Fi issues users are having, rather than problems with the access network connection itself. 


Actual internet-service-provider-delivered speed generally is higher than what a Wi-Fi test shows; the measured result also can be dragged lower if multiple other apps or users are active during the test period.


Testing algorithms also vary, which is why the same device, on the same network, yields different test results when different testing services are used. All this data appears to be from the ThinkBroadband test, so results should be comparable.

The point is that historical data on "speed" is shaped by the testing methodology: users mostly test on Wi-Fi, which almost always is slower than the ISP's "to the home" speed.
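
One way to see how a Wi-Fi test understates the access-line speed: take the best of several samples and divide by an assumed Wi-Fi efficiency factor. Both the sample values and the 0.7 efficiency figure below are illustrative assumptions:

```python
# Estimate the access-line speed from repeated Wi-Fi speed tests, assuming
# Wi-Fi delivers only a fraction of the line rate. Figures are illustrative.
def estimated_line_speed(wifi_samples_mbps, wifi_efficiency=0.7):
    """Best observed Wi-Fi result divided by the assumed Wi-Fi efficiency."""
    return max(wifi_samples_mbps) / wifi_efficiency

samples = [212, 340, 295, 181]  # repeated tests from one laptop over Wi-Fi
print(round(estimated_line_speed(samples)))  # 486
```

Under those assumptions, a best Wi-Fi result of about 340 Mbps could be consistent with a roughly 500 Mbps access tier.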

Directv-Dish Merger Fails

Directv’s termination of its deal to merge with EchoStar, apparently because EchoStar bondholders did not approve, means EchoStar continue...