Sunday, December 15, 2019

5G Probably Benefits Mobile Operators More than Consumers, at First

Some complain that 5G is not being introduced fast enough in the U.S. market. But phased 5G service coverage is not the problem, I believe. Building one new continent-sized network always takes years. Building four simultaneously is harder. That is true of all next-generation mobile networks. It has been a decade since people lived through such a change--and some never have--so we tend to forget that. 

Still, compared to the introduction of 4G, phased deployment might not matter so much, for reasons of end user experience, service provider economics and ecosystem dynamics. We tend to forget that major device suppliers, such as Apple, historically lag the networks. 

The first Apple iPhone launched using 2G. A 3G iPhone was not launched until 2008, the year 4G launched in the U.S. market. A 4G iPhone was not available until late 2012, about four years after 4G first appeared. It appears the iPhone 12, expected in 2020, might feature both 4G and 5G models.  

The point is that the most-rapid-possible adoption is not always meaningful; slower adoption is not necessarily harmful. 

That might be especially true in the 5G era, when many so-called 5G impacts actually are enabled by edge computing, internet of things, artificial intelligence or other correlated developments, not 5G itself. 

Are there network effects? Yes. Whenever an app, a process, a service or a product requires scale to provide value, it is said to be an example of network effects. Phone service, use of facsimile machines, social networks and online marketplaces of all sorts provide examples. 

Some point out that 5G is no different: 5G phones are most valuable when there is 5G service available, and 5G networks make 5G phone purchases more valuable. It’s a bit of a nuance, but what is not true is that 5G phones make mobile networks valuable, or that 5G networks make devices valuable. 

A 5G device can use a 4G network, and vice versa. So with the exception of any device features specifically related to 5G (access, mostly), all the other device value is obtainable, even when a user decides not to use--or cannot use--a 5G network. 

The point is that the speed of the 5G rollout might not matter to consumers so much. It will matter for mobile network operators, who can gain subscribers or lose them, based on the completeness and availability of their 5G offers. 

5G also will matter for mobile service providers with limited spectrum resources, as 5G will allow customers to shift usage to a new network featuring lots of bandwidth (millimeter wave, especially) and little contention, at the moment. As each user switches off 4G and on to 5G, experience for all the remaining 4G users improves, as there is less network loading. 

5G ubiquity also matters for developers of 5G use cases, apps and services, since for them scale really does matter. And 5G device manufacturers also benefit from greater demand, as customers generally can use their devices on the 5G networks. 

There, the actual performance advantages arguably are less important than the customer knowing they can use a network they are paying for (in terms of new device purchase and possibly a new service plan). 

Faster networks are “better” than slower networks, generally speaking. But that is conditional. If the user cannot benefit from the additional speed, does faster speed really matter? 

It matters most for service providers, who have to keep increasing capacity, but must realistically expect to do so for roughly the same prices as at present, best case. Lower cost per bit, in other words, is the key value, but for service providers, not consumers, directly. 

Thursday, December 12, 2019

AT&T Expects to Reach 50% Internet Access Market Share in FTTH Areas

AT&T believes it will eventually get take rates for its fiber-to-home service, across 14.5 million households, “to a 50 percent mark over the three-year period” from activation, said Jeffery McElfresh, AT&T Communications CEO. 

AT&T bases that forecast on past experience. “As you look at the fiber that we built out in the ground in 2016, at the three-year mark, we roughly approach about a 50 percent share gain in that territory,” said McElfresh.

Adoption at that level would be historically high for an incumbent telco; Verizon in the past has gotten FiOS adoption in the 40-percent range after about three years. 

Take rates for FTTH services globally vary quite a lot, and may be an artifact of network coverage. U.S. FTTH take rates are low, but mostly because perhaps 66 percent of fixed network accounts are on cable TV hybrid fiber coax plant. Telcos collectively have only about 33 percent market share. 

South Korea seems an odd case: FTTH take rates there appear to be only about 10 percent, though network coverage is about 99 percent. 

In Japan and New Zealand, take rates have reached the mid-40-percent range, and network coverage might be about 90 percent. But in France and the United Kingdom, FTTH adoption is in the low-20-percent range. 

That is why AT&T’s expectation that its FTTH adoption will reach 50 percent is important. It would reverse the traditional market share of a telco in the fixed network internet access market from 33 percent of customers to perhaps half.

Will Ruinous Competition Return?

It has been some time since many contestants in telecom had to worry about the effects of ruinous levels of competition, which were obvious and widespread in parts of the telecom market very early in the 21st century.

Sometimes markets endure what might be termed excessive or ruinous competition, where no company in a sector is profitable.

That arguably is the case for India, where industry revenues dropped seven percent last year, following years of such results, leading regulators to consider instituting minimum prices as a way of boosting profits. 

Such situations are not new, as early developments in the railroad industry suggest. In fact, sometimes competitors might price in predatory fashion, deliberately selling below cost in an effort to drive other competitors out of business. That sort of behavior often is prohibited by law, and can trigger antitrust action. 

Even if technology has changed network costs and economics, allowing sustained competition between firms of equal size, the unanswered question for competitive markets has been the possible outcomes of ruinous levels of competition. 

Stable market structures often have market shares that are quite unequal, which prevents firms from launching ruinous pricing attacks. 

A ratio of 2:1 in market share between any two competitors seems to be the equilibrium point at which it is neither practical nor advantageous for either competitor to increase or decrease share. 

A market with three roughly equally-situated contestants means there always will be a temptation to launch disruptive attacks, especially if one of the three has such a strategy already. 

Some studies suggest a stable market of three firms features a market share pattern of approximately 4:2:1, where each contestant has double the market share of the following contestant. 

The hypothetical stable market structure is one where market shares are unequal enough, and the leader financially strong enough, to weather any disruptive attack by the number two or number three providers. That oligopolistic structure is stable, yet arguably provides competitive benefits. 

In a classic oligopolistic market, one might expect to see an “ideal” (normative) structure something like:

Oligopoly Market Share of Sales
Number one: 41%
Number two: 31%
Number three: 16%

As a theoretical rule, one might argue, an oligopolistic market with three leading providers will tend to be stable when shares follow a general pattern of about 40 percent, 30 percent and 20 percent held by the three contestants.
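For illustration, the share patterns above can be checked against the 2:1 adjacency heuristic with a few lines of arithmetic. The patterns themselves come from the text; the quick-check code is just a sketch:

```python
# Quick check of the share patterns discussed above against the
# 2:1 adjacency heuristic (each firm holding roughly twice the
# share of the next-largest competitor).

def adjacent_ratios(shares):
    """Ratio of each share to the next-largest share, largest first."""
    return [round(shares[i] / shares[i + 1], 2) for i in range(len(shares) - 1)]

patterns = {
    "4:2:1 rule": [4, 2, 1],
    "normative table (41/31/16)": [41, 31, 16],
    "40/30/20 heuristic": [40, 30, 20],
}

for name, shares in patterns.items():
    print(name, "->", adjacent_ratios(shares))
```

Only the strict 4:2:1 pattern yields exact 2:1 ratios ([2.0, 2.0]); the 41/31/16 and 40/30/20 patterns are looser, which is consistent with treating 2:1 as an upper-bound equilibrium rather than a precise prediction.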

Another unanswered question is the minimum possible competitive market structure, where consumer benefits still are obtained but firms also can sustain themselves. Regulators have grappled with the answer largely in terms of the minimum number of viable competitors in mobile markets, the widespread thinking being that only a single facilities-based fixed network operator is possible in many countries. 

In a minority of countries, it has seemed possible for at least two fixed network suppliers to operate at scale, on a sustainable basis. 

The point is that sustainable competition in the facilities-based parts of the telecom business is likely to take an oligopolistic shape over the long term, and that is likely the best outcome: sustainable competition and consumer benefits, without ruinous levels of competition.

4K and 5G Face One Similar Problem

4K and 5G face one similar problem: performance advantages are not always able to enable better experience that clearly is perceivable by end users and customers. It is not a new problem. 

“Speeds and feeds,” originally a measure of machine tool performance, long has been used in the computing industry as well, touting technical features and performance of processors or networks. 

Marketing based on speeds and feeds fell out of favor, however, in part because every supplier was using the same systems and chips, negating the value of such claims. Also, at some point, the rate of improvement slowed, and it also became harder to show how the better performance was reflected in actual experience. 

We are likely to see something similar when it comes to the ability of apps, devices or networks to support very-high-resolution video such as 4K. Likewise, much-faster mobile and fixed networks face the same problem: the technological advances do not necessarily lead to experience advantages. 

4K video on small screens has been characterized as offering visual and experience differences somewhere between indistinguishable and non-existent. The reason is the visual acuity of the human eye. Beyond some point, at some distance from any screen, the eye cannot resolve the greater granularity of picture elements. In other words, you cannot see the difference. 

Even for younger adults (20s and 30s) with better eyesight than older people, the difference between 2K resolution and 4K on a phone is barely perceptible, if perceivable at all, one study found. 

On huge screens, relatively close to where an observer is located, the greater resolution does make a difference. Conversely, on small screens or beyond a certain distance, the eye cannot distinguish between 4K and 1080 HDTV. 
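That acuity argument can be sketched numerically. A common rule of thumb is that 20/20 vision resolves about one arcminute, or roughly 60 pixels per degree of visual angle; the 7 cm screen width and 30 cm viewing distance below are illustrative assumptions for a phone, not measurements from any study:

```python
import math

# Sketch of the visual-acuity point: pixels per degree of visual angle
# for a flat screen viewed head-on, compared with the ~60 pixels-per-degree
# limit often cited for 20/20 vision. Screen width (7 cm) and viewing
# distance (30 cm) are illustrative assumptions, not measured values.

ACUITY_LIMIT_PPD = 60  # ~1 arcminute per pixel

def pixels_per_degree(px_across, screen_width_m, distance_m):
    """Pixels per degree subtended by a screen of given width at a distance."""
    angle_deg = 2 * math.degrees(math.atan(screen_width_m / (2 * distance_m)))
    return px_across / angle_deg

for label, px in [("1080p", 1080), ("4K", 2160)]:
    ppd = pixels_per_degree(px, 0.07, 0.30)
    verdict = "beyond" if ppd > ACUITY_LIMIT_PPD else "within"
    print(f"{label}: {ppd:.0f} pixels/degree ({verdict} the acuity limit)")
```

Under these assumptions even 1080p already exceeds the roughly 60 pixels-per-degree limit, which is the quantitative version of the claim that 4K is imperceptible on small screens.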

Also, battery life and processor overhead are reasons--aside from visual clarity--why 4K on a smartphone might arguably be worse than 1080p resolution. If 4K requires more energy, and right now it does, then battery consumption rate is a negative.

Granted, it is possible, perhaps even likely, that 4K will prove an advantage for virtual reality or augmented reality applications. Eyes are very close to screens on VR headsets. That likely will be true for 360-degree VR as well. 

But in most other cases, smartphones with 4K displays will not yield an advantage humans can see. 

Something like that also will happen with 5G. People sometimes tout the advantage of 5G for video streaming. But streaming services such as Netflix require, by some estimates, only about 5 Mbps to 8 Mbps. 

True, Netflix recommends speeds of 25 Mbps for 4K content, so in some cases, 5G might well provide a better experience than 4G. But Amazon Prime says 15 Mbps for 4K content is sufficient. 

And if viewers really cannot tell the difference between 1080 resolution and 4K, then 8 Mbps is quite sufficient for viewing streamed content at high-definition quality. In fact, usage allowances are far more important than bandwidth, for most purposes. 
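A back-of-the-envelope calculation shows why the allowance, not the speed, is the binding constraint. The bitrates are those cited above; the three-hours-per-day viewing figure is an illustrative assumption:

```python
# Monthly data consumed by streaming at the bitrates cited above.
# The three-hours-per-day viewing figure is an illustrative assumption.

def monthly_gb(mbps, hours_per_day, days=30):
    # Mbps * seconds -> megabits; /8 -> megabytes; /1000 -> gigabytes
    return mbps * 3600 * hours_per_day * days / 8 / 1000

for label, mbps in [("HD, 5 Mbps", 5), ("HD, 8 Mbps", 8), ("4K, 25 Mbps", 25)]:
    print(f"{label}: {monthly_gb(mbps, 3):.0f} GB/month")
```

At 25 Mbps, three hours of 4K a day works out to roughly 1 TB a month, enough to exhaust a typical usage cap even though almost any modern connection can sustain the bitrate itself.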


Some internet service providers also point out that a connection running at 25 Mbps downstream and 12.5 Mbps upstream can, for upstream-intensive applications, outperform a connection offering 100 Mbps downstream and 10 Mbps upstream. 

The larger point is that some technological innovations, including 4K video and 5G networks, might not have as much impact on user experience as one might suppose, although some future use cases might well be different.

One View on Why Video Resolution Beyond 4K is Useless

Wednesday, December 11, 2019

2% of U.S. Households Buy Gigabit Internet Access

The overall percentage of U.S. fixed network internet access subscribers buying gigabit-speed service increased 25 percent year over year, to 2.5 percent in the third quarter of 2019, according to OpenVault. In 2018 about two percent of U.S. households bought gigabit internet access service. 

Other estimates peg gigabit take rates at about six percent. 

About 51 percent of U.S. fixed network internet access customers now buy service at 100 Mbps or higher. 

Some 35 percent buy service rated at 100 Mbps to 150 Mbps. About 27 percent buy service running between 50 Mbps and 75 Mbps. 

The percentage of U.S. homes able to buy gigabit service is at least 80 percent, as that is the percentage cable TV alone reaches, according to the NCTA. 

Average U.S. Fixed Network Internet Consumption Now 275 GB Per Month

In the third quarter of 2019 the average household--including both customers on unlimited and fixed usage plans--consumed about 275 gigabytes each month, up about 21 percent year over year from the third quarter of 2018. 

The weighted average data usage includes subscribers on both flat rate billing (FRB) and usage-based billing (UBB). Not surprisingly, customers on unlimited flat-rate accounts consumed more data than customers on usage-based plans. 

There are obvious implications for internet service providers, namely the usage growth rate of about 20 percent a year. That implies a doubling of consumption in less than four years. 
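That doubling claim follows directly from compound growth: solving (1.2)^t = 2 for t gives the doubling time.

```python
import math

# Doubling time implied by roughly 20 percent annual usage growth:
# solve (1 + 0.20) ** t == 2 for t.

annual_growth = 0.20
doubling_years = math.log(2) / math.log(1 + annual_growth)
print(f"{doubling_years:.1f} years")  # roughly 3.8 years
```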

That usage profile also suggests the usage allowances that suppliers of fixed wireless services must match. 

In comparison, U.S. mobile users might consume between 4 GB per month and 6 GB per month on the mobile network. 

Will Generative AI Follow Development Path of the Internet?

In many ways, the development of the internet provides a model for understanding how artificial intelligence will develop and create value. ...