Friday, August 18, 2017

Absent a Disruption, U.S. Telcos Will See Internet Access Share Between 28% and 45%

How well can any telco do, in terms of internet access market share, when facing accomplished competitors with scale, skill and other business resources, including their own facilities?

Verizon’s experience with its FiOS service suggests the answer is “40 percent to 45 percent of the market,” even when fiber to the home is the access platform.  

"At the end of the second quarter of 2017, cable had a 64 percent market share versus 36 percent for telcos,” said Bruce Leichtman, Leichtman Research Group president and principal analyst.

Unless something breaks the current trend, telcos could collectively become something of an afterthought in the access business, with market share as low as 28 percent by 2020, according to New Street Research.

Stranding 60 percent of the deployed capital in FTTH access networks is one very good reason for some service providers to look at 5G fixed wireless. If the maximum share is range bound around 40 percent to 50 percent, then any solution that minimizes stranded assets will improve the business case.
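
To make that arithmetic concrete, here is a minimal sketch, assuming a purely hypothetical per-home-passed cost (the dollar figure is illustrative, not an actual carrier number): at lower take rates, more of the per-passing capital is stranded and the cost per paying subscriber climbs.

```python
# Illustrative sketch: hypothetical cost per home passed; only the take rates
# echo the 40-to-50-percent share range discussed above.

def cost_per_subscriber(cost_per_passing: float, take_rate: float) -> float:
    """Capital per paying subscriber when only a fraction of passings are sold."""
    return cost_per_passing / take_rate

def stranded_share(take_rate: float) -> float:
    """Fraction of per-passing capital serving no paying customer
    (simplified: assumes all capital is spent per passing, none success-based)."""
    return 1.0 - take_rate

FTTH_COST_PER_PASSING = 800.0  # hypothetical dollars per home passed

for take in (0.30, 0.40, 0.50):
    print(f"take rate {take:.0%}: "
          f"${cost_per_subscriber(FTTH_COST_PER_PASSING, take):,.0f} per subscriber, "
          f"{stranded_share(take):.0%} of passing capital stranded")
```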

Also, if one believes internet access is the anchor service for consumer customers, continued share loss is dangerous. In the second quarter of 2017, most of the telco share losses came from the three telcos with largely rural legacy footprints--CenturyLink, Frontier Communications and Windstream.





ISPs                          Subscribers at End of 2Q 2017   Net Adds in 2Q 2017   % Change

Cable Companies
Comcast                       25,306,000                      175,000                0.69%
Charter                       23,318,000                      267,000                1.15%
Altice                        4,004,000                       2,000                  0.05%
Mediacom                      1,185,000                       6,000                  0.51%
WOW (WideOpenWest)            727,600                         -1,400                -0.19%
Cable ONE                     521,724                         -1,603                -0.31%
Other Major Private Company   4,845,000                       15,000                 0.31%
Total Top Cable               59,907,324                      461,997                0.77%

Phone Companies
AT&T                          15,686,000                      -9,000                -0.06%
Verizon                       6,988,000                       -23,000               -0.33%
CenturyLink                   5,868,000                       -77,000               -1.31%
Frontier                      4,063,000                       -101,000              -2.49%
Windstream                    1,025,800                       -21,800               -2.13%
Cincinnati Bell               307,100                         -300                  -0.10%
FairPoint                     304,193                         -1,160                -0.38%
Total Top Telco               34,242,093                      -233,260              -0.68%

Total Top ISPs                94,149,417                      228,737                0.24%
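
The headline shares quoted above can be recomputed directly from the table's totals; a quick sketch using only the figures shown:

```python
# Totals taken from the table above (end of 2Q 2017).
cable_subs, telco_subs = 59_907_324, 34_242_093
cable_adds, telco_adds = 461_997, -233_260

total_subs = cable_subs + telco_subs

print(f"cable share: {cable_subs / total_subs:.1%}")  # roughly 64 percent
print(f"telco share: {telco_subs / total_subs:.1%}")  # roughly 36 percent
print(f"industry net adds, 2Q 2017: {cable_adds + telco_adds:,}")
```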

What Needs Explaining is Telecom Price Increases

Even if one assumes relatively constant price pressure on retail communications products, price trends for fixed and mobile network services need deciphering.

Global prices, measured as a percentage of gross national income per capita, have fallen at least since 2008, according to the International Telecommunication Union. But what requires explanation is higher prices, where they happen.

In the U.S. market, since at least 2009, prices have generally fallen for some products such as mobile service, mobile voice and texting.

Also, internet access prices have fallen about five percent since 2009. But prices for fixed network voice and content subscriptions have risen.




Even prices for internet access services, which have been stronger in some quarters because consumers now buy faster services that cost more than slower ones, have dipped since 2009, with most of the drop happening in 2017.


The exception to the trend of falling prices is subscription TV services, which have seen prices grow since 2009. So what makes content services different from internet access and mobility? Internet access is the classic "dumb pipe" service, hard to differentiate and subject to Moore's Law fundamentals (constantly increasing quantity, constantly dropping price per unit).

Subscription TV is a content service, more analogous to websites, music and fashion than a “telecom” service. Also, the value of mobility these days is arguably more weighted to internet-accessed apps and content than to carrier voice and messaging functions. That is to say, more of the value of a mobile service now is the dumb pipe access to content and apps, and less the carrier voice and messaging services.




Ironically, one might also note that prices for fixed network voice service, a product far fewer consumers now buy, have grown since 1996, when local telecommunications was deregulated in the U.S. market. Since about 1996, prices have climbed about 35 percent. There are a couple of reasons, among them the ability of suppliers to raise prices more easily, despite the countervailing trends of declining demand and greater competition.

The other likely reasons are that the mix of business and consumer lines has been changing, with business lines making up a greater percentage of the total than in the past, and that consumers who do not value fixed network voice service already have defected to mobile. The consumers who remain likely place a higher value on fixed network voice.
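
For perspective, a small sketch of the arithmetic, treating the roughly 35 percent figure as a cumulative change from 1996 to 2017: the implied annual increase is modest but persistent.

```python
# Annualized rate implied by a roughly 35 percent cumulative price increase since 1996.
cumulative_increase = 0.35
years = 2017 - 1996

annual_rate = (1 + cumulative_increase) ** (1 / years) - 1
print(f"implied annual price growth: {annual_rate:.2%}")  # about 1.4 percent per year
```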


On the whole, basic economic principles seem to be at work. Generally speaking, demand for any product will grow with lower prices and fall with higher prices. Higher prices for fixed network voice have definitely been accompanied by lower take rates.

Globally, fixed telephone accounts seem to have peaked about 2006.

source: ITU

Thursday, August 17, 2017

Yes, 5G is a Gamble. But it is, in Some Markets, a Very Necessary Gamble

The commercial revenue drivers for 5G are not entirely clear, argues William Webb, former Ofcom senior technologist. The 5G "vision is flawed," he says.

On the other hand, in many markets, mobile operators will require speeds that “can compete with fiber services,” says Sam Barker, Juniper Research analyst. That means 5G is necessary, in the same way that optical fiber has been necessary to boost fixed network bandwidth (no matter how deep into the distribution network a service provider deploys it).

In that sense, it is not so useful to know that perhaps 1.4 billion 5G connections will be in service by 2025, up from one million in 2019, the anticipated first year of commercial launch, as Juniper Research now forecasts will be the case.

Many, perhaps most, of those connections likely will be accounts that already were buying 4G services. That is a familiar situation for many fixed network service providers moving from copper access to optical fiber: for nearly every account gain for “fiber-based” internet access, those providers “lose” a customer formerly buying copper-based internet access.
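
A rough sketch shows why the headline forecast overstates the revenue opportunity; the connection figures are Juniper's, while the share of connections that simply migrate from 4G is a hypothetical assumption used only to illustrate the point.

```python
# Juniper Research forecast figures cited above; the migration share is assumed.
connections_2019 = 1_000_000
connections_2025 = 1_400_000_000
years = 2025 - 2019

cagr = (connections_2025 / connections_2019) ** (1 / years) - 1
print(f"implied connection growth rate, 2019 to 2025: {cagr:.0%} per year")

migrated_share = 0.80  # hypothetical: fraction of 5G accounts replacing existing 4G accounts
net_new_accounts = connections_2025 * (1 - migrated_share)
print(f"net new (non-migrated) accounts by 2025: {net_new_accounts:,.0f}")
```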

So 5G--though a big gamble--is likely a necessary gamble, illustrating the point that it is the business model for 5G which poses the single greatest challenge, not the technology, not the need to support small cells or better radios, not even the capital investment and spectrum, even if all are crucial elements.

"Users will not value the higher data rates that are promised and will not need the higher capacity forecast," Webb argues. One does not have to agree with that sentiment to note that there are huge business model challenges.

Perhaps more debatable are some of Webb's supporting arguments, such as the claim that "technological advances" are "insufficient" to support the platform. Many would disagree, at least with the notion that such advances will not be available in the time frame, and at the cost, required to support the business case.

Webb also argues that mobile operators "are insufficiently profitable to afford it." Some might argue that is largely true in some markets, but not all; or true for some providers within a market, but not others; and, in any case, strategically irrelevant. If survival requires what 5G can provide, and 4G cannot, then the investments must be made.

That also is not a new situation. One might argue that, at least for a couple of decades, the same situation has been true for most deployments of fiber to the home. Even if they never say so in public, executives of firms making the investments understood, and understand, that the decision to upgrade to a next generation platform is not driven by classic investment criteria--because the profit from doing so will be X--but by the recognition that failure to make the investment cedes the market (and perhaps the whole business) to new competitors.

That is important. The next generation network upgrade is not based on classic investment criteria, but for strategic reasons. “You get to stay in business” is the driver, not “our revenues will grow by X.”

Webb essentially argues that the 5G investment should not be made. There is merit to that argument, perhaps strategically, often tactically, and often for some suppliers compared to others. But, in some markets, as the colloquial expression suggests, "good luck with that."

There are markets--developed markets where 4G adoption is saturated, for example--where there basically is no option but to make the leap. The simple answer is that suppliers have run out of things to sell customers on 4G networks, especially new things that generate incremental revenue.

So it is noteworthy that Juniper Research now argues the U.S. market alone will have the "highest number of 5G connections for fixed wireless broadband and automotive services."

In some markets, where strategic investment in next generation networks has been most important, 5G offers a way to dramatically reduce the costs of next generation network infrastructure, while immediately producing incremental revenue gains and boosting competitive positioning.

In other words, 5G supports what 4G cannot: a way to address a huge new revenue segment by replacing fixed network internet access, both in territory and outside the current footprint. Doing so out of region is more challenging, but at least in principle it would offer a chance for the largest regionally-based fixed network service providers (AT&T, Verizon, CenturyLink), as well as new entrants, to offer internet access services with a sustainable business case.

In some instances, that is not just because 5G network platforms are available, but also because other innovations, such as spectrum sharing, bandwidth aggregation across licensed and unlicensed bands and new access to unlicensed assets and new spectrum, are available.

The point is simply that even if the 5G business case is uncertain, as it is, there are markets where the gamble to deploy must be taken.

Tuesday, August 15, 2017

Virtualization Means Old Definitions Do Not Work

The communications business has become a funny, fuzzy world. We used to be able to clearly define "narrowband," "wideband" and "broadband." We used to clearly demarcate private, in-building functions from public network "access" functions and assets, from "trunking" or distribution network assets, from wide area network functions and assets.

These days, as networks and apps are more virtualized, the old definitions do not always work. Consider the wide area networks.

These days, it likely is the case that more than a third of all traffic moving across wide area networks does so on a “private” (enterprise owned) network, and not over the “public WAN.” In some cases the private percentage can be as high as 70 percent.

“Now networks are being built by hyperscalers,” says Tim Stronge, TeleGeography VP. That is a historic change.

On Latin American routes, about 70 percent of total traffic now moves over private networks. In other words, only about 30 percent of undersea, long haul traffic actually is sold to customers who use "public" networks, according to Erick Contag, Globenet CEO.

On trans-Pacific routes, OTT app providers also are driving demand, accounting for about 33 percent of lit demand on the "public" networks, says Jonathan Kriegel, Docomo Pacific CEO.

Some of the WAN is private, some is public.

Just as important, as applications increasingly are virtualized, transport and access functions are virtualized as well. Transport and access still happen, in a physical sense. But it is less certain, at any given time, whether those assets are public, private or shared. Nor is it as clear whether a particular function is "access" or "local distribution," as it was decades ago when (private) local area networks became important and widespread.



These days, “access” sort of depends on the situation. The best example is public Wi-Fi, or private Wi-Fi used by visitors or guests in a private residence.

When a consumer uses his or her own Wi-Fi, it still is possible to differentiate clearly between private and public portions of that network. The public network connection supplies the internet access; the Wi-Fi supplies the in-home signal distribution.

When third party users connect to a Wi-Fi hotspot, their “access” connection is the Wi-Fi, not the “access network.” That remains true whether the access is free of charge or the consumer has a subscription that grants Wi-Fi access.

For U.S. customers served by Comcast and others, being an internet access customer at any one location includes, as part of the service, access to all other homespots (the public side of each customer’s home internet access connection).

So when “roaming” outside the home, Comcast customers can connect to Xfinity homespot networks in a way that makes Wi-Fi the “access.”

It is even more complicated than that. Consider content streaming. From a logical standpoint, content delivery services such as BAMTech and Quickplay are the delivery infrastructure, even if, at a physical level, both require an internet connection of some type.

For content owners using delivery services such as BAMTech and Quickplay, customers "bring their own broadband."

The historic importance of “public” and “private” parts of networking was that customers owned the private assets, while service providers owned the “public” parts of the infrastructure. These days, the distinction is never so clear cut. True, at the physical level, a consumer or enterprise might deploy private Wi-Fi assets while buying public access.

But there now is a difference between “users” and “customers,” and status changes dynamically. At the location where I have a Comcast internet access service, I am a customer. When roaming on the Xfinity network elsewhere, I am a user. When I use amenity Wi-Fi at a hotel, I am only a user. The venue is the “customer.”

When I use Wi-Fi on a plane, I am again “the customer” and Wi-Fi is the “access.” The point is there are times when I use Wi-Fi as “access” and times when I use it only as local distribution; sometimes I am a user, other times I am the customer.
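
One way to make that distinction concrete is to treat "role" and "relationship" as attributes of a session rather than of the network itself. The sketch below is a hypothetical data model, not any operator's actual schema; the example sessions simply restate the cases described above.

```python
from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    ACCESS = "access"                          # the link that reaches the public internet
    LOCAL_DISTRIBUTION = "local distribution"  # in-building signal distribution only

class Relationship(Enum):
    CUSTOMER = "customer"  # pays for the connection
    USER = "user"          # uses a connection someone else pays for

@dataclass
class WifiSession:
    network: str
    role: Role
    relationship: Relationship

# Same person, same Wi-Fi technology, different roles depending on context.
sessions = [
    WifiSession("home Wi-Fi over a Comcast connection", Role.LOCAL_DISTRIBUTION, Relationship.CUSTOMER),
    WifiSession("Xfinity homespot while roaming", Role.ACCESS, Relationship.USER),
    WifiSession("hotel amenity Wi-Fi", Role.ACCESS, Relationship.USER),
    WifiSession("in-flight Wi-Fi", Role.ACCESS, Relationship.CUSTOMER),
]

for s in sessions:
    print(f"{s.network}: {s.role.value}, {s.relationship.value}")
```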

That is not to deny the physical necessity of “access” facilities, whatever the direct (entity buying the access connection) and indirect relationships (users of those connections).

But there is an important shift. Application access these days “assumes” the existence of internet access, and so assumes “somebody” supplies the access to potential users (who might, or might not, be the access connection buyers). That’s just another business model implication of “separating access from app.”

Old categories are getting more porous all the time. Amazon Kindle owners often download content using Wi-Fi, and sometimes using Amazon’s special-purpose internet access, supplied by AT&T.

And it may no longer matter what the difference is between narrowband and broadband, but only what consumers consider “market standard” or “minimally viable” speeds and capacity. That is an ever-evolving figure.

The point is that anybody who studied the architecture of networking before local area networks, before the cloud, before Wi-Fi, before the internet and before multiple access platforms developed, would immediately see how diffuse and porous networking concepts have become.

Smart Cities are Substantially About Carbon Footprint

It is interesting how many of the potential "smart city" applications of the internet of things deal with carbon: electricity consumption; auto exhaust; inefficient driver searches for parking; public lighting efficiency; green buildings; more efficient traffic flow and public transit, for example.

And then there are all the other things humans do (move, eat, dwell) that also have carbon implications (opening doors, the carbon footprint of foods, clothing, electronics, communications). Since most people live in cities, it is cities that produce the most human carbon impact.

How Will 5G Small Cell Costs Compare to FTTH?

Nobody yet really knows how fixed wireless enabled by a 5G network will compare, in terms of deployment cost, with fiber-to-the-home costs, except to say that virtually everyone expects fixed wireless to cost less than FTTH.

The issue is “how much lower, per potential passing,” those costs will prove to be. Many of the potential data points (fixed wireless to high rise buildings; mesh networks for business or consumers) are so different from potential ubiquitous 5G small cell deployments that those other examples are not so useful. Nor can the other deployments fully capture the costs of dealing with line of sight impediments, when small cells might be deployed very densely, perhaps on every other light pole.

What is really different about dense small cell networks is that, for the first time, the total cost of the infrastructure might be dominated by the cost of the trunking network, not the radio access network.

It might not be unreasonable to assume that a fiber-deep network of the sort Verizon is building, which might be called a "fiber to the light pole" deployment, would cost less than, or close to, a fiber-to-the-node architecture. According to Nokia estimates, the trunking network should cost less than half as much as fiber to the home, and conceivably just a quarter of FTTH cost.
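
To see what those ratios imply, here is a sketch using a purely hypothetical FTTH cost per passing; only the "half to a quarter of FTTH" range attributed to Nokia above is taken from the text.

```python
# Hypothetical FTTH cost per home passed; the ratios come from the Nokia estimate cited above.
ftth_cost_per_passing = 800.0  # assumed dollars per home passed

for label, ratio in (("upper bound (about half of FTTH)", 0.50),
                     ("lower bound (about a quarter of FTTH)", 0.25)):
    print(f"{label}: ${ftth_cost_per_passing * ratio:,.0f} per passing for the trunking network")
```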

Will Generative AI Follow Development Path of the Internet?

In many ways, the development of the internet provides a model for understanding how artificial intelligence will develop and create value. ...