Tuesday, December 24, 2019

After Merger with Sprint, T-Mobile US Might Want Merger with Comcast

At least some of us have long believed the “best” set of merger outcomes would be for Sprint and T-Mobile US each to merge with Comcast or Charter Communications (pick either pairing), allowing four suppliers to sustain themselves in a consumer market where “mobile only” seems unsustainable. 

Now there are indications that is what T-Mobile US might be thinking, assuming its merger with Sprint is approved. 

The long-term sustainability argument has been that integrated service providers serving the U.S. consumer market would have to own both fixed and mobile network assets. 

The proposed T-Mobile US merger with Sprint creates a big mobile-only company. That has therefore seemed the first of at least two mergers: create a big mobile-only entity, then merge again with Comcast or Charter, with Charter the more likely candidate. 

The long-term sustainable structure for the U.S. consumer connectivity business has seemed to be four firms, including assets clustered around AT&T, Verizon, Comcast and Charter Communications, each with both mobile and fixed network facilities ownership. 

For that matter, the long-term and sustainable market might have all four of the leading contenders involved in some way with content assets, though Comcast and AT&T are likely to continue to be the only two with significant content production assets and revenues. 

Such cable-mobile combinations have a rather lengthy history: Cox Communications and Cablevision Systems Corp. tried go-it-alone regional approaches, and several of the biggest U.S. cable firms partnered with Sprint decades ago on spectrum purchases and other possible uses of that spectrum. 

Cablevision Systems, in fact, was considering a non-mobile wireless service about 20 years ago, essentially modeled on Japan’s Personal Handy-phone System, supporting voice communications at pedestrian speeds, but not full mobility at high speed. 

Cablevision’s thinking was that its hybrid fiber coax network would allow it to build such a system without the cost of acquiring licensed spectrum and building a full mobile network. That same sort of approach is used by Comcast to support its voice-over-Wi-Fi service, using hotspots with public as well as private access.

Such strategic scenarios also illustrate why the entry of Dish Network into the mobile market is the first of at least two transactions. Dish has to replace its declining satellite video business with another big revenue driver, which it hopes will be mobility. 

But that only creates yet another mobile-only company, and that is unsustainable in the U.S. market, some would argue. So the eventual path will be a combination of Dish assets with another entity, especially if every other leading internet service provider eventually owns its own mobile and fixed networks. 

If the Sprint merger with T-Mobile US is approved, we might well see that as only the first of two big transactions. The second will merge the mobile-only assets of new T-Mobile with the fixed assets of a cable company.

Monday, December 23, 2019

Overprovisioning is as Dangerous as Underprovisioning Capacity

Historically, the way connectivity providers have protected themselves from growing end user data demand is by overprovisioning: building networks that supply bandwidth in excess of present demand.

In the past, that has meant building voice networks to handle consumption at the peak calling hour of the peak day, for example. That also means that, most of the time, networks have spare capacity. 

One historical example is the variation of call volume, or data consumption, by time of day and day of week. That holds for business and consumer calling patterns, as well as for call centers. 

Both business and consumer calling volume drops on weekends and evenings, for example. Traffic to call centers peaks between noon and 3 p.m. on weekdays. 


Communications networks, however, no longer are dimensioned for voice. They are built to handle data traffic, and video traffic most of all. That still leaves service providers with the question of how much overprovisioning is required, and what that will cost, compared to revenues to be earned. 

In principle, service providers tend to overprovision in relation to the pace of demand growth and the cost of investing in that capacity growth. It is one thing to argue that demand will grow sharply; it is quite another matter to argue that demand levels a decade from now must be provisioned today. 

That noted, there is a difference between the amount of data consumed and the speed of an access connection. The two are related, but not identical, and customers often get them confused. 

But they do tend to understand the difference between a usage allowance (“how much can I use?”) and the speed of a connection (“how fast is the connection?”). 

Service providers must overprovision, it is true. But they tend to do so in stair step fashion. Mobile network operators build a next-generation network about every decade. Fixed network operators, depending on the platform, likewise increase capacity in stair step fashion, because doing so involves changes in network elements, physical media and protocols. 
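
As a simple, hypothetical sketch of that stair-step logic (the growth rate and the per-upgrade capacity multiple below are illustrative assumptions, not industry figures), one can estimate how many years of demand growth each capacity step absorbs before the next investment is required:

import math

# Hypothetical stair-step capacity planning sketch; all figures are illustrative assumptions.
demand_growth = 0.30   # assumed 30 percent annual growth in data demand
capacity_step = 10.0   # assumed 10x capacity gain per network upgrade

# Solve (1 + growth)^years = step for years, the interval one upgrade can cover.
years_per_step = math.log(capacity_step) / math.log(1 + demand_growth)
print(f"A 10x upgrade covers roughly {years_per_step:.1f} years of 30 percent annual growth")

Under those assumptions, a tenfold capacity step absorbs roughly nine years of demand growth, broadly consistent with the decade-long cadence of mobile network generations; faster growth or smaller steps would shorten the interval.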


The point is that capacity increases are matched to demand needs; the pace of technology evolution; the ability to sustain the business model; revenue, operations cost and profit margin considerations. 

While some might argue, as a rule, that service providers must “deploy fiber to the home,” or build 5G or make some other stair step move, such investments always are calibrated against the business case.


It is one thing to overprovision as needed to support the business model. It might impair the business model to overprovision too much, stranding assets. Some of you will recall the vast over-expansion of wide area network transport capacity in the 1998 to 2002 period. That provides a concrete example of the danger of excessive overprovisioning. 

Yes, some service providers have business models allowing them to create gigabit per second internet access networks today, even when typical customers buy service at 100 Mbps to 200 Mbps, and use even less. 

That does not necessarily mean every access provider, on every platform, in every market, can--or should--do so. 

Sunday, December 22, 2019

Why the Multi-Access Edge Computing Opportunity is Possibly Fleeting

The problem with big lucrative new markets is that they attract lots of competition. "Your margin is my opportunity," Amazon CEO Jeff Bezos notably has quipped.

So even if internet of things and multi-access edge computing are legitimate areas of potential new revenue for telcos and other internet service providers, they also are highly interesting to all sorts of other powerful potential competitors. And they are moving.

Walmart, for example, plans to build its own edge computing centers and expects to make that capability available to third parties. Walmart also expects to make its warehouse and shipping capabilities available to third-party sellers, using a model perhaps similar to the way Amazon supports third-party retailers. 


Walmart further says it will use its large supercenter store locations to fulfill online orders for Walmart’s groceries and other items.


The point is that the market opportunity for multi-access edge computing now is being challenged by retailers, Amazon and eventually others who plan to deliver edge computing as a service themselves, perhaps foreclosing much of the potential role for connectivity service providers to become suppliers of edge computing as a service. 

“All the cloud providers see themselves as extensions of edge computing,” says James Staten, Forrester Research principal analyst. “They’re trying to become the edge gateways. One of the biggest ways they’re gaining attention--and every enterprise should pay attention to this--is that they want edge computing as a service.”


At this point, it also is possible to flip that argument. Cloud computing might not be an "extension" of edge computing. Edge computing might be an extension of cloud.


In other words, to the extent the next generation of computing as a service is "at the edge," then the cloud kings have all the incentive they need to master edge computing themselves, simply to protect their existing businesses, instead of possibly losing that new business to others in the ecosystem.

The challenges for telcos at the edge are reminiscent of their experiences in the data center or colocation business. In the former case, telcos generally have added little value. In the latter case, telco facilities have not proven to be the logical neutral host platforms.

It is possible that distributed telco central offices will become convenient locations for some types of edge computing facilities. Those will probably develop around a relative handful of important use cases requiring ultra-low latency and high bandwidth in outdoor settings.

The reason "outdoor" becomes important is that any on-site, indoor use cases arguably will work using on-premises edge computing platforms and fixed networks for access. The specific value of 5G access might be relatively lower for indoor use cases, compared to outdoor and untethered use cases.

If you think about it, that reflects the current value of mobile access networks. They are most valuable for untethered, outdoor, mobile instances, and least useful indoors, where fixed network access using Wi-Fi is arguably a better substitute.

The larger point is that edge computing value will come as it supports important applications delivered "as a service." The physical details of colocation, rack space, cooling, power and security will be a business, but not as big a business as applications based on edge computing.

It remains unclear whether telco execs really believe they can master roles beyond access and rack space rental. Early moves by others in the ecosystem suggest why the multi-access edge computing opportunity (with ISPs as the suppliers) will be challenging.

If the biggest customers for computing as a service increasingly become the biggest suppliers, opportunities for others in the ecosystem are limited.


Saturday, December 21, 2019

Are Consumers Rational About Internet Access Choices?

There is a big difference between supply and demand in the fixed network internet access business, as in any other business. Some assume that supply is itself sufficient to create demand. That clearly is not the case. 

Ofcom estimates that 30 percent of consumers who could buy fiber-to-home service actually do so. AT&T believes it will hit 50 percent take rates for its fiber-to-home service after about three years of marketing. 

In South Korea, FTTH take rates seem to be only about 10 percent, for example, though network coverage is about 99 percent. In Japan and New Zealand, take rates have reached the mid-40-percent range, and network coverage might be about 90 percent. But in France and the United Kingdom, FTTH adoption is in the low-20-percent range. 

Perhaps 10 percent of Australians buy the fastest service available, whether that speed is 100 Mbps or a gigabit. 

In other words, though some assume that what really matters is the supply of access connections--seen in calls for better coverage and higher speeds, or lower prices--consumer behavior suggests the demand is not quite what some believe. 

Even when it is made available, FTTH does not seem to lead to a massive consumer rush to buy. Quite the contrary, in fact, seems to be the case. Nor does the supply of gigabit-per-second connections necessarily lead to better results (higher take rates).

In the U.S. market, though perhaps 80 percent of potential customers can buy a gigabit-per-second service, only about two percent seem to do so. The reason seems to be that most consumers find their needs are satisfied with service in the hundreds of megabits per second range. 


About 51 percent of U.S. fixed network internet access customers now buy service at 100 Mbps or higher, according to OpenVault. 

Some 35 percent buy service rated at 100 Mbps to 150 Mbps, OpenVault says. About 27 percent buy service running between 50 Mbps and 75 Mbps. 

It is one thing for policy advocates to call for extensive, rapid, nearly universal supply of really-fast internet access services. It is quite another matter for any internet service provider to risk stranded assets on a major scale simply to satisfy such calls. It is, in fact, quite dangerous to overinvest, as consumer behavior shows that most people do not want to buy the headline speed services. 

Some might essentially argue that consumers do not know any better. Others might argue consumers are capable of making rational decisions. And the rational decision might always begin with how much speed, or usage allowance, actually satisfies a need. 

Price also matters. Generally speaking, the trend in the consumer internet access market has been to supply faster speeds for about the same price.

Consumer budgets, in other words, are essentially fixed: they cannot and will not spend more than a certain amount for any and all communication services, while expectations about service levels creep up year by year. 

So critics will always be able to complain. No rational ISP will push its investments beyond the point where it believes it has a sustainable revenue model. And that model has to be based on reality, which is that real customers do not want the headline speeds advocates often push for.

Friday, December 20, 2019

U.S. Mobile Operator Spectrum Positions, Strategies Changing Fast

U.S. mobile service provider spectrum holdings, spectrum prices and spectrum strategies are changing very rapidly, in part because of the proposed T-Mobile US merger with Sprint, the emergence of Dish Network as a new provider, and very-active spectrum auctions.

Though scale is a clear advantage of the proposed T-Mobile US merger with Sprint, spectrum acquisition is key. A merged Sprint plus T-Mobile US would have huge spectrum assets, compared to all other leading mobile providers.


Verizon’s relative spectrum paucity also is clear. Verizon has more customers to support on its network than does T-Mobile US or Sprint, for example, and arguably has the most to gain from using small cells to intensify its spectrum reuse. 

Verizon also has the most incentive to use new millimeter wave spectrum, as that represents the greatest immediate source of new mobile spectrum. 


Though Verizon in 2018 had about the same amount of spectrum as T-Mobile US, it had twice as many subscribers as T-Mobile US, and three times as many as Sprint, which held almost twice Verizon’s spectrum. 



Spectrum prices also might be a bit hard to evaluate using the traditional metric of dollars per MHz per potential customer (MHz-POP), in part because the amounts of spectrum are so much greater in millimeter wave auctions. Where low-band spectrum with much more limited capacity once sold for prices above $1 per MHz-POP, millimeter wave spectrum appears so far to be selling for about $0.01 per MHz-POP, and should cost even less, on a MHz-POP basis, as frequency increases.
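
A rough, hypothetical illustration of the metric makes the point; assume, purely for illustration, a $1 billion price paid for a 10-MHz low-band license and the same price for an 800-MHz millimeter wave license, each covering 100 million people:

# Hypothetical illustration of the dollars-per-MHz-POP metric; figures are assumptions, not auction results.
def price_per_mhz_pop(price_dollars, bandwidth_mhz, population):
    return price_dollars / (bandwidth_mhz * population)

pops = 100_000_000  # assumed population covered by each license

low_band = price_per_mhz_pop(1_000_000_000, 10, pops)    # about $1.00 per MHz-POP
mmwave   = price_per_mhz_pop(1_000_000_000, 800, pops)   # about $0.0125 per MHz-POP

print(f"Low band: ${low_band:.2f} per MHz-POP; millimeter wave: ${mmwave:.4f} per MHz-POP")

The same dollar outlay spread across 80 times the bandwidth yields a per-MHz-POP figure roughly two orders of magnitude lower.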

The reason is that higher frequencies feature much greater capacity (orders of magnitude more MHz of bandwidth). As with any business or consumer budget, there is only so much money to spend on any particular product. 

As consumers now pay between $40 and $80 for internet access for hundreds of megabits per second, where they once paid the same amounts for a few megabits per second, so too mobile service providers can only afford to pay so much for new blocks of spectrum.

So prices will fall. 


Rural Internet Access Will Always be Tough

Reality generally is more complicated than many would prefer, and supplying internet access to highly-isolated locations provides a clear example, even if some casually misunderstand the enormity of the problem.

The big problem is that, at least for traditional fixed networks (fixed wireless is the exception), there literally is no business case for a communications network. Without subsidies, such networks cannot be sustained.

Assuming a standard fixed network investment cost, no positive business case can be produced over a 20-year period, the Federal Communications Commission has suggested. 

That is why satellite access traditionally has been the only feasible option for highly-isolated locations, with subsidized access the only way most networks can survive in rural areas. 

There is a direct relationship between housing density and network cost, and most of the coverage problem lies in the last two percent of housing locations. Consider the many U.S. states where rural housing density ranges between 50 and 60 locations per square mile, and set aside the vast western regions east of the Pacific coast ranges and west of the Mississippi River, where density can easily fall to the low single digits per square mile.


Assume 55 locations per square mile and two fixed network suppliers in each area. If buying is at 100 percent and two equally skilled competitors split the market, each provider theoretically gets about 27 accounts per square mile.

At average revenue of perhaps $75 a month, that means total revenue of about $2,025 a month, or roughly $24,300 per year, for each provider's accounts in a square mile.

The network reaching all homes in that square mile might cost an average of $23,500 per home passed, or about $1.3 million in total.

At 50 percent adoption, that network investment works out to roughly $47,000 per account in a square mile, against revenue of $900 per account, per year. Over 10 years, revenue per account amounts to just $9,000.
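
A minimal sketch of that arithmetic, using the figures assumed above (55 locations per square mile, $75 monthly revenue per account, $23,500 per home passed, 50 percent adoption), makes the gap concrete:

# Hypothetical rural fixed-network business case per square mile, using the assumptions above.
locations = 55                 # homes per square mile
cost_per_home_passed = 23_500  # dollars of network investment per location
monthly_arpu = 75              # dollars of revenue per account, per month
adoption = 0.50                # share of locations buying service

network_cost = locations * cost_per_home_passed  # about $1.3 million per square mile
accounts = locations * adoption                  # about 27.5 accounts
cost_per_account = network_cost / accounts       # about $47,000

ten_year_revenue_per_account = monthly_arpu * 12 * 10  # $9,000

print(f"Investment per account: ${cost_per_account:,.0f}")
print(f"10-year revenue per account: ${ten_year_revenue_per_account:,.0f}")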

The business case does not exist, without subsidies. 

It remains unclear whether new low earth orbit satellite constellations can attack the cost problem aggressively. What does not seem unrealistic is that revenue expectations will remain in the $100-per-month range for internet access.

Facebook Creating its Own Operating System Points to Telco Strategy as Well

Some might argue operating systems now are irrelevant, or nearly so, as barriers to consumer use of applications and services. Operating systems clearly continue to matter for app, device and platform suppliers, though. Which might explain why Facebook is now working on its own operating system, apparently for use with virtual reality or augmented reality products.

As Apple now designs its own chips and Google and Amazon design and market their own hardware, so Facebook is thought to be readying its own chip designs to support the new operating system and might design its own eyewear computing appliances.

Apple long has had its own operating systems and audio services, and now has moved into the video streaming business as well. 

Such moves into other parts of the ecosystem are hardly new. Google, the supplier of leading consumer applications, also became a hardware supplier, an internet service provider and an operating system platform as well. 

Amazon designs and builds its own hardware, is a cloud computing services supplier, provides its own video and audio streaming services and builds its own servers and server chips. 

The strategy of moving into new parts of the internet and consumer products ecosystems therefore has been underway for some time. 

There are implications for connectivity providers, whose historic and unique role in the ecosystem--network connectivity--is under assault by others in the internet ecosystem. Others in the ecosystem are gradually occupying multiple roles in the value chain, sometimes to reduce costs, sometimes to create product distinctiveness, sometimes to gain strategic leverage over potential or present competitors. 

At least in principle, such moves might also make future “breakup” efforts more difficult for regulators and policymakers, or at least create potential new revenue streams in the event of any successful breakup efforts. Licensing of core technology might be a necessary part of any such efforts, even if key applications are separated. 

Connectivity providers have not been comatose in exploring ways to enter other parts of the value chain, but it has proven quite difficult. The existential danger is that, over time, value in the ecosystem will continue to shift to appliances and devices; applications and platforms; marketplaces and commerce. 

That is why some believe “connectivity alone” will not continue to be a viable revenue model for telcos, despite the obvious danger in attempting major transformations. As the old saying suggests, telcos are between a rock and a hard place: certain decline or demise on one path, possible decline or demise on the other. 

Even as 5G, edge computing, internet of things, augmented reality, artificial intelligence and network slicing create new potential roles and revenue streams, the decay of legacy connectivity provider revenue streams is obvious and significant. 

Also, at a high level, connectivity budgets--consumer or organization and business--are not unlimited. Even if new products are developed that replace older products, customers are unlikely to spend much more than they do at present, for all such services. 

Big new revenue opportunities therefore almost by definition must come from outside the traditional connectivity ecosystem. It will be risky. But it might be a necessary risk.

Will Generative AI Follow Development Path of the Internet?

In many ways, the development of the internet provides a model for understanding how artificial intelligence will develop and create value. ...