Wednesday, August 2, 2017

Artificial Intelligence Can Improve Telco Revenue, Capex, Opex, Profit Margin

Artificial intelligence (machine learning) can make a key difference in forecasting telecom industry market share trends and performance, says James Sullivan, J.P. Morgan head of Asia (ex-Japan) equity research.

Consider the key matter of market share. “When overall industry revenue growth is stagnant, as it is with most telco markets, market share movements become the determining factor of an operator’s top-line performance,” Sullivan says.

The implication: if your forecasting models are better at detecting changes in market share, your forecasts for revenue also will be more accurate.

That is important, as “forecasting changes in market share trends and anticipating trend breakage has historically been challenging for the Street,” says Sullivan.

He points to the case of forecasts for AIS from 2010 to 2016. AIS posted strong market share improvements from 2010 to 2012, which resulted in revenue significantly beating analysts’ forecasts. Because analysts extrapolated from past performance to model future performance, they missed the trend change.

The same thing happened when trends changed again between 2014 and 2016: competitor True deployed its 3G and 4G networks, and AIS revenue came in lower than history suggested it would.
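
To make the extrapolation trap concrete, here is a minimal sketch, with made-up revenue figures rather than AIS data, of a naive extrapolation paired with a crude trend-break check:

```python
# Illustrative only: naive extrapolation versus a simple trend-break check,
# of the kind that tripped up AIS forecasts. All figures are invented.
import numpy as np

revenue = np.array([100, 104, 108, 112, 116, 117, 117.5, 118])  # growth slows mid-series

growth = np.diff(revenue)
naive_next = revenue[-1] + growth.mean()  # extend the average historical growth

# Crude break check: compare recent growth with the long-run average.
recent, long_run = growth[-3:].mean(), growth.mean()
if recent < 0.5 * long_run:
    print(f"possible trend break: recent growth {recent:.2f} vs average {long_run:.2f}")
print(f"naive extrapolation forecasts {naive_next:.1f}")
```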

“For now at least, the uses of advanced analytical techniques, such as machine learning, are mainly applicable to the gathering and processing of alternative data sets,” says Sullivan.

“In our view, we can take a significant step towards solving long run market share and capex forecasts for telcos if we can understand and obtain the following data sets:

1) What is relative network quality by region, in terms of download / upload speeds but also network availability by technology?
2) What is current data pricing for each operator in each country analyzed, updated constantly?

“These statistics, taken together, define relative value and assist in forecasting pricing power by operator, market share shifts, incremental capex, and therefore incremental opex and margins,” argues Sullivan.

“In our view, capex is dependent on demand for data and data usage and current utilization/coverage of an operator’s network,” says Sullivan. “Changes in data pricing can signal how demand for data will evolve, while relative network quality can be a signal for utilization rates.”

The point is that data pricing and network quality are the key data sets required to forecast capex.
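
As a rough illustration of how those two data sets might feed a forecast, consider this minimal sketch. The operator-quarter figures are invented, and a plain linear regression stands in for whatever technique an analyst would actually use:

```python
# A hypothetical sketch, not J.P. Morgan's model: forecast market-share change
# from the two data sets Sullivan names, relative network quality and relative
# data pricing. All numbers are illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

# One row per operator-quarter.
# rel_quality: operator download speed / market-average download speed
# rel_price:   operator price per GB / market-average price per GB
X = np.array([
    [1.20, 0.90],   # faster and cheaper than the market
    [1.05, 1.00],
    [0.95, 1.10],
    [0.80, 1.25],   # slower and dearer than the market
])
y = np.array([0.8, 0.2, -0.3, -0.9])  # market-share change, percentage points

model = LinearRegression().fit(X, y)

# Forecast for an operator whose relative value is improving.
print(model.predict([[1.10, 0.95]]))  # expected share change, in percentage points
```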

Networks “are not commoditized and therefore value is a function of price and quality.” So it is “value” that ultimately drives usage, capex and opex.

Sullivan will be speaking at the Spectrum Futures conference.


China Unicom, China Telecom Find Virtualization Reduces Power, Costs; Boosts Server Performance

Network functions virtualization, which will be a foundation for coming 5G networks, also has direct implications for cloud data center costs, Intel, China Unicom and China Telecom have found.

Virtualization has many facets, but always builds on a separation of control plane and data plane functions. In simple terms, that allows use of hardware from multiple suppliers (“commodity hardware”) to implement the control operations, a capability that should allow lower server costs.

The intermediate objective then is “virtualized software running on a standard server,” leading to lower capital and operating expense.
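
In the software-defined sense of that separation, the division of labor looks roughly like the sketch below: a control plane installs match-action rules, while the data plane only performs table lookups. The classes and rule format are illustrative, not any vendor's API:

```python
# Minimal sketch of control plane / data plane separation. Hypothetical classes,
# not a real NFV or SDN stack.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    match_dst: str   # destination prefix to match
    action: str      # e.g., an output port

class DataPlane:
    """Fast path: table lookups only, no decision logic."""
    def __init__(self):
        self.table: list[Rule] = []
    def forward(self, dst: str) -> str:
        for rule in self.table:              # first-match lookup
            if dst.startswith(rule.match_dst):
                return rule.action
        return "drop"

class ControlPlane:
    """Slow path: decides routes and pushes rules down to the data plane."""
    def __init__(self, dp: DataPlane):
        self.dp = dp
    def install(self, prefix: str, port: str) -> None:
        self.dp.table.append(Rule(prefix, port))

dp = DataPlane()
ControlPlane(dp).install("10.0.", "port1")
print(dp.forward("10.0.0.7"))  # -> port1
```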

However, argues Intel, “many current virtualized network functions implementations are not well optimized for the new virtual environment and offer far less performance than their traditional proprietary counterparts.”

“While NFV, SDN, and orchestration can greatly reduce the operational complexity of deploying a telecommunications cloud, the infrastructure required to overcome the performance shortfall can be costly,” says Intel.

Control and User Plane Separation (CUPS), as defined by the 3rd Generation Partnership Project (3GPP), enables independent scaling of the control and data planes and is the next logical step for VNF design, Intel says.

While the control plane is easier to scale using standard off-the-shelf compute and memory resources, scaling the data planes to support complex packet processing at high rates can be challenging, says Intel.

The opportunity is to design a system where the data plane can support reconfigurable packet pipelines and complex packet processing at very high throughputs.

For example, in recent deployments, the Intel SmartNIC solution, deployed with HP Enterprise gear, achieved a total power reduction of about 50 percent compared to a standard NIC server.

Together, the Intel Arria 10 FPGA-based SmartNICs and commercial server central processing units optimize data plane performance to achieve lower costs while maintaining a high degree of flexibility, Intel says.

The SmartNIC provides a performance improvement per server greater than three times the base case.


Using SmartNICs improves performance and supports higher throughputs at a marginal increase in power and cost, Intel says.
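
Taken at face value, those two figures (roughly three times the throughput per server, and about half the power) compound at scale. A back-of-envelope sketch, with the baseline throughput and power per standard-NIC server assumed for illustration:

```python
# Back-of-envelope using the figures Intel cites: ~3x throughput per server and
# ~50 percent power versus a standard-NIC server. Baseline values are assumed.
base_throughput_gbps = 10      # assumed per standard-NIC server
base_power_w = 400             # assumed per standard-NIC server
target_gbps = 300              # assumed aggregate requirement

std_servers = target_gbps / base_throughput_gbps           # 30 servers
smart_servers = target_gbps / (3 * base_throughput_gbps)   # 10 servers
std_power = std_servers * base_power_w                     # 12,000 W
smart_power = smart_servers * (0.5 * base_power_w)         # 2,000 W

print(f"standard NICs: {std_servers:.0f} servers, {std_power:,.0f} W")
print(f"SmartNICs:     {smart_servers:.0f} servers, {smart_power:,.0f} W")
```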

HPE calls its system for virtualizing data plane resources “vBRAS” (virtual broadband remote access server) technology. vBRAS scales the control and data planes independently on physically separate, standard off-the-shelf server platforms.

The data plane is optimized for packet processing and also accelerates computationally-intensive traffic shaping and quality of service functions, which is more efficient, and allows operation at lower costs.

In a second case, Intel is enabling mobile edge computing for China Unicom, with a solution Nokia designed and developed to run on Intel Xeon processors.

The design allows extending data center solutions to the very edge of the network. That, in turn, improves the mobile user experience by providing high availability of content with reduced latency.

Tuesday, August 1, 2017

Value, Not FTTH, Is the Issue

At some point, even the biggest new developments in internet access simply become routine. So it is with the full deployment of commercial gigabit internet access capabilities by Mediacom.

Mediacom Communications says its entire network now can supply gigabit-per-second internet access to every location it passes: some 2.8 million homes and businesses, scattered across 1,000 U.S. communities.

Mediacom has argued it would become the first major U.S. cable company to fully transition to the DOCSIS 3.1 “Gigasphere” platform, and appears to have done so.

What might be less "routine" is mass deployment of gigabit access by telcos in the U.S. market, in areas where they have not already deployed fiber to home networks.

Unlike cable TV companies, telcos have to make big changes in their access platforms to reach ubiquitous gigabit speeds.

Between 2004 and 2013, large telcos (mostly Verizon, but including AT&T, CenturyLink and Frontier) accounted for about 83 percent of the FTTH build, while other providers added just 17 percent of the annual additions, according to the Fiber to the Home Council.

Since 2013, the large telcos only accounted for about 52 percent of the build while the “other 1000” FTTH providers added 48 percent of the new connections.

Homes passed by FTTH networks in the United States have grown to about 30 million, while subscribers number about 13.7 million. That means take rates, where FTTH is available, are about 46 percent.

That is just a bit higher than the market adoption one would expect to see in most U.S. markets where cable TV companies and telcos compete head to head. Typically, telco take rates are in the 40-percent range, with cable operators tending to get as much as 60 percent share.


So the issue is not access media, but value. Cable operators, even without fiber to the home, have been able to gain leadership of the fixed network internet access market, and now are moving toward gigabit speeds on a mass market basis.

In fact, even when they compete with FTTH, cable companies seem able to routinely garner most of the net new additions. In the first quarter of 2017, cable companies got more than 100 percent of net new additions.

Altogether, the U.S. housing market includes about 126 million to 130 million dwellings. So FTTH penetration (actual customers, not passings) is about 11 percent.
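
A quick check of the arithmetic behind those take-rate and penetration figures:

```python
# Reproducing the cited figures; the 30 million is treated as homes where
# FTTH is available, and 128 million as the midpoint of the dwelling count.
homes_passed_m = 30.0        # FTTH homes where service is available
subscribers_m = 13.7         # FTTH subscribers
dwellings_m = 128.0          # midpoint of 126 million to 130 million dwellings

take_rate = subscribers_m / homes_passed_m    # where FTTH is available
penetration = subscribers_m / dwellings_m     # of all U.S. dwellings

print(f"take rate: {take_rate:.0%}, penetration: {penetration:.0%}")  # 46%, 11%
```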





Content is One Way to Move Up the Stack

Reliance Jio’s use of content to drive interest in its 4G service is going to cause other mobile service providers to respond, analysts now say.

“We expect Jio to be competitive in the race for IPL digital rights, and incumbent telcos may need to revisit their OTT (over-the-top) strategy as Jio is the only telco whose OTT offerings (JioTV and JioCinema) already feature among the top 10 video streaming apps in terms of monthly active usage during the January-to-June 2017 period,” said Rajiv Sharma, HSBC director and telecoms analyst.

That emphasis on content illustrates one key facet of access provider strategy in the internet era, namely that “dumb pipe” really is the foundation of the new business, as all applications, including those competing with access provider apps, can be created independently, by third parties, without the access provider’s permission.

By packaging third party content with its access service, Reliance Jio not only differentiates its access service, but directly stimulates demand for the access and its own branded handsets.

But even that is not the key benefit. To the extent that value lies at the application layer, Jio becomes a supplier of that value (content), with direct participation in application revenues generated by that additional value.

That will be the pattern for the enterprise and business segments as well. Unless an access provider is content to eke out a living selling “pipe” (internet access and enterprise data networks with declining revenue-per-bit characteristics), it must create an additional role as a “service provider.”

We sometimes forget that voice and messaging once were services 100-percent controlled by the access provider, down to the details of what devices could be attached and used on the network. All that ends in the internet era.

Carrier voice, messaging and content subscriptions remain “services” created, sold and owned by the access provider. Internet access, though, is the first mass market “dumb pipe” offering where the value is created entirely by third parties (value is created by use of all the apps accessible on the internet; the internet access itself truly is a dumb pipe).

That will become increasingly clear as the era of pervasive computing emerges, enabled by 5G and “internet of things” networks. Most of the revenue and most of the value will be created by the IoT platforms, devices and apps, not the access. That will be a key problem for most access providers, as creating new application revenues still seems to be quite difficult for most of them.

To participate in the upside, access providers will have to seek roles beyond mere supply of connections for IoT devices. Even NTT, long a leader in creating new application-based revenue streams, still earns the overwhelming share of total revenues from access services (if you will pardon the lumping of mobile and fixed subscriptions in the “access” category).

According to Ovum, access providers will grow their revenues from managed global services to enterprise customers to at least US$297 billion by 2020.

The biggest contribution will come from new strategic ICT service revenues at nearly US$173 billion, which will increase at a compound annual growth rate of 9.9 percent over the period 2015 to 2020, Ovum says.

Strategic ICT services include business IT and IP applications, compute and hosting, enterprise mobility, managed networks, professional services, and unified communications.
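
As a sanity check on the Ovum figures, US$173 billion in 2020 growing at a 9.9 percent compound annual rate from 2015 implies a base of roughly US$108 billion:

```python
# Implied 2015 base for the strategic ICT services figure Ovum cites.
target_2020_bn = 173.0
cagr = 0.099
years = 5  # 2015 through 2020

implied_2015_bn = target_2020_bn / (1 + cagr) ** years
print(f"implied 2015 revenue: US${implied_2015_bn:.0f} billion")  # ~108
```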

Monday, July 31, 2017

Verizon Makes Huge Innovation in Fiber-to-X Network Designs

Mass market optical fiber designs do not change radically very often in the U.S. market. Over three to four decades, we have settled into some clear design buckets, including the cable TV hybrid fiber coax network, fiber to the home (FTTH) and fiber “to the neighborhood.”

There has been a shift from active to passive designs for FTTH, but the fundamental choices have been fairly well understood for some time.

But give Verizon credit for making a huge innovation that recasts the whole "fiber to the premises" issue. You might argue the change separates the entire issue of drop (access) media from the issue of how to build trunking networks.

The new fiber-deep design essentially builds a multipurpose optical distribution network (trunking network) and leaves the actual drop media decision for later (supporting either optical access for business or wireless access).

Verizon now is building the fiber-deep trunking network in Boston, and likely will follow in other areas.

We do not yet have a well-understood and generally-accepted moniker for the design, which basically installs cables with huge numbers of fibers, virtually ubiquitously (to locations representing about every light pole, in principle).

“The architecture that we're building in Boston, and now in other cities around, is a multi-use” fiber-deep design, Verizon says, where “every light post becomes a potential small cell for 5G.”

That same network is designed to have enough fibers to handle enterprise connections, small business and also serve as the small cell foundation for mobile and fixed consumer access.

If Verizon is correct, then the economics of gigabit internet access for consumers will change significantly, not least because the optical fiber distribution cost is partially defrayed by revenues earned by serving enterprise and business customers, as well as the mobile small cell network.

The drops for gigabit consumer services then will be fixed wireless, using unlicensed or lightly-licensed spectrum. The implications for consumer gigabit access could be huge.

It remains to be seen whether the actual cost of a fixed wireless connection using 28 GHz and 39 GHz assets will be “minuscule,” as Verizon executives have suggested. But Verizon already believes it can deliver gigabit speeds at distances of perhaps 1,000 feet or so.

That is important, since street lights are spaced from 100 feet (30.5 meters) to 400 feet (122 meters) apart on local roads. In principle, putting radios on every other light pole would space radios about 200 feet to 800 feet apart, requiring a coverage radius of roughly 100 feet to 400 feet, well within tested propagation ranges. Putting radios on every light pole would shrink the required radius to about 50 feet to 200 feet, and allow for more path diversity, in case of obstructions.

If, as some others expect, millimeter wave small cells have a transmission radius of about 50 meters (165 feet) to 200 meters (about an eighth of a mile), it is easy to predict that an unusually-dense backhaul network easily can support radio drops from small cell networks that could number as many as 100,000 in an area such as Manhattan.
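
For a sense of scale, a coverage-only estimate using those radii, and assuming Manhattan's roughly 59 square kilometers of land area, yields far fewer than 100,000 cells; capacity-driven densification, not coverage alone, is what would push counts that high:

```python
# Rough coverage-only lower bound on small-cell counts. The 59 km^2 land area
# is an assumption; real deployments densify further for capacity.
import math

area_km2 = 59.0
for radius_m in (50, 200):
    cell_km2 = math.pi * (radius_m / 1000) ** 2
    print(f"radius {radius_m} m: ~{area_km2 / cell_km2:,.0f} cells for coverage")
```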

That fiber to the light pole network would be the first major innovation in fiber access networks for decades.

Boot Camp for a Radically-Different World

The non-profit PTC has for some time held training events for mid-career professionals. This year, for the first time, PTC has organized a week-long program of value to promising members of regulatory and policy organization staffs.

At the Industry Transformation Boot Camp (including Spectrum Futures and PTC Academy), students will learn:

  • Strategy for a business consolidating from 810 service providers to 105, in 10 years
  • What drives the change
  • What industry structure will emerge
  • How revenue will be earned
  • How 5G sets the stage
  • Who wins, who loses, as part of the change
  • How to prepare for the changes


The educational event earns students a certificate of completion, and also immerses them in tutorials and case study exercises preparing them for the most-rapid transformation of the telecom industry in half a century.

The week-long training event, including the Spectrum Futures and PTC Academy curricula, is designed especially to train top-level and mid-career staff on what is coming and why.

Full-week discounts are available, especially for organizations that may wish to send several staff members.

Email spectrumfutures@ptc.org to discuss the Boot Camp program.

If I Do Not Buy a Tesla, Is that a Supply Problem?

If a particular product is widely available, and yet consumers do not want to buy it, is that a market failure, or simply a reflection of consumer choice? That is among the potential issues a forthcoming report on U.K. internet access might raise when it is released.

Initial reports suggest the report will show a wider availability gap than prior reports have suggested. That might be a methodological issue, many argue, as the report conflates availability with take rates. In other words, it mixes demand and supply metrics when it ought to measure either supply or demand, but not both, as a single measure.

That is important. To use a common example, I might choose not to buy a Tesla, even when Tesla availability is not an issue. That is not a market failure. That is a consumer choice.

In other markets, such as the United States, there is likely to be a continuing gap between locations that can buy a gigabit internet access service and accounts or locations that choose to buy.

When gigabit internet access becomes widely available, it is possible, perhaps certain, that most consumers will choose to buy some other tier of service, where available, on entirely reasonable grounds. They might simply conclude that a lower-speed option satisfies all their needs, at a lower price.

Some observers will use more-stringent criteria for evaluating adoption, such as including retail price (“affordability”) in addition to availability as a measure of “success.” There is logic to that approach, as well. Perhaps a more-refined way of making such measurements is to compare internet access price to household income, stipulating that access should not be more than X percent of such income.
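
One simple way to operationalize that affordability test, with the threshold left as a parameter since no standard value of X exists:

```python
# Hypothetical affordability check: flag service as unaffordable when its price
# exceeds a given share of household income. The default threshold is invented.
def affordable(monthly_price: float, monthly_income: float,
               max_share: float = 0.02) -> bool:
    return monthly_price <= max_share * monthly_income

print(affordable(70.0, 4000.0))   # 1.75% of income -> True
print(affordable(70.0, 2500.0))   # 2.80% of income -> False
```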

Yet others might add the presence of multiple providers in a market as a criterion for success.

The point is that there are many potential criteria for assessing the success providers have had at getting fast internet access to market. Availability matters.

But once supply is in place, demand takes over. The actual percentage buying a particular offer is less important.

In coming years that distinction is likely to matter more, as mobile internet access increasingly becomes a full substitute for fixed internet access.

Will Generative AI Follow the Development Path of the Internet?

In many ways, the development of the internet provides a model for understanding how artificial intelligence will develop and create value. ...