Friday, October 27, 2017

"Up the Stack" Strategy Has Changed Since 2003

A lot has changed in the telecom business over the last decade or two, including the logical growth opportunities for service providers. In 2003, a reasonable argument could have been made that telecom firms should move into the information technology (computing) ecosystem. That, in fact, was a strategy some had attempted since the 1980s, with very mixed success.

In 2017, the more common argument is that tier-one service providers should move into video entertainment and internet of things services (connectivity as a minimum, applications and platforms where possible).

These days, executives are more likely to consider extension into adjacencies “up the stack,” such as verticals where application or platform roles are possible. More often, though, horizontal acquisitions will be the obvious moves.

“A consensus is emerging that operators should focus on growth that supports their core connectivity business, and that their explorations of new areas (if any) should be limited to a small number of opportunities,” Analysys Mason says.

Gone is any serious interest in core computing, even if efforts to enter some parts of the cloud computing business--in the form of ownership of data centers--have been a focus.

For the most part, telcos think others have occupied key segments of the cloud computing ecosystem, including general-purpose cloud computing (Amazon Web Services, Microsoft, Google, others); data centers (most often owned by third parties, not telcos); and consumer and enterprise applications.


U.S. Federal Expectations for Hybrid Cloud Are in Line With Global Expectations

A new survey of U.S. federal government information technology executives finds most inclined to favor a hybrid cloud strategy for their computing requirements.

Federal IT managers say their ideal mix includes 39 percent physical servers and 61 percent cloud. Some 70 percent of the 150 respondents believe that in 10 years, the majority of federal agencies will rely on hybrid cloud environments for core applications, says MeriTalk, the firm that conducted the survey on behalf of Fortinet.

That probably is in line with what IT executives globally expect. According to IDC, by 2021, about 40 percent of computing will be done using on-premises data centers, with about 60 percent done in the cloud.



Thursday, October 26, 2017

Service Providers Seem to See Little Growth in Connectivity Services

Innovation in the access provider business that is significant enough to move the revenue needle never is easy. For the largest tier-one service providers, any single initiative--to have revenue impact--has to produce US$1 billion or more, ideally, and do so quickly.

The world's 25 biggest telecom companies generated $1.2 trillion in revenues in 2016 and $88 billion in profits. For any single firm generating revenue of perhaps $100 billion annually, even an increase of $1 billion to several billion hardly has any impact on overall results.

Also, few opportunities will produce that magnitude of incremental revenue, and fewer still will do so quickly enough to boost the top and bottom lines.

Smaller companies have a different problem: they cannot afford to invest--at relevant scale--to take advantage of many opportunities elsewhere in the ecosystem, even if relatively smaller new revenue sources could have impact.  

“A consensus is emerging that operators should focus on growth that supports their core connectivity business, and that their explorations of new areas (if any) should be limited to a small number of opportunities,” Analysys Mason says.


The problem, as outlined by Analysys Mason, is that of the four options for growth, half are largely unrealistic. Growth from adding subscribers, from increasing average revenue per user, or even from adding internet of things connections is going to be difficult.

In many markets, subscriber growth already is impossible (except for taking market share). In many markets, ARPU is falling. And it is not hard to forecast that most of the upside from internet of things, as with all other communications-related opportunities, will accrue to application, platform or device suppliers.

Wednesday, October 25, 2017

FTTH Is Not the Only Way to Future Proof a Network, Anymore

The 5G era might be the first to dethrone thinking about the “best” or “only” way to build future-proof fixed access networks. For many decades, the thinking has been that only optical fiber to the premises could do so.

In a strict sense, that thinking has changed over the last decade, as rival hybrid fiber coax networks deployed by cable operators have shown that gigabit networks can be built, affordably and now, using HFC.

But bigger changes are coming. In the 5G era, mobile access might become a full substitute for fixed access, at least for many customers. And 5G-based fixed access will be a full substitute for other forms of fixed network access, especially optical fiber to the premises.

It is next to impossible to argue that fiber-to-home deployments are more affordable than fixed wireless, especially fixed wireless using unlicensed spectrum. Where the fiber-to-home distribution network might cost $600 per passing, a fixed wireless approach using unlicensed millimeter wave spectrum might cost as little as $300 per passing.

A connected fixed wireless location might cost $800, where a connected fiber to premises connection might cost $1,800, according to Maravedis.



Broadly speaking, it has made sense--for cable TV or telco networks--to deploy fiber as deep into the network as the business model will support. “Fiber to where you make money” is one colloquial way of describing the strategy.

“The strategy of deploying fiber to the most economical point in the network is still valid, but the combination of fixed fiber, wireless and other access technologies is now even more crucial to the operator’s business case,” said Federico Guillén, Nokia’s president of Fixed Networks.

We will also see a combination of fiber and fixed wireless access to deliver ultra-broadband to the home, he argued.

In other words, the strategy now is how to create gigabit access networks that are profitable, not the choice of access media.

Cisco, Google Partner for Hybrid Cloud

Cisco and Google have partnered to sell hybrid cloud solutions combining Google cloud services and Cisco on-premises computing solutions. The partnership will allow enterprise applications and services to be deployed, managed and secured across on-premises environments and Google Cloud Platform.

The hybrid cloud market will grow from US$33.28 billion in 2016 to US$91.74 billion by 2021, a compound annual growth rate of 22.5 percent, according to MarketsandMarkets.
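That forecast is internally consistent: compounding the 2016 base at 22.5 percent annually for five years lands very close to the 2021 figure. A quick sanity check in Python (the dollar figures are from the MarketsandMarkets forecast cited above; the script is just illustrative arithmetic):

```python
# Sanity check on the MarketsandMarkets forecast: does 22.5% CAGR
# grow $33.28 billion (2016) to roughly $91.74 billion (2021)?
start_value = 33.28   # USD billions, 2016
end_value = 91.74     # USD billions, 2021 forecast
years = 2021 - 2016   # five compounding periods

implied_cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {implied_cagr:.1%}")               # ~22.5%

projected_2021 = start_value * 1.225 ** years
print(f"Projected 2021 market: ${projected_2021:.2f}B")  # ~$91.80B
```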



Tuesday, October 24, 2017

Hard to Beat Fixed Wireless for Internet Access, Some Argue

It is next to impossible to argue that fiber-to-home deployments are more affordable than fixed wireless, especially fixed wireless using unlicensed spectrum. Where the fiber-to-home distribution network might cost $600 per passing, a fixed wireless approach using unlicensed millimeter wave spectrum might cost as little as $300 per passing.

A connected fixed wireless location might cost $800, where a connected fiber to premises connection might cost $1,800, according to Maravedis.


The same sort of economics apply for connecting multiple dwelling units, Maravedis argues.  

Construction costs account for much of the cost differential, especially when trenching is required to place new underground facilities. Maravedis argues that a fiber-to-premises approach costs between $26,500 and $300,000, assuming a distance to the building of half a mile from a trunking network optical node.

Covering the same distance to connect a building might cost $6,000 using fixed wireless and unlicensed spectrum, Maravedis argues.

There also appear to be advantages to using unlicensed spectrum and fixed wireless, rather than fiber-to-premises, for serving multi-unit dwellings.
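The per-subscriber arithmetic behind these comparisons is worth making explicit. Below is a minimal cost model using the Maravedis per-passing and per-connection figures quoted above; the 40 percent take rate is an illustrative assumption of mine, not from the report, and the model treats the connected-location figures as incremental drop costs on top of the per-passing cost.

```python
# Minimal access-network cost model using the Maravedis figures cited above.
# Assumptions (mine, not Maravedis'): a 40% take rate, and connected-location
# costs treated as incremental drop costs on top of the per-passing cost.

def cost_per_subscriber(cost_per_passing: float,
                        cost_per_connection: float,
                        take_rate: float) -> float:
    """Network cost per paying subscriber: every home is passed,
    but only the take-rate fraction of passings generate revenue."""
    return cost_per_passing / take_rate + cost_per_connection

TAKE_RATE = 0.40  # assumed fraction of passed homes that subscribe

ftth = cost_per_subscriber(600, 1_800, TAKE_RATE)  # fiber to the home
fw = cost_per_subscriber(300, 800, TAKE_RATE)      # unlicensed mmWave fixed wireless

print(f"FTTH cost per subscriber:           ${ftth:,.0f}")  # $3,300
print(f"Fixed wireless cost per subscriber: ${fw:,.0f}")    # $1,550
```

At any plausible take rate, the per-passing gap compounds the per-connection gap, which is why the fixed wireless case looks so strong.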


So small and independent U.S. internet service providers could benefit from the release of 14 GHz of unlicensed spectrum, in the 57 GHz to 71 GHz frequencies, for communications purposes. By way of comparison, all licensed mobile spectrum presently available in the U.S. mobile business amounts to about 600 MHz--roughly a twentieth of the new unlicensed allocation--while all Wi-Fi spectrum represents about the same amount of capacity.

It would not be unreasonable to assume that a vast increase in spectrum supply--much of it offered on a non-licensed basis--will put pressure on licensed spectrum prices, in addition to enabling new competitors. That will include both “for fee” providers who take market share and the removal of some amount of potential business as enterprises and other entities build their own infrastructure.

And lots of new spectrum is coming, in the millimeter wave bands, as well as with spectrum sharing, including the 150 MHz in the new Citizens Broadband Radio Service. The Federal Communications Commission, for example, wants to release new spectrum in a number of millimeter wave bands.
How much impact new millimeter wave spectrum will have is unclear, as incumbents including AT&T, Verizon and others will be able to use fixed wireless, not just independent ISPs. What is clear is that the cost of gigabit internet access will fall.

Monday, October 23, 2017

AI Will Take Decades to Produce Clear Productivity Results

General purpose technologies (GPTs) tend to be important for economic growth because they transform how consumers and businesses do things. The issue is whether artificial intelligence is going to be a GPT.

The steam engine, electricity, the internal combustion engine, and computers are each examples of important general purpose technologies. Each increased productivity directly, but also led to important complementary innovations.

The steam engine initially was developed to pump water from coal mines. But steam power also revolutionized ship propulsion, enabled railroads and increased the power of factory machinery.

Those applications then led to innovations in supply chains and mass marketing, and to the creation of standard time, which was needed to manage railroad schedules.

Some argue AI is a GPT, which would mean multiple, significant layers of impact.

Machine learning and applied artificial intelligence already can show operational improvements in all sorts of ways. Error rates in labeling the content of photos on ImageNet, a collection of more than 10 million images, have fallen from over 30 percent in 2010 to less than five percent in 2016 and most recently as low as 2.2 percent, according to Erik Brynjolfsson, MIT Sloan School of Management professor.


Likewise, error rates in voice recognition on the Switchboard speech recording corpus, often used to measure progress in speech recognition, have improved from 8.5 percent to 5.5 percent over the past year. The five-percent threshold is important because that is roughly the performance of humans at each of these tasks, Brynjolfsson says.

A system using deep neural networks was tested against 21 board certified dermatologists and matched their performance in diagnosing skin cancer, a development with direct implications for medical diagnosis using AI systems.

On the other hand, even if AI becomes a GPT, will we be able to measure its impact? That is less clear, as it has generally proven difficult to quantify the economic impact of other GPTs, at least in year-over-year terms.

It took 25 years after the invention of the integrated circuit for U.S. computer capital stock to reach ubiquity, for example.

Likewise, at least half of U.S. manufacturing establishments remained unelectrified until 1919, about 30 years after the shift to alternating current began.

The point is that really fundamental technologies often take decades to reach mass adoption levels.

In some cases, specific industries could see meaningful changes in as little as a decade. In 2015, there were about 2.2 million people working in over 6,800 call centers in the United States, and hundreds of thousands more worked as home-based call center agents or in smaller sites.

Improved voice-recognition systems coupled with intelligent question-answering tools like IBM’s Watson might plausibly be able to handle 60 percent to 70 percent or more of those calls. If AI reduced the number of workers by 60 percent, it would increase U.S. labor productivity by one percent over a decade.
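The rough arithmetic behind that one-percent figure can be reconstructed. A sketch (the 150-million U.S. employment base is my round-number assumption, not Brynjolfsson’s):

```python
# Rough reconstruction of the "one percent over a decade" productivity claim.
# The ~150 million U.S. employment base is a round-number assumption of mine.
call_center_workers = 2_200_000  # U.S. call center employment, 2015 (cited above)
automatable_share = 0.60         # share of calls AI might plausibly handle
us_employment = 150_000_000      # assumed total U.S. employment

jobs_displaced = call_center_workers * automatable_share  # ~1.32 million
# Holding output constant, fewer workers means higher output per worker.
productivity_gain = jobs_displaced / (us_employment - jobs_displaced)
print(f"One-time productivity gain: {productivity_gain:.2%}")    # ~0.9%, roughly 1%
print(f"Spread over a decade: {productivity_gain / 10:.3%}/year")
```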

But it also is quite possible that massive investment in AI could fail to find correlation with higher productivity, over a decade or so.

It might well be far too early to draw conclusions, but labor productivity growth rates in a broad swath of developed economies fell in the mid-2000s and have stayed low since then, according to Brynjolfsson.

Aggregate labor productivity growth in the United States averaged only 1.3 percent per year from 2005 to 2016, less than half of the 2.8 percent annual growth rate sustained over 1995 to 2004.
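Compounded over a dozen years, that gap is substantial, which is why the slowdown is treated as a puzzle rather than noise. A quick illustration:

```python
# Cumulative effect of the productivity slowdown: 1.3%/year (2005-2016)
# versus the prior 2.8%/year trend (1995-2004), compounded over ~12 years.
years = 12

actual = 1.013 ** years  # ~1.17, i.e. ~17% cumulative growth
trend = 1.028 ** years   # ~1.39, i.e. ~39% cumulative growth

print(f"Actual cumulative growth:    {actual - 1:.1%}")
print(f"At the earlier trend rate:   {trend - 1:.1%}")
print(f"Output-per-worker shortfall: {trend / actual - 1:.1%}")  # ~19%
```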

Fully 28 of 29 other countries for which the OECD has compiled productivity growth data saw similar decelerations.

So some will reach pessimistic conclusions about the economic impact of AI, generally. To be sure, there are four principal candidate explanations for the discontinuity between advanced technology deployment and productivity increases: false hopes; mismeasurement; concentrated distribution and rent dissipation; and implementation and restructuring lags.

The first explanation, false hopes, means new technology simply will not be as transformative as expected. The second explanation is that productivity has increased, but we are not able to measure it. One obvious example: as computing devices have gotten more powerful, their cost has decreased. We cannot quantify the qualitative gains people and organizations reap; we can only measure the retail prices, which are lower.

The actual use cases and benefits might come from “time saved” or “higher quality insight,” which cannot be directly quantified.

Two other possible explanations are concentrated distribution (benefits are reaped by a small number of firms) and rent dissipation (where everyone investing to reap gains is inefficient, as massive amounts of investment chase incrementally smaller returns).

The final explanation is that there is a necessary lag time between disruptive technology introduction and all the other changes in business processes that allow the new technology to effectively cut costs, improve agility and create new products and business models.

Consider e-commerce, which was recognized as a major trend before 2000. In 1999, though, e-commerce’s actual share of retail commerce was trivial: 0.2 percent of all retail sales. Only now, after 18 years, have significant shares of retailing shifted to online channels.

In 2017, retail e-commerce might represent eight percent of total retail sales (excluding travel and event tickets).


Two decades; eight percent market share. Even e-commerce, as powerful a trend as any, has taken two decades to claim eight percent share of retail commerce.  
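The implied pace of that shift is easy to compute: going from 0.2 percent of retail in 1999 to roughly eight percent in 2017 works out to about 23 percent compound annual growth in share--fast, yet still two decades to reach single digits. A quick check:

```python
# Implied compound annual growth of e-commerce's share of U.S. retail.
share_1999 = 0.002   # 0.2% of retail sales
share_2017 = 0.08    # ~8% of retail sales
years = 2017 - 1999  # 18 years

cagr = (share_2017 / share_1999) ** (1 / years) - 1
print(f"Implied CAGR of share: {cagr:.1%}")  # ~22.7% per year
```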

Something like that is likely to happen with artificial intelligence, as well. If AI really is a general purpose technology with huge ramifications, it will take decades for its full benefits to be seen.

It will not be enough to apply AI to “automate” existing business processes and supply chains. Those processes and supply chains have to be recrafted fundamentally to incorporate AI. Personal computers could only add so much value when they were substitutes for typewriters. They became more valuable when they could run spreadsheets to model outcomes based on varying inputs.

Computing devices arguably became more valuable still when coupled with the internet, cloud-based apps, video, rich graphics, transaction capability and a general shift to online retailing.

Cost of Creating Machine Learning Models Is Up Sharply

With the caveat that we must be careful about making linear extrapolations into the future, training costs of state-of-the-art AI models hav...