Tuesday, October 24, 2017

Hard to Beat Fixed Wireless for Internet Access, Some Argue

It is next to impossible to argue that fiber-to-home deployments are more affordable than fixed wireless, especially fixed wireless using unlicensed spectrum. Where the fiber-to-home distribution network might cost $600 per passing, a fixed wireless approach using unlicensed millimeter wave spectrum might cost as little as $300 per passing.

A connected fixed wireless location might cost $800, where a connected fiber-to-premises location might cost $1,800, according to Maravedis.
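The cited figures can be combined into a rough per-subscriber comparison. A minimal sketch, using the Maravedis per-passing and per-connection numbers from the text; the 40 percent take rate is an assumption for illustration only, not from the source.

```python
# Back-of-envelope comparison of fixed wireless versus fiber-to-home economics,
# using the per-passing and per-connection figures cited from Maravedis.
# The 40 percent take rate is an illustrative assumption, not a source figure.

def cost_per_subscriber(cost_per_passing, cost_per_connection, take_rate):
    """Total network cost allocated to each paying subscriber.

    Passings are built whether or not a home buys service, so their cost
    is spread over the fraction of homes that actually subscribe.
    """
    return cost_per_passing / take_rate + cost_per_connection

fiber = cost_per_subscriber(600, 1800, take_rate=0.4)    # 600/0.4 + 1800 = 3300
wireless = cost_per_subscriber(300, 800, take_rate=0.4)  # 300/0.4 + 800 = 1550

print(f"Fiber-to-home:  ${fiber:,.0f} per subscriber")
print(f"Fixed wireless: ${wireless:,.0f} per subscriber")
```

At that assumed take rate, the wireless network costs less than half as much per paying customer, which is the substance of the affordability argument.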


The same sort of economics apply for connecting multiple dwelling units, Maravedis argues.  

Construction costs account for much of the cost differential, especially when trenching is required to place new underground facilities. Maravedis argues that a fiber-to-premises approach costs between $26,500 and $300,000, assuming a distance to the building of half a mile from a trunking network optical node.

Covering the same distance to connect a building might cost $6,000 using fixed wireless and unlicensed spectrum, Maravedis argues.

There also appear to be advantages for using unlicensed spectrum and fixed wireless, rather than fiber-to-premises, for serving multi-unit dwellings.


So small and independent U.S. internet service providers could benefit from the release of 14 GHz of unlicensed spectrum, in the 57 GHz to 71 GHz frequencies, for communications purposes. By way of comparison, all licensed mobile spectrum presently available in the U.S. mobile business amounts to about 600 MHz, while all Wi-Fi spectrum represents about the same amount of capacity.
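The scale of that release is worth making concrete. A quick calculation using only the figures cited above:

```python
# Rough scale comparison of the new unlicensed millimeter wave allocation
# (57 GHz to 71 GHz) against existing licensed mobile and Wi-Fi spectrum,
# using the approximate figures cited in the text.

new_unlicensed_mhz = (71 - 57) * 1000   # 14 GHz = 14,000 MHz
licensed_mobile_mhz = 600               # all U.S. licensed mobile spectrum, roughly
wifi_mhz = 600                          # all Wi-Fi spectrum, roughly

ratio = new_unlicensed_mhz / (licensed_mobile_mhz + wifi_mhz)
print(f"New unlicensed allocation is about {ratio:.0f}x all existing mobile plus Wi-Fi")
```

By that measure, the single 57 GHz to 71 GHz release dwarfs everything in commercial use today, which is why pressure on licensed spectrum prices is a reasonable expectation.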

It would not be unreasonable to assume that a vast increase in spectrum supply--much of it offered on an unlicensed basis--will put pressure on licensed spectrum prices, in addition to enabling new competitors. That will include both “for fee” providers who take market share and the removal of some amount of potential business as enterprises and other entities build their own infrastructure.

And lots of new spectrum is coming, in the millimeter wave bands, as well as with spectrum sharing, including the 150 MHz in the new Citizens Broadband Radio Service. The Federal Communications Commission, for example, wants to release new spectrum in a number of millimeter wave bands.
How much impact new millimeter wave spectrum will have is unclear, as incumbents including AT&T, Verizon and others will be able to use fixed wireless, not just independent ISPs. What is clear is that the economics of gigabit internet access will fall.

Monday, October 23, 2017

AI Will Take Decades to Produce Clear Productivity Results

General purpose technologies (GPTs) tend to be important for economic growth because they transform how consumers and businesses do things. The issue is whether artificial intelligence is going to be a GPT.

The steam engine, electricity, the internal combustion engine, and computers are each examples of important general purpose technologies. Each increased productivity directly, but also led to important complementary innovations.

The steam engine initially was developed to pump water from coal mines. But steam power also revolutionized sailing ship propulsion, enabled railroads and increased the power of factory machinery.

Those applications then led to innovations in supply chains and mass marketing, and to the creation of standard time, which was needed to manage railroad schedules.

Some argue AI is a GPT, which means there will be significant and multiple layers of impact.

Machine learning and applied artificial intelligence already can show operational improvements in all sorts of ways. Error rates in labeling the content of photos on ImageNet, a collection of more than 10 million images, have fallen from over 30 percent in 2010 to less than five percent in 2016 and most recently as low as 2.2 percent, according to Erik Brynjolfsson, MIT Sloan School of Management professor.


Likewise, error rates in voice recognition on the Switchboard speech recording corpus, often used to measure progress in speech recognition, have improved from 8.5 percent to 5.5 percent over the past year. The five-percent threshold is important because that is roughly the performance of humans at each of these tasks, Brynjolfsson says.

A system using deep neural networks was tested against 21 board certified dermatologists and matched their performance in diagnosing skin cancer, a development with direct implications for medical diagnosis using AI systems.

On the other hand, even if AI becomes a GPT, will we be able to measure its impact? That is less clear, as it has generally proven difficult to quantify the economic impact of other GPTs, at least in year-over-year terms.

It took 25 years after the invention of the integrated circuit for U.S. computer capital stock to reach ubiquity, for example.

Likewise, at least half of U.S. manufacturing establishments remained unelectrified until 1919, about 30 years after the shift to alternating current began.

The point is that truly fundamental technologies often take decades to reach mass adoption levels.

In some cases, specific industries could see meaningful changes in as little as a decade. In 2015, there were about 2.2 million people working in more than 6,800 call centers in the United States, and hundreds of thousands more worked as home-based call center agents or in smaller sites.

Improved voice-recognition systems coupled with intelligent question-answering tools like IBM’s Watson might plausibly be able to handle 60 percent to 70 percent or more of those calls. If AI reduced the number of workers by 60 percent, it would increase U.S. labor productivity by one percent over a decade.
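The arithmetic behind that claim is simple to sketch. A minimal version of the calculation, where total U.S. employment of roughly 150 million is an assumption added for illustration:

```python
# Sketch of the call center illustration: if AI displaced 60 percent of the
# roughly 2.2 million U.S. call center workers while output stayed constant,
# how much would aggregate labor productivity rise? Total employment of
# about 150 million is an assumed round figure, not from the source.

call_center_workers = 2_200_000
displaced = call_center_workers * 0.60     # 1.32 million workers
total_employment = 150_000_000             # assumed

# Same output from fewer workers: productivity rises by roughly the displaced share.
productivity_gain = displaced / total_employment
print(f"{productivity_gain:.1%}")          # prints 0.9%, i.e. roughly one percent
```

Spread over a decade, a one-time gain of about one percent is real but modest, which is the point of the example.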

But it also is quite possible that massive investment in AI could fail to correlate with higher productivity, over a decade or so.

It might well be far too early to draw conclusions, but labor productivity growth rates in a broad swath of developed economies fell in the mid-2000s and have stayed low since then, according to Brynjolfsson.

Aggregate labor productivity growth in the United States averaged only 1.3 percent per year from 2005 to 2016, less than half of the 2.8 percent annual growth rate sustained over 1995 to 2004.
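Compounded over a decade, that gap is substantial. A quick sketch of the cumulative shortfall implied by those two growth rates:

```python
# Compounding the productivity slowdown: 1.3 percent annual growth over
# 2005-2016 versus the earlier 2.8 percent pace, applied over 12 years.

years = 12
slow = 1.013 ** years   # output per hour up roughly 17 percent
fast = 1.028 ** years   # would have been up roughly 39 percent

shortfall = 1 - slow / fast
print(f"Output per hour is about {shortfall:.0%} below the pre-2005 trend")
```

That roughly 16 percent gap is why the slowdown matters so much for the AI-and-productivity debate: it accumulated even as machine learning investment surged.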

Fully 28 of 29 other countries for which the OECD has compiled productivity growth data saw similar decelerations.

So some will reach pessimistic conclusions about the economic impact of AI, generally. To be sure, there are four principal candidate explanations for the discontinuity between advanced technology deployment and productivity increases: false hopes; mismeasurement; concentrated distribution and rent dissipation; and implementation and restructuring lags.

The first explanation, false hopes, means new technology simply will not be as transformative as expected. The second explanation is that productivity has increased, but we are not able to measure it. One obvious example: as computing devices have gotten more powerful, their cost has decreased. We cannot quantify the qualitative gains people and organizations get. We can only measure the retail prices, which are lower.

The actual use cases and benefits might come from “time saved” or “higher quality insight,” which cannot be directly quantified.

The other possible explanations are concentrated distribution (benefits are reaped by a small number of firms) and rent dissipation (where everyone investing to reap gains is inefficient, as massive amounts of investment chase incrementally smaller returns).

The final explanation is that there is a necessary lag time between disruptive technology introduction and all the other changes in business processes that allow the new technology to effectively cut costs, improve agility and create new products and business models.

Consider e-commerce, which was recognized as a major trend before 2000. In 1999, though, e-commerce’s actual share of retail commerce was trivial: 0.2 percent of all retail sales. Only now, after 18 years, have significant shares of retailing shifted to online channels.

In 2017, retail e-commerce might represent eight percent of total retail sales (excluding travel and event tickets).


Two decades; eight percent market share. Even e-commerce, as powerful a trend as any, has taken two decades to claim eight percent share of retail commerce.  

Something like that is likely to happen with artificial intelligence, as well. If AI really is a general purpose technology with huge ramifications, it will take decades for its full benefits to be seen.

It will not be enough to apply AI to “automate” existing business processes and supply chains. Those processes and supply chains have to be recrafted fundamentally to incorporate AI. Personal computers could only add so much value when they were substitutes for typewriters. They became more valuable when they could use spreadsheets to model outcomes based on varying inputs.

Computing devices arguably became more valuable still when coupled with the internet, cloud-based apps, video, rich graphics, transaction capability and a general shift to online retailing.

Sunday, October 22, 2017

Can AI Help Move Beyond "Something Happened" to "Something Happened and the Network Fixed Itself"?

To say artificial intelligence is trendy is an understatement. A growing number of consumers experience it routinely in the form of their smartphones and voice-activated assistants, and invisibly in their consumption of content.

Investments in artificial intelligence have been highest, to date, in banking, retail, healthcare and manufacturing, IDC estimates. In the communications business, AI use cases arguably have been most pronounced in smartphones, customer service automation and possibly billing.


But it is logical to ask whether AI should also come to play a role in network operations and marketing, among other basic functions of communications networks.


Can AI be used by networks to make decisions based on customer activities or location? And does that create incremental value and revenue opportunities?


Can AI help network supervisors move beyond “I know what happened” to “I know what will happen” to “something happened and the network fixed itself?” That is not a terribly new idea, as the notion of “self-healing” networks has been around for some time in the form of ring networks that switch to backup facilities in the event of a primary ring failure.


The promise of AI is the ability to extend self-healing to more parts of the network and its functions.


On the operations side, that might take the form of informing the creation and tear-down of actual connections and features, and the provisioning and monitoring of actual network requirements, internally and on behalf of customers. On the marketing side, it might take the form of such important insights as figuring out which customers are about to churn, and then matching new offers to them to address the churn drivers.


In other cases, AI arguably should help determine which customers, devices and services need upgraded features, then query those customers about the upgrades, without human intervention.


In other cases, AI should help inform service providers about which customers need additional products and which solutions are appropriate, then pitch and provision them without human intervention.


AI should play a role in security as well, but the broader issue is how many mundane, necessary activities could be enhanced by AI in ways that not only reduce costs and waste, but also allow the network to learn to operate more effectively. Right now, almost nothing can be done autonomously.
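The simplest form of that autonomy is the failover logic ring networks have long embodied: detect an abnormal condition, switch to a backup, no human in the loop. A minimal caricature of that decision, where all names and thresholds are hypothetical illustrations, not any vendor's API:

```python
# A minimal caricature of the "something happened and the network fixed itself"
# loop: detect an anomaly on a primary path and switch traffic to a backup,
# as ring networks have long done. Function name and thresholds are
# hypothetical illustrations only.

def select_path(primary_loss, backup_loss, loss_threshold=0.05):
    """Self-healing decision: fail over when primary packet loss is abnormal
    and the backup path is healthy; otherwise stay on the primary."""
    if primary_loss > loss_threshold and backup_loss <= loss_threshold:
        return "backup"
    return "primary"

print(select_path(primary_loss=0.20, backup_loss=0.01))  # -> backup
print(select_path(primary_loss=0.00, backup_loss=0.01))  # -> primary
```

The promise of AI is to replace the fixed threshold with learned models of normal behavior, extending this kind of automatic remediation to far more of the network's functions.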


Ideally, AI would uncover new needs that the network actually can create and then deliver.


In other words, AI should help service providers with the long-held goal of virtualizing the network and enabling instant changes.


In its data centers, Google has used DeepMind to reduce energy consumption by 40 percent. Similar benefits should be wrung from AI as applied to the operations of networks and the creation and marketing of their services. The issue is how much more self-optimization is possible.

Given the need to continue reducing network operations costs, the use of AI would seem an almost-inevitable outcome.

Friday, October 20, 2017

How Much Can a Telco Afford to Invest in Faster Internet Access?

How much should any tier-one service provider invest in its internet access capabilities?

Much depends on the market dynamics: whether that firm’s role is wholesale-only; wholesale and retail; or retail only or mostly.

But in every case, the fundamentally-sound position is to invest only to the point that an adequate return on capital can be made. The level of return might be dictated wholly, or in part, by a government entity that caps the rate of return. In other markets the rate of return is limited by the amount of competition and risk of stranded assets.

In the U.S. market, some are not optimistic. Jonathan Chaplin, New Street Research equity analyst, believes cable companies could have 72 percent market share in 2020, with as much as 78 percent share of the internet access market.

Some might argue, given such trends, that telcos should simply harvest their internet access customer base. Of course, such forecasts likely include an assumption that telcos must either upgrade to fiber to home or stay with copper access of some sort, and also assume that, for capital availability reasons, the upgrades will occur relatively slowly.

The other assumption is that “telcos” are not the same as “AT&T and Verizon,” which actually are seeing very-modest declines in internet access share, with most of the losses coming from other telcos, especially the large former-rural-carrier ranks (CenturyLink, Windstream, Frontier).

AT&T and Verizon have other options, including fiber-to-home, mobile substitution and fixed wireless access options that will improve dramatically with 5G. In fact, in most cases, AT&T and Verizon are likely to find the business case for mobile or fixed wireless much more compelling.


The point is that a service provider has to invest enough in its internet access capabilities to remain competitive in the market, but not more than that level. There are, in some markets, good reasons why the upside is limited.

Consider the U.S. market, where a cable operator is the market share leader, approaching 60 percent share in most instances. That leaves a bit less than 40 percent share for the local telco.

Ignore for the moment the growing cable share, a situation many would argue exists since some key telco providers rely mostly on less-capable digital subscriber line platforms. In the second quarter of 2017, for example, internet access account losses by AT&T and Verizon were infinitesimal (on the level of hundredths of a percent of their installed base).

The logical investment criterion should then be, at a minimum, whatever is necessary to hold 40 percent market share.

The “maximum” position is a bit less clear, namely the level of investment that could allow either firm to take market share away from competitors. It is not so clear that taking share is possible, no matter what the level of investment, some might argue. Others might argue that this is possible, if mobile and fixed wireless offers can be used to create a superior value proposition, compared to cable.

Ironically, that might be especially true if cable companies start to raise prices as much as double current rates. That would create a higher pricing umbrella underneath which telco offers could operate.

The “take share” position is complicated, as the value proposition includes a range of value (types and quality of services, plus price, plus bundling effects, plus threat of new entrants). The “hold share” position is easier, as it mostly involves offering packages that are roughly competitive with what cable offers, in terms of speed, price, value and role in bundles.

The point is that some telcos might not be able to do much to prevent lost market share. AT&T and Verizon have other options, based on their coming 5G profiles.  Even in its FiOS areas, Verizon tends to get only about 40 percent share. Perhaps that is as good as it gets.

"Insight" is the Outcome AI Delivers

All of us have heard the phrases “data-driven business” and “digital transformation” as hallmarks of the way firms will have to evolve.


Add “insights-driven” to that list. Though we are in the early days, that phrase is supposed to refer to the way firms mine the data they own to develop insights about customer behavior that can, in turn, be used to drive sales, retention and profit margins.




“Insight” is another way of saying “knowledge” or “understanding” about actual patterns in customer and prospect behavior, with the ability to apply such understanding to actual product features, processes and delivery, in a predictive way.


And without belaboring the point, such insights, the result of data mining using artificial intelligence or machine learning, already have been deployed in some business processes such as customized content, search, customer service operations and e-commerce.


Some firms have an advantage, though. Etsy, the e-commerce site, created a “dedicated research department to blend quantitative and qualitative insight and embed customer insights into every department, leading to high levels of user satisfaction and smarter product decisions,” says Brian Hopkins, Forrester VP. “Insights-driven businesses not only excel at data analytics, but also bring quantitative insight to bear on problems, then embed insights in their business models, operations, and company culture.”

Tesla auto performance data likewise is streamed in real time to Tesla’s data scientists, who build models to help diagnose driving issues and provide software or firmware updates over the air.

Thursday, October 19, 2017

One Interesting Factoid from Verizon's 3Q 2017 Report

Just one interesting observation from Verizon’s third-quarter earnings report, which probably was better than most had expected. Note just one indicator: voice connections, which shrank seven percent, year over year. At that rate, Verizon would lose half its voice revenue in a decade.
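The "half in a decade" figure follows directly from compounding the decline. A quick check, assuming the seven percent annual rate holds and voice revenue tracks connections:

```python
# Checking the "half its voice revenue in a decade" claim implied by a
# seven percent annual decline in voice connections, assuming the decline
# rate holds and revenue tracks connections one-for-one.

annual_decline = 0.07
remaining = (1 - annual_decline) ** 10   # 0.93 compounded over ten years

print(f"{remaining:.0%} of voice connections remain after 10 years")  # prints 48%
```

So roughly 52 percent of today's voice connections would be gone in ten years, which is the basis for the replace-half-your-revenue argument.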

That is one illustration of the argument that tier-one service providers must replace half their current revenue every decade.


Despite the shift to unlimited plans and heightened competition in the mobile services market, Verizon managed to add a net 603,000 mobile connections, 486,000 of those being the highly-regarded postpaid accounts.

Operating revenues also were up, year over year. Even Verizon’s wireless segment posted higher revenue, year over year.



Wednesday, October 18, 2017

Massive MIMO Deployments are Inevitable

It is not hard to predict that use of massive multiple input-multiple output radio technologies is going to grow, as advanced 4G and 5G networks are built. Massive MIMO is required to make use of vast new spectrum resources to be released in the millimeter wave region to support 5G.

In fact, massive MIMO is intrinsically related to use of small cells, ultra-dense cell networks and millimeter wave frequencies.

Massive MIMO trials or limited deployments in 2017 were undertaken by Sprint, Deutsche Telekom, China Mobile, China Telecom, China Unicom, Singtel, T-Mobile Netherlands, Vodafone Australia, Optus, and Telefónica. Massive MIMO also is being developed by Telecom Infra, the open source telecom infrastructure effort.

The spectrum bands at which many of these trials have taken place include 2.5 GHz, 2.6 GHz, 3.5 GHz, 1.8 GHz, and 2.3 GHz. Except for 3.5 GHz, the remaining frequencies are also allocated for LTE in many countries. Telecom Infra is testing much-higher frequencies (60 GHz), designed in the U.S. market to use unlicensed spectrum.

MIMO antenna technology has been in use since the launch of 802.11n Wi-Fi systems, but was first ratified for use in cellular systems in 3GPP’s Release 7 in 2008.

Deployments below 1 GHz are most likely to support eight or 16 antenna elements at most. Very-high frequencies above 30 GHz can have hundreds of antenna elements with some research citing below 500 antennas as an upper limit.

Directv-Dish Merger Fails

Directv’s termination of its deal to merge with EchoStar, apparently because EchoStar bondholders did not approve, means EchoStar continue...