Wednesday, June 7, 2017

Why AI Investment is Going to (Initially) Disappoint

Despite the promise of big data, industrial enterprises are struggling to maximize its value.  A survey conducted by IDG showed that “extracting business value from that data is the biggest challenge the Industrial IoT presents.”

Why? Abundant data by itself solves nothing, says Jeremiah Stone, GM of Asset Performance Management at GE Digital.

Its unstructured nature, sheer volume, and variety exceed the capacity of humans and traditional tools to organize it efficiently, and at a cost that supports return-on-investment requirements, he argues.

At least so far, firms "rarely" have had clear success with big data or artificial intelligence projects. "Only 15 percent of surveyed businesses report deploying big data projects to production," says Gartner analyst Merv Adrian.

We should not be surprised. Big waves of information technology investment have in the past taken quite some time to show up in the form of measurable productivity increases.

In fact, there was a clear productivity paradox when enterprises began to spend heavily on information technology in the 1980s.

“From 1978 through 1982 U.S. manufacturing productivity was essentially flat,” said Wickham Skinner, writing in the Harvard Business Review.

In fact, researchers have created a hypothesis about the application of IT for productivity: the Solow computer paradox. Yes, paradox.

Here’s the problem: the rule suggests that as more investment is made in information technology, worker productivity may go down instead of up.

Empirical evidence from the 1970s to the early 1990s fits the hypothesis.  

Before investment in IT became widespread, the expected return on investment in terms of productivity was three percent to four percent, in line with what was seen in mechanization and automation of the farm and factory sectors.

When IT was applied over two decades from 1970 to 1990, the normal return on investment was only one percent.

This productivity paradox is not new. Information technology investments did not measurably help improve white collar job productivity for decades. In fact, it can be argued that researchers have failed to measure any improvement in productivity. So some might argue nearly all the investment has been wasted.

Some now argue there is a lag between the massive introduction of new information technology and measurable productivity results, and that this lag might conceivably run a decade or two.

The problem is that this is far outside the window for meaningful payback metrics conducted by virtually any private sector organization. That might suggest we inevitably will see disillusionment with the results of artificial intelligence investment.

One also can predict that many promising firms with good technology will fail to reach sustainability before they are acquired by bigger firms able to sustain the long wait for a payoff.

So it would be premature to say too much about when we will see the actual impact of widespread application of artificial intelligence to business processes. It is possible to predict, though, that, as was the case for earlier waves of IT investment, simply automating existing processes will not be enough.

Organizations have to recraft and create brand new business processes before the IT investment actually yields results.

One easily-missed idea is that productivity advances actually hinge on “human” processes.

Skinner argues that there is a “40-40-20” rule when it comes to measurable benefits. Roughly 40 percent of any manufacturing-based competitive advantage derives from long-term changes in manufacturing structure (decisions about the number, size, location, and capacity of facilities) and basic approaches in materials and workforce management.

Another 40 percent of improvement comes from major changes in equipment and process technology.

The final 20 percent of gain is produced by conventional approaches to productivity improvement (substituting capital for labor).

In other words, and colloquially, firms cannot “cut their way to success.” Quality, reliable delivery, short lead times, customer service, rapid product introduction, flexible capacity, and efficient capital deployment arguably were sources of business advantage in earlier waves of IT investment.

But the search for those values, not cost reduction, was the primary source of advantage. The next wave will be the production of insights from huge amounts of unstructured data: accurate predictions about when to conduct maintenance on machines, how to direct flows of people, vehicles, materials and goods, when medical attention is needed, and what goods to stock, market and promote, and when.

Of course, there is another thesis about the productivity paradox. Perhaps we do not know how to quantify quality improvements wrought by application of the technology. The classic example is computers that cost about the same as they used to, but are orders of magnitude more powerful.

Even if true, it is not so helpful that we cannot measure, in some agreed-upon way, quality improvements that produce far better products sold at the same or lower cost. Economies based on services have an even worse problem, since services productivity is notoriously hard to quantify.

The bad news is that disappointment over the value of AI investments will inevitably result in disillusionment. And that condition might exist for quite some time, until most larger organizations have been able to recraft their processes in a way that builds directly on AI.


Tuesday, June 6, 2017

CBRS Neutral Host Precedent: Enterprise Wi-Fi

There is a model for neutral host deployments using the Citizens Broadband Radio Service "shared spectrum" approach: enterprise Wi-Fi. 

5G for Enterprise

The thing about 5G is the wide range of potential use cases, from consumer mobile internet access to enterprise private networks, new forms of indoor "neutral host" facilities and many new "narrowband" applications where bandwidths under 1.5 Mbps are the goal. 

Since 5G will support virtualized networks very much like virtual private networks, ideally as a native feature, there should be lots of room to create optimized enterprise networks. 

"All of the Above" Tells Us Very Little About Future Revenue Sources

When a new market is developing, and the answer to the question “where will the growth happen?” is “all the above,” we know we essentially have no idea what will happen. And that seems to be the case for the Citizens Broadband Radio Service, which will support about 150 MHz of new spectrum in the 3.5 GHz range that can be used for mobile internet access, enterprise networks or other purposes we have not yet explored fully.

To a large extent, we might say the same about prospects for 5G, where “everyone” expects applications in the consumer smartphone internet access area, but also in the fixed internet access, internet of things and low-latency applications areas.

Still, nobody can be too sure which of those use cases will be dominant, and when, though given the size of existing mobile internet access markets, 5G might well see the greater part of revenues in the “enhanced mobile broadband” area.

On the other hand, the most-significant new development could be huge new markets for pervasive computing (internet of things), representing billions of new connections.

It remains unclear how use cases and revenue models will develop for shared spectrum platforms such as the Citizens Broadband Radio Service. At the moment, the conventional wisdom is that there will be opportunities in consumer and business markets, as primary or secondary access mechanisms, as indoor or outdoor networks.

CBRS obviously could be used to support consumer or business internet access, mobile or fixed wireless, enterprise apps, neutral host facilities that support indoor mobile access by all major mobile providers, or play other roles, such as allowing in-building access specialists (think Boingo) to support new indoor communications services.

In other words, more so than has been the case for general purpose mobile platforms, CBRS could enable industrial or vertical market applications for manufacturing, energy or healthcare, the Federal Communications Commission says.


The Citizens Broadband Radio Service uses a three-tiered access framework, dynamically managed in much the same way Television White Spaces networks work.

CBRS uses three tiers: Incumbent Access, Priority Access, and General Authorized Access.

Incumbent Access users include authorized federal and grandfathered Fixed Satellite Service users currently operating in the 3.5 GHz Band. These users will be protected from harmful interference from Priority Access and General Authorized Access users.

The Priority Access tier consists of Priority Access Licenses (PALs) that will be assigned using competitive bidding within the 3550-3650 MHz portion of the band.

Each PAL is defined as a non-renewable authorization to use a 10 megahertz channel in a single census tract for three years.

Up to seven total PALs may be assigned in any given census tract, with up to four PALs going to any single applicant. Applicants may acquire up to two consecutive PAL terms in any given license area during the first auction.

The General Authorized Access tier is licensed-by-rule to permit open, flexible access to the band for the widest possible group of potential users, on the Wi-Fi model.

General Authorized Access users are permitted to use any portion of the 3550-3700 MHz band not assigned to a higher tier user and may also operate opportunistically on unused Priority Access channels.
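To make the three tiers concrete, here is a minimal Python sketch of the assignment logic described above. It is purely illustrative: a real Spectrum Access System also handles geography, interference protection zones and sensing, and every name below is hypothetical rather than any actual SAS interface.

# Illustrative toy model of CBRS three-tier channel access; not a real
# Spectrum Access System (SAS). All names are hypothetical.

BAND_MHZ = list(range(3550, 3700, 10))               # fifteen 10 MHz channels
PAL_ELIGIBLE = {ch for ch in BAND_MHZ if ch < 3650}   # PALs only in 3550-3650 MHz
MAX_PALS_PER_TRACT = 7
MAX_PALS_PER_APPLICANT = 4

def assign_pals(requests):
    """Assign PAL channels in one census tract, honoring the caps above.
    `requests` is a list of (applicant, channel_start_mhz) tuples."""
    assigned, per_applicant = {}, {}
    for applicant, channel in requests:
        if channel not in PAL_ELIGIBLE or channel in assigned:
            continue
        if len(assigned) >= MAX_PALS_PER_TRACT:
            break
        if per_applicant.get(applicant, 0) >= MAX_PALS_PER_APPLICANT:
            continue
        assigned[channel] = applicant
        per_applicant[applicant] = per_applicant.get(applicant, 0) + 1
    return assigned

def gaa_available(pal_assignments, incumbent_channels, pal_channels_in_use):
    """GAA devices may use any channel not occupied by a higher tier, and may
    use assigned-but-idle PAL channels opportunistically."""
    blocked = set(incumbent_channels) | (set(pal_assignments) & set(pal_channels_in_use))
    return [ch for ch in BAND_MHZ if ch not in blocked]

pals = assign_pals([("operator_a", 3550), ("operator_a", 3560), ("operator_b", 3570)])
print(sorted(pals))                                                   # [3550, 3560, 3570]
print(gaa_available(pals, incumbent_channels={3650}, pal_channels_in_use={3550}))

The point of the sketch is simply that Incumbent Access always wins, PALs are capped per tract and per applicant, and GAA fills in whatever is left, including idle PAL channels.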


Inflection Point for Linear Video?

Nobody knows for certain yet whether an inflection point has been reached in the linear video subscription business, but arguably it has. If so, the rate of change will increase significantly, resulting in faster rates of account decline.

It is not a new problem, at least in developed markets, where every legacy service has faced maturity and then decline.

“Everyone” expects streaming services to be the replacement product, with a couple significant potential implications. Average revenue per account or per user will tend to fall, as streaming services cost far less than linear video subscriptions.

The move to “skinny bundles” (smaller packages of channels that cost less) also is driving the lower ARPU.

Eventually, though, as linear subscriber numbers really start to fall, there will be a bifurcation of revenue. Today, linear video is a two-sided market, with distributors earning revenue both from advertisers and subscribers.

What already is developing in the streaming market are revenue models that rely only on subscription fees (Netflix) or advertising (Facebook, YouTube) or transactions (Amazon Prime).

What obviously happens is that the linear services will suffer on several fronts, losing subscription revenue, advertising revenue and profit margins.



source: UBS

NFV and AI: the How and the Why

With the ability to quickly analyze massive amounts of consumer behavior data, mobile devices with artificial intelligence applications might ultimately make it possible for the network itself to adapt to the needs of end users, reconfiguring bandwidth and speed dynamically as the end user population moves around.

In that sense, AI might create new value from virtualized networks. Network functions virtualization always has been about increasing network agility and reducing friction, allowing services and bandwidth to be immediately turned up, reconfigured or torn down.


But AI will create the business rationale for doing so.

In that sense, NFV is "how" networks can be made more liquid, and efficient enough to support many new business cases that require lower cost. AI will create the insights that drive "why" network features need to be enabled and changed.
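A minimal sketch of that division of labor, assuming a hypothetical orchestration interface (none of the function names below correspond to any real NFV or orchestrator API): an AI model supplies the "why" in the form of a demand forecast, and the virtualized network supplies the "how" by rescaling capacity.

# Toy illustration of "AI decides why, NFV executes how."
# Every function here is a hypothetical placeholder, not a real orchestrator API.

def forecast_demand_mbps(recent_samples):
    """Stand-in for an AI model: predict next-interval demand from history.
    A naive moving average plus headroom serves as the placeholder logic."""
    average = sum(recent_samples) / len(recent_samples)
    return average * 1.25          # 25 percent headroom, an arbitrary margin

def reconfigure_slice(slice_name, capacity_mbps):
    """Stand-in for the NFV 'how': scale a virtual network function or slice."""
    print(f"scaling {slice_name} to {capacity_mbps:.0f} Mbps")

def control_loop(slice_name, telemetry_mbps):
    predicted = forecast_demand_mbps(telemetry_mbps)   # the "why"
    reconfigure_slice(slice_name, predicted)           # the "how"

control_loop("stadium-video", telemetry_mbps=[420, 510, 640, 700])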

AI Means Edge; Edge Means 5G

Artificial intelligence might prove to be a very-important driver of incremental revenue growth for mobile operators in the 5G era, both directly and indirectly.

Most would agree that what "internet of things" really means is pervasive computing, carried out by scores of devices in every house and by many more devices ambient around the people who use them. So computing will be done all over the place.

At the same time, AI will be key to creating value out of all the unstructured big data created by apps, users and devices.

So we have huge numbers of smart and communicating devices, supporting apps that create huge amounts of data. AI creates value by extracting insight (patterns and predictions) from all that data.

Also, everyone would agree that AI is computationally intensive.

For many use cases, there will literally not be time (latency will matter) to base the apps on use of traditional cloud computing centers. Instead, processing will have to be done "at the edge."

And that is where new roles could emerge for mobile operators and others. The need to support AI-using devices often will mean a need for distributed edge computing.

Edge computing, in turn, will create new value for locations scattered around the network that can do the processing, and mobile operators have some advantages (real estate, power sources, high bandwidth connectivity, low-latency networks, incentives to grow a role in edge computing and applications requiring edge computing) that could be leveraged to create a role in the new business.

That is, in part, why 5G networks will feature high bandwidth and low latency, but also might require use of small cell architectures that put many new potential nodes out in the network.

So, though it is not always obvious, AI could enable new sources of value, business models and revenue for mobile operators.

It is fairly easy to see how artificial intelligence (AI) is a benefit for app and device suppliers. To use the obvious examples, voice interfaces and customization of content are applied examples of AI.

And though AI enables features, not necessarily full business models, the issue is whether, as mobile operators attempt to move “up the stack,” AI can help, and if so, how?

At least one line of reasoning is that pervasive computing requires AI, which requires edge computing, which requires high-bandwidth, low-cost, low-latency networks. That is sort of obvious.

The big challenge is whether the shift to edge computing can be used by mobile operators to support "move up the stack" initiatives where "computing services" become part of the "communications service."



Pervasive Computing Drives Narrowband Shift in a Broadband Market

Oddly enough, in an industry where the direction of technology development has been towards more and more capacity (“broadband”), the next wave of development includes a key focus on “narrowband” capacity (below 1.5 Mbps, and often in hundreds of kilobits per second, not megabits or gigabits per second).

But there are other differences. For the first time, device battery life is among the platform design goals, as well as end user device cost.

Also, though the mobile industry has been based on use of licensed spectrum, there now is a move towards greater use of unlicensed spectrum, in whole or in part.

Also, 5G networks are being designed with the business model for pervasive computing in mind.

Long battery life of more than 10 years is a universal design goal for all the proposed IoT networks. The reason is that the labor cost to replace batteries in the field is too high to support the expected business model.

Also, in a pervasive computing environment, low device cost, below US$5 for each module, is important, as deployment volumes are expected to be in the billions of devices, and many will add value only when deployment costs per unit are quite small.

At the same time, low deployment cost to reduce operating expense is necessary, again to support a business model that often requires very low capital investment and operating cost.

Coverage requirements also are different. Mobile networks always have been designed for operation “above ground.” That is not always the case for IoT deployments, which will happen in reception-challenged areas such as basements, parking garages or tunnels.

IoT transmitter locations also will be expected to support a massive number of devices, perhaps up to 40 devices per household or 50,000 connections per cell, or roughly 1250 homes per IoT cell location, assuming mostly stationary devices are supported.

That implies a transmitter density about 10 times greater than that of a “typical” fixed network central office serving area.
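The arithmetic behind those figures, as a quick sanity check (the central-office comparison is an inference from the numbers in the text, not a measured value):

# Back-of-the-envelope check of the cell-density figures cited above.
devices_per_household = 40
connections_per_cell = 50_000

homes_per_iot_cell = connections_per_cell / devices_per_household
print(homes_per_iot_cell)          # 1250.0 homes per IoT cell, matching the text

# A transmitter density roughly 10x that of a fixed-network central office
# implies a central-office serving area on the order of:
print(homes_per_iot_cell * 10)     # ~12,500 homes (an inference, not a measurement)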

But some matters do not change: the crucial unknown is the ability of new platforms for internet of things (based on use of 5G networks or other low-power, wide-area networks) to support and enable huge new businesses based on pervasive computing and communications.

The 3GPP specifies maximum coupling loss (MCL), a measure of coverage, in the 160 dB range, including signal loss from all sources in the link.
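As a rough illustration of what a 160 dB-class MCL means, the simplest form of the calculation is transmit power minus the weakest signal the receiver can still decode, ignoring antenna gains and margins; the numbers below are typical of narrowband IoT discussions but are illustrative, not an excerpt from the 3GPP specifications.

# Simplified maximum coupling loss (MCL) illustration.
# MCL ~ transmit power (dBm) - receiver sensitivity (dBm), ignoring antenna
# gains and implementation margins.

def max_coupling_loss_db(tx_power_dbm, rx_sensitivity_dbm):
    return tx_power_dbm - rx_sensitivity_dbm

tx_power_dbm = 23            # roughly 200 mW device transmit power
rx_sensitivity_dbm = -141    # very weak decodable signal (narrow bandwidth, repetitions)

print(max_coupling_loss_db(tx_power_dbm, rx_sensitivity_dbm))   # 164 dB, "in the 160 dB range"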

Note the difference in platform availability. The low-power, wide-area platforms are commercially available now. The mobile-based platforms will be commercialized, or have been activated by some tier-one carriers, this year (2017).

As often is the case, challengers enter markets before the legacy mobile or telco suppliers can respond. In the past, scale has mattered, however, and the legacy providers eventually have taken leadership of those new markets, even when the telcos were not “first to market.”

Monday, June 5, 2017

Use AI to Move Up the Stack?

It is fairly easy to see how artificial intelligence (AI) is a benefit for app and device suppliers. To use the obvious examples, voice interfaces and customization of content are applied examples of AI. And though AI enables features, not necessarily full business models, the issue is whether, as mobile operators attempt to move “up the stack,” AI can help, and if so, how?

According to Gartner analysts, there will be many practical applications for AI in the near future, though most do not immediately and obviously have a “mobile” underpinning.

By 2018, for example, up to 20 percent of business content will be authored by machines. The obvious examples are structured content: shareholder reports, legal documents, market reports, press releases, articles and white papers all are candidates for automated writing tools.

Likewise, financial services will undoubtedly move early to use AI to support investing, trading and forecast operations. Banking and insurance likewise will likely be early adopters.

Still, there are a few areas noted by Gartner that seem to have significant and more direct implications for mobile scenarios, and possibly, therefore, for opportunities to move “up the stack.”

Sensors and other devices themselves will begin generating huge numbers of “customer service” requests. According to Gartner, by 2018, six billion connected things will be requesting support. It is not clear how well horizontal services to support such requests can be created, but many of those requesting devices will use mobile and wireless connections.

Even as artificial intelligence is used to handle a growing number of human-initiated customer service requests, we will have to develop ways of efficiently handling “machine” requests as well.

Also, by 2018, two million employees will be required to wear health and fitness tracking devices as a condition of employment, including first responders.

Employee safety is the issue. In addition to emergency responders, professional athletes, political leaders, airline pilots, industrial workers and remote field workers could also be required to use fitness trackers, and those devices will rely on mobile connections as a primary requirement.

By 2020, smart agents will support 40 percent of mobile interactions, Gartner also says. To be sure, it often will be the app providers and device suppliers that directly provide those capabilities. The point is that virtual assistants routinely will monitor user content and behavior in conjunction with AI-based inference engines that will draw inferences about people, content and contexts.

The goal will be prediction. If the agents can learn what users want and need, they also can act autonomously to fulfill those needs.


So it is easier to see how mobile networks and service providers could use AI to support their own operations than to see how they could create horizontal platforms or vertical applications, beyond the autonomous vehicle and connected vehicle spaces, or perhaps consumer health technology.

Artificial Intelligence Will be Democratized

Source: Google  
If artificial intelligence becomes a big part of the next big wave of growth for cloud computing, that should allow firms of all sizes to use advanced machine-learning algorithms just as they today buy computing or storage.

In other words, cloud workloads of the future likely will include AI capabilities. “We believe AI will revolutionize almost all aspects of technology, making it easier to do things that take considerable time and effort today like product fulfillment, logistics, personalization, language understanding, and computer vision, to big forward-looking ideas like self-driving cars,” said Swami Sivasubramanian, Amazon AI VP.

“Today, building these machine learning models for products requires specialized skills with deep Ph.D. level expertise in machine learning,” he said. “However, this is changing.”




Increasingly, AI will be part of cloud services and open source software as well, he argues.

Amazon Web Services has added predictive analytics for data mining and forecasting to its cloud services, opening up machine-learning algorithms first developed for internal use to customers of AWS.

Google application program interfaces are being made available to its cloud services customers to support translation, speech recognition and computer vision.

Microsoft likewise talks about “conversation as a platform,” where voice-responsive systems use artificial intelligence to handle simple customer requests.

Over time, though, that capability will extend, allowing the AI-enhanced interfaces to integrate information from different sources, allowing more complicated transactions to be supported.

AI will be democratized, some would say.


Will Edge Computing Allow Mobile Operators to Move Up the Stack?

It is hard right now to know whether internet of things apps and services, enabled largely--but not exclusively--by 5G, are going to be as important as expected. But it is reasonable to argue that 5G is a platform that could enable mobile service providers “moving up the stack” in enterprise and some consumer services.

Edge computing, in other words, is required by many proposed new apps, the most frequently mentioned being autonomous vehicles, which will require such low latency that cloud computing has to be done at the edge of the network. The issue, perhaps, is how many other new apps could then benefit from an edge computing network.

"Software gives us this capability to actually play in a different space than the connectivity
space for the consumer and the enterprise," said Ed Chan, Verizon SVP.

The assumption is that many new apps will require those interactions to be nearly real-time, requiring mobile edge computing. MEC is about packing the edge with computing power, like "making the cloud as if it's in your back pocket," Chan said.

Many of the apps benefitting from edge computing might be a bit prosaic. Real-time video at stadiums provides one example. Even high-end metropolitan-area networks often have capacity of about 100 Gbps, supporting uploads of 1080p streams from only about 12,000 users at YouTube’s recommended upload rate of 8.5 Mbps. A million concurrent uploads would require 8.5 terabits per second.
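The arithmetic behind those stadium figures:

# Checking the stadium-upload arithmetic cited above.
metro_capacity_bps = 100e9      # ~100 Gbps metropolitan-area network
upload_rate_bps = 8.5e6         # YouTube-recommended 1080p upload rate, 8.5 Mbps

print(round(metro_capacity_bps / upload_rate_bps))    # ~11,765 concurrent 1080p uploads
print(1_000_000 * upload_rate_bps / 1e12)             # 8.5 terabits per second for a million uploads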

IDC has predicted that, by 2018, some 40 percent of IoT-created data will be stored, processed, analyzed and acted upon close to, or at, the edge of a network.

Some even argue that analyzing data from offshore oil rigs, or managing automated parking systems or smart lighting, could require edge computing.


Friday, June 2, 2017

Linear Video Business is "Failing," Says ACA

Small U.S. telcos and cable TV companies have noted for a couple of decades that it is hard to make profits in the linear video subscription market. The reason is simply that scale is necessary, and, by definition, very small telcos and cable TV companies do not have scale.

Still, it is almost shocking to hear American Cable Association president Matt Polka say that the cable TV portion of the access business is "failing."

That is analogous to a major telco industry executive saying the voice business is failing.

And, of course, the same process has happened for telcos: voice, the traditional revenue driver, has not supported growth for quite some time. In 2013, for example, global revenues were dominated by mobility services, while voice services on fixed networks contributed less than 20 percent of the total.

Already, internet access drives U.S. cable operator gross profit, while video contribution continues to shrink, even for the tier-one cable operators.


source: Insight Research

What Big Revenue Source Will a Technology Firm Discover Next?

In the past, technology firms were known either for making computers and devices, or for making the software those computers run. That still is largely true. But what is dramatically different are the new revenue models.

Alphabet (Google) and Facebook make nearly all their revenues from advertising. Amazon makes most of its revenue from retailing. Uber’s revenue comes from ride sharing. That explains the adage that “every company is a tech company” these days. That goes too far, but you get the point.

For a number of very large firms, technology drives a revenue model based on sales of some product other than computing devices or computing software, and on a scale far more significant than the enterprise simply using computers, software, mobile phones and other devices.

That is why Airbnb, Hubspot, Expedia, Zillow and LinkedIn also are tech companies, whatever the revenue model.

source: Business Insider

U.S. Consumers Still Buy "Good Enough" Internet Access, Not "Best"

Optical fiber always is pitched as the “best” or “permanent” solution for fixed network internet access, and if the economics of a specific...