Tuesday, June 6, 2017

AI Means Edge; Edge Means 5G

Artificial intelligence might prove to be a very important driver of incremental revenue growth for mobile operators in the 5G era, both directly and indirectly.

Most would agree that what "internet of things" really means is pervasive computing, conducted by scores of devices in every house and by many more devices carried by, or ambient around, the people who use them. So computing will be done all over the place.

At the same time, AI will be key to creating value out of all the unstructured big data created by apps, users and devices.

So we have huge numbers of smart and communicating devices, supporting apps that create huge amounts of data. AI creates value by extracting insight (patterns and predictions) from all that data.

Also, everyone would agree that AI is computationally intensive.

For many use cases, there literally will not be time (latency will matter) to base the apps on traditional, centralized cloud computing centers. Instead, processing will have to be done "at the edge."

And that is where new roles could emerge for mobile operators and others. The need to support AI-using devices often will mean a need for distributed edge computing.

Edge computing, in turn, will create new value for locations scattered around the network that can do the processing. Mobile operators have some advantages (real estate, power sources, high-bandwidth connectivity, low-latency networks, and incentives to grow a role in edge computing and in applications that require it) that could be leveraged to create a role in the new business.

That is, in part, why 5G networks will feature high bandwidth and low latency, but also might require use of small cell architectures that put many new potential nodes out in the network.

So, though it is not always obvious, AI could enable new sources of value, business models and revenue for mobile operators.

It is fairly easy to see how artificial intelligence (AI) is a benefit for app and device suppliers. To use the obvious examples, voice interfaces and customization of content are applied examples of AI.

And though AI enables features, not necessarily full business models, the issue is whether, as mobile operators attempt to move “up the stack,” AI can help, and if so, how?

At least one line of reasoning is that pervasive computing requires AI, which requires edge computing, which requires high-bandwidth, low-cost, low-latency networks. That is sort of obvious.

The big challenge is whether the shift to edge computing can be used by mobile operators to support "move up the stack" initiatives where "computing services" become part of the "communications service."



Pervasive Computing Drives Narrowband Shift in a Broadband Market

Oddly enough, in an industry where the direction of technology development has been towards more and more capacity (“broadband”), the next wave of development includes a key focus on “narrowband” capacity (below 1.5 Mbps, and often in the hundreds of kilobits per second, not megabits or gigabits per second).

But there are other differences. For the first time, device battery life is among the platform design goals, as is end-user device cost.

Also, though the mobile industry has been based on use of licensed spectrum, there now is a move towards greater use of unlicensed spectrum, in whole or in part.

Also, 5G networks are being designed with the business model for pervasive computing in mind.

Long battery life of more than 10 years is a universal design goal for all the proposed IoT networks. The reason is that the labor cost to replace batteries in the field is too high to support the expected business model.

Also, in a pervasive computing environment, low device cost (below US$5 per module) is important, as deployment volumes are expected to run into the billions of devices, and many deployments will add value only when per-unit costs are quite small.

At the same time, low deployment cost, which reduces operating expense, is necessary, again to support a business model that often requires very low capital investment and operating cost.

Coverage requirements also are different. Mobile networks always have been designed for operation “above ground.” That is not always the case for IoT deployments, which will happen in reception-challenged areas such as basements, parking garages or tunnels.

IoT transmitter locations also will be expected to support a massive number of devices, perhaps 40 devices per household and 50,000 connections per cell, or roughly 1,250 homes per IoT cell location, assuming mostly stationary devices are supported.

That implies a node density roughly 10 times greater than that of a “typical” fixed-network central office serving area.
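
A rough back-of-the-envelope calculation, using the figures cited above plus one illustrative assumption about central office serving areas, shows where the density claim comes from:

```python
# Back-of-the-envelope IoT cell density, using the figures cited above.
devices_per_household = 40        # cited device count per home
connections_per_cell = 50_000     # cited design target per IoT cell site

homes_per_iot_cell = connections_per_cell / devices_per_household
print(f"Homes served per IoT cell: {homes_per_iot_cell:,.0f}")  # ~1,250

# If a "typical" fixed-network central office serves on the order of
# 12,500 homes (an illustrative assumption, not a sourced figure),
# the implied IoT node density is roughly 10x greater.
homes_per_central_office = 12_500
print(f"Implied density ratio: ~{homes_per_central_office / homes_per_iot_cell:.0f}x")
```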

But some matters do not change: the crucial unknown is the ability of new platforms for internet of things (based on use of 5G networks or other low-power, wide-area networks) to support and enable huge new businesses based on pervasive computing and communications.

The 3GPP specifies maximum coupling loss (MCL), a measure of coverage, in the 160 dB range, including signal loss from all sources in the link.
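
As a rough illustration of what an MCL in that range implies, here is a simplified uplink link-budget sketch. The transmit power, bandwidth, noise figure and required SINR below are assumed, illustrative values, not figures taken from the 3GPP specifications:

```python
import math

# Simplified maximum coupling loss (MCL) sketch: roughly the maximum
# transmit power minus the receiver sensitivity.
# All numbers below are illustrative assumptions, not 3GPP figures.
tx_power_dbm = 23.0        # assumed device transmit power
bandwidth_hz = 15_000      # assumed single-tone narrowband uplink
noise_figure_db = 3.0      # assumed base-station noise figure
required_sinr_db = -12.0   # assumed target (repetitions allow negative SINR)

thermal_noise_dbm = -174 + 10 * math.log10(bandwidth_hz)  # kTB noise floor
rx_sensitivity_dbm = thermal_noise_dbm + noise_figure_db + required_sinr_db
mcl_db = tx_power_dbm - rx_sensitivity_dbm

print(f"Receiver sensitivity: {rx_sensitivity_dbm:.1f} dBm")
print(f"Maximum coupling loss: {mcl_db:.1f} dB")  # ~164 dB, in the 160 dB range
```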

Note the difference in platform availability. The low-power, wide-area platforms are commercially available now. The mobile-based platforms are being commercialized, or already have been activated by some tier-one carriers, this year (2017).

As often is the case, challengers enter markets before the legacy mobile or telco suppliers can respond. In the past, scale has mattered, however, and the legacy providers eventually have taken leadership of those new markets, even when the telcos were not “first to market.”

Monday, June 5, 2017

Use AI to Move Up the Stack?

It is fairly easy to see how artificial intelligence (AI) is a benefit for app and device suppliers. To use the obvious examples, voice interfaces and customization of content are applied examples of AI. And though AI enables features, not necessarily full business models, the issue is whether, as mobile operators attempt to move “up the stack,” AI can help, and if so, how?

According to Gartner analysts, there will be many practical applications for AI, in the near future, though most do not immediately and obviously have a “mobile” underpinning.

By 2018, for example, up to 20 percent of business content will be authored by machines. The obvious candidates are structured content such as shareholder reports, legal documents, market reports, press releases, articles and white papers, all of which lend themselves to automated writing tools.

Likewise, financial services will undoubtedly move early to use AI to support investing, trading and forecast operations. Banking and insurance likewise will likely be early adopters.

Still, there are a few areas noted by Gartner that seem to have significant and more direct implications for mobile scenarios, and possibly, therefore, for opportunities to move “up the stack.”

Sensors and other devices themselves will begin generating huge numbers of “customer service” requests. According to Gartner, by 2018, six billion connected things will be requesting support. It is not clear how well horizontal services to support such requests can be created, but many of those requesting devices will use mobile and wireless connections.

Even as artificial intelligence is used to handle a growing number of human-initiated customer service requests, ways of efficiently handling “machine” requests will have to be developed as well.

Also, by 2018, two million employees, including first responders, will be required to wear health and fitness tracking devices as a condition of employment.

Employee safety is the issue. In addition to emergency responders, professional athletes, political leaders, airline pilots, industrial workers and remote field workers could also be required to use fitness trackers, and those devices will rely on mobile connections as a primary requirement.

By 2020, smart agents will support 40 percent of mobile interactions, Gartner also says. To be sure, it often will be the app providers and device suppliers that directly provide those capabilities. The point is that virtual assistants routinely will monitor user content and behavior in conjunction with AI-based inference engines that will draw inferences about people, content and contexts.

The goal will be prediction. If the agents can learn what users want and need, they also can act autonomously to fulfill those needs.


So it is easier to see how mobile networks and service providers could use AI to support their own operations than to see how they could create horizontal platforms or vertical applications, beyond the autonomous vehicle, connected vehicle spaces or perhaps consumer health technology.

Artificial Intelligence Will be Democratized

source: Google

If artificial intelligence becomes a big part of the next big wave of growth for cloud computing, that should allow firms of all sizes to use advanced machine-learning algorithms just as they buy computing or storage today.

In other words, cloud workloads of the future likely will include AI capabilities. “We believe AI will revolutionize almost all aspects of technology, making it easier to do things that take considerable time and effort today like product fulfillment, logistics, personalization, language understanding, and computer vision, to big forward-looking ideas like self-driving cars,” said Swami Sivasubramanian, Amazon AI VP.

“Today, building these machine learning models for products requires specialized skills with deep Ph.D. level expertise in machine learning,” he said. “However, this is changing.”




Increasingly, AI will be part of cloud services and open source software as well, he argues.

Amazon Web Services has added predictive analytics for data mining and forecasting to its cloud services, opening up machine-learning algorithms first developed for internal use to AWS customers.

Google application program interfaces are being made available to its cloud services customers to support translation, speech recognition and computer vision.
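
As one illustration of what “democratized” AI looks like in practice, a developer can call a hosted translation model with a few lines of code rather than training one. The sketch below assumes the google-cloud-translate Python client library and valid credentials; exact module paths and method names have varied across client-library versions, so treat it as indicative rather than definitive:

```python
# Minimal sketch: calling a hosted machine-translation API instead of
# building a model. Assumes the google-cloud-translate client library
# and application-default credentials are already configured.
from google.cloud import translate_v2 as translate

client = translate.Client()
result = client.translate(
    "Edge computing moves processing closer to devices.",
    target_language="de",
)
print(result["translatedText"])
print(result["detectedSourceLanguage"])
```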

Microsoft likewise talks about “conversation as a platform,” where voice-responsive systems use artificial intelligence to handle simple customer requests.

Over time, though, that capability will extend, allowing the AI-enhanced interfaces to integrate information from different sources, allowing more complicated transactions to be supported.

AI will be democratized, some would say.


Will Edge Computing Allow Mobile Operators to Move Up the Stack?

It is hard right now to know whether internet of things apps and services, enabled largely, but not exclusively, by 5G, are going to be as important as expected. But it is reasonable to argue that 5G is a platform that could enable mobile service providers to move “up the stack” in enterprise and some consumer services.

Edge computing, in other words, is required by many proposed new apps, the most frequently mentioned being autonomous vehicles, which will require such low latency that cloud computing has to be done at the edge of the network. The issue, perhaps, is how many other new apps could then benefit from an edge computing network.

"Software gives us this capability to actually play in a different space than the connectivity
space for the consumer and the enterprise," said Ed Chan, Verizon SVP.

The assumption is that many new apps will require those interactions to be nearly real-time, requiring mobile edge computing. MEC is about packing the edge with computing power, like "making the cloud as if it's in your back pocket," Chan said.
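
A simple way to see why proximity matters: even before any processing or queuing delay, speed-of-light propagation sets a floor on round-trip time, and that floor grows with distance to the data center. The distances and fiber propagation speed below are illustrative assumptions, not measured network paths:

```python
# Round-trip propagation delay floor, ignoring processing and queuing.
# Assumes signals travel at roughly two-thirds the speed of light in
# fiber; distances are illustrative assumptions.
SPEED_IN_FIBER_KM_PER_MS = 200  # ~2/3 of c, in km per millisecond

def round_trip_ms(distance_km: float) -> float:
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS

for label, km in [("edge node (metro)", 10),
                  ("regional data center", 500),
                  ("distant cloud region", 3000)]:
    print(f"{label:22s} {km:5d} km -> {round_trip_ms(km):5.2f} ms round trip")
```

Only the edge-node case leaves room for single-digit-millisecond application budgets once processing and radio-network delays are added back in.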

Many of the apps benefiting from edge computing might be a bit prosaic. Real-time video at stadiums might provide one example. Even high-end metropolitan-area networks often have capacity of about 100 Gbps, supporting uploads of 1080p streams from only about 12,000 users at YouTube’s recommended upload rate of 8.5 Mbps. A million concurrent uploads would require 8.5 terabits per second.
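
The arithmetic behind that example is straightforward, using the figures cited above:

```python
# Capacity arithmetic for the stadium-video example above.
metro_capacity_gbps = 100   # cited high-end metro network capacity
upload_rate_mbps = 8.5      # cited 1080p upload rate

concurrent_streams = metro_capacity_gbps * 1000 / upload_rate_mbps
print(f"Concurrent 1080p uploads at 100 Gbps: ~{concurrent_streams:,.0f}")  # ~11,765

million_upload_demand_tbps = 1_000_000 * upload_rate_mbps / 1_000_000
print(f"Demand from one million uploads: {million_upload_demand_tbps} Tbps")  # 8.5
```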

By 2018, some 40 percent of IoT-created data will be stored, processed, analyzed and acted upon close to, or at, the edge of a network, IDC predicts.

Some even argue that analyzing data from offshore oil rigs, or managing automated parking systems or smart lighting, could require edge computing.


Friday, June 2, 2017

Linear Video Business is "Failing," Says ACA

Small U.S. telcos and cable TV companies have noted for a couple of decades that it is hard to make profits in the linear video subscription market. The reason is simply that scale is necessary, and, by definition, very small telcos and cable TV companies do not have scale.

Still, it is almost shocking to hear American Cable Association president Matt Polka say that the cable TV portion of the access business is "failing."

That is analogous to a major telco industry executive saying the voice business is failing.

And, of course, the same process has happened for telcos: voice, the traditional revenue driver, has ceased to support growth for quite some time. In 2013, for example, global revenues were dominated by mobility services, while voice services on fixed networks contributed less than 20 percent of the total.

Already, internet access drives U.S. cable operator gross profit, while video contribution continues to shrink, even for the tier-one cable operators.


source: Insight Research

What Big Revenue Source Will a Technology Firm Discover Next?

In the past, technology firms were known either for making computers and devices, or for making software widely used on computers. That still is largely true. But what is dramatically different is the range of new revenue models.

Alphabet (Google) and Facebook make nearly all their revenues from advertising. Amazon makes most of its revenue from retailing. Uber’s revenue comes from ride sharing. That explains the adage that “every company is a tech company” these days. That goes too far, but you get the point.

For a number of very large firms, technology drives a revenue model based on sales of some product other than computing devices or software, on a scale far more significant than the mere fact that the enterprise uses computers, software, mobile phones and other devices.

That is why Airbnb, HubSpot, Expedia, Zillow and LinkedIn also are tech companies, whatever the revenue model.

source: Business Insider

Don't Expect Measurable AI Productivity Boost in the Short Term

Many have high expectations for the impact artificial intelligence could have on productivity. Longer term, that seems likely, even if it mi...