Tuesday, October 9, 2018

Large Enterprises Want to Compute at the Edge, Study Finds

Most large enterprises want to deploy Internet of Things (IoT) devices at the edge but are struggling to do so, a survey by Vanson Bourne, sponsored by Software AG, has found.

Some 80 percent of respondents want to deploy IoT at the edge, but only 8 percent are actually doing so already.

What firms would like to do is process data locally. Instead of sending all the data from a wind turbine to the cloud and processing it centrally, users want to run processing and analytics locally, then send only the results to the cloud.

That reduces network load, cloud processing and storage requirements, while making IoT feasible in areas without reliable networks.
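
As a minimal sketch of that pattern, assuming a hypothetical turbine gateway (the sensor, sample rate and payload format here are invented for illustration, not taken from the survey):

import json
import random
import statistics

# Stand-in for an hour of 1 Hz rotor-speed readings (rpm) from one turbine.
def read_rotor_samples(n=3600):
    return [14.0 + random.gauss(0, 0.5) for _ in range(n)]

# Reduce the raw telemetry to a small result set at the edge.
def summarize(samples):
    return {
        "mean_rpm": round(statistics.mean(samples), 2),
        "max_rpm": round(max(samples), 2),
        "min_rpm": round(min(samples), 2),
        "samples": len(samples),
    }

samples = read_rotor_samples()
summary = summarize(samples)

raw_bytes = len(json.dumps(samples))    # what a cloud-only design would send
sent_bytes = len(json.dumps(summary))   # what the edge gateway actually sends
print(summary)
print(f"upstream payload shrinks from ~{raw_bytes:,} bytes to ~{sent_bytes}")

Only the summary crosses the wide-area network; the raw stream never leaves the site.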

Many other applications with real-time, low-latency requirements likewise will benefit from processing at the edge: medical, smart city, image recognition, speech recognition, smart home or gaming applications, for example.

According to market research firm IDC, IT spending on edge infrastructure will reach up to 18 percent of total spending on IoT infrastructure by 2020.

At Least 8.4 Billion IoT Devices in Use in 2018: Where are They?

Gartner says 8.4 billion connected things were in use worldwide in 2017, up 31 percent from 2016, a fact that might surprise many, as the internet of things often is viewed as a “future” business.

By 2020, there will be 20.4 billion IoT devices in use, Gartner forecasts.

Consumer use cases represent about 63 percent of the installed base in 2017, at 5.2 billion units, and total spending on endpoints and services will reach almost $2 trillion in 2017. "Aside from automotive systems, the applications that will be most in use by consumers will be smart TVs and digital set-top boxes, while smart electric meters and commercial security cameras will be most in use by businesses," said Peter Middleton, research director at Gartner.

In addition to smart meters, applications tailored to specific industry verticals (including manufacturing field devices, process sensors for electrical generating plants and real-time location devices for healthcare) drove the use of connected things among businesses through 2017, with 1.6 billion units deployed.

Regionally, Greater China, North America and Western Europe are driving the use of connected things, and together the three regions will represent 67 percent of the overall IoT installed base in 2017.


IoT Units Installed Base by Category (Millions of Units)

Category                    | 2016    | 2017    | 2018     | 2020
Consumer                    | 3,963.0 | 5,244.3 | 7,036.3  | 12,863.0
Business: Cross-Industry    | 1,102.1 | 1,501.0 | 2,132.6  | 4,381.4
Business: Vertical-Specific | 1,316.6 | 1,635.4 | 2,027.7  | 3,171.0
Grand Total                 | 6,381.8 | 8,380.6 | 11,196.6 | 20,415.4

Source: Gartner

From 2018 onwards, cross-industry devices, such as those targeted at smart buildings (including LED lighting, HVAC and physical security systems), will take the lead as connectivity is driven into higher-volume, lower-cost devices.

In 2020, cross-industry devices will reach 4.4 billion units, while vertical-specific devices will amount to 3.2 billion units, Gartner predicts.
While consumers purchase more devices, businesses spend more. In 2017, business use of connected things will drive $964 billion worth of hardware activity.

Consumer applications will amount to $725 billion in 2017. By 2020, hardware spending from both segments will reach almost $3 trillion.

IoT Endpoint Spending by Category (Millions of Dollars)

Category                    | 2016      | 2017      | 2018      | 2020
Consumer                    | 532,515   | 725,696   | 985,348   | 1,494,466
Business: Cross-Industry    | 212,069   | 280,059   | 372,989   | 567,659
Business: Vertical-Specific | 634,921   | 683,817   | 736,543   | 863,662
Grand Total                 | 1,379,505 | 1,689,572 | 2,094,881 | 2,925,787

Source: Gartner

Total IoT services spending (professional, consumer and connectivity services) is on pace to reach $273 billion in 2017, Gartner also predicts.

Monday, October 8, 2018

IBM Revenues Illustrate Rule: Replace 1/2 of Current Revenue Every Decade

In both the computing and communications business, one good rule of thumb is that firms must replace about half their current revenue every decade. That works for IBM just as for Intel, Comcast and AT&T.

International Business Machines Corp. said in the second quarter of 2018 it generated more than half of its revenue from newer services such as cloud and artificial intelligence, a first for the company.

In 1994, IBM earned half its revenue from hardware sales. By 2012, IBM was earning 57 percent of revenue from services and just 18 percent from hardware sales.

One sees the same pattern at Intel.

The point is that in the computing industry, as in the communications industry, a key principle is that firms must replace about half their current revenue every decade.

 
source: Annex Bulletin

Edge Computing and Changing TV Channels

As content delivery networks have improved the user experience of web and content applications, so edge computing will extend that role in the 5G era, a time when consumption of ultra-high-definition TV content will become more common.

Aside from reducing latency, CDNs also reduce traffic across internet backbones. But one prosaic use case should become more important in the 4K and 8K video era: changing channels.

TV users expect that when they press a button on a remote control to change a channel, the new video appears instantly. That will be harder in the UHD era, as the amount of information to be displayed on the screen grows.

High-definition content requires bit rates in the 3 Mbps to 5 Mbps range. By some estimates, 4K requires bit rates of between 15 Mbps and 25 Mbps for high-quality, fast-motion content such as live sports. That is roughly a five-fold increase.

But 8K could push those requirements up to 80 Mbps or even 100 Mbps for each channel or stream, roughly four to five times the 4K requirement.
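
A back-of-the-envelope calculation shows why that matters. This is a sketch under two assumptions not in the article, a two-second startup buffer per channel change and a 100 Mbps access line, using the upper-end bit rates quoted above:

# Data that must arrive before a new channel can start playing, assuming a
# hypothetical 2-second startup buffer and a hypothetical 100 Mbps access line.
BUFFER_SECONDS = 2.0
ACCESS_MBPS = 100

streams_mbps = {"HD": 5, "4K": 25, "8K": 100}  # upper-end rates from the text

for name, mbps in streams_mbps.items():
    megabits_needed = mbps * BUFFER_SECONDS
    fill_seconds = megabits_needed / ACCESS_MBPS
    print(f"{name}: {megabits_needed:.0f} Mb to buffer, ~{fill_seconds:.2f} s to fill")

On those assumptions, an 8K channel change needs 20 times the data an HD change does before the first frame can appear.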


But that is only part of the user experience issue. Latency is the bigger issue. Early in the digital TV era, video subscription service providers encountered noticeable lag between a remote-control request to change a channel (often especially when switching to a channel guide, or between channels with different resolutions) and the response on screen.

That problem of delay when changing a channel will get worse in the 4K and 8K eras.

So though it might seem quite prosaic, an early use for edge computing will be allowing video subscribers to change TV channels (4K, 8K) without noticeable lag.

Sunday, October 7, 2018

Selling the Invisible Always is Tough

Selling the invisible always is difficult. Some products are intangible: there is no way the buyer actually can determine “quality” in any direct way until the services are provided. Legal, medical, financial or other services provide good examples.

Marketing advice, crisis management and other services also are very hard for buyers to evaluate in advance of purchase. There is no physical object to inspect, so a potential buyer has to try to determine value some other way.


So think about it: all communications and connectivity services are intangible products, for which a buyer has no way to determine quality in advance of purchase, and no way to compare quality across suppliers except to “try them.”

There are some obvious consequences. If buyers cannot independently determine value or quality, they might be prone to distrust quality claims. Perhaps that is why service providers tend to score low on consumer satisfaction surveys: people know they cannot make the judgments they can make with physical products. And when “value” cannot be determined, it is hard to determine whether “price” is right, either.

In fact, every connectivity service--video, voice or internet access--scores at the bottom of multi-industry indexes in surveys of customer satisfaction conducted by the American Customer Satisfaction Index.

One might object that all internet apps also are “intangible,” and that is true. Yet consumer surveys tend to show higher satisfaction with apps than with connectivity services.

“Make it personal” is one typical bit of advice for sellers of intangible services. In other words, explain “how it makes your life better.” “Show the benefits” (outcomes) is another way to sell an intangible product. When even that is tough, sell “peace of mind.”

That's why credentials, furniture, street address, references and "experience" become proxies for value and competence where an intangible product is concerned. Even tangible products such as fashion items or vacation resorts have a huge and similar problem, namely creating a brand or mystique that helps potential buyers evaluate the product, which either is a means to another end, or an "experience."

Trust also is important for selling intangibles.  As there is nothing tangible to show customers, customers have to trust their suppliers. And though it would be hard to show a direct correlation, one element that promotes trust might be that lots of other customers have chosen a particular supplier. So market share becomes a proxy for value and a reason for greater trust.

In his book Selling the Invisible: A Field Guide to Modern Marketing, Harry Beckwith makes the point that an intangible product cannot be sold in the same way as a physical product.

“In fact, a service does not even exist when you buy one,” notes financial analyst Ben Carlson. “If you go to a salon, you cannot see, touch, or try out a haircut before you buy it. You order it. Then you get it.”

Product failure also is harder to determine. Did you get good advice? How good a job did your painter, dentist or doctor do?

Often, that is unknowable. Physical products, by contrast, can carry warranties because there is some way of knowing and quantifying the risk of product failure.

Most services cannot be similarly quantified, with the possible exception of outage or availability performance.

The big point is that customers buy connectivity services that mostly come without guarantees or certainty. So anything suppliers can do to provide proxies for quality should help.

And that is why “brand” reputation matters. It is a proxy for quality and a reason for trust. That is why personal relationships matter: they are proxies for quality and reasons for trust.

That is why good storytelling matters.

Companies and people sell themselves, their vision, philosophy and values. Being likable is a prerequisite when the customer has endless choices.

“Prospects do not buy how good you are at what you do. They buy how good you are at who you are,” says Beckwith.

That is why sellers of broadly similar products benefit from “accentuating the trivial.” That might be one of the few ways to differentiate, when products perform in broadly similar ways.

Service businesses are built on promises, it is fair to say. A good brand is synonymous with a firm or person who will fulfill those promises on a consistent basis so you know exactly what you’re getting when you sign up.

Verizon’s brand promise has long been “best network.” So anytime that is challenged, the whole value of the brand is challenged. It really is quite a bit harder to explain what the key brand promise is for the other tier-one service providers.

Comcast’s latest tagline is “the future of awesome.” Comcast customers who have used other major service providers might agree that tagline actually does tend to resonate with user experience.

T-Mobile US says it is the “uncarrier,” with a clear “we are not like the other guys” positioning. Many T-Mobile US customers might agree.

AT&T changes its tagline every so often, so it is hard to say what its core brand promise happens to be. Sprint also makes changes every now and then. The point is that, in both cases, the brand promise is not clear.

That arguably is a problem for everyone except Verizon, which has been consistent. And that might illustrate the problem of selling intangible connectivity services. “Best network” is at least a proxy for quality that people understand. I am hard pressed, in most cases, to explain what the value proxies are when it comes to the other service providers.

SDN and NFV are Different, But Telcos Will Use Both

Even if the terms are used interchangeably, network functions virtualization (NFV) and software defined networking (SDN) arguably are different. But both are part of the broader push toward core networks that are virtualized and easily programmable. SD-WAN reflects an SDN approach, while the use of white-box network elements represents NFV. Telcos will do both.

At a high level, one might argue that the business outcome for network functions virtualization (NFV) is lower cost networks. One might also argue that the business outcome for SDN, while contributing to lower cost, is greater network control.

NFV is the process of moving services such as load balancing, firewalls and intrusion prevention systems away from dedicated hardware into a virtualized environment, many would agree. Others might note that NFV makes use of “virtual machines” to supply network functions.
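
As a toy illustration of that idea, not any vendor's implementation, here is a packet filter written as ordinary software of the kind a virtual machine on commodity hardware could run; the rule format and field names are invented for the example:

# A 'network function' as plain software: a minimal first-match packet filter
# of the sort NFV moves off dedicated appliances and into virtual machines.
RULES = [
    {"action": "deny",  "dst_port": 23},   # block telnet
    {"action": "allow", "dst_port": 443},  # allow HTTPS
]
DEFAULT_ACTION = "deny"  # drop anything no rule covers

def filter_packet(packet):
    # Return the action of the first rule matching the destination port.
    for rule in RULES:
        if rule["dst_port"] == packet["dst_port"]:
            return rule["action"]
    return DEFAULT_ACTION

print(filter_packet({"src": "10.0.0.5", "dst_port": 443}))  # allow
print(filter_packet({"src": "10.0.0.5", "dst_port": 23}))   # deny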

And, as with many innovations, the initial drive to reduce cost eventually leads to new thinking about use cases, revenue streams and service creation. SD-WAN is a prime example of the opposite trend: a new service offering, rather than a cost play, built on separation of the control plane and data plane.


And many would say one core attribute of NFV is the ability to separate control plane functions from data plane operations, so software can run on commodity hardware. But that is a core principle of SDN as well.  

Both are based on network abstraction. But SDN and NFV differ in how they separate functions and abstract resources.

SDN abstracts physical networking resources--switches, routers and other network elements--and moves decision making to a virtual network control plane. In this approach, the control plane decides where to send traffic, while the hardware continues to direct and handle the traffic.

NFV aims to virtualize all physical network resources beneath a hypervisor, which allows the network to grow without the addition of more devices.

In other words, SDN separates network control functions from network forwarding functions.  NFV abstracts network forwarding from the hardware on which it runs. You might also argue that NFV provides basic networking functions, while SDN controls and orchestrates them for specific uses.
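
That split can be sketched in a few lines of toy code; the classes below are illustrative, not an actual SDN API such as OpenFlow:

# Control plane: a central controller computes forwarding decisions.
# Data plane: switches only match packets against installed rules and forward.
class Switch:
    def __init__(self, name):
        self.name = name
        self.flow_table = {}  # destination -> output port, set by the controller

    def install_rule(self, dst, out_port):
        self.flow_table[dst] = out_port

    def forward(self, dst):
        port = self.flow_table.get(dst)
        if port is None:
            return f"{self.name}: {dst} unknown, punt to controller"
        return f"{self.name}: {dst} -> port {port}"

class Controller:
    def __init__(self, topology):
        self.topology = topology  # destination -> (switch, port): the global view

    def program_network(self):
        for dst, (switch, port) in self.topology.items():
            switch.install_rule(dst, port)

s1 = Switch("s1")
Controller({"10.0.0.7": (s1, 2)}).program_network()
print(s1.forward("10.0.0.7"))  # s1: 10.0.0.7 -> port 2
print(s1.forward("10.0.0.9"))  # unknown flows get punted to the controller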

So what makes that different from software defined networking? That often is hard to explain. SDN aims to automate processes, while NFV often aims only to virtualize them. Software defined networking is an approach that uses open protocols, such as OpenFlow, to apply globally aware software control at the edges of the network, to switches and routers that typically would use closed and proprietary firmware, some would say.

In principle, then, an entity can deploy SDN without NFV, or NFV without SDN.  

As a practical matter, NFV often is easiest to understand as a way of separating software and controller functions from dedicated network elements. By implementing network functions in software that can run on a range of industry-standard server hardware, NFV aims to reduce cost and make networks more flexible.

SDN, on the other hand, seeks to create a network that is centrally managed and programmable. In other words, SDN separates lower-level packet forwarding from higher-level network control.

Alphabet Sees Significant AI Revenue Boost in Search and Google Cloud

Google CEO Sundar Pichai said its investment in AI is paying off in two ways: fueling search engagement and spurring cloud computing revenue...