Friday, February 15, 2019

More Support for U.S. Rural Broadband

The U.S. Federal Communications Commission has awarded $1.5 billion in support for rural internet access services, intended to upgrade some 713,000 locations, at an average subsidy of $2,103 per location.

As always, the small percentage of very rural locations has per-line capital investment costs far above those of urban and suburban locations.

The most isolated 250,000 U.S. homes of the seven million that in 2010 did not have fixed network internet access (or did not meet a minimum 4 Mbps downstream speed), representing about 3.6 percent of those locations, require 57 percent of the capital required to connect all seven million locations.

“The highest-gap 250,000 housing units account for $13.4 billion of the total $23.5 billion investment gap,” an FCC paper has estimated.
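The concentration is easy to verify from the FCC paper's own figures; a quick sketch of the arithmetic:

```python
# Share of the total broadband investment gap attributable to the
# hardest-to-serve 250,000 housing units (figures from the FCC paper).
gap_hardest = 13.4e9      # investment gap for the highest-gap 250,000 HUs, USD
gap_total = 23.5e9        # total investment gap for all seven million HUs, USD
units_hardest = 250_000
units_total = 7_000_000

capex_share = gap_hardest / gap_total      # ~0.57
unit_share = units_hardest / units_total   # ~0.036

print(f"{unit_share:.1%} of locations require {capex_share:.0%} of the capital")
```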


“Our analysis indicates that there are seven million housing units (HUs) without access to terrestrial broadband infrastructure capable of meeting the National Broadband Availability Target of 4 Mbps download and 1 Mbps upload,” the FCC said in its Broadband Availability Gap technical paper.

Created in support of the FCC’s National Broadband Plan, the document says simply that “because the total costs of providing broadband service to those seven million HUs exceed the revenues expected from providing service, it is unlikely that private capital will fund infrastructure.”

Cost and density are inversely related, the FCC noted. The density pattern follows a basic Pareto rule, that 80 percent of the population lives in perhaps 20 percent of the land area.

source: FCC

Network Slicing and Native Edge Apps

Network slicing, enabled by network virtualization in a broad sense, might be directly related to performance requirements of edge computing and native edge apps.

Edge-native applications, as their name suggests, are applications which require the unique characteristics provided by edge computing to function satisfactorily, or in some cases to function at all. These applications will typically rely on the low latency, locality information or reduced cost of data transport that edge computing provides in comparison to the centralized cloud.

One practical issue is how to decide when edge computing is the preferred solution. In addition to the typical scheduling attributes such as requirements around processor, memory, operating system, and occasionally some simple affinity/anti-affinity rules, edge workloads might be interested in specifying some or all of the following:
• Geolocation
• Latency
• Bandwidth
• Resilience and/or risk tolerance (i.e., how many 9s of uptime)
• Data sovereignty
• Cost
• Real-time network congestion
• Requirements or preferences for specialized hardware (GPUs, FPGAs)

One important core network implication is that many of those attributes (geolocation, latency, bandwidth, resilience, cost and real-time congestion performance) are precisely the issues network slicing addresses.

The State of the Edge report notes that edge computing grows out of the legacy of content delivery networks, which tells you much about the potential advantages and use cases: application acceleration and lower latency are where edge computing adds value.

“There are lots of edges, but the edge we care about today is the edge of the last mile network,” the report authors suggest.    

Living at the Infrastructure Edge

Business author Will Willis (not talking about edge computing) penned the phrase “if you’re not living on the edge you’re taking up too much space.”

One of the complexities of “edge computing” is that, in a factual sense, every computing resource lives on the edge. But clients of particular resources might be located locally or far away. And that is the issue: “computing on the edge,” not “living at the edge.”

Every hyperscale data center, smartphone, smart speaker, personal computer, smartwatch and industrial sensor "lives on the edge." Each is fixed in space, but connects over some access link to the core networks and the internet. Everything is physically at the edge.

The exception, some might argue, is devices communicating over a satellite link. In those scenarios, the "access" link and the "wide area network" connections often are indistinguishable. We can argue about whether the repeater (the satellite) sits in the core, or is simply another network edge.

But the issue is distributed computing operations; computing "somewhere else than where the device is." The "cloud," remember, is an abstraction, not a place.

“Edge computing places high-performance compute, storage and network resources as close as possible to end users and devices,” the State of the Edge report says. The advantages include reduced WAN transport costs and lower latency.

And latency is the key issue. Human-scale apps can have response tolerances as long as a few seconds. It’s mostly an inconvenience.

Annoying, yes; but not life-threatening. In comparison, for applications that operate at machine-scale, where latency is measured in microseconds, even small delays in data transmission can have severe and even fatal consequences.

To be sure, debate exists about whether computing on an edge device directly can be considered “edge computing.”  Though processing can and will occur wherever it is best placed for any given application, it is probably the general understanding that edge computing implies that resources on the device edge are as close as you can possibly get to the end user, both physically and logically.

The "infrastructure edge" refers to IT resources positioned on the network operator or service provider side of the last mile network, such as at a cable headend or at the base of a cell tower.

In such cases, the primary building blocks are edge data centers, which are typically placed at five-mile to 10-mile intervals in urban and suburban environments.

The difference between device edge and infrastructure edge might be characterized as “zero hop” (computing on the device itself) and “one short hop” with total round-trip latency of a few milliseconds.

The infrastructure edge, in many cases, will include an access edge, directly connected to clients, as well as an aggregation edge, two network hops from any end user, which both simplifies connection to other cloud resources and provides a more powerful analytical capability that still is close to the device edge.

The access edge might be thought of as providing real-time processing tasks, while the aggregation edge handles less-immediate but higher-order analytical tasks (such as trend analysis).

Individual edge data centers might typically feature 50 to 150 kilowatts of capacity. Hyperscale data centers can feature scores of megawatts, roughly three orders of magnitude (about 1,000 times) more capability.

5G, IoT, Edge Computing, SDN, Network Slicing and AI All are Related

It is difficult to separate 5G from edge computing from network virtualization from internet of things from applied forms of artificial intelligence, because value created from each hinges on the value supplied by the other advances.

If one believes that the big value drivers associated with 5G will come from enterprise use cases, rather than reinforcing 5G network capacity, then internet of things stands at the forefront of the opportunities. Many of the potential use cases hinge on ultra-low latency, which drives the need for distributed edge computing and analytics embedding AI, plus lower-cost and more-flexible core networks that are optimized for particular computing tasks.

Consider that edge computing infrastructure might represent some seven million locations, according to one recent IDC estimate. To put that into perspective, consider that there are nearly five million mobile cell sites active globally. Assume four million are macrocell sites, and candidates for an edge data center.

Though deployment would not be equal everywhere, one edge data center at every macrocell site implies a universe of four million sites. If you assume small cells are placed in areas of high density and high communications usage, then one million to two million additional sites are conceivable, representing potentially five to six million locations.
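The back-of-envelope estimate can be sketched directly; the inputs below are the article's own assumptions, not independent data:

```python
# Back-of-envelope count of candidate infrastructure-edge sites,
# using the assumptions stated above.
macrocell_sites = 4_000_000        # assumed macrocells among ~5M global cell sites
small_cells_low = 1_000_000        # plausible small-cell additions, low case
small_cells_high = 2_000_000       # high case

low = macrocell_sites + small_cells_low
high = macrocell_sites + small_cells_high
print(f"{low / 1e6:.0f}M to {high / 1e6:.0f}M candidate infrastructure-edge sites")
```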

Those are "infrastructure edge" facilities, arguably owned by mobile or telco-associated entities. In addition, many logical edge computing facilities will be located on enterprise premises. That would add a couple to several million more sites.

Another way to look at it is by the population of devices needing to communicate with an edge data center. Assume a population of five to six billion devices. Assume the capacity of an edge data center is peak use by 100 devices. Then you would "need" 50 million to 60 million edge data centers.

Supporting 100 devices concurrently might seem an absurdly low number, but, by analogy, 72 percent of traditional data center traffic remains inside the data center. Data center to user traffic has been about 15 percent of total traffic. The point is that a flood of sensor data might impose greater processing tasks than consumer web or video requires.

That would not include any edge data aggregation centers, where non-real-time analytics are conducted, or where traffic to be passed to remote data centers and locations is originated.

The whole point is that edge data centers, working with 5G, would be necessary to ensure latency of a few milliseconds roundtrip; core network virtualization to contain costs and network slicing to ensure WAN performance; plus application of artificial intelligence (machine learning at the edge, deep learning elsewhere in the cloud, or at edge aggregation centers).

Source: Cisco

Network Slicing at an Edge Data Center Requires New Switches or Routers, Plus Orchestration

If you are going to use or support software-defined network services using network slicing, whether at edge data centers or hyperscale data centers, you need a new orchestration capability and new routers or switches.

The Kaloom Software Defined Fabric is a software solution and orchestration function for networking white boxes in a software defined network context.

It promises a two-times overall performance enhancement, a seven-times increase in per-server throughput and a five-times to 10-times reduction in latency between virtual machines in the same rack.

Versions for hyperscale and edge data centers are supported.  As a programmable data center fabric, the solution offers integrated routing and switching, plus P4 platform tools to support programmers.

The Kaloom SDF supports software defined networks and network slicing, and can run several virtual data-centers with different network services, Kaloom says.

“Kaloom SDF features advanced self-forming and self-discovery capabilities, zero touch provisioning (ZTP) of the virtual networking and virtual components, and automated software upgrades, thus minimizing human intervention and errors while saving time and effort,” the company says.

Network provisioning time is reduced from several days to minutes and is automatically updated during runtime, the company says.

A physical data center can be partitioned into multiple independent and fully isolated virtual data centers (vDCs).

Each vDC operates with its own Virtual Fabric (vFabric), which can host millions of IPv4 or IPv6 based tenant networks.

Additional compute and storage resources can be dynamically assigned or removed from a vDC, thus creating a flexible and elastic pool of network resources suitable for network slicing, Kaloom says.
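The vDC concept can be illustrated with a toy model of an elastic server pool; this is purely an illustration of the partitioning idea, assuming hypothetical class and method names, and not Kaloom's actual API:

```python
# Hypothetical model of partitioning a physical data center into isolated
# virtual data centers (vDCs) drawing on a shared, elastic pool of servers.

class DataCenterFabric:
    def __init__(self, servers: int):
        self.free = set(range(servers))        # unassigned server IDs
        self.vdcs: dict[str, set[int]] = {}    # vDC name -> assigned servers

    def create_vdc(self, name: str) -> None:
        self.vdcs[name] = set()

    def assign(self, name: str, count: int) -> None:
        """Move servers from the shared pool into a vDC's vFabric."""
        if count > len(self.free):
            raise ValueError("not enough free capacity")
        for _ in range(count):
            self.vdcs[name].add(self.free.pop())

    def release(self, name: str, count: int) -> None:
        """Return servers from a vDC back to the shared pool."""
        for _ in range(min(count, len(self.vdcs[name]))):
            self.free.add(self.vdcs[name].pop())

fabric = DataCenterFabric(servers=48)
fabric.create_vdc("tenant-a")
fabric.create_vdc("tenant-b")
fabric.assign("tenant-a", 16)
fabric.assign("tenant-b", 8)
fabric.release("tenant-a", 4)   # elastic: shrink one vDC, grow the pool
print(len(fabric.free))  # 48 - 16 - 8 + 4 = 28
```

The key property, as with network slicing generally, is that each tenant's resources stay fully disjoint while the pool itself flexes.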

Kaloom SDF is a pre-tested and certified software solution for networking white boxes from Accton and Delta.



Huge Gulf Between European, North American Telcos on 5G

Rarely does one see such wide variations in telecom executive views of coming technology as now seems to be the case for 5G. In a recent survey, McKinsey found 100 percent of European execs citing the business case as the biggest 5G challenge. Just 11 percent of North American leaders thought the business case was the "biggest" challenge.



Thursday, February 14, 2019

Kinetic Edge Alliance Aims to Build Infrastructure Edge Capabilities in 30 U.S. Metros

The Kinetic Edge Alliance now is targeting the top 30 U.S. metro markets, which represent almost 50 percent of the U.S. population, for infrastructure edge computing facilities.

In 2019, the Alliance will focus on Chicago, Pittsburgh, Atlanta, Dallas, Los Angeles and Seattle.


Linode supplies cloud computing infrastructure. MobiledgeX will provide developers with automated workload orchestration.

Packet supplies bare-metal infrastructure. StackPath will supply edge computing services including containers and virtual machines.

Vapor IO supplies the “buildings.”

Key deployment partners include Alef Mobitech, which supplies Multi-Access Edge Computing applications. Detecon International, a subsidiary of Deutsche Telekom, will supply consulting services.

Hitachi Vantara, a wholly-owned subsidiary of Hitachi Ltd., delivers data center and IoT expertise including analytics. New Continuum will cross-connect its West Chicago data center with the Kinetic Edge to provide local colocation capacity as well as a software-enabled Internet Exchange Point (IXP).

Pluribus Networks will supply management. Seagate will provide technical blueprints and deployment support for end users and partners related to storage.

Tuesday, February 12, 2019

Service Providers Cannot be Everything to Everyone, Anymore

The greatest opportunity for connectivity service providers over the next five years is getting right the balance of wholesale and retail operations, says Dean Bubley of Disruptive Analysis. What he means is that service providers cannot do everything, anymore, and must make choices.


Most telcos will have to pick one to five areas where they can be viable platforms and then partner for everything else, Bubley argues.


And though many have tried some way to mimic the app provider business model, that mostly has not worked. App providers hope to earn 50 cents a year from billions of customers; telcos have to hope to earn $30 to $35 a month from possibly millions of customers, says Benoit Felten, Diffraction Analysis principal. Those models are almost mutually exclusive.


“Telcos have been in a weird place for 10 years, between pipes and platforms, and they have to decide which they want to be,” says Felten. “You can’t be everything to everybody.”


One example is new revenue generated by enterprise services, ranging from new internet of things use cases to new forms of indoor access infrastructure. “But the revenue might not go to traditional telcos,” says Bubley.  


The greatest opportunity is that infrastructure can be deployed in lots of ways, and not just by service providers, says Felten.  In fact, it likely is no longer possible to say with complete certainty when and where network infrastructure must be owned.


“If you are AT&T today, do you need to own your own infrastructure?” Felten asks. “Maybe owning the pipe is not strategic anymore.”


There is something nonsensical about owning lots of popular content and putting it behind a walled garden, when you’d really rather sell it to everyone. In other words, strategies that make sense for a major content owner (sell more things to everyone globally, on any network) are not necessarily those of a major connectivity provider (sell more things on my network, to my customers).


Dr. George Ford Talks About What Regulators Get Wrong

Telecom and Internet regulators often create policies that have effects opposite of what they intended. They want more competition and then create policies that lead to less competition. They want more investment in next-generation networks and produce less. Good intentions produce harmful policies.

Dr. George Ford, Phoenix Center chief economist, discusses a number of key examples, examining policies on competition, investment, network neutrality, broadband deployment, and sponsored data access. He will also discuss how measures of success can be very misleading.

What are the Biggest Opportunities and Threats in the Connectivity Business?

Dean Bubley and Benoit Felten tackle a variety of big telecom issues, lightning fast.

Where are the greatest opportunities and biggest threats to the connectivity business model? Where are the big new revenue sources; the new values to be monetized? Who are the new competitors and platforms? How do incumbents respond? What are the key technologies, and why? How do service, app, platform, and device suppliers maximize their chances of success? What does 5G, edge-computing, and IoT mean for the wholesale and connectivity players? How might thinking about owning or operating infrastructure change? 

Does Zero Rating Cause Price Increases?

Correlation is not causation, in communications markets or anywhere else.

A study conducted recently by epicenter.works reports that, between 2015 and 2016, “in markets where zero-rating offers had existed in both years, prices increased by two percent, whereas in markets with no zero-rating offers in both years, prices dropped by eight percent.”

Of course, methodology always matters. There is no practical way to compare prices across countries without picking some benchmark, whether that is the “lowest cost” retail price, the “average” price or some other common metric. So the report’s price changes refer only to changes in the lowest-priced tier, not all other tiers.

Also, such data compares only the tariffs, not the prices actually paid by most consumers, because consumption behavior is outside the analysis parameters. The point is that posted retail prices for specific products only matter if “most people” buy those products. Posted retail prices also matter to the degree that people actually pay those prices, and not some other promotional or bundled prices.

In markets where bundling is prevalent, it can be difficult to determine the actual price of internet access services.

Nor does the study consider price changes in markets that might plausibly be explained in some other way. It is perhaps an intuitive assumption that zero-rating and differentially-priced offers are more attractive where data volume is expensive. If that is the case, then retail prices in such markets are, almost by definition, “more expensive” than markets where data costs are lower.

It is a bit of a tautology: zero rating is offered where data costs are high, and therefore we find that markets with zero rating have higher data costs. In other words, prices in Portugal, Spain or Germany are high relative to France, Denmark or Sweden. It does not seem self-evident that zero rating changes those dynamics.


On the other hand, it seems logical enough that zero rating, if not offered by every major operator in a market, could increase customer loyalty, and thus confer some additional pricing power, which would again lead to higher prices where zero rating is offered.

“We assume our findings can be explained in part by the fact that zero-rating distorts the normal competition between IAS providers based on data volumes and speeds,” the researchers suggest.

Others might argue that retail packaging and pricing are not a “distortion” but part of the fundamental pricing and packaging backdrop. The analogy would be seeing the introduction of the Apple iPhone as a distortion of the smartphone market, or simply an innovation.


Zero rating may be correlated with mobile internet access price changes. But that does not mean zero rating causes prices to rise or fall.

Scripts, Coding or Autonomous Behavior?

Artificial intelligence includes a number of approaches, some more akin to “automation,” others more like self-learning that leads to system autonomous behavior. The intended benefits can be very practical, though.

Consider the alarm resolution process. Quite often, one fault generates multiple alerts on multiple systems, even when they have the same root cause. So an applied AI capability would recognize redundant alerts and take preventative action, while suppressing alerts related to a single fault, so the alerts do not cascade, said Bhanu Singh, OpsRamp SVP.
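The correlation idea described above can be sketched in a few lines; the field names and the "shared root cause" correlation rule are illustrative assumptions, not OpsRamp's implementation:

```python
from collections import defaultdict

def correlate(alerts: list[dict]) -> list[dict]:
    """Group raw alerts sharing a root cause into one incident each,
    recording how many redundant alerts were suppressed."""
    groups = defaultdict(list)
    for alert in alerts:
        groups[alert["root_cause"]].append(alert)
    return [
        {"root_cause": cause,
         "systems": sorted({a["system"] for a in group}),
         "suppressed": len(group) - 1}
        for cause, group in groups.items()
    ]

# One switch failure triggers alerts on three dependent systems.
raw = [
    {"system": "db", "root_cause": "switch-42-down"},
    {"system": "app", "root_cause": "switch-42-down"},
    {"system": "cache", "root_cause": "switch-42-down"},
    {"system": "billing", "root_cause": "disk-full"},
]
incidents = correlate(raw)
print(len(incidents))  # 2 incidents from 4 raw alerts
```

The operator sees two incidents instead of four alerts, and remediation can target the shared root cause rather than each symptom.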


Controversial though it might be, the key ultimately is autonomous behavior. The whole point is to avoid having humans code or create scripts, said Taly Dunevich, Ayehu global business development VP.


Also, contrary to some opinion, AI “does not require algorithms.” The systems should learn by themselves, without human intervention. Enterprise software systems these days are complex and highly dynamic; too complex for a limited number of humans to manage, said Frank Yue, KEMP Technologies solutions architect. “AI has to identify the problems, know what has to be done, and then do it,” Yue said.


There simply is too much data to make sense of.

“It doesn’t work if you only extrapolate from past events,” noted Will Houston, GAVS Technologies VP. “You have to auto-discover everything.”

“AIOps is about extracting actionable data insights and then applying those insights to information technology operations,” said Houston. “By 2023, 30 percent of large enterprises will use AI for IT operations, where today perhaps two percent do so.”

“AI used to be about rule-based systems, while machine learning is statistical,” said John Byrnes, SRI International senior computer scientist. “Both are used today.”

In a practical sense, AI use cases often are about automating existing processes such as handling trouble tickets, said Dunevich. In other cases, applied AI can be used to configure and maintain competing Wi-Fi networks, said Marcel Chenier, KodaCloud CTO.

Applied AI is diverse because it includes a range of intelligent capabilities related to autonomous infrastructure that “sense, think, act and learn,” said Katie Fritsch, HPE product marketing lead. Observation is what the sensors do. Learning is what the servers do when they look for patterns. Prediction is how the patterns are applied to identify abnormal behavior.
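One concrete reading of that observe-learn-predict loop is a baseline model flagging abnormal readings; the toy z-score detector below is my illustration of the pattern, assuming a simple CPU metric, not any HPE product:

```python
from statistics import mean, stdev

def detect_anomalies(history: list[float], new_samples: list[float],
                     threshold: float = 3.0) -> list[float]:
    """Learn a baseline (mean, stdev) from observed samples, then flag
    new readings deviating more than `threshold` standard deviations."""
    mu, sigma = mean(history), stdev(history)
    return [x for x in new_samples if abs(x - mu) > threshold * sigma]

cpu_history = [41, 40, 43, 39, 42, 41, 40, 42]   # learned "normal" behavior
print(detect_anomalies(cpu_history, [41, 43, 97]))  # [97]
```

Real AIOps systems use far richer models, but the division of labor is the same: sensors observe, servers learn the pattern, and prediction marks departures from it for action.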

As applied to information technology operations, experts say to start small, with simple processes. “Automate the easy things first,” said Dunevich. “Eliminate the smaller problems first,” advised Houston. That might mean using AI to “keep trucks in lanes” at first, and not moving to full autonomous driving, said Byrnes.

And people, not technology, are key parts of the journey, said Yue. “What is the incentive for the ops team if AI replaces their jobs?”


“If the ops team doesn’t trust the new solution, they won’t support it,” noted Paul Brittain, Metaswitch VP.

U.S. Consumers Still Buy "Good Enough" Internet Access, Not "Best"

Optical fiber always is pitched as the “best” or “permanent” solution for fixed network internet access, and if the economics of a specific...