Friday, February 15, 2019

66% of North American IT Pros Say They Use SD-WAN, or Will, by End of 2020

Software-defined WAN (SD-WAN) will be used at 66 percent of surveyed North American companies by the end of 2020, a survey by IHS Markit finds.

Companies deploying SD-WAN use over 50 percent more bandwidth than those that have not deployed it, the survey suggests. Their bandwidth needs are also growing at twice the rate of companies using traditional WANs.

The first wave of SD-WAN deployments focused on cost reduction, and cost clearly still matters: survey respondents indicate their annual cost per megabit per second is 30 percent lower, and that it is declining faster than in traditional WAN deployments.

SD-WAN solutions not only reduce transport and WAN costs, but also help enterprises create a fabric for the multi-cloud.



51% of North American Firms Surveyed Will Use Hybrid Cloud in 2019

In 2019, 51 percent of North American network professionals surveyed by IHS Markit say they will use hybrid cloud. Some 37 percent will adopt multi-cloud for application delivery.

Enterprise IT architectures and consumption models are changing, from servers and applications placed at individual enterprise sites, to a hybrid-cloud model where centralized infrastructure-as-a-service (IaaS) complements highly utilized servers in enterprise-operated data centers, IHS Markit says.

Respondents also suggested that hybrid cloud is a stepping stone to multi-cloud.

Over time, certain functions requiring low latency will migrate back to the enterprise edge, residing on universal customer premises equipment (uCPE) and other shared compute platforms. This development is still in its infancy, though.

Performance is a top concern, and enterprises are not only adding more WAN capacity and redundancy, but also adopting SD-WAN, IHS Markit says. The primary motivation for deploying SD-WAN is to improve application performance and simplify WAN management.

Bandwidth consumption continues to rise. Companies expect to increase provisioned wide-area network (WAN) bandwidth by more than 30 percent annually across all site types.
Data backup and storage is the leading reason for traffic growth, followed by cloud services.
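
At that pace, capacity compounds quickly. A minimal sketch of the arithmetic, assuming a steady 30 percent annual growth rate from a hypothetical 100 Mbps site:

```python
import math

# Compound growth of provisioned WAN bandwidth, assuming a steady
# 30 percent annual increase (the survey says "more than 30 percent").
bandwidth_mbps = 100.0  # hypothetical starting point for one site

for year in range(1, 6):
    bandwidth_mbps *= 1.30
    print(f"Year {year}: {bandwidth_mbps:.0f} Mbps")

# At 30 percent annual growth, capacity doubles roughly every 2.6 years.
print(f"Doubling time: {math.log(2) / math.log(1.3):.1f} years")
```

In other words, provisioned bandwidth roughly doubles every two to three years at the surveyed growth rate.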

More Support for U.S. Rural Broadband

The U.S. Federal Communications Commission has awarded $1.5 billion in support for rural internet access services, intended to upgrade some 713,000 locations, at an average subsidy of $2,103 per location.

As is always the case, the small percentage of very rural locations has per-line capital investment costs far above those of urban and suburban locations.

The most isolated 250,000 U.S. homes, out of the seven million that in 2010 lacked fixed network internet access (or service meeting a minimum 4 Mbps downstream speed), represent about 3.5 percent of those locations but require 57 percent of the capital needed to connect all seven million.

“The highest-gap 250,000 housing units account for $13.4 billion of the total $23.5 billion investment gap,” an FCC paper has estimated.
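
Those shares follow directly from the FCC's figures. A quick check of the arithmetic:

```python
# Checking the shares implied by the FCC Broadband Availability Gap figures.
total_homes = 7_000_000    # housing units below the availability target
isolated_homes = 250_000   # the most isolated housing units
total_gap_usd = 23.5e9     # total investment gap
isolated_gap_usd = 13.4e9  # gap attributable to those isolated homes

print(f"Share of locations: {isolated_homes / total_homes:.1%}")      # ~3.6%
print(f"Share of capital:   {isolated_gap_usd / total_gap_usd:.1%}")  # ~57.0%

# The rural support award works out to a similar per-location figure:
award_usd, locations = 1.5e9, 713_000
print(f"Average subsidy: ${award_usd / locations:,.0f} per location")  # ~$2,100
```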


“Our analysis indicates that there are seven million housing units (HUs) without access to terrestrial broadband infrastructure capable of meeting the National Broadband Availability Target of 4 Mbps download and 1 Mbps upload,” the FCC said in its Broadband Availability Gap technical paper.

Created in support of the FCC’s National Broadband Plan, the document says simply that “because the total costs of providing broadband service to those seven million HUs exceed the revenues expected from providing service, it is unlikely that private capital will fund infrastructure.”

Cost and density are inversely related, the FCC noted. The density pattern follows a basic Pareto rule: 80 percent of the population lives in perhaps 20 percent of the land area.

Source: FCC

Network Slicing and Native Edge Apps

Network slicing, enabled by network virtualization in a broad sense, might be directly related to performance requirements of edge computing and native edge apps.

Edge-native applications, as their name suggests, are applications which require the unique characteristics provided by edge computing to function satisfactorily, or in some cases to function at all. These applications will typically rely on the low latency, locality information or reduced cost of data transport that edge computing provides in comparison to the centralized cloud.

One practical issue is deciding when edge computing is the preferred solution. In addition to typical scheduling attributes, such as processor, memory and operating system requirements, and occasionally some simple affinity/anti-affinity rules, edge workloads might need to specify some or all of the following (a hypothetical placement filter is sketched after the list):
• Geolocation
• Latency
• Bandwidth
• Resilience and/or risk tolerance (i.e., how many 9s of uptime)
• Data sovereignty
• Cost
• Real-time network congestion
• Requirements or preferences for specialized hardware (GPUs, FPGAs)
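
As a sketch of how a scheduler might act on those attributes, here is a toy placement filter. Every class, field and threshold below is hypothetical, not drawn from any particular orchestrator:

```python
from dataclasses import dataclass

@dataclass
class EdgeSite:
    # A hypothetical edge location advertised to a scheduler.
    region: str            # for data-sovereignty matching
    rtt_ms: float          # measured round-trip latency to the client area
    bandwidth_mbps: float  # available capacity
    uptime_nines: int      # e.g., 4 means 99.99% availability
    cost_per_hour: float
    congested: bool        # real-time congestion flag
    gpus: int              # specialized hardware on hand

@dataclass
class WorkloadSpec:
    # Placement constraints an edge-native app might declare.
    region: str
    max_rtt_ms: float
    min_bandwidth_mbps: float
    min_uptime_nines: int
    max_cost_per_hour: float
    needs_gpu: bool

def candidate_sites(spec: WorkloadSpec, sites: list[EdgeSite]) -> list[EdgeSite]:
    """Return the sites satisfying every declared constraint, cheapest first."""
    ok = [
        s for s in sites
        if s.region == spec.region
        and s.rtt_ms <= spec.max_rtt_ms
        and s.bandwidth_mbps >= spec.min_bandwidth_mbps
        and s.uptime_nines >= spec.min_uptime_nines
        and s.cost_per_hour <= spec.max_cost_per_hour
        and not s.congested
        and (s.gpus > 0 or not spec.needs_gpu)
    ]
    return sorted(ok, key=lambda s: s.cost_per_hour)
```

A real scheduler would weigh and score these attributes rather than hard-filter on them, but the shape of the request is the point.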

One important core network implication is that many of those attributes (geolocation, latency, bandwidth, resilience, cost and real-time congestion performance) are precisely the issues network slicing addresses.

The State of the Edge report notes that edge computing grows out of the legacy of content delivery networks, which tells you much about the potential advantages and use cases: application acceleration and lower latency are where edge computing adds value.

“There are lots of edges, but the edge we care about today is the edge of the last mile network,” the report authors suggest.    

Living at the Infrastructure Edge

Business author Will Willis (not talking about edge computing) penned the phrase “if you’re not living on the edge, you’re taking up too much space.”

One of the complexities of “edge computing” is that, in a factual sense, every computing resource lives on the edge. But clients of particular resources might be located locally or far away. And that is the issue: “computing on the edge,” not “living at the edge.”

Every hyperscale data center, smartphone, smart speaker, personal computer, smartwatch and industrial sensor “lives on the edge.” Each is fixed in space, but connects over some access link to core networks and the internet. Everything is physically at the edge.

The exception, some might argue, is devices communicating over a satellite link. In those scenarios, the “access” link and the “wide area network” connection often are indistinguishable. We can argue about whether the repeater (the satellite) sits in the core, or is simply another network edge.

But the issue is distributed computing operations: computing “somewhere else than where the device is.” The “cloud,” remember, is an abstraction, not a place.

“Edge computing places high-performance compute, storage and network resources as close as possible to end users and devices,” the State of the Edge report says. The advantages include reduced WAN transport costs and lower latency.

And latency is the key issue. Human-scale apps can have response tolerances as long as a few seconds. It’s mostly an inconvenience.

Annoying, yes, but not life-threatening. In comparison, for applications that operate at machine scale, where latency is measured in microseconds, even small delays in data transmission can have severe and even fatal consequences.

To be sure, debate exists about whether computing on an edge device directly can be considered “edge computing.”  Though processing can and will occur wherever it is best placed for any given application, it is probably the general understanding that edge computing implies that resources on the device edge are as close as you can possibly get to the end user, both physically and logically.

The “infrastructure edge” refers to IT resources positioned on the network operator or service provider side of the last mile network, such as at a cable headend or the base of a cell tower.

In such cases, the primary building blocks are edge data centers, which are typically placed at five-mile to 10-mile intervals in urban and suburban environments.

The difference between the device edge and the infrastructure edge might be characterized as “zero hops” (computing on the device itself) versus “one short hop,” with total round-trip latency of a few milliseconds.

The infrastructure edge, in many cases, will include an access edge directly connected to clients, as well as an aggregation edge two network hops from any end user, which both simplifies connection to other cloud resources and provides more powerful analytical capability that is still close to the device edge.

The access edge might be thought of as providing real-time processing tasks, while the aggregation edge handles less-immediate but higher-order analytical tasks (such as trend analysis).
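
To make the hop distinction concrete, here is a back-of-the-envelope latency budget. The per-hop delays are illustrative assumptions, not measurements:

```python
# Illustrative round-trip budgets for the placement options discussed above.
LAST_MILE_MS = 2.0    # client to access edge ("one short hop")
AGGREGATION_MS = 3.0  # access edge to aggregation edge (second hop)
WAN_CLOUD_MS = 40.0   # aggregation edge to a distant cloud region

placements = {
    "device edge (zero hops)": 0.0,
    "access edge": LAST_MILE_MS,
    "aggregation edge": LAST_MILE_MS + AGGREGATION_MS,
    "centralized cloud": LAST_MILE_MS + AGGREGATION_MS + WAN_CLOUD_MS,
}

for where, rtt in placements.items():
    print(f"{where}: ~{rtt:.0f} ms round trip")
```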

Individual edge data centers might typically feature 50 to 150 kilowatts of capacity. Hyperscale data centers can feature scores of megawatts, essentially three orders of magnitude, or roughly 1,000 times, greater capability.

5G, IoT, Edge Computing, SDN, Network Slicing and AI All Are Related

It is difficult to separate 5G from edge computing, network virtualization, the internet of things and applied forms of artificial intelligence, because the value created by each hinges on the value supplied by the other advances.

If one believes that the big value drivers associated with 5G will come from enterprise use cases, rather than from simply adding network capacity, then the internet of things stands at the forefront of the opportunities. Many of the potential use cases hinge on ultra-low latency, which drives the need for distributed edge computing and analytics embedding AI, plus lower-cost, more flexible core networks optimized for particular computing tasks.

Consider that edge computing infrastructure might represent some seven million locations, according to one recent IDC estimate. For perspective, there are nearly five million mobile cell sites active globally; assume four million of them are macrocell sites, and thus candidates for an edge data center.

Though deployment would not be equal everywhere, one edge data center at every macrocell site implies a universe of four million sites. If you assume small cells are placed in areas of high density and heavy communications usage, then one million to two million additional sites are conceivable, for a potential total of five million to six million locations.

Those are “infrastructure edge” facilities, arguably owned by mobile or telco-associated entities. In addition, many logical edge computing facilities will be located on enterprise premises, which would add a couple million to several million more sites.

Another way to look at it is by the population of devices needing to communicate with an edge data center. Assume a population of five billion to six billion devices, and assume each edge data center can support peak use by 1,000 devices. Then you would “need” five million to six million edge data centers.

Supporting only 1,000 devices concurrently might seem an absurdly low number but, by analogy, 72 percent of traditional data center traffic remains inside the data center, while data center-to-user traffic has been only about 15 percent of the total. The point is that a flood of sensor data might impose greater processing loads than consumer web or video does.
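
The estimate itself is simple division; a minimal sketch using the assumptions above:

```python
# Device-population estimate of the edge data center universe.
# Both inputs are assumptions carried over from the text.
devices = 5e9                # five billion connected devices (low end)
peak_devices_per_dc = 1_000  # assumed concurrent capacity of one edge DC

print(f"Edge data centers needed: {devices / peak_devices_per_dc:,.0f}")
# -> Edge data centers needed: 5,000,000
```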

That would not include any edge data aggregation centers, where non-real-time analytics are conducted, and from which traffic bound for remote data centers and locations originates.

The whole point is that edge data centers, working with 5G, would be necessary to ensure round-trip latency of a few milliseconds; core network virtualization to contain costs; network slicing to ensure WAN performance; plus applied artificial intelligence (machine learning at the edge, deep learning elsewhere in the cloud or at edge aggregation centers).

Source: Cisco

Network Slicing at an Edge Data Center Requires New Switches or Routers, Plus Orchestration

If you are going to use or support software-defined network services using network slicing, whether at edge data centers or hyperscale data centers, you need a new orchestration capability and new routers or switches.

The Kaloom Software Defined Fabric is a software solution and orchestration function for networking white boxes in a software defined network context.

It promises a twofold performance improvement, a sevenfold increase in per-server throughput and a five- to 10-fold reduction in latency between virtual machines in the same rack.

Versions for hyperscale and edge data centers are supported.  As a programmable data center fabric, the solution offers integrated routing and switching, plus P4 platform tools to support programmers.

The Kaloom SDF supports software-defined networks and network slicing, and can run several virtual data centers with different network services, Kaloom says.

“Kaloom SDF features advanced self-forming and self-discovery capabilities, zero touch provisioning (ZTP) of the virtual networking and virtual components, and automated software upgrades, thus minimizing human intervention and errors while saving time and effort,” the company says.

Network provisioning time is reduced from several days to minutes and is automatically updated during runtime, the company says.

A physical data center can be partitioned into multiple independent and fully isolated virtual data centers (vDCs).

Each vDC operates with its own Virtual Fabric (vFabric), which can host millions of IPv4-based or IPv6-based tenant networks.

Additional compute and storage resources can be dynamically assigned or removed from a vDC, thus creating a flexible and elastic pool of network resources suitable for network slicing, Kaloom says.
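
Kaloom’s announcement does not include API details, so the following is only a toy model of that partitioning idea, carving a fixed physical pool into isolated, elastically resizable vDCs; every name in it is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class VirtualDataCenter:
    # A hypothetical, fully isolated slice of the physical fabric.
    name: str
    servers: int

class PhysicalDataCenter:
    """Toy model: partition a fixed server pool into elastic vDCs."""

    def __init__(self, total_servers: int):
        self.free_servers = total_servers
        self.vdcs: dict[str, VirtualDataCenter] = {}

    def create_vdc(self, name: str, servers: int) -> VirtualDataCenter:
        if servers > self.free_servers:
            raise ValueError("not enough free capacity")
        self.free_servers -= servers
        self.vdcs[name] = VirtualDataCenter(name, servers)
        return self.vdcs[name]

    def resize_vdc(self, name: str, delta: int) -> None:
        # Dynamically grow (delta > 0) or shrink (delta < 0) a slice.
        vdc = self.vdcs[name]
        if delta > self.free_servers or vdc.servers + delta < 0:
            raise ValueError("resize out of range")
        self.free_servers -= delta
        vdc.servers += delta

# Usage: carve two isolated slices, then grow one at runtime.
dc = PhysicalDataCenter(total_servers=96)
dc.create_vdc("tenant-a", 32)
dc.create_vdc("tenant-b", 16)
dc.resize_vdc("tenant-a", +8)
print(dc.free_servers)  # 40 servers remain unassigned
```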

Kaloom SDF is a pre-tested and certified software solution for networking white boxes from Accton and Delta.


