Network slicing, enabled by network virtualization in the broad sense, is directly relevant to the performance requirements of edge computing and edge-native apps.
Edge-native applications, as their name suggests, are applications that require the unique characteristics provided by edge computing to function satisfactorily, or in some cases to function at all. These applications typically rely on the low latency, locality information, or reduced cost of data transport that edge computing provides in comparison to the centralized cloud.
One practical issue is deciding when edge computing is the preferred solution. In addition to the typical scheduling attributes such as processor, memory, operating system, and occasionally some simple affinity/anti-affinity rules, edge workloads may need to specify some or all of the following (a placement sketch follows the list):
• Geolocation
• Latency
• Bandwidth
• Resilience and/or risk tolerance (e.g., how many nines of uptime)
• Data sovereignty
• Cost
• Real-time network congestion
• Requirements or preferences for specialized hardware (GPUs, FPGAs)
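To make this concrete, here is a minimal sketch in Python of how an edge scheduler might filter and rank candidate sites against such a request. The `EdgeSite` and `WorkloadRequest` types and all their fields are hypothetical stand-ins for these attributes; real schedulers expose them through their own APIs.

```python
from dataclasses import dataclass, field

@dataclass
class EdgeSite:
    """Hypothetical descriptor for a candidate edge site."""
    name: str
    region: str              # geolocation / data-sovereignty zone
    latency_ms: float        # measured RTT to the target users
    bandwidth_mbps: float    # available bandwidth at the site
    uptime_nines: int        # resilience, e.g. 4 -> 99.99%
    cost_per_hour: float     # cost of capacity at this site
    congested: bool          # real-time network congestion flag
    hardware: set = field(default_factory=set)  # e.g. {"gpu", "fpga"}

@dataclass
class WorkloadRequest:
    """Hypothetical placement constraints for an edge workload."""
    allowed_regions: set
    max_latency_ms: float
    min_bandwidth_mbps: float
    min_uptime_nines: int
    required_hardware: set

def place(req: WorkloadRequest, sites: list) -> EdgeSite | None:
    """Filter sites on hard constraints, then pick the cheapest
    uncongested site, falling back to congested ones if needed."""
    feasible = [
        s for s in sites
        if s.region in req.allowed_regions           # data sovereignty
        and s.latency_ms <= req.max_latency_ms       # latency
        and s.bandwidth_mbps >= req.min_bandwidth_mbps
        and s.uptime_nines >= req.min_uptime_nines   # resilience
        and req.required_hardware <= s.hardware      # specialized hardware
    ]
    # Prefer uncongested sites; break ties on cost.
    feasible.sort(key=lambda s: (s.congested, s.cost_per_hour))
    return feasible[0] if feasible else None
```

A production scheduler would likely treat some of these attributes as soft preferences to score rather than hard constraints to filter on, but the shape of the decision is the same.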
One important core-network implication is that many of those attributes (geolocation, latency, bandwidth, resilience, cost, and real-time congestion) are precisely the issues network slicing is designed to address.
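The correspondence can be made explicit: the constraints a scheduler filters on are the parameters a slice guarantees. A small sketch, reusing the hypothetical `WorkloadRequest` above and using purely illustrative thresholds, maps a workload's constraints onto the standard 3GPP slice categories:

```python
def select_slice_type(req: WorkloadRequest) -> str:
    """Map workload constraints onto a 3GPP slice category.
    Thresholds here are illustrative, not normative."""
    if req.max_latency_ms <= 10:
        return "URLLC"   # ultra-reliable low-latency communication
    if req.min_bandwidth_mbps >= 100:
        return "eMBB"    # enhanced mobile broadband
    return "mMTC"        # massive machine-type communication
```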
The State of the Edge report notes that edge computing grows out of the legacy of content delivery networks, which tells you much about its potential advantages and use cases: application acceleration and lower latency are where edge computing adds value.
“There are lots of edges, but the edge we care about today is the edge of the last mile network,” the report authors suggest.