CoreWeave appears to be the first data center operator to deploy Spectrum-XGS Ethernet, an Nvidia term for “scale-across” capability linking remote data centers so they can operate as though all resources were locally present.
The innovation is a third approach to AI computing that complements existing “scale-up” (more powerful processors) and “scale-out” (more processors at the same location) strategies.
Spectrum-XGS Ethernet aims to provide:
Distance-adaptive algorithms that automatically adjust network behavior based on the physical distance between facilities
Advanced congestion control that prevents data bottlenecks during long-distance transmission
Precision latency management to ensure predictable response times
End-to-end telemetry for real-time network monitoring and optimization
“Spectrum-XGS Ethernet nearly doubles the performance of the Nvidia Collective Communications Library,” the company says.
At least in principle, Spectrum-XGS Ethernet could allow data center operators to build smaller facilities that do not strain local power grids.
The dramatic rise in AI workloads is driving network traffic inside and between data centers to surge well above historical averages, with projections of more than 30% annual growth for AI-related data traffic, versus a long-term pre-AI rate of roughly 20-30%. Surveys indicate that within two to three years, more than half of data center operators expect AI workloads to dominate inter-facility bandwidth needs, outpacing even traditional cloud computing. This is prompting a shift toward scalable fiber-optic connectivity and advanced network fabrics that enable fast, reliable data transfer at unprecedented scale.
Demand Growth Estimates
Analyst estimates highlight explosive growth:
By 2030, global data center capacity is expected to increase 2.5-fold, and about 70% of data center capacity will need to support advanced AI workloads.
Demand for AI-ready data center capacity is projected to rise at an average rate of 33% annually between 2023 and 2030.
By 2025, AI workloads could make up nearly 30% of all data center traffic, with AI inference and training driving massive “east-west” data flows between facilities, rather than traditional “north-south” patterns.
Operators are increasing investment in high-capacity, low-latency WAN and fiber interconnections, with single-mode fiber and optical transceivers, often at 800 Gbps or higher, becoming the norm for these links.
And such “AI data center to AI data center” connections are projected by some to dominate data center networking demand within five years.
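The estimates above can be cross-checked with compound-growth arithmetic. Note that the implied 2023 AI-ready share below is derived from the quoted figures, not stated in any of the cited analyses.

```python
# Cross-check the analyst figures over the 7-year window 2023 -> 2030.
ai_growth = 1.33 ** 7      # AI-ready capacity at 33% annual growth
total_growth = 2.5         # total capacity expected to grow 2.5-fold

print(f"AI-ready capacity multiple by 2030: {ai_growth:.1f}x")  # ~7.4x

# If ~70% of 2030 capacity is AI-ready, the 2023 AI-ready share
# implied by these growth rates is:
implied_2023_share = (0.70 * total_growth) / ai_growth
print(f"Implied 2023 AI-ready share: {implied_2023_share:.0%}")  # ~24%
```

In other words, the figures are mutually consistent only if AI-ready capacity starts from a minority share of today's fleet, which matches the picture of AI workloads growing several times faster than overall capacity.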
Projected Data Center Network Demand