It is debatable how much changes in artificial intelligence compute workloads (training versus inference, for example) create fundamentally new requirements for wide area connectivity, beyond the existing demand for low latency and high bandwidth.
At a high level, we might argue that model training depends on high throughput, while inference depends more on low latency. Similarly, we might argue that model training relies on dense, locally-connected processors inside a single building or a cluster of buildings, while inference can be more widely distributed.
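A rough calculation suggests why training stays tightly coupled. The sketch below compares how long one gradient synchronization step might take over a local cluster link versus a wide area link; the model size and link speeds are illustrative assumptions, not figures from any vendor.

```python
# Back-of-envelope sketch: why gradient synchronization keeps training local.
# All numbers below are illustrative assumptions, not measurements.

params = 70e9            # assumed model size: 70B parameters
bytes_per_param = 2      # fp16/bf16 gradients
grad_bytes = params * bytes_per_param  # ~140 GB exchanged per optimizer step
                                       # (a ring all-reduce moves ~2x this; ignored here)

local_link = 400e9 / 8   # assumed 400 Gbps intra-cluster link, in bytes/s
wan_link = 10e9 / 8      # assumed 10 Gbps wide area link, in bytes/s

print(f"sync over local link: {grad_bytes / local_link:.1f} s per step")
print(f"sync over WAN link:   {grad_bytes / wan_link:.1f} s per step")
# Roughly 2.8 s versus 112 s per step: over a WAN, synchronization time
# dwarfs compute time, which is why training clusters stay tightly coupled.
```

Under these assumptions, stretching a training cluster across a wide area link makes synchronization, not computation, the bottleneck.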
The issue is how much connectivity decisions could be affected, aside from the overall emphasis on high bandwidth and low latency that already exists to support cloud and distributed computing.
Some will argue that model training might always require a specialized architecture optimized for very high bandwidth. Lumen Technologies, for example, has a vested interest in making such an argument.
And, to be sure, the shift to inference ought to move architectures and requirements from "compute-centric" (focusing on raw math speed) to "data-centric" (focusing on moving data efficiently).
Still, it remains unclear how much the fundamental architecture, focused on both high bandwidth and low latency, could be affected. Already, some would note that memory capacity and memory bandwidth become more important in inference operations.
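One way to see why memory looms so large is to estimate the key-value cache a transformer accumulates while serving a long conversation. The dimensions below are assumptions for a generic large model, not figures for any particular system.

```python
# Sketch: key-value cache growth during inference (illustrative assumptions).
layers = 80              # assumed transformer depth
kv_heads = 8             # assumed number of key/value heads
head_dim = 128           # assumed dimension per head
bytes_per_value = 2      # fp16

# Each generated token stores one key and one value vector per layer.
kv_bytes_per_token = 2 * layers * kv_heads * head_dim * bytes_per_value

context_tokens = 128_000
cache_gb = kv_bytes_per_token * context_tokens / 1e9
print(f"{kv_bytes_per_token / 1e3:.0f} KB per token, "
      f"{cache_gb:.0f} GB for a {context_tokens:,}-token context")
# ~328 KB per token and ~42 GB per long-context session: serving capacity
# is often bounded by memory, not arithmetic throughput.
```

Under these assumptions, a single long-context session consumes tens of gigabytes of fast memory, which is the "data-centric" pressure the shift to inference creates.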
Overall, when inference is the driver, the network is no longer just a pipe for moving datasets; it becomes a live extension of the model's memory and reasoning path. And while innovations inside data centers are coming (optical interconnects replacing electrical ones), the more consequential wide area change might be the physical media itself rather than the architecture.
Hollow-core fiber, for example, is said to carry signals about 47 percent faster than conventional glass-core fiber, since light travels faster through air than through solid glass. The gain is chiefly in latency rather than in throughput as such.
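A quick propagation-delay calculation shows what that speed difference is worth over distance. The refractive index values below are standard approximations, and the distances are arbitrary.

```python
# Sketch: one-way propagation delay, glass-core vs hollow-core fiber.
C = 299_792_458          # speed of light in vacuum, m/s

n_glass = 1.47           # typical refractive index of a silica fiber core
n_hollow = 1.0           # light in a hollow (air) core travels near vacuum speed

for km in (100, 1_000, 5_000):
    d = km * 1_000                       # distance in meters
    t_glass = d / (C / n_glass) * 1e3    # one-way delay in milliseconds
    t_hollow = d / (C / n_hollow) * 1e3
    print(f"{km:>5} km: glass {t_glass:.2f} ms, hollow {t_hollow:.2f} ms, "
          f"saving {t_glass - t_hollow:.2f} ms one way")
# Over 1,000 km the one-way saving is about 1.6 ms: meaningful for
# latency-sensitive inference traffic, irrelevant to raw capacity.
```

Note that a refractive index of 1.47 implies light in glass moves at about 68 percent of vacuum speed, which is where the roughly 47 percent speed advantage of an air core comes from.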
So perhaps the most important wide area networking change for interconnects between data centers is new physical media, rather than architectural change as such.