Sunday, May 17, 2026

AI Bottlenecks Might Be Shifting

The artificial intelligence buildout has reached the networking layer. Cisco's latest quarterly financial report offers a case in point.


For three years, the capital expenditure story has been about GPUs. NVIDIA's data center revenue dominated the narrative. Hyperscalers committed hundreds of billions to compute clusters. But a GPU cluster without networking infrastructure is a warehouse full of processors that cannot talk to each other. Training runs that span tens of thousands of GPUs require switching fabrics that move data between them at speeds measured in terabits per second. Every additional GPU multiplies the networking demand, because communication overhead scales superlinearly while compute scales only linearly with the number of GPUs.
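The scaling asymmetry above can be sketched with a toy model. This is an illustration, not a description of any real switching topology: it assumes fully meshed all-to-all traffic, where aggregate compute grows linearly with GPU count but the number of distinct GPU-to-GPU paths grows quadratically.

```python
# Toy model of cluster scaling (illustrative assumption: all-to-all
# traffic, not any specific vendor's fabric design). Compute capacity
# grows linearly with GPU count; pairwise communication paths grow
# quadratically.

def compute_units(gpus: int) -> int:
    """Aggregate compute scales linearly with the GPU count."""
    return gpus

def pairwise_links(gpus: int) -> int:
    """Distinct GPU-to-GPU paths in a full mesh: n * (n - 1) / 2."""
    return gpus * (gpus - 1) // 2

for n in (1_000, 10_000, 100_000):
    print(f"{n:>7} GPUs: compute x{compute_units(n):,}, "
          f"paths {pairwise_links(n):,}")
```

Growing the cluster 100x grows compute 100x but the path count roughly 10,000x, which is why real fabrics use hierarchical switch topologies and collective algorithms rather than full meshes; the directional point stands either way.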


The AI infrastructure value chain is filling in a predictable order. Compute came first: NVIDIA's data center revenue grew from $15 billion in fiscal 2023 to nearly $200 billion in fiscal 2026, a roughly thirteenfold increase in three years. Networking is arriving now: Cisco's AI infrastructure run rate just quadrupled year over year. Storage will follow.


The bottlenecks are moving down the stack, from processors to the physical infrastructure that connects them. The backplane is where the bottleneck lives.


