Sunday, May 10, 2026

Neoclouds and CLECs

To some of us who were active in the competitive local exchange carrier market around the passage of the Telecommunications Act of 1996, neocloud providers such as CoreWeave, Nebius, and many others seem to present a market opportunity that is temporary, if potentially lucrative in the short term.


Where price arbitrage was the temporary CLEC opportunity, shortages of high-performance computing hardware (graphics processing units and other accelerators) are the opportunity for neocloud providers. In both cases, the market window is real but probably temporary.


By mandating that incumbent local exchange carriers (ILECs) unbundle their network elements and lease them to competitors at regulated wholesale rates, Congress effectively handed competitive local exchange carriers (CLECs) a business model: arbitrage the gap between the regulated wholesale price of network access and the retail price customers would pay.


But the mandated discounts ultimately ended, and the access market shifted to broadband delivered over owned facilities. The wholesale model effectively collapsed for most CLECs.


The generative artificial intelligence boom created excess demand for GPUs. 


So the neocloud model is structurally arbitrage:

  • Acquire scarce GPU capacity while supply is constrained

  • Resell that compute at a margin as “GPU as a service”

  • Build customer relationships before the incumbents close the gap.
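The arbitrage logic can be made concrete with a back-of-envelope payback calculation. All dollar figures, utilization rates, and the `payback_months` helper below are illustrative assumptions for the sketch, not data from any provider:

```python
# Toy unit economics for GPU rental arbitrage -- every number here is
# an illustrative assumption, not a figure from the post.

def payback_months(gpu_capex, hourly_price, utilization, opex_per_hour):
    """Months to recover the capital cost of one GPU at a given
    rental price, utilization rate, and hourly operating cost."""
    hours_per_month = 730
    margin_per_hour = hourly_price - opex_per_hour
    monthly_margin = margin_per_hour * hours_per_month * utilization
    return gpu_capex / monthly_margin

# Scarcity pricing: high hourly rates, nearly full clusters.
boom = payback_months(gpu_capex=30_000, hourly_price=4.00,
                      utilization=0.90, opex_per_hour=0.60)

# Normalized pricing: added hyperscaler capacity compresses both
# the hourly rate and utilization.
bust = payback_months(gpu_capex=30_000, hourly_price=1.50,
                      utilization=0.60, opex_per_hour=0.60)

print(f"payback under scarcity:  {boom:.1f} months")
print(f"payback after normalizing: {bust:.1f} months")
```

Under these assumed numbers, payback runs a bit over a year while scarcity holds, but stretches past six years once pricing normalizes, likely longer than the useful life of the hardware. That asymmetry is the whole structural risk.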


CoreWeave, for example, had a simple pitch:

  • We have H100s

  • We're GPU-native

  • We'll get you capacity faster and cheaper than AWS or Azure.


| Dimension | CLECs (1996–2002) | Neoclouds (2022–?) |
| --- | --- | --- |
| Enabling condition | Regulatory mandate opening ILEC networks | GPU supply shock creating hyperscaler rationing |
| Capital model | Debt-heavy buildout of switching infrastructure | Equity/debt-heavy GPU cluster acquisition |
| Competitive advantage | Access to regulated wholesale inputs | Early access to scarce NVIDIA allocations |
| Customer value prop | Cheaper/faster local access | Faster GPU availability, simpler pricing |
| Incumbent response | Network upgrade, litigation, lobbying | Massive capex, custom silicon, long-term NVIDIA contracts |
| Structural vulnerability | Unbundling obligations could be reversed | GPU scarcity is inherently temporary |
| Timeline pressure | ~5 years before model collapsed | Likely 3–6 years before hyperscalers close gap |


Of course, markets eventually will normalize:

  • Nvidia has boosted production of H100s and is ramping the B200/B300 series

  • Hyperscalers have developed custom silicon (Google's TPU v5, AWS's Trainium 2 and Inferentia, Microsoft's Maia, and Meta's MTIA)

  • Hyperscaler capex will be hard to match over the long term

  • The software stack advantage will benefit AWS, Google Cloud, and Azure

  • Customer lock-in dynamics favor hyperscalers.


Still, history likely rhymes rather than repeats.


The neocloud endgame probably looks similar to the CLEC industry in many ways:

  • Most will struggle as GPU spot prices normalize and hyperscaler capacity floods the market (2025–2027)

  • A few might be acquired

  • One or two may find durable niches

  • But hyperscalers likely will dominate the enterprise AI compute market by 2028–2030.


The CLEC parallel is perhaps a reminder that cyclical scarcity is not a long-term structural advantage.


The neoclouds that survive will be those that use the current window not just to sell GPUs, but to build something (software, relationships, operational expertise or specialized capability) that persists after the scarcity evaporates. 


That will be hard to do. 


As investors, we might make some money on neocloud providers in the near term. But the CLEC experience might temper enthusiasm for some of us.

