One logical question about the neocloud segment of the artificial intelligence compute value chain is how sustainable that role will prove over time, as some amount of consolidation occurs.
History suggests a new and sustainable role in the AI computing value chain could emerge.
Hardware and platform layers tend to consolidate first, which might suggest to some that neocloud service providers (as infrastructure) could consolidate and eventually be absorbed by hyperscalers.
Middleware and application layers, on the other hand, repeatedly re-fragment (databases, runtimes, ML frameworks).
But there also is an argument to be made that intermediation layers survive. Just as time-sharing bureaus, value-added resellers and managed service providers emerged as sustainable niches in earlier eras, neoclouds could become a permanent part of the value chain, providing customers (hyperscalers, for example) with merchant compute and specialized capacity.
Hyperscalers dominate integrated platforms, but merchant compute and specialized capacity might be sustainable positions in the value chain.
In every computing era, the dominant platform provider tries to absorb adjacent layers. But a neutral or merchant layer re-emerges when customers value flexibility, neutrality, or pricing innovation.
That pattern strongly suggests neocloud is not an anomaly, even if there are business reasons the hyperscalers providing “AI compute as a service” might prefer the role be limited.
For starters, to the extent there are supply constraints for graphics processing units, neoclouds compete for that supply, and reduce hyperscaler leverage over chip vendors.
Neoclouds also can expose:
High gross margins on certain workloads
Cross-subsidies inside hyperscaler pricing
Arbitrage opportunities hyperscalers don’t want visible
For the hyperscalers, the absence of neoclouds strengthens the “buy from us; there is no alternative” positioning. Without neocloud alternatives, customers have fewer opportunities to ask “why is this cheaper elsewhere?”
So there is some logic for hyperscalers to absorb, starve, or outflank neoclouds.
On the other hand, there are structural reasons an independent neocloud role persists. Hyperscalers are bad at merchant compute, one might argue.
Hyperscalers prefer integrated platforms, higher-margin managed offerings and predictable enterprise revenue. They are not optimized for, and generally do not prefer, merchant sales of raw compute on flexible, negotiable terms.
Even if hyperscalers can do neocloud-style offerings, they often won’t, because doing so:
cannibalizes higher-margin SKUs
disrupts enterprise sales narratives
complicates investor messaging
introduces volatile revenue sources.
On the demand side, customers (including the hyperscalers themselves) want a neutral compute layer that supports multi-cloud capabilities, without a “platform” agenda. There also are cost and balance sheet advantages, since renting shifts capital expenditure to operating expense.
Neoclouds might also offer faster access to new silicon and more flexible or negotiable terms.
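As a rough illustration of the capex-to-opex point above, the sketch below compares owning accelerators (an upfront purchase amortized over a useful life, plus operating costs) with renting equivalent capacity by the hour from a neocloud. Every figure in it (unit price, hourly rate, utilization, useful life) is a hypothetical placeholder, not market data.

```python
# Rough, hypothetical comparison of owning vs. renting AI accelerators.
# All numbers below are illustrative placeholders, not market quotes.

def annual_cost_owned(unit_price, units, useful_life_years, annual_opex_per_unit):
    """Straight-line depreciation plus operating cost (power, space, staff)."""
    depreciation = (unit_price * units) / useful_life_years
    return depreciation + annual_opex_per_unit * units

def annual_cost_rented(hourly_rate, units, utilization):
    """Pay-as-you-go rental for the hours actually used."""
    hours_per_year = 8760
    return hourly_rate * units * hours_per_year * utilization

if __name__ == "__main__":
    owned = annual_cost_owned(unit_price=30_000, units=1_000,
                              useful_life_years=4, annual_opex_per_unit=4_000)
    rented = annual_cost_rented(hourly_rate=2.50, units=1_000, utilization=0.60)
    print(f"Owned (capex amortized): ${owned:,.0f} per year")
    print(f"Rented (opex):           ${rented:,.0f} per year")
```

The specific numbers matter less than the structure: ownership is a fixed commitment that only pays off at high, sustained utilization, while rental scales the expense with actual usage.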
In terms of value chain positioning, the hyperscalers will control the integrated platform layer.
The value for their customers will include convenience, integration and trust.
The neoclouds, on the other hand, operating a merchant compute layer, will provide capacity arbitrage, specialized hardware and price-performance leadership. The value for their customers is raw compute, predictable economics and speed to deployment.
| Era | Hardware, Infrastructure | Systems, Platform Layer | Operating System | Middleware, Runtime | Applications | Services, Intermediation |
|---|---|---|---|---|---|---|
| Mainframe (1960s–1970s) | Vertically integrated mainframes (IBM-dominated) | Proprietary system architectures | Proprietary (IBM OS/360, etc.) | Embedded in OS | Enterprise custom apps | Systems integrators, time-sharing bureaus |
| Minicomputer (1970s–1980s) | DEC, HP, Data General | Vendor-specific platforms | UNIX variants, VMS | Early databases, transaction monitors | Departmental apps | VARs, integrators |
| Client–Server (1980s–1990s) | Commodity servers (x86) | Wintel standard | Windows, UNIX | Databases (Oracle), app servers | Enterprise packaged software | Hosting, VARs, IT outsourcers |
| Early Cloud (2000s–2010s) | Hyperscale data centers | Virtualized compute platforms | Linux | Cloud middleware, containers | SaaS | MSPs, CDNs, colocation |
| Mature Cloud (2015–2022) | Hyperscalers dominate scale | IaaS / PaaS platforms | Linux | Kubernetes, managed databases | Cloud-native SaaS | MSPs, FinOps, cloud brokers |
| Emerging AI Era (2023– ) | Accelerators (GPUs, TPUs, ASICs); power & data centers | Hyperscale AI platforms + neocloud capacity | Linux | ML frameworks, inference runtimes | AI-native apps, copilots | Neoclouds, AI infra brokers, model hosts |
So there is reason to believe that neoclouds will emerge as a permanent part of the AI compute value chain, supplying raw compute, specialized capacity and price-performance leadership.
The value chain seemingly always creates a layer where price discovery, specialization, and customer leverage are the values. Neocloud is that layer, some will argue.
And while enterprise compute will be part of the market, much of the current market is driven by compute needs of the hyperscalers themselves.
| Company | Percentage from Hyperscalers | Key Details/Notes |
|---|---|---|
| CoreWeave | ~80-100% | Primary revenue from hyperscalers and AI labs. Microsoft alone: 62% (2024 full year), rising to ~70-72% in early 2025 periods. Top 2 customers (likely Microsoft + Meta/OpenAI): 77% in 2024. Additional contracts with Meta ($14B+), OpenAI, and others. Acts as overflow capacity for hyperscalers. |
| TeraWulf | ~14-20% (growing rapidly) | Primarily Bitcoin mining revenue; HPC/AI (hosting for hyperscalers via partners like Fluidstack/Core42, backed by Google) contributed ~14% in recent quarters, with major multi-year contracts ramping in 2025-2026. |
| CleanSpark | ~0-5% (early stage) | Still primarily Bitcoin mining (>95% revenue). Pivoting with AI data center hires and site wins (e.g., beat Microsoft for Wyoming site), but minimal hyperscaler revenue recognized yet; focus on future diversification. |
| Hut 8 | ~10-30% (growing rapidly) | Shifting from mining; major 15-year $7B+ lease (potentially $17B+) with Fluidstack (Google-backed) for AI hosting starting ramp in 2025. Earlier GPU-as-a-Service for AI clients; hyperscaler deals driving pivot. |
| Others (e.g., Core Scientific, IREN) | 20-50%+ (varies) | Similar miners pivoting: Core Scientific ~21-30% from HPC (deals with CoreWeave/hyperscalers); many in 10-30% range amid transition. Full pivot companies approach 100%. |
Some might question the “permanence” of neocloud providers in the “AI compute as a service” space, but current thinking tends to be that a new role within the value chain is being created.
Analysts tend to view neoclouds as securing enduring roles through specialization, partnerships and niche dominance, rather than through widespread buyouts.
Hyperscalers (Microsoft, Google, Amazon, Meta) prefer massive long-term offtake contracts and partnerships to secure capacity quickly, while building their own infrastructure. This hybrid approach allows them to use neocloud balance sheets for off-balance-sheet scaling without taking on full integration risks.
Others might argue that the window for neoclouds is somewhat less certain, to the extent it is driven by hyperscale inability to rapidly supply the current demand for AI compute. Eventually, the argument goes, the hyperscalers will be able to build and operate their own internal capacity, reducing reliance on neoclouds.
| Source/Estimate | Timeframe | Unmet/Shortfall Capacity | Key Notes/Reasons |
|---|---|---|---|
| McKinsey | By 2030 (incremental 2025-2030) | ~125-205 GW (AI-related global) | Total AI demand 156-260 GW by 2030; hyperscalers capture ~70%, but build lags due to power/grid. |
| CBRE / Utility Requests | US hyperscale 2025-2026 | ~14-40 GW incremental (2025 surge) | Vacancy at record low 1.6%; requests far exceed grid additions; multi-year delays in key markets. |
| Seaport Global / Industry | Near-term (2025-2027) | Significant GPU/power shortage | Neoclouds fill "shortage of graphics chips and electricity"; temporary 3-5 year window. |
| NVIDIA / Analyst Backlogs | Blackwell supply 2025-2026 | 3.6M units backlog (hyperscalers) | Sold out through mid-2026; drives outsourcing to neoclouds for immediate access. |
| Overall Analyst Consensus | 2025-2028 | Tens of GW + millions of GPUs unmet soon | Power as #1 bottleneck; hyperscalers' $350-600B annual CapEx still constrained by grid/energy. |
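To make the "temporary window" argument concrete, the sketch below projects a hypothetical capacity gap: demand ramps linearly toward the 156 GW and 260 GW endpoints of the McKinsey 2030 range cited in the table above, while hyperscaler self-build grows at an assumed constant rate. The 2025 installed base and the build rate are illustrative assumptions, not sourced estimates; the point is only that a low-demand path closes the window quickly, while a high-demand path leaves a large gap for neoclouds through the decade.

```python
# Hypothetical projection of the AI capacity gap that neoclouds could fill.
# The 2030 demand endpoints (156 and 260 GW) come from the McKinsey range cited
# in the table above; the 2025 installed base and the hyperscaler self-build
# rate are illustrative assumptions, not sourced figures.

def project_gap(demand_2030_gw, base_2025_gw=55.0, build_gw_per_year=20.0):
    """Linear demand ramp from an assumed 2025 base to the 2030 figure,
    versus hyperscaler self-build at a constant assumed rate."""
    years = list(range(2025, 2031))
    step = (demand_2030_gw - base_2025_gw) / (len(years) - 1)
    gaps = {}
    for i, year in enumerate(years):
        demand = base_2025_gw + step * i                # demand trajectory
        supply = base_2025_gw + build_gw_per_year * i   # hyperscaler self-build
        gaps[year] = round(max(demand - supply, 0.0), 1)
    return gaps

if __name__ == "__main__":
    for demand_2030 in (156, 260):
        print(f"2030 demand {demand_2030} GW:", project_gap(demand_2030))
```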
Even when scale providers win on unit economics, merchant or brokerage layers appear wherever customers value flexibility, neutrality, or pricing innovation. In the case of AI compute, hyperscale suppliers of AI compute, no less than enterprise customers, will have such needs.
Content delivery networks provide a good example of how new specialist roles can emerge. CDNs are specialized networks of edge servers and data centers whose value is proximity to users and latency reduction for media and content delivery.