Technology market structures tend to change as they age: small upstart companies get acquired, bigger firms merge and a few dominant leaders emerge, producing a “winner takes most” structure.
Market researchers studying capital-intensive markets tend to find that something like a Pareto distribution applies: up to 80 percent of results are produced by 20 percent of actors. Market share structures in computing, connectivity and software follow a similar pattern: leadership by three firms, often described as the rule of three.
“A stable competitive market never has more than three significant competitors, the largest of which has no more than four times the market share of the smallest,” BCG founder Bruce Henderson said in 1976.
Codified as the rule of three, the observation explains the stable competitive market structure that develops over time in many industries. Others might call this winner-take-all economics.
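As a minimal sketch, using purely hypothetical share figures rather than data from any real market, Henderson’s condition can be written as a simple check:

```python
# Illustrative only: hypothetical market shares, not data from any real market.
def satisfies_rule_of_three(shares, significance_threshold=0.10):
    """Check market-share fractions against Henderson's 1976 rule:
    no more than three significant competitors, and the largest holds
    no more than four times the share of the smallest of those leaders."""
    significant = sorted((s for s in shares if s >= significance_threshold),
                         reverse=True)
    if not significant or len(significant) > 3:
        return False
    return significant[0] <= 4 * significant[-1]

# A hypothetical "winner takes most" structure: three leaders plus a long tail.
shares = [0.45, 0.30, 0.15, 0.04, 0.03, 0.02, 0.01]
print(satisfies_rule_of_three(shares))  # True: three leaders, 0.45 <= 4 * 0.15
```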
So a logical question is what happens in the high-performance computing market, including the space occupied by neocloud providers such as CoreWeave and Nebius, some of which have shifted their business models from cryptocurrency mining to artificial intelligence model training and inference operations.
Some might argue the market is shifting from a focus on training capabilities toward inference operations. It’s hard to argue with that observation, as models become routine apps used by businesses and consumers. Some might then conclude we will see less need for the highest-performance compute capabilities of the sort neocloud providers offer. Others might argue more of the computational load will be handled by edge devices, and there is some truth to that position as well.
But ubiquitous inference does not necessarily mean less power consumption, less-powerful chips, fewer operations inside massive data center complexes, or less physical real estate and water consumption.
Although pre-training growth is slowing and compute is shifting from training to inference, the compute demands of post-training scaling, test-time scaling and increased usage suggest the world likely needs many more AI-focused data centers. A ramp from US$300 billion to US$400 billion in 2025 to roughly US$1 trillion in 2028 is directionally realistic, according to one Deloitte estimate.
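A back-of-envelope calculation, using only the round figures from that estimate, shows what the implied growth rate would be:

```python
# Back-of-envelope arithmetic on the Deloitte figures cited above:
# growing from US$300-400 billion (2025) to roughly US$1 trillion (2028)
# implies this three-year compound annual growth rate (CAGR).
for base in (300, 400):  # assumed 2025 spending, US$ billions
    cagr = (1_000 / base) ** (1 / 3) - 1
    print(f"From ${base}B to $1,000B: {cagr:.0%} per year")
# From $300B to $1,000B: 49% per year
# From $400B to $1,000B: 36% per year
```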
So the future might not include less need for high-performance computing facilities.
On the other hand, what technology market has not evolved over time to patterns with just a handful of market leaders?
So if the independent neocloud provider market follows the historic pattern, market consolidation will happen, leaving a handful of major, scaled neocloud providers and traditional hyperscalers, plus a long tail of smaller niche players.
Some argue the process has already begun.
But there are other possibilities as well. The neocloud provider market might not consolidate but instead collapse.
The "Big Three" hyperscalers possess massive scale, deep financial resources, and comprehensive service portfolios, allowing them to engage in price wars and continuously innovate at a pace the smaller players cannot match. So some would argue this creates immense and unsustainable pressure on the neoclouds' margins and ability to compete effectively in the long term.
Without a genuinely unique value proposition or niche, independent neocloud providers might struggle to retain customers, who often prefer the security and breadth of services offered by the large providers.
The hyperscalers also will be better positioned to handle likely higher regulatory costs, to attract talent and to overcome the risk aversion of enterprise customers.
A collapse scenario might happen for at least some providers if customers abandon the neoclouds because of longevity fears. The danger cannot be dismissed.
That happened around the turn of the century to many would-be capacity providers and competitive local exchange carriers (CLECs).
In the late 1990s, driven by the Telecommunications Act of 1996, which opened markets to competition, hundreds of new companies rushed to build wide-area optical fiber networks and local access facilities.
This resulted in a vast oversupply of "dark fiber" (unused capacity), with estimates suggesting 85 percent to 95 percent of constructed fiber went unused after the bust.
The industry and investors widely believed demand for bandwidth would grow indefinitely, leading to an investment frenzy based on the mentality of "if you build it, they will come". Actual demand and revenue growth, however, did not keep pace with the rapid network construction, creating an unsustainable business model for many.
CLECs and fiber providers were able to secure massive amounts of funding through debt and speculative equity offerings. When the broader stock market began to decline in 2000, this financing dried up, immediately pushing heavily leveraged companies into bankruptcy.
Hypercompetition made matters worse: the presence of too many competitors in the same markets led to vicious price wars that drove down bandwidth prices (in some cases, by 60 percent per year), making it difficult for many new entrants to become profitable or even cover their costs.
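A quick compounding sketch shows why that rate of decline was so destructive:

```python
# Illustrative compounding: a 60 percent annual price decline means each
# year's price is 40 percent of the prior year's.
price = 1.0
for year in range(1, 4):
    price *= 0.40
    print(f"Year {year}: {price:.1%} of the starting price")
# Year 1: 40.0% of the starting price
# Year 2: 16.0% of the starting price
# Year 3: 6.4% of the starting price
# Holding revenue flat would require volume to grow 2.5x (1 / 0.4) every year.
```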
In that case, rational merger activity did not drive the consolidation. Instead, the sectors mostly collapsed into bankruptcy. It’s impossible to tell, today, which of these outcomes will develop. Over-investment, over-capacity and inadequate demand have accompanied many earlier technologies, including railroads in the nineteenth century and the telecom and internet bubbles of the late 1990s and early 2000s.