Oracle is lining up $38 billion for data center investments in two states tied to its “Project Stargate” joint venture with OpenAI and SoftBank, which aims to build as much as $500 billion worth of new facilities supporting artificial intelligence operations over a four-year period.
The immediate $38 billion will support new facilities in Wisconsin and Texas that will be developed by Vantage Data Centers. Critics have argued that Project Stargate, when announced, was unfunded.
The latest package is a step toward closing that funding gap.
Of course, concerns about the payback from those investments will eventually surface, and investors already seem to be developing something of an AI allergy.
To a large extent, payback concerns center on the huge amount of depreciation the investments will trigger. One projection suggests that AI data centers to be built in 2025 will incur around $40 billion in annual depreciation, while generating only a fraction of that in revenue initially.
The argument is that this massive depreciation, driven by the rapid obsolescence of specialized AI hardware, will outpace the revenue generation for years.
Some might recall similar thinking in the first decade of the 21st century, as cloud-based enterprise software began to replace traditional software licenses as a revenue model. Then, as now, there was concern about the relatively low profit margin of cloud-delivered services compared with traditional per-seat site licenses.
That argument eventually was settled in favor of cloud delivery, however. So optimists will continue to argue that the financial return from massive AI data center investments will emerge, despite the high capital investment and equally high depreciation impact on financial performance in the short term.
But skeptics will likely continue to argue that, for processing-intensive AI operations, there is essentially no “long term,” as foundational chip investments must be continually refreshed with the latest versions, mirroring consumer experience with smartphone generations, for example, or mobile service provider experience with their infrastructure (2G, 3G, 4G, 5G and continuing).
Among the issues is whether key graphics processing unit generations really need to be replaced every three to five years, or whether their useful life can stretch to as long as six years. Longer useful life means lower annual depreciation cost.
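A minimal sketch of that arithmetic, assuming straight-line depreciation; the fleet cost below is a hypothetical figure, not a forecast:

```python
# Straight-line annual depreciation on a hypothetical GPU fleet under
# different useful-life assumptions. The capex figure is illustrative only.
fleet_capex = 30_000_000_000  # hypothetical $30 billion accelerator purchase

for useful_life_years in (3, 5, 6):
    annual_charge = fleet_capex / useful_life_years
    print(f"{useful_life_years}-year useful life: "
          f"${annual_charge / 1e9:.1f}B depreciation per year")
```

Stretching the assumed life from three years to six halves the annual charge, which is why the refresh-cycle question matters so much for reported earnings.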
But optimists expect demand will grow to more than match investments.
| Metric | Current Demand (2025 Est.) | Future Demand (2030 Est.) |
| --- | --- | --- |
| Global AI Market Value | ~$390 billion to $450 billion | ~$1.3 trillion to $1.8 trillion |
| Global Data Center Power Demand | ~55 GW (14% from AI) | ~122 GW (27% from AI) |
| Total AI-Related Capital Expenditure | ~$200 billion/year | ~$1 trillion/year |
| Required GPU Compute | Low single-digit exaflops | Hundreds of exaflops |
| Demand-Supply Balance | Significant shortage | Balance may be reached, but with continued investment pressure |
Some cite the dot-com over-investment in optical fiber transport facilities as the basis for concern about AI data center investment. But there already seem to be key differences. The speculative investment in optical transport was based in large part on the expected survival and success of the many emerging “internet” firms.
That survival largely did not materialize; most of the startups failed. There were also notable instances of accounting fraud, in which firms booked orders or revenue that did not actually exist.
Depreciation schedules matter significantly for capital-intensive businesses such as these.
Venture capitalist David Cahn has estimated that for AI investments to be profitable, companies need to generate roughly four dollars in revenue for every dollar spent on capex.
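As a rough illustration, the sketch below applies that 4:1 ratio to the capex estimates cited in the table above; this is back-of-the-envelope arithmetic, not Cahn's published figures:

```python
# Back-of-the-envelope application of the ~4:1 revenue-to-capex rule of
# thumb attributed to David Cahn, using the capex estimates cited above.
REVENUE_PER_CAPEX_DOLLAR = 4.0

capex_estimates = {
    "2025 (~$200 billion/year capex)": 200e9,
    "2030 (~$1 trillion/year capex)": 1e12,
}

for label, capex in capex_estimates.items():
    required_revenue = capex * REVENUE_PER_CAPEX_DOLLAR
    print(f"{label}: roughly ${required_revenue / 1e9:,.0f}B in annual "
          f"revenue implied for the investment to pay off")
```

On those assumptions, the industry would need on the order of $800 billion in annual AI revenue at current spending levels, and roughly $4 trillion by 2030.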
In the enterprise or business space, subscriptions seem the logical revenue model. In the consumer space, it is more likely that advertising will develop as the revenue model. But the issue is when that will happen.
But there are other, more prosaic issues, such as the impact of depreciation on reported profitability.
Historically, hyperscalers depreciated servers over three years.
In 2020 they started to extend server depreciation from three years to four years. That might be deemed a simple recognition that better software enables existing hardware to provide value over longer time periods.
As a practical matter, that front-loads profits, as training and inference revenues are booked before a significant amount of depreciation expense is recorded.
In 2021, the hyperscalers investing in AI servers further extended useful server life to five years and the useful life of networking gear to six years, citing internal efficiency improvements that ‘lower stress on the hardware and extend the useful life’.
Between 2021 and 2022, Microsoft, Alphabet and Meta followed suit, collectively lifting the useful lives of their server equipment to four years. In the following year, Microsoft and Alphabet further extended the depreciable lives of server equipment to six years, and Meta to five years.
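A short sketch of why the timing matters, again assuming straight-line depreciation and a hypothetical purchase amount:

```python
# Year-by-year straight-line depreciation on a hypothetical $12 billion
# server purchase, comparing a three-year and a six-year schedule.
purchase_cost = 12_000_000_000  # hypothetical capex, in dollars

def annual_depreciation(cost, useful_life_years, year):
    """Straight-line charge in a given year (1-indexed); zero once fully depreciated."""
    return cost / useful_life_years if year <= useful_life_years else 0.0

for year in range(1, 7):
    three_year = annual_depreciation(purchase_cost, 3, year)
    six_year = annual_depreciation(purchase_cost, 6, year)
    print(f"Year {year}: 3-year schedule ${three_year / 1e9:.1f}B, "
          f"6-year schedule ${six_year / 1e9:.1f}B")
```

The longer schedule pushes roughly half of the early-year expense into years four through six, so near-term reported profit looks better even though total depreciation over the asset's life is unchanged.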
It is too early to know which side of the debate will prove correct.