Not all compute is the same when it comes to artificial intelligence models, argues entrepreneur Dion Lim, especially given the massive levels of current investment, which some worry amount to over-investment of bubble proportions.
Maybe not, he argues.
The first pool of investment is training compute: the massive clusters used to create new AI models. This is where the game of chicken is being played most aggressively by the leading contenders.
No lab has a principled way of deciding how much to spend; each is simply responding to intelligence about competitors’ commitments.
If your rival is spending twice as much, they might pull the future forward by a year.
The result is an arms race governed less by market demand than by competitive fear, with Nvidia sitting in the middle as the arms dealer.
To a great extent, he suggests, that is where the bubble danger exists.
Training the largest foundational AI models (like large language models) requires an extraordinary, one-time investment in specialized hardware (primarily high-end GPUs) to process huge datasets.
The second area of investment is inference compute: the use of AI models in production, serving actual users. Here, the dynamics look entirely different, he argues.
Inference costs are ongoing operational expenses (Opex) that scale directly with usage (the number of user queries or requests).
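The contrast between one-time training Capex and usage-scaled inference Opex can be put in simple arithmetic. A minimal sketch, using purely hypothetical figures (the dollar amounts below are illustrative assumptions, not Lim's numbers or any lab's actual economics):

```python
# Hypothetical figures for illustration only; not actual lab economics.
TRAINING_CAPEX = 1_000_000_000    # assumed one-time cost to train a frontier model (USD)
COST_PER_1K_TOKENS = 0.002        # assumed inference cost per thousand tokens served (USD)

def inference_opex(tokens_served: int) -> float:
    """Inference spend scales linearly with usage (tokens served)."""
    return tokens_served / 1_000 * COST_PER_1K_TOKENS

def total_cost(tokens_served: int) -> float:
    """Total spend = fixed training Capex + usage-driven inference Opex."""
    return TRAINING_CAPEX + inference_opex(tokens_served)

# Serving 10x more tokens multiplies the Opex term by 10,
# while the Capex term stays fixed.
light_usage = total_cost(10**12)   # ~1 trillion tokens served
heavy_usage = total_cost(10**13)   # ~10 trillion tokens served
```

The point of the sketch: under these assumptions, the inference line item grows in lockstep with demand, which is why Lim treats it as an operational cost recouped per query rather than a speculative up-front bet.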
As GPUs become commoditized and compute abundance arrives, inference capabilities will become the next major market, especially given growing demand for efficient agentic tools.
So LLM inference might not be a "bubble" in the sense that investment professionals worry about.
The companies that can deliver intelligence most efficiently, at the lowest cost per token or per decision, will capture disproportionate value, Lim argues.
Training the biggest model matters less now; running models efficiently at planetary scale matters more.
But this might differ fundamentally from the dot-com bubble, which was fueled primarily by advertising spend for firms and products that had yet to establish a revenue model. Of course, some claim AI has yet to establish one either, for the time being.
Back then, companies burned cash on Super Bowl commercials to acquire customers they hoped to monetize later. That was speculative demand chasing speculative value.
Many would argue that AI already is producing measurable results for companies that do have revenue models and viable products used at scale.