Many financial analysts and investors are concerned that lower-cost language models such as DeepSeek will reduce demand for high-end graphics processing units, servers and high-performance computing services. The roughly $600 billion drop in Nvidia's market value after DeepSeek unveiled its latest model, which it claims was developed at a fraction of the cost of other leading generative AI models, attests to that fear.
Others do not believe demand for high-end Nvidia servers will dip. The Jevons Paradox, or Jevons Effect, is often cited in this regard. The Jevons Effect, also known as the rebound effect, is the proposition that increases in the efficiency of resource use can lead to an increase in the consumption of that resource, rather than a decrease.
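A rough arithmetic sketch can make the rebound logic concrete. The numbers and the constant-elasticity demand assumption below are purely hypothetical (not DeepSeek or Nvidia data); the point is only that when usage grows faster than unit costs fall, total compute consumption rises.

```python
# Hypothetical illustration of the Jevons (rebound) effect for AI compute.
# All numbers here are assumptions chosen for arithmetic clarity, not real data.

def total_compute_spend(cost_per_query, baseline_cost, baseline_queries, elasticity):
    """Total compute spend under a simple constant-elasticity demand assumption.

    elasticity is the percent growth in usage for each percent decline in cost.
    """
    price_ratio = cost_per_query / baseline_cost               # 0.1 means 90% cheaper
    queries = baseline_queries * price_ratio ** (-elasticity)  # usage response
    return queries * cost_per_query                            # spend = usage x unit cost

before = total_compute_spend(1.00, 1.00, 1_000_000, elasticity=1.5)
after = total_compute_spend(0.10, 1.00, 1_000_000, elasticity=1.5)
print(f"baseline spend: ${before:,.0f}, spend after 10x cost drop: ${after:,.0f}")
# With elasticity above 1, usage grows about 31.6x, so total spend roughly
# triples (about $3.2 million versus $1 million) even though each query is 10x cheaper.
# With elasticity below 1, the same arithmetic shows total spend falling instead,
# which is the scenario the Nvidia bears have in mind.
```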
So the reasoning is that cheaper AI models and cheaper inference will lead to much more AI use, and therefore continued demand for Nvidia servers, even high-end machines. Some might point out that the claimed DeepSeek cost advantages should properly mean the Jevons Effect applies to DeepSeek and other models, not to the servers running those models.
Any reduction in demand for Nvidia servers would therefore be indirect. That has not deterred many observers, who believe significantly lower-cost model training and inference operations are possible (with or without DeepSeek's contributions). We have already seen such effects.
In principle, some argue, such trends might mean model operators could reduce their need for high-end graphics processing units and other accelerator chips. That remains to be seen.
Broadly speaking, the history of computing suggests the argument that lower costs lead to wider usage, and more usage to more demand for servers, has legs. Demand for computing resources has always increased as costs drop.
But there also is an argument that lower-cost models and inference will increase demand for all those products. Paradoxically, lower training and inference costs will stimulate more demand, possibly leading to new use cases.
Also, keep in mind that generative AI is only one form of AI. Perhaps DeepSeek points the way to lower processing costs for GenAI. That does not necessarily mean similar gains for general AI or even machine learning.
One might argue it remains necessary to use high-end processors and accelerators for use cases moving towards artificial general intelligence.
The Jevons Effect, or Jevons Paradox, suggests that an increase in the efficiency of resource use can lead to an increase in the consumption of that resource over the long term, rather than the decrease some might expect.
As applied to more-affordable generative artificial intelligence solutions, the fear is that a drastic decrease in cost might lead to a decrease in purchases of high-end AI servers, for example, if model training can be done on lower-capability servers (at least in some cases).
The counterargument is that efficiency does not reduce demand; it increases it. As AI becomes cheaper and more accessible, more businesses, startups and individuals will adopt it, and new higher-end use cases will emerge, driving more higher-end GPU sales.
Perhaps the bigger immediate concern is that many contenders have essentially overpaid for their infrastructure platforms.
That becomes a business issue to the extent that, in competitive markets, the lower-cost producers tend to win.
On the other hand, the Jevons Effect works when the price of the inputs does not change. If the price of high-end Nvidia servers does drop, then supply and demand principles, and not the Jevons Effect, will operate. And lower prices for high-end Nvidia GPUs would then sustain demand.
The Jevons Effect suggests that improved product efficiency leads to greater overall resource consumption rather than reductions. That might apply to use of AI models, AI as a service or power consumption.
But many have speculated that AI models such as DeepSeek would lessen the need for high-end graphics processing units, for example. That might hold only if the prices for such servers remain the same.
And the general rules of computing economics suggest lower prices with scale and time. So the ultimate impact of DeepSeek and other possible model contenders on demand for high-end Nvidia servers might be more neutral than some fear.
And, in any case, one might argue that any effect on high-end server demand might apply more to GenAI models than to broader and more complicated artificial general intelligence operations.