There is much talk now about the rate of improvement in generative artificial intelligence models slowing. But such slowdowns are typical of most, if not all, technologies; "hitting the performance plateau" is a familiar pattern.
For generative AI, the “scaling” problem is now at hand: diminishing returns from increasing model size (the number of parameters), the amount of training data, or computational resources.
Power laws describe how model performance scales with increases in resources such as model size, dataset size, or compute, and they suggest that performance gains will diminish as models grow larger or are trained on more data.
Power laws also mean that although model performance improves with larger training datasets, the marginal utility of additional data diminishes.
Likewise, greater computational resources yield diminishing returns in performance, as the short sketch below illustrates.
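A power law of the form L(N) ≈ a · N^(−α), where L is a loss measure and N is a resource such as parameter count, captures this pattern: each doubling of N reduces loss by a fixed percentage, so the absolute gain from each successive doubling shrinks. The Python sketch below illustrates the idea; the constants a and alpha are illustrative assumptions, not figures from any specific study.

```python
# Illustrative sketch of a power-law scaling curve: loss ~ a * N**(-alpha).
# The constants a and alpha below are made-up values chosen for demonstration,
# not measurements from any published scaling analysis.

def loss(n_params: float, a: float = 6.0, alpha: float = 0.076) -> float:
    """Hypothetical test loss as a function of parameter count N."""
    return a * n_params ** (-alpha)

if __name__ == "__main__":
    # Double the model size repeatedly and report the marginal improvement.
    prev = loss(1e8)
    for n in [2e8, 4e8, 8e8, 1.6e9, 3.2e9]:
        cur = loss(n)
        print(f"N={n:.1e}  loss={cur:.4f}  gain from doubling={prev - cur:.4f}")
        prev = cur
```

Running the sketch shows that each doubling of the parameter count trims the loss by the same fraction, so the absolute improvement per doubling keeps shrinking even as total resources grow.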
But that is typical of virtually all technologies: performance gains diminish as inputs increase. Eventually, however, workarounds emerge. Chipmakers facing a slowdown in Moore’s Law rates of improvement got around those limits by, for example, creating multi-layer chips, using parallel processing, or adopting specialized architectures.
The limits of scaling laws for generative AI will eventually be overcome. But a plateau is not unexpected.