Many observers now worry that artificial intelligence models are hitting an improvement wall, or at least that scaling the models further will not bring the same level of improvement we have seen over the past few years.
That is worrisome to some because so much has been invested in the models themselves, before we even get to the useful applications that produce value and profits for businesses.
Of course, some would note that, up to this point, large language model performance gains have come largely from larger training data sets and more processing power.
And slowing rates of improvement suggest that the value to be gained from those two inputs alone may be reaching its current limit.
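A rough way to picture that dynamic is the power-law shape commonly reported in scaling-law studies, where each doubling of compute or data buys a smaller absolute gain than the one before. The sketch below uses made-up constants chosen purely for illustration; it is not fitted to any particular model family.

```python
# Illustrative sketch of diminishing returns under an assumed power-law
# scaling curve of the form L(C) = L_inf + a * C**(-alpha). The constants
# are invented for illustration only, not measured from any real model.

L_INF = 1.7   # assumed irreducible loss floor
A = 10.0      # assumed scale coefficient
ALPHA = 0.3   # assumed scaling exponent

def loss(compute: float) -> float:
    """Hypothetical test loss as a function of relative training compute."""
    return L_INF + A * compute ** -ALPHA

# Each doubling of compute yields a smaller absolute improvement in loss.
prev = loss(1.0)
for doubling in range(1, 11):
    cur = loss(2.0 ** doubling)
    print(f"doubling {doubling:2d}: loss {cur:.4f} (gain {prev - cur:.4f})")
    prev = cur
```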
Still, some of us might note that there is a sort of "stair step" pattern to computing improvements, including chipsets, hardware and most software. Moore's Law, under which transistor density on integrated circuits doubles about every two years, is a prime example of stair-step progress.
The expansion of internet bandwidth also tends to follow this pattern, as do capacity improvements on backbone and access networks, fixed and mobile.
The evolution of operating systems, smartphones and productivity tools likewise tends to show periods of rapid innovation followed by stabilization for a time before the next round of upgrades.
So concern about maturing scaling laws, while apt, does not preclude the discovery of different architectures and methods that deliver significant performance improvement.