Anyone trying to model artificial intelligence usage immediately faces a number of problems. AI already supports natural language processing, image processing on phones, recommendations, customer service queries, search functions and e-commerce.
And some would argue the embedding of AI into popular smartphone processes started in 2007, meaning the widespread consumer use of AI on smartphones capable of natural language processing and camera image processing has been underway for at least 16 years already.
In that sense, some might argue the use of AI on smartphones already has reached levels in excess of 95 percent. And that analysis ignores other areas of common consumer use, such as search, e-commerce, recommendation engines and social media, for example.
Still, that example probably strikes most people as overstating the present application of AI. What most people likely have in mind is a future where virtually all popular web-related or app-related interactions embed AI in their core operations, so that AI becomes a foundational part of any experience.
That might also imply that AI “usage” could grow much faster than any other discrete application or technology, since it would be part of nearly all app experiences.
All that shows the importance of defining what we mean by “AI use” and when such use is said to have started. In some discrete use cases, such as NLP and camera processing on smartphones, AI might plausibly be said to have reached 95 percent adoption by consumers.
Generative AI and large language models, on the other hand, are still at the beginning, and arguably have not yet reached anywhere close to regular use by 10 percent of internet users.
Beyond that, AI is a cumulative trend, spanning many types of functions and eventually to be embedded in virtually all popular apps and hardware.
So we already face the problem that AI is not like earlier app adoption.
Popular internet applications generally have taken two to five years to reach 10-percent usage among all internet users, for example. That is generally the inflection point after which usage grows into a mass market trend.
The difference with AI is that it will be embedded into the core operations of virtually every popular app and virtually all hardware and software. So the “adoption or use of AI” will have a cumulative effect we have not seen before, with the possible exception of the internet itself.
Still, it took roughly 12 years for internet usage to reach a level of 10 percent of people. With AI, embedded into virtually all major forms of software and hardware, adoption should be faster than that. Just how much faster remains the issue.
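One way to see why the start point matters is a simple compound-growth sketch. The growth rates and starting shares below are illustrative assumptions, not figures from this article; the only anchors are the roughly 12 years the internet took to reach 10-percent usage and the claim that AI begins from a higher base.

```python
import math

def years_to_reach(target, start_share, annual_growth):
    """Years for an adoption share to grow from start_share to target,
    assuming constant compound growth of annual_growth per year.
    An illustrative model only, not a forecast."""
    return math.log(target / start_share) / math.log(1 + annual_growth)

# Internet-like case: ~1% of people at the start, ~21% annual growth
# crosses 10% in roughly 12 years, matching the historical figure.
print(round(years_to_reach(0.10, 0.01, 0.21)))  # → 12

# AI-like case: an assumed higher start point (3%) at the same growth
# rate crosses 10% in roughly half the time.
print(round(years_to_reach(0.10, 0.03, 0.21)))  # → 6
```

The point of the sketch is only that, holding growth rates equal, a higher starting base mechanically shortens the time to any adoption threshold, which is the structural advantage AI begins with.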
The AI advantage is that if we set 2022 as the AI equivalent of 1995 for the internet, AI already begins with a higher start point, as it is used widely for smartphone image recognition, natural language queries, speech-to-text, recommendation engines and e-commerce.
Unlike virtually all prior innovations, AI starts with higher usage from its inception, and is a multi-app, multiple-use-case trend.
So it might make more sense to set start levels for AI much earlier:
Recommendation engines, 1990s
Image processing on smartphones, 2000s
Search, 2000s
E-commerce, 2000s
Social media, 2000s
Natural language processing, 2009
Looked at in that way, and looking only at AI use on smartphones, the AI trend has been underway since at least 2007.
The point is that the expression “AI usage” by the general public and most internet users is already problematic. In some ways, such as smartphone image recognition and natural language processing, AI already is nearly ubiquitous. In other areas, such as generative AI, use cases remain nascent.