Wednesday, December 6, 2023

On Smartphones, AI Already is Nearly Ubiquitous

Anyone trying to model artificial intelligence usage is immediately faced with a number of problems. AI already is used to support natural language processing, image processing on phones, recommendations, customer service queries, search functions and e-commerce. 


And some would argue the embedding of AI into popular smartphone features started in 2007, meaning widespread consumer use of AI for natural language processing and camera image processing has been underway for at least 16 years.


In that sense, some might argue the use of AI on smartphones already has reached levels in excess of 95 percent. And that analysis ignores other areas of common consumer use, such as search, e-commerce, recommendation engines and social media, for example. 

Still, that example probably strikes most people as overstating the present application of AI. What most people likely have in mind is a future where virtually all popular web-related or app-related interactions embed AI in their core operations, so that AI becomes a foundational part of any experience.  


That might also imply that AI “usage” could grow much faster than any other discrete application or technology, since it would be part of nearly all app experiences. 


All that shows the importance of defining what we mean by “AI use” and when such use is said to have started. In some discrete use cases, such as NLP and camera processing on smartphones, AI might plausibly be said to have reached 95 percent adoption by consumers. 


Generative AI and large language models, on the other hand, are still at the beginning, and arguably have not yet reached anywhere close to regular use by 10 percent of internet users. 
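The gap between those two figures is the whole problem: the same population can be "95 percent AI users" or "10 percent AI users" depending on which use cases count. A minimal sketch of that definition sensitivity, using the article's rough estimates (95 percent for smartphone imaging and NLP, roughly 10 percent for generative AI) as illustrative inputs rather than measured data:

```python
# Illustrative only: "AI usage" depends entirely on which use cases count.
# The adoption rates below are the article's rough estimates, not measurements.
use_cases = {
    "smartphone image processing": 0.95,   # near-ubiquitous, per the article
    "smartphone NLP (assistants, dictation)": 0.95,
    "generative AI / LLMs": 0.10,          # "not yet ... 10 percent"
}

def usage_any(rates):
    """Share of users touching at least one AI use case, under the
    simplifying (and unrealistic) assumption of independent adoption."""
    p_none = 1.0
    for r in rates.values():
        p_none *= (1.0 - r)
    return 1.0 - p_none

def usage_narrow(rates, key):
    """Usage if 'AI' is defined as a single use case."""
    return rates[key]

broad = usage_any(use_cases)                              # ≈ 99.8 percent
narrow = usage_narrow(use_cases, "generative AI / LLMs")  # 10 percent
print(f"broad definition:  {broad:.1%}")
print(f"narrow definition: {narrow:.1%}")
```

The independence assumption overstates the broad figure slightly (smartphone imaging and NLP users overlap heavily), but the point survives: the definition, not the measurement, drives the headline number.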


Beyond that, AI is a cumulative trend, encompassing many different functions and eventually to be used in virtually all popular apps and hardware. 

So we already face the problem that AI is not like earlier app adoption. 


Popular internet applications generally have taken two to five years to reach 10-percent usage among all internet users, for example. That threshold typically marks an inflection point, after which usage grows into a mass market trend. 

The difference with AI is that it will be embedded into the core operations of virtually every popular app and most hardware and software. So the “adoption or use of AI” will have a cumulative effect we have not seen before, with the possible exception of the internet itself. 


Still, it took roughly 12 years for internet usage to reach a level of 10 percent of people. With AI, embedded into virtually all major forms of software and hardware, adoption should be faster than that. Just how much faster remains the issue. 
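The effect of that head start can be sketched with a simple logistic adoption curve. The parameters below are illustrative assumptions, not a forecast: the first curve is calibrated so 10 percent adoption arrives around year 12, echoing the internet figure above; the second uses the same growth rate but an earlier effective midpoint, standing in for AI's nonzero starting base inside existing apps.

```python
import math

def logistic(t, k, t_mid):
    """Share adopting by year t on a logistic curve with growth rate k
    and midpoint year t_mid (illustrative parameters, not a forecast)."""
    return 1.0 / (1.0 + math.exp(-k * (t - t_mid)))

def years_to_share(target, k, t_mid):
    """Invert the logistic: years elapsed until 'target' share is reached."""
    return t_mid + math.log(target / (1.0 - target)) / k

# Internet-like curve: roughly 12 years from launch to 10 percent share.
k_net, mid_net = 0.4, 17.5
# Embedded-AI curve: same growth rate, earlier effective midpoint because
# usage starts from a nonzero base inside apps people already use.
k_ai, mid_ai = 0.4, 8.0

print(f"internet-like: 10% reached around year {years_to_share(0.10, k_net, mid_net):.1f}")
print(f"embedded AI:   10% reached around year {years_to_share(0.10, k_ai, mid_ai):.1f}")
```

Shifting the midpoint while holding the growth rate fixed is just one way to model a head start; the qualitative conclusion, that embedded adoption crosses any given threshold years earlier, holds for any reasonable parameter choice.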


The AI advantage is that if we set 2022 as the AI equivalent of 1995 for the internet, AI begins from a higher starting point, as it already is used widely for smartphone image recognition, natural language queries, speech-to-text, recommendation engines and e-commerce. 


Unlike virtually all prior innovations, AI starts with higher usage from inception, and is a multi-app, multi-use-case trend. 


So it might make more sense to set start levels for AI much earlier: 

  • Recommendation engines, 1990s

  • Image processing on smartphones, 2000s

  • Search, 2000s 

  • E-commerce, 2000s

  • Social media, 2000s

  • Natural language processing, 2009

Looked at in that way, and looking only at AI use on smartphones, the AI trend has been underway since at least 2007. 


Year: smartphone image processing | natural language processing

2007: Apple introduces the iPhone, putting a software-driven camera pipeline in millions of pockets. | Voice features on phones are still largely limited to voice dialing.

2009: Google Goggles, an image-recognition app, launches on Android. | Nuance's Dragon Dictation is released for the iPhone; Apple adds Voice Control to the iPhone 3GS.

2011: Samsung's Galaxy S II advances on-device image processing. | Apple introduces Siri on the iPhone 4S.

2012: Android camera software adds panorama and HDR modes. | Google Now, a personal assistant, is launched.

2013: Google's HDR+ computational photography debuts on the Nexus 5. | Motorola's Moto X introduces always-listening voice activation.

2014: HTC's One M8 features a dual-lens camera for depth-of-field effects. | Microsoft launches its Cortana assistant on Windows Phone.

2015: Dual-lens cameras become more common. | Siri gains the ability to control smart home devices.

2016: Google's Pixel launches with HDR+ computational photography. | Google Assistant is introduced.

2017: AI-powered facial recognition, led by Apple's Face ID, becomes a mainstream smartphone security feature. | Samsung launches its Bixby assistant.

2018: AI-powered image editing grows more sophisticated; Google's Night Sight brings computational low-light photography to the Pixel 3. | Google Assistant expands into language translation.

2019: AI-powered augmented reality apps begin to gain traction. | Google Assistant becomes more integrated with other Google products.

2020: AI-powered chatbots become more common in smartphone apps. | Google Assistant adds more conversational calling and messaging features.

2021: AI-powered health and fitness tracking apps become more sophisticated. | Assistants handle increasingly natural, conversational queries.

2022: Real-time camera translation and AI-assisted image editing become standard features. | On-device speech recognition grows more accurate; Google Assistant deepens its smart home integration.


The point is that the expression “AI usage” by the general public and most internet users is already problematic. In some ways, such as smartphone image recognition and natural language processing, AI already is nearly ubiquitous. In other areas, use cases are nascent.

