Wednesday, December 6, 2023

On Smartphones, AI Already is Nearly Ubiquitous

Anyone trying to model artificial intelligence usage is immediately faced with a number of problems. AI already is used to support natural language processing, image processing on phones, recommendations, customer service queries, search functions and e-commerce. 


And some would argue the embedding of AI into popular smartphone functions started in 2007, meaning widespread consumer use of AI on smartphones, for natural language processing and camera image processing, has been underway for at least 16 years already. 


In that sense, some might argue the use of AI on smartphones already has reached levels in excess of 95 percent. And that analysis ignores other areas of common consumer use, such as search, e-commerce, recommendation engines and social media, for example. 

Still, that example probably strikes most people as overstating the present application of AI. What most people likely have in mind is a future where virtually all popular web-related or app-related interactions embed AI in their core operations, so that AI becomes a foundational part of any experience.  


That might also imply that AI “usage” could grow much faster than any other discrete application or technology, since it would be part of nearly all app experiences. 


All that shows the importance of defining what we mean by “AI use” and when such use is said to have started. In some discrete use cases, such as NLP and camera processing on smartphones, AI might plausibly be said to have reached 95 percent adoption by consumers. 


Generative AI and large language models, on the other hand, are still at the beginning, and arguably have not yet reached anywhere close to regular use by 10 percent of internet users. 


Beyond that, AI is a cumulative trend, representing different types of functions that eventually will be used in virtually all popular apps and hardware. 

So we already face the problem that AI is not like earlier app adoption. 


Popular internet applications generally have taken two to five years to reach 10-percent usage among all internet users, for example. That is typically the inflection point after which usage grows into a mass market trend. 

The difference with AI is that it will be embedded into the core operations of virtually every popular app, device and software platform. So the “adoption or use of AI” will have a cumulative effect we have not seen before, with the possible exception of the internet itself. 


Still, it took roughly 12 years for internet usage to reach a level of 10 percent of people. With AI, embedded into virtually all major forms of software and hardware, adoption should be faster than that. Just how much faster remains the issue. 


The AI advantage is that, if we set 2022 as the AI equivalent of 1995 for the internet, AI begins from a higher starting point, as it already is used widely for smartphone image recognition, natural language queries, speech-to-text, recommendation engines and e-commerce. 


Unlike virtually all prior innovations, AI starts with higher usage from the inception, and is a multi-app, multiple-use case trend. 
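
One way to make “how much faster” concrete is a simple logistic (S-curve) diffusion model. The sketch below is illustrative only: the growth rate and starting shares are assumptions chosen for the example, not measurements.

```python
import math

def years_to_reach(target, start_share, growth_rate):
    """Years for adoption to grow from start_share to target under a
    logistic (S-curve) diffusion model saturating at 100 percent.

    Inverts share(t) = 1 / (1 + A * exp(-r * t)), where A is fixed
    by the starting share.
    """
    a_start = (1 - start_share) / start_share
    a_target = (1 - target) / target
    return math.log(a_start / a_target) / growth_rate

r = 0.4  # assumed intrinsic growth rate per year, for illustration only

# Internet-like case: essentially no installed base (0.5 percent).
print(years_to_reach(0.10, 0.005, r))  # about 7.7 years to reach 10 percent

# AI-like case: starts from an embedded base (say, 3 percent).
print(years_to_reach(0.10, 0.03, r))   # about 3.2 years to reach 10 percent
```

The absolute figures mean nothing; the point is that identical diffusion dynamics reach the 10-percent inflection point far sooner when the starting base is higher.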


So it might make more sense to set start levels for AI much earlier: 

  • Recommendation engines, 1990s

  • Image processing on smartphones, 2000s

  • Search, 2000s 

  • E-commerce, 2000s

  • Social media, 2000s

  • Natural language processing, 2009

Looked at in that way, and looking only at AI use on smartphones, the AI trend has been underway since at least 2007. 


Year | Smartphone Image Processing | Natural Language Processing
2007 | Apple's iPhone launches; face-detection autofocus begins appearing in phone cameras. | Voice dialing and early speech-to-text features appear on handsets.
2009 | Google Goggles, an image-recognition app, launches on Android. | Nuance's Dragon Dictation for iOS is released.
2011 | HDR photography becomes a standard smartphone camera feature. | Apple introduces Siri on the iPhone 4S.
2012 | Android 4.2 adds Photo Sphere panoramic imaging. | Google Now, a predictive personal assistant, is launched.
2013 | Google's HDR+ computational photography debuts on the Nexus 5. | Voice dictation matures on iOS and Android keyboards.
2014 | HTC's One M8 features a dual-lens camera for depth-of-field effects. | Microsoft launches Cortana; Amazon introduces Alexa.
2015 | Dual-lens cameras begin to spread. | Siri gains the ability to control smart home devices.
2016 | Google's Pixel showcases computational photography with HDR+. | Google Assistant is introduced.
2017 | Apple's iPhone X uses AI-powered facial recognition (Face ID) for security; on-device neural processors arrive. | Google Assistant expands to iOS.
2018 | Google's Night Sight brings AI low-light photography to the Pixel. | Google demonstrates Duplex, a conversational AI that places phone calls.
2019 | AI-powered augmented reality apps begin to gain traction. | Google Assistant adds interpreter mode for real-time translation.
2020 | AI-powered chatbots become more common in smartphone apps. | On-device speech recognition improves responsiveness and privacy.
2021 | AI-powered health and fitness tracking apps become more sophisticated. | Assistants interpret and respond to natural-language conversations.
2022 | AI-powered language translation becomes more accurate and real-time. | Assistants deepen their integration with smart home devices.


The point is that the expression “AI usage” by the general public and most internet users already is problematic. In some ways, such as smartphone image recognition and natural language processing, AI already is nearly ubiquitous. In other areas, use cases are nascent.


Tuesday, December 5, 2023

Hard to Visualize Novel AI Use Cases and Firms

Thinking back to 1995, it seems obvious that people had a hard time imagining what would be possible and how business models, behavior and products could change as the internet took hold. 


Until 1994, for example, no visual web browser had become popular. Internet Explorer was not released until 1995. Prior to about 1993, the internet was text-based. Only with the commercialization of web browsers could sites develop multimedia capabilities. 


But multimedia experiences were limited by internet access bandwidth as well. Generally speaking, home broadband speeds did not become widespread until about 2001, using the traditional definition of 1.544 Mbps as the minimum for “broadband” connections. 


Year | Typical Speed (kbps)
1995 | 28.8
1996 | 33.6
1997 | 56
1998 | 128
1999 | 256
2000 | 512
2001 | 1024
2002 | 2048
2003 | 4096
2004 | 6144
2005 | 8192
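
A quick calculation on the figures above (the table's own illustrative numbers, not measured data) shows how steep that bandwidth curve was:

```python
# Compound annual growth rate of typical access speeds, using the
# illustrative figures from the table above: 28.8 kbps in 1995 to
# 8,192 kbps in 2005.
start_kbps, end_kbps, years = 28.8, 8192, 10
cagr = (end_kbps / start_kbps) ** (1 / years) - 1
print(f"{cagr:.0%} per year")  # roughly 76% per year
```

Typical speeds multiplied nearly 300-fold in a decade, which is one reason use cases such as streaming video were so hard to foresee in 1995.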


Keep in mind that Amazon, for example, was founded in 1994, and only sold books. Netflix was founded in 1997, but only shipped DVDs by mail. Google was not founded until 1998. Facebook did not emerge until 2004. 


You get the point: early on in any new era, it is hard to envision what will emerge. 


Company | Founding Date
Amazon | 1994
Alphabet (Google) | 1998
JD.com | 1998
Meta (Facebook) | 2004
Alibaba | 1999
Tencent | 1998
ByteDance | 2012
Netflix | 1997
Meituan | 2010
PayPal | 1998
Salesforce.com | 1999

We can reasonably assume some new forms of infrastructure will be necessary, just as internet service providers and cloud computing “as a service” were necessary for the broadband, multimedia internet. 


Harder to determine early on is the emergence of new applications and use cases. Search firms, e-commerce giants, social media, video and audio streaming services, ridesharing, software and computing “as a service,” fintech, the gig economy, cybersecurity and many forms of analytics are some of the new industries created by the commercialization of the web, the multimedia web and the broadband-supported web.


Obviously, artificial intelligence likewise will create some new industries and functions, many related to the “infrastructure” part of AI, as was the case for internet service providers, cloud computing “as a service” firms and mobile apps. 


Other trends might not create specific companies or industries, but underpin all of them. At a broader level, the multimedia, “easy to use” nature of the internet was enabled by the World Wide Web, which was an enabler, not a “company.” The web and the internet, in turn, were built in part on appliances, including mobile phones and personal computers, which did create specific companies. 


The point is that architectures often do not create companies, but infrastructure does. And devices and apps virtually always lead to company creation. All of which might seem pedestrian, but it is quite relevant as we try to envision the changes AI will bring. 


Most firms and processes will change because of AI, but that does not necessarily create “new” industries or functions, except in infrastructure areas. All firms use computers and the internet and mobility, for example. 


But some new industries have arisen with personal computing, the internet and mobility, and likely will also happen with the emergence of AI. The trick is imagining what new industries and functions could arise, aside from AI reshaping all existing industries and functions.


Early "Wearable" Leaders Might not be Eventual Winners

Where AI goes next, and where AI devices could go next, is a huge issue at the moment. Sometimes such devices are described as “wearables” that some believe could be a replacement for the smartphone. Almost all rely on natural language processing for user input. 


Tab is a wearable device that "ingests the context" of your daily life by listening to all of your conversations and using artificial intelligence to function as an assistant. By actively monitoring user conversations, Tab intends to provide instant access to a vast reservoir of person-specific knowledge, offering concise, relevant summaries in response to user inquiries.


Humane is developing an “AI Pin” that features a projector to allow its simple user interface to appear on a hand or other nearby surface.


Rewind.ai has developed a neck-worn pendant that's designed to record conversations and transfer them securely to a smartphone. Its AI software sorts through and gleans insights from that mass of audio info, creating a sort of searchable database.


Meta's smart glasses now include an AI chatbot users can interact with. 


And then there are the efforts of Jony Ive, Apple's former design chief, said to be working with SoftBank and OpenAI on an AI device of some sort. 


Those of you familiar with the development of computing appliances and devices know that the “early leaders” often are not the ultimate winners of big device markets. That is likely to happen for wearable AI devices as well. 


Why Orange Will Not Market "6G"

It is a bit of a subtlety, but Orange is not sure it will “market 6G,” which is not the same thing as saying it will not use 6G. Unless something very unusual happens, such as the global industry deciding it does not want a “6G” standard created at all, 6G is going to happen, for the simple reason that mobile operators will continue to need additional bandwidth and capacity, and 6G will be needed to supply them.


Aside from all other matters, 6G will mean regulators must authorize additional spectrum for the platform, and additional spectrum is among the main tools mobile operators have for increasing capacity on their networks. 
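
The standard way to frame that is to treat network capacity as roughly the product of three levers: spectrum, spectral efficiency and cell density. A minimal sketch, with hypothetical figures chosen only for illustration:

```python
def network_capacity(spectrum_mhz, bps_per_hz, cell_sites):
    """Rough aggregate capacity in Mbps: spectrum (MHz) times spectral
    efficiency (bps/Hz) times the number of cells reusing that spectrum.
    A deliberate simplification: real capacity varies with load,
    interference and frequency reuse patterns.
    """
    return spectrum_mhz * bps_per_hz * cell_sites

# Hypothetical network: 100 MHz of spectrum, 2 bps/Hz, 1,000 sites.
base = network_capacity(100, 2.0, 1000)
doubled_spectrum = network_capacity(200, 2.0, 1000)
print(doubled_spectrum / base)  # 2.0: doubling spectrum doubles capacity
```

Each new generation historically has brought both new spectrum and better spectral efficiency, which is why 6G will happen whether or not operators choose to market the label.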


Nor does such a stance really mean that Orange will stop investing in the latest generations of mobile networks. It does mean Orange will deemphasize “generation” as personal computer makers have deemphasized “clock speed” as a value driver or differentiator. 


Mobile phone suppliers, meanwhile, once marketed “smartphones” based on screen size, touchscreen interfaces rather than keypads, and the ability to use the mobile internet and apps. 


These days, much more emphasis is placed on battery life and camera features. One can safely predict that artificial intelligence features will be the next marketing battleground. 


In similar fashion, personal computer makers once marketed their devices on “performance” and a few lead use cases (word processing or spreadsheets). So processor speed, storage and memory were key messages. 


Later, bundled apps, connectivity and user-friendly interfaces became more important. These days, mobility (weight, form factor), multi-function use or sustainability are more prominent messages. 


The point is that features once considered differentiators often lose their appeal as markets mature. 


DIY and Licensed GenAI Patterns Will Continue

As always with software, firms are going to opt for a mix of "do it yourself" owned technology and licensed third party offerings....