Showing posts sorted by date for query inflection. Sort by relevance Show all posts

Monday, September 30, 2024

Amara's Law and Generative AI Outcomes: Less than You Expect Now; More than You Anticipate Later

Generative artificial intelligence is as likely to show the impact of Amara's Law as any other new technology, which is to say that initial outcomes will be less than we expect, while long-term impact will be greater than we anticipate.


Amara’s Law suggests that we tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.


Source


Amara’s Law seemingly is the thinking behind the Gartner Hype Cycle, for example, which suggests that initial enthusiasm wanes when outcomes do not appear, leading to disillusionment and then a gradual appearance of relevant outcomes later. 


Lots of other "rules" about technology adoption also testify to the asymmetrical and non-linear outcomes from new technology.  


“Most people overestimate what they can achieve in a year and underestimate what they can achieve in ten years” is a quote whose provenance is unknown, though some attribute it to Stanford computer scientist Roy Amara, and some people call it “Gates’s Law.”


The principle is useful for technology market forecasters, as it seems to illustrate other theorems including the S curve of product adoption. The expectation for virtually all technology forecasts is that actual adoption tends to resemble an S curve, with slow adoption at first, then eventually rapid adoption by users and finally market saturation.   


That sigmoid curve describes product life cycles, suggests how business strategy changes depending on where on any single S curve a product happens to be, and has implications for innovation and start-up strategy as well. 


source: Semantic Scholar 


Some say S curves explain overall market development, customer adoption, product usage by individual customers, sales productivity, developer productivity and sometimes investor interest. The curve often is used to describe adoption rates of new services and technologies, including the notion of non-linear change rates and inflection points in the adoption of consumer products and technologies.


In mathematics, the S curve is a sigmoid function. It is the basis for the Gompertz function which can be used to predict new technology adoption and is related to the Bass Model.
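As a sketch of those two shapes (the saturation level, midpoint and growth-rate parameters below are arbitrary illustrations, not fitted values), the logistic and Gompertz curves can be written as:

```python
import math

def logistic(t, saturation=1.0, midpoint=10.0, rate=0.5):
    """Logistic (S curve) adoption share at time t."""
    return saturation / (1.0 + math.exp(-rate * (t - midpoint)))

def gompertz(t, saturation=1.0, displacement=5.0, rate=0.3):
    """Gompertz adoption share at time t; it rises asymmetrically,
    approaching saturation more gradually than the logistic."""
    return saturation * math.exp(-displacement * math.exp(-rate * t))

# Both start near zero, pass through rapid growth, then saturate.
for t in (0, 5, 10, 15, 20, 30):
    print(f"t={t:2d}  logistic={logistic(t):.3f}  gompertz={gompertz(t):.3f}")
```

Fitting either curve to early adoption data is how forecasters project the timing of the rapid-growth and saturation phases.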


Another key observation is that some products or technologies can take decades to reach mass adoption.


It also can take decades before a successful innovation actually reaches commercialization. The next big thing will have first been talked about roughly 30 years ago, says technologist Greg Satell. IBM's Arthur Samuel coined the term "machine learning" in 1959, for example, and machine learning is only now coming into widespread use. 


Many times, reaping the full benefits of a major new technology can take 20 to 30 years. Alexander Fleming discovered penicillin in 1928, but it did not arrive on the market until 1945, nearly 20 years later.


It can be argued that electricity did not have a measurable impact on the economy until the early 1920s, some 40 years after Edison's first power plant.


It wasn’t until the late 1990s, about 30 years after 1968, that computers had a measurable effect on the US economy, many would note.



source: Wikipedia


The S curve is related to the product life cycle, as well. 


Another key principle is that successive product S curves are the pattern. A firm or an industry has to begin work on the next generation of products while existing products are still near peak levels. 


source: Strategic Thinker


There are other useful predictions one can make when using S curves. Suppliers in new markets often want to know “when” an innovation will “cross the chasm” and be adopted by the mass market. The S curve helps there as well. 


Innovations reach an adoption inflection point at around 10 percent. For those of you familiar with the notion of “crossing the chasm,” the inflection point happens when “early adopters” drive the market. The chasm is crossed at perhaps 15 percent of persons, according to technology theorist Geoffrey Moore.

source 


For most consumer technology products, the chasm gets crossed at about 10 percent household adoption. Professor Geoffrey Moore does not use a household definition, but focuses on individuals. 

source: Medium
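As a hypothetical illustration, given a logistic adoption curve with assumed (not fitted) parameters, the year any threshold share is crossed can be solved in closed form:

```python
import math

def crossing_time(share, midpoint=10.0, rate=0.5):
    """Year at which a logistic adoption curve reaches a given share
    (0 < share < 1). Inverts p = 1 / (1 + exp(-rate * (t - midpoint)))."""
    return midpoint + math.log(share / (1.0 - share)) / rate

t10 = crossing_time(0.10)  # roughly "crossing the chasm"
t50 = crossing_time(0.50)  # mass-market majority
print(f"10% reached at t={t10:.1f}, 50% at t={t50:.1f}")
```

With these parameters the 10-percent threshold is crossed several years before the midpoint, which is exactly the gap between chasm-crossing and mass-market adoption the S curve implies.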


And that is why the saying “most people overestimate what they can achieve in a year and underestimate what they can achieve in ten years” is so relevant for technology products. Linear demand is not the pattern. 


One has to assume some form of exponential or non-linear growth. And we tend to underestimate the gestation time required for some innovations, such as machine learning or artificial intelligence. 


Other processes, such as computing power, bandwidth prices or end user bandwidth consumption, are more linear. But the impact of those linear functions also tends to be non-linear. 


Each deployed use case, capability or function creates a greater surface for additional innovations. Futurist Ray Kurzweil called this the law of accelerating returns. Rates of change are not linear because positive feedback loops exist.


source: Ray Kurzweil  
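A minimal sketch of that feedback loop, with an assumed (illustrative) per-period feedback rate: each period's gain is proportional to the capability already deployed, so equal-length time windows late in the process produce far larger absolute gains than equal windows early on.

```python
def capability_over_time(periods=10, feedback=0.5, start=1.0):
    """Positive feedback loop: each period's gain is proportional
    to the capability already deployed, so growth compounds."""
    levels = [start]
    for _ in range(periods):
        levels.append(levels[-1] * (1.0 + feedback))
    return levels

levels = capability_over_time()
early_gain = levels[2] - levels[0]    # gain over the first two periods
late_gain = levels[10] - levels[8]    # gain over the last two periods
print(f"early gain: {early_gain:.2f}, late gain: {late_gain:.2f}")
```

The same-length window at the end of the run yields a gain many times larger than at the start, which is why linear extrapolation from early results understates the long-run outcome.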


Each innovation leads to further innovations and the cumulative effect is exponential. 


Think about ecosystems and network effects. Each new applied innovation becomes a new participant in an ecosystem. And as the number of participants grows, so do the possible interconnections between the discrete nodes.  

source: Linked Stars Blog 
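The node-interconnection point can be made concrete: among n participants there are n(n-1)/2 possible pairwise links, so possible connections grow roughly with the square of the number of participants. A minimal sketch:

```python
def possible_links(n):
    """Number of distinct pairwise connections among n participants."""
    return n * (n - 1) // 2

# Possible interconnections grow much faster than participant count.
for n in (2, 10, 100):
    print(f"{n} participants -> {possible_links(n)} possible links")
```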


Think of that as analogous to the way people can use one particular innovation to create another adjacent innovation. When A exists, then B can be created. When A and B exist, then C and D and E and F are possible, as existing things become the basis for creating yet other new things. 


So we often find that progress is slower than we expect, at first. But later, change seems much faster. And that is because non-linear change is the norm for technology products, just as Amara's Law suggests.


Sunday, September 29, 2024

How Soon Could Huge New Generative AI Industries Emerge?

How soon will generative artificial intelligence produce some obvious huge new behaviors, firms, apps, use cases, business models and industries, as happened with the internet?


Consumer products generally reach an adoption inflection point at about 10-percent consumer adoption. So if consumer AI use cases follow precedent, mass market success will happen when any single use case or app hits about 10-percent usage. 


Generative AI usage likely will reach 10 percent in 2024 in many markets, suggesting a rapid uptake period will commence. 


But use of generative AI, quite often as a feature of an existing experience, is possibly a different matter from the creation of wholly new use cases, value propositions and industries, as happened with the growth of internet use. 


And it will still take some time for such new use cases, apps, value propositions and industries to emerge. 


Some leading internet apps--including Google search, Facebook social media, Amazon e-commerce and Google Maps navigation--took between three and eight years to reach 10-percent usage levels. 


Keep in mind those innovations represented new behaviors, value and business models for new firms in new industries, as opposed to use of the internet by legacy firms and processes. 




It took longer--almost twice as long--for each of these apps to reach adoption by half of people.

The point is that even if generative artificial intelligence is highly successful at creating new behaviors, use cases, apps and firms, it will take up to a decade and a half for that success to be quite obvious, as defined by usage. And it probably goes without saying that this is true only for the most-popular, most commercially-successful new use cases, apps and firms. Most implementations will prove to be insignificant or actually fail to achieve success.

So it might be rational and realistic to assume huge new industries will emerge only after some time, even if generative AI propagates faster than the leading new search, social media and e-commerce apps did in the earlier internet era. 


And it is always possible that development times wind up being as slow as, or slower than, those of the new internet use cases (search, social media and e-commerce). 


In other words, any huge new AI-based behaviors, apps, use cases, business models and industry categories might still take some years to emerge clearly. Right now, most AI use cases are enhancements to existing products and services.


That’s useful and helpful, but probably not disruptive. And with AI, we really will be looking for huge disruptive impact, as is the case for other general-purpose technologies.


Wednesday, December 6, 2023

On Smartphones, AI Already is Nearly Ubiquitous

Anyone trying to model artificial intelligence usage is immediately faced with a number of problems. AI already is used to support natural language processing, image processing on phones, recommendations, customer service queries, search functions and e-commerce. 


And some would argue the embedding of AI into popular smartphone processes started in 2007, meaning the widespread consumer use of AI on smartphones capable of natural language processing and camera image processing has been underway for at least 16 years already. 


In that sense, some might argue the use of AI on smartphones already has reached levels in excess of 95 percent. And that analysis ignores other areas of common consumer use, such as search, e-commerce, recommendation engines and social media, for example. 

Still, that example probably strikes most people as overstating the present application of AI. What most people likely have in mind is a future where virtually all popular web-related or app-related interactions embed AI in their core operations, so that AI becomes a foundational part of any experience.  


That might also imply that AI “usage” could grow much faster than any other discrete application or technology, since it would be part of nearly-all app experiences. 


All that shows the importance of defining what we mean by “AI use” and when such use is said to have started. In some discrete use cases, such as NLP and camera processing on smartphones, AI might plausibly be said to have reached 95 percent adoption by consumers. 


Generative AI and large language models, on the other hand, are still at the beginning, and arguably have not yet reached anywhere close to regular use by 10 percent of internet users. 


Beyond that, AI is a cumulative trend, representing different types of functions and eventually to be used in virtually all popular apps and hardware. 

So we already face the problem that AI is not like earlier app adoption. 


Popular internet-using applications generally have taken two to five years to reach 10-percent usage by all internet users, for example. That is generally an inflection point where usage then grows to become a mass market trend. 

The difference with AI is that it will be embedded into the core operations of virtually every popular app, device and software platform. So the “adoption or use of AI” will have a cumulative effect we have not seen before, with the possible exception of the internet itself. 


Still, it took roughly 12 years for internet usage to reach a level of 10 percent of people. With AI, embedded into virtually all major forms of software and hardware, adoption should be faster than that. Just how much faster remains the issue. 


The AI advantage is that if we set 2022 as the AI equivalent of 1995 for the internet, AI already begins with a higher start point, as it is used widely for smartphone image recognition, natural language queries, speech-to-text, recommendation engines and e-commerce. 


Unlike virtually all prior innovations, AI starts with higher usage from the inception, and is a multi-app, multiple-use case trend. 


So it might make more sense to set start levels for AI much earlier: 

  • Recommendation engines, 1990s

  • Image processing on smartphones, 2000s

  • Search, 2000s 

  • E-commerce, 2000s

  • Social media, 2000s

  • Natural language processing, 2009

Looked at in that way, and looking only at AI use on smartphones, the AI trend has been underway since at least 2007. 


Year | Smartphone image processing | Natural language processing

2007 | Apple ships the original iPhone, making the camera phone a mass-market computing platform. | Modern voice assistants have not yet arrived.

2009 | Google launches Goggles, an image recognition app for Android. | Nuance releases Dragon Dictation for iOS.

2011 | Face detection and HDR photography become common on flagship phones. | Apple introduces Siri with the iPhone 4S.

2012 | Stock Android adds panorama (Photo Sphere) capture. | Google Now, a personal assistant, is launched.

2014 | HTC's One M8 features a dual-lens camera for depth-of-field effects. | Voice dictation is built into the major mobile platforms.

2016 | Google's Pixel introduces computational photography with HDR+. | Google Assistant is introduced.

2017 | AI-powered facial recognition (Apple's Face ID) becomes a mainstream security feature. | Voice assistants spread across phones, speakers and other devices.

2018 | AI-powered image editing tools become more sophisticated; Night Sight extends computational photography to low light. | Google Assistant expands its capabilities to include language translation.

2019 | AI-powered augmented reality apps begin to gain traction. | Google Assistant becomes more integrated with other Google products.

2020 | AI-powered chatbots become more common in smartphone apps. | Google Assistant gains the ability to make phone calls and send text messages.

2021 | AI-powered health and fitness tracking apps become more sophisticated. | Google Assistant gains the ability to interpret and respond to natural language conversations.

2022 | AI-powered language translation becomes more accurate and real-time. | Google Assistant becomes more integrated with smart home devices.


The point is that the expression “AI usage” by the general public and most internet users is already problematic. In some ways, such as smartphone image recognition and natural language processing, AI already is nearly ubiquitous. In other areas use cases are nascent.


Sunday, April 16, 2023

We Will Overestimate what Generative AI can Accomplish Near Term

For most people, it seems as though artificial intelligence has suddenly emerged as an idea and set of possibilities. Consider the explosion of interest in large language models or generative AI.


In truth, AI has been gestating for many decades. And forms of AI already are used in consumer appliances such as smart speakers, recommendation engines and search functions.


What seems to be happening now is some inflection point in adoption. But the next thing to happen is that people will vastly overestimate the degree of change over the near term, as large language models get adopted, just as they underestimate what will happen longer term.


That is an old--but apt--story.


“Most people overestimate what they can achieve in a year and underestimate what they can achieve in ten years” is a quote whose provenance is unknown, though some attribute it to Stanford computer scientist Roy Amara. Some people call it “Gates’s Law.”


The principle is useful for technology market forecasters, as it seems to illustrate other theorems including the S curve of product adoption. The expectation for virtually all technology forecasts is that actual adoption tends to resemble an S curve, with slow adoption at first, then eventually rapid adoption by users and finally market saturation.   


That sigmoid curve describes product life cycles, suggests how business strategy changes depending on where on any single S curve a product happens to be, and has implications for innovation and start-up strategy as well. 


source: Semantic Scholar 


Some say S curves explain overall market development, customer adoption, product usage by individual customers, sales productivity, developer productivity and sometimes investor interest. The curve often is used to describe adoption rates of new services and technologies, including the notion of non-linear change rates and inflection points in the adoption of consumer products and technologies.


In mathematics, the S curve is a sigmoid function. It is the basis for the Gompertz function which can be used to predict new technology adoption and is related to the Bass Model.


Another key observation is that some products or technologies can take decades to reach mass adoption.


It also can take decades before a successful innovation actually reaches commercialization. The next big thing will have first been talked about roughly 30 years ago, says technologist Greg Satell. IBM's Arthur Samuel coined the term "machine learning" in 1959, for example, and machine learning is only now coming into widespread use. 


Many times, reaping the full benefits of a major new technology can take 20 to 30 years. Alexander Fleming discovered penicillin in 1928, but it did not arrive on the market until 1945, nearly 20 years later.


It can be argued that electricity did not have a measurable impact on the economy until the early 1920s, some 40 years after Edison's first power plant.


It wasn’t until the late 1990s, about 30 years after 1968, that computers had a measurable effect on the US economy, many would note.



source: Wikipedia


The S curve is related to the product life cycle, as well. 


Another key principle is that successive product S curves are the pattern. A firm or an industry has to begin work on the next generation of products while existing products are still near peak levels. 


source: Strategic Thinker


There are other useful predictions one can make when using S curves. Suppliers in new markets often want to know “when” an innovation will “cross the chasm” and be adopted by the mass market. The S curve helps there as well. 


Innovations reach an adoption inflection point at around 10 percent. For those of you familiar with the notion of “crossing the chasm,” the inflection point happens when “early adopters” drive the market. The chasm is crossed at perhaps 15 percent of persons, according to technology theorist Geoffrey Moore.

source 


For most consumer technology products, the chasm gets crossed at about 10 percent household adoption. Professor Geoffrey Moore does not use a household definition, but focuses on individuals. 

source: Medium


And that is why the saying “most people overestimate what they can achieve in a year and underestimate what they can achieve in ten years” is so relevant for technology products. Linear demand is not the pattern. 


One has to assume some form of exponential or non-linear growth. And we tend to underestimate the gestation time required for some innovations, such as machine learning or artificial intelligence. 


Other processes, such as computing power, bandwidth prices or end user bandwidth consumption, are more linear. But the impact of those linear functions also tends to be non-linear. 


Each deployed use case, capability or function creates a greater surface for additional innovations. Futurist Ray Kurzweil called this the law of accelerating returns. Rates of change are not linear because positive feedback loops exist.


source: Ray Kurzweil  


Each innovation leads to further innovations and the cumulative effect is exponential. 


Think about ecosystems and network effects. Each new applied innovation becomes a new participant in an ecosystem. And as the number of participants grows, so do the possible interconnections between the discrete nodes.  

source: Linked Stars Blog 


Think of that as analogous to the way people can use one particular innovation to create another adjacent innovation. When A exists, then B can be created. When A and B exist, then C and D and E and F are possible, as existing things become the basis for creating yet other new things. 


So we often find that progress is slower than we expect, at first. But later, change seems much faster. And that is because non-linear change is the norm for technology products.


Will AI Fuel a Huge "Services into Products" Shift?

As content streaming has disrupted music, is disrupting video and television, so might AI potentially disrupt industry leaders ranging from ...