
Wednesday, January 28, 2026

Has AI Use Reached an Inflection Point, or Not?

As always, we might well disagree about the latest statistics on AI usage.


The proportion of U.S. employees who report using artificial intelligence daily rose from 10 percent to 12 percent in the fourth quarter of 2025, a Gallup survey finds. 


Frequent use, defined as using AI at work at least a few times a week, has also inched up three percentage points to 26 percent.


source: Gallup 


The percentage of those who use AI at work at least a few times a year was flat in the fourth quarter of 2025.  


And nearly half of U.S. workers (49 percent) report that they “never” use AI in their role.


As always, the data can be interpreted in several different, and contradictory, ways:

  • Not every job role requires AI

  • Some use cases and verticals use AI heavily

  • Adoption has reached an inflection point

  • Adoption is quite fast

  • Adoption is slowing


source: Gallup 


Some of us might argue that AI is at an adoption rate inflection point, the historical precedent being that adoption shifts to a higher gear once about 10 percent of consumers use any particular technology. 


Also, Amara's Law suggests the impact is likely to be less than we expect in the short term (as in, “now” or “today”), while long-term impact will be greater than we anticipate.


Amara’s Law suggests that we tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.


Source


“Most people overestimate what they can achieve in a year and underestimate what they can achieve in ten years” is a quote whose provenance is unknown, though some attribute it to Stanford computer scientist Roy Amara. Some call it “Gates’ Law.”


Some products or technologies (and AI might be among them) can take decades to reach mass adoption, especially if we start tracking adoption from the time a new technology is discovered, rather than “starting the clock” when “commercialization” begins. 


The “next big thing” will have first been talked about roughly 30 years ago, says technologist Greg Satell. IBM coined the term machine learning in 1959, for example, and machine learning is only now in widespread use. 


Alexander Fleming discovered penicillin in 1928, but it did not reach the market until 1945, nearly 20 years later.


Electricity, it can be argued, did not have a measurable impact on the economy until the early 1920s, some 40 years after Edison’s first power plant.


It wasn’t until the late 1990s, roughly 30 years after 1968, that computers had a measurable effect on the US economy, many would also note.


The point is that it is way too early to discern the actual productivity gains AI will eventually deliver. We will expect more, and be disappointed, over the short term. But we will underestimate impact over the longer term. 


And there is good reason to believe that the adoption inflection point has only just been reached.


Friday, March 21, 2025

Good Outcomes Beat Good Intentions: How Dumb Are We?

Good intentions clearly are not enough when designing policies to improve home broadband availability in underserved areas. In fact, more than three years after its passage in 2021, the U.S. Broadband Equity, Access, and Deployment (BEAD) program has yet to install a single new connection.


It seems we were determined to make the perfect the enemy of the good, preventing construction until we were mostly certain our maps were accurate. A rival approach would have proceeded on the assumption that residents and service providers pretty much know where they have facilities and where they do not; where an upgrade can be conducted quickly and easily, and where it cannot.


And perhaps (despite the clear industry participant interests that always seem to influence our decisions) we should not have insisted on the “fastest speed” platforms. Maybe we would have prioritized “good enough” connections that could be supplied very fast and enabled the outcomes we were looking for: getting the unconnected connected, and giving the underserved connections that do not impede their use of internet apps.


This is not, to use the phrase, “rocket science.” We have known for many decades that “good enough” home broadband can be supplied fast, and affordably, if we use satellite (geostationary or low earth orbit, but particularly now LEO) or wireless to enable the connections. 


To those who say we need to supply fiber to the home, some of us might argue the evidence suggests relatively lower-speed connections (such as 100 Mbps downstream) supply all the measurable upside we seek for homework, shopping and telework. The touted gigabit-per-second or multi-gigabit-per-second connections are fine, but there is very little evidence consumers can even use that much bandwidth.


Study/source and key findings:

Distinguishing Bandwidth and Latency in Households' Willingness to Pay for Broadband Internet Speed (2017): Consumers value increasing bandwidth from 10 Mbps to 25 Mbps at about $24 per month, but the additional value of increasing from 100 Mbps to 1 Gbps is only $19, suggesting diminishing returns for speeds beyond 100 Mbps.

Are you overpaying for internet speeds you don't need? (2025): Research indicates that many Australians are overspending on high-speed internet connections they do not need; most households can manage well with a 50 Mbps plan unless they engage in high-bandwidth tasks such as 4K streaming or online gaming.

Simple broadband mistake costing 9.5 million households up to £113 extra a year (2024): Millions of UK households are overpaying for broadband by purchasing higher speeds than necessary; smaller households often need speeds of up to 15 Mbps but pay for more than 150 Mbps, wasting £113 annually.

ITIF (2023): US broadband speeds outpace everyday demands; only 9.1 percent of households choose to adopt 250/25 Mbps speeds when available; there is a clear inflection point past 100 Mbps where consumers no longer see value in higher speeds.

ITIF (2020): Average existing connections comfortably handle more than typical applications require; a household with five people streaming 4K video simultaneously needs only two-thirds of the current average tested speed; research shows that reaching a critical threshold of basic broadband penetration matters more for economic growth than faster speeds.

European research (2020): Full-fiber networks are not worth the costs; partial, rather than full end-to-end, fiber-based broadband coverage entails the largest net benefits.

US broadband data analysis: Compared with normal-speed broadband, faster broadband did not generate greater positive effects on employment.

OpenVault Q3 2024 report: The average US household uses 564 Mbps downstream and 31 Mbps upstream; speeds around 500 Mbps are sufficient for most families.

FCC guidelines: 100 Mbps to 500 Mbps is enough for one or two people to run videoconferencing, streaming and online gaming simultaneously; 500 Mbps to 1,000 Mbps is suitable for three or more people with high bandwidth needs.


We might all agree that, where it is feasible, fiber to the home makes the most long-term sense. But we might also agree that where we want useful home broadband speeds right now, everywhere, with performance that enables remote work, homework, online shopping and all other internet apps, then any platform delivering 100 Mbps (more for multi-user households, but likely not more than 500 Mbps even in the most-challenging use cases) will do the job, right now.
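To make the arithmetic concrete, here is a minimal, hypothetical sketch of peak concurrent demand in a busy multi-user household. The per-application bitrates are assumptions chosen for illustration, not figures taken from the studies above.

```python
# Hypothetical back-of-the-envelope estimate of peak household bandwidth demand.
# The per-application bitrates are illustrative assumptions, not study data.

APP_BITRATES_MBPS = {
    "4K video stream": 25,
    "HD videoconference": 4,
    "online gaming session": 3,
    "general browsing / cloud apps": 2,
}

def household_demand_mbps(concurrent_sessions):
    """Sum the assumed bitrates of everything running at the same time."""
    return sum(APP_BITRATES_MBPS[app] * count
               for app, count in concurrent_sessions.items())

# A demanding five-person household, everyone active at once:
busy_household = {
    "4K video stream": 3,
    "HD videoconference": 1,
    "online gaming session": 1,
    "general browsing / cloud apps": 5,
}

print(household_demand_mbps(busy_household), "Mbps")  # 75 + 4 + 3 + 10 = 92 Mbps
```

Even under those fairly demanding assumptions, peak demand stays below 100 Mbps, which is the intuition behind the “good enough” argument.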


Good intentions really are not enough. Good outcomes are what we seek. And that often means designing programs that we can implement fast, at lower cost, with wider impact, immediately or nearly so. “Better” platforms that cost more and are not built are hardly better.


Monday, September 30, 2024

Amara's Law and Generative AI Outcomes: Less than You Expect Now; More than You Anticipate Later

Generative artificial intelligence is as likely to show the impact of Amara's Law as any other new technology, which is to say that initial outcomes will be less than we expect, while long-term impact will be greater than we anticipate.


Amara’s Law suggests that we tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.


Source


Amara’s Law seemingly is the thinking behind the Gartner Hype Cycle, for example, which suggests that initial enthusiasm wanes when outcomes do not appear, leading to disillusionment and then a gradual appearance of relevant outcomes later.


Lots of other “rules” about technology adoption also testify to the asymmetrical and non-linear outcomes of new technology.


“Most people overestimate what they can achieve in a year and underestimate what they can achieve in ten years” is a quote whose provenance is unknown, though some attribute it to Stanford computer scientist Roy Amara, and some people call it “Gates’ Law.”


The principle is useful for technology market forecasters, as it seems to illustrate other theorems including the S curve of product adoption. The expectation for virtually all technology forecasts is that actual adoption tends to resemble an S curve, with slow adoption at first, then eventually rapid adoption by users and finally market saturation.   


That sigmoid curve describes product life cycles, suggests how business strategy changes depending on where on any single S curve a product happens to be, and has implications for innovation and start-up strategy as well. 


source: Semantic Scholar 


Some say S curves explain overall market development, customer adoption, product usage by individual customers, sales productivity, developer productivity and sometimes investor interest. The S curve often is used to describe adoption rates of new services and technologies, including the notion of non-linear change rates and inflection points in the adoption of consumer products and technologies.


In mathematics, the S curve is a sigmoid function. It is the basis for the Gompertz function, which can be used to predict new technology adoption, and is related to the Bass model.
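For readers who want the underlying math, here is a minimal sketch of the three curves mentioned above: the logistic (sigmoid), the Gompertz function and the Bass diffusion model. The parameter values are illustrative assumptions, not fitted to any particular product.

```python
import math

def logistic(t, k=1.0, t0=0.0):
    """Logistic (sigmoid) adoption share at time t, saturating at 1."""
    return 1.0 / (1.0 + math.exp(-k * (t - t0)))

def gompertz(t, b=5.0, c=0.5):
    """Gompertz adoption share at time t: a slower, asymmetric S shape."""
    return math.exp(-b * math.exp(-c * t))

def bass_cumulative(t, p=0.03, q=0.38):
    """Bass diffusion model: cumulative adoption share F(t).
    p is the coefficient of innovation, q the coefficient of imitation;
    the defaults are commonly cited averages, used here only for illustration."""
    return (1.0 - math.exp(-(p + q) * t)) / (1.0 + (q / p) * math.exp(-(p + q) * t))

for year in range(0, 21, 2):
    print(year, round(bass_cumulative(year), 3))
```

With those illustrative Bass parameters, cumulative adoption is still under 10 percent after two years but passes 80 percent by year ten: the slow-then-fast pattern the S curve describes.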


Another key observation is that some products or technologies can take decades to reach mass adoption.


It also can take decades before a successful innovation actually reaches commercialization. The next big thing will have first been talked about roughly 30 years ago, says technologist Greg Satell. IBM coined the term machine learning in 1959, for example, and machine learning is only now in widespread use.


Many times, reaping the full benefits of a major new technology can take 20 to 30 years. Alexander Fleming discovered penicillin in 1928, but it did not reach the market until 1945, nearly 20 years later.


Electricity, it can be argued, did not have a measurable impact on the economy until the early 1920s, some 40 years after Edison’s first power plant.


It wasn’t until the late 1990s, roughly 30 years after 1968, that computers had a measurable effect on the US economy, many would note.



source: Wikipedia


The S curve is related to the product life cycle, as well. 


Another key principle is that successive product S curves are the pattern. A firm or an industry has to begin work on the next generation of products while existing products are still near peak levels. 


source: Strategic Thinker


There are other useful predictions one can make when using S curves. Suppliers in new markets often want to know “when” an innovation will “cross the chasm” and be adopted by the mass market. The S curve helps there as well. 


Innovations reach an adoption inflection point at around 10 percent. For those of you familiar with the notion of “crossing the chasm,” the inflection point happens when “early adopters” drive the market. The chasm is crossed at perhaps 15 percent of persons, according to technology theorist Geoffrey Moore.

source 


For most consumer technology products, the chasm gets crossed at about 10 percent household adoption. Professor Geoffrey Moore does not use a household definition, but focuses on individuals. 

source: Medium


And that is why the saying “most people overestimate what they can achieve in a year and underestimate what they can achieve in ten years” is so relevant for technology products. Linear demand is not the pattern. 


One has to assume some form of exponential or non-linear growth. And we tend to underestimate the gestation time required for some innovations, such as machine learning or artificial intelligence. 


Other processes, such as computing power, bandwidth prices or end user bandwidth consumption, are more linear. But the impact of those linear functions also tends to be non-linear. 


Each deployed use case, capability or function creates a greater surface for additional innovations. Futurist Ray Kurzweil called this the law of accelerating returns. Rates of change are not linear because positive feedback loops exist.


source: Ray Kurzweil  


Each innovation leads to further innovations and the cumulative effect is exponential. 


Think about ecosystems and network effects. Each new applied innovation becomes a new participant in an ecosystem. And as the number of participants grows, so do the possible interconnections between the discrete nodes.  

source: Linked Stars Blog 
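A simple, purely illustrative count makes the point: the number of possible pairwise links among n participants grows much faster than n itself.

```python
# Illustrative only: counting possible pairwise connections among n participants,
# the usual back-of-the-envelope behind network-effect arguments.
def possible_links(n):
    """Number of distinct pairs among n nodes: n * (n - 1) / 2."""
    return n * (n - 1) // 2

for n in (2, 5, 10, 100):
    print(n, possible_links(n))
# 2 -> 1, 5 -> 10, 10 -> 45, 100 -> 4,950: participants grow linearly,
# while possible interconnections grow roughly with the square of n.
```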


Think of that as analogous to the way people can use one particular innovation to create another adjacent innovation. When A exists, then B can be created. When A and B exist, then C and D and E and F are possible, as existing things become the basis for creating yet other new things. 


So we often find that progress is slower than we expect, at first. But later, change seems much faster. And that is because non-linear change is the norm for technology products. So is Amara’s Law.


Sunday, September 29, 2024

How Soon Could Huge New Generative AI Industries Emerge?

How soon will generative artificial intelligence produce some obvious huge new behaviors, firms, apps, use cases, business models and industries, as happened with the internet?


Consumer products generally reach an adoption inflection point at about 10-percent consumer adoption. So if consumer AI use cases follow precedent, mass market success will happen when any single use case or app hits about 10-percent usage. 


Generative AI usage likely will reach 10 percent in 2024 in many markets, suggesting a rapid uptake period will commence. 


But use of generative AI, quite often as a feature of an existing experience, is arguably a different matter from the creation of wholly-new use cases, value propositions and industries, as happened with the growth of internet use.


And it will still take some time for such new use cases, apps, value propositions and industries to emerge. 


Some leading internet apps--including Google search, Facebook social media, Amazon e-commerce and Google Maps navigation--took between three and eight years to reach 10-percent usage levels.


Keep in mind those innovations represented new behaviors, value and business models for new firms in new industries, as opposed to use of the internet by legacy firms and processes. 




It took longer--almost twice as long--for each of these apps to reach adoption by half of people. The point is that even if generative artificial intelligence is highly successful at creating new behaviors, use cases, apps and firms, it will take up to a decade and a half for that success to be quite obvious, as defined by usage. And it probably goes without saying that this is true only for the most-popular, most commercially-successful new use cases, apps and firms. Most implementations will prove to be insignificant or actually fail to achieve success.
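One way to see why the second milestone takes roughly as long as the first: under the purely illustrative assumption that adoption follows a logistic curve, the time from about 1 percent to 10 percent adoption is about the same as the time from 10 percent to 50 percent. The parameters below are arbitrary, chosen only to show the shape.

```python
import math

def crossing_time(share, k=0.5, t0=10.0):
    """Year at which a logistic curve 1 / (1 + exp(-k * (t - t0))) reaches `share`.
    k (growth rate) and t0 (midpoint year) are arbitrary illustrative values."""
    return t0 - math.log((1.0 - share) / share) / k

t_launch = crossing_time(0.01)   # treat roughly 1 percent adoption as "launch"
t_ten = crossing_time(0.10)
t_half = crossing_time(0.50)

print(f"1% -> 10% takes about {t_ten - t_launch:.1f} years")   # ~4.8 years
print(f"10% -> 50% takes about {t_half - t_ten:.1f} years")    # ~4.4 more years
```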

So it might be rational and realistic to assume huge new industries will emerge only after some time, even if GenAI propagates faster than the leading new search, social media and e-commerce apps did in the earlier internet era.


And it is always possible that development times wind up being slower than, or equal to, those of the earlier internet use cases (search, social media and e-commerce).


In other words, any huge new AI-based behaviors, apps, use cases, business models and industry categories might still take some years to emerge clearly. Right now, most AI use cases are enhancements to existing products and services.


That’s useful and helpful, but probably not disruptive. And with AI, we really will be looking for huge disruptive impact, as is the case for other general-purpose technologies.


Enterprise Apps Need to Become AI-Native Faster than AI Rearchitects the User Interface

The phrase “Netflix wants to become HBO faster than HBO becomes Netflix” captures a classic dynamic in technology-driven industry change, ...