
Saturday, February 28, 2026

Layoffs: An Unfortunate Effort to Quantify AI Productivity Gains

Layoffs might be an unfortunate way of attempting to prove artificial intelligence productivity gains when there are few other ways to quantify the benefits in the near term. 


When important new technologies are introduced, there is almost always a lag between adoption and quantifiable productivity gains. In fact, it often happens that productivity drops as the new technology is adopted. 


Employees must take time away from current tasks to learn how to use the technology, test its results, and so forth.


There is no reason to expect the J-curve of technology adoption will fail to appear for AI, either.



Economic historians such as Erik Brynjolfsson and Paul David have documented that transformative, general-purpose technologies tend to follow the J-curve pattern. 


Initial deployment generates negative or flat productivity returns relative to investment, often for a surprisingly long time. 


David's famous 1990 paper on the "dynamo paradox" showed that electrification of US industry began in earnest in the 1880s but didn't produce measurable aggregate productivity gains until the 1920s.


The reasons are structural: firms must reorganize workflows, retrain workers, build complementary infrastructure, and abandon legacy processes before the technology's benefits materialize. 


The productivity gains, when they finally arrive, are real and large, but they accrue after enormous sunk costs and a long gestation period.
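

To make the shape concrete, here is a minimal toy sketch of that J-curve in Python. All of the parameters (baseline index, adjustment cost, long-run gain, learning rate) are invented for illustration, not estimates from the literature.

```python
# Toy model of the productivity J-curve: measured productivity sits below
# the pre-adoption baseline while firms pay adjustment costs (training,
# reorganization), then climbs past it as accumulated learning unlocks
# the technology's gains. All parameters are invented for illustration.
import math

BASELINE = 100.0        # pre-adoption productivity index
ADJUSTMENT_COST = 12.0  # peak drag from retraining and reorganizing
LONG_RUN_GAIN = 30.0    # eventual uplift once workflows are rebuilt
MIDPOINT_YEARS = 10     # years until half the gain is realized
LEARNING_RATE = 0.5     # how quickly organizational learning compounds

def productivity(year: float) -> float:
    """Productivity index a given number of years after adoption begins."""
    # Logistic learning curve: near zero early, approaching 1.0 late.
    learning = 1.0 / (1.0 + math.exp(-LEARNING_RATE * (year - MIDPOINT_YEARS)))
    # Adjustment costs weigh heaviest early and fade as learning completes.
    drag = ADJUSTMENT_COST * (1.0 - learning)
    return BASELINE - drag + LONG_RUN_GAIN * learning

for year in range(0, 21, 2):
    print(f"year {year:2d}: {productivity(year):6.1f}")
# Output sits below the 100.0 baseline for roughly the first eight years,
# then crosses above it -- the J-curve shape.
```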


Amara's Law also suggests we will overestimate the immediate impact of artificial intelligence and underestimate the long-term impact. But, again, that suggests it will be hard to quantify AI productivity results in the near term. 


All that is going to be a problem for financial analysts and observers who demand an immediate boost in observable firm earnings or revenue, as well as for the firms deploying AI that will strive to demonstrate the benefit.


But layoffs are quite quantifiable, even if we might argue it is still too early to measure AI productivity impact.


Sunday, February 22, 2026

NBER Study Finds "No Productivity Impact" from AI So Far (And Nobody Should Be Surprised)

Maybe we should not be surprised that studies of AI productivity often show few results so far. A recent study published by the National Bureau of Economic Research, for example, found:

  • Around 70 percent of firms actively use AI.

  • More than 66 percent of top executives regularly use AI, but their average use is only 1.5 hours a week, and one quarter report no AI use.

  • Firms report little impact from AI over the last three years, with over 80 percent reporting no impact on either employment or productivity. 

source: NBER 


None of that should come as a surprise. Sure, AI adoption is widespread among survey respondents across the four countries studied (U.S., U.K., Germany, Australia):


source: NBER 


But none of those use cases can be tied to bottom-line quantitative results easily, if at all. They should be time savers, but faster text or image creation or some data manipulation, at modest usage rates and within existing business processes, is probably reasonably described as a relatively trivial contributor to measurable productivity. 
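

A back-of-envelope sketch shows why. Taking the survey's 1.5 hours a week of average use at face value, and assuming (generously, and purely hypothetically) that AI doubles the speed of those tasks within a 40-hour week, the ceiling on measurable gains is only a few percent.

```python
# Back-of-envelope ceiling on measured gains from the reported usage rate.
# The 1.5 hours/week of AI use is the survey figure; the 2x speedup and
# the 40-hour work week are assumptions for illustration.
AI_HOURS_PER_WEEK = 1.5   # average reported use among executives (survey)
WORK_WEEK_HOURS = 40.0    # assumed standard work week
SPEEDUP = 2.0             # assumed: AI halves the time those tasks take

unassisted_hours = AI_HOURS_PER_WEEK * SPEEDUP  # what the tasks took before
hours_saved = unassisted_hours - AI_HOURS_PER_WEEK
gain = hours_saved / WORK_WEEK_HOURS

print(f"hours saved per week: {hours_saved:.2f}")          # 1.50
print(f"implied productivity ceiling: {gain * 100:.2f}%")  # ~3.75%, before
# subtracting any time spent checking or correcting the output
```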


Also, there is no reason to expect the J-curve of technology adoption will fail to appear here. 



Amara's Law suggests we will overestimate the immediate impact of artificial intelligence and underestimate the long-term impact. 


Economic historians such as Erik Brynjolfsson and Paul David have documented that transformative, general-purpose technologies tend to follow the J-curve pattern. 


Initial deployment generates negative or flat productivity returns relative to investment, often for a surprisingly long time. 


David's famous 1990 paper on the "dynamo paradox" showed that electrification of US industry began in earnest in the 1880s but didn't produce measurable aggregate productivity gains until the 1920s.


The reasons are structural: firms must reorganize workflows, retrain workers, build complementary infrastructure, and abandon legacy processes before the technology's benefits materialize. 


The productivity gains, when they finally arrive, are real and large, but they accrue after enormous sunk costs and a long gestation period.


And that is going to be a problem for financial analysts and observers who demand an immediate boost in observable firm earnings or revenue, as well as for the firms deploying AI that will strive to demonstrate the benefit.


Friday, February 20, 2026

Measurable AI Returns; Technology J-Curve: Big Disconnect

Amara's Law suggests we will overestimate the immediate impact of artificial intelligence and underestimate the long-term impact. 


And that is going to be a problem for financial analysts and observers who demand an immediate boost in observable firm earnings or revenue, as well as for the firms deploying AI that will strive to demonstrate the benefit. 


“Most people overestimate what they can achieve in a year and underestimate what they can achieve in ten years” is a quote whose provenance is unknown, though some attribute it to Stanford computer scientist Roy Amara, and some people call it “Gates's Law.”


In fact, decades might pass before the fullest impact is measurable, even if some tangible results are already seen. 


Error rates in labeling the content of photos on ImageNet, a collection of more than 10 million images, have fallen from over 30 percent in 2010 to less than five percent in 2016, and most recently to as low as 2.2 percent, according to Erik Brynjolfsson, MIT Sloan School of Management professor.


Likewise, error rates in voice recognition on the Switchboard speech recording corpus, often used to measure progress in speech recognition, have improved from 8.5 percent to 5.5 percent over the past year. The five-percent threshold is important because that is roughly the performance of humans at each of these tasks, Brynjolfsson says. 


A system using deep neural networks was tested against 21 board-certified dermatologists and matched their performance in diagnosing skin cancer, a development with direct implications for medical diagnosis using AI systems.


Codified or understood as Amara's Law, the principle is that it generally takes entities some time to reorganize business processes in ways that let them wring productive results from important new technologies. 




It also can take decades before a successful innovation actually reaches commercialization. The next big thing will have first been talked about roughly 30 years ago, says technologist Greg Satell. Arthur Samuel of IBM coined the term machine learning in 1959, for example, and machine learning is only now coming into widespread use. 


Many times, reaping the full benefits of a major new technology can take 20 to 30 years. Alexander Fleming discovered penicillin in 1928, but it didn’t arrive on the market until 1945, nearly 20 years later.


Electricity did not have a measurable impact on the economy until the early 1920s, it can be argued, some 40 years after Edison’s first power plant.


It wasn’t until the late 1990s, about 30 years after 1968, that computers had a measurable effect on the US economy, many would note.


Likewise, economic historians such as Erik Brynjolfsson and Paul David have documented that transformative, general-purpose technologies tend to follow the J-curve pattern. 


Initial deployment generates negative or flat productivity returns relative to investment, often for a surprisingly long time. 


David's famous 1990 paper on the "dynamo paradox" showed that electrification of US industry began in earnest in the 1880s but didn't produce measurable aggregate productivity gains until the 1920s.


The reasons are structural: firms must reorganize workflows, retrain workers, build complementary infrastructure, and abandon legacy processes before the technology's benefits materialize. 


The productivity gains, when they finally arrive, are real and large, but they accrue after enormous sunk costs and a long gestation period.




Maybe AI really will prove different. But there is ample evidence that quantifying impact could be difficult in the near term. Buckle up. 


Wednesday, February 4, 2026

AI is the Solow Paradox at Work

An analysis of 4,500 work-related artificial intelligence use cases suggests we are only in the very early stages of applying AI at work, and that most use cases have not yet reached a stage where we can measure return on investment or productivity impact.


That is worth keeping in mind. 


Most use cases so far only affect speed or time savings. Few are directly integrated into customer-facing, revenue-generating activities. 


The vast majority of use cases are very basic, says a Section AI report. Some 14 percent of workers say their most valuable AI use case is Google search replacement. As helpful as that might be, it is hard to measure productivity gains at this point. 


source: Section AI


About 17 percent of workers use AI for drafting, editing, and summarizing documents. Again, productivity improvements are difficult to measure in those cases, though perhaps more tractable in terms of time savings. 


So far, Section AI researchers found only two percent of users have built automations for copy generation, which would save more time, for example. 


About three percent say their most valuable use case is data analysis or code generation, and there the ROI seems easiest to document in terms of time saved or effort avoided, rather than other revenue-generating metrics. 
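

Putting those shares together in one rough calculation suggests why the aggregate is so hard to see. The worker shares below are from the Section AI figures above; the hours saved per week for each use case are hypothetical and deliberately generous guesses.

```python
# Rough aggregate of the use-case shares reported above. The worker shares
# are from the Section AI figures; the assumed hours saved per week for
# each use case are hypothetical and deliberately generous.
use_cases = {
    # use case: (share of workers, assumed hours saved per week)
    "search replacement": (0.14, 1.0),
    "drafting, editing, summarizing": (0.17, 2.0),
    "copy-generation automations": (0.02, 4.0),
    "data analysis / code generation": (0.03, 3.0),
}

WORK_WEEK_HOURS = 40.0  # assumed standard work week

expected = sum(share * hours for share, hours in use_cases.values())
print(f"expected hours saved per worker per week: {expected:.2f}")  # 0.65
print(f"share of the work week: {expected / WORK_WEEK_HOURS:.1%}")  # ~1.6%
```

Even with generous assumptions, the expected saving is well under an hour per worker per week, which is easily lost in the noise of ordinary productivity measurement.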


source: Section AI


In fact, nearly a quarter of respondents say AI does not save them any time at all, which might seem odd unless those users are having to spend time learning how to use AI, which would, in fact, take more time. 


In other cases, they might find they are having to spend time checking the answers and output, which again might take additional time. 


The point is that we are in early stages of deployment, where it remains difficult to assess productivity gains. 


source: Section AI


As unhelpful as it might be, transformative technologies often fail to show up in productivity statistics for years, or even decades, after their introduction, as the Solow Paradox describes. 


Measuring language model impact by "minutes saved per task" captures only the shallowest layer of value, many would argue. The reason is that what we can measure sometimes is not all that important. 


Productivity metrics are generally designed to measure output per hour (quantity). They are notoriously bad at measuring quality. 


If a model helps a software engineer write safer, more robust code, or helps a marketer generate a campaign that resonates better with customers, standard productivity metrics might show zero gain (or even a loss).
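

A minimal sketch of the measurement problem: with invented task counts and defect rates, the standard quantity-per-hour metric registers no change even when quality improves sharply.

```python
# Why output-per-hour misses quality: two quarters with identical task
# counts and hours, but very different defect rates. All numbers invented.
from dataclasses import dataclass

@dataclass
class Quarter:
    tasks_completed: int   # e.g., features shipped or campaigns launched
    hours_worked: float
    defect_rate: float     # share of output later reworked (unmeasured)

before_ai = Quarter(tasks_completed=120, hours_worked=480, defect_rate=0.15)
with_ai = Quarter(tasks_completed=120, hours_worked=480, defect_rate=0.05)

def output_per_hour(q: Quarter) -> float:
    """The standard productivity metric: quantity per hour, quality-blind."""
    return q.tasks_completed / q.hours_worked

print(f"before AI: {output_per_hour(before_ai):.3f} tasks/hour")
print(f"with AI:   {output_per_hour(with_ai):.3f} tasks/hour")
# Both lines print 0.250: the metric registers zero gain even though the
# defect rate fell by two thirds.
```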


Also, in the early stages of adoption, productivity often dips, since firms and workers must invest time and capital in training, restructuring workflows, and figuring out how to use the new tools. 


This "intangible capital" investment does not produce immediate revenue.


Also, as always, adopters are using language models to do existing tasks faster (writing emails, for example). True productivity explosions occur only when businesses re-architect their entire workflows to do things that were previously impossible, rather than just speeding up legacy processes.


| Innovation | Initial Era | The "Lag Phase" | Primary Reason for Lag | When Productivity Finally Spiked |
| --- | --- | --- | --- | --- |
| Electric Power | Late 1880s (electric motor introduced) | ~30–40 years | Factory owners swapped steam engines for electric motors without changing factory layouts. | 1920s, when "unit drive" systems allowed for the assembly line and decentralized manufacturing. |
| Computers (IT) | 1970s–80s (mainframes and PCs) | ~15–20 years | The "Solow Paradox": computers were used for isolated tasks (word processing) rather than networked data flow. | Mid-1990s, when the internet and enterprise software (ERP) enabled supply chain integration and instant communication. |
| The Internet | Early 1990s (World Wide Web) | ~10–15 years | The "dot-com bubble": investment rushed in, but business models (e-commerce, cloud) were immature. | Late 2000s/2010s, when mobile internet, cloud computing, and smartphone adoption created the app economy. |
| Generative AI (language models) | 2022–present (the ChatGPT moment) | Ongoing | Current focus is on "task replacement" (writing, coding) rather than "workflow redesign" (autonomous agents, new R&D methods). | Prediction (2027–2030+): likely when AI moves from a "copilot" (assistant) to an "agent" that can autonomously execute complex, multi-step workflows. |


That sort of measurable productivity gain cannot be demonstrated so soon. 


Thursday, September 11, 2025

70% of IT and AI Projects Fail for Simple Reasons

Information technology investments are often treated as purely technical endeavors rather than organizational transformations that require changes in processes, culture, and human behavior, which possibly explains the gap between IT investments and observed results. 


source: McKinsey 


Some of the exceptions might be retail, communications, and media, where productivity seems higher. For communications and media, technology often creates the platform for services, delivering a higher degree of observable value. 


source: McKinsey 


Still, many studies suggest that IT projects have a high failure rate overall. 


| Study/Source | Year | Key Finding | Sample Size/Scope | Failure Rate/Metric |
| --- | --- | --- | --- | --- |
| Standish Group CHAOS Report | 2020 | Only 31% of IT projects are successful (on time, on budget, with required features) | 50,000+ projects across multiple industries | 69% challenged or failed |
| McKinsey Global Institute | 2012 | Large IT projects run 45% over budget and 7% over time, while delivering 56% less value than predicted | Analysis of 5,400 IT projects | 17% of projects are "black swans" with cost overruns >200% |
| Harvard Business Review (Flyvbjerg & Budzier) | 2011 | Average cost overrun for large IT projects is 27%, with one in six projects having cost overruns of 200% | Study of IT project performance patterns | 16.7% massive overruns |
| PwC Global CEO Survey | 2019 | 73% of CEOs believe their digital investments are not delivering expected returns | 1,378 CEOs globally | 73% not meeting ROI expectations |
| Deloitte Tech Trends | 2021 | 70% of digital transformation initiatives fail to meet their goals | Survey of 1,000+ executives | 70% failure to meet objectives |
| MIT Sloan (Brynjolfsson & Hitt) | 2003 | IT productivity paradox: firms with higher IT spending don't always show proportional productivity gains | Longitudinal study of 527 large firms | Mixed correlation between IT spending and productivity |
| Gartner IT Spending Analysis | 2019 | 85% of big data projects fail to deliver business value | Analysis of enterprise big data initiatives | 85% failure rate |
| Accenture Technology Vision | 2020 | Only 37% of organizations successfully scale their digital pilots to enterprise-wide implementations | Survey of 4,000+ business and IT executives | 63% fail to scale successfully |
| Boston Consulting Group | 2018 | 70% of digital transformation efforts fall short of their goals | Analysis of transformation initiatives across industries | 70% shortfall rate |
| KPMG Global CEO Outlook | 2018 | 65% of CEOs question whether their technology investments create competitive advantage | Survey of 1,300 CEOs | 65% uncertain about competitive value |
| IBM Institute for Business Value | 2019 | Organizations realize only 20% of anticipated benefits from AI investments | Study of AI implementation across enterprises | 80% benefit shortfall |
| Forrester Research | 2020 | 60% of customer experience technology investments fail to improve customer satisfaction scores | Analysis of CX technology implementations | 60% fail to improve target metrics |
| EY Digital Transformation Study | 2018 | 55% of digital transformation programs are abandoned before completion | Survey of 500+ executives across industries | 55% abandonment rate |
| Capgemini Digital Transformation Institute | 2017 | Only 36% of organizations are digital transformation leaders achieving significant benefits | Study of 1,000+ organizations globally | 64% are laggards or followers |
| McKinsey Technology Trends | 2021 | Cloud migration projects deliver only 65% of expected cost savings on average | Analysis of cloud transformation initiatives | 35% savings shortfall |
