Amara's Law suggests we will overestimate the immediate impact of artificial intelligence but underestimate its long-term impact.
That is going to be a problem for financial analysts and observers who demand an immediate boost in observable firm earnings or revenue, and for the firms deploying AI that will strive to demonstrate its benefits.
“Most people overestimate what they can achieve in a year and underestimate what they can achieve in ten years” is a quote of unknown provenance; some attribute it to Stanford computer scientist Roy Amara, and some call it “Gates's Law.”
In fact, decades might pass before the full impact is measurable, even if some tangible results are already visible.
Error rates in labeling the content of photos on ImageNet, a collection of more than 10 million images, have fallen from over 30 percent in 2010 to less than five percent in 2016 and, most recently, as low as 2.2 percent, according to Erik Brynjolfsson, a professor at the MIT Sloan School of Management.
Likewise, error rates in voice recognition on the Switchboard speech recording corpus, often used to measure progress in speech recognition, have improved from 8.5 percent to 5.5 percent over the past year. The five-percent threshold is important because that is roughly the performance of humans at each of these tasks, Brynjolfsson says.
A system using deep neural networks was tested against 21 board-certified dermatologists and matched their performance in diagnosing skin cancer, a development with direct implications for AI-assisted medical diagnosis.
Codified as Amara's Law, the principle is that it generally takes organizations time to reorganize business processes in ways that wring productive results from important new technologies.
It can also take decades before a successful innovation reaches commercialization. The next big thing will have first been talked about roughly 30 years earlier, says technologist Greg Satell. Arthur Samuel coined the term machine learning at IBM in 1959, for example, and machine learning is only now coming into widespread use.
Many times, reaping the full benefits of a major new technology can take 20 to 30 years. Alexander Fleming discovered penicillin in 1928, but it didn’t reach the market until 1945, nearly 20 years later.
It can be argued that electricity did not have a measurable impact on the economy until the early 1920s, roughly 40 years after Edison’s first power plant.
Many would note that it wasn’t until the late 1990s, about 30 years after 1968, that computers had a measurable effect on the US economy.
Likewise, economists such as Erik Brynjolfsson and economic historians such as Paul David have documented that transformative, general-purpose technologies tend to follow a J-curve pattern.
Initial deployment generates negative or flat productivity returns relative to investment, often for a surprisingly long time.
David's famous 1990 paper, "The Dynamo and the Computer," showed that electrification of US industry began in earnest in the 1880s but didn't produce measurable aggregate productivity gains until the 1920s.
The reasons are structural: firms must reorganize workflows, retrain workers, build complementary infrastructure, and abandon legacy processes before the technology's benefits materialize.
The productivity gains, when they finally arrive, are real and large, but they accrue after enormous sunk costs and a long gestation period.
Maybe AI really will prove different. But there is ample evidence that quantifying its impact will be difficult in the near term. Buckle up.