Quantifying the earnings impact of artificial intelligence is likely to be as difficult as other relatively indirect measurements of information technology impact. Survey respondents almost always report that applied AI boosted revenue or sales while reducing costs.
Eventually, when there is enough deployment to study, we might find that, at least in some cases, AI has not measurably affected earnings, revenue or profits. In some instances, those metrics might even have gotten worse.
The reason is that the actual business impact of new information technology is often hard to assess, even when the people using it believe it is helping.
Of course, those opinions often cannot be precisely verified. Even when cost decreases or revenue increases occur, other variables are always at work. For that reason, correlation is not necessarily causation.
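The confounding problem can be shown with a minimal simulation. All numbers here are made up for illustration: a single hidden factor (say, firm size) drives both AI spending and revenue, so the two correlate strongly even though AI has zero true effect in this toy model.

```python
import random

random.seed(0)

# Hypothetical setup: "size" drives both AI spending and revenue.
# AI itself has zero true effect here, yet the two series correlate.
n = 2000
size = [random.gauss(0, 1) for _ in range(n)]
ai_spend = [s + random.gauss(0, 0.5) for s in size]      # bigger firms buy more AI
revenue = [2 * s + random.gauss(0, 0.5) for s in size]   # bigger firms earn more

def corr(x, y):
    """Pearson correlation coefficient of two equal-length lists."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Strong positive correlation, despite no causal link from AI to revenue.
print(round(corr(ai_spend, revenue), 2))
```

A survey or a naive before-and-after comparison would "see" AI paying off in data like this, which is exactly why self-reported impact needs careful controls.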
In fact, over the last 50 years the impact of new information technology has always been difficult to measure--and sometimes even to detect. This productivity paradox has been observed in IT since the 1970s: global productivity growth has slowed even as technology has been applied ever more widely across the economy, especially since the 1980s.
Basically, the paradox is that the official statistics have not borne out the productivity improvements expected from new technology.
Before investment in IT became widespread, the expected return on investment in terms of productivity was three percent to four percent, in line with what was seen in mechanization and automation of the farm and factory sectors.
When IT was applied over the two decades from 1970 to 1990, the typical return on investment was only about one percent. The Solow productivity paradox also suggests that applied technology can boost--or lower--productivity. Though perhaps shocking, the productivity impact of technology adoption can apparently be negative.
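The gap between those return figures compounds dramatically over time. A rough arithmetic sketch, using 3.5 percent as an illustrative midpoint of the three-to-four-percent expectation against the one-percent IT-era result:

```python
# Rough compounding illustration of the return gap described above.
# Rates are illustrative midpoints, not measured values.
def compound(rate, years):
    """Cumulative growth factor after `years` of annual growth at `rate`."""
    return (1 + rate) ** years

expected = compound(0.035, 20)  # ~1.99: output roughly doubles over 20 years
actual = compound(0.01, 20)     # ~1.22: only about a 22 percent cumulative gain

print(round(expected, 2), round(actual, 2))
```

At the expected rate, two decades of investment would have roughly doubled output; at the observed rate, the cumulative gain was closer to a fifth.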
This productivity paradox is not new. Information technology investments did not measurably improve white-collar productivity for decades. In fact, it can be argued that researchers have failed to measure any improvement in productivity at all. Some might therefore argue that nearly all the investment has been wasted.
Some now argue there is a lag between the massive introduction of new information technology and measurable productivity results, and that the lag might take a decade or two to appear.
We might expect similar ambiguity as artificial intelligence is applied in heavy doses.
Output and value added are the traditional concerns, but it is hard to estimate the actual incremental impact of new information technology.
It is even harder in any industry where most of the output is “a service” that resists measurement in the traditional output-per-unit-of-input way. Some say “value” and “impact” also matter, but those are similarly squishy outcomes, just as hard to quantify.
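The measurement problem can be made concrete. In a hypothetical sketch (all figures invented), the traditional ratio works cleanly for a factory, where output is countable, but breaks down for a service firm, where the choice of numerator changes the answer:

```python
# Minimal sketch of the traditional productivity measure: output per unit of input.
# All figures are hypothetical; the point is that the numerator is well defined
# for goods but ambiguous for services.
def productivity(output_units, input_hours):
    return output_units / input_hours

# Factory: output is countable, so the ratio is meaningful.
widgets_per_hour = productivity(1200, 160)       # 7.5 widgets per labor hour

# Law firm: is "output" cases closed, hours billed, or client outcomes?
# Each choice yields a different "productivity" for the same month of work.
cases_per_hour = productivity(12, 160)           # 0.075 cases per hour
billables_per_hour = productivity(150, 160)      # 0.9375 billed hours per hour worked

print(widgets_per_hour, cases_per_hour, billables_per_hour)
```

None of the service ratios is obviously the right one, which is the heart of the quantification problem the next paragraphs describe.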
Services are, almost by definition, intangible. It often is nearly impossible to measure “quality” in relation to “price” in advance of purchase. Think about hiring any realtor, lawyer or consultant: “quality” cannot be measured until the actual service is consumed.
And even then, especially for any infrequently-used service, there is no easy way to directly compare performance or value against the alternatives.
“Productivity is lower in services because they tend to be less standardized than goods and some of them have to be delivered in person,” researchers at the Organisation for Economic Co-operation and Development have said.
The observation that services often are heterogeneous and ambiguous, and require interaction between people, is a good characterization of the measurement problem.
The ability to standardize is often a precondition for applying IT to business processes. And some services must be delivered--or typically are delivered--“in person.” That makes scale efficiencies challenging.
Services often are not fungible in the same way that physical objects are.
To complicate matters, many services used today are supplied at no direct cost to the end user. While we might try to quantify productivity at the supplier level, there is not a direct financial measure related to end user consumption, as that is “free.”
For public organizations, the challenges are equally great. No single agency can claim credit for producing health, education, national defense, justice or environmental protection outcomes, for example. Those outcomes depend on many things outside the control of any single agency, or group of agencies.
So we often resort to counting activities, occurrences or events, as the ultimate outcomes cannot be quantified. The issue, of course, is that knowing “how many” is not the same as knowing “how good” or “how valuable.”
Knowledge work poses additional issues: it has even less routine content, higher capital intensity and higher “research and development” intensity.