
Sunday, November 13, 2022

Expect 70% Failure Rates for Metaverse, Web3, AI, VR Efforts in Early Days

It long has been conventional wisdom that up to 70 percent of innovation efforts and major information technology projects fail in significant ways, either failing to produce the predicted gains or producing only very small gains. If we assume applied artificial intelligence, virtual reality, metaverse, web3 or internet of things are “major IT projects,” we likewise should assume initial failure rates as high as 70 percent.


That does not mean ultimate success will not happen, only that failure rates, early on, will be quite high. As a corollary, we should continue to expect high rates of failure for early-stage companies and projects. Venture capitalists will not be surprised, as they expect such high rates of failure when investing in startups. 


But all of us need to remember that innovation generally, and major IT efforts specifically, will have failure rates of up to 70 percent. So steel yourself for bad news as major innovations are attempted in areas ranging from metaverse and web3 to cryptocurrency to AR and VR, or even less “risky” efforts such as internet of things, network slicing, private networks or edge computing. 
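
A bit of portfolio arithmetic helps explain why investors can live with those odds. The sketch below is purely illustrative, treating the 70 percent figure as an assumed, independent per-project failure probability:

```python
# Illustrative only: treat the conventional-wisdom 70 percent figure as an
# assumed, independent failure probability for each project.
p_fail = 0.7
for n in [1, 5, 10, 20]:
    p_at_least_one_success = 1 - p_fail ** n   # 1 minus "all n projects fail"
    print(f"{n:>2} projects: P(at least one success) = {p_at_least_one_success:.1%}")

#  1 projects: P(at least one success) = 30.0%
#  5 projects: P(at least one success) = 83.2%
# 10 projects: P(at least one success) = 97.2%
# 20 projects: P(at least one success) = 99.9%
```

That is the logic venture investors apply: any single bet probably fails, but a portfolio of bets probably does not.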


Gartner estimated in 2018 that through 2022, 85 percent of AI projects would deliver erroneous outcomes due to bias in data, algorithms or the teams responsible for managing them.


That is analogous to arguing that most AI projects will fail at least in part. Seven out of 10 companies surveyed in one study report minimal or no impact from AI so far. The caveat is that many such big IT projects can take as much as a decade to produce quantifiable results. 


Investing in more information technology has often and consistently failed to boost productivity, or has appeared to do so only after about a decade of tracking. Some would argue the gains are there, just hard to measure. But the point is that progress often is hard to discern. 


Still, the productivity paradox seems to exist. Before investment in IT became widespread, the expected return on investment in terms of productivity was three percent to four percent, in line with what was seen in mechanization and automation of the farm and factory sectors.


When IT was applied over two decades from 1970 to 1990, the normal return on investment was only one percent.
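
That gap matters more than it might look, because productivity gains compound. A rough, purely illustrative calculation, treating those figures as annual growth rates sustained over the two decades in question, shows the difference:

```python
# Illustrative compounding of the figures cited above, treated here as
# annual productivity growth rates sustained over 20 years (1970-1990).
years = 20
for label, rate in [("pre-IT norm, 3.5%", 0.035), ("IT era, 1%", 0.01)]:
    print(f"{label}: {(1 + rate) ** years:.2f}x cumulative gain over {years} years")

# pre-IT norm, 3.5%: 1.99x cumulative gain over 20 years
# IT era, 1%: 1.22x cumulative gain over 20 years
```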


This productivity paradox is not new. Even when investment does eventually seem to produce improvements, it often takes a while to produce those results. So perhaps even an AI project counted a near-term failure might be seen as a success a decade or more later. 


Sometimes measurable change takes longer. Information technology investments did not measurably help improve white collar job productivity for decades, for example. In fact, it can be argued that researchers have failed to measure any improvement in productivity. So some might argue nearly all the investment has been wasted.


Most might simply agree that there is a lag between the massive introduction of new information technology and measurable productivity results.


Most of us likely assume quality broadband “must” boost productivity. Except when it does not. The consensus view on broadband access for business is that it leads to higher productivity. 


But a study by Ireland’s Economic and Social Research Institute finds that while there are “small positive associations between broadband and firms’ productivity levels, none of these effects are statistically significant.”


Among the 90 percent of companies that have made some investment in AI, fewer than 40 percent report business gains from AI in the past three years, for example.


Saturday, June 4, 2022

Innovation Takes Time, Be Patient

Anybody who expected early 5G to yield massive upside in the form of innovative use cases and value has not been paying attention to history. Since 3G, promised futuristic applications and use cases have inevitably disappointed, in the short term. 


In part, that is because some observers mistakenly believe complicated new ecosystems can be developed rapidly to match the features enabled by the new next-generation mobile platform. That is never the case. 


Consider the analogy of information technology advances and the harnessing of such innovations by enterprises. There always has been a lag between technology availability and the retooling of business processes to take advantage of those advances. 


Many innovations expected during the 3G era did not happen until 4G. Some 4G innovations might not appear until 5G is near the end of its adoption cycle. The point is that it takes time to create the ubiquitous networks that allow application developers to incorporate the new capabilities into their products and for users to figure out how to take advantage of the changes. 


Non-manufacturing productivity, in particular, is hard to measure, and has shown relative insensitivity to IT adoption.






Construction of the new networks also takes time, especially in continent-sized countries. It easily can take three years to cover sufficient potential users so that app developers have a critical mass of users and customers. 


And that is just the start. Once a baseline of performance is created, the task of creating new use cases and revenue models can begin. Phone-based ride hailing did develop during the 4G era. 


But that was built on ubiquity of mapping and turn-by-turn directions, payment methods and other innovations such as social media and messaging.


Support for mobile entertainment video also flourished in 4G, built on the advent of ubiquitous streaming platforms. But that required new services to be built, content to be assembled and revenue models to be created. 


The lag between technology introduction and new use cases is likely just as clear for business use cases. 


The productivity paradox remains the clearest example of the lag time. Most of us assume that higher investment and use of technology improves productivity. That might not be true, or true only under some circumstances. 


Investing in more information technology has often and consistently failed to boost productivity. Others would argue the gains are there, just hard to measure. There is evidence to support either conclusion.


Most of us likely assume quality broadband “must” boost productivity. Except when it does not. The consensus view on broadband access for business is that it leads to higher productivity. 


But a study by Ireland’s Economic and Social Research Institute finds that while there are “small positive associations between broadband and firms’ productivity levels, none of these effects are statistically significant.”


“We also find no significant effect looking across all service sector firms taken together,” ESRI notes. “These results are consistent with those of other recent research that suggests the benefits of broadband for productivity depend heavily upon sectoral and firm characteristics rather than representing a generalised effect.”


“Overall, it seems that the benefits of broadband to particular local areas may vary substantially depending upon the sectoral mix of local firms and the availability of related inputs such as highly educated labour and appropriate management,” says ESRI.


Before investment in IT became widespread, the expected return on investment in terms of productivity was three percent to four percent, in line with what was seen in mechanization and automation of the farm and factory sectors.


When IT was applied over two decades from 1970 to 1990, the normal return on investment was only one percent.


This productivity paradox is not new. Information technology investments did not measurably help improve white collar job productivity for decades. In fact, it can be argued that researchers have failed to measure any improvement in productivity. So some might argue nearly all the investment has been wasted.


Some now argue there is a lag between the massive introduction of new information technology and measurable productivity results, and that this lag might conceivably take a decade or two decades to emerge.
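
A toy adoption model suggests why a lag of a decade or more is plausible. The sketch below is purely illustrative, with invented parameters: if adoption follows an S-curve and measurable gains track the share of firms that have both adopted the technology and retooled their processes, most of the visible payoff arrives years after spending begins.

```python
import math

# Toy model with invented parameters: logistic adoption curve, and
# measurable gains assumed to trail adoption by a fixed retooling delay.
midpoint_year = 8     # assumed year at which adoption reaches 50 percent
steepness = 0.6       # assumed slope of the adoption curve
retooling_lag = 3     # assumed years to re-engineer business processes

def adoption(year):
    return 1 / (1 + math.exp(-steepness * (year - midpoint_year)))

for year in range(0, 21, 4):
    gains = adoption(year - retooling_lag)   # gains trail adoption
    print(f"year {year:>2}: adoption {adoption(year):5.1%}, measurable gains {gains:5.1%}")
```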


Work from home trends were catalyzed by the pandemic, to be sure. Many underlying rates of change were accelerated. But the underlying remote work trends were there for decades, and always have been expected to grow sharply. 


Whether that is good, bad or indifferent for productivity remains to be seen. The Solow productivity paradox suggests that applied technology can boost--or lower--productivity. Though perhaps shocking, it appears that the productivity impact of technology adoption can be negative.


All of that should always temper our expectations. 5G is nowhere near delivering change. It takes time.


Tuesday, October 19, 2021

AI Impact on Outcomes Might be Hard to Measure

Quantifying the earnings impact of artificial intelligence is going to be as difficult as other relatively indirect measurements of information technology impact. Survey respondents almost always report that applied AI boosted revenue or sales, while reducing costs. 


Eventually, when there is enough deployment to study, we might find that, in some cases, AI has not measurably affected earnings, revenue or profits. We might even find that those metrics have gotten worse.


The reason is that the actual business impact of new information technology often is hard to assess, even if people think it is helping. When asked, managers almost always say they think AI has helped reduce costs and boost outcomes.

source: McKinsey 


Of course, those opinions often cannot be precisely verified. Even when cost decreases or revenue increases occur, there always are other independent variables in operation. For that reason, correlation is not necessarily causation. 


In fact, the impact of new information technology always has been difficult to measure--and sometimes even detect--over the last 50 years. This productivity paradox has been seen in IT since the 1970s, as global productivity growth has slowed, despite an increasing application of technology in the economy overall, starting especially in the 1980s. 

 

Basically, the paradox is that the official statistics have not borne out the productivity improvements expected from new technology.

 

Before investment in IT became widespread, the expected return on investment in terms of productivity was three percent to four percent, in line with what was seen in mechanization and automation of the farm and factory sectors.


When IT was applied over two decades from 1970 to 1990, the normal return on investment was only one percent. Also, the Solow productivity paradox suggests that applied technology can boost--or lower--productivity. Though perhaps shocking, it appears that the productivity impact of technology adoption can be negative.  


This productivity paradox is not new. Information technology investments did not measurably help improve white collar job productivity for decades. In fact, it can be argued that researchers have failed to measure any improvement in productivity. So some might argue nearly all the investment has been wasted.


Some now argue there is a lag between the massive introduction of new information technology and measurable productivity results, and that this lag might conceivably take a decade or two decades to emerge.


We might expect similar ambiguity as artificial intelligence is applied in heavy doses. 


source: McKinsey 


Output and value added are the traditional concerns, but it is hard to estimate the actual incremental impact of new information technology. 


It is even harder in any industry where most of the output is “a service” that is hard to measure in a traditional output per unit of input way. Some say “value” and “impact” also matter, but those are squishy outcomes similarly hard to quantify. 


Services are, almost by definition, intangible. It often is nearly impossible to measure “quality” in relation to “price” in advance of purchase. Think about hiring any realtor, lawyer or consultant: “quality” cannot be measured until the actual service is consumed. 


And even then, especially for any infrequently-used service, there is no way to directly compare performance or value against other alternatives. 


“Productivity is lower in services because they tend to be less standardized than goods and some of them have to be delivered in person,” researchers at the Organization for Economic Cooperation and Development have said. 


That services often are heterogeneous and ambiguous, requiring interaction between people, is a good way of characterizing the problem of measurement. 


The ability to standardize is often a precondition for applying IT to business processes. And some services must be delivered--or typically are delivered--“in person.” That makes scale efficiencies challenging. 


Services often are not fungible in the same way that physical objects are. 


To complicate matters, many services used today are supplied at no direct cost to the end user. While we might try to quantify productivity at the supplier level, there is not a direct financial measure related to end user consumption, as that is “free.”


For public organizations, the challenges are equally great. No single agency can claim credit for producing health, education, national defense, justice or environmental protection outcomes, for example. Those outcomes depend on many things outside the control of any single agency, or group of agencies. 


So we often resort to counting activities, occurrences or events, as the ultimate outcomes cannot be quantified. The issue, of course, is that knowing “how many” is not the same thing as “how good” or “how valuable?”


Knowledge work poses additional issues. Desired outcomes have even less routine content, higher capital intensity and higher “research and development” intensity.


Friday, October 1, 2021

Is "Digital Transformation" Simply the Latest Buzzword for Decades Worth of Applied Digital Technology?

Skeptics or cynics might argue that much of what passes for today’s digital transformation is simply the latest buzzword for applying technology to business processes. The buzzword desired outcomes might include agility, personalization, automation, data-driven decision making, improved customer experience or any other set of information technology outcomes. 


source: Fujitsu 


Of course, businesses, organizations and consumers have been adding digital products to their everyday lives for decades. And it would be hard to clearly differentiate any of the desired outcomes of technology deployment since the mid-1980s from the outcomes of digital transformation.


source: Fujitsu 


Of course, the problem is that all information technology becomes commoditized. Any single firm would gain sustainable advantage if it were the only firm in its industry to adopt a particular technology.


The problem is that this never is possible. Eventually, all competitors in any industry have access to, and use, all the relevant technologies. Though efficiency or effectiveness might arguably be improved, the improvement accrues to all competitors in the market, eventually negating any first-mover advantage a single firm might have tried to gain. 


The other problem is that applying technology does not often seem to yield tangible advantages in terms of productivity. This productivity paradox has been noted since the 1980s, when enterprises began to apply lots of digital technology to their businesses. 


Many technologists noted the lag of productivity growth in the 1970s and 1980s as computer technology was introduced. In the 1970s and 1980s, business investment in computer technology was increasing by more than 20 percent per year. But productivity growth fell, instead of increasing. 


So the productivity paradox is not new. Massive investments in technology do not always result in measurable gains. In fact, the result sometimes is negative productivity. 


Information technology investments did not measurably improve white collar productivity through the 1980s and in earlier decades. In fact, it can be argued that researchers have failed to measure any improvement in productivity. So some might argue nearly all the investment has been wasted.


Some now argue there is a similar lag between the massive introduction of new information technology and measurable productivity results, and that this lag might conceivably take a decade or two decades to emerge. 


The Solow productivity paradox suggests that applied technology can boost--or lower--productivity. Though perhaps shocking, it appears that the productivity impact of technology adoption can be negative.


“The productivity paradox” is what we began to call it. In fact, investing in more information technology has often and consistently failed to boost productivity. Others would argue the gains are there, just hard to measure. Still, it is hard to claim improvement when we cannot measure it. 


Saturday, September 18, 2021

There are No KPIs for Knowledge Workers

Key performance indicators often are a recommended practice for improving organizational output. But knowledge work and office work in general do not allow us to create meaningful KPIs related to productivity.


The problem with all studies of office worker or knowledge worker productivity is measurement. What can be counted, so we know whether inputs have changed? And how do we measure the output of knowledge work? 


Presumably a call center operation has quantifiable metrics, but most office or knowledge work does not have any obvious and convenient measurement criteria. We commonly measure “time working” with the assumption that additional time worked is better. Maybe. But hours worked is an input metric, not an output metric. It is the denominator, not the numerator. 


Logically, increasing the input (the denominator) reduces measured productivity unless output (the numerator) increases faster than the inputs do. 
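
A trivial worked example of that ratio, with invented numbers, makes the point:

```python
# Productivity is output divided by input. All numbers here are invented,
# purely to illustrate the denominator problem described above.
output_before, hours_before = 100, 40    # baseline period
output_after, hours_after = 110, 50      # output up 10%, hours up 25%

productivity_before = output_before / hours_before   # 2.5 units per hour
productivity_after = output_after / hours_after      # 2.2 units per hour

change = productivity_after / productivity_before - 1
print(f"Measured productivity change: {change:.1%}")  # -12.0%
```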


The other common issue is that we equate worker attitudes with outcomes. Happier workers might, or might not, be more productive. All we can measure is a subjective attitude. More happy or less happy does not necessarily correlate with outcomes. 


In principle, one could have happier but less productive workers, or less happy but more productive workers. One would need a way to correlate output and outcomes with feelings in ways that outlive simple Hawthorne effects (people work better when they know they are part of an experiment). 


Work team collaboration might have fared better under full remote work conditions, but there is some evidence that firm-wide collaboration has decreased, though the amount of time spent collaborating (meetings, emails, messaging) has grown.  


Actual output is different from input or collaboration time and effort. It might be difficult to measure “creativity,” but there is some belief that it has not fared better under conditions of remote work.  


Meetings are inputs, not outputs. Having more meetings, or spending more time in meetings, does not make firms or organizations more productive. A Microsoft survey of 182 senior managers in a range of industries found support for that thesis. 


“65 percent said meetings keep them from completing their own work,” according to Microsoft. “71 percent said meetings are unproductive and inefficient.”


Fully 64 percent said meetings come at the expense of deep thinking, while 62 percent said meetings miss opportunities to bring the team closer together (which is something of a paradox). 


Unless directly correlated with output, meetings actually can reduce productivity. Some seem to believe emails are outcomes, when in fact they take away time that might otherwise have been spent actually producing an output. 


source: Lucidspark 


The point is that we actually can say very little about whether productivity has grown, stayed the same or decreased because of enforced remote work. We could not measure productivity before, so we have no baseline against which to compare, even if we thought we could measure it.


Sunday, August 1, 2021

Prepare for Digital Transformation Disappointment

Prepare for digital transformation disappointment. The investments firms and organizations are rushing to make to “digitally transform” will largely fail, history suggests. For starters, the theory is that whole business processes can be transformed.


But those are the thorniest, toughest problems to solve in the short to medium term, as human organization and habits must change, not simply the computer tools people use. 


Secondly, DX necessarily involves big changes in how things are done, requiring significant application of computing technology. Historically, big information technology projects have failed about 70 percent of the time.


Finally, understanding how best to use a new technology approach takes some time, as suggested by prior technology paradoxes. 


Many technologists noted the lag of productivity growth in the 1970s and 1980s as computer technology was introduced. In the 1970s and 1980s, business investment in computer technology was increasing by more than 20 percent per year. But productivity growth fell, instead of increasing. 


So the productivity paradox is not new. Massive investments in technology do not always result in measurable gains. In fact, the result sometimes is negative productivity. 


Information technology investments did not measurably improve white collar productivity through the 1980s and in earlier decades. In fact, it can be argued that researchers have failed to measure any improvement in productivity. So some might argue nearly all the investment has been wasted.


Some now argue there is a similar lag between the massive introduction of new information technology and measurable productivity results, and that this lag might conceivably take a decade or two decades to emerge. 


The Solow productivity paradox suggests that applied technology can boost--or lower--productivity. Though perhaps shocking, it appears that the productivity impact of technology adoption can be negative.


“The productivity paradox” is what we began to call it. In fact, investing in more information technology has often and consistently failed to boost productivity. Others would argue the gains are there, just hard to measure. Still, it is hard to claim improvement when we cannot measure it. 


Most of us are hopeful about the value of internet of things. But productivity always is hard to measure, and is harder when many inputs change simultaneously. Consider the impact of electricity on agricultural productivity.


“While initial adoption offered direct benefits from 1915 to 1930, productivity grew at a faster rate beginning in 1935, as electricity, along with other inputs in the economy such as the personal automobile, enabled new, more efficient and effective ways of working,” the National Bureau of Economic Research says.  


There are at least two big problems with the “electricity caused productivity to rise” argument. The first is that other inputs also changed, so we cannot isolate any specific driver. Note that the automobile, generally considered a general-purpose technology as well, was introduced at about the same time.


Since 1970, global productivity growth has slowed, despite an increasing application of technology in the economy overall, starting especially in the 1980s. “From 1978 through 1982 U.S. manufacturing productivity was essentially flat,” said Wickham Skinner, writing in the Harvard Business Review. 


Skinner argues that there is a “40-40-20” rule when it comes to measurable IT investment benefits. Roughly 40 percent of any manufacturing-based competitive advantage derives from long-term changes in manufacturing structure (decisions about the number, size, location, and capacity of facilities) and basic approaches in materials and workforce management.


Another 40 percent of improvement comes from major changes in equipment and process technology.


The final 20 percent of gain is produced by conventional approaches to productivity improvement (substituting capital for labor).


Cloud computing also is viewed as something of a disappointment by C-suite executives, as important as it is.  

 

A corollary: has information technology boosted living standards? Not so much, some say.


By the late 1990s, increased computing power combined with the Internet to create a new period of productivity growth that seemed more durable. By 2004, productivity growth had slowed again to its earlier lethargic pace. 


Today, despite very real advances in processing speed, broadband penetration, artificial intelligence and other things, we seem to be in the midst of a second productivity paradox in which we see digital technology everywhere except in the economic statistics.


Despite the promise of big data, industrial enterprises are struggling to maximize its value.  A survey conducted by IDG showed that “extracting business value from that data is the biggest challenge the Industrial IoT presents.”


Why? Abundant data by itself solves nothing, says Jeremiah Stone, GM of Asset Performance Management at GE Digital.


Its unstructured nature, sheer volume, and variety exceed the capacity of humans and traditional tools to organize it efficiently and at a cost that supports return on investment requirements, he argues.


At least so far, firms "rarely" have had clear success with big data or artificial intelligence projects. "Only 15 percent of surveyed businesses report deploying big data projects to production,” says IDC analyst Merv Adrian.


So we might as well be prepared for a similar wave of disappointment over digital transformation. The payoff might be a decade or more into the future, for firms investing now.


AI Will Improve Productivity, But That is Not the Biggest Possible Change

Many would note that the internet impact on content media has been profound, boosting social and online media at the expense of linear form...