Thursday, February 20, 2025

We All Believe Computing is Productive, But Struggle to Measure It

Though virtually everybody would agree that computing technologies are useful, enabling and productivity-enhancing, we still find it difficult to precisely quantify the gains.


For starters, the U.S. Bureau of Labor Statistics, which tracks productivity, does not break out the actual “causes” of productivity change by source. It reports aggregates such as “total factor productivity,” so all we can say is that information technology likely contributes some non-zero amount to productivity change.

(chart source: Economic Strategy Group)


Also, the U.S. Bureau of Labor Statistics measures employee productivity by calculating “output per hour” of work. That creates a measurement problem, because we have to construct proxies for “output,” and most quantitative measures one might choose may or may not represent genuinely productive output.


You might measure the volume of emails generated, lines of code written or some other quantitative activity metric. But you probably are skeptical that such “inputs” really are “outputs.” And we are likely looking at correlations rather than causation in any case: higher IT investment might be correlated with higher output, but we cannot say for certain how much the IT investment “caused” or “led to” the estimated productivity gains.
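The correlation-versus-causation point can be illustrated with a toy simulation (all numbers invented for demonstration): a hidden confounder such as firm size can drive both IT spending and output, producing a strong correlation even though IT spend never enters the output equation.

```python
# Illustrative only: firm size drives both IT spend and output, so the two
# correlate strongly without any direct causal link. All numbers are made up.
import random

random.seed(0)

firm_size = [random.uniform(1, 10) for _ in range(200)]
# IT investment and output both scale with firm size, plus independent noise.
it_spend = [s * 3 + random.gauss(0, 1) for s in firm_size]
output = [s * 5 + random.gauss(0, 1) for s in firm_size]

def corr(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Correlation is very high, yet IT spend never appears in the output equation.
print(round(corr(it_spend, output), 2))
```

The point of the sketch: a naive regression of output on IT spend would look impressive here, even though, by construction, changing IT spend would change nothing.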


Study / Approach | Key Findings | Measurement Method
Advanced Workplace Associates & Center for Evidence Based Management | Identified six factors correlating with knowledge worker productivity at the team level | Analyzed academic databases for peer-reviewed research
Time on Task | Measures time spent on specific tasks (e.g., reading x-rays, answering support tickets) | Apps like RescueTime to track time in applications
Completed Intentions | Assesses number of intended tasks completed in a day | Self-reporting of task completion
APQC Research | Found the average knowledge worker spends 8.2 hours/week on information-related tasks | Survey of knowledge workers
Qualitative Metrics | Focuses on how workers feel about their work | End-of-day questionnaires with agree/disagree statements
Empowered Productivity System | Trains workers to use a workflow management system | Organizational implementation and observation
McKinsey Research | Explored productivity barriers in knowledge interactions | Daily logs of knowledge interactions from workers at multiple organizations
Situational Metrics | Develops metrics specific to the type of work (e.g., software development cycle length, bug count) | Custom metrics for each knowledge work type
Modern Intranet Analytics | Measures intranet usage trends, content performance, and employee engagement | SharePoint analytics tools



If outputs are intangible and difficult to define, so are “results” produced by teams rather than individuals. 


And since we are measuring output per hour, salaried employees pose an obvious problem: the “hours” denominator is uncertain. The problem is worse with remote and mobile work.
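A toy calculation illustrates the denominator problem: with identical output, the measured productivity of a salaried employee depends entirely on the hours figure we assume.

```python
# Toy illustration: the same weekly "output" yields very different measured
# productivity depending on how many hours a salaried employee actually worked.
def output_per_hour(output_units, hours):
    return output_units / hours

nominal = output_per_hour(100, 40)  # assume the standard 40-hour week
actual = output_per_hour(100, 50)   # the employee really worked 50 hours

print(nominal, actual)  # 2.5 2.0 -- measurement shifts with the denominator
```

Nothing about the work changed between the two lines; only the assumed denominator did, yet measured productivity differs by a fifth.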


All that will be worth keeping in mind as artificial intelligence increasingly is deployed across industries and economies. We’ll be looking to measure output changes that might be quite subtle and subjective. 


Period | Labor Productivity Growth (Annual) | Trends
1980s | 2.0% | Slowdown from previous decades [1]
1990s | 2.9% | Productivity surge, partly attributed to IT advancements [1][7]
2000-2004 | 2.9% | Continuation of 1990s productivity growth [7]
2004-2023 | 1.5% | Long-term decline in productivity growth [7]
2023 | 2.7% | Recent uptick, approaching 1990s levels [7]


If past experience provides any guide, it is that the actual net impact of AI will be very hard to measure, and might or might not produce an identifiable productivity boost in the near term. In the past, positive productivity impact has often taken some time, as much as a decade, to show up in higher productivity growth rates.


Study | Date | Publisher | Key Conclusions
The Impact of Information Technology on Worker Productivity: Firm-Level Evidence | 1999 | The Quarterly Journal of Economics | Found a strong positive correlation between IT investment and labor productivity growth.
Does Information Technology Cause Productivity Growth? | 2000 | American Economic Review | Concluded that IT investment alone does not guarantee productivity gains; effective implementation and organizational change are crucial.
The Productivity Paradox: Are Computers a New Age of Diminishing Returns? | 1999 | Harvard Business Review | Explored the idea that early IT investments may not have yielded significant productivity gains due to factors like learning curves and organizational adjustments.
Measuring the Impact of Information Technology on Productivity Growth | 2001 | Brookings Institution | Examined various methodologies for measuring IT's impact on productivity, highlighting the challenges of isolating the effect of technology from other factors.
Information Technology and Productivity: A Review of the Literature | 2002 | Journal of Economic Literature | Provided a comprehensive review of existing research on IT and productivity, summarizing key findings and identifying areas for future research.
The Diffusion of the Internet and the Productivity Paradox | 2004 | Review of Economic Studies | Investigated the role of the internet in productivity growth, finding that its impact may be more significant in the long run as businesses adapt and integrate internet technologies.
Does IT Really Matter? | 2005 | MIT Press | Explored the broader societal and economic impacts of IT, beyond just productivity, considering factors like job displacement, income inequality, and social change.
The Productivity Paradox Revisited: Resolving the Debate | 2006 | Information Economics and Policy | Re-examined the productivity paradox debate, arguing that earlier studies may have underestimated the impact of IT due to measurement challenges and the time lag between investment and productivity gains.
The Impact of Information Technology on Economic Growth | 2008 | National Bureau of Economic Research | Analyzed the long-term impact of IT on economic growth, finding evidence that IT has played a significant role in driving economic growth in recent decades.
The Economics of Information Technology | 2010 | Addison-Wesley | Provided a comprehensive overview of the economics of IT, covering topics such as investment, innovation, productivity, and market structure.
The Digital Revolution and the New Economy | 2011 | Oxford University Press | Explored the broader social and economic transformations brought about by the digital revolution, including the rise of the internet, e-commerce, and the gig economy.

Revenue Often Does Not Drive FTTH Value

It often is hard to determine when it is worthwhile to upgrade copper access facilities to fiber-to-home platforms, in large part because competitive dynamics, customer density and total investment costs (upgrade owned copper to fiber, or buy copper assets and then upgrade to fiber) vary so much.


In many cases, the financial upside comes not so much from operating revenue results but from equity value increase.


Fiber networks generally command higher valuations compared to copper networks. For example, while copper access lines from Lumen were acquired by Apollo Global in 2022 for about $1,154 per passing, the estimated value post-fiber upgrade ranged from $2,154 to $2,654 per passing.


The value of fiber assets can increase dramatically with higher customer take rates. A network with a 40% take rate may be worth roughly twice as much as one with a 20% take rate.


As a rule, the “average” cost of upgrading a telco copper access line to fiber is roughly $1,000 to $1,500 per passing (location), assuming 50-80 homes per mile, a suburban density. 


Costs arguably are lower for urban densities and higher for rural passings. Financial return often hinges on population density and competitive dynamics, however. Assuming at least two competent internet access providers in a market, a fiber upgrade of owned assets might count on revenue from half, or less than half, of passed locations, since the other competitor will take roughly half the market share.


For such reasons, many independent ISPs choose to build only in parts of any metro area, while many incumbent asset owners will tend to follow suit. In other words, it might generally make sense to upgrade in urban and suburban areas (often focusing on single-family residences) while delaying or finding different platforms for rural and ex-urban areas (fixed wireless, mobile substitution, satellite). 


But the key point is that the financial opportunity is to rebuild networks for fiber access and boost take rates for those assets. 


The cost per passing is one figure, but even after spending the money to upgrade to fiber, if take rates climb, the value of the assets can still exceed the cost of acquisition plus upgrade.


For Apollo Global, for example, the acquisition of “mostly” copper access lines from Lumen in 2022 cost about $1,154 per passing. Once upgraded for fiber access (boosting per-location investment to between $2,154 and $2,654), and assuming take rates can be boosted to 40 percent, the financial value of the assets still grows.
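A hypothetical back-of-envelope sketch, using the per-passing figures cited above, makes the arithmetic concrete. Two assumptions are mine, not the source's: that per-passing value scales linearly with take rate (consistent with the claim that a 40% take rate is worth roughly twice a 20% take rate), and that the low end of the post-upgrade range ($2,154) corresponds to a 20% take rate.

```python
# Hypothetical model using the per-passing figures cited in the text.
# Assumptions (illustrative, not from transaction data): value scales
# linearly with take rate; $2,154 is the per-passing value at a 20% take rate.
ACQ_PER_PASSING = 1154      # 2022 Apollo/Lumen acquisition cost per passing
UPGRADE_PER_PASSING = 1200  # assumed mid-range fiber upgrade cost per passing

def value_per_passing(take_rate, value_at_20pct=2154):
    """Illustrative linear scaling of per-passing value with take rate."""
    return value_at_20pct * (take_rate / 0.20)

total_cost = ACQ_PER_PASSING + UPGRADE_PER_PASSING  # acquisition + upgrade
for rate in (0.20, 0.40):
    print(f"take rate {rate:.0%}: value ${value_per_passing(rate):,.0f} "
          f"vs cost ${total_cost:,.0f}")
```

Under these assumptions, value per passing falls short of the combined acquisition-plus-upgrade cost at a 20% take rate but comfortably exceeds it at 40%, which is the narrative point: the upgrade pays off only if take rates climb.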


Year | Seller | Buyer | Assets | Valuation | Notes
2022 | Lumen | Apollo Global | Mostly copper access lines | $1,154 per passing | Acquisition cost before fiber upgrade
2022 | Lumen | Apollo Global | After fiber upgrade | $2,154 - $2,654 per passing | Estimated value post-upgrade
2023-2024 | Various | Various | Fiber networks | $2,000 - $3,000 per passing | Typical range for suburban areas [1]
2023-2024 | Various | Various | Copper networks | $500 - $1,000 per passing | Estimated range based on industry trends


Beyond those considerations, incumbent owners of copper access assets have other values to consider. Any telco that does not upgrade from copper to fiber likely cannot survive long term in the market when competitors do so. 


So irrespective of the actual business case, any access provider that wants to remain in business must consider fiber upgrades. “You get to keep your business” is the strategic rationale, not “higher revenues, lower costs and higher profits.” 


Wednesday, February 19, 2025

Lower-Cost LLMs are Necessary, but not Sufficient, for Agentic AI

There often is a tendency to believe that lower-cost large language models (generative artificial intelligence) have direct implications for the cost of other forms of AI. That is at best partly true, one can argue.


Consider the relationship between LLMs and agentic AI or “artificial general intelligence.”  

While LLMs provide language fluency and broad knowledge, they lack deep reasoning, memory, planning, and real-world interaction. A true AGI would integrate LLMs with other AI paradigms, including: 

  • Symbolic AI for logic and reasoning

  • Reinforcement learning for decision-making

  • Memory systems for persistent knowledge

  • Multimodal AI for vision, speech, and sensory input

  • Self-learning and world modeling for adaptability


Artificial General Intelligence (AGI) would require a system that can learn, reason, adapt, and generalize across a wide range of tasks, much like a human. While LLMs (Large Language Models) are powerful in processing and generating text, they have key limitations that prevent them from achieving AGI on their own. However, they can play an important role as a component within a larger AGI system.


Likewise, LLMs provide language understanding, reasoning, and decision support, making them useful for agentic AI in several ways:

  • Language comprehension and generation – LLMs enable agents to process natural language instructions, communicate with users, and generate responses.

  • Reasoning and planning – Through prompt engineering (e.g., Chain-of-Thought prompting), LLMs can simulate step-by-step problem-solving.

  • Knowledge retrieval and synthesis – LLMs act as information processors, integrating and summarizing knowledge from different sources.

  • Code and automation – LLMs can generate and execute code, allowing agents to perform automated workflows.


However, LLMs alone are reactive. They respond to prompts rather than initiating actions autonomously. To become true agents, AI needs additional capabilities.


The Role of LLMs in AGI

LLMs can serve as a language and knowledge engine in AGI by:

  • Understanding and generating natural language (communication)

  • Encoding vast amounts of world knowledge

  • Generating code, plans, and reasoning chains (for problem-solving)


However, AGI needs much more than just language modeling. It requires learning, reasoning, memory, perception, and real-world interaction.


To create an AGI system, additional AI subsystems beyond LLMs would be needed, including:

  • Memory and long-term knowledge retention: LLMs do not retain memory between sessions. AGI needs episodic memory (remembering past interactions) and semantic memory (storing structured facts over time), so LLMs need to integrate with databases or vector memory systems.

  • Reasoning and planning: LLMs can do some reasoning, but they do not truly understand causality or plan long-term, so an AGI would require logic-based reasoning systems, similar to symbolic AI or neuro-symbolic approaches.

  • Learning beyond pretraining: AGI must be able to continually learn and update its knowledge based on new experiences, which might involve meta-learning, reinforcement learning, and active learning approaches.

  • Multimodal perception: AGI would need vision, audio, and sensor-based perception.

  • Goal-directed behavior and autonomy: AGI would need an autonomous agent system that can pursue objectives, optimize actions, and self-correct over time.

  • Embodiment and real-world interaction: some argue AGI will need a physical or simulated "body" to interact with the world, similar to how humans learn.


Instead of replacing LLMs, AGI systems may incorporate them as a central knowledge and communication layer while combining them with other AI components. 


Role of LLMs for Agentic AI


To transform LLMs into autonomous agents, researchers combine them with additional components, such as:

  • Memory-augmented LLMs: vector databases (such as Pinecone, Weaviate, ChromaDB) store and retrieve past interactions, allowing agents to remember previous tasks and refine their behavior over time. AutoGPT and BabyAGI use memory to track goals and intermediate steps, for example.

  • Planning and decision-making modules: LLMs are combined with reinforcement learning, symbolic AI, or search-based planning systems to enable structured reasoning. OpenAI’s tool-use framework lets LLMs call APIs, retrieve information, and solve complex problems step-by-step, for example.

  • API and environment interaction: LLM-powered agents need tools to execute actions, such as calling APIs, running scripts, or manipulating environments. LangChain and OpenAI Functions enable LLMs to interact with external tools (databases, automation scripts), for example.

  • Feedback and self-improvement loops: agents use self-reflection to evaluate and refine their outputs.
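The memory-augmentation idea above can be sketched with a toy in-process vector store. The `embed()` function below is a bag-of-words stand-in for a real embedding model, an assumption for illustration only; a production agent would use learned embeddings and a vector database such as those named above.

```python
# Toy sketch of vector memory: store past interactions, retrieve the most
# similar ones before acting. embed() is a bag-of-words stand-in, NOT a real
# embedding model; real systems use learned embeddings and a vector database.
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': word-count vector (illustrative assumption)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorMemory:
    def __init__(self):
        self.items = []  # list of (vector, original text)

    def store(self, text):
        self.items.append((embed(text), text))

    def recall(self, query, k=1):
        """Return the k stored texts most similar to the query."""
        q = embed(query)
        ranked = sorted(self.items, key=lambda it: cosine(q, it[0]),
                        reverse=True)
        return [text for _, text in ranked[:k]]

memory = VectorMemory()
memory.store("user asked to summarize Q3 sales report")
memory.store("user prefers bullet-point answers")
print(memory.recall("summarize the Q4 sales report"))
```

The recall step is what lets an agent carry context across tasks: a new request about a sales report surfaces the earlier, related interaction rather than the unrelated one.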


Several AI frameworks integrate LLMs into agentic systems:


  • AutoGPT and BabyAGI: LLM-based agents that autonomously define objectives, plan tasks, execute steps, and iterate on results. An AutoGPT agent for market research might break the job into three tasks: research competitors, summarize trends, and draft a strategy report.

  • LangChain agents: enable use of external tools and application programming interfaces; store and recall memory; plan and execute workflows. An example is a customer service agent that remembers user history and escalates issues as needed.

  • ReAct (Reasoning + Acting): an architecture allowing LLMs to reason about tasks, proceed step by step, and decide on actions. For example, a travel agent acting for a user would conclude "I need to find a flight to New York," then "I should check Google Flights," then "compare prices before booking."
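The ReAct pattern above can be sketched as a minimal reason-act-observe loop. The `fake_llm` function and the tool names here are hypothetical stand-ins; a real system would call an actual model and real tools (for example, a flight-search API).

```python
# Minimal ReAct-style loop (reason -> act -> observe), with a scripted
# stand-in for the LLM. fake_llm and the tool names are hypothetical.
def fake_llm(history):
    """Scripted 'model': returns the next (thought, action) for the task."""
    script = [
        ("Thought: I need to find a flight to New York.", "search_flights"),
        ("Thought: I should compare prices before booking.", "compare_prices"),
        ("Thought: Cheapest option found; book it.", "book"),
    ]
    return script[len(history)]

TOOLS = {
    "search_flights": lambda: "3 flights found",
    "compare_prices": lambda: "cheapest is $210",
    "book": lambda: "booked",
}

def react_agent(max_steps=5):
    history = []  # (thought, action, observation) triples
    for _ in range(max_steps):
        thought, action = fake_llm(history)  # reason
        observation = TOOLS[action]()        # act, then observe
        history.append((thought, action, observation))
        if action == "book":                 # terminal action
            break
    return history

for thought, action, obs in react_agent():
    print(f"{thought} -> {action} -> {obs}")
```

The loop structure, not the scripted content, is the point: each step interleaves a reasoning trace with a concrete action whose observation feeds the next step.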


The main point is that LLMs are a functional part of platforms aiming to provide agentic AI and future AGI systems, but an LLM cannot enable those systems by itself, the way an operating system enables a personal computer to function. An LLM is one part of a suite of capabilities.


The implication is that lower-cost LLM training and inference costs contribute to other developments in agentic AI and AGI, but are not a sole and sufficient driver of those developments.


Why Agentic AI "Saves" Google Search

One reason Alphabet’s equity valuation has been muted recently, compared to some other “Magnificent 7” firms, is the overhang from potential...