Wednesday, November 5, 2025

What Do We Do When AGI Automates Much Economically-Essential Work?



It is reasonable to suggest that, at the moment, agentic artificial intelligence is not yet ready to displace many entire human jobs. Hopes are higher (or more worrisome, depending on one’s point of view) for artificial general intelligence.


The truly far-reaching implications, though, would come if artificial general intelligence does acquire such capabilities. Hard as it might be to imagine a world where nearly all essential work can be done by “compute,” the economic ramifications would be stunning and unprecedented.


“Before AGI, human skill was the main driver of output, and wages reflected the scarcity of skills needed for bottleneck tasks,” says Pascual Restrepo, author of the paper “We Won’t be Missed: Work and Growth in the AGI World,” published by the National Bureau of Economic Research.


Consider the potential impact on jobs, wages and sources of value. “In an AGI world, compute takes that central role, and wages are anchored to the computing cost of replicating human skill,” he argues. “While human wages remain positive, and on average exceed those in the pre-AGI world, their value becomes decoupled from GDP, the labor share converges to zero, and most income eventually accrues to compute.” 


There are some caveats. 


AGI assumes we can replicate what people do if we throw enough compute at the tasks. That does not mean it is practical or efficient to automate everything.


Depending on the computing costs α(ω), it may be better to leave some tasks to humans and allocate our finite computational resources elsewhere.
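A minimal way to write down that trade-off, using the paper’s α(ω) notation for the compute required to replicate the skill a task ω demands. The rental price of compute, denoted r, and the task-level human wage w(ω) are symbols added here for illustration, not taken from the paper:

```latex
\text{automate task } \omega \;\iff\; r\,\alpha(\omega) \le w(\omega),
\qquad\text{and once automation binds,}\quad
w(\omega) \;\longrightarrow\; r\,\alpha(\omega)
```

The second expression is the anchoring idea in Restrepo’s quote above: human wages cannot persistently exceed the cost of the compute that can replicate the same skill.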


Also, some work requires interacting physically with the world. AGI optimists assume that, when needed, and if economically rational, computer systems can control machines and hardware to accomplish this work. 


Some work requires empathy and social interaction and, it is argued, must be carried out by humans. The “human touch” and “empathy” of a therapist or healthcare provider may be impossible to replicate, creating a premium for work completed by people. 


The issue is whether we can substitute so much compute that the alternative is really between a human and an AI system that “perfectly emulates the best therapists in the world (from a functional point of view).”


Assuming we can afford to do so, one might rationally argue there are some, or many, instances where the AI is an acceptable substitute. 


One must also assume that compute capabilities and costs continue to scale over time at something like the Moore’s Law rate.
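For a rough sense of what that assumption implies, take a two-year cost halving as the benchmark (an illustrative figure, not one from the paper):

```latex
c(t) = c_0 \cdot 2^{-t/2}
\qquad\Longrightarrow\qquad
c(10) = c_0 \cdot 2^{-5} = \frac{c_0}{32}
```

On that cadence, the compute cost of replicating a fixed human skill falls roughly 30-fold per decade, steadily enlarging the set of tasks where automation pencils out.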


All that noted, we might still argue that even if some work can be automated, it might not be. There will of course be a cost for using AGI. And if the costs are significant enough, and the tasks being considered for AI substitution can be handled by humans at equivalent or lower cost, then using AGI will not make sense. 


Hospitality, live performance or entertainment might provide examples. 


Also, AGI compute might be a scarce resource. If so, then normal cost-benefit logic should hold: AGI replaces human labor when it makes economic sense to do so.
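A minimal sketch of that cost-benefit logic under a finite compute budget. All task names and numbers below are invented for illustration, not drawn from Restrepo’s model: a task is automated only when compute undercuts the human wage, and scarce compute goes first to the tasks with the largest savings per unit of compute consumed.

```python
# Hedged sketch: allocate a finite compute budget across tasks, automating
# a task only when compute undercuts the human wage. All figures invented.

def allocate_compute(tasks, compute_price, budget):
    """tasks: list of (name, human_wage, compute_units_required) tuples."""
    decisions = {}
    candidates = []
    for name, wage, alpha in tasks:
        cost = compute_price * alpha
        if cost < wage:              # automation is cheaper than a human
            candidates.append((name, wage - cost, alpha))
        else:
            decisions[name] = "human (compute too expensive)"
    # Rank automatable tasks by savings per unit of scarce compute.
    candidates.sort(key=lambda c: c[1] / c[2], reverse=True)
    for name, savings, alpha in candidates:
        if alpha <= budget:          # enough compute left for this task
            budget -= alpha
            decisions[name] = f"automate (saves {savings:.0f})"
        else:
            decisions[name] = "human (compute budget exhausted)"
    return decisions

# Invented example: (task, human wage, compute units required).
tasks = [("tax prep", 50, 20), ("therapy", 80, 90), ("code review", 60, 10)]
print(allocate_compute(tasks, compute_price=1.0, budget=25))
```

Even with automation technically feasible everywhere, some tasks stay human, either because compute is too expensive for them or because the budget is better spent elsewhere, which is exactly the caveat above.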


A new theory of value might include the idea that human labor is worth what it saves in compute costs, Restrepo suggests. But algorithmic progress, which arguably advances less predictably than compute infrastructure, matters as well, since that uncertainty introduces volatility.


The social implications are huge. In an AGI economy, most income accrues to owners of compute. How society manages such a transition, in terms of impact on social inequality, is unclear. 


As Restrepo says, “today, if half of us stopped working, the economy would collapse.” That might not be true in a future where AGI can be economically deployed to displace humans in economy-central roles. 


All of which raises new issues around “abundance” that humans have not generally had to deal with in the past: what do people do when they do not actually need to work?


Tuesday, November 4, 2025

AI Equity Volatility Shows 30-Point Swing Between Fear and Greed

“Fear and greed” are notorious drivers of equity market sentiment, and that is clear in the recent yo-yo behavior of artificial intelligence equities.


A positive development such as a new chip announcement, a major partnership like the AWS/OpenAI compute services deal, or strong earnings from an AI leader pushes the market into "extreme greed" territory, driving up prices quickly.


But then reports of high AI capital expenditure, delayed profitability for end users, or a sentiment survey warning of a "bubble" cause profit-taking and selling, plunging the market into "fear" and producing sharp but temporary pullbacks.




| Month | Major Event | Sentiment | Notable Impact |
| --- | --- | --- | --- |
| 2025-01 | DeepSeek launch | Fear | Sharp drop; infrastructure risk flagged |
| 2025-04 | Trump tariffs threat | Fear | Market volatility spiked; quick rebound after walkback |
| 2025-09 | NVIDIA-OpenAI chip deal, Fed rate cut | Greed | Strong surge; positive sentiment returned |
| 2025-10 | Bubble talk surge | Fear | Renewed caution; market exhaustion warnings |


The cycle resets because the fundamental belief in AI's future remains generally strong. Investors who sold out of fear often rush back in for fear of missing the next leg up (greed), making the dips short-lived and creating the current high-volatility, upward-trending cycle. 


But skepticism and hope continue to coexist and oscillate. 


Beyond the volatility, we might argue that “high-performance computing capability” has become a strategic commodity.


High-performance compute capacity arguably has become the single most critical, scarce, and expensive strategic resource in the AI industry. 


If so, long-term, multi-billion-dollar compute contracts are now a competitive necessity, resembling procurement models for essential commodities like energy or raw materials. But volatility will persist until some future time when there is much more predictability about AI investments and revenue gains.


Monday, November 3, 2025

Enterprise Leaders Say They Now Use Generative AI Tools Routinely

A new survey by the Wharton School (University of Pennsylvania) Human-AI Research suggests that enterprise leaders now use generative artificial intelligence routinely, for tasks including data analysis, document summarization, and document editing and writing.


Language models also are reported in routine use by information technology professionals for writing code, by human resources personnel for employee recruitment and onboarding, and by legal personnel for contract generation.


The survey found:

  • 82 percent of respondents use Gen AI at least weekly

  • 46 percent of respondents use it daily

  • 89 percent agree that Gen AI enhances employees’ skills

  • As usage climbs, 43 percent see risk of declines in skill proficiency

  • 72 percent are formally measuring return on investment (productivity gains, incremental profit)


The key caveat is that the ROI is based on hard-to-measure outcomes including employee efficiency, productivity, quality, creativity and security.


It might be fairer to characterize the findings as leaders subjectively believing there is ROI based on efficiency metrics, quality or creativity, but being fundamentally unable to measure such outcomes in a discrete way. 


In other words, when choosing quantitative metrics, we have to assume the chosen metrics bear a direct relationship to outcomes, even when those outcomes are as nebulous as "quality" or "creativity." Metrics such as hours, clicks, speed, performance, impact, benchmarks or retention rates arguably can be measured quantitatively, but often only with subjective assessments layered on top.


 

source: Wharton School, University of Pennsylvania 


Also, in real life it often is the case that multiple business functions (customer service, marketing, product development) change simultaneously. It then becomes difficult to isolate the precise extent to which the chatbot, versus other factors (marketing campaigns or seasonal demand), is responsible for a positive shift in a proxy metric.


If a chatbot helps a firm draft a "better-quality" marketing email, and sales subsequently increase, it's hard to attribute the revenue gain solely to the chatbot's contribution to the email's quality. And how do we quantify “better?” 


How do you assign a dollar value to an employee who is "more creative"? One might estimate the value of time saved, but valuing the new ideas generated with the freed-up time requires a proxy metric. What is that metric?


If a new marketing campaign is launched, even with no increase in spending overall, but the campaign also uses new and different channels, how do we assess the contributions of the change in channels versus the “quality” of the chatbot-assisted campaign?
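One standard, if imperfect, way to approach such attribution questions is a difference-in-differences comparison: track a team that got the tool against a comparable team that did not, before and after rollout, so that shared factors such as seasonality or a concurrent campaign net out. A minimal sketch with invented numbers:

```python
# Hedged sketch: difference-in-differences on an invented proxy metric
# (say, qualified leads per week). Shocks common to both teams cancel.

before_treated, after_treated = 100.0, 130.0   # team using the chatbot
before_control, after_control = 100.0, 115.0   # comparable team without it

did = (after_treated - before_treated) - (after_control - before_control)
print(f"Estimated chatbot effect on the proxy metric: {did:+.1f}")  # +15.0
```

Even this only isolates the effect on the proxy metric; translating that into revenue or "quality" remains the subjective step.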


All such metrics require the creation of "baseline" performance measures, which are often just as subjective as the claimed improvements.


Also, there is the time element. If the outcome is “better brand awareness” or “perception,” there is a lag time between initiative and outcome, even if most other elements of the marketing mix are held “constant.”


Most of us would agree that LLMs save time and increase speed. What is harder to estimate are elements such as the quality of output, the "creativity" of what is produced, and the value of the other work the freed-up time enables.


The fundamental problem is that the "cost" side can be quantified rather easily (license fees, training costs, IT support). The outcomes tend to be softer and harder to measure, even when tracking employee time to complete specific tasks.
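A toy calculation makes the asymmetry concrete. The cost side below is straightforward arithmetic; the benefit side leans on assumed inputs (hours saved and a "quality uplift" multiplier) that are precisely the soft estimates at issue. Every figure is invented:

```python
# Hedged sketch: hard costs vs. soft benefits in a Gen AI ROI estimate.

# Hard, easily quantified annual costs.
total_cost = 30_000 + 10_000 + 5_000   # license fees + training + IT support

# Soft, assumption-laden benefit inputs.
hours_saved_per_week = 40      # assumed, from self-reported time tracking
loaded_hourly_rate = 60.0      # assumed fully loaded labor cost
quality_uplift = 1.10          # assumed 10% "better output" multiplier

annual_benefit = hours_saved_per_week * 52 * loaded_hourly_rate * quality_uplift
roi = (annual_benefit - total_cost) / total_cost
print(f"ROI: {roi:.0%}")       # ~205% here; swings widely with the assumptions
```

Halve the assumed hours saved or drop the quality multiplier and the headline ROI collapses, which is the measurement problem in miniature.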


But that is true for all sorts of innovations, not just AI or language models.


Sunday, November 2, 2025

AI Investment Bubble or Not? Dot-Com Danger or "Only" Normal Overinvestment in a Major New Technology?

Nobody knows yet whether the investment boom in artificial intelligence we now see is a bubble, or not. Conventional wisdom seems to suggest AI is a bubble, but there is disagreement. 


And even if some argue it is a bubble, there remains an argument that there is a significant difference between a dot-com style bubble and the "ordinary" overinvestment associated with the introduction of any major new technology.


To be sure, for some of us there are hints of excess akin to the dot-com overinvestment at the turn of the century. As I was writing one startup business plan, I was told “there’s plenty of money, make it bigger.”


As it turned out, “this time is different” claims, and admonitions that some of us “did not get it,” were wrong. Economics was not different, and normal business logic was not suspended.


But some might note that there are important differences between AI investment and dot-com startup investment. Back then, many bets were placed on small firms with no actual revenue. 


Today, it is the cash flow rich, profitable hyperscalers that dominate much of the activity. Investment burdens are real, but so are immense cash flows and profits to support that investment. 


And by some financial metrics, valuations do not seem as stretched as they were in the dot-com era, though everyone agrees equity market valuations are high, at the moment. 



We also can’t tell yet what impact artificial intelligence might have on productivity and economic growth, much less future revenues for industries and firms. 


And that might be crucial to the argument that there actually is not an investment bubble; that there are real financial and economic upsides to be reaped; new products and industries to be created. 


There is some thinking by economists that AI impact could be greater than that of electricity and at least as important and positive as that of information technology in general.


| General-Purpose Technology | Primary Timeframe of Peak Impact | Estimated Annual Productivity Boost (Peak Rate) | Macro-Level Impact Metric |
| --- | --- | --- | --- |
| Steam engine | Mid-19th century (decades after invention) | 0.2% to 0.3% | Contribution to annual TFP* or labor productivity growth |
| Electrification | 1920s to 1940s (30+ years after initial adoption) | ~0.4% to 0.5% | Contribution to annual TFP or labor productivity growth |
| Information technology (IT)/computers | Mid-1990s to early 2000s | ~1.0% to 1.5% | Acceleration in annual labor productivity growth (U.S.) |
| Artificial intelligence (AI), current forecasts | Early 2030s (7–15 years after GenAI breakthrough) | 1.0% to 1.5% | Projected increase in annual labor productivity growth over 10 years |

*TFP: total factor productivity



| Study/Source | Projection Focus | Estimated Gain (Over Baseline) | Caveats |
| --- | --- | --- | --- |
| Goldman Sachs (2023) | Macroeconomic forecast (global/U.S.) | 7% increase in global GDP over 10 years; 1.5 ppt annual U.S. labor productivity growth | Highly optimistic, assuming rapid adoption and task automation |
| McKinsey Global Institute (2023) | Economic potential of generative AI | $2.6 to $4.4 trillion added annually to the global economy | Based on value from 63 specific use cases across business functions |
| Acemoglu (MIT) | Conservative macroeconomic model | 0.7% increase in TFP over 10 years (U.S. economy) | More modest, based on historical adoption rates and cost-benefit analysis of task automation |
| Brynjolfsson et al. (micro studies) | Firm/task-level productivity | 10% to 40% productivity increase for tasks like coding, customer service and professional writing | Early, firm-level gains that historically take time to show up in aggregate macro statistics |
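The cumulative and annual figures in these tables relate through simple compounding. Assuming a constant annual gain g over ten years:

```latex
(1+g)^{10} = 1 + G
\qquad\Longrightarrow\qquad
(1.0068)^{10} \approx 1.07
```

So Goldman's 7 percent cumulative global GDP gain implies roughly 0.7 percent per year, while Acemoglu's 0.7 percent figure is cumulative over the decade, roughly an order of magnitude apart, which captures much of the "bubble or not" disagreement in a single comparison.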


Each of us has to make a call: bubble or not; big bubble or only “normal” overinvestment?


Contrary to Some Expectations, AI Appears to Grow Google Search Revenue

Perhaps contrary to many expectations, artificial intelligence and chatbots seem to be supporting, rather than cannibalizing, Google search volumes and revenues.


Google CEO Sundar Pichai points out that overall queries and commercial queries grew in the second quarter this year and that the growth rate accelerated in the third quarter. Search revenue was up about 15 percent during the third quarter, for example. 


“AI Overviews drive meaningful query growth,” he noted. “It's particularly encouraging to see the effect was more pronounced with younger people.”


source: Alphabet, Seeking Alpha

Yes, Follow the Data. Even if it Does Not Fit Your Agenda

When people argue we need to “follow the science” that should be true in all cases, not only in cases where the data fits one’s political pr...