
Wednesday, November 5, 2025

What Do We Do When AGI Automates Much Economically-Essential Work?



It is reasonable to suggest that, at the moment, agentic artificial intelligence is not yet ready to fully displace many human jobs. Hopes are higher (or worries greater, depending on one’s point of view) for artificial general intelligence.


The far-reaching implications, though, would come if artificial general intelligence does acquire such capabilities. However hard it might be to imagine a world where nearly all essential work can be done by “compute,” the economic ramifications would be stunning and unprecedented.


“Before AGI, human skill was the main driver of output, and wages reflected the scarcity of skills needed for bottleneck tasks,” says Pascual Restrepo, author of the paper “We Won’t be Missed: Work and Growth in the AGI World,” published by the National Bureau of Economic Research.


Consider the potential impact on jobs, wages and sources of value. “In an AGI world, compute takes that central role, and wages are anchored to the computing cost of replicating human skill,” he argues. “While human wages remain positive, and on average exceed those in the pre-AGI world, their value becomes decoupled from GDP, the labor share converges to zero, and most income eventually accrues to compute.” 


There are some caveats. 


AGI assumes we can replicate what people do if we throw enough compute at the tasks. That does not mean it is practical or efficient to automate everything.


Depending on the computing costs 𝛼(𝜔), it may be better to leave some tasks to humans and allocate our finite computational resources elsewhere.
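In code form, that allocation logic is just a per-task comparison. A minimal sketch, with illustrative numbers and function names of my own (not from Restrepo’s paper); the wage-anchoring idea is that the human wage for a task cannot exceed what the task would cost in compute:

```python
# Illustrative sketch of the allocation logic described above (example values are my own).
# alpha_w: compute required to replicate a human at task w (e.g., GPU-hours)
# p_compute: price per unit of compute; w_human: the human wage for the task

def should_automate(alpha_w: float, p_compute: float, w_human: float) -> bool:
    """Automate task w only if emulating the human is cheaper than hiring one."""
    return alpha_w * p_compute < w_human

# A task needing 40 GPU-hours at $2/GPU-hour vs. a $60 human wage: keep the human.
print(should_automate(alpha_w=40, p_compute=2.0, w_human=60))  # False
# Halve the compute requirement and automation wins; the wage is now anchored near $40.
print(should_automate(alpha_w=20, p_compute=2.0, w_human=60))  # True
```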


Also, some work requires interacting physically with the world. AGI optimists assume that, when needed, and if economically rational, computer systems can control machines and hardware to accomplish this work. 


Some work requires empathy and social interaction and, it is argued, must be carried out by humans. The “human touch” and “empathy” of a therapist or healthcare provider may be impossible to replicate, creating a premium for work completed by people. 


The issue is whether we can substitute so much compute that the choice really becomes one between a human and an AI system that “perfectly emulates the best therapists in the world (from a functional point of view).”


Assuming we can afford to do so, one might rationally argue there are some, or many, instances where the AI is an acceptable substitute. 


One must also assume that compute capabilities and costs continue to scale over time at something like the Moore’s Law rate.
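If that assumption holds, the crossover date for any given task is a simple compounding exercise. A back-of-envelope sketch, assuming (purely for illustration) that the compute cost of a task halves every two years:

```python
import math

def years_until_automation(compute_cost_today: float, human_wage: float,
                           halving_period_years: float = 2.0) -> float:
    """Years until a Moore's-Law-like decline drives compute cost below the human wage."""
    if compute_cost_today <= human_wage:
        return 0.0
    return halving_period_years * math.log2(compute_cost_today / human_wage)

# Example: a task costing $480 in compute today against a $60 human wage.
print(years_until_automation(480, 60))  # 6.0 years at a two-year halving rate
```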


All that noted, we might still argue that even if some work can be automated, it might not be. There will of course be a cost for using AGI. And if the costs are significant enough, and the tasks being considered for AI substitution can be handled by humans at equivalent or lower cost, then using AGI will not make sense. 


Hospitality, live performance or entertainment might provide examples. 


Also, AGI compute might be a scarce resource. If so, then normal cost-benefit logic should hold: AGI replaces human labor when it makes economic sense to do so.


A new theory of value might include the idea that human labor is worth what it saves in compute costs, Restrepo suggests. But algorithmic progress, which arguably advances less predictably than “compute infrastructure,” also matters: that uncertainty introduces volatility into any such valuation.


The social implications are huge. In an AGI economy, most income accrues to owners of compute. How society manages such a transition, in terms of impact on social inequality, is unclear. 


As Restrepo says, “today, if half of us stopped working, the economy would collapse.” That might not be true in a future where AGI can be economically deployed to displace humans in economy-central roles. 


All of which raises new issues around “abundance” that humans have not generally had to deal with in the past: what do people do when they do not actually need to work?


Friday, September 26, 2025

AI Impact Will Come Mostly from Consumer Products and Services, Not Enterprise

It is fair enough to raise questions about whether the coming investment in AI compute infrastructure is matched by the new AI revenues that investment is expected to generate.


“Two trillion dollars in annual revenue is what’s needed to fund computing power needed to meet anticipated AI demand by 2030,” according to researchers at Bain and Company. “However, even with AI-related savings, the world is still $800 billion short to keep pace with demand.”


Bain’s sixth annual Global Technology Report predicts that, by 2030, global incremental AI compute requirements could reach 200 gigawatts, with the United States accounting for half of that capacity.


So here’s the thinking: even if companies in the U.S. market shifted all of their on-premises information technology budgets to cloud, and reinvested the savings from applying AI in sales, marketing, customer support, and research and development into capital spending on new data centers, the total would still fall short of the revenue needed to fund the full investment, because AI’s compute demand grows at more than twice the rate of Moore’s Law, Bain argues.
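To see why that gap is so hard to close, compare the compounding rates. A rough sketch, treating Moore’s Law as a doubling roughly every two years and “more than twice that rate” as a doubling every year (both are simplifying assumptions on my part, not Bain’s model):

```python
def growth_multiple(years: float, doubling_period_years: float) -> float:
    """Cumulative growth after `years` at a fixed doubling period."""
    return 2 ** (years / doubling_period_years)

horizon = 5  # roughly 2025 to 2030
supply_at_moores_law = growth_multiple(horizon, doubling_period_years=2.0)  # ~5.7x
demand_at_twice_rate = growth_multiple(horizon, doubling_period_years=1.0)  # ~32x

print(f"Moore's-Law-rate growth over {horizon} years: {supply_at_moores_law:.1f}x")
print(f"Demand doubling yearly over {horizon} years:  {demand_at_twice_rate:.1f}x")
```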


The return on investment arguably looks different, though, if we consider AI’s impact on consumer products and services.


PwC estimates that up to $9.1 trillion of the total global GDP gain from AI by 2030 will come from consumption-side effects (increased demand due to personalized, higher-quality products and services). 


In other words, productivity improvements are part of the story, but not the whole story. 


A report by Cognizant and Oxford Economics projects that consumers who embrace AI could drive $4.4 trillion in AI-influenced consumer spending in the United States alone by 2030.


The global consumer AI market size is projected to reach approximately $674.49 billion by 2030, growing at a CAGR of 28.3% (NextMSC forecast). 
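That kind of forecast is just compound growth. A quick sketch of the arithmetic, where the 2024 base value is a hypothetical figure I back out to match the projection, not a NextMSC number:

```python
def project_market(base_value: float, cagr: float, years: int) -> float:
    """Compound growth: future = base * (1 + CAGR) ** years."""
    return base_value * (1 + cagr) ** years

# A hypothetical ~$150B base in 2024 compounding at 28.3% for six years
print(f"${project_market(150e9, 0.283, 6) / 1e9:.0f}B")  # ~$669B, near the ~$674B forecast
```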


Feature | Bain Argument (B2B/Enterprise Focus) | Consumer AI (B2C/Consumption Focus)
Primary Metric | Annual revenue needed to fund AI compute capital expenditure ($2T needed, $800B shortfall) | Increased consumer spending and consumption-side GDP boost (e.g., $4.4T in influenced spending in the United States, $9.1T in global GDP from consumption)
Key Conclusions | Supply-side funding shortfall to build the necessary data centers and computing power | Demand-side explosion creating massive new market value and consumption



Study Name | Date | Publisher(s) | Key Conclusion on Consumer Impact | Web Link
Sizing the Prize | Oct 2017 | PwC | AI will boost global GDP by $15.7 trillion by 2030. Crucially, $9.1 trillion (58%) of this gain will come from consumption-side effects (increased consumer demand for personalized, higher-quality products and services). | https://www.pwc.com/gx/en/issues/analytics/assets/pwc-ai-analysis-sizing-the-prize-report.pdf
New Minds, New Markets | Jan 2025 | Cognizant & Oxford Economics | Consumers who embrace AI could drive $4.4 trillion in AI-influenced consumer spending in the U.S. by 2030, accounting for 46% of total U.S. spending. AI will revolutionize the purchase journey (Learn, Buy, Use). | https://investors.cognizant.com/news-and-events/news/news-details/2025/Cognizant-Study-Shows-Consumers-Who-Embrace-AI-Could-Drive-4.4-Trillion-in-Spending-Over-Five-Years/default.aspx
The economic potential of generative AI | June 2023 | McKinsey Global Institute | Generative AI could add the equivalent of $400 billion to $660 billion annually to the retail and consumer packaged goods sectors across the 63 use cases analyzed globally. | McKinsey
The State of Consumer AI | June 2025 | Menlo Ventures | The consumer AI market has reached $12 billion in the 2.5 years since generative AI went public. The low conversion rate (3% paying for premium) indicates a massive monetization opportunity, especially for specialized AI tools and voice AI. | https://menlovc.com/perspective/2025-the-state-of-consumer-ai/
AI's transformation of consumer industries | Apr 2025 | World Economic Forum (WEF) | GenAI could yield an extra $1.2 trillion in economic value across seven geographies within consumer industries by 2038. Projected impacts include a 10-20% revenue uplift and a 60% reduction in content production costs. | https://www.weforum.org/stories/2025/04/ai-transformation-consumer-industries-wef-report/


The point is that we do not yet know the size of the markets and benefits AI will create, against which to weigh the cost of the computing infrastructure needed to support AI use cases. But enterprise impact is likely the lesser of the drivers; consumer products and services are where most of the returns are likely to appear.


Friday, May 30, 2025

Will AI Boost Productivity Enough to Pay for Universal Basic Income?

One important issue national policymakers will have to confront is what has to be done if artificial intelligence does lead to widespread job losses because machines really are able to do work that today is done by humans.


At a high enough level of job displacement, social dislocations are inevitable. Much hinges on the unknown of how far productivity can be lifted to pay for social programs offsetting the loss of jobs, but it seems fairly obvious that a huge increase in productivity will be required.


One study suggests the required productivity gains would be about five to six times greater than what we have tended to see from earlier automation technologies.


Krohn, J. (2025)
Key contributions / AI productivity insights: Analyzes AI advancements, focusing on the reduction in model size and cost of running LLMs, and the increasing capabilities of AI agents. Notes that in short time-horizon settings, top AI systems can outperform human experts.
Degree of AI productivity required for robust UBI: While not providing a specific "degree" of productivity, Krohn (2025) suggests that the increasing efficiency and decreasing cost of AI models will lead to significant corporate AI adoption and productivity gains. This implies a future where AI's contribution to overall economic output is substantial enough to support broad social programs.
Considerations/caveats for UBI implementation: Krohn's "State of AI Report 2025" focuses more on current AI trends and capabilities than on directly calculating the precise productivity threshold for UBI. However, the reported exponential improvements in model efficiency, cost reduction, and agent performance strongly imply that AI is on a trajectory to generate the necessary wealth. The challenge then shifts from whether AI can generate enough to how that wealth is distributed.

Nayebi, A. (2025)
Key contributions / AI productivity insights: Develops a closed-form condition for AI capital profits to sustainably finance UBI without additional taxes or new job creation in a Solow-Zeira economy. Examines how the AI capability threshold (productivity relative to pre-AI automation) varies.
Degree of AI productivity required for robust UBI: AI systems must achieve approximately 5-6 times existing automation productivity to finance an 11%-of-GDP UBI, in a worst-case scenario where no new jobs are created. This threshold can be halved to roughly 3 times existing automation productivity if the public revenue share of AI capital is raised to 33% (from the current 15%); a rough sketch of this relationship follows the table.
Considerations/caveats for UBI implementation: This study provides a concrete, model-based estimation. It highlights the importance of the public revenue share of AI profits and of market structure (monopolistic markets could make UBI easier to fund due to higher rents). Assumes a "worst-case scenario" of no new job creation.

Goldman Sachs Report (2023, cited in various 2024/2025 discussions)
Key contributions / AI productivity insights: Projects that generative AI alone could automate 300 million full-time jobs globally, while also leading to a significant rise in global GDP (estimated at 7%).
Degree of AI productivity required for robust UBI: Implicitly, the productivity gains from automating 300 million jobs and a 7% rise in global GDP are seen as the wellspring from which UBI could be funded. The sheer scale of potential productivity increase is a key factor.
Considerations/caveats for UBI implementation: Focuses on the potential for AI to create immense wealth, but the challenge remains the distribution of that wealth. The report does not explicitly state the "degree of productivity" needed to fund UBI, but rather the overall economic impact.

Brynjolfsson, Li, and Raymond (2023, cited by Brookings)
Key contributions / AI productivity insights: Showed that call center operators became 14% more productive when using generative AI, with gains over 30% for the least experienced workers. Also noted improved customer sentiment and lower employee attrition.
Degree of AI productivity required for robust UBI: Reports specific percentage gains in productivity for certain tasks and occupations. While not an economy-wide figure, these micro-level gains contribute to the larger picture of AI's productivity potential.
Considerations/caveats for UBI implementation: These are early, specific examples of productivity gains. Scaling these individual gains to a robust, economy-wide UBI requires further aggregation and consideration of broader economic impacts.

Korinek (2023, cited by Brookings)
Key contributions / AI productivity insights: Estimates that economists can be 10-20% more productive using large language models.
Degree of AI productivity required for robust UBI: Similar to Brynjolfsson et al., this provides a specific example of AI-driven productivity enhancement in a knowledge-based profession.
Considerations/caveats for UBI implementation: Like other micro-level studies, these gains need to be considered in the context of broader economic shifts and potential job displacement.

Santens, S. (2025, various articles and discussions)
Key contributions / AI productivity insights: Argues that AI-driven automation is fundamentally different from previous shifts, automating cognitive labor, decision-making, and creativity, leading to productivity without widespread prosperity (i.e., increased inequality).
Degree of AI productivity required for robust UBI: Argues that the current level of AI-driven productivity has already decoupled from wage growth, leading to massive inequality. Thus, the "degree of productivity" is already sufficient to address poverty, but the distribution mechanism is flawed.
Considerations/caveats for UBI implementation: Highlights that the issue might not be a lack of productivity from AI, but rather how the economic benefits are currently concentrated. Advocates for UBI as a tool to redistribute this wealth.

Bertello, G. P., & Almeida, T. (2025)
Key contributions / AI productivity insights: Analyze the history of UBI and argue for its necessity as a new social contract in the age of AI and automation, to address wage inequality, job insecurity, and widespread job losses.
Degree of AI productivity required for robust UBI: Does not quantify a specific productivity degree, but implicitly argues that the disruptive potential of AI (leading to job displacement and enhanced productivity) necessitates UBI.
Considerations/caveats for UBI implementation: Emphasizes the need for sustainable funding, investment in education, and attention to social and psychological aspects, not just economic and labor market outcomes.
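The Nayebi (2025) figures above imply a roughly inverse relationship between the public revenue share of AI capital and the productivity multiple required. A crude illustrative calibration to the two reported data points (this is not the paper’s closed-form condition):

```python
# Crude illustration only: calibrate required_multiple(0.15) to ~5.5x, matching the
# "approximately 5-6 times" figure above, and check the higher-revenue-share case.
def required_multiple(public_revenue_share: float, k: float = 0.83) -> float:
    """Productivity multiple (vs. pre-AI automation) needed to fund an 11%-of-GDP UBI."""
    return k / public_revenue_share

print(round(required_multiple(0.15), 1))  # ~5.5x at today's ~15% public revenue share
print(round(required_multiple(0.33), 1))  # ~2.5x, i.e. roughly halved, at a 33% share
```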


The core challenge is to ensure that the wealth generated by AI is sufficient to cover the UBI, without stifling innovation or leading to undesirable economic side effects such as inflation or capital flight. This implies that the rate of productivity growth must outpace the cost of UBI relative to the existing economy.


Even with significant productivity gains, the sheer cost of a meaningful UBI for an entire population is immense. For example, providing $12,000 per adult annually in the US would cost trillions of dollars.
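The arithmetic behind “trillions” is straightforward. A quick sketch, assuming roughly 260 million US adults (an approximation I am supplying, not a figure from the studies above):

```python
# Back-of-envelope UBI cost; the adult-population figure is an assumption for illustration.
us_adults = 260_000_000
ubi_per_adult_per_year = 12_000

annual_cost = us_adults * ubi_per_adult_per_year
print(f"~${annual_cost / 1e12:.1f} trillion per year")  # ~$3.1 trillion
```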


Some observers will recall past expectations about automation or the cost of inputs (“energy too cheap to meter” in the case of nuclear energy). That has been a recurring theme for forecasters and futurists. 


1960s-1970s: Early Mainframes & Minicomputers
Key expectations & predictions (at the time): Automation of administrative tasks, complex calculations, and inventory management leading to significant efficiency gains. Early visions of "paperless offices."
"Too cheap to meter" parallel for computing/IT goods: The idea that computation itself would become so affordable that it would be integrated into every aspect of business and decision-making, with the cost of individual calculations becoming negligible.
Underlying technological advances/reasons for optimism: Moore's Law (1965): Gordon Moore's observation that the number of transistors on an integrated circuit doubles approximately every two years, leading to exponential increases in computing power at decreasing costs. This fundamental principle fueled optimism for decades.
Realization & caveats: While costs decreased, the scale of problems being tackled also grew. Early computers were expensive and complex, requiring specialized personnel and significant infrastructure, limiting the "too cheap" aspect for many goods. The focus was on efficiency rather than outright cost elimination for end products.

1980s-1990s: Personal Computers & Early Internet
Key expectations & predictions (at the time): Democratization of computing, empowering individuals and small businesses. Increased productivity in offices through word processing, spreadsheets, and desktop publishing. The rise of email and digital communication reduces communication costs.
"Too cheap to meter" parallel for computing/IT goods: The cost of information creation, storage, and transmission would become effectively free, allowing for near-instantaneous and borderless exchange of data and ideas. The "Information Superhighway" promises a connected world.
Underlying technological advances/reasons for optimism: Further advancements in microprocessors, memory, and storage: smaller, more powerful, and cheaper components enabled the widespread adoption of PCs. Development of the Internet and World Wide Web: protocols and infrastructure made global digital communication possible and increasingly affordable.
Realization & caveats: While costs plummeted for personal computing and basic internet access, the "free" aspect often overlooked the underlying infrastructure, software development, and the human capital required to leverage these tools effectively. The "productivity paradox" emerged in the 1990s, where significant IT investment did not always translate to immediate, measurable economy-wide productivity gains.

2000s-2010s: Broadband, Cloud Computing, Mobile Devices
Key expectations & predictions (at the time): Ubiquitous connectivity, on-demand computing resources, and access to vast amounts of information from anywhere. Transformation of industries through e-commerce, digital media, and mobile applications.
"Too cheap to meter" parallel for computing/IT goods: The expectation that access to immense computing power and data storage through the cloud would be a utility-like service, with marginal costs approaching zero. Data itself would become "too cheap to meter" in terms of its accessibility and use.
Underlying technological advances/reasons for optimism: Cloud computing: shifting from owning computing infrastructure to renting it on demand, drastically reducing upfront costs and scaling capabilities. Mobile technology: smartphones and tablets made computing and information access portable and pervasive. Big data: the ability to collect, store, and analyze massive datasets, leading to new insights and efficiencies.
Realization & caveats: While cloud services significantly reduced the variable cost of computing for many, the total cost of complex IT systems, cybersecurity, and data management remained substantial. The increasing volume of data also presented new challenges and costs. The shift was more toward "cost-effective on demand" than truly "too cheap to meter."

2020s (Current & Near Future): Generative AI
Key expectations & predictions (at the time): Automation of cognitive tasks, content generation, and sophisticated problem-solving. Significant productivity boosts across knowledge work, design, and even scientific research. Potential for AI to optimize production processes in unprecedented ways.
"Too cheap to meter" parallel for computing/IT goods: OpenAI CEO Sam Altman has stated that AI is heading toward being "too cheap to meter," particularly the cost of "intelligence" (i.e., AI inferences or "tokens"). Google CEO Sundar Pichai has echoed similar sentiments, stating "intelligence, just like air, too cheap to meter," given the rapid fall in the cost of generating AI outputs.
Underlying technological advances/reasons for optimism: Large language models (LLMs) and generative AI: breakthroughs in AI architecture and training data have enabled highly capable models. Decreasing cost of AI inference: as models become more efficient and specialized hardware develops, the computational cost per AI operation is falling rapidly.
Realization & caveats: While the marginal cost of a single AI query or generated output is indeed plummeting, the initial development and training of these massive AI models are incredibly expensive. The ethical, societal, and regulatory costs associated with widespread AI adoption are also emerging. The focus now is on how this "cheap intelligence" can be effectively integrated to reduce the cost of final goods and services. (A simple amortization sketch follows this table.)
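One reason the caveat in the last row matters: the enormous fixed cost of training has to be amortized across usage, so the average cost per query stays far above the marginal cost until volumes become very large. A sketch with entirely hypothetical numbers:

```python
def avg_cost_per_query(training_cost: float, marginal_cost: float, total_queries: float) -> float:
    """Average cost per query once a fixed training cost is spread over total usage."""
    return training_cost / total_queries + marginal_cost

# Hypothetical: a $1B training run and a $0.001 marginal inference cost per query.
for queries in (1e9, 1e11, 1e13):
    print(f"{queries:.0e} queries -> ${avg_cost_per_query(1e9, 0.001, queries):.4f} per query")
# Only at ~1e13 queries does the average approach the marginal cost.
```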


In all likelihood, AI will not reduce the net number of jobs as some fear, but it also probably will not improve productivity enough to easily support new UBI policies. It might take some combination of productivity growth, elimination of existing welfare programs, new taxes, or other government funding to compensate for extensive AI-driven changes in an economy’s need for labor.


Yes, Follow the Data. Even if it Does Not Fit Your Agenda

When people argue we need to “follow the science” that should be true in all cases, not only in cases where the data fits one’s political pr...