Monday, April 27, 2026

Using AI is Not Always "Cheaper" than Using Humans

Although many argue that artificial intelligence can substitute for human workers, AI can also be more expensive than people. It depends on the task.


The MIT CSAIL/Sloan Study on Economic Limits of AI Automation (2024) analyzed computer vision tasks and found that for many jobs or tasks, developing and deploying AI is more expensive than continuing with human workers.


In other cases the opposite can be true. "How Do AI Agents Do Human Work? Comparing AI and Human Workflows" (arXiv, ~2025) found AI agents were 88.3 percent faster and 90 to 96 percent cheaper for tasks across occupations, with low per-interaction costs (e.g., $0.015–$0.12 for customer service vs. human costs of $0.25–$0.42/min).
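
The per-interaction figures above can be turned into a rough comparison. This is an illustrative sketch only: the cost ranges come from the study as cited, but the 4-minute average handle time is an assumption added here for the arithmetic, not a figure from the paper.

```python
# Illustrative per-interaction cost comparison for customer service,
# using the ranges cited above. The 4-minute average handle time is
# an assumption for illustration, not a figure from the study.
ai_cost_range = (0.015, 0.12)    # USD per interaction (AI agent)
human_rate_range = (0.25, 0.42)  # USD per minute (human agent)
handle_time_min = 4.0            # assumed average minutes per interaction

human_cost_range = tuple(r * handle_time_min for r in human_rate_range)

# Worst case for AI: most expensive AI vs. cheapest human, and vice versa.
savings_low = 1 - ai_cost_range[1] / human_cost_range[0]
savings_high = 1 - ai_cost_range[0] / human_cost_range[1]

print(f"Human cost per interaction: "
      f"${human_cost_range[0]:.2f}-${human_cost_range[1]:.2f}")
print(f"Estimated savings: {savings_low:.0%} to {savings_high:.0%}")
```

Under that assumed handle time, savings land in roughly the 88–99 percent range, which is consistent with the 90–96 percent figure the study reports.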


An MIT/Oak Ridge "Iceberg Index" Simulation (2025) found that current AI tools can perform tasks tied to about 12 percent of U.S. labor market wage value at competitive or lower cost. 


An evaluation conducted for the National Bureau of Economic Research notes the trade-offs.


A study by the McKinsey Global Institute estimates that half of work activities are potentially automatable, but suggests hybrid human-AI approaches are “best.”


A study by Goldman Sachs Research estimates AI could automate 25 percent of work hours globally, which might suggest AI can save organizations money.


"Human Labor Versus Artificial Intelligence: A Total Cost of Ownership and Task-Suitability Framework" (2026) suggests the displacement might work best for narrow/repetitive/high-volume tasks. Humans might still be superior for complex/creative tasks.


A study by IDC and McKinsey suggests hybrid models often maximize value. PwC also suggests the hybrid approach is best.


Anthropic found in one study that no major unemployment spike occurred in high-exposure roles after ChatGPT's release, though it did find some hiring slowdown for younger workers. The emphasis there is probably on “early” effects, as AI capabilities will increase over time while organizations become more skillful at deploying AI in high-value ways.


Generally speaking, AI offers raw cost savings for specific tasks, but maximizing total value requires weighing those savings against human strengths and hidden costs.


As you might expect, the “right” deployment models will balance use of digital and human workers. 


Study / Report: Key Findings on Cost/Productivity Tradeoffs (Source/Link)

  • IDC Global AI Survey (2023, referenced in analyses): For every $1 invested in AI, average return of $3.5–4 (up to 4.2x in financial services), achieved within ~14 months in many cases. Emphasizes scalability benefits over linear human costs. (Source: multiple references, e.g., Microsoft/IDC summaries)

  • McKinsey Global Institute – Generative AI Economic Potential (2023/updated): GenAI could add $2.6–4.4T annually to the global economy across 63 use cases (a 15–40% boost on prior AI). Automation of ~30% of work hours by 2030 in some scenarios, but value is unlocked by redesigning workflows around human-AI collaboration (e.g., $2.9T in the US by 2030 in the midpoint agent/robot scenario). Labor costs are often 20–35% of operations; hybrids yield higher ROI. (Source: McKinsey reports, e.g., economic potential of generative AI)

  • Anthropic – Estimating AI Productivity Gains from Claude Conversations (2025): Across 100k real conversations, AI reduces task time by ~80% on average. Tasks valued at a median of ~$54–$55 in human labor cost; extreme cases (e.g., curriculum development) imply $115 of human-equivalent work done in minutes with AI. Suggests a potential 1.8% annual boost to US labor productivity growth. (Source: Anthropic research page)

  • arXiv: "How Do AI Agents Do Human Work? Comparing AI and Human Workflows" (2025): AI agents complete tasks 88.3% faster and 90.4–96.2% cheaper than humans in tested occupations. Per-interaction costs: ~$0.015–$0.12 (tokens) vs. human $15–25/hr equivalents. Notes caveats: reliability and oversight needs; best as hybrid. (Source: arXiv, linked via analyses)

  • Ernst & Young (2022) – AI Document Intelligence: AI reduced document review time by ~90% and costs by ~80%. Example: 1M documents at ~$1.7M with humans vs. ~$450K with AI (3–4x savings). Hybrids (AI filter + human check) improve accuracy and volume. (Source: EY analysis, referenced in cost comparisons)

  • MIT / Brynjolfsson et al. studies (e.g., customer support, 2025): 15% average productivity increase (issues resolved/hour); up to 36% for lower-skilled workers. ChatGPT experiments: ~40% time reduction, 18% quality increase (larger gains for lower performers). Augmentation beats pure replacement for ROI. (Source: various field experiments, e.g., Fortune 500 support; professional services)

  • OECD – Macroeconomic Productivity Gains from AI (2024): Models AI as delivering cost savings/productivity boosts. Scenarios project 0.24–0.61 pp annual TFP growth over 10 years depending on adoption/exposure (lower than some optimistic Goldman Sachs estimates). Partial automation is often optimal due to scaling laws. (Source: OECD report PDF)

  • ResearchGate: "Human Labor Versus Artificial Intelligence – Total Cost of Ownership and Task-Suitability Framework" (2026): Proposes a framework: AI best for narrow/repetitive/high-volume/moderate-risk tasks; humans for others. Synthesizes total cost of ownership (TCO) including oversight, quality, and suitability. (Source: ResearchGate publication)

  • Goldman Sachs – AI Labor Market Impacts (various 2025–2026): ~25% of work hours automatable in the US; 300M global jobs exposed; base case of 6–7% displacement over 10 years. AI already trimming ~16K US jobs/month net in some estimates, but augmentation effects offset some losses. Focus on exposure vs. actual displacement. (Source: Goldman Sachs insights)

  • PwC Global AI Jobs Barometer (2025): AI-exposed sectors see 3x higher revenue-per-worker growth; wages rise faster in AI-exposed jobs. AI makes workers more valuable via productivity, not just substitution. (Source: PwC report)

  • arXiv: "Economics of Human and AI Collaboration" (2026): Partial automation is often the cost-minimizing equilibrium (an interior solution) due to diminishing returns in scaling AI. Full automation is rarely optimal; hybrids capture ~11% of exposed labor compensation in some models. (Source: arXiv paper)

  • Deloitte / Gartner / MIT references (various): Conversational AI handles 5–10x the volume; error rates are lower for AI on rule-based tasks. Highest ROI comes from augmentation/hybrid models (e.g., 40% greater than all-human or maximum-automation in one manufacturing case). (Source: aggregated in workforce decision articles)


Sunday, April 26, 2026

Where Enterprise Agentic AI Offers Highest Payback

The clearest enterprise agentic artificial intelligence payback usually comes from high-volume, repetitive workflows with low-to-moderate judgment, especially customer service triage and routine code review, because the unit economics improve sharply once fixed AI overhead is spread across many transactions. 


Conversely, the weakest payback tends to show up in low-volume or highly customized work, especially contract development or review that still needs heavy lawyer review, because human-in-the-loop labor, governance, and update costs can dominate the savings.


So customer service often offers fast payback; code review, a longer payback; and some low-volume sales proposals might never pay back at all.

source: The Architect 


As you would guess, the economics are most favorable for high-volume call center operations.


source: Sobot 


In such cases, AI payback can be high because:

  • volume is high

  • task structure is repeatable

  • the human agent’s time per interaction is roughly 10 times that of the AI agent.


On the other hand, volume really does matter for the payback. So does total cost of ownership.


In practice, that means a workflow can look attractive in a pilot and still disappoint at scale if review rates or adaptation costs are high.
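
The volume sensitivity described above can be put in arithmetic terms with a toy payback model: months to recoup the fixed deployment cost, given per-interaction savings, monthly volume, and a human-review fraction. Every number below is an illustrative assumption, not a figure from any cited source.

```python
# Toy payback model: months to recoup a fixed AI deployment cost.
# All inputs are illustrative assumptions.

def payback_months(fixed_cost, monthly_volume, human_cost_per_unit,
                   ai_cost_per_unit, review_fraction, review_cost_per_unit):
    """Months until cumulative net savings cover the fixed deployment cost.

    Returns float('inf') when net per-unit savings are zero or negative,
    i.e., the workflow never pays back.
    """
    net_saving_per_unit = (human_cost_per_unit - ai_cost_per_unit
                           - review_fraction * review_cost_per_unit)
    monthly_saving = net_saving_per_unit * monthly_volume
    return fixed_cost / monthly_saving if monthly_saving > 0 else float("inf")

# Same per-unit economics, different volumes: a high-volume call center
# (100k interactions/month) vs. a niche workflow (5k interactions/month).
high = payback_months(250_000, 100_000, 1.50, 0.10, 0.15, 1.50)
low = payback_months(250_000, 5_000, 1.50, 0.10, 0.15, 1.50)
print(f"High volume: {high:.1f} months; low volume: {low:.1f} months")
```

With identical per-unit savings, the high-volume case pays back in about two months while the low-volume case takes over three years, which is the amortization effect the table below summarizes. Raising the review fraction has the same stretching effect as cutting volume.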


Use case: typical volume profile; human review burden; payback; why

  • Customer service triage; support automation: Very high volume. Review burden: moderate (escalations still needed). Payback: strongest. High ticket volumes let AI offset labor quickly; reported cost per interaction can fall sharply, and hybrid models often show 3–9 month ROI windows.

  • Code review; pull request reviews: High volume in engineering orgs. Review burden: moderate (senior engineers still review critical issues). Payback: very strong. AI can eliminate trivial issues, compress review time, and return expensive developer time to feature work; reported payback is often 3–6 months for enterprise teams.

  • Contract review; clause extraction: Medium volume, but high value per document. Review burden: high (legal sign-off remains required). Payback: good, but more variable. AI is effective at first-pass screening and standard clause checks, but legal judgment and compliance review remain substantial, so savings depend on deal flow and standardization.

  • Contract development; drafting from scratch: Lower volume, bespoke. Review burden: very high. Payback: weakest. Drafting is more variable, more sensitive to nuance, and more likely to require iterative human correction, which erodes automation savings.

  • Low-volume customer support or niche workflows: Low volume. Review burden: moderate to high. Payback: weak to marginal. Fixed costs for setup, monitoring, and maintenance are hard to amortize, so payback stretches out unless the labor saved is unusually expensive.


Saturday, April 25, 2026

Google, AWS Investments in Anthropic are About Pre-Selling Compute Demand, Mostly

The recent wave of massive investments by Google and Amazon into Anthropic is easy to misread as a simple “bet on a promising AI startup.” 


The more important story is the way the investments allow each firm to defend its position in the AI computing-as-a-service market.


To be sure, Anthropic’s Claude models have become credible top-tier competitors to OpenAI and Google’s own models, Reuters reports. 


And Claude is already embedded in both Amazon’s Bedrock platform and Google Cloud’s Vertex AI, Bessemer Venture Partners notes.


So investing in Anthropic might be viewed as a way to own stakes in a high-growth “application layer” company getting traction with enterprise AI workloads.


True, up to a point.


The bigger story is securing demand for AI infrastructure:

  • Anthropic has committed $100B+ of spend on AWS over a decade (AP News)

  • It uses AWS as its primary training and deployment platform (Anthropic)

  • Claude is tightly integrated into Amazon Bedrock, driving enterprise usage (GeekWire)

  • Anthropic is training on Amazon’s custom chips (Trainium, Inferentia) (GeekWire)

  • Amazon is building massive data centers explicitly to support Claude workloads (GeekWire)


So the stakes are less about venture investing and more about locking in compute services demand from one of the largest AI compute customers in the world.


Google might be parrying an AWS thrust, aiming to prevent AWS from becoming the default supplier of compute services to Anthropic:

  • Up to $40B committed, tied to performance and partnership depth (Reuters)

  • Anthropic gets access to massive TPU compute capacity via Google Cloud (The Times of India)

  • Anthropic is a distribution channel (bring Claude customers onto Google Cloud)

  • A counterweight to AWS exclusivity

  • A hedge against its own model risk (Gemini may not win every workload)


So the strategy is fundamentally about “AI computing as a service.”

The key shift: AI is collapsing three layers into one integrated market:

  1. Models (Claude, GPT, Gemini)

  2. Infrastructure (GPUs, TPUs, custom chips)

  3. Cloud platforms (AWS, Google Cloud, Azure).


Whoever controls all three layers—or tightly couples them—wins. Or at the very least, the strategy is about “not losing.”


Anthropic could have gone all-in on a single cloud (AWS or Azure). So the equity investments by Google and AWS are at least partly aimed to ensure that does not happen. 


On the other hand, Anthropic likely wishes to avoid dependency on a single cloud services provider. 


So the equity investments are capacity pre-selling, and a means of preventing Anthropic from committing exclusively to a single rival cloud platform. Anthropic, in turn, secures independence from any single cloud platform.


As worrisome as “circular investment” might appear, it is useful for the firms who do it. 

source: Bloomberg
