Thursday, December 4, 2025

If AI Development Continues at Current Pace, What Changes in 5-10 Years?

If artificial intelligence development continues at its current pace (neither slowing nor accelerating), life could look significantly different in five to 10 years. Some predict many of these changes will arrive by 2030; others expect it will take a full decade for most of them to develop as expected.


Among the more important changes are those related to work and productivity, with implications for government spending on universal basic income, who works, and why.


Global gross domestic product could be 15 percent to 30 percent higher than baseline forecasts, based solely on automation of knowledge work.


That could well mean 20 percent to 40 percent of current jobs are heavily transformed or eliminated, while new jobs emerge in AI orchestration, data curation, human-AI interaction design, and robot maintenance.


We face the danger of a “K-shaped” economic impact, where massive gains accrue to AI owners, capital owners, and the top five percent of people who direct AI development, while many others find themselves out of work because of automation.


So universal basic income or similar policies will become mainstream political topics in most developed countries. If people are not needed to do “work,” how do they sustain themselves?


Area: Personal Assistants
Daily life in 2030: Every person has a highly personalized AI agent (like a supercharged Grok/Siri) that knows your entire digital life, anticipates needs, books everything, manages finances, reminds you to call your mom, and negotiates bills automatically.
How work gets done in 2030: 70-90% of knowledge-work tasks (emails, scheduling, research, basic coding, writing, slide decks) are handled by AI agents with human oversight only for final sign-off.
Industries that benefit most (and why): Software development, legal, marketing, consulting, education.

Area: Transportation
Daily life in 2030: Most new cars sold are Level 4 autonomous. Robotaxis dominant in cities (Waymo/Uber-like fleets 10-20× larger). Commutes drop 30-50% in time; people work/read/sleep in cars. Traffic deaths plunge.
How work gets done in 2030: Delivery and logistics almost entirely autonomous (drones + robot trucks). Human truck drivers and delivery couriers largely obsolete.
Industries that benefit most (and why): Autonomous vehicles, logistics (Amazon, UPS), insurance (far fewer claims).

Area: Healthcare
Daily life in 2030: AI wears you (continuous monitoring via wearables/implants). Your AI doctor catches cancer years earlier, adjusts your meds in real time, and designs personalized treatment plans. Doctor visits mostly for procedures.
How work gets done in 2030: Radiologists, pathologists, and many GPs shift to oversight roles. Drug discovery cycle drops from 10 years to <18 months. Clinical trial matching automatic.
Industries that benefit most (and why): Pharmaceuticals, medical devices, health insurance (prevention focus).

Area: Education
Daily life in 2030: Every student has an infinitely patient AI tutor tailored to their learning style. Top 1% human teachers oversee 1,000+ students via AI orchestration. Dropout rates collapse; mastery-based progression standard.
How work gets done in 2030: Teachers become “learning experience designers” and mentors. Corporate training almost entirely AI-driven.
Industries that benefit most (and why): EdTech, corporate L&D, tutoring industries disappear into AI platforms.

Area: Creative Industries
Daily life in 2030: Text, images, music, video, and code generated on demand at near-human quality. Most marketing copy, social media content, and stock photography created by AI. Hollywood uses AI for pre-vis, VFX, and even full animated features.
How work gets done in 2030: Human creatives shift to directing AI, curating, and adding the final 10% “soul.” Mid-tier writers/artists struggle; superstars + AI directors thrive.
Industries that benefit most (and why): Gaming, advertising, entertainment, architecture (AI-generated designs).

Area: Manufacturing & Retail
Daily life in 2030: Lights-out factories common. 3D-printed custom goods on demand. Most retail shifts to “showroom + same-day local print/delivery.”
How work gets done in 2030: Blue-collar supervision roles shrink; technicians who fix/maintain robots rise.
Industries that benefit most (and why): Advanced manufacturing, custom consumer goods, defense.

Area: Finance & Law
Daily life in 2030: AI agents trade stocks, detect fraud, write basic contracts, and do discovery. 80% of paralegal and junior analyst work automated.
How work gets done in 2030: Senior partners and quants become AI orchestrators. High-frequency trading 100% AI.
Industries that benefit most (and why): FinTech, crypto/DeFi, legal tech.

Area: Energy & Environment
Daily life in 2030: AI optimizes grids in real time, predicts renewable output, and designs better batteries/solar panels. Massive acceleration in fusion and carbon-capture research.
How work gets done in 2030: Energy trading and grid operations fully autonomous.
Industries that benefit most (and why): Clean energy, nuclear fusion startups, carbon removal.

Area: Government & Military
Daily life in 2030: Bureaucracy heavily automated (permits, tax filing, welfare distribution). Military drones and cyber operations almost entirely AI-driven.
How work gets done in 2030: Large reduction in middle-management civil servants.
Industries that benefit most (and why): Defense contractors, gov-tech.


If You're Looking for "Black Swan Events" You'll Never Find Them

A true black swan isn't just unlikely; it's something that exists outside our prevailing model of reality.


Before Europeans encountered Australian swans, "black swan" was literally synonymous with impossibility. The shock wasn't statistical rarity but categorical impossibility suddenly becoming real. Taleb's point was precisely that we're blind to entire dimensions of risk because our frameworks exclude them from consideration.


When financial commentators compile lists of "potential black swans," they're performing an almost comedic self-contradiction. They're essentially saying: "Here are the things we can't anticipate that we're now anticipating."


This suggests the concept has been domesticated into meaning merely "really bad thing we hope won't happen."


The real black swans remain the ones we're not talking about, often because they involve assumptions so foundational we can't even articulate them.


And yet this “domestication” of ideas, theories, or principles happens all the time. We can’t “warn” about black swans because they are, by definition, unforeseeable.


But the watering down or domestication of any principle, theorem or idea seems to be irresistible. 


Concept: Occam's Razor
Original meaning: Among competing hypotheses with equal explanatory power, prefer the one with fewer assumptions.
How it gets misapplied: Used to dismiss complex explanations simply because they're complex, or to justify intellectual laziness ("the simplest answer is usually right").

Concept: Gaslighting
Original meaning: A systematic psychological manipulation tactic to make someone doubt their own sanity and perception of reality.
How it gets misapplied: Now applied to any disagreement, misremembering, or different perspective ("You're gaslighting me by saying that didn't happen!").

Concept: Kafka-esque
Original meaning: The nightmarish absurdity of faceless bureaucracy stripping individuals of agency and meaning.
How it gets misapplied: Used to describe any mildly frustrating paperwork or administrative delay.

Concept: Orwellian
Original meaning: Totalitarian manipulation of language and reality to control thought itself.
How it gets misapplied: Applied to any government action someone dislikes, any surveillance, or even just opposing political views.

Concept: Strawman Fallacy
Original meaning: Misrepresenting someone's actual argument to make it easier to attack.
How it gets misapplied: Now weaponized to shut down any paraphrasing or summary ("That's a strawman!") even when it accurately captures the position.

Concept: Cognitive Dissonance
Original meaning: The psychological discomfort of holding contradictory beliefs simultaneously, which motivates attitude change.
How it gets misapplied: Used as a gotcha accusation meaning "you're being hypocritical" without the internal discomfort component.

Concept: Dunning-Kruger Effect
Original meaning: The least competent people lack the metacognitive ability to recognize their incompetence.
How it gets misapplied: Simplified to "stupid people think they're smart" and used as a general-purpose insult, often by people demonstrating the effect themselves.

Concept: Narcissism/Narcissistic Personality Disorder
Original meaning: A specific clinical personality disorder involving grandiosity, lack of empathy, and fragile self-esteem.
How it gets misapplied: Applied casually to anyone who seems selfish, takes selfies, or displays confidence.

Concept: Correlation vs. Causation
Original meaning: Statistical correlation between variables doesn't prove one causes the other.
How it gets misapplied: Used reflexively to dismiss any suggested causal relationship, even well-supported ones, as if correlation can never suggest causation.

Concept: Schrödinger's Cat
Original meaning: A thought experiment about quantum superposition and measurement problems in quantum mechanics.
How it gets misapplied: Misused to mean "we don't know the answer until we check" about any unknown situation.

Concept: The Butterfly Effect
Original meaning: Sensitive dependence on initial conditions in chaotic systems.
How it gets misapplied: Watered down to "everything affects everything" or used to justify magical thinking about tiny actions having massive predetermined effects.

Concept: Stockholm Syndrome
Original meaning: Psychological response where hostages develop positive feelings toward captors as a survival mechanism.
How it gets misapplied: Applied to any situation where someone defends an institution or person that others think is harming them.

Concept: Heisenberg's Uncertainty Principle
Original meaning: Fundamental quantum limitation on simultaneously measuring position and momentum.
How it gets misapplied: Misapplied to mean "observing something changes it" in any context, or that all knowledge is inherently uncertain.

Concept: Gaslighting (worth repeating)
Original meaning: Deliberate, systematic psychological abuse to make victims question reality.
How it gets misapplied: Reduced to mean "lying," "disagreeing," or "remembering differently."

Concept: Devil's Advocate
Original meaning: Formally arguing against a position to test its strength, even if you agree with it.
How it gets misapplied: Now means "let me say something offensive without consequences" or "I'm about to be contrarian for attention."

Concept: Virtue Signaling
Original meaning: Publicly expressing opinions to demonstrate moral superiority without genuine commitment.
How it gets misapplied: Extended to dismiss any public expression of values, making authentic moral discourse impossible.

Concept: Paradigm Shift (Kuhn)
Original meaning: Fundamental transformation in scientific worldview that makes old and new frameworks incommensurable.
How it gets misapplied: Applied to any minor change in approach or trending topic ("a paradigm shift in coffee brewing").

Concept: Thought Experiment
Original meaning: Rigorous hypothetical scenarios designed to isolate variables and test philosophical principles.
How it gets misapplied: Used for any random "what if" speculation without intellectual rigor.

Concept: Echo Chamber
Original meaning: Self-reinforcing information environments that completely exclude contrary views.
How it gets misapplied: Applied to any community of people who largely agree, even ones that regularly engage with outside perspectives.

Concept: Moving the Goalposts
Original meaning: Changing standards of evidence after they've been met to avoid conceding a point.
How it gets misapplied: Invoked whenever someone refines or adds nuance to an argument during discussion.

Wednesday, December 3, 2025

Maybe an AI Bubble Exists for Training, Not Inference

Not all compute is the same when it comes to artificial intelligence models, argues entrepreneur Dion Lim, especially regarding the massive levels of current investment, which some worry amount to over-investment of bubble proportions.


Maybe not, he argues.


The first pool of investment is training compute: the massive clusters used to create new AI models. This is where the game of chicken is being played most aggressively by the leading contenders.


No lab has a principled way of deciding how much to spend; each is simply responding to intelligence about competitors’ commitments.


If your rival is spending twice as much, they might pull the future forward by a year.


The result is an arms race governed less by market demand than by competitive fear, with Nvidia sitting in the middle as the arms dealer.


To a great extent, he suggests, that is where the bubble danger exists.


Training the largest foundational AI models (like large language models) requires an extraordinary, one-time investment in specialized hardware (primarily high-end GPUs) to process huge datasets.

The second area of investment is inference compute, the use of AI models in production, serving actual users. Here, the dynamics look entirely different, he argues.


Inference is the phase where the trained model is actually used to generate predictions or responses for users (for example, using a chatbot or running an AI image generator).

Investment in inference compute arguably is less prone to over-investment because it is tied more closely to actual, measurable customer demand and has a more flexible infrastructure. 

But that is a general rule that might not be true in all instances, some will argue.


Inference costs are ongoing operational expenses (Opex) that scale directly with usage (the number of user queries or requests).
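As a rough illustration of that distinction, the sketch below compares a one-time training outlay against usage-driven inference spending. All figures (cluster cost, price per million tokens, tokens per query, query volume) are hypothetical placeholders chosen for the example, not estimates from Lim's argument.

# Back-of-envelope comparison of training capex vs. inference opex.
# Every number below is an illustrative assumption, not a real figure.

TRAINING_CAPEX = 2_000_000_000       # assumed one-time cluster and training-run cost (USD)
COST_PER_MILLION_TOKENS = 0.50       # assumed blended inference price (USD)
TOKENS_PER_QUERY = 1_500             # assumed average prompt + response length
QUERIES_PER_DAY = 50_000_000         # assumed daily usage

def annual_inference_opex(queries_per_day: float) -> float:
    """Inference spending scales linearly with usage, unlike one-time training capex."""
    tokens_per_year = queries_per_day * TOKENS_PER_QUERY * 365
    return tokens_per_year / 1_000_000 * COST_PER_MILLION_TOKENS

if __name__ == "__main__":
    print(f"One-time training capex: ${TRAINING_CAPEX:,.0f}")
    print(f"Annual inference opex:   ${annual_inference_opex(QUERIES_PER_DAY):,.0f}")
    # Doubling usage doubles the opex; the training capex does not change.
    print(f"Opex at 2x usage:        ${annual_inference_opex(2 * QUERIES_PER_DAY):,.0f}")

The point of the sketch is simply that inference spending tracks query volume, so it can be throttled or expanded with demand, whereas the training outlay is committed up front regardless of how much the resulting model is used.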

Some will argue that the scale of inference operations will still dominate, over time. 


Still, the argument is that inference hardware has more flexibility than training hardware. Companies can often use a wider variety of chips, including older-generation GPUs, CPUs, or specialized, more cost-efficient accelerators (such as TPUs, ASICs, or FPGAs) optimized for running a fixed, already-trained model.

As GPUs become commoditized and compute abundance arrives, inference capabilities will become the next major market, especially given growing demand for efficient agentic tools.


So LLM inference might not be a "bubble" in the sense that investment professionals worry about.


The companies that can deliver intelligence most efficiently, at the lowest cost per token or per decision, will capture disproportionate value, Lim argues.


Training the biggest model matters less now; running models efficiently at planetary scale matters more.


So the argument is that the magnitude of AI capex might well produce some level of over-investment, which is to be expected when an important new technology emerges.

But this might differ fundamentally from the dot-com bubble, which was fueled primarily by advertising spend for firms and products that had yet to establish a revenue model. Of course, some claim AI has yet to do so either, for the time being.


Back then, companies burned cash on Super Bowl commercials to acquire customers they hoped to monetize later. That was speculative demand chasing speculative value.


Many would argue that AI already is producing measurable results for companies that do have revenue models and viable products used at scale.




Where is Generative AI Being Used Most?

Generative AI usage at the moment centers overwhelmingly on information work such as creating, processing, and communicating information, a...