Thursday, December 4, 2025

If You're Looking for "Black Swan Events" You'll Never Find Them

A true black swan isn't just unlikely; it's something that exists outside our prevailing model of reality.


Before Europeans encountered Australian swans, "black swan" was literally synonymous with impossibility. The shock wasn't statistical rarity but categorical impossibility suddenly becoming real. Taleb's point was precisely that we're blind to entire dimensions of risk because our frameworks exclude them from consideration.


When financial commentators compile lists of "potential black swans," they're performing an almost comedic self-contradiction. They're essentially saying: "Here are the things we can't anticipate that we're now anticipating."


This suggests the concept has been domesticated into meaning merely "really bad thing we hope won't happen."


The real black swans remain the ones we're not talking about, often because they involve assumptions so foundational we can't even articulate them.


And yet this “domestication” of ideas, theories, or principles happens all the time. We cannot "warn" about black swans because they are, by definition, unforeseeable.


But the watering down or domestication of any principle, theorem or idea seems to be irresistible. 


| Original Concept | Original Meaning | How It Gets Misapplied |
| --- | --- | --- |
| Occam's Razor | Among competing hypotheses with equal explanatory power, prefer the one with fewer assumptions | Used to dismiss complex explanations simply because they're complex, or to justify intellectual laziness ("the simplest answer is usually right") |
| Gaslighting | A systematic psychological manipulation tactic to make someone doubt their own sanity and perception of reality | Now applied to any disagreement, misremembering, or different perspective ("You're gaslighting me by saying that didn't happen!") |
| Kafka-esque | The nightmarish absurdity of faceless bureaucracy stripping individuals of agency and meaning | Used to describe any mildly frustrating paperwork or administrative delay |
| Orwellian | Totalitarian manipulation of language and reality to control thought itself | Applied to any government action someone dislikes, any surveillance, or even just opposing political views |
| Strawman Fallacy | Misrepresenting someone's actual argument to make it easier to attack | Now weaponized to shut down any paraphrasing or summary ("That's a strawman!") even when it accurately captures the position |
| Cognitive Dissonance | The psychological discomfort of holding contradictory beliefs simultaneously, which motivates attitude change | Used as a gotcha accusation meaning "you're being hypocritical" without the internal discomfort component |
| Dunning-Kruger Effect | The least competent people lack the metacognitive ability to recognize their incompetence | Simplified to "stupid people think they're smart" and used as a general-purpose insult, often by people demonstrating the effect themselves |
| Narcissism/Narcissistic Personality Disorder | A specific clinical personality disorder involving grandiosity, lack of empathy, and fragile self-esteem | Applied casually to anyone who seems selfish, takes selfies, or displays confidence |
| Correlation vs. Causation | Statistical correlation between variables doesn't prove one causes the other | Used reflexively to dismiss any suggested causal relationship, even well-supported ones, as if correlation can never suggest causation |
| Schrödinger's Cat | A thought experiment about quantum superposition and measurement problems in quantum mechanics | Misused to mean "we don't know the answer until we check" about any unknown situation |
| The Butterfly Effect | Sensitive dependence on initial conditions in chaotic systems | Watered down to "everything affects everything" or used to justify magical thinking about tiny actions having massive predetermined effects |
| Stockholm Syndrome | Psychological response where hostages develop positive feelings toward captors as a survival mechanism | Applied to any situation where someone defends an institution or person that others think is harming them |
| Heisenberg's Uncertainty Principle | Fundamental quantum limitation on simultaneously measuring position and momentum | Misapplied to mean "observing something changes it" in any context, or that all knowledge is inherently uncertain |
| Gaslighting (worth repeating) | Deliberate, systematic psychological abuse to make victims question reality | Reduced to mean "lying," "disagreeing," or "remembering differently" |
| Devil's Advocate | Formally arguing against a position to test its strength, even if you agree with it | Now means "let me say something offensive without consequences" or "I'm about to be contrarian for attention" |
| Virtue Signaling | Publicly expressing opinions to demonstrate moral superiority without genuine commitment | Extended to dismiss any public expression of values, making authentic moral discourse impossible |
| Paradigm Shift (Kuhn) | Fundamental transformation in scientific worldview that makes old and new frameworks incommensurable | Applied to any minor change in approach or trending topic ("a paradigm shift in coffee brewing") |
| Thought Experiment | Rigorous hypothetical scenarios designed to isolate variables and test philosophical principles | Used for any random "what if" speculation without intellectual rigor |
| Echo Chamber | Self-reinforcing information environments that completely exclude contrary views | Applied to any community of people who largely agree, even ones that regularly engage with outside perspectives |
| Moving the Goalposts | Changing standards of evidence after they've been met to avoid conceding a point | Invoked whenever someone refines or adds nuance to an argument during discussion |
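The butterfly-effect entry above, sensitive dependence on initial conditions, can be made concrete with a standard illustration (not from the original post): the logistic map in its chaotic regime, where two trajectories starting a billionth apart diverge to order-one separation within a few dozen steps.

```python
# Logistic map x_{n+1} = r * x * (1 - x) in its chaotic regime (r = 4).
# Two trajectories starting a billionth apart end up completely different:
# sensitive dependence on initial conditions, the actual content of the
# "butterfly effect" (not "everything affects everything").

def logistic_trajectory(x0: float, r: float = 4.0, steps: int = 50) -> list:
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-9)
gap = max(abs(x - y) for x, y in zip(a, b))
print(gap)  # separation grows to order 1, despite a 1e-9 starting difference
```

Note that the divergence is deterministic, not magical: tiny differences are amplified exponentially, which destroys predictability rather than guaranteeing any particular large effect.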

Wednesday, December 3, 2025

Maybe an AI Bubble Exists for Training, Not Inference

Not all compute is the same when it comes to artificial intelligence models, argues entrepreneur Dion Lim, especially regarding the massive levels of current investment, which some worry represent over-investment of bubble proportions.


Maybe not, he argues.


The first pool of investments is training compute: the massive clusters used to create new AI models. This is where the game of chicken is being played most aggressively by the leading contenders.


No lab has a principled way of deciding how much to spend; each is simply responding to intelligence about competitors’ commitments.


If your rival is spending twice as much, they might pull the future forward by a year.


The result is an arms race governed less by market demand than by competitive fear, with Nvidia sitting in the middle as the arms dealer.


To a great extent, he suggests, that is where the bubble danger exists.


Training the largest foundational AI models (like large language models) requires an extraordinary, one-time investment in specialized hardware (primarily high-end GPUs) to process huge datasets.

The second area of investment is inference compute: the use of AI models in production, serving actual users. Here, the dynamics look entirely different, he argues.


Inference is the phase where the trained model is actually used to generate predictions or responses for users (e.g., using a chatbot or running an AI image generator).

Investment in inference compute arguably is less prone to over-investment because it is tied more closely to actual, measurable customer demand and has a more flexible infrastructure. 

But that is a general rule that might not be true in all instances, some will argue.


Inference costs are ongoing operational expenses (Opex) that scale directly with usage (the number of user queries or requests).
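As a rough illustration of that Capex-versus-Opex distinction, a toy cost model can be sketched; all figures below are invented for illustration and are not drawn from Lim's argument:

```python
# Toy cost model contrasting one-time training capex with usage-scaled
# inference opex. All numbers here are illustrative assumptions,
# not actual industry figures.

TRAINING_CAPEX = 1_000_000_000          # one-time cluster build-out (USD), assumed
INFERENCE_COST_PER_1K_TOKENS = 0.002    # ongoing serving cost (USD), assumed

def inference_opex(queries: int, avg_tokens_per_query: int) -> float:
    """Inference spend scales linearly with actual usage."""
    total_tokens = queries * avg_tokens_per_query
    return total_tokens / 1000 * INFERENCE_COST_PER_1K_TOKENS

# Training cost is fixed whether or not demand shows up; inference cost
# tracks demand directly, which is why it is harder to over-build.
low_demand = inference_opex(queries=1_000_000, avg_tokens_per_query=500)
high_demand = inference_opex(queries=100_000_000, avg_tokens_per_query=500)
print(low_demand, high_demand)  # opex grows 100x with 100x the queries
```

The point of the sketch is structural, not numerical: capex is committed before demand is known, while opex is incurred only as demand materializes.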

Some will argue that the scale of inference operations will still dominate, over time. 


Still, the argument is that inference hardware has more flexibility than training hardware. Companies can often use a wider variety of chips, including older-generation GPUs, CPUs, or specialized, more cost-efficient accelerators (such as TPUs, ASICs, or FPGAs) optimized for running a fixed model.

As GPUs become commoditized and compute abundance arrives, inference capabilities will become the next major market, especially given growing demand for efficient agentic tools.


So LLM inference might not be a "bubble" in the sense that investment professionals worry about.


The companies that can deliver intelligence most efficiently, at the lowest cost per token or per decision, will capture disproportionate value, Lim argues.


Training the biggest model matters less now; running models efficiently at planetary scale matters more.


So the argument is that the magnitude of AI capex might, or will, produce some level of over-investment, which is to be expected when an important new technology emerges.

But this might differ fundamentally from the dot-com bubble, which was fueled primarily by advertising spend for firms and products that had yet to establish a revenue model. Of course, some claim neither has AI, for the time being.


Back then, companies burned cash on Super Bowl commercials to acquire customers they hoped to monetize later. That was speculative demand chasing speculative value.


Many would argue that AI already is producing measurable results for companies that do have revenue models and viable products used at scale.




Why People Talking Politics Cannot Communicate

As a first-semester undergraduate, I thought philosophy was the most useless of all subjects in my curriculum. As an adult, I now believe philosophy is the most important of all subjects.


Epistemology, the study of how we know what we know, shapes how people argue, what counts as evidence, and even whether dialogue is possible. It also explains why we are often talking past each other.


In other words, “political discussions,” which I now avoid, are not really about personalities or policies; they are grounded in different ways of ascertaining “truth.”


Arguments about difficult subjects such as abortion seem irreconcilable because the philosophic assumptions are different: 

  • “Life begins at conception” is a moral claim.

  • “Women should control their bodies” is an ethical-autonomy claim.

  • “A fetus feels pain at X weeks” is an empirical claim.


Debates on gender identity, race, or cultural identity illustrate epistemological divergences as well:

  • Subjective epistemology: identity is self-defined and validated by experience.

  • Biological epistemology: identity is rooted in physical or genetic reality.

  • Social constructivism: identity categories are created by society and mutable.


When someone says “I am what I say I am” versus “Identity is rooted in biology,” they are not just disagreeing; they are using different knowledge frameworks.


For me, the greatest issue is post-modernism, which asserts that there is no universal truth, as in “your truth versus my truth.” For those of us whose intellectual framework is still the Enlightenment, the biggest challenge lies precisely there.


Democracy, law, and social cooperation depend on some common epistemic ground. Grammatical rules for language, driving laws and what constitutes “crime” are examples. 


If truth is subjective, how do we arbitrate disputes? 


So different epistemologies make discourse difficult to impossible. It isn’t the existence of different answers, but different ways of determining answers. 


Epistemology is the hidden engine of public conflict.


| Epistemology | What counts as truth? | Authority sources | Typical expression in debates |
| --- | --- | --- | --- |
| Empirical/Scientific | Truth = what can be measured, tested, falsified | Science, data, statistics | “Show me the evidence.” |
| Rationalist/Philosophical | Truth = what follows logically from premises | Logic, argument consistency | “That argument contradicts itself.” |
| Moral/Religious | Truth = grounded in divine authority or natural law | Scripture, tradition, moral principles | “Life is sacred because…” |
| Personal/Subjective | Truth = lived experience and internal perception | Individual narratives and identity | “This is my truth.” |
| Postmodern | Truth = socially constructed power structures | Culture, discourse, ideology | “Truth claims serve power.” |

Monday, December 1, 2025

Enantiodromia

"I hope I die before I get old," Pete Townshend of the Who wrote in the song "My Generation." 

Of course, the anthem of youthful defiance became its opposite.

The youth who once said "don't trust anyone over 30" became the establishment: the leaders of media, academia, politics and culture.

Ironic. And an example of enantiodromia, something becoming its opposite.

AI User Experience Will Get Way Better, as Did Internet Experiences

One suspects the user experience of artificial intelligence will evolve as much as our experience of internet apps did: basic functionality that, over time, becomes really sophisticated.


| AI Evolution (Next 2 Yrs) | Internet Evolution (Past 20 Yrs) | Key Functional Change |
| --- | --- | --- |
| Tools → Companions | Web 1.0 → Web 2.0 (Read-Only to Social) | Focus shifted from delivering documents to enabling user-generated content (UGC) and two-way interaction. |
| Assistants → Agents | CGI Scripts → AJAX/SPA | Functionality moved from server-side, full-page reloads to client-side, asynchronous data processing, creating smooth, native-app-like experiences. |
| Chatbots → Personalities | Generic HTML Sites → Brand/Personal Platforms | Web experiences became highly customizable, responsive, and designed with specific User Experience (UX) patterns to elicit certain feelings/behaviors. |
| Models → Ecosystems | Isolated Sites → APIs/Cloud Computing | Applications began integrating services across platforms using APIs (Application Programming Interfaces), enabling collaboration and data sharing. |


We’ll move from using tools to having companions. AI will shift apps from acting as a transactional utility (a tool you use for a specific, one-off task) to becoming an interpersonal entity designed for ongoing engagement and emotional support.


As a corollary, observers also expect the AI to shift from generic to personal, where the chatbots, for example, have personas. 


Our use of AI assistants will shift toward agents that do not wait for explicit instructions but instead act as autonomous actors.


AI also will move from being a standalone, single-purpose program to a deeply integrated ecosystem that permeates all aspects of the user's digital life. In many instances, that might also mean the AI creates functionality “on the fly.” 


So users might not have to consciously choose an app to “do something,” but tell the AI what is desired and the functionality is produced on the spot, in real time. 


Of course, there are some likely limits. There are casual consumer uses and then different professional use cases that require more granular control. Apps are likely to remain better for the latter. 


For highly detailed or specialized tasks, such as creating a pixel-perfect logo, detailed CAD drawing, or performing precise color grading, the granular control offered by a dedicated app remains superior. 


Originality and Context: AI systems, by nature, train on existing data. Human designers and creators bring unique, cultural, and emotional context to their work that AI struggles to grasp. The "app" becomes the co-pilot that handles the tedious, repetitive tasks (like masking or code completion), freeing the human to focus on high-level creative and strategic decisions.



The New Interface: The "app" might not disappear; its interface is simply changing. Instead of being a canvas full of buttons and menus, the new interface is often a text box: a conversational AI agent that can be queried, similar to how one interacts with a chatbot.

| Feature | Traditional App (e.g., Photoshop) | AI-Driven Interaction (e.g., AI Generator) |
| --- | --- | --- |
| Input Method | Manual operation, clicking tools, adjusting settings. | Natural language prompt ("Add a realistic-looking alien doing a peace sign," "Change the lighting to sunset"). |
| User Skill Required | High; requires training and expertise. | Low; requires clarity in expressing the desired outcome. |
| Core Value | Provides a toolkit for maximum control and precision. | Provides a direct solution or output, prioritizing speed and accessibility. |
| Goal | Editing/creation process (you control how it's done). | Final output (the AI controls how it's done). |
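A minimal sketch of that interface shift, assuming a toy keyword matcher standing in for a real conversational agent; the commands and handlers below are invented for illustration:

```python
# Toy "text box as interface": map a natural-language-ish request to an
# operation, instead of exposing a canvas of buttons and menus.
# The phrases and responses are invented for illustration; a real agent
# would use a language model, not keyword matching.

def edit_image(request: str) -> str:
    request = request.lower()
    if "sunset" in request:
        return "applied warm sunset lighting"
    if "remove background" in request:
        return "background removed"
    # Granular, specialized work still falls back to the traditional toolkit.
    return "request not understood; falling back to manual tools"

print(edit_image("Change the lighting to sunset"))
# → applied warm sunset lighting
```

Even in this toy form, the trade-off in the table shows up: the user states the desired outcome, and control over *how* it is achieved moves from the user to the system.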

Can Netflix Become Disney Faster than Disney Can Become Netflix?

To a larger degree than might be immediately obvious, the new Netflix challenge might be whether “ Netflix can become Disney faster than Dis...