Monday, January 5, 2026

AI Changes Value Chains in Many Ways, as Did the Internet

What are the likely effects of generative artificial intelligence on industries over the next five to 10 years? For some of us, the answer is based on what happened when earlier forms of applied computing arrived:

  • The middle collapses

  • The top captures more value

  • The entry-level funnel narrows


Basically, at the industry level, benefits flow to the firms with scale, or the firms with clear specialties: “winner takes most” dynamics. And, as might be expected in any maturing market, it becomes harder for upstarts to challenge the established order. 


Think about media (including social media and search) after the internet, or retail after e-commerce. A few disruptors emerged and became dominant. Hence the question “do you want to compete directly against Google or Amazon?” 


AI might shift value chains as well, devaluing some forms of “effort” and creating new “platform” opportunities and forms of business leverage. Information intermediaries, occupying the new “middle” positions, should see their leverage decrease. 


Platforms and ecosystems should create new value, as has been the case for the internet economy before AI. And though our experience in computing in recent decades has been that infrastructure (hardware) becomes less important as a source of value, compared to software, that might shift a bit. 


Ownership or control of compute hardware (high-performance computing data centers; graphics processors and other accelerated compute chips; energy contracts) might confer new sources of “scarcity,” which produces value. 


At the firm level, there is a related process. Entities essentially stop “monetizing work” and start monetizing “judgment at scale.” That is a subtle shift, but if effort formerly led to output which then was monetized, in an AI era the role of judgment is magnified.


It is in many ways analogous to the difference between “being efficient” (doing things at lowest cost, with least effort) and “being effective” (doing the right things to create value and monetization). 


In other words, AI will collapse the cost of doing things. It might not alleviate the need to focus on what creates value and therefore monetization of effort. 


There are many implications. Value essentially moves from headcount and process prowess and resource utilization towards proprietary data ownership; brand trust or distribution control. When content or judgment is ubiquitous and plentiful, value accrues to platforms that can aggregate audiences. 


From (Declining Value) | To (Rising Value) | Why
Human effort | Human judgment | Effort is cheap; deciding what to do and whether it’s right remains scarce
Information access | Problem framing | Everyone has answers; few ask the right questions
Execution speed | Direction-setting | AI collapses time-to-execute; direction becomes the bottleneck
Labor hours | Outcome ownership | Buyers pay for results, not process
General skill | Taste and discernment | Average quality is automated; taste differentiates
Content creation | Distribution and trust | Supply explodes; attention and credibility don’t
Workflow labor | Workflow orchestration | Managing agents > doing tasks
Static expertise | Adaptive learning loops | Models change; ability to update matters more than knowledge stock
SaaS features | Embedded AI systems | Features commoditize; integrated systems compound value
Margins from inefficiency | Margins from scale and data | AI removes inefficiency arbitrage


Across all layers, winners share three traits:

  • Control of scarce assets (data, distribution, regulation, capital, relationships)

  • Judgment-heavy work that AI can assist but not own

  • Ability to repackage labor into products


Losers share three traits:

  • Work defined by execution speed, not decisions

  • Value based on information scarcity

  • Pricing tied directly to human hours


Dimension | People | Businesses | Industries
Primary impact | Cognitive leverage: individuals can do more with less time and training | Productivity, cost structure, and speed of execution change materially | Value chains reorganize around automation and recomposition
Who most-easily applies | High-agency individuals who know what to ask, judge outputs, and integrate results | Firms with repeatable knowledge work and clear processes | Knowledge-intensive, rules-based, and content-heavy industries
Who is most at risk | Average knowledge workers whose value was speed, recall, or routine analysis | Firms competing mainly on execution rather than insight, relationships, or assets | Industries built on scarcity of information, not assets or regulation
Skill shift | From “doing” → prompting, reviewing, synthesizing, and deciding | From staffing depth → orchestration of humans + AI | From labor intensity → capital + data + model access
Labor effects | Wage dispersion increases; top performers pull away | Fewer junior roles; flatter organizations | Employment shrinks in some roles, grows in others (AI ops, data, governance)
Cost structure | Lower cost to create, analyze, and communicate | Fixed costs fall; variable costs tied to compute and data | Marginal cost of output approaches zero in some segments
Speed of work | Drastically faster learning and iteration | Faster product cycles, decision loops, and experimentation | Shorter innovation cycles; faster competitive turnover
Barriers to entry | Lower: individuals can launch products or content solo | Lower for startups, higher for scale players with data | Lower at the front end, higher at scale and distribution
Quality distribution | Average quality rises; excellence still rare | “Good enough” becomes cheap; differentiation shifts | Commoditization at the middle; premium at the top
Power dynamics | More power to individuals with judgment and taste | More power to firms controlling data, workflows, and distribution | Platform and infrastructure providers gain outsized influence
Long-term trajectory | Humans focus on goals, values, and judgment | Firms reorganize around “AI-first” workflows | Industry boundaries blur; ecosystems replace linear chains

----------------------


Sunday, January 4, 2026

Some Small Business Owners Believe AI is Enabling "Do It Yourself" Alternatives That Cost Them Revenue

Small business owners believe AI is costing them business, and though that perception might be anecdotal and perhaps incorrect, it is what many believe. On the other hand, the potential sources of product substitution and the ability to “do it yourself” seem clear enough. 


As helpful as generative artificial intelligence might be for tasks (get an answer), it is far more useful when the output changes to “outcomes.” 


Last year, we might have used a language model to write a paragraph or conduct a search. Increasingly, we will tell an agent the desired outcome (such as "launch a localized marketing campaign for Japan") and the agent will execute the tasks autonomously.


In fact, the term “do it yourself” will acquire new meaning, as the “doing” often is done by the model, while the user simply specifies what is to be done.


So in a growing number of cases, the marginal cost of "expert labor" is approaching zero. For a $20 per month subscription, a user has the equivalent of a virtual legal team, a design studio, and a coding assistant.
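The near-zero marginal cost point can be illustrated with simple arithmetic. All figures below are hypothetical assumptions, not market data:

```python
# Hypothetical comparison of per-task cost: human expert vs. an AI
# subscription. All figures are illustrative assumptions, not market data.

HUMAN_HOURLY_RATE = 150.0      # assumed billing rate for expert labor (USD)
HOURS_PER_TASK = 2.0           # assumed hours a human needs per task
SUBSCRIPTION_COST = 20.0       # assumed monthly AI subscription (USD)
TASKS_PER_MONTH = 40           # assumed tasks completed with AI assistance

human_cost_per_task = HUMAN_HOURLY_RATE * HOURS_PER_TASK
ai_cost_per_task = SUBSCRIPTION_COST / TASKS_PER_MONTH

print(f"Human expert: ${human_cost_per_task:.2f} per task")    # $300.00
print(f"AI subscription: ${ai_cost_per_task:.2f} per task")    # $0.50

# As TASKS_PER_MONTH grows, the subscription's cost per task approaches
# zero, while the human cost per task is fixed.
```

The key point is structural: subscription pricing makes AI cost per task a declining function of usage, while human labor cost scales linearly with output.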


Existing Service/Product | AI Substitution (2026) | DIY Value Proposition
Boutique Creative Agency | Multimodal Generative Agents | 8K video, brand identities, ad copy.
Tax, Accounting Services | Autonomous Financial Agents | Real-time tracking of expenses and automated filing.
Junior Legal Counsel | Contract and Compliance LLMs | Drafting, risk analysis of complex legal documents.
Custom Software Dev Shops | No-Code Agent Orchestrators | "Describe" an app; AI writes the code, deploys.
Travel Agencies, Concierge | Hyper-Personalized AI Concierge | Agents book flights, hotels, and dinners.
Language Tutors, Translators | Real-time Neural Translation | Translation in any context.
Market Research Firms | Synthetic Audience Simulation | AI replaces surveys.

Does AI Change Wide Area Networking, and, if so, How?

It might be debatable how much changes in artificial intelligence compute workloads (training versus inference operations, for example) create new and fundamental requirements for wide area connectivity, beyond the already-existing requirement for low latency and high bandwidth.


We might, at a high level, argue that model training depends on high throughput, while inference is more dependent on low latency. Similarly, we might argue that model training relies on dense, locally-connected processors inside a single building, or in a cluster of buildings, while inference can be more-widely distributed. 


Feature | Training Architecture (e.g., NVIDIA B200) | Inference Architecture (e.g., Groq LPU, Apple NPU)
Priority | Maximum Throughput (Total FLOPs) | Minimum Latency (Time to First Token)
Precision | High (FP16, FP32, BFloat16) | Low (INT8, FP8, 1-bit Ternary)
Connectivity | Scale-Up (Dense Interconnects like NVLink) | Scale-Out (Distributed edge nodes/localized)
Bottleneck | Compute Bound (Math operations) | Memory Bound (Bandwidth & I/O)
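The contrast between the two traffic patterns can be made concrete with rough arithmetic: training traffic is dominated by bulk dataset transfer (throughput), while interactive inference is dominated by round-trip delay (latency). A minimal sketch, where all figures are illustrative assumptions:

```python
# Illustrative contrast between the two traffic patterns, using assumed
# numbers: training cares about bulk transfer time (throughput), while
# inference cares about round-trip delay (latency).

def bulk_transfer_seconds(dataset_gb: float, link_gbps: float) -> float:
    """Time to move a dataset over a link, ignoring protocol overhead."""
    return (dataset_gb * 8) / link_gbps

def first_token_ms(network_rtt_ms: float, model_compute_ms: float) -> float:
    """Naive time-to-first-token: one network round trip plus compute."""
    return network_rtt_ms + model_compute_ms

# Training: moving a hypothetical 100 TB dataset over a 100 Gbps WAN link.
train_seconds = bulk_transfer_seconds(dataset_gb=100_000, link_gbps=100)
print(f"Training transfer: {train_seconds / 3600:.1f} hours")

# Inference: with an assumed 50 ms compute budget, a 30 ms round trip
# yields a much snappier response than a 200 ms one -- latency, not
# bandwidth, is the lever.
print(f"Edge inference: {first_token_ms(30, 50):.0f} ms to first token")
print(f"Remote inference: {first_token_ms(200, 50):.0f} ms to first token")
```

For training, doubling link speed halves an hours-long transfer; for inference, no amount of bandwidth removes the propagation round trip, which is why placement (edge versus remote) matters more.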


The issue is how much connectivity decisions could be affected, aside from the overall emphasis on high bandwidth and low latency that already exists to support cloud and distributed computing. 


Some will argue that model training might always require a specialized architecture optimized for really-high bandwidth. Lumen Technologies, for example, has a vested interest in making such an argument. 


And, to be sure, the shift to inference ought to move architectures and requirements from "compute-centric" (focusing on raw math speed) to "data-centric" (focusing on moving data efficiently).


Still, it remains unclear how much the fundamental architecture, focused on both high bandwidth and low latency, could be affected. Already, some would note that memory becomes more important for inference operations. 


Overall, when inference is the driver, the "network" is no longer just a pipe for moving datasets; it becomes a live extension of the AI's memory and reasoning path. And while innovations inside data centers are coming (optical connections replacing electrical ones), it might be that the more-important wide area change is physical media, not architecture.


Hollow-core fiber networks, for example, are said to carry signals about 47 percent faster than glass-core fibers, reducing propagation latency. 
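That speedup claim can be sanity-checked from first principles: light travels at c/n in a medium, and the refractive index of standard glass fiber is roughly 1.46, versus roughly 1.0 for the air in a hollow core. A minimal sketch, using approximate index values:

```python
# Propagation-delay sketch: light travels at c/n in a medium. Standard
# glass fiber has a refractive index of roughly 1.46; a hollow (air) core
# is roughly 1.0. Indices are approximations used only to illustrate the
# cited speedup.

C_KM_PER_MS = 299_792.458 / 1000  # speed of light in vacuum, km per ms

def one_way_delay_ms(distance_km: float, refractive_index: float) -> float:
    """Propagation delay over a fiber span, ignoring equipment latency."""
    return distance_km * refractive_index / C_KM_PER_MS

distance = 1000  # km, e.g., a long-haul data-center interconnect
glass = one_way_delay_ms(distance, 1.46)
hollow = one_way_delay_ms(distance, 1.0)

print(f"Glass-core: {glass:.2f} ms one way")      # ~4.87 ms
print(f"Hollow-core: {hollow:.2f} ms one way")    # ~3.34 ms
print(f"Speedup: {(glass / hollow - 1) * 100:.0f}%")  # ~46%
```

The ratio of the two indices (about 1.46) lands within a point of the cited 47 percent figure, which is why hollow-core fiber is pitched as a latency play rather than a capacity play.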


So perhaps the most-important wide area networking change for interconnects between data centers is new physical media, rather than architectural changes as such.


Wednesday, December 31, 2025

AI Media Impact: More Bifurcation of High and Low; Automated and Scarce Human Content

As someone who worked for 40 years in ad-supported media, the realities of today’s business are brutal, and that was true before generative artificial intelligence, which is accelerating the underlying economic trends. 


In a nutshell, here’s the business model problem: many media businesses are no longer primarily “storytelling organizations” but traffic monetization systems. As writers, we act as storytellers, but whether that storytelling can be monetized is often unclear. 


As advertising “cost per thousand impressions” rates have collapsed over the last few decades, so have media entity revenues, bringing huge cost pressures to the forefront. 


Platforms such as Google, Meta and X have captured distribution and pricing power, driving many formerly independent or smaller entities out of the market. 


When revenue per article is less than the cost of the human labor needed to produce that content, automation becomes a survival move. 


Machines can often produce content abundance, especially when the content has high structure. Humans ideally produce scarcity value when lots of insight, interpretation or “meaning” are required. But generative AI is making inroads there, as well. 


The economics of content production therefore favor using machines to produce mass content at scale, when possible, while humans have the edge only where scarce, specialized or trust-critical content is involved. 


Basically, it is all about marginal cost and associated revenue upside. Investigative reporting might, in some cases, have very-high revenue potential, though it is rare. Original analyses have high production cost, and might have moderate revenue lift. 


Breaking news, data-driven news or automated summaries invariably have low revenue upside. So the choices are fairly simple: automate what does not produce reasonable amounts of revenue, reserving human roles for the more-complicated content that will be a relatively-small part of total content production. 


Content Type | Marginal Cost | Revenue Potential | Economic Role
Investigative reporting | Very high | High but rare | Brand anchor
Original analysis | High | Medium | Subscriber retention
Breaking news rewrite | Medium | Low | Traffic defense
Data-driven updates | Near zero | Low | Volume filler
Automated summaries | ~zero | Very low | Search capture
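The economics above reduce to a simple decision rule: keep humans only where expected revenue per piece exceeds the cost of human production. A minimal sketch, with hypothetical per-article figures:

```python
# Hypothetical per-article economics: automate when expected revenue per
# piece cannot cover human production cost. All figures are illustrative.

CONTENT_TYPES = {
    # name: (human_cost_usd, expected_revenue_usd)
    "investigative": (20_000, 30_000),
    "original_analysis": (2_000, 2_500),
    "breaking_rewrite": (300, 150),
    "data_update": (100, 40),
    "auto_summary": (50, 10),
}

def production_mode(human_cost: float, revenue: float) -> str:
    """Keep humans only where revenue exceeds the cost of human labor."""
    return "human" if revenue > human_cost else "automate"

for name, (cost, revenue) in CONTENT_TYPES.items():
    print(f"{name}: {production_mode(cost, revenue)}")
```

Under these assumed numbers, only investigative work and original analysis stay human-produced; the high-structure, low-upside categories get automated, matching the table's "economic role" column.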


Most user-generated content business models follow a somewhat-similar, but “flipped” pattern. UGC platforms follow the same economic logic (abundance versus scarcity, automation versus humans), but the roles of humans, machines, and “content” are different. 


Abundant, low-value content is automated, while scarce, high-value content is human-produced. 


But users supply the labor for abundance, instead of journalists. The platforms automate selection, amplification, and monetization, so scarcity comes from attention, status, and trust. 


That flows from the differences in media models. Media companies pay to produce content and then monetize the content. 


UGC platforms get their content for free and monetize behavior around that content. In other words, the real product is engagement, not content. So curation is more important than content supply, which is, for all intents and purposes, unlimited. Attention is the source of scarcity. 


Generative AI accelerates content creation, making content even more abundant and cheaper to produce. But the algorithms still curate. So UGC platforms will optimize for watch time, shares, comments or return visits. 


So algorithms will favor emotionally activating, identity-affirming or controversial content. That might be likened to “commodity” news: the sort of content that, in a professional media context, is structured enough to be automated. 
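That curation logic can be sketched as a toy engagement ranker. The signals and weights below are hypothetical illustrations, not any platform's actual formula:

```python
# Toy engagement ranker illustrating the curation logic described above.
# Weights and signals are hypothetical, not any platform's real formula.

WEIGHTS = {"watch_time": 0.4, "shares": 0.3, "comments": 0.2, "returns": 0.1}

def engagement_score(post: dict) -> float:
    """Weighted sum of normalized engagement signals (each in 0..1)."""
    return sum(WEIGHTS[k] * post.get(k, 0.0) for k in WEIGHTS)

posts = [
    {"id": "calm_explainer", "watch_time": 0.5, "shares": 0.2,
     "comments": 0.1, "returns": 0.4},
    {"id": "controversial_take", "watch_time": 0.7, "shares": 0.8,
     "comments": 0.9, "returns": 0.3},
]

# The feed surfaces whatever maximizes the weighted engagement score.
ranked = sorted(posts, key=engagement_score, reverse=True)
print([p["id"] for p in ranked])
```

Because emotionally activating content reliably scores higher on shares and comments, it wins the ranking even when calmer content retains viewers, which is the dynamic the post describes.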


Top UGC creators, in terms of revenue potential, often provide insights on what matters, what to ignore or how to think about something. They often focus on meaning, which is the same “scarce human” function in professional media.


Ironically, AI increases competition for attention, but raises the premium on human scarcity. That happened with journalism after the internet; music after streaming or photography after smartphones


Abundance also tends to make authentic human insight more valuable, not less, even if it remains rare, as surfaced by the algorithms. 

