Wednesday, December 31, 2025

AI Media Impact: More Bifurcation of High and Low; Automated and Scarce Human Content

As someone who worked for 40 years in ad-supported media, I can say the realities of today’s business are brutal, and that was true before generative artificial intelligence, which is accelerating the underlying economic trends. 


In a nutshell, here’s the business model problem: many media businesses are no longer primarily “storytelling organizations” but traffic monetization systems. Writers still act as storytellers, but whether that storytelling can be monetized is often unclear or unworkable. 


As advertising “cost per thousand” (CPM) rates have collapsed over the last few decades, so have media revenues, pushing huge cost pressures to the forefront. 


Platforms such as Google, Meta and X have captured distribution and pricing power, driving many formerly independent or smaller entities out of the market. 


In many cases, revenue per article is now less than the cost of the human labor required to produce it. So automation becomes a survival move. 


Machines can often produce content abundance, especially when the content is highly structured. Humans ideally produce scarcity value where insight, interpretation or “meaning” is required. But generative AI is making inroads there, as well. 


The economics of content production therefore favor using machines to produce mass content at scale, when possible, while humans have the edge only where scarce, specialized or trust-critical content is involved. 


Basically, it is all about marginal cost and associated revenue upside. Investigative reporting might, in some cases, have very-high revenue potential, though it is rare. Original analyses have high production cost, and might have moderate revenue lift. 


Breaking news, data-driven news or automated summaries invariably have low revenue upside. So the choices are fairly simple: automate what does not produce reasonable amounts of revenue, reserving human roles for the more-complicated content that will be a relatively-small part of total content production. 


Content Type | Marginal Cost | Revenue Potential | Economic Role
Investigative reporting | Very high | High but rare | Brand anchor
Original analysis | High | Medium | Subscriber retention
Breaking news rewrite | Medium | Low | Traffic defense
Data-driven updates | Near zero | Low | Volume filler
Automated summaries | ~zero | Very low | Search capture
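The allocation logic above can be sketched as a simple threshold rule. The cost and revenue scores below are hypothetical, for illustration only; they loosely follow the table, not real data:

```python
# Hypothetical cost/revenue scores (0-10 scale) for each content type.
# These are illustrative assumptions, not measured economics.
CONTENT_ECONOMICS = {
    "investigative reporting": {"marginal_cost": 9, "revenue_potential": 8},
    "original analysis":       {"marginal_cost": 7, "revenue_potential": 5},
    "breaking news rewrite":   {"marginal_cost": 4, "revenue_potential": 2},
    "data-driven updates":     {"marginal_cost": 1, "revenue_potential": 2},
    "automated summaries":     {"marginal_cost": 0, "revenue_potential": 1},
}

def production_mode(content_type: str, revenue_floor: int = 4) -> str:
    """Automate anything whose revenue upside falls below a floor;
    reserve human labor for scarce, high-value content."""
    econ = CONTENT_ECONOMICS[content_type]
    return "human" if econ["revenue_potential"] >= revenue_floor else "automate"

for content_type in CONTENT_ECONOMICS:
    print(f"{content_type}: {production_mode(content_type)}")
```

The `revenue_floor` parameter is the editorial judgment call: where a publisher sets it determines how much of the content mix remains human-produced.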


Most user-generated content business models follow a somewhat-similar, but “flipped” pattern. UGC platforms follow the same economic logic (abundance versus scarcity, automation versus humans), but the roles of humans, machines, and “content” are different. 


Abundant, low-value content is automated, while scarce, high-value content is human-produced. 


But users supply the labor for abundance, instead of journalists. The platforms automate selection, amplification, and monetization, so scarcity comes from attention, status, and trust. 


That flows from the differences in media models. Media companies pay to produce content and then monetize the content. 


UGC platforms get their content for free and monetize behavior around that content. In other words, the real product is engagement, not content. So curation is more important than content supply, which is, for all intents and purposes, unlimited. Attention is the source of scarcity. 


Generative AI accelerates content creation, making content even more abundant and lower cost. But the algorithms still curate. So UGC platforms will optimize for watch time, shares, comments or return visits. 


So algorithms will favor emotionally activating, identity-affirming or controversial content. That might be likened to “commodity” news, the sort of content that, in a professional media context, is structured enough to be automated. 


Top UGC creators, in terms of revenue potential, often provide insights on what matters, what to ignore or how to think about something. They often focus on meaning, which is the same “scarce human” function in professional media.


Ironically, AI increases competition for attention but raises the premium on human scarcity. That happened with journalism after the internet, music after streaming and photography after smartphones. 


Abundance also tends to make authentic human insight more valuable, not less, even if it remains rare, as surfaced by the algorithms. 


Tuesday, December 30, 2025

Are Neoclouds a Lasting Part of the AI Compute Value Chain?

One logical question to be asked about the neocloud segment of the artificial intelligence compute value chain is how sustainable the role might become, over time, as some amount of consolidation occurs. 


History suggests a new and sustainable role in the AI computing value chain could emerge. 


Hardware and platform layers tend to consolidate first, which might suggest to some that neocloud service providers (as infrastructure) could consolidate and eventually be absorbed by hyperscalers.


Middleware and applications, on the other hand, repeatedly re-fragment (databases, runtimes, ML frameworks). 


But there also is an argument to be made that intermediation layers survive. Just as time-sharing bureaus, value-added resellers and managed service providers emerged as sustainable niches in earlier eras, neoclouds could emerge as a permanent part of the value chain, providing customers (hyperscalers, for example):

  • Price discovery

  • Flexibility (financial and operational)

  • Vendor neutrality.


Hyperscalers dominate integrated platforms, but merchant compute and specialized capacity might be sustainable positions in the value chain. 


In every computing era, the dominant platform provider tries to absorb adjacent layers. But a neutral or merchant layer re-emerges when:

  • Utilization is volatile

  • Customers resist lock-in

  • Economics differ by workload


That pattern strongly suggests neocloud is not an anomaly, even if there are business reasons the hyperscalers providing “AI compute as a service” might prefer the role be limited.


For starters, to the extent there are supply constraints for graphics processing units, neoclouds compete for that supply, and reduce hyperscaler leverage over chip vendors. 


Neoclouds also can expose:

  • High gross margins on certain workloads

  • Cross-subsidies inside hyperscaler pricing

  • Arbitrage opportunities hyperscalers don’t want visible


For the hyperscalers, the absence of neoclouds strengthens the “buy from us; there is no alternative” positioning. Without neocloud alternatives, customers have fewer opportunities to ask “why is this cheaper elsewhere?”


So there will be some logic for hyperscalers to absorb, starve, or outflank neoclouds. 


On the other hand, there are structural reasons an independent neocloud role persists. Hyperscalers are bad at merchant compute, one might argue. 


Hyperscalers prefer:

  • Platform lock-in

  • Long-lived customer relationships

  • Bundled services

  • Predictable utilization. 


They are not optimized for, or do not prefer:

  • Bursty, price-sensitive workloads

  • Short-term GPU leasing

  • Single-workload economics

  • Pricing experimentation. 


Even if hyperscalers can do neocloud-style offerings, they often won’t, because doing so:

  • cannibalizes higher-margin SKUs

  • disrupts enterprise sales narratives

  • complicates investor messaging

  • introduces volatile revenue sources.


On the demand side, customers (including the hyperscalers themselves) want a neutral compute layer that supports multi-cloud capabilities, without a “platform” agenda. Cost and balance sheet advantages (moving capex to opex) also exist. 
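The capex-to-opex tradeoff can be illustrated with a back-of-envelope comparison. All figures below are hypothetical assumptions for illustration, not market prices:

```python
# Hypothetical figures: compare owning GPU capacity (capex) with leasing
# equivalent capacity from a neocloud (opex). Not real market data.
PURCHASE_PRICE = 30_000        # per GPU, upfront (assumed)
USEFUL_LIFE_YEARS = 4          # straight-line depreciation period (assumed)
ANNUAL_OPEX_OWNED = 4_000      # power, cooling, ops per GPU per year (assumed)
LEASE_RATE_PER_HOUR = 2.00     # neocloud hourly rate per GPU (assumed)
HOURS_PER_YEAR = 8_760

def annual_cost_owned(utilization: float) -> float:
    """Owned cost is fixed: depreciation plus operating cost accrue
    whether or not the capacity is actually used."""
    return PURCHASE_PRICE / USEFUL_LIFE_YEARS + ANNUAL_OPEX_OWNED

def annual_cost_leased(utilization: float) -> float:
    """Leased cost scales with utilization: pay only for hours used."""
    return LEASE_RATE_PER_HOUR * HOURS_PER_YEAR * utilization

for u in (0.2, 0.5, 0.9):
    owned, leased = annual_cost_owned(u), annual_cost_leased(u)
    better = "lease" if leased < owned else "own"
    print(f"utilization {u:.0%}: owned ${owned:,.0f}, leased ${leased:,.0f} -> {better}")
```

Under these assumed numbers, leasing wins at low or volatile utilization while ownership wins at sustained high utilization, which is exactly why bursty, price-sensitive workloads gravitate to a merchant compute layer.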


Neoclouds might also offer faster access to new silicon and more flexible or negotiable terms. 


In terms of value chain positioning, the hyperscalers will control the integrated platforms. The value for their customers will include convenience, integration and trust. 


The neoclouds, on the other hand, as a merchant compute layer, will provide capacity arbitrage, specialized hardware and price-performance leadership. The value is raw compute, predictable economics and speed to deployment.


Era | Hardware, Infrastructure | Systems, Platform Layer | Operating System | Middleware, Runtime | Applications | Services, Intermediation
Mainframe (1960s–1970s) | Vertically integrated mainframes (IBM-dominated) | Proprietary system architectures | Proprietary (IBM OS/360, etc.) | Embedded in OS | Enterprise custom apps | Systems integrators, time-sharing bureaus
Minicomputer (1970s–1980s) | DEC, HP, Data General | Vendor-specific platforms | UNIX variants, VMS | Early databases, transaction monitors | Departmental apps | VARs, integrators
Client–Server (1980s–1990s) | Commodity servers (x86) | Wintel standard | Windows, UNIX | Databases (Oracle), app servers | Enterprise packaged software | Hosting, VARs, IT outsourcers
Early Cloud (2000s–2010s) | Hyperscale data centers | Virtualized compute platforms | Linux | Cloud middleware, containers | SaaS | MSPs, CDNs, colocation
Mature Cloud (2015–2022) | Hyperscalers dominate scale | IaaS/PaaS platforms | Linux | Kubernetes, managed databases | Cloud-native SaaS | MSPs, FinOps, cloud brokers
Emerging AI Era (2023– ) | Accelerators (GPUs, TPUs, ASICs); power & data centers | Hyperscale AI platforms + neocloud capacity | Linux | ML frameworks, inference runtimes | AI-native apps, copilots | Neoclouds, AI infra brokers, model hosts


So there is reason to believe that neoclouds will emerge as a permanent part of the AI compute value chain, supplying:

  • Merchant GPU capacity

  • Independent AI compute

  • Pricing-led infrastructure specialists.

The value chain seemingly always creates a layer where price discovery, specialization, and customer leverage are the values. Neocloud is that layer, some will argue. 


And while enterprise compute will be part of the market, much of the current market is driven by compute needs of the hyperscalers themselves. 


Company | Percentage from Hyperscalers | Key Details/Notes
CoreWeave | ~80-100% | Primary revenue from hyperscalers and AI labs. Microsoft alone: 62% (2024 full year), rising to ~70-72% in early 2025 periods. Top 2 customers (likely Microsoft + Meta/OpenAI): 77% in 2024. Additional contracts with Meta ($14B+), OpenAI, and others. Acts as overflow capacity for hyperscalers.
TeraWulf | ~14-20% (growing rapidly) | Primarily Bitcoin mining revenue; HPC/AI (hosting for hyperscalers via partners like Fluidstack/Core42, backed by Google) contributed ~14% in recent quarters, with major multi-year contracts ramping in 2025-2026.
CleanSpark | ~0-5% (early stage) | Still primarily Bitcoin mining (>95% revenue). Pivoting with AI data center hires and site wins (e.g., beat Microsoft for Wyoming site), but minimal hyperscaler revenue recognized yet; focus on future diversification.
Hut 8 | ~10-30% (growing rapidly) | Shifting from mining; major 15-year $7B+ lease (potentially $17B+) with Fluidstack (Google-backed) for AI hosting starting ramp in 2025. Earlier GPU-as-a-Service for AI clients; hyperscaler deals driving pivot.
Others (e.g., Core Scientific, IREN) | 20-50%+ (varies) | Similar miners pivoting: Core Scientific ~21-30% from HPC (deals with CoreWeave/hyperscalers); many in 10-30% range amid transition. Full pivot companies approach 100%.


Some might question the “permanence” of neocloud providers in the “AI compute as a service” space, but current thinking tends to be that a new role within the value chain is being created. 


Analysts tend to view neoclouds as having enduring roles based on specialization, partnerships and niche dominance, rather than being targets for widespread buyouts.


Hyperscalers (Microsoft, Google, Amazon, Meta) prefer massive long-term offtake contracts and partnerships to secure capacity quickly, while building their own infrastructure. This hybrid approach allows them to use neocloud balance sheets for off-balance-sheet scaling without full integration risks.


Others might argue that the window for neoclouds is somewhat less certain, to the extent it is driven by hyperscale inability to rapidly supply the current demand for AI compute. Eventually, the argument goes, the hyperscalers will be able to build and operate their own internal capacity, reducing reliance on neoclouds. 


Source/Estimate | Timeframe | Unmet/Shortfall Capacity | Key Notes/Reasons
McKinsey | By 2030 (incremental 2025-2030) | ~125-205 GW (AI-related global) | Total AI demand 156-260 GW by 2030; hyperscalers capture ~70%, but build lags due to power/grid.
CBRE / Utility Requests | US hyperscale 2025-2026 | ~14-40 GW incremental (2025 surge) | Vacancy at record low 1.6%; requests far exceed grid additions; multi-year delays in key markets.
Seaport Global / Industry | Near-term (2025-2027) | Significant GPU/power shortage | Neoclouds fill "shortage of graphics chips and electricity"; temporary 3-5 year window.
NVIDIA / Analyst Backlogs | Blackwell supply 2025-2026 | 3.6M units backlog (hyperscalers) | Sold out through mid-2026; drives outsourcing to neoclouds for immediate access.
Overall Analyst Consensus | 2025-2028 | Tens of GW + millions of GPUs unmet soon | Power as #1 bottleneck; hyperscalers' $350-600B annual CapEx still constrained by grid/energy.


When scale providers win on unit economics, merchant or brokerage layers appear wherever customers value flexibility, neutrality, or pricing innovation. In the case of AI compute, hyperscale AI compute suppliers, no less than enterprise customers, will have such needs. 


Content delivery networks provide a good example of how new specialist roles can emerge. CDNs are specialized data centers whose value is edge location and latency reduction for media and content delivery. 

