Thursday, March 19, 2026

Outcomes Matter, Not Virtue Signaling

Adam Garfinkle's book Telltale Hearts argues that the U.S. antiwar movement of the 1960s (yes, Baby Boomers) did not meaningfully shorten the Vietnam War and may actually have prolonged it. 


That matters if you think it is more important to “do good” than to “feel good”; better to accomplish a change than simply to “virtue signal.” 


The attack is upon the narrative, arguably central to Boomer self-understanding, that their activism decisively “ended the war.” He argues that story is emotionally satisfying but incorrect. 


For a generation that prides itself on being “transformational,” that puncturing of a myth might be uncomfortable, but it is a useful antidote to ingrained arrogance.


Oddly enough, Garfinkle argues, both opponents of the war and those who believe it might actually have been won by the United States seem to agree on the movement’s impact. But both sides might be wrong. 


Garfinkle challenges the widespread belief that protests forced U.S. withdrawal and instead argues the movement had “marginal impact” (and maybe almost none) on ending the war. 


In fact, he says, the movement was counterproductive:

  • provoked backlash

  • strengthened hardline positions

  • disrupted conventional political processes that might otherwise have constrained the war.


His most startling argument is that the protests might actually have extended the conflict and increased casualties. 


And other authors have made similar claims about a generation that might have created as many problems as it believes it solved:

  • Boomers: The Men and Women Who Promised Freedom and Delivered Disaster – Helen Andrews
    Argues Boomer elites reshaped institutions (media, politics, religion) in ways that produced long-term dysfunction.

  • A Generation of Sociopaths – Bruce Cannon Gibney
    A blunt critique claiming Boomers extracted economic and social value while leaving debt and institutional decay.

  • The Narcissism Epidemic – Jean Twenge
    Connects Boomer-era cultural shifts to rising individualism and narcissism (though broader than just Boomers).

Garfinkle’s work is narrower (focused on Vietnam), but:

  • Challenges moral self-congratulation

  • Highlights unintended consequences

  • Separates cultural impact from policy impact (huge in one, limited in the other)

Many will argue Boomers were enormously influential. But influence is not the same as positive outcomes.


I may be a Boomer, but I do not buy the self-congratulatory plaudits. Perhaps we meant well. But what matters are outcomes, not feelings.


Boomer economic impact likely is mixed, at best.


Author | Positive Effects | Negative Effects | Net View
Bruce Cannon Gibney (A Generation of Sociopaths) | (none noted) | Asset inflation, entitlement expansion, public debt burden shifted to younger generations | Strongly negative
Helen Andrews (Boomers) | Some institutional dynamism | Mismanagement of institutions, short-termism | Mostly negative
William Strauss & Neil Howe (Generations, The Fourth Turning) | Innovation, growth cycles | Fiscal imbalances, intergenerational strain | Cyclical / mixed


Boomer political or institutional impact might be a mix of positive and negative. 


Author | Positive Effects | Negative Effects | Net View
Adam Garfinkle (Telltale Hearts) | Raised awareness of the war | Undermined political cohesion; limited policy effectiveness; possible prolongation of the Vietnam War | Negative
Todd Gitlin (The Sixties) | Expanded democratic participation | Fragmentation and radicalization weakened movements | Mixed
Alan Wolfe (One Nation, After All) | Greater tolerance, pluralism | Decline in shared moral frameworks | Tradeoff


Cultural or social impact might be the most questionable area of influence. 


Author | Positive Effects | Negative Effects | Net View
Jean Twenge (The Narcissism Epidemic) | Self-expression, individual empowerment | Rising narcissism, fragility, decline in social cohesion | Negative
Todd Gitlin | Liberation movements, civil rights gains | Excess, identity fragmentation | Mixed
Daniel Bell | Cultural creativity | Breakdown of norms supporting institutions | Tradeoff
Alan Wolfe | Tolerance, reduced prejudice | Moral relativism, weaker shared norms | Tradeoff


What AI Changes in the Area of Connectivity

Year in and year out, it is always safe to predict that connectivity bandwidth demand will grow. But we might always ask how specific innovations drive growth in different parts of the connectivity fabric.


Broadly speaking, artificial intelligence computing workloads, and the connectivity they require, occur across several parts of the network (a back-of-envelope bandwidth sketch follows the list):

  • Intra-Data Center: GPUs within a single cluster must constantly synchronize "weights" and "gradients" during training. This requires 800G (and soon 1.6T or 3.2T) optical links.

  • Data Center to Data Center (DCI): Large models are increasingly trained across distributed clusters in different geographic regions to tap into available power grids. So data center to data center capacity must be reinforced.

  • Data Center to End User: While inference uses less bandwidth than training, the content density is increasing, moving from text-based AI to real-time video. This is primarily access network augmentation and already is happening for other reasons.
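
To see why the intra-data center segment is the pressure point, consider a rough estimate of the synchronization traffic a single GPU generates during data-parallel training with ring all-reduce. This is a minimal Python sketch; the model size, gradient precision, and step time are all illustrative assumptions, not measurements.

# Rough per-GPU synchronization bandwidth for data-parallel training.
# All parameter values below are illustrative assumptions.

def allreduce_gbps(params_billion, bytes_per_param, step_seconds):
    """Approximate per-GPU link load (Gbps) to all-reduce gradients once
    per training step. Ring all-reduce moves roughly 2x the gradient
    payload per GPU."""
    payload_bits = params_billion * 1e9 * bytes_per_param * 8
    return 2 * payload_bits / step_seconds / 1e9

# A hypothetical 70B-parameter model, fp16 gradients, one step per second:
print(f"~{allreduce_gbps(70, 2, 1.0):,.0f} Gbps per GPU")  # ~2,240 Gbps

In practice that load is split across several links and overlapped with compute, but the order of magnitude is why 800G optics are standard and 1.6T/3.2T are coming.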


But it isn’t simply the data volume that AI might be changing. 


AI traffic, being machine traffic, has different characteristics from human-generated traffic, which follows fairly well-defined hourly patterns across any 24-hour period. AI traffic, on the other hand, is less predictable. 


Redundancy and load-shifting are traditional ways of dealing with temporary spikes in compute demand, but AI demand spikes might paradoxically also cascade, if additional resources in other regions are insufficient, or if the connectivity fabric cannot support spiky load shifting, as the sketch below illustrates. 
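
Here is a minimal sketch of that cascade risk, assuming hypothetical regional capacities and a spike that spills down a chain of regions; the region names and numbers are invented for illustration, and "unserved" demand at the end of the chain is the cascade failure.

# Toy model: each region's excess demand spills into the next region.
# A cascade failure is unserved demand left at the end of the chain.

def cascade(capacity, demand):
    """Spill excess demand region-to-region; return unserved demand."""
    spill = 0.0
    for region, cap in capacity.items():
        total = demand.get(region, 0.0) + spill
        spill = max(0.0, total - cap)
        if spill:
            print(f"{region}: saturated, {spill:.0f} units of excess demand")
    return spill

caps   = {"us-east": 100, "us-west": 100, "eu": 100}   # hypothetical units
demand = {"us-east": 250, "us-west": 90, "eu": 95}     # 2.5x spike in us-east
print(f"unserved after cascade: {cascade(caps, demand):.0f}")  # 135 units

Each region absorbs a little of the spike, but because none has enough headroom, the overload propagates instead of being contained.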


So some might say the architecture of internet traffic is evolving from human-shaped to machine-shaped. 


Connectivity Segment | Growth Driver | Primary Demand Metric | Expected Impact / Tech Shift
Intra-DC (back-end) | GPU-to-GPU synchronization for LLM training | 10x fiber density increase | Shift to 1.6T/3.2T Ethernet; co-packaged optics (CPO)
DCI (Data Center Interconnect) | Distributed training and “checkpointing” across regions | 145% CAGR in pluggable optics | 800G ZR/ZR+ becomes the standard for long haul
DC to Edge/User | Real-time video GenAI and “agentic” workflows | 30%-50% annual traffic surge | Growth in edge AI data centers to lower latency
User to DC (uplink) | AI wearables (video glasses) and sensor-rich IoT | Shift in asymmetry | Uplink demand starts to rival downlink in specific sectors
Network Management | AI-driven traffic orchestration (NaaS) | 38% CAGR in software | Shift toward Network-as-a-Service (NaaS) for dynamic scaling


It is reasonable to expect many other app- and behavior-related changes to happen as well. 


If AI is rapidly increasing network traffic volume and unpredictability, enterprises and global Internet infrastructure providers will have to redesign their systems for resilience. 


So new enterprise bandwidth, latency, and congestion controls will be needed to handle these loads, many will argue, including:

  • Treating AI workloads as having distinct traffic patterns

  • Adding traffic shaping, rate limiting, intelligent filtering, and workload isolation features (a minimal shaping sketch follows this list)

  • Reducing cross-region data movement, placing data closer to models

  • Building redundancy across regions and providers

  • Using real-time monitoring and predictive analytics to detect anomalies

  • Assuming traffic spikes of two to five times “typical” load.
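
As one illustration of the traffic shaping and workload isolation items above, here is a minimal token-bucket sketch in Python. The per-class rates and burst allowances are hypothetical, chosen to echo the "two to five times typical" spike assumption.

import time

class TokenBucket:
    """Shape a traffic class to a sustained rate plus a bounded burst."""

    def __init__(self, rate_bps, burst_bits):
        self.rate = rate_bps        # sustained bits/second allowed
        self.capacity = burst_bits  # burst headroom (e.g., 2-5x typical)
        self.tokens = burst_bits
        self.last = time.monotonic()

    def allow(self, bits):
        """Refill tokens for elapsed time, then admit or defer the send."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if bits <= self.tokens:
            self.tokens -= bits
            return True
        return False            # caller queues, drops, or reroutes

# One bucket per workload class keeps AI training bursts from starving
# latency-sensitive inference traffic (rates are invented examples):
training  = TokenBucket(rate_bps=10e9, burst_bits=50e9)  # 10 Gbps, 5x burst
inference = TokenBucket(rate_bps=2e9,  burst_bits=4e9)   # 2 Gbps, 2x burst
print(training.allow(8e9), inference.allow(8e9))         # True False

Giving each class its own bucket is the isolation step: a training burst exhausts only its own tokens, never the inference class's allowance.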


Compute platforms supporting “AI compute as a service” will likely have to consider:

  • Expanding bandwidth, backbone capacity, and route diversity

  • Deploying distributed inference and GPU capacity at the edge

  • Implementing AI-aware routing and advanced congestion management

  • Increasing regional redundancy and maintaining reserve capacity

  • Enabling dynamic scaling and improved tenant isolation (a placement sketch follows this list).
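
What "AI-aware" placement with reserve capacity might look like in miniature: a function that routes a job to the region with the most free GPUs while protecting a headroom margin. The region names, fleet sizes, and the 20% reserve threshold are assumptions for illustration only.

RESERVE_FRACTION = 0.2   # assumed: keep 20% of each region's GPUs free

def pick_region(regions, gpus_needed):
    """Return the region with the most free GPUs that can take the job
    without dipping into its reserve; None means queue or scale out."""
    candidates = []
    for name, r in regions.items():
        free = r["total_gpus"] - r["used_gpus"]
        if free - gpus_needed >= r["total_gpus"] * RESERVE_FRACTION:
            candidates.append((free, name))
    return max(candidates)[1] if candidates else None

regions = {
    "us-east": {"total_gpus": 1000, "used_gpus": 850},  # only 150 free
    "eu-west": {"total_gpus": 800,  "used_gpus": 400},  # 400 free
}
print(pick_region(regions, gpus_needed=128))  # eu-west

us-east is excluded because placing the job there would eat into its reserve; a real scheduler would layer latency, data locality, and tenant isolation on top of this.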


The point is that creating new AI compute facilities involves a number of costs in addition to shells (buildings), power, and processors. 


  • High-performance interconnect inside clusters. Examples: InfiniBand/Ethernet switches, NICs, NVLink bridges, optical cabling, spine-leaf fabrics. Why needed: distributed training and large MoE models need low-latency, high-bandwidth links between thousands of GPUs; networking can be a large fraction of AI cluster capex. Rough cost (order of magnitude): networking (switches, NICs, optics) for a large AI cluster can easily run into hundreds of millions of dollars; per GPU, interconnect can add roughly USD 3,000–10,000 over server cost, depending on scale and topology.

  • Storage systems. Examples: high-performance NVMe in servers, parallel/distributed file systems, object storage, backup/archival storage. Why needed: training data lakes, checkpoints, model artifacts, and logs require very high throughput and capacity; storage performance strongly affects GPU utilization. Rough cost: full AI stack hardware (servers + storage + networking) often implies base systems at roughly USD 5,000–45,000 per server before GPUs; petabyte-scale storage systems add millions to tens of millions of dollars per region.

  • Advanced cooling infrastructure. Examples: direct-to-chip liquid cooling, immersion tanks, rear-door heat exchangers, upgraded chillers, pumps, heat-rejection systems. Why needed: rack densities above 30–100 kW for GPU servers make air cooling insufficient, forcing large investments in liquid cooling plants, distribution loops, and monitoring. Rough cost: liquid-cooling deployments for AI halls can cost tens of millions of dollars per site; over the life of the facility, cooling energy is a major part of the 15–25% “power and cooling” share of AI TCO.

  • Power delivery beyond basic “power”. Examples: substations, high-voltage switchgear, UPS, PDUs, busways, redundant feeds (N+1/2N). Why needed: dense AI clusters require huge, highly reliable power; providers must oversize and harden electrical systems to avoid outages and support higher rack densities. Rough cost: upgrading a site’s electrical plant for AI (substation, UPS, distribution) typically runs in the tens to hundreds of millions of dollars for hyperscale campuses, depending on MW added and redundancy.

  • Data-center facility upgrades (non-shell). Examples: containment systems, raised floors, structural reinforcement for heavy racks/tanks, fire suppression tuned for liquid cooling, white-space re-fit. Why needed: existing halls often must be rebuilt to handle heavier racks, new coolant loops, and different airflow patterns; safety systems are upgraded for new thermal/chemical risks. Rough cost: retrofitting an existing hall to AI-grade density can cost several thousand dollars per square meter; full hall conversions often run into the tens of millions of dollars per building.

  • WAN and inter-DC networking. Examples: metro and long-haul fiber, DWDM equipment, edge routers, private backbone upgrades. Why needed: AI workloads move large datasets and models between regions and availability zones; cross-DC bandwidth demand grows sharply with multi-region training and inference. Rough cost: large cloud backbones already represent multi-billion-dollar capex programs; incremental AI-driven capacity (fiber pairs, optical gear) can be hundreds of millions of dollars over a few years for a major provider.

  • Orchestration, MLOps, and control-plane software. Examples: cluster schedulers, container platforms, model registries, CI/CD for ML, usage metering/billing. Why needed: to sell “AI compute as a service,” providers need sophisticated software to allocate GPUs, manage jobs, track utilization, and integrate storage/networking; complexity grows with scale. Rough cost: many platforms are internally developed; external software licensing and support can be on the order of 10–15% of AI infrastructure TCO over five years in some deployments.

  • Observability, telemetry, and optimization tools. Examples: monitoring for GPUs, fabric, and cooling; DCIM/BMS integration; AI-driven optimization (e.g., cooling control). Why needed: keeping thousands of GPUs fully utilized and within thermal/power limits requires deep telemetry and automated tuning, which are non-trivial engineering investments. Rough cost: enterprise-grade observability stacks and DCIM/BMS integration typically cost millions of dollars per large site over their life (licenses plus engineering and integration work).

  • Security and compliance. Examples: hardware security modules, key management, secure enclaves, data loss prevention, access controls, audits. Why needed: enterprise AI workloads often involve sensitive data and regulated industries; clouds must harden AI clusters against exfiltration and meet compliance standards. Rough cost: security tooling and compliance programs add ongoing opex in the millions per year for large environments, plus capex for dedicated hardware and secure facilities.

  • Personnel and specialized operations. Examples: site reliability engineers, network engineers (InfiniBand/HPC fabrics), MLOps teams, facilities engineers for liquid cooling. Why needed: AI data centers need more specialized skills than traditional IT: tuning fabrics, managing liquid cooling, optimizing training pipelines, and running large clusters efficiently. Rough cost: personnel can represent 20–30% of AI infrastructure TCO over time; for hyperscalers, this means tens to hundreds of millions of dollars annually across global AI regions.

  • Support, maintenance, and spares. Examples: hardware support contracts, spare parts pools, planned refresh cycles, vendor field engineers. Why needed: high-availability AI services require rapid replacement of failed components and regular firmware/software updates, increasing support intensity per rack. Rough cost: maintenance and support are often modeled as about 10–15% of AI infrastructure TCO across a 3–7 year horizon.

  • Land, water, and sustainability programs. Examples: additional sites for new AI regions, water treatment/recycling for cooling, heat-reuse infrastructure, carbon procurement. Why needed: AI data centers often face local constraints on water use and emissions; providers invest in water-efficient cooling, heat reuse, and carbon/renewable projects. Rough cost: water-optimized cooling, treatment, and heat reuse can add millions to tens of millions per site; broader sustainability and renewable programs for AI loads are multi-billion-dollar commitments across portfolios.
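
Pulling a few of the order-of-magnitude figures above into one back-of-envelope calculation shows how quickly the non-GPU line items add up. Every number here is a rough assumption taken or interpolated from the list, not a quote.

def site_capex_musd(gpus,
                    interconnect_per_gpu_usd=6_500,  # midpoint of ~$3k-10k/GPU
                    storage_musd=20.0,               # petabyte-scale storage
                    cooling_musd=30.0,               # liquid-cooling plant
                    power_musd=100.0):               # electrical-plant upgrade
    """Very rough non-GPU site capex, in millions of USD (assumed figures)."""
    interconnect_musd = gpus * interconnect_per_gpu_usd / 1e6
    return interconnect_musd + storage_musd + cooling_musd + power_musd

# A hypothetical 16,000-GPU site:
print(f"~${site_capex_musd(16_000):,.0f}M before counting the GPUs")  # ~$254M

That yields roughly a quarter of a billion dollars before a single GPU, shell, or staff cost is counted, which is the point: processors and buildings are only part of the bill.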



