Thursday, March 19, 2026

What AI Changes in the Area of Connectivity

Year in and year out, it is safe to predict that demand for connectivity bandwidth will grow. The more interesting question is how specific innovations drive growth in different parts of the connectivity fabric.


Broadly speaking, artificial intelligence computing workloads, and the connectivity demand they create, occur across several parts of the network:

  • Intra-Data Center: GPUs within a single cluster must constantly synchronize "weights" and "gradients" during training. This requires 800G (and soon 1.6T or 3.2T) optical links.

  • Data Center to Data Center (DCI): Large models are increasingly trained across distributed clusters in different geographic regions to tap into available power grids. So data center to data center capacity must be reinforced.

  • Data Center to End User: While inference uses less bandwidth than training, content density is increasing as AI moves from text to real-time video. This primarily means access network augmentation, which is already happening for other reasons.


But it isn’t simply the data volume that AI might be changing. 


AI traffic, being machine traffic, has different characteristics from human-generated traffic, which follows fairly well-defined hourly patterns across any 24-hour period. Machine traffic, by contrast, is far less predictable. 


Redundancy and load-shifting are traditional ways of dealing with temporary spikes in compute demand, but AI demand spikes might paradoxically also cascade, if additional resources in other regions are insufficient, or if the connectivity fabric cannot support spiky load shifting. 


So some might say the architecture of internet traffic is evolving from human-shaped to machine-shaped. 
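The contrast between human-shaped and machine-shaped traffic can be sketched with a toy simulation. All numbers and distributions here are illustrative assumptions, not measured data: a smooth diurnal curve for human demand versus a flat baseline with unpredictable bursts for machine demand.

```python
import math
import random

def human_traffic(hour):
    """Stylized diurnal human demand: troughs overnight, peaks midday.
    Units are arbitrary; only the shape matters."""
    return 50 + 40 * math.sin((hour - 6) * math.pi / 12)

def machine_traffic(hour, rng):
    """Stylized machine/AI demand: a flat baseline punctuated by
    unpredictable spikes (e.g., a training job starting or a large
    checkpoint transfer), independent of time of day."""
    base = 60
    spike = 300 if rng.random() < 0.1 else 0  # ~10% chance of a big burst
    return base + spike

rng = random.Random(42)
for hour in range(24):
    print(f"{hour:02d}:00  human={human_traffic(hour):6.1f}  "
          f"machine={machine_traffic(hour, rng):6.1f}")
```

The human curve is forecastable hours in advance; the machine series is not, which is the planning problem described above.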


| Connectivity Segment | Growth Driver | Primary Demand Metric | Expected Impact/Tech Shift |
|---|---|---|---|
| Intra-DC (back-end) | GPU-to-GPU synchronization for LLM training | 10x fiber density increase | Shift to 1.6T/3.2T Ethernet; co-packaged optics (CPO) |
| DCI (data center interconnect) | Distributed training and "checkpointing" across regions | 145% CAGR in pluggable optics | 800G ZR/ZR+ becomes the standard for long-haul |
| DC to edge/user | Real-time video GenAI and "agentic" workflows | 30%–50% annual traffic surge | Growth in edge AI data centers to lower latency |
| User to DC (uplink) | AI wearables (video glasses) and sensor-rich IoT | Shift in asymmetry | Uplink demand starts to rival downlink in specific sectors |
| Network management | AI-driven traffic orchestration (NaaS) | 38% CAGR in software | Shift toward network-as-a-service (NaaS) for dynamic scaling |


It is reasonable to expect many other app and behavior-related changes to happen as well. 


If AI is rapidly increasing network traffic volume and unpredictability, enterprises and global Internet infrastructure providers will have to redesign their systems for resilience. 


So new enterprise bandwidth, latency, and congestion controls will be needed to handle these loads, many will argue, including:

  • Treating AI workloads as having distinct traffic patterns

  • Adding traffic shaping, rate limiting, intelligent filtering, and workload isolation features

  • Reducing cross-region data movement by placing data closer to models

  • Building redundancy across regions and providers

  • Using real-time monitoring and predictive analytics to detect anomalies

  • Assuming traffic spikes of two to five times “typical” loads.
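The traffic-shaping and rate-limiting controls mentioned above are often built on a token bucket. Below is a minimal sketch of that mechanism; the class and its parameters are illustrative, and production systems would use NIC, switch, or OS-level QoS features rather than application code like this.

```python
class TokenBucket:
    """Minimal token-bucket shaper: admits traffic at a sustained `rate`
    (units/sec) while tolerating bursts up to `capacity` units."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity  # start full: an initial burst is allowed
        self.last = 0.0

    def allow(self, now, amount):
        # Refill tokens for elapsed time, capped at bucket capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if amount <= self.tokens:
            self.tokens -= amount
            return True
        return False

bucket = TokenBucket(rate=100.0, capacity=500.0)  # 100 units/s, 500-unit burst
print(bucket.allow(0.0, 400))   # burst within capacity -> True
print(bucket.allow(0.1, 400))   # bucket nearly empty -> False
print(bucket.allow(5.0, 400))   # refilled after idle time -> True
```

The same shape of logic, applied per AI workload rather than per user, is one way to keep a runaway training or inference job from starving other tenants.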


Compute platforms supporting “AI compute as a service” will likely have to consider:

  • Expanding bandwidth, backbone capacity, and route diversity

  • Deploying distributed inference and GPU capacity at the edge

  • Implementing AI-aware routing and advanced congestion management

  • Increasing regional redundancy and maintaining reserve capacity

  • Enabling dynamic scaling and improved tenant isolation.
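The reserve-capacity and spike-sizing points above reduce to simple arithmetic. Here is a back-of-envelope sketch; the multipliers are placeholder assumptions, not industry figures.

```python
def required_capacity(typical_load, spike_multiplier=3.0, reserve_fraction=0.2):
    """Back-of-envelope sizing: provision for a spike of `spike_multiplier`
    times typical load, plus a reserve margin for absorbing failover
    from another region. All defaults are illustrative assumptions."""
    peak = typical_load * spike_multiplier
    return peak * (1 + reserve_fraction)

# e.g., 100 Gbps typical, 3x spikes, 20% regional reserve
print(required_capacity(100))  # -> 360.0
```

The point of the exercise: if spikes of two to five times typical load are the planning assumption, provisioned capacity must be a multiple of average demand, not a modest margin above it.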


The point is that creating new AI compute facilities entails a number of costs in addition to shells (buildings), power, and processors. 


| Category | Examples | Why needed for higher AI compute | Very rough cost indication (order of magnitude) |
|---|---|---|---|
| High-performance interconnect inside clusters | InfiniBand/Ethernet switches, NICs, NVLink bridges, optical cabling, spine-leaf fabrics | Distributed training and large MoE models need low-latency, high-bandwidth links between thousands of GPUs; networking can be a large fraction of AI cluster capex | Per large AI cluster, networking (switches, NICs, optics) can easily run into hundreds of millions of dollars; per GPU, interconnect can add roughly USD 3,000–10,000 over server cost, depending on scale and topology |
| Storage systems | High-performance NVMe in servers, parallel/distributed file systems, object storage, backup/archival storage | Training data lakes, checkpoints, model artifacts and logs require very high throughput and capacity; storage performance strongly affects GPU utilization | Full AI stack hardware (servers + storage + networking) often implies base systems at roughly USD 5,000–45,000 per server before GPUs; petabyte-scale storage systems add millions to tens of millions of dollars per region |
| Advanced cooling infrastructure | Direct-to-chip liquid cooling, immersion tanks, rear-door heat exchangers, upgraded chillers, pumps, heat-rejection systems | Rack densities >30–100 kW for GPU servers make air cooling insufficient, forcing large investments in liquid cooling plants, distribution loops, and monitoring | Liquid-cooling deployments for AI halls can cost tens of millions of dollars per site; over the life of the facility, cooling energy is a major part of the 15–25% "power and cooling" share of AI TCO |
| Power delivery beyond basic "power" | Substations, high-voltage switchgear, UPS, PDUs, busways, redundant feeds (N+1/2N) | Dense AI clusters require huge, highly reliable power; providers must oversize and harden electrical systems to avoid outages and support higher rack densities | Upgrading a site's electrical plant for AI (substation, UPS, distribution) typically runs in the tens to hundreds of millions of dollars for hyperscale campuses, depending on MW added and redundancy |
| Data-center facility upgrades (non-shell) | Containment systems, raised floors, structural reinforcement for heavy racks/tanks, fire suppression tuned for liquid cooling, white-space re-fit | Existing halls often must be rebuilt to handle heavier racks, new coolant loops and different airflow patterns; safety systems are upgraded for new thermal/chemical risks | Retrofit of an existing hall to AI-grade density can cost several thousand dollars per square meter; full hall conversions often run into the tens of millions of dollars per building |
| WAN and inter-DC networking | Metro and long-haul fiber, DWDM equipment, edge routers, private backbone upgrades | AI workloads move large datasets and models between regions and availability zones; cross-DC bandwidth demand grows sharply with multi-region training and inference | Large cloud backbones already represent multi-billion-dollar capex programs; incremental AI-driven capacity (fiber pairs, optical gear) can be hundreds of millions of dollars over a few years for a major provider |
| Orchestration, MLOps, and control-plane software | Cluster schedulers, container platforms, model registries, CI/CD for ML, usage metering/billing | To sell "AI compute as a service," providers need sophisticated software to allocate GPUs, manage jobs, track utilization, and integrate storage/networking; complexity grows with scale | Many platforms are internally developed; external software licensing and support can be on the order of 10–15% of AI infrastructure TCO over five years in some deployments |
| Observability, telemetry, and optimization tools | Monitoring for GPUs, fabric, cooling, DCIM/BMS integration, AI-driven optimization (e.g., cooling control) | Keeping thousands of GPUs fully utilized and within thermal/power limits requires deep telemetry and automated tuning, which are non-trivial engineering investments | Enterprise-grade observability stacks and DCIM/BMS integration typically cost millions of dollars per large site over their life (licenses plus engineering and integration work) |
| Security and compliance | Hardware security modules, key management, secure enclaves, data loss prevention, access controls, audits | Enterprise AI workloads often involve sensitive data and regulated industries; clouds must harden AI clusters against exfiltration and meet compliance standards | Security tooling and compliance programs add ongoing opex in the millions per year for large environments, plus capex for dedicated hardware and secure facilities |
| Personnel and specialized operations | Site reliability engineers, network engineers (InfiniBand/HPC fabrics), MLOps teams, facilities engineers for liquid cooling | AI data centers need more specialized skills than traditional IT: tuning fabrics, managing liquid cooling, optimizing training pipelines, and running large clusters efficiently | Personnel can represent 20–30% of AI infrastructure TCO over time; for hyperscalers, this means tens to hundreds of millions of dollars annually across global AI regions |
| Support, maintenance, and spares | Hardware support contracts, spare parts pools, planned refresh cycles, vendor field engineers | High-availability AI services require rapid replacement of failed components and regular firmware/software updates, increasing support intensity per rack | Maintenance and support are often modeled as about 10–15% of AI infrastructure TCO across a 3–7 year horizon |
| Land, water, and sustainability programs | Additional sites for new AI regions, water treatment/recycling for cooling, heat-reuse infrastructure, carbon procurement | AI data centers often face local constraints on water use and emissions; providers invest in water-efficient cooling, heat reuse, and carbon/renewable projects | Water-optimized cooling, treatment and heat-reuse can add millions to tens of millions per site; broader sustainability and renewable programs for AI loads are multi-billion-dollar commitments across portfolios |




Wednesday, March 18, 2026

Bye Bye Marginal Cost Pricing

Marginal cost is the key to understanding why artificial intelligence is changing enterprise software economics


Simply put, the additional cost of supplying traditional software seat number 5,000 is very close to zero, once the software has been written and deployed commercially. That is why profit margins for software were traditionally so high. 


The same logic applied to usage: the cost of supporting a very active license user was not much different from supporting a light-usage customer or user. 


AI breaks that model: each query involves additional real cost, so marginal-cost pricing no longer performs as it did. Instead, prices have to account for usage, since AI inference imposes real incremental costs.


source: Adeayo


Basically, infrastructure cost has become variable (at scale) for the first time in software-as-a-service history, since the cost of the next operation is no longer insignificant.


So AI inference costs have to be recovered in some way other than the licensed, per-seat mechanism. One way or the other, variable usage has to be accounted for and priced into the retail models.
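The arithmetic behind this shift is simple enough to sketch. All prices and per-query costs below are invented for illustration; the point is only that with a real per-query cost, flat per-seat pricing can turn a heavy user into a loss.

```python
def monthly_margin_per_seat(price, queries, cost_per_query,
                            fixed_cost_per_seat=1.0):
    """Contribution margin for one seat under flat per-seat pricing.
    With near-zero marginal cost (classic SaaS), usage barely matters;
    with a real per-query inference cost, heavy usage can flip the
    margin negative. All numbers are illustrative assumptions."""
    return price - fixed_cost_per_seat - queries * cost_per_query

seat_price = 30.0       # hypothetical flat monthly license
inference_cost = 0.02   # hypothetical cost per AI query

print(monthly_margin_per_seat(seat_price, 200, inference_cost))   # light user: healthy margin
print(monthly_margin_per_seat(seat_price, 5000, inference_cost))  # heavy user: margin goes negative
```

This is why usage-based, tiered, or hybrid pricing shows up in AI-infused software: the retail model has to track the variable cost curve it now sits on.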


Tuesday, March 17, 2026

In AI Era, Intent Replaces Software Infrastructure?

The process of booking a hotel today, using an artificial intelligence agent, illustrates why investors are so worried about AI disruption of enterprise software. At least in principle, lots of processes and functions that once were mandatory are simply abstracted away. 


Activities are not properties of particular pieces of software but rather are composited on the fly and then deconstructed. Everything is custom, which would have been totally impractical before everyday AI and application programming interfaces.  


source: Gabriyel Wong 


To some extent, we might say intent replaces infrastructure. You still need access to the capabilities. But those capabilities are invoked by the agent on your behalf. You don’t necessarily “own” the hardware, software, apps or operating systems that get invoked to complete your desired task. 


It’s analogous to the way cloud computing changed the necessity of “ownership” to reliance on “services.”
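The "intent replaces infrastructure" idea can be made concrete with a toy sketch. Every function name and field below is invented for illustration; a real agent would call live APIs, handle failures, and negotiate options with the user.

```python
# Toy sketch: the user states a goal; a hypothetical agent composes
# capability calls on the fly, then discards the composition when done.

def search_hotels(city, night):
    """Hypothetical capability endpoint (stands in for a real API)."""
    return [{"name": "Hotel Example", "city": city,
             "night": night, "price": 120}]

def book(option, payment_token):
    """Hypothetical capability endpoint (stands in for a real API)."""
    return {"status": "confirmed", "hotel": option["name"]}

def agent(intent):
    """Interpret a structured intent and chain the needed capabilities.
    The user never touches the underlying 'infrastructure' directly."""
    options = search_hotels(intent["city"], intent["night"])
    choice = min(options, key=lambda o: o["price"])  # trivial policy
    return book(choice, intent["payment_token"])

result = agent({"city": "Lisbon", "night": "2026-03-20",
                "payment_token": "tok_demo"})
print(result["status"])
```

The user owns none of the invoked capabilities; the agent assembles them per task, which is the cloud "ownership to services" shift taken one step further.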


Two Thumbs Up for Project Hail Mary

Project Hail Mary really is this good. I read the book first, and allowing for time constraints, the screenplay is faithful. It's just great fun and very entertaining. 
