Wednesday, January 21, 2026

How Electricity Charging Might Change

It is now easy to argue that U.S. electricity pricing might have to evolve in ways similar to how retail pricing of communications services changed in the shift from analog to digital formats.


Significantly, retail pricing might change from “consumption” or “usage” to “capabilities” or “access.” In other words, commercial power customers might eventually be charged based on “how much” power is available, where it is available, or when it is available. 


Consider the earlier change in connectivity service pricing. 


For the most part, connectivity providers (telcos, mobile operators) no longer price their services on “usage” (minutes, calls, texts, bytes consumed), preferring “capability” and “access” as the key pricing elements. 


For internet access services, consumption does not typically matter. Instead, prices are based on “potential speed.” So a 100-Mbps connection costs the least, a 500-Mbps connection costs more, and a gigabit-per-second connection costs the most.


Electricity still is mostly priced based on consumption (usage). But the common costs of generation and transmission must still be paid for, even as more customers reduce consumption using self-generation (solar panels, local generation by businesses).

Electric grid support therefore will become more challenging as user consumption drops, based on substitution of local generation for network-delivered power.


The basic business problem is that this forces a smaller number of customers to bear a larger portion of shared cost recovery, to the extent that common costs are recovered from usage charges. 


Electricity service providers have some tools to reinvent their business models. Load management becomes more important, for example. 


A shift to “access” charges also would help, creating a model based not on an account’s actual energy consumption but on a fee for the ability to use the network. That mirrors the flat monthly fee approach now used by mobile service providers, where prices are not dictated by the number or length of phone calls, the number of text messages sent and received, or the amount of internet access data consumed. 


Instead, one fee, providing access to the network and its services, dominates. 


As with communications companies, customers who want “bigger pipes” would pay more, just as customers who want gigabit internet access pay more than those who want only 100-Mbps speeds.
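The contrast between usage-based and capability-based (access) billing can be sketched in a few lines of Python. The tier names, fees, and usage rate below are made-up illustrations, not actual tariffs:

```python
# Illustrative sketch: usage-based vs. capability-based (access) pricing.
# All rates and tier fees are hypothetical, not actual ISP or utility tariffs.

USAGE_RATE = 0.10  # hypothetical price per unit consumed (e.g., $/GB or $/kWh)

# Capability tiers: a flat monthly fee keyed to capacity, not consumption.
TIERS = {
    100: 40.0,    # 100-Mbps tier: lowest fee
    500: 60.0,    # 500-Mbps tier: mid fee
    1000: 80.0,   # gigabit tier: highest fee
}

def usage_bill(units_consumed: float) -> float:
    """Bill scales with consumption; it falls toward zero if the customer self-supplies."""
    return USAGE_RATE * units_consumed

def capability_bill(tier: int) -> float:
    """Flat fee for access at a given capacity, regardless of consumption."""
    return TIERS[tier]

# A heavy user and a light user pay the same under capability pricing...
print(capability_bill(500), capability_bill(500))
# ...but very different amounts under usage pricing.
print(usage_bill(1000), usage_bill(50))
```

The point of the sketch: under the access model, revenue no longer depends on units consumed, so self-generation does not erode it.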

That is important in an era where local generation is going to reduce grid-delivered power consumption. 


Electricity is ceasing to be an “energy sales business” and becoming an infrastructure access business, exactly like telecom. Where “amount of electricity consumed” used to drive the revenue model, the telecom approach would substitute “ability to use the network and its features.” 


Consumer solar users without extensive battery assets then would pay for the ability to use grid power at night, for example, in the same way that a mobile device user “pays for” the ability to use the mobile operator network, rather than the specific amount of consumption of network resources. 


The alternative is continued cross-subsidy collapse, where costs keep rising for customers unable to switch to some form of self generation. 


Common costs (generation and transmission) must be recovered. Self generation threatens the present model. As with communication networks, electrical grids must be designed to support peak demand, not average demand. 


Network revenue models must assume universal service and recovery of all common costs, not simply marginal costs related to actual consumption. 


Traditional pricing assumes energy consumption is equal to grid usage. But distributed generation breaks that assumption. 


Essentially, customers remove themselves, at least partially, from the system, but retain the optionality of using the grid for reliability, backup, and peak load balancing. 


But fixed costs stay embedded in per-kilowatt-hour charges, so rates will rise as sales fall. 
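That dynamic is simple arithmetic. A back-of-the-envelope sketch with made-up numbers: if fixed grid costs are recovered through volumetric (per-kWh) rates, the rate must rise as delivered kWh fall.

```python
# Illustrative arithmetic (made-up numbers): fixed grid costs recovered
# through volumetric (per-kWh) rates rise as grid-delivered sales shrink.

FIXED_COSTS = 1_000_000.0  # hypothetical annual fixed costs to recover ($)

def required_rate(kwh_sold: float) -> float:
    """Per-kWh rate needed to recover fixed costs from volumetric charges alone."""
    return FIXED_COSTS / kwh_sold

baseline = required_rate(10_000_000)    # 0.10 $/kWh
after_solar = required_rate(7_000_000)  # ~0.143 $/kWh after 30% of sales shift to self-generation
print(baseline, round(after_solar, 3))
```

A 30 percent drop in kWh sold forces roughly a 43 percent rate increase on the remaining volume, which is the cross-subsidy problem in miniature.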


At the same time, new demand driven by high-performance computing and associated data centers increases the need for new investments in transmission infrastructure as well as generation, increasing the fixed costs. 


The business model will break if not revamped.


Energy is an Issue for AI, but Not Existential

High-performance computing remains an energy-intensive business, but we must remain alert for non-linear change, as suppliers will have huge incentives to reduce power consumption. 


A common mistake is to assume that “if AI usage grows 10×, electricity consumption must grow 10×.” And, to be sure, consumption will grow as usage grows.

 

(charts omitted; source: NextEra)


But energy requirements have never grown in a linear way with demand growth. Instead, computing energy usage per unit shows a clear pattern:

  • Demand grows exponentially

  • Energy per unit falls faster than demand rises

  • Total energy grows—but sub-linearly.
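That sub-linear pattern can be checked with simple compounding. The growth and efficiency multipliers below are illustrative assumptions, not forecasts; the halving period comes from the historical trend discussed later in this post.

```python
# Illustrative compounding (assumed multipliers, not forecasts): demand grows
# exponentially while energy per unit of computation falls even faster,
# so total energy grows sub-linearly relative to demand.

DEMAND_GROWTH = 10.0     # assumed demand multiplier over the period (10x)
EFFICIENCY_GAIN = 20.0   # assumed energy-per-unit improvement over the period (20x)

def total_energy_multiplier(demand_x: float, efficiency_x: float) -> float:
    """Total energy = demand x energy-per-unit, where energy-per-unit = 1/efficiency."""
    return demand_x / efficiency_x

print(total_energy_multiplier(DEMAND_GROWTH, EFFICIENCY_GAIN))  # 0.5

# A halving of energy per operation every ~19 months implies, over 10 years:
halvings = 120 / 19        # ~6.3 halvings
reduction = 2 ** halvings  # roughly 80x less energy per operation
print(round(reduction))
```

With those assumed numbers, total energy can even fall while demand grows 10×; the general point is only that total energy grows more slowly than demand.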


Era                        Energy per computation
Mainframes                 Extremely high
Minicomputers              ↓ ~10×
PCs                        ↓ ~100×
Mobile SoCs                ↓ ~1,000×
Specialized accelerators   ↓ another 10–100×


Domain-specific silicon, right-sizing inference operations to the required level of processing, model efficiency gains and the shift toward less-energy-intensive inference workloads all will play a part. 


When any key input, such as power, becomes the bottleneck, innovation follows. 


And, by the way, we might also note that every new computing wave has produced fears that energy supply would not keep up, as some argued:


The U.S. Department of Commerce, between 1966 and 1972, warned that centralized computing facilities could require “significant regional generation capacity.” IBM internal studies from the late 1960s likewise examined power density limits in computer rooms. Those fears have not proven out. 


Instead, computing power consumption has been cut in half (per operation) about every 19 months. 


Sure, AI operations will consume more power. But growth will not be linear, and will not be unmanageable. 


Sunday, January 18, 2026

How do Computing Products Sold Close to Marginal Cost Recover Capital Investment?

Marginal cost pricing has been a common theme for many computing industry products. The concept is that retail pricing is set in relation to the cost of producing the next unit, not including amortization of any investments in infrastructure. 


Almost counter-intuitively, there are many examples of firms selling computing products at (or near) marginal cost, sometimes at prices near zero, yet still producing strong long-term capital recovery and attractive ROIC (return on invested capital).


That seems to defy economic logic, but it does work. How, given that the investments in infrastructure still must be recovered?


Simply, the firms did not monetize the thing they priced at marginal cost, but instead “monetized what that thing made possible.”


The underlying economics, in computing, are simple: marginal cost collapses faster than average cost. So once a fixed investment is made, incremental units (processor cycles, storage reads, memory accesses, bits transmitted, copies of apps) can be priced at marginal cost, with capital recovered in a complementary scarcity layer. 
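The “marginal cost collapses faster than average cost” point is just fixed-cost amortization, which a toy example with assumed numbers makes concrete:

```python
# Toy cost curves (assumed numbers): once a fixed investment is sunk,
# average cost per unit falls toward marginal cost as volume grows.

FIXED = 1_000_000.0   # hypothetical sunk infrastructure investment ($)
MARGINAL = 0.01       # hypothetical cost of producing one more unit ($)

def average_cost(units: int) -> float:
    """Average cost per unit = (fixed + variable) / units."""
    return (FIXED + MARGINAL * units) / units

for units in (10_000, 1_000_000, 100_000_000):
    print(units, round(average_cost(units), 4))

# At low volume, average cost dwarfs marginal cost; at very high volume it
# approaches the 0.01 marginal cost. Pricing the commodity layer at marginal
# cost therefore requires recovering the fixed investment somewhere else.
```

This is why the strategy only works at scale: the gap between price and average cost closes as volume grows, while the scarcity layer carries the capital recovery.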


In other words, marginal cost pricing works when a supplier has something else to monetize, someplace else in the value chain or stack. 


The IBM mainframe business sold batch jobs and processor time as a service at marginal cost. But IBM recovered its invested capital in other ways. Its margins on hardware were high. Its customer lock-in was similarly high. 


And IBM was able to sell system engineering and software pertaining to its machines and ecosystem. 


Layer             Monetization
Hardware          Extremely high gross margins
Switching costs   Proprietary architectures
Integration       Services + system engineering
Software lock-in  Non-portable applications


So marginal cost pricing for compute services worked because customers could not switch platforms; financial returns came from other elements of the platform. 


Microsoft provides another example. Copies of Windows, Office and developer tools were sold at affordable prices. But Microsoft made its profits from its operating system “monopoly,” developer lock-in, bundled distribution and version upgrades. 


Layer             Explanation
OS monopoly       Controlled application access
Ecosystem tax     Developers required Windows
Version upgrades  Periodic re-monetization
OEM bundling      Forced distribution


So Windows licenses were “cheap” relative to value delivered, with an incremental cost that was effectively near zero, but with profit margins near 90 percent. 


What Microsoft essentially monetized was its control of the “standard” for operating systems and the platform. 


Google search arguably offers an even more compelling case, as the product is available to users at zero cost. 


Search queries cost nothing. Neither does use of Google Maps, Gmail, Android or the Google productivity suite. 


But with its advertising monetization, Google creates a revenue model based on user attention. 


Layer                Value capture
User attention       Scarce
Intent data          Extremely scarce
Ad auctions          Competitive pricing
Data feedback loops  Increasing returns


So apps requiring lots of compute infrastructure are monetized other ways. “Compute” is not the product; audiences are. 


Amazon Web Services, it can be argued, prices core products near marginal cost (EC2 compute, S3 storage, throughput). 


Mechanism           Explanation
Scale advantage     Lowest unit cost globally
Demand aggregation  Extremely high utilization
Service layering    Databases, AI, analytics
Switching friction  Architecture dependence


So AWS monetizes risk reduction and reliability rather than compute cycles. “Trust” creates the revenue model while lock-in sustains it. 


Perhaps the best example is open source software, which, by definition, is “free to use.”


Products such as Red Hat are sold at marginal cost (near-zero licensing), or the software itself is simply available at no cost. 


Scarce layer      Revenue source
Support           Enterprises pay for certainty
Certification     Compatibility guarantees
Hosted services   Managed convenience
Security updates  Operational risk reduction


The Apple business model might not seem to be a case of marginal cost pricing, as its hardware pricing is not bound by marginal cost. 


On the other hand, the ecosystem software (iOS, macOS, developer tools, some cloud services) actually can be characterized as being made available at marginal cost. 


Apple recovers its infrastructure and sunk costs from hardware profit margins, ecosystem lock-in and services. 



Firm        What Sold at Marginal Cost  What Recovered Capital
IBM         Compute usage               Hardware + lock-in
Microsoft   Software copies             Platform control
Google      Search                      Advertising
Amazon AWS  Compute                     Scale + reliability
Red Hat     Software                    Support & ops
NVIDIA      Runtime compute             Chips + ecosystem
Apple       OS + tools                  Devices + services


If computing marginal cost approaches zero, then retail pricing also tends to fall to near zero, while successful firms find other places in the value chain to recover capital investment costs. 


Where might scarcity value remain?

  • Time

  • Trust

  • Risk transfer

  • Attention

  • Control points

  • Integration responsibility

  • Physical manufacturing

  • Distribution
