This builds on the $8 billion Amazon has previously invested in Anthropic, and embeds Claude within AWS in several ways.
For starters, the full Claude Platform will be available directly within AWS, letting customers use the same account, controls, and billing, with no additional credentials or contracts required.
The deal also signals an intent to shift training operations to non-Nvidia platforms, which could reshape the GPU and AI accelerator markets.
It suggests that the reliance on Trainium is not a short-term cost-saving move but a strategic one: AWS is building an integrated ecosystem spanning chip design, model training, cloud delivery, and enterprise distribution.
The new agreement adds up to 5 gigawatts of capacity for training and deploying Claude, including new Trainium2 capacity coming online in the first half of this year and nearly 1GW total of Trainium2 and Trainium3 capacity coming online by the end of 2026.
The deal also makes AWS the preferred infrastructure platform for Claude operations.
The additional investment means Anthropic is “committing more than $100 billion over the next ten years to AWS technologies, securing up to 5GW of new capacity to train and run Claude,” Anthropic said.
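To put those headline numbers in perspective, here is a back-of-envelope sketch using only the figures quoted above. It assumes, purely for illustration, an even spend across the decade; the actual commitment is almost certainly unevenly distributed, and "up to" figures are ceilings, not guarantees.

```python
# Illustrative scale check using the article's headline figures.
# Assumption (ours, not the companies'): spend is spread evenly
# over the decade, which real ramp-ups never are.

commitment_usd = 100e9   # "more than $100 billion over the next ten years"
years = 10
capacity_gw = 5          # "up to 5GW of new capacity"

avg_annual_spend = commitment_usd / years      # implied average run-rate
spend_per_gw = commitment_usd / capacity_gw    # implied dollars per gigawatt

print(f"Implied average annual spend: ${avg_annual_spend / 1e9:.0f}B/year")
print(f"Implied spend per gigawatt:   ${spend_per_gw / 1e9:.0f}B/GW")
# → Implied average annual spend: $10B/year
# → Implied spend per gigawatt:   $20B/GW
```

Even under this crude averaging, the deal implies a roughly $10 billion annual run-rate for AWS, which is why it reads as a revenue backlog rather than a speculative bet.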
Say what you will about the "circular" AI economy, in which infrastructure providers and chip makers invest in the model providers that buy their products and services. This deal turns AI infrastructure from a high-risk capital outlay into a partially pre-committed, vertically integrated demand engine.
With investors pressing infrastructure providers on financial returns, the move is a logical one, tying investment outlays to committed services demand.
It also shifts the infrastructure story further toward a sustainable, industrial-scale model at a time when compute demand outstrips supply.
For AWS, this deal is a masterstroke to answer skeptics who want proof of AI monetization “now.”
By securing a $100 billion spending commitment from Anthropic over the next decade, AWS can point to a massive, "guaranteed" revenue backlog for its AI infrastructure.
Though economists might caution against crudely applying Say's Law, which suggests supply can create its own demand, this sort of deal functions as a "reciprocal growth loop," in which a platform provider builds massive capacity and then strategically seeds the very companies that will consume that capacity.
The new AWS deal with Anthropic is a landmark example of "circular infrastructure financing," where a cloud provider invests capital into a high-demand customer, who then immediately pledges that capital (and more) back to the provider in the form of long-term compute commitments.
It wouldn’t be the first time infrastructure or platform "supply" was used to intentionally manufacture its own "demand."
Firms would normally face scrutiny over "build it and they will come" strategies; this deal moves AWS from a "build and wait" model to a "build and fulfill" one.
It transforms speculative capital expenditure (building data centers and custom Trainium chips) into a contractual future cash flow, providing the "proof of monetization" that investors currently crave.