Amazon Web Services unveiled a new AI chip, Trainium3, and announced a new supercomputer that will be built with its own chips to serve its AI ambitions.
The move signals that Amazon is reducing its reliance on Nvidia for AI chips and is ready to control silicon and supercomputers of its own.
Amazon's Trainium2 chip is now generally available and offers 30% to 40% better price performance than the current generation of servers running Nvidia GPUs.
Trainium3, which AWS previewed ahead of a late 2025 release, has been billed as a "next-generation AI training chip." Servers loaded with Trainium3 chips are expected to offer four times the performance of those packed with Trainium2 chips.
Google and OpenAI are also exploring custom, in-house chip designs of their own to reduce their dependence on Nvidia.
AWS shared that it was working with Anthropic to build an UltraCluster of servers that form the basis of a supercomputer it has named Project Rainier.
When completed, Project Rainier is expected to be the largest AI compute cluster reported to date, and Anthropic will use it to build and deploy its future models.
AWS also acknowledged that companies serious about building their own AI models will need access to highly specialized computing capable of handling a new era of AI workloads.
Nvidia is still responsible for 99% of the workloads for training AI models today, but AWS believes Trainium can carve out a good niche for itself.
Other big tech companies are also designing their own AI chips and supercomputers, including Microsoft, while Elon Musk's xAI built a supercomputer in Memphis this year using 100,000 Nvidia GPUs.