Amazon.com Inc.’s cloud division has accelerated its push into artificial intelligence hardware, rushing its newest in-house AI chip to market in an effort to challenge offerings from Nvidia Corp. and Google. The updated accelerator, known as Trainium3, has already been deployed in select AWS data centers and will become available to customers starting Tuesday, according to Dave Brown, vice president at Amazon Web Services.
“We’ll be scaling rapidly as we head into early next year,” Brown said, underscoring the urgency behind Amazon’s latest AI hardware rollout.
This renewed chip strategy is central to Amazon’s ambition to carve out a stronger position in the AI race. AWS remains the world’s largest provider of cloud computing and storage, yet it hasn’t matched that dominance among the developers building next-generation AI systems. Many companies continue to choose Microsoft, whose deep partnership with OpenAI gives it a powerful edge, or Google, whose specialized AI hardware has attracted its own loyal base of developers.
Amazon shares gained 1.6% to $237.71 as of 11:09 a.m. in New York, while Nvidia trimmed earlier gains and Advanced Micro Devices Inc., another major AI chip competitor, fell to the session’s low.
To win over customers, Amazon is positioning Trainium as a cost-efficient alternative. The company says its chips can handle the heavy computational loads of AI training at lower cost and with greater energy efficiency than Nvidia’s high-end GPUs. “We’ve been very pleased with the price-performance profile of Trainium,” Brown said.
Trainium3 marks Amazon’s second chip release in roughly a year, a remarkably fast pace by semiconductor industry standards. When engineers powered up the new hardware for the first time in August, one AWS staffer joked, “The main thing we’re hoping for is that we don’t see any smoke or fire.” The rapid cycle mirrors Nvidia’s own accelerated roadmap, as the GPU leader has pledged annual releases of new chips.
But Amazon faces a key challenge: its silicon still lacks the deep and mature software ecosystem that makes Nvidia’s GPUs so easy to adopt. Bedrock Robotics, an AI startup building autonomous systems for heavy construction machinery, runs its operations on AWS servers. Yet when it comes to training the advanced models that steer its excavators, the company sticks with Nvidia hardware. “We need strong performance and ease of use,” said CTO Kevin Peterson. “That’s Nvidia.”
Much of the current Trainium supply is being used by Anthropic, the fast-growing AI startup. AWS disclosed earlier this year that more than 500,000 Trainium chips are powering Anthropic’s model-training clusters across data centers in Indiana, Mississippi, and Pennsylvania. Amazon aims to scale that number to one million chips dedicated to Anthropic by year-end.
Amazon is betting that Anthropic’s momentum, combined with AWS’s expanding AI services, will attract a broader customer base. Still, Amazon has named few other major adopters, leaving analysts without enough data to fully assess Trainium’s competitive standing. Anthropic itself remains diversified, also using Google’s Tensor Processing Units and securing tens of billions of dollars’ worth of computing access through a separate agreement with Alphabet.
The Trainium3 announcement was part of re:Invent, Amazon’s flagship cloud conference, which has increasingly evolved into a showcase for the company’s growing suite of AI products. AWS uses the event to court both cutting-edge developers and large enterprises looking to integrate AI more deeply into their operations.
On Tuesday, Amazon also rolled out updates to its primary family of AI models, known as Nova. The second-generation Nova 2 line includes Omni, a multimodal model capable of interpreting text, images, audio, and video inputs and responding in either text or images.
As with its chip strategy, Amazon is pitching the Nova models as high-performance tools at attractive prices. Earlier Nova versions have lagged the industry’s top performers on standardized benchmarks. But Rohit Prasad, who oversees Amazon’s AGI efforts and much of its model development, argued that practical results matter more. “The real benchmark is the real world,” he said, adding that he expects the new models to be competitive.
Amazon is also giving customers more flexibility when customizing the models. A new tool called Nova Forge allows advanced users to access in-progress versions of Amazon’s models before training is fully completed and tailor them using their own datasets.
Reddit Inc. is already leveraging Nova Forge to build an AI system that can determine whether posts violate the platform’s safety rules. According to Reddit CTO Chris Slowe, many customers assume they must use the most powerful model for every use case, even when a purpose-built expert model would deliver better results. “The real value comes from making it an expert in our specific domain,” Slowe said.
