Amazon.com Inc. has landed a landmark $38 billion agreement through its cloud division, Amazon Web Services (AWS), to help meet OpenAI’s surging demand for computing power, marking another major milestone in the artificial intelligence race.
Under the seven-year partnership, AWS will supply the creator of ChatGPT with access to hundreds of thousands of Nvidia Corp. graphics processing units (GPUs), the companies announced on Monday. This long-term deal underscores OpenAI’s evolution from a research lab into a full-fledged AI powerhouse that has upended the tech landscape.
OpenAI’s aggressive expansion continues at a staggering pace, with the company reportedly committing around $1.4 trillion to develop and sustain the massive infrastructure needed to power its advanced AI systems. This enormous investment has sparked growing debate across Wall Street and Silicon Valley over whether the AI boom could be inflating a new tech investment bubble.
For Amazon, the deal represents a significant validation of its cloud capabilities at a time when it’s been striving to maintain relevance in the AI era. “As OpenAI continues to push the boundaries of what’s possible, AWS’s world-class infrastructure will serve as the foundation for their AI ambitions,” said Matt Garman, Chief Executive Officer of AWS, in a statement.
Amazon is the world’s largest provider of rented cloud computing power, yet until now it had remained largely on the sidelines while other major U.S. cloud providers secured key partnerships with OpenAI.
Microsoft Corp., OpenAI’s biggest investor and once its exclusive cloud partner, recently revealed a renewed commitment from the AI firm to spend approximately $250 billion on Microsoft’s Azure platform.
Meanwhile, Oracle Corp. has struck a $300 billion deal to build and operate data centers for OpenAI, and Alphabet Inc.’s Google Cloud Platform is also contributing computing resources to support ChatGPT. OpenAI has further diversified its cloud strategy with a $22.4 billion agreement with CoreWeave Inc., one of several rising “neocloud” providers catering to the AI industry.
With this new collaboration, AWS joins the expanding network of cloud providers supporting OpenAI’s growing computing requirements. Under the deal, OpenAI will immediately begin using AWS infrastructure, with the full capacity expected to be delivered by the end of 2026. The agreement also allows for potential expansion in subsequent years, ensuring scalability as AI workloads continue to grow.
Amazon plans to deploy hundreds of thousands of advanced AI chips, including Nvidia’s GB200 and GB300 accelerators, across large-scale clusters optimized for AI training and inference. These systems will play a key role in helping ChatGPT generate real-time responses and train next-generation models more efficiently.
“Scaling frontier AI requires massive, reliable compute,” said Sam Altman, CEO of OpenAI. “Our partnership with AWS strengthens the global compute ecosystem that will power the next era of AI innovation and help bring these technologies to everyone.”
Amazon’s relationship with OpenAI also fits into its broader strategy of deepening its role in the AI infrastructure market. The company has been expanding partnerships with other leading AI developers, most notably Anthropic PBC, a rival founded by former OpenAI researchers.
Just last week, Amazon announced that a data center complex dedicated to Anthropic, powered by hundreds of thousands of AWS’s custom-built Trainium2 chips, was now operational. These proprietary chips are designed to deliver high-efficiency training performance while reducing energy costs, positioning AWS as a serious competitor to Nvidia in the AI hardware space.
Meanwhile, Google has made its own high-profile move in this competitive race, revealing plans to provide Anthropic with up to one million specialized AI chips in a multibillion-dollar deal. The growing number of massive infrastructure commitments from the world’s biggest tech companies reflects the intensifying arms race to dominate the next generation of AI platforms.
For Amazon, the OpenAI partnership signals a turning point, one that strengthens its standing in the global AI ecosystem and underscores its capacity to deliver the large-scale, reliable compute power demanded by frontier AI models. It also marks a strategic shift for OpenAI, which is now taking a multi-cloud approach to ensure redundancy, performance, and supply stability across multiple providers.
As AI development accelerates, the sheer magnitude of these investments highlights the unprecedented scale of resources being devoted to the technology’s advancement. While some analysts warn of potential overinvestment risks, others view these commitments as essential groundwork for the next technological revolution.
Either way, Amazon’s latest $38 billion deal confirms one thing: the battle for AI infrastructure dominance is just getting started, and AWS is determined to claim a leading role in shaping the future of artificial intelligence.
