OpenAI, the artificial intelligence research company, is significantly broadening its hardware partnerships. Following a monumental $100 billion agreement with Nvidia, currently the world's most valuable publicly traded company, OpenAI has now announced a major strategic deal with AMD, a key competitor in the chipmaking arena. The move underscores OpenAI's aggressive expansion and its push to secure a diverse supply chain for the chips that power its advanced AI technologies, including its popular ChatGPT chatbot.
The partnership with AMD is set to kick off in the latter half of next year, when OpenAI plans to install AMD chips in new data centers. Those upcoming facilities will be distinct from the ones already earmarked for construction in Texas, New Mexico, Ohio, and an additional, as-yet-unnamed site in the Midwest. Over several years, OpenAI anticipates that its deployment of AMD chips will require 6 gigawatts of power, roughly enough electricity to supply every household in a state like Massachusetts. That demand comes on the heels of the Nvidia agreement, under which OpenAI committed to deploying chips requiring an even larger 10 gigawatts.
Notably, the AMD deal does not involve a direct investment in OpenAI by the chipmaker. Instead, it grants OpenAI the option to purchase up to 160 million AMD shares at a symbolic price of one penny each, meaning the full option would cost OpenAI only about $1.6 million to exercise. That would translate into roughly a 10 percent ownership stake in AMD, potentially providing OpenAI with additional capital as it finances its ambitious computing infrastructure projects over the coming years. The market reacted positively to the news, with AMD's shares surging more than 20 percent in premarket trading on Monday.
This agreement reflects a broader industry trend where tech giants are investing hundreds of billions into new data center construction. Companies like OpenAI, Amazon, Google, Meta, and Microsoft collectively aim to spend over $325 billion on these facilities by the end of this year alone. While established behemoths such as Amazon, Microsoft, and Google can leverage their vast cash reserves for these expenditures, newer and smaller entities like OpenAI often need to explore innovative funding strategies, raising or borrowing tens of billions of dollars to keep pace.
Under its ambitious “Stargate Project,” OpenAI had previously announced plans with the cloud computing giant Oracle and the Japanese conglomerate SoftBank to spend more than $400 billion on new data centers in the United States. But the startup and its allies do not yet have the capital to fully fund those monumental projects, which has led to these creative financial arrangements. The recent Nvidia deal, for instance, not only secured a supply of Nvidia's chips but also included a $100 billion investment from Nvidia, with an initial $10 billion paid upfront and the remaining $90 billion spread over several years. The pattern highlights OpenAI's strategy of drawing significant capital from the very companies whose products and services are critical to its operations.
(The New York Times sued OpenAI and Microsoft in 2023, alleging copyright infringement of news content related to AI systems. Both companies have denied those claims.)