Microsoft is making significant strides in its artificial intelligence (AI) hardware initiatives by deepening its collaboration with OpenAI, extending the partnership beyond software into silicon. CEO Satya Nadella announced that Microsoft will integrate OpenAI’s custom AI chip designs into its long-term semiconductor strategy, a decision that could reshape how future AI systems are built and scaled. In a podcast discussion, Nadella stated, “We now have access to OpenAI’s chip and hardware research through 2030,” extending the companies’ strategic partnership to the end of the decade. Under the updated agreement, Microsoft will also retain the right to use OpenAI’s AI models through 2032. OpenAI, which co-develops advanced AI processors and networking hardware with Broadcom, is expanding its innovation efforts from algorithms into hardware.
Microsoft intends to “industrialise” these designs, readying them for mass production while folding them into its intellectual property portfolio. The partnership is expected to anchor the company’s broader cloud and AI strategy over the next decade, marking a new phase in one of the tech industry’s most important alliances. For Microsoft, the collaboration offers faster access to state-of-the-art hardware tailored to OpenAI’s model-training needs. For OpenAI, it provides access to Microsoft’s global infrastructure, allowing its innovations to reach customers at scale. The result is a mutually reinforcing cycle: OpenAI builds models that push hardware to its limits, while Microsoft builds systems that can run them efficiently.
Nadella described this as a “strategic alignment,” one that will accelerate Microsoft’s semiconductor ambitions and strengthen its competitive position in the AI sector. At the core of the initiative is Microsoft’s new Fairwater datacentre architecture: expansive, interconnected facilities designed for the AI era. Each Fairwater site functions as a node in a vast network for training and deploying large-scale AI models. The Atlanta facility, for instance, is already in operation and houses NVIDIA GB200 NVL72 rack-scale systems capable of scaling to hundreds of thousands of Blackwell GPUs. It also employs advanced liquid cooling technology that minimizes water consumption, setting a new sustainability standard for data infrastructure.
Scott Guthrie, executive vice president of Cloud + AI at Microsoft, commented, “Leading in AI isn’t just about adding more GPUs; it’s about constructing the infrastructure that allows them to work collectively as a cohesive system.” He added, “Fairwater embodies that comprehensive engineering and is crafted to meet increasing demand with practical performance, rather than just theoretical capacity.” The overarching vision is command of the full AI stack. From the supercomputers built with OpenAI in 2019 to the systems behind GPT-4 and beyond, Microsoft has been advancing every layer of AI infrastructure: chips, networks, and data architecture. By absorbing OpenAI’s hardware expertise, Microsoft is positioned to manage the entire spectrum of AI innovation, from silicon to supercomputers to software.
As Nadella expressed, this endeavor is not merely about increasing computing power; it’s about “controlling the entire stack of AI innovation.” In the worldwide AI competition, Microsoft is no longer just procuring GPUs; it is establishing the factory that enables them to function as a cohesive system.


