Nvidia has announced an unprecedented investment of up to $100 billion in collaboration with OpenAI, in what is shaping up to be one of the biggest strategic moves in the tech sector. The agreement aims to deploy 10 gigawatts of specialized AI computing infrastructure over the next decade.
This alliance strengthens Nvidia’s position as the dominant provider of systems for advanced AI models and ensures OpenAI priority access to massive computing resources to train the model generations behind products like ChatGPT.
An alliance that redraws the AI map
The collaboration goes far beyond a simple chip purchase. OpenAI and Nvidia will work together to deploy large-scale infrastructure based on the new Vera Rubin platform, developed to handle next-generation AI workloads. According to OpenAI’s official statement, the first gigawatt of systems will come online in the second half of 2026.
The goal is to build “AI factories,” a new generation of data centers capable of processing colossal volumes of information to train and operate more complex, faster models with less reliance on third parties.
Why it matters: scale, power, and dependence
This investment reshapes the strategic landscape of artificial intelligence. For Nvidia, it consolidates its role as the technical backbone of the global AI ecosystem. For OpenAI, it represents a key diversification of critical infrastructure, following the redefinition of its alliance with Microsoft and the agreement with Oracle to deploy part of its operations on the OCI cloud.
Through this move, OpenAI seeks greater control over its technological scaling and to avoid bottlenecks in access to computing, an increasingly strategic resource. At the same time, it intensifies debates about reliance on single suppliers, concentration of power, and future regulation.
Challenges of scale: energy, sustainability, and logistics
Sustaining 10 gigawatts of computing capacity means drawing continuous power on the order of a large metropolitan area’s electricity load. This raises questions about the environmental sustainability of the initiative and the grid infrastructure required to keep large-scale data centers operational around the clock.
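To put that figure in perspective, a back-of-the-envelope calculation shows what 10 gigawatts implies over a year. The assumption of constant full draw, 24 hours a day, is ours for illustration; real facilities operate below peak capacity.

```python
# Illustrative scale check (simplifying assumption: constant 10 GW draw, 24/7).
CAPACITY_GW = 10
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

# Energy = power x time; divide by 1,000 to convert GWh to TWh.
annual_twh = CAPACITY_GW * HOURS_PER_YEAR / 1000

print(f"{annual_twh:.1f} TWh/year")  # → 87.6 TWh/year
```

Under that ceiling assumption, the deployment would consume on the order of 87 terawatt-hours annually, which is why the announcement has drawn as much attention from energy planners as from the tech industry.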
Additionally, the logistics are monumental: multiple sites, regulatory approvals, critical materials, and high-performance cooling. Recent analyses suggest these challenges could become bottlenecks if not managed with technical and political precision.
A clear signal for the market
This announcement is not an isolated move. It reinforces the trend of major tech companies forming vertical alliances between model developers and computing providers. The future of AI depends as much on the code as on who controls the data centers where those models are trained and served.
For the entrepreneurial ecosystem, this is a reminder that infrastructure is no longer a technical detail: it is a strategic factor. Access to computing, and the alliances that enable it, will determine who leads and who watches from the sidelines.