NVIDIA deepens CoreWeave partnership with early access to Vera Rubin chips
Nvidia expands its CoreWeave partnership with early access to Vera Rubin chips and a US$2bn investment to accelerate large-scale AI infrastructure.
Nvidia has expanded its long-running partnership with cloud provider CoreWeave, confirming that CoreWeave will be among the first to deploy its next-generation Vera Rubin computing platform. The move strengthens a relationship that already plays a central role in Nvidia’s artificial intelligence strategy, combining hardware supply, infrastructure planning and financial backing.
As part of the agreement, Nvidia will make a direct equity investment of US$2 billion in CoreWeave. The commitment signals Nvidia’s confidence in the specialist cloud firm as demand for large-scale AI computing continues to rise, and highlights how closely tied capital investment and compute capacity have become in the current market.
The two companies said the expanded partnership is designed to support rapid growth in AI infrastructure, with a particular focus on building and operating what Nvidia describes as AI factories. These facilities are expected to underpin the next phase of AI development as models become larger, more complex and more widely deployed across industries.
Building the foundations for large-scale AI infrastructure
At the centre of the agreement is a plan to accelerate the construction of AI-focused data centres. CoreWeave has set out ambitions to build more than five gigawatts of capacity by 2030, a scale that reflects the growing power and energy requirements of modern AI workloads.
Nvidia’s role goes beyond supplying accelerators and systems. The company is also backing CoreWeave in securing land, power and physical infrastructure, linking financial resources directly to deployment timelines. This approach reflects a broader shift in the industry, where the ability to expand AI capacity increasingly depends on close coordination between funding, energy availability and hardware delivery.
Jensen Huang, founder and chief executive of Nvidia, said the scale of change underway is unprecedented. “AI is entering its next frontier and driving the largest infrastructure buildout in human history,” he said. “CoreWeave’s deep AI factory expertise, platform software, and unmatched execution velocity are recognised across the industry. Together, we’re racing to meet extraordinary demand for Nvidia AI factories – the foundation of the AI industrial revolution.”
The partnership also underlines Nvidia’s strategy of working closely with a small number of specialist partners capable of operating at extreme scale. By aligning investment decisions with infrastructure rollouts, Nvidia aims to ensure that its most advanced platforms reach the market quickly and in sufficient volume to meet demand.
Aligning software and hardware across the stack
Beyond physical infrastructure, Nvidia and CoreWeave are deepening their collaboration across software and operational layers. CoreWeave’s cloud stack and internal tooling will be tested and validated alongside Nvidia’s reference architectures, allowing both companies to refine how their technologies work together in production environments.
Michael Intrator, co-founder, chairman and chief executive of CoreWeave, said the partnership has always been driven by close integration. “From the very beginning, our collaboration has been guided by a simple conviction: AI succeeds when software, infrastructure, and operations are designed together,” he said.
CoreWeave is expected to deploy multiple generations of Nvidia platforms across its data centres. This includes early adoption of the Rubin platform, Vera CPUs, and BlueField storage systems. The approach suggests Nvidia is using CoreWeave as a proving ground for full-stack deployments, rather than treating components such as GPUs, CPUs and networking as separate offerings.
By validating entire systems in real-world conditions, Nvidia can fine-tune performance and reliability before wider release. For CoreWeave, early access to new platforms offers a competitive advantage as customers seek the latest hardware to train and run increasingly demanding AI models.
Vera CPUs and shifting pressures in the AI supply chain
A notable element of the announcement is Nvidia’s plan to offer its upcoming Vera CPUs as a standalone option. The CPUs are based on a custom Arm architecture and are designed with high core counts, large coherent memory capacity, and high-bandwidth interconnects to support emerging AI workloads.
In an interview clip shared by Bloomberg’s Ed Ludlow on X, Huang described the move as a significant shift. “For the very first time, we’re going to be offering Vera CPUs. Vera is such an incredible CPU. We’re going to offer Vera CPUs as a standalone part of the infrastructure. And so not only can you run your computing stack on Nvidia GPUs, but you can now also run it, wherever your CPU workload runs, on Nvidia CPUs… Vera is completely revolutionary,” he said.
The decision reflects changing dynamics in the AI hardware market. As agent-based and inference-heavy applications grow, server CPUs are becoming another pressure point in the supply chain, alongside GPUs. By offering high-end CPUs separately, Nvidia gives customers more flexibility and potentially lowers the barrier to entry for organisations that do not require full rack-scale systems.
Taken together, the expanded partnership illustrates how AI infrastructure is evolving into a tightly integrated mix of hardware, software and finance. For Nvidia and CoreWeave, the agreement positions both companies to play a central role in the next phase of AI deployment, as demand continues to outpace existing capacity.