Nvidia has strengthened its position in the artificial intelligence (AI) hardware market by partnering with Samsung Foundry to design and manufacture custom central processing units (CPUs) and XPUs (custom accelerator chips). The collaboration, revealed during the 2025 Open Compute Project (OCP) Global Summit in San Jose, underscores Nvidia’s determination to expand its NVLink Fusion ecosystem and tighten its grip on next-generation AI infrastructure.
Expanding the NVLink Fusion ecosystem
Nvidia’s partnership with Samsung follows its recent collaboration with Intel, which enabled x86 CPUs to connect directly with Nvidia platforms. The addition of Samsung Foundry, known for its advanced semiconductor manufacturing, provides Nvidia with a full design-to-production partner for custom silicon.
Ian Buck, Nvidia’s Vice President of HPC and Hyperscale, described NVLink Fusion as an IP and chiplet solution designed to seamlessly connect CPUs, graphics processing units (GPUs), and accelerators within rack-scale systems. The technology enables direct, high-speed communication between processors, effectively removing traditional data bottlenecks between computing components.
During the summit, Nvidia announced that several key ecosystem partners, including Intel and Fujitsu, can now develop CPUs capable of communicating directly with Nvidia GPUs through NVLink Fusion. Samsung’s participation expands this ecosystem further, offering the ability to design and produce custom chips optimised for AI workloads.
Strengthening Nvidia’s role in AI hardware
The collaboration marks a pivotal step for Nvidia as it works to secure a central role in the global AI computing landscape. With competitors such as OpenAI, Google, AWS, Broadcom, and Meta developing their own chips, Nvidia’s strategy aims to ensure its technologies remain vital in data centre environments.
According to TechPowerUp, Nvidia has implemented strict licensing and connectivity requirements within NVLink Fusion. Any custom chip developed through the ecosystem must connect to Nvidia products, with Nvidia maintaining control over communication controllers, physical layers (PHY), and NVLink Switch licensing. This approach gives Nvidia significant leverage over the ecosystem while raising questions about openness and interoperability.
By maintaining control over how third-party chips interact with its technology, Nvidia can ensure consistent performance and compatibility. However, it also reinforces concerns about potential vendor lock-in, particularly as rivals seek greater autonomy through in-house chip development.
Responding to rising competition
The move reflects a broader industry shift towards tighter integration between hardware and AI software. Companies such as Broadcom are advancing custom accelerators for hyperscalers, while OpenAI is reportedly developing its own in-house chips to reduce reliance on Nvidia’s GPUs.
In response, Nvidia is embedding its intellectual property across the hardware stack, positioning itself as an essential partner for building AI data centres rather than merely a GPU supplier. By expanding NVLink Fusion and partnering with Samsung Foundry, Nvidia aims to deliver custom solutions that can be deployed rapidly and at scale.
Although Nvidia supports open hardware initiatives like the OCP, its NVLink Fusion ecosystem remains tightly controlled to ensure architectural consistency and performance. This strategic balance between openness and control could give Nvidia an edge in maintaining dominance over AI infrastructure while reinforcing its reputation for technical excellence.
Through this partnership, Nvidia and Samsung are setting the stage for a new phase in AI hardware competition—one defined not only by processing power but by who controls the entire silicon-to-software pipeline.