
Samsung partners with Nvidia to develop custom CPUs and XPUs for AI dominance

Nvidia partners with Samsung to develop custom CPUs and XPUs, expanding its NVLink Fusion ecosystem to strengthen its AI hardware dominance.

Nvidia has strengthened its position in the artificial intelligence (AI) hardware market by partnering with Samsung Foundry to design and manufacture custom central processing units (CPUs) and XPUs, the industry shorthand for application-specific accelerators. The collaboration, revealed during the 2025 Open Compute Project (OCP) Global Summit in San Jose, underscores Nvidia’s determination to expand its NVLink Fusion ecosystem and tighten its influence over next-generation AI infrastructure.

Nvidia’s partnership with Samsung follows its recent collaboration with Intel, which enabled x86 CPUs to connect directly with Nvidia platforms. The addition of Samsung Foundry, known for its advanced semiconductor manufacturing, provides Nvidia with a full design-to-production partner for custom silicon.

Ian Buck, Nvidia’s Vice President of HPC and Hyperscale, described NVLink Fusion as an IP and chiplet solution designed to seamlessly connect CPUs, graphics processing units (GPUs), and accelerators within rack-scale systems. The technology enables direct, high-speed communication between processors, effectively removing traditional data bottlenecks between computing components.
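To make that bottleneck concrete, the sketch below uses the standard CUDA runtime API to request a direct peer-to-peer copy between two GPUs. It illustrates only the general idea behind high-speed device-to-device links; Nvidia has not published NVLink Fusion programming interfaces in this announcement, so no Fusion-specific calls appear here, and the buffer size is an arbitrary example.

// Illustrative sketch: direct GPU-to-GPU copy with the standard CUDA runtime API.
// When a fast interconnect such as NVLink is present and peer access is enabled,
// the copy travels device-to-device; otherwise it is staged through host memory,
// which is the traditional bottleneck described above.
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    if (deviceCount < 2) {
        std::printf("This sketch needs at least two GPUs.\n");
        return 0;
    }

    const size_t bytes = 64 << 20;   // 64 MiB test buffer (arbitrary size)
    void *src = nullptr, *dst = nullptr;

    cudaSetDevice(0);
    cudaMalloc(&src, bytes);         // buffer on GPU 0
    cudaSetDevice(1);
    cudaMalloc(&dst, bytes);         // buffer on GPU 1

    // Ask whether GPU 1 can address GPU 0's memory directly.
    int canAccessPeer = 0;
    cudaDeviceCanAccessPeer(&canAccessPeer, 1, 0);
    if (canAccessPeer) {
        cudaDeviceEnablePeerAccess(0, 0);   // enable the direct path from GPU 1 to GPU 0
    }

    // Device-to-device copy: direct over the peer link when enabled,
    // otherwise staged through host memory by the runtime.
    cudaMemcpyPeer(dst, 1, src, 0, bytes);
    cudaDeviceSynchronize();

    std::printf("Copied %zu bytes (%s path)\n", bytes,
                canAccessPeer ? "direct peer" : "host-staged");

    cudaFree(dst);
    cudaSetDevice(0);
    cudaFree(src);
    return 0;
}

Compiled with nvcc on a multi-GPU machine, the sketch simply reports which path the copy took; it is a minimal demonstration of why direct processor-to-processor links matter, not an implementation of NVLink Fusion itself.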

During the summit, Nvidia announced that several key ecosystem partners, including Intel and Fujitsu, can now develop CPUs capable of communicating directly with Nvidia GPUs through NVLink Fusion. Samsung’s participation expands this ecosystem further, offering the ability to design and produce custom chips optimised for AI workloads.

Strengthening Nvidia’s role in AI hardware

The collaboration marks a pivotal step for Nvidia as it works to secure a central role in the global AI computing landscape. With competitors such as OpenAI, Google, AWS, Broadcom, and Meta developing their own chips, Nvidia’s strategy aims to ensure its technologies remain vital in data centre environments.

According to TechPowerUp, Nvidia has implemented strict licensing and connectivity requirements within NVLink Fusion. Any custom chip developed through the ecosystem must connect to Nvidia products, with Nvidia maintaining control over communication controllers, physical layers (PHY), and NVLink Switch licensing. This approach gives Nvidia significant leverage over the ecosystem while raising questions about openness and interoperability.

By maintaining control over how third-party chips interact with its technology, Nvidia can ensure consistent performance and compatibility. However, it also reinforces concerns about potential vendor lock-in, particularly as rivals seek greater autonomy through in-house chip development.

Responding to rising competition

The move reflects a broader industry shift towards tighter integration between hardware and AI software. Companies such as Broadcom are advancing custom accelerators for hyperscalers, while OpenAI is reportedly developing its own in-house chips to reduce reliance on Nvidia’s GPUs.

In response, Nvidia is embedding its intellectual property across the hardware stack, positioning itself as an essential partner for building AI data centres rather than merely a GPU supplier. By expanding NVLink Fusion and partnering with Samsung Foundry, Nvidia aims to deliver custom solutions that can be deployed rapidly and at scale.

Although Nvidia supports open hardware initiatives like the OCP, its NVLink Fusion ecosystem remains tightly controlled to ensure architectural consistency and performance. This strategic balance between openness and control could give Nvidia an edge in maintaining dominance over AI infrastructure while reinforcing its reputation for technical excellence.

Through this partnership, Nvidia and Samsung are setting the stage for a new phase in AI hardware competition—one defined not only by processing power but by who controls the entire silicon-to-software pipeline.
