Samsung and AMD put HBM4 at the centre of AI infrastructure
Samsung and AMD deepen AI hardware ties around HBM4, DDR5 and the Helios rack architecture for next-generation infrastructure.
Samsung and AMD have expanded their partnership around AI memory and compute infrastructure, with the latest agreement centred on HBM4 for AMD’s next AI accelerator and DDR5 for upcoming EPYC processors. The deal matters less as a routine supplier update than as a signal of where pressure is building inside AI systems: memory bandwidth, power efficiency, and how tightly components need to be matched across the rack.
The memorandum of understanding covers primary HBM4 supply for the AMD Instinct MI455X GPU and advanced DRAM solutions for 6th Gen AMD EPYC CPUs, codenamed “Venice”. Those parts are set to support systems that combine Instinct GPUs, EPYC CPUs and the AMD Helios rack-scale architecture.
The agreement was announced with the signing ceremony held at Samsung’s chip manufacturing complex in Pyeongtaek, Korea.
Memory becomes the key lever
The real significance lies less in the supply agreement itself than in the layer of the stack it targets. As AI workloads grow more demanding, memory bandwidth and power efficiency are becoming central to overall system performance.
Samsung said its HBM4 has entered mass production and is built on its 6th-generation 10-nanometre-class DRAM process with a 4nm logic base die. The company said the memory offers per-pin speeds of up to 13 gigabits per second and maximum bandwidth of 3.3 terabytes per second.
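The two quoted figures are consistent with each other if one assumes a 2048-bit interface per HBM4 stack, the width the HBM4 standard is built around; the article itself does not state the interface width, so the sketch below treats it as an assumption:

```python
# Sanity check of the quoted HBM4 figures: per-pin speed -> stack bandwidth.
pin_speed_gbps = 13          # per-pin transfer rate in Gb/s (from the article)
interface_width_bits = 2048  # assumed HBM4 stack interface width (not in the article)

# Total bandwidth = pins x per-pin rate, divided by 8 to convert bits to bytes.
bandwidth_gb_per_s = pin_speed_gbps * interface_width_bits / 8
print(f"{bandwidth_gb_per_s / 1000:.2f} TB/s")  # prints 3.33 TB/s
```

That lands on roughly 3.3 terabytes per second, matching Samsung’s stated maximum.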
Young Hyun Jun, Vice Chairman & CEO of Samsung Electronics, said, “Samsung and AMD share a commitment to advancing AI computing, and this agreement reflects the growing scope of our collaboration.”
Helios gives the partnership context
The announcement also places AMD’s Helios platform at the centre of the story. Samsung’s HBM4 is intended for the Instinct MI455X GPU, which AMD said will be a key building block for the Helios rack-scale architecture. That links the memory partnership to a full system approach rather than a single component deal.
AMD and Samsung are also working on DDR5 memory for 6th Gen EPYC processors, with the stated aim of delivering memory solutions for systems built on Helios. Read together, the HBM4 and DDR5 pieces show AMD trying to line up the accelerator, CPU and rack architecture as a more integrated package for AI infrastructure buyers.
Dr. Lisa Su, Chair and CEO of AMD, said, “Powering the next generation of AI infrastructure requires deep collaboration across the industry.” She added that integration “from silicon to system to rack” is essential to accelerating AI innovation.
Supply depth now matters more
The companies also said they will discuss foundry partnership opportunities, with Samsung potentially providing foundry services for future AMD products. That adds another layer to a relationship that already spans graphics, mobile and computing technologies, and suggests Samsung wants a larger role than memory supply alone.
The existing relationship gives that ambition some grounding. Samsung said it has worked with AMD for nearly two decades and has already served as AMD’s primary HBM3E partner for the Instinct MI350X and MI355X AI accelerators.
For customers, the immediate takeaway is straightforward. As AI systems become more dependent on how memory, processors and packaging work together, supplier relationships are becoming part of infrastructure strategy rather than a background procurement detail. This agreement shows Samsung and AMD trying to lock in that coordination earlier and more deeply around the next generation of AI hardware.