AMD outlines vision for “AI everywhere, for everyone” at CES 2026
AMD sets out its AI strategy at CES 2026, spanning yotta-scale infrastructure, AI PCs, edge computing and a US$150 million education commitment.
Advanced Micro Devices used its CES 2026 keynote to set out an expansive vision for the next phase of artificial intelligence, positioning open platforms, large-scale infrastructure and broad ecosystem collaboration as central to making AI more accessible across industries and everyday computing. Speaking at the show’s opening keynote, AMD chair and chief executive Dr Lisa Su described how the company’s growing portfolio of AI products is being applied from data centres to personal devices, supported by partnerships spanning technology, healthcare, science and aerospace.
The presentation brought together a wide range of partners, including OpenAI, Luma AI, Liquid AI, World Labs, Blue Origin, Generative Bionics, AstraZeneca, Absci and Illumina. Each outlined how AMD hardware is being used to support increasingly complex AI workloads, from large language model training to scientific research and industrial automation. The breadth of examples underlined AMD’s aim to position itself as a foundational supplier for AI systems operating at global scale as well as at the edge.
“At CES, our partners joined us to show what’s possible when the industry comes together to bring AI everywhere, for everyone,” said Dr Su. “As AI adoption accelerates, we are entering the era of yotta-scale computing, driven by unprecedented growth in both training and inference. AMD is building the compute foundation for this next phase of AI through end-to-end technology leadership, open platforms, and deep co-innovation with partners across the ecosystem.”
Building the foundation for yotta-scale AI infrastructure
A central theme of the keynote was the rapid expansion of global computing demand driven by AI. AMD pointed to a shift from roughly 100 zettaflops of global compute capacity today to more than 10 yottaflops within the next five years. According to the company, meeting this demand will require more than incremental performance gains. It will depend on modular, rack-scale designs that can evolve across generations while connecting thousands of accelerators into a unified system.
To address this, AMD presented an early look at its “Helios” rack-scale platform, described as a blueprint for yotta-scale AI infrastructure. The platform is designed to deliver up to three AI exaflops of performance within a single rack, targeting the bandwidth and energy efficiency requirements of trillion-parameter model training. Helios combines AMD Instinct MI455X accelerators with AMD EPYC “Venice” CPUs and AMD Pensando “Vulcano” networking technology, all integrated through the ROCm software ecosystem.
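The scale figures above can be sanity-checked with a back-of-envelope sketch. This is our own arithmetic on AMD's quoted numbers, using standard SI prefixes; note the caveat that "AI exaflops" for a Helios rack likely refers to low-precision AI throughput, which may not be directly comparable to the global compute figures.

```python
# Back-of-envelope check of the scale figures from the keynote, using
# standard SI prefixes (zetta = 1e21, exa = 1e18, yotta = 1e24).
# The compute figures are AMD's; the derived ratios are our own arithmetic.

ZETTA, EXA, YOTTA = 1e21, 1e18, 1e24

today = 100 * ZETTA                # ~100 zettaflops globally today
target = 10 * YOTTA                # >10 yottaflops within five years
helios_rack = 3 * EXA              # up to 3 AI exaflops per Helios rack

growth = target / today            # overall growth factor
annual = growth ** (1 / 5)         # implied compound annual growth
racks = target / helios_rack       # racks needed to reach the target
                                   # (caveat: mixes AI-precision and
                                   # general compute figures)

print(f"overall growth: {growth:.0f}x (~{annual:.2f}x per year)")
print(f"Helios racks to reach 10 yottaflops: ~{racks / 1e6:.1f} million")
```

In other words, the projection implies roughly a 100-fold expansion of global compute, or about 2.5× per year compounded over five years.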
Alongside Helios, AMD expanded its data centre AI portfolio with the introduction of the Instinct MI440X GPU, aimed at on-premises enterprise deployments. The MI440X is designed to support scalable training, fine-tuning and inference workloads in a compact eight-GPU configuration that can be integrated into existing infrastructure. AMD positioned it as a practical option for organisations seeking local AI capability without the complexity of hyperscale deployments.
The MI440X builds on the previously announced MI430X GPUs, which are intended for high-precision scientific computing, high-performance computing and sovereign AI workloads. AMD said these GPUs will power AI factory supercomputers globally, including the Discovery system at Oak Ridge National Laboratory in the United States and the Alice Recoque system, France’s first exascale supercomputer.
Looking further ahead, AMD previewed its next-generation Instinct MI500 Series GPUs, planned for launch in 2027. Built on the forthcoming CDNA 6 architecture, combined with advanced 2nm process technology and HBM4E memory, the MI500 Series is expected to deliver up to a 1,000-fold increase in AI performance compared to the Instinct MI300X GPUs introduced in 2023. AMD described this roadmap as essential to sustaining performance growth as AI models continue to scale in size and complexity.
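The 1,000-fold claim can be unpacked with simple arithmetic. The figure and the 2023/2027 dates are AMD's; the implied per-year rate below is our own calculation.

```python
# The keynote cites a 1,000x AI-performance increase from the MI300X (2023)
# to the MI500 Series (planned for 2027). This sketch derives the implied
# annual growth rate; the 1,000x figure is AMD's, the rest is arithmetic.

total_gain = 1000
years = 2027 - 2023                # four years between the two generations

annual = total_gain ** (1 / years)
print(f"implied gain: ~{annual:.1f}x per year over {years} years")
```

That works out to roughly 5.6× per year, a rate that would depend on architecture, process and memory gains compounding together rather than raw transistor scaling alone.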
Expanding AI experiences from PCs to the edge
Beyond the data centre, AMD devoted significant attention to the role of AI in personal and embedded computing. The company argued that the PC will become a primary interface for AI, with billions of users interacting with intelligent applications both locally and via the cloud. To support this shift, AMD announced new additions to its Ryzen AI platform portfolio, aimed at improving on-device performance while maintaining compatibility with cloud-based workflows.
The next-generation Ryzen AI 400 Series and Ryzen AI PRO 400 Series platforms deliver up to 60 trillion operations per second of neural processing performance, alongside efficiency improvements and full ROCm platform support. AMD said this combination enables smoother scaling between cloud and client environments. Initial systems are scheduled to ship in January 2026, with broader availability across original equipment manufacturers expected in the first quarter of the year.
AMD also highlighted its Ryzen AI Max+ 392 and Ryzen AI Max+ 388 platforms, which support models with up to 128 billion parameters using 128GB of unified memory. These configurations are intended to enable advanced local inference, creative workflows and gaming experiences in premium thin-and-light notebooks and small form factor desktop systems, reducing reliance on remote compute resources for demanding tasks.
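A rough memory-footprint check shows why 128 billion parameters pairs with 128GB of unified memory. The bytes-per-weight values below are standard for common numeric formats; the inference that such models would run at reduced precision is ours, not AMD's.

```python
# Rough memory-footprint check for the 128B-parameter figure AMD quotes
# for Ryzen AI Max+ systems with 128GB of unified memory. Bytes-per-weight
# values are standard; the fit at 8-bit or below is our own inference.

PARAMS = 128e9                     # 128 billion parameters
GB = 1e9

for fmt, bytes_per_weight in [("FP16", 2), ("INT8", 1), ("INT4", 0.5)]:
    weights_gb = PARAMS * bytes_per_weight / GB
    fits = "fits in" if weights_gb <= 128 else "exceeds"
    print(f"{fmt}: ~{weights_gb:.0f} GB of weights ({fits} 128GB)")
```

At 16-bit precision the weights alone would need around 256GB, so a 128B-parameter model fits only with 8-bit or lower quantisation, and even then activations and caches need additional headroom.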
For developers, AMD unveiled the Ryzen AI Halo developer platform, a compact desktop system designed to support AI application development using high-performance Ryzen AI Max+ processors. The company said the platform offers strong tokens-per-second-per-dollar performance, making it suitable for experimentation and deployment in smaller development environments. Availability is expected in the second quarter of 2026.
At the edge, AMD introduced a new line of Ryzen AI Embedded processors aimed at AI-driven physical systems. The embedded portfolio, which includes the P100 and X100 Series processors, targets applications such as automotive digital cockpits, smart healthcare equipment and autonomous systems, including humanoid robotics. AMD said these processors are designed to balance high-performance AI compute with the efficiency demands of constrained embedded environments.
Partnerships, policy and investment in AI education
The keynote also addressed the broader policy and societal context surrounding AI development. Dr Su was joined on stage by Michael Kratsios, Director of the White House Office of Science and Technology Policy, to discuss the Genesis Mission, a public-private initiative intended to strengthen United States leadership in AI and related technologies. The programme focuses on advancing scientific discovery and long-term competitiveness through coordinated investment and infrastructure development.
As part of Genesis, AMD is supporting two new AI supercomputers at Oak Ridge National Laboratory, named Lux and Discovery. These systems are intended to provide researchers with access to advanced compute resources for large-scale scientific and AI workloads. Kratsios highlighted the importance of collaboration between government, industry and academia in ensuring that AI capabilities are widely accessible and responsibly developed.
In support of this goal, AMD announced a commitment of US$150 million to expand access to AI education. The funding is intended to bring AI tools and learning opportunities into more classrooms and communities, with a focus on hands-on experience. The company positioned the investment as part of a broader effort to build a more inclusive pipeline of AI talent.
The keynote concluded with recognition of more than 15,000 students who participated in the AMD AI Robotics Hackathon, organised in partnership with Hack Club. AMD described the programme as an example of how industry-led initiatives can encourage early engagement with AI and robotics, helping to develop practical skills alongside formal education.