At its annual "Advancing AI" event, held on June 12, 2025, in Santa Clara, AMD laid out its vision for an open artificial intelligence (AI) ecosystem. With a coordinated launch of hardware, software, and infrastructure solutions, the company aims to cover the full AI spectrum, with an emphasis on openness, performance, and flexibility.
Major Acceleration with AMD Instinct™ MI350 Series GPUs
The new AMD Instinct MI350 Series GPU accelerators, led by the MI350X and MI355X models, mark a significant generational leap, delivering four times the AI compute of the previous generation. Designed for the growing demands of generative AI and high-performance computing, they also offer a 35-fold generational improvement in inference performance and up to 40% more tokens per dollar than competing solutions.
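As a rough illustration of how a tokens-per-dollar figure like the one above is typically derived, the sketch below computes the metric from inference throughput and hourly instance cost. All numbers in it are hypothetical placeholders, not AMD or competitor figures.

```python
# Illustrative sketch of the tokens-per-dollar metric.
# All throughput and pricing values below are hypothetical placeholders.

def tokens_per_dollar(throughput_tokens_per_s: float,
                      hourly_cost_usd: float) -> float:
    """Tokens generated per dollar of compute: tokens per hour divided by hourly cost."""
    return throughput_tokens_per_s * 3600 / hourly_cost_usd

baseline = tokens_per_dollar(throughput_tokens_per_s=10_000, hourly_cost_usd=4.0)
improved = tokens_per_dollar(throughput_tokens_per_s=14_000, hourly_cost_usd=4.0)

print(f"baseline: {baseline:,.0f} tokens/$")
print(f"improved: {improved:,.0f} tokens/$ ({improved / baseline - 1:.0%} more)")
```

With these placeholder inputs, a 40% throughput advantage at equal cost translates directly into 40% more tokens per dollar; in practice both throughput and pricing differ between platforms, which is why the metric is reported as a combined figure.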
Open and Scalable AI Infrastructure
AMD also introduced a complete rack-scale AI infrastructure built on open standards. It combines 5th generation AMD EPYC™ processors, AMD Pensando™ Pollara network cards, and Instinct™ MI350 Series GPUs. Already deployed with major players such as Oracle Cloud Infrastructure (OCI), this infrastructure will be broadly available in the second half of 2025.
Furthermore, AMD previewed its next-generation rack-scale infrastructure, code-named "Helios," which will combine upcoming Instinct MI400 Series GPUs, AMD EPYC™ "Venice" CPUs based on the Zen 6 architecture, and AMD Pensando "Vulcano" network cards. Helios is expected to deliver up to 10 times the performance of the current generation on Mixture of Experts models.
Open-Source Software: ROCm 7 and AMD Developer Cloud
AMD continues to promote an open software ecosystem with the release of ROCm 7. This latest version improves support for industry-standard frameworks, broadens hardware compatibility, and adds new tools that simplify the development of generative AI applications.
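As a minimal sketch of what framework support looks like in practice, the snippet below assumes a PyTorch build with ROCm support; on such builds the GPU is exposed through the familiar torch.cuda API via HIP. Device names and runtime versions will vary by system.

```python
# Minimal check that a ROCm-enabled PyTorch build can see and use an AMD GPU.
# Assumes a PyTorch wheel built against ROCm (torch.version.hip is set on those builds).
import torch

if torch.cuda.is_available():                 # ROCm builds reuse the torch.cuda API via HIP
    print("HIP runtime:", torch.version.hip)
    print("Device:", torch.cuda.get_device_name(0))
    x = torch.randn(1024, 1024, device="cuda")
    y = x @ x                                  # small matmul to confirm kernels execute on the GPU
    print("Matmul OK:", y.shape)
else:
    print("No ROCm-visible GPU found")
```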
To support developers, AMD also announced the global availability of the AMD Developer Cloud, providing a comprehensive cloud environment to accelerate AI projects.
Ambitious Energy Efficiency Goals
AMD has exceeded its five-year goal of a 30-fold improvement in energy efficiency for AI training nodes, achieving a 38-fold gain. The company has also set an ambitious new goal for 2030: a 20-fold improvement in rack-scale energy efficiency compared with a 2024 baseline, which would drastically reduce the power consumption of AI infrastructure.
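To put the 2030 target in perspective, the back-of-the-envelope sketch below shows what a 20-fold efficiency gain would mean for a fixed workload; the baseline energy figure is a hypothetical placeholder, and only the 20x factor comes from the announcement.

```python
# Back-of-the-envelope illustration of a 20x rack-scale efficiency gain on a fixed workload.
# The baseline energy value is hypothetical; only the 20x factor reflects the stated 2030 goal.

baseline_energy_mwh = 100.0     # hypothetical energy for a fixed training run on a 2024-class rack
efficiency_gain = 20.0          # stated 2030 rack-scale efficiency target vs. 2024

projected_energy_mwh = baseline_energy_mwh / efficiency_gain
print(f"Same workload at 20x efficiency: {projected_energy_mwh:.1f} MWh "
      f"({1 - 1 / efficiency_gain:.0%} less energy)")
```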