Tuesday, June 18, 2024

With eye on data center AI innovation, AMD expands Instinct GPU roadmap

At Computex 2024, AMD showcased the growing momentum of the AMD Instinct accelerator family during the opening keynote by chair and CEO Lisa Su.

AMD unveiled an expanded, multiyear AMD Instinct accelerator roadmap that will bring an annual cadence of leadership AI performance and memory capabilities with every generation.

The updated roadmap starts with the new AMD Instinct MI325X accelerator, which will be available in Q4 2024. Following that, the AMD Instinct MI350 series, powered by the new AMD CDNA 4 architecture, is expected to be available in 2025, bringing up to a 35x increase in AI inference performance compared to the AMD Instinct MI300 Series with its AMD CDNA 3 architecture. Expected to arrive in 2026, the AMD Instinct MI400 series is based on the AMD CDNA “Next” architecture.

“The AMD Instinct MI300X accelerators continue their strong adoption from numerous partners and customers including Microsoft Azure, Meta, Dell Technologies, HPE, Lenovo and others, a direct result of the AMD Instinct MI300X accelerator’s exceptional performance and value proposition,” said Brad McCredie, corporate vice president for Data Center Accelerated Compute at AMD.

“With our updated annual cadence of products, we are relentless in our pace of innovation, providing the leadership capabilities and performance the AI industry and our customers expect to drive the next evolution of data center AI training and inference.”

The AMD ROCm 6 open software stack continues to mature, enabling AMD Instinct MI300X accelerators to drive impressive performance for some of the most popular LLMs.

On a server using eight AMD Instinct MI300X accelerators and ROCm 6 running Meta Llama-3 70B, customers can get 1.3x better inference performance and token generation throughput compared to the competition.

On a single AMD Instinct MI300X accelerator with ROCm 6, customers can get 1.2x better inference performance and token generation throughput on Mistral-7B compared to the competition. AMD also highlighted that Hugging Face, the largest and most popular repository for AI models, is now testing 700,000 of its most popular models nightly to ensure they work out of the box on AMD Instinct MI300X accelerators. In addition, AMD is continuing its upstream work in popular AI frameworks like PyTorch, TensorFlow and JAX.

During the keynote, AMD revealed an updated annual cadence for the AMD Instinct accelerator roadmap to meet the growing demand for more AI compute.

This will help ensure that AMD Instinct accelerators propel the development of next-generation frontier AI models. The updated AMD Instinct annual roadmap highlighted:

  • The new AMD Instinct MI325X accelerator, which will bring 288GB of HBM3E memory and 6 terabytes per second of memory bandwidth, use the same industry standard Universal Baseboard server design used by the AMD Instinct MI300 series, and be generally available in Q4 2024. The accelerator will have industry-leading memory capacity and bandwidth, 2x and 1.3x better than the competition respectively, and 1.3x better compute performance than the competition.
  • The first product in the AMD Instinct MI350 Series, the AMD Instinct MI350X accelerator, is based on the AMD CDNA 4 architecture and is expected to be available in 2025. It will use the same industry standard Universal Baseboard server design as other MI300 Series accelerators, will be built using advanced 3nm process technology, will support the FP4 and FP6 AI datatypes, and will have up to 288GB of HBM3E memory.
  • AMD CDNA “Next” architecture, which will power the AMD Instinct MI400 Series accelerators, is expected to be available in 2026 providing the latest features and capabilities that will help unlock additional performance and efficiency for inference and large-scale AI training.

Finally, AMD highlighted that demand for AMD Instinct MI300X accelerators continues to grow, with numerous partners and customers using the accelerators to power their demanding AI workloads, including:

  • Microsoft Azure using the accelerators for Azure OpenAI services and the new Azure ND MI300X V5 virtual machines.
  • Dell Technologies using MI300X accelerators in the PowerEdge XE9680 for enterprise AI workloads.
  • Supermicro providing multiple solutions with AMD Instinct accelerators.
  • Lenovo powering Hybrid AI innovation with the ThinkSystem SR685a V3.
  • HPE using them to accelerate AI workloads in the HPE Cray XD675.

