AMD Launches Advanced Chips for Data Centers and PCs to Enhance AI Performance

Advanced Micro Devices (AMD) has just revealed its AMD Instinct MI300X and MI300A accelerator chips, designed specifically for AI processing in data centers. Lisa Su, AMD’s CEO, announced these new chips today at the company’s data center event. During the third-quarter analyst call on October 31, Su mentioned that she expects the MI300 series to reach $1 billion in sales faster than any other product in AMD’s history.

AMD is also introducing the AMD Ryzen 8040 Series processors, previously known by the code name Hawk Point, which are designed for AI-based laptops. The company has promoted its NPUs (neural processing units) for AI processing, noting that major computer manufacturers shipped millions of Ryzen-based AI PCs in 2023.

“It’s another huge moment for PCs,” said Drew Prairie, head of communications at AMD. “The AI performance will take a step up from the performance levels available in the market now.” The 8040 Series is expected to deliver 1.4 times the AI performance of the current Ryzen chips, which began shipping in the second quarter.

AMD is also working on next-generation Ryzen processors, code-named Strix Point, equipped with AMD XDNA 2 and NPU for generative AI. These are expected to ship in 2024.

AMD showcased hardware from Dell Technologies, Hewlett Packard Enterprise, Lenovo, Meta, Microsoft, Oracle, Supermicro, and others. The company highlighted its ROCm 6 open software ecosystem, which combines next-generation hardware and software for significant performance improvements in generative AI and easier deployment of AI solutions.

The AMD Instinct MI300X accelerators offer industry-leading memory bandwidth for generative AI and high performance for large language model (LLM) training and inferencing. The AMD Instinct MI300A combines CPU and GPU capabilities within a single product, featuring the latest AMD CDNA 3 architecture and Zen 4 CPUs for high-performance computing and AI workloads.

“AMD Instinct MI300 Series accelerators are designed with our most advanced technologies, delivering leadership performance, and will be used in large-scale cloud and enterprise deployments,” said Victor Peng, president of AMD. “By leveraging our leading hardware, software, and open ecosystem approach, cloud providers, OEMs, and ODMs are bringing innovative technologies to market.”

Microsoft is one of the customers utilizing the latest AMD Instinct accelerator portfolio, having recently announced the Azure ND MI300x v5 Virtual Machine series, optimized for AI workloads and powered by AMD Instinct MI300X accelerators. The El Capitan supercomputer, housed at Lawrence Livermore National Laboratory and expected to be the second exascale-class supercomputer powered by AMD, will deliver more than two exaflops of double precision performance when fully deployed.

The AMD Instinct MI300X accelerators feature 192 GB of HBM3 memory capacity and 5.3 TB/s peak memory bandwidth, essential for demanding AI workloads. The AMD Instinct Platform is built on an industry-standard OCP design with eight MI300X accelerators, offering 1.5TB of HBM3 memory capacity. This design allows OEM partners to incorporate MI300X accelerators into existing AI offerings, simplifying deployment and accelerating the adoption of AMD Instinct accelerator-based servers.
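The platform's 1.5TB figure follows directly from eight accelerators at 192 GB each. A back-of-the-envelope sketch (illustrative only, using the numbers quoted above) checks that arithmetic, along with how long one accelerator would take to stream its entire memory at peak bandwidth:

```python
# Back-of-the-envelope check of the figures quoted above (illustrative only).
GB_PER_ACCELERATOR = 192        # HBM3 capacity per MI300X
ACCELERATORS_PER_PLATFORM = 8   # OCP platform configuration
PEAK_BW_TBPS = 5.3              # peak memory bandwidth per accelerator, TB/s

platform_gb = GB_PER_ACCELERATOR * ACCELERATORS_PER_PLATFORM
print(platform_gb)              # 1536 GB, i.e. ~1.5 TB, matching the platform spec

# Time for one accelerator to stream its full HBM3 capacity at peak bandwidth:
seconds = (GB_PER_ACCELERATOR / 1000) / PEAK_BW_TBPS
print(f"{seconds * 1000:.1f} ms")  # ~36.2 ms
```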

Compared to the Nvidia H100 HGX, the AMD Instinct Platform can offer up to 1.6 times better throughput when running LLM inference on models like BLOOM 176B. It is also the only solution that can run inference for a 70-billion-parameter model, such as Llama 2, on a single MI300X accelerator.
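The single-accelerator claim is plausible from memory capacity alone: 70 billion parameters stored at 16-bit precision occupy about 140 GB, which fits within the MI300X's 192 GB. A rough sketch of that estimate (illustrative; it ignores activation and KV-cache overhead, which consume part of the remainder):

```python
# Rough capacity estimate: do 70B FP16/BF16 weights fit in 192 GB of HBM3?
params_billion = 70
bytes_per_param = 2             # FP16/BF16 weights are 2 bytes each
weights_gb = params_billion * bytes_per_param  # billions of params * bytes = GB
print(weights_gb)               # 140 GB of weights
print(weights_gb < 192)         # True: fits on a single MI300X, leaving ~52 GB
                                # for activations and KV cache
```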

Energy efficiency is vital for the HPC and AI communities because these workloads are extremely data- and resource-intensive. The AMD Instinct MI300A APUs, which combine CPU and GPU cores on a single package, provide a highly efficient platform for training the latest AI models.

AMD is driving innovation in energy efficiency with its 30×25 goal, which aims to achieve a 30x energy efficiency improvement in server processors and accelerators for AI-training and HPC between 2020 and 2025.
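To put the 30×25 goal in perspective, a 30x improvement over the five years from 2020 to 2025 implies roughly a doubling of energy efficiency every year, compounded. A quick illustrative calculation:

```python
# What a 30x improvement over 5 years implies per year, compounded (illustrative).
target, years = 30, 5           # 30x efficiency gain between 2020 and 2025
annual = target ** (1 / years)  # compound annual improvement factor
print(f"{annual:.2f}x per year")  # ~1.97x per year
```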

For mobile processors, AMD announced the new AMD Ryzen 8040 Series, which delivers enhanced AI compute capability. The company also introduced Ryzen AI 1.0 Software, a stack that lets developers deploy apps built on pretrained models to add AI capabilities to Windows applications. The next-generation “Strix Point” CPUs, expected to ship in 2024, will feature the XDNA 2 architecture, offering more than triple the AI compute performance of previous generations.

With the integrated Ryzen AI NPU on select models, AMD is introducing more advanced AI PCs to the market, with up to 1.6 times better AI processing performance than previous AMD models. The Ryzen AI Software is also widely available, making it easier for users to build and deploy machine learning models on their AI PCs.

The new Ryzen 8040 Series processors will be available from major OEMs, including Acer, Asus, Dell, HP, Lenovo, and Razer, starting in the first quarter of 2024. The Ryzen 9 8945HS, in particular, offers up to 64% faster video editing and 37% faster 3D rendering compared to competitors, and gamers can benefit from up to 77% faster gaming performance.
