Covering Scientific & Technical AI | Monday, December 23, 2024

AMD and Top Tech Companies Illuminate Future of AI at ‘Advancing AI’ Event 

SAN JOSE, Calif., Dec. 7, 2023 -- Yesterday at the “Advancing AI” event, AMD was joined by industry leaders including Microsoft, Meta, Oracle, Dell Technologies, HPE, Lenovo, Supermicro, Arista, Broadcom and Cisco to showcase how they are working with AMD to deliver advanced AI solutions spanning cloud, enterprise and PCs. AMD launched multiple new products at the event, including the AMD Instinct MI300 Series data center AI accelerators, the ROCm 6 open software stack with significant optimizations and new features supporting Large Language Models (LLMs), and Ryzen 8040 Series processors with Ryzen AI.

Credit: AMD

“AI is the future of computing and AMD is uniquely positioned to power the end-to-end infrastructure that will define this AI era, from massive cloud installations to enterprise clusters and AI-enabled intelligent embedded devices and PCs,” said AMD Chair and CEO Dr. Lisa Su. “We are seeing very strong demand for our new Instinct MI300 GPUs, which are the highest-performance accelerators in the world for generative AI. We are also building significant momentum for our data center AI solutions with the largest cloud companies, the industry’s top server providers, and the most innovative AI startups -- who we are working closely with to rapidly bring Instinct MI300 solutions to market that will dramatically accelerate the pace of innovation across the entire AI ecosystem.”

Advancing Data Center AI from the Cloud to Enterprise Data Centers and Supercomputers

AMD was joined by multiple partners during the event to highlight the strong adoption and growing momentum for the AMD Instinct data center AI accelerators.

  • Microsoft detailed how it is deploying AMD Instinct MI300X accelerators to power the new Azure ND MI300X v5 Virtual Machine (VM) series optimized for AI workloads.
  • Meta shared that the company is adding AMD Instinct MI300X accelerators to its data centers in combination with ROCm 6 to power AI inferencing workloads and recognized the ROCm 6 optimizations AMD has done on the Llama 2 family of models.
  • Oracle unveiled plans to offer OCI bare metal compute solutions featuring AMD Instinct MI300X accelerators as well as plans to include AMD Instinct MI300X accelerators in their upcoming generative AI service.
  • The largest data center infrastructure providers announced plans to integrate AMD Instinct MI300 accelerators across their product portfolios.
      • Dell announced the integration of AMD Instinct MI300X accelerators with its PowerEdge XE9680 server solution, delivering groundbreaking performance for generative AI workloads in a modular and scalable format for customers.
      • HPE announced plans to bring AMD Instinct MI300 accelerators to its enterprise and HPC offerings.
      • Lenovo shared plans to bring AMD Instinct MI300X accelerators to the Lenovo ThinkSystem platform to deliver AI solutions across industries including retail, manufacturing, financial services and healthcare.
      • Supermicro announced plans to offer AMD Instinct MI300 GPUs across its AI solutions portfolio.
      • Asus, Gigabyte, Ingrasys, Inventec, QCT, Wistron and Wiwynn also plan to offer solutions powered by AMD Instinct MI300 accelerators.
  • Specialized AI cloud providers including Aligned, Arkon Energy, Cirrascale, Crusoe, Denvr Dataworks and Tensorwaves all plan to provide offerings that will expand access to AMD Instinct MI300X GPUs for developers and AI startups.

Bringing an Open, Proven and Ready AI Software Platform to Market

AMD highlighted significant progress expanding the software ecosystem supporting AMD Instinct data center accelerators.

  • AMD unveiled ROCm 6, the latest version of the open-source software stack for AMD Instinct GPUs, optimized for generative AI and large language models in particular. ROCm 6 adds support for new data types, advanced graph and kernel optimizations, optimized libraries and state-of-the-art attention algorithms, which together with the MI300X deliver an ~8x improvement in overall text-generation latency on Llama 2 compared to ROCm 5 running on the MI250.
  • Databricks, Essential AI and Lamini, three AI startups building emerging models and AI solutions, joined AMD on stage to discuss how they’re leveraging AMD Instinct MI300X accelerators and the open ROCm 6 software stack to deliver differentiated AI solutions for enterprise customers.
  • OpenAI announced it is adding AMD Instinct accelerator support to Triton 3.0, providing out-of-the-box support that will allow developers to work at a higher level of abstraction on AMD hardware.

More information about AMD Instinct MI300 Series accelerators, ROCm 6 and the growing ecosystem of AMD-powered AI solutions is available from AMD.

Continued Leadership in Advancing AI PCs

With millions of AI PCs shipped to date, AMD announced the latest AMD Ryzen 8040 Series mobile processors, which deliver robust AI compute capability. AMD also launched Ryzen AI 1.0 Software, a software stack that enables developers to easily deploy apps that use pretrained models to add AI capabilities to Windows applications. AMD also disclosed that the upcoming next-gen “Strix Point” CPUs, planned to launch in 2024, will include the AMD XDNA 2 architecture, designed to deliver more than a 3x increase in AI compute performance over the prior generation and enable new generative AI experiences. Microsoft also joined to discuss how the two companies are working closely on future AI experiences for Windows PCs.

About AMD

For more than 50 years AMD has driven innovation in high-performance computing, graphics and visualization technologies. Billions of people, leading Fortune 500 businesses and cutting-edge scientific research institutions around the world rely on AMD technology daily to improve how they live, work and play. AMD employees are focused on building leadership high-performance and adaptive products that push the boundaries of what is possible.


Source: AMD

AIwire