
Graphcore Sets New AI Performance Standards With MK2 IPU Systems

Dec. 9, 2020 -- In a blog post, Phil Brown, Director, Applications at Graphcore, discussed new performance results for the company's second-generation IPU machine intelligence systems, among other updates. The blog post is included in part below. Click here for the full post.


We’re sharing a raft of new performance results for our MK2 IPU-based machine intelligence systems today. You’ll see our IPU-M2000 system significantly outperforms NVIDIA’s DGX A100 across the board, with orders-of-magnitude performance improvements for some models.

Graphcore customers are already making big leaps forward with our second-generation IPU systems – whether they prioritise faster time to result, model accuracy, better efficiency, lower TCO (Total Cost of Ownership) or the chance to make new breakthroughs in AI with the IPU.

We’ve chosen a range of the most popular models our customers frequently turn to as proxies for their proprietary production AI workloads in natural language processing, computer vision and more, both in training and inference.

We are also delighted to share results in this blog using our new PyTorch framework support. We are continuing to develop and expand this capability – you can find out more in our blog here.

The results are measured on IPU-M2000 and IPU-POD64 platforms. Wherever possible, we compare IPU performance against performance numbers published by NVIDIA for the A100 GPU as part of the DGX A100 platform. It’s notoriously hard to find an exact apples-to-apples comparison for very different products and chip architectures, so we compare against the closest platform in terms of price and power. Where NVIDIA has not published results for a particular model, we use results we measured ourselves.

Code for all of our benchmarks is available from the examples repo on the Graphcore GitHub site where you can also find code for many other model types and application examples.

We’ve included notes for each chart to explain our methodology and to provide additional information about batch sizes, data sets, floating point arithmetic, frameworks etc. In addition to publishing our benchmarking charts in this blog and on our website, we are also publishing performance data in tabular format for IPU-M2000 and IPU-POD systems on our website. We’ll add more and update the results regularly.
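The chart notes describe throughput-style results derived from batch sizes and step times. As a generic illustration of how such a number is computed – this is not Graphcore’s benchmarking harness, and `measure_throughput`, `step_fn` and the warm-up count are hypothetical – samples per second can be measured like this:

```python
import time

def measure_throughput(step_fn, batch_size, num_batches, warmup=2):
    """Time repeated batch steps and report throughput in samples/sec.

    step_fn: callable that runs one batch (a hypothetical stand-in for
    a framework's training or inference step).
    """
    for _ in range(warmup):
        step_fn()  # discard warm-up iterations (JIT, caches, etc.)
    start = time.perf_counter()
    for _ in range(num_batches):
        step_fn()
    elapsed = time.perf_counter() - start
    # total samples processed divided by wall-clock time
    return batch_size * num_batches / elapsed

# Example with a trivial stand-in for a model step:
throughput = measure_throughput(lambda: sum(range(1000)),
                                batch_size=32, num_batches=10)
print(f"{throughput:.0f} samples/sec")
```

Real benchmark harnesses additionally pin data loading, report percentiles rather than a single run, and state the numeric precision used – hence the per-chart notes on batch sizes, data sets and floating-point arithmetic.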

Finally, we’ve also joined MLCommons, the open engineering consortium that governs the independent MLPerf benchmarks. We will be participating in MLPerf in 2021 – starting with the first training submission in the spring – as well as continuing to build out our own performance results.


Source: Graphcore

AIwire