Covering Scientific & Technical AI | Sunday, December 22, 2024

GPUs Power Five of World’s Seven Most Powerful Systems 

The top echelon of the world's most powerful new systems shares one common engine: the Nvidia Volta V100 general-purpose graphics processor. With the rollout of the 51st Top500 list at ISC18 in Frankfurt, Nvidia highlighted its role in powering the new number-one system, Summit, at Oak Ridge National Laboratory; the new number-three system, Sierra, at Lawrence Livermore National Laboratory; and Japan's new fastest supercomputer, the number-five machine, ABCI. Nvidia GPUs are also under the hood of the world's fastest industrial supercomputer, Italian energy company Eni's HPC4 system, which enters the list at number 13.

Nvidia GPUs (Tesla P100s in this case) are also inside HPC4, the new entrant at number 13 and the world's most powerful publicly announced commercial system, providing 12.21 Linpack petaflops (out of a theoretical peak of 18.62 petaflops) for Italian energy company Eni's oil and gas exploration activities. Built by Hewlett Packard Enterprise (HPE), the cluster comprises 1,600 ProLiant DL380 nodes, each equipped with two 24-core Intel Skylake processors and two Nvidia Tesla P100 GPU accelerators.

Debuting at Oak Ridge National Lab earlier this month, the new number-one champ Summit achieved 122.30 petaflops on its Linpack submission, recapturing the U.S. lead at the top of the list and displacing China's Sunway TaihuLight, now in second place with 93 Linpack petaflops (out of a theoretical peak of 125 petaflops). The benchmarked Summit is spec'd at 187.66 petaflops peak, putting its Linpack efficiency at 65 percent. Note that this Rpeak is 28 petaflops shy of the full build comprising 4,608 IBM Power9 nodes, which at 7.8 teraflops per V100 GPU (six per node) comes out to 215.65 peak petaflops from the GPUs alone (the Power9s add roughly 5 percent more flops). "At the point that we had to turn it in, that was as large as we could run," Buddy Bland, project director of the Oak Ridge Leadership Computing Facility, told HPCwire, "and it will continue to get better."
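The figures above are straightforward to reproduce. A minimal sketch, using only the node count, per-node GPU count, and per-GPU peak quoted in this article:

```python
# Sketch: reproduce the Summit figures quoted above. Node/GPU counts and
# per-GPU peak are taken from the article, not independently measured.

NODES = 4_608          # IBM Power9 nodes in the full build
GPUS_PER_NODE = 6      # Nvidia Volta V100s per node
TFLOPS_PER_GPU = 7.8   # double-precision peak per V100, in teraflops

# Full-build peak from the GPUs alone, converted to petaflops
gpu_peak_pf = NODES * GPUS_PER_NODE * TFLOPS_PER_GPU / 1_000
print(f"full-build GPU peak: {gpu_peak_pf:.2f} PF")        # ~215.65 PF

# Benchmarked Linpack (Rmax) vs. theoretical peak (Rpeak) of the submitted run
rmax, rpeak = 122.30, 187.66
print(f"Linpack efficiency: {rmax / rpeak:.0%}")           # ~65%
print(f"Rpeak shortfall vs full build: {gpu_peak_pf - rpeak:.1f} PF")  # ~28 PF
```

The same Rmax/Rpeak ratio gives Sierra's 60 percent efficiency (71.61 / 119.19) quoted below.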

Sierra, based at the Lawrence Livermore National Laboratory and purchased under the same CORAL RFP as Summit, was something of a surprise entrant in third position with 71.61 Linpack petaflops out of 119.19 Rpeak petaflops, delivered using 17,280 GPUs. That's a Linpack efficiency of 60 percent. Both machines were built by IBM in collaboration with Nvidia and Mellanox for the United States Department of Energy, but Sierra's deployment was running a few weeks behind Summit's, so it was not certain (from our perspective at least) that it would be benchmarked in time for this list edition.

The IBM/Nvidia/Mellanox CORAL machines Summit and Sierra also took the first two spots on the HPCG benchmark. On the Green500, there was a large spread between the two: Summit, with six GPUs per node, ranked fifth, while Sierra, with four GPUs per node, ranked 276th.

Source: Top500 (June 2018, systems 1-10)

In fifth place, Japan's fastest system, the AI Bridging Cloud Infrastructure (ABCI), delivers 19.6 petaflops of performance (out of a peak of 32.58 petaflops) using 4,352 V100 GPUs. The Fujitsu-made supercomputer is installed at the National Institute of Advanced Industrial Science and Technology (AIST) on the Kashiwa II campus of the University of Tokyo.

Nvidia also powers two existing top-ten machines: the Cray XC50 Piz Daint – Europe's fastest, deployed at the Swiss National Supercomputing Centre (CSCS) – and the previous U.S. title-holder Titan, installed at Oak Ridge National Lab in 2012 as an upgrade to Jaguar. Delivering 19.5 petaflops of Linpack performance using 5,320 P100 GPUs, Piz Daint drops from its previous third-place position to sixth. Titan, the Cray XK7 with Nvidia K20X GPUs that was once the world's fastest supercomputer (with 17.59 petaflops of Linpack performance), fell two spots to number seven.

“The new systems reflect the broader shift to accelerators in the Top500 list,” said Nvidia, characterizing the machines as “AI supercomputers … uniquely capable of processing both traditional HPC simulations and revolutionary new AI workloads.”

“GPUs now power five out of the world’s seven fastest systems as well as 17 of the 20 most energy efficient systems on the new Green500 list,” the company remarked, adding that the “majority of computing performance added to the Top500 list comes from Nvidia GPUs.”

The Volta Tensor Core GPU makes it possible to “combine simulation with the power of AI to advance science, find cures for disease and develop new forms of energy,” said CUDA inventor Ian Buck.

The latest Top500 report includes 110 systems with some manner of accelerator and/or co-processor technology, up from 101 six months ago. Of these, 98 are equipped with Nvidia chips, seven utilize Intel Xeon Phi coprocessors, and four use PEZY technology. Two systems (ranked 52 and 252) employ a combination of Nvidia GPUs and Intel Xeon Phi coprocessors. The newly upgraded Tianhe-2A (now in fourth position with 61.44 petaflops, up from 33.86 petaflops), installed at the National Supercomputer Center in Guangzhou, employs custom-built Matrix-2000 accelerators. A further 19 systems use Xeon Phi as the main processing unit.

“This year’s Top500 list represents a clear shift toward systems that support both HPC and AI computing,” said Jack Dongarra, professor at the University of Tennessee and Oak Ridge National Laboratory and Top500 author. “Accelerators, such as GPUs, are critical to deliver this capability at the performance and efficiency targets demanded by the supercomputing community.”

(This story originally appeared in sister publication HPCwire.)

Source: Top500 (June 2018, systems 11-15)
About the author: Tiffany Trader

With over a decade’s experience covering the HPC space, Tiffany Trader is one of the preeminent voices reporting on advanced scale computing today.
