Covering Scientific & Technical AI

Nvidia Pushes GPU Access With Containers 

(Alex Kolokythas Photography/Shutterstock)

Having brought its top-of-the-line datacenter GPU to the largest cloud vendors, Nvidia is touting its Volta architecture for a range of scientific computing tasks as well as AI development, adding a container registry designed to deploy its GPU cloud for everything from visualization to drug discovery.

In its drive to expand access to its Volta architecture, Nvidia announced the availability of its Tesla V100 GPU on Microsoft Azure (NASDAQ: MSFT). Azure is the latest cloud service to join the chipmaker's growing list of public and private cloud service providers, along with server makers. Most offer the "GPU-accelerated" services for AI development projects such as training deep learning models, which require more processing cores and access to big data.

Moving beyond the AI market, Nvidia (NASDAQ: NVDA) on Monday (Nov. 13) unveiled a container registry designed to ease deployment of HPC applications in the cloud. The container registry for scientific computing applications and visualization tools will connect researchers with widely used GPU-optimized HPC software, the company said during this week's SC17 conference in Denver.

Last month, the company introduced deep learning applications and AI frameworks in its GPU cloud container registry. The AI container registry was rolled out on Amazon Web Services' (NASDAQ: AMZN) Elastic Compute Cloud instances running on Tesla V100 GPUs.

The HPC application containers announced this week include a long list of third-party scientific applications. HPC visualization containers are available in beta on the GPU cloud.

As GPU processing moves wholesale to the cloud and datacenters, easing application deployment was the next logical step as Nvidia extends its reach beyond AI development to scientific computing. (The company notes that the 2017 Nobel Prize winners in chemistry and physics used its CUDA parallel computing platform and API model. Nvidia's Volta architecture includes more than 5,000 CUDA cores.)

HPC containers are designed to package the libraries and dependencies needed to run scientific applications on top of container infrastructure such as Docker Engine. The cloud container registry for delivering HPC applications uses Nvidia's Docker distribution to run visualizations and other tasks in GPU-accelerated clouds. The service is available now.
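In practice, running one of these containers follows the standard Docker workflow, with Nvidia's runtime exposing the GPU to the container. A minimal sketch, assuming access to Nvidia's registry; the image name and tag below are illustrative, not guaranteed catalog entries:

```shell
# Log in to Nvidia's container registry (requires a registry API key).
docker login nvcr.io

# Pull a GPU-optimized HPC application image (name/tag are illustrative).
docker pull nvcr.io/hpc/namd:latest

# Launch it with Nvidia's Docker wrapper, which mounts the driver and
# GPU device files so the containerized application can see the GPUs.
nvidia-docker run --rm -it nvcr.io/hpc/namd:latest
```

The `nvidia-docker` wrapper of this era handled the GPU plumbing that plain `docker run` lacked; the application inside the container then runs against the host's GPUs without any local library installation.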

Underpinning these scientific workloads in the cloud is the Volta architecture, asserts Nvidia CEO Jensen Huang. "Volta has now enabled every researcher in the world to access…the most advanced high-performance computer in the world at the lowest possible price," Huang claimed during SC17. "You can rent yourself a supercomputer for three dollars" per hour.

The other part of the GPU equation is the software stack and keeping it optimized. Hence, Nvidia has placed software components in the GPU cloud via its container registry. The containerized software stack can then be downloaded from Nvidia's cloud and datacenter partners.

Emphasizing Nvidia's drive to make GPU processing more accessible, Huang concluded: "In the final analysis, it's got to be simple."

Complete SC17 coverage is available here.

About the author: George Leopold

George Leopold has written about science and technology for more than 30 years, focusing on electronics and aerospace technology. He previously served as executive editor of Electronic Engineering Times. Leopold is the author of "Calculated Risk: The Supersonic Life and Times of Gus Grissom" (Purdue University Press, 2016).
