Azure Expands CycleCloud Access, Now Supports Nvidia Containers
Microsoft Azure continued to beef up support for HPC and advanced-scale workloads today, announcing general availability of CycleCloud – its HPC cloud orchestration product based on technology from Cycle Computing, which was acquired last August – and introducing support for containers from the Nvidia GPU Cloud (NGC) registry on Volta- and Pascal-powered Azure NCv3, NCv2, and ND instances.
“Microsoft is committed to making Azure the cloud of choice for HPC,” wrote Brett Tanzer, PM manager, Azure Specialized Compute, in a blog post. “Azure CycleCloud and Nvidia GPUs ease integration and the ability to manage and scale. Near-term developments around hybrid cloud performance with the Avere vFXT will enhance your ability to minimize latency while leveraging on-premises NAS or Azure blob storage alongside Azure CycleCloud and Azure Batch workloads.”
Cycle Computing is a familiar name in HPC, where it was an early player in providing tools for orchestrating HPC workloads in the cloud. At SC17 Tanzer shared Azure’s plans for integrating Cycle, suggesting it would take about a year to deeply embed Cycle technology into Azure (see HPCwire article, Microsoft Spins Cycle Computing into Core Azure Product). With today’s announcement Azure seems on track to do that.
Tanzer identified GE, Johnson &amp; Johnson, and Ramboll as current CycleCloud users and described in some detail a case study in which Silicon Therapeutics is using Azure CycleCloud to orchestrate a large Slurm HPC cluster with GPUs, simulating a large number of proteins to assess whether and how those proteins can be targeted in its drug design projects.
“Azure CycleCloud created a Slurm cluster using Azure’s NCv1 VMs with full-performance Nvidia K80 GPUs, and a BeeGFS file system. This environment mirrored their internal cluster, so their on-premise jobs could run seamlessly without any bottlenecks in Azure. This search for potential protein ‘hotspots’ where drug candidates might be able to fight disease generated over 50 TB of data. At peak, the 2048 K80 GPUs used over 25 GB/second of bandwidth between the BeeGFS and the compute nodes,” according to the blog.
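CycleCloud clusters of this kind are defined declaratively in an INI-style template that the orchestrator turns into VMs, networking, and scheduler configuration. The abridged sketch below is illustrative only – the cluster name, region, subnet path, and node counts are assumptions, not Silicon Therapeutics’ actual configuration – but it shows the general shape of a Slurm template using NCv1 (K80) execute nodes:

```ini
# Hypothetical, abridged Azure CycleCloud cluster template.
# All names, regions, and limits below are placeholders.
[cluster slurm-gpu]

    [[node defaults]]
    Credentials = azure
    Region = eastus
    SubnetId = my-rg/my-vnet/compute    # placeholder network path

    [[node scheduler]]
    MachineType = Standard_D8s_v3       # Slurm head node

    [[nodearray execute]]
    MachineType = Standard_NC24         # NCv1: 4x Nvidia K80 GPUs per VM
    MaxCoreCount = 12288                # cap autoscale (512 VMs in this sketch)
```

In a real deployment the template would also carry the software configuration for the scheduler roles and a shared file system such as BeeGFS; the point here is only that the whole cluster shape lives in one versionable text file.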
It will be interesting to watch CycleCloud’s traction and feature growth. Here’s a partial list of Azure CycleCloud capabilities taken from the Azure website:
Manage compute resources: manage virtual machines and scale sets to provide a flexible set of compute resources that can meet your dynamic workload requirements.
Manage data: synchronize data files between cloud and on-premises storage, schedule data transfers, monitor transfers, and manage data usage.
Orchestrate compute workloads: monitor job load, manage job submissions and job requirements.
Auto-scale resources: automatically adjust cluster size and components based on job load, availability, and time requirements.
Create reports: create reports on a number of metrics including cost, usage, and performance.
Monitor and analyze: collect and analyze performance data using visualization tools.
Create alerts: create custom alerts that can warn of overruns, job outliers, and workload problems.
Audit usage: use audit and event logs to track usage across the organization.
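Day to day, several of these capabilities are exercised through the CycleCloud CLI against a running CycleCloud server. The following is a hedged sketch of a typical cluster lifecycle – the cluster name and template filename are placeholders, and a configured CycleCloud installation is assumed:

```shell
# Illustrative CycleCloud CLI workflow; "my-slurm" and "slurm-template.txt"
# are placeholders, and a CycleCloud server must already be initialized.

# Import a cluster definition from a template file
cyclecloud import_cluster my-slurm -f slurm-template.txt

# Start the cluster (provisions the scheduler; execute nodes autoscale with load)
cyclecloud start_cluster my-slurm

# Inspect node states and sizes
cyclecloud show_cluster my-slurm

# Tear everything down when the work is done
cyclecloud terminate_cluster my-slurm
```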
To some extent today’s move to support NGC wasn’t a surprise. Azure, like the other big cloud players, has moved fairly quickly to incorporate GPU offerings aimed at traditional HPC and AI/ML applications. Supporting NGC makes it easier for Azure users to actually deploy GPU-accelerated tools and applications.
Nvidia also announced the Azure support for NGC in a blog post today by senior product marketing manager Chris Kawalek, who wrote, “For HPC, the difficulty is how to deploy the latest software to clusters of systems. In addition to finding and installing the correct dependencies, testing and so forth, you have to do this in a multi-tenant environment and across many systems…NGC removes this complexity by providing pre-configured containers with GPU-accelerated software. Its deep learning containers benefit from Nvidia’s ongoing R&D investment to make sure the containers take advantage of the latest GPU features. And we test, tune and optimize the complete software stack in the deep learning containers with monthly updates to ensure the best possible performance.”
The NGC container registry includes Nvidia-tuned, tested, and certified containers for deep learning software such as Microsoft Cognitive Toolkit, TensorFlow, PyTorch, and Nvidia TensorRT. Nvidia creates an optimal software stack for each framework – including required operating system patches, Nvidia deep learning libraries, and the Nvidia CUDA Toolkit – to allow the containers to take “full advantage” of Nvidia GPUs. The deep learning containers from NGC are refreshed monthly with software and component updates.
NGC also includes GPU-accelerated applications and visualization tools for HPC, such as NAMD, GROMACS, LAMMPS, ParaView, and VMD.
Tanzer wrote, “To make it easy to use NGC containers with Azure, a new image called Nvidia GPU Cloud Image for Deep Learning and HPC is available on Azure Marketplace. This image provides a pre-configured environment for using containers from NGC on Azure. Containers from NGC on Azure NCv2, NCv3, and ND virtual machines can also be run with Azure Batch AI by following these GitHub instructions.”
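On a VM provisioned from that Marketplace image, pulling and running an NGC container follows the standard Docker workflow against Nvidia’s `nvcr.io` registry. A hedged sketch – the container tag is illustrative (current tags are listed on ngc.nvidia.com), and the API-key placeholder must be replaced with your own:

```shell
# Illustrative only; run on an Azure NC/ND VM created from the
# Nvidia GPU Cloud Image. Replace <your-NGC-API-key> with a real key.

# Authenticate to the NGC registry ($oauthtoken is the literal username)
docker login nvcr.io -u '$oauthtoken' -p <your-NGC-API-key>

# Pull an Nvidia-optimized framework container (example tag)
docker pull nvcr.io/nvidia/tensorflow:18.08-py3

# Run it with GPU access via the Nvidia container runtime
nvidia-docker run --rm -it nvcr.io/nvidia/tensorflow:18.08-py3
```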
Link to Azure blog: https://azure.microsoft.com/en-us/blog/microsoft-azure-the-cloud-for-high-performance-computing/
Link to Nvidia blog: https://blogs.nvidia.com/blog/2018/08/29/nvidia-gpu-cloud-ngc-microsoft-azure/
This article originally appeared in sister publication HPCwire.