Cirrascale Powers AI and HPC Advancements with NVIDIA HGX H200 Server Integration
SAN DIEGO, Oct. 3, 2024 -- Cirrascale Cloud Services, a leading provider of innovative cloud solutions for AI and high-performance computing (HPC) workloads, today announced the general availability of NVIDIA HGX H200 servers in its AI Innovation Cloud. The new offering empowers organizations to scale AI and HPC workloads with unprecedented speed, efficiency, and flexibility.
The NVIDIA HGX H200 server platform is available as integrated baseboards in configurations of eight NVIDIA H200 Tensor Core GPUs, offering full GPU-to-GPU bandwidth through NVIDIA NVLink and NVSwitch interconnects. Leveraging the H200 GPU's multi-precision Tensor Cores, an eight-way HGX H200 provides up to 32 petaFLOPS of FP8 deep learning compute and over 1.1 TB of aggregate HBM3e memory for high performance in generative AI and HPC applications. Cirrascale's HGX H200 instances include advanced networking options at speeds of up to 3,200 gigabits per second (Gb/s), utilizing the NVIDIA Quantum-2 InfiniBand networking platform for advanced AI and HPC workload performance.
"Cirrascale remains at the forefront of delivering cutting-edge generative AI and HPC cloud solutions," said Mike LaPan, vice president of Marketing, Cirrascale Cloud Services. "With the integration of the NVIDIA HGX H200 server platform into our AI Innovation Cloud, we're empowering our customers with advanced processing capabilities, allowing them to accelerate AI innovation and deploy models with unprecedented speed and efficiency."
The NVIDIA H200 Tensor Core GPU offers groundbreaking enhancements in accelerated computing. It is the first GPU to feature 141 gigabytes (GB) of HBM3e memory with a memory bandwidth of 4.8 terabytes per second (TB/s), nearly double the capacity of the NVIDIA H100 Tensor Core GPU and 1.4 times more memory bandwidth. These upgrades supercharge generative AI and large language models while delivering significant advancements in scientific computing for HPC workloads with better energy efficiency and lower total cost of ownership (TCO).
"By deploying the NVIDIA HGX H200 accelerated computing platform, Cirrascale can provide its customers with the technology needed to develop cutting-edge generative AI, natural language processing, and HPC model applications," said Shar Narasimhan, Director of Data Center GPUs and AI at NVIDIA. "Our collaboration with Cirrascale will help propel AI and HPC exploration forward to drive a new wave of industry breakthroughs."
NVIDIA HGX H200 servers are now generally available on the Cirrascale Cloud Services platform. Interested customers and partners can visit https://www.cirrascale.com/ai-innovation-cloud/nvidia-ai or call (888) 942-3800 to sign up for the service.
About Cirrascale Cloud Services
Cirrascale Cloud Services is a specialized cloud and managed services provider dedicated to deploying state-of-the-art compute resources and high-speed storage solutions at scale. Our AI Innovation Cloud is purpose-built to enable clients to scale their development, training, and inferencing workloads for generative AI, large language models, and high-performance computing. To learn more about Cirrascale Cloud Services and its unique cloud offerings, please visit https://cirrascale.com.
Source: Cirrascale Cloud Services