
Cirrascale to Offer NVIDIA Tesla M40 GPU Accelerators Throughout Rackmount and Blade Server Product Lines 

SAN DIEGO, Calif., Nov. 10 -- Cirrascale Corporation, a premier developer of GPU-driven blade and rackmount cloud infrastructure for mobile and Internet applications, today announced it will offer the new NVIDIA Tesla M40 GPU accelerators throughout its high-performance GPU-enabled rackmount and blade server product lines. Utilizing the company's proprietary 80-lane Gen3 PCIe switch-enabled risers, the RM4600 Series and GB5600 Series product lines can peer up to four and eight discrete NVIDIA Tesla M40 GPU accelerators, respectively, on a single PCIe root complex within a single rackmount or blade server chassis.

“With its latest Tesla GPU accelerator, NVIDIA is once again pushing the boundaries of accelerated computing,” said David Driggers, CEO, Cirrascale Corporation. “Many of our customers are specifically involved with deep learning training deployment and utilize our products because of their unique and powerful multi-GPU peering abilities. The new Tesla M40 accelerators are enabling us to create some of the world’s fastest deep learning solutions available.”

Purpose-built for scale-out deep learning training deployments, the NVIDIA Tesla M40 GPU accelerator dramatically reduces the time to train deep neural networks -- as much as 8X faster than a CPU. The new Tesla M40 features NVIDIA GPU Boost™ technology, which converts power headroom into user-controlled performance boosts, enabling the Tesla M40 to deliver 7 teraflops of single-precision peak performance. Additionally, it provides 12GB of ultra-fast GDDR5 memory, which enables a single Cirrascale GB5600 blade server to house up to an incredible 96GB of GPU memory.
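As a rough illustration of the 96GB figure (eight M40 cards at 12GB each), the short CUDA sketch below enumerates the GPUs visible in a server and sums their device memory. It is a generic, hypothetical example rather than Cirrascale tooling; the device count and sizes reported depend entirely on the actual system.

    // Hypothetical sketch: enumerate visible GPUs and sum their device memory.
    // On an eight-way 12GB Tesla M40 configuration this would report ~96 GB.
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int deviceCount = 0;
        cudaGetDeviceCount(&deviceCount);

        size_t totalBytes = 0;
        for (int dev = 0; dev < deviceCount; ++dev) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, dev);
            printf("GPU %d: %s, %.1f GB\n", dev, prop.name,
                   prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
            totalBytes += prop.totalGlobalMem;
        }
        printf("Aggregate GPU memory: %.1f GB\n",
               totalBytes / (1024.0 * 1024.0 * 1024.0));
        return 0;
    }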

Extending the capabilities of these accelerators, the Cirrascale SR3514 PCIe switch riser enables up to eight discrete GPU accelerators to communicate directly with each other on the same PCI bus. This eliminates the need for host CPU intervention, creating a "micro-cluster" in which the accelerators share a single memory address space. When used in conjunction with NVIDIA GPUDirect technology, compatible PCIe Gen3 devices can directly read and write CUDA host and device memory. Doing so eliminates unnecessary memory copies, dramatically lowers CPU overhead, and reduces latency, resulting in significantly faster data transfers.
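To make the peering idea concrete, here is a minimal, hypothetical CUDA sketch of enabling peer-to-peer access between two GPUs sitting behind the same PCIe switch, so a copy moves GPU-to-GPU without staging through host memory. The device indices and buffer size are illustrative; whether peer access is actually possible depends on the system topology.

    // Minimal sketch: peer-to-peer copy between two GPUs on the same PCIe switch.
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        const int devA = 0, devB = 1;   // assumed device indices

        int canAccess = 0;
        cudaDeviceCanAccessPeer(&canAccess, devA, devB);
        if (!canAccess) {
            printf("GPU %d cannot directly access GPU %d\n", devA, devB);
            return 1;
        }

        // Allow work running on devA to touch devB's memory directly.
        cudaSetDevice(devA);
        cudaDeviceEnablePeerAccess(devB, 0);

        const size_t bytes = 64 << 20;  // 64 MB test buffers
        float *bufA = nullptr, *bufB = nullptr;
        cudaSetDevice(devA);
        cudaMalloc(&bufA, bytes);
        cudaSetDevice(devB);
        cudaMalloc(&bufB, bytes);

        // With peer access enabled, this copy travels over PCIe between the
        // two GPUs without an intermediate copy through host (CPU) memory.
        cudaMemcpyPeer(bufA, devA, bufB, devB, bytes);

        cudaFree(bufB);
        cudaSetDevice(devA);
        cudaFree(bufA);
        return 0;
    }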

“The Tesla M40 GPU is specifically designed for deep learning training in demanding datacenter environments, delivering the highest performance available for building sophisticated neural networks,” said Roy Kim, group product manager of Accelerated Computing at NVIDIA. “With support for the new Tesla accelerators, Cirrascale’s purpose-built solutions will help data scientists achieve breakthroughs in their deep learning work.”

The Cirrascale RM4600 Series rackmount and GB5600 Series blade servers supporting the NVIDIA Tesla M40 GPU accelerators -- as well as the Cirrascale proprietary PCIe switch-enabled riser -- are immediately available to order and are shipping to customers now. Licensing opportunities for these technologies are also available immediately to both customers and partners.

About Cirrascale Corporation

Cirrascale Corporation is a premier developer of GPU-driven cloud infrastructure for mobile and Internet applications. Cirrascale leverages its patented Vertical Cooling Technology and proprietary PCIe switch riser technology to provide the industry's densest rackmount and blade-based peered multi-GPU platforms. The company sells hardware solutions to large-scale infrastructure operators, hosting and cloud service providers, biotech firms, and HPC users. Cirrascale also licenses its award-winning technology to partners globally. To learn more about Cirrascale and its unique multi-GPU infrastructure solutions, please visit http://www.cirrascale.com or call (888) 942-3800.

---

Source: Cirrascale
