
Inspur Takes OCP Beyond Hyperscalers 

Inspur, China’s server leader, is expanding its AI offerings based on Open Compute Project (OCP) specifications, including an OCP “cloud optimized” server geared to SAP HANA workloads that pairs Intel’s Optane persistent memory with the chipmaker’s latest Xeon Scalable processors.

The computing and networking adoption initiative also addresses the growing number of complex AI workloads filling datacenters. One consequence is greater hardware complexity as more AI accelerators are added to the mix. In response, the Open Accelerator Infrastructure (OAI) spec promoted by Baidu, Facebook and Microsoft seeks to reduce that complexity and ease deployment of AI accelerators.

For the past decade, OCP has been heavily focused on hardware development and innovation. “This year we really want to talk about adoption,” said Alan Chang, general manager of Inspur’s server product unit.

First and foremost, OCP establishes a form factor for open hardware components like network interface controllers. OCP-based NICs, for example, have emerged as industry-standard networking components. “OCP definitely needs to be more than a [standard] form factor,” Chang said in an interview.

Looking to spur adoption beyond hyperscalers like Facebook (NASDAQ: FB), Inspur’s open AI server integrates OCP hardware with Intel processor and persistent memory technologies, including the chipmaker’s (NASDAQ: INTC) Deep Learning Boost instructions geared toward AI workloads.
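Deep Learning Boost is essentially a set of AVX-512 vector neural network instructions (VNNI) that speed up low-precision inference by fusing int8 multiply-accumulate operations into fewer steps. As a rough illustration of the int8 quantization such hardware exploits, here is a minimal NumPy sketch; the vector sizes, names and symmetric-scaling scheme are illustrative assumptions, not Intel’s or Inspur’s code:

```python
import numpy as np

def quantize_int8(x):
    """Symmetric linear quantization of a float32 tensor to int8."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

# Hypothetical activation and weight vectors.
rng = np.random.default_rng(0)
a = rng.standard_normal(1024).astype(np.float32)
w = rng.standard_normal(1024).astype(np.float32)

qa, sa = quantize_int8(a)
qw, sw = quantize_int8(w)

# VNNI hardware fuses this int8 multiply-accumulate, with int32 accumulation.
acc = np.dot(qa.astype(np.int32), qw.astype(np.int32))
approx = acc * sa * sw        # dequantize back to a float result
exact = float(np.dot(a, w))   # float32 reference

print(f"int8 result {approx:.3f} vs float32 result {exact:.3f}")
```

The appeal of the hardware support is that the int8 inner product runs in far fewer instructions than its float32 counterpart while staying numerically close.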

The computing and networking initiatives also comply with the recently launched OAI Universal Baseboard (UBB) spec, which, among other capabilities, is designed to ease links between server modules. The resulting flexibility would allow users to scale bandwidth across different network configurations. Inspur also released a pair of OCP-compliant interconnect frameworks.

The four-socket server is configured as a 21-inch rack system that supports up to eight OCP accelerator modules for deep learning applications. The accelerator and baseboard modules support different AI accelerators, along with capabilities such as GPU pooling, to run deep learning training as well as applications such as image recognition.

Meanwhile, the platform delivers Intel’s Optane persistent memory for analytics workloads, including SAP HANA deployments, while boosting performance for larger data sets, according to Alper Ilkbahar, general manager of Intel’s Memory and Storage Products Group.
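In its App Direct mode, Optane persistent memory is exposed to software as byte-addressable storage, typically memory-mapped from a DAX-enabled filesystem; production applications usually go through Intel’s PMDK libraries in C. The Python sketch below only mimics that map-write-flush pattern, and the mount path is an assumption:

```python
import mmap
import os

# Hypothetical file on a DAX-mounted persistent-memory filesystem.
PMEM_PATH = "/mnt/pmem0/example.dat"
SIZE = 4096

# Create and size a backing file, then map it into the address space.
fd = os.open(PMEM_PATH, os.O_CREAT | os.O_RDWR, 0o600)
os.ftruncate(fd, SIZE)
with mmap.mmap(fd, SIZE) as buf:
    buf[0:13] = b"hello, optane"
    # Flush the store so it reaches the persistence domain; real pmem
    # code would use PMDK's cache-line flush primitives instead.
    buf.flush()
os.close(fd)
```

Database engines like SAP HANA exploit this kind of persistence to keep large in-memory structures intact across restarts instead of reloading them from disk.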

Inspur said its open networking components will be shared via white-box switches based on the network operating system SONiC, short for Software for Open Networking in the Cloud. Microsoft (NASDAQ: MSFT) launched the open-source initiative, since backed by partners including Cisco Systems (NASDAQ: CSCO), to manage network devices as operators encounter bottlenecks spawned by the exponential growth of data storage and raw computing power required for AI and data-driven enterprise applications.

The SONiC-based switches will be contributed to OCP, Inspur said this week.
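SONiC stores switch state in a Redis-backed configuration database seeded at boot from /etc/sonic/config_db.json. As a hedged sketch of what driving such a white-box switch programmatically can look like, the snippet below toggles a port’s administrative state in that file; the port name and the approach of editing the JSON directly are illustrative assumptions, not Inspur’s tooling:

```python
import json

CONFIG_DB = "/etc/sonic/config_db.json"  # SONiC's startup configuration

with open(CONFIG_DB) as f:
    cfg = json.load(f)

# Administratively enable a front-panel port (the name "Ethernet0" is an
# assumption; actual port names depend on the platform's port mapping).
cfg.setdefault("PORT", {}).setdefault("Ethernet0", {})["admin_status"] = "up"

with open(CONFIG_DB, "w") as f:
    json.dump(cfg, f, indent=4)
```

On a live switch the change would typically be applied through SONiC’s config CLI or the running configuration database rather than by rewriting the file.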

Inspur and Chinese search giant Baidu (NASDAQ: BIDU) jointly developed a server, dubbed X-MAN 4.0, billed as the first commercially available server integrating co-processors that conform to the OAI specification.

Baidu also collaborated with Intel on AI chip development, including Intel’s Nervana neural network processor that targets model training.

Meanwhile, Chang said early customers such as Internet services conglomerate Tencent (OTCMKTS: TCEHY) are adopting Inspur’s four-socket server configuration for cloud workloads. Among the reasons is comparable performance for complex applications like deep learning workloads, delivered in a smaller form factor.

Chang said Inspur’s strategy focuses on extending the benefits of OCP-based hardware beyond early hyperscale members. It’s betting that a smaller form factor, open networking components that enable scaling, Intel’s co-processing and in-memory analytics technologies, and a focus on specific enterprise deployments like SAP HANA will continue to seed the current server boom driven by AI workloads.

About the author: George Leopold

George Leopold has written about science and technology for more than 30 years, focusing on electronics and aerospace technology. He previously served as executive editor of Electronic Engineering Times. Leopold is the author of "Calculated Risk: The Supersonic Life and Times of Gus Grissom" (Purdue University Press, 2016).
