Solarflare: Network Processing Wastes 25% of Data Center Resources
Solarflare, the high-performance networking specialist under agreement to be acquired by Xilinx, has announced a network processing offloading capability – one its Wall Street customers have used for years – that the company says lets data centers reclaim 25 percent of their server resources.
The key to Cloud Onload is Solarflare's kernel bypass technology, which separates the operating system – and therefore the CPUs – from network processing. Aimed at hyperscalers and large-scale private cloud service providers, the Cloud Onload software resides on the smart network interface card (NIC).
“It takes all the networking stack…so now the operating system isn’t focused on network processing,” Tom Spencer, Solarflare’s director of product marketing, told us. “Then you take the smart NIC and run the software on it and now not only are you running all the networking in the NIC, now you’re running some of the applications that had been running in the CPU and the operating system, on the NIC as well.”
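For readers curious about the mechanics, the sketch below illustrates the general interposition idea behind this kind of transparent acceleration: a tiny LD_PRELOAD shim that catches an application's standard socket() calls before they reach the C library. It is a toy written for this article, not Solarflare's code, and the file name, build line and log message are assumptions made for the example.

    /* sock_shim.c – illustrative LD_PRELOAD interposer, not Solarflare code.
     * It intercepts socket() and logs the call before handing it to the real
     * C library; a kernel-bypass stack would instead service the call
     * entirely in user space and on the NIC.
     *
     * Build: gcc -shared -fPIC -o sock_shim.so sock_shim.c -ldl
     * Run:   LD_PRELOAD=./sock_shim.so ./your_tcp_app
     */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stdio.h>
    #include <sys/socket.h>

    int socket(int domain, int type, int protocol)
    {
        /* Look up the socket() the application thinks it is calling. */
        int (*real_socket)(int, int, int) =
            (int (*)(int, int, int))dlsym(RTLD_NEXT, "socket");

        if (domain == AF_INET && type == SOCK_STREAM)
            fprintf(stderr, "shim: intercepted a TCP socket() call\n");

        /* A real kernel-bypass stack would create its own endpoint here
         * instead of falling through to the kernel's implementation. */
        return real_socket(domain, type, protocol);
    }

The point of the sketch is only that an application's calls can be redirected without touching its source; where the networking then actually runs – on the host or on the smart NIC – is the part Solarflare layers on top.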
Cloud Onload is designed to accelerate and scale data centers’ network-intensive in-memory databases, software load balancers and web servers. Solarflare said the platform reduces latency and increases transaction rates for most network-intensive Transmission Control Protocol (TCP) applications on physical servers and in virtualized environments. Solarflare’s internal benchmark testing shows a 2x improvement in Couchbase, Memcached, Redis and other in-memory databases; a 2-10x improvement in NGINX, HAProxy and other software load balancers; and a 50 percent improvement in web servers/applications, including NGINX and Node.js.
Mike Sapien, chief analyst, enterprise services at Ovum-Informa, said Cloud Onload is an “interesting new offer with two trends in its favor: the move to cloud services and customers’ need for application performance. In addition to Solarflare’s position with FinTech and eCommerce applications, customers in other verticals are becoming more dependent on internet-based commerce and more reliant on internet/cloud-based applications.”
Data centers can use Cloud Onload without modifying existing software applications. Built to run in open source Linux environments – bare metal, virtual machine or container – Cloud Onload is POSIX-compliant to ensure compatibility with TCP-based applications, management tools and network infrastructures.
“The software doesn’t change at all,” Spencer said. “You take our adapter card, you take our Cloud Onload software, and the software application…doesn’t know whether it’s our TCP stack or the kernel’s – and it doesn’t care. There’s a standard POSIX interface; because of that you don’t have to change your software. What you end up with is a much higher-performing application, and the CPU in the server you’re sitting in is actually just doing the applications – it’s not doing any networking.”
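Spencer’s point about the POSIX interface is easiest to see in an ordinary sockets program. The echo server below is a generic example, not tied to Cloud Onload (the port number is an arbitrary choice and error handling is omitted for brevity); it uses nothing beyond standard POSIX calls – socket(), bind(), listen(), accept(), read(), write() – which is precisely why a POSIX-compliant user-space stack can be slotted in underneath it without source changes.

    /* echo_server.c – a plain POSIX TCP echo server, shown only to illustrate
     * that typical application code relies on the standard sockets API alone. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int listener = socket(AF_INET, SOCK_STREAM, 0);   /* standard POSIX call */

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);                      /* arbitrary port */

        bind(listener, (struct sockaddr *)&addr, sizeof addr);
        listen(listener, 16);

        for (;;) {
            int conn = accept(listener, NULL, NULL);
            char buf[4096];
            ssize_t n;
            while ((n = read(conn, buf, sizeof buf)) > 0)  /* echo bytes back */
                write(conn, buf, (size_t)n);
            close(conn);
        }
    }

Whether that binary is serviced by the kernel’s TCP stack or by an accelerated user-space stack underneath it is invisible to the code above – which is the compatibility guarantee Spencer is describing.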
Solarflare cited a recent International Data Corporation report stating that cloud IT infrastructure revenues surpassed traditional IT infrastructure revenues for the first time in the third quarter of 2018, and totaled $66.1 billion for the year. “Cloud Onload disrupts this trend by significantly reducing how much operators must spend to build out their cloud data centers,” the company said, by reducing the number of required servers by 25 percent. “The software improves the scalability of all network applications for cloud networks based on Linux, which The Linux Foundation says runs 90 percent of the public cloud workload and IDC has identified as the only endpoint operating system growing at a global level,” the company said.