Covering Scientific & Technical AI | Monday, November 25, 2024

Run:ai Seeks to Grow AI Virtualization with $75M Round 

Run:ai, a provider of an AI virtualization layer that helps optimize GPU instances, yesterday announced a Series C round worth $75 million. The funding figures to help the fast-growing company expand its sales reach and further develop the platform.

GPUs are the beating heart of deep learning today, but the limited nature of the computing resource means AI teams are constantly battling to squeeze the most work out of them. That’s where Run:ai steps in with its flagship product, dubbed Atlas, which provides a way for AI teams to get more bang for their GPU buck.

“We do for AI hardware what VMware and virtualization did for traditional computing: more efficiency, simpler management, greater user productivity,” Ronen Dar, Run:ai’s CTO and co-founder, says in a press release. “Traditional CPU computing has a rich software stack with many development tools for running applications at scale. AI, however, runs on dedicated hardware accelerators such as GPUs which have few tools to help with their implementation and scaling.”

Atlas abstracts AI workloads away from GPUs by creating “virtual pools” where GPU resources can be automatically and dynamically allocated, thereby gaining more efficiency from GPU investments, the company says.

The platform also brings queuing and prioritization methods to deep learning workloads running on GPUs, and applies “fairness algorithms” to ensure users have an equal chance at getting access to the hardware. The company’s software also enables clusters of GPUs to be managed as a single unit, and allows a single GPU to be broken up into fractional GPUs for finer-grained allocation.
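To make the pooling, queuing, and fairness ideas concrete, here is a toy sketch of a pooled GPU scheduler. It is purely illustrative and not Run:ai’s actual implementation: the `FairGpuPool` class, its method names, and the least-usage-first policy are all assumptions invented for this example. It shows how fractional GPU demands can be queued against a shared pool and granted in an order that favors users holding the least capacity.

```python
from collections import deque

# Toy illustration only (not Run:ai's implementation): a shared pool of
# GPU capacity, a queue of waiting jobs, and a fairness rule that grants
# the next request to whichever user currently holds the least.

class FairGpuPool:
    def __init__(self, total_gpus: float):
        self.free = total_gpus   # pooled capacity, in (possibly fractional) GPU units
        self.queue = deque()     # waiting (user, job, demand) tuples
        self.usage = {}          # GPU units currently held, per user

    def submit(self, user: str, job: str, demand: float) -> None:
        """Queue a job that needs `demand` GPUs (e.g. 0.5 for half a GPU)."""
        self.queue.append((user, job, demand))

    def schedule(self):
        """Grant queued jobs while capacity lasts, least-loaded user first."""
        granted = []
        while True:
            # Jobs small enough to fit in the remaining pooled capacity.
            fitting = [j for j in self.queue if j[2] <= self.free]
            if not fitting:
                break
            # Fairness rule: pick the job whose owner holds the least so far.
            job = min(fitting, key=lambda j: self.usage.get(j[0], 0.0))
            user, _name, demand = job
            self.queue.remove(job)
            self.free -= demand
            self.usage[user] = self.usage.get(user, 0.0) + demand
            granted.append(job)
        return granted
```

With a two-GPU pool, if alice queues two one-GPU jobs and bob queues one, a scheduling pass grants one job to each user and leaves alice’s second job waiting, rather than letting the first submitter drain the pool.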

Atlas functions as a plug-in to Kubernetes, the open source container orchestration system. Data scientists can access Atlas through integrations with IDE tools such as Jupyter Notebook and PyCharm, the company says.

The abstraction brings greater efficiency to data science teams who are experimenting with different techniques and trying to find what works. According to a December 2020 Run:ai whitepaper, one customer was able to reduce their AI training time from 46 days to about 36 hours, which represents a roughly 3,000% improvement.
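The 3,000% figure is consistent with the stated before-and-after times, as a quick arithmetic check shows:

```python
# Sanity-check the claimed speedup: 46 days of training cut to ~36 hours.
before_hours = 46 * 24   # 1,104 hours
after_hours = 36

speedup = before_hours / after_hours                        # ~30.7x faster
improvement_pct = (before_hours - after_hours) / after_hours * 100  # ~2,967%

print(round(speedup, 1), round(improvement_pct))  # 30.7 2967, i.e. roughly 3,000%
```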

“With Run:ai Atlas, we’ve built a cloud-native software layer that abstracts AI hardware away from data scientists and ML engineers, letting Ops and IT simplify the delivery of compute resources for any AI workload and any AI project,” Dar continues.

The Tel Aviv company, which was founded in 2018, has experienced a 9x increase in annual recurring revenue (ARR) over the past 12 months, during which time the company’s employee count has tripled. The company has also quadrupled its customer base over the past two years. The Series C round, which brings the company’s total funding to $118 million, will be used to grow sales as well as to enhance its core platform.

“When we founded Run:ai, our vision was to build the de facto foundational layer for running any AI workload,” says Omri Geller, Run:ai CEO and co-founder, in the press release. “Our growth has been phenomenal, and this investment is a vote of confidence in our path. Run:ai is enabling organizations to orchestrate all stages of their AI work at scale, so companies can begin their AI journey and innovate faster.”

Run:ai’s platform and growth caught the eyes of Tiger Global Management, which co-led the Series C round alongside Insight Partners, the firm that led the Series B round. Other firms participating in the current round included existing investors TLV Partners and S Capital VC.

Run:ai is well positioned to help companies reimagine themselves using AI, says Insight Partners Managing Director Lonne Jaffe, whom readers may remember as the CEO of Syncsort (now Precisely) nearly a decade ago.

“As the Forrester Wave AI Infrastructure report recently highlighted, Run:ai creates extraordinary value by bringing advanced virtualization and orchestration capabilities to AI chipsets, making training and inference systems run both much faster and more cost-effectively,” Jaffe says in the press release.

In addition to AI workloads, Run:ai can also be used to optimize HPC workloads.

About the author: Alex Woodie

Alex Woodie has written about IT as a technology journalist for more than a decade. He brings extensive experience from the IBM midrange marketplace, including topics such as servers, ERP applications, programming, databases, security, high availability, storage, business intelligence, cloud, and mobile enablement. He resides in the San Diego area.

AIwire