
Cerebras Partners with Hugging Face to Deliver High-Speed AI Inference 

SUNNYVALE, Calif., March 11, 2025 -- Cerebras and Hugging Face today announced a new partnership to bring Cerebras Inference to the Hugging Face platform. Hugging Face has integrated Cerebras into the Hugging Face Hub, bringing the world’s fastest inference to the platform’s more than five million developers.

Cerebras Inference runs the industry’s most popular models at more than 2,000 tokens/s – 70x faster than leading GPU solutions. Cerebras Inference models, including Llama 3.3 70B, will be available to Hugging Face developers, enabling seamless API access to AI models powered by the Cerebras CS-3.

Cerebras recently announced industry-leading speeds for Llama 3.3 70B, achieving over 2,200 tokens per second – 70 times faster than GPU-based solutions. Leading industry models such as OpenAI o3-mini take minutes to generate answers to reasoning tasks; Cerebras Inference completes the same tasks at comparable accuracy in mere seconds.

“We’re excited to partner with Hugging Face to bring our industry-leading inference speeds to the global developer community,” said Andrew Feldman, CEO, Cerebras. “By making Cerebras Inference available through Hugging Face, we’re empowering developers to work faster and more efficiently with open-source AI models, unleashing the potential for even greater innovation across industries.”

For the five million Hugging Face developers already using the Inference API, this new integration makes it easier than ever to switch to a faster provider for these popular open-source models. Developers can simply select “Cerebras” as their Inference Provider of choice in the Hugging Face platform.
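In practice, switching providers is a one-line change in the huggingface_hub Python client. The sketch below shows the idea; the token placeholder and prompt are illustrative, and it assumes a recent huggingface_hub release that supports the provider parameter:

```python
# Minimal sketch: route a chat completion through Cerebras via
# Hugging Face Inference Providers (assumes a recent huggingface_hub).
from huggingface_hub import InferenceClient

# Selecting "cerebras" as the provider sends requests to Cerebras Inference.
# "hf_xxx" is a placeholder for a real Hugging Face access token.
client = InferenceClient(provider="cerebras", api_key="hf_xxx")

completion = client.chat.completions.create(
    model="meta-llama/Llama-3.3-70B-Instruct",  # Llama 3.3 70B, as in the announcement
    messages=[{"role": "user", "content": "Summarize wafer-scale computing in two sentences."}],
    max_tokens=200,
)
print(completion.choices[0].message.content)
```

Because the client follows the familiar OpenAI-style chat-completions interface, existing application code can adopt the new provider without restructuring.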

Why Fast and Accurate Open-Source AI Inference Matters

Fast, accurate AI inference is essential for a growing range of applications, particularly as test-time compute and agentic AI drive demand for higher token throughput. Because these popular models are open source, Cerebras can optimize them for the CS-3, delivering inference 10 to 70 times faster than GPU-based solutions.

“Cerebras has been a leader in inference speed and performance, and we’re thrilled to partner with them to bring this industry-leading inference on open-source models to our developer community,” said Julien Chaumond, CTO of Hugging Face.

Get Started Today

To try it out, visit any of the Hugging Face model cards already supported by Cerebras Cloud. For instance, you can explore Llama 3.3 70B, select Cerebras as your provider, and experience blazing-fast inference directly via Hugging Face.
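For a rough sense of the generation speed, you can stream a response and time it. This is a sketch under the same assumptions as the snippet above; it treats each streamed chunk as roughly one token, so the result is an estimate rather than a benchmark:

```python
# Rough throughput check: stream a response and estimate tokens/s.
# Each streamed chunk is counted as ~1 token, so this only approximates
# the provider's true token rate.
import time

from huggingface_hub import InferenceClient

client = InferenceClient(provider="cerebras", api_key="hf_xxx")  # placeholder token

start = time.perf_counter()
chunks = 0
for chunk in client.chat.completions.create(
    model="meta-llama/Llama-3.3-70B-Instruct",
    messages=[{"role": "user", "content": "Write a 200-word overview of AI inference."}],
    max_tokens=400,
    stream=True,
):
    if chunk.choices and chunk.choices[0].delta.content:
        chunks += 1
elapsed = time.perf_counter() - start
print(f"~{chunks / elapsed:.0f} tokens/s over {elapsed:.2f}s (chunk-based estimate)")
```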

About Cerebras Systems

Cerebras Systems is a team of pioneering computer architects, computer scientists, deep learning researchers, and engineers of all types. We have come together to accelerate generative AI by building from the ground up a new class of AI supercomputer. Our flagship product, the CS-3 system, is powered by the world’s largest and fastest commercially available AI processor, our Wafer-Scale Engine-3. CS-3s are quickly and easily clustered to build the largest AI supercomputers in the world, and they make placing models on those supercomputers simple by avoiding the complexity of distributed computing. Cerebras Inference delivers breakthrough inference speeds, empowering customers to create cutting-edge AI applications. Leading corporations, research institutions, and governments use Cerebras solutions to develop pathbreaking proprietary models and to train open-source models with millions of downloads. Cerebras solutions are available through the Cerebras Cloud and on-premises.


Source: Cerebras Systems
