Liquid AI’s LFM Models Challenge the Status Quo, Outperforming Industry Giants
Liquid AI, a startup co-founded by former researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), has introduced a new series of Liquid Foundation Models (LFMs) that aim to transform the AI landscape.
What sets these models apart is their architecture, which breaks from the transformer design used in most of today’s generative AI systems, including OpenAI’s ChatGPT. The transformer was introduced in the seminal 2017 paper “Attention Is All You Need” by Vaswani et al.
Liquid has taken a different approach, building its foundation models from “first principles”: designed from the ground up rather than on existing architectures such as the transformer.
"Architecture work cannot happen in a vacuum – our goal is to develop useful models that are competitive with the current best-in-class LLMs," noted Liquid via their website. "In doing so, we hope to show that model performance isn’t just about scale – it’s also about innovation."
The startup’s LFMs come in three variants: LFM 1.3B, LFM 3B, and LFM 40B MoE. The “B” denotes the parameter count in billions, while “MoE” stands for “Mixture of Experts,” an architecture that activates only a subset of its parameters for any given input. As a rule of thumb, models with more parameters deliver better performance.
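To illustrate what “Mixture of Experts” means in practice, below is a toy NumPy sketch of top-k expert routing. The expert count, dimensions, and random router are made-up values for illustration, not the actual configuration of LFM 40B MoE; the point is only that each token is processed by a small subset of the experts, so the compute used per token is far below the headline parameter count.

```python
import numpy as np

# Toy Mixture-of-Experts layer: a router scores the experts for each token,
# and only the top-k experts actually run. All sizes are illustrative.
rng = np.random.default_rng(0)
n_experts, top_k, d_model = 8, 2, 16

experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
gate = rng.standard_normal((d_model, n_experts))  # a learned router in practice

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route one token vector to its top-k experts and mix their outputs."""
    scores = x @ gate                           # one score per expert
    top = np.argsort(scores)[-top_k:]           # indices of the k best experts
    weights = np.exp(scores[top] - scores[top].max())
    weights /= weights.sum()                    # softmax over the chosen experts
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(d_model)
print(moe_layer(token).shape)  # (16,) -- only 2 of the 8 experts did any work
```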
The Boston-based startup claims that its LFMs offer exceptional performance, matching or even surpassing some of the leading large language models of comparable size.
On several third-party benchmarks, including the widely recognized Massive Multitask Language Understanding (MMLU) test, the LFM 1.3B outperformed similarly sized models such as Meta’s Llama 3.2-1.2B and Microsoft’s Phi-1.5 1.3B. Liquid claims that this marks “the first instance in which a non-GPT architecture has notably surpassed transformer-based models.”
The core advantage of LFMs is their ability to match or outperform transformer-based models while consuming significantly less memory. That operational efficiency makes them well suited to a wide range of use cases across industries.
In contrast to traditional transformer LLMs, whose memory usage climbs steeply during long-context processing because they must cache attention keys and values for every token in the context, LFMs maintain a much smaller memory footprint. This efficiency makes them particularly well suited to applications that process large volumes of sequential data, such as document analysis and AI chatbots.
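To make that contrast concrete, here is a back-of-the-envelope Python sketch comparing the two scaling behaviors. The model dimensions are hypothetical round numbers, not Liquid’s published specifications, and the fixed-state function stands in for constant-memory sequence models in general rather than for LFMs’ actual internals.

```python
# Rough inference-memory comparison: a transformer's key/value cache grows
# linearly with context length, while a fixed-size recurrent state does not.
# All dimensions are illustrative assumptions, not any vendor's real config.

def transformer_kv_cache_bytes(seq_len: int,
                               n_layers: int = 32,
                               n_kv_heads: int = 8,
                               head_dim: int = 128,
                               bytes_per_value: int = 2) -> int:
    """Bytes a transformer holds per sequence: one key and one value vector
    per token, per layer, so the total scales linearly with seq_len."""
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_value * seq_len

def recurrent_state_bytes(n_layers: int = 32,
                          state_dim: int = 4096,
                          bytes_per_value: int = 2) -> int:
    """Bytes for a fixed-size per-layer state; independent of context length."""
    return n_layers * state_dim * bytes_per_value

for seq_len in (1_000, 32_000, 1_000_000):
    kv_mib = transformer_kv_cache_bytes(seq_len) / 2**20
    state_mib = recurrent_state_bytes() / 2**20
    print(f"{seq_len:>9,} tokens: KV cache ~{kv_mib:,.0f} MiB, "
          f"fixed state ~{state_mib:.2f} MiB")
```

At these illustrative settings the cache costs about 128 KiB per token, passing a gigabyte of memory within roughly 8,000 tokens of context, while the fixed state stays at a fraction of a megabyte no matter how long the input grows.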
It’s important to note that Liquid’s LFMs are not open source. Access is available exclusively through Liquid’s inference playground, Lambda Chat UI and API, or Perplexity AI. Liquid plans to make the models available on Cerebras Inference soon and is also optimizing the LFM stack for NVIDIA, AMD, Qualcomm, Cerebras, and Apple hardware.
To further support users and the AI community, Liquid AI plans to publish a series of technical blog posts that delve into the inner workings of each model.
In a statement, the company emphasized, “At Liquid AI, we take an open-science approach. We have and will continue to contribute to the advancement of the AI field by openly publishing our findings and methods through scientific and technical reports. As part of this commitment, we will release relevant data and models produced by our research efforts to the wider AI community.”
At the same time, the statement continued: “We have dedicated a lot of time and resources to developing these architectures, so we’re not open-sourcing our models at the moment. This allows us to continue building on our progress and maintain our edge in the competitive AI landscape.”
The four founders of Liquid (Daniela Rus, Mathias Lechner, Alexander Amini, and Ramin Hasani) are recognized as pioneers of “liquid neural networks,” an approach that allows AI models to adapt dynamically to new data and tasks.
Liquid was created with the mission to transcend generative pre-trained transformers (GPT) and develop capable, efficient general-purpose AI systems that can operate effectively at any scale. The startup emerged from stealth late last year, securing $37.5 million in a two-stage seed round, with a reported valuation of $303 million.
Liquid’s models exemplify the rapid innovation taking place in the highly competitive AI market. Newcomers like Liquid are making significant strides, and as the industry evolves, still more advanced models will continue to appear. That ongoing progress should foster greater diversity in model architectures and contribute to a more level playing field.