
d-Matrix Announces $44M to Build Inference Compute Platform 

SANTA CLARA, Calif., April 20, 2022 — d-Matrix, a leader in high-efficiency AI compute for data centers, has closed $44 million in funding, led by US-based venture capital firm Playground Global, to advance its digital in-memory computing (DIMC) architecture targeting Transformer AI workloads. d-Matrix is also announcing Nighthawk, its silicon chiplet based on DIMC technology and a first-of-its-kind approach to the compute-efficiency problem.

d-Matrix aims to bring the first DIMC-based inference compute platform to market as transformer-driven demand for AI explodes and current memory and energy limits hit a threshold. The company has built a novel AI compute platform that combines intelligent ML tools and a frictionless software approach with chiplets assembled in Lego-block fashion, enabling multiple programming engines to be integrated on a common package. d-Matrix has proven its thesis with multiple chiplet developments: the Nighthawk platform announced today and the soon-to-be-released Jayhawk platform. Using this first-of-a-kind compute architecture and DIMC, d-Matrix expects to deliver severalfold gains in compute efficiency, giving its clients large performance improvements without compromising on energy costs.
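The core idea behind in-memory computing is that multiply-accumulate logic sits alongside the memory cells, so a matrix-vector product is computed where the weights are stored rather than after shuttling them to a separate processor. The sketch below illustrates that general concept only; the class names, data layout, and per-column accumulation are illustrative assumptions, not d-Matrix's actual circuits or architecture.

```python
# Conceptual sketch of digital in-memory computing (DIMC): rather than
# moving weights out of memory to an ALU, each memory column carries its
# own accumulate logic, so a matrix-vector product happens "in place".
# Purely illustrative -- not d-Matrix's design.

class DIMCArray:
    """A memory array whose columns can compute dot products locally."""

    def __init__(self, weights):
        # weights: rows x cols matrix of small integers, held "in memory"
        self.weights = weights

    def matvec(self, activations):
        # Each column accumulates its own partial sums, standing in for
        # the per-column adder trees of a digital IMC macro.
        rows = len(self.weights)
        cols = len(self.weights[0])
        return [
            sum(self.weights[r][c] * activations[r] for r in range(rows))
            for c in range(cols)
        ]

# Example: a 3x2 weight array multiplying a 3-element activation vector.
array = DIMCArray([[1, 2], [3, 4], [5, 6]])
print(array.matvec([1, 1, 1]))  # → [9, 12]
```

Because the accumulation stays inside the array, the energy cost of moving weights across a memory bus is avoided, which is the efficiency argument the announcement makes against the "memory wall."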

“d-Matrix has been on a three-year journey to build the world’s most efficient computing platform for AI inference at scale,” said Sid Sheth, Founder, President & CEO at d-Matrix. “We’ve developed a path-breaking compute architecture that is all-digital, making it practical to implement while advancing AI compute efficiency far past the memory wall it has hit today.”

This investment will enable d-Matrix to build out its product roadmap and grow its team across the United States, Australia, and India. The team currently consists of 50+ members, 30% of whom hold PhDs, representing a mix of operational and technical experience.

“The hyperscale and edge data center markets are approaching performance and power limits, and it’s clear that a breakthrough in AI compute efficiency is needed to match the exponentially growing market,” said Sasha Ostojic, Venture Partner at Playground Global. “d-Matrix has built novel, defensible technology that can outperform traditional CPUs and GPUs, unlocking and maximizing power efficiency and utilization through its software stack. We couldn’t be more excited to partner with this team of experienced operators to build this much-needed, no-tradeoffs technology.”

“Our investment in d-Matrix comes at a time when data around AI workload requirements, running costs, and value creation are in much better focus than they have been in recent years,” said Michael Stewart, Partner at M12, Microsoft’s venture fund. “Their clean-slate approach is perfectly timed to meet the operational needs of running giant transformers in the composable, scalable datacenter architecture of the near future.”

About Playground Global

Playground Global invests in founders harnessing frontier technologies to build transformational businesses with multi-generational impact. Recognizing the promise in early-stage entrepreneurs developing new solutions in artificial intelligence, automation, life sciences, next-gen compute, cybersecurity, aerospace, and beyond, Playground offers world-class expertise across a wide variety of business and technical domains to help companies achieve their potential.

Learn more at playground.global.

About d-Matrix

d-Matrix is building a new way of doing datacenter AI inference using in-memory computing (IMC) techniques with chiplet-level scale-out interconnects. Founded in 2019, d-Matrix has attacked the physics of memory-compute integration using innovative circuit techniques, ML tools, software, and algorithms, solving the memory-compute integration problem, which is the final frontier in AI compute efficiency.

Learn more at d-Matrix AI.


Source: d-Matrix

AIwire