Sunday, September 10th 2023
d-Matrix Announces $110 Million in Funding for Corsair Inference Compute Platform
d-Matrix, the leader in high-efficiency generative AI compute for data centers, has closed $110 million in a Series B funding round led by Singapore-based global investment firm Temasek. The funding will enable d-Matrix to begin commercializing Corsair, the world's first Digital In-Memory Compute (DIMC), chiplet-based inference compute platform, following the successful launches of its earlier Nighthawk, Jayhawk I and Jayhawk II chiplets.
d-Matrix's recent silicon announcement, Jayhawk II, is the latest example of how the company is working to fundamentally change the physics of memory-bound compute workloads common in generative AI and large language model (LLM) applications. With the explosion of this revolutionary technology over the past nine months, there has never been a greater need to overcome the memory bottleneck and the limitations of current approaches that cap performance and drive up AI compute costs.
d-Matrix has architected an elegant DIMC engine and chiplet-based solution to enable inference at a lower total cost of ownership (TCO) than GPU-based alternatives. This new chiplet-based DIMC platform, coming to market in 2024, will redefine the category and further position d-Matrix as the frontrunner in efficient AI inference. "The current trajectory of AI compute is unsustainable as the TCO to run AI inference is escalating rapidly," said Sid Sheth, co-founder and CEO at d-Matrix. "The team at d-Matrix is changing the cost economics of deploying AI inference with a compute solution purpose-built for LLMs, and this round of funding validates our position in the industry."
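As a rough illustration of the memory bottleneck the article refers to, the sketch below estimates the arithmetic intensity of a single autoregressive decode step. The model size, weight precision, and accelerator figures are hypothetical assumptions for illustration only and are not drawn from d-Matrix's materials.

```python
# Illustrative back-of-envelope sketch (not d-Matrix's methodology): why
# autoregressive LLM decoding is memory-bandwidth bound. Generating one token
# requires streaming essentially every weight from memory while performing
# only ~2 FLOPs per weight (multiply + add), so arithmetic intensity is low.

def decode_step_intensity(num_params: float, bytes_per_param: float) -> float:
    """FLOPs per byte moved for a single-token decode step (rough estimate)."""
    flops = 2 * num_params                      # one multiply-accumulate per weight
    bytes_moved = num_params * bytes_per_param  # weights read once from memory
    return flops / bytes_moved

# Hypothetical example: a 70B-parameter model stored in 8-bit weights.
intensity = decode_step_intensity(num_params=70e9, bytes_per_param=1)
print(f"~{intensity:.1f} FLOPs per byte")  # ~2.0 FLOPs per byte

# A hypothetical accelerator with 1000 TFLOP/s of compute and 3 TB/s of memory
# bandwidth needs ~333 FLOPs per byte to stay compute-bound, so at ~2 FLOPs per
# byte it spends most of its time waiting on memory. This is the gap that
# memory-centric approaches such as in-memory compute aim to close.
```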
"d-Matrix is the company that will make generative AI commercially viable," said Sasha Ostojic, Partner at Playground Global. "To achieve this ambitious goal, d-Matrix produced an innovative dataflow architecture, assembled into chiplets, connected with a high-speed interface, and driven by an enterprise-class scalable software stack. Playground couldn't be more excited and proud to back Sid and the d-Matrix team as it fulfills the demand from eager customers in desperate need of improved economics."
"We're entering the production phase when LLM inference TCO becomes a critical factor in how much, where, and when enterprises use advanced AI in their services and applications," said Michael Stewart from M12, Microsoft's Venture Fund. "d-Matrix has been following a plan that will enable industry-leading TCO for a variety of potential model service scenarios using a flexible, resilient chiplet architecture based on a memory-centric approach."
d-Matrix was founded in 2019 to solve the memory-compute integration problem, which is the final frontier in AI compute efficiency. d-Matrix has invested in groundbreaking chiplet and digital in-memory compute technologies with the goal of bringing to market a high-performance, cost-effective inference solution in 2024. Since its inception, d-Matrix has grown substantially in headcount and office space. The company is headquartered in Santa Clara, California, with offices in Bengaluru, India, and Sydney, Australia. With this Series B funding, d-Matrix plans to invest in recruitment and commercialization of its product to satisfy the immediate customer need for lower-cost, more efficient compute infrastructure for generative AI inference.
About d-Matrix
d-Matrix is a leading supplier of Digital In-Memory Compute (DIMC) solutions that address the growing demand for transformer and generative AI inference acceleration. d-Matrix creates flexible solutions for inference at scale using innovative DIMC circuit techniques, a chiplet-based architecture, high-bandwidth Bunch of Wires (BoW) chiplet interconnects, and a full stack of machine learning and large language model tools and software. Founded in 2019, the company is backed by top investors and strategic partners including Playground Global, M12 (Microsoft Venture Fund), SK Hynix, Nautilus Venture Partners, Marvell Technology and Entrada Ventures.
Visit d-matrix.ai for more information and follow d-Matrix on LinkedIn for the latest updates.
Sources:
d-Matrix, Notebookcheck
14 Comments on d-Matrix Announces $110 Million in Funding for Corsair Inference Compute Platform
A new proprietary Nvidia name is the least he should be able to provide.
PC gaming is not the master race or a special snowflake; it's been the leading toxin of all that's bad in gaming and has pushed those trends forward. It's going to happen, and there is nothing any of us can do about it. And as usual, when PC gaming pushes cloud onto the rest of gaming, people will scream "master race" as PC once again ruins things for everyone.
Master Race is alive and stronger (also expensive) than ever. lol
www.hpcwire.com/2021/08/20/enter-dojo-tesla-reveals-design-for-modular-supercomputer-d1-chip/
There are better devices, but somewhat generic GPUs are the most cost-effective all things considered (software stack, supply availability, ability to be repurposed into something else; not even talking about dumping consumer cards on the used market, the big data center GPU chips can also be partitioned and made available as generic cloud compute).