Sunday, September 10th 2023

d-Matrix Announces $110 Million in Funding for Corsair Inference Compute Platform

d-Matrix, the leader in high-efficiency generative AI compute for data centers, has closed $110 million in a Series B funding round led by Singapore-based global investment firm Temasek. The funding will enable d-Matrix to begin commercializing Corsair, the world's first Digital In-Memory Compute (DIMC), chiplet-based inference compute platform, following the successful launches of its prior Nighthawk, Jayhawk I and Jayhawk II chiplets.

d-Matrix's recent silicon announcement, Jayhawk II, is the latest example of how the company is working to fundamentally change the physics of memory-bound compute workloads common in generative AI and large language model (LLM) applications. With the explosion of this revolutionary technology over the past nine months, there has never been a greater need to overcome the memory bottleneck and current technology approaches that limit performance and drive up AI compute costs.

d-Matrix has architected an elegant DIMC engine and chiplet-based solution to enable inference at a lower total cost of ownership (TCO) than GPU-based alternatives. This new chiplet-based DIMC platform coming to market in 2024 will redefine the category, further positioning d-Matrix as the frontrunner in efficient AI inference. "The current trajectory of AI compute is unsustainable as the TCO to run AI inference is escalating rapidly," said Sid Sheth, co-founder and CEO at d-Matrix. "The team at d-Matrix is changing the cost economics of deploying AI inference with a compute solution purpose-built for LLMs, and this round of funding validates our position in the industry."

"d-Matrix is the company that will make generative AI commercially viable," said Sasha Ostojic, Partner at Playground Global. "To achieve this ambitious goal, d-Matrix produced an innovative dataflow architecture, assembled into chiplets, connected with a high-speed interface, and driven by an enterprise-class scalable software stack. Playground couldn't be more excited and proud to back Sid and the d-Matrix team as it fulfills the demand from eager customers in desperate need of improved economics."

"We're entering the production phase when LLM inference TCO becomes a critical factor in how much, where, and when enterprises use advanced AI in their services and applications," said Michael Stewart from M12, Microsoft's Venture Fund. "d-Matrix has been following a plan that will enable industry-leading TCO for a variety of potential model service scenarios using a flexible, resilient chiplet architecture based on a memory-centric approach."

d-Matrix was founded in 2019 to solve the memory-compute integration problem, which is the final frontier in AI compute efficiency. d-Matrix has invested in groundbreaking chiplet and digital in-memory compute technologies with the goal of bringing to market a high-performance, cost-effective inference solution in 2024. Since its inception, d-Matrix has grown substantially in headcount and office space. It is headquartered in Santa Clara, California, with offices in Bengaluru, India, and Sydney, Australia. With this Series B funding, d-Matrix plans to invest in recruitment and in commercializing its product to satisfy the immediate customer need for lower-cost, more efficient compute infrastructure for generative AI inference.

About d-Matrix
d-Matrix is a leading supplier of Digital In-Memory Computing (DIMC) solutions that address the growing demand for transformer and generative AI inference acceleration. d-Matrix creates flexible solutions for inference at scale using innovative DIMC circuit techniques, a chiplet-based architecture, high-bandwidth BoW (chiplet) interconnects and a full stack of machine learning and large language model tools and software. Founded in 2019, the company is backed by top investors and strategic partners including Playground Global, M12 (Microsoft Venture Fund), SK Hynix, Nautilus Venture Partners, Marvell Technology and Entrada Ventures.

Visit d-matrix.ai for more information and follow d-Matrix on LinkedIn for the latest updates.
Sources: d-Matrix, Notebookcheck

14 Comments on d-Matrix Announces $110 Million in Funding for Corsair Inference Compute Platform

#1
Canned Noodles
Excellent - another card to add to my collection when prices go way down in 10 years! I can't wait!
#2
P4-630
Getting tired of "AI" news......
#3
TheLostSwede
News Editor
P4-630: Getting tired of "AI" news......
Maybe try emailing Jen-Hsun Huang and see if he can do something about it?
A new proprietary Nvidia name is the least he should be able to provide.
#4
Fungi
It's not that irrelevant, because if nVidia has real competition in AI-focused hardware, it could result in them not being able to cruise by and make all the money in that market, increasing the importance of gaming-focused hardware for them and possibly resulting in better pricing/performance for hardware enjoyed by the average TPU enjoyer.
#5
SOAREVERSOR
Fungi: It's not that irrelevant, because if nVidia has real competition in AI-focused hardware, it could result in them not being able to cruise by and make all the money in that market, increasing the importance of gaming-focused hardware for them and possibly resulting in better pricing/performance for hardware enjoyed by the average TPU enjoyer.
Gaming with dedicated physical hardware on PCs has already been killed; the corpse just hasn't started to stink yet. But it is DONE. Consoles will kick about for a bit longer, and in the case of Nintendo maybe a lot longer. But on PC, owning anything is long since dead, and we are just in the last cash grab before the end.

PC gaming is not the master race or a special snowflake; it's been the leading toxin of all that's bad in gaming and has pushed those trends forward. It's going to happen, and there is nothing any of us can do about it. And as usual, when PC gaming pushes cloud onto the rest of gaming, people will scream master race as PC once again ruins things for everyone.
#6
Denver
SOAREVERSOR: Gaming with dedicated physical hardware on PCs has already been killed; the corpse just hasn't started to stink yet. But it is DONE. Consoles will kick about for a bit longer, and in the case of Nintendo maybe a lot longer. But on PC, owning anything is long since dead, and we are just in the last cash grab before the end.

PC gaming is not the master race or a special snowflake; it's been the leading toxin of all that's bad in gaming and has pushed those trends forward. It's going to happen, and there is nothing any of us can do about it. And as usual, when PC gaming pushes cloud onto the rest of gaming, people will scream master race as PC once again ruins things for everyone.
Just a year with a lot of abnormally bad AAA games and people start making these desperate outbursts... Cloud gaming is doomed to failure, or at best a very small niche.

Master Race is alive and stronger (also expensive) than ever. lol
#7
thesmokingman
Fungi: It's not that irrelevant, because if nVidia has real competition in AI-focused hardware, it could result in them not being able to cruise by and make all the money in that market, increasing the importance of gaming-focused hardware for them and possibly resulting in better pricing/performance for hardware enjoyed by the average TPU enjoyer.
Dojo is gonna surprise Jensen. The irony is that Nv GPUs are not even that great at NN, but it's just the best we have. And thus everyone who's anyone in AI is compute constrained. It's so bad that Tesla went out and designed their own silicon for NN, aka Dojo, which is ramping up now. And Tesla knows GPUs, with them bringing a 10K H100 cluster online recently. And the kicker is they are still compute constrained lol, hence Dojo.
#8
Toothless
Tech, Games, and TPU!
Space Lynx: AI is useless, people think it will revolutionize healthcare, for the rich it will, for people like me, my health insurance company will deny all my claims until I am bent over in excruciating pain screaming, as happened earlier this year, my Doctor tried like hell, but Doctors and AI have no power in the realm of Greed and Envy

fuck'em all boys.
Plenty of other applications for AI..
#9
thesmokingman
Space Lynx: for rich people too?
I didn't respond to your rant, but you're barking up the wrong tree. Just because kids are cheating using chatbots doesn't mean AI is useless or only for rich people. Poor form, man.
#10
thesmokingman
Space Lynx: care to give me a specific example where it has helped a poor person, with sources provided?

Are you seriously an educator?
#12
user556
That didn't take long for the custom replacements to show up. It's the Bitcoin saga all over again. Now to watch GPU demand start to weaken.
#14
trsttte
thesmokingman: but it's just the best we have
*it's just the most cost effective thing we have.

There are better devices, but somewhat generic GPUs are the most cost-effective all things considered (software stack, supply availability, ability to be repurposed into something else - not even talking about dumping consumer cards on the used market; the big data center GPU chips can also be partitioned and made available as generic cloud compute).