Monday, June 2nd 2025

Intel Reportedly Preparing HBM Alternative for AI Accelerators
Demand for AI accelerators has surged in recent years, putting pressure on suppliers to deliver more high-bandwidth memory to enable faster training and higher token throughput in inference. In response, Intel, SoftBank, and the University of Tokyo have quietly formed a new startup called "Saimemory" to develop an alternative to existing HBM solutions, one also based on stacked DRAM. According to sources close to the effort, prototype chips are slated for 2027, with volume production aimed at 2030. The venture will combine Intel's extensive chip design experience with novel memory patents from the University of Tokyo, while SoftBank has pledged roughly ¥3 billion (about $21 million) to fund the research. Riken Research Institute and Shinko Electric Industries may also join as investors or technical partners, and the team plans to seek government support to accelerate development.
Traditional HBM relies on through-silicon vias (TSVs) to link multiple DRAM dies and uses a wide-bus interposer to achieve data rates above 1 TB/s. Saimemory's design reorganizes signal routing and refresh management to improve energy efficiency, latency, and performance. As readers may recall, there have been past efforts to introduce a rival stacked DRAM technology, but none has been successful. For example, the Hybrid Memory Cube (HMC), co-developed by Samsung and Micron Technology around 2011, promised speeds up to fifteen times those of DDR3. Despite initial industry backing through the Hybrid Memory Cube Consortium, Micron discontinued HMC production in 2018 after it failed to gain market adoption. HMC's decline shows the challenge of displacing entrenched memory standards like HBM. If Saimemory succeeds, Intel will likely be the first adopter with its upcoming AI accelerators. Others, such as AMD and NVIDIA, could also be approached by the consortium to evaluate trial chips. Still, the feasibility of mass deployment will largely depend on availability and yields.
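To see where the "above 1 TB/s" figure comes from, peak stack bandwidth is simply the interface width multiplied by the per-pin data rate. A minimal sketch, using publicly known HBM3/HBM3E figures (1024-bit interface; roughly 6.4 Gb/s and 9.6 Gb/s per pin, respectively) as illustrative inputs:

```python
def stack_bandwidth_gb_s(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth of one memory stack in GB/s:
    (interface width in bits * per-pin rate in Gb/s) / 8 bits per byte."""
    return bus_width_bits * pin_rate_gbps / 8

# HBM3: 1024-bit interface at ~6.4 Gb/s per pin
hbm3 = stack_bandwidth_gb_s(1024, 6.4)    # 819.2 GB/s per stack
# HBM3E: same width at ~9.6 Gb/s per pin
hbm3e = stack_bandwidth_gb_s(1024, 9.6)   # 1228.8 GB/s per stack
print(f"HBM3: {hbm3} GB/s, HBM3E: {hbm3e} GB/s")
```

The arithmetic shows why the interposer's very wide bus matters: even at modest per-pin rates, a 1024-bit interface pushes a single stack past 1 TB/s, which narrower conventional DRAM buses cannot match.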
Source: Nikkei Asia
4 Comments on Intel Reportedly Preparing HBM Alternative for AI Accelerators
www.linkedin.com/posts/quinas_activity-7305633447882948608-VKER
Also, another question ...
Why is shintel not funding already proven, working memory like:
racetrack memory from Stuart Parkin, where a domain wall in nanowires is used to create an ultra-dense and rugged memory chip?
Why is shintel venturing out this way, losing time and money, instead of focusing its resources on something already proven to work, like ULTRA-RAM or racetrack memory based on the principle of spintronics?
All this shit from Intel is pure madness.
Your examples are experimental and nowhere near ready for practical use.