Thursday, October 26th 2023
Rambus Boosts AI Performance with 9.6 Gbps HBM3 Memory Controller IP
Rambus Inc., a premier chip and silicon IP provider making data faster and safer, today announced that the Rambus HBM3 Memory Controller IP now delivers up to 9.6 Gigabits per second (Gbps) performance, supporting the continued evolution of the HBM3 standard. With a 50% increase over the HBM3 Gen 1 data rate of 6.4 Gbps, the Rambus HBM3 Memory Controller can enable a total memory throughput of over 1.2 Terabytes per second (TB/s) for training of recommender systems, generative AI, and other demanding data center workloads.
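The 1.2 TB/s figure follows from the per-pin data rate and the stack interface width. As a rough sketch (assuming the standard 1,024-bit HBM3 stack interface defined by JEDEC; the exact configuration Rambus targets is not stated here):

```python
# Back-of-the-envelope check of the quoted >1.2 TB/s throughput.
# Assumes a standard HBM3 stack interface width of 1024 bits.
data_rate_gbps = 9.6          # per-pin data rate, Gbit/s
interface_width_bits = 1024   # bits per HBM3 stack (JEDEC standard width)

throughput_gbit_s = data_rate_gbps * interface_width_bits  # 9830.4 Gbit/s
throughput_tb_s = throughput_gbit_s / 8 / 1000             # convert to TB/s

print(f"{throughput_tb_s:.2f} TB/s")  # ≈ 1.23 TB/s per stack
```

This matches the press release's "over 1.2 TB/s" claim for a single stack running at the full 9.6 Gbps data rate.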
"HBM3 is the memory of choice for AI/ML training, with large language models requiring the constant advancement of high-performance memory technologies," said Neeraj Paliwal, general manager of Silicon IP at Rambus. "Thanks to Rambus innovation and engineering excellence, we're delivering the industry's leading-edge performance of 9.6 Gbps in our HBM3 Memory Controller IP."

"HBM is a crucial memory technology for faster, more efficient processing of large AI training and inferencing sets, such as those used for generative AI," said Soo-Kyoum Kim, vice president, memory semiconductors at IDC. "It is critical that HBM IP providers like Rambus continually advance performance to enable leading-edge AI accelerators that meet the demanding requirements of the market."
HBM uses an innovative 2.5D/3D architecture that delivers high memory bandwidth and low power consumption for AI accelerators. With excellent latency and a compact footprint, it has become a leading choice for AI training hardware.
The Rambus HBM3 Memory Controller IP is designed for use in applications requiring high memory throughput, low latency and full programmability. The Controller is a modular, highly configurable solution that can be tailored to each customer's unique requirements for size and performance. Rambus provides integration and validation of the HBM3 Controller with the customer's choice of third-party HBM3 PHY.
Availability and Additional Information:
The Rambus HBM3 Memory Controller is available for licensing today.
Source:
Rambus
5 Comments on Rambus Boosts AI Performance with 9.6 Gbps HBM3 Memory Controller IP
Imagine the cost and fees you would owe them in consumer gpu sales... surreal.
They don't own anything HBM related afaik.
This would just help speed up the development process of a CPU/GPU/NPU/whatever that would connect to HBM3 memory.
I very much doubt Rambus is the only company to offer this kind of IP.