Friday, September 6th 2024
Micron Announces 12-high HBM3E Memory, Bringing 36 GB Capacity and 1.2 TB/s Bandwidth
As AI workloads continue to evolve and expand, memory bandwidth and capacity are increasingly critical for system performance. The latest GPUs need the highest-performance high-bandwidth memory (HBM), significant memory capacity, and improved power efficiency. Micron is at the forefront of memory innovation to meet these needs and is now shipping production-capable HBM3E 12-high to key industry partners for qualification across the AI ecosystem.
Micron's industry-leading HBM3E 12-high 36 GB delivers significantly lower power consumption than our competitors' 8-high 24 GB offerings, despite having 50% more DRAM capacity in the package
Micron HBM3E 12-high boasts an impressive 36 GB capacity, a 50% increase over current HBM3E 8-high offerings, allowing larger AI models like Llama 2, with 70 billion parameters, to run on a single processor. This capacity increase enables faster time to insight by avoiding CPU offload and GPU-to-GPU communication delays. Micron HBM3E 12-high 36 GB delivers significantly lower power consumption than competing HBM3E 8-high 24 GB solutions, and it offers more than 1.2 terabytes per second (TB/s) of memory bandwidth at a pin speed greater than 9.2 gigabits per second (Gb/s). Together, maximum throughput and the lowest power consumption can ensure optimal outcomes for power-hungry data centers. Additionally, Micron HBM3E 12-high incorporates fully programmable MBIST (memory built-in self-test) that can run system-representative traffic at full spec speed, providing improved test coverage for expedited validation, enabling faster time to market, and enhancing system reliability.
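Those headline figures hang together with some quick arithmetic. The sketch below assumes the standard 1024-bit HBM3E stack interface and 24 Gb (3 GB) DRAM dies, which the 36 GB across 12 layers implies; the stacks-per-accelerator counts are illustrative assumptions only, not a description of any specific product:

```python
# Back-of-envelope check of the headline figures. The per-stack numbers
# (1024-bit interface, 9.2 Gb/s pins, 24 Gb dies) follow the HBM3E spec
# and Micron's announcement; the stacks-per-accelerator counts below
# are illustrative assumptions, not any specific product.

PIN_SPEED_GBPS = 9.2      # per-pin data rate, Gb/s
BUS_WIDTH_BITS = 1024     # HBM3E interface width per stack
DIE_DENSITY_GB = 3        # one 24 Gb DRAM die = 3 GB
DIES_PER_STACK = 12       # "12-high"

stack_bw_gbs = PIN_SPEED_GBPS * BUS_WIDTH_BITS / 8  # bits -> bytes
stack_cap_gb = DIE_DENSITY_GB * DIES_PER_STACK
print(f"Per-stack bandwidth: {stack_bw_gbs:.0f} GB/s (~1.2 TB/s)")
print(f"Per-stack capacity:  {stack_cap_gb} GB")

# Why the extra capacity matters: Llama 2 70B needs roughly
# 70e9 parameters * 2 bytes = 140 GB of weights in FP16.
weights_gb = 70e9 * 2 / 1e9
for stacks in (6, 8):  # assumed stack counts, for illustration only
    print(f"{stacks} stacks: 12-high gives {stacks * 36} GB, "
          f"8-high gives {stacks * 24} GB "
          f"(FP16 weights alone: {weights_gb:.0f} GB)")
```

Note that at exactly 9.2 Gb/s the interface works out to roughly 1.18 TB/s, so the quoted "more than 1.2 TB/s" corresponds to pin speeds a little above that floor.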
Robust ecosystem support
Micron is now shipping production-capable HBM3E 12-high units to key industry partners for qualification across the AI ecosystem. This HBM3E 12-high milestone demonstrates Micron's innovations to meet the data-intensive demands of the evolving AI infrastructure.
Micron is also a proud partner in TSMC's 3DFabric Alliance, which helps shape the future of semiconductor and system innovations. AI system manufacturing is complex, and HBM3E integration requires close collaboration between memory suppliers, customers and outsourced semiconductor assembly and test (OSAT) players.
In a recent exchange, Dan Kochpatcharin, head of the Ecosystem and Alliance Management Division at TSMC, commented, "TSMC and Micron have enjoyed a long-term strategic partnership. As part of the OIP ecosystem, we have worked closely to enable Micron's HBM3E-based system and chip-on-wafer-on-substrate (CoWoS) packaging design to support our customer's AI innovation."
In summary, here are the Micron HBM3E 12-high 36 GB highlights:
- Undergoing multiple customer qualifications: Micron is shipping production-capable 12-high units to key industry partners to enable qualifications across the AI ecosystem.
- Seamless scalability: With 36 GB of capacity (a 50% increase over current HBM3E 8-high offerings), Micron HBM3E 12-high allows data centers to scale their increasing AI workloads seamlessly.
- Exceptional efficiency: Micron HBM3E 12-high 36 GB delivers significantly lower power consumption than competing HBM3E 8-high 24 GB solutions.
- Superior performance: With pin speed greater than 9.2 gigabits per second (Gb/s), Micron HBM3E 12-high 36 GB delivers more than 1.2 TB/s of memory bandwidth, enabling lightning-fast data access for AI accelerators, supercomputers and data centers.
- Expedited validation: Fully programmable MBIST capabilities can run at speeds representative of system traffic, providing improved test coverage for expedited validation, enabling faster time to market and enhancing system reliability (a toy illustration of this kind of memory self-test follows this list).
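To make the MBIST point concrete, here is a minimal software sketch of a march-style memory test, the family of algorithms that memory built-in self-test engines commonly run. It is illustrative only: the sequence is the classic, published March C- algorithm rather than Micron's (undisclosed) programmable patterns, and a real MBIST engine is an on-die hardware state machine exercising the array at full interface speed.

```python
# Toy model of a march-style memory built-in self-test (MBIST).
# Illustrative only: this is the classic March C- sequence, not
# Micron's programmable patterns, and real MBIST runs in hardware
# at full interface speed rather than as Python over a list.

def march_c_minus(size, read, write):
    """Run March C- over a memory exposed as read(addr)/write(addr, bit).
    Returns (address, expected, got) tuples for every failed read."""
    faults = []

    def element(addresses, ops):
        for addr in addresses:
            for op, bit in ops:
                if op == "w":
                    write(addr, bit)
                else:  # "r": read back and compare with the expected bit
                    got = read(addr)
                    if got != bit:
                        faults.append((addr, bit, got))

    up, down = range(size), range(size - 1, -1, -1)
    element(up,   [("w", 0)])              # write 0 everywhere
    element(up,   [("r", 0), ("w", 1)])    # ascending: read 0, write 1
    element(up,   [("r", 1), ("w", 0)])    # ascending: read 1, write 0
    element(down, [("r", 0), ("w", 1)])    # descending: read 0, write 1
    element(down, [("r", 1), ("w", 0)])    # descending: read 1, write 0
    element(down, [("r", 0)])              # final read-back of 0
    return faults

# Simulate a tiny memory with one stuck-at-1 cell to show detection.
mem = [0] * 64
read = lambda addr: 1 if addr == 13 else mem[addr]  # cell 13 stuck at 1
write = lambda addr, bit: mem.__setitem__(addr, bit)
print(march_c_minus(64, read, write))  # flags address 13 on each read-0
```

A programmable hardware engine generalizes this idea with configurable address orders and data backgrounds, which is what allows it to approximate system-representative traffic during qualification.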
Micron's leading-edge data center memory and storage portfolio is designed to meet the evolving demands of generative AI workloads. From near memory (HBM) and main memory (high-capacity server RDIMMs) to Gen 5 PCIe NVMe SSDs and data lake SSDs, Micron offers market-leading products that scale AI workloads efficiently and effectively.
As Micron continues to focus on extending its industry leadership, the company is already looking toward the future with its HBM4 and HBM4E roadmap. This forward-thinking approach ensures that Micron remains at the forefront of memory and storage development, driving the next wave of advancements in data center technology.
For more information, visit Micron's HBM3E page.
Source:
Micron
12 Comments on Micron Announces 12-high HBM3E Memory, Bringing 36 GB Capacity and 1.2 TB/s Bandwidth
....always has been.
So I agree that it's not a great fit for desktop GPUs. It's a great fit for mobile and server GPUs simply because of the power consumption and space advantages it has. We already see this in the server market with the GPUs NVIDIA has been producing for AI and whatnot. The disadvantage of HBM is all of the cost associated with it.
www.intel.com/content/www/us/en/products/details/processors/xeon/max-series.html
Or reviews to see how it fares on HPC applications.
www.phoronix.com/review/xeon-max-9468-9480-hbm2e/7
I agree though. I'd like to see an APU-like device with a stack or two of this new HBM3e, at the very least to see how it fares.