Friday, September 6th 2024

Micron Announces 12-high HBM3E Memory, Bringing 36 GB Capacity and 1.2 TB/s Bandwidth

As AI workloads continue to evolve and expand, memory bandwidth and capacity are increasingly critical for system performance. The industry's latest GPUs need the highest-performance high-bandwidth memory (HBM), significant memory capacity, and improved power efficiency. Micron is at the forefront of memory innovation to meet these needs and is now shipping production-capable HBM3E 12-high to key industry partners for qualification across the AI ecosystem.

Micron's industry-leading HBM3E 12-high 36 GB delivers significantly lower power consumption than competitors' 8-high 24 GB offerings, despite having 50% more DRAM capacity in the package.
Micron HBM3E 12-high boasts an impressive 36 GB capacity, a 50% increase over current HBM3E 8-high offerings, allowing larger AI models like Llama 2 with 70 billion parameters to run on a single processor. This capacity increase enables faster time to insight by avoiding CPU offload and GPU-to-GPU communication delays. Micron HBM3E 12-high 36 GB also delivers significantly lower power consumption than competitors' HBM3E 8-high 24 GB solutions, and it offers more than 1.2 terabytes per second (TB/s) of memory bandwidth at a pin speed greater than 9.2 gigabits per second (Gb/s). These combined advantages, maximum throughput with the lowest power consumption, help ensure optimal outcomes for power-hungry data centers.

Additionally, Micron HBM3E 12-high incorporates fully programmable MBIST (memory built-in self-test) that can run system-representative traffic at full spec speed, providing improved test coverage for expedited validation, enabling faster time to market, and enhancing system reliability.
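The headline figures are easy to sanity-check with quick arithmetic. The sketch below is a minimal illustration, not Micron code: the 1024-bit interface per stack is the standard HBM configuration, and the 3 GB (24 Gb) per-die density is inferred from the 36 GB, 12-high figure rather than stated in the announcement.

```python
# Back-of-the-envelope check of the announced HBM3E 12-high numbers.
# Assumes the standard 1024-bit HBM interface per stack; per-die density
# is inferred (36 GB / 12 dies), not taken from Micron's announcement.

PIN_SPEED_GBPS = 9.2     # per-pin data rate in Gb/s (from the article)
PINS_PER_STACK = 1024    # HBM interface width per stack (standard)
DIES_PER_STACK = 12      # 12-high stack
DIE_DENSITY_GB = 3       # 24 Gb = 3 GB per die (inferred)

bandwidth_gb_s = PIN_SPEED_GBPS * PINS_PER_STACK / 8   # bits -> bytes
capacity_gb = DIES_PER_STACK * DIE_DENSITY_GB
uplift = capacity_gb / (8 * DIE_DENSITY_GB) - 1        # vs. an 8-high stack

print(f"Per-stack bandwidth: {bandwidth_gb_s:.1f} GB/s (~{bandwidth_gb_s / 1000:.2f} TB/s)")
print(f"Stack capacity: {capacity_gb} GB ({uplift:.0%} over 8-high)")
```

At exactly 9.2 Gb/s this works out to about 1.18 TB/s per stack, which is why the article pairs "more than 1.2 TB/s" with a pin speed "greater than" 9.2 Gb/s; the capacity math likewise confirms the 50% uplift over an 8-high 24 GB stack.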
Robust ecosystem support
Micron is now shipping production-capable HBM3E 12-high units to key industry partners for qualification across the AI ecosystem. This HBM3E 12-high milestone demonstrates Micron's innovations to meet the data-intensive demands of the evolving AI infrastructure.

Micron is also a proud partner in TSMC's 3DFabric Alliance, which helps shape the future of semiconductor and system innovations. AI system manufacturing is complex, and HBM3E integration requires close collaboration between memory suppliers, customers and outsourced semiconductor assembly and test (OSAT) players.

In a recent exchange, Dan Kochpatcharin, head of the Ecosystem and Alliance Management Division at TSMC, commented, "TSMC and Micron have enjoyed a long-term strategic partnership. As part of the OIP ecosystem, we have worked closely to enable Micron's HBM3E-based system and chip-on-wafer-on-substrate (CoWoS) packaging design to support our customer's AI innovation."

In summary, here are the Micron HBM3E 12-high 36 GB highlights:
  • Undergoing multiple customer qualifications: Micron is shipping production-capable 12-high units to key industry partners to enable qualifications across the AI ecosystem.
  • Seamless scalability: With 36 GB of capacity (a 50% increase over current 8-high HBM3E offerings), Micron HBM3E 12-high allows data centers to scale their increasing AI workloads seamlessly.
  • Exceptional efficiency: Micron HBM3E 12-high 36 GB delivers significantly lower power consumption than competitive HBM3E 8-high 24 GB solutions.
  • Superior performance: With pin speed greater than 9.2 gigabits per second (Gb/s), Micron HBM3E 12-high 36 GB delivers more than 1.2 TB/s of memory bandwidth, enabling lightning-fast data access for AI accelerators, supercomputers and data centers.
  • Expedited validation: Fully programmable MBIST capabilities can run at speeds representative of system traffic, providing improved test coverage for expedited validation, enabling faster time to market and enhancing system reliability.
Looking ahead
Micron's leading-edge data center memory and storage portfolio is designed to meet the evolving demands of generative AI workloads. From near memory (HBM) and main memory (high-capacity server RDIMMs) to Gen 5 PCIe NVMe SSDs and data lake SSDs, Micron offers market-leading products that scale AI workloads efficiently and effectively.

As Micron continues to focus on extending its industry leadership, the company is already looking toward the future with its HBM4 and HBM4E roadmap. This forward-thinking approach ensures that Micron remains at the forefront of memory and storage development, driving the next wave of advancements in data center technology.

For more information, visit Micron's HBM3E page.
Source: Micron

12 Comments on Micron Announces 12-high HBM3E Memory, Bringing 36 GB Capacity and 1.2 TB/s Bandwidth

#1
Wirko
So Micron outsources a crucial (muh) part of their process, the bonding of stacked dies, to TSMC? That's surprising.
#2
TheinsanegamerN
Wirko: "So Micron outsources a crucial (muh) part of their process, the bonding of stacked dies, to TSMC? That's surprising."
Wait, so it's all TSMC?

....always has been.
#3
bug
Not gonna come anywhere near the consumer space, so meh...
#5
Steevo
I was confused by the 50% more when they stacked 12 instead of the 8, I'm glad they were able to point out that 12 is 50% more than 8, my life is now complete.
#6
bug
Steevo: "I was confused by the 50% more when they stacked 12 instead of the 8, I'm glad they were able to point out that 12 is 50% more than 8, my life is now complete."
That's how it should always be (% diff compared to the old value), though some will play fast and loose with that.
#7
hsew
bug: "Not gonna come anywhere near the consumer space, so meh..."
Not enough RGB
#8
bug
hsew: "Not enough RGB"
High-latency, huge bandwidth, iirc, which isn't a great fit for consumer GPUs.
#10
Aquinus
Resident Wat-man
bug: "High-latency, huge bandwidth, iirc, which isn't a great fit for consumer GPUs."
It's an excellent fit for how GPUs work. The problem is that the added cost isn't worth it for desktop GPUs, because less expensive options can get the same results. I'd argue that HBM's advantage isn't bandwidth, since we already have plenty of that, but its power efficiency and smaller footprint compared to traditional DRAM. That makes it far more suitable for higher-performance mobile applications in my opinion, because you can get the same work done with less power and in less space. Both are precious commodities for mobile devices and the server space.

So I agree that it's not a great fit for desktop GPUs. It's a great fit for mobile and server GPUs simply because of the power consumption and space advantage it has. We already see this in the server market with these server GPUs that nVidia has been producing for AI and whatnot. The disadvantage of HBM is all of the costs (money) associated with it.
#11
Nordic
This would never exist, but I think it would be cool. Imagine an APU with a 220 W TDP and 32 GB of on-die HBM, in addition to normal DIMM slots separate from the HBM. You could have the benefits of a massive L4 cache for the CPU and more than sufficient on-die VRAM for the GPU.
#12
Aquinus
Resident Wat-man
Nordic: "This would never exist, but I think it would be cool. Imagine an APU with a 220 W TDP and 32 GB of on-die HBM, in addition to normal DIMM slots separate from the HBM. You could have the benefits of a massive L4 cache for the CPU and more than sufficient on-die VRAM for the GPU."
You don't have to look that far to find a CPU with HBM memory onboard.
www.intel.com/content/www/us/en/products/details/processors/xeon/max-series.html

Or reviews to see how it fares on HPC applications.
www.phoronix.com/review/xeon-max-9468-9480-hbm2e/7

I agree though. I'd like to see an APU-like device with a stack or two of this new HBM3e, at the very least to see how it fares.