Tuesday, January 19th 2016
Samsung Begins Mass-Producing 4-Gigabyte HBM2 Memory Stacks
Samsung Electronics Co., Ltd., announced today that it has begun mass producing the industry's first 4-gigabyte (GB) DRAM package based on the second-generation High Bandwidth Memory (HBM2) interface, for use in high performance computing (HPC), advanced graphics and network systems, as well as enterprise servers. Samsung's new HBM solution will offer unprecedented DRAM performance - more than seven times faster than the current DRAM performance limit, allowing faster responsiveness for high-end computing tasks including parallel computing, graphics rendering and machine learning.
"By mass producing next-generation HBM2 DRAM, we can contribute much more to the rapid adoption of next-generation HPC systems by global IT companies," said Sewon Chun, senior vice president, Memory Marketing, Samsung Electronics. "Also, in using our 3D memory technology here, we can more proactively cope with the multifaceted needs of global IT, while at the same time strengthening the foundation for future growth of the DRAM market."
The newly introduced 4GB HBM2 DRAM, which uses Samsung's most efficient 20-nanometer process technology and advanced HBM chip design, satisfies the need for high performance, energy efficiency, reliability and small dimensions, making it well suited for next-generation HPC systems and graphics cards.
Following Samsung's introduction of a 128GB 3D TSV DDR4 registered dual inline memory module (RDIMM) last October, the new HBM2 DRAM marks the latest milestone in TSV (Through Silicon Via) DRAM technology.
The 4GB HBM2 package is created by stacking a buffer die at the bottom and four 8-gigabit (Gb) core dies on top. These are then vertically interconnected by TSV holes and microbumps. A single 8Gb HBM2 die contains over 5,000 TSV holes, which is more than 36 times that of an 8Gb TSV DDR4 die, offering a dramatic improvement in data transmission performance compared to typical wire-bonding-based packages.
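As a quick sanity check on that capacity figure, the arithmetic works out as in the short Python sketch below (purely illustrative, not Samsung code): four 8Gb core dies give 32Gb, or 4GB, per package.

# Back-of-the-envelope check of the stack capacity described above.
# Values come from the article; this is illustrative only, not Samsung code.

GBIT_PER_CORE_DIE = 8      # each core die stores 8 gigabits
CORE_DIES_PER_STACK = 4    # four core dies stacked on one buffer die

stack_gbit = GBIT_PER_CORE_DIE * CORE_DIES_PER_STACK  # 32 Gb
stack_gbyte = stack_gbit // 8                          # 4 GB per package

print(f"{CORE_DIES_PER_STACK} x {GBIT_PER_CORE_DIE} Gb = {stack_gbyte} GB per HBM2 package")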
Samsung's new DRAM package features 256GBps of bandwidth, which is double that of an HBM1 DRAM package. This is equivalent to a more than seven-fold increase over the 36GBps bandwidth of a 4Gb GDDR5 DRAM chip, which has the fastest data speed per pin (9Gbps) among currently manufactured DRAM chips. Samsung's 4GB HBM2 also enables enhanced power efficiency by doubling the bandwidth per watt over a 4Gb-GDDR5-based solution, and embeds ECC (error-correcting code) functionality to offer high reliability.
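The seven-fold figure follows from per-pin data rate multiplied by interface width. The sketch below reproduces it, assuming the JEDEC-standard bus widths of 1024 bits per HBM stack and 32 bits per GDDR5 chip; these widths are not stated in the press release.

# Minimal sketch reproducing the bandwidth comparison above.
# Assumes JEDEC-standard interface widths: 1024 bits per HBM2 stack and
# 32 bits per GDDR5 chip (assumptions, not figures from the press release).

def peak_bandwidth_gb_s(pin_speed_gbps, bus_width_bits):
    """Peak bandwidth in GB/s = per-pin data rate (Gb/s) x bus width / 8."""
    return pin_speed_gbps * bus_width_bits / 8

hbm2 = peak_bandwidth_gb_s(2.0, 1024)   # ~256 GB/s per stack
gddr5 = peak_bandwidth_gb_s(9.0, 32)    # ~36 GB/s per chip

print(f"HBM2: {hbm2:.0f} GB/s, GDDR5: {gddr5:.0f} GB/s, ratio: {hbm2 / gddr5:.1f}x")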
In addition, Samsung plans to produce an 8GB HBM2 DRAM package within this year. By specifying 8GB HBM2 DRAM in graphics cards, designers will be able to achieve space savings of more than 95 percent compared to using GDDR5 DRAM, making it a better fit for compact devices that require high-level graphics computing capabilities.
The company will steadily increase production volume of its HBM2 DRAM over the remainder of the year to meet anticipated growth in market demand for network systems and servers. Samsung will also expand its line-up of HBM2 DRAM solutions to stay ahead in the high-performance computing market and extend its lead in premium memory production.
"By mass producing next-generation HBM2 DRAM, we can contribute much more to the rapid adoption of next-generation HPC systems by global IT companies," said Sewon Chun, senior vice president, Memory Marketing, Samsung Electronics. "Also, in using our 3D memory technology here, we can more proactively cope with the multifaceted needs of global IT, while at the same time strengthening the foundation for future growth of the DRAM market."
The newly introduced 4GB HBM2 DRAM, which uses Samsung's most efficient 20-nanometer process technology and advanced HBM chip design, satisfies the need for high performance, energy efficiency, reliability and small dimensions making it well suited for next-generation HPC systems and graphics cards.
Following Samsung's introduction of a 128GB 3D TSV DDR4 registered dual inline memory module (RDIMM) last October, the new HBM2 DRAM marks the latest milestone in TSV (Through Silicon Via) DRAM technology.
The 4GB HBM2 package is created by stacking a buffer die at the bottom and four 8-gigabit (Gb) core dies on top. These are then vertically interconnected by TSV holes and microbumps. A single 8Gb HBM2 die contains over 5,000 TSV holes, which is more than 36 times that of a 8Gb TSV DDR4 die, offering a dramatic improvement in data transmission performance compared to typical wire-bonding based packages.
Samsung's new DRAM package features 256GBps of bandwidth, which is double that of a HBM1 DRAM package. This is equivalent to a more than seven-fold increase over the 36GBps bandwidth of a 4Gb GDDR5 DRAM chip, which has the fastest data speed per pin (9Gbps) among currently manufactured DRAM chips. Samsung's 4GB HBM2 also enables enhanced power efficiency by doubling the bandwidth per watt over a 4Gb-GDDR5-based solution, and embeds ECC (error-correcting code) functionality to offer high reliability.
In addition, Samsung plans to produce an 8GB HBM2 DRAM package within this year. By specifying 8GB HBM2 DRAM in graphics cards, designers will be able to enjoy a space savings of more than 95 percent, compared to using GDDR5 DRAM, offering more optimal solutions for compact devices that require high-level graphics computing capabilities.
The company will steadily increase production volume of its HBM2 DRAM over the remainder of the year to meet anticipated growth in market demand for network systems and servers. Samsung will also expand its line-up of HBM2 DRAM solutions to stay ahead in the high-performance computing market and extend its lead in premium memory production.
18 Comments on Samsung Begins Mass-Producing 4-Gigabyte HBM2 Memory Stacks
4-Hi 4GB stack x4 = 16GB
8-Hi 8GB stack x4 = 32GB <- HBM2 max. No need for HBM3 unless more than 32GB is needed
You could keep adding stacks, but the GPU design would have to change to accommodate them. The Fiji and Pascal mock-ups we've seen show 4 stacks.
It's not that we actually need all that extra bandwidth; it's more about the extra video memory available for 4K gaming and beyond. HBM1 is limited to 4GB max.
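For reference, the stack arithmetic in the comment above can be written out as a small illustrative sketch, assuming the 4-stack layout seen on Fiji and the Pascal mock-ups (actual stack counts depend on the interposer design):

# Illustrative sketch of per-GPU capacity with 8Gb (1GB) core dies.
# Assumes a 4-stack layout, as on Fiji and the Pascal mock-ups.

STACKS_PER_GPU = 4
GB_PER_CORE_DIE = 1  # one 8Gb core die = 1GB

def total_capacity_gb(dies_per_stack):
    return STACKS_PER_GPU * dies_per_stack * GB_PER_CORE_DIE

print(f"4-Hi stacks: {total_capacity_gb(4)} GB")  # 16 GB
print(f"8-Hi stacks: {total_capacity_gb(8)} GB")  # 32 GB, the HBM2 ceiling with four stacks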
It will be even better when devs actually start to use DX12 and, for example, combine the available memory in CrossFire instead of 2x4GB of which only 4GB is effectively usable.
But does nVidia's TSMC deal mean they can't get HBM2 mem from Samsung?