Tuesday, January 19th 2016

Samsung Begins Mass-Producing 4-Gigabyte HBM2 Memory Stacks

Samsung Electronics Co., Ltd., announced today that it has begun mass producing the industry's first 4-gigabyte (GB) DRAM package based on the second-generation High Bandwidth Memory (HBM2) interface, for use in high performance computing (HPC), advanced graphics and network systems, as well as enterprise servers. Samsung's new HBM solution will offer unprecedented DRAM performance - more than seven times faster than the current DRAM performance limit, allowing faster responsiveness for high-end computing tasks including parallel computing, graphics rendering and machine learning.

"By mass producing next-generation HBM2 DRAM, we can contribute much more to the rapid adoption of next-generation HPC systems by global IT companies," said Sewon Chun, senior vice president, Memory Marketing, Samsung Electronics. "Also, in using our 3D memory technology here, we can more proactively cope with the multifaceted needs of global IT, while at the same time strengthening the foundation for future growth of the DRAM market."

The newly introduced 4GB HBM2 DRAM, which uses Samsung's most efficient 20-nanometer process technology and advanced HBM chip design, satisfies the need for high performance, energy efficiency, reliability and small dimensions, making it well suited for next-generation HPC systems and graphics cards.

Following Samsung's introduction of a 128GB 3D TSV DDR4 registered dual inline memory module (RDIMM) last October, the new HBM2 DRAM marks the latest milestone in TSV (Through Silicon Via) DRAM technology.

The 4GB HBM2 package is created by stacking a buffer die at the bottom and four 8-gigabit (Gb) core dies on top. These are then vertically interconnected by TSV holes and microbumps. A single 8Gb HBM2 die contains over 5,000 TSV holes, which is more than 36 times that of an 8Gb TSV DDR4 die, offering a dramatic improvement in data transmission performance compared to typical wire-bonding based packages.
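The per-package capacity follows directly from that stack layout; a quick illustrative sketch of the arithmetic (the only assumption beyond the figures above is the 8-bits-per-byte conversion):

```python
# Per-package capacity of the announced HBM2 stack:
# one buffer die plus four 8-gigabit core dies.
CORE_DIES_PER_STACK = 4
DIE_CAPACITY_GBIT = 8

stack_gbit = CORE_DIES_PER_STACK * DIE_CAPACITY_GBIT  # 32 Gb of raw DRAM
stack_gbyte = stack_gbit // 8                         # 4 GB per package
print(f"{CORE_DIES_PER_STACK} x {DIE_CAPACITY_GBIT} Gb = {stack_gbit} Gb = {stack_gbyte} GB per stack")
```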

Samsung's new DRAM package features 256GBps of bandwidth, which is double that of an HBM1 DRAM package. This is equivalent to a more than seven-fold increase over the 36GBps bandwidth of a 4Gb GDDR5 DRAM chip, which has the fastest data speed per pin (9Gbps) among currently manufactured DRAM chips. Samsung's 4GB HBM2 also enables enhanced power efficiency by doubling the bandwidth per watt over a 4Gb-GDDR5-based solution, and embeds ECC (error-correcting code) functionality to offer high reliability.
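Those bandwidth figures can be sanity-checked from the per-pin rates. The minimal sketch below assumes the standard 1024-bit HBM stack interface and a 32-bit GDDR5 chip interface, neither of which is stated in the announcement:

```python
# Rough check of the quoted bandwidth figures.
# Assumed interface widths (not given in the press release):
#   HBM2 stack: 1024-bit interface at ~2 Gbps per pin
#   GDDR5 chip: 32-bit interface at 9 Gbps per pin (per-pin rate quoted above)
def peak_bandwidth_gbs(pins: int, gbps_per_pin: float) -> float:
    """Peak bandwidth in GB/s = pins * per-pin rate (Gbps) / 8 bits per byte."""
    return pins * gbps_per_pin / 8

hbm2_stack = peak_bandwidth_gbs(1024, 2)   # ~256 GB/s per stack
gddr5_chip = peak_bandwidth_gbs(32, 9)     # ~36 GB/s per chip
print(f"HBM2: {hbm2_stack:.0f} GB/s, GDDR5: {gddr5_chip:.0f} GB/s, "
      f"ratio ~{hbm2_stack / gddr5_chip:.1f}x")
```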

In addition, Samsung plans to produce an 8GB HBM2 DRAM package within this year. By specifying 8GB HBM2 DRAM in graphics cards, designers will be able to enjoy space savings of more than 95 percent compared to using GDDR5 DRAM, offering better-suited solutions for compact devices that require high-level graphics computing capabilities.

The company will steadily increase production volume of its HBM2 DRAM over the remainder of the year to meet anticipated growth in market demand for network systems and servers. Samsung will also expand its line-up of HBM2 DRAM solutions to stay ahead in the high-performance computing market and extend its lead in premium memory production.

18 Comments on Samsung Begins Mass-Producing 4-Gigabyte HBM2 Memory Stacks

#1
xfia
So the next top-tier HBM2 cards with 8GB from both camps are going to be at least twice as powerful, if not off the chart in some cases. Pretty amazing leap in performance. Will 8GB be enough for AMD to keep the best workstation/server GPU, or do they need higher capacity from HBM3 to make one first?
#2
Xzibit
2-Hi 2GB stack x4 = 8GB
4-Hi 4GB stack x4 = 16GB
8-Hi 8GB stack x4 = 32GB <- HBM2 max. No need for HBM3 unless more than 32GB is needed

You could keep adding stacks, but GPU design would have to change to accommodate it. What we saw from the Fiji and Pascal mock-ups shows four stacks (see the sketch below).
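As a quick illustrative check of those capacity figures (assuming 8Gb core dies and four stacks per GPU, as in the mock-ups mentioned):

```python
# HBM2 capacity per card for the stack heights listed above,
# assuming 8 Gb core dies and four stacks on the package.
DIE_GBIT = 8
STACKS = 4

for height in (2, 4, 8):                  # 2-Hi, 4-Hi, 8-Hi
    per_stack_gb = height * DIE_GBIT // 8
    print(f"{height}-Hi: {per_stack_gb} GB/stack x {STACKS} = {per_stack_gb * STACKS} GB")
```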
#3
xfia
Yeah, 32GB should make for a good new high-end workstation GPU haha
#4
Jism
HBM1 could be overclocked by at least 100%, offering twice the bandwidth that HBM1 at its stock 500MHz provides.

It's not that we actually need all that extra bandwidth; it's more about the extra video memory available for 4K gaming and beyond. HBM1 is limited to 4GB max.

It will be even better when devs actually start to use DX12 and, for example, combine the available memory in CrossFire, instead of 2x4GB of which only 4GB can effectively be used.
#5
The Quim Reaper
xfia: So the next top-tier HBM2 cards with 8GB from both camps are going to be at least twice as powerful, if not off the chart in some cases. Pretty amazing leap in performance.
It will only seem like an 'amazing leap in performance' because both AMD & Nvidia have been flogging the 28nm horse to death for the last 4yrs instead of releasing a new architecture every 18mths like they used to.
#6
xfia
The Quim Reaper: It will only seem like an 'amazing leap in performance' because both AMD & Nvidia have been flogging the 28nm horse to death for the last 4yrs instead of releasing a new architecture every 18mths like they used to.
Def dragging heels, but better than being a rushed mess. Seems like they did a lot of research into the best manufacturing processes for all components.
#7
buggalugs
Samsung sure knows how to get things done. Could be the beginning of a good era for AMD......
#8
medi01
Good news for AMD...
But does nVidia's TSMC deal mean they can't get HBM2 mem from Samsung?
#9
Frick
Fishfaced Nincompoop
The Quim Reaper: It will only seem like an 'amazing leap in performance' because both AMD & Nvidia have been flogging the 28nm horse to death for the last 4yrs instead of releasing a new architecture every 18mths like they used to.
Like they had a choice. They didn't.
#10
TheLostSwede
News Editor
The Quim Reaper: It will only seem like an 'amazing leap in performance' because both AMD & Nvidia have been flogging the 28nm horse to death for the last 4yrs instead of releasing a new architecture every 18mths like they used to.
How have they been dragging their feet when their manufacturing partners have been unable to come up with viable node shrinks? It's not really AMD or Nvidia's fault in this case, as TSMC has failed to deliver on its promised 20nm node and we're now jumping to 16/14nm because of it.
#11
TheLostSwede
News Editor
medi01: Good news for AMD...
But does nVidia's TSMC deal mean they can't get HBM2 mem from Samsung?
Why would it? Samsung sells memory to the highest bidder/any customer that's willing to pay for it. Why would this have anything to do with who makes the GPU?
#12
xfia
At least they figured out what node is best for what component..
TheLostSwede: Why would it? Samsung sells memory to the highest bidder/any customer that's willing to pay for it. Why would this have anything to do with who makes the GPU?
Samsung will take NV cash in a heartbeat, plus HBM is an open standard, so there is nothing stopping another company from producing it and selling it to whoever they want.
#14
medi01
Recus: Why?
Because of nVidia's TSMC deal, but I stand corrected, thanks.
#15
PP Mguire
It's been said Pascal will have HBM2 already, so this info pretty much confirms the 16GB quoted target.
#16
Xzibit
PP Mguire: It's been said Pascal will have HBM2 already, so this info pretty much confirms the 16GB quoted target.
It's also been said that Pascal GP104 will use GDDR5X. If Nvidia repeats the cycle, GP104 will be their flagship and big Pascal GP110 won't be GeForce-ready until next cycle, some time in 2017.
#17
PP Mguire
Xzibit: It's also been said that Pascal GP104 will use GDDR5X. If Nvidia repeats the cycle, GP104 will be their flagship and big Pascal GP110 won't be GeForce-ready until next cycle, some time in 2017.
And if that happens I'll happily be running AMD come this summer. :peace:
#18
Divide Overflow
HBM2? HBM products were barely beginning to show up. Should we wait a month or two for HBM3 or 4?