
Samsung Begins Mass-Producing 4-Gigabyte HBM2 Memory Stacks

btarunr

Editor & Senior Moderator
Samsung Electronics Co., Ltd., announced today that it has begun mass producing the industry's first 4-gigabyte (GB) DRAM package based on the second-generation High Bandwidth Memory (HBM2) interface, for use in high performance computing (HPC), advanced graphics and network systems, as well as enterprise servers. Samsung's new HBM solution will offer unprecedented DRAM performance - more than seven times faster than the current DRAM performance limit, allowing faster responsiveness for high-end computing tasks including parallel computing, graphics rendering and machine learning.

"By mass producing next-generation HBM2 DRAM, we can contribute much more to the rapid adoption of next-generation HPC systems by global IT companies," said Sewon Chun, senior vice president, Memory Marketing, Samsung Electronics. "Also, in using our 3D memory technology here, we can more proactively cope with the multifaceted needs of global IT, while at the same time strengthening the foundation for future growth of the DRAM market."

The newly introduced 4GB HBM2 DRAM, which uses Samsung's most efficient 20-nanometer process technology and advanced HBM chip design, satisfies the need for high performance, energy efficiency, reliability and small dimensions, making it well suited for next-generation HPC systems and graphics cards.

Following Samsung's introduction of a 128GB 3D TSV DDR4 registered dual inline memory module (RDIMM) last October, the new HBM2 DRAM marks the latest milestone in TSV (Through Silicon Via) DRAM technology.

The 4GB HBM2 package is created by stacking a buffer die at the bottom and four 8-gigabit (Gb) core dies on top. These are then vertically interconnected by TSV holes and microbumps. A single 8Gb HBM2 die contains over 5,000 TSV holes, which is more than 36 times that of an 8Gb TSV DDR4 die, offering a dramatic improvement in data transmission performance compared to typical wire-bonding based packages.

Samsung's new DRAM package features 256GBps of bandwidth, double that of an HBM1 DRAM package. This is equivalent to a more than seven-fold increase over the 36GBps bandwidth of a 4Gb GDDR5 DRAM chip, which has the fastest data speed per pin (9Gbps) among currently manufactured DRAM chips. Samsung's 4GB HBM2 also enables enhanced power efficiency by doubling the bandwidth per watt over a 4Gb-GDDR5-based solution, and embeds ECC (error-correcting code) functionality to offer high reliability.
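As a sanity check, the quoted figures follow from simple per-pin arithmetic. A minimal sketch, assuming the standard interface widths (a 1024-bit bus per HBM2 stack and a 32-bit bus per GDDR5 chip), which the press release does not state explicitly:

```python
def bandwidth_gbps(pin_rate_gbps, bus_width_bits):
    """Peak bandwidth in GB/s: per-pin data rate (Gb/s) times bus width, over 8 bits/byte."""
    return pin_rate_gbps * bus_width_bits / 8

hbm2 = bandwidth_gbps(2.0, 1024)   # 2 Gb/s per pin across a 1024-bit interface
gddr5 = bandwidth_gbps(9.0, 32)    # 9 Gb/s per pin across a 32-bit interface

print(hbm2)          # 256.0 GB/s per HBM2 stack
print(gddr5)         # 36.0 GB/s per GDDR5 chip
print(hbm2 / gddr5)  # ~7.1x, the "more than seven-fold" figure
```

Note how HBM trades a modest per-pin rate for a very wide bus, which is what the stacked, interposer-mounted design makes practical.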

In addition, Samsung plans to produce an 8GB HBM2 DRAM package within this year. By specifying 8GB HBM2 DRAM in graphics cards, designers will be able to enjoy a space savings of more than 95 percent, compared to using GDDR5 DRAM, offering more optimal solutions for compact devices that require high-level graphics computing capabilities.

The company will steadily increase production volume of its HBM2 DRAM over the remainder of the year to meet anticipated growth in market demand for network systems and servers. Samsung will also expand its line-up of HBM2 DRAM solutions to stay ahead in the high-performance computing market and extend its lead in premium memory production.

 
So the next top-tier 8GB HBM2 cards from both camps are going to be at least twice as powerful, if not off the charts in some cases. Pretty amazing leap in performance. Will 8GB be enough for AMD to keep the best workstation/server GPU, or do they need a higher capacity from HBM3 to make one first?
 
2-Hi 2GB stack x4 = 8GB
4-Hi 4GB stack x4 = 16GB
8-Hi 8GB stack x4 = 32GB <- HBM2 max. No need for HBM3 unless more than 32GB is needed

You could keep adding stacks, but the GPU design would have to change to accommodate them. What we saw from the Fiji and Pascal mock-ups shows four stacks.
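The stack-height arithmetic above as a quick sketch, assuming 8Gb (1GB) core dies and the four-stack layout seen on the Fiji and Pascal mock-ups:

```python
DIE_GB = 1   # one 8-gigabit core die = 1 gigabyte
STACKS = 4   # stacks per GPU package, as on the Fiji / Pascal mock-ups

def total_capacity_gb(stack_height, die_gb=DIE_GB, stacks=STACKS):
    """Total VRAM in GB for a given stack height (2-Hi, 4-Hi, 8-Hi)."""
    return stack_height * die_gb * stacks

for height in (2, 4, 8):
    print(f"{height}-Hi: {total_capacity_gb(height)}GB")  # 8GB, 16GB, 32GB
```

Raising `stacks` beyond 4 would also raise the ceiling, but, as noted above, that means a different interposer and package layout.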
 
Yeah, 32GB should make for a good new high-end workstation GPU, haha.
 
HBM1 could be overclocked by at least 100%, offering twice the bandwidth that HBM1 at 500MHz provides.

It's not that we actually need all that extra bandwidth; it's more about the extra video memory available for 4K gaming and beyond. HBM1 is limited to 4GB max.

It will be even better when devs actually start to use DX12 and, for example, combine the available memory in CrossFire, instead of 2x4GB of which only 4GB is effectively usable.
 
So the next top-tier 8GB HBM2 cards from both camps are going to be at least twice as powerful, if not off the charts in some cases. Pretty amazing leap in performance.

It will only seem like an 'amazing leap in performance' because both AMD & Nvidia have been flogging the 28nm horse to death for the last 4yrs instead of releasing a new architecture every 18mths like they used to.
 
It will only seem like an 'amazing leap in performance' because both AMD & Nvidia have been flogging the 28nm horse to death for the last 4yrs instead of releasing a new architecture every 18mths like they used to.
Def dragging heels, but better than being a rushed mess. Seems like they did a lot of research into the best manufacturing processes for all components.
 
Samsung sure knows how to get things done. Could be the beginning of a good era for AMD......
 
Good news for AMD...
But does nVidia's TSMC deal mean they can't get HBM2 mem from Samsung?
 
It will only seem like an 'amazing leap in performance' because both AMD & Nvidia have been flogging the 28nm horse to death for the last 4yrs instead of releasing a new architecture every 18mths like they used to.

Like they had a choice. They didn't.
 
It will only seem like an 'amazing leap in performance' because both AMD & Nvidia have been flogging the 28nm horse to death for the last 4yrs instead of releasing a new architecture every 18mths like they used to.

How have they been dragging their feet when their manufacturing partners have been unable to come up with viable node shrinks? It's not really AMD's or Nvidia's fault in this case: TSMC failed to deliver on its promised 20nm node, and we're now jumping to 16/14nm because of it.
 
Good news for AMD...
But does nVidia's TSMC deal mean they can't get HBM2 mem from Samsung?

Why would it? Samsung sells memory to the highest bidder/any customer that's willing to pay for it. Why would this have anything to do with who makes the GPU?
 
At least they figured out what node is best for what component.
Why would it? Samsung sells memory to the highest bidder/any customer that's willing to pay for it. Why would this have anything to do with who makes the GPU?
Samsung will take NV's cash in a heartbeat. Plus, HBM is an open standard, so there is nothing stopping another company from producing it and selling it to whomever they want.
 
It's been said Pascal will have HBM2 already so this info pretty much confirms the 16GB quoted target.
 
It's been said Pascal will have HBM2 already so this info pretty much confirms the 16GB quoted target.

It's also been said that Pascal GP104 will use GDDR5X. If Nvidia repeats the cycle, GP104 will be their flagship, and big Pascal GP110 won't be GeForce-ready until the next cycle, some time in 2017.
 
It's also been said that Pascal GP104 will use GDDR5X. If Nvidia repeats the cycle, GP104 will be their flagship, and big Pascal GP110 won't be GeForce-ready until the next cycle, some time in 2017.
And if that happens I'll happily be running AMD come this summer. :peace:
 
HBM2? HBM products were barely beginning to show up. Should we wait a month or two for HBM3 or 4?
 