
Samsung Shows Off 32 Gbps GDDR7 Memory at GTC

btarunr

Editor & Senior Moderator
Staff member
Joined
Oct 9, 2007
Messages
47,297 (7.53/day)
Location
Hyderabad, India
System Name RBMK-1000
Processor AMD Ryzen 7 5700G
Motherboard ASUS ROG Strix B450-E Gaming
Cooling DeepCool Gammax L240 V2
Memory 2x 8GB G.Skill Sniper X
Video Card(s) Palit GeForce RTX 2080 SUPER GameRock
Storage Western Digital Black NVMe 512GB
Display(s) BenQ 1440p 60 Hz 27-inch
Case Corsair Carbide 100R
Audio Device(s) ASUS SupremeFX S1220A
Power Supply Cooler Master MWE Gold 650W
Mouse ASUS ROG Strix Impact
Keyboard Gamdias Hermes E2
Software Windows 11 Pro
Samsung Electronics showed off its latest graphics memory innovations at GTC, with an exhibit of its new 32 Gbps GDDR7 memory chip. The chip is designed to power the next generation of consumer and professional graphics cards, and some models of NVIDIA's GeForce RTX "Blackwell" generation are expected to implement GDDR7. The chip Samsung showed off at GTC is of the highly relevant 16 Gbit density (2 GB). This is important, as NVIDIA is rumored to keep graphics card memory sizes largely similar to where they currently are, while only focusing on increasing memory speeds.

The Samsung GDDR7 chip on show achieves its 32 Gbps speed at a DRAM voltage of just 1.1 V, below the 1.2 V called for in JEDEC's GDDR7 specification. Together with other Samsung-specific power-management innovations, this translates to a 20% improvement in energy efficiency. Although the chip is capable of 32 Gbps, NVIDIA isn't expected to run its first GeForce RTX "Blackwell" graphics cards that fast; the first SKUs are expected to ship with 28 Gbps GDDR7, which means NVIDIA could run this Samsung chip at an even lower voltage, or with tighter timings. Samsung also made innovations with the package substrate that decrease thermal resistance by 70% compared to its GDDR6 chips. Both NVIDIA and AMD are expected to launch their first discrete GPUs implementing GDDR7 in the second half of 2024.
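To put those per-pin speeds in perspective, peak memory bandwidth is simply the per-pin data rate multiplied by the bus width, divided by 8 bits per byte. The sketch below runs that arithmetic for 28 Gbps and 32 Gbps GDDR7; the 256-bit and 384-bit bus widths are only illustrative assumptions, not confirmed "Blackwell" specifications.

```python
# Minimal sketch: peak memory bandwidth from per-pin data rate and bus width.
# Bus widths are assumptions for illustration, not confirmed GPU specs.

def peak_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s = per-pin rate (Gbps) x bus width (bits) / 8 bits per byte."""
    return data_rate_gbps * bus_width_bits / 8

for rate in (28, 32):          # rumored 28 Gbps launch speed vs. the chip's rated 32 Gbps
    for bus in (256, 384):     # assumed bus widths, for illustration only
        print(f"{rate} Gbps on a {bus}-bit bus -> {peak_bandwidth_gbs(rate, bus):.0f} GB/s")
```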



View at TechPowerUp Main Site | Source
 
Joined
Oct 26, 2008
Messages
2,259 (0.38/day)
System Name Budget AMD System
Processor Threadripper 1900X @ 4.1 GHz (100x41 @ 1.325 V)
Motherboard Gigabyte X399 Aorus Gaming 7
Cooling EKWB X399 Monoblock
Memory 4x8GB GSkill TridentZ RGB 14-14-14-32 CR1 @ 3266
Video Card(s) XFX Radeon RX Vega 64 Liquid @ 1,800 MHz Core, 1,025 MHz HBM2
Storage 1x ADATA SX8200 NVMe, 1x Seagate 2.5" FireCuda 2TB SATA, 1x 500GB HGST SATA
Display(s) Vizio 22" 1080p 60 Hz TV (Samsung Panel)
Case Corsair 570X
Audio Device(s) Onboard
Power Supply Seasonic X Series 850W KM3
Software Windows 10 Pro x64
Would still rather have an HBM2 Aquabolt or HBM3 card. I've had my Vega 64 XTX for 6+ years now and it's never had an issue, and it's seen 24/7 use. I rarely EVER turn my PC off.

I'd be more than willing to pay $1,000-1,200 for a GPU with the same performance as, say, a 7900 XTX but with 16 GB of HBM2 Aquabolt.
 
Joined
May 12, 2017
Messages
2,207 (0.79/day)
Would still rather have an HBM2 Aquabolt or HBM3 card. I've had my Vega 64 XTX for 6+ years now and it's never had an issue, and it's seen 24/7 use. I rarely EVER turn my PC off.

I'd be more than willing to pay $1,000-1,200 for a GPU with the same performance as, say, a 7900 XTX but with 16 GB of HBM2 Aquabolt.

Totally agree with you. Users who normally part with $1,000+ for cards with GDDR6 should be getting HBM for that price. The latest price cuts show the cards did not need to be that expensive in the first place.


2x Vega Nano
2x R9 Nano
 
Joined
Aug 21, 2013
Messages
1,936 (0.47/day)
Same. For those who say HBM is expensive and the AI market gobbles up all available capacity: on cards that cost $1,000+ the cost is less of an issue, and the AI market always chases the latest and greatest. Currently that's HBM3e, but gaming cards could make do with HBM3 or even older HBM2/2e, which are in much less demand. Also, since most people don't buy $1,000+ cards, the HBM supply would not need to be as big as GDDR6 or what's needed by AI cards.

For example, comparing the last consumer card with HBM2 (Radeon VII, 16 GB, 4096-bit, 4x4 GB) and the fastest card with GDDR6X (4080S, 16 GB, 256-bit, 8x2 GB), the four-year-older HBM2 card still has a lead in memory bandwidth and compactness on the PCB: 1 TB/s vs. 736 GB/s.
Yes, the 4090 technically has the same ~1 TB/s bandwidth, albeit with slower 21 Gbps G6X on a wider 384-bit bus. If it used the newer 23 Gbps G6X, it would have ~1.1 TB/s.
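Those figures check out with the same per-pin rate x bus width arithmetic; a minimal sketch (the 23 Gbps 4090 configuration is hypothetical, as noted above):

```python
# Checking the bandwidth figures quoted above: GB/s = per-pin Gbps x bus bits / 8.
cards = {
    "Radeon VII (HBM2, 2 Gbps, 4096-bit)":      (2, 4096),
    "RTX 4080 Super (G6X, 23 Gbps, 256-bit)":   (23, 256),
    "RTX 4090 (G6X, 21 Gbps, 384-bit)":         (21, 384),
    "RTX 4090 with 23 Gbps G6X (hypothetical)": (23, 384),
}
for name, (gbps, bits) in cards.items():
    print(f"{name}: {gbps * bits / 8:.0f} GB/s")
# Prints roughly 1024, 736, 1008 and 1104 GB/s respectively.
```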

Also, HBM2 and newer versions still hold the advantage in stack size, with 4 GB being common, whereas GDDR7 only plans to move to 3 GB modules sometime in 2025 at the earliest.

HBM also supports building cards with intermediate capacities/odd numbers of stacks while still retaining much of the speed, such as using 3x4 GB stacks for a 12 GB card, or 6x4 GB for 24 GB, not to mention much higher capacities when using stacks bigger than 4 GB.
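A small sketch of that scaling, assuming 4 GB per stack and the standard 1024-bit HBM2 interface per stack; the stack counts are illustrative, not real products:

```python
# Capacity and interface width scale linearly with HBM stack count.
STACK_GB, STACK_BITS = 4, 1024   # assumed 4 GB stacks, 1024-bit interface per stack
for stacks in (3, 4, 6):
    print(f"{stacks} stacks -> {stacks * STACK_GB} GB on a {stacks * STACK_BITS}-bit bus")
# 3 stacks -> 12 GB / 3072-bit, 4 stacks -> 16 GB / 4096-bit, 6 stacks -> 24 GB / 6144-bit.
```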

I'm less sure about HBM's power consumption, but considering that cards costing $1,000+ already consume more than 350 W with a limit of roughly 600 W, I don't see a big problem with this either.
 
Joined
Jun 1, 2010
Messages
392 (0.07/day)
System Name Very old, but all I've got ®
Processor So old, you don't wanna know... Really!
Not to mention, with the chiplet/MCM approach, AMD could easily stuff a couple of dense HBM stacks on the same interposer, close to their MCDs. That would remove the bandwidth and bus-width issues instantly. This is especially crucial for lower-end SKUs like the 7800 XT (and 7900 GRE/XT at some point), which BTW have plenty of space left from unused MCDs. They could also try to "integrate" HBM on top of the MCD or into it. But don't beat me, just some layman thoughts aloud.
 