Wednesday, March 20th 2024

Samsung Shows Off 32 Gbps GDDR7 Memory at GTC

Samsung Electronics showed off its latest graphics memory innovations at GTC, with an exhibit of its new 32 Gbps GDDR7 memory chip. The chip is designed to power the next generation of consumer and professional graphics cards, and some models of NVIDIA's GeForce RTX "Blackwell" generation are expected to implement GDDR7. The chip Samsung showed off at GTC has the highly relevant 16 Gbit (2 GB) density. This is important, as NVIDIA is rumored to keep graphics card memory sizes largely similar to where they currently are, focusing instead on increasing memory speeds.

The Samsung GDDR7 chip shown reaches its 32 Gbps speed at a DRAM voltage of just 1.1 V, below the 1.2 V called for in JEDEC's GDDR7 specification. Together with other power-management innovations specific to Samsung, this translates to a 20% improvement in energy efficiency. Although the chip is capable of 32 Gbps, NVIDIA isn't expected to give its first GeForce RTX "Blackwell" graphics cards that speed; the first SKUs are expected to ship with 28 Gbps GDDR7, which means NVIDIA could run this Samsung chip at a slightly lower voltage, or with tighter timings. Samsung also made innovations with the package substrate, which decreases thermal resistance by 70% compared to its GDDR6 chips. Both NVIDIA and AMD are expected to launch their first discrete GPUs implementing GDDR7 in the second half of 2024.
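To put the 28 Gbps vs. 32 Gbps difference in perspective, peak memory bandwidth scales linearly with the per-pin data rate. A minimal sketch, assuming a hypothetical 256-bit bus (no "Blackwell" bus widths have been announced):

```python
def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width (bits) x per-pin rate (Gb/s) / 8."""
    return bus_width_bits * data_rate_gbps / 8

# Hypothetical 256-bit GDDR7 card at the two speeds mentioned in the article:
print(peak_bandwidth_gb_s(256, 28))  # 896.0 GB/s at 28 Gbps
print(peak_bandwidth_gb_s(256, 32))  # 1024.0 GB/s at 32 Gbps
```

On that assumed bus width, the full 32 Gbps would deliver roughly 14% more bandwidth than the rumored 28 Gbps launch speed.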
Source: HardwareLuxx.de

4 Comments on Samsung Shows Off 32 Gbps GDDR7 Memory at GTC

#1
Flyordie
Would still rather have an HBM2 Aquabolt or HBM3 card. I've had my Vega64 XTX for 6+ years now and it's never had an issue, and it's seen 24/7 use. I rarely EVER turn my PC off.

I'd be more than willing to pay $1,000-1,200 for a GPU with the same performance as, say, a 7900XTX, but with 16 GB of HBM2 Aquabolt.
#2
delshay
FlyordieWould still rather have an HBM2 Aquabolt or HBM3 card. I've had my Vega64 XTX for 6+ years now and it's never had an issue, and it's seen 24/7 use. I rarely EVER turn my PC off.

I'd be more than willing to pay $1,000-1,200 for a GPU with the same performance as, say, a 7900XTX, but with 16 GB of HBM2 Aquabolt.
Totally agree with you. Users who normally part with $1,000+ for cards with GDDR6 should be getting HBM at that price. The latest price cuts show the cards did not need to be that expensive in the first place.


#3
Tomorrow
Same. For those who say HBM is expensive and the AI market gobbles up all available capacity: on cards that cost $1,000+, the cost is less of an issue, and the AI market always chases the latest and greatest. Currently that's HBM3e, but gaming cards could make do with HBM3 or even older HBM2/2e, which are much less in demand. Also, since most people don't buy $1,000+ cards, the HBM supply would not need to be as big as GDDR6's, or what's needed by AI cards.

For example, comparing the last consumer card with HBM2 (Radeon VII: 16 GB, 4096-bit, 4x4 GB) and the fastest 16 GB card with GDDR6X (4080 Super: 16 GB, 256-bit, 8x2 GB), the four-year-older HBM2 card still leads in memory bandwidth and PCB compactness: 1 TB/s vs. 736 GB/s.
Yes, the 4090 technically has the same 1 TB/s bandwidth, albeit with slower 21 Gbps G6X on a wider 384-bit bus. If it used the newer 23 Gbps G6X, it would have 1.1 TB/s.
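The figures above follow directly from bus width times per-pin data rate. A quick sketch checking each card's numbers (HBM2 on the Radeon VII runs at 2 Gb/s per pin):

```python
def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width (bits) x per-pin rate (Gb/s) / 8."""
    return bus_width_bits * data_rate_gbps / 8

print(peak_bandwidth_gb_s(4096, 2.0))  # Radeon VII, HBM2:   1024.0 GB/s (~1 TB/s)
print(peak_bandwidth_gb_s(256, 23.0))  # 4080 Super, GDDR6X:  736.0 GB/s
print(peak_bandwidth_gb_s(384, 21.0))  # 4090, GDDR6X:       1008.0 GB/s (~1 TB/s)
print(peak_bandwidth_gb_s(384, 23.0))  # 4090 at 23 Gbps:    1104.0 GB/s (~1.1 TB/s)
```

HBM gets its bandwidth from an extremely wide bus at a low per-pin rate, while GDDR6X does the opposite, which is why a 4096-bit HBM2 card keeps pace with much newer GDDR6X parts.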

Also, HBM2 and newer versions still hold the advantage in stack size, with 4 GB stacks being common, whereas GDDR7 only plans to move to 3 GB modules sometime in 2025 at the earliest.

HBM also supports building cards with intermediate capacities/odd stack counts while retaining much of the speed, such as using 3x4 GB stacks for a 12 GB card, or 6x4 GB for 24 GB. Not to mention much higher capacities when using stacks bigger than 4 GB.

I'm less sure about HBM's power consumption, but considering that cards costing $1,000+ already consume more than 350 W, with a limit of roughly 600 W, I don't see a big problem with this either.
#4
Random_User
Not to mention, with the chiplet/MCM approach, AMD could easily stuff a couple of dense HBM modules on the same interposer, close to their MCDs. That would remove the bandwidth and bus-width issues instantly. This is especially crucial for lower-end SKUs like the 7800XT (and 7900GRE/XT at some point), which BTW have plenty of space left from unused MCDs. They could also try to "integrate" HBM on top of the MCD or into it. But don't beat me; just some layman thoughts aloud.