Sunday, December 4th 2022

Samsung Reveals GDDR7 Memory Uses PAM3 Signalling to Achieve 36 Gbps Data-Rate

The next-generation GDDR7 memory standard is shaping up nicely, promising to double bandwidth and density over current GDDR6. In a company presentation detailing upcoming memory technologies, Samsung revealed that GDDR7 uses PAM3 signalling. While ones and zeroes are stored in DRAM memory cells, they are transmitted between devices (such as the DRAM chip and the GPU) as electrical waveforms known as "signals." Ones and zeroes are interpreted from patterns in the signal waveform.

Conventional GDDR6 memory uses NRZ (non-return-to-zero, or PAM2) signalling to achieve data-rates starting from 14 Gbps, with 24 Gbps expected to be the fastest production GDDR6 speed on offer. Some of the faster GDDR6 speeds, such as 18 Gbps, 20 Gbps, and 22 Gbps, couldn't reach production soon enough for the development phase of the GeForce RTX 30-series "Ampere" GPUs, so NVIDIA and Micron Technology co-developed the GDDR6X standard leveraging PAM4 signalling, offering speeds ranging from 18 Gbps to 23 Gbps (or higher) several quarters ahead of the faster JEDEC-standard GDDR6.
Conventional NRZ signalling transmits 1 bit per cycle, while PAM4 transmits 2 bits per cycle. PAM3 sits in between: its three-level waveform creates more "eyes" (gaps formed by intersections of waves that are interpreted as symbols), allowing 3 bits to be encoded across every 2 cycles, or roughly 1.5 bits per cycle (the theoretical maximum is log2(3), about 1.585 bits per cycle). Samsung states that PAM3 is 25% more efficient than NRZ signalling, and that GDDR7 will be 25% more energy efficient. PAM3 signalling is also used by the upcoming 80 Gbps per-direction USB4 Version 2.0 specification and Intel's next-generation 80 Gbps Thunderbolt.
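To make the symbol-rate arithmetic concrete, here is a minimal Python sketch (illustrative only, not from Samsung's presentation; the helper name is our own) that computes the theoretical bits per symbol for each signalling scheme and shows why 3 bits fit into 2 PAM3 symbols:

```python
import math

# Information per transmitted symbol: a scheme with L signal levels
# can carry at most log2(L) bits per symbol.
for name, levels in (("NRZ/PAM2", 2), ("PAM3", 3), ("PAM4", 4)):
    print(f"{name}: {math.log2(levels):.3f} bits per symbol")
# NRZ/PAM2: 1.000, PAM3: 1.585, PAM4: 2.000

def pam3_symbols_needed(bits: int) -> int:
    """Smallest number of three-level (PAM3) symbols whose 3**k states
    can represent all 2**bits binary values."""
    k = 0
    while 3 ** k < 2 ** bits:
        k += 1
    return k

# Practical groupings: 3 bits fit in 2 PAM3 symbols, 11 bits in 7 symbols.
print(pam3_symbols_needed(3))   # -> 2 (1.5 bits per symbol)
print(pam3_symbols_needed(11))  # -> 7 (~1.57 bits per symbol)
```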

As for performance, the Samsung slide references a 36 Gbps data-rate, which confirms that GDDR7 will bring a generational doubling in data-rates over GDDR6, much like GDDR6 did over GDDR5. A typical GPU with a 256-bit memory bus, when using 36 Gbps-rated GDDR7 memory, will enjoy 1152 GB/s of memory bandwidth. High-end GPUs with 384-bit memory interfaces will do 1728 GB/s. Mainstream GPUs with 128-bit interfaces get 576 GB/s on tap.
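As a quick sanity check on those figures, peak bandwidth is simply the per-pin data rate multiplied by the bus width, divided by eight to convert bits to bytes. A small illustrative Python snippet (the helper function is our own, not part of any spec):

```python
def peak_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: per-pin data rate (Gbps) times the
    bus width in bits, divided by 8 to convert bits to bytes."""
    return data_rate_gbps * bus_width_bits / 8

for bus_width in (128, 256, 384):
    print(f"{bus_width}-bit bus @ 36 Gbps: {peak_bandwidth_gbs(36, bus_width):.0f} GB/s")
# 128-bit: 576 GB/s, 256-bit: 1152 GB/s, 384-bit: 1728 GB/s
```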
Source: Dr Ian Cutress (Twitter)

23 Comments on Samsung Reveals GDDR7 Memory Uses PAM3 Signalling to Achieve 36 Gbps Data-Rate

#1
Minus Infinity
Why didn't they go straight for GDDR7W, given the improvement they showed with the just-announced GDDR6W? This stacked method seems like it should be the default configuration now.
Posted on Reply
#3
Nanochip
The forward march of technology soldiers on. Soon, even the mighty 4090 will be obsolete and nothing more than a once-expensive relic, a power-hungry dinosaur.
Posted on Reply
#4
watzupken
I am not sure which existing GDDR6 solution is being used to compare with the GDDR7 36Gbps data rate to derive that 25% efficiency, but I am expecting a big jump in power requirements, just not proportionate to the increase in data rate.
Posted on Reply
#5
R-T-B
dir_d: So PAM3 > PAM4
Technically speaking, no, but it's what they went with.
watzupken: I am not sure which existing GDDR6 solution is being used to compare with the GDDR7 36Gbps data rate to derive that 25% efficiency, but I am expecting a big jump in power requirements, just not proportionate to the increase in data rate.
That's the fate of all technology.
Posted on Reply
#6
mechtech
mmmmmmmmmmmmmmmmmmmmm.............................

Bandwidths

errrrr

Donuts......
Posted on Reply
#7
watzupken
Nanochip: The forward march of technology soldiers on. Soon, even the mighty 4090 will be obsolete and nothing more than a once-expensive relic, a power-hungry dinosaur.
I feel that flagship GPUs have always been very power hungry, given that they exist to push performance boundaries. In any case, I think we are already hitting a point where transistors are not shrinking fast enough to make more complex chips, so it will very likely become increasingly common to see bigger chips with higher power draw. The RTX 4090 is an exception this time because the jump from Samsung's 8nm (basically a refined 10nm) to TSMC's 4nm (a refined 5nm) is a very significant improvement. Even if Nvidia continues on to TSMC's 3nm for its next-gen GPUs, I don't think we will see such a big jump in performance. With VRAM getting faster, I suspect that other than the halo products, every other product range will start using narrower memory buses and let the memory pick up the slack.
Posted on Reply
#8
The King
Minus Infinity: Why didn't they go straight for GDDR7W, given the improvement they showed with the just-announced GDDR6W? This stacked method seems like it should be the default configuration now.
The answer, I believe, is that R&D costs money and they need to recover those costs.

It's not like any of these companies do these things just to bring the fastest technology available without anyone footing the bill. Profits need to be made.
Posted on Reply
#9
Dirt Chip
Coming from your favorite high-end GPU brand at overpriced $$$ starting late 2024.
24-32 GB of those will cost as much as a complete midrange GPU.
Progress is magic.
Posted on Reply
#10
Minus Infinity
The King: The answer, I believe, is that R&D costs money and they need to recover those costs.

It's not like any of these companies do these things just to bring the fastest technology available without anyone footing the bill. Profits need to be made.
Yes, but the hard work and R&D has already been done; it shouldn't be as big a jump for GDDR7, one would presume.
Posted on Reply
#11
R-T-B
You guys are overthinking this. GDDR6W requires a wider memory bus for the performance uptick to apply. It's why you won't see its wide adoption.
Posted on Reply
#13
Wirko
Minus Infinity: Why didn't they go straight for GDDR7W, given the improvement they showed with the just-announced GDDR6W? This stacked method seems like it should be the default configuration now.
Stacking is routinely employed to make system RAM, so the technology has long been available. But power density is low there. Heat might be the reason stacking isn't more common in VRAM.
Posted on Reply
#14
Lianna
Kenjiro: PAM3 is not 3-bit but 3-level signalling, which can be used for base-3 encoding, i.e. sending values of 0-15 as 3 signals instead of 4 in binary.
Look at: en.wikipedia.org/wiki/Ternary_numeral_system
Yes, that's what the third picture shows. PAM3 makes it possible to send 3 bits in 2 signals or 11 bits in 7 signals, so roughly a 1.5x or 1.57x speedup (theoretically just over 1.58x), matching the claim of 36 Gbps vs. a maximum of 24 Gbps for current GDDR6.

Edit: @btarunr could you please update/correct the second sentence of the third paragraph?
Posted on Reply
#15
Assimilator
R-T-B: You guys are overthinking this. GDDR5W requires a wider memory bus for the performance uptick to apply. It's why you won't see its wide adoption.
You mean GDDR6W, right?
Posted on Reply
#16
R-T-B
Assimilator: You mean GDDR6W, right?
Yes, typo.
Posted on Reply
#17
trsttte
Aren't we getting to the point where it would be better to just move to HBM? Signal integrity and power will be an issue; at some point it must become simpler and more cost-effective to pay for the more advanced packaging solutions. AMD is already using chiplets, after all.
Posted on Reply
#18
Assimilator
trsttte: Aren't we getting to the point where it would be better to just move to HBM? Signal integrity and power will be an issue; at some point it must become simpler and more cost-effective to pay for the more advanced packaging solutions. AMD is already using chiplets, after all.
If it was that easy, don't you think it would already have happened? Maybe because... it's not that easy. Unless you think that the power and heat requirements of extremely high-bandwidth memory can just be handwaved away?
Posted on Reply
#19
ArdWar
PAM3 is exactly log2(3) bits per cycle, which is about 1.585 bits. Definitely not 3 bits...

Interestingly, PAM3 misses by just a hair on being able to use 83 cycles for the usual 128b/130b line code.
Posted on Reply
#20
trsttte
Assimilator: If it was that easy, don't you think it would already have happened? Maybe because... it's not that easy. Unless you think that the power and heat requirements of extremely high-bandwidth memory can just be handwaved away?
It uses less power than GDDR and thus produces less heat, though heat is still a problem because of the proximity to the main compute die. Cost is also a problem, but here we have a chicken-and-egg problem: the price won't decrease if no one uses it.

Packaging is also much more expensive, but power consumption with GDDR7 will certainly increase again to cope with the increased difficulty of transmitting high-bandwidth signals to the compute die. At some point (if not already - more and more applications are starting to use it as well) we'll cross the line where it becomes the better option.
Posted on Reply
#21
Assimilator
trsttte: Packaging is also much more expensive, but power consumption with GDDR7 will certainly increase again to cope with the increased difficulty of transmitting high-bandwidth signals to the compute die. At some point (if not already - more and more applications are starting to use it as well) we'll cross the line where it becomes the better option.
You're also ignoring the fact that die stacking/interconnection requires every single one of the dies involved to be without defects. If even one of them doesn't work, you throw all of them away. You don't have that problem with discrete memory chips, which makes production a lot less wasteful, which is a major concern at a time when every die from a leading-edge node is becoming more and more expensive. This is exactly why AMD discontinued the use of HBM in consumer GPUs after only one generation.
Posted on Reply
#22
trsttte
Assimilator: You're also ignoring the fact that die stacking/interconnection requires every single one of the dies involved to be without defects. If even one of them doesn't work, you throw all of them away. You don't have that problem with discrete memory chips, which makes production a lot less wasteful, which is a major concern at a time when every die from a leading-edge node is becoming more and more expensive. This is exactly why AMD discontinued the use of HBM in consumer GPUs after only one generation.
It wasn't just one generation, it was two (splitting hairs, I know). Memory doesn't generally use leading-edge nodes, but cost was, and probably still is, a problem. I'm not saying this is something that will happen tomorrow, and it's not like GDDR6X and GDDR7 are cheap either. All I'm saying is that we're getting to that point.
Posted on Reply