Wednesday, July 13th 2022

Samsung Launches Industry's First 24Gbps GDDR6 Memory

Samsung Electronics Co., Ltd., the world leader in advanced memory technology, today announced that it has begun sampling the industry's first 16-gigabit (Gb) Graphics Double Data Rate 6 (GDDR6) DRAM featuring 24-gigabit-per-second (Gbps) processing speeds. Built on Samsung's third-generation 10-nanometer-class (1z) process using extreme ultraviolet (EUV) technology, the new memory is designed to significantly advance graphics performance for next-generation graphics cards, laptops and game consoles, as well as artificial intelligence-based applications and high-performance computing (HPC) systems.

"The explosion of data now being driven by AI and the metaverse is pushing the need for greater graphics capabilities that can process massive data sets simultaneously, at extremely high speeds," said Daniel Lee, executive vice president of the Memory Product Planning Team at Samsung Electronics. "With our industry-first 24 Gbps GDDR6 now sampling, we look forward to validating the graphics DRAM on next-generation GPU platforms to bring it to market in time to meet an onslaught of new demand."
Engineered with an innovative circuit design and a highly advanced insulating material (High-K Metal Gate; HKMG) that minimizes current leakage, Samsung's 24 Gbps GDDR6 will deliver roughly 30% faster speeds than the previous 18 Gbps product. When integrated into a premium graphics card, the GDDR6 DRAM can transfer up to 1.1 terabytes (TB) of data, or about 275 Full HD movies, in just one second.
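Those headline figures are easy to sanity-check. The short sketch below reproduces them under two assumptions the release does not state: a 384-bit memory bus, as found on a premium graphics card, and roughly 4 GB per Full HD movie. Note that 24 Gbps over 18 Gbps is closer to a 33% step, which the release rounds to 30%.

```python
# Back-of-the-envelope check of the press-release figures.
# Assumptions (not stated in the release): a 384-bit memory bus, typical of a
# premium graphics card, and roughly 4.2 GB for a Full HD movie.
PIN_RATE_GBPS = 24        # new per-pin data rate
PREV_PIN_RATE_GBPS = 18   # previous product's per-pin data rate
BUS_WIDTH_BITS = 384      # assumed memory bus width
FULL_HD_MOVIE_GB = 4.2    # assumed average Full HD movie size

speedup = PIN_RATE_GBPS / PREV_PIN_RATE_GBPS - 1
bandwidth_gb_s = PIN_RATE_GBPS * BUS_WIDTH_BITS / 8   # gigabytes per second

print(f"Per-pin speedup over 18 Gbps: {speedup:.0%}")               # ~33%
print(f"Aggregate bandwidth: {bandwidth_gb_s:.0f} GB/s "
      f"(~{bandwidth_gb_s / 1000:.2f} TB/s)")                       # ~1.15 TB/s
print(f"Full HD movies per second: {bandwidth_gb_s / FULL_HD_MOVIE_GB:.0f}")
```

On those assumptions the chips work out to about 1.15 TB/s and roughly 275 movies per second, in line with the quoted figures.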

Designed to be fully compliant with JEDEC specifications, Samsung's GDDR6 DRAM will be compatible across all GPU designs, facilitating broad market adoption in a wide array of graphics solutions.

Samsung's new GDDR6 lineup will also feature low-power options that help extend the battery life of laptops. Utilizing dynamic voltage switching (DVS) technology, which adjusts the operating voltage depending on performance requirements, Samsung will provide 20 Gbps and 16 Gbps versions with approximately 20% higher power efficiency at 1.1 V, compared to the 1.35 V GDDR6 industry standard.
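For a rough sense of why the lower rail helps, the sketch below applies the textbook dynamic-power relation (switching power proportional to voltage squared at a fixed data rate). The resulting ~34% reduction is an optimistic upper bound; Samsung's more conservative ~20% efficiency figure presumably reflects whole-device behaviour, where not all power scales with the square of the voltage.

```python
# Illustrative only: the classic dynamic-power model P ~ C * V^2 * f suggests
# that dropping the supply from 1.35 V to 1.1 V at a fixed data rate cuts
# switching power by roughly a third. Samsung's ~20% efficiency claim is the
# authoritative number; this is just the idealized upper bound.
V_STANDARD = 1.35    # GDDR6 industry-standard supply (volts)
V_LOW_POWER = 1.10   # low-power option cited in the press release (volts)

relative_power = (V_LOW_POWER / V_STANDARD) ** 2
print(f"Switching power at 1.1 V: {relative_power:.0%} of the 1.35 V level")
print(f"Implied upper-bound saving: {1 - relative_power:.0%}")
```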

Graphics DRAM is increasingly being sought for use beyond PCs, laptops and game consoles, extending into other graphics-intensive applications like HPC, autonomous driving and electric cars. Addressing these emerging markets, Samsung's GDDR6 will enable seamless 4K and 8K video playback, while supporting demanding AI accelerator workloads.

Samsung continues to lead the graphics DRAM market globally, and forecasts that the high-performance graphics portion will see double-digit growth annually in the coming years.

With customer verifications starting this month, Samsung plans to commercialize its 24 Gbps GDDR6 DRAM in line with GPU platform launches, thereby accelerating graphics innovation throughout the high-speed computing market.

15 Comments on Samsung Launches Industry's First 24Gbps GDDR6 Memory

#1
Tomorrow
Navi 3X is likely the first to use these?

Nvidia will use G6X at the high end, but perhaps use the 16 Gbps and 20 Gbps options on the 4070 and lower?
#2
Tek-Check
The first generation of Navi 3X is unlikely to use these new modules. Too late. 20 or 24 Gbps modules are more likely to feature in the XX50 refresh next year.
#3
Bomby569
won't it be a bit confusing, retaining the 6 naming and being faster than 6X?

should be 7 or 6xx
#4
ppn
This is next-gen+1, so 5070/80 more likely. Even more reason to skip the half-baked 40 series.
#5
Tsukiyomi91
just call it GDDR7 ffs. This sounds like the USB standard all over again.
#6
Tomorrow
Bomby569: won't it be a bit confusing, retaining the 6 naming and being faster than 6X?

should be 7 or 6xx
Tsukiyomi91: just call it GDDR7 ffs. This sounds like the USB standard all over again.
GDDR7 is a separate standard that starts at 32 Gbps initially. This being faster than G6X is on Micron; Micron is free to produce faster G6X. This is far from the USB mess in terms of naming.

In my eyes G6X is worse than G6. Firstly, it consumes more power. Secondly, it fails silently when overclocking: instead of the visual artifacts you get with G6, the error correction silently degrades performance when pushed too far. So you need to benchmark to make sure a G6X OC is stable instead of just watching for visual artifacts.

And now, with 24 Gbps G6, it's also slower than the fastest G6. Though to be fair, Samsung did not disclose in this press release what voltage the 24 Gbps chips run at. We know 20 Gbps is 1.1 V instead of 1.35 V, so I assume 24 Gbps is 1.35 V.
#7
PCL
Bomby569: won't it be a bit confusing, retaining the 6 naming and being faster than 6X?

should be 7 or 6xx
IIRC, 6 is an actual JEDEC standard whereas 6X is not. It's not really on Samsung or JEDEC that Micron and nVidia decided to muddy the waters with a non-standard 6X.
#8
ghazi
Did anyone need further proof that NVIDIA's investment in GDDR6X was a mistake? Now that hot, expensive mess also has a slower data rate.
#9
Slizzo
Tomorrow: In my eyes G6X is worse than G6. Firstly, it consumes more power. Secondly, it fails silently when overclocking: instead of the visual artifacts you get with G6, the error correction silently degrades performance when pushed too far. So you need to benchmark to make sure a G6X OC is stable instead of just watching for visual artifacts.
What? GDDR has had this behavior ever since G5/G5X on the Pascal cards. Turing was the same with G6, and Ampere with G6 and G6X still exhibits the same "issue".
#10
chrcoluk
What voltage does 6X run at? Its power consumption is bonkers; the difference in TDP between 6X and 6 cards on Ampere is very stark.
#11
Richards
Tomorrow: GDDR7 is a separate standard that starts at 32 Gbps initially. This being faster than G6X is on Micron; Micron is free to produce faster G6X. This is far from the USB mess in terms of naming.

In my eyes G6X is worse than G6. Firstly, it consumes more power. Secondly, it fails silently when overclocking: instead of the visual artifacts you get with G6, the error correction silently degrades performance when pushed too far. So you need to benchmark to make sure a G6X OC is stable instead of just watching for visual artifacts.

And now, with 24 Gbps G6, it's also slower than the fastest G6. Though to be fair, Samsung did not disclose in this press release what voltage the 24 Gbps chips run at. We know 20 Gbps is 1.1 V instead of 1.35 V, so I assume 24 Gbps is 1.35 V.
It will still be more efficient than the power pig that is GDDR6X... that's why NVIDIA doesn't use GDDR6X on laptops... look at the power difference between the RTX 3070 and 3070 Ti just from adding GDDR6X lol
PCL: IIRC, 6 is an actual JEDEC standard whereas 6X is not. It's not really on Samsung or JEDEC that Micron and nVidia decided to muddy the waters with a non-standard 6X.
Uhmm... so that's why it's crap, because it's not a JEDEC standard... GDDR6X is a power hog lol
#12
Tomorrow
Slizzo: What? GDDR has had this behavior ever since G5/G5X on the Pascal cards. Turing was the same with G6, and Ampere with G6 and G6X still exhibits the same "issue".
Nope. This behaviour is unique to G6X. Possibly due to PAM4 signaling instead of NRZ. I owned G5, G5X and now a G6 card and they all behave like VRAM has always behaved when pushed too far - artifacting. There is no silent performance downgrade.
#13
Slizzo
My experience with the GTX 1080, 1080Ti, RTX 2080 Ti and RTX 3080 Ti is that performance will degrade if memory is pushed too far, but there will be no evident artifacting due to it. And I know most others will have seen the same thing.
#14
R0H1T
PCL: Micron and nVidia decided to muddy the waters with a non-standard 6X.
It'll be a standard if they push hard enough. Anyway, wasn't GDDR5X like this as well, or did it never go on to be "ratified" by JEDEC?
#15
ghazi
Tomorrow: Nope. This behaviour is unique to G6X. Possibly due to PAM4 signaling instead of NRZ. I owned G5, G5X and now a G6 card and they all behave like VRAM has always behaved when pushed too far - artifacting. There is no silent performance downgrade.
My earliest recollection of this behavior would be with the HD 7900 series cards 10 years ago. I wouldn't be surprised if it predates GDDR5 as well. That's what you would expect inherently from functional error correction.