Thursday, October 17th 2024

Samsung Develops Industry's First 24 Gb GDDR7 DRAM

Samsung Electronics, the world leader in advanced memory technology, today announced it has developed the industry's first 24-gigabit (Gb) GDDR7 DRAM. In addition to the industry's highest capacity, the GDDR7 features the fastest speed, positioning itself as the optimum solution for next-generation applications. With its high capacity and powerful performance, the 24 Gb GDDR7 will be widely utilized in various fields that require high-performance memory solutions, such as data centers and AI workstations, extending beyond the traditional applications of graphics DRAM in graphics cards, gaming consoles and autonomous driving.

"After developing the industry's first 16 Gb GDDR7 last year, Samsung has reinforced its technological leadership in the graphics DRAM market with this latest achievement," said YongCheol Bae, Executive Vice President of Memory Product Planning at Samsung Electronics. "We will continue to lead the graphics DRAM market by bringing next-generation products that align with the growing needs of the AI market." The 24 Gb GDDR7 utilizes 5th-generation 10-nanometer (nm)-class DRAM, which enables cell density to increase by 50% while maintaining the same package size as the predecessor.
In addition to the advanced process node, three-level Pulse-Amplitude Modulation (PAM3) signaling is used to help achieve the industry-leading speed for graphics DRAM of 40 gigabits-per-second (Gbps), a 25% improvement over the previous version. The GDDR7's performance can be further enhanced to up to 42.5 Gbps, depending on the usage environment.
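As a back-of-the-envelope check on those figures, the per-pin speed translates to device- and card-level bandwidth as sketched below (the 32-bit per-device interface width is the standard GDDR configuration, assumed here since the release doesn't state it):

```python
# Sketch: peak bandwidth from per-pin data rate.
# 8 bits per byte; interface/bus widths are in bits.

def device_bandwidth_gbs(pin_speed_gbps: float, interface_bits: int = 32) -> float:
    """Peak bandwidth of one GDDR7 device in GB/s."""
    return pin_speed_gbps * interface_bits / 8

def card_bandwidth_gbs(pin_speed_gbps: float, bus_width_bits: int) -> float:
    """Peak bandwidth of a card's whole memory subsystem in GB/s."""
    return pin_speed_gbps * bus_width_bits / 8

print(device_bandwidth_gbs(40.0))      # one device at 40 Gbps -> 160.0 GB/s
print(card_bandwidth_gbs(40.0, 256))   # hypothetical 256-bit card -> 1280.0 GB/s
print(card_bandwidth_gbs(42.5, 256))   # at the boosted 42.5 Gbps -> 1360.0 GB/s
```

The 256-bit bus width is an illustrative example, not a configuration named in the announcement.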

Power efficiency is also enhanced by applying technologies that were previously used in mobile products to graphics DRAM for the first time. By implementing methods like clock control management and dual VDD design, unnecessary power consumption can be significantly reduced, leading to an improvement of over 30% in power efficiency.

To boost operational stability during high-speed operations, the 24 Gb GDDR7 minimizes current leakage by using power gating design techniques.

Validation for the 24 Gb GDDR7 in next-generation AI computing systems from major GPU customers will begin this year, with plans for commercialization early next year.

11 Comments on Samsung Develops Industry's First 24 Gb GDDR7 DRAM

#1
kondamin
That’s nice, I hope they can make it at a large scale with very few defects
#2
Philaphlous
So what if Samsung can produce 24GB single chip GDDR7... NVIDIA will be custom ordering 8GB chips for laptops...keep laptops permanently binned lower...
#4
yfn_ratchet
PhilaphlousSo what if Samsung can produce 24GB single chip GDDR7... NVIDIA will be custom ordering 8GB chips for laptops...keep laptops permanently binned lower...
24 gigabits, not 24 gigabytes. In standard terms, it's a 3GB GDDR7 chip like people have been postulating about for a while as a way for Nvidia to—fingers crossed—actually raise the VRAM capacity on the 5080/5070 without clamshelling everything or increasing the bus width.
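The Gb-to-GB arithmetic behind that correction can be sketched like so (assuming the standard 32-bit interface per GDDR device, so one device per 32-bit slice of the bus in a non-clamshell design):

```python
# Sketch: die density in gigabits -> usable capacity in gigabytes.

GBITS_PER_GBYTE = 8  # 8 bits per byte

def die_capacity_gb(density_gbit: int) -> float:
    """Capacity of one DRAM device in GB."""
    return density_gbit / GBITS_PER_GBYTE

def card_capacity_gb(density_gbit: int, bus_width_bits: int,
                     device_bits: int = 32) -> float:
    """Total VRAM: one device per 32-bit bus slice (non-clamshell)."""
    return die_capacity_gb(density_gbit) * (bus_width_bits // device_bits)

print(die_capacity_gb(24))         # 24 Gb die -> 3.0 GB
print(card_capacity_gb(24, 256))   # 256-bit card -> 24.0 GB
print(card_capacity_gb(16, 256))   # today's 16 Gb dies -> 16.0 GB
```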
#5
Macro Device
yfn_ratchetincreasing the bus width.
Which is still rather a necessity. Apart from being more power efficient and supporting DLSS FG, the 4060 Ti is vastly inferior to the 3060 Ti exactly because it has half the VRAM bandwidth.
There will be very little sense in these 3 GB modules at speeds below 32 GT/s for actual gaming. Only the edge-case gamers who love their UHD graphics mods will enjoy it. Outside gaming, of course, it makes all the sense in the world, because there's a lot of software that loves VRAM amounts and doesn't really care that much about its speed.

A 256-bit 16 GB GPU will be more welcome than a 192-bit 18 GB GPU in 99.9% of gaming scenarios, provided they use the same VRAM at the same frequency. I don't see how it's possible to compensate for this massive deficit. No cache and no algo in the world can do it. Not yet, at least.
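The comparison above can be put in numbers (same pin speed assumed for both buses, as the comment stipulates; 32 GT/s is just the commenter's own threshold, reused here as the example speed):

```python
# Sketch: bus width dominates bandwidth at equal pin speed.

def bus_bandwidth_gbs(pin_speed_gtps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s (8 bits per byte)."""
    return pin_speed_gtps * bus_width_bits / 8

wide = bus_bandwidth_gbs(32.0, 256)    # 1024.0 GB/s
narrow = bus_bandwidth_gbs(32.0, 192)  #  768.0 GB/s
print(wide, narrow)
print(wide / narrow)  # the 256-bit bus has ~1.33x the bandwidth
```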
#6
THU31
These modules are the only thing that can save the upcoming generation of NVIDIA GPUs. But I wouldn't expect them until a Super refresh a year later.
#7
Macro Device
THU31are the only thing that can save the upcoming generation of NVIDIA GPUs
Nah, choosing the sweet spot instead of milking all the juice will also really help them. Like, if you need 15 VRM elements instead of 25, and a 2-slot cooler instead of a 4-slot monster, a GPU becomes much less expensive to make and thus, possibly, less expensive for end customers: the GPU itself has its price dropped, and the wares you need around it (PSU, PC case) are less exhausting for your wallet. This will help even more than x1.5 on the VRAM.
#8
Minus Infinity
I would take Scamsung's words with a bucket of salt. The hype is always strong with this one. Nvidia won't even touch their HBM3e. I'll wait for Micron and SK Hynix.
#9
Macro Device
Minus InfinityI would take Scamsung's words with a bucket of salt. The hype is always strong with this one.
What's wrong with their VRAM? I, like, have only had VRAM issues with brands that are not Samsung. But I might be out of the loop, since I've never touched RDNA3/Ada SKUs.
#10
yfn_ratchet
Macro DeviceWhich is still rather a necessity. Apart from being more power efficient and supporting DLSS FG, the 4060 Ti is vastly inferior to the 3060 Ti exactly because it has half the VRAM bandwidth.
There will be very little sense in these 3 GB modules at speeds below 32 GT/s for actual gaming. Only the edge-case gamers who love their UHD graphics mods will enjoy it. Outside gaming, of course, it makes all the sense in the world, because there's a lot of software that loves VRAM amounts and doesn't really care that much about its speed.

A 256-bit 16 GB GPU will be more welcome than a 192-bit 18 GB GPU in 99.9% of gaming scenarios, provided they use the same VRAM at the same frequency. I don't see how it's possible to compensate for this massive deficit. No cache and no algo in the world can do it. Not yet, at least.
I mean yeah, obviously, bigger number better in that case, but I phrased it with the defeatist (if pragmatic) assumption that Nvidia wouldn't give a rodent's eensy butt cheeks about catering to gamers' desires of all things, and would rather carry over the bus widths from Ada, give or take a 5060 Ti adjustment to 192-bit (MAYBE).
#11
Macro Device
yfn_ratchet
Nvidia wouldn't give a rodent's eensy butt cheeks

Are you by chance a writer...?
yfn_ratchetabout catering to gamers' desires of all things
I also seriously doubt it, but if a miracle happens and AMD comes up with something really explosive, then the lower end might get adjusted to wider buses. The high end is definitely not subject to noticeable generational uplifts, especially in this department.