Monday, July 29th 2024

SK hynix Launches Its New GDDR7 Graphics Memory

SK hynix Inc. announced today that it has launched the industry's best-performing GDDR7, a next-generation graphics memory product. The launch follows the product's development in March and comes amid growing interest from global customers in the AI space in DRAM that combines the specialized performance needed for graphics processing with high speed. The company said it will start volume production in the third quarter.

The new product operates at 32 Gbps, a 60% improvement over the previous generation, and the speed can reach up to 40 Gbps depending on the use case. When adopted in high-end graphics cards, it can process more than 1.5 TB of data, equivalent to 300 Full-HD movies (5 GB each), in a second.
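As a sanity check on that figure, the arithmetic is simple; the sketch below assumes a 384-bit bus (twelve chips with 32-bit interfaces), which is an assumption for illustration and not something SK hynix specifies.

# Rough GDDR7 bandwidth arithmetic (the 384-bit bus is an assumption, not from the announcement)
pin_speed_gbps = 32                                         # per-pin data rate, Gbit/s
bus_width_bits = 384                                        # assumed high-end card bus width
bandwidth_gb_per_s = pin_speed_gbps * bus_width_bits / 8    # Gbit/s -> GB/s
movies_per_second = bandwidth_gb_per_s / 5                  # 5 GB per Full-HD movie
print(f"{bandwidth_gb_per_s:.0f} GB/s, ~{movies_per_second:.0f} movies per second")
# Prints: 1536 GB/s, ~307 movies per second, consistent with the "more than 1.5 TB" claim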
SK hynix also improved power efficiency by more than 50% compared with the previous generation by adopting new packaging technology that addresses the heat generated by the ultra-fast data processing.

The company increased the number of heat-dissipating substrate layers from four to six and applied EMC (epoxy molding compound) as the packaging material, reducing thermal resistance by 74% compared with the previous generation while keeping the size of the product unchanged.

Sangkwon Lee, Head of DRAM Product Planning & Enablement at SK hynix, said that GDDR7 is expected to be adopted by a wider range of applications such as high-specification 3D graphics, AI, high-performance computing and autonomous driving.
"We will continue to work towards enhancing our position as the most trusted AI memory solution provider by strengthening the premium memory lineup further," Lee said.
Source: SK hynix

23 Comments on SK hynix Launches Its New GDDR7 Graphics Memory

#1
LabRat 891
Any word on whether Navi 4x is using GDDR7?

I could see a re-do of Tahiti->Antigua / Navi 14->Navi 24
-uArch rev., membus narrowed, and faster VRAM. [higher-yield, lower-cost]
Posted on Reply
#2
watzupken
LabRat 891Any word on whether Navi 4x is using GDDR7?

I could see a re-do of Tahiti->Antigua / Navi 14->Navi 24
-uArch rev., membus narrowed, and faster VRAM. [higher-yield, lower-cost]
I think it is quite unlikely due to the timing of GDDR7 release, cost and limited supply. I suspect most mid to low range GPUs will end up with faster GDDR6 options.
Posted on Reply
#3
JWNoctis
"equivalent to 300 Full-HD movies (5 GB each), in a second" is arguably not quite the best figure of merit, when processing video content is one of the things GPU does, and no current or near-future GPU would process 300 FHD movies in a second except, well, iterate over it, over and over. :oops:
Posted on Reply
#4
wolf
Better Than Native
I see this enabling GPU makers to go for (or continue, as it were) narrow memory bus designs with the large increase in Gbps per chip. Having said that, if everything performs great there's no issue; as usual it'll be the price that's the hardest pill to swallow.
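To put rough numbers on that (the configurations below are purely illustrative assumptions, not announced products), a narrower bus at GDDR7 speeds can already out-run a wider GDDR6-class bus on paper:

# Illustrative only: hypothetical bus width / pin speed combinations
def bandwidth_gb_s(bus_width_bits, pin_speed_gbps):
    """Peak memory bandwidth in GB/s for a given bus width and per-pin rate."""
    return bus_width_bits * pin_speed_gbps / 8

print(bandwidth_gb_s(192, 32))   # 768.0 GB/s - 192-bit GDDR7 @ 32 Gbps
print(bandwidth_gb_s(256, 23))   # 736.0 GB/s - 256-bit GDDR6X @ 23 Gbps
print(bandwidth_gb_s(128, 32))   # 512.0 GB/s - 128-bit GDDR7 @ 32 Gbps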
Posted on Reply
#5
oxrufiioxo
wolfI see this enabling GPU makers to go for (or continue, as it were) narrow memory bus designs with the large increase in Gbps per chip. Having said that, if everything performs great there's no issue; as usual it'll be the price that's the hardest pill to swallow.
The one issue with that approach so far is that GPUs like the 4070 Ti and 4070 scale poorly at higher resolutions.

Hopefully whatever other changes are made fix that.
Posted on Reply
#6
Minus Infinity
watzupkenI think it is quite unlikely due to the timing of GDDR7 release, cost and limited supply. I suspect most mid to low range GPUs will end up with faster GDDR6 options.
I agree, no chance we'll see it in N48 which will be 8800/8700XT class gpu at $500 max.

RDNA5 for sure, but that's 18 months away.
Posted on Reply
#7
Tomorrow
Minus InfinityI agree, no chance we'll see it in N48 which will be 8800/8700XT class gpu at $500 max.

RDNA5 for sure, but that's 18 months away.
I concur. Considering the likely initial high price and limited supply of G7, it does not make sense to put it in a sub-$500 product.
Posted on Reply
#8
R0H1T
oxrufiioxoThe one issue with that approach so far is that GPUs like the 4070 Ti and 4070 scale poorly at higher resolutions.
And that's why we have DLSS, FSR, XeSS & whatever QC's calling their upscaling solution. They're here to stay & part of how companies will justify going small(er) with memory bus/capacity o_O
Posted on Reply
#9
DaemonForce
Very cool. I've been looking forward to GDDR7 coming for a very long time.
Not a Hynix customer but when Micron and Samsung start mastering 4GB units we'll quickly see 32-48GB cards ship out and then the fun starts.
Ultra-high core clocks with bleeding-edge high-bandwidth memory make for a very smooth high-FPS experience all throughout the 1080p-4K range.
Unfortunately I don't see a good situation for gaming if AMD sticks to the current die strategy.

I also picture nVidia cutting corners with memory again. SFF RTX 5050 6-8GB models inbound, probably.
Of course this might also be a sign that early adoption GDDR7 units will be in some wild modder territory by the time China can get their hands on any cards.
Wouldn't be the best situation for gamers but we'll all be very VM+AIML aware and in a completely separate universe of problems from framerates by then.
So maybe it's a good strat to pull the trigger on some current era high end cards once those GDDR7 models ship out and wait for GDDR7 to mature before switching up.
Posted on Reply
#10
stimpy88
Looks like these chips won't be entering the market until Q4 at the earliest, so that makes me think a launch for nGreedia no earlier than Easter.
Posted on Reply
#11
THU31
Not interested in the next generation until they introduce 3 GB modules (or go back to wider memory buses).
Posted on Reply
#12
stimpy88
THU31Not interested in the next generation until they introduce 3 GB modules (or go back to wider memory buses).
Well they won't do wider busses, it costs another $5 per PCB! Maybe on the 5090 or Titan (if rumours are correct)... As they don't quite mind spending a couple of Dollars extra because they can slap an extra $200 on it.

But with you on the 3GB chips. nGreedia need more VRAM on all but their lowest end 50x0 cards. 8GB should only be on the bottom card, and the 5080 needs 24GB - Do I think this will be reality... No, it's nGreedia we're talking about.
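For reference, the capacity math works out like this (the bus widths below are hypothetical examples, not confirmed 50-series specs): one chip hangs off every 32 bits of bus, so total VRAM is the chip count times the per-chip density.

# GDDR capacity: one memory chip per 32 bits of bus width
# (bus widths here are hypothetical examples, not confirmed specs)
def vram_gb(bus_width_bits, gb_per_chip):
    return (bus_width_bits // 32) * gb_per_chip

for bus in (128, 192, 256, 384):
    print(f"{bus}-bit: {vram_gb(bus, 2)} GB with 2 GB chips, "
          f"{vram_gb(bus, 3)} GB with 3 GB chips")
# 128-bit:  8 -> 12 GB,  192-bit: 12 -> 18 GB,
# 256-bit: 16 -> 24 GB,  384-bit: 24 -> 36 GB

So a 256-bit card would go from 16 GB to 24 GB just by switching to 3 GB modules, without touching the bus.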
Posted on Reply
#13
ARF
LabRat 891Any word on whether Navi 4x is using GDDR7?
watzupkenI think it is quite unlikely due to the timing of GDDR7 release, cost and limited supply. I suspect most mid to low range GPUs will end up with faster GDDR6 options.
Given that Navi 4x is already delayed, it will be late, slow, power hungry, and will generally mean fewer sales for AMD. I doubt that the "limited supply" has any role... it's more like AMD itself wants to release a meh product line.
Posted on Reply
#14
stimpy88
ARFGiven that Navi 4x is already delayed, it will be late, slow, power hungry, and will generally mean fewer sales for AMD. I doubt that the "limited supply" has any role... it's more like AMD itself wants to release a meh product line.
It's almost as if they want to exit the consumer graphics card market...

It's at least two years until AMD will have another chance of competing against nGreedia, and that's if nGreedia are still bothering to sell to us lowly poor creatures.
Posted on Reply
#15
Tomorrow
ARFGiven that Navi 4x is already delayed, it will be late, slow, power hungry, and will generally mean fewer sales for AMD. I doubt that the "limited supply" has any role... it's more like AMD itself wants to release a meh product line.
Late? When was it supposed to launch then according to you?
Average launch cadence for both AMD and Nvidia has been around 24 months for a new series, give or take 2-3 months.
Considering that Navi 3 was released in November 2022, Navi 4 releasing in January 2025 would be well within that window, as is Nvidia Blackwell compared to Lovelace.

Slow? Compared to what exactly?
4090 sure but that's not the point. No one in their right mind is expecting a sub $500 midrange card to beat a previous series flagship in terms of performance.
256-bit and 16 GB with performance better than the 7900 XT would make a good high-refresh-rate 1440p and decent 4K card, even when not using upscaling or frame generation.
What makes or breaks this series is pricing, not performance.

Power hungry? This is the most unrealistic statement.
It will likely be made on TSMC's N4P node that has better efficiency than current Lovelace cards that use TSMC's 5nm and are already very efficient.
Plus the midrange orientation dictates that it will likely not exceed 300W (7900XT), especially given the more efficient N4P node and monolithic design.

There have been no Navi 4 performance leaks to even assume two of the three statements.

I would like to remind people that Navi 1, which was only a midrange card, sold very well (and then brought in good profits during the mining boom), and AMD came back after it with very successful Navi 2 high-end cards that are fast, with plenty of VRAM (6800 XT etc.), even today. So fast and well priced that many people these days refuse to upgrade because there's nothing equal on the market.

Perhaps I'm not remembering this correctly, but did people speak the same "doomsday talk" after AMD went with Navi 1 after the GCN5-based Vega series?
Also, this was in 2019, when they were doing better as a company than during the Vega era thanks to Ryzen; if they did not exit the dGPU space back then, why on earth would they exit now?

Most of these big companies exit markets for financial reasons as a whole company, not because their last series did not manage to beat the competitor's 50% more expensive flagship. Just look at Intel. Did Intel shutter its SSD and other endeavors because of bad sales, or because competitors were faster, or because it was doing poorly as a company with its hands in too many jars?
Posted on Reply
#16
ARF
TomorrowLate? When was it supposed to launch then according to you?
It should have already been released this spring or this summer, at the latest.
Given that it is a small, incremental update of RDNA 3 with slightly tweaked RT units, I don't see why they postpone/push the release back.
TomorrowAverage launch cadence for both AMD and Nvidia has been around 24 months for a new series, give or take 2-3 months.
Considering that Navi 3 was released in November 2022, Navi 4 releasing in January 2025 would be well within that window, as is Nvidia Blackwell compared to Lovelace.
TomorrowPower hungry? This is the most unrealistic statement.
It will likely be made on TSMC's N4P node that has better efficiency than current Lovelace cards that use TSMC's 5nm and are already very efficient.
Plus the midrange orientation dictates that it will likely not exceed 300W (7900XT), especially given the more efficient N4P node and monolithic design.
4nm? When Apple has just started to make chips on the new 2nm, while it has been making older chips on the 3nm?
Thanks, but nope.
Posted on Reply
#17
R0H1T
ARF4nm? When Apple has just started to make chips on the new 2nm, while it has been making older chips on the 3nm?
Apple probably sells $200 billion worth of leading(?) edge products each year; that's more than the entire GPU market including enterprise users! Excluding "AI" of course.
Posted on Reply
#18
Tomorrow
ARFIt should have already been released this spring or this summer, at the latest.
Given that it is a small, incremental update of RDNA 3 with slightly tweaked RT units, I don't see why they postpone/push the release back.
Maybe it's not so small and incremental after all? Besides, Navi 4 was initially a full lineup. I think the chiplet-based versions were sacrificed for server/AI capacity, but that does not mean the monolithic dies could come out faster because of it. I'd rather they take their time and release a bug-free lineup instead of rushing.
ARF4nm? When Apple has just started to make chips on the new 2nm, while it has been making older chips on the 3nm?
Thanks, but nope.
Yes, 4nm. Zen 5 uses it and so will both Navi 4 and Blackwell. It makes zero sense to release Navi 4 on the same 5nm from 2022.
By the time Navi 4 comes out, 4nm will already be an older, less expensive process, with Apple moving to 2nm and even Intel using 3nm for some of the Arrow Lake and Lunar Lake dies.
AMD will likely use 3nm for their server Zen 5c models.
Posted on Reply
#19
AnotherReader
ARFIt should have already been released this spring or this summer, at the latest.
Given that it is a small, incremental update of RDNA 3 with slightly tweaked RT units, I don't see why they postpone/push the release back.
4nm? When Apple has just started to make chips on the new 2nm, while it has been making older chips on the 3nm?
Thanks, but nope.
No one is using the so-called 2 nm processes right now. TSMC is expected to start mass production on the N2 node in the second half of 2025. AMD's Zen 5 and Zen 5c are doing pretty well against Apple despite being on a slightly inferior process. The M3 Pro, which has the same TDP as the unwieldily named Ryzen AI 9 HX 370, gets 913 points in the same test.

Posted on Reply
#20
ARF
AnotherReaderNo one is using the so called 2 nm processes right now.

TSMC begins trial 2nm production for Apple, say reports

Leading chip foundry TSMC is beginning production of chips made using its 2nm manufacturing process technology this week, according to reports from Taiwan.

Apple is set to be the first customer and the production flow will also include advanced packaging to make up the M5 processor that is intended for use in both Mac computers and AI servers. The process is to run at TSMC’s Baoshan wafer fab in Hsinchu, Taiwan. Other reports say 2nm chips going into production could be used in Apple’s forthcoming iPhone 17.
Trial production was expected to begin in 4Q24 ahead of mass production in 2025. The decision to begin production early is being seen as part of an effort to secure better yields ahead of its customer’s full needs.

www.eenewseurope.com/en/tsmc-begins-trial-2nm-production-for-apple-say-reports/
www.macrumors.com/2024/07/09/next-gen-chip-trial-to-begin-next-week/
Posted on Reply
#21
DaemonForce
TomorrowLate? When was it supposed to launch then according to you?
Average launch cadence for both AMD and Nvidia has been around 24 months for a new series, give or take 2-3 months.
Considering that Navi 3 was released in November 2022, Navi 4 releasing in January 2025 would be well within that window, as is Nvidia Blackwell compared to Lovelace.
I can't believe this is a real question given this:
Tomorrowvery successful Navi 2 high end cards that are fast with plenty of VRAM (6800XT etc) even today. So fast and well priced that many people these days refuse to upgrade because there's nothing equal on the market.
There is absolutely no reason for me not to be looking at the RX6800XT to sunset my tired and aging RX 580.

As a gamer it's great. Runs everything I actually care about but struggles with UGC that makes nVidia-specific or ray-tracing calls.
As a creator, it's inexcusable that I'm still on something that locks me out by DirectX feature level and lacks serious encoder power.
The RX6800XT also has some hint of superior DX11 performance over the RX7900XT, which made me lock onto it because of features.
Despite all of this, I really want to get in when these cards start going 2nm, which is likely soon™ with higher clocks and fresh GDDR7.
Apple is not the only customer interested in the new stuff. The world doesn't revolve around them but they sure do have some weight.
ARFIt should have already been released this spring or this summer, at the latest.
Given that it is a small, incremental update of RDNA 3 with slightly tweaked RT units, I don't see why they postpone/push the release back.
Yeah. These are a new design that looks like the explore+refine process that's been going on with Ryzen.
So the debut of RDNA4 as simply a refresh of RDNA3 doesn't really strike anyone as impressive and I am not locked in.
Posted on Reply
#22
AnotherReader
ARF

TSMC begins trial 2nm production for Apple, say reports

Leading chip foundry TSMC is beginning production of chips made using its 2nm manufacturing process technology this week, according to reports from Taiwan.

Apple is set to be the first customer and the production flow will also include advanced packaging to make up the M5 processor that is intended for use in both Mac computers and AI servers. The process is to run at TSMC’s Baoshan wafer fab in Hsinchu, Taiwan. Other reports say 2nm chips going into production could be used in Apple’s forthcoming iPhone 17.
Trial production was expected to begin in 4Q24 ahead of mass production in 2025. The decision to begin production early is being seen as part of an effort to secure better yields ahead of its customer’s full needs.

www.eenewseurope.com/en/tsmc-begins-trial-2nm-production-for-apple-say-reports/
www.macrumors.com/2024/07/09/next-gen-chip-trial-to-begin-next-week/
That's good, but note that mass production still won't happen until the second half of 2025; that lines up well with the traditional launch of new iPhone models.
Posted on Reply
#23
R-T-B
DaemonForceThe RX6800XT also has some hint of superior DX11 performance over the RX7900XT, which made me lock onto it because of features.
As someone owning both a RX6900XT and RX7900XTX, don't get too excited. I've seen nothing better in the 6xxx in DX11 land.
Posted on Reply