Friday, June 10th 2022

AMD RDNA3 Offers Over 50% Perf/Watt Uplift Akin to RDNA2 vs. RDNA; RDNA4 Announced

In its 2022 Financial Analyst Day presentation, AMD claimed that the upcoming RDNA3 graphics architecture will deliver an over-50% generational performance-per-Watt uplift, repeating the feat of RDNA2 over RDNA. That earlier uplift powered Radeon's unexpected return to the high-end and enthusiast market segments. The company also broadly detailed the new RDNA3 features that make this possible.
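If the claim holds, the new uplift compounds on RDNA2's similar gain. A quick back-of-the-envelope check (the 1.5x factors are AMD's nominal ">50%" claims, not measured values):

```python
# Compounding AMD's claimed generational perf/Watt uplifts.
# Both 1.5x factors are AMD's nominal ">50%" claims, not measurements.
rdna_to_rdna2 = 1.5   # RDNA -> RDNA2 (claimed)
rdna2_to_rdna3 = 1.5  # RDNA2 -> RDNA3 (claimed)

cumulative = rdna_to_rdna2 * rdna2_to_rdna3
print(f"RDNA -> RDNA3 perf/Watt: {cumulative:.2f}x")  # 2.25x
```

In other words, if both claims are taken at face value, RDNA3 would be more than twice as efficient as the original RDNA.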

To begin with, RDNA3 debuts on the TSMC N5 (5 nm) silicon fabrication node, and introduces a chiplet-based approach that's somewhat analogous to what AMD did with its 2nd Gen EPYC "Rome" and 3rd Gen Ryzen "Matisse" processors. The GPU's main number-crunching and 3D rendering machinery will be spread across compute chiplets, while the I/O components, such as memory controllers, display controllers, and media engines, will sit on a separate die. Scaling up the logic dies will result in a higher-segment ASIC.
AMD also stated that it has re-architected the compute unit with RDNA3 to increase its IPC. The graphics pipeline is bound to get certain major changes, too. The company is doubling down on its Infinity Cache on-die cache memory technology, with RDNA3 featuring the next-generation Infinity Cache (which probably operates at higher bandwidths).

From the looks of it, RDNA3 will be based exclusively on 5 nm. The company also announced the new RDNA4 graphics architecture for the very first time, sharing no details except that it will be built on a node more advanced than 5 nm.

AMD RDNA3 is expected to debut in the second half of 2022, with ramp across 2023. RDNA4 is slated for some time in 2024.

121 Comments on AMD RDNA3 Offers Over 50% Perf/Watt Uplift Akin to RDNA2 vs. RDNA; RDNA4 Announced

#1
Rithsom
With the ever-increasing difficulty of sourcing new, power-efficient nodes, this is very hard to believe. I'd love for it to be true, but I'm going to keep my expectations in check until independent reviews come out.
Posted on Reply
#2
oxrufiioxo
Interesting. If the rumored 2x performance increase over RDNA2 is true, the top GPU would need to use around 500 W.
Posted on Reply
#3
ratirt
I'm starting to believe that perf/Watt is a dead end in the graphics and CPU industries. It no longer satisfies me when companies say that, and the growing power consumption of these products obviously has a lot to do with it. I'm looking forward to the new tech, but if the power consumption is through the roof, I will simply skip buying and investing in graphics cards, and in CPUs for that matter.
oxrufiioxoInteresting if the rumored 2x performance increase over RDNA2 is true the top gpu would need to use around 500W.
Where do you get a 2x performance increase over RDNA2 from? AMD said a 50% increase.
Posted on Reply
#4
oxrufiioxo
ratirtI'm starting to believe, that the perf/watt is a dead end in the graphics and CPU industries. It no longer satisfies me when companies say that and obviously the growing power consumption for these has a lot to do with it. I'm looking forward for the new tech but if the power consumption is through the roof, I will literally skip buying and investing in graphics cards and CPUs for that matter.


where do you have 2x performance increase over RDNA2? AMD said 50% increase.
Just pretty much every performance rumor out there; some are crazy, like 2.5x and 3x... It was leaked ages ago that it would be MCM, and that turned out to be true, so I guess we will see. There are similar rumors about the 4090 being 1.8x to 2x over the 3090.
Posted on Reply
#5
AusWolf
Looking at Nvidia's near zero performance/Watt uplift from Turing to Ampere, then the new 1.21 Jiggawatt Lovelace architecture, I'm happy with any sort of efficiency increase at this point.
Posted on Reply
#6
konga
ratirtwhere do you have 2x performance increase over RDNA2? AMD said 50% increase.
AMD said 50% perf/Watt, which is different from +50% total performance. There has been a lot of BS from fake leakers saying RDNA3 will give a 2.5x - 3x total perf boost, but 2x perf is most likely correct. The person you're replying to did the math wrong, though. To get 2x performance over the 6900XT with a +50% perf/Watt boost, the 7900XT would only need to be 400W. (33% more power * 50% better efficiency = +100% perf)

iirc 400W is within the range of what's been rumored by the more reliable rumor folks.

edit: also remember those rumors pertained to the 7900XT relative to the 6900XT specifically.
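The arithmetic above can be sketched in a couple of lines; this assumes the RX 6900 XT's 300 W total board power as the baseline:

```python
# Sketch of the perf-per-Watt arithmetic in the comment above.
# Assumes the RX 6900 XT's 300 W board power as the baseline figure.
def required_power(base_power_w, perf_gain, perf_per_watt_gain):
    """Board power needed to reach perf_gain given a perf/Watt improvement."""
    return base_power_w * perf_gain / perf_per_watt_gain

p = required_power(300, 2.0, 1.5)
print(f"{p:.0f} W")  # 400 W
```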
Posted on Reply
#7
ratirt
oxrufiioxoJust pretty much every rumor out there about performance some are crazy like 2.5x and 3x..... It was leaked ages ago that it would be MCM that turned out to be true so I guess we will see. Similar rumors about the 4090 being 1.8x to 2x over the 3090.
And do you believe those rumors? Because I surely don't. If you don't either, there's no point in bringing them up.
I'm pretty sure there will be no 2x performance increase.
kongaAMD said 50% perf/watt, which is different than +50% total performance. There have been a lot of BS from fake leakers saying RDNA3 will give a 2.5x - 3x total perf boost, but 2x perf is most likely correct. The person you're replying to did the math wrong, though. To get 2x performance over the 6900XT with a +50% perf/watt boost, the 7900XT would need to be 400W only. (33% more power * 50% better efficiency = +100% perf)

iirc 400W is within the range of what's been rumored by the more reliable rumor folks.

edit: also remember those rumors pertained to the 7900XT relative to the 6900XT specifically.
Yes, it was wrong, thus my question. He didn't do any math; he just said rumors claim this increase.
Posted on Reply
#8
Valantar
AusWolfLooking at Nvidia's near zero performance/Watt uplift from Turing to Ampere, then the new 1.21 Jiggawatt Lovelace architecture, I'm happy with any sort of efficiency increase at this point.
My thoughts exactly. It's a funny turnaround to see AMD taking the lead on perf/W when they were so far behind Nvidia for so many years, but they knocked it out of the park with RDNA2, so I'm inclined to be optimistic towards this.

Of course I still think the high end GPUs will be ludicrously priced power hogs, but this is very promising for lower end variants. Hopefully they don't screw the pooch with the 7500 series this time around - if it can deliver 6600-ish performance around 75W, that would be amazing. Though of course those GPUs are likely still more than a year out.


As for the specifics of the 50% number: seeing how this was presented at an investor relations day, the risk of being sued if any of this is even marginally wrong is significant, so we can trust the numbers to be accurate - at least in one SKU. And unless that SKU is the 6500XT (which manages to have garbage efficiency compared to other RDNA2 GPUs) this is very promising.
Posted on Reply
#9
konga
ratirtand you believe those rumors because i surely don't. If you don't, no point of bringing these rumors up.
I'm pretty sure there will be no 2x performance increase.

Yes wrong thus my question. He didn't do any math. He just said rumors claim this increase.
A 2x performance increase is plausible, though. MCM design can allow for larger total die areas than monolithic design, which means that AMD can push the TDPs higher before diminishing returns kick in. It seems reasonable to believe that there will be a TDP hike in light of this, and when combined with a +50% perf/watt increase, 2x performance is possible. I'm not saying that it's definitely going to happen, but I wouldn't discount the possibility.
Posted on Reply
#10
R0H1T
RithsomWith the ever-so increasing difficulty of being able to source new, power-efficient nodes, this is very hard to believe. I'd love for it to be true, but I'm going to keep my expectations in check until independent reviews come out.
Why not? They have lots of room with the 5 nm node, chiplets, faster memory, Infinity Cache, and in fact Infinity Fabric as well.
Posted on Reply
#11
ratirt
kongaA 2x performance increase is plausible, though. MCM design can allow for larger total die areas than monolithic design, which means that AMD can push the TDPs higher before diminishing returns kick in. It seems reasonable to believe that there will be a TDP hike in light of this, and when combined with a +50% perf/watt increase, 2x performance is possible. I'm not saying that it's definitely going to happen, but I wouldn't discount the possibility.
RDNA3 will not be an MCM design as far as we know, so there should be no speculation about a 2x performance increase.
Posted on Reply
#12
R0H1T
ratirtRDNA3 will not be MCM design as far as we know
Not necessarily, it's possible though unlikely at this stage ~

With advanced chiplet packaging I assume it's something along the lines of regular Zen based chips.
Posted on Reply
#13
medi01
AusWolfLooking at Nvidia's near zero performance/Watt uplift from Turing to Ampere, then the new 1.21 Jiggawatt Lovelace architecture, I'm happy with any sort of efficiency increase at this point.
Ampere lineup was disrupted by RDNA2, those ridiculous mem configurations, for instance.
I'd say obviously nVidia was forced to drop a tier on its cards, cut mem in half to reduce price, and likely clock them up too.
Posted on Reply
#14
AusWolf
medi01Ampere lineup was disrupted by RDNA2, those ridiculous mem configurations, for instance.
I'd say obviously nVidia was forced to drop a tier on its cards, cut mem in half to reduce price, and likely clock them up too.
That has little to do with the fact that efficiency hasn't changed much since Pascal.
Posted on Reply
#15
ratirt
R0H1TNot necessarily, it's possible though unlikely at this stage ~

With advanced chiplet packaging I assume it's something along the lines of regular Zen based chips.
I find it hard to believe AMD will put in 2 chiplets, and just because it says chiplet design doesn't mean there will be two like Zen. Although it worked for Zen with yields, so who knows.
Posted on Reply
#16
R0H1T
They're already doing this with CDNA-based cards. I'd say there's an even chance they'll do so with consumer cards, especially if Nvidia releases 500~600 W monstrosity chips! No way AMD matches them with just 400~500 W, even if they lead in perf/W at the high end.

Posted on Reply
#17
ARF
ratirtI find it hard to believe AMD will put 2 chiplets and just because it says chiplet design doesnt mean there will be two like ZEN.
It looks like one chiplet will be with the shaders and the uncore, while the other chiplets will have the Infinity Cache.
Navi 31: 1 main chiplet called GCD and 6 supplementary chiplets with Infinity Cache.
Navi 32: 1 main chiplet called GCD and 4 supplementary chiplets with Infinity Cache.

I wonder what the die sizes of these chiplets will be?


3DCenter.org on Twitter: "AMD Navi 33/32/31 (updated) chip data, based on rumors & assumptions As @kopite7kimi pointed out, old info from last Oct is outdated updated: - 20% less WGP - no more double GCD for N31/N32 - 6 MCD for N31 = 384 MB IF$ - 4 MCD for N32 = 256 MB IF$ https://t.co/rj2G2gi9CU https://t.co/yDqeTTdSAT" / Twitter
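The rumored figures imply a uniform cache die. A quick consistency check using only the numbers from the tweet above (rumors, not official specs):

```python
# Consistency check on the rumored Infinity Cache split across MCDs.
# All figures come from the rumor quoted above, not official AMD specs.
navi31 = {"mcds": 6, "if_cache_mb": 384}
navi32 = {"mcds": 4, "if_cache_mb": 256}

for name, chip in (("Navi 31", navi31), ("Navi 32", navi32)):
    per_mcd = chip["if_cache_mb"] / chip["mcds"]
    print(f"{name}: {per_mcd:.0f} MB Infinity Cache per MCD")
```

Both rumored configurations work out to the same 64 MB per cache chiplet, which is at least internally consistent with a single reusable MCD design.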
Posted on Reply
#18
konga
ratirtI find it hard to believe AMD will put 2 chiplets and just because it says chiplet design doesnt mean there will be two like ZEN. Although, it worked for ZEN with yields so who knows.
Well, I just checked the presentation and they straight-up said that RDNA3 is not monolithic. So that settles that. (around an hour and nine minutes into the presentation if you want to see for yourself)

They didn't go into specifics obviously, but the presenter said "It allows us to scale performance aggressively without the yield and cost concerns of large monolithic silicon."
Posted on Reply
#19
ARF
Navi 31 will break the 1000 GTexels/s (1 TTexel/s) Texture Fillrate barrier.
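Texture fillrate is simply the TMU count multiplied by the clock (each TMU samples one texel per clock). A rough sketch with purely hypothetical figures (384 TMUs at 2.7 GHz is an illustration of what crossing 1 TTexel/s would take, not a leaked spec):

```python
# Texture fillrate = TMU count x clock speed (1 texel per TMU per clock).
# The 384 TMU / 2.7 GHz figures are hypothetical, chosen only to show
# what crossing the 1 TTexel/s mark would require.
def fillrate_gtexels(tmus, clock_ghz):
    """Peak texture fillrate in GTexels/s."""
    return tmus * clock_ghz

print(f"{fillrate_gtexels(384, 2.7):.1f} GTexels/s")
```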

Posted on Reply
#20
ratirt
ARFIt looks like one chiplet will be with the shaders and the uncore, while the other chiplets will have the Infinity Cache.
Navi 31: 1 main chiplet called GCD and 6 supplementary chiplets with Infinity Cache.
Navi 32: 1 main chiplet called GCD and 4 supplementary chiplets with Infinity Cache.

I wonder what will the die size of these chiplets be?


3DCenter.org on Twitter: "AMD Navi 33/32/31 (updated) chip data, based on rumors & assumptions As @kopite7kimi pointed out, old info from last Oct is outdated updated: - 20% less WGP - no more double GCD for N31/N32 - 6 MCD for N31 = 384 MB IF$ - 4 MCD for N32 = 256 MB IF$ https://t.co/rj2G2gi9CU https://t.co/yDqeTTdSAT" / Twitter
It would seem that the design is split into two chiplet types: one for the cores and the others for the Infinity Cache. It's not like you would have 2 chiplets, each a 6900 XT, combined, for instance.
But it is a new design, so I suppose AMD knows what they are doing, although it is still speculation and rumor.
Posted on Reply
#21
ARF
ratirtIt would seem that the design is split into two chiplets 1 for cores and other for infinity cash. It is not like you would have 2 chiplets with 6900xt combined for instance.
but it is a new design so I suppose AMD knows what they are doing although, it is still speculation and rumor.
It won't be multi-GPU configuration, so, yeah, one chiplet with the shaders, and the other chiplets for the Infinity Cache.

It is such a pity that they shot themselves in the foot by refusing to develop multi-GPU technology.
Posted on Reply
#22
ratirt
ARFIt won't be multi-GPU configuration, so, yeah, one chiplet with the shaders, and the other chiplets for the Infinity Cache.

It is such a pity that they shot themselves in the foot by refusing to develop multi-GPU technology.
Yes, unlike Zen, where you have 2 chiplets and end up with more cores. This one is different from Zen, but like I said, if this is true, AMD knows what they are doing.
As for multi-GPU, I think it is coming, but not this time around.
Posted on Reply
#23
konga
ratirtIt would seem that the design is split into two chiplets 1 for cores and other for infinity cash. It is not like you would have 2 chiplets with 6900xt combined for instance.
but it is a new design so I suppose AMD knows what they are doing although, it is still speculation and rumor.
They won't necessarily need more than one compute die to be competitive at first. Splitting the silicon between compute and "other shit" dies is enough to allow them to scale up the total transistor count without running into serious yield and cost issues. Multiple compute dies may not come until RDNA4.
Posted on Reply
#24
Valantar
ratirtRDNA3 will not be MCM design as far as we know so there should be no speculation about 2x performance increase.
R0H1TNot necessarily, it's possible though unlikely at this stage ~

With advanced chiplet packaging I assume it's something along the lines of regular Zen based chips.
ratirtI find it hard to believe AMD will put 2 chiplets and just because it says chiplet design doesnt mean there will be two like ZEN. Although, it worked for ZEN with yields so who knows.
What? Of course it's MCM. MCM = multi chip module, i.e. a single package with more than one piece of silicon on it. Chiplet = a single piece of silicon that works together with others on the same package to form a single "chip". If AMD is saying RDNA3 uses "Advanced chiplet packaging", they are saying RDNA3 GPUs are MCM. In this context, the two are synonymous.

Heck, the article even says as much (even if the sentence is a bit garbled):
Chiplets packed with the GPU's main number-crunching and 3D rendering machinery will make up chiplets, while the I/O components, such as memory controllers, display controllers, media engines, etc., will sit on a separate die.
Whether that's one processing die, two, or more is currently unknown, but regardless of that it will be MCM. And, crucially, once you're disaggregating the die, the difference between running one and several processing dice is far smaller than going from a monolithic design. Given that VRAM and PCIe are on the IOD, all chiplets will have equal access to the same data - and if IC is on the IOD as has been speculated, this will also ensure a fully coherent and very fast cache between processing chiplets, ensuring that they don't need to wait on each other for data like in ordinary mGPU setups.
ARFIt is such a pity that they shot themselves in the foot by refusing to develop multi-GPU technology.
Non-transparent multi-GPU is and has always been a dead end, requiring far too much driver and software developer effort to make useful, and needing incredibly fast (and very power hungry) interconnects to try and overcome fundamental challenges like microstuttering. Game development is already massively complex, so putting this on developers is just not feasible. And no GPU maker has the resources to fully support this on their end either. Other than that, I don't see how anyone has "refused" to develop mGPU tech? Transparent multi-GPU in the form of MCM GPUs is coming - it's just a question of when. Everything being on the same package allows for overcoming the inherent challenges of mGPU much, much more easily. With advanced packaging methods putting the dice so close that signal latency nears zero and interconnect power drops to near nothing, you can start moving the scheduling parts of the GPU onto an IOD and make the compute chiplets "pure" compute. This is obviously not trivial whatsoever, but it's the direction things are moving in. It might take a couple of generations, but we're already seeing it in CDNA (which doesn't do real time graphics and thus has slightly fewer issues with keeping everything in sync).
Posted on Reply
#25
olymind1
I'd like that Navi 33; the question is what its price will be, because that will be the deciding factor for most of us.
Posted on Reply