Monday, August 29th 2022

AMD Teases Next-Gen RDNA3 Graphics Card: Claims to Repeat 50% Perf/Watt Gain

AMD, at its Ryzen 7000 launch event, teased its next-generation Radeon graphics card based on the RDNA3 graphics architecture. Built on an advanced process node just like "Zen 4," AMD is hoping to repeat the magic of the RX 6000 series by achieving a 50% performance-per-Watt gain over the previous generation, which would allow it either to build some really efficient GPUs, or to spend that power headroom on significantly higher performance at power levels similar to the current generation.
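
To put the claim in concrete terms, here is a minimal sketch (Python) with a made-up 300 W / 100-point baseline purely for illustration; AMD has published no such figures:

# Hypothetical baseline figures, for illustration only.
base_perf, base_power = 100.0, 300.0                 # performance points, Watts
new_perf_per_watt = (base_perf / base_power) * 1.5   # the claimed +50% perf/Watt

print(new_perf_per_watt * base_power)   # ~150: +50% performance at the same 300 W
print(base_perf / new_perf_per_watt)    # ~200: same performance at roughly two-thirds the power

The same +50% figure can therefore be spent either way: on efficiency at a fixed performance target, or on extra performance at a fixed board power.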

AMD's teaser included a brief look at the air-cooled RDNA3 flagship reference design, and it looks stunning. The company showed off a live demo of the card playing "Lies of P," an AAA gaming title that made waves at Gamescom for its visuals. The game was shown running on an RDNA3 graphics card in a machine with a Ryzen 9 7950X processor, at 4K with extreme settings. AMD CEO Dr. Lisa Su confirmed a 2022 launch for RDNA3.

27 Comments on AMD Teases Next-Gen RDNA3 Graphics Card: Claims to Repeat 50% Perf/Watt Gain

#1
zlobby
Now put this on an APU with Zen4 and build me a laptop!
#2
LFaWolf
Typo - launch “event”.
#3
Minus Infinity
zlobby: Now put this on an APU with Zen4 and build me a laptop!
You do realise Phoenix Point, coming out next year, is exactly that: Zen 4 + RDNA3 on 4 nm. I also can't wait, as I will have a 7-year-old Haswell-based laptop I want to upgrade, and it won't be cRaptor Lake mobile. I'd switch to Apple before going Intel. Maybe Arrow Lake or possibly Meteor Lake would change my mind.
#4
ppn
Well yeah. The 203 mm² Navi 33 is in fact a 6 nm shrink of the 237 mm² 6600 XT, so they simply started counting 2048 shaders as 4096 all of a sudden,

much like the NVIDIA RTX 3070, which is in fact a glorified RTX 2080, with the caveat that its 3072 CUDA cores can now do either INT32+FP32 or FP32+FP32, and this translates into +25% performance.

So what is a 12288-shader Navi 31 supposed to be? Realistically, a glorified 6144 shaders: +20% shaders over the RX 6900 XT plus +25% efficiency, resulting in ~50% more performance at the same power, with a 384-bit bus (over the 256-bit bus) to feed that 50% increase with enough bandwidth.

GCD Size ~308 mm²
MCD 6x ~37.5 mm²
Full GPU ~533 mm²

50% perf/Watt compared to what? Is it the 6900 XT, 300 W TDP (and 300 W measured), at 89% performance?
A custom AIB 6950 XT only worsens that number: +10% performance for +33% power.
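
Putting rough numbers on those assumptions, a minimal sketch (Python); every figure here is taken from this post's guesses, not from any confirmed spec:

# All figures are this post's assumptions, not official numbers.
base_perf, base_power = 1.00, 300.0        # RX 6900 XT taken as the reference point
navi31_perf = base_perf * 1.20 * 1.25      # +20% shaders, +25% efficiency
print(navi31_perf)                         # ~1.50 -> ~+50% performance at the same 300 W

# Custom AIB 6950 XT: +10% performance for +33% power versus that baseline
aib_ppw = (base_perf * 1.10) / (base_power * 1.33)
base_ppw = base_perf / base_power
print(aib_ppw / base_ppw)                  # ~0.83 -> roughly 17% worse perf/Watt

So which RDNA2 SKU ends up as the baseline matters a lot for how impressive the claimed 50% really is.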
#5
MrDweezil
Seems like a lot less has leaked about the upcoming AMD cards compared to the NVIDIA ones, despite supposedly similar launch timeframes.
#6
kiakk
Gimme a 75 W TDP card with near-RX 6600 performance and a good, high-clocked 4c/8T CPU for a power-efficient esports PC with less than 120 W overall power usage.
And start manufacturing 80+ Gold 150-200 W power supplies.
Thank you! Tons of 500 W power supplies are fckN way too powerful to get good power efficiency at light load.
#7
watzupken
kiakk: Gimme a 75 W TDP card with near-RX 6600 performance and a good, high-clocked 4c/8T CPU for a power-efficient esports PC with less than 120 W overall power usage.
And start manufacturing 80+ Gold 150-200 W power supplies.
Thank you! Tons of 500 W power supplies are fckN way too powerful to get good power efficiency at light load.
You can get close to what you asked for here by getting a laptop with an RX 6600 or RTX 3060 now.
#8
kiakk
watzupken: You can get close to what you asked for here by getting a laptop with an RX 6600 or RTX 3060 now.
Thank you, but I do not need an under-engineered, low-quality plastic body with a noisy cooler in an expensive package. :cool: A notebook is only for work and some content consumption.
#9
Tomorrow
Good. At least one company is providing perf-per-Watt gains. Electricity prices in my country have gone bonkers. We have had several days where the price was not 50 cents per kWh (which is already high) but 4 bucks per kWh, over 8 times the normal value. So power efficiency is quickly becoming one of my top priorities, even if most consumers lament AMD losing a few percentage points of performance to Intel or NVIDIA.
#10
1d10t
Just a shroud? Oh come on AMD, show some ti... er... cards. At least we know this ain't a paper launch.
#11
ymdhis
Bet they will also cost 50% more.
#12
Valantar
kiakk: Gimme a 75 W TDP card with near-RX 6600 performance and a good, high-clocked 4c/8T CPU for a power-efficient esports PC with less than 120 W overall power usage.
And start manufacturing 80+ Gold 150-200 W power supplies.
Thank you! Tons of 500 W power supplies are fckN way too powerful to get good power efficiency at light load.
HDPlex has the PSU for you. Other than that, that is exactly the GPU I want to see as well. I'm very close to splurging on a used 6600 just because I can get one for cheap, but I would truly love to see 75W GPUs get the boost we've been waiting for for nearly a decade now.
1d10t: Just a shroud? Oh come on AMD, show some ti... er... cards. At least we know this ain't a paper launch.
... this isn't a GPU launch at all? A teaser is a teaser - if they're giving out concrete information, it's not a teaser.
ppn: 50% perf/Watt compared to what? Is it the 6900 XT, 300 W TDP (and 300 W measured), at 89% performance?
A custom AIB 6950 XT only worsens that number: +10% performance for +33% power.
There is absolutely no chance whatsoever that AMD is using a non-stock AIB configuration as their baseline for a gen-on-gen efficiency comparison. Hopefully they're doing a clean same-tier SKU comparison (i.e. 7900 XT vs. 6900 XT), though at this point seeing how there are no announced SKUs it could be a broader description of how the implemented architecture averages out, or it could (worst case scenario) be the most efficient first-round RDNA3 SKU compared to one of the less efficient RDNA2 SKUs (though not the least efficient, as that would be the 6500 XT, which wouldn't make any type of sense as a comparison to anything resembling a high end GPU - it would be far too transparent at that point).
#13
ModEl4
The most probable scenario for Navi 33 (4096 SPs) is that the doubling of SPs per CU gives at most around 1.5X performance per clock, which, combined with a high clock like 2879 MHz for example, would make it around 2% faster than the 6900 XT at 1080p.
But in that case, at 1440p the 6900 XT will be nearly 10% faster, and nearly 30% faster at 4K. (Even the RX 6800 will be faster at 4K in this case.)
No data to support this, just a feeling!
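
As a sanity check, that scenario roughly adds up; here is a minimal sketch (Python), where the 6600 XT boost clock and the 6900 XT-versus-6600 XT gap at 1080p are assumed figures, not anything stated above:

# Navi 33 treated as a doubled-up 6600 XT; all inputs are illustrative assumptions.
per_clock_uplift = 1.5               # doubled SPs per CU assumed to be worth ~1.5x per clock
clock_ratio = 2879 / 2589            # example 2879 MHz vs. an assumed ~2589 MHz 6600 XT boost
navi33_vs_6600xt = per_clock_uplift * clock_ratio    # ~1.67x
r6900xt_vs_6600xt = 1.63             # assumed 1080p gap between 6900 XT and 6600 XT
print(navi33_vs_6600xt / r6900xt_vs_6600xt)          # ~1.02 -> roughly 2% ahead at 1080p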
#14
Mysteoa
kiakk: Gimme a 75 W TDP card with near-RX 6600 performance and a good, high-clocked 4c/8T CPU for a power-efficient esports PC with less than 120 W overall power usage.
And start manufacturing 80+ Gold 150-200 W power supplies.
Thank you! Tons of 500 W power supplies are fckN way too powerful to get good power efficiency at light load.
They can't make them cheap enough to have a reason to make them. The best you could do is an APU.
#15
Valantar
Mysteoa: They can't make them cheap enough to have a reason to make them. The best you could do is an APU.
Nah, they could make a card like that pretty cheap if they wanted to - the question is whether it would sell in sufficient quantities, and if they'd be willing to not take massive margins on its sale. Margins are the main problem, as chipmakers are increasingly set on raking in massive margins. In terms of production and design, a tiny GPU die like that is dirt cheap, and you don't need much in terms of VRMs or other ancillary componentry for such a small, simple card. It'll have a higher BOM cost than similar GPUs a decade ago from the requirements of GDDR6, PCIe 4.0/5.0, DP 2.0/HDMI 2.1, etc., but it'd still be cheap to make.

Of course, my dream scenario is a GPU die designed to be a GPD - a GPU die for use on desktop MCM APUs, that also gets implemented as a low end dGPU. Maybe 20 CUs? 16 at least? That would be pretty much the best of both worlds. And on 5nm with RDNA3, especially with its MCM architecture with memory controllers and I/O separated from the GPU die (sadly only for the high end, at least for now), that GPU die is starting to look dangerously close to what such a GCD would look like - just a bunch of CUs + a few IF links. I don't think we'll see something like this this generation though - unless that Zen4 iGPU has some tricks up its sleeve that we don't know about, like the ability to control a GCD. I doubt it does though.
#16
zlobby
Valantar: Nah, they could make a card like that pretty cheap if they wanted to - the question is whether it would sell in sufficient quantities, and if they'd be willing to not take massive margins on its sale. Margins are the main problem, as chipmakers are increasingly set on raking in massive margins. In terms of production and design, a tiny GPU die like that is dirt cheap, and you don't need much in terms of VRMs or other ancillary componentry for such a small, simple card. It'll have a higher BOM cost than similar GPUs a decade ago from the requirements of GDDR6, PCIe 4.0/5.0, DP 2.0/HDMI 2.1, etc., but it'd still be cheap to make.

Of course, my dream scenario is a GPU die designed to be a GPD - a GPU die for use on desktop MCM APUs, that also gets implemented as a low end dGPU. Maybe 20 CUs? 16 at least? That would be pretty much the best of both worlds. And on 5nm with RDNA3, especially with its MCM architecture with memory controllers and I/O separated from the GPU die (sadly only for the high end, at least for now), that GPU die is starting to look dangerously close to what such a GCD would look like - just a bunch of CUs + a few IF links. I don't think we'll see something like this this generation though - unless that Zen4 iGPU has some tricks up its sleeve that we don't know about, like the ability to control a GCD. I doubt it does though.
Uhm, would you go through tremendous effort to secure wafers just to sell it later with $1/pcs margin?
#17
Valantar
zlobby: Uhm, would you go through tremendous effort to secure wafers just to sell it later with $1/pcs margin?
Lol, you seem to not have considered the effects of small die sizes on profits. Per chip margins might be lower, but selling 2-3x as many chips kind of alleviates that, no? Also, the wafers are already secured, long, long before this. Heck, going by this logic nothing but high end parts would ever get made.
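
For a sense of scale, here is a minimal dies-per-wafer sketch (Python) using the standard first-order approximation on a 300 mm wafer; the die areas are the ones floated earlier in the thread, and treating the full MCM package as one monolithic die is a simplification purely for comparison:

from math import pi, sqrt

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    # Common first-order approximation, ignoring yield, scribe lines and edge exclusion.
    radius = wafer_diameter_mm / 2
    return pi * radius**2 / die_area_mm2 - pi * wafer_diameter_mm / sqrt(2 * die_area_mm2)

print(dies_per_wafer(203))   # ~300 candidate dies for a ~203 mm² chip
print(dies_per_wafer(533))   # ~100 for ~533 mm² treated as a single die

So even at a much lower margin per chip, the small die gets roughly three shots at a sale for every one of the big one.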
#18
zlobby
Valantar: Lol, you seem to not have considered the effects of small die sizes on profits. Per chip margins might be lower, but selling 2-3x as many chips kind of alleviates that, no? Also, the wafers are already secured, long, long before this. Heck, going by this logic nothing but high end parts would ever get made.
Long story short - I promise you this: nobody, I repeat nobody, not me, not you, not AMD, not Intel, not fooking VIA, is selling on a $1 profit per chip.
#19
Valantar
zlobby: Long story short - I promise you this: nobody, I repeat nobody, not me, not you, not AMD, not Intel, not fooking VIA, is selling on a $1 profit per chip.
... and I don't care whatsoever about random numbers you're pulling out of your rear end, so ... yay?
#20
ARF
kiakk: Gimme a 75 W TDP card with near-RX 6600 performance and a good, high-clocked 4c/8T CPU for a power-efficient esports PC with less than 120 W overall power usage.
And start manufacturing 80+ Gold 150-200 W power supplies.
Thank you! Tons of 500 W power supplies are fckN way too powerful to get good power efficiency at light load.
I have very serious microstutter with my 4-core/8-thread Ryzen 5 2500U in F1 2018 and F1 2020. The game runs smoothly, then all of a sudden severe microstutter happens for five seconds, then the game runs smoothly again, and after 15-20 seconds the microstutter repeats... It's like it enters a loop and never exits from it. Impossible to game anymore...

I also wonder - will a 4-core/8-thread CPU or APU be good enough to run Fortnite smoothly?
#21
gffermari
Bring a 150% uplift in RT performance, keep raster performance at the same level, and I'm happy.
#22
zlobby
Valantar: ... and I don't care whatsoever about random numbers you're pulling out of your rear end, so ... yay?
I don't mind harsh words, doubts, mockery, but people not using their heads? Ay, ay, ay...
#23
Valantar
zlobby: I don't mind harsh words, doubts, mockery, but people not using their heads? Ay, ay, ay...
Using their heads how? By agreeing to some arbitrary number you've come up with to illustrate low margins, with zero reasoning or data to back it up? If you have an argument, make it. It's not my job to guess at your reasoning, and I'd love to see where you're getting that $1 figure from. You already know my guess as to its source - feel free to prove me wrong.
#24
medi01
ppn: Well yeah. The 203 mm² Navi 33 is in fact a 6 nm shrink of the 237 mm² 6600 XT, so they simply started counting 2048 shaders as 4096 all of a sudden,
Team green "innovated" by inflating the shader figure by 100% ("justified" by "but we can do two FP ops at the same time"), nobody even blinked, and team red took notice.

Ultimately, I doubt the average consumer cares, to be honest, but perhaps GPU companies know better.
#25
ModEl4
If Navi 33 is on 6 nm, 4096 SPs in only 203 mm² possibly points to certain assumptions; for example, logically we will also get 2X per clock the theoretical ray tracing ray-box/sec and ray-tri/sec performance, so more than 2X the theoretical peak, but in reality it will be a lot less.
For example, if at 1080p raster the RTX 3080 12 GB is, let's say, 5% slower than Navi 33, then with ray tracing enabled the RTX will be at least 5% faster!
Again, no data, just a feeling.