Thursday, October 20th 2022

AMD Announces RDNA 3 GPU Launch Livestream

It's hardly a secret that AMD will announce its first RDNA 3 based GPUs on the 3rd of November, and the company has now officially confirmed that it'll hold a livestream starting at 1:00 pm (13:00) Pacific Daylight Time. The event goes under the name "together we advance_gaming". AMD didn't share much in terms of details about the event; all we know is that "AMD executives will provide details on the new high-performance, energy-efficient AMD RDNA 3 architecture that will deliver new levels of performance, efficiency and functionality to gamers and content creators."
Source: AMD

104 Comments on AMD Announces RDNA 3 GPU Launch Livestream

#76
Fluffmeister
It all boils down to stock levels, and with the world and their mother using TSMC, I suspect stock will sell out on day 1.

I hope I'm wrong, but I'd place a bet on myself being right.
Posted on Reply
#77
cvaldes
EatingDirt: TPU's own benchmarks compare the efficiency of the 6xxx AMD series to the Nvidia 3xxx series (and now the 4090). Here's the biggest outlier:
cyberpunk-2077-rt-3840-2160.png (500×570) (tpucdn.com)

Cyberpunk shows AMD GPUs roughly at -69% and Nvidia 3xxx series GPUs at around -50%. This is the game with the largest difference, and probably the most intensive raytracing, that TPU has tested in their most recent benchmarks. So overall we have a ~20% difference, and around ~30% if you include the 4090.

In Far Cry 6 the AMD GPU is actually ~5% more efficient than the Nvidia one, and in F1 Nvidia is only ~8% more efficient.
Just because Brand X increased performance by ___ percentage from one generation to the next doesn't automatically mean that Brand Y will see the same percentage bump.

We have gone through this before. Different game titles perform differently between various cards at various resolutions based on how each title was written and how it harnesses the GPU silicon and interacts with the device driver.

Radeon cards have notoriously poor DirectX 11 performance, for example, but they do pretty well with DX12 titles. Stuff like that.

You can speculate all you want on an unannounced product, but that doesn't generate any useful data points for a purchase decision. We have to wait for thoughtful third-party reviews of actual hardware running actual software (device drivers and applications).

It's foolish to extrapolate 7xxx AMD series behavior and Nvidia 4xxx series behavior based on the previous generation's comparison.

Remember that AMD had a slight silicon advantage with RDNA2 because they used TSMC while NVIDIA used Samsung for Ampere. That foundry advantage is now gone since NVIDIA picked TSMC for Ada Lovelace.

The performance gap could widen, it could shrink, it could stay the same. Or more likely there will be gains in some titles, losses in other titles. AMD could debut new machine learning cores or they could keep them out of their GPU die.

A lot of performance will rely on software quality. We just saw that with Intel's Arc launch: some titles run great, some titles suck. Intel's GPU silicon appears to be okay; it's really their driver software that's their biggest current problem.

AMD already knows how their new GPU silicon stacks up compared to RTX 4090. If they can't beat NVIDIA in benchmark scores, they will likely have to compete on price. Just like they've done for years.

We'll have a better idea on November 3rd.
Posted on Reply
#78
kapone32
cvaldes: Just because Brand X increased performance by ___ percentage from one generation to the next doesn't automatically mean that Brand Y will see the same percentage bump.

We have gone through this before. Different game titles perform differently between various cards at various resolutions based on how each title was written and how it harnesses the GPU silicon and interacts with the device driver.

Radeon cards have notoriously poor DirectX 11 performance, for example, but they do pretty well with DX12 titles. Stuff like that.

You can speculate all you want on an unannounced product, but that doesn't generate any useful data points for a purchase decision. We have to wait for thoughtful third-party reviews of actual hardware running actual software (device drivers and applications).

It's foolish to extrapolate 7xxx AMD series behavior and Nvidia 4xxx series behavior based on the previous generation's comparison.

Remember that AMD had a slight silicon advantage with RDNA2 because they used TSMC while NVIDIA used Samsung for Ampere. That foundry advantage is now gone since NVIDIA picked TSMC for Ada Lovelace.

The performance gap could widen, it could shrink, it could stay the same. Or more likely there will be gains in some titles, losses in other titles. AMD could debut new machine learning cores or they could keep them out of their GPU die.

A lot of performance will rely on software quality. We just saw that with Intel's Arc launch: some titles run great, some titles suck. Intel's GPU silicon appears to be okay; it's really their driver software that's their biggest current problem.

AMD already knows how their new GPU silicon stacks up compared to RTX 4090. If they can't beat NVIDIA in benchmark scores, they will likely have to compete on price. Just like they've done for years.

We'll have a better idea on November 3rd.
I know this is highly anecdotal, but when I got my 6500XT I was blown away by the fact that it basically overclocked to 2983 MHz on the GPU and 2400 MHz on the memory. If they can achieve the clock speeds advertised on YouTube, I have no doubt that the performance will be compelling. Let's also remember these are 5nm GPUs just like Nvidia's, but one could argue that AMD has had a longer time refining on TSMC's nodes than Nvidia, so it may be able to extract more performance.
Posted on Reply
#79
Valantar
Fluffmeister: It all boils down to stock levels, and with the world and their mother using TSMC, I suspect stock will sell out on day 1.

I hope I'm wrong, but I'd place a bet on myself being right.
Yeah, but TSMC order numbers have also been falling off a cliff recently as every chipmaker is scaling back production. The question is if this came soon enough to bolster RDNA3 production - and I don't think so. But it could bode well for stock to show up in large numbers a month or two afterwards.
Posted on Reply
#80
Fluffmeister
Valantar: Yeah, but TSMC order numbers have also been falling off a cliff recently as every chipmaker is scaling back production. The question is if this came soon enough to bolster RDNA3 production - and I don't think so. But it could bode well for stock to show up in large numbers a month or two afterwards.
Your glass is half full, I like it!
Posted on Reply
#81
Valantar
Fluffmeister: Your glass is half full, I like it!
More like I think maybe the glass might get a refill at some point soon :p
Posted on Reply
#82
EatingDirt
cvaldes: Just because Brand X increased performance by ___ percentage from one generation to the next doesn't automatically mean that Brand Y will see the same percentage bump.

We have gone through this before. Different game titles perform differently between various cards at various resolutions based on how each title was written and how it harnesses the GPU silicon and interacts with the device driver.

Radeon cards have notoriously poor DirectX 11 performance, for example, but they do pretty well with DX12 titles. Stuff like that.

You can speculate all you want on an unannounced product, but that doesn't generate any useful data points for a purchase decision. We have to wait for thoughtful third-party reviews of actual hardware running actual software (device drivers and applications).

It's foolish to extrapolate 7xxx AMD series behavior and Nvidia 4xxx series behavior based on the previous generation's comparison.

Remember that AMD had a slight silicon advantage with RDNA2 because they used TSMC while NVIDIA used Samsung for Ampere. That foundry advantage is now gone since NVIDIA picked TSMC for Ada Lovelace.

The performance gap could widen, it could shrink, it could stay the same. Or more likely there will be gains in some titles, losses in other titles. AMD could debut new machine learning cores or they could keep them out of their GPU die.

A lot of performance will rely on software quality. We just saw that with Intel's Arc launch: some titles run great, some titles suck. Intel's GPU silicon appears to be okay; it's really their driver software that's their biggest current problem.

AMD already knows how their new GPU silicon stacks up compared to RTX 4090. If they can't beat NVIDIA in benchmark scores, they will likely have to compete on price. Just like they've done for years.

We'll have a better idea on November 3rd.
I stated the efficiency of the AMD cards at raytracing and you said I was wrong. I showed you evidence that I was correct, and instead of addressing my comment you went on this odd little rant.

Again. A 10-15% increase in Raytracing efficiency on the upcoming generation of AMD cards will put them roughly on par in raytracing with Nvidia cards. It's fairly reasonable to assume that the next generation AMD cards will be more efficient at raytracing, because they've literally said that was one of the focuses of the RDNA 3 architecture.

On raw performance, who knows? I never mentioned raw performance; however, the chance of RDNA3 regressing in raytracing is slim to none.
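
To spell that arithmetic out a bit (a rough sketch; the only hard inputs are the -69%/-50% hits from the TPU chart above, and the 10-15 point improvement is an assumption about RDNA 3, not an AMD figure):

# Back-of-the-envelope for the RT "hit" gap discussed above.
# The -69% / -50% figures are from TPU's Cyberpunk 2077 RT chart (4K);
# the 10-15 point RDNA3 improvement is purely hypothetical.
rdna2_hit = 69.0    # % of raster performance lost when RT is enabled
ampere_hit = 50.0

gap_now = rdna2_hit - ampere_hit
print(f"Current worst-case gap: {gap_now:.0f} points")      # ~19 points

for improvement in (10.0, 15.0):                             # hypothetical RDNA3 gains
    rdna3_hit = rdna2_hit - improvement
    print(f"RDNA3 with a {improvement:.0f}-point smaller hit: "
          f"-{rdna3_hit:.0f}% vs Ampere's -{ampere_hit:.0f}% "
          f"(gap {rdna3_hit - ampere_hit:.0f} points)")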
Posted on Reply
#83
cvaldes
Well, you can't buy an RDNA3 card now.

Regardless of how much AMD touts their ray tracing improvements in RDNA3, there's no direct evidence of it YET. Speculate all you want but until there are third-party benchmark results and reviews, there's no useful data for a purchase decision.

I would love to see AMD destroy NVIDIA in raster, RT, and image upscaling performance, at 100W TDP less and 25% cheaper at MSRP. But that's just a pipe dream right now. If AMD thinks they can do that, great, they have until November 3 to figure it out. Because come November 4, they aren't gonna have anything faster.

Most likely AMD already has a pretty good idea how their halo RDNA3 card (7900 XT?) will match up against the RTX 4090.

I don't think there are any games that run only on RT cores. Raster performance is part of the end result. We'll see if Radeon can catch up to GeForce on unassisted RT performance. One thing for sure, almost no gamer will turn on RT without turning on some sort of image upscaling technology. And those image upscaling technologies have some impact on image quality so even if the unassisted RT image quality is identical, what really matters to the end user is how those images appear after going through the upscaling process.

No one is going to play some unassisted ray traced game at 4K that runs 15 fps. It might be done for a graphics benchmark but not by people who are trying to game.
Posted on Reply
#84
GunShot
EatingDirt: TPU's own benchmarks compare the efficiency of the 6xxx AMD series to the Nvidia 3xxx series (and now the 4090). Here's the biggest outlier:
cyberpunk-2077-rt-3840-2160.png (500×570) (tpucdn.com)

Cyberpunk shows AMD GPUs roughly at -69% and Nvidia 3xxx series GPUs at around -50%. This is the game with the largest difference, and probably the most intensive raytracing, that TPU has tested in their most recent benchmarks. So overall we have a ~20% difference, and around ~30% if you include the 4090.

In Far Cry 6 the AMD GPU is actually ~5% more efficient than the Nvidia one, and in F1 Nvidia is only ~8% more efficient.
TPU also benches titles with the latest swapped vDLSS DLL.

Is that even ~ practical ~ for every end-user's knowledge / rig's performance?!
EatingDirt: I stated the efficiency of the AMD cards at raytracing and you said I was wrong. I showed you evidence that I was correct, and instead of addressing my comment you went on this odd little rant.

Again. A 10-15% increase in Raytracing efficiency on the upcoming generation of AMD cards will put them roughly on par in raytracing with Nvidia cards. It's fairly reasonable to assume that the next generation AMD cards will be more efficient at raytracing, because they've literally said that was one of the focuses of the RDNA 3 architecture.

On raw performance, who knows? I never mentioned raw performance; however, the chance of RDNA3 regressing in raytracing is slim to none.
You CANNOT get more power (unless the devs work with AMD exclusively... yeah, right, like that would happen) from utilizing... wait for it... modified texture shaders (in a nutshell, that's RDNA2's answer for its so-called dedicated RT cores aka CUs) until AMD decides to invest in its R&D, and REIMAGINE, its TRUE dedicated ray-tracing cores/CUs for its silicon. :laugh:
Posted on Reply
#85
ModEl4
I posted earlier that the 7900XT at 330W will be -7% in the best case at 4K vs the 4090, but only -2% in QHD.
The reason for the QHD projection was the Shader Engine (SE) count in Navi31 (6) vs the RTX 4090's active GPCs (11), and the assumption was that in QHD the 4090 will lose around 1% per GPC/SE of difference, ergo 5% vs the 4K difference.
That was wrong: even if they had the same number of SMs/GPCs, the QHD difference would be at least 1-2% lower, and it doesn't matter that only 11 GPCs are active since the die has the inherent latency characteristics of a 12-GPC design, so it is at least a 7% difference. Logically, if 330W Navi31 is 7% slower in 4K, it won't be 2% slower in QHD but will at least match the 4090 in QHD (so OC versions would be faster than the 4090 in QHD).
The latest rumors suggest 42 WGPs (10752 SPs) and 20GB (320-bit bus) for the 7900XT; I had in mind 44 WGPs (11264 SPs) and 24GB (384-bit bus). Although this is good for the design (meaning with fewer resources it can supposedly achieve the ≥50% performance/W claim), the difference in WGPs is small anyway; maybe by reducing the GDDR6 ICs and MCDs they dropped power consumption a little and instead increased the frequency 5-6% to compensate for the difference.
Regarding naming, the 6900XT was full Navi21 with the full 16GB realized; now the 7900XT is supposedly cut-down Navi31 (1 GCD with 42 WGPs + 5 MCDs) with only 20GB present, so the naming is strange: it should have been 7800XT. I bet AMD doesn't want consumers to compare the performance/$ of the 7900XT with the 6800XT (assumption = nearly +75% performance at 4K) because it will be worse... It will need a $999 SRP just to barely match the current RX 6800XT's ($549) performance/$, and even if we compare it with the original 2020 SRP ($649), the 6800XT will have only 15% worse performance/$, so AMD in this case ($999) will essentially give you only 15% more performance/$ after 2 years, despite claiming the same SRP as the 6900XT.
That's why they chose this naming, with the 7950XT being the top dog.
Still, the value will be much better than the $1199 RTX 4080 regarding raster!
But $999 is the best-case scenario for the 7900XT (nearly the same performance/$ as the currently priced 6800XT); I wonder at what price AMD will position it with this kind of performance, based on Ada competition/pricing!
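
To show where that perf/$ claim comes from, a quick back-of-the-envelope (the +75% uplift, the $999 SRP and the prices are the assumptions from this post, nothing confirmed):

# Quick check of the performance-per-dollar comparison above. The +75%
# uplift for the 7900XT over the 6800XT and the $999 SRP are assumptions.
def perf_per_dollar(relative_perf, price):
    return relative_perf / price

p_6800xt = 1.00                 # baseline
p_7900xt = 1.75                 # assumed ~+75% at 4K

ppd_6800xt_now  = perf_per_dollar(p_6800xt, 549)   # current price
ppd_6800xt_2020 = perf_per_dollar(p_6800xt, 649)   # original 2020 SRP
ppd_7900xt      = perf_per_dollar(p_7900xt, 999)   # assumed launch SRP

print(f"7900XT vs 6800XT @ $549: {ppd_7900xt / ppd_6800xt_now - 1:+.0%}")   # ~-4%
print(f"7900XT vs 6800XT @ $649: {ppd_7900xt / ppd_6800xt_2020 - 1:+.0%}")  # ~+14%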
Posted on Reply
#86
EatingDirt
GunShot: TPU also benches titles with the latest swapped vDLSS DLL.

Is that even ~ practical ~ for every end-user's knowledge / rig's performance?!
What in the world are you talking about? What does a vDLSS DLL have to do with anything I posted?
GunShot: You CANNOT get more power (unless the devs work with AMD exclusively... yeah, right, like that would happen) from utilizing... wait for it... modified texture shaders (in a nutshell, that's RDNA2's answer for its so-called dedicated RT cores aka CUs) until AMD decides to invest in its R&D, and REIMAGINE, its TRUE dedicated ray-tracing cores/CUs for its silicon. :laugh:
I really hope you're a hardware & software engineer. Otherwise, it seems like a foolish statement to just say 'lulz, they can't improve their raytracing efficiency because their architecture doesn't work the same way as Nvidia's'. I expect this to be an interesting post to come back to at the launch of RDNA 3.

I'll admit that I'm wrong if RDNA3 continues to perform as inefficiently as RDNA2 in raytracing titles; however, I doubt that I will need to. They've been pretty clear that they're dedicating more resources to raytracing in RDNA3. There's no logical reason for them to lie about it.
Posted on Reply
#88
Vayra86
CallandorWoT: Well boys. Time to go to the medieval armorer and get equipped. We got us some bots' asses to kick next month. HELL YEAH BOYS!!! WE CAN DO IT!!!


RDNA3 AT MSRP WILL BE MINE!!! EAT SHIT BOTS!!! :rockout: :rockout: :rockout: :rockout: :rockout: :rockout: :rockout: :rockout: :rockout:
Inb4 the next disappointing result ;)
Posted on Reply
#90
Vayra86
ModEl4: AMD's Fastest Radeon RX 7000 "RDNA 3" Laptop GPU Could Offer RX 6950 XT & RTX 3090 Levels of Performance

It seems a legit expectation, good news for gaming laptop enthusiasts!
Seems legit? LOL. I suppose we're looking at RTX4090 thicccness here for this model.
EatingDirt: I stated the efficiency of the AMD cards at raytracing and you said I was wrong. I showed you evidence that I was correct, and instead of addressing my comment you went on this odd little rant.

Again. A 10-15% increase in Raytracing efficiency on the upcoming generation of AMD cards will put them roughly on par in raytracing with Nvidia cards. It's fairly reasonable to assume that the next generation AMD cards will be more efficient at raytracing, because they've literally said that was one of the focuses of the RDNA 3 architecture.

On raw performance, who knows? I never mentioned raw performance; however, the chance of RDNA3 regressing in raytracing is slim to none.
15% won't put them on par... That's what, 4 FPS on most titles in 4K at the top end?

This is going to be highly per-game-specific stuff, at best. Nvidia has already shown us the key is a solid DLSS + RT implementation, plus a latency hit, to go further. In other words, so much for the low-hanging fruit: they've already hit a wall, and we're now paying in more ways than silicon die area + TDP increase + ray approximation instead of true accuracy :) RT is already dismissed for anything competitive with the current update on Ada.

In brief, it's a massive shitshow.

I agree, RDNA3 won't regress, but I wouldn't be too eager for massive RT strides. And honestly, good riddance IMHO. Just make us proper GPUs that slay frames. We also have to consider that the current console crop still hasn't got much in the way of RT capability either. Where is the market push? AMD won't care; by owning the consoles they have Nvidia by the balls. AMD can take its sweet time and watch Nvidia trip over all the early issues before the next console gen wants something proper.
Posted on Reply
#91
AsRock
TPU addict
Legacy-ZA: I think everyone is going to be in for a surprise, even nVidia. :):)
Well, I don't care if that's good or bad to be honest; a 6900XT would have been more than enough for what I wanted, but with them abandoning older cards I thought I would try to get something near-ish to the newly released ones.

As long as it's reliable and all that, I'll be happy.
Posted on Reply
#92
ModEl4
Vayra86: Seems legit? LOL. I suppose we're looking at RTX4090 thicccness here for this model.
In the RTX 4090 TPU review, at QHD the 3090 is at 74% and the 6950XT at 77%, and at 4K they are equivalent, regarding raster.
The 335mm² RX 6800M is Navi22 based and is exactly like the RX 6700XT (full die), with around 7% lower boost clock (2.39GHz vs 2.58GHz) and 6% lower game clock (2.3GHz vs 2.42GHz).
According to leaks, Navi32 is in the same ballpark (around 350-352mm²), and the 5nm compute dies can hit near 4GHz if pushed, vs the 3GHz that 7nm RDNA2 could hit under the same conditions, with 30 WGPs (60 old CUs) and 7680 SPs.
At a 3GHz boost and with the full die (it should land within 2.7-3GHz), it will have nearly double the RX 6950XT's FP32 performance (with nearly the same pixel fillrate), and it only needs to match it; sure, doubling the SPs won't bring 2X actual performance, but it will bring something (1.25X-1.35X).
The RX 6800M was 145W and the mobile 3080 Ti was 175W; maybe AMD will increase the TGP to 175W as well (since this time they will be much closer to Nvidia's mobile flagship).
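
For reference, the raw FP32 math behind the "nearly double" estimate (a sketch only; the Navi32 shader count and 3GHz clock are leak-based assumptions, while the RX 6950 XT figures are the reference specs):

# Rough FP32 throughput comparison behind the "nearly double" claim above.
def fp32_tflops(shaders, clock_ghz):
    # 2 FLOPs per shader per clock (FMA)
    return shaders * 2 * clock_ghz / 1000

rx6950xt = fp32_tflops(5120, 2.31)   # ~23.7 TFLOPS (reference boost clock)
navi32   = fp32_tflops(7680, 3.00)   # ~46.1 TFLOPS (if the leaks hold)

print(f"RX 6950 XT: {rx6950xt:.1f} TFLOPS")
print(f"Navi32 @ 3GHz: {navi32:.1f} TFLOPS ({navi32 / rx6950xt:.2f}x)")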
Posted on Reply
#93
Vayra86
ModEl4: In the RTX 4090 TPU review, at QHD the 3090 is at 74% and the 6950XT at 77%, and at 4K they are equivalent, regarding raster.
The 335mm² RX 6800M is Navi22 based and is exactly like the RX 6700XT (full die), with around 7% lower boost clock (2.39GHz vs 2.58GHz) and 6% lower game clock (2.3GHz vs 2.42GHz).
According to leaks, Navi32 is in the same ballpark (around 350-352mm²), and the 5nm compute dies can hit near 4GHz if pushed, vs the 3GHz that 7nm RDNA2 could hit under the same conditions, with 30 WGPs (60 old CUs) and 7680 SPs.
At a 3GHz boost and with the full die (it should land within 2.7-3GHz), it will have nearly double the RX 6950XT's FP32 performance (with nearly the same pixel fillrate), and it only needs to match it; sure, doubling the SPs won't bring 2X actual performance, but it will bring something (1.25X-1.35X).
The RX 6800M was 145W and the mobile 3080 Ti was 175W; maybe AMD will increase the TGP to 175W as well (since this time they will be much closer to Nvidia's mobile flagship).
Exactly, 175W :D In a laptop. Without a CPU or anything else.
Posted on Reply
#94
btk2k2
ModEl4: In the RTX 4090 TPU review, at QHD the 3090 is at 74% and the 6950XT at 77%, and at 4K they are equivalent, regarding raster.
The 335mm² RX 6800M is Navi22 based and is exactly like the RX 6700XT (full die), with around 7% lower boost clock (2.39GHz vs 2.58GHz) and 6% lower game clock (2.3GHz vs 2.42GHz).
According to leaks, Navi32 is in the same ballpark (around 350-352mm²), and the 5nm compute dies can hit near 4GHz if pushed, vs the 3GHz that 7nm RDNA2 could hit under the same conditions, with 30 WGPs (60 old CUs) and 7680 SPs.
At a 3GHz boost and with the full die (it should land within 2.7-3GHz), it will have nearly double the RX 6950XT's FP32 performance (with nearly the same pixel fillrate), and it only needs to match it; sure, doubling the SPs won't bring 2X actual performance, but it will bring something (1.25X-1.35X).
The RX 6800M was 145W and the mobile 3080 Ti was 175W; maybe AMD will increase the TGP to 175W as well (since this time they will be much closer to Nvidia's mobile flagship).
I am not sure that laptop N32 is full N32. I think it may be the 12GB, 3-MCD version with a cut die (maybe around 5k shaders at a guess), because I get the same rough idea that full N32, even downclocked, should be 20-30% or so faster than the 6950XT.

Also, N32 is ~200mm² of N5 + ~144mm² of N6 for the full 4-MCD version, or 108mm² of N6 for the 3-MCD version, if the SkyJuice Angstronomics leak is accurate.
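
Spelling out the area figures (assuming the Angstronomics numbers quoted above are right, which implies ~36mm² per MCD):

# Die-area arithmetic for the two N32 configurations mentioned above.
gcd_n5 = 200.0          # mm² of N5 compute die (leaked figure)
mcd_n6 = 36.0           # mm² per N6 memory-cache die (144/4 or 108/3)

for mcds in (4, 3):
    total = gcd_n5 + mcds * mcd_n6
    print(f"{mcds} MCDs: {gcd_n5:.0f} + {mcds}*{mcd_n6:.0f} = {total:.0f} mm² total")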
Posted on Reply
#95
medi01
Here is how last gen panned out:



The 4090, which is 45-50%-ish ahead of the 3090 Ti, should be well within punching distance for AMD's top dog.

The question is: the consequences of the MCM design.
Will we see a repeat of Zen's chiplet story and major meltdowns in the competitor's camp?
Or did it rather fail, and the new GPU cannot beat the old one by 50%-ish?


PS
DLSS3/RT is pure bazinga; the former has quite a negative impact on visual quality, while the latter is barely used even by people with uber cards.

Mind you, there were 3D TVs not so long ago.
The % of users actually using that feature was arguably higher than the number of users with RT on.
It got slashed.

For RT to take off, it must be much less problematic to develop and much less taxing in FPS terms.
Posted on Reply
#96
btk2k2
medi01: Here is how last gen panned out:



The 4090, which is 45-50%-ish ahead of the 3090 Ti, should be well within punching distance for AMD's top dog.

The question is: the consequences of the MCM design.
Will we see a repeat of Zen's chiplet story and major meltdowns in the competitor's camp?
Or did it rather fail, and the new GPU cannot beat the old one by 50%-ish?


PS
DLSS3/RT is pure bazinga; the former has quite a negative impact on visual quality, while the latter is barely used even by people with uber cards.

Mind you, there were 3D TVs not so long ago.
The % of users actually using that feature was arguably higher than the number of users with RT on.
It got slashed.

For RT to take off, it must be much less problematic to develop and much less taxing in FPS terms.
It is pretty obvious that the 4090 is within range of RDNA3. We don't know where it will actually fall, but given AMD's public announcement of >50% perf/watt and some TBP guesses, you can easily get to 4090-level performance, and AMD's prior +50% claims have been on the conservative side. Of course, we need to wait and see if they have actually achieved that or not, but it is 100% in the realm of the reasonably possible.
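
A minimal sketch of that arithmetic, taking the ">50%" claim at face value; the board-power values other than the 6950 XT's 335W are just guesses:

# Projected uplift over the RX 6950 XT from a perf/watt claim and a TBP guess.
baseline_tbp = 335.0        # RX 6950 XT board power (W)
perf_per_watt_gain = 1.50   # AMD's ">50%" claim taken at face value

for tbp_guess in (330.0, 375.0):    # guessed RDNA3 flagship board powers
    projected = perf_per_watt_gain * (tbp_guess / baseline_tbp)
    print(f"At {tbp_guess:.0f}W: ~{projected:.2f}x the RX 6950 XT")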

RT is the future. I don't think RDNA3 / Ada are going to deliver that future, but I could see RDNA4 / Hopper (or whatever the next-gen consumer NV codename is) doing for RT what the 9700 Pro did for AF and AA by making it a default-on feature.
Posted on Reply
#97
medi01
cvaldes: That said, NVIDIA may be putting more effort into improving their Tensor cores, especially since ML is more important for their datacenter business.
Huang himself said that the datacenter GPU is a no-brainer design.
More silicon => more dumb compute units can be crammed in.
(I think the context of that comment was AMD rapidly catching up)
Valantar: Yeah, but TSMC order numbers have also been falling off a cliff recently as every chipmaker is scaling back production. The question is if this came soon enough to bolster RDNA3 production - and I don't think so. But it could bode well for stock to show up in large numbers a month or two afterwards.
Shouldn't AMD have an edge given it's using multiple nodes?
Posted on Reply
#98
ModEl4
btk2k2: I am not sure that laptop N32 is full N32. I think it may be the 12GB, 3-MCD version with a cut die (maybe around 5k shaders at a guess), because I get the same rough idea that full N32, even downclocked, should be 20-30% or so faster than the 6950XT.

Also, N32 is ~200mm² of N5 + ~144mm² of N6 for the full 4-MCD version, or 108mm² of N6 for the 3-MCD version, if the SkyJuice Angstronomics leak is accurate.
I agree, probably cut down; if you calculate what I suggest in my post with the full die, it's even higher than the RX 6950, but it depends on the base/game/boost clocks and the TGP that AMD will target.
Also, the competition will have a strong product in the form of AD103; depending on what performance level it achieves, AMD will decide then.
For N32 I had the impression that the leak was 200 + 4*37.5mm² (other sites round the MCDs to 38mm² as well); is 4*36mm² now the latest info?
Posted on Reply
#99
medi01
Hype intensifier / overhype mode on:

Posted on Reply
#100
GunShot
medi01: DLSS3/RT is pure bazinga
Untrue, and TRUE numbers disagree with that ~ opinion ~ completely.

RT, etc. and its global adoption will only grow by magnitudes that will soon be too complex to track anymore.

Almost every past, current, and future AAA title down to indie titles supports RT, etc., and only the ~ few (inferior devs, potato rig users, etc.) ~ are having issues with this inevitable change.

"The many ALWAYS outweighs the few... or the one!" :laugh:

www.rockpapershotgun.com/confirmed-ray-tracing-and-dlss-games-2022
Posted on Reply