Thursday, October 20th 2022
AMD Announces RDNA 3 GPU Launch Livestream
It's hardly a secret that AMD will announce its first RDNA 3 based GPUs on the 3rd of November, and the company has now officially announced that it will hold a livestream starting at 1:00 pm (13:00) Pacific Daylight Time. The event goes under the name "together we advance_gaming". AMD didn't share much in the way of details about the event; all we know is that "AMD executives will provide details on the new high-performance, energy-efficient AMD RDNA 3 architecture that will deliver new levels of performance, efficiency and functionality to gamers and content creators."
Source:
AMD
104 Comments on AMD Announces RDNA 3 GPU Launch Livestream
I hope I'm wrong, but I'd place a bet on myself being right.
We have gone through this before. Different game titles perform differently between various cards at various resolutions based on how each title was written and how it harnesses the GPU silicon and interacts with the device driver.
Radeon cards have notoriously poor DirectX 11 performance for example but they do pretty well with DX12 titles. Stuff like that.
You can speculate all you want on an unannounced product, but that doesn't generate any useful data points for a purchase decision. We have to wait for thoughtful third-party reviews of actual hardware running actual software (device drivers and applications).
It's foolish to extrapolate AMD 7xxx series and Nvidia 4xxx series behavior from the previous generation's comparison.
Remember that AMD had a slight silicon advantage with RDNA2 because they used TSMC while NVIDIA used Samsung for Ampere. That foundry advantage is now gone since NVIDIA picked TSMC for Ada Lovelace.
The performance gap could widen, it could shrink, it could stay the same. Or more likely there will be gains in some titles, losses in other titles. AMD could debut new machine learning cores or they could keep them out of their GPU die.
A lot of performance will rely on software quality. We just saw that with Intel's ARC launch: some titles run great, some titles suck. Intel's GPU silicon appears to be okay; it's really their driver software that's their biggest current problem.
AMD already knows how their new GPU silicon stacks up compared to RTX 4090. If they can't beat NVIDIA in benchmark scores, they will likely have to compete on price. Just like they've done for years.
We'll have a better idea on November 3rd.
Again. A 10-15% increase in Raytracing efficiency on the upcoming generation of AMD cards will put them roughly on par in raytracing with Nvidia cards. It's fairly reasonable to assume that the next generation AMD cards will be more efficient at raytracing, because they've literally said that was one of the focuses of the RDNA 3 architecture.
On raw performance, who knows? I never mentioned raw performance, however the chance of RDNA3 regressing in raytracing is slim-to-none.
Regardless of how much AMD touts their ray tracing improvements in RDNA3, there's no direct evidence of it YET. Speculate all you want but until there are third-party benchmark results and reviews, there's no useful data for a purchase decision.
I would love to see AMD destroy NVIDIA in raster, RT, and image upscaling performance, at 100W less TDP and 25% cheaper at MSRP. But that's just a pipe dream right now. If AMD thinks they can do that, great, they have until November 3 to figure it out. Because come November 4, they aren't gonna have anything faster.
Most likely AMD already has a pretty good idea how their halo RDNA3 card (7900 XT?) will match up against the RTX 4090.
I don't think there are any games that run only on RT cores. Raster performance is part of the end result. We'll see if Radeon can catch up to GeForce on unassisted RT performance. One thing for sure, almost no gamer will turn on RT without turning on some sort of image upscaling technology. And those image upscaling technologies have some impact on image quality so even if the unassisted RT image quality is identical, what really matters to the end user is how those images appear after going through the upscaling process.
No one is going to play some unassisted ray traced game at 4K that runs 15 fps. It might be done for a graphics benchmark but not by people who are trying to game.
Is that even ~ practical ~ for every end-user's knowledge / rig's performance?! You CANNOT get more power (unless the devs work with AMD exclusively... yeah, right, like that would happen) from utilizing... wait for it... modified texture shaders (in a nutshell, that's RDNA2's answer to so-called dedicated RT cores, aka CUs) until AMD decides to invest in its R&D and REIMAGINE TRUE dedicated ray-tracing cores/CUs for its silicon. :laugh:
The reason for the QHD projection was the Shader Engine (SE) count in Navi 31 (6) vs. the RTX 4090's active GPCs (11), and the assumption was that at QHD the 4090 would lose around 1% per GPC/SE of difference, ergo 5% vs. the 4K difference.
That was wrong: even if they had the same number of SMs/GPCs, the QHD difference would be at least 1-2% lower, and it doesn't matter that only 11 GPCs are active, the die has the inherent latency characteristics of a 12-GPC design. So the shift is at least 7%, and logically, if a 330W Navi 31 is 7% slower at 4K, it won't be just 2% slower at QHD; it will at least match the 4090 at QHD (so OC versions would be faster than the 4090 at QHD).
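For what it's worth, here is that projection as a quick back-of-the-envelope sketch; every figure (the 7% 4K gap, the ~1% per-unit penalty, the 1-2% baseline) is an assumption from the posts and leaks above, not a measurement:

```python
# Rough sketch of the QHD scaling argument above. All numbers are assumptions
# from rumors/leaks, not measured results.
gap_4k = 0.07                  # assumed: 330 W Navi 31 ~7% behind the RTX 4090 at 4K

navi31_shader_engines = 6
ad102_gpc_layout = 12          # the 4090 has 11 of 12 GPCs active, but the die keeps
                               # the latency characteristics of a 12-GPC design
per_unit_penalty = 0.01        # assumed: the bigger die loses ~1% of its lead per extra GPC/SE at QHD
baseline_penalty = 0.015       # assumed: 1-2% QHD penalty even at equal unit counts

qhd_shrink = baseline_penalty + (ad102_gpc_layout - navi31_shader_engines) * per_unit_penalty
gap_qhd = gap_4k - qhd_shrink
print(f"Projected QHD gap: {gap_qhd:+.1%}")  # ~-0.5%, i.e. Navi 31 roughly matches the 4090 at QHD
```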
The latest rumors suggest 42 WGPs (10752 SP) and 20GB (320-bit bus) for the 7900XT; I had in mind 44 WGPs (11264 SP) and 24GB (384-bit bus). Although this is good for the design (meaning it can supposedly achieve the ≥50% performance/W claim with fewer resources), the difference in WGPs is small anyway; maybe by cutting down the GDDR6 ICs and MCDs they dropped power consumption a little bit and instead raised the frequency 5-6% to compensate for the difference.
Regarding naming, the 6900XT was a full Navi 21 with the full 16GB realized, while the 7900XT is supposedly a cut-down Navi 31 (1 GCD with 42 WGPs + 5 MCDs) with only 20GB present, so the naming is strange; it should have been the 7800XT. I bet AMD doesn't want consumers to compare the performance/$ of the 7900XT with the 6800XT (assumption: nearly +75% performance at 4K) because it will be worse. It would need a $999 SRP just to barely match the current RX 6800XT's ($549) performance/$, and even compared with the original 2020 SRP ($649), the 6800XT would have only 15% worse performance/$. So at $999 AMD would essentially be giving you only 15% more performance/$ after two years, despite claiming the same SRP as the 6900XT.
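To make that performance/$ comparison concrete, here's a small sketch using the assumed +75% uplift at 4K and the prices quoted above (all speculation, not measured data):

```python
# Performance-per-dollar comparison from the post above. The +75% uplift and the
# $999 SRP are rumors/assumptions, not confirmed figures.
assumed_uplift = 1.75       # rumored 7900XT ~= 1.75x an RX 6800XT at 4K
price_7900xt = 999          # assumed SRP
price_6800xt_now = 549      # current 6800XT price quoted in the post
price_6800xt_2020 = 649     # original 2020 SRP

def relative_perf_per_dollar(uplift: float, new_price: int, old_price: int) -> float:
    """Performance/$ of the new card relative to the old one."""
    return uplift / (new_price / old_price)

print(f"vs 6800XT @ $549: {relative_perf_per_dollar(assumed_uplift, price_7900xt, price_6800xt_now):.2f}x perf/$")
print(f"vs 6800XT @ $649: {relative_perf_per_dollar(assumed_uplift, price_7900xt, price_6800xt_2020):.2f}x perf/$")
# ~0.96x against today's price (slightly worse) and ~1.14x against the 2020 SRP,
# which is roughly the "barely match" / "~15% better" framing in the post.
```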
That's why they chose this naming, with the 7950XT being the top dog.
Still, the value will be much better than the $1199 RTX 4080 regarding raster!
But $999 is the best-case scenario for the 7900XT (nearly the same performance/$ as the currently priced 6800XT); I wonder at what price AMD will position it with this kind of performance, based on the Ada competition/pricing!
I'll admit that I'm wrong if RDNA3 continues to perform as inefficiently as RDNA2 in raytracing titles; however, I doubt that I will need to. They've been pretty clear they're dedicating more resources to raytracing in RDNA3. There's no logical reason for them to lie about it.
It seems a legit expectation; good news for gaming laptop enthusiasts!
This is going to be highly per-game-specific stuff, at best. Nvidia has already shown us the key is a solid DLSS + RT implementation, plus a latency hit, to go further. Or in other words, so much for the low-hanging fruit: they've already hit a wall, and we're now paying in more ways than silicon die area + TDP increase + ray approximation instead of true accuracy :) RT is already dismissed for anything competitive with the current update on Ada.
In brief, it's a massive shitshow.
I agree, RDNA3 won't regress, but I wouldn't be too eager for massive RT strides. And honestly, good riddance IMHO. Just make us proper GPUs that slay frames. We have to consider as well that the current console crop still hasn't got much in the way of RT capability either. Where is the market push? AMD won't care, by owning consoles they have Nvidia by the balls. AMD can take its sweet time and watch Nvidia trip over all the early issues before the next console gen wants something proper.
As long as it's reliable and all that, I'll be happy.
The 335mm² RX 6800M is Navi 22 based and is exactly like the RX 6700XT (full die) with around 7% lower boost clock (2.39GHz vs 2.58GHz) and around 6% lower game clock (2.3GHz vs 2.42GHz).
According to leaks, Navi 32 is in the same ballpark (around 350-352mm²), and the 5nm compute dies can hit near 4GHz if pushed vs the 3GHz that 7nm RDNA2 could hit under the same conditions, with 30 WGPs (60 old CUs) and 7680 SP.
At a 3GHz boost on the full die (it should land within 2.7-3GHz) it will have nearly double the RX 6950XT's FP32 throughput (with nearly the same pixel fillrate), and it only needs to match it; sure, doubling the SPs won't bring 2X actual performance, but it will bring something (1.25X-1.35X).
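As a sanity check on the "nearly double FP32" claim, a quick sketch (the Navi 32 shader count and the 3GHz clock are leaked/assumed figures):

```python
# Back-of-the-envelope FP32 throughput; shader counts and clocks are leaked or
# assumed figures from the post above, not confirmed specs.
def fp32_tflops(shaders: int, clock_ghz: float) -> float:
    return shaders * 2 * clock_ghz / 1000  # 2 FLOPs per shader per clock (FMA)

rx6950xt = fp32_tflops(5120, 2.31)  # RX 6950 XT at its rated boost clock
navi32 = fp32_tflops(7680, 3.0)     # rumored full Navi 32 at an assumed 3.0 GHz boost

print(f"RX 6950 XT: {rx6950xt:.1f} TFLOPS")
print(f"Navi 32:    {navi32:.1f} TFLOPS ({navi32 / rx6950xt:.2f}x)")
# Nearly 2x on paper; game performance scales far less, hence the 1.25-1.35x guess above.
```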
The RX 6800M was 145W and the 3080 Ti mobile was 175W; maybe AMD will also raise the TGP to 175W (since this time they will be much closer to Nvidia's mobile flagship).
Also, N32 is ~200mm² of N5 + ~144mm² of N6 for the full 4-MCD version, or ~108mm² of N6 for the 3-MCD version, if the SkyJuice Angstronomics leak is accurate.
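For reference, the totals those figures imply, next to the ~350mm² ballpark mentioned earlier (everything here comes from the leaks being discussed, so treat it as assumption):

```python
# Die-area totals implied by the leaked figures above (assumptions, not confirmed specs).
gcd_n5 = 200   # ~200 mm^2 compute die on N5
mcd_n6 = 36    # ~36 mm^2 per memory/cache die on N6 (other leaks round to 37.5-38)

for mcd_count in (4, 3):
    n6_area = mcd_count * mcd_n6
    print(f"{mcd_count} MCDs: {n6_area} mm^2 of N6, {gcd_n5 + n6_area} mm^2 total")
# 4 MCDs -> 144 mm^2 of N6 (~344 mm^2 total); 3 MCDs -> 108 mm^2 (~308 mm^2 total).
```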
The 4090, which is 45-50%-ish ahead of the 3090 Ti, should be well within punching distance for AMD's top dog.
The question is the consequences of the MCM design.
Will we see a repeat of Zen's chiplet story, with major meltdowns in the competitor's camp?
Or did it rather fail, and the new GPU cannot beat the old one by 50%-ish?
PS
DLSS3/RT is pure bazinga: the former has quite a negative impact on visual quality, while the latter is barely used even by people with uber cards.
Mind you, there were 3D TVs not so long ago.
The % of users actually using that feature was arguably higher than the number of users gaming with RT on.
It got slashed.
For RT to take off, it must be much less problematic to develop for and much less taxing in FPS terms.
RT is the future. I don't think RDNA3 / Ada are going to deliver that future, but I could see RDNA4 / Hopper (or whatever the consumer next-gen NV codename is) doing for RT what the 9700 Pro did for AF and AA by making it a default-on feature.
More silicon => more dumb compute units can be crammed in.
(I think the context of that comment was AMD rapidly catching up) Shouldn't AMD have an edge given it's using multiple nodes?
Also, the competition will have a strong product in the form of AD103; depending on what performance level it achieves, AMD will decide then.
Regarding N32, I had the impression that the leak was 200 + 4×37.5mm² (other sites also round the MCDs to 38mm²); is 4×36mm² now the latest info?
RT, etc. and its global adoption will only grow, by magnitudes that will soon be too complex to track anymore.
Nearly every past, current, and future AAA title down to indie titles supports RT, etc., and only the ~ few (inferior devs, potato rig users, etc.) ~ are having issues with this inevitable change.
"The many ALWAYS outweighs the few... or the one!" :laugh:
www.rockpapershotgun.com/confirmed-ray-tracing-and-dlss-games-2022