Monday, February 20th 2023

AMD RDNA4 Architecture to Build on Features Relevant to Gaming Performance, Doesn't Want to be Baited into an AI Feature Competition with NVIDIA
AMD's next-generation RDNA4 graphics architecture will retain a design focus on gaming performance, without being drawn into an AI feature-set competition with rival NVIDIA. David Wang, SVP of the Radeon Technologies Group, and Rick Bergman, EVP of AMD's Computing and Graphics Business, gave an interview to Japanese tech publication 4Gamer, in which they dropped the first hints about the direction the company's next-generation graphics architecture will take.
While acknowledging NVIDIA's momentum in the GPU-accelerated AI space, AMD said that it doesn't believe image processing and performance upscaling are the best use of a GPU's AI-compute resources, and that the client segment still hasn't found extensive use for GPU-accelerated AI (or, for that matter, even CPU-based AI acceleration). AMD's own image-processing tech, FSR, doesn't leverage AI acceleration. Wang said that with the company introducing AI acceleration hardware in its RDNA3 architecture, he hopes AI will be leveraged to improve gameplay (procedural world generation, NPCs, bot AI, and so on) and add the next level of complexity, rather than spending the hardware resources on image processing.

AMD also stressed the need to make the GPU more independent of the CPU in graphics rendering. The company has taken several steps in this direction over the past several generations, the most recent being the multi-draw indirect accelerator (MDIA) component introduced with RDNA3. Using this, software can batch multiple instanced draw commands into a single call that the GPU works through on its own, greatly reducing CPU-level overhead; RDNA3 is up to 2.3x more efficient at this than RDNA2. Expect more innovations along these lines with RDNA4.
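To make the multi-draw indirect idea concrete, here is a minimal sketch using stock Vulkan rather than anything AMD-specific; the interview gives no details of the MDIA interface, so the buffer setup and handles below are illustrative assumptions, not AMD's implementation. The CPU records a single command, and the GPU pulls the parameters for every draw out of a buffer:

```cpp
// Minimal sketch: one CPU-side command covering N draws (stock Vulkan).
// All object creation (device, pipeline, buffers) is assumed to happen elsewhere.
#include <vulkan/vulkan.h>

void recordIndirectDraws(VkCommandBuffer cmd,
                         VkBuffer indirectBuf, // N packed VkDrawIndexedIndirectCommand entries
                         uint32_t drawCount)   // N
{
    // The GPU reads indexCount, instanceCount, firstIndex, vertexOffset and
    // firstInstance for each draw straight from indirectBuf; the CPU never
    // touches the individual draws, which is where the overhead saving comes from.
    vkCmdDrawIndexedIndirect(cmd, indirectBuf,
                             0,                                      // byte offset into the buffer
                             drawCount,                              // number of draws in the batch
                             sizeof(VkDrawIndexedIndirectCommand));  // stride between entries
}
```

Because the per-draw parameters live in GPU memory, a compute pass (GPU-side culling, for example) can generate or trim the draw list without a CPU round trip, which is exactly the kind of CPU independence the interview describes.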
AMD understandably didn't say anything about the "when," "what," and "how" of RDNA4, as its latest RDNA3 architecture is just off the ground and awaiting a product ramp through 2023 across market segments spanning iGPUs, mobile GPUs, and mainstream desktop GPUs. RDNA3 currently powers the Radeon RX 7900 series high-end graphics cards and the iGPUs of the company's latest 5 nm "Phoenix Point" Ryzen 7000-series mobile processors. You can catch the 4Gamer interview at the source link below.
Sources:
4Gamer.net, HotHardware
221 Comments on AMD RDNA4 Architecture to Build on Features Relevant to Gaming Performance, Doesn't Want to be Baited into an AI Feature Competition with NVIDIA
According to TPU's 13900K benchmarks in Cyberpunk, that's not correct at any given resolution. Performance doesn't drop by 15% when disabling e-cores, but rather by 1.8% at 1080p.
1 fps dude, it runs 1 fps faster with RT. 1 fps
Meanwhile it's like 40% faster with RT off, not even worth comparing the two.
"runs better on a 3080", the nonsense rubbish you say never ceases to amaze me, you might just take the cake for the worst fanboy I've seen on this site yet.
Can this shitpostfest be locked yet? It's clear that some would rather argue about AMD vs. Nvidia or CPUs.
And if you actually run Windows 10 instead of 11, most games will perform like crap because there's no Thread Director support, which is essential so that e-cores aren't used for stuff that actually matters.
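For a sense of what that scheduling actually involves: on Windows, a game could manually steer a latency-critical thread toward P-cores with the CPU-sets API. This is a crude, hand-rolled stand-in for the placement that Thread Director plus the Windows 11 scheduler handle automatically; the sketch assumes a hybrid CPU where P-cores report a higher EfficiencyClass than E-cores.

```cpp
// Sketch: pin the calling thread to the highest-efficiency-class cores
// (P-cores on Intel hybrid parts) using the Windows CPU-sets API.
#include <windows.h>
#include <vector>

void pinCurrentThreadToPCores()
{
    // First call just reports the buffer size needed for all CPU-set entries.
    ULONG len = 0;
    GetSystemCpuSetInformation(nullptr, 0, &len, GetCurrentProcess(), 0);
    std::vector<char> buf(len);
    auto* info = reinterpret_cast<PSYSTEM_CPU_SET_INFORMATION>(buf.data());
    if (!GetSystemCpuSetInformation(info, len, &len, GetCurrentProcess(), 0))
        return;

    // Pass 1: find the highest EfficiencyClass present; hybrid CPUs report
    // a higher class for P-cores than for E-cores.
    BYTE maxClass = 0;
    for (ULONG off = 0; off < len;) {
        auto* e = reinterpret_cast<PSYSTEM_CPU_SET_INFORMATION>(buf.data() + off);
        if (e->CpuSet.EfficiencyClass > maxClass)
            maxClass = e->CpuSet.EfficiencyClass;
        off += e->Size;
    }

    // Pass 2: collect the IDs of all CPU sets in that top class.
    std::vector<ULONG> pCoreIds;
    for (ULONG off = 0; off < len;) {
        auto* e = reinterpret_cast<PSYSTEM_CPU_SET_INFORMATION>(buf.data() + off);
        if (e->CpuSet.EfficiencyClass == maxClass)
            pCoreIds.push_back(e->CpuSet.Id);
        off += e->Size;
    }

    // Ask the scheduler to keep this thread on the preferred (P-core) set.
    if (!pCoreIds.empty())
        SetThreadSelectedCpuSets(GetCurrentThread(), pCoreIds.data(),
                                 static_cast<ULONG>(pCoreIds.size()));
}
```

On a homogeneous CPU every core shares one class, so the call is a harmless no-op; the point is simply that without OS-level hints, games would each have to carry logic like this themselves.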
I'll show you when I'm home. So it runs slower than a 2.5-year-old card. Splendid. I don't really care what TPU showed; I have the CPU and the game. If TPU doesn't test in CPU-demanding areas that need more than 8 cores, then obviously you won't see a difference.
W1z has already confirmed that despite his general statements about the need for more VRAM (generally: the core runs out as VRAM runs out), which I do agree with, the exceptions do exist, and we've already seen several examples appear in his own reviews where this had to be acknowledged. Similarly, W1z has been seen saying the technologies in play here are progress and that he likes to see it, but also saying how abysmal the performance can get in certain games. It's a thing called nuance. You should try it someday.
And that's my general stance with regard to these new features too: the general movement is good, but paying through the nose for them today is just early-adopting into stuff with an expiry date, with very little to show for it. Upscaling technologies are good, and they're much better if they're hardware agnostic.
Similarly, RT tech is good, and it's much better if it's hardware agnostic.
AMD is proving that the latter in fact 'just works' too.
And that is why Nvidia's approach is indeed a gimmick, where fools and money are parted. History repeats.
The i7-13700K has the same gaming performance as the i9-13900K.
Even the i5-13600K performs only 1% behind the i9-13900K, for half the price.
Efficiency cores give you exactly nothing; performance cores are what matters for gaming performance.
The Ryzen 7 7800X3D will smack the i9-13900K for half the price in a few weeks. Oh, and at half the power draw.
You can enable DLSS 3 to make fake frames though; that will remove the CPU bottleneck :roll:
But it doesn't matter; it wasn't me who made the claim. Someone said "are we talking about the Spider-Man where the 7900 XT is faster than a 4070 Ti" as if that means something. It doesn't. It's a pointless statement that doesn't seem to bother you, yet you seem particularly bothered when a specific company is losing. Someone would even call you biased. Well, obviously you don't look at facts and proof, because you don't have the actual CPU. I do, and I'm telling you that in CPU-demanding areas, e-cores boost performance by a lot.
I'll make some videos with e-cores off as well, since on my channel I only have ones with e-cores on, and you'll see that there is a difference.
None of the technologies in play 'require AI' just because Nvidia said so, and the point isn't proven by Nvidia having a larger share of the market now, either. That just proves the marketing works - until a competitor shows a competitive product or a design win (like Zen!) and the world turns upside down. See, the truth isn't what the majority thinks it is. The truth is what reality dictates - a principle people seem to have forgotten in their online bubbles. And then they meet the real world, where real shit has real consequences, such as the use of die space vs. cost vs. margins vs. R&D budgets.
Nvidia is clearly charging ahead with their implementation and marketing because having to dial it back would:
A. destroy their dual-use strategy for datacenter and consumer GPUs;
B. force them to revert to old technology sans special cores; or
C. redesign the CUDA core to actually do more per clock, or somehow improve throughput further while carrying their old feature set.
They realistically can't go back, so strategically, AMD's bet is a perfect one - note that I said this exact thing about Nvidia's proprietary RT cores shortly after they were initially announced. Also, the fact that Wang is saying now what he said years ago at around the same time... time might be on either company's side, really. It's going to be exciting to see how this works out. Still, the fact that AMD is still on the same trajectory is telling; it shows they have faith in the approach of doing more with less. Historically, doing more with less has always been the success formula for hardware - and it used to be the 'Nvidia thing'.
Note my specs, and note how I'm not paying through the nose at any time, ever - I still run a 1080, because every offer past it has been regression, not progress. You might not want to see it, but the fact is that the price of an x80 GPU has more than doubled since then, and you actually get less hardware for it, such as lower VRAM relative to the core.
I'm not even jumping on a $550-600 RX 6800 (XT), because we're in 2023 now and that's the original MSRP from years back. That's paying too much for what it's going to do, even if it nearly doubles game performance relative to my old card.
There are a LOT of people facing this dilemma right now. Every offer the market currently has is crappy in one way or another. If a deal is hard to swallow, it's a no-deal in my world. Good deals feel like a win-win; there is no way any card in the new gen is a win-win right now.
Chasing the cutting edge has never been great, even when I did try doing so. I've learned I like my products & purchases solid and steady, so that I get what I pay for.
Hey, and don't take it from me, you don't have to:
www.techpowerup.com/forums/threads/graphics-card-prices-doubled-on-average-between-2020-and-2023-mindfactory-data.305018/
So who's the fool here? AMD buyers are paying more money for fewer features, higher power draw, worse RT, and similar raster per dollar.
With RDNA3 we're seeing AI accelerators that will largely go unused, especially for gaming, at least until FSR3 comes out. Zen 5 will introduce AI accelerators too, and we already have the laptop Zen parts with XDNA. That's on top of all the CPU cycles that go unused.
It's coming, but I think it's overrated in the consumer space atm. Needing those Tensor cores on a gaming GPU is very niche. On the business side, AMD has had CDNA with AI; what's really limiting is consumer software and strong AI environments on the AMD side. For gaming I'm more excited for ray tracing and would rather that be the focus. RT is newer and needs that dedicated hardware. But generally, we're still lacking in how much hardware we get to accelerate RT performance, even from Nvidia. If, for example, Nvidia removed all that Tensor hardware, replaced it with RT hardware, and just used FSR or similar, that would be mouth-watering performance.
As for AMD's argument: if they made up for it in rasterization and ray-tracing performance, that would make sense. But they can't even do that. It seems more like AMD just generally lacks resources.
And it's not like AMD is doing more with less; they're doing less with more. The 7900 XTX, with a 384-bit bus and 24 GB of VRAM, barely beats the 256-bit 4080 by a hair in raster and loses in everything else ;). The BOM of the 7900 XTX is definitely higher than that of the 4080, and the only way for AIBs to earn any profit is selling the 7900 XTX at ~1100 USD, which makes it a worse choice than the 1200 USD 4080.
Everyone and their mother should realize by now that Nvidia is just letting RTG survive enough to keep the pseudo-duopoly going.
AMD pulls the old switcheroo and claims they have implemented AI acceleration hardware in RDNA3 - for what? Features that may or may not be a thing by the time RDNA3 goes EOL? When Nvidia implemented AI acceleration hardware in Turing, they also immediately put games that would leverage said hardware on the table; they didn't wait for it to happen.
Yet somehow you are falling for it... well, the majority of the market isn't.