Monday, February 20th 2023

AMD RDNA4 Architecture to Build on Features Relevant to Gaming Performance, Doesn't Want to be Baited into an AI Feature Competition with NVIDIA
AMD's next-generation RDNA4 graphics architecture will retain a design focus on gaming performance, without being drawn into an AI feature-set competition with rival NVIDIA. David Wang, SVP of the Radeon Technologies Group, and Rick Bergman, EVP of the Computing and Graphics Business at AMD, gave an interview to Japanese tech publication 4Gamer, in which they dropped the first hints on the direction the company's next-generation graphics architecture will take.
While acknowledging NVIDIA's movement in the GPU-accelerated AI space, AMD said it doesn't believe image processing and performance upscaling are the best uses of the GPU's AI compute resources, and that the client segment still hasn't found extensive use for GPU-accelerated AI (or, for that matter, even CPU-based AI acceleration). AMD's own image-processing tech, FSR, doesn't leverage AI acceleration. Wang said that with the company introducing AI acceleration hardware in its RDNA3 architecture, he hopes AI is leveraged to improve gameplay itself, such as procedural world generation, NPCs, and bot AI, to add the next level of complexity, rather than spending the hardware resources on image processing.

AMD also stressed the need to make the GPU more independent of the CPU in graphics rendering. The company has taken several steps in this direction over the past several generations, the most recent being the multi-draw indirect accelerator (MDIA) component introduced with RDNA3. Using this, software can batch multiple instanced draw commands into a single dispatch that the GPU consumes directly, greatly reducing CPU-level overhead. RDNA3 is up to 2.3x more efficient at this than RDNA2. Expect more innovations along these lines with RDNA4.
AMD understandably didn't say anything about the "when," "what," and "how" of RDNA4, as its latest RDNA3 architecture has only just gotten off the ground and is awaiting a product ramp through 2023 into market segments spanning iGPUs, mobile GPUs, and mainstream desktop GPUs. RDNA3 currently powers the Radeon RX 7900 series high-end graphics cards and the iGPUs of the company's latest 5 nm "Phoenix Point" Ryzen 7000-series mobile processors. You can catch the 4Gamer interview at the source link below.
Sources:
4Gamer.net, HotHardware
221 Comments on AMD RDNA4 Architecture to Build on Features Relevant to Gaming Performance, Doesn't Want to be Baited into an AI Feature Competition with NVIDIA
They've always made a huge chunk of their money from consumer products. Sadly for Nvidia, marketing and sponsorship deals don't work very well outside the consumer market. You can't buy your way to success as easily, and you actually have to provide extremely competitive pricing, because ROI is critical to businesses, unlike regular consumers, so you can't just price everything to infinity and provide shit value.
Nvidia is desperately looking to repurpose its fancy cores because all that die space is just sitting there. They use it for DLSS, while AMD achieves near-similar results with FSR without spending that die space. We can talk for ages about whether each pixel is arranged favorably on a per-game basis, but if you zoom out a little, say, to a normal seating distance, you can't really tell the difference.
Nvidia is pushing RT, but AMD is carrying it just fine, again without special cores. AMD's approach is clearly superior when you see the relative perf gap shrink between RDNA2 and 3, and between AMD and Nvidia. We're building larger GPUs anyway, and it's a major step back to have parts of the die sit there unused. We figured this out decades ago... Nvidia's move since Turing was, is, and will continue to be a competitive regression, not progress. They're using die space for single-purpose nonsense that barely pays off. We're paying for that die space big time.
As long as AMD controls the console space, and as the only vendor capable of a custom APU with decent graphics that's a position they'll keep, they can dictate the progress of RT and gaming in general, because everything just has to run on their hardware. And they're making it happen as we speak. You cán use RT on AMD. You cán use FSR on any GPU. The technologies work and pay off just the same.
Nvidia is diving into a hole, and this is a long journey; let's see where it ends. I don't think AMD is using a bad strategy here, nor a loser's strategy. Chasing the most popular soundbites isn't by definition the best way forward. And if you combine this with the general economic situation... wasting cores and valuable die space on special-purpose hardware makes even less sense.
AMD perpetuates the current state by choosing this route.
Come on, bro. Our 'AI' technology is input > output and lots of training. It's tunnel vision waiting to happen, and we've seen it all before. It's just a fancy, slightly-less-random RNG. These things are full of paradoxical stuff, and we're discovering that as we speak... err, chat.
In gaming, what is the big constant? Good design. Talented developers make great games when given the tools. The tools are just tools. If you haven't got brilliance at the helm, you'll get dime-a-dozen crap, and we have a well-fleshed-out history of such games and their differences. AI won't change that at all. It'll only lower the bar for more bottom-feeding bullshit.
And Intel uses big.LITTLE, meaning tons of efficiency cores, while AMD uses performance cores only, so you can stop comparing core counts like that. The Ryzen 7800X3D will probably smack the i9-13900KS in gaming when it's released in a few weeks.
Then at 1080p the 3080 chokes and can't even keep up with the 3060, which at least manages to hit 30 fps.
At 4K nothing manages a clean 60 fps. Even the 4090 suffers a fair number of drops below 50, but at least that could work; everything else is just too slow unless you want to turn on DLSS/FSR or accept 30 fps.
Or...we can feed the troll here just a bit more with random charts from random moments in time.
AMD will need to come up with some kind of DLSS 3 answer, even if it's not frame-generation related, to make up for the performance gap.
I don't see "procedural world generation, NPCs, bot AI," as the writer says, helping do so.
Trying to use it as part of an argument, given these obvious and observable shortcomings, suggests you have a disingenuous agenda.
As far as I know, across the board, the 7900 XTX is way faster than a 4070 Ti.
About RT: heh, it's sad you bring that up, since not even the 4090 can pull RT off in certain games. Getting 50 FPS instead of 20 FPS doesn't really make a difference. RT is nice to have, but relying solely on it is a fool's errand.
Just like pursuing some sort of AI scheme that is supposedly better for gaming. AI is making huge strides in other areas, but not in gaming.
But "muh RT," you will say. Sure, it's a bit faster in RT, but that doesn't really matter, because to get playable framerates you'll need upscaling anyway. 30 W isn't a "truckload," nor is it 30% more, you mathematical prodigy; though I'm sure in your view even 1 W more would be a truckload, because you're obsessively trying to harp on AMD over any insignificant difference.
Now that I think about it, Nvidia somehow managed to make a chip with 20 billion fewer transistors on a newer node that pulls almost as much power as Navi 31. Amazing stuff.