Monday, February 20th 2023

AMD RDNA4 Architecture to Build on Features Relevant to Gaming Performance, Doesn't Want to be Baited into an AI Feature Competition with NVIDIA
AMD's next-generation RDNA4 graphics architecture will retain a design focus on gaming performance, without being drawn into an AI feature-set competition with rival NVIDIA. David Wang, SVP of Radeon Technologies Group, and Rick Bergman, EVP of Computing and Graphics Business at AMD, gave an interview to Japanese tech publication 4Gamer, in which they dropped the first hints about the direction the company's next-generation graphics architecture will take.
While acknowledging NVIDIA's momentum in the GPU-accelerated AI space, AMD said that it doesn't believe image processing and performance upscaling are the best uses of the GPU's AI-compute resources, and that the client segment still hasn't found extensive use for GPU-accelerated AI (or, for that matter, even CPU-based AI acceleration). AMD's own image-processing tech, FSR, doesn't leverage AI acceleration. Wang said that with the company introducing AI acceleration hardware in its RDNA3 architecture, he hopes AI will be leveraged to improve gameplay, such as procedural world generation, NPCs, and bot AI, adding the next level of complexity, rather than spending the hardware resources on image processing.
AMD also stressed the need to make the GPU more independent of the CPU in graphics rendering. The company has taken several steps in this direction over the past several generations, the most recent being the multi-draw indirect accelerator (MDIA) component introduced with RDNA3. Using this, software can dispatch multiple instanced draw commands in a single batch to be issued on the GPU, greatly reducing CPU-level overhead. RDNA3 is up to 2.3x more efficient at this than RDNA2. Expect more innovations along these lines with RDNA4.
AMD understandably didn't say anything about the "when," "what," and "how" of RDNA4, as its latest RDNA3 architecture is just off the ground and awaiting a product ramp through 2023 into various market segments, spanning iGPUs, mobile GPUs, and mainstream desktop GPUs. RDNA3 currently powers the Radeon RX 7900 series high-end graphics cards, and the iGPUs of the company's latest 5 nm "Phoenix Point" Ryzen 7000-series mobile processors. You can catch the 4Gamer interview at the source link below.
Sources:
4Gamer.net, HotHardware
221 Comments on AMD RDNA4 Architecture to Build on Features Relevant to Gaming Performance, Doesn't Want to be Baited into an AI Feature Competition with NVIDIA
I'm not sure about improving quality, though. Upscaling is a technology designed to slightly worsen your image quality for more performance, so improving is a contradiction. I'll believe it when I see it, I guess. :) It sure can now, but what about 2-4 years later? Besides, I'm happy with 1080p, I don't feel like I'm missing out on anything at all. :) That's another reason why I don't want to upgrade. Higher res may sound like a nice thing to have until you look at games like Hogwarts Legacy that run like crap with everything turned on at 4k even on a 4090. I don't want to fall into the trap of getting used to it, and then not being able to go back, or having to spend more on a GPU upgrade when something new that I want to play doesn't run well.
FG, on the other hand, is way more situational; it only works decently/great when your framerate is already at least 45-50 FPS without it. Still wouldn't call it a gimmick. How much power do you think the 4090 consumes? You realise in the vast majority of games it hovers around 350 watts, right?
Doesn't the same thing apply to Zen 4 CPUs? Don't they cut the power in half while losing a tiny amount of performance? You seem more eager to believe it when AMD is involved, don't you?
What do I think the 4090 consumes? According to TPU, the MSI RTX 4090 Gaming Trio consumes 429 watts during gaming, and I'm not even talking about max power here. Saying that you have dropped 100 watts for a 1.3% performance loss is rather spectacular, and it's weird that NV did not advertise it themselves. Or maybe it is just your card; again, I can't rely on your experience alone, since maybe only your card is that spectacular and the majority of the same cards are not. That is why NVIDIA advertises the power consumption it does; otherwise, a lot of cards would not pass the qualification test, meaning fewer on the market, which means even higher prices for a product that is ridiculously highly priced anyway.
There are quite a few bad AA implementations, though. The one in the NVIDIA driver (I'm not sure what it's called) gave me some terrible shimmering around UI elements in Skyrim back in the day. I remember because it took me a while to figure out what the problem was.
i9-13900K has 8 performance cores and 16 useless e cores, then marketed as a 24 core chip, LMAO.
Now, with that said, with E-cores off in, for example, Cyberpunk, performance drops. Dramatically, in fact. The Tom's Diner area goes from around 100-120 FPS to 80-95. So yeah..
TPU test.
www.techpowerup.com/review/rtx-4090-53-games-core-i9-13900k-e-cores-enabled-vs-disabled/2.html
7950X 2nd CCD has performance cores + SMT only. Zero garbage cores.
Efficiency cores make pretty much zero sense for desktop usage. And you are stuck with Windows 11 only, because without Thread Director you will get wonky performance (software uses the wrong cores = crap performance).
The only reason Intel does it is to boost multithreaded performance, especially in synthetic tests like Cinebench, and to market the chips as higher core-count parts; but it's mostly just a marketing gimmick, because Intel has struggled for years with core count. They COULD have put 12 performance cores on the 13900K, but watt usage would explode; performance, however, would have been much better than it is. Sadly, Intel needs 5.5-6 GHz clock speeds to match AMD, and the upcoming Ryzen 7000 3D will beat Intel in gaming, again. The 7800X3D at 399 dollars will probably smack even the i9-13900KS, which will be a 799 dollar chip. Sad but true.
And the 7900 XTX gets closer and closer to the 4090, while beating the 4080 more and more: www.techpowerup.com/review/asrock-radeon-rx-7900-xtx-taichi/31.html
NVIDIA's answer will be the 4080 Ti and 4090 Ti. Gimpy gimpy time. The leather jacket soon pulls them out of the oven.
I don't care what you have, to be fair. Or is it like with your 4090 card claim? Considering how you showcase your 4090 as a legitimately low-power-consumption card, proclaiming it applies to all 4090s, I see a flaw in your conclusions, which deems you untrustworthy.
Another miracle uncovered by you. Some games or applications use more cores, some don't. So you focus on whatever suits you. What is it you are trying to prove again here? That the CPU you have is exceptional? Great, lucky you.
Never said the 13900K is exceptional. Actually, I swapped back to my 12900K because I prefer it. What I did say is that E-cores are not useless in gaming, since there are games that benefit a lot from them. Try to actually argue with what people are saying instead of constantly strawmanning.
Not a low-power card, OK. Efficient? If you talk about efficiency, you need to say compared against what. AMD CPUs, since you brought those up alongside your 4090? You need to have a metric: "this is how much power this device uses, and this is how much power my device uses for the same performance," for instance, or any other metric, when you talk about efficiency.
Yes, you did say the E-cores are not useless for games, and I told you they are; or if there is a difference, it is so mediocre it's pointless to mention. Arguing that you have a CPU with E-cores and I don't somehow gives you the right to falsely claim things? If you want, please refer to @W1zzard's test of E-cores and explain why it is wrong, since your "exceptional CPU" performs differently and W1zzard's findings are false.
The 3D computation process should be fully managed by the video card, not artificially propped up with extra frames gained by sacrificing native resolution. I can understand the use of DLSS and Frame Generation in games whose ray tracing is complete and heavy enough to be hard to handle natively on a video card; in any case, you get good results even with traditional rendering methods.
Intel's E-cores use a dated and MUCH LOWER CLOCKED microarchitecture. The cores are NOT FAST at all. Their primary goal is to fool consumers into thinking the chip has more cores than it has.
The i5-13600K is a "14 core chip" but only has 6 performance cores :roll: Intel STILL has only 6-8 performance cores across the board on mainstream chips in the upper segment. The rest is useless E-cores.
Ryzen 7000 chips with 3D cache will beat Intel in gaming anyway. Hell, even the 7800X3D, a 400 dollar chip, will beat the 13900KS with its 6 GHz boost, twice if not triple the peak watt usage, and 800 dollar price tag; and Intel will abandon the platform after 2 years as usual, meaning 14th gen will require a new socket and a new board. Milky milky time. Intel's architecture is inferior, which is why they need to run at high clock speeds to be able to compete; SADLY for Intel, this means high watt usage.
However, the i9-12900K/KS and i9-13900K/KS are pointless chips for gamers, since the i7 delivers the same gaming performance anyway, without the HUGE watt usage. Hell, even the i5s are within a few percent.