Monday, February 20th 2023

AMD RDNA4 Architecture to Build on Features Relevant to Gaming Performance, Doesn't Want to be Baited into an AI Feature Competition with NVIDIA
AMD's next-generation RDNA4 graphics architecture will retain a design focus on gaming performance, without being drawn into an AI feature-set competition with rival NVIDIA. David Wang, SVP of Radeon Technologies Group, and Rick Bergman, EVP of Computing and Graphics Business at AMD, gave an interview to Japanese tech publication 4Gamers, in which they dropped the first hints about the direction the company's next-generation graphics architecture will take.
While acknowledging NVIDIA's movement in the GPU-accelerated AI space, AMD said that it didn't believe image processing and performance upscaling are the best use of the GPU's AI-compute resources, and that the client segment still hasn't found extensive use for GPU-accelerated AI (or, for that matter, even CPU-based AI acceleration). AMD's own image processing tech, FSR, doesn't leverage AI acceleration. Wang said that with the company introducing AI acceleration hardware in its RDNA3 architecture, he hopes AI is leveraged to improve gameplay (procedural world generation, NPCs, bot AI, and the like), adding the next level of complexity, rather than spending the hardware resources on image processing.
AMD also stressed the need to make the GPU more independent of the CPU in graphics rendering. The company has taken several steps in this direction over the past several generations, the most recent being the multi-draw indirect accelerator (MDIA) component introduced with RDNA3. Using this, software can dispatch multiple instanced draw commands to the GPU in a single call, greatly reducing CPU-level overhead. RDNA3 is up to 2.3x more efficient at this than RDNA2. Expect more innovations along these lines with RDNA4.
AMD understandably didn't say anything about the "when," "what," and "how" of RDNA4, as its latest RDNA3 architecture is just off the ground and awaiting a product ramp through 2023 into various market segments spanning iGPUs, mobile GPUs, and mainstream desktop GPUs. RDNA3 currently powers the Radeon RX 7900 series high-end graphics cards, as well as the iGPUs of the company's latest 5 nm "Phoenix Point" Ryzen 7000-series mobile processors. You can catch the 4Gamer interview in the source link below.
Sources:
4Gamers.net, HotHardware
221 Comments on AMD RDNA4 Architecture to Build on Features Relevant to Gaming Performance, Doesn't Want to be Baited into an AI Feature Competition with NVIDIA
IMO there's no huge rush to release it because, as tech outlets like HWUB have reported, DLSS 3.0 is only useful in very specific scenarios due to its drawbacks. The introduced latency will always be a problem for DLSS 3.0 unless Nvidia fundamentally changes the technology. Unless Nvidia starts generating the next frame instead of inserting a frame between the current and last one, DLSS 3.0 frame insertion will always be niche. The end of 2023 is still a year before RDNA4's likely launch, mind you.
"CEO Rory Read has made the comment that AMD will no longer compete head to head with Intel in the CPU market"
decryptedtech.com/amds-rory-read-says-there%E2%80%99s-enough-processing-power-on-every-laptop-on-the-planet-today
Ahh, dark days.
I wonder how that is going to be handled? Will they have similar or different functionality? Can they help each other out? Is the OS going to have to decide which one to use? Can end users pick which one to use? Just wait. We are going to be blown up the rear end with AI cores. Zen 5 is going to have an XDNA AI engine, APUs will have AI accelerators in their RDNA iGPUs, and those with discrete cards will have AI cores there too.
That's what they're using 'AI' for, after all. It's hilarious in all its sadness: the low-hanging fruit is long gone and graphical improvements have hit diminishing returns big time. Expensive post-processing is expensive, always has been, and RT then sells us the idea that brute-forcing a whole scene's lighting is a great step forward. Again, it's hilariously stupid and sad if you think about it. It is desperate commerce resorting to desperate measures to keep any semblance of progress in the GPU space and keep selling products to us.
You can still put RT and non-RT scenes side by side and be challenged to spot a difference. The vast majority of the lighting is still rasterized/pre-cooked, and the moment it's not, the performance nosedives. If we RT a full scene, you're down to unplayable FPS on the fastest GPU on the planet right now (Portal), and again struggling to see the point, or what's gained in actual graphical fidelity.
Fact is, some shit's just done at some point.
Literally a comment by the Leather Man himself, when Frau Su rolled out some "AI" crap GPU that rivaled his. Yeah. I mean, latest quarter earnings:
1.6 billion made by AMD GPU + Console business in one quarter.
1.57 billion made by NV (a sharp drop from earlier years mind you).
"But that marketing company told me so". I can spot the difference 100% of the time if the FPS counter is on, though. :D Something my 5+ year old TV is doing.
Yes, it also adds lag, naturally.
Even in a declining market Nvidia managed to grab more market share from AMD, which tells you how well AMD's plans are going ... unless of course you are trying to tell me that AMD plans to abandon the dGPU market and focus only on making iGPUs and SOCs for consoles :roll:.
I don't mean an overpowered implementation, like 2x over NV, but something competitive and useful that doesn't affect the other business they have.
And if this is their idea about AI, why did they add AI to their Phoenix APUs?
Oh, wait, AMD reported console APU + GPU sales combined, THAT IS WHY.
Oh. But hard to follow, isn't it?
7 million or so console APUs sold by AMD in the same period. Reported 1.6 billion revenue. (NV reported 1.57 from its GPU business)
So the question, that needs a rocket scientist, I guess, judging by comments in this thread, how much of that 1.6 billion chunk is console APUs?
How much could AMD realistically charge for a bare APU chip, if consoles start at $399 with a controller, SSD and what not?
Say $100-150.
That gives us between
1600 - 7*100 = 900 million and
1600 - 7*150 = 550 million
for GPU revenue on AMD's side.
For the "12% of the market share" claim to be true, AMD would need to make less than 1570 (NV revenue) * 12/84 = 220 million on GPUs.
Then console APUs would have to cost, on average, (1600 - 220)/7 = $197 each.
No way in hell is AMD getting half of the PS5 console price for its chip alone.
This is how we refer to a 20% more expensive card being 16% faster at RT these days. :roll:
If you wonder what RT is: the most impactful aspect of this feature is "bring down my FPS by 40-50%".
It was promised that "new GPUs" won't be affected, because they have "more RT thingies".
But for some reason it didn't happen even in the 3rd generation of RT cards.
As if "RT thingies" were still largely utilizing good old raster thingies... :D
The 7900 XTX even has higher minimum fps than the 4090 at Ultra settings in 4K in Hogwarts Legacy: www.techspot.com/review/2627-hogwarts-legacy-benchmark/
AMD does less with more? Hahah. Nvidia needed a node advantage and GDDR6X. You compare bus width and memory size but don't talk about GDDR6 vs GDDR6X and the manufacturing process ... Once again, clueless. No wonder Nvidia is already prepping the 4090 Ti and 4080 Ti :laugh: RDNA3 is getting faster and faster with every driver, and OC headroom on the 7900XTX is like 10-15%, meanwhile you can get 2-5% tops on Nvidia, because they already maxed them out and don't allow proper OC.
Most gamers don't give a F about Ray Tracing. It's a gimmick. No Nvidia card will do heavy ray tracing at high res without using upscaling anyway. Maybe 5000 series will be able to, 2000, 3000 and 4000 series are too slow for proper RT unless you accept much lower fps, with huge dips. Even 4090 is slow as hell with RT on high.
Btw the 7900XTX uses 50 watts less than the 4090 and performs on par with or beats it in plenty of games in raster, which is the only thing that matters to most gamers :roll: The 4090 is 60-75% more expensive as well, what a steal :laugh:
Nvidia is prepping 4090 Ti because AMD is prepping 7950XTX :toast:
Oh well if you can't afford 4090, don't feel too bad ;)
Oh and Nvidia is prepping for Blackwell too, which could be twice as fast as 4090 if Nvidia is serious, poor RDNA3/4 are so 1-2 gen behind :/
It's funny to see how butthurt you are about 7900XTX getting closer and closer. You probably cleared the bank to buy that entry level 4090 :laugh: Probably why you re-used an old HX850 :roll:
Glad I am not forced to use Windows 11 to make my CPU work right :laugh:
Yeah, a gimmick game that sells 12+ million copies in 2 weeks :laugh: Keep dreaming, the 1080 Ti was a $699 card, not $1599 and up like the 4090. The 1080 Ti had proper connectivity; the 4090 doesn't even have DP 2.1 in 2023 :roll:
7900 XTX is already close to 4090 in tons of games and even beats it on some, for 600 dollars less and with 50 watts less :laugh:
Nvidia also skimped on the VRAM on the 4090; the 4080 has higher-clocked memory.
4080 Ti and 4090 Ti = DP 2.1 + High speed GDDR6X modules. Both will kill off 4090 and make it EoL. Then 1 year after, 5070 will come out and beat 4090 :laugh: And by then, resell value will be sub 400 dollars :laugh:
Great buy :toast: But I guess you can afford it, that's why you bought the absolute cheapest 4090 and skimped on other parts :laugh: Remember to replace that dated PSU before it pops, you are using high-wattage parts after all :p
Maybe AMD should just rename RTG to CTG (Console Technology Group)