Monday, February 20th 2023
AMD RDNA4 Architecture to Build on Features Relevant to Gaming Performance, Doesn't Want to be Baited into an AI Feature Competition with NVIDIA
AMD's next-generation RDNA4 graphics architecture will retain a design focus on gaming performance, without being drawn into an AI feature-set competition with rival NVIDIA. David Wang, SVP of the Radeon Technologies Group, and Rick Bergman, EVP of Computing and Graphics Business at AMD, gave an interview to Japanese tech publication 4Gamer, in which they dropped the first hints about the direction the company's next-generation graphics architecture will take.
While acknowledging NVIDIA's momentum in the GPU-accelerated AI space, AMD said that it doesn't believe image processing and performance upscaling are the best uses of the GPU's AI-compute resources, and that the client segment still hasn't found extensive use for GPU-accelerated AI (or, for that matter, even CPU-based AI acceleration). AMD's own image-processing tech, FSR, doesn't leverage AI acceleration. Wang said that, with the company introducing AI acceleration hardware in its RDNA3 architecture, he hopes AI is leveraged to improve gameplay, such as procedural world generation, NPCs, and bot AI, to add the next level of complexity, rather than spending the hardware resources on image processing.
AMD also stressed the need to make the GPU more independent of the CPU in graphics rendering. The company has taken several steps in this direction over the past several generations, the most recent being the multi-draw indirect accelerator (MDIA) component introduced with RDNA3. Using this, software can dispatch multiple instanced draw commands that are issued on the GPU itself, greatly reducing CPU-level overhead. RDNA3 is up to 2.3x more efficient at this than RDNA2. Expect more innovations along these lines with RDNA4.
AMD understandably didn't discuss the "when," "what," and "how" of RDNA4, as its latest RDNA3 architecture is just off the ground and awaiting a product ramp through 2023 into market segments spanning iGPUs, mobile GPUs, and mainstream desktop GPUs. RDNA3 currently powers the Radeon RX 7900 series high-end graphics cards and the iGPUs of the company's latest 5 nm "Phoenix Point" Ryzen 7000-series mobile processors. You can catch the 4Gamer interview at the source link below.
Sources:
4Gamer.net, HotHardware
221 Comments on AMD RDNA4 Architecture to Build on Features Relevant to Gaming Performance, Doesn't Want to be Baited into an AI Feature Competition with NVIDIA
I think it's a good strategy. Nvidia keeps making the wrong hard bets:
- Shifting every card below the 4090 down two tiers while pretending crypto-hashing is still a thing, to keep jacking up prices.
- Maxing out L2 cache from 6 MB to 72 MB to make their cards memory-dependent instead of more efficient.
- Not going after modularization as aggressively as AMD, so they force higher prices for relatively close to the same performance.
- Dedicating silicon to very specific kinds of AI work that doesn't inherently contribute to gaming performance.
- Making all of their implementations work only on their cards.
- But hey, even if the 4030 is mislabeled as a "4060," it's nice to see a 30-series-class card come out with 8GB of video RAM!
- Making their customers think that their $400 card, which is going to be 30% slower than AMD's equivalent, is better because the $2,000+ card they can't afford is faster than AMD's fastest.
Yeah, a lot of people fall for the last one, and there are lots of people who monkey-see, monkey-do and make zero effort to do any research. AMD, on the other hand, makes reasonable products that perform well, though they just absolutely suck at marketing. I've never once seen their marketing department make a point about their products other than the cost/value ratio, which covers none of the reasons I buy anything technology-related. That being said:
Raster rendering is alive and well, and it is evolving. I'm surprised you even mentioned it here, considering all the new stuff that is constantly showing up. What about AI? It's great in the industry, but not for gaming. At this point it is just a marketing scheme, kinda like RT is now. It's there, but what's the point if the performance hit is so big while giving so little in return?
When AMD was leading with their own features (Eyefinity, Radeon Image Sharpening, SenseMI, etc.), those weren't open source. And if they come out with something new and cool, you'd better believe it won't be open source.
AMD's pricing is also in the same boat-- usually AMD's products see the largest drops in price since they really like to gouge all of the early adopters, then drop prices massively. You can see this with every Zen and TR generation. You can also see this with the 7900xtx and 7900xt pricing - they had the opportunity to really undercut nvidia, but chose to play the market.
Nvidia is hard gouging its customers -- and AMD is right there at those price levels -- just ever so slightly under, due to inferior features, and with more RAM. So I agree that Nvidia is making very anti-consumer bets, but I disagree that AMD is "on the other hand" so to speak -- same hand, slaps just as hard.
You can make AI accelerated hardware that is bound to the GPU, and separating those 2 products into their own segments would benefit all of us.
Not every gamer does AI shit; we just want maximum performance per watt for gaming. And no AI programmer is using their compute GPUs to play games, or their gaming GPU to accelerate computing.
I think AMD is heading in the right direction, separate their AI and GPU segments, and if they want to go AI, they can focus on a separate product.
ok
I can't really make that any clearer
gamers are an insignificant part of the market
That being said, if you're gonna focus on performance, do you mind telling me WTF happened with RDNA3? Its performance improvements are perfectly in line with the SM increases, not counting the doubling of shader count that seemed to do F all. If you want to be a "premium brand" and you are not going to do the AI thing, you need to hit it out of the park on raster performance, not play second fiddle. GeForce and Quadro/A-series are comparable in terms of yearly sales for Nvidia. Much like RT, AI at the consumer level will likely gain traction within a few years. The idea of being able to run something like Stable Diffusion on your home PC is exciting to many, and it's something where AMD will be playing second fiddle, much like RT. From the sound of his statement, AMD is specifically looking at the likes of the AI-accelerated DLSS 3, which was not as warmly received as 1 and 2, and rejecting that.
The bet is: AI accelerators won't be relevant to gamers. The reason this initially seems wrong is the supposed synergy between compute dedicated to AI and graphics, and the claims (by Nvidia, and by Intel with their XeSS and compute-tile push) that DLSS-style tech and RT rely on AI; it will be interesting to see if that's actually true. If it isn't, and Radeon is right, then they will simplify the design and have a cheaper, better gaming product at much lower cost and complexity, which is always an easy and massive win. I honestly think their bet is that Nvidia will have to continue gouging customers, that their chips will continue to be huge and expensive with AI processing included, and that, because data-center demand will be strong for the same chips that power the GPUs, Nvidia will have a product-segmentation issue.
They can't really compete with DLSS 3, so why even try? Make a way smaller and cheaper chip that rasters similarly to big daddy NV, does Lumen and open RT methods just fine, and has the benefit of not competing with the MI300... let's see.
AMD can secretly work on AI apps after all...
Gamers aren't the core drivers of the market anymore.
Anyone tying their boat to gamers exclusively will be relegated to mediocrity.
I like games too, and yes, I'm a gamer, but these are just the facts of today.
Nvidia needs competition, and AMD Radeon today, unfortunately, isn't it. AMD cards are not so much cheaper as to justify the lack of RT performance, for example.
AMD will have something different for everyone, like FSR? Maybe. That makes for a better perspective, but I'm still not convinced. We will have to wait longer for those to matter more; for now, gaming is doing pretty well without AI.
"Wang said that with the company introducing AI acceleration hardware with its RDNA3 architecture, he hopes that AI is leveraged in improving gameplay—such as procedural world generation, NPCs, bot AI, etc; to add the next level of complexity; rather than spending the hardware resources on image-processing."
So no, AMD is not giving up on AI. They are *gasp* going to focus their consumer gaming cards' AI capabilities on *gasp* AI that improves games. This is not a change in trajectory for AMD; they had already split their architecture into two separate branches, RDNA and CDNA. This is merely commentary on how Nvidia has cards with AI capabilities that don't really benefit gamers. No, gaming is still Nvidia's top-earning segment. To go as far as the poster you are quoting and say gamers are irrelevant is laughably incorrect. 50% of your revenue is not remotely insignificant.
RT is mostly good for screenshots, because without the absolute most expensive card people won't be using it anyway, unless they think 30 fps is great. Hell, in some games it even feels like there's additional processing lag when RT is enabled, even when the fps is "decent." I think it's a gimmick, and I hope AMD will be very competitive in raster performance going forward. A lot of people don't care about RT, and 1440p is still the sweet spot and will be for a long time; this is where AMD shines as well. The 7900 XTX already bites the 4090 in the butt in some games at raster-only 1440p. This is why I'm considering going AMD next time.
And FYI, AMD has 15-16% dGPU market share, and that's on Steam, so it's probably more, plus 100% of the console market.
What if I were to combine AI art generation with ChatGPT's natural-language interface and something like Unreal Engine 5? (We really are not far away from this at all; all the pieces exist, it just takes somebody to bring them all together.)
What if you could generate entire environments just by telling an AI to "show me the bridge of the Enterprise"?
If you can't see the potential and the way the winds are shifting, may our soon-to-exist AI god have mercy on your fleshy soul.
When Nvidia does the exact same thing with 48MB/64MB/72MB L2, you consider it "making the wrong bet". Okay. In case you haven't noticed, a 530mm^2 aggregation of chiplets and an expensive new interconnect didn't exactly pass along the savings to gamers any more than Nvidia's 295mm^2 monolithic product did.
As things become more and more demanding,
Nvidia will have to allocate more and more die space to dedicated AI processing units.
Soon it will reach a critical point where there is too much "dead weight" and it is no longer cost-effective,
and Nvidia themselves will have to find a way to integrate AI processing back into the "normal" stream processors.
So the cycle begins again.
Good luck, AMD.
The argument is AMD throwing in the towel because they can't hack it.
A company basically admitting "well, we aren't as good as our competitors, so we aren't going to even try."
Yeah, that's gonna go over really well.
Also... RDNA3 already has tensor cores... AMD just calls them WMMA Matrix cores... They will continue to add feature sets to them... What Wang said was...
He thinks that FSR and DLSS are a waste of the matrix-math engines these GPUs have, when they could be used for smarter NPCs and game-enhancing features, not just as a means to fix poor game optimization. But evidently reading is hard.
Since you are unawares...
videocardz.com/newz/amd-adds-wmma-wave-matrix-multiply-accumulate-support-to-gfx11-rdna3-architecture-amds-tensor-core
AMD added matrix cores to CDNA 1.0 (MI100) and enhanced them for CDNA 2 (MI210/MI250X); RDNA3 got them too, and it's unclear whether they are enhanced past CDNA 2, as CDNA 3 is already in testing.
AMD has also added a Xilinx-derived FPGA core to the Zen 4 laptop line for AI inferencing, and it is fairly clear they will continue to add to and support an accelerated future. This article was not about a lack of AI support, but about using it for enhancement, not as a replacement for proper game design and optimization.
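For context on what those WMMA (Wave Matrix Multiply-Accumulate) instructions actually compute: RDNA3's WMMA operates on 16x16 matrix tiles per wave, with one supported mode taking FP16 inputs and accumulating in FP32 (D = A x B + C). The NumPy sketch below shows the same math at the tile level; it is an illustration of the operation, not the actual ISA or any AMD-provided API.

```python
import numpy as np

def wmma_tile(a, b, c):
    """One 16x16 multiply-accumulate tile: D = A @ B + C.
    FP16 inputs, FP32 accumulation (one mode RDNA3's WMMA supports)."""
    assert a.shape == b.shape == c.shape == (16, 16)
    # Promote to FP32 before the product so accumulation happens at
    # full precision, as the hardware accumulator does.
    return a.astype(np.float32) @ b.astype(np.float32) + c

# All-ones inputs: each output element is a sum of 16 products of 1*1.
a = np.ones((16, 16), dtype=np.float16)
b = np.ones((16, 16), dtype=np.float16)
c = np.zeros((16, 16), dtype=np.float32)
d = wmma_tile(a, b, c)
print(d[0, 0])  # -> 16.0
```

Whether a game uses tiles like this for an upscaler's neural network or for NPC inference, the underlying hardware operation is the same; that's the point Wang is making about where to spend it.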
...unless you meant RT, or video compression, or DLSS, or something else - but you didn't say any of that.