Monday, February 20th 2023

AMD RDNA4 Architecture to Build on Features Relevant to Gaming Performance, Doesn't Want to be Baited into an AI Feature Competition with NVIDIA

AMD's next-generation RDNA4 graphics architecture will retain a design focus on gaming performance, without being drawn into an AI feature-set competition with rival NVIDIA. David Wang, SVP of the Radeon Technologies Group, and Rick Bergman, EVP of Computing and Graphics Business at AMD, gave an interview to the Japanese tech publication 4Gamer, in which they dropped the first hints about the direction the company's next-generation graphics architecture will take.

While acknowledging NVIDIA's momentum in the GPU-accelerated AI space, AMD said it doesn't believe image processing and performance upscaling are the best use of the GPU's AI-compute resources, and that the client segment still hasn't found extensive uses for GPU-accelerated AI (or, for that matter, CPU-based AI acceleration). AMD's own image-processing tech, FSR, doesn't leverage AI acceleration. Wang said that, with the company having introduced AI acceleration hardware in its RDNA3 architecture, he hopes AI will be leveraged to improve gameplay itself (procedural world generation, NPCs, bot AI, and the like), adding the next level of complexity, rather than spending the hardware resources on image processing.
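To underline that point: FSR 1's upscaling pass is built around edge-adaptive, Lanczos-style filtering rather than a neural network. The fragment below is a deliberately simplified, hypothetical sketch of that kind of analytic resampling, not AMD's actual shader code; it only illustrates that a hand-written filter, with no AI inference, can do the work.

```cpp
// Hypothetical, heavily simplified sketch of non-AI upscaling in the spirit of
// FSR 1's Lanczos-based filtering. This is NOT AMD's shader -- just a plain
// Lanczos-2 resampler, showing that no neural network is involved.
#include <algorithm>
#include <cmath>
#include <vector>

// Lanczos-2 kernel: a windowed sinc with support of +/- 2 source samples.
static double lanczos2(double x)
{
    if (x == 0.0) return 1.0;
    if (std::fabs(x) >= 2.0) return 0.0;
    const double px = 3.14159265358979323846 * x;
    return (std::sin(px) / px) * (std::sin(px / 2.0) / (px / 2.0));
}

// Upscale one row of pixels to dstLen samples. A 2D image uses the same filter
// applied separably along rows and then columns.
std::vector<double> upscaleRow(const std::vector<double>& src, std::size_t dstLen)
{
    std::vector<double> dst(dstLen);
    const double scale = static_cast<double>(src.size()) / static_cast<double>(dstLen);
    const long last = static_cast<long>(src.size()) - 1;

    for (std::size_t i = 0; i < dstLen; ++i) {
        const double srcPos = (i + 0.5) * scale - 0.5;        // sample centre in source space
        const long   centre = static_cast<long>(std::floor(srcPos));
        double sum = 0.0, wsum = 0.0;
        for (long t = centre - 1; t <= centre + 2; ++t) {      // 4-tap footprint
            const long   c = std::clamp(t, 0L, last);          // clamp to the image edge
            const double w = lanczos2(srcPos - static_cast<double>(t));
            sum  += w * src[static_cast<std::size_t>(c)];
            wsum += w;
        }
        dst[i] = sum / wsum;                                   // normalise the taps
    }
    return dst;
}
```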
AMD also stressed the need to make the GPU more independent of the CPU in graphics rendering. The company has taken several steps in this direction over the past few generations, the most recent being the multi-draw indirect accelerator (MDIA) introduced with RDNA3. With it, software can batch many instanced draw commands into a single call whose parameters are consumed directly by the GPU, greatly reducing CPU-side overhead; RDNA3 is up to 2.3x more efficient at this than RDNA2. Expect more innovations along these lines with RDNA4.
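MDIA isn't a new API of its own; it accelerates the kind of indirect multi-draw submission that APIs such as Vulkan and Direct3D 12 already expose. The sketch below (Vulkan, with buffer creation and pipeline setup omitted, and the mesh values purely illustrative) shows the pattern this hardware speeds up: one recorded command covering an arbitrary number of instanced draws whose parameters, and even the draw count, live in GPU buffers the CPU never has to touch per draw.

```cpp
// Minimal sketch (not AMD driver code) of GPU-driven submission via multi-draw
// indirect: the CPU records a single command, while per-draw parameters -- and
// optionally the draw count itself -- live in GPU-visible buffers.
#include <vulkan/vulkan.h>
#include <cstdint>
#include <vector>

// One record per draw; the whole array is uploaded to (or written by a compute
// shader into) an indirect buffer, so the CPU never issues the draws one by one.
std::vector<VkDrawIndexedIndirectCommand> buildDrawList(uint32_t meshCount)
{
    std::vector<VkDrawIndexedIndirectCommand> cmds(meshCount);
    for (uint32_t i = 0; i < meshCount; ++i) {
        cmds[i].indexCount    = 36;        // illustrative: a cube's index count
        cmds[i].instanceCount = 128;       // instanced draw
        cmds[i].firstIndex    = 0;
        cmds[i].vertexOffset  = 0;
        cmds[i].firstInstance = i * 128;
    }
    return cmds;
}

// A single recorded command covers up to maxDraws draws; the actual count is
// read from countBuf on the GPU, so culling/LOD selection can also run GPU-side.
void recordMultiDraw(VkCommandBuffer cb, VkBuffer indirectBuf, VkBuffer countBuf,
                     uint32_t maxDraws)
{
    vkCmdDrawIndexedIndirectCount(cb, indirectBuf, /*offset*/ 0,
                                  countBuf, /*countBufferOffset*/ 0,
                                  maxDraws, sizeof(VkDrawIndexedIndirectCommand));
}
```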

AMD understandably didn't say anything about the "when," "what," and "how" of RDNA4, as its latest RDNA3 architecture is just off the ground and awaiting a product ramp through 2023 into market segments spanning iGPUs, mobile GPUs, and mainstream desktop GPUs. RDNA3 currently powers the Radeon RX 7900 series high-end graphics cards and the iGPU of the company's latest 4 nm "Phoenix Point" Ryzen 7000-series mobile processors. You can catch the 4Gamer interview at the source link below.
Sources: 4Gamer.net, HotHardware

221 Comments on AMD RDNA4 Architecture to Build on Features Relevant to Gaming Performance, Doesn't Want to be Baited into an AI Feature Competition with NVIDIA

#76
JAB Creations
fevgatosOf course, the 7900xt was very reasonable, getting absolutely destroyed by the 4070ti. And their cpus? Oh those are insanely reasonable, they priced 6 cores at 300€ msrp when their competition asks 350 for 14cores that smack it around the room.
Newegg:
$839: 4070 Ti 12GB
$849: 7900 XT 20GB

TPU lists the 7900 XT as 9% faster than the 4070 Ti. I'm not saying it's a fantastic card but I don't think you looked at prices before making that odd argument.

Newegg:
$319: Intel Core i5-13600K
$324: AMD Ryzen 9 5900X

So Intel costs less up-front, though what about the energy consumption? The 12900K or whatever had a 1 GHz advantage over the 5800X3D, used a ton more energy, and still barely beat it by what, 1% in some games? Plus that 13600K isn't optimized on Windows 10 because of the big/little core nonsense, so who wants to willingly install Windows 11? Not that 10 is fantastic to begin with.

Frankly I don't see the point in getting upset about a general statement. No one business is going to have all winners that make sense in all scenarios. At some point you're not going to notice higher FPS improving a game or making any meaningful reduction in most software related cases with the exceptions of poorly written software (e.g. Cyberpunk 2077 is clearly not optimized and has a lot of bloat).
clopeziProbably not, but we all see the AMD Radeon numbers and Nvidia numbers, and it's easy to understand to everyone.

Nvidia needs competition, AMD Radeon today, unfortunately, it's not. AMD cards are not so cheaper to justify the lack of RT performance, for example.
I think the problem with AMD is that they really wanted to release before RDNA3 was ready, and unlike Intel, Nvidia had the wiggle room to pull off easy ways to improve performance through brute force (whereas Intel uses their own foundries). That raises the question: why can't Intel just dump a shed-ton of cache on their own CPUs? AMD did it almost as an afterthought, though obviously there is still a lot of heavy tech involved.

Ray tracing is slowly becoming more relevant, and I think they should have aimed for something more along the lines of 80% instead of 50%, because they know very well that Nvidia will stop at nothing to keep up the whole "we're blindly #1 with our $53,000 video card, so you'll get a winner when you buy a $200 card that gets 24% less performance than an identically priced AMD card" nonsense that so many people fall for. They should have waited and let the developers mature the drivers a bit. In the end, in 2023 it is at best moderately relevant, as the share of people with a card decent enough for RT is very limited.

I'd really like to see AMD do two things. First: 8 GB and 16 GB on the lower cards (e.g. the 7600 XT or whatever), like they did 4 GB and 8 GB on the RX 570s (I think; I went from a 290X to an RX 6800). GPU RAM needs to increase (and here's Nvidia decreasing it). Secondly, I'd like to see them really push for that #1 spot and not gimp their second-place cards in order to up-sell their best. I like the strategy of limiting their own cards to two 8-pin connectors while allowing their partners to go with three, letting them get within striking distance of the 4090. If they had waited like three months to let the drivers mature, I think the reviews would have been more forgiving.
Posted on Reply
#77
Vayra86
Argyrtwice the memory and it's still not utterly obliterating the 4070 Ti? What is going on here. It's as if bandwith and memory size are massively overhyped (mainly by the AMD crowd)
You've already been corrected, but the gist is that Nvidia uses memory and bandwidth to execute planned obsolescence, and while in past generations they had a good - sometimes fantastic - balance between VRAM and core, today you're looking at architectural changes where the type of game and load you present highly determines how much VRAM and bandwidth you need to keep the core at work.

Nvidia still has an incredibly good shader/SM design and a lean architecture, and they know this. The perf/watt is stellar on Ada again - until you start hitting that VRAM wall. 10 GB 3080s are already seeing it, and Ada cards at and below 12 GB are also already seeing it. Sometimes the cache can alleviate a big part of it, but this is highly game/engine specific. Basically, for good performance on Nvidia, they're pushing a lot of work to developers once again, as they've always done, supported by their own extensive engineering teams. It's the reason you get a Game Ready driver every odd week. Not because Nvidia is doing great aftersales... but because they're fine-tuning to meet their perf targets.

This is how Nvidia carves out its competitive advantage. It's beyond a 'proprietary' approach: basically, they try to nudge the industry towards their best practices within the architecture. AMD does that too, except they've got a much longer-term plan going on: asynchronous shader support is a great example, along with their Vulkan/DX12 push, and you can see how that pays off today in, for example, the 7970/280X. It absolutely runs circles around its equivalent 780/780 Ti of the time, with the same VRAM amount, in new APIs where the tech is used.

I'm not going to herald the fine wine nonsense, because that IS nonsense, but the nuances above do exist. AMD has a long-term approach here, as per the title of this thread - and it seems to be starting to work for them, despite Nvidia's immense market-share advantage. But technically, and from a competitive standpoint in terms of how they use their die and its size (which directly says something about margins and product-pricing flexibility) and the way they can use the consoles and the gaming push there to their advantage... they have a much stronger position than Nvidia, which is actually moving away from the consumer market and more into the datacenter.
nguyenOh well Nokia thought smart phone was gimmick until it was too late.

Companies don't get to decide what is gimmick, consumers will do that. Looks like AMD is following Nokia lead
This is absolutely true. Time will tell; so far, it's too early. But at the same time: AMD runs RT fine and also does FSR fine without AI. And because of that, it's easy for them to see that investing in it is counterproductive. They simply don't need it for anything in a GPU.
Posted on Reply
#78
JustBenching
ratirtNext thing is he is going to tell you he uses Vsync or frame cap at 60. I've seen those user who claim that 4090 is very efficient and use very little power with Vsync enabled or frame cap. Then they measure power consumption and according to their calculation it is very efficient. Utter crap but it is what it is. Countless of those posts everywhere.
Or even better. Downclock it 2000Mhz and then measure. But when they check how fast can it render then obviously no limits but then they do not bring the power consumption up since it is irrelevant. :laugh:
Then the 4090 is in fact very efficient. Actually, it is the most efficient card out there, especially for heavier workloads, not just gaming. I have it with a 320 W power limit and it performs better than at stock; I can post you some record-breaking numbers at just 320 W.
Posted on Reply
#79
Argyr
Vayra86You've already been corrected, but the gist is that Nvidia uses memory and bandwidth to execute planned obsolescence... (full post quoted above)
Nvidia improved memory bandwidth by optimizing and redesigning circuitry, and also by sending some of the load to the CPU. That's how it's possible that the 4070 Ti is on par with the 3090 Ti, which has double the VRAM and bandwidth. Pretty impressive. I'm amazed at what Nvidia can do with such a narrow bus.

All the while, AMD just keeps shoveling more and more RAM onto the cards with zero innovation. People gobble it up; it works. Cheap trick. I would prefer AMD to actually innovate, but I've been waiting for a decade and it's become stale. I'll just buy Nvidia and get over it.

AMD CPUs are awesome, but their GPUs are electronic trash.
Posted on Reply
#80
TheoneandonlyMrK
ArgyrNvidia improved memory bandwith by optimizing and redesigning circuitry, and also by sending some of the load to the CPU. That's how it's possible that the 4070 Ti is on par with the 3090 Ti which has double the VRAM and bandwith. Pretty impressive. I'm amazed with what Nvidia can do with such a narrow bus.

All the while AMD just keeps shoveling more and more ram onto the cards with zero innovation. People gobble it up, it works. Cheap trick, I would prefer AMD to actually innovate but I've been waiting for a decade and it's become stale. I'll just buy Nvidia and get over it.

AMD CPU's are awesome, but their GPU's are electronic trash.
Innovation? GCD and MCD IS innovation. What has Ada innovated - RT gen 3, DLSS 3? Err, wait, what now?!
OK, now you're clearly confused. Can we get back on topic?
Posted on Reply
#81
AusWolf
I agree with AMD here... there's no need for AI in a consumer GPU. FSR is proof of that.
Posted on Reply
#82
RH92
ixiI wonder, do you use Tensor or Ray tracing cores anywhere?
Me? Nope... my GPU does, though: www.nvidia.com/en-us/geforce/news/nvidia-rtx-games-engines-apps/



I mean, you guys need to wake up: it's 2023, we are well past 2018. Both ray tracing and machine-learning anti-aliasing have seen wide adoption and aren't going anywhere; if anything, they are gaining importance over raster every year... At the risk of repeating myself, AMD is failing to read the room big time!
Posted on Reply
#83
Dimitriman
All I read is: "We will not have an answer to DLSS 3.0 with RDNA 4". The speech seems entirely geared towards expectation management.
Seems AMD GPU division is happy to continue living with Nvidia scraps.
Posted on Reply
#84
AusWolf
DimitrimanAll I read is: "We will not have an answer to DLSS 3.0 with RDNA 4". The speech seems entirely geared towards expectation management.
Seems AMD GPU division is happy to continue living with Nvidia scraps.
All I read is: "upscaling tech doesn't need AI, as we've proved it with FSR".
Posted on Reply
#85
Dimitriman
AusWolfAll I read is: "upscaling tech doesn't need AI, as we've proved it with FSR".
Ok, but I am talking specifically about frame generation here. AMD is implying it will not have a similar solution in its next gen, and I also continue to expect them to lag on upscaling image quality (FSR almost always looks worse), ray tracing, video editing, etc.

How many features are you willing to leave on the table before you decide that it just isn't worth it? I am pretty disappointed by the RDNA 3 feature set vs Ada already; if they double down on not catching up to Nvidia, then they had better be at least 30% cheaper at each tier next time around.
Posted on Reply
#86
Vayra86
DimitrimanHow many features are you willing to leave on the table before you decide that it just isn't worth it? I am pretty disappointed by RDNA 3 feature set vs Ada already, if they double down on not catching up to Nvidia, then they better be at least 30% cheaper on each level next time around.
I'm with you on that. The only reason I'm not going for the 7900 series is the price. The fact IS that the feature set is smaller; whatever value AMD doesn't want to attribute to that isn't quite relevant. It's clear they're not looking to flood the market with 7900s... it's there as a proof of concept.

I hope lower down the stack the price will reflect the product. I think AMD banked on the specs being the product and priced along those lines relative to NV, but that's their typical limbo in marketing.
Posted on Reply
#87
Patriot
renz496that is in 2021 when gaming GPU sales are being boosted significantly by crypto. look at nvidia numbers for Q3 2022. gaming sales is only half of that. gaming contribute less and less towards nvidia revenue.
They are trying to get sued 2 years in a row for misrepresenting finances... They know approximately how many were sold to miners based on driver download # and updates vs cards sold. Also they straight up sold batches of cards to miners.
AusWolfI agree with AMD here... there's no need for AI in a consumer GPU. FSR is proof of that.
That is not what was said.
Posted on Reply
#88
Vayra86
renz496that is in 2021 when gaming GPU sales are being boosted significantly by crypto. look at nvidia numbers for Q3 2022. gaming sales is only half of that. gaming contribute less and less towards nvidia revenue.
Relatively, yes, but then again datacenter is just an emerging market for them. It's on top of gaming, not instead of it.
Posted on Reply
#89
AusWolf
DimitrimanOk but I am talking specifically frame generation here. AMD is implying it will not have a similar solution in its next gen, and I also continue to expect them to lag on upscaling image quality (FSR almost always looks worse), ray tracing, video editing, etc.

How many features are you willing to leave on the table before you decide that it just isn't worth it? I am pretty disappointed by RDNA 3 feature set vs Ada already, if they double down on not catching up to Nvidia, then they better be at least 30% cheaper on each level next time around.
Ray tracing and video editing features have nothing to do with AI, and aren't mentioned in this article.

As for upscaling, I'm not a fan of it anyway. I run everything at native resolution as much as possible, and would rather decrease some other image quality settings than resort to upscaling.

I can't say much about frame generation, but since I can make any game run at 1080p 60 fps with my current hardware, and I'm not planning on a monitor upgrade, I don't have much need for it anyway.

Sure, Nvidia has more stuff in their GPUs, but whether you call them features or gimmicks is highly debatable.
PatriotThat is not what was said.
Nope. That is what was implied.
Posted on Reply
#90
nguyen
DimitrimanAll I read is: "We will not have an answer to DLSS 3.0 with RDNA 4". The speech seems entirely geared towards expectation management.
Seems AMD GPU division is happy to continue living with Nvidia scraps.
Would be funny if Nvidia delivered an AI model for NPCs and bots that ran like crap on AMD (the 4090 has 5x the tensor-ops throughput of the 7900 XTX); basically what David Wang said himself:
he hopes that AI is leveraged in improving gameplay—such as procedural world generation, NPCs, bot AI, etc; to add the next level of complexity; rather than spending the hardware resources on image-processing
Then it will become a gimmick, according to some people :rolleyes:.
Posted on Reply
#91
AusWolf
nguyenWould be funny if Nvidia delivered an AI model for NPC and bots but run like crap on AMD (4090 has 5x the tensor ops throughput of 7900XTX), basically what David Wang said himself



Then it will become gimmick according to some people :rolleyes:.
There was an interesting discussion about this in another thread. Somebody said that if game AI were extremely clever, to the point of learning the player's tactics and countering them to beat them, gaming wouldn't be fun. Nobody likes losing every time.

Edit: Not an opinion on my part, just food for thought. :)
Posted on Reply
#92
Dimitriman
nguyenWould be funny if Nvidia delivered an AI model for NPC and bots but run like crap on AMD (4090 has 5x the tensor ops throughput of 7900XTX), basically what David Wang said himself



Then it will become gimmick according to some people :rolleyes:.
Nvidia shares are massively trending up, riding a lot on the popularity of AI and ChatGPT. Maybe AMD wants to focus its AI resources on the CDNA segment only, but it's just a fact that AI will be more and more the focus of computing in every segment from now on, and gaming will likely benefit greatly from this. AMD may ignore the needs of its customers by cutting corners on gaming GPUs, but they can't ignore their shareholders, which is why I think this strategy of keeping the AI focus to a minimum will not work out for them.

Posted on Reply
#93
nguyen
AusWolfThere was an interesting discussion about this in another thread. Somebody said that if game AI was extremely clever, up to the point of learning the player's tactics and countering it to beat them, gaming wouldn't be fun. Nobody likes losing every time.
Modders have already incorporated ChatGPT into the Bannerlord game; AI-enhanced storytelling will be neat

So yeah, AMD is way behind in everything
Posted on Reply
#94
Redwoodz
dyonoctisA lot of people are excited about "A.I democritizing creative/technical jobs", but not realizing that it's also going to oversaturate the market with low effort content. We are already finding faults on stuff that require a lot of money and willpower to do, A.I generated content is just going to make more of them.

We need to be carefull about how we use that tool, (who's becoming more than a tool) a few generation down the line, we might just end up with a society addicted to get instant results, and less interested to learn stuff. Studies shows that gen Z are really not that tech literate...because they don't need to understand how something actually work to use it, it's been simplified so much.
So in that sense I like AMD statement, we don't need to use A.I for every little thing. It's a wonderfull thing for sure, but overusing it might also have bad consequences.
You are wrong there. Society is already addicted to instant results; it won't take a couple of generations.
Would you jump off a cliff if you thought you could get a better TimeSpy score?
www.tweaktown.com/news/90416/ai-threatens-revenge-by-exposing-personal-information-to-ruin-reputation/index.html
Posted on Reply
#95
Dimitriman
nguyenModders already incorporated chatGPT into bannerlord game, AI enhanced storytelling will be neat

So yeah, AMD is way behind in everything
Excellent example..
Posted on Reply
#96
Patriot
nguyenWould be funny if Nvidia delivered an AI model for NPC and bots but run like crap on AMD (4090 has 5x the tensor ops throughput of 7900XTX), basically what David Wang said himself



Then it will become gimmick according to some people :rolleyes:.
The 4090 is monstrously powerful for inference and training. AMD's MI300A is setting out to rectify Nvidia's tensor feature-set lead; currently AMD is very strong in high-precision workloads and weak in inference and sparse math. AMD has been very vague about what matrix/tensor cores RDNA3 has: whether they are the same as the MI200's, or a stripped-down or enhanced feature set.
The MI300 is supposed to get an 8x AI uplift over the MI250X, which is pretty great going from 560 W to 600 W; finally, better tensor/matrix cores.

For RDNA3... we have ~61 TFLOPS FP32. What the matrix cores do on the Instinct side is let FP64 run 1:1 with FP32... and then half precision is... 4x single precision.
What AMD shows for RDNA3 is packed math... 61/123 TFLOPS; no matrix performance is listed yet.
We know it supports BFloat16, but not whether it supports INT4/INT8, or what acceleration performance those get.
The slide below shows up to a 2.7x matrix speed increase, but is that over FP32 or INT16? Is it BFloat16 at 164.7 TFLOPS, or 342.9 INT16/8/4 TOPS?
IDK, they have been tight-lipped.

Nvidia is measuring the 4090 at:
FP16 with FP32 accumulate = 165.2 / 330.42*
FP8 = 660.6 / 1321.21*
INT8 is the same
INT4 = 1321.2 / 2642.4*
*sparse

images.nvidia.com/aem-dam/Solutions/geforce/ada/nvidia-ada-gpu-architecture.pdf
page 30 has the numbers.
So, my guess is... AMD has BFloat16/INT16 at 342-ish, which is competitive with the 4090, as they are with the Instincts... and then they just get f* decimated at INT8/INT4 and sparse.
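As a sanity check on how those dense/sparse figures chain together, here's a back-of-the-envelope calculation. The SM count, boost clock, and per-SM rate are my own assumptions for illustration (chosen to reproduce the whitepaper-style numbers quoted above), not anything stated in this thread:

```cpp
// Back-of-the-envelope reconstruction of the Ada tensor-throughput ladder quoted
// above. Assumptions (mine, for illustration): 128 SMs, 2.52 GHz boost clock,
// 1024 FP16 FLOPs per SM per clock with FP16 accumulate; FP32 accumulate runs at
// half rate on GeForce, each halving of operand width doubles the rate, and
// structured sparsity doubles whichever rate it is applied to.
#include <cstdio>

int main()
{
    const double sms = 128.0;
    const double clock_ghz = 2.52;
    const double fp16_flops_per_sm_clk = 1024.0;

    // Dense FP16 with FP16 accumulate, in TFLOPS.
    const double base = sms * clock_ghz * fp16_flops_per_sm_clk / 1000.0;

    const double fp16_fp32acc = base / 2.0; // FP32 accumulate: half rate
    const double fp8_int8     = base * 2.0; // 8-bit operands: double rate
    const double int4         = base * 4.0; // 4-bit operands: double again

    std::printf("FP16 (FP32 acc): %7.1f dense / %7.1f sparse TFLOPS\n",
                fp16_fp32acc, fp16_fp32acc * 2.0);
    std::printf("FP8 / INT8     : %7.1f dense / %7.1f sparse T(FL)OPS\n",
                fp8_int8, fp8_int8 * 2.0);
    std::printf("INT4           : %7.1f dense / %7.1f sparse TOPS\n",
                int4, int4 * 2.0);
    return 0;
}
```

Under those assumptions the output lands at roughly 165.2/330.3, 660.6/1321.2, and 1321.2/2642.4, which matches the whitepaper figures quoted above to within rounding.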
Posted on Reply
#97
TheoneandonlyMrK
nguyenModders already incorporated chatGPT into bannerlord game, AI enhanced storytelling will be neat

So yeah, AMD is way behind in everything
Yeah, except server-side ChatGPT doesn't give a shit what GPU you have. Sooo irrelevant.

AMD does have AI hardware, just not as much as Nvidia on gaming GPUs; CDNA beats Nvidia, though.

And three years in, what has Nvidia done with tensors? Frame generation... oh yeah, and RT that really needs them, oh wait.

No, that's right: you now need 4th gen, and the tensors on the 20- and 30-series are good for little now.

RTX is clearly another driver of e-waste, something Nvidia is getting better at making.
Posted on Reply
#98
dyonoctis
RedwoodzYou are wrong there. Society is already addicted to instant results, won't take a couple generations.
Would you jump of a cliff if you thought you could get a better TimeSpy score?
www.tweaktown.com/news/90416/ai-threatens-revenge-by-exposing-personal-information-to-ruin-reputation/index.html
At the moment we still have enough people running companies who consider "long-term benefits". People applying as pure "A.I. concept artists/designers" will get turned down by big companies.
But I don't know if this will still be the case in 50 years, once the millennials/early Gen Z retire...
Posted on Reply
#99
JustBenching
AusWolfSure, Nvidia has more stuff in their GPUs, but whether you call them features or gimmicks is highly debatable.
Whoever is calling them gimmicks is clueless, hasn't tried them, or is a sworn AMD fan. It's not debatable among normal, sane people.
Posted on Reply
#100
Vayra86
nguyenModders already incorporated chatGPT into bannerlord game, AI enhanced storytelling will be neat

So yeah, AMD is way behind in everything
Myeah... too bad the game isn't fun. It's a repetitive POS copied over from part 1. AI fits right in: generic, randomly generated BS is the whole game. The game ain't even finished properly, btw, but the devs say it is.

If anything, this proves AI has emerged in a great place: bottom-barrel content :)

Also, how does this affect what GPU you run it on? This is Gaming as a Service, buddy. Not client-side AI.
DimitrimanExcellent example..
Yeah, it really is lol.

This is how buzzwords get people's fantasies to run wild. There isn't a single good game with this kind of AI in it, and Bannerlord isn't a better game because of it.
Posted on Reply