Monday, May 13th 2024
AMD RDNA 5 a "Clean Sheet" Graphics Architecture, RDNA 4 Merely Corrects a Bug Over RDNA 3
AMD's future RDNA 5 graphics architecture will bear a "clean sheet" design, and may not even carry the RDNA branding, says WJM47196, a source of AMD leaks on ChipHell. Two generations ahead of the current RDNA 3 architecture powering the Radeon RX 7000 series discrete GPUs, RDNA 5 could see AMD reimagine the GPU and its key components, much in the same way RDNA did over the older "Vega" architecture, bringing in a significant performance/watt jump that AMD then built upon with its successful RDNA 2-powered Radeon RX 6000 series.
Performance per watt is the biggest metric on which a generation of GPUs can be assessed, and analysts believe that RDNA 3 missed the mark on generational performance/watt gains despite the switch from the 7 nm DUV process to the more advanced 5 nm EUV node. AMD's decision to disaggregate the GPU, with some of its components built on the older 6 nm node, may have also impacted the performance/watt curve. The leaker also makes the sensational claim that "Navi 31" was originally supposed to feature 192 MB of Infinity Cache, which would have meant 32 MB per memory cache die (MCD). The company instead went with 16 MB per MCD, or just 96 MB per GPU, a figure that shrinks further as AMD segmented the RX 7900 XT and RX 7900 GRE by disabling one or two MCDs.

The upcoming RDNA 4 architecture will correct some of the glaring component-level problems causing the performance/watt curve to waver on RDNA 3, and the top RDNA 4 part could end up with performance comparable to the current RX 7900 series, while sitting a segment lower and being a smaller GPU overall. In case you missed it, AMD will not make a big GPU to succeed "Navi 31" and "Navi 21" for the RDNA 4 generation, but will rather focus on the performance segment, offering more bang for the buck well under the $800 mark, so it could claw back some market share from NVIDIA in the performance, mid-range, and mainstream product segments. While it remains to be seen whether RDNA 5 will get AMD back into the enthusiast segment, it is expected to bring a significant gain in performance thanks to the re-architected design.
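The cache math behind the leak is easy to check. A minimal sketch, assuming the published MCD counts for the Navi 31 SKUs (six on the RX 7900 XTX, five on the RX 7900 XT, four on the RX 7900 GRE) and the leak's claimed 32 MB-per-MCD figure:

```python
# Infinity Cache totals per SKU: shipped 16 MB/MCD vs. the
# rumored 32 MB/MCD configuration from the leak.
MCD_COUNTS = {
    "RX 7900 XTX": 6,  # fully enabled Navi 31
    "RX 7900 XT": 5,   # one MCD disabled
    "RX 7900 GRE": 4,  # two MCDs disabled
}

for sku, mcds in MCD_COUNTS.items():
    shipped = mcds * 16  # MB, actual configuration
    rumored = mcds * 32  # MB, per the leak
    print(f"{sku}: shipped {shipped} MB, rumored {rumored} MB")
```

This reproduces the 96 MB figure for the full chip and the 192 MB the leak says was originally planned, and shows how the cut-down SKUs drop to 80 MB and 64 MB.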
One rumored aspect of RDNA 4 that even this source agrees with is that AMD is working to significantly improve its ray tracing performance by redesigning the relevant hardware. While RDNA 3 builds on the Ray Accelerator component AMD introduced with RDNA 2, with certain optimizations yielding a 50% generational improvement in ray testing and intersection performance, RDNA 4 could see AMD put more of the ray tracing workload through fixed-function accelerators, unburdening the shader engines. This significant improvement in ray tracing performance, performance/watt gains at an architectural level, and the switch to a newer foundry node such as 4 nm or 3 nm, is how AMD ends up with a new generation on its hands.
AMD is expected to unveil RDNA 4 this year, and if we're lucky, we might see a teaser at Computex 2024 next month.
Sources:
wjm47196 (ChipHell), VideoCardz
169 Comments on AMD RDNA 5 a "Clean Sheet" Graphics Architecture, RDNA 4 Merely Corrects a Bug Over RDNA 3
"Discounts" and "Nvidia" are words you should not expect to be used together.,
I don't know where I read this, and my memory regarding the details is hazy, so take it with a pinch of salt.
Seems to me we're gonna get an entire generation of ReBrandeon, with a "fix" that I'd bet decent money doesn't actually improve performance beyond the margin of error. Kinda like GCN 1.1/1.2/1.4. So given the way AMD behaved with the RX 7000s, expect it to launch at $799 and drop below $700 after a year of nobody buying it. *The answer is whatever the consumer will support.
It seems RDNA4 is long overdue. Ideally this is what RDNA3 should have been, instead of releasing a half-baked product. This is important because not only dGPUs are based on it, but iGPUs as well, and the latter need these improvements even more. Things are worse because APUs/mobile are stuck with RDNA 3/3.5 for a long time. It means that the laptops coming in the next couple of years will still have outdated iGPUs that should have been replaced by RDNA4 at least a year ago. And yet they advertise this as an achievement.
Power consumption is the biggest threat right now, I would say even bigger than the inferiority of the upscaling and RTRT technologies. No matter how much someone uses upscaling, it won't fix the situation if the card is a power hog. It's completely clear that the primary market for any company in the world is the US, where a company like AMD sells the absolute mass of its products, and where people don't count or think about the amount of electric power being used. Especially when it comes to gamers.
So why does it matter? Because once again, the iGPUs use the same µarch, just scaled down/limited to a few CUs. If the efficiency is bad on desktop, it will be just as bad on the smaller iGPU/handheld/mobile GPUs too. And sadly, the inefficiency is not only a result of an inferior node, but of inferior design as well. Not to mention that there are ways to reduce the rendering load without dumping quality, instead of brute-forcing it. And that part also heavily relies on the software side, which sadly still lags behind.
But enough complaints; that won't fix the situation. It would still be great if these delays turn into good fruition and lead to an abundance of powerful, improved, energy-efficient products, even if they won't have the performance of nVidia's next-gen top solutions. But I'm not holding my breath. Exactly. It's obvious that nVidia gets its pricing out of thin air, just to see what threshold they can get away with. And people paid it, thus setting the pricing in stone, forever. The pricing was doomed the day the RTX 2080 was announced. AMD has no incentive, as they position themselves as a "premium" brand. There can be no value-oriented market segment, and especially no value-oriented pricing for such a segment, when every market participant (except Intel, for now) is a "premium" company.
This seems like a great business model, where the consumer is pressed against the wall and has no choice but to pay extortion prices, because both "rivals" set their SKUs and pricing virtually the same. A "parity" with no losers, except the end user.
$700 is clearly unacceptable for a mid-range GPU, even with inflation. Why mid-range? Because by all the rumors, RDNA4 will have no high-end SKUs, just a refined RDNA3 Rebrandeon RX 8000. So AMD just wants to stretch the maximum price nVidia would set onto their "top" mid-range solutions, but at the same time without putting in much effort, by maintaining the "GPU underdog" image to justify their greed and laziness. This is ridiculous.
At least nVidia still has some pretense that they "care" about "gamers," and still puts some fraction of its R&D budget into consumer GeForce. AMD does none of this, while asking for just as much money.
But this began a while ago. Remember when AMD had to bring down RX5700 prices due to the outrage? They wanted to gouge people the way nVidia did a while back, with the release of the very first RDNA1: a completely new, raw µarch that had a lot of bugs and issues. But they somehow had the hubris to set the price tags like it was flawless.
The problem with AMD is not that they have bad products. The problem is that they know nVidia's pricing is delusional, but still comply with it. AMD gets bashed more because they follow suit, and thus lose public image and credibility. And it hurts them more than nVidia, because repeating an immoral move is even worse than committing it in the first place (which can be excused as an "unintentional" mistake).
No matter what people say, CPUs and servers are not the only branches AMD has profited a lot from. Their GPUs still made them tons of money. But Radeon is yet to get the same R&D treatment as Ryzen.
People say that AMD has no money to put into better R&D, and that they cannot compete with the scale of nVidia's budget. But everyone forgets that AMD made its first Ryzen while on the verge of bankruptcy. The absolute end, with zero financial backup. They had no additional sources of income; it was all-in.
I'm not saying they must put the whole budget into consumer Radeon. That's stupid. But it would have been a great move to improve RTG, as it can be an additional source of income. It's impossible to gain without the input.
And the consumer/gaming branch's poor sales are not because people are uninterested in Radeon products, but because the pricing is atrocious. It's not that people prefer nVidia; it's that doubling down on margins alone is destructive for the economy and the company's health. If they don't see this now, then nothing will help. A couple of bucks below the nVidia counterparts, just to create an illusion of competition. I don't know where the 7900XT is going for less than $799. Here there are about 40 stores, and all sell it for $1,000. There's no competition.
But yeah, this time AMD has to put at least some effort into the consumer market. Otherwise the Radeon branch just looks like a placeholder alongside Enterprise.
And nope, I guess AMD won't sell the 7900XT for 7800XT money, since they have already set this pricing by shifting the entire stack one class up. I mean, the 7900XT just cries that it's really a 7800XT, and what they call the "7800XT" is a 7700XT instead, with the 7900GRE being a 7800 non-XT, the 7600 being a 7500, and the 7700XT being the true 7600XT. Only the 7900XTX holds its top-SKU moniker, just with an excessive "X" at the end. So why would they do such a favor and undercut themselves by "gifting" the previous gen's top performance at "mid-end" prices? This is a new era.
But the true cause behind the atrocious pricing is that AMD is knee-deep in the top-margin market of AI and enterprise. They won't make any concessions, no matter what.
However, the whole price/class-shifting shenanigans in reality did more of a disservice than a help. What I mean is that the generational uplift used to come from each next-gen SKU one tier lower matching or beating the higher-tier SKU of the previous gen. Or in other words, the same-class SKU of the next gen should show a performance uplift. Otherwise this doesn't make sense, and is both counter-productive and counter-evolutionary.
But AMD has really shot themselves in both feet by sticking to these scammy shenanigans, especially doing it after nVidia backpedaled on their dumb 4080 12GB naming. What AMD has done is even worse. It not only makes the products less attractive price-wise, but also completely abolishes the whole generational uplift, making it twice as bad. I mean, if they'd stuck with the naming mentioned above, the performance growth would be real, and way greater than it is. But by shifting everything one tier up, the entire stack lost its performance improvements altogether.
The reasoning behind this is clear: AMD wanted to increase profit margins. But raising prices without naming the SKUs accordingly would have made the pricing an unreasonable and blatant rip-off, and could have shown their true pursuit of nVidia-like behavior, which might have ended in public outrage. So, here we are. The entire point of pricing a VGA at a grand and above is that these cards were supposed to be used by prosumers, just for the GPU maker (nVidia in this case) to extract maximum margins out of them. Well, JHH himself said this while announcing the RTX lineup of GeForce, calling it a "holy grail" for developers. Both the 2080 Ti and Titan RTX (the true uncut 2080 Ti, rather than a Titan) were positioned as products for content creators and designers. Sort of a Quadro for the "poor."
So it's obvious the ordinary "gamur" Joe/Jane doesn't need to waste this ungodly amount of hard-earned money on a device whose price was set and inflated artificially and unreasonably. Because they (nVidia) can. Indeed. There is a reason AMD's market cap is now almost twice as big as Intel's. Everyone says it's the server/data center and CPU branch. But everyone forgets that both nVidia and AMD made a fortune by selling shiploads of cards directly to the miners, and used the convenient "scalper" scapegoat as a reason to inflate prices forever. The true virtue signaling of them, claiming they were unable to come up with a solution. It was clear they could have enforced sale restrictions, or even taken the entire supply under their own control and sold cards directly from their very own store. But they didn't.
Intel was just late with their miner-specific ASIC hardware, so they switched to AI instead, before anything came to market. So they just had to stick with Arc development, as it's now their bread and butter.
I'm sure there's a huge portion of current AMD wealth that is due to one single non-stop covid-mining rampage. And both GPU makers had no incentive to return their prices back to sane levels, as both were already benefiting from selling their entire stock for compute/data center tasks, which miners/crypto are directly related to.
That's why, instead of investing in the consumer market, they just invented the AI "surge" to justify their wish to continue selling all silicon, including consumer/gaming-grade graphics cards, to non-consumer/gamer compute markets. AMD is maybe a bit more direct about it, as they've openly dropped the consumer market in favor of AI/enterprise. And nVidia just creates a whole lot of different tricks to rebadge their gamer GPUs as "compute" cards and circumvent the restrictions.
That is the reason they're all stuffing useless "AI" features into every GPU and CPU, just to inflate the price and ride this "AI" train while it's still going full steam. Some even try to "coordinate" the independent/open AI endeavours. They don't even try. I might be wrong, but it may happen that AMD designs RDNA5 with the main intention of using it later in workstation cards. They aren't trying to better the RDNA cards for consumer/gamer market needs. They used to have CDNA based on Vega/GCN, so why not just use the chip for workstations and sell the binned "crisps" as consumer RX series.
Now AMD just makes minimal efforts, using it so that Radeon can continue to "exist" (which BTW helps nVidia avoid "monopoly" status), as R&D and a test ground for future iGPU products used in APUs and consoles. Those two and OEM are the next most important areas after AI/enterprise. The DIY market is sadly dead.
Back to "wait for Big RDNA5".
BUT I bear this in mind. AMD 10 years ago were a heartbeat or two away from going bankrupt. Without them I dread to think what pricing would be now between Intel and Nvidia. I bet the executives at AMD also bear this in mind, it wasn't that long ago. It's still quite fresh in the mind.
Got it.
7900XT/4090 are the 4K gaming cards.
Few gamers embrace 4K as it has too many pixels for any GPU to hand out 120FPS with all the extra bits.
AMD aims for mainstream success.
Nvidia made a card that costs $250 more that we would rather buy, except that new one that will cost $500 that AMD isn’t going to compete with.
Rinse, repeat……
The earnings call was from Nvidia’s Q2 2023.
So, the last RDNA architecture will be RDNA 4. The next one will no longer be RDNA? Nice...
And I guess they will invent a new retail branding. Something like Radeon UHD 280 X to be the top part.
This is why I have decided I am never going to sell my current setup, so if M$ takes things too far, I can just yolo off to Linux for the next two decades and play backlog and emulate nostalgia games.
I hope I am wrong, and this rig ends up collecting dust in my closet for a while when RDNA5 comes out.
5120 shaders like the GRE has could be very powerful at 3 GHz with twice the Infinity Cache. The GRE is basically a bandwidth-starved 7900XT.
How much insider information does he have?