Monday, July 22nd 2024
Several AMD RDNA 4 Architecture Ray Tracing Hardware Features Leaked
We've known since May that AMD is giving its next-generation RDNA 4 graphics architecture a significant ray tracing performance upgrade, and we've had clues since then that the company is working on putting more of the ray tracing workflow through dedicated, fixed-function hardware, further unburdening the shader engines. Kepler_L2, a reliable source for GPU leaks, sheds light on some of the many new hardware features AMD is introducing with RDNA 4 to better accelerate ray tracing, which should reduce the performance cost its GPUs pay for having ray tracing enabled. Kepler_L2 believes these hardware features should also make it to the GPU of the upcoming Sony PlayStation 5 Pro.
To begin with, the RDNA 4 ray accelerator introduces the new Double Ray Tracing Intersect Engine, which should mean at least a 100% increase in ray intersection performance over RDNA 3, which in turn offered a 50% increase over RDNA 2. The new RT instance node transform instruction should improve the way the ray accelerators handle instanced geometry. Some of the other listed features are harder to pin down: a 64-byte RT node, a ray tracing tri-pair optimization, change flags encoded in barycentrics to simplify detection of procedural nodes, an improved BVH footprint (possibly memory footprint), and RT support for oriented bounding box and instance node intersection. AMD is expected to debut Radeon RX series gaming GPUs based on RDNA 4 in early 2025.
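For readers wondering what an "intersect engine" actually computes during BVH traversal, the sketch below shows a minimal software version of the ray/box (AABB) slab test that is evaluated at every internal BVH node; in a GPU ray accelerator this test runs in fixed-function hardware, and a doubled intersect engine would roughly double how many such tests complete per cycle. This is purely illustrative code under our own assumptions; the names, structures, and traversal details here are hypothetical and are not AMD's implementation.

```cpp
// Illustrative sketch only (not AMD code): the ray/AABB "slab test" used by
// BVH traversal. A hardware intersect engine evaluates tests like this in
// fixed-function logic; more intersect throughput means faster traversal.
#include <algorithm>
#include <array>
#include <cstdio>
#include <utility>

struct Ray {
    std::array<float, 3> origin;
    std::array<float, 3> invDir;  // 1 / direction, precomputed per axis
};

struct AABB {
    std::array<float, 3> lo;  // minimum corner
    std::array<float, 3> hi;  // maximum corner
};

// Returns true if the ray enters the box within the interval [tMin, tMax].
bool intersectAABB(const Ray& ray, const AABB& box, float tMin, float tMax) {
    for (int axis = 0; axis < 3; ++axis) {
        float t0 = (box.lo[axis] - ray.origin[axis]) * ray.invDir[axis];
        float t1 = (box.hi[axis] - ray.origin[axis]) * ray.invDir[axis];
        if (t0 > t1) std::swap(t0, t1);  // handle rays travelling in -axis
        tMin = std::max(tMin, t0);       // latest entry across all slabs
        tMax = std::min(tMax, t1);       // earliest exit across all slabs
        if (tMin > tMax) return false;   // slabs don't overlap: the ray misses
    }
    return true;
}

int main() {
    Ray ray;
    ray.origin = {0.0f, 0.0f, 0.0f};
    ray.invDir = {1.0f, 2.0f, 4.0f};     // direction (1, 0.5, 0.25) stored as reciprocals

    AABB box;
    box.lo = {1.0f, -1.0f, -1.0f};
    box.hi = {2.0f,  1.0f,  1.0f};

    std::printf("hit: %s\n", intersectAABB(ray, box, 0.0f, 1000.0f) ? "yes" : "no");
    return 0;
}
```

Features like the 64-byte RT node, tri-pair optimization, and oriented bounding box support would change how much node data each of these tests consumes and how tightly the boxes fit the geometry, which is why they matter for traversal cost even though the basic test stays the same.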
Sources:
Kepler_L2 (Twitter), VideoCardz
247 Comments on Several AMD RDNA 4 Architecture Ray Tracing Hardware Features Leaked
In case you didn't notice, buyers will buy when a product is a better value... AMD's Ryzen CPU lineup is proof of it: people buy it because it's good, better value. And that's also why people buy NVIDIA... they see it as better value for money.
There are still brand fans that tend to stick... well, to brands instead of better value.
AMD's GPUs are one generation behind NVIDIA... and NVIDIA is making sure to keep AMD on that step...
There are things that turn a product into better value in the mainstream: performance, efficiency, ecosystem.
Let's see what NVIDIA offers:
+Better Performance/Watt
+Better DLSS implementations (even XeSS got a little better than FSR, go figure)
+Better encoding capabilities, aka NVENC... used in OBS and even in Discord
+Better day-one game-optimized drivers
+Better software support thanks to CUDA
- Premium Price
On every "+", AMD has failed to surpass NVIDIA. In the CPU division, by contrast, Ryzen did surpass the longtime king Intel because of better value, features, price, and even platform.
AMD needs to take the same approach in the GPU division: offer better "features" that add value to the product.
They are investing in RT... because it is a "selling point" used by NVIDIA. For me... it's just a gimmick that doesn't offer any value. NVIDIA uses the "RT" benchmark because they have already nailed the other features... AMD is falling into NVIDIA's trap.
AMD's recent decision to create a "software division" will probably pump AMD GPU features up to NVIDIA's level, but that will take time... time that NVIDIA knows how to manage.
When was the last time AMD designed a normally sized high-end chip in line with NVIDIA's die sizes? 600 mm²? 650 mm²?
AMD doesn't invest. AMD is very greedy. It wants (or maybe doesn't want) to make money with as little investment as possible.
Do you think that going to GPU chiplets is normal, when we know that GPUs CAN'T work fast if the latencies between their different parts are too high? This is not a CPU, which could have a billion chiplets and still be good enough...
Also, last but not least: you should compare the efficiency of those multi-billion-dollar budgets - the useful work per dollar.
AMD is doing what everyone wants them to do whether they know it or not. They are bringing more competition to Nvidia. Even the Nvidia fans will be thankful if this makes their GPUs less of a ripoff due to lack of competition.
The only thing that has muddled where we were all headed eventually anyway is that NVIDIA pushed the whole RT thing out too early. Developers struggled to implement it properly and the hardware and software were lacking, but I can assure you that RT will continue to become more desirable, and the last bit of serious resistance will probably end when the next-gen consoles arrive with a much stronger focus on RT.
No one will be saying that RT is a gimmick then. Some will still be against it but their voices will be drowned out by the vast majority of gamers that do want it.
RT makes sense when the user really wants it, the performance hit is minimal, and the game justifies it. Other than that, I don't see any point in RT.
Take the example of Gran Turismo 7 vs. Forza Motorsport... a very well developed game doesn't need RT to look amazing.
Possibly a thought exercise: if AMD offered better performance/watt + better upscaling + (a few other things) at a reasonable price tag (vs. NVIDIA), I think people would buy AMD even if it lacked RT performance... (that is what NVIDIA is afraid of).
AMD already delivered better raster performance and more VRAM, but not the needed optimizations and the other features...
Why was Ryzen a success? Because of price/performance, and because it gave users more cores... something Intel was holding back. In NVIDIA's case, it's VRAM.
In the GPU universe VRAM is a selling point, but it doesn't tell the full story; the RTX 4060 Ti 16 GB is proof of that, as it lacked performance for the price. AMD needs to look at those NVIDIA failures...
For me, the "future" features are upscaling tech, encoding/decoding, and performance-per-watt optimizations.
I really do want some punches thrown at NVIDIA... because they are getting way too big for my comfort. Intel GPUs and Radeons do need to step up.
And I know this because I did a 6800 XT to 7900 XTX upgrade, and performance in UE5 games is much better.
What I'd like to see with RDNA 4 is a card with performance similar to the RX 7600 for $230 (with tax); $330 is too much. My RX 570 might die before I buy a new card.
Frame rates take a hit when RT is used. There are software solutions that help though.
I should have been a little more clear. I'm referring to the RT games where AMD struggles, i.e. Cyberpunk 2077, Alan Wake, etc.
That list also consists of games where the RT is light.
sarcasm/
Usually, people who care about RT are willing to put $$$ toward RT performance... In that case... yeah, AMD needs to double NVIDIA's RT performance and double the price tag to say "AMD GPUs are now a valuable product".
For example, only two of those games last for more than 200 hours (The Witcher 3 and Cyberpunk 2077).
Subjective, but five games at that cost does not do it for me.
Once both consoles have most titles supporting RT is when we will see the big push for it, in my eyes. When medium-tier GPUs can push RT on the PC side is when things will really get interesting.
We know devs target the low-to-medium-tier buyers, as that is where the most volume is.
200 hours for a $2,500 purchase, "totally worth it".
I will have to agree to disagree on that.
As of right now, the 4080 Super and 7900 XTX cost the same in the EU, with around a 70-pound difference in the UK. So the cost to use RT in that handful of titles is 0 to 70 pounds.
If AMD doesn't address this issue, they will drop below 10% soon. Strawman much? Why are you even bringing up the 4090? It's not just faster in RT, it's faster in everything.
The XTX competes against the 4080 Super; they cost roughly the same and the NVIDIA part is much faster in RT. Obvious choice.
And based on cost and game selection I would not make that move at all.
As I said, we will have to agree to disagree on this.
I've asked this question on many forums and nobody has been able to give me a convincing argument once you factor in the cost and the number of games.
And yes, you can go down a tier or two to address the cost issue, but the number of games doesn't change.
But let me fix that sentence a bit. See above, and adding to it: there are thousands and thousands of games without RT that are still way better than those 500+. Eventually, and hopefully, we will also have low-to-mid-tier GPUs that can serve customers, instead of requiring a US$2K+ GPU to be able to admire those puddle and mirror reflections. Personally, by year two of the "year of the affordable RT GPU", I said that we were at least five generations away from that.
But I think that was too conservative.
Also, to me (so far) RT has not proven to be more than a gimmick anyway.
When game worlds are not entirely wet or made completely of mirrors and still give a real reason to demand RT, I will then stop thinking it's a gimmick.
At some point you need to stop pretending AMD is at 10% market share because of brand name. When you can get DLSS + FG + working RT for minimal extra cost over AMD cards, why the heck would you not?