Monday, July 22nd 2024
Several AMD RDNA 4 Architecture Ray Tracing Hardware Features Leaked
We've known since May that AMD is giving its next-generation RDNA 4 graphics architecture a significant ray tracing performance upgrade, and have had some indication since then that the company is putting more of the ray tracing workflow through dedicated, fixed-function hardware, further unburdening the shader engines. Kepler_L2, a reliable source of GPU leaks, sheds light on some of the many new hardware features AMD is introducing with RDNA 4 to better accelerate ray tracing, which should reduce the performance cost its GPUs pay for having ray tracing enabled. Kepler_L2 believes these hardware features should also make it to the GPU of the upcoming Sony PlayStation 5 Pro.
To begin with, the RDNA 4 ray accelerator introduces the new Double Ray Tracing Intersect Engine, which should mean at least a 100% ray intersection performance increase over RDNA 3, which in turn offered a 50% increase over RDNA 2. A new RT instance node transform instruction should improve the way the ray accelerators handle instanced geometry. Other features, which we have trouble describing in detail, include a 64-byte RT node; ray tracing tri-pair optimization; change flags encoded in barycentrics to simplify detection of procedural nodes; an improved BVH footprint (possibly memory footprint); and RT support for oriented bounding box and instance node intersection. AMD is expected to debut Radeon RX series gaming GPUs based on RDNA 4 in early 2025.
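To get a feel for what an "intersect engine" computes, here is a minimal software sketch of the classic slab test for ray vs. axis-aligned bounding box intersection, the operation a BVH traversal performs at every box node. This is a conceptual illustration only, not AMD's hardware design; the function name and interface are our own.

```python
import math

def ray_aabb_hit(origin, direction, box_min, box_max):
    """Slab test: does the ray origin + t*direction (t >= 0) hit the box?

    For each axis, intersect the ray with the two planes bounding that
    axis's "slab"; the ray hits the box if the intervals overlap.
    """
    t_near, t_far = 0.0, math.inf
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        # Reciprocal of the direction; signed infinity for axis-parallel rays.
        inv = 1.0 / d if d != 0.0 else math.copysign(math.inf, d)
        t1, t2 = (lo - o) * inv, (hi - o) * inv
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near <= t_far
```

A doubled intersect engine, as rumored, would let the hardware evaluate twice as many of these box (or triangle) tests per clock per ray accelerator; the tri-pair and oriented-bounding-box items suggest the triangle and box node formats are changing as well.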
Sources:
Kepler_L2 (Twitter), VideoCardz
247 Comments on Several AMD RDNA 4 Architecture Ray Tracing Hardware Features Leaked
AMD knows this; that's why they are ignoring the "RT bad" crowd and trying to push for RT. Let's just hope they do well this time around. A 100% increase over RDNA 3, as claimed, is pretty decent.
(7900 XTX, etc.)
I think I enjoyed the PS5 more, even at 30 fps. 60 fps doesn't look that different to me; I'm so inured to a 30 fps rate. It was also in HDR, whereas on the PC it had to be in SDR.
54/58 CU, $550, 20 GB, 250 W: on par with the 7900 XTX in raster, slightly slower than the 7900 XTX in RT
48/52 CU, $400, 16 GB, 230 W: 15% faster than the 7800 XT, 7900 XT-level in RT
I have a 6900 XT and I would even consider upgrading if these prices are good. I've been doing AI stuff, and man, it's time consuming. I'd love to process my images and videos more quickly. Though I think if I were able to find some more modern algorithms, that would help a lot. It seems like all the tools I find came out four years ago, and it's difficult to tell when a model was last updated. I've been curious what kind of speed improvement, if any, I'd get with RDNA 3.
The focus here is on RT improvements, which is nice, but I am also highly curious what kind of AI improvements RDNA 4 will have. Will it move to the XDNA architecture? Or still use its own "AI accelerators"?
The upcoming 5090 will be able to do that, but at what cost? 2,000+ EUR?!
I believe RT will truly become mainstream only after the PS6 is out and available for purchase.
By mainstream I mean a 60-class GPU (non-Super, non-Ti) that can provide 60 FPS on average in RT-heavy titles at QHD with RT settings maxed out, natively (100% render resolution). AMD needs to deliver if they intend to stay relevant; otherwise Sony will turn to others like Intel (or even Nvidia) for their next console. This^
This is what I always expected from RT: not to turn it on, take a performance hit, and have it look practically the same.
Is this what native enjoyers are really after? The picture on the right is basically every new game's "native" thanks to freaking TAA. Just click it and see how much detail is gone.
This is very important, especially in War Thunder. I use SSAA 4x even though it makes my graphics card noisier, because I can see distant enemies better. It makes me want a 4K screen, but unfortunately I don't intend to buy one soon, and my PC wouldn't support it properly for now anyway.
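For reference, SSAA is conceptually very simple: render at a multiple of the target resolution, then average each block of samples down to one output pixel. A minimal sketch of that downsampling step, on a grayscale image represented as a list of rows (our own toy representation, not any game's implementation):

```python
def ssaa_downsample(img, factor=2):
    """Box-filter downsample for supersampled anti-aliasing.

    `img` is a list of rows of grayscale values rendered at `factor`
    times the target resolution in each dimension; every factor x factor
    block is averaged into a single output pixel.
    """
    h, w = len(img), len(img[0])
    out = []
    for y in range(0, h, factor):
        row = []
        for x in range(0, w, factor):
            block = [img[y + dy][x + dx]
                     for dy in range(factor) for dx in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out
```

SSAA 4x averages a 4x4 grid of samples per pixel, which is why it resolves small distant targets so well, and why it costs roughly 16x the shading work.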
I think AA smooths (or blurs) the aliasing: those jagged diagonal lines that shouldn't be there, but are, because of the quantization or screen-door effect that too few pixels produce…
AA blurs; it adds no new content.
Upscaling generates detail from nothing (or from clues in the picture: motion vectors, scene content, etc.), so it requires blurring, making a circle out of a square, so that you don't notice the blob effect.
Lol, CSI's detail-from-nothing zoom effect could be excused by stating that the source they use is actually downsampled to fit the screen, and zooming just displays what is already there…
I believe I've already seen someone talking about DLAA on TechPowerUp, but I've never seen this setting in a game. Going by the name, it should be exactly DLSS used as AA.
In the end I'm a bit confused. :oops:
It kinda helps me a bit.
It can have the same effect, but it will amplify jaggies when it's integer-value scaling, which requires further processing to make them less noticeable.
Lanczos (among other "fancy" upscalers) can upscale by non-integer amounts, and thus has built-in AA, which "spreads out" the jaggies…
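The contrast between the two behaviors is easy to see in one dimension. A quick sketch: nearest-neighbour integer scaling just repeats samples, so a hard edge stays hard (and jaggies are preserved), while a filtered resample spreads the edge over neighbouring pixels. Linear interpolation stands in here for Lanczos, which uses a windowed-sinc kernel rather than a straight line, but the edge-spreading effect is the same in kind; both functions are our own illustrations.

```python
def upscale_nearest(samples, factor):
    """Integer nearest-neighbour upscale: each sample is repeated
    `factor` times, so a hard edge stays a hard (jagged) edge."""
    return [s for s in samples for _ in range(factor)]

def upscale_linear(samples, out_len):
    """Linear resample to an arbitrary, non-integer-ratio length
    (out_len >= 2): interpolation spreads a hard edge across pixels."""
    out = []
    for i in range(out_len):
        pos = i * (len(samples) - 1) / (out_len - 1)
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out
```

Upscaling a black-to-white step with the first function yields only 0s and 1s (the jaggy case); the second produces intermediate values at the edge, which is the "spreading" described above.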
AA is not upscaling; it "cleans up the picture". It in effect smears the pixels, because a stair-stepped line is more noticeable than a blurry line…
…and it looks real in motion, because you perceive blur as speed in real life…