Friday, May 3rd 2024
AMD to Redesign Ray Tracing Hardware on RDNA 4
AMD's next-generation RDNA 4 graphics architecture is expected to feature a completely new ray tracing engine, claims Kepler_L2, a reliable source of GPU leaks. Currently, AMD uses a component called the Ray Accelerator, which performs the most compute-intensive portion of the ray intersection and testing pipeline, while the rest of AMD's hardware ray tracing approach still leans heavily on the shader engines. The company debuted the Ray Accelerator with RDNA 2, its first architecture to meet the DirectX 12 Ultimate spec, and improved the component with RDNA 3 by optimizing certain aspects of its ray testing, bringing about a 50% improvement in ray intersection performance over RDNA 2.
The way Kepler_L2 puts it, RDNA 4 will feature fundamentally transformed ray tracing hardware compared to RDNA 2 and RDNA 3. This would likely delegate more of the ray tracing workload to fixed-function hardware, unburdening the shader engines further. AMD is expected to debut RDNA 4 with its next line of discrete Radeon RX GPUs in the second half of 2024. Given the chatter about a power-packed AMD event at Computex, where the company is expected to unveil the "Zen 5" CPU microarchitecture for both server and client processors, we might expect some talk about RDNA 4, too.
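To make the division of labor concrete, here is a minimal, self-contained C++ sketch of BVH traversal: the traversal loop's control flow is the kind of work RDNA 2/3 still run as shader code, while the per-node box/triangle intersection test inside it is the step the Ray Accelerator exists to speed up. Pushing more of the loop itself into fixed-function hardware is what "delegating more to fixed-function hardware" would mean. All names and data structures here are hypothetical illustrations, not AMD's actual hardware interface.

```cpp
// Simplified, illustrative sketch (not AMD's implementation) of the shader/hardware
// split in RDNA 2/3 ray tracing. All types and names are hypothetical.
#include <cstdint>
#include <utility>
#include <vector>

struct Ray  { float origin[3]; float dir[3]; float tMax; };
struct Node { float bboxMin[3]; float bboxMax[3]; uint32_t child[2]; bool isLeaf; uint32_t triIndex; };

// Stand-in for the hardware-accelerated step: a slab test of a ray against a node's
// bounding box (zero-direction edge cases ignored for brevity).
bool intersectBox(const Ray& r, const Node& n) {
    float tNear = 0.0f, tFar = r.tMax;
    for (int a = 0; a < 3; ++a) {
        float inv = 1.0f / r.dir[a];
        float t0 = (n.bboxMin[a] - r.origin[a]) * inv;
        float t1 = (n.bboxMax[a] - r.origin[a]) * inv;
        if (t0 > t1) std::swap(t0, t1);
        if (t0 > tNear) tNear = t0;
        if (t1 < tFar)  tFar  = t1;
        if (tNear > tFar) return false;
    }
    return true;
}

// The traversal loop itself: on RDNA 2/3 this control flow still runs as shader code.
int traverse(const std::vector<Node>& bvh, const Ray& ray) {
    std::vector<uint32_t> stack = {0};          // start at the root node
    int hitTriangle = -1;
    while (!stack.empty()) {
        uint32_t idx = stack.back(); stack.pop_back();
        const Node& node = bvh[idx];
        if (!intersectBox(ray, node)) continue; // <- the hardware-assisted test
        if (node.isLeaf) { hitTriangle = int(node.triIndex); continue; }
        stack.push_back(node.child[0]);
        stack.push_back(node.child[1]);
    }
    return hitTriangle;                         // closest-hit selection omitted for brevity
}
```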
Sources:
HotHardware, Kepler_L2 (Twitter)
227 Comments on AMD to Redesign Ray Tracing Hardware on RDNA 4
Consumers like these features. Therefore, a consumer-serving company will cater to its customers.
End of story. Maybe R&D spending is... important?
How many generations does it take to get it right? For these two things, RT hardware and upscaling, for Intel, one generation.
Take notes AMD.
Up until the last three or four years, Nvidia had an R&D budget more than twice as big as AMD's, and Intel's was over 7x as large, yet despite that, AMD was able to compete and hang with Nvidia and actually beat Intel. Can anybody name an example from any other industry where a company successfully competes while being completely outmatched by its competition resource-wise?
How much of that 3x budget is going to GPUs specifically?
It's not a bad thing for a tech company to innovate beyond current demand or trends. Say what you will about people not wanting RT or AI upscaling back then; right now it's a necessity, and Nvidia has positioned itself as the clear leader in those areas. Nvidia is simply reaping what it sowed.
The 6800 XT is a discontinued four-year-old card that's still much faster in raster, by the way, but that doesn't matter; the point is Intel sucks in raster and can't even match first-gen AMD RT performance. The 7700 XT is more expensive because it's simply a much better GPU.
In the end, the product is all that matters. And said product is holistic - hardware, software, features, the lot.
Precisely. Well said.
Additionally, it took until Zen 3 for AMD to be competitive in all areas: ST, MT, and in games.
So looking at Arc Alchemist I see many similarities.
Ah mod had to edit my phone screenshot size again :toast:, my bad, easy thing to forget those pixel densities :D
From our consumer POV we are under the impression that Nvidia created the waves, but I think Nvidia is actually surfing a wave that was already there and just needed a push. When I was in uni I had a seminar where a guy talked about how digital twins were having a big influence on his job... and that was a few weeks after Nvidia kept talking about digital twins at GTC.
Like what happened with the last two console generations, the whole AIB market could have been moving towards $100-$500 GPUs doing temporal 2K-4K resolution with excellent performance and visuals using screen-space effects, perhaps with limited RT reflections here and there (e.g., Spider-Man 2 or Ratchet & Clank).
Instead Nvidia convinced most GPU reviewers to tell people RT performance is such an important thing so they could upsell their $1500-2000 GPUs, so we're back to AMD having to adapt their hardware to Nvidia's latest naked-emperor. Just like they had to when Nvidia was telling reviewers they had to measure performance rendering sub-pixel triangles in Geralt's hair. I wonder where all this progressivism was when ATi/AMD tried to introduce TruForm, then the first tessellation hardware in DX10, then TruAudio.
All of which were good enough to be widely adopted in consoles (i.e. a much higher volume of gaming hardware than PC dGPUs), but for some reason the PC crowd ignored.
The Intel implementation is inferior; Arc dedicates more die space to RT and AI than the others. What's smart or better about dedicating a huge part of the die to RT just to shout that it has more RT performance, when in reality it's irrelevant in 99% of games, and the card ends up struggling to compete with the RX 6600/RX 7600?
A770: 406 mm², 21,700 million transistors.
RX 7600/RX 7600 XT: 204 mm², 13,300 million transistors.
Performance: [comparison chart omitted]
And no, it's not worth having a design 2x larger to have such a "glorious" advantage:
The 7600 XT you are comparing against is from a three-month-old review with old drivers, and you've specifically chosen 1080p with RT off; the Intel card scales better at higher resolutions and/or with RT on. Irrelevant, huh?