Friday, May 3rd 2024
AMD to Redesign Ray Tracing Hardware on RDNA 4
AMD's next-generation RDNA 4 graphics architecture is expected to feature a completely new ray tracing engine, claims Kepler_L2, a reliable source of GPU leaks. Currently, AMD uses a component called the Ray Accelerator, which performs the most compute-intensive portion of the ray intersection and testing pipeline, while AMD's hardware-level approach to ray tracing still relies heavily on the shader engines. The company debuted the Ray Accelerator with RDNA 2, its first architecture to meet the DirectX 12 Ultimate specification, and improved the component with RDNA 3 by optimizing certain aspects of its ray testing, bringing about a 50% improvement in ray intersection performance over RDNA 2.
As Kepler_L2 puts it, RDNA 4 will feature a fundamentally redesigned ray tracing hardware solution compared to the ones on RDNA 2 and RDNA 3. This would likely delegate more of the ray tracing workload to fixed-function hardware, unburdening the shader engines further. AMD is expected to debut RDNA 4 with its next line of discrete Radeon RX GPUs in the second half of 2024. Given the chatter about a power-packed AMD event at Computex, where the company is expected to unveil its "Zen 5" CPU microarchitecture for both server and client processors, we might expect some talk about RDNA 4, too.
Sources:
HotHardware, Kepler_L2 (Twitter)
227 Comments on AMD to Redesign Ray Tracing Hardware on RDNA 4
The 4070 Ti Super is cheaper than the 7900 XTX and faster in RT across all resolutions.
The 4070 Ti 12 GB is faster at all resolutions except 4K, but in path tracing it is significantly faster even at 4K.
Edit: the Arc GPUs are literally chart topping in TPU performance/dollar too.
Raster is irrelevant in this discussion.
In terms of the technical specs of AMD's RDNA 3 RT cores/units/accelerators, they're starved for data in RDNA 3, so it's easy to understand why they could instantly gain an almost 25% increase in possible performance from changes.
On the Nvidia side, they may have more generations of RT hardware out, but their RT gains have been a steady ~6% increase in efficiency per generation so far.
And I don't want it. AI could be used to enhance games, but instead it's a crappy Google for people who can't Google, and it looks through your photos to turn you in for wrongthink.
Raise your scepticism to a reasonable level.
I think you would all agree that people who turn around and say a game is fully RT/PT do not know what they are talking about. That cannot happen. You need models, textures, and materials; the basic building blocks of a game are all raster.
As an example, download Blender (a free program with a ray tracing renderer). Put any shape you like on the screen: raster. Texture it or put a material on it: raster. Put a light in there and render. Only the light affecting the model and material is RT.
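If anyone wants to try that workflow without clicking around, here's a rough Blender Python (bpy) sketch of the same steps, assuming the Cycles ray tracing engine; the object, material, and output names are just placeholders:

```python
# Minimal Blender Python (bpy) sketch of the workflow above: add a mesh,
# give it a material, add a light, then render with Cycles (Blender's
# ray tracing engine). bpy is only available inside Blender, so run this
# from the Scripting tab.
import bpy

# Geometry: a simple cube (the "shape on the screen")
bpy.ops.mesh.primitive_cube_add(size=2.0, location=(0.0, 0.0, 0.0))
cube = bpy.context.active_object

# Material: a basic node-based material assigned to the cube
mat = bpy.data.materials.new(name="DemoMaterial")
mat.use_nodes = True
cube.data.materials.append(mat)

# Light: the point light whose contribution the ray tracer resolves
bpy.ops.object.light_add(type='POINT', location=(3.0, -3.0, 4.0))

# Camera so the render has a viewpoint
bpy.ops.object.camera_add(location=(6.0, -6.0, 4.0), rotation=(1.1, 0.0, 0.8))
bpy.context.scene.camera = bpy.context.active_object

# Render with Cycles and write the result next to the .blend file
bpy.context.scene.render.engine = 'CYCLES'
bpy.context.scene.render.filepath = "//rt_demo.png"
bpy.ops.render.render(write_still=True)
```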
For example, I could not care less which company has better RT. It's not a selling point I'd consider when buying a GPU. It's like asking which colour of apple I'd prefer to chew. I care more about seeing content in good HDR quality on an OLED display and having enough VRAM for a few good years, so that the GPU does not choke too soon in the titles I play.
AMD is leading the complex and bumpy transition to chiplet-based GPUs, which will be necessary for future high-NA EUV production, and this is often not appreciated enough. Those GPU chiplets do not fall from the clouds. AMD also offers the new DP 2.1 video interface and more VRAM on several models. Those features also appeal to some buyers.
Although Nvidia leads in various aspects of client GPUs, they have also managed to slow down the transition to DP 2.1 ports across the entire monitor industry by at least two years. They abandoned the leadership in this area that they had in 2016, when they introduced DP 1.4. The reason? They decided to go cheap on the PCB and recycle the video traces on the main board from the 3000 series into the 4000 series, instead of innovating and prompting the monitor industry to accelerate the transition. The result is a limited video pipeline on halo cards that is not capable of driving Samsung's 57-inch 8K/2K 240 Hz monitor beyond 120 Hz. This will not be fixed until the 5000 series. Also, due to being skimpy on VRAM, several classes of cards with 8 GB are becoming obsolete more quickly in a growing number of titles. For example, the paltry 8 GB plus RT on the then very expensive 3070/Ti series has become a joke just ~3 years later.
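Rough back-of-the-envelope math behind that 120 Hz limit; this is a simplified sketch that ignores blanking overhead and assumes 8-bit colour with ~3:1 DSC compression, with the published effective payload rates for DP 1.4 HBR3 and DP 2.1 UHBR20:

```python
# Back-of-the-envelope link bandwidth check (simplified: no blanking
# overhead, 8-bit colour, ~3:1 DSC). Figures are approximate.
def required_gbps(width, height, hz, bits_per_pixel=24, dsc_ratio=3.0):
    """Approximate payload bandwidth a video mode needs, in Gbit/s."""
    return width * height * hz * bits_per_pixel / dsc_ratio / 1e9

DP14_HBR3_PAYLOAD = 25.92    # Gbit/s: 4 lanes x 8.1 Gbit/s, 8b/10b encoding
DP21_UHBR20_PAYLOAD = 77.37  # Gbit/s: 4 lanes x 20 Gbit/s, 128b/132b encoding

for hz in (120, 240):
    need = required_gbps(7680, 2160, hz)
    print(f"7680x2160 @ {hz} Hz needs ~{need:.1f} Gbit/s | "
          f"DP 1.4: {'OK' if need <= DP14_HBR3_PAYLOAD else 'too much'} | "
          f"DP 2.1 UHBR20: {'OK' if need <= DP21_UHBR20_PAYLOAD else 'too much'}")
```

Even with DSC, 7680x2160 at 240 Hz lands around ~32 Gbit/s, which is over what a DP 1.4 link can carry but comfortably within DP 2.1 UHBR20.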
AMD needs to work on several features, there is no doubt about it, but it's not as if Nvidia has nothing to fix or address, including basic hardware on the PCB.
RT on AMD is handled by a bulked-out SP, and they have one of these SPs in each CU. It's a really elegant design philosophy that sidesteps the need for specialized hardware and keeps their die sizes a bit more svelte than Nvidia's. When the GPU isn't doing an RT workload, the SP is able to contribute to rendering the rasterized scene, unlike on Nvidia and Intel.
However, they take a double penalty when rendering RT, because they lose raster performance in order to actually do the RT calculations, unlike Intel and Nvidia, who have dedicated hardware for it.
It would be interesting if not only the RT unit were redesigned, but there were maybe two or more of them dedicated to a CU. Depending on the complexity of the RT, the mix of SPs dedicated to RT or raster calculations could adjust on the fly...
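A toy throughput model of that double penalty (purely illustrative numbers, not measurements of any real GPU): when the same units handle both workloads the costs add up, whereas a dedicated RT unit can overlap most of its work with the raster pass:

```python
# Toy model of the "double penalty": illustrative numbers only,
# not measurements of any real GPU.
def frame_time_shared(raster_ms, rt_ms):
    """Shared hardware: the same units do raster and RT, so the costs add."""
    return raster_ms + rt_ms

def frame_time_dedicated(raster_ms, rt_ms, overlap=0.8):
    """Dedicated RT units: most of the RT cost can overlap with raster work."""
    return raster_ms + rt_ms * (1.0 - overlap)

raster_ms, rt_ms = 10.0, 6.0  # hypothetical per-frame costs
print(f"shared units:    {1000 / frame_time_shared(raster_ms, rt_ms):.0f} FPS")
print(f"dedicated units: {1000 / frame_time_dedicated(raster_ms, rt_ms):.0f} FPS")
```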
Personally I'm actually hoping AMD greatly improve AI performance in the GPU as I'm using AI software that could greatly benefit from this.
For example, the 7900 GRE loses 58% of its FPS with RT enabled in Ratchet & Clank.
If the impact of RT were reduced to 35% (as with Ampere/Ada), the 7900 GRE would be getting 71 FPS, a very comfortable figure for 1440p, or for 4K with upscaling + frame generation.
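Working the numbers behind that claim (the 58% hit, the hypothetical 35% hit, and the 71 FPS figure are all taken from the post above; the baseline is just what they imply):

```python
# Working backwards from the numbers in the post above.
rt_hit_measured = 0.58    # 7900 GRE reportedly loses 58% of its FPS with RT on
rt_hit_target   = 0.35    # hypothetical Ampere/Ada-like hit
fps_with_target_hit = 71  # FPS the post expects at the smaller hit

# Implied raster-only baseline, then FPS at each hit
baseline = fps_with_target_hit / (1 - rt_hit_target)
print(f"implied raster baseline: ~{baseline:.0f} FPS")
print(f"at a 58% hit:            ~{baseline * (1 - rt_hit_measured):.0f} FPS")
print(f"at a 35% hit:            ~{baseline * (1 - rt_hit_target):.0f} FPS")
```

That works out to a raster baseline of roughly 109 FPS, about 46 FPS at the measured 58% hit, and the quoted 71 FPS at a 35% hit.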
RT GI, reflections, shadows, and ambient occlusion are superior to the rasterized versions of them, so when anyone asked for better visuals, RT was always the answer. For people who don't care about graphics, RT is simply not for them.
Shadows and reflections are more a matter of preference, and it's a crapshoot whether people can even tell the difference, but GI and AO are basically 100% better with RT.
But do you guys remember PhysX and Hairworks? How AMD struggled or couldn't operate with them? I mean, there were dedicated cards for it; heck, I had one for that space-bugs game on that cold planet. I can't remember the name. But yeah, it was used heavily, and AMD couldn't work with it. I had to get another card.
Anyway, what I am getting at is that AMD is late to the game, as usual. RT is the new PhysX and Hairworks. Even bigger, actually. And a game changer for lighting. Hell, it is fantastic for horror games.
I am glad they are now actively looking into it. But at this point, for midrange, I don't care who it is (AMD, Intel, Nvidia); so long as I can get a cheaper GPU that can implement RT, I will go for it.
Once hardware matures and engines make efficient use of the tech, it's going to benefit everyone, including developers. For consumers, games will appear visually more impressive and natural, with fewer artifacts, distractions, and shortfalls (screen-space technologies, anyone?), and for developers it'd mean less hacking together of believable approximations (some implementations are damn impressive, though), less baked lighting, less texture work, and so on.
The upscaling technology that makes all this more viable on current hardware may not be perfect, but it's pretty darn impressive in most cases. Traditionally, I was a fan of more raw rendering; I often avoided TAA, many forms of post-processing, motion blur, etc., due to the loss of perceived sharpness in the final image, but DLSS/DLAA have converted me. In most games that support the tech, I've noticed very few visual issues. There is an overall softer finish to the image, but it takes the roughness of aliasing away; it makes the image feel less 'artificial' and blends the scene into a more cohesive presentation. Even in fast-paced shooters like Call of Duty, artifacting in motion is very limited and does not seem to affect my gameplay and competitiveness. Each release of DLSS only enhances it. I have not experienced Intel's or AMD's equivalent technologies, but I'm sure they're both developing them at full steam.
Yes, I'm an Nvidia buyer, and have been for many generations, but I'm all for competition. Rampant capitalism without competition leads to less choice, higher prices, and stagnation. Intel and AMD not only staying in the game, but also innovating and hopefully going toe-to-toe with Nvidia, will benefit everyone, regardless of which 'team' you're on.
Technology is what interests me the most, not the brand, so whoever brings the most to the table, for an acceptable price, is who gets my attention...
Could it be that one day there is no rasterisation, or any traditional rendering technique, in games at all? Would AI models just infer everything? Sora is already making videos and Suno is generating music; is it much of a stretch to think game rendering is next?
For how long have you all been saying RT doesn't matter.. Raster the world!
And now this :confused::confused:
:p
RT still really doesn't and really hasn't "mattered" for the last several gens. It doesn't look much, if any, better than raster, and its performance hit is way too high. Every game except Metro Exodus has kept a rasterized lighting system with RT haphazardly thrown on top.
Nothing about RDNA4 is going to change that equation.
That doesn't mean some people don't prefer RT lighting, or just like the idea of the tech, even if they aren't really gung-ho about the practical differences vs. raster.