Friday, May 3rd 2024

AMD to Redesign Ray Tracing Hardware on RDNA 4

AMD's next-generation RDNA 4 graphics architecture is expected to feature a completely new ray tracing engine, claims Kepler_L2, a reliable source of GPU leaks. Currently, AMD uses a component called the Ray Accelerator, which performs the most compute-intensive portion of the ray intersection and testing pipeline, while the rest of AMD's hardware ray tracing approach still relies heavily on the shader engines. The company debuted the Ray Accelerator with RDNA 2, its first architecture to meet the DirectX 12 Ultimate spec, and improved the component with RDNA 3 by optimizing certain aspects of its ray testing, bringing about a 50% improvement in ray intersection performance over RDNA 2.
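For a rough picture of what that hybrid approach means in practice, here is an illustrative Python sketch: BVH traversal runs as ordinary shader code, and only the intersection tests are offloaded to fixed-function hardware. All names and data structures here are hypothetical, not AMD's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Ray:
    origin: tuple
    direction: tuple  # components kept non-zero in this toy version

@dataclass
class Node:
    bounds_min: tuple
    bounds_max: tuple
    children: list = field(default_factory=list)
    leaf_hit_t: float = None  # toy stand-in for triangle data in a leaf

def hw_intersect_box(ray, node):
    """Fixed-function step (the Ray Accelerator's job): ray/box slab test."""
    tmin, tmax = 0.0, float("inf")
    for a in range(3):
        inv = 1.0 / ray.direction[a]
        t0 = (node.bounds_min[a] - ray.origin[a]) * inv
        t1 = (node.bounds_max[a] - ray.origin[a]) * inv
        if inv < 0.0:
            t0, t1 = t1, t0
        tmin, tmax = max(tmin, t0), min(tmax, t1)
    return tmin <= tmax

def trace(ray, root):
    """'Shader' side: the traversal loop itself occupies regular SIMD lanes."""
    stack, closest = [root], float("inf")
    while stack:
        node = stack.pop()
        if not hw_intersect_box(ray, node):  # offloaded intersection test
            continue
        if node.leaf_hit_t is not None:      # leaf: triangle hit (also hardware)
            closest = min(closest, node.leaf_hit_t)
        stack.extend(node.children)          # scheduling stays in software
    return closest

# Tiny demo: a unit box whose "triangle" is hit at t = 2.5
leaf = Node((0, 0, 0), (1, 1, 1), leaf_hit_t=2.5)
print(trace(Ray((0.5, 0.5, -1.0), (0.001, 0.001, 1.0)), leaf))  # -> 2.5
```

The rumored RDNA 4 change would move more of the loop above (traversal and scheduling) into dedicated hardware as well, which is what Nvidia's and Intel's RT units already do.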

The way Kepler_L2 puts it, RDNA 4 will feature ray tracing hardware fundamentally transformed from that of RDNA 2 and RDNA 3. This would probably delegate more of the ray tracing workflow to fixed-function hardware, unburdening the shader engines further. AMD is expected to debut RDNA 4 with its next line of discrete Radeon RX GPUs in the second half of 2024. Given the chatter about a power-packed AMD event at Computex, where the company is expected to unveil its "Zen 5" CPU microarchitecture for both server and client processors, we might expect some talk about RDNA 4, too.
Sources: HotHardware, Kepler_L2 (Twitter)

227 Comments on AMD to Redesign Ray Tracing Hardware on RDNA 4

#101
dgianstefani
TPU Proofreader
Disingenuous.

The 4070 Ti S is cheaper than the 7900 XTX and faster in RT across all resolutions.

The 4070 Ti (12 GB) is faster at all resolutions except 4K, but in path tracing it is significantly faster even at 4K.

Edit: the Arc GPUs are literally topping TPU's performance/dollar charts too.

Raster is irrelevant in this discussion.
Vya DomusAh yes, of course, when Nvidia has faster GPUs in RT they can charge a disproportionately large premium over AMD and it's all cool; if AMD has a faster GPU in RT than Intel, they can't, because uhm...uhm...

Ah, shucks, I forgot about the "Nvidia good, AMD bad" infallible logic once again, my bad.
Vya DomusOf course you are, the 7900 XTX is still faster in raster, and if you want a GPU from Nvidia that can comfortably beat it at that metric, you actually have to go all the way up to the eye-watering $1,600 4090.

I know you are totally obsessed with RT, and if it were up to you literally nothing else would matter, but that's not how it works in the real world. AMD can ask for a premium for better raster performance in the same way Nvidia can for RT; it must be real hard to wrap your head around the concept, but it's actually a real phenomenon.

The thing is, Intel sucks in both raster and RT, so you shouldn't be so bewildered that AMD cards are much more expensive than Intel's.
Posted on Reply
#102
kapone32
dgianstefaniFor a £300+ card I want to be able to use whatever resolution and settings I please.

Besides, GPUs are traditionally tested at higher resolutions, and CPUs at lower resolutions, to avoid bottlenecks from other parts of the system.

It's true, 1080p/1440p is the range these cards can comfortably operate in with settings maxed. However, the A770 in particular is decent as a productivity card, and doesn't mind higher resolutions for gaming either.
Wow. This is not 2016. A GPU for $300 today is only good enough for 1080p. Then you claim that an Arc A770 at $300 is better than the rest. Do you realize how foolish this argument is? RDNA 4 is still speculation, and it would not matter if its RT performance were as good as a 4090's. That notion about resolutions is foolish when the market already dictates the GPU hierarchy by pricing at retail.
Posted on Reply
#103
DemonicRyzen666
In terms of efficiency, AMD's RT cores/units/accelerators are 8-10% faster in RDNA 3 than they are in RDNA 2.
In terms of technical specs, AMD's RDNA 3 RT cores/units/accelerators are starved for data, so it's easy to understand why they instantly gained an almost 25% increase in potential performance from changes.
On the Nvidia side, Nvidia may have more generations of RT hardware out, but their RT gains have been a steady 6% increase in efficiency per generation so far.
Posted on Reply
#104
Steevo
I feel an AI RT compute unit coming.
And I don't want it. AI could be used to enhance games, but instead it's a crappy Google for people who can't Google, and it looks through your photos to turn you in for wrongthink.
Posted on Reply
#105
AusWolf
dgianstefaniTwo generations too late for good upscaling or RT, so only targeting midrange. Fingers crossed for RDNA 5, if it's called that.

I wonder if Battlemage will have a higher end card than RDNA 4. Intel's upscaler and RT tech is already ahead.
With current prices and performance levels, midrange is more than enough for me.
Posted on Reply
#106
Shtb
Kepler_L2?
Raise your scepticism to the necessary, reasonable level.
Posted on Reply
#107
RGAFL
I don't think people realise the work involved in getting RT/PT working in games. There are two PT games at the moment that had the full development weight of NVidia behind them. It really does need that amount of work, and lots of money, to do right. Honestly, you are looking at three years at least before that becomes the norm. The level of RT/PT you're seeing now is what will continue for the near future.
Posted on Reply
#108
AnotherReader
OnasiAnd the 7900XTX (full chip) is massively more powerful on paper than the 4080S (another full chip), and yet it is only slightly ahead, and only in several pure raster scenarios. Same story with, for example, the 4070S vs the 7900 GRE. There's a reason why comparing sheer stats doesn't work between different architectures.
That is true. However, like AMD, past Intel GPUs have built off previous generations. Therefore, it is likely that Battlemage will be similar to Alchemist, and people expecting huge performance increases are likely to be disappointed. Now, Alchemist is very good at ray tracing, and even if the best Battlemage can do is match the 4070, it is likely to match it in all respects. On the other hand, the rumoured specifications of Navi 48, the largest RDNA 4 SKU, indicate that it will probably match or surpass the 4070 Ti Super in raster, but fall behind in ray tracing.
Posted on Reply
#109
DemonicRyzen666
RGAFLI don't think people realise the work involved in getting RT/PT working in games. There's two PT games at the moment that had the full development weight of NVidia behind them. It really does need that amount of work and lots of money to do right. Honestly, you are looking at three years at least before that becomes the norm. The level of RT/PT your seeing now is what will continue for the near future.
Does path tracing completely remove all use of rasterization?
Posted on Reply
#110
RGAFL
DemonicRyzen666Does path tracing completely remove all use of rasterization?
Nope, not at all. Everything in an RT/PT game apart from AI, game mechanics and lighting is raster. In a normal, non-RT/PT game, the lighting is raster too. All RT/PT is doing is taking over how the lighting is done and how it reacts to the textures and materials applied to the 3D models. You see, even the textures and materials are raster. All that happens is that maps get applied to those textures and materials. You have common maps like colour maps, transparency maps, specular maps and bump maps. This is where shader programmers earn their money: they work with these maps to get the effects you see in games.
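If it helps, here's a very rough sketch of how a couple of those maps feed a lighting calculation, written as generic Lambert + Phong shading in Python; it is not any particular engine's shader, just the general idea.

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade(colour_texel, specular_texel, bump_normal, light_dir, view_dir):
    n = normalize(bump_normal)     # bump/normal map perturbs the surface normal
    l = normalize(light_dir)
    diffuse = max(dot(n, l), 0.0)  # Lambert term: facing the light = brighter
    # Phong specular: reflect the light about the normal, compare with view ray
    r = tuple(2.0 * dot(n, l) * nc - lc for nc, lc in zip(n, l))
    spec = specular_texel * max(dot(normalize(view_dir), normalize(r)), 0.0) ** 32
    # The colour map supplies the base colour; the maps just modulate the maths
    return tuple(min(c * diffuse + spec, 1.0) for c in colour_texel)

# One pixel: reddish colour texel, semi-shiny specular texel, slightly bumped normal
print(shade((0.8, 0.2, 0.2), 0.5, (0.1, 0.0, 1.0), (0.3, 0.5, 0.8), (0.0, 0.0, 1.0)))
```

An RT/PT renderer changes how light_dir and the incoming light are found (by tracing rays), but the maps and the material maths stay.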

I think you would all agree that people who turn around and say a game is fully RT/PT do not know what they are talking about. That cannot happen. You need models, textures and materials; the basic building blocks of a game are all raster.

As an example, download Blender (a free ray tracing program). Put any shape you like on the screen. Raster. Texture it or put a material on it. Raster. Put a light in there. Render. Only the light affecting the model and material is RT.
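And if you'd rather script that exercise than click through the UI, the same steps look roughly like this with Blender's bundled Python API (run inside Blender with the default scene's camera still present; the output path is just a placeholder):

```python
import bpy

bpy.ops.mesh.primitive_uv_sphere_add(location=(0, 0, 0))    # the model: raster asset
obj = bpy.context.active_object

mat = bpy.data.materials.new(name="DemoMaterial")           # the material: raster asset
mat.use_nodes = True
obj.data.materials.append(mat)

bpy.ops.object.light_add(type='POINT', location=(3, 3, 3))  # the light

scene = bpy.context.scene
scene.render.engine = 'CYCLES'            # Cycles is the path tracer
scene.render.filepath = "/tmp/demo.png"   # placeholder output path
bpy.ops.render.render(write_still=True)   # only this step actually ray traces
```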
Posted on Reply
#111
Guwapo77
HyderzWould love to see AMD pull 9700 Pro-level performance on Nvidia for RDNA 4, but I'm not too hopeful
Man, I would love to see that too. Unfortunately, AMD is not making a card to compete with the 5090. I've been cheering for AMD to go back to the ATi days, because that 9700 Pro was a beast of a card. We'll see what happens moving forward with RDNA 5.
Posted on Reply
#112
Tek-Check
dgianstefaniAs you said, hoping to see some leadership from AMD in the GPU space would be a good thing.
Both companies show leadership in different areas, with Nvidia being more successful in more of those areas. The majority of buyers buy into one feature set and a minority into the alternative feature set. And that's fine, as buyers make their own choices based on whatever they perceive and think, erroneously or not, is relevant for them.

For example, I could not care less about which company has better RT. It's not a selling point I'd consider when buying a GPU. It's like asking which colour of apple I'd prefer to chew. I care more about seeing content in good HDR quality on an OLED display, and about having enough VRAM for a few good years so that the GPU does not choke too soon in the titles I play.

AMD leads the complex and bumpy, yet necessary, transition to chiplet-based GPUs, which will matter for high-NA EUV production in the future and is often not appreciated enough. Those GPU chiplets do not rain from the clouds. AMD also offers the new DP 2.1 video interface and more VRAM on several models. Those features also appeal to some buyers.

Although Nvidia leads in various aspects of client GPUs, they have also managed to slow down the transition to DP 2.1 ports in the entire monitor industry by at least two years. They abandoned the leadership in this area that they had in 2016, when they introduced DP 1.4. The reason? They decided to go cheap on the PCB and recycle the video traces on the main board from the 3000 series into the 4000 series, instead of innovating and prompting the monitor industry to accelerate the transition. The result is a limited video pipeline on halo cards that is not capable of driving Samsung's 57-inch 8K/2K/240Hz monitor beyond 120 Hz. This will not be fixed until the 5000 series. Also, due to being skimpy on VRAM, several classes of cards with 8 GB are becoming obsolete more quickly in a growing number of titles. For example, a paltry 8 GB plus RT on the then very expensive 3070/Ti series has become a joke just ~3 years later.
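A rough back-of-the-envelope on that 120 Hz cap, in Python, assuming ~3:1 DSC compression and ignoring blanking overhead (so treat the numbers as approximate):

```python
def required_gbps(width, height, hz, bits_per_pixel=30):  # 10-bit RGB
    return width * height * hz * bits_per_pixel / 1e9

# The Samsung 57" is 7680 x 2160: raw 240 Hz is ~119 Gbps, ~40 Gbps after ~3:1 DSC
needed = required_gbps(7680, 2160, 240) / 3

DP14_HBR3_PAYLOAD = 25.92     # Gbps usable on a DP 1.4 (HBR3) link
DP21_UHBR135_PAYLOAD = 52.22  # Gbps usable on DP 2.1 UHBR13.5

print(f"~{needed:.0f} Gbps needed for 240 Hz with DSC")
print("fits DP 1.4?", needed <= DP14_HBR3_PAYLOAD)              # False -> capped at 120 Hz
print("fits DP 2.1 UHBR13.5?", needed <= DP21_UHBR135_PAYLOAD)  # True
```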

AMD needs to work on several features, there is no doubt about it, but it's not as if Nvidia has nothing to fix or address, including basic hardware on the PCB.
Posted on Reply
#113
GodisanAtheist
AMD tends to follow the Alton Brown GPU design philosophy: no single-use tools.

RT on AMD is handled by a bulked-out SP, and they have one of these SPs in each CU. It's a really elegant design philosophy that sidesteps the need for specialized hardware and keeps their die sizes a bit more svelte than Nvidia's. When the GPU isn't doing an RT workload, the SP is able to contribute to rendering the rasterized scene, unlike on Nvidia and Intel.

However, they pay a double penalty when rendering RT, because they lose raster performance in order to actually do the RT calculations, unlike Intel and Nvidia, who have dedicated hardware for it.

It would be interesting if not only the RT unit were redesigned, but there were maybe two or more of them dedicated to each CU. Depending on the complexity of the RT, the mix of SPs dedicated to RT calculations or raster calculations could adjust on the fly...
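A toy throughput model of that double penalty, with made-up numbers just to show the shape of the trade-off:

```python
def shared_sp_throughput(total_sps, rt_fraction):
    """AMD-style shared SPs: raster loses whatever fraction RT takes."""
    return total_sps * (1.0 - rt_fraction), total_sps * rt_fraction

def dedicated_throughput(total_sps, rt_units):
    """Dedicated RT hardware: raster keeps every SP."""
    return total_sps, rt_units

for rt_load in (0.0, 0.2, 0.4):  # light -> heavy RT workload
    raster, rt = shared_sp_throughput(total_sps=100, rt_fraction=rt_load)
    print(f"shared: RT load {rt_load:.0%} -> raster {raster:.0f}, RT {rt:.0f}")

print("dedicated:", dedicated_throughput(total_sps=100, rt_units=25))
```

An on-the-fly mix would amount to picking rt_fraction per frame based on how heavy the RT work is.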
Posted on Reply
#114
Minus Infinity
RGAFLThe major problem with the AMD cards is the way they use the shaders. Where NVidia has dedicated tensor and RT cores to do upscaling/ray tracing etc., AMD uses the shaders they have and dual-issues them. Very good for light RT games, as their shaders are good, probably the best pure shaders out there, but when RT gets heavy they lose about 20-40% from having to do the RT calculations and the raster work. Dedicated shaders or cores will help out a lot as, like I said, the shaders are excellent; power-wise, not far off 4090 performance when used correctly. But AMD do like their programmable shaders.
I think that's the point of this article. AMD has realised it ultimately can't rely on software and shaders any longer to compete. They need some specialised ASIC to help handle RT operations. I'm not getting into the argument over whether RT is a waste of time, because alas, that ship has sailed and Nvidia is calling the shots. Sony's embracing it, Microsoft's embracing it, Intel's embracing it, Samsung's embracing it; AMD just have to live with this fact and get on board. Hopefully, if they can deliver a compelling update, it stops the mindless fanboy bashing they get over something most people don't even use anyway, but hold over them. I'm not against RT, but with a 6800 XT I've never enabled it, and frankly I don't have any interest in the few games that seem to promote it, like Cyberpunk. A lot of the games I've seen demoed for RT look worse to me.

Personally, I'm actually hoping AMD greatly improves AI performance in the GPU, as I'm using AI software that could greatly benefit from it.
Posted on Reply
#115
nguyen
Well, let's hope the RX 8000 cards aren't gonna lose 50-65% of their FPS just by enabling RTGI + reflections + RTAO. Combined with AI upscaling, RT will then be a lot more palatable.

For example, the 7900 GRE loses 58% of its FPS with RT enabled in Ratchet & Clank.


If the impact of RT were minimized to 35% (like with Ampere/Ada), the 7900 GRE would be getting 71 FPS, a very comfortable figure for 1440p, or for 4K with upscaling + frame generation.
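Working backwards from those numbers (rounded, so approximate):

```python
fps_at_35pct_hit = 71
baseline = fps_at_35pct_hit / (1 - 0.35)  # ~109 FPS with RT off
fps_at_58pct_hit = baseline * (1 - 0.58)  # ~46 FPS, roughly today's result

print(f"RT off: ~{baseline:.0f} FPS")
print(f"58% hit (RDNA3 now): ~{fps_at_58pct_hit:.0f} FPS")
print(f"35% hit (Ampere/Ada-like): {fps_at_35pct_hit} FPS")
```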

RT GI, reflections, shadows and ambient occlusion are superior to their rasterized versions, so when anyone asks for better visuals, RT is always the answer. For people who don't care about graphics, RT is not for them.
Posted on Reply
#116
GodisanAtheist
nguyenWell, let's hope the RX 8000 cards aren't gonna lose 50-65% of their FPS just by enabling RTGI + reflections + RTAO. Combined with AI upscaling, RT will then be a lot more palatable.

For example, the 7900 GRE loses 58% of its FPS with RT enabled in Ratchet & Clank.


If the impact of RT were minimized to 35% (like with Ampere/Ada), the 7900 GRE would be getting 71 FPS, a very comfortable figure for 1440p, or for 4K with upscaling + frame generation.

RT GI, reflections, shadows and ambient occlusion are superior to their rasterized versions, so when anyone asks for better visuals, RT is always the answer. For people who don't care about graphics, RT is not for them.
-I would honestly say GI and AO are qualitatively better with RT.

Shadows and reflections are more a matter of preference, and it's a crapshoot whether people can even tell the difference, but GI and AO are basically 100% better with RT.
Posted on Reply
#117
sepheronx
I dunno if it has been said or not.

But do you guys remember PhysX and Hairworks? How AMD struggled or couldn't operate with them? I mean, there were dedicated cards for it; heck, I had one for that space bugs game on that cold planet? I can't remember the name. But yeah, it was used heavily and AMD couldn't work with it. Had to get another card.

Anyway, what I am getting at is that AMD is late to the game, as usual. RT is the new PhysX and Hairworks. Even bigger, actually. And a game changer for lighting. Hell, it is fantastic for horror games.

I am glad they are now actively looking into it. But at this point, for midrange, I don't care who it is (AMD, Intel, Nvidia); so long as I can get a cheaper GPU that can implement RT, I will go for it.
Posted on Reply
#118
stimpy88
It would be so funny to see AMD as fast as or faster than nGreedia at RT in just two generations; we know they have been holding back on their RT hardware for years now, with minimal improvements.
Posted on Reply
#119
RGAFL
sepheronxI dunno if it has been said or not.

But do you guys remember PhysX and Hairworks? How AMD struggled or couldn't operate with them? I mean, there were dedicated cards for it; heck, I had one for that space bugs game on that cold planet? I can't remember the name. But yeah, it was used heavily and AMD couldn't work with it. Had to get another card.

Anyway, what I am getting at is that AMD is late to the game, as usual. RT is the new PhysX and Hairworks. Even bigger, actually. And a game changer for lighting. Hell, it is fantastic for horror games.

I am glad they are now actively looking into it. But at this point, for midrange, I don't care who it is (AMD, Intel, Nvidia); so long as I can get a cheaper GPU that can implement RT, I will go for it.
It is being looked into. I don't think it will be as quick as NVidia's implementation still, but it will be improved. The thing is, these are computationally very heavy calculations; years ago, things like this needed render farms to do a single frame. Are AMD slow to the game? Yes, of course, depending on who you speak to and the games you play. Like I have said previously in another thread, you will still need a lot of raster power: games are getting bigger, more detailed, and with more models, textures and materials. Don't just dismiss the amount of raster power you will still need.
Posted on Reply
#120
Naito
I honestly can't see why anyone would be against RT/PT in games...

Once hardware matures and engines make efficient use of the tech, it's going to benefit everyone, including developers. For consumers, games will appear visually more impressive and natural, and will have fewer artifacts, distractions, and shortfalls (screen-space technologies, anyone?); for developers, it'd mean less hacking together of believable approximations (some implementations are damn impressive, though), less baked lighting, less texture work, and so on.

The upscaling technology that makes all this more viable on current hardware may not be perfect, but it's pretty darn impressive in most cases. Traditionally, I was a fan of more raw rendering; I often avoided TAA, many forms of post-processing, motion blur, etc., due to the loss of perceived sharpness in the final image, but DLSS/DLAA have converted me. In most games that support the tech, I've noticed very few visual issues. There is an overall softer finish to the image, but it takes the roughness of aliasing away; it makes the image feel less 'artificial' and blends the scene into a more cohesive presentation. Even in fast-paced shooters like Call of Duty, artifacting in motion is very limited and does not seem to affect my gameplay and competitiveness. Each release of DLSS only enhances it. I have not experienced Intel's or AMD's equivalent technologies, but I'm sure they're both developing them at full steam.

Yes, I'm an Nvidia buyer, and have been for many generations, but I'm all for competition. Rampant capitalism without competition leads to less choice, higher prices and stagnation. Intel and AMD not only staying in the game, but also innovating and hopefully going toe-to-toe with Nvidia, will benefit everyone, regardless of which 'team' you're on.

Technology is what interests me the most, not the brand, so whoever brings the most to the table, for an acceptable price, is who gets my attention...



Could it be that one day there is no rasterisation, or traditional rendering techniques, in games at all? Would AI models just infer everything? Sora is already making videos and Suno is generating music; is it much of a stretch to think game rendering is next?
Posted on Reply
#121
mechtech
I'd rather have a card with no RT; give me the discount in $ for the % fewer transistors.
Posted on Reply
#122
freeagent
What the heck you guys?

For how long have you all been saying RT doesn't matter.. Raster the world!

And now this :confused::confused:

:p
Posted on Reply
#123
RGAFL
NaitoCould it be that one day there is no rasterisation, or traditional rendering techniques, in games at all? Would AI models just infer everything? Sora is already making videos and Suno is generating music; is it much of a stretch to think game rendering is next?
Ummm, no. All that will happen is that they will get more complex. Even if AI generates them, the models, textures and materials are still raster. RT/PT only affects the lighting and how the light reacts with materials/textures. What is so hard to grasp about this? It just happens to be the most computationally expensive thing to do in a graphics pipeline. We've had the same way of doing 3D for how many years now, in one of the most forward-looking industries, and the way a 3D game is made is basically the same. The only things that have changed are resolution, texture/material quality, the quality of the game engines, and the speed of the hardware displaying it all. If a different way of doing things has not been found by now, don't bank on it changing anytime soon.
Posted on Reply
#124
GodisanAtheist
freeagentWhat the heck you guys?

For how long have you all been saying RT doesn't matter.. Raster the world!

And now this :confused::confused:

:p
- I think it's important to be careful with words, and also with sampling a point in time, etc.

RT still really doesn't, and really hasn't, "mattered" for the last several gens. It doesn't look much if any better than raster, and its performance hit is way too high. Every game except Metro Exodus has kept a rasterized lighting system with RT haphazardly thrown on top.

Nothing about RDNA4 is going to change that equation.

That doesn't mean some people don't prefer RT lighting, or just like the idea of the tech, even if they aren't really gung-ho about the practical differences vs raster.
Posted on Reply
#125
freeagent
Bah.. I use it all the time, looks good.
Posted on Reply