Tuesday, August 22nd 2023

NVIDIA Announces DLSS 3.5 Ray Reconstruction Technology, Works on GeForce 20 and Newer

At this year's Gamescom, NVIDIA announced several new updates for the GeForce gaming crowd. These are led by the new DLSS 3.5 Ray Reconstruction feature, available this Fall for all GeForce RTX GPUs (RTX 20-series and later). DLSS 3.5 introduces "Ray Reconstruction," a technology specifically designed to improve the way ray-traced elements look in games. While traditional rasterization calculates every single pixel of every frame, real-time ray tracing cannot do that, for performance reasons. During rendering, only a few rays are cast in a coarse grid, which leaves empty "black" gaps between the ray outputs. To fill those in, a denoiser runs various algorithms to literally fill in the blanks.
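As a rough mental model of that sparse-sampling-plus-denoising idea, here is a toy sketch in Python. Everything here is our own simplification: the "scene" is a 1D strip of radiance values, rays are cast at every fourth pixel, and the "denoiser" is plain linear interpolation (real denoisers use far more sophisticated spatial and temporal filters).

```python
import random

def trace_sparse(scene, stride=4):
    # Cast rays only at every stride-th pixel; the rest stay empty "black" gaps.
    samples = [None] * len(scene)
    for x in range(0, len(scene), stride):
        samples[x] = scene[x]  # stand-in for one traced ray's radiance
    return samples

def toy_denoise(samples):
    # Toy "denoiser": fill each gap by interpolating between the nearest
    # traced samples on either side.
    known = [x for x, v in enumerate(samples) if v is not None]
    out = list(samples)
    for x, v in enumerate(samples):
        if v is not None:
            continue
        left = max(k for k in known if k < x)
        right = min((k for k in known if k > x), default=None)
        if right is None:
            out[x] = samples[left]  # no sample to the right, extend the last one
        else:
            t = (x - left) / (right - left)
            out[x] = samples[left] * (1 - t) + samples[right] * t
    return out

scene = [random.random() for _ in range(16)]  # "true" radiance per pixel
filled = toy_denoise(trace_sparse(scene))
```

The traced pixels survive exactly; everything in between is guessed, which is why denoiser quality matters so much for the final look.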

With DLSS 3.5, NVIDIA introduces a new denoiser that's optimized to work hand-in-hand with DLSS 2 upscaling, delivering image quality that is both better-looking and more physically correct. This feature relies on the Tensor Cores (not the RT Cores, we asked), so it is available on all GeForce RTX graphics cards (Turing and newer).

The picture below shows the traditional way to do RT effects. Please note that DLSS 2 upscaling is enabled here—the image is composited at low resolution first and then scaled to native size.

In a first step, the engine creates the geometry and materials, but without any shading. This information is used to build the BVH acceleration structure for ray tracing, which helps determine where rays intersect world geometry. Next, a number of rays are cast and their paths traced to calculate intersections, possibly letting them bounce, maybe even several times. These results are then fed to the denoiser, which turns the individual pixels into a continuous image that looks like a ray-traced reflection, shadow, lighting or ambient occlusion effect. With upscaling enabled, the denoiser generates output at the lower render resolution, not the final native output—the denoiser isn't even aware of the final resolution. On top of that, the upscaler doesn't know anything about rays; it just sees the pixel output from the denoiser—all the original ray tracing values are lost at that stage.
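Schematically, the classic pipeline orders its stages like this. This is a runnable toy sketch, not real engine code: the function names and stub data structures are our own, chosen purely to make visible where information gets lost.

```python
RENDER_RES = (1280, 720)   # internal render resolution (upscaling enabled)
NATIVE_RES = (3840, 2160)  # final native output resolution

# Trivial stubs, just enough to make the data flow visible.
def rasterize_geometry(scene, res): return {"res": res, "materials": scene}
def build_bvh(scene):               return {"nodes": len(scene)}
def trace_rays(gbuffer, bvh):       return {"ray_samples": bvh["nodes"] * 4}
def denoise(rays, res):             return {"res": res, "rt_pixels": rays["ray_samples"]}
def composite(gbuffer, rt_image):   return {"res": rt_image["res"], "layers": ["raster", "rt"]}
def upscale(frame, res):            return {**frame, "res": res}

def classic_rt_frame(scene):
    gbuffer = rasterize_geometry(scene, RENDER_RES)  # geometry + materials, no shading yet
    bvh     = build_bvh(scene)                       # acceleration structure for intersections
    rays    = trace_rays(gbuffer, bvh)               # sparse, possibly multi-bounce samples
    rt_img  = denoise(rays, RENDER_RES)              # denoiser never sees the native resolution
    frame   = composite(gbuffer, rt_img)             # from here on, only pixels remain:
    return upscale(frame, NATIVE_RES)                # the raw ray data is already gone

frame = classic_rt_frame(["wall", "car", "lamp"])
```

Note how `upscale` receives only the composited frame: by the time it runs, the ray samples have been flattened into pixels.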
The biggest problem with denoisers is that they rely on previous frames to "collect" enough pixel data for the final image. This means the RT output is an average of several previous frames. The slide above details such problematic cases: for example, the mirror of a moving car gets combined across several frames, which results in ghosting artifacts. Another problem is subtle illumination effects and reflections that just look smeared out.
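The temporal accumulation behind this can be sketched as a per-pixel exponential moving average, a deliberately minimal stand-in for real accumulation schemes:

```python
def temporal_accumulate(history, current, alpha=0.1):
    # Blend the new frame into the accumulated history, per pixel.
    # Low alpha: stable but ghosty. High alpha: responsive but noisy.
    return [h * (1 - alpha) + c * alpha for h, c in zip(history, current)]

# A bright highlight (1.0) moves one pixel per frame across a dark strip.
width = 8
history = [0.0] * width
for frame in range(4):
    current = [1.0 if x == frame else 0.0 for x in range(width)]
    history = temporal_accumulate(history, current)

# Pixels the highlight has already left still glow faintly: that trail
# of stale brightness is exactly the ghosting described above.
```

The same mechanism also explains the smearing: any high-contrast detail that changes faster than the accumulation window gets averaged away.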

NVIDIA's innovation with DLSS 3.5 is that it combines the denoising and upscaling steps into a single pass that has more information available, which promises a higher-quality output image. The low-res ray tracing output is combined with the output from rasterization and the motion vectors, and everything is painted directly into a high-res output image, 4K in this case. The DLSS 3.5 algorithm also takes previous frames into account (temporal feedback), just like DLSS 2. Once upscaling is completed, another pass runs the DLSS 3 Frame Generation feature (when enabled).
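In data-flow terms, the difference can be sketched like this. The code is a toy, and the input names are our guesses at what the network consumes, based on NVIDIA's description, not a real API:

```python
RENDER_RES, NATIVE_RES = (1280, 720), (3840, 2160)

def classic(ray_samples, raster, render_res, native_res):
    # Step 1: denoise at render_res; the raw ray data is consumed here.
    denoised_pixels = len(ray_samples)  # toy: just count the samples
    # Step 2: the upscaler sees only finished low-res pixels.
    return {"res": native_res,
            "inputs": ["denoised_pixels", "raster"],
            "pixels": denoised_pixels}

def ray_reconstruction(ray_samples, raster, motion_vectors, prev_frames, native_res):
    # One combined pass with everything on the table: raw ray samples,
    # raster output, motion vectors and temporal feedback (previous
    # frames), painted straight into the native-resolution image.
    return {"res": native_res,
            "inputs": ["ray_samples", "raster", "motion_vectors", "history"]}

old = classic([0.2, 0.7, 0.1], "g-buffer", RENDER_RES, NATIVE_RES)
new = ray_reconstruction([0.2, 0.7, 0.1], "g-buffer", "motion", [], NATIVE_RES)
```

The promised quality gain comes from the second signature: the combined pass still knows which pixels came from actual rays when it decides how to fill and sharpen the final image.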

Here are some results provided by NVIDIA that show how DLSS 3.5 Ray Reconstruction promises to enhance RT fidelity over classic denoising techniques.
Ray Reconstruction has a negligible performance cost of its own. In frame-rate comparisons NVIDIA showed, taken on an RTX 40-series GPU, DLSS 3.5 RR offers marginally higher frame-rates than DLSS 3 FG. NVIDIA made it clear that DLSS 3.5 is not a performance-enhancing feature—the focus is on image quality. Depending on the scene, performance will be virtually identical, slightly better or slightly worse. In theory, game developers could reduce the number of rays when DLSS 3.5 is enabled, which would lower the RT performance hit and improve frame-rates—still with improved image quality. There's no handholding for that though; it's purely up to the game developer and out of scope for NVIDIA's DLSS 3.5 implementation.

DLSS 3.5 will not only be available in games, but also in the professional D5 Render application, where it will enable real-time previews of stunning detail.
When it releases this Fall, DLSS 3.5 will be enabled on all GeForce RTX GPUs through a driver update. You now have three distinct subsets of DLSS—Super Resolution (SR), the core image upscaling tech; Frame Generation (FG), introduced with DLSS 3, which doubles frame-rates by generating alternate frames using AI; and now the new Ray Reconstruction (RR) feature. DLSS 3.5 RR will work on all RTX GPUs, as all generations include Tensor Cores. On the older RTX 20-series "Turing" and RTX 30-series "Ampere," DLSS 3.5 will work exactly like it does on the latest RTX 40-series "Ada," except that FG won't be available. Games with support for Ray Reconstruction will have an additional "enable Ray Reconstruction" checkbox, just like there's an "enable Frame Generation" checkbox. We confirmed with NVIDIA that running DLAA with Ray Reconstruction is supported—you don't have to use the upscaler at all times.
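The branding above can be summarized as a small support matrix. The lookup-table structure below is our own, purely illustrative way of expressing what the announcement states:

```python
# Which DLSS sub-feature runs on which GeForce RTX generation,
# per NVIDIA's announcement (the table itself is hypothetical helper code).
DLSS_FEATURES = {
    "Super Resolution (SR)":   {"Turing", "Ampere", "Ada"},  # DLSS 2 upscaling
    "Frame Generation (FG)":   {"Ada"},                      # DLSS 3, RTX 40-series only
    "Ray Reconstruction (RR)": {"Turing", "Ampere", "Ada"},  # DLSS 3.5, all RTX GPUs
}

def supported_features(generation):
    # List every DLSS sub-feature available on a given GPU generation.
    return sorted(name for name, gens in DLSS_FEATURES.items() if generation in gens)
```

So a Turing or Ampere owner gets SR and RR, while only Ada owners get all three.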

While the naming is a bit confusing, it's great to see that NVIDIA is constantly improving their technology. There's no news yet regarding AMD's FSR 3; perhaps an announcement will come at Gamescom. From a technical standpoint, we'd classify Ray Reconstruction as "DLSS 2.5," because it has absolutely nothing to do with DLSS 3 Frame Generation and is closely interlinked with DLSS 2 upscaling. It seems NVIDIA is now releasing all similar technologies under their established "DLSS" brand name, which is then segmented by feature. For example, "DLSS 3 Frame Generation" is only supported on GeForce 40—this announcement does not change that. The new "DLSS 3.5 Ray Reconstruction" works on GeForce 20 and newer though, just like "DLSS 2 Upscaling" works on GeForce 20, too.

89 Comments on NVIDIA Announces DLSS 3.5 Ray Reconstruction Technology, Works on GeForce 20 and Newer

#76
AusWolf
gffermariYou must be joking now...

When you walk in the game, you will definitely notice that the light bounces on the surfaces have latency. It's ridiculously obvious.
In this image it's more than obvious that the pavement does not reflect the lights above because it's some msecs behind in time.
The same happens with the reflections. The resolution is low at first and only gets better later, after it is rendered.

The problem is people do not realize how difficult it is for hardware to process this in msecs, and post bullshXts about RT.
A good CPU can render one, just one, frame in minutes while a GPU has to render 60+ frames in ONE second.
How can you say that nVidia, or even AMD, are not good enough for delivering RT?
Their GPUs, both radeon and geforce, are multiple years ahead of the CPUs.

If it weren't about RT, we would get the same oily shXtty plasticky graphics like all the console ports we got this year.
Increasing the texture resolution is NOT better graphics.
I think you misunderstand me. I'm not talking about reflections, but light sources that seem to be missing from the picture with ray reconstruction enabled.
LeiPurple rays came in, and then the denoiser said "let there be light..." and saw those details as unnecessary noise:D

Those flares on top were like icing on the cake. Sad to see the new DLSS punched them out.

Here's the same frame with DLSS 6. Nvidia finally managed to animate each frame. :rockout:

DLSS 56487 needs to add a noiser filter for the de-noiser to be at its best.
#77
Lei
gffermariHow can you say that nVidia, or even AMD, are not good enough for delivering RT?
Their GPUs, both radeon and geforce, are multiple years ahead of the CPUs.

If it weren't about RT, we would get the same oily shXtty plasticky ...
#78
gffermari
AusWolfI think you misunderstand me. I'm not talking about reflections, but light sources that seem to be missing from the picture with ray reconstruction enabled.
The missing light is not a DLSS issue. It's just half a second later and the lighting has changed.
The whole point is the latency of the light bounces.
The image nVidia used is not good for showing what DLSS 3.5 does.
A reflection in a mirror, or a video, would be better for that.

Another example:
The light has just been switched off, but the information that the light bounces should stop hitting the darn surface has not arrived yet.
In Cyberpunk it's way more noticeable because there are changing lights everywhere.



#79
AusWolf
gffermariThe missing light is not a DLSS issue. It's just half a second later and the lighting has changed.
The whole point is the latency of the light bounces.
The image nVidia used is not good for showing what DLSS 3.5 does.
A reflection in a mirror, or a video, would be better for that.

Another example:
The light has just been switched off, but the information that the light bounces should stop hitting the darn surface has not arrived yet.
In Cyberpunk it's way more noticeable because there are changing lights everywhere.



Ah, I see. We'll have to look at it in action from third-party sources, I guess.
#80
fevgatos
MahboiIt's 99% the same thing and in 99% of cases you will never notice any extra detail, any difference at all, unless you do a side by side comparison. And even then you need a lens.

But the cultists must believe what the Master says...
Eventually they'll release some explanation of what they're actually doing that's different. AI-ifying the denoising process can't hurt much, and should bring some speed upgrades/faster RT. That's the only advantage I can think of.
I'm sorry to say, but you sound like the one in a cult here.
AusWolfIt's a low resolution image, so we can't really see how much detail is missing. And on the topic of missing detail...
I marked the light sources that I was talking about in my previous post, and their reflections on the floor. The last picture seems to be missing them. It's easy to produce more frames per second with less detail.
It's not missing anything; the light source in the first 3 pictures shouldn't be there, but without RR it takes too long for the information to be processed. I guess a video would do a better job of demonstrating what it does.
#81
Mahboi
gffermariThe missing light is not a DLSS issue. It's just half second later and the lighting has changed.
The whole point is the latency of the light bounces.
The image nVidia used is not good for showing what DLSS 3.5 does.
A reflection on a mirror would be better for that or a video.

Another example:
The light has just been switched off but the information of the light bounces to stop hitting the darn surface has not arrived yet.
In cyberpunk is way more noticeable because there are changing lights everywhere.
Finally an actual answer!
So yeah, the AI denoiser accelerates the denoising, that's the only thing I thought it would do.
Well, it's a nice improvement.
#82
W1zzard
gffermariThe image nVidia used is not good for showing what DLSS 3.5 does.
That's not exactly correct. Previous denoisers use temporal information, so they average out a bunch of frames over time, which smears out high-contrast areas. With DLSS 3.5, NVIDIA promises this is correctly taken into account. Their example is the front lights of the car in CP2077, which are just one large bright area with the old denoiser; with 3.5 you can clearly see the light cone.
#83
Lycanwolfen
So if I'm reading this right, DLSS off shows the true power of an expensive card. DLSS on shows oooo, let's make it better by using software to lower the res, then use AI to clean it up at high res.

I remember when video card companies improved their hardware and you did not need any software tweaking crap to play games. The 10-series was the last true pure-hardware Nvidia cards.

I mean, RTX is cool and looks great, but if the hardware cannot do it without software rendering, then it's not an improvement. If that were the case, the on-board Intel graphics could be tweaked to do the same thing.
#84
AusWolf
LycanwolfenSo if I'm reading this right, DLSS off shows the true power of an expensive card. DLSS on shows oooo, let's make it better by using software to lower the res, then use AI to clean it up at high res.

I remember when video card companies improved their hardware and you did not need any software tweaking crap to play games. The 10-series was the last true pure-hardware Nvidia cards.

I mean, RTX is cool and looks great, but if the hardware cannot do it without software rendering, then it's not an improvement. If that were the case, the on-board Intel graphics could be tweaked to do the same thing.
Either this, or DLSS/FSR is a push for everybody to buy a 4K monitor that they otherwise couldn't game on (not to mention wouldn't want).
#85
Dirt Chip
LycanwolfenSo if I'm reading this right, DLSS off shows the true power of an expensive card. DLSS on shows oooo, let's make it better by using software to lower the res, then use AI to clean it up at high res.

I remember when video card companies improved their hardware and you did not need any software tweaking crap to play games. The 10-series was the last true pure-hardware Nvidia cards.

I mean, RTX is cool and looks great, but if the hardware cannot do it without software rendering, then it's not an improvement. If that were the case, the on-board Intel graphics could be tweaked to do the same thing.
It's definitely an improvement, because it enables you to experience RT @4K @>60 fps. Otherwise you couldn't do it.
It achieves this with a compromise in the form of DLSS, instead of the traditional graphics settings tuning.

He who can't stand the notion of compromise will never accept DLSS (or FSR or XeSS) and will be just fine with RT off.

The vast majority who can compromise, each to his own degree, will be able to enjoy this "precious" eye candy. Nothing more, nothing less.
#86
fevgatos
LycanwolfenSo if I'm reading this right, DLSS off shows the true power of an expensive card. DLSS on shows oooo, let's make it better by using software to lower the res, then use AI to clean it up at high res.

I remember when video card companies improved their hardware and you did not need any software tweaking crap to play games. The 10-series was the last true pure-hardware Nvidia cards.

I mean, RTX is cool and looks great, but if the hardware cannot do it without software rendering, then it's not an improvement. If that were the case, the on-board Intel graphics could be tweaked to do the same thing.
You are reading this wrong. DLSS/FSR allows you to improve image quality with no hit to framerate. 4K + DLSS Q looks better than 1440p native and has similar performance, so why would you play at 1440p native instead of 4K DLSS Q? Rhetorical question.
#87
gffermari
....yes, if normal raster games require, let's say, 10 calculations per second from the GPU, a simple RT game with just reflections and shadows asks for 1,000 calc/s.
If you don't want to compromise by using DLSS/FSR, wait 15-20 years and maybe you'll get a GPU that can render RT games natively.

I'll say it again: people have no idea how RT works, how difficult it is to calculate, etc.
Are you happy seeing models with more triangles and better textures as the only improvement in games?
#88
AusWolf
fevgatosYou are reading this wrong. DLSS/FSR allows you to improve image quality with no hit to framerate. 4K + DLSS Q looks better than 1440p native and has similar performance, so why would you play at 1440p native instead of 4K DLSS Q? Rhetorical question.
If you have a 1440p monitor, you'd want 1440p native, ideally. Not 1440p + DLSS. 4K is probably a different story. Not many graphics cards can game at such high resolution natively.
#89
Legacy-ZA
ChomiqIs this a joke? DLSS 3 requires 40-series but somehow 3.5 runs on 20-series and up?
FSR 3.0 has frame generation and will run on all GPUs that can do DX11+

nVidia is just being their deceiving/scummy selves as usual.