Tuesday, January 31st 2023
Cyberpunk 2077 Gets NVIDIA DLSS 3 Support
CD Projekt Red today released a major update to Cyberpunk 2077, which adds support for the NVIDIA DLSS 3 performance enhancement. DLSS 3 leverages the Optical Flow Accelerator of GeForce RTX 40-series "Ada" graphics cards to generate roughly every alternate frame entirely with AI, without involving the main graphics rendering pipeline, which nearly doubles frame-rates at quality comparable to native resolution. Used in conjunction with the DLSS quality settings, DLSS 3 effectively works as a frame-rate multiplier. The feature also improves the energy efficiency of the GPU. DLSS 3 requires a GeForce RTX 40-series GPU.
Source: NVIDIA
75 Comments on Cyberpunk 2077 Gets NVIDIA DLSS 3 Support
"equally crystal clear" only if you're legally blind. It'll feel like sped up footage being run through a blender.
For example, with DLSS + No Frame Generation, let's say you're getting 20 FPS.
With DLSS + Frame Generation, you may be getting 35 FPS, but the input latency will still be at the 20 FPS level or a bit higher (it takes time to insert generated frames), because those 15 extra frames are all generated between the 20 frames the game is actually rendering.
Frame generation input latency will be less noticeable, and probably mostly negligible, if you're already getting 60+ FPS without it.
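To put rough numbers on that reasoning, here's a quick back-of-the-envelope sketch (Python, purely illustrative; the figures are the hypothetical 20 FPS example from above, not measurements, and the exact overheads vary by game and settings):

    # Illustrative only -- the hypothetical 20 FPS example above, not measured data.
    base_fps = 20.0                             # frames the game engine actually renders
    render_interval_ms = 1000.0 / base_fps      # 50 ms between "real" frames

    # Frame generation interpolates one extra frame between each pair of real frames,
    # so the screen shows up to twice as many frames (the 35 FPS figure reflects overhead).
    displayed_fps = base_fps * 2
    display_interval_ms = 1000.0 / displayed_fps

    # Inputs are only sampled on real frames, and a generated frame can't be shown until
    # the next real frame exists to interpolate towards, so responsiveness stays at
    # (or slightly worse than) the 20 FPS baseline.
    approx_input_latency_ms = render_interval_ms + display_interval_ms

    print(f"Displayed: up to {displayed_fps:.0f} FPS ({display_interval_ms:.0f} ms per shown frame)")
    print(f"Responsiveness: still ~{approx_input_latency_ms:.0f} ms, no better than the {base_fps:.0f} FPS baseline")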
DLSS 3 increases blur and wrecks input latency... um... if you're watching a movie and not controlling anything, 60 fps is more than enough; the entire reason you want 120 fps comes down to those two things, and DLSS 3 makes both of them worse.
Dude, you couldn't be more confused. Just read the reply I made to the comment before yours; you'll probably understand (more probably not, you're beyond help).
I say, simply judge the output quality and input latency, rather than drawing an arbitrary conclusion about where the technique used crosses some self-imposed line in your brain. That's actually something that crossed my mind a few weeks ago: my LG TV can't do it at a variable refresh rate, but presumably even the older OFA of 20/30-series cards could just generate a black frame between rendered frames, perhaps even slightly boost the brightness of the rendered frames to offset it too... could legit help.
Love the comments from people who've clearly not seen it with their own eyes and keep regurgitating only the negative aspects of reviews they watched or read :P - Having tried it, it's a great feature.
After trying it myself, I have to say it exceeded my expectations.
I really wish Nvidia hadn't called frame generation DLSS 3, as it has nothing to do with upscaling; they should have come up with a different name. Especially when you can run DLSS 2 and "DLSS 3" at the same time, or just DLSS 2, or just frame generation, independently of each other.
These are artificially generated frames. The input latency will not be any lower than whatever you're getting without frame generation, because the generated frames are not frames your inputs are actually interacting with.
My main worry is that, going forward, Nvidia will put less effort into architectural improvements and just rely on AI trickery to do the heavy lifting.
Take lighting, for example. In a lot of games, still today, the ambient occlusion is baked. That means it's rendered offline using ray tracing, saved into a texture, and just displayed as a texture. It's not real, but hey, that's good enough. The main problem is that if you add a dynamic light source, it will light the area and you will still see the baked shadows, which looks odd.
To fix that, developers implemented various ways of doing ambient occlusion in real time; most of them are approximations based on the screen-space z-buffer (there's a toy sketch of the idea at the end of this post). It's better when you have dynamic lighting, but it's way less accurate.
Then you have ray tracing, which many people call a gimmick because it's really expensive to do the real thing, and it cuts performance to provide much the same results as baked lighting. (Or the improvement over real-time ambient occlusion isn't obvious to people who don't know what they're looking at.)
Another example is how games do level of detail (LOD). In many games, the background stuff is just 2D textures to save on rendering time. That is also a gimmick.
When you go over all the things a game has to do to run smoothly, it's full of "gimmicks".
And none of that stuff is a big deal. It would be nice to have games that are fully path-traced with realistic materials and so on, but who wants to play at one frame per two hours?
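And for the curious, here's that toy sketch of what a screen-space z-buffer approximation of ambient occlusion does (Python/NumPy purely for illustration; real SSAO runs in a shader and samples a hemisphere in view space, so this is just the bare idea, not any engine's implementation):

    import numpy as np

    def ssao_like(depth, radius=2, bias=0.02):
        """Toy screen-space AO: darken pixels whose z-buffer neighbours are
        noticeably closer to the camera, i.e. pixels sitting in creases."""
        occlusion = np.zeros_like(depth)
        samples = 0
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                if dx == 0 and dy == 0:
                    continue
                samples += 1
                # np.roll wraps at the screen edges -- fine for a toy example.
                neighbour = np.roll(np.roll(depth, dy, axis=0), dx, axis=1)
                # A neighbour with smaller depth is closer to the camera, hinting at
                # nearby geometry that would block ambient light.
                occlusion += (neighbour < depth - bias).astype(depth.dtype)
        return 1.0 - occlusion / samples        # 1 = fully lit, 0 = fully occluded

    # Toy depth buffer: a flat wall (depth 0.5) with a small recess (depth 0.6).
    # The recess comes out darker purely from depth information -- no light transport
    # at all, which is exactly why it's "way less accurate" than the real thing.
    depth = np.full((8, 8), 0.5)
    depth[3:5, 3:5] = 0.6
    print(ssao_like(depth).round(2))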
Anecdotal evidence isn't evidence in either case.
TL;DR:
Native: ~40 FPS, 100 ms PC latency with Reflex OFF, 50 ms PC latency with Reflex ON
DLSS Q: ~60 FPS, 70 ms PC latency with Reflex OFF, 35 ms PC latency with Reflex ON
DLSS Q + FG: ~100 FPS, 50 ms PC latency (Reflex ON)
This patch introduced Nvidia Reflex, so previously everyone had to use DLSS 2 to get 60 FPS with 70 ms of input latency. Was the game unplayable then? Nope.
After this patch, it's 100 FPS with 50 ms of input latency. Nvidia already offered a solution to the input latency problem, but haters will only focus on the problem :).
Looks like lots of people here find FG a positive experience too.
And I KNOW it sounds pretentious. I don't really expect people to just believe me, but perhaps you can at least see how it's possible in a logical sense. When you put everything about these technologies under the magnifying glass and try to take samples that encapsulate what they're doing, they tend to fall short. With all of these effects, their core strength is in what they push out of the way. If you look for something overt or earth-shattering, you often find something boring and strange instead. It's a matter of missing the full context of those elements within the entire space. The main strength of RT effects is the way they sneak in and raise the overall plausibility factor. It's just less of a strain to believe you're there. Many different kinds of games can benefit from these things. While lighting has always been faked, RT is the better way to fake it.
The DLSS tech goes hand in hand with that. The challenge has always been having the grunt to get enough throughput for practical amounts of real-time accuracy (or at least, correction). Correction is more attainable right now. It's not ideal, and there are tradeoffs. But from every experience I've had with it, it's hard for me not to see it as very worthwhile tech. Bridging that performance gap through sneaky unburdening approaches allows for an increase in overall plausibility, at the cost of that last layer or two of image fidelity.
And you know what? That IS subjective. I think it's fair to not like what you lose in the image. But I think in time that could change, and I don't think what it offers in impact is worth entirely discarding. RT is pretty interesting as a tool, and who knows what other uses devs might find for it. To me, if it allows for better fidelity of conveyance, more convincing expression in the visuals... that stands out as something very valuable. It doesn't automatically make a game's visuals better. But it DOES make for a better platform to convey visuals that are fundamentally good. For instance, some classic games really look great with a proper RT conversion, and the reason they look so good is that the RT works in concert with very good visual design.
Anything that has the potential to elevate the experiences possible in games is worth keeping on the radar, at the least. It's prohibitively expensive - that's worth some outrage. I can't even use this... I'll be stuck with my 3060 Ti for a while... and I only got lucky that a friend cut me a deal on a spare he happened to snipe. Nvidia really, really is not a great company. I really don't care about DLSS or RTX as brands. But the technology itself is good, has IMO proven its worth, and to me shows promising future potential. If tricks like machine-learning super scaling and frame generation can open the door to it, I can't see that as a bad thing. The only bad thing about it is the absurd cost/availability.