Tuesday, January 31st 2023

Cyberpunk 2077 Gets NVIDIA DLSS 3 Support

CD Projekt Red today released a major update to Cyberpunk 2077 that adds support for the NVIDIA DLSS 3 performance enhancement. DLSS 3 leverages the Optical Flow Accelerator component of GeForce RTX 40-series "Ada" graphics cards to generate every alternate frame almost entirely with AI, without involving the main graphics rendering pipeline, which can nearly double frame rates at quality comparable to native resolution. Used in conjunction with the DLSS quality settings, DLSS 3 effectively works as a frame-rate multiplier, and it also improves the GPU's energy efficiency. The feature requires a GeForce RTX 40-series GPU.
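To build intuition for what "frame-rate multiplier" means here, the toy sketch below interleaves one naively blended frame between each pair of rendered frames. DLSS 3 uses optical flow and a neural network rather than a simple blend, and all names in this snippet are hypothetical:

```python
# Toy illustration of frame generation as a frame-rate multiplier.
# DLSS 3 is far more sophisticated (optical flow + AI); this sketch
# only shows where the "extra" frames sit in the output stream.

def blend(frame_a, frame_b, t=0.5):
    """Linear blend of two frames (flat lists of pixel intensities)."""
    return [(1 - t) * a + t * b for a, b in zip(frame_a, frame_b)]

def with_frame_generation(rendered_frames):
    """Interleave one generated frame between each pair of rendered frames."""
    out = []
    for a, b in zip(rendered_frames, rendered_frames[1:]):
        out.append(a)
        out.append(blend(a, b))  # generated; the game engine never sees it
    out.append(rendered_frames[-1])
    return out

rendered = [[0, 0], [10, 20], [20, 40]]   # three "real" frames
displayed = with_frame_generation(rendered)
print(len(displayed))  # 5 displayed frames from 3 rendered
```

Note that the generated frames are derived purely from already-rendered ones, which is why the technique multiplies displayed frame rate without touching the simulation.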
Source: NVIDIA

75 Comments on Cyberpunk 2077 Gets NVIDIA DLSS 3 Support

#26
TheinsanegamerN
clopezi: It's not fake frames, it's generated frames, and very well done.

Of course, as @Raiden85 said, if you look for pixel errors you will find them, but in-game it's imperceptible; the game is a lot smoother and equally crystal clear.
It looks like total garbage, jittery pixels and muddy textures.

"equally crystal clear" only if you're legally blind.
mrnagant: When you are getting 20 fps with DLSS off, won't it still feel like 20 fps even with DLSS 3 turned on and getting 80 fps?
It'll feel like sped-up footage run through a blender.
#27
Raiden85
Just gave frame generation a go in this game and it honestly looks absolutely fine to me, even when driving at speed: no jittery pixels, and textures are just fine at 4K. Even with DLSS on quality, or off so it runs at native 4K with only frame generation on, it looks great.
#28
Psychoholic
I went in to try it out prepared to hate it, but I must say I'm impressed: around 130-140 FPS with FG turned on, and I can't tell any difference in latency.
#29
pavle
If it doesn't increase input latency, the published fps numbers are the true next-generation performance! It's a method, and for now apparently the method to achieve it.
#30
Space Lynx
Astronaut
@maxus24 Can we expect a comparison chart soon? I am curious your thoughts.
#31
EatingDirt
pavle: If it doesn't increase input latency, the published fps numbers are the true next-generation performance! It's a method, and for now apparently the method to achieve it.
Input latency will be increased compared to what you'd get if you were actually rendering the FPS displayed on the FPS counter.

For example, with DLSS + No Frame Generation, let's say you're getting 20 FPS.

With DLSS + Frame Generation, you may be getting 35 FPS, but input latency will still be at the 20 FPS level or a bit higher (it takes time to insert generated frames), because those 15 extra frames are all generated between the 20 frames the game is actually rendering.

Frame generation input latency will be less noticeable, and probably mostly negligible, if you're already getting 60+ FPS without it.
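The arithmetic behind the example above can be sketched in a few lines. The 20 and 35 FPS figures come from the comment itself; this is a simplified model that ignores queueing overhead, and the function name is hypothetical:

```python
# Frame generation raises displayed FPS, but input latency still tracks
# the rate at which the game actually renders and simulates. Simplified:
# one rendered frame's worth of time is taken as the latency floor.

def frame_time_ms(fps):
    """Duration of one frame at a given frame rate, in milliseconds."""
    return 1000.0 / fps

rendered_fps = 20    # what the game actually renders (DLSS, no FG)
displayed_fps = 35   # what the counter shows with FG on

latency_floor = frame_time_ms(rendered_fps)      # ~50 ms: what you feel
latency_if_real = frame_time_ms(displayed_fps)   # ~28.6 ms: what 35 real FPS would feel like
print(round(latency_floor, 1), round(latency_if_real, 1))
```

The gap between those two numbers is why a frame-generated 35 FPS does not feel like a natively rendered 35 FPS.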
#32
Garrus
rv8000: “Cyberpunk 2077 gets improved ghosting and visual artifacting support”

FTFY

And so the “buy this $1800 GPU for improved visuals while decreasing the quality of your visuals” fad continues.
Well said. And the thing I find funniest: the main reason high frame rates are wanted is not visuals, but input latency and blur.

DLSS 3 increases blur and wrecks input latency... if you are watching a movie and not controlling anything, 60 fps is more than enough; the entire reason you want 120 fps comes down to two things that DLSS 3 makes worse.
#33
phanbuey
Fergutor: "Nice try"? I just used your argument against you... ah... you didn't like it...
Read the comment above; you, like the rest, couldn't understand that my problem is not that it doesn't do what's promised, but the double standards (and with your responses, no wonder...).
Also, not all reviews liked it; that's why I mentioned the timing/pacing, which, again, maybe was fixed (?). But, again, that's not the/my problem.
Hey, so, are you going to tell TPU about the reviews? Haha
Your arguments are not good: you basically failed to read the conclusions of the reviews, and you've never used the technology you're talking about. Circular arguments about your inability to comprehend reviews are not "using your argument against you". So yeah, you still have no idea what you're talking about, and the reviews of DLSS 3 do not back up your conclusion.
#34
Fergutor
phanbuey: Your arguments are not good
I know they aren't good; they are yours! XD
phanbuey: you basically failed to read the conclusions of the reviews
I failed? How do you know? What reviews did I read? XD
phanbuey: and you've never used the technology you're talking about.
No, I never used the technology I am talking about... will you tell TPU reviewers? Show us the messages when you make those please HAHA!!
phanbuey: Circular arguments about your inability to comprehend reviews are not "using your argument against you".
So, wait, you are saying you are using circular arguments??? Ok?? And if I use your own reasoning and arguments I am not using your own arguments?!?! WHAT?!? HAHAHAHAHAHAH!!!!
phanbuey: So yeah, you still have no idea what you're talking about
HAHAHA!!... ok, ok... tell me what I am talking about. Please! I want to know XD!!
phanbuey: and the reviews of DLSS 3 do not back up your conclusion.
What conclusion? What reviews?

Dude, you can't be more confused. Just read the comment I made to the one I replied to before you; you will probably understand (more probably not, you're beyond help).
#35
phanbuey
Fergutor: No, I never used the technology I am talking about... will you tell TPU reviewers? Show us the messages when you make those please HAHA!!
They used it and concluded it was good... what do you want me to tell them exactly? That "Fergutor doesn't like it, thinks it's a gimmick, please try again?"
#36
wolf
Better Than Native
bug: Breaking news, all frames are fake (in the words of Sheldon Cooper, "they're not found in nature"). They are just made from some numbers describing a scene.
In other news, I don't give a rat's a$$ if a frame is generated by the rendering pipeline or not. As long as it doesn't flicker, it's all good.
It's funny where people choose to draw the line. First it was native or bust, no upscaling, now upscaling is the best thing since sliced bread, FSR is 'good enough' and DLSS should just go die immediately. Ready for an FG repeat?

I say, simply judge the output quality and input latency, rather than draw an arbitrary conclusion where the technique used crosses some self imposed line in your brain.
dir_d: In theory, wouldn't Nvidia making DLSS 3 insert black frames do the same thing?
That's actually something that crossed my mind a few weeks ago. My LG TV can't do it at a variable refresh rate, but presumably even the older OFA of 20/30-series cards could just generate a black frame between rendered frames, perhaps even slightly boosting the brightness of the rendered frames to offset it... could legit help.

Love the comments from people who've clearly not seen it with their own eyes, keep regurgitating only the negative aspects of reviews you watched or read :P - Having tried it, it's a great feature.
#37
bug
wolf: It's funny where people choose to draw the line. First it was native or bust, no upscaling, now upscaling is the best thing since sliced bread, FSR is 'good enough' and DLSS should just go die immediately. Ready for an FG repeat?

I say, simply judge the output quality and input latency, rather than draw an arbitrary conclusion where the technique used crosses some self imposed line in your brain.
I mean, I get that first implementations are clumsy and probably rushed out the door to beat "the other guy" to the market. But you've got to look past that.
#38
Psychoholic
I definitely see where one could think DLSS 3 and frame generation would be a hot mess; I was kind of thinking the same thing.
After trying it myself, I have to say it exceeded my expectations.
#39
Raiden85
Psychoholic: I definitely see where one could think DLSS 3 and frame generation would be a hot mess; I was kind of thinking the same thing.
After trying it myself, I have to say it exceeded my expectations.
And that's the problem: a lot of the ones hating on it are usually the ones that can't even run it. It's just one of those features you really have to try in person to see if you like it or not. It's like the 60 fps vs 120 fps argument: if you have only ever used 60 fps it looks great, but once you've tried 120 fps, 60 fps feels slow when you go back.

I really wish Nvidia hadn't called frame generation DLSS 3, as it has nothing to do with upscaling; they should have come up with a different name, especially since you can run DLSS 2 and "DLSS 3" at the same time, or just DLSS 2, or just frame generation, independently of each other.
#40
Steve67
wolf: Love the comments from people who've clearly not seen it with their own eyes, keep regurgitating only the negative aspects of reviews you watched or read :p - Having tried it, it's a great feature.
I tried it out on The Witcher 3, fully expecting it to just be a joke, I barely use DLSS at all if I can help it. But damn I was actually pretty impressed. Passed my eye test for sure.
#41
EatingDirt
wolf: It's funny where people choose to draw the line. First it was native or bust, no upscaling, now upscaling is the best thing since sliced bread, FSR is 'good enough' and DLSS should just go die immediately. Ready for an FG repeat?

I say, simply judge the output quality and input latency, rather than draw an arbitrary conclusion where the technique used crosses some self imposed line in your brain.
It seems like you don't understand how frame generation works: it will never reduce input latency. It's in the name, frame generation.

These are artificially generated frames. The input latency will not be reduced over whatever framerate you're getting without frame generation, because the generated frames are not frames that your inputs are actually interacting with.
#42
Minus Infinity
Like it or not, AMD had better not wait another 12 months to get FSR 3.0 out the door; by then DLSS 3 will be mature. It sort of sucks that we now have to rely on AI fakery, but getting this level of raster and RT performance for real would require an enormously powerful GPU drawing an insane amount of power, at least on current tech.

My main worry is that going forward Nvidia will put less effort into architectural improvements and just optimize for AI trickery to do the heavy lifting.
#43
Punkenjoy
Fergutor: No! 3D rendering isn't a gimmick! WTF!?
DLSS 3 is obviously a BS way to generate fluidity. A very fake way. That's my problem with it: things that are real, people call "gimmicks", while an obviously fake one, made because they couldn't solve the problem in a normal way, ah, that "it's fine", applause... BUT if it satisfies people anyway, because at the end of the day the lack of fluidity due to low frame rates is a very important issue, then it's fine, and in that you (and the rest) are right. I don't have a problem with that, but with the double standards.
Yes, 3D rendering is just a bunch of ways of cheating: trying to find ways of doing things that are cheap but look good enough.

For example, lighting. In a lot of games, still today, the ambient occlusion is baked. That means it's rendered offline using ray tracing, saved into a texture, and just displayed as a texture. It's not real, but hey, it's good enough. The main problem is that if you add a dynamic light source, it will light the area and you will still see the baked shadows, which looks odd.

To fix that, they implemented various ways of doing ambient occlusion in real time, most of them approximations based on the screen-space z-buffer. It's better when you have dynamic lighting, but it's far less accurate.

Then you have ray tracing, which many people call a gimmick because it's really expensive to do the real thing, and it cuts performance to provide the same results as baked lighting (or the improvement over real-time ambient occlusion isn't obvious to people who don't know what they're looking at).

Another example is how games do level of detail (LOD): in many games, the background objects are just 2D textures, to save on rendering time. That is also a gimmick.

When you go over all the things a game has to do to run smoothly, it's full of "gimmicks".

And there's no big deal about any of this. It would be nice to have games that are fully path-traced with realistic materials and so on, but who wants to play at 1 frame per 2 hours?
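A toy version of the screen-space ambient-occlusion idea described above: estimate how shadowed a pixel is by counting depth-buffer neighbours that sit closer to the camera. Real SSAO samples a hemisphere in view space and weights samples by distance; this grid walk and the function name are purely illustrative:

```python
# Toy screen-space ambient occlusion on a 2D depth grid.
# depth[y][x] holds normalized depth (smaller = closer to camera).

def ssao_at(depth, x, y, radius=1, bias=0.05):
    """Fraction of nearby samples that occlude pixel (x, y)."""
    h, w = len(depth), len(depth[0])
    occluders = samples = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dx == dy == 0:
                continue
            nx, ny = x + dx, y + dy
            if 0 <= nx < w and 0 <= ny < h:
                samples += 1
                # A neighbour sufficiently closer to the camera occludes us;
                # the bias avoids self-shadowing from depth noise.
                if depth[ny][nx] < depth[y][x] - bias:
                    occluders += 1
    return occluders / samples if samples else 0.0

# A far floor (depth 1.0) meeting a near wall (depth 0.2): the pixel at
# the base of the wall is partially occluded, the open floor is not.
depth = [
    [0.2, 0.2, 0.2],
    [1.0, 1.0, 1.0],
    [1.0, 1.0, 1.0],
]
print(ssao_at(depth, 1, 1), ssao_at(depth, 1, 2))  # 0.375 0.0
```

This captures why screen-space AO is cheap (it only reads the z-buffer already produced by rendering) and why it is less accurate than baked or ray-traced occlusion: geometry outside the screen contributes nothing.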
#44
sepheronx
One way to cut through the shit-flinging is to provide links and videos discussing whether it's good or bad.

Anecdotal evidence isn't evidence in either case.
#45
95Viper
Discuss the topic. However, stop the personal jabs!
#46
caroline!
matar: Tried this game, did NOT LIKE IT, the story and gameplay; if it was 3rd person, maybe. Also tried it a year later, after updates; same issue.
AND it's so demanding, for what? For example, driving: you hit cars and nothing happens to them, so no destruction or PhysX.
Yep, literally GTA San Andreas had better car-crash effects than CP2077. And stock Half-Life 2 has better water and physics, and it was made in 2003.
#47
GeneticWeapon
I didn't plan on liking or using FG until trying it today; I was already getting around 85-100 fps at max settings with DLSS set to quality. With FG enabled, I'm hovering around the 170 fps mark, and it's simply amazing. I have not experienced any latency problems, but I have seen some ghosting around objects such as the yellow ladders attached to buildings.
#48
wolf
Better Than Native
EatingDirt: It seems like you don't understand how frame generation works
I said "simply judge the output quality and input latency". I know full well how frame generation works and need no explanation, which is why input latency is part of the judging calculus: FG breaks the established relationship between fps and input latency, so input latency should be considered when talking about the whole experience.
#49
nguyen

TL;DR:
Native: ~40FPS, 100ms PC latency with Reflex OFF, 50ms latency with Reflex ON
DLSS Q: ~60FPS, 70ms PC latency with Reflex OFF, 35ms PC latency with Reflex ON
DLSS Q + FG: ~100FPS, 50ms PC latency (Reflex ON)

This patch introduced Nvidia Reflex, so previously everyone had to use DLSS 2 to get 60 FPS with 70 ms of input latency. Was the game unplayable then? Nope.
After this patch it's 100 FPS with 50 ms of input latency. Nvidia has already offered a solution to the input latency problem, but haters will only focus on the problem :).

Looks like lots of people here find FG a positive experience too.
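Taking the quoted figures at face value, a quick back-of-envelope calculation shows how displayed FPS and felt latency diverge: at 100 FPS each displayed frame lasts 10 ms, so 50 ms of PC latency spans about five displayed frames. The function name is hypothetical and the numbers are simply the ones quoted above:

```python
# How many displayed frames fit inside the measured input latency?
# A larger value means the screen looks smoother than the game "feels".

def frames_of_latency(fps, latency_ms):
    """Input latency expressed in units of displayed-frame durations."""
    return latency_ms / (1000.0 / fps)

print(frames_of_latency(100, 50))  # DLSS Q + FG, Reflex ON
print(frames_of_latency(60, 35))   # DLSS Q, Reflex ON
```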
#50
robot zombie
I'm all on the side of bashing Nvidia for their practices, but I honestly can't say anything bad about this tech. The tech itself is great. DLSS has been good in most games since around 2.1. Yes, it does alter the image, but at high resolutions, in actual play, it's not that noticeable, and it ends up comparable to your typical AA solutions, if not a little more natural-looking in terms of the net changes to the overall image. To me, the frame-rate boost and the ability to run what are these days much more refined and better-understood RT techniques than before make it worth trying out. The biggest problem was that only quality or *sometimes* balanced mode was really any good outside of 4K. That's changing too, though.

And I KNOW it sounds pretentious. I don't really expect people to just believe me, but perhaps you can at least see how it's possible in a logical sense. When you put everything about these technologies under the looking glass, try to take samples that encapsulate what it's doing, they tend to fall short. With all of the effects, their core strength is in what they push out of the way. If you look for overt or earth shattering, you often find boring and strange instead. It's a matter of missing the full context of those elements within the entire space. The main strength of RT effects is in the way they sneak in and up the overall plausibility factor. It's just less of a strain to believe you're there. Many different kinds of games can benefit from these things. While lighting has always been faked, RT is the better way to fake things.

The DLSS tech goes hand in hand with that. The challenge has always been having the grunt to get enough throughput for practical amounts of real-time accuracy (or at least, correction.) Correction is more attainable right now. It's not ideal. And there are tradeoffs. But from every experience I've ever had with it, it's hard for me not to see it as very worthwhile tech. Bridging that performance gap through sneaky unburdening approaches, allows for an increase in overall plausibility, at the cost of that last layer or two in image fidelity.

And you know what? That IS subjective. I think it's fair to not like what you lose in the images. But I think in time that could change, and I don't think what it offers is worth entirely discarding. RT is pretty interesting as a tool, and who knows what other uses devs might find for it. If it allows for better fidelity of conveyance, more convincing expression in the visuals, that stands out to me as something very valuable. It doesn't automatically make a game's visuals better, but it DOES make for a better platform for conveying visuals that are fundamentally good. For instance, some classic games really look great with a proper RT conversion, and the reason they look so good is that the RT is in concert with very good visual design.

Anything that has the potential to elevate the experiences possible in games is worth keeping on-radar at the least. It's prohibitively expensive - that's worth some outrage. I can't even use this... I'll be stuck with my 3060ti for a while... and I only got lucky that a friend cut me a deal on a spare he happened to snipe out. Nvidia really, really is not a great company. I really don't care about DLSS or RTX as brands. But the technology itself is good, has IMO proven its worth, and to me, shows promising future potential. If tricks like machine learning super scaling and frame generation can open the door to it, I can't see that as a bad thing. The only bad thing about it is the absurd cost/availability.