Wednesday, July 1st 2020

Death Stranding with DLSS 2.0 Enables 4K-60 FPS on Any RTX 20-series GPU: Report

Ahead of its PC platform release on July 14, testing of a pre-release build by Tom's Hardware reveals that "Death Stranding" will offer 4K 60 frames per second on any NVIDIA RTX 20-series graphics card if DLSS 2.0 is enabled. NVIDIA's performance-enhancing feature renders the game at a resolution lower than that of the display head, and uses AI to reconstruct details. We've detailed DLSS 2.0 in an older article. The PC version has a frame-rate limit of 240 FPS, ultra-wide resolution support, and a photo mode (unsure if it's an Ansel implementation). It has rather relaxed recommended system requirements for 1080p 60 FPS gaming (sans DLSS).
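In rough terms, DLSS-style scaling trades shaded pixels for reconstruction work. Here is a minimal Python sketch of that arithmetic; the mode names and scale factors are illustrative assumptions rather than NVIDIA's published figures, and the AI reconstruction step itself is not modeled:

```python
# Illustrative sketch of DLSS-style resolution scaling: render fewer pixels,
# then reconstruct the output frame. Scale factors below are assumptions for
# illustration, not NVIDIA's official numbers.
MODES = {"quality": 0.67, "balanced": 0.58, "performance": 0.50}

def internal_resolution(out_w, out_h, mode):
    """Resolution the game actually renders at before reconstruction."""
    s = MODES[mode]
    return round(out_w * s), round(out_h * s)

def pixels_saved(out_w, out_h, mode):
    """Fraction of shaded pixels saved vs. native (ignores reconstruction cost)."""
    w, h = internal_resolution(out_w, out_h, mode)
    return 1.0 - (w * h) / (out_w * out_h)

if __name__ == "__main__":
    for mode in MODES:
        w, h = internal_resolution(3840, 2160, mode)
        print(f"4K {mode}: renders {w}x{h}, ~{pixels_saved(3840, 2160, mode):.0%} fewer pixels shaded")
```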
Source: Tom's Hardware

62 Comments on Death Stranding with DLSS 2.0 Enables 4K-60 FPS on Any RTX 20-series GPU: Report

#26
TheoneandonlyMrK
I'm not that interested in an interpretation of what the dev wanted me to play.
If it goes like DLSS 1.0, RTX, PhysX, HairWorks or GameWorks, it's not going to shake the world.

Big meh.

Also, old games scale better with resolution because they were made to actually render at the target resolution; most modern games basically frame-scale a 1080p world, with only a few, like GTA V, giving you the option to use 100% frame scaling.
Posted on Reply
#27
Xuper
Better not to go 4K/60 FPS, otherwise users will be stuck on 4K and won't be able to go back to 1440p or 1080p. The difference between the two is massive once you've experienced it.
Posted on Reply
#28
toilet pepper
XuperBetter not to go 4K/60 FPS, otherwise users will be stuck on 4K and won't be able to go back to 1440p or 1080p. The difference between the two is massive once you've experienced it.
Good thing I haven't played on a true 4K60 panel. My 2070S barely runs ultrawide 1440p.

And yes, I played Control with DLSS 2 and all the RTX and it looks a little better than native res.
Posted on Reply
#29
Searing
cucker tarlsonthis is true
dlss 2.0 looks very good, but I must admit it's something your eyes gotta adjust to.
I would kill for Mario Odyssey in 4K 120 FPS; that is an example of game rendering that is raw, sharp and pure. I personally will take jaggies over blur any day. One way to tell what type of rendering is happening is to ask whether the only blurriness comes from texture resolution. Mario Odyssey with new 16 GB VRAM textures? Would look perfectly sharp. No matter how high-resolution the textures you use, Control won't be sharp.
Posted on Reply
#30
bug
xkm1948Nah he is just salty.

DLSS 2.0 is really amazing. Great for boosting performance on GPUs that lack raw computational power. As for picture quality, numerous tests have shown DLSS 2.0 is the same as or sometimes better than native resolution rendering.

The more DLSS 2.0 implementations there are, the better it is for end users with GPUs that support it.
The thing people don't get about AI is that the more you train it, the better it gets. Like Google's search.
DLSS 2.0 has already shed the per-title training that DLSS 1.0 carried. I can only guess where DLSS 3 or 4 will go.

Picture quality, though, is subjective, because there's no reference. It's just comparing one approximation of the desired result (no DLSS) with another approximation (DLSS).
At the end of the day, it's just (clever) tricks that allow us to game at higher settings than the hardware could push otherwise. Does anyone remember how manufacturers were stuck for a while trying to do better SSAA until MSAA came along? Guess what, MSAA is still not on par with SSAA, but in the meantime we've come to call MSAA too taxing and are willing to accept even more IQ compromises (e.g. TAA). Why? Because of performance.
Posted on Reply
#31
lugaidster
I like how most media sites are stroking Nvidia's ego with this. Ars Technica's Death Stranding for PC preview says it supports both DLSS 2.0 and AMD's FidelityFX upscaling tech. More importantly, it says that FidelityFX's upscaler preserves more detail and provides more of a boost. Most importantly, FidelityFX is vendor-neutral: their tests were done on Nvidia hardware. I especially like the fact that FidelityFX works with Pascal while DLSS 2.0 doesn't.

So... seems to me DLSS 2.0 remains overhyped. I'm sure Nvidia's marketing money will bury this, though.
bugThe thing people don't get about AI is that the more you train it, the better it gets.
This isn't strictly true. Depending on the model and the training set, it's very easy to get trapped in local optima. The big issue is that it's very hard to know if you did. Moreover, when getting close to a local optimum, more training does very little to improve inference performance.

Don't get me wrong, DLSS is good tech, but it's still an approximation. There's just nothing inherent to it that makes it better at solving this particular problem than other approximations.
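To make the local-optimum point concrete, here's a toy sketch in Python, plain gradient descent on a made-up 1D function. It has nothing to do with how DLSS is actually trained; it just shows why "train it more" can stop helping:

```python
# Toy illustration of getting stuck in a local optimum: plain gradient descent
# on a made-up 1D function with two minima. Purely illustrative; this is not
# how DLSS is trained.
def f(x):
    # local minimum near x ~ -0.8, deeper (global) minimum near x ~ 2.5
    return x**4 - 3*x**3 - 2*x**2 + 5*x

def grad(x, eps=1e-5):
    # numerical derivative
    return (f(x + eps) - f(x - eps)) / (2 * eps)

def descend(x, lr=0.01, steps=10_000):
    for _ in range(steps):
        x -= lr * grad(x)
    return x

for start in (-2.0, 0.0, 3.0):
    x = descend(start)
    print(f"start {start:+.1f} -> ends at x = {x:+.2f}, f(x) = {f(x):.2f}")
# Starts at -2.0 and 0.0 settle in the shallow local minimum; only the start
# at 3.0 finds the better one, and no amount of extra steps changes that.
```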
Posted on Reply
#32
bug
lugaidsterThis isn't strictly true. Depending on the model and the training set, it's very easy to get trapped in local optima. The big issue is that it's very hard to know if you did. Moreover, when getting close to a local optimum, more training does very little to improve inference performance.

Don't get me wrong, DLSS is good tech, but it's still an approximation. There's just nothing inherent to it that makes it better at solving this particular problem than other approximations.
I know that, you know that. I bet Nvidia knows that.
I imagine it's pretty easy to avoid a local optimum if you train multiple times, applying inputs in a different order, or if you develop an automated tool to analyze artifacts for you.

What makes DLSS better is that the computation for the approximation is done by Nvidia on their servers, and the driver only has to apply the result when rendering. It doesn't guarantee any level of quality (though I think for only a second version, it does pretty well), but it guarantees more performance.
If you worry about loss of quality, you should worry more about variable rate shading. And that's in DX; it's going to be everywhere.
Posted on Reply
#33
lugaidster
bugI know that, you know that. I bet Nvidia knows that.
Yes, but the people you're trying to enlighten don't necessarily know that. Otherwise, why mention that more training is better in the first place, eh?
bugI imagine it's pretty easy to avoid a local optimum if you train multiple times, applying inputs in a different order, or if you develop an automated tool to analyze artifacts for you.
It isn't. It's actually pretty hard because you don't know when you've actually gotten to a global optimum. At least not when training neural networks. You can easily weed out poor performers, but the same is not true once you get to a decent level of competence.

With respect to developing a tool to analyze artifacts, that's actually pretty hard too because artifacts are subjective. You're going to get artifacts regardless, which ones are actually acceptable?

If you had an accurate model that could easily allow you to identify a global optimum, you'd be able to build a mathematical model to achieve it and, thus, model it with a regular algorithm. You wouldn't need neural networks in the first place.
bugWhat makes DLSS better is that the computation for the approximation is done by Nvidia on their servers, and the driver only has to apply the result when rendering. It doesn't guarantee any level of quality (though I think for only a second version, it does pretty well), but it guarantees more performance.
You've basically described all upscalers.
bugIf you worry about loss of quality, you should worry more about variable rate shading. And that's in DX; it's going to be everywhere.
This doesn't even make sense. If I'm going to use an upscaler, why wouldn't I complain about quality if there are better upscalers out there?
Posted on Reply
#34
$ReaPeR$
Lowest common denominator. So all of this... is useless because these features don't exist at the lowest common denominator. It's really ironic that tech enthusiasts fail to grasp tech reality. I'm sorry, but 90% of people will not buy $400 or $600 or $1,200 GPUs to run RTX or DLSS or any of this other cr@p in 3 damn games. And if you think they will, then you are totally disconnected from reality.
Posted on Reply
#35
bug
lugaidsterYes, but the people you're trying to enlighten don't necessarily know that. Otherwise, why mention that more training is better in the first place, eh?

It isn't. It's actually pretty hard because you don't know when you've actually gotten to a global optimum. At least not when training neural networks. You can easily weed out poor performers, but the same is not true once you get to a decent level of competence.

With respect to developing a tool to analyze artifacts, that's actually pretty hard too because artifacts are subjective. You're going to get artifacts regardless, which ones are actually acceptable?

If you had an accurate model that could easily allow you to identify a global optimum, you'd be able to build a mathematical model to achieve it and, thus, model it with a regular algorithm. You wouldn't need neural networks in the first place.

You've basically described all upscalers.

This doesn't even make sense. If I'm going to use an upscaler, why wouldn't I complain about quality if there are better upscalers out there?
Yeah, well, in this case it's pretty easy to spot a local minimum: you look at the image (using your own eyes or a high-pass filter) and see whether anything sticks out.
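That high-pass check is easy to sketch. A minimal NumPy version (a generic box-blur subtraction, not whatever tooling Nvidia actually uses):

```python
# Crude high-pass filter: subtract a blurred copy of the image so that only
# edges, halos and other things that "stick out" remain. Generic illustration,
# not any actual DLSS/driver tool.
import numpy as np

def high_pass(img: np.ndarray) -> np.ndarray:
    """img minus its 3x3 box-blurred copy (img is a 2D grayscale float array)."""
    padded = np.pad(img, 1, mode="edge")
    blur = sum(padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
               for dy in range(3) for dx in range(3)) / 9.0
    return img - blur

if __name__ == "__main__":
    # flat image with a single bright "artifact" pixel
    img = np.zeros((16, 16))
    img[8, 8] = 1.0
    res = np.abs(high_pass(img))
    print("strongest residual at:", np.unravel_index(res.argmax(), res.shape))  # (8, 8)
```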
Posted on Reply
#36
goodeedidid
Too bad it's Death Stranding.. that game sucks ass
Posted on Reply
#37
bug
goodeedididToo bad it's Death Stranding.. that game sucks ass
The game itself doesn't matter much. If one game can do it, others can, too ;)

Edit: Oh, I see what's going on here. The game is actually pretty good, but we pretend it's bad because Nvidia has something to do with it. My bad.
Posted on Reply
#38
Unregistered
bug1070? I'm pretty sure DLSS doesn't even work with that.
Of course not, DLSS 2.0 is only available on RTX 20-series cards.
Sorry for the confusion; I mentioned the 1070 I used for 4K gaming and how underpowered it was.
I've looked at screenshots of DLSS 2.0 and it seems more like a sharpening filter.
dhkloppI didn't even think you could get 24" 4K, let alone it being worth bothering with. You learn something new every day.
They do exist and damn, they are sharp; especially for text, they're a pleasure to use.
#39
dicktracy
Nvidia is multiple generations ahead of everyone else. DLSS 3.0 is pretty much an apocalypse for non-Nvidia users.
Posted on Reply
#40
M2B
The amount of ignorance on this thread is truly astounding.
DLSS 2.0 is the absolute best image reconstruction technique in gaming, yet people are moaning "but it's not real 4K". Just educate yourself and appreciate the technology; you don't have to complain about every damn thing Nvidia-related.
What a joke.
Posted on Reply
#41
watzupken
lugaidsterI like how most media sites are stroking Nvidia's ego with this. Ars Technica's Death Stranding for PC preview says it supports both DLSS 2.0 and AMD's FidelityFX upscaling tech. More importantly, it says that FidelityFX's upscaler preserves more detail and provides more of a boost. Most importantly, FidelityFX is vendor-neutral: their tests were done on Nvidia hardware. I especially like the fact that FidelityFX works with Pascal while DLSS 2.0 doesn't.

So... seems to me DLSS 2.0 remains overhyped. I'm sure Nvidia's marketing money will bury this, though.
I agree. Needing Tensor cores/AI to perform this upscaling sounds better in theory, but in practice even a dumbed-down version like FidelityFX is capable of improving performance without significant loss in image quality. If this supposed AI upscaling were significantly better in terms of image quality and ease of implementation, DLSS 1.0 would not have failed in the first place. To me, it is the usual Nvidia strategy of bringing in proprietary technology to retain their foothold.
M2BThe amount of ignorance on this thread is truly astounding.
DLSS 2.0 is the absolute best image reconstruction technique in gaming, yet people are moaning "but it's not real 4K". Just educate yourself and appreciate the technology; you don't have to complain about every damn thing Nvidia-related.
What a joke.
Most people wouldn't have that much to complain about if:
1. It were not a proprietary technology, limited to only certain hardware
2. You didn't have to pay a premium to Nvidia for it, while the competitor provides a dumbed-down version of an upscaler that works across hardware

Anyway, regardless of the image quality, it is a fact that it is not true 4K. That is the whole point of DLSS.
Posted on Reply
#42
Vayra86
Mamya3084Me: "Pffft, a walking simulator, who needs such a thing"
Covid-19 "Hi!"
Me: "Wow, it's like actually being outside!"
Hah but you never walked outside at TWO HUNDRED FORTY EF PEE ES
SearingThe tech press should not be spreading lies. 4K is 4K, and DLSS is not. You can claim your image looks as good, that's fine, I disagree, but then say "it looks as good as 4K"; use actual English that has meaning instead of marketing speak.

Many modern games use rendering techniques that look blurry and don't scale with resolution anymore. Play Detroit, for example, and you'll notice that much of the image doesn't improve when you switch from 1440p to 4K. Play old games and it is totally different. That is why DLSS is being pushed: it is easy to fool people who play games like Detroit, Death Stranding, Control and FF15 that are chronically blurry.

I personally prefer the visuals of last gen. Games like Mass Effect 3 and old Unreal games before the AA craze, where resolution actually leads to a proper image. The fetish for realistic graphics has led to transparency techniques for hair and vegetation that are blurry and horrible, imo. I'd rather play Trials of Mana at 4K 120 FPS and enjoy crisp visuals than play most of the recent stuff we are getting. I even prefer the visuals of Fortnite over DLSS games.

It seems the market is diverging a bit. I love Riot Games' commitment to fast performance and clean rendering techniques.
Inclined to agree. Some games are so blurry it's painful. I even tend to kill AA altogether when only temporal AA is available. Yes, the jaggies are gone... along with the sharpness. It gets even worse when there's a (fixed!) sharpening pass on top of that; suddenly you're looking at a cartoon. SOMETIMES, T(x)AA looks good. But whenever it does, it also hits performance pretty hard.

That said, DLSS at very high resolutions is a lot better than TAA most of the time. Definitely a middle ground preferable to newer AA in general.

Resident Evil (1, reboot) took the cake. It has an internal render resolution slider... even at max it's STILL noticeably below your native display res. Like you're rendering 720p content on 1080p. And if that isn't enough... some games place a constant chromatic aberration horror on top of that. Want some LSD with your game?
watzupkenI agree. Needing Tensor cores/AI to perform this upscaling sounds better in theory, but in practice even a dumbed-down version like FidelityFX is capable of improving performance without significant loss in image quality. If this supposed AI upscaling were significantly better in terms of image quality and ease of implementation, DLSS 1.0 would not have failed in the first place. To me, it is the usual Nvidia strategy of bringing in proprietary technology to retain their foothold.

Most people wouldn't have that much to complain about if:
1. It were not a proprietary technology, limited to only certain hardware
2. You didn't have to pay a premium to Nvidia for it, while the competitor provides a dumbed-down version of an upscaler that works across hardware

Anyway, regardless of the image quality, it is a fact that it is not true 4K. That is the whole point of DLSS.
FidelityFX, you say? God no. It's far worse; you're better off using some SweetFX filters from 2010. I'm not even joking.

I'll happily pay a premium for features that actually do improve the experience, and in the case of DLSS, there is virtually free performance on the table, so is it really a premium? Some beg to differ.

Not that I've jumped on Turing at this point, and I never would've for DLSS alone either... but the tech is pretty impressive by now. IF you can use it. That has been the deal breaker up until now, but apparently they're working to make it game-agnostic. And note... the competitor only provides a dumbed-down version because it's cheap, it's easy, and it somehow signifies 'they can do it too' when in reality they really can't. Let's call it what it is, okay?
Posted on Reply
#43
InVasMani
bugThe thing people don't get about AI is that the more you train it, the better it gets.
That's only true up to a certain threshold, and only within a given performance budget as well. The lower the internal resolution, the less information you have to work with: on the one hand you get a better perceived uplift within a given performance threshold, on the other hand the quality uplift is worse than what a higher base resolution would give you if the performance budget scaled to account for it. Higher resolutions require more performance; there is no getting around that without making image quality sacrifices for the sake of performance.
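The raw pixel counts behind that are easy to lay out (back-of-the-envelope Python; actual frame cost is not perfectly proportional to pixels shaded):

```python
# Back-of-the-envelope pixel counts behind "higher resolution costs more".
# Frame time isn't perfectly proportional to pixels (geometry, CPU, bandwidth),
# so treat these ratios as a rough ceiling on what internal upscaling can save.
RESOLUTIONS = {"1080p": (1920, 1080), "1440p": (2560, 1440), "4K": (3840, 2160)}

base = 1920 * 1080
for name, (w, h) in RESOLUTIONS.items():
    print(f"{name}: {w * h:,} pixels ({w * h / base:.2f}x 1080p)")
# 1080p: 2,073,600 (1.00x), 1440p: 3,686,400 (1.78x), 4K: 8,294,400 (4.00x)
```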
M2BThe amount of ignorance on this thread is truly astounding.
DLSS 2.0 is the absolute best image reconstruction technique in gaming, yet people are moaning "but it's not real 4K". Just educate yourself and appreciate the technology, you don't have to complain about every damn thing Nvidia-Related.
what a joke.
DLSS isn't bad, and it has definitely made some big relative improvements, but it's still certainly not the same as native 4K. It's a fancy post-process technique, algorithms combined with upscaling more than anything else, nothing more, nothing less. It's gotten better and more convincing at it and has its perks, especially in terms of performance. That said, labeling it the same is unfair. The bigger problem is that DLSS isn't just something you turn on and it works in all cases; it needs developers to specifically enable its usage. That is a key difference of importance. If it worked on every game, all the time, and equally well when enabled, it would be pretty noteworthy, but that's clearly not the case. Nvidia has a lot of AA techniques that are situational and require a special developer relationship to work in the first place, and like those, DLSS is a really useless piece of tech when it doesn't work in all those older or unsupported titles. I tend to prefer tech that just f*cking works all the time, period. I have nothing against Nvidia or DLSS inherently, but I think tech that works across the board under all circumstances is far more beneficial and wider reaching. How much have we heard about Mantle lately!? It sounded great: closer to the metal, better performance, seems legit. But in reality you're dependent on the developer, and when you're dependent on them, mostly only the AAA developers leverage the hardware to its full extent in the grand scheme.
Posted on Reply
#44
Fluffmeister
I see VALVe have invested some love into this title too.

Guess I'll have a crack at being Postman Pat too.

Beyond that, you have two choices when it comes to graphics cards: pick your poison and quit whining year in, year out.
Posted on Reply
#46
bug
cucker tarlsongamegpu.com/rpg/%D1%80%D0%BE%D0%BB%D0%B5%D0%B2%D1%8B%D0%B5/death-stranding-test-gpu-cpu

scroll down and compare native resolution + TAA vs DLSS 2.0 with a slider, then look at the performance gains: near 40%.
This is crazy. The image is clearer and more detailed, and runs way faster.
Dunno what AMD FidelityFX is, but it looks like a mess.

This is FidelityFX: gpuopen.com/fidelityfx-cas/
A sharpening filter, mostly. It says it does up/downscaling as well, but it's unclear how it does that.
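The "sharpening filter" part is essentially an unsharp mask. A toy 1D Python version (this is not AMD's actual CAS shader, which adapts its strength per pixel based on local contrast):

```python
# Toy unsharp-mask sharpening, the generic idea behind "a sharpening filter".
# Not AMD's actual CAS shader; CAS additionally adapts its strength per pixel
# based on local contrast.
import numpy as np

def sharpen(row: np.ndarray, amount: float = 0.6) -> np.ndarray:
    """Push each sample away from the average of its neighbours."""
    padded = np.pad(row, 1, mode="edge")
    blur = (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0
    return np.clip(row + amount * (row - blur), 0.0, 1.0)

# A soft step edge, roughly what a cheap upscale produces:
soft_edge = np.array([0.0, 0.0, 0.2, 0.5, 0.8, 1.0, 1.0, 1.0])
print(np.round(sharpen(soft_edge), 2))
# -> [0. 0. 0.18 0.5 0.82 1. 1. 1.] : the ramp gets steeper, which reads as
# "sharper", but push 'amount' too far and you get halos/ringing instead.
```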
Posted on Reply
#47
cucker tarlson
bugThis is FidelityFX: gpuopen.com/fidelityfx-cas/
A sharpening filter, mostly. It says it does up/downscaling as well, but it's unclear how it does that.
looks like crap here

[screenshot comparison: FidelityFX vs. DLSS 2.0]

lol, it's this AMD version of Nvidia's DLSS that's supposedly so much better according to the red team fanbase
Posted on Reply
#48
medi01
iO"4K"
It's "better than 4K". (no jokes, some green brains claim verbatim 'better than original' upscaling)
Posted on Reply
#49
bug
cucker tarlsonlooks like crap here

[screenshot comparison: FidelityFX vs. DLSS 2.0]

lol, it's this AMD version of Nvidia's DLSS that's supposedly so much better according to the red team fanbase
It does look pretty good in their own screenshots. Give it time, DLSS wasn't great from the beginning either.
It's just that AMD doesn't talk about upscaling in FidelityFX. I'm pretty sure this is mostly a sharpening filter + some classic interpolation (e.g. bi/trilinear, nearest neighbor).
medi01It's "better than 4K". (no jokes, some green brains claim verbatim 'better than original' upscaling)
Well, when someone claims "better than original", it's pretty clear they don't know what they're talking about. Hint: there's no "original" in CGI ;)
True 4K is still an approximation of a geometric model as seen through the rendering pipeline.
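And the "classic interpolation" part is simple to show. A tiny 1D Python comparison of nearest-neighbour vs. linear interpolation (illustrative only; bilinear/trilinear are just the 2D/3D versions of the same idea):

```python
# Tiny 1D comparison of "classic interpolation" upscalers. None of this
# involves AI; it's the baseline DLSS-style reconstruction competes against.
import numpy as np

def upscale_nearest(row: np.ndarray, factor: int) -> np.ndarray:
    """Nearest neighbour: repeat each sample (blocky but razor-cheap)."""
    return np.repeat(row, factor)

def upscale_linear(row: np.ndarray, factor: int) -> np.ndarray:
    """Linear interpolation: blend between neighbouring samples (smoother, softer)."""
    x_new = np.linspace(0, len(row) - 1, len(row) * factor)
    return np.interp(x_new, np.arange(len(row)), row)

row = np.array([0.0, 1.0, 0.0, 1.0])
print(upscale_nearest(row, 2))              # [0. 0. 1. 1. 0. 0. 1. 1.]
print(np.round(upscale_linear(row, 2), 2))  # blended in-between values
```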
Posted on Reply
#50
cucker tarlson
bugIt does look pretty good in their own screenshots. Give it time, DLSS wasn't great from the beginning either.
I wonder if they have a genuine interest in following DLSS or if it's just a lazy way to show they're doing something
Posted on Reply