Wednesday, July 1st 2020

Death Stranding with DLSS 2.0 Enables 4K-60 FPS on Any RTX 20-series GPU: Report

Ahead of its PC release on July 14, testing of a pre-release build by Tom's Hardware reveals that "Death Stranding" will offer 4K 60 frames per second on any NVIDIA RTX 20-series graphics card with DLSS 2.0 enabled. NVIDIA's performance-enhancing feature renders the game at a resolution lower than that of the display and uses AI to reconstruct details. We've detailed DLSS 2.0 in an older article. The PC version has a 240 FPS frame-rate limit, ultra-wide resolution support, and a photo mode (unclear whether it's an Ansel implementation). It has rather relaxed recommended system requirements for 1080p 60 FPS gaming (sans DLSS).
Source: Tom's Hardware

62 Comments on Death Stranding with DLSS 2.0 Enables 4K-60 FPS on Any RTX 20-series GPU: Report

#51
bug
cucker tarlsonI wonder if they have a genuine interest in following DLSS or if it's just a lazy way to show they're doing anything
AMD can't do DLSS, they lack the compute ability to tackle that (I mean the training part). I'm hoping they're genuinely trying a different approach, that's how proper solutions are born: by pitting different solutions against each other, seeing which works best.
#52
BoboOOZ
bugAMD can't do DLSS, they lack the compute ability to tackle that (I mean the training part). I'm hoping they're genuinely trying a different approach, that's how proper solutions are born: by pitting different solutions against each other, seeing which works best.
I'm not completely sure, but from what I understand DLSS is based on a neural network trained by Nvidia on their own premises, which is then only used for inference on the graphics card. It's the only way that makes sense. If it were trained locally, that would require a lot of resources and, at first, the results would be bad.

FWIW, you can build, train and deploy neural networks on the CPU, too, not only on GPUs. I'm pretty sure it could be done on AMD GPUs as well; I'm just unsure what the performance hit would be.

In the case of Nvidia, I think the approach was the inverse: they had some AI capabilities sitting on the die (due to professional-market requirements) and they tried to find a nice way to use them for gaming. What is unclear to me is how much it costs Nvidia to train a NN for each game. I'm guessing it's pretty big, otherwise they would've done more games by now. The advantage, however, is that there is no hit on the graphical part of the card. I have a hard time seeing how AMD could come up with a better solution, or even a decent one, that doesn't involve AI.
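To make the split concrete, here's a rough sketch of the idea (toy PyTorch code, not Nvidia's actual pipeline; the model, data and sizes are all made up): the expensive training happens once, offline, and the game only ever runs the frozen network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyUpscaler(nn.Module):
    """Toy 2x upscaler: learns to map a low-res frame to a high-res one."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * 4, 3, padding=1),  # 4 = 2x2 upscale factor
            nn.PixelShuffle(2),                  # rearrange channels into 2x resolution
        )

    def forward(self, x):
        return self.net(x)

# --- "on Nvidia's premises": train offline against high-res ground truth ----
model = TinyUpscaler()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(100):                             # real training: huge datasets, big hardware
    hi_res = torch.rand(8, 3, 128, 128)          # "ideal" ground-truth frames (fake here)
    lo_res = F.interpolate(hi_res, scale_factor=0.5)
    loss = F.l1_loss(model(lo_res), hi_res)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

torch.save(model.state_dict(), "upscaler.pt")    # the part that would ship with the driver

# --- "on the graphics card": inference only, no training cost at runtime ----
deployed = TinyUpscaler()
deployed.load_state_dict(torch.load("upscaler.pt"))
deployed.eval()
with torch.no_grad():
    frame = deployed(torch.rand(1, 3, 540, 960))  # one low-res frame in, 2x frame out
```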
#53
bug
BoboOOZI'm not completely sure, but from what I understand DLSS is based on a neural network trained by Nvidia on their own premises, which is then only used for inference on the graphics card. It's the only way that makes sense. If it were trained locally, that would require a lot of resources and, at first, the results would be bad.

FWIW, you can build, train and deploy neural networks on the CPU, too, not only on GPUs. I'm pretty sure it could be done on AMD GPUs as well; I'm just unsure what the performance hit would be.

In the case of Nvidia, I think the approach was the inverse: they had some AI capabilities sitting on the die (due to professional-market requirements) and they tried to find a nice way to use them for gaming. What is unclear to me is how much it costs Nvidia to train a NN for each game. I'm guessing it's pretty big, otherwise they would've done more games by now. The advantage, however, is that there is no hit on the graphical part of the card. I have a hard time seeing how AMD could come up with a better solution, or even a decent one, that doesn't involve AI.
Well, deep neural networks have been around in theory for ages (if you can think of a 3-layer network, there's no reason you can't think of a 25+ layer one). Training them, however, is just way tougher on the hardware. Whatever AMD could do with their OpenCL implementations, Nvidia can do at least 10x faster with CUDA and specialized hardware. Considering AMD is behind in this area, they need to move faster, not slower.

Also, per-title training was only needed for DLSS 1. Starting with DLSS 2, it seems Nvidia has trained their network well enough that per-title training is no longer a requirement (though I'm pretty sure they still do it to work out kinks here and there).

That's why I'm guessing that if AMD is to respond to DLSS, they shouldn't get sucked into this "who trains DNNs faster" race and should instead work from another angle (if there is one).
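Just to illustrate the point about depth: *defining* a deeper network is trivial, the cost is all in training it. A throwaway sketch (layer sizes are arbitrary):

```python
import torch.nn as nn

def mlp(hidden_layers: int, width: int = 256) -> nn.Sequential:
    """Fully-connected classifier with the given number of hidden layers."""
    layers = [nn.Linear(784, width), nn.ReLU()]
    for _ in range(hidden_layers - 1):
        layers += [nn.Linear(width, width), nn.ReLU()]
    layers.append(nn.Linear(width, 10))
    return nn.Sequential(*layers)

shallow = mlp(3)   # the kind of net that's been usable for decades
deep = mlp(25)     # "deep": the same idea, just many more layers

count = lambda m: sum(p.numel() for p in m.parameters())
print(f" 3 hidden layers: {count(shallow):>9,} parameters")
print(f"25 hidden layers: {count(deep):>9,} parameters")
# Writing down the deep one is just as easy; training it takes far more
# compute per pass and far more data, which is where the big hardware comes in.
```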
#54
BoboOOZ
bugWell, deep neural networks have been around in theory for ages (if you can think of a 3-layer network, there's no reason you can't think of a 25+ layer one). Training them, however, is just way tougher on the hardware. Whatever AMD could do with their OpenCL implementations, Nvidia can do at least 10x faster with CUDA and specialized hardware. Considering AMD is behind in this area, they need to move faster, not slower.
It's true NNs have been around for a long time; it's also true they sat mostly unused for most of that time :)
bugAlso, per-title training was only needed for DLSS 1. Starting with DLSS 2, it seems Nvidia has trained their network well enough that per-title training is no longer a requirement (though I'm pretty sure they still do it to work out kinks here and there).
Are you sure about that? In that case, I don't understand why only a few games are supported.
bugThat's why I'm guessing that if AMD is to respond to DLSS, they shouldn't get sucked into this "who trains DNNs faster" race and should instead work from another angle (if there is one).
I'm pretty sure there is no angle other than some form of AI that would give a fast and compact solution for good upscaling. You need to add/create detail, and any kind of procedural/algorithmic solution to that sounds bound to fail.
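For contrast, this is all a purely algorithmic upscaler does (rough sketch; the frame and sizes are made up): every output pixel is a fixed blend of nearby input pixels, so detail that was never rendered can't appear.

```python
import torch
import torch.nn.functional as F

lo_res = torch.rand(1, 3, 540, 960)   # a made-up 540p frame
hi_res = F.interpolate(lo_res, scale_factor=2, mode="bicubic", align_corners=False)
print(hi_res.shape)                   # torch.Size([1, 3, 1080, 1920])
# Bicubic (or bilinear, Lanczos, ...) only interpolates between existing
# samples -- fine texture lost at 540p stays lost, which is why some form of
# learned prior seems necessary to "create" plausible detail.
```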
#55
bug
BoboOOZIt's true NNs have been around for a long time; it's also true they sat mostly unused for most of that time :)
2-, 3- or 4-layer NNs have actually been in widespread use for basic classification of stuff (think user profiling). DNNs have been around just as long, but they've been intractable for the most part.
BoboOOZAre you sure about that? In that case, I don't understand why only a few games are supported.
I meant training is no longer necessary for each and every title. Games obviously still have to integrate the DLSS library to make use of the feature.
BoboOOZI'm pretty sure there is no angle other than some form of AI that would give a fast and compact solution for good upscaling. You need to add/create detail, and any kind of procedural/algorithmic solution to that sounds bound to fail.
I wouldn't know, I've been out of touch with all that for quite some time.
#56
BoboOOZ
bug2-, 3- or 4-layer NNs have actually been in widespread use for basic classification of stuff (think user profiling). DNNs have been around just as long, but they've been intractable for the most part.
This is going a bit OT, but still: I take you to mean that NNs have been known about for a long time, it's just that they weren't widely adopted for most of it. 15 years ago everybody used different algorithms for calculating semantic distances and other probabilistic/stochastic models in order to model and classify user profiles.
Only with Google's TensorFlow did NNs really come alive for the masses.
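That's really all it takes now that the frameworks exist; a classifier like the user-profiling stuff above is a handful of lines (toy example with fake data, just to show the shape of the API):

```python
import numpy as np
import tensorflow as tf

x = np.random.rand(1000, 20).astype("float32")   # 1000 fake "user profiles", 20 features each
y = np.random.randint(0, 2, size=1000)           # fake binary labels

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=3, verbose=0)   # autodiff, the training loop, GPU dispatch: all handled
```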
#57
bug
BoboOOZThis is going a bit OT, but still: I take you to mean that NNs have been known about for a long time, it's just that they weren't widely adopted for most of it. 15 years ago everybody used different algorithms for calculating semantic distances and other probabilistic/stochastic models in order to model and classify user profiles.
Only with Google's TensorFlow did NNs really come alive for the masses.
No, I mean NNs have actually been in widespread use for various pattern-recognition-related stuff. Just not the deep sort. Whether that qualifies as "for the masses", I don't know.
By contrast, DNNs have been almost a no-show till recently.
#58
medi01
DLSS 2.0
DLSS 2.0 works as follows:[13]
  • The neural network is trained by Nvidia using "ideal" ultra-high-resolution images of video games on supercomputers, alongside low-resolution images of the same games. The result is stored in the video card driver. It is said that Nvidia uses DGX-1 servers to perform the training of the network.
  • The neural network stored in the driver compares the actual low-resolution image with the reference and produces a full high-resolution result. The inputs used by the trained neural network are the low-resolution, aliased images rendered by the game engine, and the low-resolution motion vectors from the same images, also generated by the game engine. The motion vectors tell the network which direction objects in the scene are moving from frame to frame, in order to estimate what the next frame will look like.[14]
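In code terms, the data flow described above looks roughly like this. This is a conceptual sketch only, not Nvidia's network or API; the stand-in model here is tiny and untrained, and the warping/blending is simplified to show how the low-res frame and motion vectors fit together.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FakeReconstructionNet(nn.Module):
    """Stand-in for the trained network shipped in the driver (untrained here)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(6, 3, 3, padding=1)  # current frame + warped history in, colour out

    def forward(self, lo_res, warped_history):
        x = torch.cat([lo_res, warped_history], dim=1)
        return F.interpolate(self.net(x), scale_factor=2, mode="bilinear", align_corners=False)

def warp(prev_output, motion_vectors):
    """Re-project last frame's output using the engine-supplied motion vectors."""
    _, _, h, w = prev_output.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1).unsqueeze(0) + motion_vectors  # shift sampling positions
    return F.grid_sample(prev_output, grid, align_corners=False)

net = FakeReconstructionNet()
prev_hi = torch.zeros(1, 3, 1080, 1920)             # last frame's high-res output
lo_res = torch.rand(1, 3, 540, 960)                 # current aliased low-res frame from the engine
mvecs = torch.zeros(1, 1080, 1920, 2)               # per-pixel motion (zero = static scene here)

history = warp(prev_hi, mvecs)                      # 1. re-project the previous result
history = F.interpolate(history, size=(540, 960))   # 2. match the low-res input of this toy net
frame = net(lo_res, history)                        # 3. reconstruct a higher-res frame
print(frame.shape)                                  # torch.Size([1, 3, 1080, 1920])
```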
#59
BoboOOZ
medi01DLSS 2.0
DLSS 2.0 works as follows:[13]
  • The neural network is trained by Nvidia using "ideal" ultra-high-resolution images of video games on supercomputers, alongside low-resolution images of the same games. The result is stored in the video card driver. It is said that Nvidia uses DGX-1 servers to perform the training of the network.
  • The neural network stored in the driver compares the actual low-resolution image with the reference and produces a full high-resolution result. The inputs used by the trained neural network are the low-resolution, aliased images rendered by the game engine, and the low-resolution motion vectors from the same images, also generated by the game engine. The motion vectors tell the network which direction objects in the scene are moving from frame to frame, in order to estimate what the next frame will look like.[14]
Thanks for doing the research and posting this. That means my lazy assumption, made without reading the spec, was correct: it works on a per-game basis.
#60
cucker tarlson
medi01DLSS 2.0
DLSS 2.0 works as follows:[13]
  • The neural network is trained by Nvidia using "ideal" ultra-high-resolution images of video games on supercomputers, alongside low-resolution images of the same games. The result is stored in the video card driver. It is said that Nvidia uses DGX-1 servers to perform the training of the network.
  • The neural network stored in the driver compares the actual low-resolution image with the reference and produces a full high-resolution result. The inputs used by the trained neural network are the low-resolution, aliased images rendered by the game engine, and the low-resolution motion vectors from the same images, also generated by the game engine. The motion vectors tell the network which direction objects in the scene are moving from frame to frame, in order to estimate what the next frame will look like.[14]
Correct.
As I understand it, when it's a botched job it's entirely on Nvidia, not the developer. If it's well done, it's because Nvidia took the time to optimize it.
I don't think the game developer plays any part in the process, except for the actual amount of time they give Nvidia to work on it, which is probably a big factor.

I think it was me who linked it to you one day actually :nutkick:
#61
bug
BoboOOZThanks for doing the research and posting this. That means my lazy assumption, made without reading the spec, was correct: it works on a per-game basis.
Once again, it doesn't. The neural network is now smart/good enough that you don't need a different instance for each game. That's the difference from DLSS 1.0.
#62
InVasMani
bugThis is FidelityFX: gpuopen.com/fidelityfx-cas/
A sharpening filter mostly. It says it does up/downscaling as well, but it's unclear how it does that.
I think, like DLSS, it boils down a bit to the implementation, along with the spec and whether they've made any hardware revisions. Something tells me it can look good or bad in either case. FidelityFX can do more than just sharpening, though; it can also help with reflections. AMD's website on it shows and explains well enough what's possible with it. Like I said above, tech of this nature also depends on how well it's implemented, or whether it's implemented in the first place. Raw rasterization performance is always preferable to these sorts of things if you want to benefit across both past and present game libraries. Eventually RTRT will be more of a talking point, though right now it's a joke because neither the hardware nor the software is mature enough. I think by the end of this console generation, though, we'll end up with some good discrete RTRT hardware that has matured a lot, so the PS6 generation of consoles should be slick.
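For anyone wondering what "a sharpening filter mostly" boils down to, a plain unsharp-mask pass looks something like this (generic sketch, not AMD's actual CAS kernel):

```python
import torch
import torch.nn.functional as F

def sharpen(frame: torch.Tensor, amount: float = 0.5) -> torch.Tensor:
    """frame: (1, 3, H, W) in [0, 1]. Boost local contrast by re-adding detail lost to a blur."""
    box = torch.full((3, 1, 3, 3), 1.0 / 9.0)          # 3x3 box blur, one kernel per channel
    blurred = F.conv2d(frame, box, padding=1, groups=3)
    return (frame + amount * (frame - blurred)).clamp(0.0, 1.0)

sharpened = sharpen(torch.rand(1, 3, 1080, 1920))
```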
cucker tarlsonCorrect.
As I understand it, when it's a botched job it's entirely on Nvidia, not the developer. If it's well done, it's because Nvidia took the time to optimize it.
I don't think the game developer plays any part in the process, except for the actual amount of time they give Nvidia to work on it, which is probably a big factor.

I think it was me who linked it to you one day actually :nutkick:
That in itself is a big issue if true, actually: you'll end up with bigger developers getting better results out of Nvidia, while not-so-AAA and indie developers fare quite differently. I can't blame Nvidia for doing so, but that will skew expectations and make DLSS's inherent perks vary a lot from case to case, and I imagine the same holds true for AMD's FidelityFX.