Monday, May 4th 2020
GeForce NOW Gains NVIDIA DLSS 2.0 Support In Latest Update
NVIDIA's game streaming service GeForce NOW has gained support for NVIDIA Deep Learning Super Sampling (DLSS) 2.0 in the latest update. DLSS 2.0 uses the Tensor Cores found in RTX-series graphics cards to render games at a lower resolution and then reconstruct sharp, higher-resolution images with a trained AI model. The introduction of DLSS 2.0 to GeForce NOW should allow graphics quality to be improved on existing server hardware and deliver a smoother, stutter-free gaming experience. NVIDIA announced that Control would be the first game on the platform to support DLSS 2.0, with additional games such as MechWarrior 5: Mercenaries and Deliver Us The Moon set to support the feature in the future.
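To get a rough sense of why rendering at a lower internal resolution and reconstructing the output frees up performance, here is a minimal back-of-the-envelope sketch in Python; the internal resolutions used are illustrative examples, not NVIDIA's actual DLSS render targets:

```python
# Rough pixel-count comparison: rendering internally at a lower resolution and
# reconstructing the output means the GPU shades far fewer pixels per frame
# than rendering natively at 4K. These internal resolutions are illustrative
# assumptions, not NVIDIA's actual DLSS render targets.

resolutions = {
    "native 4K": (3840, 2160),
    "internal 1440p": (2560, 1440),
    "internal 1080p": (1920, 1080),
}

native_pixels = 3840 * 2160

for name, (width, height) in resolutions.items():
    pixels = width * height
    print(f"{name:>15}: {pixels:>9,} pixels shaded per frame "
          f"({pixels / native_pixels:.0%} of the native 4K workload)")
```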
Source: NVIDIA
11 Comments on GeForce NOW Gains NVIDIA DLSS 2.0 Support In Latest Update
This is how I understand it:
There is a "server" with a reference 16k image.
It lays that image against what you are seeing and then tries to make your (low res) image look as much as possible like the 16k image.
So does that mean you have to be online to even get DLSS support, and/or to help it make the DLSS support for the game you are playing better?
Also, why does turning DLSS on improve performance?
Does it mean that when you select 4K in a game with DLSS on, you are actually running it at, for example, 1080p, and DLSS "upscales" it to 4K with good quality?
And also, if this is how it works, why do games even need to support it? Why can't Nvidia just make a 16k reference image themselves and have that communicate with the users to train the "server" in using it?
Why would this not be something you can just turn on in the Nvidia Control Panel for every game?
And it's up to Nvidia then to choose which games the system learns before rolling it out to consumers, so some games may never be chosen by them. That is kinda sad to think about.
The game is run on a render farm at very high resolutions, and stills from the game are fed into a program that tries to generate a model that generalizes how the images are supposed to look. This model is then shipped in a driver update to your machine, where it is fed images from the game running locally at a lower resolution, and hopefully it can then scale those images up to look like the original high-resolution images generated back on the render farm.
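As a loose illustration of the inference half of that pipeline, here is a minimal sketch using PyTorch; the tiny network, the per-game weight file name, and the 2x scale factor are all invented for the example and do not reflect NVIDIA's actual DLSS model, which also consumes inputs such as motion vectors and previous frames:

```python
import torch
import torch.nn as nn

class TinyUpscaler(nn.Module):
    """Toy 2x upscaling network standing in for a hypothetical per-game model.

    The real DLSS network is far more involved; this only sketches the
    "low-res frame in, high-res frame out" shape of the problem.
    """
    def __init__(self, scale: int = 2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 3 * scale * scale, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),  # rearranges channels into a larger image
        )

    def forward(self, low_res_frame: torch.Tensor) -> torch.Tensor:
        return self.body(low_res_frame)

# Hypothetical per-game weights would be trained on the render farm and shipped
# once (e.g. in a driver update); after that, inference runs entirely offline.
model = TinyUpscaler(scale=2)
# model.load_state_dict(torch.load("some_game_upscaler.pt"))  # per-game weights
model.eval()

with torch.no_grad():
    low_res = torch.rand(1, 3, 1080, 1920)     # one 1080p RGB frame (random stand-in)
    high_res = model(low_res)                  # reconstructed 2160p frame
    print(tuple(low_res.shape), "->", tuple(high_res.shape))
```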
Emphasis on the word "hopefully".
You don't have to be connected to the server; you just need to download the model once (ideally). The model differs from game to game for accuracy purposes; you could make one global model, but the results are going to be worse.
Someone wake me up if they get to that point before abandoning DLSS.
I agree, the per-title optimization is utterly disgusting and useless. But this is Nvidia. Look at their drivers. They bring day-one game-ready stuff every time, for reasons that vary, but it's still there and they don't really miss a lot of titles at all. And in terms of pushing the performance envelope... I do think this is the direction in the near future anyway if we want more, faster, bigger. Those nanometers won't get much smaller, bigger chips are not for everyone's wallet, and going bigger is completely counterproductive when growth is the norm. If you need more volume, you need smaller dies.