
Editorial NVIDIA DLSS and its Surprising Resolution Limitations

DLSS is intended for upscaling the image.
It could be used as an antialiasing method (referred to as DLSS 2X) but we have not seen this type of application yet.
I know it's intended and advertised as such, but I really doubt it does what it was intended to do. I've got a serious problem believing this upscaling.
 
In your previous post you said I was wrong about all that and that DLSS has nothing to do with RTX. I don't know what you were trying to show, but it definitely doesn't explain anything. On top of that, saying that DLSS enhances a low-quality image is crazy. It's been said that DLSS reduces the resolution of the image compared to TAA, and that's what longdiste wrote.

You think DLSS is lowering the quality of a 4k image. It is not, because there is no 4k image being rendered. Instead, a 1440p image is being rendered, and it is then upscaled so that it looks acceptably close to an actual 4k image. DLSS doesn't care whether the source image is rendered using RTX or not.
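
To put rough numbers on the resolutions involved, here's a quick sketch (the 2/3 scale factor is my own assumption based on the 4k-to-1440p example above, not a figure from NVIDIA):

```python
# Sketch only: what resolution is actually rendered vs. what you see on screen.
# The 2/3 scale factor (4K output -> 1440p internal render) is an assumption.

def internal_render_size(output_w: int, output_h: int, scale: float = 2 / 3):
    """Return the lower resolution the frame is rendered at before upscaling."""
    return round(output_w * scale), round(output_h * scale)

print(internal_render_size(3840, 2160))  # -> (2560, 1440): what the GPU actually renders for "4K DLSS"
print(internal_render_size(2560, 1440))  # -> (1707, 960): "1440p DLSS" renders below 1440p
```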

As NVIDIA has stated, DLSS is based on machine learning. Currently, the number of training samples is limited. As the number of training samples increases, supposedly DLSS will be able to generate images that are closer to the real thing.
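
If it helps, here's roughly how I picture those training samples being put together: pairs of a low-resolution input and the matching native high-resolution frame the network is supposed to reproduce. This is just my mental model, not NVIDIA's actual pipeline; the function name, the 2x factor and the naive downsampling are made up for illustration:

```python
# Conceptual sketch of a (low-res input, high-res target) training pair.
# Naive decimation stands in for the real low-resolution render.
import numpy as np

def make_training_pair(ground_truth: np.ndarray, factor: int = 2):
    """Fake the low-res frame the network would see, keep the high-res frame as the target."""
    low_res = ground_truth[::factor, ::factor]   # crude downsample, illustration only
    return low_res, ground_truth                 # (network input, target it learns to reproduce)

frame = np.random.rand(216, 384, 3)              # stand-in for a native high-res frame
x, y = make_training_pair(frame)
print(x.shape, y.shape)                          # (108, 192, 3) (216, 384, 3)
```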

It also doesn't matter how images look up close; what matters is how the game looks in motion. And if that sounds confusing, try pausing an otherwise crisp video on YouTube and look at the quality of the picture you get ;)
 
I know it's intended and advertised as such, but I really doubt it does what it was intended to do. I've got a serious problem believing this upscaling.
I don't. Machine learning is supposed to be good at filling in the blanks, and this isn't a case where filling in the blanks needs to be perfect. It is literally a perfect situation for machine learning (if there is enough time to do the math). If a GPU is under heavy load, it's going to spend a lot of time (relatively speaking) rendering each frame. DLSS just renders a sizable portion of the frame while the tensor cores "fill in the blanks" after the fact. It's there to give the illusion that you're running at, say, 4k instead of 1440p. Dropping the resolution makes it easier for the rest of the GPU to render the frame, and the otherwise unused resources of the tensor cores are then given a task after the frame has been rendered and stored in the framebuffer. This lets the tensor cores do their work while the next frame is already being rendered.
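
To make the overlap idea concrete, here's a back-of-the-envelope sketch (every millisecond figure is invented for illustration, not a measurement):

```python
# Sketch of why the upscale step can "hide" behind the next frame's render:
# in steady state the frame time is set by the slower of the two overlapped stages.
# All timings below are made-up assumptions.

RENDER_MS_1440P = 10.0   # assumed time to render a frame at the lower internal resolution
UPSCALE_MS = 2.0         # assumed time for the tensor cores to upscale that frame
RENDER_MS_4K = 18.0      # assumed time to render the same frame natively at 4k

def effective_frame_time(render_ms: float, upscale_ms: float) -> float:
    """With the upscale overlapped against the next frame's render, the
    steady-state frame time is whichever stage takes longer."""
    return max(render_ms, upscale_ms)

print(effective_frame_time(RENDER_MS_1440P, UPSCALE_MS))  # 10.0 ms/frame with the DLSS-style overlap
print(RENDER_MS_4K)                                       # 18.0 ms/frame rendering 4k natively
```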

This actually makes perfect sense. The problem, though, is that you shouldn't need this kind of trickery with hardware that's this fast and this expensive. This really should be a feature for slower GPUs, where the GPU itself being a bottleneck is much more likely. It's also entirely possible that you need enough tensor cores to pull it off, in which case you already need a lot of (tensor) resources available. If that's really the case, its benefit (to me) seems marginal.

NVIDIA really likes its smoke and mirrors.
 