Editorial: NVIDIA DLSS and its Surprising Resolution Limitations

DLSS is intended for upscaling the image.
It could be used as an antialiasing method (referred to as DLSS 2X) but we have not seen this type of application yet.
I know it's intended and advertised as such, but I really doubt it does what it was intended to do. I have a serious problem believing this upscaling works.
 

bug

In your previous post you said I'm wrong about all that, and that DLSS has nothing to do with RTX. I don't know what you tried to explain, but it is definitely not explaining anything. On top of that, saying that DLSS enhances a low-quality image is crazy. It's been said that DLSS reduces the resolution of the image compared to TAA, and that's what longdiste wrote.

You think DLSS is lowering the quality of a 4K image. It is not, because there is no 4K image being rendered. Instead, a 1440p image is rendered and then upscaled so that it looks acceptably close to an actual 4K image. DLSS doesn't care whether the source image was rendered using RTX or not.
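To put that render-then-upscale idea in concrete terms, here is a toy sketch (my own illustration, not NVIDIA's actual algorithm): the crudest possible 1440p-to-4K upscale, nearest-neighbor sampling, which is the naive baseline a trained network like DLSS is meant to beat.

```python
import numpy as np

# Toy illustration (NOT NVIDIA's algorithm): nearest-neighbor upscale of a
# 1440p-sized frame to 4K. DLSS replaces this naive step with a trained network.
src_h, src_w = 1440, 2560   # internally rendered resolution
dst_h, dst_w = 2160, 3840   # target "4K" output resolution

frame = np.random.rand(src_h, src_w, 3)  # stand-in for a rendered RGB frame

# Map each destination pixel back to its nearest source pixel (1.5x per axis).
ys = np.arange(dst_h) * src_h // dst_h
xs = np.arange(dst_w) * src_w // dst_w
upscaled = frame[ys[:, None], xs, :]

print(upscaled.shape)  # (2160, 3840, 3)
```

The point of the sketch is only the shape of the problem: the GPU produced 1440p worth of real pixels, and everything at 4K beyond that is reconstructed, whether naively as above or by a learned model.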

As NVIDIA has stated, DLSS is based on machine learning. Currently, the number of training samples is limited. As the number of training samples increases, DLSS should supposedly be able to generate images that are closer to the real thing.
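A minimal sketch of why more training samples help, using a deliberately tiny stand-in for the real thing (DLSS uses a deep network trained by NVIDIA; here it's just a linear model fit by least squares on hypothetical low-res/high-res pixel pairs):

```python
import numpy as np

# Toy stand-in for the DLSS training idea: learn a mapping from low-res
# neighborhoods to high-res pixel values from example pairs.
# (Illustrative only; the real system is a deep neural network.)
rng = np.random.default_rng(0)

n_samples = 1000
low_res_patches = rng.random((n_samples, 4))        # 2x2 low-res neighborhoods
true_weights = np.array([0.25, 0.25, 0.25, 0.25])   # hypothetical "ground truth"
high_res_pixels = low_res_patches @ true_weights    # target high-res values

# Fit by least squares; with more (and cleaner) samples the learned mapping
# converges toward the true one -- the "more training data" argument in miniature.
learned, *_ = np.linalg.lstsq(low_res_patches, high_res_pixels, rcond=None)
print(np.allclose(learned, true_weights))  # True
```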

It also doesn't matter how still frames look up close; what matters is how the game looks in motion. And if that sounds confusing, try pausing an otherwise crisp video on YouTube and look at the quality of the picture you get ;)
 

Aquinus

Resident Wat-man
I know it's intended and advertised as such, but I really doubt it does what it was intended to do. I have a serious problem believing this upscaling works.
I don't. Machine learning is supposed to be good at filling in the blanks, and this isn't a case where filling in the blanks needs to be perfect. It is literally a perfect situation for machine learning (if there is enough time to do the math). If a GPU is under heavy load, it's going to spend a lot of time (relatively speaking) rendering a frame. DLSS just renders a sizable portion of the frame while the tensor cores "fill in the blanks" after the fact. It's meant to give the illusion that you're running at, say, 4K instead of 1440p. Dropping the resolution makes it easier for the rest of the GPU to render the frame, and the otherwise unused resources of the tensor cores are then given a task after the frame has been rendered and stored in the framebuffer. This lets the tensor cores do their work while the next frame is already being rendered.
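The back-of-the-envelope arithmetic behind that savings (my numbers, assuming a 1440p internal render target for 4K output, as discussed above):

```python
# Pixel counts behind the claim that rendering at 1440p and upscaling
# is much cheaper than rendering native 4K.
pixels_1440p = 2560 * 1440      # 3,686,400 pixels
pixels_4k    = 3840 * 2160      # 8,294,400 pixels

ratio = pixels_1440p / pixels_4k
print(f"1440p renders {ratio:.0%} of the pixels of native 4K")
```

So the shader cores only rasterize and shade roughly 44% of the pixels a native 4K frame would require; the remaining detail is the part handed off to the tensor cores.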

This actually makes perfect sense. The problem, though, is that you shouldn't need this kind of trickery with hardware that's this fast and this expensive. This really should be a feature for slower GPUs, where the GPU itself being a bottleneck is much more likely. It's also entirely possible that you need enough tensor cores to do it, in which case you already need a lot of (tensor) resources available. If that's really the case, its benefit (to me) seems marginal.

NVIDIA really likes its smoke and mirrors.
 