TL;DR: They don't always just render at a higher res and scale it down; sometimes it feels like they're rendering, say, four different displays at once, one frame at a time, drastically increasing render latency and therefore input lag
The newer versions of this tech (DLAA and DLDSR vs the old DSR) deal with this a lot better, which likely explains Nvidia's choice of scaling factors somehow (2.25x instead of 4x, etc)
With Nvidia this is easy to measure, since the GeForce Experience overlay shows you render latency and system latency.
All these up- and down-scalers are basically the same technology, just using different compressions - it's like YouTube serving 4K video to your 1080p screen, and somehow it still looks better.
You must not confuse render latency with frametimes - frametimes are just 1000/FPS, while render latency is how long the GPU took to render the frame after the CPU requested it
If render times are higher than frametimes, you get input lag as you're reacting to an image from the past.
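To keep those two numbers straight, here's a quick Python sketch of the relationship (example numbers, not my actual measurements):

```python
def frametime_ms(fps):
    """Frametime: how often a new frame is produced (1000 / FPS)."""
    return 1000 / fps

def reacting_to_the_past(render_latency_ms, fps):
    """If the GPU takes longer to render a frame than the frame
    interval, the image on screen is already stale."""
    return render_latency_ms > frametime_ms(fps)

print(frametime_ms(100))              # 10.0 ms per frame at 100 FPS
print(reacting_to_the_past(3, 100))   # False - 3ms render latency is fine
print(reacting_to_the_past(25, 100))  # True - you're seeing an old frame
```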
I just went and played with the nvidia version of this, DLDSR
It's rendering at a higher resolution, then downscaling - more or less like an AA setting that renders at a resolution your display doesn't use, like a reverse of the high-DPI scaling in Windows
At 4K I run 150% zoom (which gives me an effective 2D desktop resolution of 2560x1440), and using DLDSR pushes that 2D image back up to 4K
VSR, DSR, DLDSR and DLAA all do the same thing with different mathematical magic to make the most of the pixels. One example I had was in DRG (UE4 engine): at 4K, a monitor screen in the loading area had visible text, while at 1440p the text could not be read
With DLAA (and, interestingly, DLSS at the max quality setting) that text became legible
The downside? They require extra frames to be rendered, and add a lot to render latency.
In StarCraft II, an easy-to-render DX9 title, I can get as low as 3ms render latency at native resolution
These values are taken *while paused* so that there is no CPU or GPU variation, the only difference is the resolution chosen
The options:
Other relevant settings: DX9, in game Vsync off, Nvidia Fast Vsync forced on, ultra low latency mode forced on.
Without this combination I was seeing render latency start at 50ms and spike higher, which was about as fun as playing at 20fps
2.25x resolution:
Math-wise this is acceptable: the measured render latency maths out to a 95.2 FPS equivalent (1000 divided by the latency in ms), right around the 101FPS being rendered - so I'm getting render latency at about what's needed for that frame rate, and it feels perfectly fine
Then 1.78x
1000/122 ≈ 8.2ms
So the render latency is making this frame rate feel slower than it actually is, but it's still fast enough to not notice.
On a 60Hz display with 16.6ms per frame this is invisible, but on a 165Hz display at 6.0ms per frame, this could feel slightly slower than normal, even with high FPS values
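The same comparison in code - the ~8.2ms render latency from above checked against each display's refresh interval:

```python
def refresh_interval_ms(hz):
    # Time between display updates.
    return 1000 / hz

render_latency = 8.2  # ms, the 1.78x measurement above

for hz in (60, 165):
    hidden = render_latency <= refresh_interval_ms(hz)
    print(f"{hz}Hz: {refresh_interval_ms(hz):.1f}ms per refresh, "
          f"latency hidden by the refresh: {hidden}")
```

At 60Hz the refresh interval swallows the latency entirely; at 165Hz it doesn't, which is why the same latency can feel different on different displays.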
Now stock 4K
We're seeing 130FPS drop to 101FPS, so Nvidia is right - this scaling gives *amazing* performance for what it visually achieves
But we're also down to 5.3ms render latency - more than fast enough to not feel any lag whatsoever on a 165Hz display
Dropping to 1080p didn't lower the render times much at all here, so it's not about the resolution itself
If you really wanted 240Hz or something for esports this might matter - but clearly DLDSR is the cause of the latency here, not the higher resolution itself
Summary here: since SC2 drops frame rates below 60 a lot, you might as well enjoy the visual quality, as long as the render latencies don't go too high. High latency here makes it damned hard to click where you want in combat, but keyboard shortcuts would alleviate that a lot.
I'll show DLAA stock and DLSS up next with DRG, and combine them if the game lets me. Posts will probably auto-merge when I do so
DRG:
Notice this is a different value - total system latency - which includes the time the monitor takes to display the image at its current refresh rate. It cannot be directly compared with the SC2 results above.
When I alt-tab in and out, the render latency reading briefly appears, but it's erratic because of the tabbing
I run a 120FPS cap here, so the FPS should be the same in every test, showing just the changes between rendering techniques
Obviously, with an uncapped frame rate my latencies get lower - but once you reach the point where your GPU is always 100% loaded, latency gets WORSE, not better
I'm running two 4K 60Hz monitors overclocked to 65Hz. It's uncommon, but combined with keeping latencies low, I get the fantastic visuals of 4K with the low input latency of my 165Hz display - just a slower visual refresh leading to a little motion blur. Since my 4K 60 and 1440p 165 monitors use the same 4ms VA panel, they're damned near impossible to tell apart visually in person.
All results are with:
Fast Vsync on (doesn't work in DX12)
In game Vsync off
FPS Limiter enabled
Reflex enabled
Native 4k, 65FPS cap
My 65Hz display maths out to 15.38ms per frame, so this is pretty much bang-on "perfect"
Native 4k, 120FPS cap (in game limiter)
Now, while my display can only update every 15.38ms, frames are rendered in 13ms - meaning what I see is about 2ms newer. A small change, but it's the entire benefit of a higher refresh display, on a lower refresh display
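That "newer frame" claim is just the gap between the refresh interval and the render time - roughly, using the numbers above:

```python
refresh_ms = 1000 / 65   # 15.38ms between display updates at 65Hz
render_ms = 13.0         # measured render time with the 120FPS cap

# The frame on screen is this much fresher than one rendered
# at the display's own pace:
print(f"{refresh_ms - render_ms:.1f}ms newer")   # 2.4ms newer
```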
4K, DLSS quality
12.5ms maths out to 80Hz/FPS here.
Far faster than my 65Hz display can achieve, so again - it feels really fast and responsive
This game supports FidelityFX 1.0 and 2.0, so here they are for direct comparison. 1.0 does well, with some artifacting - 2.0 looks like ass at present.
(screenshots: 1.0 on the left, 2.0 on the right)
(In this title, 2.0 looks and runs worse)
Now we bring in the DLDSR resolutions.
It gets ugly.
2.25x
5760x3240
65 fps cap
Despite running at 65fps, the latency maths out to 40Hz - so this feels slow
Not unusably slow, but it definitely throws aim off, making you miss shots and mis-time jumps
2.25x
5760x3240
120FPS cap - the GPU can't reach it, so this might as well be unlimited
92FPS should be about 10.9ms, but it's four times that - so you're four frames behind where the game is
Because the GPU fell behind, the CPU had frames ready and waiting - so by the time the GPU can use them, they're old
This feels like using a laggy bluetooth mouse and is impossible to game with
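A toy model of that queue buildup (the queue depth is an assumption for illustration, not a measured value): when the GPU is the bottleneck, a frame waits through the whole backlog before it renders, so latency scales with how many frames the CPU has stacked up.

```python
def gpu_bound_latency_ms(frametime_ms, frames_queued):
    """Each queued frame adds a full frametime of waiting before
    the newest input reaches the screen."""
    return frames_queued * frametime_ms

frametime = 1000 / 92                       # ~10.9ms per frame at 92FPS
print(gpu_bound_latency_ms(frametime, 1))   # ~10.9ms: no backlog, feels fine
print(gpu_bound_latency_ms(frametime, 4))   # ~43.5ms: four frames behind
```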
Combining this with DLSS Quality to remove some of the GPU load still leaves it in laggy, unusable territory
Then, to top it off, we use DLAA - which does the same as the above example, combining a higher render resolution with DLSS to drop back to native resolution
And it's 1/3 the latency! What a win! DLAA for life, am I right?!?
The key here is that if GPU usage hits 100% (such as when I unlimit the FPS), DLAA goes to shit
4k
Whatever resolution this maths out to with DLAA, we're seeing 194FPS - which should be 5.15ms - combined with four times that latency
This 200FPS experience has the same latency and feel as running a regular old 49.2fps
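That "200FPS that feels like 49" conversion is just the latency read back as a frame rate - a quick sketch, assuming the ~20.3ms of total latency implied above:

```python
def feels_like_fps(total_latency_ms):
    """Read a latency back as the frame rate it 'feels' like."""
    return 1000 / total_latency_ms

# ~20.33ms of latency (roughly 4x the 5.15ms frametime at 194FPS):
print(round(feels_like_fps(20.33), 1))   # 49.2
```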
Summary:
If you can use this tech without your GPU hitting 100%, you can have a good experience.
If you hit 100% usage, even erratically in gameplay: no. God no.
To show the worst possible hell, let's try that 2.25x res with DLAA
That cinematic 19Hz feel really sells it, ya know?