Wednesday, February 13th 2019

NVIDIA DLSS and its Surprising Resolution Limitations

TechPowerUp readers today were greeted with our PC port analysis of Metro Exodus, which also contained a dedicated section on NVIDIA RTX and DLSS technologies. The former brings real-time ray tracing support to an already graphically intensive game, while the latter attempts to assuage the performance hit via NVIDIA's new proprietary alternative to more traditional anti-aliasing. DLSS did deliver a performance bump when enabled, but we also noted some head-scratching limitations on when and how it can even be enabled, depending on the in-game resolution and the RTX GPU employed. We then set about testing DLSS in Battlefield V, where it also became available today, and it was then that we noticed a trend.

Take Metro Exodus first, with the relevant notes in the first image below. DLSS can only be turned on for specific combinations of resolution and RTX GPU, ranging from the RTX 2060 to the RTX 2080 Ti, with NVIDIA appearing to limit users to a class-based system. Users with the RTX 2060, for example, can't use DLSS at 4K at all and, more egregiously, owners of the RTX 2080 and RTX 2080 Ti cannot enjoy RTX and DLSS simultaneously at the most popular in-game resolution of 1920x1080, which would be useful for reaching high frame rates on 144 Hz monitors. Battlefield V has a similar, yet even more restrictive, system wherein the gaming flagship RTX 2080 Ti cannot be used with RTX and DLSS at even 1440p, as seen in the second image below. This brought us back to Final Fantasy XV's DLSS implementation last year, which was all or nothing, available at 4K resolution only. What could have prompted NVIDIA to do this? We speculate further past the break.
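To make that gating concrete, here is a minimal sketch that treats the per-game, per-GPU restrictions as a simple lookup table. It encodes only the combinations called out in this article; the full matrices live in the games' menus and the screenshots below, so the entries here are illustrative, not exhaustive.

```python
# Minimal sketch of the per-game, per-GPU DLSS gating described above.
# Only the combinations explicitly mentioned in the article are encoded.

# Resolutions at which RTX + DLSS is reported as unavailable for a given GPU.
DLSS_BLOCKED = {
    ("Metro Exodus", "RTX 2060"):     {"3840x2160"},                # no DLSS at 4K
    ("Metro Exodus", "RTX 2080"):     {"1920x1080"},                # no RTX + DLSS at 1080p
    ("Metro Exodus", "RTX 2080 Ti"):  {"1920x1080"},
    ("Battlefield V", "RTX 2080 Ti"): {"1920x1080", "2560x1440"},   # blocked up to 1440p
}

def dlss_available(game: str, gpu: str, resolution: str) -> bool:
    """True if RTX + DLSS can be enabled for this game/GPU/resolution combination."""
    return resolution not in DLSS_BLOCKED.get((game, gpu), set())

print(dlss_available("Battlefield V", "RTX 2080 Ti", "2560x1440"))  # False
print(dlss_available("Metro Exodus", "RTX 2060", "1920x1080"))      # True, per the article
```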
We contacted NVIDIA to get word straight from the green horse's mouth, hoping to provide you with a satisfactory answer. Representatives for the company told us that DLSS is most effective when the GPU is at maximum workload, so if a GPU is not being challenged enough, DLSS is not made available. Accordingly, this implementation encourages users to turn on RTX first, increasing the GPU load, in order to then enable DLSS. It would thus be fair to extrapolate why the RTX 2080 Ti does not get to enjoy DLSS at lower resolutions, where perhaps it is not being taxed as hard.

We do not buy this explanation, however. Turning off VSync alone results in uncapped frame rates, which allows GPU load to near 100%. NVIDIA has been championing high refresh rate displays for years now, and our own results show that the RTX 2080 and RTX 2080 Ti are needed to get close to 144 FPS at 1080p, for that sweet 120+ Hz refresh rate display action. Why not let the end user decide what takes priority here, especially if DLSS aims to improve graphical fidelity as well? It was at this point that we went back to the NVIDIA whitepaper on the Turing microarchitecture, briefly discussed here for those interested.

DLSS, as it turns out, operates on a frame-by-frame basis. A Turing microarchitecture-based GPU has shader cores for gaming, tensor cores for large-scale compute/AI loads, and RT cores for real-time ray tracing. The load relevant to DLSS falls predominantly on the tensor cores, so effectively, higher FPS in a game means higher load on the tensor cores. The different GPUs in the NVIDIA GeForce RTX family have different numbers of tensor cores, which limits how many frames/pixels can be processed per unit of time (say, one second). This variability in the number of tensor cores is likely the major reason for this implementation of DLSS. With this approach, it appears that NVIDIA wants to make sure the tensor cores never become the bottleneck during gaming.
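As a hedged, back-of-the-envelope illustration of that argument: if DLSS costs a roughly fixed amount of tensor work per output pixel, then a GPU's tensor throughput puts a ceiling on how many upscaled frames per second it can produce. The per-megapixel cost below is a made-up constant and the TOPS figures are only approximate, so the absolute numbers mean nothing; the relative scaling between GPUs and resolutions is the point.

```python
# Back-of-the-envelope sketch of the tensor-core budget argument above.
# COST is a made-up constant; the TOPS figures are approximate FP16 tensor
# throughput numbers, used only to show relative scaling between GPUs.

def max_dlss_fps(tensor_tops: float, cost_tops_per_mpixel: float,
                 width: int, height: int) -> float:
    """Upper bound on frames per second the tensor cores could upscale at this resolution."""
    megapixels = width * height / 1e6
    work_per_frame = cost_tops_per_mpixel * megapixels  # tera-ops of tensor work per frame
    return tensor_tops / work_per_frame                 # frames-per-second ceiling

COST = 0.25  # hypothetical tera-ops of DLSS work per megapixel of output
for gpu, tops in [("RTX 2060", 52), ("RTX 2080", 85), ("RTX 2080 Ti", 114)]:
    for width, height in [(1920, 1080), (2560, 1440), (3840, 2160)]:
        fps = max_dlss_fps(tops, COST, width, height)
        print(f"{gpu:11s} {width}x{height}: ~{fps:4.0f} DLSS frames/s ceiling")
```

Under these assumptions the ceiling drops sharply as resolution rises and as the tensor core count shrinks, which is consistent with the idea that NVIDIA gates DLSS to keep the tensor cores out of the critical path.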

Another possible reason comes via Futuremark's 3DMark Port Royal benchmark for ray tracing. It recently added support for DLSS and is a standard-bearer for how RTX and DLSS can work in conjunction to produce excellent results. Port Royal, however, is an extremely scripted benchmark using pre-determined scenes, which makes good use of the machine learning capabilities behind DLSS. Perhaps this initial round of DLSS in games follows a similar mechanism, wherein the DLSS model is trained on specific scenes at specific resolutions, rather than in a resolution-independent way.

Regardless of the underlying cause, all in-game DLSS implementations so far have come with some fine print attached, which sours what is ultimately a free bonus. When it can be enabled, DLSS appears to work well, providing at least an additional dial for users to play with as they fine-tune their desired balance of visual quality and frame rate.

102 Comments on NVIDIA DLSS and its Surprising Resolution Limitations

#51
bug
Vayra86: Possibly. Seeing is believing... so far Turing on a shader level was not much of a change despite taking up additional die space for cache. I think it's quite clear the CUDA part of it won't be getting much faster. And RT is already brute force with an efficiency pass (denoise); can't be that much left in the tank IMO.
Well, if it's brute force, maybe some finesse can be added? I don't think Nvidia threw this out just to see if it sticks; they must have planned a few generations in advance. Not knowing irks me to no end though :D
#52
EarthDog
moproblems99: because NV is doing the learning where it will make the most impact.
Which is exactly what I said earlier... this isn't a money thing or an FPS limit thing... it's about getting it out where it NEEDS to be while other 'learning' is still going on. I mean, nobody can prove anything either way, but it makes complete sense they threw it out in BF V where it's actually needed. Clearly it isn't an FPS limit, as FF XV can use it on all cards at all resolutions and FPS.
#53
moproblems99
notb: DX12 hasn't really offered anything interesting for the consumer.
I disagree. I can't remember the term or the specific details, so I am just going to call it asymmetric mGPU. That could have been a huge plus for the masses of budget-oriented builds.
#54
bug
moproblems99: I disagree. I can't remember the term or the specific details, so I am just going to call it asymmetric mGPU. That could have been a huge plus for the masses of budget-oriented builds.
Arguing in favor of a technology that doesn't apply to more than 5% of PCs (and I'm being generous) only confirms what @notb said. I've said it even before DX12 was released: just because there's a lower-level alternative doesn't mean everyone will take advantage of it. Going lower level is not a universal solution (otherwise everything would have been written in C or ASM). But this is going to be a problem when the higher level (i.e. DX11) goes the way of the dodo.
#55
Gasaraki
"Effectively thus, a higher FPS in a game means a higher load on the tensor cores. The different GPUs in the NVIDIA GeForce RTX family have a different number of tensor cores, and thus limit how many frames/pixels can be processed in a unit time (say, one second). This variability in the number of tensor cores is likely the major reason for said implementation of DLSS. With their approach, it appears that NVIDIA wants to make sure that the tensor cores never become the bottleneck during gaming. "

Sorry, that doesn't make any sense. Why would they limit 2080Ti's at 1080 or even 1440 then? 2080Ti have the horsepower on the tensor cores to run it without bottleneck.
#56
moproblems99
bug: Arguing in favor of a technology that doesn't apply to more than 5% of PCs
So what is the percentage of users that can take advantage of RTX? I'll be generous and include RTX 2060 users. What compelling reason do developers have to move forward with DX12?
#57
bug
moproblems99: So what is the percentage of users that can take advantage of RTX? I'll be generous and include RTX 2060 users. What compelling reason do developers have to move forward with DX12?
Changing the subject. I'm not biting.
#58
VSG
Editor, Reviews & News
FordGT90Concept: @VSG are you able to determine if it is the driver imposing the limitation, the RTX API, or the game itself? If the game itself has all of this extra code baked in, that is very concerning. For example, what happens 20 years from now with new cards on old games? It breaks the decades-old paradigm of putting the options in the hands of the players. That doesn't sit right with me.
It's very likely the game profile for individual GPUs in the driver, based on everything seen so far, but I imagine the decision itself was made in conjunction with the game developers. So opening things up for newer GPUs will be more complicated.
EarthDog: One thing I just thought of... FF XV supports DLSS across all resolutions and gets high fps with the 2080 Ti at 1080p... so... is it really an fps limitation?? Can't say I buy that considering...
Does it? Everything I saw only mentioned it working at 4K. I might be missing something here.
#59
londiste
Vayra86: Possibly. Seeing is believing... so far Turing on a shader level was not much of a change despite taking up additional die space for cache. I think it's quite clear the CUDA part of it won't be getting much faster.
Compared to what? Consumers last saw Pascal, and Turing is a sizable boost over that. There is a wide range of games where Turing gives a considerable boost over Pascal coming from a number of architectural changes. There are definitely ways to get it faster.
moproblems99: I disagree. I can't remember the term or the specific details, so I am just going to call it asymmetric mGPU. That could have been a huge plus for the masses of budget-oriented builds.
Asymmetric mGPU needs the engine/game developer to manage work distribution across GPUs. That... hasn't really happened so far, and for obvious reasons.
Gasaraki: Sorry, that doesn't make any sense. Why would they limit the 2080 Ti at 1080p or even 1440p then? The 2080 Ti has the horsepower on the tensor cores to run it without a bottleneck.
I am pretty sure there is a simple latency consideration there. DLSS will take some time, a couple/few ms, and needs to happen at the last stages of the render pipeline. The 2080 Ti has the horsepower to render a frame at these resolutions quickly enough that DLSS latency would add too much to the frame render time. This is not a hardware limit as such; the limits are clearly set in the games themselves.
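A rough numerical sketch of this latency argument, with all frame times and the 2.5 ms DLSS cost below being guesses rather than measurements: a fixed per-frame cost is nearly free against a long 4K frame but eats most of the gain when the base frame time is already short.

```python
# Rough sketch of the latency argument: a fixed per-frame DLSS cost matters
# far more when the base frame time is already short. All numbers are
# illustrative guesses, not measured values.

DLSS_COST_MS = 2.5  # assumed per-frame DLSS overhead on the tensor cores

def fps_pair(native_frame_ms: float, internal_frame_ms: float,
             dlss_cost_ms: float = DLSS_COST_MS) -> tuple[float, float]:
    """FPS at native resolution vs rendering at a lower internal resolution + DLSS."""
    return 1000 / native_frame_ms, 1000 / (internal_frame_ms + dlss_cost_ms)

# Hypothetical RTX 2080 Ti frame times with ray tracing on:
# 4K native ~25 ms (internal 1440p ~12 ms); 1080p native ~7 ms (internal 720p ~4 ms).
for label, native_ms, internal_ms in [("4K", 25.0, 12.0), ("1080p", 7.0, 4.0)]:
    native_fps, dlss_fps = fps_pair(native_ms, internal_ms)
    print(f"{label:6s}: {native_fps:5.1f} fps native  ->  {dlss_fps:5.1f} fps with DLSS")
```

Under these assumptions the 4K gain is large while the 1080p gain is marginal, which fits the idea that the limits are set per resolution in the game profiles.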
#60
EarthDog
VSG: Does it? Everything I saw only mentioned it working at 4K. I might be missing something here.
It's me, I am wrong. The other theory has legs. FF XV is 4K only.

I still think the ability is focused where it is needed, however. As just about any card can get 60 FPS in the FF XV bench at 2560x1440 or lower, why waste resources where they aren't needed?
#61
bug
EarthDog: It's me, I am wrong. The other theory has legs. :)

I still think the ability is focused where it is needed, however. As just about any card can get 60 FPS in the FF XV bench at 2560x1440 or lower, why waste resources where they aren't needed?
I believe the argument was that DLSS could let you reach 144fps at lower resolutions. But guess what? So can lowering the resolution ;)
#62
EarthDog
bug: I believe the argument was that DLSS could let you reach 144fps at lower resolutions. But guess what? So can lowering the resolution ;)
Sure, but that isn't the point here either.
#63
bug
EarthDog: Sure, but that isn't the point here either.
I meant, that wasn't actually a use case for DLSS. If you're not maxing out the card, you don't need DLSS.
#64
moproblems99
bug: Changing the subject. I'm not biting.
If you say so. The thread is now generally about why certain features are or aren't implemented the way they are. Clearly, publishers do not want to invest where there are no returns. Apparently, NV doesn't want to invest where the returns are low right now.

If no one wants to develop for DX12, where do we go? Are we on DX11 for years to come because no publishers want to invest in future tech? Do we need to wait until there is a revolutionary tech that publishers can't ignore? Are RTX and DLSS those features? Doesn't seem so...
#65
EarthDog
bug: I meant, that wasn't actually a use case for DLSS. If you're not maxing out the card, you don't need DLSS.
What do you mean? More FPS is more FPS. It has nothing to do with maxing out the card... they run at 100% capacity all the time unless there is a different bottleneck (like the CPU).
#66
bug
EarthDog: What do you mean? More FPS is more FPS. It has nothing to do with maxing out the card... they run at 100% capacity all the time unless there is a different bottleneck (like the CPU).
Meh, too tired. Maybe I'm not explaining it right.
#67
notb
moproblems99: I disagree. I can't remember the term or the specific details, so I am just going to call it asymmetric mGPU. That could have been a huge plus for the masses of budget-oriented builds.
I believe I wasn't clear in that post.
DX12 certainly is a more efficient API and should increase fps when used properly (we've already seen this is not always the case).

But this is not attractive to the customer. He doesn't care about a few fps.

RTRT is a totally different animal. It's qualitative rather than quantitative. It really changes how games look.
I mean: if we want games to be realistic in some distant future, they will have to utilize RTRT.
Does RTRT require 3D APIs to become more low-level? I don't know. But that's the direction DX12 went. And it's a good direction in general. It's just that DX12 is really unfriendly for the coders, so:
1) the cost is huge
and
2) this most likely leads to non-optimal code and takes away some of the gains.

But since there could actually be a demand for RTRT games, at least the cost issue could go away. And who knows... maybe the next revision of DX12 will be much easier to live with.
#68
moproblems99
notb: But this is not attractive to the customer. He doesn't care about a few fps.
Really? How many posts come along with people asking how to increase their fps? Or "X review had 82fps but I only have 75fps, what's wrong?" I would say the vast majority of 'gamers' chase fps regardless of whether it even benefits them.
notb: It's just that DX12 is really unfriendly for the coders
Well, this is what consoles have, and most developers develop for consoles and port over to PC. In fact, the whole purpose of Mantle and Vulkan (which may or may not have pushed DX12 to what it is) was that developers wanted to be closer to the metal so they could get more performance. Is DX12 a bad implementation? I dunno, but since MS made it, I don't doubt it.
notb: RTRT is a totally different animal. It's qualitative rather than quantitative. It really changes how games look.
It's also subjective. Screenshots of BF V look like hot trash (to me). It looks like anything that has a reflection is a mirror. Not everything that has a reflection is a mirror. I understand these were likely shortcuts to get the tech out there. But again, what incentive is there?
notb: But since there could actually be a demand for RTRT games, at least the cost issue could go away. And who knows... maybe the next revision of DX12 will be much easier to live with
I think devs will look at sales of the RTX series and see what market share is there. When the next gen of RTX cards is released, they will watch again. If a significant share of sales is not RTX *70 series and up, I can't see the return outweighing the cost.
#69
John Naylor
Users with the RTX 2060, for example, can't use DLSS at 4K at all and, more egregiously, owners of the RTX 2080 and RTX 2080 Ti cannot enjoy RTX and DLSS simultaneously at the most popular in-game resolution of 1920x1080, which would be useful for reaching high frame rates on 144 Hz monitors. Battlefield V has a similar, yet even more restrictive, system wherein the gaming flagship RTX 2080 Ti cannot be used with RTX and DLSS at even 1440p, as seen in the second image below.
From my perspective, "2060" and "4K" should not ever be used in the same sentence... same for "2080 Ti" and "1080p"; is there a game in the test suite where a manually OC'd 2080 Ti can't do 100+ fps? I really can't see someone springing for well over $1,000 for a 2080 Ti ($1,300 for an AIB A series) and using it with a $250 144 Hz monitor. Yes, it's the most popular resolution, and it's typically used with the most popular cards, which are in the same budget range. The 3 most popular are... NVIDIA GeForce GTX 1060, NVIDIA GeForce GTX 1050 Ti and NVIDIA GeForce GTX 1050. I'm using a 144 Hz monitor... but turning on MBR drops that to 120. Are we really imagining an instance where someone lays out $1,300 for an AIB 2080 Ti and pairs it with a $250 monitor? To my eyes, that's like complaining that your new $165,000, 750 HP sports car does not have an "Eco mode".
Anymal: Exactly, nvidia is pushing tech further; they can do this now since AMD is 2 years behind and has only 30% market share. The next gen of GeForces on 7nm will bring the perf we desire, and the tech will advance even further so we will again desire more. AMD and their consoles are stagnant.
I think that market share estimate is a bit generous. Market share for NVIDIA over recent years has been reported at 70-80%, so it's oft assumed that AMD has the rest... but Intel is closing in on 11%, leaving AMD with just 15%, though that has been inching up by about 0.1% in recent months, which is a good sign. If we take Intel out of the equation and just focus on discrete cards... it's about 83% to 17% at this time.

The biggest gainers among the top 25 in the last month, according to the Steam HW Survey, were (in order of cards out there): the 4th place 1070 (+0.18%), the entire R7 series (+0.19%), the 21st place RX 580 (+0.15%) and the 24th place GTX 650 (+0.15%)... The biggest losers were the 1st place 1060 (-0.52%) and the 14th place GTX 950 with -0.19%. The 2070 doubled its market share to 0.33%... and the 2080 is up 50% to a 0.31% share, which kinda surprised me. RX Vega (which combines Vega 3, Vega 6, Vega 8, RX Vega 10, RX Vega 11, RX Vega 56, RX Vega 64, RX Vega 64 Liquid, and apparently, Radeon VII) made a nice first showing at 0.16%. Also interesting that the once dominant 970 will likely drop below 3% next month.
bug: Arguing in favor of a technology that doesn't apply to more than 5% of PCs (and I'm being generous) only confirms what @notb said. I've said it even before DX12 was released: just because there's a lower-level alternative doesn't mean everyone will take advantage of it. Going lower level is not a universal solution (otherwise everything would have been written in C or ASM). But this is going to be a problem when the higher level (i.e. DX11) goes the way of the dodo.
I thought about that for a bit. If we use 5% as the cutoff for discussion, then all we can talk about is technology that shows its benefits for:

1920 x 1080 = 60.48%
1366 x 768 = 14.02%

Even 2560 x 1440 is in use by only 3.97%... 2160p is completely off the table as it is used by only 1.48%. But don't we all want to "move up" at some point in the near future?

The same arguments were used when the automobile arrived (unreliable, will never replace the horse!)... and for most other technologies. I'm old enough to remember when it was said "Bah, who would ever actually buy a color TV?" Technology advances much like human development: "walking is stoopid, all I gotta do is whine and momma will carry me..." I sucked at baseball my 1st year; I got better (a little). I sucked at football my 1st year (got better each year I played). I sucked at basketball my 1st year, and was pretty good by college. Technology advances slowly; we find what works and then take it as far as it will go... eventually, our needs outgrow the limits of the tech in use and you need new tech. Where's Edison's carbon filament today? When any tech arrives, in its early iterations, expect it to be less efficient and less cost-effective, but it has room to grow. Look at IPS... when folks started thinking "Ooh, IPS has more accurate color, let's use it for gaming"... it turned out it wasn't a good idea by any stretch of the imagination.

But over time the tech advanced, AU Optronics screens came along, and we had a brand new gaming experience. Should IPS development have been shut down because less than 5% of folks were using it (at least properly and satisfactorily)? My son wanted an IPS screen for his photo work (which he spent $1,250 on), thinking it would be OK for gaming... 4 months later he had a 2nd (TN) monitor, as the response time and lag drove him nuts, and every time he went into a dark place he'd get killed because everyone and everything could see him long before he could see them, thanks to the IPS glow. Now, when not on one of those AU screens, it feels like I am eating oatmeal without any cinnamon, maple syrup, milk or anything else which provides any semblance of flavor.

But if we're going to say that what is being done by < 5% of gamers doesn't matter, then we are certainly saying that we should not be worrying about a limitation that does not allow a 2080 Ti owner to use a feature at 1080p. That's like buying a $500 tie to wear with a $99 suit.
#70
akaloith
The bargain card is the RTX 2060. If they had allowed DLSS at 4K without RTX (all were hoping for this), then the 2060 would be perfectly capable of 4K gaming, so no one would buy the 2070, 2080 or 2080 Ti.

I guess at some point users will be able to unlock DLSS without the above limitations. The gain or loss in image quality is a VERY serious matter; promising performance gains while butchering image quality is stealing and fraud.
#71
Emu
2080 Ti and 3440x1440 @ 100 Hz user here. DLSS seems to work fine at that resolution in BFV. It does make things a bit blurry though, which is really annoying. I haven't tested it at 100 Hz yet because every time I update my driver it forgets that my monitor is 100 Hz until I reboot, which I had even forgotten about until I turned on the FPS meter in BFV and wondered why it was pegged at 60 fps.
#72
FordGT90Concept
"I go fast!1!11!1!"
VSG: It's very likely the game profile for individual GPUs in the driver, based on everything seen so far, but I imagine the decision itself was made in conjunction with the game developers. So opening things up for newer GPUs will be more complicated.
Isn't it the UI in the game that is flipping the option, though? Or do you try to run both and somehow discover the game is lying?

If the UI itself is instantly changing settings based on selected settings, then that is beyond what a game profile for a driver can usually do. The game has to be using an API of some kind that tests settings against the game profile. One could perhaps run the game and check its modules to see if it is loading some NVIDIA-branded library specific to RTX to do that.
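A minimal sketch of that module check, assuming psutil is available; the process name and the "ngx" substring to search for are guesses for illustration, not confirmed file names.

```python
# Sketch: list modules mapped by a running game process whose path contains
# an NVIDIA NGX/DLSS-looking name. "bfv" and the "ngx" needle are guesses.
import psutil

def nvidia_modules(process_name: str, needle: str = "ngx") -> list[str]:
    """Return mapped module paths containing `needle` for processes matching `process_name`."""
    hits = []
    for proc in psutil.process_iter(["name"]):
        name = proc.info.get("name") or ""
        if process_name.lower() in name.lower():
            try:
                hits.extend(m.path for m in proc.memory_maps() if needle in m.path.lower())
            except (psutil.AccessDenied, psutil.NoSuchProcess):
                pass  # skip processes we cannot inspect
    return hits

if __name__ == "__main__":
    print(nvidia_modules("bfv"))  # run while the game is up; may need elevated rights
```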
#73
Super XP
Emu: 2080 Ti and 3440x1440 @ 100 Hz user here. DLSS seems to work fine at that resolution in BFV. It does make things a bit blurry though, which is really annoying. I haven't tested it at 100 Hz yet because every time I update my driver it forgets that my monitor is 100 Hz until I reboot, which I had even forgotten about until I turned on the FPS meter in BFV and wondered why it was pegged at 60 fps.
Keep it disabled. You are better off.
#74
Emu
Super XP: Keep it disabled. You are better off.
Yeah, it is looking that way - the extra FPS might be nice but I am doubting the value given the blurriness it gives to the game.
#75
Super XP
Emu: Yeah, it is looking that way - the extra FPS might be nice but I am doubting the value given the blurriness it gives to the game.
I am sure they will fix this issue, but don't count on it. Something that gives you faster performance without a picture quality compromise? That is a very difficult task, one which I can't see happening.
To give you an example of hype, I remember when AMD released Morphological Filtering in its drivers. It was kind of hyped way back. It's still available in the drivers, but when I enable it, the picture quality doesn't look right. lol