
Editorial NVIDIA DLSS and its Surprising Resolution Limitations

Joined
Sep 17, 2014
Messages
22,666 (6.05/day)
Location
The Washing Machine
System Name Tiny the White Yeti
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling CPU: Thermalright Peerless Assassin / Case: Phanteks T30-120 x3
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
VR HMD HD 420 - Green Edition ;)
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
It's not only the fab node (though that is a big part of the equation). With the data collected from Turing, it's possible to fine tune the hardware resources (e.g. the ratio of CUDA cores:tensor cores:RT cores or something like that).

Possibly. Seeing is believing... so far Turing at the shader level was not much of a change, despite taking up additional die space for cache. I think it's quite clear the CUDA part of it won't be getting much faster. And RT is already brute force with an efficiency pass (denoise); there can't be that much left in the tank, IMO. That's not to say Nvidia hasn't surprised us before, but that is exactly what I'm waiting for here. DLSS sure as hell isn't it, and it sure as hell was intended to be.

Call me the pessimist here... but the state of this new generation of graphics really doesn't look rosy. It's almost impossible to do a fair like-for-like comparison with all these techniques that are proprietary and implemented on a per-game basis. Nvidia is creating tech that makes everything abstract, which is as much a way to hide problems as it is a solution.
 

bug

Joined
May 22, 2015
Messages
13,843 (3.95/day)
Processor Intel i5-12600k
Motherboard Asus H670 TUF
Cooling Arctic Freezer 34
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500
Display(s) Dell U3219Q + HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
Possibly. Seeing is believing... so far Turing at the shader level was not much of a change, despite taking up additional die space for cache. I think it's quite clear the CUDA part of it won't be getting much faster. And RT is already brute force with an efficiency pass (denoise); there can't be that much left in the tank, IMO.
Well, if it's brute force, maybe some finesse can be added? I don't think Nvidia threw this out just to see if it sticks; they must have planned a few generations in advance. Not knowing irks me to no end though :D
 
Joined
Dec 31, 2009
Messages
19,371 (3.54/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
because NV is doing the learning where it will make the most impact.
Which is exactly what I said earlier... this isn't a money thing, or an FPS-limit thing... it's about getting it out where it NEEDS to be while other 'learning' is still going on. I mean, nobody can prove anything either way, but it makes complete sense that they rolled it out in BF V, where it's actually needed. It clearly isn't an FPS limit, as FF XV can use it on all cards at all resolutions and frame rates.
 
Joined
Mar 10, 2015
Messages
3,984 (1.11/day)
System Name Wut?
Processor 3900X
Motherboard ASRock Taichi X570
Cooling Water
Memory 32GB GSkill CL16 3600mhz
Video Card(s) Vega 56
Storage 2 x AData XPG 8200 Pro 1TB
Display(s) 3440 x 1440
Case Thermaltake Tower 900
Power Supply Seasonic Prime Ultra Platinum
DX12 hasn't really offered anything interesting for the consumer.

I disagree. I can't remember the term or the specific details, so I am just going to call it asymmetric mGPU. That could have been a huge plus for the masses of budget-oriented builds.
 

bug

Joined
May 22, 2015
Messages
13,843 (3.95/day)
I disagree. I can't remember the term or the specific details so I am just going to call it asymmetric mGPU. That could have been a huge plus for the masses of budget oriented builds.
Arguing in favor of a technology that doesn't apply to more than 5% of PCs (and I'm being generous) only confirms what @notb said. I've said it since before DX12 was released: just because there's a lower-level alternative doesn't mean everyone will take advantage of it, because going lower level is not a universal solution (otherwise everything would have been written in C or ASM). But this is going to be a problem when the higher level (i.e. DX11) goes the way of the dodo.
 
Joined
Nov 29, 2016
Messages
671 (0.23/day)
System Name Unimatrix
Processor Intel i9-9900K @ 5.0GHz
Motherboard ASRock x390 Taichi Ultimate
Cooling Custom Loop
Memory 32GB GSkill TridentZ RGB DDR4 @ 3400MHz 14-14-14-32
Video Card(s) EVGA 2080 with Heatkiller Water Block
Storage 2x Samsung 960 Pro 512GB M.2 SSD in RAID 0, 1x WD Blue 1TB M.2 SSD
Display(s) Alienware 34" Ultrawide 3440x1440
Case CoolerMaster P500M Mesh
Power Supply Seasonic Prime Titanium 850W
Keyboard Corsair K75
Benchmark Scores Really Really High
"Effectively thus, a higher FPS in a game means a higher load on the tensor cores. The different GPUs in the NVIDIA GeForce RTX family have a different number of tensor cores, and thus limit how many frames/pixels can be processed in a unit time (say, one second). This variability in the number of tensor cores is likely the major reason for said implementation of DLSS. With their approach, it appears that NVIDIA wants to make sure that the tensor cores never become the bottleneck during gaming. "

Sorry, that doesn't make any sense. Why would they limit the 2080 Ti at 1080p or even 1440p then? The 2080 Ti has the tensor-core horsepower to run it without a bottleneck.
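For reference, a rough sketch of the arithmetic behind the quoted claim. Every constant below (the per-pixel network cost, the tensor throughput figures) is a placeholder assumption for illustration, not an NVIDIA number:

```python
# Back-of-the-envelope: DLSS tensor work scales with output pixels per frame
# times frame rate. Every constant here is an illustrative placeholder.

DLSS_OPS_PER_PIXEL = 4_000           # assumed network cost per output pixel
TENSOR_OPS_PER_SEC = {               # rough FP16 tensor throughput, ops/s
    "RTX 2060": 52e12,
    "RTX 2080 Ti": 114e12,
}

def tensor_load(width: int, height: int, fps: float, gpu: str) -> float:
    """Fraction of the GPU's tensor throughput consumed by DLSS at a given
    output resolution and frame rate, under the placeholder assumptions."""
    ops_per_second = width * height * fps * DLSS_OPS_PER_PIXEL
    return ops_per_second / TENSOR_OPS_PER_SEC[gpu]

# Doubling the frame rate doubles the tensor work at the same resolution:
print(f"{tensor_load(3840, 2160, 60, 'RTX 2060'):.1%}")      # 4K @ 60 fps
print(f"{tensor_load(1920, 1080, 144, 'RTX 2080 Ti'):.1%}")  # 1080p @ 144 fps
```

Under these made-up numbers the tensor cores are nowhere near saturated, which is essentially the objection raised above.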
 
Joined
Mar 10, 2015
Messages
3,984 (1.11/day)
Arguing in favor of a technology that doesn't apply to more than 5% of the PCs

So what is the percentage of users that can take advantage of RTX? I'll be generous and include RTX 2060 users. What compelling reason do developers have to move forward with DX12?
 

bug

Joined
May 22, 2015
Messages
13,843 (3.95/day)
So what is the percentage of users that can take advantage of RTX? I'll be generous and include RTX 2060 users. What compelling reason do developers have to move forward with DX12?
Changing the subject. I'm not biting.
 

VSG

Editor, Reviews & News
Staff member
Joined
Jul 1, 2014
Messages
3,695 (0.97/day)
@VSG, are you able to determine whether it is the driver imposing the limitation, the RTX API, or the game itself? If the game itself has all of this extra code baked in, that is very concerning. For example, what happens 20 years from now with new cards on old games? It breaks the decades-old paradigm of putting the options in the hands of the players. That doesn't sit right with me.

It's very likely the game profile for individual GPUs in the driver, based on everything seen so far, but I imagine the decision itself was made in conjunction with the game developers. So opening things up for newer GPUs will be more complicated.

One thing I just thought of... FF XV supports DLSS across all resolutions and gets high FPS with a 2080 Ti at 1080p... so... is it really an FPS limitation?? Can't say I buy that, considering...

Does it? Everything I saw only mentioned it working at 4K. I might be missing something here.
 
Joined
Feb 3, 2017
Messages
3,821 (1.33/day)
Processor Ryzen 7800X3D
Motherboard ROG STRIX B650E-F GAMING WIFI
Memory 2x16GB G.Skill Flare X5 DDR5-6000 CL36 (F5-6000J3636F16GX2-FX5)
Video Card(s) INNO3D GeForce RTX™ 4070 Ti SUPER TWIN X2
Storage 2TB Samsung 980 PRO, 4TB WD Black SN850X
Display(s) 42" LG C2 OLED, 27" ASUS PG279Q
Case Thermaltake Core P5
Power Supply Fractal Design Ion+ Platinum 760W
Mouse Corsair Dark Core RGB Pro SE
Keyboard Corsair K100 RGB
VR HMD HTC Vive Cosmos
Possibly. Seeing is believing... so far Turing at the shader level was not much of a change, despite taking up additional die space for cache. I think it's quite clear the CUDA part of it won't be getting much faster.
Compared to what? Consumers last saw Pascal, and Turing is a sizable boost over that. There is a wide range of games where Turing gives a considerable boost over Pascal, coming from a number of architectural changes. There are definitely ways to get it faster.
I disagree. I can't remember the term or the specific details so I am just going to call it asymmetric mGPU. That could have been a huge plus for the masses of budget oriented builds.
Asymmetric mGPU needs the engine/game developer to manage work distribution across the GPUs. That... hasn't really happened so far, and for obvious reasons.
Sorry, that doesn't make any sense. Why would they limit 2080Ti's at 1080 or even 1440 then? 2080Ti have the horsepower on the tensor cores to run it without bottleneck.
I am pretty sure there is a simple latency consideration there. DLSS will take some time, a couple/few ms, and needs to happen in the last stages of the render pipeline. The 2080 Ti has the horsepower to render a frame at these resolutions quickly enough that the DLSS latency would add too much to the frame render time. This is not a hardware limit as such; the limits are clearly set in the games themselves.
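A minimal sketch of that frame-time argument, with made-up numbers (the ~2 ms DLSS cost is an assumption, not a measured figure):

```python
# Toy frame-time budget: a fixed post-process cost (e.g. DLSS) matters more
# the shorter the frame already is. All numbers are illustrative.

def fps_with_upscaling(native_ms: float, upscaled_render_ms: float, dlss_ms: float = 2.0):
    """Compare native rendering against rendering at a lower internal
    resolution plus a fixed DLSS pass of `dlss_ms` milliseconds."""
    native_fps = 1000.0 / native_ms
    dlss_fps = 1000.0 / (upscaled_render_ms + dlss_ms)
    return native_fps, dlss_fps

# 4K-class frame: 16 ms native vs. ~9 ms internal render + 2 ms DLSS -> clear win
print(fps_with_upscaling(native_ms=16.0, upscaled_render_ms=9.0))   # (62.5, ~90.9)
# 1080p-class frame on a 2080 Ti: 5 ms native vs. 3.5 ms + 2 ms -> little or no win
print(fps_with_upscaling(native_ms=5.0, upscaled_render_ms=3.5))    # (200.0, ~181.8)
```

The fixed cost is a rounding error on a long 4K frame but eats a large slice of a short 1080p frame, which would explain per-resolution cutoffs without any hard hardware limit.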
 
Joined
Dec 31, 2009
Messages
19,371 (3.54/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
Does it? Everything I saw only mentioned it working at 4K. I might be missing something here.
It's me, I am wrong. The other theory has legs. FF XV is 4K only.

I still think the ability is focused where it is needed, however. Just about any card can get 60 FPS in the FF XV bench at 2560x1440 or lower, so why waste resources where they aren't needed?
 

bug

Joined
May 22, 2015
Messages
13,843 (3.95/day)
It's me, I am wrong. The other theory has legs. :)

I still think the ability is focused where it is needed, however. Just about any card can get 60 FPS in the FF XV bench at 2560x1440 or lower, so why waste resources where they aren't needed?
I believe the argument was that DLSS could let you reach 144fps at lower resolutions. But guess what? So can lowering the resolution ;)
 
Joined
Dec 31, 2009
Messages
19,371 (3.54/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
I believe the argument was that DLSS could let you reach 144fps at lower resolutions. But guess what? So can lowering the resolution ;)
Sure, but, that isn't the point here either.
 

bug

Joined
May 22, 2015
Messages
13,843 (3.95/day)
Sure, but, that isn't the point here either.
I meant, that wasn't actually a use case for DLSS. If you're not maxing out the card, you don't need DLSS.
 
Joined
Mar 10, 2015
Messages
3,984 (1.11/day)
Changing the subject. I'm not biting.

If you say so. The thread is now generally about why certain features are or are not implemented the way they are. Clearly, publishers do not want to invest where there are no returns. Apparently, NV doesn't want to invest where the returns are low right now.

If no one wants to develop for DX12, where do we go? Are we on DX11 for years to come because no publishers want to invest in future tech? Do we need to wait until there is a revolutionary tech that publishers can't ignore? Are RTX and DLSS those features? Doesn't seem so...
 
Joined
Dec 31, 2009
Messages
19,371 (3.54/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
I meant, that wasn't actually a use case for DLSS. If you're not maxing out the card, you don't need DLSS.
What do you mean? More FPS is more FPS. It has nothing to do with maxing out the card... they run at 100% capacity all the time unless there is a different bottleneck (like the CPU).
 

bug

Joined
May 22, 2015
Messages
13,843 (3.95/day)
What do you mean? More FPS is more FPS. It has nothing to do with maxing out the card...they run at 100% capacity all the time unless there is a different bottleneck (like CPU).
Meh, too tired. Maybe I'm not explaining it right.
 
Joined
Jun 28, 2016
Messages
3,595 (1.16/day)
I disagree. I can't remember the term or the specific details so I am just going to call it asymmetric mGPU. That could have been a huge plus for the masses of budget oriented builds.
I believe I wasn't clear in that post.
DX12 certainly is a more efficient API and should increase fps when used properly (we've already seen this is not always the case).

But this is not attractive to the customer. He doesn't care about a few fps.

RTRT is a totally different animal. It's qualitative rather than quantitative. It really changes how games look.
I mean: if we want games to be realistic in some distant future, they will have to utilize RTRT.
Does RTRT require 3D APIs to become more low-level? I don't know. But that's the direction DX12 went. And it's a good direction in general. It's just that DX12 is really unfriendly for the coders, so:
1) the cost is huge
and
2) this most likely leads to non-optimal code and takes away some of the gains.

But since there could actually be a demand for RTRT games, at least the cost issue could go away. And who knows... maybe next revision of DX12 will be much easier to live with.
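To make the "unfriendly for the coders" point concrete, here is a toy sketch of the difference in responsibility. The function and method names are invented for illustration; this is not real D3D11/D3D12 code:

```python
# Toy contrast between a high-level (DX11-style) and an explicit, low-level
# (DX12/Vulkan-style) API. All names are invented; only the division of
# responsibility between driver and application is the point.

# High-level: the driver tracks state, inserts barriers and paces frames.
def draw_high_level(device, mesh, texture):
    device.set_texture(texture)
    device.draw(mesh)

# Low-level: the application records commands, manages resource transitions
# and synchronizes the CPU and GPU itself -- more control, much more code.
def draw_low_level(queue, cmd_list, mesh, texture, fence, frame_index):
    cmd_list.reset()
    cmd_list.transition(texture, before="COPY_DEST", after="SHADER_READ")
    cmd_list.bind(texture.descriptor)
    cmd_list.draw(mesh)
    cmd_list.close()
    queue.execute(cmd_list)
    queue.signal(fence, frame_index)   # explicit CPU/GPU synchronization
    fence.wait_for(frame_index - 2)    # keep at most two frames in flight
```

Getting the second version wrong (a missed barrier, a stale fence) is exactly the kind of costly, non-optimal code the post above worries about.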
 
Joined
Mar 10, 2015
Messages
3,984 (1.11/day)
But this is not attractive to the customer. He doesn't care about a few fps.

Really? How many posts come along with people asking how to increase their fps? Or "X review had 82 fps but I only have 75 fps, what's wrong?" I would say the vast majority of 'gamers' chase fps regardless of whether it even benefits them.

It's just that DX12 is really unfriendly for the coders

Well, this is what consoles have, and most developers develop for consoles and port over to PC. In fact, the whole point of Mantle and Vulkan (which may or may not have pushed DX12 to what it is) was that developers wanted to be closer to the metal so they could get more performance. Is DX12 a bad implementation? I dunno, but since MS made it, I don't doubt it.

RTRT is a totally different animal. It's qualitative rather than quantitative. It really changes how games look.

It's also subjective. Screenshots of BFV look like hot trash (to me). It looks like anything that has a reflection is a mirror, and not everything that has a reflection is a mirror. I understand these were likely shortcuts to get the tech out there. But again, what incentive is there?

But since there could actually be a demand for RTRT games, at least the cost issue could go away. And who knows... maybe next revision of DX12 will be much easier to live with

I think devs will look at sales of the RTX series and see what market share is there. When the next gen of RTX cards is released, they will watch again. If a significant share of sales is not RTX *70 series and up, I can't see the cost outweighing the return.
 
Joined
Mar 18, 2015
Messages
2,963 (0.83/day)
Location
Long Island
Users with the RTX 2060, for example, can't even use DLSS at 4K and, more egregiously, owners of the RTX 2080 and 2080 Ti can not enjoy RTX and DLSS simultaneously at the most popular in-game resolution of 1920x1080, which would be useful to reach high FPS rates on 144 Hz monitors. Battlefield V has a similar, and yet even more divided system wherein the gaming flagship RTX 2080 Ti can not be used with RTX and DLSS at even 1440p, as seen in the second image below.

From my perspective, "2060" and "4K" should never be used in the same sentence... same for "2080 Ti" and "1080p"; is there a game in the test suite where a manually OC'd 2080 Ti can't do 100+ fps? I really can't see someone springing for well over $1,000 for a 2080 Ti ($1,300 for an AIB A series) and using it with a $250 144 Hz monitor. Yes, it's the most popular resolution, and it's typically used with the most popular cards, which are in the same budget range. The three most popular are the NVIDIA GeForce GTX 1060, GTX 1050 Ti and GTX 1050. I'm using a 144 Hz monitor... but turning on MBR drops that to 120. Are we really imagining an instance where someone lays out $1,300 for an AIB 2080 Ti and pairs it with a $250 monitor? To my eyes, that's like complaining that your new $165,000, 750 HP sports car does not have an "Eco mode".



Exactly, nvidia is pushing tech further; they can do this now since AMD is 2 years behind and has only 30% market share. The next gen of GeForces on 7 nm will bring the perf we desire, and tech will advance even further, so we will again desire more. AMD and their consoles are stagnant.

I think that market share estimate is a bit generous. Market share for nVidia over recent years has been reported at 70-80%, so it's oft assumed that AMD has the rest... but Intel is closing in on 11%, leaving AMD with just about 15%, though AMD has been inching up by about 0.1% in recent months, which is a good sign. If we take Intel out of the equation and just focus on discrete cards, it's about 83% to 17% at this time.
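The discrete-only split mentioned above is just the overall vendor shares renormalized with Intel removed; a quick sketch of that arithmetic, using the rough percentages from the post:

```python
# Renormalize overall GPU vendor share to a discrete-only split by dropping
# Intel (integrated graphics). Inputs are the rough figures quoted above
# (NVIDIA's 74% is simply 100% minus Intel's 11% and AMD's 15%).
overall = {"NVIDIA": 74.0, "Intel": 11.0, "AMD": 15.0}

discrete_total = sum(share for vendor, share in overall.items() if vendor != "Intel")
discrete = {vendor: round(100.0 * share / discrete_total, 1)
            for vendor, share in overall.items() if vendor != "Intel"}
print(discrete)  # {'NVIDIA': 83.1, 'AMD': 16.9} -- roughly the 83/17 split above
```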

The biggest gainers among the top 25 in the last month according to the Steam HW Survey were (by order of cards out there): the 4th-place 1070 (+0.18%), the entire R7 series (+0.19%), the 21st-place RX 580 (+0.15%) and the 24th-place GTX 650 (+0.15%). The biggest losers were the 1st-place 1060 (-0.52%) and the 14th-place GTX 950 (-0.19%). The 2070 doubled its market share to 0.33%... and the 2080 is up 50% to a 0.31% share, which kinda surprised me. The RX Vega (which includes the combined Vega 3, Vega 6, Vega 8, RX Vega 10, RX Vega 11, RX Vega 56, RX Vega 64, RX Vega 64 Liquid and, apparently, Radeon VII) made a nice first showing at 0.16%. Also interesting that the once-dominant 970 will likely drop below 3% next month.
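As an aside, relative changes like "doubled to 0.33%" or "up 50% to 0.31%" pin down the previous month's share directly; a tiny sketch using only the figures quoted above:

```python
# Work back from the current share and the relative growth to last month's share.
def previous_share(current_pct: float, relative_growth: float) -> float:
    return current_pct / (1.0 + relative_growth)

print(round(previous_share(0.33, 1.0), 3))  # RTX 2070 "doubled" -> ~0.165% last month
print(round(previous_share(0.31, 0.5), 3))  # RTX 2080 "up 50%"  -> ~0.207% last month
```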


Arguing in favor of a technology that doesn't apply to more than 5% of PCs (and I'm being generous) only confirms what @notb said. I've said it since before DX12 was released: just because there's a lower-level alternative doesn't mean everyone will take advantage of it, because going lower level is not a universal solution (otherwise everything would have been written in C or ASM). But this is going to be a problem when the higher level (i.e. DX11) goes the way of the dodo.

I thought about that for a bit. If we use 5% as the cutoff for discussion, then all we can talk about is technology that shows its benefits for:

1920 x 1080 = 60.48%
1366 x 768 = 14.02%

Even 2560 x 1440 is in use by only 3.97%... 2160p is completely off the table, as it is used by only 1.48%. But don't we all want to "move up" at some point in the near future?

The same arguments were used when the automobile arrived ("unreliable, it will never replace the horse!")... and with most other technologies. I'm old enough to remember when it was said, "Bah, who would ever actually buy a color TV?" Technology advances much like human development: "walking is stoopid, all I gotta do is whine and momma will carry me...". I sucked at baseball my 1st year; I got better (a little). I sucked at football my 1st year (got better each year I played). I sucked at basketball my 1st year, and was pretty good by college. Technology advances slowly; we find what works and then take it as far as it will go... eventually, our needs outgrow the limits of the tech in use and we need new tech. Where's Edison's carbon filament today? When any tech arrives, in its early iterations, expect it to be less efficient and less cost-effective, but it has room to grow. Look at IPS... when folks started thinking "Ooh, IPS has more accurate color, let's use it for gaming"... it turned out it wasn't a good idea by any stretch of the imagination.

But over time the tech advanced, AU Optronics screens came along and we had a brand-new gaming experience. Should IPS development have been shut down because less than 5% of folks were using it (at least properly and satisfactorily)? My son wanted an IPS screen for his photo work (which he spent $1,250 on), thinking it would be OK for gaming... 4 months later he had a 2nd (TN) monitor, as the response time and lag drove him nuts, and every time he went into a dark place he'd get killed because everyone and everything could see him long before he could see them, thanks to the IPS glow. Now, when not on one of those AU screens, it feels like I am eating oatmeal without any cinnamon, maple syrup, milk or anything else that provides any semblance of flavor.

But if we're going to say that what is being done by < 5% of gamers doesn't matter, then we are certainly saying that we should not be worrying about a limitation that does not allow a 2080 Ti owner to use a feature at 1080p. That's like buying a $500 tie to wear with a $99 suit.
 
Joined
Jul 23, 2016
Messages
5 (0.00/day)
The bargain card is the RTX 2060. If they allowed DLSS at 4K without RTX (all were hoping for this), then the 2060 would be perfectly capable of 4K gaming, so no one would buy the 2070, 2080 or 2080 Ti.

I guess at some point users will be able to unlock DLSS without the above limitations. The gain or loss in image quality is a VERY serious matter; promising performance gains while butchering image quality is stealing and fraud.
 

Emu

Joined
Jan 5, 2018
Messages
28 (0.01/day)
2080 Ti and 3440x1440 @ 100 Hz user here. DLSS seems to work fine at that resolution in BFV. It does make things a bit blurry though, which is really annoying. I haven't tested it at 100 Hz yet, because every time I update my driver it forgets that my monitor is 100 Hz until I reboot, which I had even forgotten about until I turned on the FPS meter in BFV and wondered why it was pegged at 60 fps.
 

FordGT90Concept

"I go fast!1!11!1!"
Joined
Oct 13, 2008
Messages
26,259 (4.44/day)
Location
IA, USA
System Name BY-2021
Processor AMD Ryzen 7 5800X (65w eco profile)
Motherboard MSI B550 Gaming Plus
Cooling Scythe Mugen (rev 5)
Memory 2 x Kingston HyperX DDR4-3200 32 GiB
Video Card(s) AMD Radeon RX 7900 XT
Storage Samsung 980 Pro, Seagate Exos X20 TB 7200 RPM
Display(s) Nixeus NX-EDG274K (3840x2160@144 DP) + Samsung SyncMaster 906BW (1440x900@60 HDMI-DVI)
Case Coolermaster HAF 932 w/ USB 3.0 5.25" bay + USB 3.2 (A+C) 3.5" bay
Audio Device(s) Realtek ALC1150, Micca OriGen+
Power Supply Enermax Platimax 850w
Mouse Nixeus REVEL-X
Keyboard Tesoro Excalibur
Software Windows 10 Home 64-bit
Benchmark Scores Faster than the tortoise; slower than the hare.
It's very likely the game profile for individual GPUs in the driver, based on everything seen so far, but the decision itself was made in conjunction with the game developers I imagine. So it will be more complicated opening things up for newer GPUs.
Isn't it the UI in the game that is flipping the settings, though? Or do you try to run both and somehow discover the game is lying?

If the UI itself is instantly changing settings based on the selected settings, then that is beyond what a game profile in a driver can usually do. The game has to be using an API of some kind that tests settings against the game profile. One could perhaps run the game and check its loaded modules to see if it is loading some NVIDIA-branded library specific to RTX to do that.
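A rough sketch of that module check (not something from the article): list the files mapped into a running game process and look for NVIDIA libraries. It assumes the third-party psutil package, the process name "bfv.exe" is only a placeholder, and elevated rights may be needed to inspect another process:

```python
# List NVIDIA DLLs mapped into a running game process, e.g. to see whether a
# DLSS library such as nvngx_dlss.dll is loaded. The process name is a
# placeholder; inspecting another process may require administrator rights.
import psutil

def nvidia_modules(process_name: str = "bfv.exe") -> list[str]:
    for proc in psutil.process_iter(["name"]):
        if (proc.info["name"] or "").lower() == process_name:
            # memory_maps() lists every file mapped into the process, DLLs included.
            return sorted({m.path for m in proc.memory_maps()
                           if m.path.lower().endswith(".dll") and "nv" in m.path.lower()})
    return []

if __name__ == "__main__":
    for path in nvidia_modules():
        print(path)
```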
 
Joined
Mar 23, 2005
Messages
4,092 (0.57/day)
Location
Ancient Greece, Acropolis (Time Lord)
System Name RiseZEN Gaming PC
Processor AMD Ryzen 7 5800X @ Auto
Motherboard Asus ROG Strix X570-E Gaming ATX Motherboard
Cooling Corsair H115i Elite Capellix AIO, 280mm Radiator, Dual RGB 140mm ML Series PWM Fans
Memory G.Skill TridentZ 64GB (4 x 16GB) DDR4 3200
Video Card(s) ASUS DUAL RX 6700 XT DUAL-RX6700XT-12G
Storage Corsair Force MP500 480GB M.2 & MP510 480GB M.2 - 2 x WD_BLACK 1TB SN850X NVMe 1TB
Display(s) ASUS ROG Strix 34” XG349C 144Hz 1440p + Asus ROG 27" MG278Q 144Hz WQHD 1440p
Case Corsair Obsidian Series 450D Gaming Case
Audio Device(s) SteelSeries 5Hv2 w/ Sound Blaster Z SE
Power Supply Corsair RM750x Power Supply
Mouse Razer Death-Adder + Viper 8K HZ Ambidextrous Gaming Mouse - Ergonomic Left Hand Edition
Keyboard Logitech G910 Orion Spectrum RGB Gaming Keyboard
Software Windows 11 Pro - 64-Bit Edition
Benchmark Scores I'm the Doctor, Doctor Who. The Definition of Gaming is PC Gaming...
2080 Ti and 3440x1440 @ 100 Hz user here. DLSS seems to work fine at that resolution in BFV. It does make things a bit blurry though, which is really annoying. I haven't tested it at 100 Hz yet, because every time I update my driver it forgets that my monitor is 100 Hz until I reboot, which I had even forgotten about until I turned on the FPS meter in BFV and wondered why it was pegged at 60 fps.
Keep it disabled. You are better off.
 