Monday, July 5th 2021

TechPowerUp Hosts NVIDIA DLSS Client Libraries

TechPowerUp is hosting a repository of NVIDIA DLSS client libraries that game developers pack with their DLSS-compatible games. This library goes by the file name "nvngx_dlss.dll." Recent discussions on tech forums and Reddit revealed that the DLSS libraries, which are usually located in the game's installation folders, are user-swappable, meaning that one can replace the DLSS library file with a different version that enables a different set of features, or potentially even brings image quality or performance improvements. So far people have shared these files through sites like Mega, which of course introduces a malware risk. That's why we decided to host these files ourselves in our Downloads section. We currently have 23 versions, all hand-verified to be unmodified originals, ranging from 1.0.0 to 2.2.10. Click the "show older versions" link below the "Download" button to list the other versions. We will keep adding to the collection as we come across more of these. Have something we don't? Comment below.
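For anyone trying the swap themselves, here is a minimal sketch of the process (Python, assuming a hypothetical install folder and download location; the file name nvngx_dlss.dll is the real one, but some games keep it in a subfolder rather than the install root):

```python
# Minimal sketch: back up the game's bundled DLSS DLL and drop in another version.
# The game folder and the downloaded DLL path below are hypothetical placeholders.
from pathlib import Path
import shutil

game_dir = Path(r"C:\Games\SomeDLSSGame")               # hypothetical install folder
new_dll = Path(r"C:\Downloads\nvngx_dlss_2.2.10.dll")   # DLL downloaded separately

old_dll = game_dir / "nvngx_dlss.dll"
backup = game_dir / "nvngx_dlss.dll.bak"

if old_dll.exists() and not backup.exists():
    shutil.copy2(old_dll, backup)   # keep the original so the swap is reversible
shutil.copy2(new_dll, old_dll)      # replace the bundled library with the newer one
print(f"Replaced {old_dll} (backup at {backup})")
```

Restoring the original is just the reverse copy from the .bak file, which is why the backup step matters.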

DOWNLOAD: NVIDIA DLSS DLL

33 Comments on TechPowerUp Hosts NVIDIA DLSS Client Libraries

#2
nguyen
Been wondering if 2.2.10 brings any improvement vs 2.2.6
#3
Camm
For those wondering, this doesn't work on DLSS 1.0 games (BFV for example just greys out DLSS).
#4
shk021051
I don't get why Nvidia doesn't warn devs to update their games with the new dlss version
#5
eidairaman1
The Exiled Airman
If AMD releases FSR DLLs etc., will TPU host them as well?
#6
W1zzard
eidairaman1: If AMD releases FSR DLLs etc., will TPU host them as well?
Absolutely, but from what I understand (and have seen in the FSR titles I tested), FSR doesn't use a separate DLL.

The FSR shaders just get integrated into the game's files like its own shaders.
#7
_Flare
W1zzard: Absolutely, but from what I understand (and have seen in the FSR titles I tested), FSR doesn't use a separate DLL.

The FSR shaders just get integrated into the game's files like its own shaders.
github.com/NarutoUA/gta5_fsr
He just provides d3d11.dll and gta5_fsr.ini, that's it. Yes, he did modify those to fit GTA V.
#8
TheoneandonlyMrK
Nóooooo, waaait, yes please.
And thank you,
made testing later versions easier.
#9
InVasMani
TPU is just trying to machine-learn how to let users apply their own inference as to which DLSS configuration is best. Personally, I felt the R6S configuration was the best of the three I saw recently.
#10
Star_Hunter
Is there a record of any game dev actually updating the DLSS version post release?

No one would even need to update the DLL manually if the developers did this via the normal patching process. Obviously there is time needed to determine whether a newer version brings a benefit (which has already been proven in multiple examples) and then to validate that nothing broke due to API changes. It just seems that Nvidia needs to push developers to keep their DLSS versions up to date.
#11
wolf
Better Than Native
Star_Hunter: Is there a record of any game dev actually updating the DLSS version post release?
Control went from 1.0 to '1.9' to 2.1, and Metro Exodus shipped with 1.0 while the new Enhanced Edition (which anyone who bought the original also gets) is on 2.1.

Although I agree: if you're shipping a DLSS 2.0 game, those devs should 100% be spending the minimal amount of time needed to inject the new DLL, test it and patch the game with it - IF this is allowed, after all.
#12
Mussels
Freshwater Moderator
Big round of applause here, once I saw that people could just update the .dll I knew it'd be hosted somewhere

aquinus must be having a bad day, this got weird
#13
InVasMani
If DLSS's machine learning AI wasn't sucking a bit at its inference, it would do this automatically anyway. So if we're to blame or fault anyone, perhaps it should be Nvidia, for marketing that is a bit misleading, overhyping while under-delivering.

This begs the question of why this isn't also part of the inference process of DLSS across games, where it would check several DLSS DLL configurations and pick the most optimal one for the game. If anything, it's a hands-off opportunity for Nvidia to do better with DLSS until they can start doing what I mentioned, because they should be anyway.

Nvidia would be playing with fire by going after TPU over this matter, and I'm pretty certain they're fully aware of it. Worst case, what is Nvidia going to do: send them a cease and desist? TPU writes an article about it and maybe or maybe not holds a bit of a grudge, and even if TPU doesn't, many in the tech world certainly might have a fair degree of resentment towards Nvidia over the matter.

I can see where maybe it could be considered a grey area, but is it something Nvidia wants to concern itself with at this point in time? Instead of making DLSS more compelling, they'd effectively be doing the exact opposite retroactively. It might stir developer backlash, pushing them to flip to FSR, and gamers to switch over to AMD; it would almost be seppuku for Nvidia to do so. I don't think the tech community as a whole would stand by Nvidia doing so.
#14
W1zzard
I moved all the "this is piracy" comments to a separate thread, where you can continue the discussion.

This thread is for what difference it actually makes, technical details, etc.
#15
Midland Dog
W1zzard: I moved all the "this is piracy" comments to a separate thread, where you can continue the discussion.

This thread is for what difference it actually makes, technical details, etc.
you get it, respect
#16
wolf
Better Than Native
I've not tried 2.2.10, but 2.2.6 in Metro EE and Doom Eternal seems fine; can't say I saw much if any ghosting before, especially in DOOM... ripping and tearing at a locked 140 fps on Nightmare difficulty, not a lot of time to nitpick!
#17
kiddagoat
I wonder if something like this would trigger anti-cheat such as BattlEye in Rainbow Six: Siege, or in Call of Duty?
#18
W1zzard
kiddagoat: I wonder if something like this would trigger anti-cheat such as BattlEye in Rainbow Six: Siege, or in Call of Duty?
It depends. If the anti-cheat checks whether the file is digitally signed by NVIDIA, or is a "known" DLSS DLL version, then it should pass. If it looks for that one specific file, then it won't pass.
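To illustrate the "known version" case, here is a rough sketch of how an anti-cheat (or a cautious user) might compare a DLL's SHA-256 hash against a whitelist of known-good hashes; the hash values and path below are placeholders, not the real checksums of any DLSS release:

```python
# Sketch of a hash-whitelist check, similar in spirit to how an anti-cheat might
# accept "known" DLL versions. Hashes and path are hypothetical placeholders.
import hashlib
from pathlib import Path

KNOWN_DLSS_HASHES = {
    "0000000000000000000000000000000000000000000000000000000000000000",  # e.g. 2.2.6
    "1111111111111111111111111111111111111111111111111111111111111111",  # e.g. 2.2.10
}

def sha256_of(path: Path) -> str:
    # hash the file in 1 MB chunks so large DLLs don't need to fit in memory at once
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

dll = Path(r"C:\Games\SomeDLSSGame\nvngx_dlss.dll")  # hypothetical game path
print("known version" if sha256_of(dll) in KNOWN_DLSS_HASHES else "unknown version")
```

A signature check (verifying the NVIDIA Authenticode signature) would accept any genuine NVIDIA build, while a strict hash check like the one above only accepts the exact versions on its list.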
#19
Sora
W1zzard: I moved all the "this is piracy" comments to a separate thread, where you can continue the discussion.

This thread is for what difference it actually makes, technical details, etc.
It's not piracy, but the binaries are signed NVIDIA modules provided by NVIDIA, so you'll get a nice threatening letter from them if they actually care.
Doubt it, though.

The main issue is going to be that a lot of the improvements people see are simply placebo. The DLSS files are just runtimes, with the game itself setting the values; newer versions don't contain any new inferenced data for a game to use, just new features that games can't use because they don't know they exist, or bug fixes (or new bugs).
#20
W1zzard
Sora: newer versions don't contain any new inferenced data for a game
DLSS 2.0 doesn't use per-game data
#21
Sora
W1zzard: DLSS 2.0 doesn't use per-game data
That's incorrect.

What 2.0 doesn't require is for the inferencing model to be altered per game: all 2.0 games are inferenced through the same generalized model with the same retained datasets, so every new game that goes through the model further contributes to games down the line, progressively improving the technology.
Nvidia: DLSS 2.0 has two primary inputs into the AI network:
  1. Low resolution, aliased images rendered by the game engine
  2. Low resolution, motion vectors from the same images -- also generated by the game engine

[Figure: The NVIDIA DLSS 2.0 Architecture]

A special type of AI network, called a convolutional autoencoder, takes the low resolution current frame, and the high resolution previous frame, to determine on a pixel-by-pixel basis how to generate a higher quality current frame.

During the training process, the output image is compared to an offline rendered, ultra-high quality 16K reference image, and the difference is communicated back into the network so that it can continue to learn and improve its results. This process is repeated tens of thousands of times on the supercomputer until the network reliably outputs high quality, high resolution images.

Once the network is trained, NGX delivers the AI model to your GeForce RTX PC or laptop via Game Ready Drivers and OTA updates. With Turing’s Tensor Cores delivering up to 110 teraflops of dedicated AI horsepower, the DLSS network can be run in real-time simultaneously with an intensive 3D game. This simply wasn’t possible before Turing and Tensor Cores.
Changing the DLL bundled with the game doesn't add to the known inferencing data; that is only possible via NGX binary updates, which can be published through GFE or as a driver update.
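As a rough illustration of the data flow described in the NVIDIA quote above, here is a toy convolutional autoencoder sketch in Python/PyTorch that takes a low-resolution aliased frame, its motion vectors, and the previous high-resolution output, and produces an upscaled frame. This is emphatically not NVIDIA's actual network; the layer sizes, structure, and names are made up purely to show the shape of the inputs and outputs:

```python
# Toy sketch of a DLSS-2.0-style convolutional autoencoder. NOT NVIDIA's model;
# everything here is a simplified stand-in to illustrate the inputs and outputs.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyDLSSAutoencoder(nn.Module):
    def __init__(self, scale: int = 2):
        super().__init__()
        self.scale = scale
        # encoder compresses the stacked inputs, decoder expands back to an image
        self.encoder = nn.Sequential(
            nn.Conv2d(3 + 2 + 3, 32, 3, padding=1), nn.ReLU(),   # RGB + motion vectors + previous frame
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, lowres_rgb, motion_vectors, prev_highres):
        # bring the previous high-res frame down to the low-res grid so all
        # inputs can be stacked channel-wise
        prev_small = F.interpolate(prev_highres, size=lowres_rgb.shape[-2:],
                                   mode="bilinear", align_corners=False)
        x = torch.cat([lowres_rgb, motion_vectors, prev_small], dim=1)
        residual = self.decoder(self.encoder(x))
        # naively upscale the input and add the learned per-pixel detail
        upscaled = F.interpolate(lowres_rgb, scale_factor=self.scale,
                                 mode="bilinear", align_corners=False)
        detail = F.interpolate(residual, scale_factor=self.scale,
                               mode="bilinear", align_corners=False)
        return upscaled + detail

# usage with dummy 540p -> 1080p tensors
net = ToyDLSSAutoencoder()
low = torch.rand(1, 3, 540, 960)    # low-res aliased frame from the game engine
mv = torch.rand(1, 2, 540, 960)     # per-pixel motion vectors
prev = torch.rand(1, 3, 1080, 1920) # previous high-res output
out = net(low, mv, prev)            # -> (1, 3, 1080, 1920)
```

The point of the sketch is that the trained weights live inside the network; swapping the DLL ships a new copy of the runtime and its weights, but nothing in the game itself feeds new training data back to NVIDIA.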
#22
W1zzard
Sora: are inferenced through the same generalized model with the same retained datasets, so every new game that goes through the model further contributes to games down the line, progressively improving the technology.
Source? I'm not aware that NVIDIA trains DLSS 2.0 with "every game that supports DLSS"
#23
Sora
W1zzard: Source? I'm not aware that NVIDIA trains DLSS 2.0 with "every game that supports DLSS"
It was in the announcement slides:
  • One Network for All Games and Applications - DLSS offers a generalized AI network that removes the need to train for each specific game or application.
The DLSS 1.0 model started each instance of each game at each resolution completely new, without any former inferencing information at hand.
Everything that 2.0 sees, it remembers and stores for potential future use; low and high resolution stills from the DLSS 2.0 game are still introduced into the autoencoder, so it's still training on specific game data.
This removes and replaces the wrongly assumed deterministic model that 1.0 had, which caused significant motion artifacts and blur.

That's also a good part of why 2.0 training is faster than 1.0, and why it's capable of introducing detail that was never in the original image.
#24
W1zzard
Sora: Everything that 2.0 sees, it remembers and stores for potential future use; low and high resolution stills from the DLSS 2.0 game are still introduced into the autoencoder, so it's still training on specific game data.
Are you claiming DLSS learns while running on your or my computer? This is completely wrong

NVIDIA trains (creates) the network in their labs and ships the network to us in the DLSS DLL + evolutionary workarounds, tweaks and fixes
#25
Sora
W1zzard: Are you claiming DLSS learns while running on your or my computer? This is completely wrong
No, that's exactly the opposite of what I said.
W1zzard: NVIDIA trains (creates) the network in their labs and ships the network to us in the DLSS DLL + evolutionary workarounds, tweaks and fixes
The DLSS 2.0 NGX Autoencoder is a persistent neural network.