Sunday, June 29th 2025

NVIDIA DLSS Transformer Cuts VRAM Usage by 20%

NVIDIA has officially released its DLSS Transformer model as part of the 310.3.0 SDK, combining advanced upscaling with a substantial reduction in video memory requirements. This update directly addresses the needs of gamers on graphics cards with 8 GB of VRAM or less by trimming the upscaler's own VRAM usage by roughly 20%. Unlike previous DLSS versions, which relied on convolutional neural networks to infer missing pixels, the Transformer approach evaluates the relationships among all pixels in a frame and applies that understanding across multiple frames. Despite the increased sophistication of this method, NVIDIA's engineers have refined their memory management routines to keep resource demands lean while delivering sharper, more consistent visuals.
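NVIDIA has not published the network's internals, so as a rough illustration only: where a convolution blends each pixel with a small local neighborhood, self-attention scores every pixel against every other pixel and blends accordingly. The minimal sketch below, with made-up dimensions, a single attention head, and random stand-in weights, shows the mechanism, not the actual DLSS model:

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(pixels):
    """pixels: (N, D) array of N pixel/patch embeddings of width D."""
    d = pixels.shape[1]
    rng = np.random.default_rng(0)
    # Learned projections in a real model; random stand-ins here.
    w_q, w_k, w_v = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    q, k, v = pixels @ w_q, pixels @ w_k, pixels @ w_v
    # (N, N) score matrix: how strongly each pixel relates to every other one,
    # so the output for each pixel draws on the whole frame, not a local window.
    weights = softmax(q @ k.T / np.sqrt(d))
    return weights @ v

# 64 "pixels" (say, an 8x8 patch) with 16-dimensional features.
out = self_attention(np.random.default_rng(1).standard_normal((64, 16)))
print(out.shape)  # (64, 16)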

In testing at common resolutions, the improvements are striking: running DLSS at 1080p now consumes just 87.8 MB of VRAM, down from 106.9 MB in the prior SDK release, with similar reductions of around 20% at 1440p, 4K and even 8K. For those using GPUs with limited memory, these savings translate into smoother performance and the ability to enable richer graphics features without compromising image quality. Game developers and engine partners can expect to integrate the DLSS Transformer model into their titles and tools over the coming months, with early tests already highlighting crisper edges, more stable frame rates and consistently high upscaling performance.
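As a worked check of the only pair of figures quoted, the 1080p saving lands just under the headline number; the ~20% is presumably NVIDIA's round figure across all resolutions:

# Reduction implied by the article's 1080p numbers.
old_mb, new_mb = 106.9, 87.8
print(f"{old_mb} MB -> {new_mb} MB: {(old_mb - new_mb) / old_mb:.1%} saved")  # 17.9% saved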
Source: via VideoCardz

97 Comments on NVIDIA DLSS Transformer Cuts VRAM Usage by 20%

#1
Nostras
Although it's positive news, are we really that impressed by a reduction from 1.3% to 1.07% on an 8 GB card?
My takeaway is that DLSS consumes a lot less VRAM than you'd expect.
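For reference, those ratios do follow from the article's 1080p figures on an 8 GB card; a quick sketch, assuming the binary 8 GB = 8 × 1024 MB convention:

# DLSS footprint as a share of an 8 GB card, using the article's numbers.
vram_mb = 8 * 1024  # assumed binary convention, 8192 MB
print(f"before: {106.9 / vram_mb:.2%}")  # 1.30% -- the quoted 1.3%
print(f"after:  {87.8 / vram_mb:.2%}")   # 1.07%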
#2
agent_x007
"s" We are meant to not question the NV, we should buy more 50-series. Repeat after me "the more you buy, the more you save" ! "/s"
#3
Prima.Vera
How about the Fake Frame generator? It still requires a huge amount of VRAM.
#4
ymdhis
That's no excuse for their cards to come with 50% less VRAM.
#5
ZoneDymo
Solid if you don't want to give your customers more VRAM while asking for more and more money.
#6
THANATOS
ymdhis: That's no excuse for their cards to come with 50% less VRAM.
The refresh (Super) will see a 50% uplift thanks to 24 Gbit modules, but I'm not sure if every model will get a Super refresh.
#7
Hyderz
Looks like this year NVIDIA is fine-tuning its AI stuff... DLSS, frame gen... all to run better and use less VRAM across the board.
How much does it help? Hope we can get some sort of benchmark going.
#8
Macro Device
Hyderz: How much does it help? Hope we can get some sort of benchmark going.
Not much. There's very little NV can do to make their 8 GB cards magically outperform hypothetical 10 GB counterparts through driver and feature optimisations. Video games and tools are written by third parties who, more often than not, disregard NV's advancements entirely.

Of course an NV GPU with X GB of VRAM will run out of VRAM later than an AMD GPU with the exact same amount, because of better overflow handling and overall superior VRAM management (plus higher bandwidth if we compare the 9070 vs. 5070, or 9060 vs. 5060), but the difference isn't massive: 10 percent at the very most, and only in the worst-case scenarios for AMD.

3 GB VRAM modules were and still are a must for the 5060 series. 12 GB woulda made all the sense in the world... but alas.
#9
arbiter
Macro Device: Not much. There's very little NV can do to make their 8 GB cards magically outperform hypothetical 10 GB counterparts through driver and feature optimisations. Video games and tools are written by third parties who, more often than not, disregard NV's advancements entirely.

Of course an NV GPU with X GB of VRAM will run out of VRAM later than an AMD GPU with the exact same amount, because of better overflow handling and overall superior VRAM management (plus higher bandwidth if we compare the 9070 vs. 5070, or 9060 vs. 5060), but the difference isn't massive: 10 percent at the very most, and only in the worst-case scenarios for AMD.

3 GB VRAM modules were and still are a must for the 5060 series. 12 GB woulda made all the sense in the world... but alas.
Go watch side-by-side AMD vs. NVIDIA videos on YT. You will notice NVIDIA cards usually use around 10% less VRAM overall. So to say they can do very little... well, they can.
#10
phints
The DLSS 4 transformer model is fantastic, but I'm skipping the RTX 50 series; it's a joke. Hopefully the RTX 60 series has TSMC N3 or better, and 16 GB of VRAM at minimum for the 6070 tier, so it's actually an upgrade and not just a TDP increase.
#11
THANATOS
Macro Device: Of course an NV GPU with X GB of VRAM will run out of VRAM later than an AMD GPU with the exact same amount, because of better overflow handling and overall superior VRAM management (plus higher bandwidth if we compare the 9070 vs. 5070, or 9060 vs. 5060), but the difference isn't massive: 10 percent at the very most, and only in the worst-case scenarios for AMD.
I don't know about Nvidia's superiority in what you mentioned, but I know the RX 9060 XT has PCIe 5.0 x16 while the 5060 (Ti) has only PCIe 5.0 x8. This difference shows when VRAM is not enough.
Macro Device: 3 GB VRAM modules were and still are a must for the 5060 series. 12 GB woulda made all the sense in the world... but alas.
On desktop you still have the option of buying the 16 GB version, at least with the 5060 Ti.
In my opinion the worst situation is in laptops; that's where we especially need 24 Gbit modules.
#12
_roman_
Come on, 8 GiB VRAM is enough. /Sarcasm.

That article sounds like more compression. I have also experimented with file system compression; 20% is quite a usual rate for low-CPU file compression.
Edit: the common element is the 20% figure. You may check the title: NVIDIA DLSS Transformer Cuts VRAM Usage by 20%.

That feature will be paid for with GPU performance, or with system DRAM and CPU performance. There is no free stuff.
#13
Onasi
DLSS Transformer is literally the first time that I would say NV has an undisputed “killer feature”. It's genuinely amazing tech; even Performance presets look very good with it. It's finally arrived at the “basically free performance” level DLSS was touted as for all these years, and it's nice that NV keeps optimizing it. I absolutely don't condone releasing 8 GB cards in 2025, but the market is what it is, unfortunately.
_roman_: That article sounds like more compression. I have also experimented with file system compression; 20% is quite a usual rate for low-CPU file compression.
Nothing in common whatsoever, but do go off.
#14
8tyone
I might be wrong, but doesn't DLSS Transformer also lower performance by ~10%?
#15
Sithaer
Onasi: DLSS Transformer is literally the first time that I would say NV has an undisputed “killer feature”. It's genuinely amazing tech; even Performance presets look very good with it. It's finally arrived at the “basically free performance” level DLSS was touted as for all these years, and it's nice that NV keeps optimizing it.
Yup, to me DLSS Transformer is a game changer, and it was the main deciding factor in going with an Nvidia card instead of an AMD card when I upgraded recently.
Yes, FSR 4 is good, but the support is just lacking. Nowadays I use DLSS in pretty much every game that supports it, and I will continue to do so, because at this point it's like free performance, and at least I can get rid of that crappy TAA most games seem to have.
#16
Dr. Dro
8tyone: I might be wrong, but doesn't DLSS Transformer also lower performance by ~10%?
Nowhere near 10%, even on older GPUs. It does take longer than the CNN model to infer, but in the worst case you're looking at 3-5% on a low-end Ampere or Turing GPU with low tensor performance. TPU's showcase of the technology found 3% on the RTX 3060. It should be nearly indistinguishable (0-2%) on higher-end previous-generation GPUs, or on Ada and Blackwell architecture ones, even on lower-end models.
#17
Sithaer
Dr. Dro: Nowhere near 10%, even on older GPUs. It does take longer than the CNN model to infer, but in the worst case you're looking at 3-5% on a low-end Ampere or Turing GPU with low tensor performance. TPU's showcase of the technology found 3% on the RTX 3060. It should be nearly indistinguishable (0-2%) on higher-end previous-generation GPUs, or on Ada and Blackwell architecture ones, even on lower-end models.
Yeah, it was already more than fine on my previous 3060 Ti. I did notice a slight performance hit, but IMO it was still worth it over CNN Quality; heck, Transformer Balanced looks about the same as, if not better than, CNN Quality, at least in the games where I've tested them.
#18
Dr. Dro
Sithaer: Yeah, it was already more than fine on my previous 3060 Ti. I did notice a slight performance hit, but IMO it was still worth it over CNN Quality; heck, Transformer Balanced looks about the same as, if not better than, CNN Quality, at least in the games where I've tested them.
Yup. The best case scenario IMO, other than people looking for max image quality by using DLAA with preset K, would be laptops with limited VRAM (4-6 GB): the transformer model is enough of a visual upgrade that you can safely drop from Balanced to Performance and gain fps or lower power consumption while retaining good image quality. That's where it'll win the most IMO, and I expect Switch 2 games will use this tech a lot.
#19
Sithaer
Dr. Dro: Yup. The best case scenario IMO, other than people looking for max image quality by using DLAA with preset K, would be laptops with limited VRAM (4-6 GB): the transformer model is enough of a visual upgrade that you can safely drop from Balanced to Performance and gain fps or lower power consumption while retaining good image quality. That's where it'll win the most IMO, and I expect Switch 2 games will use this tech a lot.
DLAA was a bit too much for my 3060 Ti, especially with preset K, but that does indeed look the best; it's a clear upgrade over native, so for anyone who can handle it, that's the way to go.
If that tech ends up in a handheld, that should be really nice for sure.
Currently I can do PT Cyberpunk with DLSS Transformer Balanced + Ray Reconstruction, with a mix of high and ultra settings on top, and that way it's a solid 60+ FPS, which is a nice base for frame gen if I want to use it, and the game plays and looks really damn good. Most likely that's how I'm gonna replay the game with Phantom Liberty once I get to it :laugh:
#20
Dr. Dro
Sithaer: DLAA was a bit too much for my 3060 Ti, especially with preset K, but that does indeed look the best; it's a clear upgrade over native, so for anyone who can handle it, that's the way to go.
If that tech ends up in a handheld, that should be really nice for sure.
Currently I can do PT Cyberpunk with DLSS Transformer Balanced + Ray Reconstruction, with a mix of high and ultra settings on top, and that way it's a solid 60+ FPS, which is a nice base for frame gen if I want to use it, and the game plays and looks really damn good. Most likely that's how I'm gonna replay the game with Phantom Liberty once I get to it :laugh:
Your new 5070 will do great there, that's for sure. And you can always enjoy most games with DLAA preset K, as long as you can replace the DLSS DLL; the new DLLs will work on any game that supports DLSS 3. The only one I'm aware of where you cannot upgrade the DLL is Honkai Star Rail, because it'll fail the file integrity check, rendering the game unplayable. It has DLSS 3.7.10 though, so it still looks pretty good. It's unlikely miHoYo will keep it updated, though, considering they're as lazy as it gets with the PC versions of their games. I love their games, but their PC versions clearly aren't their highest priority.
#21
RogueSix
I don't get the whining about any alleged lack of VRAM. If a certain NVIDIA card does not have as much VRAM as you need then... I don't know... don't buy it, I guess? Or is anyone holding a gun to your head and forcing you to buy a card with what you perceive to be too little VRAM?

Alternatively, buy an AMD card since they seem to be giving away VRAM for free in spades, right?

Life could be so simple but no... here we go with another completely unnecessary whinefest :D .
#22
Macro Device
arbiter: Go watch side-by-side AMD vs. NVIDIA videos on YT. You will notice NVIDIA cards usually use around 10% less VRAM overall. So to say they can do very little... well, they can.
This is exactly what I said, so I don't know why you posted it in an argumentative manner...
THANATOS: I know the RX 9060 XT has PCIe 5.0 x16 while the 5060 (Ti) has only PCIe 5.0 x8.
For 8 GB models it actually matters, but not THAT significantly. Sure, a cut-down NV card is worse in this regard, but not drastically. For 16 gig models, however, running on PCIe 4.0/5.0 guarantees you'll see no meaningful impact on either GPU. Maybe 1 to 3 percent here and there will be lost, but is it really a big deal?
THANATOS: On desktop you still have the option of buying the 16 GB version, at least with the 5060 Ti.
Sure, but it's ridiculously expensive. And I've yet to figure out why anyone in their right mind would seriously consider buying a laptop for gaming.
#23
Sithaer
Dr. Dro: Your new 5070 will do great there, that's for sure. And you can always enjoy most games with DLAA preset K, as long as you can replace the DLSS DLL; the new DLLs will work on any game that supports DLSS 3. The only one I'm aware of where you cannot upgrade the DLL is Honkai Star Rail, because it'll fail the file integrity check, rendering the game unplayable. It has DLSS 3.7.10 though, so it still looks pretty good. It's unlikely miHoYo will keep it updated, though, considering they're as lazy as it gets with the PC versions of their games. I love their games, but their PC versions clearly aren't their highest priority.
Currently I'm playing Wuthering Waves with its latest update, RT on max, and the latest preset (preset K) set in the App, and it seems to work pretty well. The game has a bottleneck somewhere else anyway, so I have the settings cranked up, since it's not my GPU that's limiting it. (Supposedly it's an issue with this update, since it was fine in the other regions.)
Lately I don't do DLL swaps and only use the Nvidia App as long as the game is supported there.
#24
InVasMani
There is a lot of room for it to iterate and get better and better over time with the aid of AI.
#25
blahoink01
I'm still trying to figure out why I would want to run DLSS at all. The frequency of NV's black screen of death increases dramatically, and image quality is most certainly worse than native. I get more than acceptable framerates at native res, so can someone help me understand why any of this matters? I have to be missing some key data point.