
NVIDIA's DLSS 2.0 is Easily Integrated into Unreal Engine 4.26 and Gives Big Performance Gains

Joined
Sep 1, 2020
Messages
2,304 (1.51/day)
Location
Bulgaria
RT has been used in the film industry for a long, long time already. The performance gain per watt over ten years or so is measured in tens of thousands of percent.
Aren't cinema and movies off topic for this discussion?
 
Joined
May 8, 2019
Messages
132 (0.07/day)
Somewhere I read system requirements for 1080p @ 60 fps gaming with full ray tracing enabled (if the game supports it): at least 32 GB of VRAM (64 GB recommended) and around 60 teraflops of FP32 performance... If that is true, 4K-or-higher gaming with full RT will never happen without fakery like DLSS, FidelityFX, or similar hacks. The hardware is too weak now and won't be enough in the future either. Never!
You're like Bill Gates in 1982!
 
Joined
Oct 15, 2019
Messages
584 (0.32/day)
Aren't cinema and movies off topic for this discussion?
Raytracing is the same for both. There is no special "gaming only" raytracing. It has now popped up in gaming because the performance is finally good enough, not because someone found new gaming-specific algorithms or other bullcrap.
 
Joined
Sep 28, 2012
Messages
976 (0.22/day)
System Name Poor Man's PC
Processor AMD Ryzen 7 7800X3D
Motherboard MSI B650M Mortar WiFi
Cooling Thermalright Phantom Spirit 120 with Arctic P12 Max fan
Memory 32GB GSkill Flare X5 DDR5 6000Mhz
Video Card(s) XFX Merc 310 Radeon RX 7900 XT
Storage XPG Gammix S70 Blade 2TB + 8 TB WD Ultrastar DC HC320
Display(s) Xiaomi G Pro 27i MiniLED
Case Asus A21 Case
Audio Device(s) MPow Air Wireless + Mi Soundbar
Power Supply Enermax Revolution DF 650W Gold
Mouse Logitech MX Anywhere 3
Keyboard Logitech Pro X + Kailh box heavy pale blue switch + Durock stabilizers
VR HMD Meta Quest 2
Benchmark Scores Who need bench when everything already fast?
Well, the baseline for the result was the TXAA sampling technique used in the game demo.

First of all, TXAA is a low bar to begin with: a temporal anti-aliasing method that takes a "blurry" approach. I don't understand how people describe image fidelity these days. You've got shitty temporal TXAA, a motion blur option in your game, DLSS masked downscaling on your GPU, another motion compensation pass in your monitor, and you dare to say "everyone was impressed by the quality"? What? When was the last time they had an eye check-up?
 
Joined
Oct 28, 2012
Messages
1,184 (0.27/day)
Processor AMD Ryzen 3700x
Motherboard asus ROG Strix B-350I Gaming
Cooling Deepcool LS520 SE
Memory crucial ballistix 32Gb DDR4
Video Card(s) RTX 3070 FE
Storage WD sn550 1To/WD ssd sata 1To /WD black sn750 1To/Seagate 2To/WD book 4 To back-up
Display(s) LG GL850
Case Dan A4 H2O
Audio Device(s) sennheiser HD58X
Power Supply Corsair SF600
Mouse MX master 3
Keyboard Master Key Mx
Software win 11 pro
Make it non proprietary or you can stick it right back up in that scalper's anus.
At this point, I feel like this is something that should be implemented inside DX12. I'm starting to lose hope about open source; it's always the same thing: "open source is great" on one side, "open source is hard, you get better support with proprietary stuff" on the other side.
CUDA is a prime example of that. OpenCL was Apple's baby, made when Jobs was still there; the "closed garden" company made it open. Five years later they threw in the towel and decided to make Metal, because OpenCL failed to gain enough traction, while CUDA on macOS was a thing. If you are using an Nvidia GPU, you can be assured that any kind of GPU-accelerated app will work with your hardware. Are you using AMD? Well, good luck; you are going to be more limited in your choice of apps.

If DirectCompute were a thing broader than just gaming, we could have enjoyed truly mainstream, platform-agnostic GPGPU. The good news is that AMD is working with Microsoft on their own "DLSS", so we can expect a solution that won't discriminate. We just have to hope it's going to be competitive with Nvidia's offering, so that devs won't be pulled apart by being forced to implement both.
AMD has an answer to DLSS, DirectML Super Resolution | OC3D News (overclock3d.net)
 
Raytracing is the same for both. There is no special "gaming only" raytracing. It has now popped up in gaming because the performance is finally good enough, not because someone found new gaming-specific algorithms or other bullcrap.
Effects in movies are outside this discussion. We consume movies as ready-to-play content; the work that makes them is professional and has nothing to do with the performance of our graphics cards.
 
Effects in movies are outside this discussion. We consume movies as ready-to-play content; the work that makes them is professional and has nothing to do with the performance of our graphics cards.
For God's sake, just stop with the nonsense. Raytracing in movies is very much related to raytracing in games: same algorithms, same hardware.
Edit: why do you think that render-to-disk and render-to-screen are so different that they cannot be compared performance-wise?

I'm willing to bet any amount of money against your claim that raytracing will NEVER be viable at 4K resolution. How anyone can be so pessimistic about this is beyond me. The only way it won't become reality is if WWIII wipes the human race out of existence.
 
Joined
Sep 17, 2014
Messages
22,270 (6.02/day)
Location
The Washing Machine
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling Thermalright Peerless Assassin
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
RT has been used in the film industry for a long, long time already. The performance gain per watt over ten years or so is measured in tens of thousands of percent.
And stuck at 24 FPS ever since :)

How is this a metric for realtime calculations? The baseline simply isn't there, because you can't really tell what percentage of the overall load consists of RT load.

It's anyone's guess, so Huang can tell us whatever he likes. You can choose to believe it or not, and in both cases you'd be right. The ultimate FUD, really; this is why they wanted to pre-empt AMD with it. Not to make tons of games with RT... but to set the baseline as blurry and uncontrolled as possible. We now live with the perception that 'Nvidia does RT better'... based on what, exactly? :)

Remember... TEN GIGARAYS, boys. We didn't have a clue then and we still don't in 2021. All we know is that perf/dollar took a nosedive since that very speech, and even before the current pandemic.
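For scale, some back-of-the-envelope arithmetic (mine, not a vendor figure): even taking the ten-gigarays claim at face value, it buys surprisingly few rays per pixel at 4K 60 fps.

```python
# Back-of-the-envelope: what a quoted ray throughput buys per pixel.
# The 10 gigarays/s number is the marketing claim discussed above;
# the rest is plain arithmetic.
rays_per_second = 10e9          # claimed throughput
width, height = 3840, 2160      # 4K
fps = 60                        # target frame rate

rays_per_pixel = rays_per_second / (width * height * fps)
print(f"{rays_per_pixel:.1f} rays per pixel per frame")  # ~20.1
```

Offline path tracers commonly spend hundreds to thousands of samples per pixel on a film frame, which is why real-time RT leans so heavily on denoising and hybrid rasterization.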

For God's sake, just stop with the nonsense. Raytracing in movies is very much related to raytracing in games: same algorithms, same hardware.
Edit: why do you think that render-to-disk and render-to-screen are so different that they cannot be compared performance-wise?
Precooked versus realtime. Are you that dense or that stupid? No you don't get to pick anything else. Holy crap, son.
 

Deleted member 185088

Guest
Why is it so difficult for nVidia to implement this technology if it's so good? It should be a toggle like AA in the control panel: you select DLSS and get upscaling in most games.
I still feel they are wasting silicon on tensor cores, as most of the time they do absolutely nothing; it would've been better to have more CUDA cores (or whatever they are called now).
 
For God's sake, just stop with the nonsense. Raytracing in movies is very much related to raytracing in games: same algorithms, same hardware.
Movies are easy to play in 4K, and now in 8K, with an iGPU. They are ready-to-play content, not raw material. Playing (watching) movies and playing games are not the same and do not use the same paths and resources in the graphics hardware.
 
Why is it so difficult for nVidia to implement this technology if it's so good? It should be a toggle like AA in the control panel: you select DLSS and get upscaling in most games.
I still feel they are wasting silicon on tensor cores, as most of the time they do absolutely nothing; it would've been better to have more CUDA cores (or whatever they are called now).
It's as difficult as G-Sync.

Give it time.
 
Why is it so difficult for nVidia to implement this technology if it's so good? It should be a toggle like AA in the control panel: you select DLSS and get upscaling in most games.
I still feel they are wasting silicon on tensor cores, as most of the time they do absolutely nothing; it would've been better to have more CUDA cores (or whatever they are called now).
AMD is about to join them with "AI on silicon". Going forward, we can forget about GPUs being exclusively about gaming. Software using AI won't appear if the hardware isn't there. With Windows being fragmented like crazy, we are probably going to be behind macOS in that regard for a while, but it's going to happen.

It's weird how phones somehow became more bleeding-edge than the PC for that kind of thing.
 
How is this a metric for realtime calculations? The baseline simply isn't there, because you can't really tell what percentage of the overall load consists of RT load.
OMFG. It's a metric for computational performance: rays cast per second, as part of the RENDERING of the film in the RENDER FARM of the production company.
Movies are easy to play in 4K, and now in 8K, with an iGPU. They are ready-to-play content, not raw material. Playing (watching) movies and playing games are not the same and do not use the same paths and resources in the graphics hardware.
But how easy is it to render those films in the first place? Please don’t associate h264 decoding with raytracing, for the love of god.
 

W1zzard

Administrator
Staff member
Joined
May 14, 2004
Messages
27,685 (3.70/day)
Processor Ryzen 7 5700X
Memory 48 GB
Video Card(s) RTX 4080
Storage 2x HDD RAID 1, 3x M.2 NVMe
Display(s) 30" 2560x1600 + 19" 1280x1024
Software Windows 10 64-bit
It should be a toggle like AA in the control panel: you select DLSS and get upscaling in most games.
That is exactly how the Unreal Engine integration works. Plus you get a dial for the amount of sharpening applied
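For readers wondering what that sharpening dial actually does: in most post-process pipelines it is an unsharp-mask-style filter that adds back a scaled copy of the high-frequency detail. A minimal sketch of the idea (my own illustration, not NVIDIA's actual filter):

```python
import numpy as np

def box_blur(img, radius=1):
    """Tiny separable box blur used as the low-pass for unsharp masking."""
    size = 2 * radius + 1
    kernel = np.ones(size) / size
    blurred = np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, img)
    return np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, blurred)

def sharpen(img, amount=0.5):
    """Unsharp mask: add back scaled high-frequency detail.
    `amount` plays the role of the sharpening dial (0 = off)."""
    detail = img - box_blur(img)
    return np.clip(img + amount * detail, 0.0, 1.0)
```

At `amount=0` the image passes through unchanged; turning the dial up amplifies edges (and, eventually, noise).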
 
Joined
Feb 11, 2009
Messages
5,535 (0.96/day)
System Name Cyberline
Processor Intel Core i7 2600k -> 12600k
Motherboard Asus P8P67 LE Rev 3.0 -> Gigabyte Z690 Auros Elite DDR4
Cooling Tuniq Tower 120 -> Custom Watercoolingloop
Memory Corsair (4x2) 8gb 1600mhz -> Crucial (8x2) 16gb 3600mhz
Video Card(s) AMD RX480 -> RX7800XT
Storage Samsung 750 Evo 250gb SSD + WD 1tb x 2 + WD 2tb -> 2tb MVMe SSD
Display(s) Philips 32inch LPF5605H (television) -> Dell S3220DGF
Case antec 600 -> Thermaltake Tenor HTCP case
Audio Device(s) Focusrite 2i4 (USB)
Power Supply Seasonic 620watt 80+ Platinum
Mouse Elecom EX-G
Keyboard Rapoo V700
Software Windows 10 Pro 64bit
Why is it so difficult for nVidia to implement this technology if it's so good? It should be a toggle like AA in the control panel: you select DLSS and get upscaling in most games.
I still feel they are wasting silicon on tensor cores, as most of the time they do absolutely nothing; it would've been better to have more CUDA cores (or whatever they are called now).

ahh remember when games supported proper AA and not this temporal and FXAA crap?
 
Joined
Feb 23, 2019
Messages
6,015 (2.89/day)
Location
Poland
Processor Ryzen 7 5800X3D
Motherboard Gigabyte X570 Aorus Elite
Cooling Thermalright Phantom Spirit 120 SE
Memory 2x16 GB Crucial Ballistix 3600 CL16 Rev E @ 3800 CL16
Video Card(s) RTX3080 Ti FE
Storage SX8200 Pro 1 TB, Plextor M6Pro 256 GB, WD Blue 2TB
Display(s) LG 34GN850P-B
Case SilverStone Primera PM01 RGB
Audio Device(s) SoundBlaster G6 | Fidelio X2 | Sennheiser 6XX
Power Supply SeaSonic Focus Plus Gold 750W
Mouse Endgame Gear XM1R
Keyboard Wooting Two HE
ahh remember when games supported proper AA and not this temporal and FXAA crap?
Yeah but back then we were stuck at 1280x1024 max and did not even think about MSAA at FHD or 4K. Good luck pulling that off now.
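Some rough buffer arithmetic shows why (my own estimate, assuming 4 bytes of color plus 4 of depth/stencil per sample and ignoring the compression real GPUs apply):

```python
def msaa_buffer_mb(width, height, samples, bytes_per_sample=8):
    """Approximate color + depth/stencil footprint of an MSAA render target.
    bytes_per_sample: 4 B color + 4 B depth/stencil (uncompressed estimate)."""
    return width * height * samples * bytes_per_sample / (1024 ** 2)

print(f"{msaa_buffer_mb(1280, 1024, 4):.0f} MiB")  # 4x MSAA at 1280x1024: 40 MiB
print(f"{msaa_buffer_mb(3840, 2160, 4):.0f} MiB")  # 4x MSAA at 4K: 253 MiB
```

And that is only memory; every covered sample also has to be stored, resolved, and (for some passes) shaded, which is where the frame-time cost comes from.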
 
Joined
Feb 3, 2017
Messages
3,719 (1.32/day)
Processor Ryzen 7800X3D
Motherboard ROG STRIX B650E-F GAMING WIFI
Memory 2x16GB G.Skill Flare X5 DDR5-6000 CL36 (F5-6000J3636F16GX2-FX5)
Video Card(s) INNO3D GeForce RTX™ 4070 Ti SUPER TWIN X2
Storage 2TB Samsung 980 PRO, 4TB WD Black SN850X
Display(s) 42" LG C2 OLED, 27" ASUS PG279Q
Case Thermaltake Core P5
Power Supply Fractal Design Ion+ Platinum 760W
Mouse Corsair Dark Core RGB Pro SE
Keyboard Corsair K100 RGB
VR HMD HTC Vive Cosmos
Why is it so difficult for nVidia to implement this technology if it's so good? It should be a toggle like AA in the control panel: you select DLSS and get upscaling in most games.
I still feel they are wasting silicon on tensor cores, as most of the time they do absolutely nothing; it would've been better to have more CUDA cores (or whatever they are called now).
DLSS needs data from the game engine: motion vectors, if I remember correctly. It is similar to TAA in several respects.
ahh remember when games supported proper AA and not this temporal and FXAA crap?
TAA seems to be the norm these days, unfortunately; it is the inevitable evolution towards a lower performance hit. FXAA/MLAA and their relatives were the first-generation post-processing AA methods, which then evolved into TAA. MSAA is still there in some games and can mostly be forced, but it is relatively heavy on performance (and sometimes more than relatively). On the other hand, I remember the beginnings of AA, when SSAA at first, and then MSAA in its bunch of variations, seemed like a godsend :)
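The temporal core of TAA is just an exponential blend of the current jittered frame into an accumulated history buffer, which is where both the low cost and the blur/ghosting come from. A bare-bones sketch (ignoring the reprojection and history clamping a real implementation needs):

```python
import numpy as np

def taa_accumulate(history, current, alpha=0.1):
    """Blend the new frame into the history buffer.
    Small alpha = more smoothing (and more ghosting under motion)."""
    return (1.0 - alpha) * history + alpha * current

# Noisy samples of a constant signal converge toward the true value.
rng = np.random.default_rng(0)
frame_shape = (4, 4)
truth = np.full(frame_shape, 0.5)
history = rng.random(frame_shape)          # arbitrary initial history
for _ in range(200):
    noisy = truth + rng.normal(0.0, 0.05, frame_shape)
    history = taa_accumulate(history, noisy)
```

Small `alpha` means strong smoothing, and therefore more ghosting whenever the history no longer matches the scene, which is exactly the artifact people complain about.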
 
Joined
Oct 12, 2005
Messages
703 (0.10/day)
From some of the Cyberpunk bugs, some people have figured out that DLSS is a combo of a temporal AA/denoiser plus checkerboard rendering. It shifts the render on both axes (up/down, left/right) and upscales it using information from previous frames.

(And that makes the fact that AMD hasn't released their solution yet, or that we still don't have a final, open-source, all-engine solution, even worse.)

On a still frame with no movement it looks very good, and sadly this is how most people make visual comparisons. But with a lot of movement it creates ghosting and other artifacts. Funny that you buy a super-fast 240 Hz monitor only to get ghosting anyway.

But it's still a better technology than Radeon Boost (which is just upscaling with image sharpening).

The main thing people forget about all these upscaling solutions is what the native resolution is. At 4K, both give good results, but at 1080p both suck. At 1440p, I think image sharpening sucks and DLSS is borderline OK in quality mode, but I would prefer to play native.

These are just the beginnings of upscaling technology, and like variable rate shading, they are the future, no matter what we want. But I hope they get more open and vendor-agnostic, or else the future will suck.
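The "shifts the render on both axes" part usually means the projection is offset by a sub-pixel jitter each frame, typically drawn from a low-discrepancy sequence such as Halton(2, 3), so successive frames sample different positions inside each pixel. A sketch (the Halton sequence itself is standard; its use here is my illustration):

```python
def halton(index, base):
    """Radical-inverse Halton value in [0, 1) for a 1-based index."""
    result, f = 0.0, 1.0 / base
    while index > 0:
        result += f * (index % base)
        index //= base
        f /= base
    return result

def jitter_offsets(n):
    """Sub-pixel (x, y) offsets in [-0.5, 0.5) for n frames of a jitter cycle."""
    return [(halton(i, 2) - 0.5, halton(i, 3) - 0.5) for i in range(1, n + 1)]
```

The accumulation pass then combines these differently-jittered frames, which is how a temporal upscaler can recover detail beyond the per-frame render resolution.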
 
That might be a bit off topic, but looking at some comments, it feels like gaming needs to get better on several points that don't always go well together.

We want high refresh rates, but high-quality AA lowers performance. There's the push for 4K gaming, where AA isn't always useful... but we still want high refresh rates. And all of that with better graphics. Upscaling is proposed as a solution, but people would rather get brute-force native 4K at 144 fps.

4K, and even QHD, still haven't become the bare minimum for PC monitors, but we are already seeing "8K gaming" in marketing materials, which is waaaaaay premature. Unless every game manages to run like Doom, I have a hard time seeing 4K becoming the new target of $200-300 GPUs in the near future :D
 
But it's still a better technology than Radeon Boost (which is just upscaling with image sharpening).

Definitely off topic, but here's one example from a ported mobile game

TAA


MSAA


AMD RIS


I guess "image sharpening" does a better job than just MSAA.
 
That might be a bit off topic, but looking at some comments, it feels like gaming needs to get better on several points that don't always go well together.

We want high refresh rates, but high-quality AA lowers performance. There's the push for 4K gaming, where AA isn't always useful... but we still want high refresh rates. And all of that with better graphics. Upscaling is proposed as a solution, but people would rather get brute-force native 4K at 144 fps.

4K, and even QHD, still haven't become the bare minimum for PC monitors, but we are already seeing "8K gaming" in marketing materials, which is waaaaaay premature. Unless every game manages to run like Doom, I have a hard time seeing 4K becoming the new target of $200-300 GPUs in the near future :D
How's your 200 MP camera with its 1/200" sensor?
 
Definitely off topic, but here's one example from a ported mobile game

TAA


MSAA


AMD RIS


I guess "image sharpening" does a better job than just MSAA.

Games using a toon-shader style are the best-case scenario for RIS.
 
How's your 200 MP camera with its 1/200" sensor?
It's funny that you would mention that: those 200 MP sensors are how smartphones cheat around the difficulty of fitting an optical zoom. Just crop a high-pixel-count photo. Meanwhile, professional DSLRs are still around 24 MP. Same for low light: it's hard to fit an APS-C sensor in a phone, so they use AI to get better pictures at night. Photography on smartphones has a lot to do with software, which is something that seems to draw hate when GPUs try to pull the same tricks.
 
That is exactly how the Unreal Engine integration works. Plus you get a dial for the amount of sharpening applied

I'm really looking forward to an in-depth look at that one, and a side-by-side of the performance gains between different types of DLSS 2.0 support and how it stacks up against the earlier versions.

Like I said, the technology is impressive, but as long as it is held proprietary, implementation is at risk of a PhysX or G-Sync situation. Neither is optimal, or necessary, and I think we're already paying a lot for RT. It's Nvidia's turn to either invest or collaborate, IMHO. Moving it to easy integration within engine toolkits is a step up, but it's still not quite what I'd like to see.

OMFG. It's a metric for computational performance: rays cast per second, as part of the RENDERING of the film in the RENDER FARM of the production company.

But how easy is it to render those films in the first place? Please don’t associate h264 decoding with raytracing, for the love of god.
We've been able to render tons of rays for ages; rendering isn't the issue, the speed at which you can is. What you're seeing in film is not a computer game supported by lots of cheap rasterization with a few rays cast over it. And what's being rendered for film is not rendered in real time, for the same obvious reason.

The whole box of metrics is therefore different. We also demand different things when we play games compared to watching film. We want a certain FPS, preferably an order of magnitude (or two) higher than we watch movies at. We also want interactivity: manipulation of the scene. That does not happen in film. So where a render farm can work at a snail's pace to produce one frame, a graphics card needs to complete that frame within milliseconds.
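To make that gap concrete, a quick comparison of frame-time budgets (film render times vary enormously; the two-hour figure below is only an illustrative order of magnitude, not a quoted studio number):

```python
# Frame-time budgets: real-time vs. offline rendering.
budget_60fps_ms = 1000.0 / 60        # what a 60 fps game gets per frame
film_frame_hours = 2.0               # illustrative offline render time per frame
film_frame_ms = film_frame_hours * 3600 * 1000

ratio = film_frame_ms / budget_60fps_ms
print(f"60 fps budget: {budget_60fps_ms:.1f} ms per frame")
print(f"Offline frame takes ~{ratio:,.0f}x longer")
```

Even under that modest assumption, the offline renderer gets five to six orders of magnitude more time per frame, which is the whole point of the precooked-versus-realtime distinction.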

I can't believe I'm spelling this out tbh.
 
we're already paying a lot for RT.
This situation is not of your own free will. Nvidia and the others decided to justify taking more money from people by presenting them with invented needs. Games with predefined effects are more than beautiful enough when their creators are also good artists.

I would even go so far as to describe the imposition of real-time calculated effects as violence against consumers' personal budgets. Once Nvidia and the others decided that all models would be RTX (or DXR, whichever name you choose), they left people no right to choose. Yes, today we have the option to disable it when playing games... but we pay for it through the increased price of the hardware, without anyone asking whether we want to own it.
 