Wednesday, February 3rd 2021

NVIDIA's DLSS 2.0 is Easily Integrated into Unreal Engine 4.26 and Gives Big Performance Gains

When NVIDIA launched the second iteration of its Deep Learning Super Sampling (DLSS) technique, which uses deep learning to upscale lower-resolution frames, everyone was impressed by the quality of the rendering it puts out. However, have you ever wondered how it all looks from the developer's side of things? Games usually need millions of lines of code, and even some small features are not so easy to implement. Today, thanks to Tom Looman, a game developer working with Unreal Engine, we have found out just what the DLSS 2.0 integration process looks like, and how big the performance benefits coming from it are.

In the blog post, you can take a look at the example game shown by the developer. The integration with Unreal Engine 4.26 is easy: you just compile your project against a special UE4 RTX branch and add your AppID, which you can apply for on NVIDIA's website. Right now you are probably wondering what the performance looks like. Well, the baseline for the result was the TXAA sampling technique used in the game demo. DLSS 2.0 managed to bring anywhere from a 60% to 180% increase in frame rate, depending on the scene. These are rather impressive numbers, and it goes to show just how well NVIDIA has managed to build its DLSS 2.0 technology. For a full overview, please refer to the blog post.
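As a rough illustration of what that range means in practice, here is some back-of-the-envelope math; the 60 FPS baseline is an assumed figure for the sake of the example, not a number from the blog post:

```cpp
// Back-of-the-envelope math only: what a 60-180% frame-rate gain means
// against an assumed 60 FPS TXAA baseline. Actual baselines in the blog
// post vary per scene and per GPU.
#include <cstdio>

int main()
{
    const double baselineFps = 60.0;  // assumption for illustration
    const double minGain = 0.60;      // +60%, low end reported
    const double maxGain = 1.80;      // +180%, high end reported

    std::printf("Baseline: %.0f FPS\n", baselineFps);
    std::printf("With DLSS 2.0: %.0f-%.0f FPS\n",
                baselineFps * (1.0 + minGain),
                baselineFps * (1.0 + maxGain));
    return 0;
}
```

In other words, a scene running at 60 FPS with TXAA would land somewhere between roughly 96 and 168 FPS, depending on how favorable the scene is.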
Source: Tom Looman Blog

74 Comments on NVIDIA's DLSS 2.0 is Easily Integrated into Unreal Engine 4.26 and Gives Big Performance Gains

#26
rutra80
TumbleGeorgeSomewhere I read system requirements for gaming at 1080p @ 60 fps with full ray tracing effects enabled (if the game supports full ray tracing)... At least 32 GB VRAM (64 GB recommended) and around 60 teraflops of FP32 performance... If that is true, 4K or higher gaming with full RT will never happen without fakes like DLSS or FidelityFX or similar hacks. Hardware is too weak now and will not be enough in the future. Never!
You're like Bill Gates in 1982!
Posted on Reply
#27
Dredi
TumbleGeorgeCinema and movies is off topic to this discussion?
Raytracing is the same for both. There is no special ”gaming only” raytracing. It has now popped up for gaming because the performance is finally good enough, not because someone found new gaming specific algorithms or other bullcrap.
Posted on Reply
#28
1d10t
Well, the baseline for the result was the TXAA sampling technique used in the game demo
First of all, TXAA is already a low bar to begin with, a temporal anti-aliasing method that takes a "blurry" approach. I don't understand how people describe image fidelity these days: you've got shitty temporal TXAA, a motion blur option in your game, DLSS masked downscaling on your GPU, another motion compensation pass in your monitor, and you dare to say "everyone was impressed by the quality"? What? When was the last time they had an eye check-up?
Posted on Reply
#29
dyonoctis
Vayra86Make it non proprietary or you can stick it right back up in that scalper's anus.
At this point, I feel like this is something that should be implemented inside DX12. I'm starting to lose hope about open source; it's always the same thing: "open source is great" on one side, "open source is hard, you get better support with proprietary stuff" on the other side.
CUDA is a prime example of that. OpenCL was Apple's baby, made when Jobs was still there; the "closed garden" company made it open source. Five years later they threw in the towel and decided to make Metal, because OpenCL failed to gain enough traction, while CUDA on macOS was a thing. If you are using an NVIDIA GPU, you can be assured that any kind of GPU-accelerated app will work with your hardware. Are you using AMD? Well, good luck, you are going to be more limited in your choice of apps.

If DirectCompute had been a thing beyond just gaming, we could have enjoyed truly mainstream, platform-agnostic GPGPU. The good news is that AMD is working with Microsoft to make their own "DLSS", so we can expect a solution that won't discriminate. We just have to hope that it's going to be competitive with NVIDIA's offering, so that devs won't be pulled apart by being forced to implement both.
AMD has an answer to DLSS, DirectML Super Resolution | OC3D News (overclock3d.net)
Posted on Reply
#30
TumbleGeorge
DrediRaytracing is the same for both. There is no special ”gaming only” raytracing. It has now popped up for gaming because the performance is finally good enough, not because someone found new gaming specific algorithms or other bullcrap.
Effects in movies are outside of our discussion. We consume movies as ready-to-play content; the work that creates them is professional and has nothing to do with the performance of our graphics cards.
Posted on Reply
#31
Dredi
TumbleGeorgeEffects in movies are outside of our discussion. We consume movies as ready-to-play content; the work that creates them is professional and has nothing to do with the performance of our graphics cards.
For God's sake. Just stop with the nonsense. Raytracing in movies is very much related to raytracing in games. Same algorithms, same hardware.
Edit: why do you think that render-to-disk and render-to-screen are so different that they cannot be compared performance-wise?

I’m willing to bet any amount of money against your claim that raytracing will NEVER be viable at 4K resolution. How anyone can be so pessimistic about this is beyond me. The only way it won’t become reality is if WWIII wipes the human race out of existence.
Posted on Reply
#32
Vayra86
DrediRT has been used in the film industry for a long long time already. The performance gain per watt in ten years or so is measured in tens of thousands of percents.
And stuck at 24 FPS ever since :)

How is this a metric for realtime calculations? The baseline simply isn't there because you can't really tell what percentage of the overall load consists of RT load.

It's anyone's guess, so Huang can tell us whatever he likes. You can choose to believe it or not, and in both cases you'd be right. The ultimate FUD really; this is why they wanted to pre-empt AMD with it. Not to make tons of games with RT... but to set the baseline as blurry and uncontrolled as possible. We now live in the perception that 'Nvidia does RT better'... based on what, exactly :)

Remember... TEN GIGARAYS, boys. We didn't have a clue then and we still don't in 2021. All we know is that perf/dollar took a nosedive since that very speech, and that started even before the current pandemic.
DrediFor God's sake. Just stop with the nonsense. Raytracing in movies is very much related to raytracing in games. Same algorithms, same hardware.
Edit: why do you think that render-to-disk and render-to-screen are so different that they cannot be compared performance-wise?
Precooked versus realtime. Are you that dense or that stupid? No you don't get to pick anything else. Holy crap, son.
Posted on Reply
#33
Unregistered
Why is it so difficult for NVIDIA to implement this technology if it's so good? It should be a toggle like AA in the control panel: you select DLSS and get some upscaling in most games.
I still feel they are wasting silicon on tensor cores, as most of the time they do absolutely nothing; it would've been better to have more CUDA cores (or whatever they are called now).
#34
TumbleGeorge
DrediFor God's sake. Just stop with the nonsense. Raytracing in movies is very much related to raytracing in games. Same algorithms, same hardware
Movies are easy to play in 4K, and now in 8K, with an iGPU. They are ready-to-play content, not raw material. Playing (watching) movies and playing games are not the same and don't use the same paths and resources in graphics card hardware.
Posted on Reply
#35
Vayra86
Xex360Why is it so difficult for NVIDIA to implement this technology if it's so good? It should be a toggle like AA in the control panel: you select DLSS and get some upscaling in most games.
I still feel they are wasting silicon on tensor cores, as most of the time they do absolutely nothing; it would've been better to have more CUDA cores (or whatever they are called now).
It's as difficult as G-Sync.

Give it time.
Posted on Reply
#36
dyonoctis
Xex360Why is it so difficult for NVIDIA to implement this technology if it's so good? It should be a toggle like AA in the control panel: you select DLSS and get some upscaling in most games.
I still feel they are wasting silicon on tensor cores, as most of the time they do absolutely nothing; it would've been better to have more CUDA cores (or whatever they are called now).
AMD is about to join them with "A.I. on silicon". Going forward, we can forget about GPUs being exclusively about gaming. Software using A.I. won't appear if the hardware isn't there. With Windows being fragmented like crazy, we are probably going to be behind macOS in that regard for a while, but it's going to happen.

It's weird how phones somewhat became more bleeding-edge than the PC for that kind of thing.
Posted on Reply
#37
Dredi
Vayra86How is this a metric for realtime calculations? The baseline simply isn't there because you can't really tell what percentage of the overall load consists of RT load.
OMFG. It’s a metric for computational performance, rays cast per second, as part of the RENDERING of the film in the RENDER FARM of the production company.
TumbleGeorgeMovies are easy to play in 4K, and now in 8K, with an iGPU. They are ready-to-play content, not raw material. Playing (watching) movies and playing games are not the same and don't use the same paths and resources in graphics card hardware.
But how easy is it to render those films in the first place? Please don’t associate h264 decoding with raytracing, for the love of god.
Posted on Reply
#38
W1zzard
Xex360It should be a toggle like AA in the control panel: you select DLSS and get some upscaling in most games.
That is exactly how the Unreal Engine integration works. Plus you get a dial for the amount of sharpening applied
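For illustration, flipping that toggle from game code might look roughly like the sketch below. The cvar names are an assumption modelled on NVIDIA's DLSS plugin for UE4 and may differ in the RTX 4.26 branch, so treat this as the general idea rather than the exact API:

```cpp
// Sketch only (UE4): enable DLSS and set the sharpening "dial" via console
// variables. The cvar names below are assumptions; check the branch docs.
#include "CoreMinimal.h"
#include "HAL/IConsoleManager.h"

void EnableDLSSWithSharpening(float Sharpness) // e.g. 0.0f to 1.0f
{
    if (IConsoleVariable* EnableCVar =
            IConsoleManager::Get().FindConsoleVariable(TEXT("r.NGX.DLSS.Enable")))
    {
        EnableCVar->Set(TEXT("1"));  // turn the upscaler on
    }
    if (IConsoleVariable* SharpnessCVar =
            IConsoleManager::Get().FindConsoleVariable(TEXT("r.NGX.DLSS.Sharpness")))
    {
        SharpnessCVar->Set(*FString::SanitizeFloat(Sharpness));  // the sharpening dial
    }
}
```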
Posted on Reply
#39
ZoneDymo
Xex360Why is it so difficult for NVIDIA to implement this technology if it's so good? It should be a toggle like AA in the control panel: you select DLSS and get some upscaling in most games.
I still feel they are wasting silicon on tensor cores, as most of the time they do absolutely nothing; it would've been better to have more CUDA cores (or whatever they are called now).
ahh remember when games supported proper AA and not this temporal and FXAA crap?
Posted on Reply
#40
Chomiq
ZoneDymoahh remember when games supported proper AA and not this temporal and FXAA crap?
Yeah but back then we were stuck at 1280x1024 max and did not even think about MSAA at FHD or 4K. Good luck pulling that off now.
Posted on Reply
#41
londiste
Xex360Why is it so difficult for NVIDIA to implement this technology if it's so good? It should be a toggle like AA in the control panel: you select DLSS and get some upscaling in most games.
I still feel they are wasting silicon on tensor cores, as most of the time they do absolutely nothing; it would've been better to have more CUDA cores (or whatever they are called now).
DLSS needs data from the game engine: motion vectors, if I remember correctly. It is similar to TAA in several aspects.
ZoneDymoahh remember when games supported proper AA and not this temporal and FXAA crap?
TAA seems to be the norm these days, unfortunately. It is just the inevitable evolution towards a lower performance hit. FXAA/MLAA and their relatives were the first-gen post-processing AA methods, and those have now evolved into TAA. MSAA is still there in some games and can mostly be forced, but it is relatively heavy on performance (and sometimes not only relatively). On the other hand, I remember the beginnings of AA, when SSAA came first and MSAA in its bunch of variations seemed like a godsend :)
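To illustrate the motion-vector point above, here is a toy sketch of the TAA-style idea DLSS builds on: each pixel is reprojected into the previous frame using the engine-supplied motion vector, then blended with that history. This is not NVIDIA's actual code, just the general principle:

```cpp
// Toy TAA-style temporal resolve: not DLSS itself, just why the engine
// has to supply per-pixel motion vectors.
#include <algorithm>

struct Vec2  { float x, y; };
struct Color { float r, g, b; };

// The engine's motion vector maps a pixel's UV in the current frame back
// to where that surface point was in the previous frame.
Vec2 ReprojectUV(Vec2 uv, Vec2 motionVector)
{
    return { std::clamp(uv.x - motionVector.x, 0.0f, 1.0f),
             std::clamp(uv.y - motionVector.y, 0.0f, 1.0f) };
}

// Blend the new sample with the reprojected history (exponential accumulation).
// A real implementation also clamps or rejects history to limit ghosting.
Color TemporalResolve(Color current, Color history, float blend = 0.1f)
{
    return { current.r * blend + history.r * (1.0f - blend),
             current.g * blend + history.g * (1.0f - blend),
             current.b * blend + history.b * (1.0f - blend) };
}
```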
Posted on Reply
#42
Punkenjoy
From some of the Cyberpunk bugs, some people have figured out that DLSS is a combo of a temporal AA/denoiser + checkerboard rendering. It shifts the render on both axes (up/down, left/right) and upscales it using information from previous frames.

(And that just makes the fact that AMD hasn't released its solution yet, or the fact that we still don't have a final, open-source, all-engine solution, even worse.)

On a still frame with no movement, it looks very good, and sadly this is how most people make visual comparisons. But with big movement, it creates ghosting and other artifacts. Funny that you buy a super-fast 240 Hz monitor and get ghosting anyway.

But it's still a better technology than Radeon Boost (which is just upscaling with image sharpening).

But the main thing people forget about all these upscaling solutions is the native resolution. At 4K, both give good results, but at 1080p both suck. At 1440p, I think image sharpening sucks and DLSS is borderline OK in quality mode, but I would prefer to play at native.

This is just the beginning of upscaling technology, and like variable rate shading, it is the future no matter what we want. But I hope these solutions get more open and vendor-agnostic, or else the future will suck.
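The "shift the render a little every frame" part can be sketched roughly like this: a Halton(2,3) sequence gives each low-resolution frame a different sub-pixel offset, which the temporal pass can then accumulate into extra detail. Purely illustrative, not NVIDIA's actual implementation:

```cpp
// Illustrative only: a Halton(2,3) jitter sequence of sub-pixel offsets,
// the usual way temporal upscalers shift the render on both axes each frame.
#include <cstdio>

// Radical inverse in the given base (standard Halton construction).
float Halton(int index, int base)
{
    float result = 0.0f;
    float f = 1.0f;
    while (index > 0)
    {
        f /= static_cast<float>(base);
        result += f * static_cast<float>(index % base);
        index /= base;
    }
    return result;
}

int main()
{
    for (int frame = 1; frame <= 8; ++frame)
    {
        // Offsets in [-0.5, 0.5) pixels, typically applied to the projection matrix.
        const float jitterX = Halton(frame, 2) - 0.5f;
        const float jitterY = Halton(frame, 3) - 0.5f;
        std::printf("frame %d: jitter = (%+.3f, %+.3f)\n", frame, jitterX, jitterY);
    }
    return 0;
}
```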
Posted on Reply
#43
dyonoctis
That might be a bit off topic, but looking at some comments, it feels like gaming needs to get better on several points that don't always go well together.

We want high refresh rates, but high-quality AA lowers performance. There's the push for 4K gaming, where AA isn't always useful... but we still want high refresh rates. And all of that with better graphics. Upscaling is proposed as a solution, but people would rather get brute-force native 4K at 144 fps.

4K, and even QHD, still hasn't become the bare minimum for PC monitors, but we are already seeing "8K gaming" in marketing materials, which is waaaaaay premature. Unless every game manages to run like Doom, I have a hard time seeing 4K becoming the new target for $200-300 GPUs in the near future :D
Posted on Reply
#44
1d10t
PunkenjoyBut it's still a better technology than Radeon Boost (which is just upscaling with image sharpening).
Definitely off topic, but here's one example from a ported mobile game:

TAA vs. MSAA vs. AMD RIS comparison screenshots (images not shown)
I guess "image sharpening" does a better job than just MSAA.
Posted on Reply
#45
rutra80
dyonoctisThat might be a bit off topic, but looking at some comments, it feels like gaming needs to get better on several points that don't always go well together.

We want high refresh rates, but high-quality AA lowers performance. There's the push for 4K gaming, where AA isn't always useful... but we still want high refresh rates. And all of that with better graphics. Upscaling is proposed as a solution, but people would rather get brute-force native 4K at 144 fps.

4K, and even QHD, still hasn't become the bare minimum for PC monitors, but we are already seeing "8K gaming" in marketing materials, which is waaaaaay premature. Unless every game manages to run like Doom, I have a hard time seeing 4K becoming the new target for $200-300 GPUs in the near future :D
How's your 200 MP camera with a 1/200" sensor?
Posted on Reply
#46
Punkenjoy
1d10tDefinitely off topic, but here's one example from a ported mobile game:

TAA vs. MSAA vs. AMD RIS comparison screenshots (images not shown)
I guess "image sharpening" does a better job than just MSAA.
Games using a toon-shader style are the best-case scenario for RIS.
Posted on Reply
#47
dyonoctis
rutra80How's your 200 MP camera with a 1/200" sensor?
It's funny that you would mention that: those 200 MP sensors are how smartphones cheat to make up for the difficulty of fitting an optical zoom. Just crop a high-megapixel photo. Meanwhile, professional DSLRs are still around 24 MP. Same for low light: it's hard to get an APS-C sensor into a phone, so... they use AI to get better pictures at night. Photography on smartphones has a lot to do with software, which is something that seems to get hate when GPUs try to pull the same tricks.
Posted on Reply
#48
Vayra86
W1zzardThat is exactly how the Unreal Engine integration works. Plus you get a dial for the amount of sharpening applied
I'm really looking forward to an in-depth on that one and a side by side of the performance gains between different types of support for DLSS 2.0 and how it stacks up against the earlier ones.

Like I said, the technology is impressive, but as long as it is held proprietary, implementation is at risk of a PhysX or G-Sync situation. Neither is optimal, or necessary, and I think we're already paying a lot for RT. It's NVIDIA's turn to either invest or collab, IMHO. Moving it to easy integration within engine toolkits is a step up, but it's still not quite what I'd like to see.
DrediOMFG. It’s a metric for computational performance, rays cast per second, as part of the RENDERING of the film in the RENDER FARM of the production company.

But how easy is it to render those films in the first place? Please don’t associate h264 decoding with raytracing, for the love of god.
We have been able to render tons of rays for ages; the rendering isn't the issue, the speed at which you can do it is. What you're seeing in film is not a computer game supported by lots of cheap rasterization with a few rays cast over it. And what's being rendered in film is not being rendered in real time, for the same obvious reason.

The whole box of metrics is therefore different. We demand different things when we play games compared to watching film, too. We want a certain amount of FPS, preferably an order of magnitude (or two) higher than what we watch movies at. We also want interactivity: manipulation of the scene. This does not happen in film. So where a render farm can work at a snail's pace to produce one frame, a graphics card needs to complete that frame within milliseconds.

I can't believe I'm spelling this out tbh.
Posted on Reply
#49
TumbleGeorge
Vayra86we're already paying a lot for RT.
This situation is not of your free will. NVIDIA and others decided to justify their attempts to get more money from people by presenting unreal needs. Games with predefined effects are more than beautiful enough when their creators are also good artists.

I would even go as far as calling the imposition of real-time calculated effects a kind of violence against consumers' personal budgets. Because once NVIDIA and others decided that all models are RTX (or DXR, or whatever different name they choose), they did not leave people the right to choose. Yes, today we have the option to disable it when playing games... but we pay for it with the increased price of the hardware, without anyone asking whether we want to own it.
Posted on Reply
#50
Vayra86
TumbleGeorgeThis situation is not of your free will. NVIDIA and others decided to justify their attempts to get more money from people by presenting unreal needs. Games with predefined effects are more than beautiful enough when their creators are also good artists.

I would even go as far as calling the imposition of real-time calculated effects a kind of violence against consumers' personal budgets. Because once NVIDIA and others decided that all models are RTX (or DXR, or whatever different name they choose), they did not leave people the right to choose. Yes, today we have the option to disable it when playing games... but we pay for it with the increased price of the hardware, without anyone asking whether we want to own it.
Well... the history of computer graphics has a few examples of completely failed technological advancements that lots of people were initially crazy about.
3DFX.
PhysX
VR: it's been launched and then re-launched how many times now? Still not really sticking as anything more than a niche.
...

And what about API adoption? Some APIs were dragged out to infinity (DirectX 9.0c and DX11), while others were barely used and are still only just gaining ground, like DX12 and DX10. Or Vulkan.

There is a whole industry behind this with varying demands and investments, and developers have so much to choose from that it's really not as simple as you might think. Budgets are limited, and every feature costs money and dev time. Time to market is usually what kills projects. The whole premise of RT was that developers would be able to do less work on a scene, having calculations done for them, etc. But realistically, you still have to design scenes, and you're just adding another layer of effects that has to be implemented in the whole process.

Another potential hurdle for RT is the state of the world right now. This was supposed to be Ampere's glory moment, RT's 'getting big' generation. What do we have? GPUs that are priced out of the market or simply not there at all, consoles that launch with new hardware but no games to show it off and similar availability issues, and a global pandemic keeping us home with lots of time to not enjoy it. The stars for RT are definitely not aligned, and this whole situation will probably set it back a few years, if not more. After all, what are devs going to target now? New-gen hardware? Or the stuff everybody already has? If you want sales, the last thing you want is to have half your consumer base feel like 'have-nots'.
Posted on Reply