Wednesday, February 3rd 2021
NVIDIA's DLSS 2.0 is Easily Integrated into Unreal Engine 4.26 and Gives Big Performance Gains
When NVIDIA launched the second iteration of its Deep Learning Super Sampling (DLSS) technique, which uses deep learning to upscale lower-resolution frames, everyone was impressed by the quality of the rendering it puts out. However, have you ever wondered how it all looks from the developer's side of things? Games usually need millions of lines of code, and even some small features are not easy to implement. Today, thanks to Tom Looman, a game developer working with Unreal Engine, we get to see what the integration process for DLSS 2.0 looks like, and how big the performance benefits coming from it are.
In the blog post, you can take a look at the example game shown by the developer. The integration with Unreal Engine 4.26 is easy: it just requires that you compile your project against a special UE4 RTX branch and add your AppID, which you can apply for on NVIDIA's website. Right now you are probably wondering how the performance looks. Well, the baseline for the results was the TXAA sampling technique used in the game demo. DLSS 2.0 managed to bring anywhere from a 60-180% increase in frame rate, depending on the scene. These are rather impressive numbers, and it goes to show just how well NVIDIA has built its DLSS 2.0 technology. For a full overview, please refer to the blog post.
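For a rough idea of how little game-side code this involves, here is a minimal sketch of toggling DLSS from C++ once a project is built against the DLSS-enabled UE4 branch. The console variable names (r.NGX.DLSS.Enable, r.NGX.DLSS.Quality) and the preset mapping are assumptions about the plugin and may differ between branch versions; only IConsoleManager itself is stock UE4 API.

```cpp
// Minimal sketch: enable DLSS at runtime through console variables.
// Assumes the project is built on the DLSS-enabled UE4 branch and the
// plugin exposes "r.NGX.DLSS.*" cvars (names may vary by version).
#include "HAL/IConsoleManager.h"

static void EnableDLSSQualityMode()
{
    // Turn the DLSS pass on (0 = off, 1 = on in the assumed cvar).
    if (IConsoleVariable* EnableCVar =
            IConsoleManager::Get().FindConsoleVariable(TEXT("r.NGX.DLSS.Enable")))
    {
        EnableCVar->Set(1, ECVF_SetByCode);
    }

    // Pick a quality preset; the integer mapping here is an assumption.
    if (IConsoleVariable* QualityCVar =
            IConsoleManager::Get().FindConsoleVariable(TEXT("r.NGX.DLSS.Quality")))
    {
        QualityCVar->Set(1, ECVF_SetByCode); // e.g. 1 = Quality preset
    }
}
```

In practice the same toggles can also be flipped from the in-game console or a config file, which is part of why the per-project integration effort is so small.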
Source:
Tom Looman Blog
74 Comments on NVIDIA's DLSS 2.0 is Easily Integrated into Unreal Engine 4.26 and Gives Big Performance Gains
CUDA is a prime example of that. OpenCL was Apple's baby, made when Jobs was still there; the "closed garden" company made it open source. Five years later they threw in the towel and decided to make Metal, because OpenCL failed to gain enough traction, while CUDA on Mac OS X was a thing. If you are using an Nvidia GPU, you can be assured that any kind of GPU-accelerated app will work with your hardware. You are using AMD? Well, good luck, you are going to be more limited in your choice of apps.
If DirectCompute had been a thing beyond just gaming, we could have enjoyed truly mainstream, platform-agnostic GPGPU. The good news is that AMD is working with Microsoft to make their own "DLSS", so we can expect a solution that won't discriminate. We just have to hope that it's going to be competitive with Nvidia's offering, so that devs won't be pulled apart by being forced to implement both.
AMD has an answer to DLSS, DirectML Super Resolution | OC3D News (overclock3d.net)
Edit: why do you think that rendering to disk and rendering to screen are so different that they cannot be compared performance-wise?
I'm willing to bet any amount of money against your claim that raytracing will NEVER be viable at 4K resolution. How anyone can be so pessimistic about this is beyond me. The only way it won't become reality is if WWIII wipes the human race out of existence.
How is this a metric for realtime calculations? The baseline simply isn't there, because you can't really tell what percentage of the overall load consists of RT load.
It's anyone's guess, so Huang can tell us whatever he likes. You can choose to believe it or not, and in both cases you'd be right. The ultimate FUD, really; this is why they wanted to pre-empt AMD with it. Not to make tons of games with RT... but to set the baseline as blurry and uncontrolled as possible. We now live with the perception that 'Nvidia does RT better'... based on what, exactly? :)
Remember... TEN GIGARAYS, boys. We didn't have a clue then and we still don't in 2021. All we know is that perf/dollar has taken a nosedive since that very speech, and that started even before the current pandemic. Precooked versus realtime. Are you that dense or that stupid? No, you don't get to pick anything else. Holy crap, son.
I still feel they are wasting silicon on tensor cores, as most of the time they do absolutely nothing; it would've been better to have more CUDA cores (or whatever they are called now).
Give it time.
It's weird how phones somehow became more bleeding-edge than the PC for that kind of thing.
(And that just makes the fact that AMD hasn't released their solution yet, and that we still don't have a final, open-source, all-engine solution, even worse.)
On still frames with no movement, it looks very good, and sadly this is how most people make visual comparisons. But with big movements, it creates ghosting and other artifacts. Funny that you buy a super-fast 240 Hz monitor only to get ghosting anyway.
But it's still a better technology than Radeon Boost (which is just upscaling with image sharpening).
But the main thing people forget about all these upscaling solutions is the native (internal) resolution. At 4K, both give good results, but at 1080p both suck. At 1440p, I think image sharpening sucks and DLSS is borderline OK in Quality mode, but I would prefer to play native (rough numbers below).
These are just the beginning of upscaling technology, and like variable rate shading, they are the future, no matter what we want. But I hope that they get more open and vendor-agnostic, or else the future will suck.
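To put some rough numbers on the native-resolution point above, here is a small stand-alone C++ sketch that computes approximate internal render resolutions from commonly cited DLSS per-axis scale factors (Quality ≈ 0.67, Balanced ≈ 0.58, Performance = 0.5); treat the factors as approximations, not vendor-confirmed constants.

```cpp
// Stand-alone sketch: approximate DLSS internal render resolutions.
// The per-axis scale factors below are commonly cited approximations,
// not official NVIDIA numbers.
#include <cstdio>

int main()
{
    struct Mode   { const char* name; double scale; };
    struct Output { const char* name; int w, h; };

    const Mode modes[]     = { {"Quality", 0.667}, {"Balanced", 0.58}, {"Performance", 0.50} };
    const Output outputs[] = { {"1080p", 1920, 1080}, {"1440p", 2560, 1440}, {"4K", 3840, 2160} };

    for (const Output& out : outputs)
    {
        for (const Mode& m : modes)
        {
            const int w = static_cast<int>(out.w * m.scale);
            const int h = static_cast<int>(out.h * m.scale);
            std::printf("%-5s %-12s -> ~%dx%d internal\n", out.name, m.name, w, h);
        }
    }
    return 0;
}
```

At 1080p output, Quality mode works from roughly 1280x720 internally, which is why lower output resolutions leave the reconstruction far less to work with.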
We want high refresh rates, but getting high-quality AA lowers performance. There's the push for 4K gaming, where AA isn't always useful... but we still want high refresh rates. And all of that with better graphics. Upscaling is proposed as a solution, but people would rather get brute-force native 4K at 144 fps.
4K, and even QHD, still hasn't become the bare minimum for PC monitors, but we are already seeing "8K gaming" in marketing materials, which is waaaaaay premature. Unless every game manages to run like Doom, I have a hard time seeing 4K becoming the new target for $200-300 GPUs in the near future :D
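For context on why brute-force native rendering at those targets is such a big ask, here is a quick back-of-the-envelope C++ calculation of raw pixel throughput (a rough sketch that ignores overdraw, AA samples and everything else that also scales with resolution):

```cpp
// Quick arithmetic: how many pixels per second each target actually asks for.
#include <cstdio>

int main()
{
    struct Target { const char* name; long long w, h; int fps; };
    const Target targets[] = {
        {"1440p @ 144", 2560, 1440, 144},
        {"4K    @ 144", 3840, 2160, 144},
        {"8K    @  60", 7680, 4320, 60},
    };

    for (const Target& t : targets)
    {
        const long long perFrame = t.w * t.h;                       // pixels per frame
        const double perSecond   = static_cast<double>(perFrame) * t.fps;
        std::printf("%s : %.1f MP/frame, %.2f GP/s shaded natively\n",
                    t.name, perFrame / 1.0e6, perSecond / 1.0e9);
    }
    return 0;
}
```

Even at only 60 Hz, 8K asks for nearly four times the pixels per second of 1440p at 144 Hz.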
TAA
MSAA
AMD RIS
I guess "image sharpening" does a better job than just MSAA.
Like I said, the technology is impressive, but as long as it is kept proprietary, implementation is at risk of a PhysX or G-Sync situation. Neither is optimal, or necessary, and I think we're already paying a lot for RT. It's Nvidia's turn to either invest or collaborate, IMHO. Moving it to easy integration within engine toolkits is a step up, but it's still not quite what I'd like to see. We've been able to render tons of rays for ages; the rendering isn't the issue, the speed at which you can do it is. What you're seeing in film is not a computer game supported by lots of cheap rasterization with a few rays cast over it. And what's being rendered in film is not being rendered in real time, for the same obvious reason.
The whole set of metrics is therefore different. We also demand different things when we play games compared to watching film. We want a certain FPS, preferably an order of magnitude (or two) higher than the rate we watch movies at. We also want interactivity - manipulation of the scene. This does not happen in film. So where a render farm can work at a snail's pace to produce one frame, a graphics card needs to complete that frame within milliseconds.
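To spell out that gap in numbers, here is a trivial sketch of per-frame time budgets at common refresh rates, compared against a purely illustrative one-hour-per-frame offline render (real film frame times vary wildly):

```cpp
// Frame-time budgets: real-time targets vs. an illustrative offline render.
#include <cstdio>

int main()
{
    const int fpsTargets[] = { 30, 60, 144, 240 };
    for (int fps : fpsTargets)
    {
        std::printf("%3d fps -> %.2f ms per frame\n", fps, 1000.0 / fps);
    }

    // Offline film rendering is often quoted in hours per frame; even a
    // (hypothetical) 1 hour/frame budget is ~200,000x the 60 fps budget.
    const double offlineMs = 1.0 * 3600.0 * 1000.0; // 1 hour in ms (illustrative)
    std::printf("1 h/frame offline -> %.0fx the 60 fps budget\n",
                offlineMs / (1000.0 / 60.0));
    return 0;
}
```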
I can't believe I'm spelling this out tbh.
I'd even go so far as to call the imposition of real-time calculated effects a kind of violence against consumers' personal budgets. Because once Nvidia and the others have decided that all models are RTX (or DXR, or whatever different name they choose), they don't leave people the right to choose. Yes, today we have the option to disable it when playing games... but we pay for it through the increased price of the hardware, without anyone asking whether we want to own it.
3DFX.
PhysX
VR - it's been launched and then re-launched how many times now? Still not really sticking as anything more than a niche.
...
And what about API adoption? Some APIs were dragged out to infinity (DirectX 9.0c and DX11), while others were barely used and are still only just gaining ground, like DX12 and DX10. Or Vulkan.
There is a whole industry behind this with varying demands and investments, and developers have so much to choose from that it's really not as simple as you might think. Budgets are limited, and every feature costs money and dev time. Time to market is usually what kills projects. The whole premise of RT was that developers would be able to do less work on a scene, having calculations done for them, etc. But realistically, you still have to design scenes, and you're just adding another layer of effects that has to be implemented in the whole process.
Another potential hurdle for RT is the state of the world right now. This was supposed to be Ampere's glory moment, RT's 'getting big' generation. What do we have? GPUs that are priced out of the market or simply not there at all, consoles that launch with new hardware but no games to show it off and with similar availability issues, and a global pandemic keeping us home with lots of time to not enjoy it. The stars for RT are definitely not aligned, and this whole situation will probably set it back a few years, if not more. After all, what are devs going to target now? New-gen hardware? Or the stuff everybody already has? If you want sales, the last thing you want is to have half your consumer base feel like 'have-nots'.