Wednesday, February 3rd 2021
NVIDIA's DLSS 2.0 is Easily Integrated into Unreal Engine 4.26 and Gives Big Performance Gains
When NVIDIA launched the second iteration of its Deep Learning Super Sampling (DLSS) technique, which uses deep learning to upscale frames rendered at lower resolutions, everyone was impressed by the quality of the rendering it puts out. However, have you ever wondered how it all looks from the developer's side of things? Games usually need millions of lines of code, and even seemingly small features are not always easy to implement. Today, thanks to Tom Looman, a game developer working with Unreal Engine, we get to see what the DLSS 2.0 integration process looks like and how big the resulting performance benefits are.
In the blog post, you can take a look at the example game shown by the developer. Integration with Unreal Engine 4.26 is easy: you compile your project against a special UE4 RTX branch and supply an AppID, which you can apply for on NVIDIA's website. You are probably wondering what performance looks like. The baseline for the results was the TXAA sampling technique used in the game demo, and DLSS 2.0 managed to deliver anywhere from a 60-180% increase in frame rate, depending on the scene. These are rather impressive numbers, and they go to show just how well NVIDIA has built its DLSS 2.0 technology. For a full overview, please refer to the blog post.
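To put that range into concrete numbers, here is a minimal back-of-the-envelope sketch; the 35 FPS baseline is an assumed figure for illustration, not one taken from the blog post:

# Rough illustration of what a 60-180% frame-rate uplift means in practice.
# The 35 FPS baseline is an assumed example, not a figure from the blog post.
baseline_fps = 35.0
for uplift_percent in (60, 120, 180):
    dlss_fps = baseline_fps * (1 + uplift_percent / 100)
    print(f"+{uplift_percent}%: {baseline_fps:.0f} FPS -> {dlss_fps:.0f} FPS")
# Prints:
# +60%: 35 FPS -> 56 FPS
# +120%: 35 FPS -> 77 FPS
# +180%: 35 FPS -> 98 FPS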
Source:
Tom Looman Blog
74 Comments on NVIDIA's DLSS 2.0 is Easily Integrated into Unreal Engine 4.26 and Gives Big Performance Gains
Impressive if you take them at their word, which would be extremely ill-advised given this is Nvidia we are talking about. How many times do people have to fall for Nvidia's marketing numbers?
Hardware Unboxed found that you get anywhere from a 12% boost to a 60% boost at 4K (so best-case scenario) depending on the game. I don't know where they are getting the 60-180% from, but it seems like complete BS. You cannot gain more performance from upscaling than you would have lost by increasing the resolution instead. If I take a 50% performance hit by bumping my resolution to 4K from 2K, the most I can gain is that 50% back through upscaling 2K to 4K. The only scenario where you could gain more than 60% is if you took an old AF video card and tried to run it at 4K, by which point the card doesn't support DLSS anyway.
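To spell that ceiling argument out, here is a rough sketch of the math; all of the frame rates and the per-frame DLSS cost below are made-up example numbers, not measurements:

# The ceiling argument: upscaling from a lower internal resolution can never
# recover more performance than simply rendering at that lower resolution.
# All numbers below are assumed examples, not measurements.
fps_native_4k = 40.0       # assumed native 4K frame rate
fps_native_1440p = 64.0    # assumed native 1440p frame rate (the internal
                           # resolution of DLSS "Quality" mode at 4K output)

# Best possible uplift if the upscaling step were completely free:
ceiling = fps_native_1440p / fps_native_4k - 1
print(f"theoretical ceiling: {ceiling:.0%}")          # 60%

# In practice the DLSS pass has its own per-frame cost, so the real gain is lower:
dlss_cost_ms = 1.5                                    # assumed per-frame cost
frame_time_ms = 1000 / fps_native_1440p + dlss_cost_ms
print(f"with overhead: {1000 / frame_time_ms / fps_native_4k - 1:.0%}")   # ~46%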
There's a big convergence going on between the film industry and gaming. The need for ray-traced real-time graphics might have been something that was pushed by actors outside of gaming. That wouldn't be the first time that "Hollywood" tech makes it into games. For PBR (physically based rendering), Epic looked at the work Disney did for Wreck-It Ralph and found a way to adapt the principles for real-time graphics.
SGI, the company behind OpenGL and Maya, were the guys who helped the dinosaurs in Jurassic Park become a reality, and down the line all the work they did for movies ended up benefiting video games in some way. They worked with Sega, and the N64 hardware was made by them. As a matter of fact, Nvidia and ATI blurred the line with a few of their patents when they started doing their GPUs:
SGI, Nvidia Bury the Hatchet, Agree to Share Patents - Computer Business Review (cbronline.com)
SGI v. AMD: Chief Judge Rader on Claim Construction | Patently-O
Even Apple, the special snowflake that's not that big into AAA gaming, ignored the 3D-screen fad, and didn't jump into VR as fast as Google and everyone else, is already interested in RT. It's interesting because Apple doesn't really do bleeding edge until something is actually usable in some kind of way. (Microsoft was first with Windows on ARM, but it wasn't ready at all; Apple was late, but did it the right way.)
Ray Tracing with Metal - WWDC 2019 - Videos - Apple Developer
Remember how smartphones were meant for professionals at first, how they were supposedly too expensive to ever become mainstream, and now every teenager has one? Tech is evolving, and what we do with it evolves as well. Eventually the entry point will become low enough, but with everything from phones to cars competing for chip manufacturing capacity, I don't know what will happen. Nvidia switched to a two-year release cycle, and with Intel joining the TSMC fun, things might become more and more complex.
But honestly, everyone here knows only too well how full of hot air you are. I mean... the big boss himself slapped you down for telling lies multiple times just under this single article, with style and gusto I might add. Comedy gold.
Cool story! But keep quoting, maybe you'll get those results that way. I'm not a fan of getting marketing force-fed to me so that eventually I'll believe nonsense. Apparently that is what some here do prefer, before seeing the actual numbers. To each their own ;) Maybe it signifies the overall level of conversation in here more than anything? We're looking at an Nvidia announcement specifying big performance gains, but with no numbers to back it up, and limited to an engine combined with an Nvidia card. So yes, comparisons matter.
The PhysX implementation that is non-proprietary is the CPU PhysX library, not the GPU one that accelerated pretty neat effects in a handful of games. In that sense, we have replacements now, but overall the push for good in-game physics engines is a very low priority altogether. Funny how PhysX is getting phased out regardless, don't you think? I also love your assumptions, because you apparently were really eager to make a hate post towards my person before actually thinking about it. I would love more and broader adoption of GPU PhysX, because the fact remains that what's left of it on CPU is extremely limited.
Oh btw, your link actually provides more support for the argument that PhysX is dead, the new library isn't based on it either:
"Chaos Physics is a Beta feature that is the light-weight physics simulation solution used in Fortnite"
'Ouch'... yeah you really burned shit bro, damn! Hope you feel good about it.
DLSS 2.0 is essentially a TAA derivative, with all the weaknesses of that approach (blur, loss of detail, terrible handling of small, quickly moving objects) and all of its strengths (improved lines, detail that builds up over time); a rough sketch of the shared reprojection idea follows at the end of this post.
Source:
www.anandtech.com/show/15648/nvidia-intros-dlss-20-adds-motion-vectors
Still worth remembering:
The DLSS 2.0 pass itself eats a good half of the performance bump gained from running at a lower resolution.
AMD has a wide range of CROSS-PLATFORM, CROSS-VENDOR tech with excellent results (FidelityFX CAS and checkerboard rendering in particular) that simply isn't hyped into oblivion by FUD lovers.
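To illustrate the TAA-derivative point, here is a minimal toy sketch of temporal reprojection with motion vectors, the mechanism TAA and DLSS 2.0 share according to the AnandTech article above. This is not NVIDIA's algorithm; everything here is an assumed, simplified illustration:

import numpy as np

def temporal_accumulate(current, history, motion, alpha=0.1):
    # current: (H, W) current frame rendered with few samples per pixel
    # history: (H, W) accumulated result from previous frames
    # motion:  (H, W, 2) per-pixel motion vectors in pixels (dy, dx)
    # alpha:   how much of the new frame is blended in each step
    h, w = current.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Reproject: look up where each pixel was in the previous frame.
    prev_y = np.clip(ys - np.rint(motion[..., 0]).astype(int), 0, h - 1)
    prev_x = np.clip(xs - np.rint(motion[..., 1]).astype(int), 0, w - 1)
    reprojected = history[prev_y, prev_x]
    # Exponential blend: history dominates, which is why detail builds up over
    # time on static content but fast-moving objects smear or ghost.
    return alpha * current + (1 - alpha) * reprojected

Per the article, DLSS 2.0 feeds the same kind of inputs (current frame, history, motion vectors) into a trained network instead of fixed blend heuristics, which is where its quality difference over plain TAA is supposed to come from.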
Still remember this:
DLSS 2.0 > anything AMD can offer, both image-quality and performance wise.
Death Stranding with DLSS 2.0 on looks better than native. But you tried.
Oh, a user with barely any posts is triggered about the lack of butt-kissing for terrifyingly overhyped NV tech, color me surprised... ;)
There is only one guy triggered here: the one who has gone so far as to check other accounts and bring up post counts as a way to validate his delusions. Imagine stooping so low. Thanks for the laughs.
Did you have a chance to watch "The Hobbit" at 60 fps?? A DISASTER. The whole movie effect was gone; it was like watching a live session on TV.
23.99 fps is the perfect format for movies, and I wouldn't want it any other way. For live TV, like F1, football, concerts, etc., yeah, give me 60 or even 120 fps.
It's a matter of what you're used to. Not all film is 24 Hz either, and The Hobbit was 48 Hz, wasn't it?
My experience, especially in cinema, is that the low frame rate causes lots of problems in fast motion; things feel like a slideshow sometimes. Does it feel better in some way? I really can't say that it does. The Hobbit was a strange exception to the rule, so it stood out, and taken as an example like that, I agree it wasn't pretty. But I also think higher frame rates were never explored for film simply because they mean a major cost increase, as you have many more frames to edit, and there is no apparent 'demand' for it. Which can also be explained as: we're used to it, so we don't know if it could be better at a higher FPS. But still, gaming tells us that higher frame rates are a direct win for motion clarity and immersion.
As much as preference is what it is, keep in mind our minds and eyes are very capable of getting comfortable with pretty shitty things; just look back at film itself... If you look at very old footage now, it's almost painful. We have adjusted.
Teraflops: a unit of computing speed equal to one million million (10^12) floating-point operations per second.
That number does not tell the whole story of a GPU's performance.
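As a quick illustration of where that headline number comes from and why it is only part of the story, here is a minimal sketch; the shader counts and clocks are the commonly listed spec-sheet figures and should be treated as approximate:

# Peak FP32 throughput is usually quoted as:
#   TFLOPS = shader count * boost clock (GHz) * 2
# (a fused multiply-add counts as two floating-point operations).
def peak_tflops(shaders, boost_ghz, ops_per_clock=2):
    return shaders * boost_ghz * ops_per_clock / 1000

print(peak_tflops(8704, 1.71))   # ~29.8 TFLOPS, RTX 3080-class spec-sheet numbers
print(peak_tflops(4352, 1.55))   # ~13.5 TFLOPS, RTX 2080 Ti-class spec-sheet numbers
# The first card is nowhere near ~2.2x faster than the second in actual games,
# which is exactly why the raw teraflops figure does not tell the whole story.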
So why is everybody stuck on how it's done?! As long as you don't have to rub two dicks together, who cares.
Even 1440p is plenty of pixels for most screens.
Still, the bush in the frequently shared Death Stranding "improvement" screenshot from sites overhyping the tech is very visibly blurred and has lost detail, while at the same time the long grass on the right is improved, as one would have expected from a TAA derivative. Shrug.
To be clear, this isn't a bad strategy when your hardware can't run native. It's just a little amusing to say it's as good as running native.