Raevenlord
News Editor
NVIDIA, in a blog post/Q&A on its DLSS technology, has promised implementation and image quality improvements for the Metro Exodus rendition of the technology. If you'll remember, AMD recently vouched for other, non-proprietary anti-aliasing techniques such as TAA and SMAA as ways of achieving the desired image quality across resolutions, saying that DLSS introduces "(...) image artefacts caused by the upscaling and harsh sharpening." In its blog post, NVIDIA dissects DLSS and its implementation, clarifying some lingering questions on the technology and its resolution limitations that some of us here at TPU had already wondered about.
The blog post describes some of the limitations in DLSS technology, and why exactly image quality issues might be popping up here and there in titles. As we knew from NVIDIA's initial RTX press briefing, DLSS basically works on top of an NVIDIA neural network, dubbed NGX, which processes millions of frames from a single game at varying resolutions with DLSS applied and compares them to a given "ground truth image" - the highest quality possible output, sans any shenanigans, generated from pure, raw processing power. The objective is to train the network towards generating this image without the performance cost. The resulting DLSS model is then made available for NVIDIA's client software to download and run locally on your RTX graphics card, which is why DLSS image quality can be improved over time. It also helps explain why closed implementations of the technology, such as 3DMark's Port Royal benchmark, show such impressive image quality compared to, say, Metro Exodus - there is a very, very limited number of frames the neural network needs to process to achieve the best image quality.
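To make that training idea a bit more concrete, here is a minimal, purely illustrative sketch in PyTorch of the general recipe: a small network is fed low-resolution frames and nudged towards a higher-resolution "ground truth" render. None of this is NVIDIA's actual NGX code - the network shape, the resolutions and the random stand-in frames are all assumptions for illustration.

```python
# Conceptual sketch only -- NOT NVIDIA's NGX pipeline. It illustrates the idea
# described above: a network sees lower-resolution renders and is trained to
# reproduce a "ground truth" frame rendered at much higher quality.
import torch
import torch.nn as nn

class UpscalerNet(nn.Module):
    """Tiny stand-in for a DLSS-style super-resolution network (2x upscale)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * 4, 3, padding=1),  # 4 = 2x2 upscale factor
            nn.PixelShuffle(2),                  # rearrange channels into a 2x larger image
        )

    def forward(self, low_res):
        return self.body(low_res)

model = UpscalerNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

# Stand-ins for game frames: a low-resolution render and the "ground truth"
# reference (the real pipeline would use millions of captured frames, not noise).
low_res_frame = torch.rand(1, 3, 270, 480)
ground_truth  = torch.rand(1, 3, 540, 960)

for step in range(10):                     # a handful of steps on stand-in data
    upscaled = model(low_res_frame)
    loss = loss_fn(upscaled, ground_truth)  # "how far from ground truth?"
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Only the trained weights would be shipped to users and evaluated locally
# on the GPU at playback time.
torch.save(model.state_dict(), "dlss_like_model.pt")
```

The important part is that last step: only the trained model travels to the user, which is why NVIDIA can improve results after a game's launch simply by shipping an updated model server-side.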
Forumites: This is an Editorial
The nature of DLSS means the network needs to be trained for every conceivable resolution, since different rendering resolutions require different processing for the image to resemble the ground truth we're looking for. This is the reason for Metro Exodus' DLSS limits - it's likely NVIDIA didn't actually choose not to enable it at 1080p with RTX off; rather, there simply wasn't enough processing time on its NGX cluster before launch to cover all of the most popular resolutions, with or without RTX, across the game's three available settings. So NVIDIA coupled the two features - enabling DLSS only alongside RTX, to allow the greatest image quality and performance improvements for gamers who want to use RTX effects - and didn't train the network for non-RTX scenarios.
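As a back-of-the-envelope illustration of why that coverage problem bites, consider that every combination of game, output resolution and RTX toggle would need its own trained model before the corresponding DLSS option can be exposed in-game. The sketch below is hypothetical - the resolutions and the set of "finished" combinations are made up - but it captures why some toggles end up greyed out at launch:

```python
# Hypothetical sketch (not NVIDIA's actual scheme) of why coverage explodes:
# each (game, resolution, RTX mode) combination needs its own trained model,
# and the DLSS toggle can only be offered where such a model exists.
from itertools import product

RESOLUTIONS = ["1920x1080", "2560x1440", "3840x2160"]
RTX_MODES = ["rtx_on", "rtx_off"]

# Made-up set of combinations whose training finished before launch.
trained_models = {
    ("Metro Exodus", "2560x1440", "rtx_on"),
    ("Metro Exodus", "3840x2160", "rtx_on"),
}

def dlss_available(game: str, resolution: str, rtx_mode: str) -> bool:
    """The game can only expose DLSS where a trained model was shipped."""
    return (game, resolution, rtx_mode) in trained_models

for res, mode in product(RESOLUTIONS, RTX_MODES):
    state = "available" if dlss_available("Metro Exodus", res, mode) else "greyed out"
    print(f"{res:>9} / {mode}: DLSS {state}")
```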
This brings with it a whole lot of questions - how long exactly does NVIDIA's neural network take to train an entire game's worth of DLSS integration? For linear titles, this is likely a great technology - but apply it to an open-world setting (oh hey, like Metro Exodus) and it seems like an incredibly daunting task. NVIDIA had this to say in its blog post:
For Metro Exodus, we've got an update coming that improves DLSS sharpness and overall image quality across all resolutions that didn't make it into day of launch. We're also training DLSS on a larger cross section of the game, and once these updates are ready you will see another increase in quality. Lastly, we are looking into a few other reported issues, such as with HDR, and will update as soon as we have fixes.
So this not only speaks to NVIDIA recognizing that DLSS image quality isn't at the level it's supposed to be (which implies it can actually degrade image quality, giving further credence to AMD's remarks on the matter), but also confirms that they're constantly working on improving DLSS' performance and image quality - and, more interestingly, that this is something they can always change server-side. I'd question the sustainability of DLSS' usage, though; the number of DLSS-enabled games is low enough as it is - and yet NVIDIA seems to be having difficulty keeping up even when it comes to AAA releases. Imagine if DLSS picked up the way NVIDIA would like it to (or would they?) and expanded to most launched games. Looking at what we know, I don't even think that scenario of support would be possible - NVIDIA's neural network would be bottlenecked by all the processing time required for these games, their different rendering resolutions and RTX settings.
DLSS really is a very interesting technology that empowers every user's RTX graphics card with the power of the cloud, as NVIDIA said it would. However, there are certainly some quirks that require more processing time than they've been given, and there are limits to how much processing power NVIDIA can and will dedicate to each title. The fact that the network needs to be trained again and again for every new title out there might work in a controlled, NVIDIA-fed games development environment, but that's not the real world - especially not with an AMD-led console market. I'd question DLSS' longevity and success on these factors alone, whilst praising its technology and forward-thinking design immensely.
View at TechPowerUp Main Site