Tuesday, October 16th 2018

NVIDIA Releases Comparison Benchmarks for DLSS-Accelerated 4K Rendering

NVIDIA released comparison benchmarks for its new AI-accelerated DLSS technology, one of the claims to fame of its new Turing architecture. Using the Infiltrator benchmark with its stunning real-time graphics, NVIDIA showcased the performance benefits of DLSS-improved 4K rendering over the usual 4K rendering with TAA (Temporal Anti-Aliasing). Using a Core i9-7900X 3.3 GHz CPU paired with 16 GB of Corsair DDR4 memory, Windows 10 (v1803) 64-bit, and version 416.25 of the NVIDIA drivers, the company showed the tremendous performance improvements that can be achieved by pairing Turing's architectural strengths with DLSS's ability to put Tensor cores to work on more typical graphics processing workloads.

The results speak for themselves: with DLSS at 4K resolution, the upcoming NVIDIA RTX 2070 convincingly beats its previous-generation counterpart, doubling its performance. Under these particular conditions, the new king of the hill, the RTX 2080 Ti, convincingly beats the previous generation's halo product, the Titan Xp, with a 41% performance lead - but so does the new RTX 2070, which is being sold at half the asking price of the original Titan Xp.
Source: NVIDIA Blogs

43 Comments on NVIDIA Releases Comparison Benchmarks for DLSS-Accelerated 4K Rendering

#26
bug
HTC: If it were, current games would already be benefiting from it and yet they're not.
That doesn't automatically mean it's hard to implement. It could be because of the non-existent installed base of hardware to run it. The eternal chicken-and-egg problem with every new addition.

SotTR devs didn't even have enough time to implement RT in all the scenes in the demo (game will/did launch without RT, to be added in a later patch).
#27
HTC
bug: That doesn't automatically mean it's hard to implement. It could be because of the non-existent installed base of hardware to run it. The eternal chicken-and-egg problem with every new addition.

SotTR devs didn't even have enough time to implement RT in all the scenes in the demo (game will/did launch without RT, to be added in a later patch).
No way: only an nVidia 2070+ card is required for it, as far as we've been told. What other base of hardware are you talking about?

But but but it's supposed to be so easy to implement ... :rolleyes:
#28
bug
HTC: No way: only an nVidia 2070+ card is required for it, as far as we've been told. What other base of hardware are you talking about?

But but but it's supposed to be so easy to implement ... :rolleyes:
How many users are currently running at least an RTX 2070 to make it worth the devs' effort?

And there are upcoming titles making use of DLSS, make no mistake about that: nvidianews.nvidia.com/news/nvidia-rtx-platform-brings-real-time-ray-tracing-and-ai-to-barrage-of-blockbuster-games
SotTR is one of them.
Already released titles won't do it because they've already got your money.
#29
HTC
bug: How many users are currently running at least an RTX 2070 to make it worth the devs' effort?

And there are upcoming titles making use of DLSS, make no mistake about that: nvidianews.nvidia.com/news/nvidia-rtx-platform-brings-real-time-ray-tracing-and-ai-to-barrage-of-blockbuster-games
SotTR is one of them.
Already released titles won't do it because they've already got your money.
Exactly why I highly doubt we'll be seeing ray tracing take off in this generation.

Mantle was in the exact same position and look how that turned out ...

We can talk all we want about upcoming titles that are supposed to hit the market with the tech enabled, but until they actually ship and we can find out whether it's worth it from both the visual and the performance standpoints, it's all talk.

nVidia has the advantage of being in a stronger market position than AMD was at the time it introduced Mantle, and that can have a leverage effect.

We shall see ...
#30
bug
I'm not holding my breath for performance. It will suck. But the developers need to start somewhere, don't they?
#31
HTC
bug: I'm not holding my breath for performance. It will suck. But the developers need to start somewhere, don't they?
True, but I just think nVidia is trying too hard, in the sense that the difference is too pronounced, hence the powerful card required for it to work (2070+).

I clearly remember the presentation and the difference could be described as "night and day".

If they made "only" very noticeable improvements, the performance hit wouldn't be as big, so even lower 2000-series cards could work. That means developers wouldn't just be catering to enthusiasts but to the mainstream as well, and would therefore be far more inclined to enable the technology in their games from the get-go, and even to patch already-released games.
#32
bug
HTC: True, but I just think nVidia is trying too hard, in the sense that the difference is too pronounced, hence the powerful card required for it to work (2070+).

I clearly remember the presentation and the difference could be described as "night and day".

If they made "only" very noticeable improvements, the performance hit wouldn't be as big, so even lower 2000-series cards could work. That means developers wouldn't just be catering to enthusiasts but to the mainstream as well, and would therefore be far more inclined to enable the technology in their games from the get-go, and even to patch already-released games.
My feeling is, with the huge die their margins are probably already razor-thin, so besides getting the tech in the hands of enthusiasts, Nvidia doesn't actually want to sell many of these. But again, that's just my feeling.
#33
HTC
bug: My feeling is, with the huge die their margins are probably already razor-thin, so besides getting the tech in the hands of enthusiasts, Nvidia doesn't actually want to sell many of these. But again, that's just my feeling.
100% false, and no: I don't have any sources for that.

From what we know of nVidia, they don't sell for low profit: at least not huge-die chips.

But it's true that they are most likely having yield issues, due to the die's size. That's the problem with huge dies: just ask Intel about it, regarding high end server chips.

For example, the 2080 has a die size of 545 mm². If we take the square root, it's just under 23.35 mm, and we'll round that to 23.4 mm for this example.

If we plug that into a die-per-wafer calculator, assuming a low defect density for a mature process (which I'm not entirely sure of), we get less than a 60% yield rate, and that's before binning, because there are "normal" 2080s, FE 2080s and AIB 2080s. This adds to the cost as you said, but then there's "the nVidia tax", which inflates prices even more.
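
For anyone who wants to reproduce that ballpark figure, here is a minimal Python sketch of the usual dies-per-wafer approximation plus a simple Poisson yield model. The 300 mm wafer, the 0.1 defects/cm² figure, and the Poisson model itself are assumptions for illustration, not numbers from NVIDIA or TSMC; real yields depend on the actual defect density and on how partial dies are salvaged through binning.

```python
import math

# Assumptions for illustration only: 300 mm wafer, Poisson yield model,
# 0.1 defects/cm^2 as a stand-in for a "mature process" defect density.
DIE_AREA_MM2 = 545.0        # reported RTX 2080 (TU104) die size
WAFER_DIAMETER_MM = 300.0
DEFECT_DENSITY_CM2 = 0.1

def dies_per_wafer(die_area_mm2, wafer_diameter_mm):
    """Common approximation: gross wafer area over die area, minus an edge-loss term."""
    radius = wafer_diameter_mm / 2.0
    return (math.pi * radius ** 2 / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2))

def poisson_yield(die_area_mm2, defect_density_cm2):
    """Fraction of dies expected to have zero defects under a Poisson model."""
    return math.exp(-(die_area_mm2 / 100.0) * defect_density_cm2)

candidates = dies_per_wafer(DIE_AREA_MM2, WAFER_DIAMETER_MM)
yield_rate = poisson_yield(DIE_AREA_MM2, DEFECT_DENSITY_CM2)
print(f"~{candidates:.0f} die candidates per wafer, ~{yield_rate:.0%} yield, "
      f"~{candidates * yield_rate:.0f} good dies before binning")
```

With those assumed inputs, this prints roughly 101 die candidates per wafer and a yield just under 60%, which is in the same ballpark as the figure above.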
#34
WikiFM
HTC: If it were, current games would already be benefiting from it and yet they're not.

Heck: Shadow of the Tomb Raider doesn't have it, and this game was one of the few showcased with it enabled when Turing cards were released, during that presentation by nVidia's CEO. How long ago was that, exactly? If it were easy, as you claim, it should already be enabled, no?
bug: That doesn't automatically mean it's hard to implement. It could be because of the non-existent installed base of hardware to run it. The eternal chicken-and-egg problem with every new addition.

SotTR devs didn't even have enough time to implement RT in all the scenes in the demo (game will/did launch without RT, to be added in a later patch).
Since the first weeks after a new game's release are when most people buy and play it, I don't see any incentive for the developer to add RTX and/or DLSS to SotTR.
bug: My feeling is, with the huge die their margins are probably already razor-thin, so besides getting the tech in the hands of enthusiasts, Nvidia doesn't actually want to sell many of these. But again, that's just my feeling.
Actually, when margins are thin you increase production, since you need to sell more to achieve the projected profit; that's the low-margin, high-volume business model.
#35
xorbe
There was no public release of this 416.25 driver.
#36
Vya Domus
Allegedly, DLSS doesn't look much better than 1440p with TAA, which would make sense: at the end of the day, this is still just a scaling algorithm that takes a fully rendered frame from a lower-resolution source and upscales it to a 4K output. I feel like there is wasted potential here; using the Tensor cores for smarter sparse-rendering techniques would have proved more useful. Nvidia invested so much into this AI field that they now try to shove it into consumer products to get something out of it, whether it makes sense or not.
#37
HTC
Vya Domus: Allegedly, DLSS doesn't look much better than 1440p with TAA, which would make sense: at the end of the day, this is still just a scaling algorithm that takes a fully rendered frame from a lower-resolution source and upscales it to a 4K output. I feel like there is wasted potential here; using the Tensor cores for smarter sparse-rendering techniques would have proved more useful. Nvidia invested so much into this AI field that they now try to shove it into consumer products to get something out of it, whether it makes sense or not.
That's another thing, and I really have to wonder: had AMD come up with this method and implemented it first, would nVidia not cry foul?

You're not actually seeing images rendered at 4K but rather at a lower resolution, with the enhancements applied at that resolution via DLSS, and then upscaled to 4K. Is this not the same as watching a full HD clip in fullscreen, but with a game instead of a video, minus the enhancements part?

Kudos to nVidia for having come up with a way to do it in real time, but it's still cheating, IMO.
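
To make the distinction concrete, here is a minimal sketch of the naive version of that pipeline: take a frame rendered at 1440p and upscale it to a 4K output with a plain bicubic filter. The file names are placeholders and Pillow's bicubic resampling is only a stand-in; DLSS replaces that resampling step with a trained network running on the Tensor cores, which is where the "enhancements" come in.

```python
from PIL import Image

# Naive stand-in for the idea being discussed: the frame is rendered at a
# lower resolution and only then scaled up to 4K. DLSS swaps the bicubic
# resampling below for a neural network that infers extra detail.
# "frame_1440p.png" is a placeholder file name, not from the article.
frame = Image.open("frame_1440p.png")                 # 2560x1440 source frame
frame_4k = frame.resize((3840, 2160), Image.BICUBIC)  # upscale to 3840x2160
frame_4k.save("frame_4k_upscaled.png")
```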
#39
bug
WikiFM: Since the first weeks after a new game's release are when most people buy and play it, I don't see any incentive for the developer to add RTX and/or DLSS to SotTR.
SotTR has been announced to receive these in a patch, whether you see it or not.
WikiFM: Actually, when margins are thin you increase production, since you need to sell more to achieve the projected profit; that's the low-margin, high-volume business model.
That's a blanket statement that doesn't fit here. The margins are thin precisely because, when the chip is big, you have to throw away a big part of the wafer. Between that and everybody else fighting for 14nm production, good luck increasing production.
#40
Eric3988
DLSS and RTX are both promising, but until Nvidia can brag about how many games actually support them NOW, I don't think it really matters. No way can they convince me to break out the wallet on hardware that costs 50% more than it should for features that aren't benefiting me now.
#41
Prima.Vera
Looking at the current presentation slides, the quality is kind of sub-mediocre. Its quality is way, way worse than TSSAA 8x, for example, while being only marginally faster. And don't even get me started on SMAA, which has zero impact on performance and can be used on any generation of card from both vendors.
#42
WikiFM
bug: That's a blanket statement that doesn't fit here. The margins are thin precisely because, when the chip is big, you have to throw away a big part of the wafer. Between that and everybody else fighting for 14nm production, good luck increasing production.
That is why it is not good that your whole production relies on a single foundry, especially if it is the single foundry of others too.

Actually, what determines whether the IQ of DLSS is equivalent to X resolution? The user's eyes? A mathematical algorithm? The developer's graphics settings? NVIDIA's marketing team?
#43
bug
WikiFM: That is why it is not good that your whole production relies on a single foundry, especially if it is the single foundry of others too.
Now you just sound like a teen noob (that's not derogatory, I used to be one myself). You think fabs and production capacity in general grow on trees? You think having to source your stuff from multiple sources lowers your costs?
WikiFM: Actually, what determines whether the IQ of DLSS is equivalent to X resolution? The user's eyes? A mathematical algorithm? The developer's graphics settings? NVIDIA's marketing team?
That would primarily be a diff between the original and the anti-aliased image. The more you manage to stick to actual edges in the image and alter nothing else, the better the IQ.
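
As a rough illustration of that idea (a sketch of mine, not how DLSS quality is formally measured anywhere), you could split the per-pixel difference between the original and the anti-aliased frame into changes on detected edges versus changes everywhere else; the less the filter touches non-edge pixels, the better.

```python
import numpy as np
from scipy import ndimage

def aa_quality_report(original, antialiased, edge_threshold=0.1):
    """Split the frame difference into on-edge vs. off-edge changes.

    Both inputs are assumed to be grayscale frames as 2D uint8 arrays.
    Lower off-edge change = less unwanted blurring outside actual edges.
    """
    orig = original.astype(np.float64) / 255.0
    aa = antialiased.astype(np.float64) / 255.0
    diff = np.abs(aa - orig)                      # per-pixel change
    gx = ndimage.sobel(orig, axis=0)              # gradient (edge) detection
    gy = ndimage.sobel(orig, axis=1)
    edges = np.hypot(gx, gy) > edge_threshold     # boolean edge mask
    return {
        "on_edge_change": float(diff[edges].mean()) if edges.any() else 0.0,
        "off_edge_change": float(diff[~edges].mean()) if (~edges).any() else 0.0,
    }
```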