
Cyberpunk 2077: DLSS vs. FSR Comparison

Joined
Feb 20, 2022
Messages
175 (0.18/day)
System Name Custom Watercooled
Processor 10900k 5.1GHz SSE 5.0GHz AVX
Motherboard Asus Maximus XIII hero z590
Cooling XSPC Raystorm Pro, XSPC D5 Vario, EK Water Blocks EK-CoolStream XE 360 (Triple Fan) Radiator
Memory Team Group 8Pack RIPPED Edition TDPPD416G3600HC14CDC01 @ DDR4-4000 CL15 Dual Rank 4x8GB (32GB)
Video Card(s) KFA2 GeForce RTX 3080 Ti SG 1-Click OC 12GB LHR GDDR6X PCI-Express Graphics Card
Storage WD Blue SN550 1TB NVME M.2 2280 PCIe Gen3 Solid State Drive (WDS100T2B0C)
Display(s) LG 3D TV 32LW450U-ZB and Samsung U28D590
Case Full Tower Case
Audio Device(s) ROG SupremeFX 7.1 Surround Sound High Definition Audio CODEC ALC4082, ESS SABRE9018Q2C DAC/AMP
Power Supply Corsair AX1000 Titanium 80 Plus Titanium Power Supply
Mouse Logitech G502SE
Keyboard Logitech Y-BP62a
Software Windows 11 Pro
Benchmark Scores https://valid.x86.fr/2rdbdl https://www.3dmark.com/spy/27927340 https://ibb.co/YjQFw5t
This thread is a weird place to ask about this, but nonetheless, I'll say that I'd lean toward the 3080 12GB. In terms of raw performance, as discussed, they are very closely matched. What gives Nvidia the edge is its additional features. Nvidia has superior hardware encoding support, superior ray tracing support, and DLSS support. Even if you don't want to turn ray tracing on in every game, it's a nice value add for when you do want to use it. And while FSR 2.0 is coming and may be competitive with DLSS, it's less mature and fewer games will support it, at least at first.

The one point I'd give in AMD's favor is its lower power consumption. If power efficiency/heat management is important to you, then maybe go with AMD. But even in that case, I think you should try to find a way to make the 3080 12GB work instead (Ampere cards undervolt well, and you can keep the 3080 12GB's power consumption under 300W with a good VF curve).
An overclocked 5800X system at 5 GHz with a 6900 XT (drawing 380 watts in in-game benchmarks) is beaten by my 10900K and a stock 3080 Ti with a 380-watt power limit and an in-game power draw of 350 watts, in games like Shadow of the Tomb Raider at 4K and Cyberpunk 2077 at 4K. Example benchmark video.

Shadow of the Tomb Raider, same settings (4K highest with TAA): the 10900K with the 3080 Ti at a 380-watt power limit hits 102 FPS with no overclock, while the 6900 XT hits 99 FPS with a massive overclock.
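Quick back-of-the-envelope perf-per-watt from those two runs, using the in-game wattage readings quoted above (rough arithmetic, not a controlled efficiency test):

```python
# Rough perf-per-watt arithmetic for the SOTTR 4K run above, using the
# in-game wattage readings quoted. Illustrative only, not a controlled test.
cards = {
    "RTX 3080 Ti (380 W limit, stock clocks)": (102, 350),  # (FPS, watts observed)
    "RX 6900 XT (heavy overclock)": (99, 380),
}

for name, (fps, watts) in cards.items():
    print(f"{name}: {fps / watts:.3f} FPS/W")
# RTX 3080 Ti: ~0.291 FPS/W, 6900 XT: ~0.261 FPS/W
```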

The video uses custom settings in Cyberpunk 2077, but in Cyberpunk 2077 at 4K Ultra with no custom settings, the stock RTX 3080 Ti is still faster than the massively overclocked 6900 XT (380 watts power draw) at the end of the benchmark.

This raises the question: is the 6900 XT more power efficient than the 3090 Ti? Top AMD card versus top Nvidia card? It turns out a 3090 Ti could be faster at 300 watts than a 6900 XT.

If we look at Igor's 4K test results, we can see that the stock Suprim 3090 Ti scores an average of 107.4 FPS over the entire test suite. That drops to 96.3 FPS for the 300W tuned 3090 Ti. Interestingly, the card still beats out the 6900XT and is well ahead of the RTX 3080 and 3080 12GB.

Meanwhile the power consumption readings are even more interesting! The default 3090 Ti records an average power consumption of 465.7W, while the 300W limited card delivers a 313.8W reading. That's a huge difference and it proves just how aggressive Nvidia was in order to ensure the 3090 Ti retained the outright performance crown. Source
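A quick sanity check of those quoted figures, worked out as percentages and FPS per watt (just arithmetic on Igor's published averages):

```python
# Arithmetic on Igor's published averages quoted above: stock Suprim 3090 Ti
# vs. the same card tuned to a 300 W limit.
stock_fps, stock_w = 107.4, 465.7
tuned_fps, tuned_w = 96.3, 313.8

print(f"Performance loss: {1 - tuned_fps / stock_fps:.1%}")   # ~10.3%
print(f"Power reduction:  {1 - tuned_w / stock_w:.1%}")       # ~32.6%
print(f"Stock efficiency: {stock_fps / stock_w:.3f} FPS/W")   # ~0.231
print(f"Tuned efficiency: {tuned_fps / tuned_w:.3f} FPS/W")   # ~0.307
```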
 

Attachments

  • SOTTR 4k highest TAA.jpg
  • Cyberpunk 2077.jpg
Joined
Oct 12, 2005
Messages
703 (0.10/day)
The efficiency curve is not linear. If you compare one product with another and limit the power it was designed to receive, it automatically becomes more efficient.

At the same time, if you take another product and push it beyond its designed power usage, it will become less efficient. We cannot really declare that X is better than Y using one or two games, by tweaking just one card, or by aiming for an arbitrary power consumption.

This is more an attempt to confirm a bias than anything else.
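To illustrate that point, here are two completely invented scaling curves (not measurements of any real GPU): the perf ranking flips depending on where you cap the power, which is why a single arbitrary wattage can't crown an architecture.

```python
import math

# Two invented scaling curves, NOT real GPU data: card A saturates early,
# card B keeps scaling with power. The winner changes with the power cap.
def fps_card_a(watts):
    return 120 * (1 - math.exp(-watts / 180))

def fps_card_b(watts):
    return 140 * (1 - math.exp(-watts / 260))

for limit in (200, 250, 300, 350, 400):
    a, b = fps_card_a(limit), fps_card_b(limit)
    print(f"{limit} W: A {a:5.1f} FPS vs B {b:5.1f} FPS -> {'A' if a > b else 'B'} wins")
```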
 
Joined
Feb 20, 2022
Messages
175 (0.18/day)
System Name Custom Watercooled
Processor 10900k 5.1GHz SSE 5.0GHz AVX
Motherboard Asus Maximus XIII hero z590
Cooling XSPC Raystorm Pro, XSPC D5 Vario, EK Water Blocks EK-CoolStream XE 360 (Triple Fan) Radiator
Memory Team Group 8Pack RIPPED Edition TDPPD416G3600HC14CDC01 @ DDR4-4000 CL15 Dual Rank 4x8GB (32GB)
Video Card(s) KFA2 GeForce RTX 3080 Ti SG 1-Click OC 12GB LHR GDDR6X PCI-Express Graphics Card
Storage WD Blue SN550 1TB NVME M.2 2280 PCIe Gen3 Solid State Drive (WDS100T2B0C)
Display(s) LG 3D TV 32LW450U-ZB and Samsung U28D590
Case Full Tower Case
Audio Device(s) ROG SupremeFX 7.1 Surround Sound High Definition Audio CODEC ALC4082, ESS SABRE9018Q2C DAC/AMP
Power Supply Corsair AX1000 Titanium 80 Plus Titanium Power Supply
Mouse Logitech G502SE
Keyboard Logitech Y-BP62a
Software Windows 11 Pro
Benchmark Scores https://valid.x86.fr/2rdbdl https://www.3dmark.com/spy/27927340 https://ibb.co/YjQFw5t
The efficiency curve is not linear. If you compare one product with another and limit the power it was designed to receive, it automatically becomes more efficient.

At the same time, if you take another product and push it beyond its designed power usage, it will become less efficient. We cannot really declare that X is better than Y using one or two games, by tweaking just one card, or by aiming for an arbitrary power consumption.

This is more an attempt to confirm a bias than anything else.
You can 100% set both cards to the same power limit and compare performance. Logically it's not a problem, and the conclusion is valid. For example, run both the Nvidia and AMD cards at 300 watts; that's basically stock for the 6900 XT. At 300 watts a 3090 Ti outperforms a 6900 XT. The conclusion, if it held across a decent sample size, would be logically valid.
 
Joined
Oct 12, 2005
Messages
703 (0.10/day)
You can 100% set both cards to the same power limit and compare performance. Logically it's not a problem, and the conclusion is valid. For example, run both the Nvidia and AMD cards at 300 watts; that's basically stock for the 6900 XT. At 300 watts a 3090 Ti outperforms a 6900 XT. The conclusion, if it held across a decent sample size, would be logically valid.

No it's not, as the efficiency curve with voltage is not the same on both cards. So a test at 300 W could have one card as the winner, at 250 W it could be the other, and it could be different again at 400 W.

Also yes, a decent sample size of 20+ games, with a set of games that favors each architecture, would be better. Running it at multiple power limits, plus letting it run at stock (the most important test of all), would give people a decent sample from which to draw a conclusion, as the sketch after this post shows.

It also needs more cards, as results vary from sample to sample and model to model. A card that runs cooler thanks to more efficient cooling will have less leakage, which improves efficiency too.
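A minimal sketch of that kind of sweep, assuming a hypothetical benchmark command ("run_benchmark.exe --scene foo" is a placeholder, not a real tool); setting limits with nvidia-smi -pl needs admin rights and only works within the range the vendor BIOS exposes:

```python
import subprocess

# Run the benchmark at stock first, then at a few power limits.
POWER_LIMITS_W = [None, 250, 300, 350]   # None = stock, run before any -pl call

def run_benchmark():
    # Placeholder command: replace with your real benchmark and parse its output.
    result = subprocess.run(["run_benchmark.exe", "--scene", "foo"],
                            capture_output=True, text=True, check=True)
    return result.stdout

for limit in POWER_LIMITS_W:
    if limit is not None:
        subprocess.run(["nvidia-smi", "-pl", str(limit)], check=True)
    label = "stock" if limit is None else f"{limit} W"
    print(f"--- {label} ---")
    print(run_benchmark())
```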
 
Joined
Feb 20, 2022
Messages
175 (0.18/day)
System Name Custom Watercooled
Processor 10900k 5.1GHz SSE 5.0GHz AVX
Motherboard Asus Maximus XIII hero z590
Cooling XSPC Raystorm Pro, XSPC D5 Vario, EK Water Blocks EK-CoolStream XE 360 (Triple Fan) Radiator
Memory Team Group 8Pack RIPPED Edition TDPPD416G3600HC14CDC01 @ DDR4-4000 CL15 Dual Rank 4x8GB (32GB)
Video Card(s) KFA2 GeForce RTX 3080 Ti SG 1-Click OC 12GB LHR GDDR6X PCI-Express Graphics Card
Storage WD Blue SN550 1TB NVME M.2 2280 PCIe Gen3 Solid State Drive (WDS100T2B0C)
Display(s) LG 3D TV 32LW450U-ZB and Samsung U28D590
Case Full Tower Case
Audio Device(s) ROG SupremeFX 7.1 Surround Sound High Definition Audio CODEC ALC4082, ESS SABRE9018Q2C DAC/AMP
Power Supply Corsair AX1000 Titanium 80 Plus Titanium Power Supply
Mouse Logitech G502SE
Keyboard Logitech Y-BP62a
Software Windows 11 Pro
Benchmark Scores https://valid.x86.fr/2rdbdl https://www.3dmark.com/spy/27927340 https://ibb.co/YjQFw5t
No it's not, as the efficiency curve with voltage is not the same on both cards. So a test at 300 W could have one card as the winner, at 250 W it could be the other, and it could be different again at 400 W.

Also yes, a decent sample size of 20+ games, with a set of games that favors each architecture, would be better. Running it at multiple power limits, plus letting it run at stock (the most important test of all), would give people a decent sample from which to draw a conclusion.

It also needs more cards, as results vary from sample to sample and model to model. A card that runs cooler thanks to more efficient cooling will have less leakage, which improves efficiency too.
The RTX 3090 Ti wins at stock, which is around 450 watts (465.7 W measured in Igor's test). The 6900 XT is 300 watts stock. All he did was retune the RTX 3090 Ti for 300 watts, and that's fine.

This time I not only narrowed down the power limit, but also adjusted the VF curve. I used the curve of the NVIDIA RTX A6000 as a blueprint, which is also slowed down with a power limit of 300 watts and has the same, uncut GA102. source
Basically he recreated the VF curve of the A6000, which uses the same die. That curve corresponds to the A6000's 300-watt power limit. Even so, he had to lower the maximum clock.

The frequency setting was not that easy, because there were always instabilities and I had to lower the clock to a maximum of 2050 MHz.

So this is the RTX 3090 Ti as if it had been designed for a 300-watt power target, like the A6000, or as close as possible. Many people on YouTube are benchmarking 6900 XTs with a power draw of 380 watts in the video stats. Many RTX 3080 Tis have a maximum power limit of 380-400 watts, with only the best models reaching a maximum of 450 watts. This is really just about curiosity.
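For intuition on why the clock had to come down to fit 300 watts: dynamic power scales roughly with frequency times voltage squared, so the last few hundred MHz are disproportionately expensive. The voltage/frequency points below are illustrative guesses, not the real GA102 or A6000 curve:

```python
# Back-of-the-envelope dynamic power model: P is roughly proportional to
# frequency * voltage^2. Voltages here are illustrative, not actual VF data.
def relative_power(freq_mhz, volts, ref_freq=1860.0, ref_volts=1.05):
    return (freq_mhz / ref_freq) * (volts / ref_volts) ** 2

print(relative_power(1860, 1.050))   # 1.00 -> baseline boost point
print(relative_power(2050, 0.900))   # ~0.81 -> more clock can still cost less if voltage drops
print(relative_power(2150, 1.000))   # ~1.05 -> chasing the last 100 MHz gets expensive fast
```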

The result is valid, but this is a sample size of one card and a small set of games. Those are the real logical issues with any conclusion drawn from these results. Few people will do this to a 3090 Ti.
 
Joined
Dec 30, 2021
Messages
384 (0.37/day)
I mean, you're comparing an overclocked 6900 XT against a stock 3080 Ti. Of course the overclocked card is going to be less power efficient than the card under stock settings—that's how overclocking works. You trade a large amount of power consumption for a small performance gain.

And then you're comparing an undervolted 3090 Ti against a stock 6900 XT. Again, not a like-for-like comparison. Has anyone compared an undervolted 3090 Ti against an undervolted 6900 XT?
 
Joined
Feb 20, 2022
Messages
175 (0.18/day)
System Name Custom Watercooled
Processor 10900k 5.1GHz SSE 5.0GHz AVX
Motherboard Asus Maximus XIII hero z590
Cooling XSPC Raystorm Pro, XSPC D5 Vario, EK Water Blocks EK-CoolStream XE 360 (Triple Fan) Radiator
Memory Team Group 8Pack RIPPED Edition TDPPD416G3600HC14CDC01 @ DDR4-4000 CL15 Dual Rank 4x8GB (32GB)
Video Card(s) KFA2 GeForce RTX 3080 Ti SG 1-Click OC 12GB LHR GDDR6X PCI-Express Graphics Card
Storage WD Blue SN550 1TB NVME M.2 2280 PCIe Gen3 Solid State Drive (WDS100T2B0C)
Display(s) LG 3D TV 32LW450U-ZB and Samsung U28D590
Case Full Tower Case
Audio Device(s) ROG SupremeFX 7.1 Surround Sound High Definition Audio CODEC ALC4082, ESS SABRE9018Q2C DAC/AMP
Power Supply Corsair AX1000 Titanium 80 Plus Titanium Power Supply
Mouse Logitech G502SE
Keyboard Logitech Y-BP62a
Software Windows 11 Pro
Benchmark Scores https://valid.x86.fr/2rdbdl https://www.3dmark.com/spy/27927340 https://ibb.co/YjQFw5t
I mean, you're comparing an overclocked 6900 XT against a stock 3080 Ti. Of course the overclocked card is going to be less power efficient than the card under stock settings—that's how overclocking works. You trade a large amount of power consumption for a small performance gain.

And then you're comparing an undervolted 3090 Ti against a stock 6900 XT. Again, not a like-for-like comparison. Has anyone compared an undervolted 3090 Ti against an undervolted 6900 XT?
Again, my 3080 Ti is also power limited to the same power draw as the overclocked 6900 XT: the 6900 XT draws a maximum of 380 watts in the stats, the 3080 Ti 350 watts. In Igor's test, the power-limited 3090 Ti and the stock 6900 XT are both at 300 watts. The goal is to see how the RTX 3090 Ti performs at 300 watts. If you want apples to apples, then reduce the 6900 XT's power draw by the same percentage (300 watts is roughly two-thirds of the RTX 3090 Ti's stock power draw) and see what happens. There will be a performance regression, just like with the 3090 Ti, but the result will be poor regardless of what you do to the 6900 XT.
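For reference, the same-percentage cut worked out from Igor's measured 465.7 W stock draw; the resulting 6900 XT target is only rough arithmetic:

```python
# The same-percentage cut, based on Igor's measured 465.7 W stock draw for the
# 3090 Ti and the 6900 XT's 300 W stock board power. Rough arithmetic only.
fraction = 300 / 465.7                    # ~0.64 of stock
equiv_6900xt_limit = 300 * fraction
print(f"{fraction:.0%} of stock -> comparable 6900 XT limit ~{equiv_6900xt_limit:.0f} W")
# 64% of stock -> comparable 6900 XT limit ~193 W
```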
 
Joined
Oct 12, 2005
Messages
703 (0.10/day)
Again, my 3080 Ti is also power limited to the same power draw as the overclocked 6900 XT: the 6900 XT draws a maximum of 380 watts in the stats, the 3080 Ti 350 watts. In Igor's test, the power-limited 3090 Ti and the stock 6900 XT are both at 300 watts. The goal is to see how the RTX 3090 Ti performs at 300 watts. If you want apples to apples, then reduce the 6900 XT's power draw by the same percentage (300 watts is roughly two-thirds of the RTX 3090 Ti's stock power draw) and see what happens. There will be a performance regression, just like with the 3090 Ti, but the result will be poor regardless of what you do to the 6900 XT.

The main reason most sites don't do this kind of analysis is that to get something reliable you need a large sample size, meaning a lot of tests. They just test at stock settings because that should remain comparable between two cards.

Otherwise it's just tuning and you enter the realm of the silicon lottery. Not all chips can be undervolted the same way, and some will run higher frequencies at a lower voltage than others.

To really get good data on both companies at non-stock settings, we would have to test a significant portion of chips to get meaningful data. We could then see whether some chips are better on average than others.
 
Joined
Feb 20, 2022
Messages
175 (0.18/day)
System Name Custom Watercooled
Processor 10900k 5.1GHz SSE 5.0GHz AVX
Motherboard Asus Maximus XIII hero z590
Cooling XSPC Raystorm Pro, XSPC D5 Vario, EK Water Blocks EK-CoolStream XE 360 (Triple Fan) Radiator
Memory Team Group 8Pack RIPPED Edition TDPPD416G3600HC14CDC01 @ DDR4-4000 CL15 Dual Rank 4x8GB (32GB)
Video Card(s) KFA2 GeForce RTX 3080 Ti SG 1-Click OC 12GB LHR GDDR6X PCI-Express Graphics Card
Storage WD Blue SN550 1TB NVME M.2 2280 PCIe Gen3 Solid State Drive (WDS100T2B0C)
Display(s) LG 3D TV 32LW450U-ZB and Samsung U28D590
Case Full Tower Case
Audio Device(s) ROG SupremeFX 7.1 Surround Sound High Definition Audio CODEC ALC4082, ESS SABRE9018Q2C DAC/AMP
Power Supply Corsair AX1000 Titanium 80 Plus Titanium Power Supply
Mouse Logitech G502SE
Keyboard Logitech Y-BP62a
Software Windows 11 Pro
Benchmark Scores https://valid.x86.fr/2rdbdl https://www.3dmark.com/spy/27927340 https://ibb.co/YjQFw5t
The main reason most sites don't do this kind of analysis is that to get something reliable you need a large sample size, meaning a lot of tests. They just test at stock settings because that should remain comparable between two cards.

Otherwise it's just tuning and you enter the realm of the silicon lottery. Not all chips can be undervolted the same way, and some will run higher frequencies at a lower voltage than others.

To really get good data on both companies at non-stock settings, we would have to test a significant portion of chips to get meaningful data. We could then see whether some chips are better on average than others.
As the 6900 XT is at stock, there are no issues there. The RTX 3090 Ti is already a very powerful card compared to the 6900 XT; it's still going to perform well after the performance regression. What the article does imply is that Nvidia is pushing the card well past its efficiency sweet spot: it takes roughly 50% more power for about 10% extra performance. As the source states,

Proof that the last 10% of performance comes at a huge efficiency cost.
If a drop from 450W to 300W costs you about 10% worth of performance at 66% of the power consumption, it's likely there's a very good sweet spot in there, perhaps 375W for a 5% loss. That's a level many would be willing to accept, especially in the summer months! Source

There is enough evidence to suggest that these cards are pushed to their power limits for a small performance gain and a high price premium.
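Taking the source's numbers at face value, a crude linear interpolation lands on the same ~5% figure for 375 W; the real frequency/voltage curve isn't linear, so treat this as a sanity check of the quote rather than a prediction:

```python
# Crude linear interpolation of the source's sweet-spot guess.
stock_w, limited_w = 450, 300
loss_at_limit = 0.10          # ~10% performance loss at 300 W, per the source

def estimated_loss(target_w):
    return loss_at_limit * (stock_w - target_w) / (stock_w - limited_w)

print(f"Estimated loss at 375 W: {estimated_loss(375):.0%}")   # ~5%, matching the quote
```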
 
Joined
Nov 30, 2020
Messages
47 (0.03/day)
Does anyone see a difference between DLSS Quality and Performance? They look practically identical to me, both in-game and in this post.
 
Joined
Apr 14, 2022
Messages
739 (0.79/day)
Location
London, UK
Processor AMD Ryzen 7 5800X3D
Motherboard ASUS B550M-Plus WiFi II
Cooling Noctua U12A chromax.black
Memory Corsair Vengeance 32GB 3600Mhz
Video Card(s) Palit RTX 4080 GameRock OC
Storage Samsung 970 Evo Plus 1TB + 980 Pro 2TB
Display(s) Acer Nitro XV271UM3B IPS 180Hz
Case Asus Prime AP201
Audio Device(s) Creative Gigaworks - Razer Blackshark V2 Pro
Power Supply Corsair SF750
Mouse Razer Viper
Keyboard Asus ROG Falchion
Software Windows 11 64bit
Yes.
I cannot see a difference between balanced, quality and ultra quality.
 
Joined
Sep 10, 2018
Messages
6,843 (3.04/day)
Location
California
System Name His & Hers
Processor R7 5800X/ R7 7950X3D Stock
Motherboard X670E Aorus Pro X/ROG Crosshair VIII Hero
Cooling Corsair h150 elite/ Corsair h115i Platinum
Memory Trident Z5 Neo 6000/ 32 GB 3200 CL14 @3800 CL16 Team T Force Nighthawk
Video Card(s) Evga FTW 3 Ultra 3080ti/ Gigabyte Gaming OC 4090
Storage lots of SSD.
Display(s) A whole bunch OLED, VA, IPS.....
Case 011 Dynamic XL/ Phanteks Evolv X
Audio Device(s) Arctis Pro + gaming Dac/ Corsair sp 2500/ Logitech G560/Samsung Q990B
Power Supply Seasonic Ultra Prime Titanium 1000w/850w
Mouse Logitech G502 Lightspeed/ Logitech G Pro Hero.
Keyboard Logitech - G915 LIGHTSPEED / Logitech G Pro
Does anyone see a difference between DLSS Quality and Performance? They look practically identical to me, both in-game and in this post.
Yes.
I cannot see a difference between balanced, quality and ultra quality.

The biggest differences are in fine detail like palm tree leaves and power lines. On a 65-inch display, Balanced and Quality are easy to tell apart even with the latest DLL. I personally don't like anything below Quality in any game, but that could be down to the screen size I use for gaming.
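For context on why fine detail gives it away: each preset renders internally at a lower resolution before upscaling. The per-axis scale factors below are the commonly cited DLSS 2 values; individual games can deviate slightly.

```python
# Internal render resolutions at 4K output for each DLSS preset, using the
# commonly cited DLSS 2 per-axis scale factors (games may deviate slightly).
OUTPUT_W, OUTPUT_H = 3840, 2160
SCALE = {
    "Quality": 0.667,
    "Balanced": 0.580,
    "Performance": 0.500,
    "Ultra Performance": 0.333,
}

for mode, s in SCALE.items():
    print(f"{mode:>17}: ~{round(OUTPUT_W * s)}x{round(OUTPUT_H * s)}")
# Quality ~2561x1441, Balanced ~2227x1253, Performance 1920x1080,
# Ultra Performance ~1279x719 - thin geometry like power lines simply has
# fewer source pixels at the lower presets.
```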
 
Joined
Nov 30, 2020
Messages
47 (0.03/day)
The biggest differences are in fine detail like palm tree leaves and power lines. On a 65-inch display, Balanced and Quality are easy to tell apart even with the latest DLL. I personally don't like anything below Quality in any game, but that could be down to the screen size I use for gaming.
I'm using a Sony 65" OLED and a 4090, and I just can't see a difference.
 
Joined
Sep 10, 2018
Messages
6,843 (3.04/day)
Location
California
System Name His & Hers
Processor R7 5800X/ R7 7950X3D Stock
Motherboard X670E Aorus Pro X/ROG Crosshair VIII Hero
Cooling Corsair h150 elite/ Corsair h115i Platinum
Memory Trident Z5 Neo 6000/ 32 GB 3200 CL14 @3800 CL16 Team T Force Nighthawk
Video Card(s) Evga FTW 3 Ultra 3080ti/ Gigabyte Gaming OC 4090
Storage lots of SSD.
Display(s) A whole bunch OLED, VA, IPS.....
Case 011 Dynamic XL/ Phanteks Evolv X
Audio Device(s) Arctis Pro + gaming Dac/ Corsair sp 2500/ Logitech G560/Samsung Q990B
Power Supply Seasonic Ultra Prime Titanium 1000w/850w
Mouse Logitech G502 Lightspeed/ Logitech G Pro Hero.
Keyboard Logitech - G915 LIGHTSPEED / Logitech G Pro
I'm using a Sony 65" OLED and a 4090, and I just can't see a difference.

That's a nice bonus for you then, not seeing the image degradation at lower DLSS settings. Free performance boost.
 