
NVIDIA GeForce RTX 4090 Founders Edition

Yeah, comparing RTX 3080 with RTX 2080 Ti:

At 1080p it's 18% faster
At 1440p it's 23% faster
At 4K it's 27.5% faster.

MSRP of RTX 2080 Ti was $1499. MSRP of RTX 3080 was $699.

But now we should be glad the RTX 4080 12GB barely matches the RTX 3090 Ti - so we'll be able to buy a 1100 EUR card that matches a 1200 EUR card - wow, much price / performance increase?

And in some cases I predict the RTX 4080 12GB will be much slower, and we'll see a 1100 EUR card barely match the 800 EUR RTX 3080.
 
@W1zzard we need Spider-Man Remastered and Overwatch in future benchmarks for the 7900 XT.. Borderlands and Witcher are old
Will definitely add Spiderman even though it's super CPU limited. Not sure about Overwatch due to its always-online design, also it runs 4896748956 FPS and is CPU limited, too. Will probably kick Borderlands 3, but Witcher 3 stays, because very important DX11 game
 
Will definitely add Spiderman even though it's super CPU limited. Not sure about Overwatch due to its always-online design, also it runs 4896748956 FPS and is CPU limited, too. Will probably kick Borderlands 3, but Witcher 3 stays, because very important DX11 game

Is it possible to add a review to compare different CPUs running with RTX 4090?
 
Is it possible to add a review to compare different CPUs running with RTX 4090?
Anything is possible ;)

5800X vs 12900K vs 7700X vs 13900K definitely sounds interesting, but that'll be A LOT of work
 
Yeah, comparing RTX 3080 with RTX 2080 Ti:

At 1080p it's 18% faster
At 1440p it's 23% faster
At 4K it's 27.5% faster.

MSRP of RTX 2080 Ti was $1499. MSRP of RTX 3080 was $699.

But now we should be glad the RTX 4080 12GB barely matches the RTX 3090 Ti - so we'll be able to buy a 1100 EUR card that matches a 1200 EUR card - wow, much price / performance increase?

And in some cases I predict the RTX 4080 12GB will be much slower, and we'll see a 1100 EUR card barely match the 800 EUR RTX 3080.
people pushing the narrative to be amazed at a generic generational performance leap and to be grateful for higher prices is really funny :)
 
DLAA is a real technology - it's NVIDIA's Deep Learning Anti-Aliasing, similar to DLSS but without rendering at a lower resolution and then upscaling. FG is frame generation.

DLAA is better than TAA, but doesn't offer the performance benefits of DLSS.
Yes, it was my mistake. I thought it should have been DLSS + FG, not DLAA + FG. :)
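
To make that distinction concrete, here's a rough Python sketch of the render-resolution difference. This is only an illustration: the 0.67 scale factor for DLSS Quality mode and the function itself are assumptions for the example, not how NVIDIA actually implements either technique.

```python
# Illustrative sketch only - DLSS/DLAA internals are proprietary; this just models
# the render-resolution difference described above. The 0.67 scale factor for
# DLSS Quality mode is an assumption for illustration.

def render_resolution(output_w, output_h, mode):
    """Internal render resolution before the AI anti-aliasing/upscaling pass."""
    scale = {"DLAA": 1.0, "DLSS_Quality": 0.67}[mode]
    return int(output_w * scale), int(output_h * scale)

print(render_resolution(3840, 2160, "DLAA"))          # (3840, 2160): full-res render, AA only, no perf gain
print(render_resolution(3840, 2160, "DLSS_Quality"))  # (2572, 1447): fewer pixels shaded, then upscaled
```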
 
AMD will have a REALLY tough time trying to come anywhere near this with the 7000 series, especially if their new CPUs are any indication, since they are literally less efficient than the previous gen:
You realize that the 7950X at 65W has the same performance as the 5950X at stock? AMD, Intel and nV have all opted to increase their power consumption drastically, well outside the window of optimal energy efficiency, just for a few percent more performance.
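
Just to illustrate that efficiency-window point, a quick sketch with made-up numbers - these are not real Cinebench or TPU figures, only the shape of the trade-off matters:

```python
# Hypothetical numbers only, to illustrate the "efficiency window" point above:
# the last few percent of performance costs a disproportionate amount of power.
configs = {
    "7950X @ 65W eco": {"score": 24000, "watts": 88},   # package power, hypothetical
    "7950X @ stock":   {"score": 26500, "watts": 230},
    "5950X @ stock":   {"score": 24500, "watts": 142},
}
for name, c in configs.items():
    print(f"{name:16s} {c['score'] / c['watts']:.0f} points/W")
# The eco-mode config gives up a few percent of score for a large perf/W gain.
```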
 
people pushing the narrative to be amazed at a generic generational performance leap and to be grateful for higher prices is really funny :)

That's the thing - it seems like the "generic generational performance leap" will be present only in the RTX 4090; even the RTX 4080 16GB seems too cut down to manage RTX 3080 + 50-70%, and that's a 1500 EUR card now!

Will we see the push of "but DLSS 3.0 does that, and more!", so who cares about rasterisation uplift? And have a Turing kind of release - perhaps even worse?
 
You realize that the 7950X at 65W has the same performance as the 5950X at stock? AMD, Intel and nV have all opted to increase their power consumption drastically, well outside the window of optimal energy efficiency, just for a few percent more performance.
Ahh, so now that the ball is in the other court, it's fine to compare performance at a certain, limited power and not only at stock? Back when 12900k was killing it in this metric, all that mattered was its "horrible stock consumption", hehe... And yes, I realize that, but 4090 just pushed the bar so high, there is no way 7000 series will have any hope of even coming within a class of its performance while still staying at least somewhat on the efficiency side of the curve.
 
there is no way 7000 series will have any hope of even coming within a class of its performance while still staying at least somewhat on the efficiency side of the curve.
Let's wait for reviews
 
And yes, I realize that, but 4090 just pushed the bar so high, there is no way 7000 series will have any hope of even coming within a class of its performance while still staying at least somewhat on the efficiency side of the curve.

That's not necessarily a given. Lower-end Ada cards are more severely cut down than was normal in the past - in terms of units and memory bandwidth - so will they be pushed harder to compensate, and thus operate less efficiently?

Very few people actually care about the RTX 4090. It's not a normal gaming card, no matter how much reviewers and YouTube influencers drool over its performance.
 
Will definitely add Spiderman even though it's super CPU limited. Not sure about Overwatch due to its always-online design, also it runs 4896748956 FPS and is CPU limited, too. Will probably kick Borderlands 3, but Witcher 3 stays, because very important DX11 game
Thanks a lot for your hard work!

I agree on Witcher 3; for me it's still a reference and one of the first games I look at in any GPU review. It's very balanced and its results are important!
 
there is no way 7000 series will have any hope of even coming within a class of its performance while still staying at least somewhat on the efficiency side of the curve.
people said the exact same thing back when 6000 series was about to be launched xD
 
Yeah, comparing RTX 3080 with RTX 2080 Ti

You're comparing a flagship card with a non-flagship card: try comparing it to the 2080 instead (I'm referring to launch day reviews).

Do that and then, when the 4080 releases, compare its % lead vs. the 3080 with the % lead of the 3080 vs. the 2080, and then factor in the prices of the cards.
 
people said the exact same thing back when 6000 series was about to be launched xD
No they didn't - at least those of us who can think didn't. Back then AMD had the superior process due to Nvidia favoring volume and going Samsung (which proved to be a great business move; they sold an order of magnitude more 3000 series than AMD did 6000). This time though, if anything, Nvidia will even have a small node edge (N4 vs N5) and it's not hard to predict the outcome. :cool:
 
This time though, if anything, Nvidia will even have a small node edge (N4 vs N5) and it's not hard to predict the outcome.
The difference in nodes could very well be in naming only and effectively the same when it comes to perf/watt/prices
 
Just like I was saying back in the 3000 vs 6000 efficiency debate, Nvidia on a cutting edge node is far above the rest, and it truly shows now:
[chart: energy-efficiency.png]

And that's with the top-tier, no-holds-barred card, designed for maximum performance. Optimize it with a lower power limit and some undervolting and you get something that's leagues beyond anything else, which is clearly shown by the ridiculously low "60 Hz V-Sync" consumption.
[chart: power-vsync.png]

AMD will have a REALLY tough time trying to come anywhere near this with the 7000 series, especially if their new CPUs are any indication, since they are literally less efficient than the previous gen:
[chart: efficiency-multithread.png]
The 4090 is quite clearly CPU limited in the TPU efficiency test scenario (not as hard as at 1080p, but with an average that close, it's CPU limited most of the time), rendering that comparison quite invalid, as the card is essentially running underclocked. Of course the performance is bonkers nonetheless, and the UV/UC/power limiting potential for this card is HUGE (as Der8auer has demonstrated), but these results are not representative. @W1zzard needs to get on this and find another efficiency test scenario.

Edit: autocorrect. Also, W1zzard has tested the 4090, 3090 and 6900 XT at 2160p with some interesting results - the AMD card ends up at about the same relative efficiency, since it loses ground at the higher resolution, but the 3090 looks much better compared to the 4090.
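
For what it's worth, the "average that close" argument boils down to something like this crude check - not TPU's methodology, just a sketch, and the fps averages below are hypothetical:

```python
# Rough heuristic, not TPU's methodology: if a card's average fps at the test
# resolution sits within a few percent of its own 1080p average, the GPU is
# mostly waiting on the CPU. All fps numbers below are hypothetical.

def looks_cpu_limited(avg_fps_test_res, avg_fps_1080p, tolerance=0.05):
    return avg_fps_test_res >= avg_fps_1080p * (1.0 - tolerance)

print(looks_cpu_limited(262.0, 270.0))  # True  - averages nearly identical, likely CPU limited
print(looks_cpu_limited(180.0, 270.0))  # False - clear GPU-side scaling still visible
```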
 
"bullshit fake frames"

Tell me more...

After reading earlier pre-review speculation of a 4x increase in FPS with DLSS 3 enabled... I immediately fell into the "what if" pit of too-good-to-be-true skepticism (marketing gimmickry?). So kill the curiosity, tell me more!

Here are a couple of informative videos:

[embedded videos]

I don't expect you to watch all of that, though. The gist seems to be that DLSS 3 frames aren't quite "fake," but they are definitely "half-fake," or maybe even "three-quarters fake." Certainly NVIDIA's marketing around DLSS 3 trends towards fake. Why? Because the extra frames generated by DLSS 3 don't reduce input latency, at all, in contrast to normal extra framerate. In some cases DLSS 3 even makes latency marginally worse than it would be at a lower native framerate.
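
To put that latency point in concrete terms, here's a toy model. The 2x frame multiplier and the half-frame buffering penalty are simplifying assumptions for illustration, not measurements of DLSS 3 itself:

```python
# Toy model of the argument above, with simplifying assumptions (NOT a DLSS 3
# measurement): frame generation doubles the displayed frame rate, but input is
# only sampled on rendered frames, and interpolation has to hold the newest
# rendered frame back (assumed here to be ~half a rendered-frame interval).

def displayed_fps(rendered_fps, frame_generation):
    return rendered_fps * 2 if frame_generation else rendered_fps

def render_side_latency_ms(rendered_fps, frame_generation):
    frame_time = 1000.0 / rendered_fps
    buffering = 0.5 * frame_time if frame_generation else 0.0
    return frame_time + buffering  # ignores the rest of the input/display pipeline

for fg in (False, True):
    print(fg, displayed_fps(60, fg), round(render_side_latency_ms(60, fg), 1))
# False 60  16.7  -> native 60 fps
# True  120 25.0  -> looks like 120 fps, but responsiveness is no better (slightly worse)
```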

At first this didn't sound so bad to me, but it turns out that the use case for this tech is a pretty small niche. For example, if you're already at or near your monitor's max refresh rate, then DLSS 3 is wasted, because the screen can't convey the visual smoothness benefits. Likewise, if you're looking to push stratospheric FPS for competitive gaming, DLSS 3 is completely pointless.

On the other side of the spectrum, at lower FPS numbers DLSS 3's visual artifacting is more noticeable, so the extra frames come at a higher visual cost without providing any benefit in terms of responsiveness. Plus DLSS 3 disables V-Sync and FPS limiters by default, so there's tearing if you don't have Variable Refresh Rate or if you're below/above your monitor's thresholds for VRR. These factors limit DLSS 3's appeal as an FPS booster on lower-end or mid-range hardware.

So FWIW, Tim says this tech is best for people who fit the following criteria:

- They're already capable of running the game at roughly 100-120 FPS without DLSS 3;
- They're running a (VRR-capable) monitor with a refresh rate significantly higher than 100-120 Hz, and
- They're playing games that aren't especially latency sensitive (e.g. graphically impressive single player stuff, like Cyberpunk 2077)

I don't believe this is an especially large market. People expecting DLSS 3 to be anywhere near as impactful as DLSS 2 are destined for disappointment.

EDIT: Here's the companion article to the HUB video linked above, for those who are more text-inclined: https://www.techspot.com/article/2546-dlss-3/
 
In a dynamic scene, frame generation is basically useless, especially in high-FPS scenarios. I can't see how a predicted or AI-generated frame can ever be as accurate as the real scene!
You would probably need 10x the computational power and 1000x-10000x more AI training to get it working in an acceptable way - acceptable to me, at least.
 
The 4090 is quite clearly CPU limited in the TPU efficiency test scenario (not as hard as at 1080p, but with an average that close, it's CPU limited most of the time), rendering that comparison quite invalid, as the card is essentially running underclocked. Of course the performance is bonkers nonetheless, and the UV/UC/power limiting potential for this card is HUGE (as Der8auer has demonstrated), but these results are not representative. @W1zzard needs to get on this and find another efficiency test scenario.
I think increasing the test scene's resolution to UHD for cards faster than 3090 Ti will be enough to resolve that.
 
I think increasing the test scene's resolution to UHD for cards faster than 3090 Ti will be enough to resolve that.
Possibly, though it also skews inter-architectural comparisons as different architectures scale across resolutions differently. The ideal for a broadly representative test would be a demanding 1440p title that still scales to very high fps without becoming cpu limited.
 
Possibly, though it also skews inter-architectural comparisons as different architectures scale across resolutions differently. The ideal for a broadly representative test would be a demanding 1440p title that still scales to very high fps without becoming cpu limited.
Funnily enough, for all the talk of DX12 decreasing CPU bottlenecks, the one game that doesn't seem CPU limited at 1440p is The Witcher 3.

I also excluded all the games from TPU's test suite that are clearly CPU limited and got somewhat better speedups for the 4090: 53% and 73% over the 3090 Ti and the 3090 respectively at 4K. The games that I excluded are:

  • Battlefield V
  • Borderlands 3
  • Civilization VI
  • Divinity Original Sin II
  • Elden Ring
  • F1 22
  • Far Cry 6
  • Forza Horizon 5
  • Guardians of the Galaxy
  • Halo Infinite
  • Hitman 3
  • Watch Dogs Legion
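
A minimal sketch of the kind of filter-and-average calculation described above - the per-game fps values are placeholders rather than TPU's data, only the method is the point:

```python
from math import prod

# Sketch of the filter-and-average approach described above. The per-game fps
# values are placeholders, NOT TPU's actual results.
cpu_limited = {"Battlefield V", "Borderlands 3", "Civilization VI", "Elden Ring"}  # subset of the list above

results_4k = {                      # game: (RTX 4090 fps, RTX 3090 Ti fps) - hypothetical
    "Cyberpunk 2077": (72.0, 46.0),
    "Control":        (98.0, 63.0),
    "Elden Ring":     (59.9, 59.5), # frame-capped / CPU limited, would drag the average down
}

ratios = [fast / slow for game, (fast, slow) in results_4k.items() if game not in cpu_limited]
geomean = prod(ratios) ** (1.0 / len(ratios))
print(f"{(geomean - 1.0) * 100:.0f}% faster")  # ~56% with these made-up numbers
```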
 
I think increasing the test scene's resolution to UHD for cards faster than 3090 Ti will be enough to resolve that.
All cards have to run the same scene + resolution, because "efficiency" = "fps / power" .. and it has to be a game that's fair to both vendors .. and something popular .. leaning towards switching to Doom Eternal 4K for all cards .. even very old cards get decent fps there and don't fall off a cliff due to VRAM limits
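
In code form, that metric is simply the following - the fps and power figures here are hypothetical examples, not measured data:

```python
# Exactly the metric described above: efficiency = fps / power, measured on the
# same scene and resolution for every card. Numbers below are hypothetical.
def efficiency(avg_fps, avg_power_w):
    return avg_fps / avg_power_w   # higher is better

print(round(efficiency(140.0, 350.0), 3))  # 0.4   fps per watt - hypothetical card A
print(round(efficiency(110.0, 320.0), 3))  # 0.344 fps per watt - hypothetical card B
```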
 
All cards have to run the same scene + resolution, because "efficiency" = "fps / power" .. and it has to be a game that's fair to both vendors .. and something popular .. leaning towards switching to Doom Eternal 4K for all cards .. even very old cards get decent fps there and don't fall off a cliff due to VRAM limits
My bad; I forgot about the efficiency metric. I was only thinking of peak power.
 
My bad; I forgot about the efficiency metric. I was only thinking of peak power.
you mean "maximum" in my charts? that's furmark and definitely not cpu limited. but furmark is a totally unrealistic load, that's why I also have a real gaming load and the differences are huge
 