
NVIDIA GeForce RTX 4090 Founders Edition

You mean "maximum" in my charts? That's Furmark and definitely not CPU limited. But Furmark is a totally unrealistic load, that's why I also have a real gaming load and the differences are huge.
I phrased it badly. I meant the Gaming power usage; I consider that the maximum for most purposes as Furmark is unrealistic. I was thinking that the game and resolution were chosen for a near peak gaming power draw.
 
How come a 40%-something perf boost is not a disappointment after the promised "2-4 times faster"?

3080 Ti vs 2080 Ti was a 56% bump, mind you.

How is the 4080 supposed to compete, given it is a heavily cut-down version of the 4090?
 
Funnily enough, for all the talk of DX12 decreasing CPU bottlenecks, the one game that doesn't seem CPU limited at 1440p is The Witcher 3.

I also excluded all the games from TPU's test suite that are clearly CPU limited and got somewhat better speedups for the 4090: 53% and 73% over the 3090 Ti and the 3090 respectively at 4K. The games that I excluded are:

  • Battlefield V
  • Borderlands 3
  • Civilization VI
  • Divinity Original Sin II
  • Elden Ring
  • F1 22
  • Far Cry 6
  • Forza Horizon 5
  • Guardians of the Galaxy
  • Halo Infinite
  • Hitman 3
  • Watch Dogs Legion
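For anyone who wants to sanity-check a figure like that themselves, the math is just averaging the per-game FPS ratios after dropping the excluded titles. A minimal sketch in Python (the FPS values are made-up placeholders, not TPU's data, and the geometric mean is one reasonable averaging choice, not necessarily the one TPU uses):

# Rough sketch (not TPU's actual data or method): average the per-game
# FPS ratios of two cards after dropping titles flagged as CPU-limited.
from statistics import geometric_mean

# Illustrative per-game 4K FPS numbers only -- replace with real review data.
fps_4090   = {"Game A": 140, "Game B": 95,  "Game C": 180}
fps_3090ti = {"Game A": 90,  "Game B": 62,  "Game C": 120}
cpu_limited = {"Game C"}                 # titles to exclude from the average

ratios = [fps_4090[g] / fps_3090ti[g]
          for g in fps_4090 if g not in cpu_limited]
speedup = geometric_mean(ratios)         # ~1.54 for these placeholder numbers
print(f"Average speedup excluding CPU-limited games: {speedup:.2f}x")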

List of DirectX 12 games - PCGamingWiki PCGW - bugs, fixes, crashes, mods, guides and improvements for every PC game: only 240 DX12 games vs 3,099 DX11 games.
Ray tracing is even rarer; only 141 games support it: List of games that support ray tracing - PCGamingWiki PCGW - bugs, fixes, crashes, mods, guides and improvements for every PC game
 
How come a 40%-something perf boost is not a disappointment after the promised "2-4 times faster"?

3080 Ti vs 2080 Ti was a 56% bump, mind you.

How is the 4080 supposed to compete, given it is a heavily cut-down version of the 4090?

I think we'll have a Turing situation all over again. Higher price increase than performance increase.

In 2018 it was explained by RTX: ray tracing and DLSS. Although it took quite a long time for both technologies to become widely adopted, and most people never enjoyed ray tracing on the RTX 2080 - it was just too slow.

Now they'll say we have a revolution in frame doubling with DLSS 3.0. It increases latency and leaves clear artifacts for all to see with moving GUI elements and such, but it's your fault if you notice it - you should be playing the game, not looking for artifacts!
 
"2-4 times faster"?
Base performance increase: +50%
Turn on DLSS Super Resolution: +50%-+100% depending on setting
Turn on DLSS Frame Generation: +100%
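Stacked naively, those three bullets are exactly how a "2-4x"-style headline gets built. A back-of-the-envelope sketch, assuming the quoted percentages simply compound multiplicatively against a no-DLSS baseline (a simplification on my part, not NVIDIA's published methodology):

# Back-of-the-envelope only: assume the quoted gains stack multiplicatively
# against a no-DLSS Ampere baseline.
base_uplift = 1.5          # "+50%" raw raster gain
dlss_sr     = (1.5, 2.0)   # "+50% to +100%" from Super Resolution
frame_gen   = 2.0          # "+100%" displayed frames from Frame Generation

with_sr = [base_uplift * s for s in dlss_sr]   # ~2.25x .. 3.0x
with_fg = [x * frame_gen for x in with_sr]     # ~4.5x .. 6.0x displayed frames
print(with_sr, with_fg)
# A "2-4x" headline only lands in this range because the comparison point
# has DLSS disabled; none of the generated frames reduce input latency.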
 
I think Hardware Unboxed has done a good job at showing the visual artefacts of DLSS 3.0:


It's all hard to detect in many cases, but some things are very apparent. Any moving GUI element is really hard for frame generation to predict, so they come out heavily garbled in the AI-generated frames, which causes them to flicker.

Could this be repaired? Well, the game could composite the GUI elements after frame generation, but that would require a completely different approach - and more involvement from developers.
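To make the ordering problem concrete, here is a purely conceptual toy in Python; every function is a stand-in, nothing here is NVIDIA's or any engine's actual API. The point is only where the HUD gets composited relative to the generated frame:

# Purely a conceptual toy -- every function is a stand-in. It shows why HUDs
# garble: today the interpolator sees the HUD already baked into the frame.
def render_scene(t):        return f"scene@{t}"        # stand-in 3D frame
def render_hud(t):          return f"hud@{t}"          # crisp 2D overlay
def interpolate(a, b):      return f"interp({a},{b})"  # stand-in frame generation
def composite(frame, hud):  return f"{frame}+{hud}"

# Current situation: the HUD is part of the frame, so generated frames garble it.
f0 = composite(render_scene(0.0), render_hud(0.0))
f1 = composite(render_scene(1.0), render_hud(1.0))
garbled_mid = interpolate(f0, f1)   # the HUD gets interpolated along with the scene

# Hypothetical fix: interpolate only the 3D scene, then draw the HUD on top.
clean_mid = composite(interpolate(render_scene(0.0), render_scene(1.0)),
                      render_hud(0.5))   # HUD stays sharp, but the engine has to
                                         # expose a pre-HUD frame to the generator
print(garbled_mid)
print(clean_mid)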
 
How come a 40%-something perf boost is not a disappointment after the promised "2-4 times faster"?

3080 Ti vs 2080 Ti was a 56% bump, mind you.

How is the 4080 supposed to compete, given it is a heavily cut-down version of the 4090?
2x-4x in a very specific game/demo.
No one expected to see those increases in every game.
This repetitive mantra about 2-4x is sooo very boring.

If you're into the bashing business then bash on, mate, but know it looks somewhat pathetic.
I very much agree that this product is quite stupid (as is any >$1000 gaming GPU out there, imo), but the "you promised me 2-4x" mantra is not the reason.
There are many other, real things to criticize this GPU for.
 
But it is silly.

Even if you take into account that Nvidia chose a best-case scenario, it's pure bullshit when you notice the claim holds true (even in that very specific scenario) only because they compared it to a non-DLSS result. As if we haven't had DLSS for 4 years now.

It's not "pathetic" to call bullshit on such practices.
 
When 150 watts was serious business... and for 2 PCBs :roll:

[attached image: image_2022-10-14_113650200.png]
 
But it is silly.

Even if you take into account that Nvidia chose a best-case scenario, it's pure bullshit when you notice the claim holds true (even in that very specific scenario) only because they compared it to a non-DLSS result. As if we haven't had DLSS for 4 years now.

It's not "pathetic" to call bullshit on such practices.

I wrote:
"If you're into the bashing business then bash on, mate, but know it looks somewhat pathetic"
"Somewhat pathetically" naive, if you will, to think that you'd get "2-4x".
If NV was misleading in its presentation about the best-case-scenario "2-4x", then OK, fight them to the moon and I will join, but as you said - it is a very much valid graph, no matter how sugar-coated it is.
So, to be led by NV's PR presentation about "2-4x" and to believe/expect that you will see that improvement in day-to-day usage, while knowing their way of doing business, is either (somewhat pathetically) naive or just a "somewhat pathetic" way of bashing, because it is not actual, proper bullshit.

What I will bash on (if I'm into that sort of practice, which I'm not): price, total power consumption, physical size and weight, DLSS 3 only being practical for very high-end usage (>120 FPS on 240 Hz screens) while increasing input lag, and so on.

Ranting about a point that doesn't really exist takes the air away from other valid criticism you (and others) might raise.
 
Base performance increase: +50%
Turn on DLSS Super Resolution: +50%-+100% depending on setting
Turn on DLSS Frame Generation: +100%
no way!... but what if you enable Deep Learning Super Sampling on, you know, the 3000 series GPUs?
 
"Somewhat pathetically" naive, if you will, to think that you get "2-4x".
If NV was misleading in its presentation about best case scenario "2-4x" than OK fight them to the moon and I will join but as you said- this is a very much valid graph, no matter how much sugar-coated it is.

We all expect first-party benchmark results to be sugar-coated. That NVIDIA would inflate its performance numbers in marketing is unremarkable, but in this case the inflation is especially dishonest, because the extra frames generated by DLSS 3.0 don't give you at least half the benefit of frames generated by other means (i.e. reduced input latency).

In retrospect, I think I was too kind earlier in discussing this feature. A commenter on the Techspot article I linked described DLSS 3.0 as a "motion fidelity" feature, rather than a performance boost, and that seems like the most sensible way to look at it. Imagine a feature that increased perceived smoothness without adding extra frames. That's what DLSS 3.0 does, in effect. It's purely a visual enhancement, though one that comes with a trade off to picture quality.

(One of the more interesting, and I think damning, passages in the Techspot article observes that DLSS 2.0 in Performance mode gave the same FPS and picture quality as DLSS 2.0 in Quality mode when combined with DLSS 3.0 in a particular game/scenario, and thus DLSS 3.0 was worse than pointless in that scenario, increasing latency in return for zero benefit.)

I think DLSS 3.0 is an impressive invention, and in time it could prove to be useful, but it isn't remotely comparable to extra GPU horsepower.

EDIT: Also I think it's somewhat annoying that NVIDIA chose to label its AI-frame-generation tech as "DLSS 3.0," implying that it's in some way not only linked to DLSS 2.0, but superior to it. In fact, the two features have basically nothing to do with one another. DLSS 2.0 increases real frame rate by rendering the scene in a lower resolution and then ingeniously scaling it up to look like you're running in native. (And in some cases, DLSS 2.0 can actually enhance the image, which is a neat trick.) DLSS 3.0 is a fancy interpolation technology that increases perceived motion smoothness. You can choose to enable one or both; they operate independently.
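A toy sketch of that independence, again with stand-in functions rather than any real API: upscaling changes how each real frame is produced, frame generation changes how many frames are shown, and either toggle works without the other:

# Toy sketch only (stand-in functions, not a real API): upscaling and frame
# generation are independent stages that can be toggled separately.
def render(t, res):              return f"frame@{t}:{res}"
def upscale(frame, target):      return f"upscaled({frame}->{target})"   # the "DLSS 2" idea
def interpolate(prev, curr):     return f"generated({prev},{curr})"      # the "DLSS 3" idea

def present(t_prev, t_curr, use_sr=True, use_fg=True):
    res = "1440p" if use_sr else "4K"          # SR renders at a lower internal res
    prev, curr = render(t_prev, res), render(t_curr, res)
    if use_sr:                                 # ...then scales each real frame up
        prev, curr = upscale(prev, "4K"), upscale(curr, "4K")
    frames = [curr]
    if use_fg:                                 # FG slots a guessed frame between
        frames.insert(0, interpolate(prev, curr))   # two real (possibly upscaled) ones
    return frames

print(present(0.0, 1.0, use_sr=True,  use_fg=False))   # upscaling only
print(present(0.0, 1.0, use_sr=False, use_fg=True))    # frame generation only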

DLSS 2.0 will remain vastly more useful to the average gamer long after DLSS 3.0 proliferates to the masses. Vastly vastly more useful; it isn't a contest.
 
We all expect first-party benchmark results to be sugar-coated. That NVIDIA would inflate its performance numbers in marketing is unremarkable, but in this case the inflation is especially dishonest, because the extra frames generated by DLSS 3.0 don't give you at least half the benefit of frames generated by other means (i.e. reduced input latency).

In retrospect, I think I was too kind earlier in discussing this feature. A commenter on the Techspot article I linked described DLSS 3.0 as a "motion fidelity" feature, rather than a performance boost, and that seems like the most sensible way to look at it. Imagine a feature that increased perceived smoothness without adding extra frames. That's what DLSS 3.0 does, in effect. It's purely a visual enhancement, though one that comes with a trade off to picture quality.

(One of the more interesting, and I think damning, passages in the Techspot article observes that DLSS 2.0 in Performance mode gave the same FPS and picture quality as DLSS 2.0 in Quality mode when combined with DLSS 3.0 in a particular game/scenario, and thus DLSS 3.0 was worse than pointless in that scenario, increasing latency in return for zero benefit.)

I think DLSS 3.0 is an impressive invention, and in time it could prove to be useful, but it isn't remotely comparable to extra GPU horsepower.

EDIT: Also I think it's somewhat annoying that NVIDIA chose to label its AI-frame-generation tech as "DLSS 3.0," implying that it's in some way not only linked to DLSS 2.0, but superior to it. In fact, the two features have basically nothing to do with one another. DLSS 2.0 increases real frame rate by rendering the scene in a lower resolution and then ingeniously scaling it up to look like you're running in native. (And in some cases, DLSS 2.0 can actually enhance the image, which is a neat trick.) DLSS 3.0 is a fancy interpolation technology that increases perceived motion smoothness. You can choose to enable one or both; they operate independently.

DLSS 2.0 will remain vastly more useful to the average gamer long after DLSS 3.0 proliferates to the masses. Vastly vastly more useful; it isn't a contest.
Agreed.
Unlike DLSS 2/FSR 2, DLSS 3 is a gimmick feature, much like RTX, and the pros are considerably smaller than the cons.
In time though, NV can improve DLSS 3 and, except maybe for the increased input-lag problem, deal with all the visual artifacts.
I'm sure DLSS 3.4 will be much better, just as DLSS 2.4 is now.
 
Agreed.
Unlike DLSS 2/FSR 2, DLSS 3 is a gimmick feature, much like RTX, and the pros are considerably smaller than the cons.

In time though, NV can improve DLSS 3 and, except maybe for the increased input-lag problem, deal with all the visual artifacts.
I'm sure DLSS 3.4 will be much better, just as DLSS 2.4 is now.

As I understand it, the latency problem can't ever really be "fixed." NVIDIA might reduce the latency cost of enabling the feature (relative to not enabling it), but the extra frames generated by DLSS 3.0 will always be "fake," or dumb if you prefer, with respect to the player's inputs. That's just the nature of interpolation.

Image quality might be improved, though, and hopefully at some point NVIDIA will find a way to allow V-Sync and framerate caps with DLSS 3.0 on. The Digital Foundry guy actually already forced V-Sync to work in certain situations, which is encouraging.

But again there are certain fundamental limitations here that can't be eliminated--e.g. frames generated by DLSS 3.0 will always be pointless above the maximum refresh rate of your monitor, and DLSS 3.0's picture quality will always be worse at lower FPS, because a longer delay between "real" frames requires the AI to guess more in constructing the mid-point image between them. (And if the frames are displayed longer, obviously, the human eye is more likely to notice errors.) Thus, DLSS 3.0 will tend to skew against the very people you'd expect to want it most (i.e. those without high-refresh monitors or strong rendering hardware, or on the other end of the spectrum, competitive gamers who want stratospheric FPS to improve latency).
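Rough numbers make that scaling obvious. A small sketch, assuming the generator has to hold back roughly one real frame and places the generated frame at the midpoint between two real ones (a simplification of however NVIDIA actually schedules it):

# Rough frame-time arithmetic, assuming frame generation buffers one real
# frame and shows the generated frame at the midpoint between two real ones.
for real_fps in (30, 60, 120):
    real_frame_ms = 1000 / real_fps     # the gap the AI has to bridge by guessing
    added_latency = real_frame_ms       # roughly one real frame held back
    displayed_ms  = real_frame_ms / 2   # how long each real or generated frame is on screen
    print(f"{real_fps:>4} real fps: {real_frame_ms:5.1f} ms gap, "
          f"~{added_latency:.0f} ms extra latency, each frame shown ~{displayed_ms:.1f} ms")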

I'm sure there's room for improvement, but it's a niche feature now and I doubt that niche will ever dramatically expand. And even if DLSS 3.0 were 100% perfected, it still wouldn't be analogous to adding extra FPS in the traditional way. DLSS 2.0 and its analogues, on the other hand, are and will continue to be extremely useful to huge swathes of the user base, precisely because they do provide real performance boosts analogous to traditional frame rate improvements.

EDIT: lol, it looks like I misread your post. I thought you said NVIDIA might improve the input lag. My fault, man.
 
Agreed.
Unlike DLSS 2/FSR 2, DLSS 3 is a gimmick feature, much like RTX, and the pros are considerably smaller than the cons.

In time though, NV can improve DLSS 3 and, except maybe for the increased input-lag problem, deal with all the visual artifacts.
I'm sure DLSS 3.4 will be much better, just as DLSS 2.4 is now.

After seeing a couple of reviews from tech-tubers I'm of the same opinion. Perhaps nV is rattled, with AMD trailing closely behind + Intel now enlisted, with hopefully stronger competition ahead. Rather than offering raw performance at a reasonable cost, it appears the king of the hill decided to milk the hill further with the "perception" of wider performance gains. I wonder if it's desperate times or just another nV authoritative swindling strategy to rob the less informed/insensible rich?
 
Base performance increase: +50%
Turn on DLSS Super Resolution: +50%-+100% depending on setting
Turn on DLSS Frame Generation: +100%
Oh, so that's how that works.
My TV can do "frame generation".
I guess that's another 200% on top, if the base fps is low enough.

"2-4x" mantra

If you are fine with "2-4 times" claims in regard to a card that, per TPU's tests, is about 45% faster, that's cool, I guess.
But it's fairly pathetic in my book.

Even if one zooms in on what was claimed:

"in games, it's 2 times"
 
So, W1zzard?
Tom's Hardware has an article comparing two RTX 3090 Tis in SLI vs an RTX 4090 running DLSS in Microsoft Flight Simulator.
W1zzard, any plans later on to compare two RTX 3090 Tis in SLI/mGPU vs one RTX 4090?

Most likely going to be a no, because you focus on triple-A games that everyone has to be playing, for a mass audience. It makes the forum here feel more like a gaming forum than a PC enthusiast forum.

I do have a list of supposedly mGPU games; they need confirmation, but it might help.
RTX 4090 Beats Two RTX 3090s in SLI — Barely | Tom's Hardware (tomshardware.com)

1. Why would they use DLSS on SLI and then complain about the second card not being loaded enough? :confused:

2. Can someone tell me why or how the hell they got Cyberpunk 2077 supporting SLI or mGPU, when CD Projekt stated that they had no plans to implement it? :confused:
 
2x-4x in a very specific game/demo.
I've come across where I got that from, on ResetEra:

RTX 4080
  • Starts at $900 for 12GB G6X, $1200 for 16GB G6X
  • 2-4x faster than 3080 Ti
  • Launching in November

RTX 4090
  • $1600
  • 24GB G6X
  • 2-4x faster than 3090 Ti
  • Launching on October 12th

  • Ada Lovelace is 2x faster in rasterization and 4x faster in ray-tracing compared to Ampere
  • Ada Lovelace GPUs are significantly more energy efficient compared to Ampere


So, reality is quite far from it, ain't it? (oh, I mean, besides the pricing, although in DE AIBs want 25% on top... I guess I know why EVGA quit)
 
MSFS is a CPU-limited POS. The only way to get higher fps is by using frame doubling in DLSS.
You mean it's single-threaded even on DX12. Stop calling it CPU limited when it barely even uses the CPU.
How about calling it CPU limited when a CPU like the 5600X or 12400K is at 80% load, and going to something like a 5900X or a 12700K increases frame rates and keeps a decent load on the CPU?
It's a crappily built engine that wasn't properly built for multi-threading on DX12.
 
You mean it's single-threaded even on DX12. Stop calling it CPU limited when it barely even uses the CPU.
How about calling it CPU limited when a CPU like the 5600X or 12400K is at 80% load, and going to something like a 5900X or a 12700K increases frame rates and keeps a decent load on the CPU?
It's a crappily built engine that wasn't properly built for multi-threading on DX12.
Single-thread limited is still, well, CPU limited, pal...
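Which is worth spelling out, because one pegged thread barely moves the overall CPU-usage percentage. A quick sketch (the thread counts are just illustrative):

# One maxed-out render thread barely registers in the aggregate CPU-usage
# number; the logical-thread counts below are just examples.
for logical_threads in (12, 16, 24):
    aggregate = 100 / logical_threads          # one thread at 100%, the rest idle
    print(f"{logical_threads} threads: ~{aggregate:.0f}% total CPU usage "
          "while the game is fully limited by that one thread")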
 
I've come across where I got that from, on ResetEra:

RTX 4080
  • Starts at $900 for 12GB G6X, $1200 for 16GB G6X
  • 2-4x faster than 3080 Ti
  • Launching in November

RTX 4090
  • $1600
  • 24GB G6X
  • 2-4x faster than 3090 Ti
  • Launching on October 12th

  • Ada Lovelace is 2x faster in rasterization and 4x faster in ray-tracing compared to Ampere
  • Ada Lovelace GPUs are significantly more energy efficient compared to Ampere


So, reality is quite far from it, ain't it? (oh, I mean, besides the pricing, although in DE AIBs want 25% on top... I guess I know why EVGA quit)
Yes, reality is very different from a sugar-coated PR presentation (yet they are right according to the specific details of the graph/test). What of it? Something new?
 
Anyone know how DLSS 3.0 does in "slower gameplay" games that are CPU limited?
Like Stellaris late-game or other Paradox map games, Cities: Skylines, and similar games.
 
people pushing the narrative to be amazed at a generic generational performance leap and to be grateful for higher prices is really funny :)
It is the beast of the moment and no one is forcing you to buy it. You want pure performance at 4K, you buy it... if you can afford it. You want to play decently at 8K, buy it. Play WoT at 1080p, don't buy it!
For content creation, this price is really low. It takes two 3090 Tis to beat it in this segment.
 