
NVIDIA GeForce RTX 5090 Founders Edition

Most curious to me is that the RT performance hit is sometimes a couple of percent worse gen-on-gen. How bizarre for the GPU vendor that makes the most fuss about it:

View attachment 381170
Jensen often talks about how they completely changed some aspect of the RT cores' inner workings, with massive improvements in very specific areas, but the performance increase seems very inconsistent from game to game, or even from a simple switch in resolution.
 
So with a $2000 price tag, look at all the money you're saving!!!.... :rolleyes:
To be fair, while the $2k price is steep, you do get the performance you pay for, and it's hella cheaper than the professional cards that START at $4,000.

So, you admit this is a workstation card for businesses/companies which make millions?
 
I wonder how much the GPU is actually CPU limited. We've known that for most of the 4090's life that card was badly CPU limited, and this one would naturally end up worse. The amount of performance is simply brutal: easily over twice as powerful as what the RX 9070 XT will be according to the latest rumors ("just under 4080" raster and 4070 Ti Super RT). Get this, tame it down to around a 400 W power limit, add a healthy dose of DLSS, and it'll be a GPU for the ages.

Being 42% faster than the 4080 Super at 4K on launch drivers is simply bananas. It'll be amazing to have one of these.
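Taming a card via power limit works because performance scales sub-linearly with power near the top of the voltage/frequency curve. A minimal Python sketch of the idea; `perf_retained`, the exponent, and the wattages are purely illustrative assumptions, not measurements:

```python
def perf_retained(power_cap_w: float, stock_power_w: float,
                  exponent: float = 0.4) -> float:
    """Fraction of stock performance kept under a reduced power limit.

    Models perf ~ power ** exponent near the top of the V/F curve;
    the exponent is a hypothetical ballpark, not a measured value.
    """
    return (power_cap_w / stock_power_w) ** exponent

# e.g. capping a 575 W card at 400 W (numbers illustrative)
print(f"{perf_retained(400, 575):.1%}")
```

Under these assumed numbers the model suggests most of the stock performance survives a fairly deep power cut, which is the usual argument for undervolting/power-limiting halo cards.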

So, you admit this is a workstation card for businesses/companies which make millions?

No, it's a GPU for gamers who have a job (or patience).
 
DLSS 4 is looking mighty fine, I like what I see, very much:
Can't be bothered to go through the whole video, but on RDR2, with the video at 4K:

At the start, native has gaps in the tree branches, which happens when the AA is trash and/or the resolution is low. Both DLSS variants solve this, but I think 3.8 looks better than 4.
As he was riding I kept an eye on the branches and the telegraph lines. Native has the same issue where the lines have gaps and don't fully render, likely due to poor-quality AA; both DLSS versions handle it much better. 3.8 still seems to have the edge, but 4 was still better than native.

TAA is trash. I wonder how much of DLSS's goodness is due to "native" often being tied to TAA.
 
I think it will blast the side panel/glass, impacting every piece of hardware. Unless you have a case with an open-mesh side panel for the air to escape that way, then I don't see any issues. :)
This is why I kept my old HAF case. My GPU temps look SO friggin' good compared to most. Side venting needs to make a comeback.
I'm also looking to AMD as the only way out of this shit show. I'm not saying it's a foregone conclusion, but the RTX 80/70 Ti/70 aren't looking great at the moment. (May be another price/performance stagnation disappointment.)
If only AMD kept the gas on RDNA 4. If it is an actual improvement this time, they have a chance at a sucker punch.

Or if they just fix FSR....
You know that's not even remotely true; CPUs have been getting more efficient on the same node for eons! Heck, Nvidia themselves did it 3 (2.5?) times on 28nm o_O

Although getting them more efficient is probably exponentially harder now.
What CPUs are those? Raptor Lake is less efficient than Alder Lake, the 9000 series is less efficient than the 7000 series, etc.
Look at the chips. GK104 is 294 mm², which is in fact a 670-class GPU; then you've got GK110, a proper high-end chip at 561 mm²; and then a backport, GM204, which is also not an enthusiast-level GPU, a tiny chip at only 398 mm². In comparison, GB202 is 750 mm², the largest Nvidia GPU ever made.
GPU die size != rebrand. The 980 was a totally different arch, and the 780 was designed after the 680 launched.
Can't be bothered to go through the whole video, but on RDR2, with the video at 4K:

At the start, native has gaps in the tree branches, which happens when the AA is trash and/or the resolution is low. Both DLSS variants solve this, but I think 3.8 looks better than 4.
As he was riding I kept an eye on the branches and the telegraph lines. Native has the same issue where the lines have gaps and don't fully render, likely due to poor-quality AA; both DLSS versions handle it much better. 3.8 still seems to have the edge, but 4 was still better than native.

TAA is trash
Frankly TAA should be banned.
 
This is why I kept my old HAF case. My GPU temps look SO friggin' good compared to most. Side venting needs to make a comeback.
Yeah, same thought process with my Fractal R5. The case got a lot of flak for its silence-at-the-cost-of-cooling approach to design, but if you open the front door and toss an intake on the side panel it cools about as well as anything else, and probably a good bit better than a lot of current offerings. The case market has been goofing off for the better part of a decade, IMO: plenty on offer if your primary interest is colored lights, but very little otherwise, mostly regressions in terms of options/versatility.
 
This is a 4K/2160p GPU; do not buy this if you are on 1080p, or 1440p for that matter.
 
I wonder how much the GPU is actually CPU limited. We've known that for most of the 4090's life that card was badly CPU limited, and this one would naturally end up worse. The amount of performance is simply brutal: easily over twice as powerful as what the RX 9070 XT will be according to the latest rumors ("just under 4080" raster and 4070 Ti Super RT). Get this, tame it down to around a 400 W power limit, add a healthy dose of DLSS, and it'll be a GPU for the ages.

Being 42% faster than the 4080 Super at 4K on launch drivers is simply bananas. It'll be amazing to have one of these.



No, it's a GPU for gamers who have a job (or patience).

I doubt it's CPU bottlenecked at 4K with the 9800X3D, and it's easy enough to test by running the 9800X3D in two states of tune and seeing if there's a performance difference. Pretty sure a ton of CPU tests are coming over the next few weeks.
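The "two states of tune" test boils down to: if downclocking the CPU barely moves the frame rate, the GPU is the limiter. A minimal sketch of that check; the function name, FPS figures, and 3% threshold are all hypothetical:

```python
def looks_gpu_bound(fps_cpu_fast: float, fps_cpu_slow: float,
                    tolerance: float = 0.03) -> bool:
    """True if FPS changes by at most `tolerance` (as a fraction)
    when the CPU is slowed down, i.e. the GPU is the limiter."""
    return abs(fps_cpu_fast - fps_cpu_slow) / fps_cpu_fast <= tolerance

print(looks_gpu_bound(118.0, 116.5))  # tiny delta: likely GPU-bound
print(looks_gpu_bound(118.0, 95.0))   # large delta: likely CPU-bound
```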
 
Me too. My Cooler Master HAF-XB EVO is going to be very handy for the RTX 5080 I'm aiming for.

You still have yours? I regret getting rid of mine so much. So hard to buy another nowadays :(

I doubt it's CPU bottlenecked at 4K with the 9800X3D, and it's easy enough to test by running the 9800X3D in two states of tune and seeing if there's a performance difference. Pretty sure a ton of CPU tests are coming over the next few weeks.

I suppose it'll largely depend on the game and resolution at the end of the day. The 4090 already shone brightest at 4K, and that seems to be a trait this one shares. Which works great for me, since that is precisely what I'll be buying mine for.
 
Okay, I said it'd be +40% just by looking at the specs. It's only 5% shy of that overall, but in many games it is +40%, and that's just awesome.

It's not hard to tell how fast a GPU is by looking at the specs.
But it's still very hard for some of the loyal AMD fans who only see +10%, or 20% at most.

Also

RIP AMD. AMD just can't compete because of:
  • DLSS 4 Frame Generation and transformer-based upscaling
 
So... you're saying when Jensen said "the more you buy, the more you save", he was telling the truth?
Only if you're buying DOGE :nutkick:
What CPUs are those? Raptor Lake is less efficient than Alder Lake, the 9000 series is less efficient than the 7000 series, etc.
Zen 2 -> 3, and IIRC(?) a couple of *dozers on 32nm; Intel itself did it on 14nm thrice.
 
Can't be bothered to go through the whole video, but on RDR2, with the video at 4K:

At the start, native has gaps in the tree branches, which happens when the AA is trash and/or the resolution is low. Both DLSS variants solve this, but I think 3.8 looks better than 4.
As he was riding I kept an eye on the branches and the telegraph lines. Native has the same issue where the lines have gaps and don't fully render, likely due to poor-quality AA; both DLSS versions handle it much better. 3.8 still seems to have the edge, but 4 was still better than native.

TAA is trash. I wonder how much of DLSS's goodness is due to "native" often being tied to TAA.
Plus more FPS; smooth gameplay looks miles better than low-FPS native.
TAA should be banned
 
This was my worry: Nvidia has been reliant on node shrinks and power increases over the last couple of years, and has been lacking in power-efficient GPU architecture design.
 
This is a 4K/2160p GPU; do not buy this if you are on 1080p, or 1440p for that matter.
Ray tracing says otherwise. Looks like a 1440p card to me with ray tracing enabled. 1800p would be fine, but marketing has made us skip that for 4K!
 
This was my worry: Nvidia has been reliant on node shrinks and power increases over the last couple of years, and has been lacking in power-efficient GPU architecture design.
It's not been lacking in efficiency. Power use != efficiency.

In watts per frame, Ada was the most efficient architecture Nvidia has ever made, and a damn sight better than AMD or Intel. Blackwell has roughly the same efficiency on the same node.
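"Watts per frame" is just board power divided by frame rate, which works out to joules of energy spent per frame. A quick sketch; the card names and (watts, fps) pairs are made-up sample numbers, not review data:

```python
def joules_per_frame(board_power_w: float, fps: float) -> float:
    # W divided by frames/s gives joules per frame
    return board_power_w / fps

# Illustrative numbers only, not measured results
cards = {"Card A": (450.0, 100.0), "Card B": (575.0, 130.0)}
for name, (watts, fps) in cards.items():
    print(f"{name}: {joules_per_frame(watts, fps):.2f} J/frame")
```

Note a card can draw more total watts yet still be more efficient, which is the power-use-vs-efficiency distinction the post is making.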
 
Great overall performance. The price is high, but this card is mostly aimed at content creators, where price isn't that big a factor if you get huge improvements. The dual decoder engines are a pretty big deal.
The 7900 XTX in newer titles, especially with RT on, now looks like an entry-level card, with often unplayable performance. :D
 
This is why I kept my old HAF case. My GPU temps look SO friggin' good compared to most. Side venting needs to make a comeback.

If only AMD kept the gas on RDNA 4. If it is an actual improvement this time, they have a chance at a sucker punch.

Or if they just fix FSR....

What CPUs are those? Raptor Lake is less efficient than Alder Lake, the 9000 series is less efficient than the 7000 series, etc.

GPU die size != rebrand. The 980 was a totally different arch, and the 780 was designed after the 680 launched.

Frankly TAA should be banned.
It would be very interesting if this happened, as it would kill all upscaling based on temporal filters.
 
Not that impressive; I hoped for much more. Personally, this feels like a "tock" improvement on the same process.

Roughly(!) 30% more shaders and RT units, 30% more expensive (no, nobody will buy a 4090 for $2,400 from now on), 30% more die space, 30% more power consumption, 30% more performance. 30% everywhere. A nothingburger on the same process node.

To think that this is the progress in gaming GPUs, and that we stay at this level for another two years, feels kind of limiting to me.
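The "30% everywhere" point can be checked with trivial arithmetic: if price, power, and performance all scale by the same factor, perf-per-dollar and perf-per-watt don't move at all. A sketch with illustrative baseline numbers (not this card's actual specs):

```python
import math

old = {"price": 1600.0, "power": 450.0, "perf": 100.0}  # illustrative baseline
scale = 1.30
new = {k: v * scale for k, v in old.items()}

# Ratios are unchanged when everything scales by the same factor
print(math.isclose(new["perf"] / new["price"], old["perf"] / old["price"]))
print(math.isclose(new["perf"] / new["power"], old["perf"] / old["power"]))
```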
 
So... fun. The halo product is halo. It's fast, loud, and expensive. Check all the "no duh" boxes.

Since I've watched the green vs. red shenanigans over the last few days, let me be real here. It's got good performance numbers. It's "energy efficient." It's likewise drawing huge amounts of power. It's still possible to be efficient under those conditions... then the review points out unusually high power draw during playback, and I taste the double standard of someone who wants to be impartial but can't really be. I appreciate the honesty, but the reason I don't care, beyond the obvious ransom price, is that the moves forward seem entirely to be "juice it until the return on more juice tapers off, then sell it as our flagship." That's... well, it's not interesting to me to see an exercise in pursuing big numbers at all costs, in both performance and price.
 
Very cool. The performance increase is about what I expected. It will be more interesting to see where the 5080 lies against the 4090. And that cooler: it will be amazing to see how it performs with the lower TDP on the 5080!
 
It's not been lacking in efficiency. Power use != efficiency.

In watts per frame, Ada was the most efficient architecture Nvidia has ever made, and a damn sight better than AMD or Intel. Blackwell has roughly the same efficiency on the same node.
No, it's not.
It's +1%; calling that power efficiency "good" is a complete joke.
 