
NVIDIA GeForce RTX 5090 Founders Edition

GTX 770 refresh scenario...
The GTX 770 came one year after the 670 and offered about 13% more performance. The 970 offered at least a 33% advantage.
The 5070 comes two years after the 4070 and is set to offer 25% at the very most.
 
The RTX 4090 was not the most efficient card, based on W1zzard's reviews:
(TPU's RTX 4080 Super review here.)

Same here: it's not more efficient than the previously most efficient card, aka the RTX 4080 (Super).

Still, remember, guys, those results are based on just one game - Cyberpunk 2077. It varies between games, just so you know.
Unfortunately, the GN video shows an efficiency comparison in only 3 games, which is still more than the single game seen on TPU:

It would be nice to have a bigger statistical sample - at least 10 games, same settings, same rest of the hardware, RTX 4090 vs RTX 5090. The more games, the more accurate the results. Our German colleagues have already tested this: they limited the RTX 5090's power to 450W (RTX 4090 level) and saw an 11-15% performance improvement. That means an RTX 5090 limited to 450W is indeed more efficient than the RTX 4090. At the full 575W TGP, I don't think so; I'd say they are pretty much on par, though the RTX 4090 might be very slightly more efficient. Of course, an undervolted RTX 5090 might be a totally different story, similarly to the undervolted RTX 4090's story.
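To make that kind of comparison concrete, here's a minimal perf-per-watt sketch in Python; the FPS and wattage figures are placeholders for illustration, not measured results:

```python
# Minimal sketch of a performance-per-watt comparison across a set of games.
# All FPS and power figures here are placeholders, not measured data.

def perf_per_watt(fps: float, watts: float) -> float:
    """Average frames per second delivered per watt of board power."""
    return fps / watts

# Hypothetical multi-game averages (placeholder numbers only).
cards = {
    "RTX 4090 @ 450 W":        {"fps": 100.0, "watts": 430.0},
    "RTX 5090 @ 575 W":        {"fps": 130.0, "watts": 560.0},
    "RTX 5090 capped @ 450 W": {"fps": 113.0, "watts": 445.0},
}

baseline = perf_per_watt(**cards["RTX 4090 @ 450 W"])
for name, card in cards.items():
    eff = perf_per_watt(**card)
    print(f"{name}: {eff:.3f} FPS/W ({(eff / baseline - 1) * 100:+.1f}% vs 4090)")
```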
Still incorrect to label it as "inefficient". It's more efficient than the 4090 and blows every single AMD card out of the water.
 
It's more efficient than the 4090 and blows every single AMD card out of the water.
Those are some strong words when the new AMD cards are not even tested yet. IPC will be easier to test on the RTX 5080 vs RTX 4080 due to the same 256-bit bus. Apples vs apples, not apples vs oranges!
 
The CUDA cores can now all execute either INT or FP; on Ada, only half had that capability. When I asked NVIDIA for more details on the granularity of that switch, they acted dumb, gave me an answer to a completely different question, and said "that's all that we can share".
Still better than sharing flat-out incorrect info, I suppose. When Der8auer contacted NVIDIA to ask why the hotspot temperature was removed, their reply was somewhere along the lines of "oh, that sensor was bogus, but we added memory temperatures now!". We've had memory temperature for ages.

Also, the FE cooler seems pretty bad at cooling memory, with 94-96 °C on test benches, just like the 3090 FE memory temperatures. When people start putting these in a case and they accumulate a bit of dust, memory throttling is a real possibility down the line, but with so much memory bandwidth I doubt it'll be much of an issue.

Now with the hotspot temperature removed, how does one figure out if their TIM/LM application is accurate or if the block is slightly misaligned? Since the core temperature is an average of sorts, I feel like removing that wasn't a great idea because we just lost another data point which was actually useful in these scenarios.
 
A bit more apples to apples:

4090 idle: 22W
5090 idle: 30W, +36%

4090 multi monitor: 27W
5090 multi monitor: 39W, +44%

4090 video playback: 26W
5090 video playback: 54W, +108%

It's quite horrible. AMD "we'll fix it in drivers" (but never does) horrible.

But making excuses for Nvidia that this card isn't meant for gamers and home users is silly. Nvidia spent quite a big chunk of their RTX 5090 presentation on how good it is in gaming - since it's apparently the only card that will have any significant performance uplift over its Lovelace equivalent without using "frame quadrupling". Relegate this card to the "Quadro" lineup, or a "home and small business AI accelerator" lineup, and what are you left with? Cards within 10-15% of their predecessors? That's within overclocking margin, as measly as that is now.
The 5090 is basically an "I give no ***** about being economical" card. Worse under load, massively worse at idle.

An issue has cropped up since the 4000-series launch. I noticed it when trying to make a voltage/frequency curve like my 3080 had: the 3D mode on these cards can't go below about 0.910 V, which is pretty high, and the clocks also have a high minimum speed. The downside is that a card with a high core count now basically has a higher power-draw floor.

Anyone that can afford a 5090 probably isn't overly concerned about the cost to run it for gaming.

If you game 4 hours a day, that's 28 hours a week.
If the GPU draws a continuous 600W while gaming, you end up with 16.8kWh a week.
If you pay $0.10 / kWh = $1.68 a week
If you pay $0.20 / kWh = $3.36 a week
If you pay $0.30 / kWh = $5.04 a week
If you pay $0.70 / kWh = $11.76 a week
Remember, this is if the GPU is running a sustained, continuous 600W those 4 straight hours of gaming. It all depends on the game, resolution, settings and so on. Also, remember the V-Sync power chart shows the GPU pulling about 90W. The above numbers would be for top-end power draw scenarios.
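For reference, the arithmetic above in a few lines of Python (same assumptions as stated: 600W sustained, 4 hours a day, 7 days a week):

```python
# Weekly electricity cost of the GPU alone, using the same assumptions as
# above: a sustained 600 W draw, 4 hours per day, 7 days per week.
gpu_watts = 600
hours_per_week = 4 * 7                              # 28 h
kwh_per_week = gpu_watts / 1000 * hours_per_week    # 16.8 kWh

for price_per_kwh in (0.10, 0.20, 0.30, 0.70):
    print(f"${price_per_kwh:.2f}/kWh -> ${kwh_per_week * price_per_kwh:.2f} per week")
```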

Personally, I wouldn't want a GPU that can suck 600W for gaming. Not to mention the fact that this GPU is priced nearly 3x over what I'm comfortable spending on a GPU, so I'm not the target for this product. If I had oodles of money and no brains, I'd get one, but I've got a meager amount of money and brains, so I won't be getting one.
That's scary. The UK regulated tariff, what we call SVR, at current exchange rates sits between those bottom two examples you listed.
 
Those are some strong words when the new AMD cards are not even tested yet.
Well, given that the last three generations of AMD cards have struggled in the efficiency game and the actual arch improvements have been near non-existent, I'mma make an educated guess and say RDNA 4 isn't going to be setting that efficiency graph on fire.

It's also a correct statement. Every AMD card on that list is below the 5090 in efficiency. You don't need to be bold to state a fact. There are future NVIDIA cards too; you'll always have to factor those in, unless it's announced that no new GPUs will ever be made.
IPC will be easier to test on the RTX 5080 vs RTX 4080 due to the same 256-bit bus. Apples vs apples, not apples vs oranges!
No idea why IPC was brought up. Apples vs oranges indeed.
That's scary. The UK regulated tariff, what we call SVR, at current exchange rates sits between those bottom two examples you listed.
Yeah, but those costs are at 600W continuously, 4 hours a day, every day.

Who does that? If you're a gaming enthusiast, you won't see those numbers sustained for very long; if you're running AI workloads, you're either an enthusiast, in which case the power use is a non-issue, or you're making money on it, see above.

The power use argument has just never made any sense.
 
No idea why IPC was brought up. Apples vs oranges indeed.
384-bit (4090) vs 512-bit (5090) is technically not the same. For power and IPC testing, 256-bit vs 256-bit will be more accurate.
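As a sketch of the kind of normalized comparison being suggested, something like this could estimate a per-core, per-clock change between the two 256-bit cards; the FPS and average clock values are placeholders, not measurements:

```python
# Rough "IPC-style" normalization: average FPS per shader per GHz.
# The FPS and average-clock figures are placeholders, not measurements;
# only the shader counts (9,728 vs 10,752) are real specs.

def perf_per_core_per_clock(fps: float, cores: int, avg_clock_ghz: float) -> float:
    return fps / (cores * avg_clock_ghz)

rtx_4080 = perf_per_core_per_clock(fps=100.0, cores=9728,  avg_clock_ghz=2.7)
rtx_5080 = perf_per_core_per_clock(fps=112.0, cores=10752, avg_clock_ghz=2.7)

print(f"Per-core, per-clock change: {(rtx_5080 / rtx_4080 - 1) * 100:+.1f}%")
```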
 
If you pay $0.10 / kWh = $1.68 a week
If you pay $0.20 / kWh = $3.36 a week
If you pay $0.30 / kWh = $5.04 a week
If you pay $0.70 / kWh = $11.76 a week
Recall that for us European consumers, once VAT is added, the card costs approximately 2,300-2,400 euros.

 
The CUDA cores can now all execute either INT or FP; on Ada, only half had that capability. When I asked NVIDIA for more details on the granularity of that switch, they acted dumb, gave me an answer to a completely different question, and said "that's all that we can share".
On Blackwell all the CUDA cores can do FP32 or INT32, but in games roughly ~35% of those cores end up running INT32 work (some NVIDIA employees confirmed it), so the 5090 effectively behaves like a ~110-SM FP32 gaming GPU! That's a lot of FP32 performance left on the table... I wonder if they could make each core have a "dual instruction mode", i.e. doing both FP32 + INT32 at the same time, in next-gen architectures. That could give them a huge boost just by changing the way the architecture works.
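Reading that estimate as "roughly 110 SMs' worth of FP32 left for games", here's a back-of-the-envelope sketch; the 35% INT32 share is the figure quoted above, not a measured value:

```python
# Back-of-the-envelope version of the estimate above: if ~35% of the unified
# FP32/INT32 cores are busy with INT32 work in games, how much FP32 capacity
# is effectively left? The 35% share is the poster's figure, not a measurement.
sm_count    = 170                 # RTX 5090 SMs
cuda_cores  = sm_count * 128      # 21,760 unified cores
boost_ghz   = 2.407               # rated boost clock
int32_share = 0.35

fp32_cores_effective = cuda_cores * (1 - int32_share)           # ~14,144
peak_tflops      = cuda_cores * 2 * boost_ghz / 1000             # 2 FLOPs/core/clock (FMA)
effective_tflops = fp32_cores_effective * 2 * boost_ghz / 1000

print(f"Effective FP32 cores: {fp32_cores_effective:,.0f} "
      f"(~{fp32_cores_effective / 128:.0f} SMs' worth out of {sm_count})")
print(f"FP32: {peak_tflops:.0f} TFLOPS peak, ~{effective_tflops:.0f} TFLOPS effective in games")
```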

A bit more apples to apples:

4090 idle: 22W
5090 idle: 30W, +36%

4090 multi monitor: 27W
5090 multi monitor: 39W, +44%

4090 video playback: 26W
5090 video playback: 54W, +108%

It's quite horrible. AMD "we'll fix it in drivers" (but never does) horrible.

But making excuses for Nvidia that this card isn't meant for gamers and home users is silly. Nvidia spent quite a big chunk of their RTX 5090 presentation on how good it is in gaming - since it's apparently the only card that will have any significant performance uplift over its Lovelace equivalent without using "frame quadrupling". Relegate this card to the "Quadro" lineup, or a "home and small business AI accelerator" lineup, and what are you left with? Cards within 10-15% of their predecessors? That's within overclocking margin, as measly as that is now.
The 5090 has a 512-bit bus and 33% more CUDA cores... and they had to lower core clocks compared to the 4090 to avoid using "too much" power... Also, they're on a TSMC 4nm node, so they can't get much more out of it.

Blackwell was supposed to be on a TSMC 3nm node but Nvidia decided to cheap out by relying on AI to do the heavy work.

If the RTX 6090 releases in Q1 or Q2 2027, they could definitely go for a TSMC 2nm node and get much better efficiency. But they might need to work on their architectural efficiency too.

I feel like Ampere, Lovelace and Blackwell all have a similar IPC. Lovelace was just on a much better node than Ampere but Core for Core we didn't see much improvement, and the same goes for Blackwell (at least as of now).
 
So the drivers for DLSS 4 aren't out yet, they're reviewer-only? (570.xx)
 
Well, I watched the OC3D review of the MSI 5090 Suprim. When I saw the power draw was 836 watts, I was blown away. Just about all reviewers are looking at price as the mitigating factor, but it does not matter how you try to spin it: 836 watts from one component in a PC is insane.
 
Well, I watched the OC3D review of the MSI 5090 Suprim. When I saw the power draw was 836 watts, I was blown away. Just about all reviewers are looking at price as the mitigating factor, but it does not matter how you try to spin it: 836 watts from one component in a PC is insane.
Rich boy's video card.
 
Well, I watched the OC3D review of the MSI 5090 Suprim. When I saw the power draw was 836 watts, I was blown away. Just about all reviewers are looking at price as the mitigating factor, but it does not matter how you try to spin it: 836 watts from one component in a PC is insane.

Are you sure that isn't total system draw? I haven't seen a review go that high yet unless it was showing total system power instead of just the GPU.
 
Are you sure that isn't total system draw? I haven't seen a review go that high yet unless it was showing total system power instead of just the GPU.
I am pretty sure it was FurMark.
 
Are you sure that isn't total system draw? I haven't seen a review go that high yet unless it was showing total system power instead of just the GPU.
I mean, there's an article here on TPU about Igor's Lab measuring 901W spikes.

But that's what it is: spikes. And still covered by ATX 3.1 specs.
 
I mean, there's an article here on TPU about Igor's Lab measuring 901W spikes.

But that's what it is: spikes. And still covered by ATX 3.1 specs.

Thanks, I had missed that article. Seems like a nothingburger if it's a 1 ms spike that the spec accounts for, not the card constantly pulling almost 850 watts.
 
Thanks, I had missed that article. Seems like a nothingburger if it's a 1 ms spike that the spec accounts for, not the card constantly pulling almost 850 watts.
Well, the ASUS card here draws over 600 watts. I am not trying to bash the card, but the power draw is insane, no matter how good the cooling solution is. 1200 Watt PSUs are not cheap.
 
... and the electricity bill won't be cheap, at 800 W (for the entire PC while gaming)… 4 hours a day… I'll let you do the math.
 
No matter how good the cooling solution is. 1200 Watt PSUs are not cheap.
A good PC case is also needed to cool down that oven. The original cooler is bad too: ~40 dBA is not acceptable at that price...
 
Anyone that can afford a 5090 probably isn't overly concerned about the cost to run it for gaming.

If you game 4 hours a day, that's 28 hours a week.
If the GPU draws a continuous 600W while gaming, you end up with 16.8kWh a week.
If you pay $0.10 / kWh = $1.68 a week
If you pay $0.20 / kWh = $3.36 a week
If you pay $0.30 / kWh = $5.04 a week
If you pay $0.70 / kWh = $11.76 a week
Remember, this is if the GPU is running a sustained, continuous 600W those 4 straight hours of gaming. It all depends on the game, resolution, settings and so on. Also, remember the V-Sync power chart shows the GPU pulling about 90W. The above numbers would be for top-end power draw scenarios.

Personally, I wouldn't want a GPU that can suck 600W for gaming. Not to mention the fact that this GPU is priced nearly 3x over what I'm comfortable spending on a GPU, so I'm not the target for this product. If I had oodles of money and no brains, I'd get one, but I've got a meager amount of money and brains, so I won't be getting one.

It isn't really the energy cost. It's the amount of heat it puts into your room, making it less comfortable, and then the cost to cool the room.
 
The 5090 FE's memory runs too hot, 94-95 degrees, which is risky, plus the noise...
 
Well, the ASUS card here draws over 600 watts. I am not trying to bash the card, but the power draw is insane, no matter how good the cooling solution is. 1200 Watt PSUs are not cheap.

Oh, no disagreement there at all, the power draw is insane. The only thing that gave me pause for my original comment was the mention of 850 watts for a single component, which I haven't seen as a continuous draw in the reviews I've seen/read (versus total system power). 1200 Watt PSUs are not cheap (I have a 1000 Watt EVGA power supply that I feel confident in), but nonetheless, it's an insane amount of power for a single card to draw, especially considering the performance increase is pretty linear between generations.
 