
NVIDIA GeForce RTX 5090 Founders Edition

I do wonder if there's any credence to the rumors that some AIBs expressed surprise at NVIDIA's final pricing structure revealed during CES.
Makes me think that the initial price was supposed to be 2499 USD instead of 1999 USD. It'd be the only way these insane AIB markups make any sense.
My guess is the AIBs are surprised at the pricing because it gives them little room for profit over the price Nvidia charges them for each chip.

Nvidia has pretty obviously been trying to close off the market to AIBs and only sell direct (cutting out the middleman). It's a primary reason EVGA quit the market during the 4000 series, and I'm sure the 5000 series puts the squeeze on even harder.
 
I am really impressed by the cooling efficiency. Incredible results. Good job, Nvidia. I wonder how much the liquid metal contributes to this. Hopefully someone will test it eventually.

Now please, other manufacturers, kindly get inspired and never again release GPUs bigger than 3 slots, okay?

As for performance, I used results from this review to calculate an all-resolution average:

[attachment: calculated all-resolution averages]

The RTX 4090 has around 18% fewer transistors and 25% fewer compute units. I used the all-games average FPS to calculate efficiency (not just Cyberpunk), and the RTX 5090 is in fact less efficient than the RTX 4090, since it consumes roughly 20% more power per frame. Maybe the price increase (+$500) is, let's say, justified, but performance-wise and efficiency-wise this is far from special. In other words, all the performance increase now comes from scaling: the more compute units, the more performance. The more you buy, the more you have, but you definitely don't save. Were the RTX 5090 priced at $1700, then you'd also get more for less. We'll see when they jump to a 3 nm or 2 nm node.
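A minimal sketch of that power-per-frame comparison; the FPS and wattage values below are placeholders chosen only to land near the ~20% figure above, not the review's measured numbers:

```python
# Minimal sketch of the power-per-frame comparison above.
# The FPS and power values are illustrative placeholders, not TechPowerUp's measurements;
# substitute the review's all-games averages to reproduce the actual numbers.

def joules_per_frame(avg_fps: float, avg_power_w: float) -> float:
    """Average energy per rendered frame: watts divided by frames per second."""
    return avg_power_w / avg_fps

rtx_4090 = joules_per_frame(avg_fps=100.0, avg_power_w=400.0)   # placeholder values
rtx_5090 = joules_per_frame(avg_fps=130.0, avg_power_w=624.0)   # placeholder values

print(f"RTX 4090: {rtx_4090:.2f} J/frame")
print(f"RTX 5090: {rtx_5090:.2f} J/frame")
print(f"RTX 5090 uses {rtx_5090 / rtx_4090 - 1:.0%} more energy per frame")
```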

What indeed is special, as I already mentioned, is the cooling efficiency of the RTX 5090 cooler. Really damn impressive. My doubts were unjustified, I must say.
Do you see a reason why the 5090 FE has a 2-slot cooler design and over 40 dB(A) of fan noise, which unfortunately crosses the threshold where noise disturbs concentration, and higher temperatures too? It's not like it was a slim-cooler design contest. Workstation PCs are big, and consumer PCs have no problem fitting a 2.5 to 3-slot cooler either.

Supposedly next-gen GeForce "Rubin" is going to use TSMC N3 (not N2 so far, unfortunately, but maybe), which would increase performance by 10-15% at the same power:
https://en.wikipedia.org/wiki/3_nm_process said:
TSMC has stated that its 3 nm FinFET chips will reduce power consumption by 25–30% at the same speed, increase speed by 10–15% at the same amount of power and increase transistor density by about 33% compared to its previous 5 nm FinFET chips.
Not great. Using N2 would give the expected average ~30% power-efficiency improvement gen-over-gen, but NV would have to jump two full nodes, from N4 (which is basically an N5) to N3 to N2, and I'm not sure N2 will be ready by then.
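A quick compounding check of those node-jump figures; the N5-to-N3 range is taken from the TSMC quote above, and the N3-to-N2 step is an assumption that the next node brings a similar gain:

```python
# Compounding two node jumps (N4/N5 -> N3 -> N2) at the same power.
# The 10-15% per-jump speed gain for N5 -> N3 is from the TSMC quote above;
# assuming (not confirmed) a similar step for N3 -> N2.
low, high = 1.10, 1.15

one_jump = (low, high)                  # N5 -> N3
two_jumps = (low ** 2, high ** 2)       # N5 -> N3 -> N2, gains compounding

print(f"One node jump:  +{one_jump[0] - 1:.0%} to +{one_jump[1] - 1:.0%} speed at the same power")
print(f"Two node jumps: +{two_jumps[0] - 1:.0%} to +{two_jumps[1] - 1:.0%} speed at the same power")
```

Two jumps land around the ~30% gen-over-gen figure mentioned above; a single jump to N3 does not.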
 
And reviewers, although noting the high price, don't bother to expose this "price creep", even though they have much better insight into all this than regular customers.

I wonder if the base models (the ones that should have been at MSRP, but will also be more expensive) will be rarer than hen's teeth, so people will eventually cave in and buy up all these overpriced models?



I doubt it. AIBs have really detailed price negotiations a year in advance, and even small changes require re-negotiation. Maybe they just know how little volume there is of this "Gaming" line, so instead of generating revenue from a large number of cards sold, they have to generate it from a smaller number: basically pre-scalping the buyers.
I kinda knew the 5090 would sell for between $1800 and $2000; $2500 was not an option unless they went for a node like TSMC N3P or N3X, with a full GB202 and 2 TB/s of bandwidth.
But $2000 is still a lot, and adding $400, $500 and even $800 for the ASUS is insane! I wonder how many they're going to sell...

The worst part of this is that the 5090 looks to be the best offering in terms of gen-to-gen performance in the 5000 series.
It was the same with Lovelace too. The 4090 was definitely the best value... it was $1600 but was a lot more powerful, had a 384-bit bus and 24 GB of VRAM.

Do you see a reason why the 5090 FE has a 2-slot cooler design and over 40 dB(A) of fan noise, which unfortunately crosses the threshold where noise disturbs concentration, and higher temperatures too? It's not like it was a slim-cooler design contest. Workstation PCs are big, and consumer PCs have no problem fitting a 2.5 to 3-slot cooler either.

Supposedly next-gen GeForce "Rubin" is going to use TSMC N3 (not N2 so far, unfortunately, but maybe), which would increase performance by 10-15% at the same power:

Not great. Using N2 would give the expected average ~30% power-efficiency improvement gen-over-gen, but NV would have to jump two full nodes, from N4 (which is basically an N5) to N3 to N2, and I'm not sure N2 will be ready by then.
Because 2-slot coolers sell better! And the 5090 FE is pretty amazing for being able to cool a 600 W GPU! Even a few years ago nobody thought it was possible...

TSMC 2 nm is probably too expensive as of now; not even Apple is using it right now! They're still on an advanced 3 nm node. I think their M5 chips will be 2 nm for sure. The M3 and M4 haven't been that much better than the M2 without raising the core count on the M4. The M5 should be much better for sure.
 
It was the same with Lovelace too. The 4090 was definitely the best value... it was $1600 but was a lot more powerful, had a 384-bit and 24GB VRAM.
At least they were good against higher-class GPUs from the 3000 series. The 4080 comfortably beat both the 3090 and 3090 Ti; heck, even the 4070 Ti matched the 3090 Ti. The 4070 was the weakest of the bunch, yet it matched the 3080. None of the 5000 cards seems able to accomplish similar feats. The 5080 and 5070 obviously won't be able to match the 4090 and 4080. The 5070 Ti has the potential to match the 4080 Super, but that's not really the same thing, as we never got a 4080 Ti and the 4080 Super was only a minor improvement over the 4080 anyway.
And this is a deliberate move from Nvidia. They used to scale their 70-class cards to half of the 90-class cards in terms of cores (3070 vs 3090), then they changed that to 4070 Ti vs 4090 for the 4000 series, and now it's the 5080 vs 5090. So they finally managed to give a 70-class card an 80-class name and charge accordingly... which automatically cascades down to the lower-class cards as well...
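As a quick sanity check of that scaling claim, here are the publicly listed CUDA core counts for each pair:

```python
# Ratio of CUDA cores between the smaller card and the flagship of each generation,
# using the publicly listed core counts, to illustrate the scaling shift described above.
pairs = {
    "3070 vs 3090":    (5888, 10496),
    "4070 Ti vs 4090": (7680, 16384),
    "5080 vs 5090":    (10752, 21760),
}

for name, (smaller, flagship) in pairs.items():
    print(f"{name}: the smaller card has {smaller / flagship:.0%} of the flagship's CUDA cores")
```

So the card that sits at roughly half the flagship's core count has indeed drifted from a 70-class name to a Ti/80-class name over three generations.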
Well, holding onto the promise of "the more you buy the more you save" is easier when you force it...

Anyway, as long as they have no competition, I can't really blame them. Anyone in their shoes would do the same.
 
At least they were good against higher-class GPUs from the 3000 series. The 4080 comfortably beat both the 3090 and 3090 Ti; heck, even the 4070 Ti matched the 3090 Ti. The 4070 was the weakest of the bunch, yet it matched the 3080. None of the 5000 cards seems able to accomplish similar feats. The 5080 and 5070 obviously won't be able to match the 4090 and 4080. The 5070 Ti has the potential to match the 4080 Super, but that's not really the same thing, as we never got a 4080 Ti and the 4080 Super was only a minor improvement over the 4080 anyway.
And this is a deliberate move from Nvidia. They used to scale their 70-class cards to half of the 90-class cards in terms of cores (3070 vs 3090), then they changed that to 4070 Ti vs 4090 for the 4000 series, and now it's the 5080 vs 5090. So they finally managed to give a 70-class card an 80-class name and charge accordingly... which automatically cascades down to the lower-class cards as well...
Well, holding onto the promise of "the more you buy the more you save" is easier when you force it...

Anyway, as long as they have no competition, I can't really blame them. Anyone in their shoes would do the same.
The 3090 was a terrible value compared to the 3080, but it had a lot more VRAM; the 4090, though, was definitely the best upgrade, and even the 5090 is a better upgrade than its other RTX 50-series counterparts! 33% more CUDA cores, 32 GB of VRAM and a lot more bandwidth, which is great for pros at least.
 
The 3090 was a terrible value compared to the 3080, but it had a lot more VRAM; the 4090, though, was definitely the best upgrade, and even the 5090 is a better upgrade than its other RTX 50-series counterparts! 33% more CUDA cores, 32 GB of VRAM and a lot more bandwidth, which is great for pros at least.
The value of the 3000 series was quite irrelevant because of the crypto boom, really. Those were bad times. But they at least brought performance to the table: the 3070 matched the 2080 Ti. 33% more CUDA cores is also nothing, really; the 3080 roughly tripled the 2080's core count and the 4090 had about 60% more than the 3090. Either way, what matters is performance.
VRAM beyond 24 GB (16 GB, even) is just for AI. As a gamer, I couldn't care less. But someone tested CP2077 at 16K resolution and apparently it requires 32 GB of VRAM, so the 5090 can run that. With horrible glitches and at 20 FPS, at that...
Anyway, I don't really care for this generation and am just waiting for the next actual leap (hopefully with some hardware breakthrough like 3D V-Cache on CPUs, not a software one).
 
The value of the 3000 series was quite irrelevant because of the crypto boom, really. Those were bad times. But they at least brought performance to the table: the 3070 matched the 2080 Ti. 33% more CUDA cores is also nothing, really; the 3080 roughly tripled the 2080's core count and the 4090 had about 60% more than the 3090. Either way, what matters is performance.
VRAM beyond 24 GB (16 GB, even) is just for AI. As a gamer, I couldn't care less. But someone tested CP2077 at 16K resolution and apparently it requires 32 GB of VRAM, so the 5090 can run that. With horrible glitches and at 20 FPS, at that...
Anyway, I don't really care for this generation and am just waiting for the next actual leap (hopefully with some hardware breakthrough like 3D V-Cache on CPUs, not a software one).
The RTX 30 series had a lot more CUDA cores on paper due to the hybrid architecture, but among all those cores a game needs roughly 35% for INT32 work, so that's a lot of FP32 left on the table! FYI, Turing's CUDA cores were all FP32 (the INT32 units were independent).
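A rough back-of-the-envelope under that ~35% figure (my simplification: it ignores scheduling and memory limits, and just illustrates the idea):

```python
# Back-of-the-envelope using the ~35 INT32 ops per 100 FP32 ops mix mentioned above.
# On the hybrid datapath, INT32 work occupies issue slots that could otherwise do FP32.
fp_ops = 100          # FP32 instructions in the assumed mix
int_ops = 35          # INT32 instructions competing for the shared FP32/INT32 datapath
issue_slots = fp_ops + int_ops

effective_fp32 = fp_ops / issue_slots
print(f"~{effective_fp32:.0%} of the advertised doubled FP32 rate is left for FP32 work")
```

Which is roughly in line with the "FP32 left on the table" point above.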
 
GPU Database needs a slight modification...

May I present:

[attached screenshot]
 
I have a suggestion for an additional test, due to complaints about melted 12VHPWR/12V-2x6 connectors and sockets on past, and possibly present and future, cards: place a thermocouple or thermometer on the 12V-2x6 socket.

I was inspired by this when looking at Guru3D's reviews, which include FLIR infrared camera heat maps. While FLIR heat maps are sometimes said to be problematic, seeing the heat maps at https://www.guru3d.com/review/review-nvidia-geforce-rtx-5090-reference-edition/page-7/ , https://www.guru3d.com/review/review-palit-geforce-rtx-5090-gamerock/page-7/ , and https://www.guru3d.com/review/review-asus-rog-geforce-rtx-5090-astral-oc-gaming/page-7/ (for the Founders Edition, Palit GameRock, and Asus ROG Astral versions of the RTX 5090, respectively) makes me wonder how much of the video card's own heat is warming the 12V-2x6 socket or plug, which could contribute to damaging or melting it. I suspect the heat is a combination of the many amps being pulled through the 12V-2x6 connection and heat from the card itself.
 
Another reviewer used an infrared camera and found that the 12V-2x6 connector gets hot, as seen at https://www.guru3d.com/review/review-nvidia-geforce-rtx-5090-reference-edition/page-7/ .

Could TechPowerUp confirm or dispute this finding by attaching a thermometer or thermocouple to the 12V-2x6 socket and then loading the card? Thermometers and other physical temperature probes should be more accurate than an infrared camera, and having a second tester confirm a hot socket or plug would help counter a possible claim that the first tester plugged it in badly. Conversely, if TechPowerUp's test results in a cool socket, that would support the claim that the other tester may have plugged the power cable in badly.
 
Another reviewer used an infrared camera and found that the 12V-2x6 connector gets hot, as seen at https://www.guru3d.com/review/review-nvidia-geforce-rtx-5090-reference-edition/page-7/ .

Could TechPowerUp confirm or dispute this finding by attaching a thermometer or thermocouple to the 12V-2x6 socket and then loading the card? Thermometers and other physical temperature probes should be more accurate than an infrared camera, and having a second tester confirm a hot socket or plug would help counter a possible claim that the first tester plugged it in badly. Conversely, if TechPowerUp's test results in a cool socket, that would support the claim that the other tester may have plugged the power cable in badly.
52 °C is not hot. These cables can go way over 100 °C.
 
... for a short period of time
 
Still better than sharing flat-out incorrect info, I suppose. When Der8auer contacted Nvidia to ask why the hotspot temperature was removed, their reply was somewhere along the lines of "oh, that sensor was bogus, but we added memory temperatures now!". We've had memory temperatures for ages.
They removed it so that you can't see the real pain in the ass at its finest.
 
Seriously? W1z doesn't set prices for GPUs. All he can do is review and critique what is delivered. And really, who are YOU calling short-sighted?
It wasn't a critique of @W1zzard, more that reviewers and users alike are all falling into the same trap and appear to be parroting the same gaslighting. While there are some aspects of this card that are indeed impressive, overall it is meh and should be treated as such. MFG shouldn't have been a main selling point, as there are far too many hidden costs with this tech that reviewers are only now exposing.

Primarily, anyone that pays the Nvidia tax is short-sighted, because all it does is embolden Nvidia into thinking they can continue to get away with this monopolistic behaviour. Case in point: Nvidia artificially limiting supply to keep prices high, and the insane prices AIBs are charging. This isn't COVID times; people can't possibly be stupid enough to pay nearly $/£3000 for a GPU.
 
I have a suggestion for an additional test, due to complaints about melted 12VHPWR/12V-2x6 connectors and sockets on past, and possibly present and future, cards: place a thermocouple or thermometer on the 12V-2x6 socket.

I was inspired by this when looking at Guru3D's reviews, which include FLIR infrared camera heat maps. While FLIR heat maps are sometimes said to be problematic, seeing the heat maps at https://www.guru3d.com/review/review-nvidia-geforce-rtx-5090-reference-edition/page-7/ , https://www.guru3d.com/review/review-palit-geforce-rtx-5090-gamerock/page-7/ , and https://www.guru3d.com/review/review-asus-rog-geforce-rtx-5090-astral-oc-gaming/page-7/ (for the Founders Edition, Palit GameRock, and Asus ROG Astral versions of the RTX 5090, respectively) makes me wonder how much of the video card's own heat is warming the 12V-2x6 socket or plug, which could contribute to damaging or melting it. I suspect the heat is a combination of the many amps being pulled through the 12V-2x6 connection and heat from the card itself.
In my opinion, on the 5090 FE the fact that the connector is positioned at the corner certainly helps keep the connector and the board thermally isolated from each other (better separation of the two thermal domains).
Probably a good thing, too, since it is not known what temperatures the PCB reaches in "normal" operation (I don't think copper thicker than 35 µm was used for the PCB traces, so I also expect a significant temperature rise on the copper).
Considering that at approximately 600 W we have about 50 A in total, that's roughly 8.3 A on each of the twelve contacts (six 12 V pins plus six ground returns) if the current is equally distributed...
Even a few milliohms are enough to cause significant power dissipation (and heat rise) at the contact material/surface...
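A rough I²R sketch of what those numbers imply; the contact resistance values used here are assumptions picked for illustration, not measurements:

```python
# Rough I^2*R estimate for the 12V-2x6 contacts described above.
total_power_w = 600.0
rail_voltage_v = 12.0
live_pins = 6                                          # six 12 V contacts (six grounds carry the return)

total_current_a = total_power_w / rail_voltage_v       # ~50 A
per_pin_current_a = total_current_a / live_pins        # ~8.3 A if shared equally

for r_contact in (0.002, 0.005, 0.010):                # assumed 2, 5, 10 milliohm contact resistance
    heat_w = per_pin_current_a ** 2 * r_contact
    print(f"{r_contact * 1000:.0f} mOhm contact -> {heat_w:.2f} W dissipated in that contact")

# Poor current sharing makes it much worse: dissipation scales with the square of the current.
print(f"20 A through 5 mOhm -> {20 ** 2 * 0.005:.1f} W in a single contact")
```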

I think the contact geometry of the connector should be completely revised to ensure a sizing with a greater operating margin.
(The 5090's power draw pushes the technical limits of this connector.)

I've also seen the technical idea of going back to drawing power through the motherboard's primary PCIe slot via an additional dedicated connector/slot... so the problem is well known!

Anyway, a technical analysis with a temperature monitor on the connector would be a good investigation...
 
Oh man, sad times are coming, eh. With fab process improvements slowing down, cards will get faster mainly by being bigger and consuming more power... Basically a 4090 Ti.

I don't get why the FE is "recommended".

Do you see a reason why the 5090 FE has a 2-slot cooler design and over 40 dB(A) of fan noise, which unfortunately crosses the threshold where noise disturbs concentration, and higher temperatures too?
A low-effort, low-volume "see, cards are available at the claimed MSRP" move by NV.

What is more interesting is that the "just sell cards for no profit" MSRP will likely apply to the lower end too.

Leaving AMD in a tricky position: the 9000 series still uses sizable chips, and TSMC's record profits are coming from... somewhere.
 
Oh man, sad times are coming, eh. With fab process improvements slowing down, cards will get faster mainly by being bigger and consuming more power... Basically a 4090 Ti.

I don't get why the FE is "recommended".


A low-effort, low-volume "see, cards are available at the claimed MSRP" move by NV.

What is more interesting is that the "just sell cards for no profit" MSRP will likely apply to the lower end too.

Leaving AMD in a tricky position: the 9000 series still uses sizable chips, and TSMC's record profits are coming from... somewhere.

We're already there. They can't even feasibly make something bigger or more power hungry than a 5090.
 
Interesting to think that Nvidia did this once before: going from the Titan Black and 780 Ti (fully enabled GK110(b) parts) to the Titan X (fully enabled GM200), with the same 250 W envelope, the same process node, a smaller increase in the number of CUDA cores, similar clock speeds, and the same memory type and bus width, yet obtaining a larger increase in performance. They really did work some magic with Maxwell.

[attached screenshots, with sources cited in the images]
 

It wasn't a critique of @W1zzard, more that reviewers and users alike are all falling into the same trap and appear to be parroting the same gaslighting.
Yeah, and that's what I'm calling short-sighted. Seriously, think it over for a moment...

Primarily, anyone that pays the Nvidia tax is short-sighted
Or it's because Nvidia GPUs are the best on offer and people want the best-performing cards. This is not rocket science. It is in fact very simple thinking that does not need a slide rule to figure out.
 
Or it's because Nvidia GPUs are the best on offer and people want the best-performing cards. This is not rocket science. It is in fact very simple thinking that does not need a slide rule to figure out.
Yeah, if you want the best, you know it will never be cheap. One cannot buy a Lamborghini, Ferrari, Bugatti, etc. for cheap; the same goes for everything. But it is true that some brands do overcharge (because they can and/or are too greedy)...
 
Yeah, if you want the best, you know it will never be cheap. One cannot buy a Lamborghini, Ferrari, Bugatti, etc. for cheap; the same goes for everything
Haha, best at what? At spamming the market with electronic garbage: outdated, low-quality SMs at exorbitant prices? With disproportionate power consumption? With zero generational improvement? It seems like it was just recently that Intel tried this with RPL/RPL-R and literally burned out with those CPUs. Despite this, the 13900K/14900K got 5 stars and Editor's Choice...
 
Whatever. You do you.

That has got to be one of the silliest analogies I've read or heard. Seriously, if you're gonna make a retort, keep it in the realm of reality.
No, that's just logic... Not everyone can afford the best products, and that's totally normal. If Nvidia decides to set a very high price, that's their right. People who cannot afford it just won't buy it, and people who can will do whatever they want. Sure, it's not $1,000,000 like a Ferrari, but most people think spending $2,000 on a GPU alone is absolutely crazy (at least gamers; for pros that's totally different).
 