Monday, July 15th 2024

NVIDIA GeForce RTX 50 Series "Blackwell" TDPs Leaked, All Powered by 16-Pin Connector

In the run-up to NVIDIA's upcoming GeForce RTX 50 Series of GPUs, codenamed "Blackwell," a power supply manufacturer has apparently leaked the power configurations of all SKUs. Seasonic operates an online power supply wattage calculator that lets users configure a system and get a PSU recommendation, so its database is routinely populated with upcoming CPU and GPU SKUs to cover the huge variety of components. This time the listings include the upcoming GeForce RTX 50 series, from the RTX 5050 all the way up to the flagship RTX 5090. Starting at the bottom, the GeForce RTX 5050 is expected to carry a 100 W TDP. Its bigger brother, the RTX 5060, bumps the TDP to 170 W, 55 W higher than the previous-generation "Ada Lovelace" RTX 4060.

The GeForce RTX 5070 sits in the middle of the stack with a 220 W TDP, a 20 W increase over its Ada predecessor. For the higher-end SKUs, NVIDIA has prepared the GeForce RTX 5080 and RTX 5090 at 350 W and 500 W, respectively, also a jump over Ada: 30 W more for the RTX 5080 and 50 W more for the RTX 5090. Interestingly, NVIDIA this time wants to unify power delivery across the entire family with the 16-pin 12V-2x6 connector, following the updated PCIe 6.0 CEM specification. The across-the-board increase in power requirements for the "Blackwell" generation is notable, and we are eager to see whether the performance gains are large enough to keep efficiency moving forward.
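For a quick check on the math, here is a minimal sketch (Python) that tabulates the leaked "Blackwell" figures against their Ada Lovelace counterparts; the Blackwell TDPs are the rumored values from this leak, while the Ada numbers are NVIDIA's published board power ratings.

```python
# Rumored "Blackwell" TDPs from the Seasonic wattage-calculator leak (watts).
blackwell_tdp = {
    "RTX 5050": 100,
    "RTX 5060": 170,
    "RTX 5070": 220,
    "RTX 5080": 350,
    "RTX 5090": 500,
}

# Official "Ada Lovelace" total board power for the matching tiers (watts).
# The RTX 5050 has no direct desktop Ada counterpart, so it is omitted.
ada_tdp = {
    "RTX 5060": 115,  # RTX 4060
    "RTX 5070": 200,  # RTX 4070
    "RTX 5080": 320,  # RTX 4080
    "RTX 5090": 450,  # RTX 4090
}

for sku, tdp in blackwell_tdp.items():
    prev = ada_tdp.get(sku)
    note = f"+{tdp - prev} W vs. Ada" if prev else "no direct Ada counterpart"
    print(f"{sku}: {tdp} W ({note})")
```

Running it reproduces the 55 W, 20 W, 30 W, and 50 W generational increases noted above.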
Sources: @Orlak29_ on X, via VideoCardz

168 Comments on NVIDIA GeForce RTX 50 Series "Blackwell" TDPs Leaked, All Powered by 16-Pin Connector

#76
wolf
Better Than Native
oxrufiioxoI haven't played a single game where FSR was OK at 1440p. I honestly thought it was broken on NVIDIA GPUs, like, man, it can't be this bad... So I picked up a 6700 XT just to test it out locally on an all-AMD system, but couldn't tell it apart from either of my NVIDIA systems.

The disocclusion artifacts and flickering on fine detail are too obvious to me in the majority of what I play to use it.
Makes a lot of sense; not all upscaling is created equal. When I hear "upscaling sucks," I'm like ???? because it's the polar opposite of my ongoing long-term experience using DLSS at 4K. 1440p with FSR? Yeah, I can see why people say it's a blurry mess, and I agree. Please be specific, people, it matters. But hey, the FSR stills look sharp, right? So that's uhh... something.
Posted on Reply
#77
64K
nguyenI can assure you that the 5090 will barely be fast enough once devs push for higher fidelity in next-gen games. It has been a constant cycle of hardware and software one-upping each other since forever.
True. Upgrading gaming hardware is a never-ending path, whether on PC or console. On PC you get to choose when and by how much, which is what I like. Having choice.

Lumen comes to mind for software being a GPU killer.

What's bad is when a new game doesn't look any better than games from 5 or even 10 years ago but the requirements are up considerably; that's usually the developer squandering resources, but it happens. Always has and probably always will.
Posted on Reply
#78
nguyen
64KTrue. Upgrading gaming hardware is a never-ending path, whether on PC or console. On PC you get to choose when and by how much, which is what I like. Having choice.

Lumen comes to mind for software being a GPU killer.

What's bad is when a new game doesn't look any better than games from 5 or even 10 years ago but the requirements are up considerably; that's usually the developer squandering resources, but it happens. Always has and probably always will.
When developers rush their work in order to move on to something else, resources will always be squandered. Just take FromSoftware for example: their games are insanely unoptimized, yet they still make good games.
Posted on Reply
#79
ARF
stimpy88The 4090 is what the 4080 should have been.
Correct, since the RTX 4090 doesn't use the full shader count present in AD102.
And an RTX 4080 Ti is missing, an RTX 4090 Ti is missing, a new RTX Titan is missing.

All thanks to the "healthy" AMD competition...
stimpy88UE5 games are just too much for the 40x0 series at 4K without DLSS cheating.
What's the problem? Just use low or medium settings.
Posted on Reply
#80
londiste
ARFCorrect, since the RTX 4090 doesn't use the full shader count present in AD102.
And an RTX 4080 Ti is missing, an RTX 4090 Ti is missing, a new RTX Titan is missing.
All thanks to the "healthy" AMD competition...
I am sure the lack of SKUs using the full die has nothing to do with AMD. These are big dies on a cutting-edge process, and we do not know all that much about yields there, especially with dies this large. It is very likely that AD102 - and perhaps even AD103 - yields for completely functional dies are poor. Not to mention the workstation/datacenter/AI cards built on the same dies, which command much higher prices.

A different aspect of the AMD competition is interesting, though. The Navi 31 GCD (RX 7900 XTX) is only slightly larger than AD104 (RTX 4070 Ti). Yes, there are the MCDs and packaging costs, but the entire point of those is to bring cost down. It does not seem to have played out quite as well as could be expected.
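For a rough feel of why fully enabled dies become scarce as area grows, here is a minimal sketch using the classic Poisson yield model, Y = exp(-D*A). The die areas are approximate published figures, and the defect density is a purely illustrative placeholder, not a real foundry number, so treat the output as directional only.

```python
import math

# Approximate die areas in cm^2 (published figures, rounded).
dies_cm2 = {
    "AD102 (RTX 4090)": 6.09,
    "AD103 (RTX 4080)": 3.79,
    "AD104 (RTX 4070 Ti)": 2.95,
    "Navi 31 GCD (RX 7900 XTX)": 3.04,
}

# Hypothetical defect density (defects per cm^2) for a leading-edge node --
# an illustrative placeholder, not a published TSMC figure.
defect_density = 0.09

for name, area in dies_cm2.items():
    # Poisson model: probability that a die has zero defects, i.e. is fully functional.
    fully_functional = math.exp(-defect_density * area)
    print(f"{name}: ~{fully_functional:.0%} of dies fully functional")
```

Even at identical defect density, the largest die loses a far bigger share of fully functional candidates, and the best of those bins go to workstation and datacenter products first.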
Posted on Reply
#81
stimpy88
nguyenI can assure you that the 5090 will barely be fast enough once devs push for higher fidelity in next-gen games. It has been a constant cycle of hardware and software one-upping each other since forever. If you like high FPS or better efficiency, better turn on that DLSS cheat ;)
True, but I think the 40x0 series aged like milk. The 4080 is not fast enough for 4K, with many later games struggling to hit 60 fps. The 4090 was nice, but ludicrous.
ARFCorrect, since the RTX 4090 doesn't use the full shader count present in AD102.
And an RTX 4080 Ti is missing, an RTX 4090 Ti is missing, a new RTX Titan is missing.

All thanks to the "healthy" AMD competition...

What's the problem? Just use low or medium settings.
I wouldn't pay $1000/$1100 to play at low/medium with that turning into crap and a 40fps "experience" in another year. Sorry.

But hopefully the 5080 will do 60 fps for a year or two; especially if they make some improvements to DLSS, I might try it out again.
Posted on Reply
#82
chrcoluk
londisteThat is just MSI doing bad VRM design.
Power-delivery spikes are normal; the short 10-20 ms spikes that sites including TPU are measuring these days normally go a good 30% over average for a decent design. And that is OK.


It is simply about consolidation of standards. A 6-pin is more than enough for a low-power card, but it is not enough for midrange, where you would need an 8-pin, and the higher end needs 2-3 of those...
Yes, the 16-pin has all the sense stuff, and there are - or should be - limits based on what the PSU can provide, but it is still a more elegant solution.
We were already consolidated; Nvidia decided to break from it.

All they need to do on their nuclear reactor cards is use multiple connectors.

Dread to think how much motherboard slot space the new cards will consume.
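Taking the ~30% transient figure from the quote above at face value, here is a rough sketch of what the leaked TDPs would imply for millisecond-scale peaks against the 600 W sustained rating of a single 12V-2x6 connector; both the transient factor and the per-SKU TDPs are quoted/rumored numbers, not measurements.

```python
CONNECTOR_RATING_W = 600   # sustained rating of one 16-pin 12V-2x6 connector
TRANSIENT_FACTOR = 1.30    # ~30% excursions over average, per the quoted post

leaked_tdp = {"RTX 5070": 220, "RTX 5080": 350, "RTX 5090": 500}

for sku, tdp in leaked_tdp.items():
    peak = tdp * TRANSIENT_FACTOR
    headroom = CONNECTOR_RATING_W - peak
    print(f"{sku}: ~{peak:.0f} W peak, {headroom:+.0f} W vs. the 600 W rating")
```

ATX 3.x supplies are meant to ride out short excursions above their sustained ratings, so a momentary peak over 600 W is not automatically a failure, but it does show why the sense pins and PSU-side excursion handling matter most at the top of the stack.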
Posted on Reply
#83
bitsandboots
R0H1TIt will be better (perf/w) because of GDDR7, though the biggest difference would come from better "IPC" if any!
ratirtSomething tells me this NV card generation will not have a staggering performance increase over the 4000 series.
We will see when the cards are released, but that is my guess.
I suspect they'll be a hit with home AI folks simply due to GDDR7. Well, if the price is "right". And by that I mean as long as it doesn't get worse than it already is. But who's going to stop NVIDIA from doing that at this point?
TheDeeGeeLet's see how many people with their big mouths actually leave NVIDIA due to a connector.

I bet the same amount moving to Linux... aka NONE!
I remain on a 3090 because I don't want my computer to melt, so at least me.
Also, the Linux adoption rate has been surprisingly high within the last few years. It's surpassed all of Win 7 + Mac on Steam, for example.
I don't expect it to get mainstream but I'm shocked it's gotten that far.
Posted on Reply
#84
TheDeeGee
qwerty_leshjust reiterating the sentiment from the earlier discussion on the connector - RIP to all of the ATX 3.0 buyers out there.
Why exactly?
Posted on Reply
#85
Chomiq
stimpy88So 5080's will join the burning 4090 club! cool! :D
Since when does 350 W melt a 650 W-rated connector?

Also, Zotac 4080 Super AMP:
Posted on Reply
#86
JustBenching
Thank the gods they're keeping the 16-pin. Not buying a multi-8-pin card again.
Posted on Reply
#87
basco
nvidia versus amd
16-pin versus 6/8-pin
what's next, please? i'll grab some popcorn
Posted on Reply
#88
TumbleGeorge
ratirtI'm talking about the staggering perf increase not a performance increase
I need a much bigger (real) increase to be staggered. Please, no more fake teraflops!
Posted on Reply
#89
stimpy88
ChomiqSince when does 350 W melt a 650 W-rated connector?

Also, Zotac 4080 Super AMP:
Comprehension is king. NorthridgeFix comprehends just fine.
Posted on Reply
#90
Chomiq
stimpy88Comprehension is king. NorthridgeFix comprehends just fine.
Show me a 4080 with a melted 12VHPWR connector.

As for NorthridgeFix:

As for 12V-2x6:
Posted on Reply
#91
Steevo
Ray tracing math is power intensive.
Posted on Reply
#92
MxPhenom 216
ASIC Engineer
qwerty_leshjust reiterating the sentiment from the earlier discussion on the connector - RIP to all of the ATX 3.0 buyers out there.
????
Posted on Reply
#93
londiste
SteevoRay tracing math is power intensive.
Yes and no. RT units are specialized and, as such, very power-efficient at what they do.
Everything else is power-intensive, especially the rumored 30-50% more shaders at much higher frequencies :D
Posted on Reply
#96
AnotherReader
nguyenPower consumption goes down when tensor cores are active (DLSS)?
Good spot! I suspect it's due to the load going down on the other areas of the GPU. The base image is rendered at a lower resolution, which isn't going to stretch the 4090.
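A minimal sketch of the resolution math behind that: the per-axis scale factors below are the commonly documented ones for DLSS quality modes, and the shaded pixel count scales with the square of the factor.

```python
out_w, out_h = 3840, 2160  # 4K output

# Commonly documented per-axis render scales for DLSS quality modes.
modes = {
    "Quality": 0.667,
    "Balanced": 0.58,
    "Performance": 0.50,
    "Ultra Performance": 0.333,
}

out_pixels = out_w * out_h
for mode, scale in modes.items():
    render_w, render_h = int(out_w * scale), int(out_h * scale)
    share = (render_w * render_h) / out_pixels
    print(f"{mode}: {render_w}x{render_h} internal (~{share:.0%} of output pixels)")
```

Shading well under half the pixels leaves most of the GPU lightly loaded, which more than offsets the extra tensor-core work.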
Posted on Reply
#97
MxPhenom 216
ASIC Engineer
TheDeeGeeWhy exactly?
He doesn't know either. There's nothing really keeping a user of an ATX 3.0 PSU from using these new cards. The connector, even though it's not the newer revision adopted with ATX 3.1, will still work.
Posted on Reply
#98
starfals
If the 5080 is a fail, just like the 4080... I swear I'll just buy the first next-gen AMD card I find. Getting tired of NVIDIA putting all the power into the 90-class cards.
Posted on Reply
#99
Vayra86
Substantial power bump per tier? Completely skippable then

looks a lot like Ada SuperDuper
Posted on Reply
#100
oxrufiioxo
Vayra86Substantial power bump per tier? Completely skippable then

looks a lot like Ada SuperDuper
It's a 10%-ish increase, likely due to them wanting to push it a bit since the node isn't significantly better.

Ada can be power-tuned really aggressively with almost no loss in performance; I'm guessing it'll be the same with this.

My 4090 at 350 W loses almost no performance, definitely not enough to be noticed without using an overlay; at 320 W it's less than 5%.

Guessing the 5090 is going to be some monstrosity that is borderline HPC with a price to match.
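Taking those figures at face value (they are anecdotal, not benchmark data), here is a rough sketch of what the power cap buys in efficiency; the relative-performance values are assumptions read off the claims above.

```python
# Anecdotal figures from the post above -- assumptions, not benchmark results.
configs = {
    "Stock 450 W": (450, 1.00),
    "Capped 350 W": (350, 0.98),  # "almost no performance" lost
    "Capped 320 W": (320, 0.95),  # "less than 5%" lost
}

base_watts, base_perf = configs["Stock 450 W"]
for name, (watts, perf) in configs.items():
    eff_gain = (perf / watts) / (base_perf / base_watts) - 1
    print(f"{name}: {perf:.0%} performance, {eff_gain:+.0%} perf/W vs. stock")
```

On these assumptions a simple power cap yields roughly 25-35% better perf/W, which suggests the stock limits are more headroom than necessity.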
Posted on Reply