
NVIDIA GeForce RTX 5000 Series “Blackwell” TDPs Leaked, Entire Lineup Unified with 12+4 Pin Power Connector

My RTX 4070 refuses to budge from 195W under full load. I even tried undervolting it; it doesn't give a damn, it just keeps going up to 195W.
My 4070 Super UV'd doesn't go over 185W no matter what, usually stays below 170, but you're saying a 4070 won't go below 195W?
 
Unbelievable! Are people going to pay 4-5k USD if not more for this card!? :roll: :kookoo:
LOL, this card is not going to be $4-5k. Its Quadro counterpart will be, but not the GeForce. Don't be ridiculous.
 
I'm a little out of the loop since I prefer AMD, has the 12V-2×6 connector solved the failure issues of the 12VHPWR? Because a repeat performance of $2000+ GPUs burning would be extremely disappointing.
Really, very little has changed in how it handles wear and imperfect contact.
The power connector is still the same mess: very high current through tiny contacts, and high total current/power density from cramming them very tightly together, with no real safety margin.
Versus older connectors that didn't push the maximums and had looser spacing, letting any heat dissipate better.
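To put rough numbers on the contact loading, using the commonly quoted ratings (600 W for 12V-2×6 across six 12 V pins, 150 W for an 8-pin across three 12 V circuits; treat these as ballpark assumptions):

```python
# Back-of-the-envelope per-pin current at the commonly quoted connector ratings.
def amps_per_pin(watts, pins, volts=12.0):
    return watts / volts / pins

print(f"12V-2x6 (600 W, 6 pins): {amps_per_pin(600, 6):.1f} A per pin")  # ~8.3 A
print(f"8-pin   (150 W, 3 pins): {amps_per_pin(150, 3):.1f} A per pin")  # ~4.2 A
```

Roughly double the current per contact, through smaller pins, packed closer together.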

Mr. "Safety is pure waste" Rush Job would be proud...
 
I like it, nice and tidy, only 1 cable to run.. pretty sweet.

I have way more than 30 insertion cycles on my cable.

Toight like a tiger, maybe other PSU brands use chintzy plastic?
 
My 4070 Super UV'd doesn't go over 185W no matter what, usually stays below 170, but you're saying a 4070 won't go below 195W?

My Asus GeForce RTX 4070 Dual OC won't go lower at max when I try to UV with MSI Afterburner; I tested it earlier this year when I got the card.

Even tried a locked voltage, which should bring it down, but it doesn't; it just goes up to 195W at max when it's under heavy use.
 
I don't understand how we're swinging so far again with power. I mean, I had a 1200W PSU over a decade ago, back when I planned on CrossFiring some 2900 XTs (yeah yeah, fail card). Then I dropped down to 1050W, and I'm now running an 850W Platinum.
Why, when the nodes are shrinking and should hypothetically be more efficient, have the last few gens become so power hungry again? I mean, I know my 850W is "enough", but the fact is we're back to 1000, 1200 and even 1600W PSUs, and GPUs like this have apparently already saturated the stupid connector so they've had to add another...
I think there needs to be a distinction between "power hungry" and "pulling a lot of power". I bet a paycheck the 5090 will be the most efficient card ever produced. Whoever is primarily concerned with power should just grab a 5090 and limit it to 250-300W or whatever they feel like. It will still be the fastest card around with the best perf/watt. But for those who value performance above power, these cards are capable of handling more power to spit out more fps. Nothing wrong with that either. I'm running my 4090 locked to 320 watts, but I wouldn't mind if it were also capable of pulling 2000 watts for when/if I require the extra performance. I really don't get what the issue is.

My Asus GeForce RTX 4070 Dual OC won't go lower at max when I try to UV with MSI Afterburner; I tested it earlier this year when I got the card.

Even tried a locked voltage, which should bring it down, but it doesn't; it just goes up to 195W at max when it's under heavy use.
Of course undervolting doesn't lower power draw; you're just moving the voltage/frequency curve to the left, meaning your card tries to hit higher clock speeds. So by UV'ing, you're just gaining performance at the same power. If you want to lower power, power limit it.
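For anyone who wants to see or script that cap outside Afterburner, here's a rough sketch using NVML, the same interface nvidia-smi talks to, via the nvidia-ml-py (pynvml) package; GPU index 0, the 150 W target, and admin rights for the last call are all assumptions:

```python
# Rough sketch: read current draw / enforced limit, then cap the board.
# Assumes nvidia-ml-py ("pip install nvidia-ml-py") and GPU index 0;
# setting the limit needs admin/root rights.
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

draw_w  = pynvml.nvmlDeviceGetPowerUsage(gpu) / 1000         # reported in mW
limit_w = pynvml.nvmlDeviceGetEnforcedPowerLimit(gpu) / 1000
print(f"drawing {draw_w:.0f} W of a {limit_w:.0f} W limit")

# Cap the board at 150 W (value is in milliwatts) -- hypothetical target.
pynvml.nvmlDeviceSetPowerManagementLimit(gpu, 150_000)

pynvml.nvmlShutdown()
```

The power limit is what actually bounds the draw; the UV curve just decides how much clock you get within that bound.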
 
My Asus GeForce RTX 4070 Dual OC won't go lower at max when I try to UV with MSI Afterburner; I tested it earlier this year when I got the card.

Even tried a locked voltage, which should bring it down, but it doesn't; it just goes up to 195W at max when it's under heavy use.
I do that manually in MSI AB; I managed to hit the out-of-the-box clocks (2730MHz) at 985mV and cut power from 205-210W to 170-180W. You need the voltage/frequency curve editor for that (Ctrl+F).
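Rough math on why that drop is about what you'd expect, assuming dynamic power scales roughly with V² at a given clock and a stock voltage somewhere around 1.05 V (that stock figure is an assumption; check your own card's curve):

```python
# Crude estimate: dynamic power ~ C * V^2 * f, so at the same ~2730 MHz
# the voltage term predicts most of the savings.
# 0.985 V is from the post above; ~1.05 V stock is an assumption.
v_stock, v_uv = 1.05, 0.985
scale = (v_uv / v_stock) ** 2
print(f"predicted power scale: {scale:.2f}")                      # ~0.88
print(f"205-210 W -> roughly {205*scale:.0f}-{210*scale:.0f} W")  # ~180-185 W
```

The reported 170-180W is a bit lower still, which is plausible once boost behaviour and the power limit come into play.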
 
FTFY

Let's reiterate the fact that the 10-series x80 ran on a 180W TDP.
Ada's 'super efficient' 4080 wants a 320W TDP. On a 'new node'.

The only reason Ada looked good is that Ampere was such a shitshow in efficiency and Turing was just Pascal with shitty RT.
4080 is 3.5x faster than the 1080 in raw raster, much faster than that if RT or AI stuff is used, and currently is the most efficient card in existence.

Raw power draw is not a measure of efficiency despite it being used for shock effect.
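For what it's worth, the back-of-the-envelope using the two numbers floating around in this thread (3.5x raster, 320W vs 180W TDP):

```python
# Perf-per-watt gain implied by the figures quoted in this thread.
speedup, tdp_new, tdp_old = 3.5, 320, 180
gain = speedup / (tdp_new / tdp_old)
print(f"~{gain:.1f}x the performance per watt")  # ~2.0x
```

So the TDP went up a lot, and the work done per watt still roughly doubled; both things are true at once.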
 
FTFY

Let's reiterate the fact that the 10-series x80 ran on a 180W TDP.
Ada's 'super efficient' 4080 wants a 320W TDP. On a 'new node'.

The only reason Ada looked good is that Ampere was such a shitshow in efficiency and Turing was just Pascal with shitty RT.
I've always been a huge fan of buying a mid-sized die for efficiency. I absolutely loved the GTX 1080 and how it easily beat my 1.5GHz 980 Ti at over 100W less power (hence my sig; I wish things could go back to the Pascal days). Blackwell is shaping up to be an Ampere repeat. If the 5070 Ti has a 350W TDP, I'll need to decide whether I care about upgrading all that much. The alternatives look kinda grim, with 9070 XT leaks putting it closer to the 4070 Ti than to the 4080 it was rumored to match earlier.
 
What happened to the efficiency that Maxwell and Pascal shined with? A 500W TDP for the flagship, really?
 
4080 is 3.5x faster than the 1080 in raw raster, much faster than that if RT or AI stuff is used, and currently is the most efficient card in existence.

Raw power draw is not a measure of efficiency despite it being used for shock effect.
Nobody said that; that's just what you want to make of it so you can live the illusion that all is well here.

The fact is, TDP has virtually doubled to get that 3.5x faster.
At the same time, your games still run at similar FPS.
One might even debate whether they look better than they used to on the 1080, honestly, given the current TAA junk.

Such progress. Have you saved the world yet with your hyper-efficient GPUs? Or is the exact opposite happening, and is the reality that all efficiency gained is just a free pass to use ever more power? What's the energy footprint YoY of all GPUs on the globe? An interesting thing to place alongside your ooh-aaah about the most efficient GPU in existence. It gets even funnier if we factor in the cost and footprint of the fabs and node progress required to obtain that 3.5x. I think if you put it all together, the net sum might be that we've actually just wasted an immense amount of resources so we can still barely run 4K - not much has changed since 2017.

Stop bullshitting yourself. There should be a shock effect when you consider the x80 has doubled its TDP over 3-4 generations, when in the past it never did that.

Do you think Pascal will be more efficient than the 5090?
Doesn't matter. What matters is whether that single gaming system with a top-end GPU in it uses 300-350W (2016) or over 600W (2024). And that's not touching the OC button, mind you; that's running both systems 'conservatively'. In the end, physics hasn't changed: you're still getting an ever increasing energy bill and heat production. Efficient or not.
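Putting a rough number on the bill side of that, under assumed usage and rates (3 hours of gaming a day, 0.30 per kWh; both figures are assumptions, swap in your own):

```python
# Yearly cost of the extra draw of a ~600 W system vs a ~325 W one.
# 3 h/day and 0.30 per kWh are assumptions; adjust for your own situation.
extra_w   = 600 - 325
hours     = 3 * 365
extra_kwh = extra_w * hours / 1000
print(f"~{extra_kwh:.0f} kWh extra per year, ~{extra_kwh * 0.30:.0f} per year at 0.30/kWh")
```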
 
Nobody said that; that's just what you want to make of it so you can live the illusion that all is well here.

The fact is, TDP has virtually doubled to get that 3.5x faster.
At the same time, your games still run at similar FPS.
One might even debate whether they look better than they used to on the 1080, honestly, given the current TAA junk.

Such progress. Have you saved the world yet with your hyper-efficient GPUs? Or is the exact opposite happening, and is the reality that all efficiency gained is just a free pass to use ever more power? What's the energy footprint YoY of all GPUs on the globe? An interesting thing to place alongside your ooh-aaah about the most efficient GPU in existence. It gets even funnier if we factor in the cost and footprint of the fabs and node progress required to obtain that 3.5x. I think if you put it all together, the net sum might be that we've actually just wasted an immense amount of resources so we can still barely run 4K - not much has changed since 2017.

Stop bullshitting yourself. There should be a shock effect when you consider the x80 has doubled its TDP over 3-4 generations, when in the past it never did that.
Remind me of the TDP of the first GeForce card and the ones five generations later?

TDP goes up; it's normal.

Mind you, your comment is rambling all over the place, so.

Doesn't matter. What matters is whether that single gaming system with a top-end GPU in it uses 300-350W (2016) or over 600W (2024). And that's not touching the OC button, mind you; that's running both systems 'conservatively'. In the end, physics hasn't changed: you're still getting an ever increasing energy bill and heat production. Efficient or not.
Wrong. Efficiency is work done / energy used, so if we all stuck with your beloved 1080-class GPUs, the world would actually use more power, as it would take longer to do the same computing tasks. Gaming is what, 5% of the market? And that's assuming you don't frame-rate lock to your monitor's refresh rate, in which case new GPUs use less power regardless of their peak power draw potential.

Interesting that you're not commenting on new tech like DLSS enabling lower-resolution rendering with similar IQ, equating to further efficiency on top of the generational improvements. It's huge for handhelds like the Switch 2.

But old good, new bad, right?

If you're so concerned about power draw and efficiency, why are you using an AMD card?
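The frame-cap point in numbers, with purely illustrative figures (the wattages below are made up just to show energy = power × time for a fixed workload):

```python
# Same fixed workload (say, 2 hours at a 60 fps cap): the faster card
# loafs through each frame, so average power -- and session energy -- can be
# lower even though its peak power limit is higher. Figures are illustrative.
def session_energy_wh(avg_power_w, hours):
    return avg_power_w * hours

old_card = session_energy_wh(avg_power_w=180, hours=2)  # near its limit to hold 60 fps
new_card = session_energy_wh(avg_power_w=120, hours=2)  # cruising at the same 60 fps
print(f"old: {old_card:.0f} Wh, new: {new_card:.0f} Wh")
```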
 
Wrong. Efficiency is work done / energy used, so if we all stuck with your beloved 1080-class GPUs, the world would actually use more power, as it would take longer to do the same computing tasks.

Interesting that you're not commenting on new tech like DLSS enabling lower-resolution rendering with similar IQ, equating to further efficiency on top of the generational improvements. It's huge for handhelds like the Switch 2.

But old good, new bad, right?
The reality would rather be that a lot of computing tasks wouldn't happen to begin with, such as the ones we now call AI. It's not old good, new bad; it's about realizing that sometimes enough is enough, and ever more is a certain road to oblivion. That's all this is about. There's a point where stuff starts to bite you in the ass, and we're solidly in that territory right now, globally.
 
The reality would rather be that a lot of computing tasks wouldn't happen to begin with, such as the ones we now call AI. It's not old good, new bad; it's about realizing that sometimes enough is enough, and ever more is a certain road to oblivion.
Good luck telling that to hardware/software manufacturers, or the consumers/enterprise desiring more performance. Things get faster and aren't going to stop.
 
Good luck telling that to hardware/software manufacturers, or the consumers/enterprise desiring more performance. Things get faster and aren't going to stop.
Correct, and I do not applaud that anymore, nor do I pull the wool over my eyes telling everyone how incredibly efficiently I can do things now compared to yesteryear. All I see is that I'm still doing the very same things I did in 2016. There are a few more pixels. The screen's a bit bigger. My energy bill is a bit higher. Parts are (far) more expensive.
 
Correct, and I do not applaud that anymore, nor do I pull the wool over my eyes telling everyone how incredibly efficiently I can do things now compared to yesteryear. All I see is that I'm still doing the very same things I did in 2016. There are a few more pixels. The screen's a bit bigger. My energy bill is a bit higher. Parts are (far) more expensive.
Play the same games at the same settings and resolution, frame capped, on your new GPU vs an old one, then try to tell me new GPUs aren't more efficient.

There is absolutely nothing stopping you from doing this, but if you want to play new games at new settings, there is a cost.
 
Play the same games at the same settings and resolution, frame capped, on your new GPU vs an old one, then try to tell me new GPUs aren't more efficient.

There is absolutely nothing stopping you from doing this, but if you want to play new games at new settings, there is a cost.
I've said GPUs use more power, not that they haven't gained efficiency. Stop putting words in my mouth ;)

You're not wrong though.
 
I have no problem with increased power use at the top end, even creeping increases in power use at an arbitrary numerical branding label like "80 series". Because that's all

Arbitrary.

If a whale wants higher performance and the cost is higher power, then go ahead and pay. I greatly prefer efficiency, low power use, and low cost per frame. If that has migrated from the "80" class to the "70" or "60" class, then so be it. The 4070/Super and 4060 Ti are good GPUs for this; they have excellent efficiency with very good performance (yes, even the 4060 Ti) and coincidentally hover around the old 1080's power use. They also hover around the 1080's price, FWIW.

Buy the product on its measured merits, not its branding or any artificial class-based argument.

And yes, I put my pesos where my mouth is: amongst other GPUs, I have a 4060 Ti, which I bought primarily for its efficiency. It was a toss-up with the 4070 Super, but that was this past summer and I preferred to spend less at the time and wait on the 5070's efficiency, as it'll arrive sooner than the 5060/Ti.
 
Nobody said that; that's just what you want to make of it so you can live the illusion that all is well here.

The fact is, TDP has virtually doubled to get that 3.5x faster.
At the same time, your games still run at similar FPS.
One might even debate whether they look better than they used to on the 1080, honestly, given the current TAA junk.

Such progress. Have you saved the world yet with your hyper-efficient GPUs? Or is the exact opposite happening, and is the reality that all efficiency gained is just a free pass to use ever more power? What's the energy footprint YoY of all GPUs on the globe? An interesting thing to place alongside your ooh-aaah about the most efficient GPU in existence. It gets even funnier if we factor in the cost and footprint of the fabs and node progress required to obtain that 3.5x. I think if you put it all together, the net sum might be that we've actually just wasted an immense amount of resources so we can still barely run 4K - not much has changed since 2017.

Stop bullshitting yourself. There should be a shock effect when you consider the x80 has doubled its TDP over 3-4 generations, when in the past it never did that.


Doesn't matter. What matters is whether that single gaming system with a top-end GPU in it uses 300-350W (2016) or over 600W (2024). And that's not touching the OC button, mind you; that's running both systems 'conservatively'. In the end, physics hasn't changed: you're still getting an ever increasing energy bill and heat production. Efficient or not.
I kind of agree, although I think Nvidia's x90 cards are a replacement for high-end SLi, and not a single high-end card like the 1080.
 
I think there needs to be a distinction between "power hungry" and "pulling a lot of power". I bet a paycheck the 5090 will be the most efficient card ever produced. Whoever is primarily concerned with power should just grab a 5090 and limit it to 250-300W or whatever they feel like. It will still be the fastest card around with the best perf/watt. But for those who value performance above power, these cards are capable of handling more power to spit out more fps. Nothing wrong with that either. I'm running my 4090 locked to 320 watts, but I wouldn't mind if it were also capable of pulling 2000 watts for when/if I require the extra performance. I really don't get what the issue is.
You can pretty easily take a 4090 and run it at 225W. AFAIK changing the power limit still has a floor of 50%, so not under that. It'll be very, very efficient, and the performance drop will be surprisingly small considering it literally uses half the power. The problem? The manufacturer has no incentive to sell you the same GPU/card for a lower price. So if you are willing to pay more, there are very cool, efficient solutions available :)
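You can check where that floor sits on your own card through NVML; a quick sketch, assuming the nvidia-ml-py (pynvml) package and GPU index 0:

```python
# Query the allowed power-limit range; on many cards the minimum ends up
# somewhere around half the default limit (varies by model and BIOS).
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

min_mw, max_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(gpu)
default_mw = pynvml.nvmlDeviceGetPowerManagementDefaultLimit(gpu)
print(f"limit range: {min_mw/1000:.0f}-{max_mw/1000:.0f} W "
      f"(default {default_mw/1000:.0f} W)")

pynvml.nvmlShutdown()
```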

Of course undervolting doesn't lower power draw; you're just moving the voltage/frequency curve to the left, meaning your card tries to hit higher clock speeds. So by UV'ing, you're just gaining performance at the same power. If you want to lower power, power limit it.
It's a little bit about the viewpoint. If the aim is lowering power consumption, power limiting will do that, but you lose performance. Undervolting lets you reclaim some of that lost performance without increasing the power.
 
I kind of agree, although I think Nvidia's x90 cards are a replacement for high-end SLi, and not a single high-end card like the 1080.
That is one way to look at it. But this was about the x80; the TDP inflation trickles down. Also, diminishing returns happen, both on the hardware and on the software/graphics front. I'm not denying this is progress in its own right, but only if you define progress as "more is better", forgetting the actual cost of all that in the non-digital world.

Right now there is a continuous escalation of TDPs gen over gen, and sure, we've seen this in the past, but consecutively, and with these jumps? Not quite.

The thing is, I don't even blame Nvidia or AMD for it. We know all the stops are pulled to get ever more performance out of a piece of silicon and sell us another product. There's just too much focus on selling products all the time, everywhere.
 
That is one way to look at it. But this was about the x80; the TDP inflation trickles down. Also, diminishing returns happen, both on the hardware and on the software/graphics front. I'm not denying this is progress in its own right, but only if you define progress as "more is better", forgetting the actual cost of all that in the non-digital world.

Right now there is a continuous escalation of TDPs gen over gen, and sure, we've seen this in the past, but consecutively, and with these jumps? Not quite.

The thing is, I don't even blame Nvidia or AMD for it. We know all the stops are pulled to get ever more performance out of a piece of silicon and sell us another product. There's just too much focus on selling products all the time, everywhere.
Well, there were already lots of kW+ PSUs on the market for SLi/CF systems that had no place anymore after the slow death of multi-GPU. Nvidia only gave them another reason to exist. And the $1000+ price doesn't make me believe that they're "just" high-end GPUs at all. They're much more than that.

The 5090, for example, will have double the shader count of the 5080 and a 200 W higher TDP; you could fit an entire product stack in between. That makes it not a gaming card but a "look at my massive e-balls" card in my eyes, which is exactly what SLi/CF were all about.
 