
NVIDIA GeForce RTX 5000 Series “Blackwell” TDPs Leaked, Entire Lineup Unified with 12+4 Pin Power Connector

I see that side of it, but there are not a whole lot of 650w units out there with a 12V-2×6 connector, and none that I know of at 550w or lower. You don't need a 750w unit to run a 100w GPU (or even 170-200w). PSU makers will need to step up to the plate with their offerings as well. Sure, you can use an adapter, but why force people to use one when there really is no need if you just keep the same connector?


5090 Titan? A $3k electric heater for your home
That literally is the reason. By going entirely in that direction, Nvidia is trying to force the power supply manufacturers to go that way too. I'm not saying I agree with it, I can just hear the management decision: "well, if we don't force the issue, they'll never change their cables".
 
I think the more pressing reason is more powah = moar clocks = lower shader count required = smaller die = bigger margins. I don't think AMD was ever on their minds while designing and positioning their stacks. AMD has been happy designing their stuff for consoles for quite a while now, and Nvidia had SUPER warming up anyway.



Lazy because it's just a power bump, and there have been few if any architectural improvements coming our way. The Nvidia I knew and happily bought from truly innovated. What we've got now is a clusterfuck of vaseline-smeared, brute-forced lighting that makes zero sense and is just there to fake the idea of progress. In the meantime, games play like absolute shit and Nvidia GPUs get slaughtered by Lumen just the same. But who knows, maybe Nvidia pulls a rabbit out of the hat on that front. Oh yeah, AI... of course.
What I want to see is the cards tested with no internet connection enabled at all and the results recorded, then with the connection enabled and the results recorded, all with stock default software and no tuning.
 
I think the more pressing reason is more powah = moar clocks = lower shader count required = smaller die = bigger margins. I don't think AMD was ever on their minds while designing and positioning their stacks. AMD has been happy designing their stuff for consoles for quite a while now, and Nvidia had SUPER warming up anyway.
Oh yeah, definitely... they went almost all-in with Lovelace (the best TSMC node they could get, more than 50% more CUDA cores than Ampere, higher clocks, higher TDP, a lot more L2 cache, Frame Generation, Path Tracing, Ray Reconstruction, etc.), but like I said, AMD were supposed to beat them with their 8192-shader RDNA 3 parts (in raster of course, not RT/PT), whereas Blackwell will not have competition at the high end, so they don't need to push as hard; they're just competing with themselves. That's why the SUPER or refresh variants will probably get 3GB GDDR7 chips too, vs 2GB chips for the vanilla ones. AMD are busy with SoCs, trying to catch up with Nvidia on AI and gain back some GPU market share (hence the mainstream RDNA 4 GPUs).
 
I'm a little out of the loop since I prefer AMD: has the 12V-2×6 connector solved the failure issues of the 12VHPWR? A repeat performance of $2000+ GPUs burning would be extremely disappointing.
 
I'm a little out of the loop since I prefer AMD: has the 12V-2×6 connector solved the failure issues of the 12VHPWR? A repeat performance of $2000+ GPUs burning would be extremely disappointing.
Nvidia released a 2nd version of the 16-pin connector, and since then the 12VHPWR spec has been updated; new PSUs come with ATX 3.1 / PCIe CEM 5.1.
 
AMD can't build a 5090.
That is probably true, but I don't think they need to. You hear a lot about the top-end cards online, but most people stick to the mid-to-high end. If AMD release a 5070/5080 equivalent at a better price and/or lower TDP, I'd see that as a win.
 
Great cards for them cold winter nights: warm and snuggly with your family huddled around your PC, enjoying the comfort of the heat while you game.
 
That is probably true, but I don't think they need to. You hear a lot about the top-end cards online, but most people stick to the mid-to-high end. If AMD release a 5070/5080 equivalent at a better price and/or lower TDP, I'd see that as a win.
That was RDNA2 and RDNA3, and it didn't materialize, did it? Nvidia's raster performance was ~20% more expensive, and AMD still dropped to <10% market share.

What I want to see is the cards tested with no internet connection enabled at all and the results recorded, then with the connection enabled and the results recorded, all with stock default software and no tuning.
Why? Because you think Nvidia is secretly downloading FPS? I don't think we've arrived there just yet lol, but you might be right three generations from now. I think that's the gen where we'll be back to 1-slot add-in cards for a GPU; all they need is a network chip :) We'll still pay $1.5K for them though, or you can sub to GeForce NOW for the small fee of $5 per hour of gaming.
 
I doubt they know the values; those are just placeholders.
About the connector: for the lower-TDP cards maybe it's not such a big problem, not sure. I don't like it.
 
That was RDNA2 and RDNA3, and it didn't materialize, did it? Nvidia's raster performance was ~20% more expensive, and AMD still dropped to <10% market share.
AMD already said that during the coof heard round the world, they prioritized what production capacity they had for EPYC chips (and Ryzen chips by extension) over GPUs. So RDNA2 was in vanishingly short supply until the end of the generation, when sales had slowed.

It was a good decision on AMD's part, but it's still AMD's fault they lost market share there.

RDNA3 was super late to the game. The 7600 was a waste of sand, more expensive than the 6650 XT for the same performance (oh hey, look, it isn't just Nvidia that does this), the 7800 XT was great but launched over half a year too late to matter, with Nvidia's 4060 and 4070 saturating the market, and the 7700 XT was another mispriced disappointment (and also way too late to market).
Why? Because you think Nvidia is secretly downloading FPS? I don't think we've arrived there just yet lol, but you might be right three generations from now. I think that's the gen where we'll be back to 1-slot add-in cards for a GPU; all they need is a network chip :) We'll still pay $1.5K for them though, or you can sub to GeForce NOW for the small fee of $5 per hour of gaming.
So long as physics remains unbroken, latency and input lag will ensure that GeForce NOW remains the poor man's option, with even a 4060 providing a better experience.
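Rough latency math makes the point; every figure below is an assumed round number for illustration, not a measured GeForce NOW value:

# Illustrative local vs. streamed latency budget (all values assumed)
local_render_ms = 1000 / 100      # ~10 ms frame time at 100 fps
display_ms      = 5               # assumed input/display overhead
cloud_encode_ms = 8               # assumed server-side video encode
network_rtt_ms  = 25              # assumed round trip to the data center
cloud_decode_ms = 5               # assumed client-side decode

local_total = local_render_ms + display_ms
cloud_total = local_render_ms + cloud_encode_ms + network_rtt_ms + cloud_decode_ms + display_ms
print(f"local ~{local_total:.0f} ms, streamed ~{cloud_total:.0f} ms")

Even with a perfect data-center GPU, the encode/network/decode chain adds tens of milliseconds that local hardware never pays.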
 
RDNA3 was super late to the game. The 7600 was a waste of sand, more expensive than the 6650 XT for the same performance (oh hey, look, it isn't just Nvidia that does this), the 7800 XT was great but launched over half a year too late to matter, with Nvidia's 4060 and 4070 saturating the market, and the 7700 XT was another mispriced disappointment (and also way too late to market).

The gap between the 7900 series and the rest of the lineup was insanely big, but that can be explained by the insane amount of unsold RDNA2 cards they were sitting on, in my opinion. You can still buy them new today.
Release the top card, which sells very little anyway, just so people talk about you and you stay relevant, and hold the rest back until the RDNA2 stock dies out.
 
Why is it nuts? What is the arbitrary number GPUs should not go past, and why?

250W. That keeps a single-GPU system able to be powered by a 550W PSU and helps keep whole-system cost and waste heat down.
 
I don't understand how we're swinging so far again with the power. I mean, I had a 1200W PSU over a decade ago because at the time I planned on CrossFiring some 2900 XTs (yeah yeah, fail card). Then I dropped down to 1050W and am now running an 850W Platinum.
Why, when nodes are shrinking and should hypothetically be more efficient, are the last few gens becoming so power hungry again? I know my 850W is "enough", but the fact is we're back to 1000, 1200 and even 1600W PSUs, and GPUs like this have apparently already saturated the stupid connector so they've had to add another....
 
AMD already said that during the coof heard round the world, they prioritized what production capacity they had for EPYC chips (and Ryzen chips by extension) over GPUs. So RDNA2 was in vanishingly short supply until the end of the generation, when sales had slowed.

It was a good decision on AMD's part, but it's still AMD's fault they lost market share there.

RDNA3 was super late to the game. The 7600 was a waste of sand, more expensive than the 6650 XT for the same performance (oh hey, look, it isn't just Nvidia that does this), the 7800 XT was great but launched over half a year too late to matter, with Nvidia's 4060 and 4070 saturating the market, and the 7700 XT was another mispriced disappointment (and also way too late to market).

So long as physics remains unbroken, latency and input lag will ensure that GeForce NOW remains the poor man's option, with even a 4060 providing a better experience.
Irrelevant, because it wasn't until AMD dropped prices far below MSRP that the 6700/6800 actually sold out, and Ampere was already a good 30% more expensive per frame.
 
Such a lazy release. Nvidia pushing the power button every other gen does not bode well.
They are sticking with 5nm TSMC again, aren't they? Probably getting some gen-on-gen increase from architectural improvements, but I'm guessing much larger dies and, of course, moar power.

We thought 40-series was expensive...get ready.
 
Irrelevant, because it wasn't until AMD dropped prices far below MSRP that the 6700/6800 actually sold out, and Ampere was already a good 30% more expensive per frame.
I don't remember that. I DO remember 6800s being totally unavailable for over a year, and when I DID get my 6800 XT, it was near $1000. Same went for 6700 XTs, which I watched for a friend throughout the lockdowns and afterwards, waiting for them to show up.
I don't understand how we're swinging so far again with the power. I mean, I had a 1200W PSU over a decade ago because at the time I planned on CrossFiring some 2900 XTs (yeah yeah, fail card). Then I dropped down to 1050W and am now running an 850W Platinum.
Why, when nodes are shrinking and should hypothetically be more efficient, are the last few gens becoming so power hungry again? I know my 850W is "enough", but the fact is we're back to 1000, 1200 and even 1600W PSUs, and GPUs like this have apparently already saturated the stupid connector so they've had to add another....
I don't either. To me it seems arbitrary; people are worried about power use on cards that cost far more than the electricity to run them ever will, even in expensive places like Europe or California.

And if you're concerned, you can always undervolt for dramatic gains.
250W. That keeps a single-GPU system able to be powered by a 550W PSU and helps keep whole-system cost and waste heat down.
Why are you limited to 550 watts? 1kW+ PSUs have been around for a long time, hell, 750w PSUs are not that much more than 550s.

We could apply the same argument to a 150w GPU; that keeps whole-system cost and waste heat even LOWER than 250w. So why is 250w/550w the cutoff?
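Circling back to the electricity point, some rough numbers (draw, hours and rate below are assumptions for illustration, not anyone's actual bill):

# Back-of-the-envelope annual running cost of a big GPU (all inputs assumed)
gpu_draw_w    = 450     # assumed gaming power draw
hours_per_day = 3       # assumed play time
rate_per_kwh  = 0.30    # assumed expensive-market rate, USD/kWh

kwh_per_year  = gpu_draw_w / 1000 * hours_per_day * 365
cost_per_year = kwh_per_year * rate_per_kwh
print(f"~{kwh_per_year:.0f} kWh/year, roughly ${cost_per_year:.0f}/year")

That works out to roughly $150 a year on those assumptions, which is small next to a $1,600+ card, though it does add up over several years of ownership.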
 
They are sticking with 5nm TSMC again, aren't they?
For some reason, both AMD and Nvidia are sticking with a 5nm-class process for GPUs, specifically N4P this time around. The 3nm dies we know about are, what, Apple's M4 at ~165 mm² and Arrow Lake's compute tile at ~115 mm²; anything larger in the non-enterprise space? Should this raise concerns about the yields of TSMC's 3nm-class processes on large dies, considering TSMC has had N3 in volume production since very late 2022? Or is the problem just the wafer price?
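For a sense of why die size is the worry, a crude Poisson defect model does the job; the defect density here is a made-up illustrative figure, not anything TSMC publishes:

from math import exp

# Poisson die-yield model: fraction of defect-free dies = exp(-area * D0)
D0 = 0.1  # assumed defects per cm^2, purely illustrative

for name, area_mm2 in [("~115 mm^2 compute tile", 115),
                       ("~165 mm^2 M4-class die", 165),
                       ("~600 mm^2 big GPU die", 600)]:
    yield_frac = exp(-(area_mm2 / 100) * D0)
    print(f"{name}: ~{yield_frac * 100:.0f}% defect-free")

Small phone and desktop dies tolerate an immature node far better than a 600 mm²-class GPU would, which is one plausible reason, besides wafer price, to stay on a mature 5nm-class node.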
 
I don't remember that. I DO remember 6800s being totally unavailable for over a year, and when I DID get my 6800 XT, it was near $1000. Same went for 6700 XTs, which I watched for a friend throughout the lockdowns and afterwards, waiting for them to show up.

I don't either. To me it seems arbitrary; people are worried about power use on cards that cost far more than the electricity to run them ever will, even in expensive places like Europe or California.

And if you're concerned, you can always undervolt for dramatic gains.

Why are you limited to 550 watts? 1kW+ PSUs have been around for a long time, hell, 750w PSUs are not that much more than 550s.

We could apply the same argument to a 150w GPU; that keeps whole-system cost and waste heat even LOWER than 250w. So why is 250w/550w the cutoff?

550W was chosen based on the 250W graphics spec, and it seems to be the "lower bound" of where a manufacturer can sensibly price a well-made PSU. The (non-dual-chip) NV consumer flagships were at or under 250W from Tesla all the way to Turing. Then Ampere blew the lid off, and now the 4070 Ti pulls more than that, with three cards above it in the stack. Yes, a 750W isn't much more expensive than a 550W, but it still is more expensive. You can "it's not much more" yourself into a kW unit if you don't draw a line somewhere.

PC graphics feels like traffic management right now. City: Roads are clogged, better build more lanes. Citizens: Ooh, it's easier to drive from A to B now; let's take the car. City: Roads are clogged. Sub chipmakers for the planners and developers for the drivers, and you've got the graphics arms race. It's always been so, but not with the ballooning power envelopes we're dealing with now.

Some may ask, "What, do you want graphics tech to stagnate?" Well, if the consequence is the power and dollar cost of gaming not spiraling out of control, I'll accept some stagnation.
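For what it's worth, the budget behind that 250W/550W pairing looks something like this (component figures are assumptions, not any spec's numbers):

# Rough sustained system power budget around a 250 W GPU (other draws assumed)
gpu_w, cpu_w, rest_w = 250, 125, 65   # GPU cap from above; CPU/board/drives assumed
psu_w = 550

sustained = gpu_w + cpu_w + rest_w    # ~440 W
print(f"sustained ~{sustained} W, ~{psu_w - sustained} W headroom on a {psu_w} W unit")

Push the GPU toward 450 W and the same math lands you in 850 W-plus territory before transient spikes even enter the picture.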
 
Why is it nuts? What is the arbitrary number GPUs should not go past, and why?

This ignores reality; Ada showed a significant bump in IPC over Ampere, not just clock and core bumps. IIRC it's a ~16% increase when adjusted for core count and clocks. It's ironic: the behavior you describe fits AMD's RDNA3 far better than Ada.

When hardware T&L came out, it crushed the first few gens of compatible cards. Can you imagine if the forums were as cynical as they are today? PC gaming would have been snuffed out because your GeForce 256 couldn't play Half-Life at 300 FPS on max settings.

The way games play has nothing to do with Nvidia, and game developers' inability to optimize their games is also not Nvidia's fault.
I think you're mistaken about the jump in IPC from Ampere to Ada. The SMX is essentially the same; it was Ampere that saw nearly a 25% increase in clock normalized throughput per SMX compared to Turing. Turing also improved performance per SMX when compared to Pascal; just compare a 2070 Super to a 1080.
 
That enthusiasts actually complain about TDP these days is funny to me, when maybe two decades ago PC enthusiasts would purposely try to blow through those numbers, overclocking everything they had to the actual brink of failure. Somehow TDP actually matters now for desktop users.
 
That enthusiasts actually complain about TDP these days is funny to me, when maybe two decades ago PC enthusiasts would purposely try to blow through those numbers, overclocking everything they had to the actual brink of failure. Somehow TDP actually matters now for desktop users.
The "normal" TDP has increased a LOT in two decades, and gains from overclocking have gone down, also a LOT. On top of that, the cost of power has increased in most countries since then.

2 decades ago was 2004.
Cream-of-the-crop GPUs were the 6800 Ultra with its horrible 105W and the X800 XT at 85-90W.
The best CPUs were Athlon 64s with a 90W TDP. On the Intel side, the much-maligned Prescott P4s and the older Gallatin P4EEs at 115W.
Potential overclocking gains, especially for 24/7 usage, were significant.
In most cases, TDP was not the limiting factor for performance, and often enough parts did not consume up to their TDP.

Compare this with today, where a high-end GPU consumes 300W or more and a high-end CPU over 200W.
Gains from overclocking are still kind of there, but for 24/7 usage the power cost of any performance increase is very bad.
Oh, and TDP is generally the limiting factor as well :)

We have gone from 250W or so for a high-end PC in 2004 to 600W or so in 2024. Or potentially much more if you have a 4090 and something like "TDP is just a suggestion" Intel CPU.
 
That is probably true, but I don't think they need to. You hear a lot about the top-end cards online, but most people stick to the mid-to-high end. If AMD release a 5070/5080 equivalent at a better price and/or lower TDP, I'd see that as a win.
I'd go with AMD right now if they did well not only in gaming but in productivity too. I don't care for RT. Content creation/editing is on the rise. A 6900 XT is just a tad better than a 2080 Ti that costs 2.5x less on the used market. I want to see them make those changes, and I'd be happy to go AMD for GPUs as well. I think a lot of people are fed up with leather boy and would happily come over if they upped their game. People buy Nvidia because there's no choice. Even the DaVinci Resolve CEO says "we currently don't recommend AMD cards".

I want to see an xx80-level card with 14GB+ VRAM that does gaming and editing.
 
The "normal" TDP has increased a LOT in 2 decades. And gains from overclocking have gone down, also a LOT. From outside factors, cost of power has increased in most countries since then.

2 decades ago was 2004.
Cream-of-the-crop GPUs were the 6800 Ultra with its horrible 105W and the X800 XT at 85-90W.
The best CPUs were Athlon 64s with a 90W TDP. On the Intel side, the much-maligned Prescott P4s and the older Gallatin P4EEs at 115W.
Potential overclocking gains, especially for 24/7 usage, were significant.
In most cases, TDP was not the limiting factor for performance, and often enough parts did not consume up to their TDP.

Compare this with today, where a high-end GPU consumes 300W or more and a high-end CPU over 200W.
Gains from overclocking are still kind of there, but for 24/7 usage the power cost of any performance increase is very bad.
Oh, and TDP is generally the limiting factor as well :)

We have gone from 250W or so for a high-end PC in 2004 to 600W or so in 2024. Or potentially much more if you have a 4090 and something like "TDP is just a suggestion" Intel CPU.
Correction: 350 watts for high-end/prosumer CPUs (Threadripper) & 450 to 600 watts for GPUs (the RTX 4090 ASUS Strix BIOS was 600 watts at one point).
 