
PCIe Gen5 "12VHPWR" Connector to Deliver Up to 600 Watts of Power for Next-Generation Graphics Cards

The other half of this conversation is that not only are we looking at more power consumption from GPUs, but CPUs are headed there as well. 95-105W used to be enthusiast-grade; next gen (Alder Lake and Zen 4) is rumored to have 165W enthusiast-level CPUs. And we already know that rating is a bit of a joke, as the chips will happily exceed it all day long with a few BIOS levers. We could very well see 600W GPUs and CPUs that peak at 500W or better. Things are going to get even more expensive at the high end, as there’s just too much complexity in power delivery and cooling. With so many components to keep powered, I suspect idle efficiency will suffer as well.

Now, all that said, if one can be content to not have the best of the best, the mid-grade everything will probably be fairly efficient. Current-gen consoles set the tone for game engines for a long time, and even the next level of mid-grade products should have no trouble being faster than what’s in XSX and PS5.
 
Good luck to the VRMs powering these chips

Yeah, I learned my lesson with motherboards in recent years: it's always worth spending extra on a high-end board to get the better VRMs. That's the future of GPUs now too. What's already expensive needs to be more expensive still if you want longevity.

This is why Steve's GamersNexus review showing the PS5's VRMs hitting 90+ Celsius worried me... longevity... It's like when a car salesman tells you that you don't have to change your transmission fluid because it will last for the life of the car anyway. Well, the salesman means the length of the powertrain warranty, but it can last longer than that if you just change the fluid every 50k miles... LOL
 
The other half of this conversation is that not only are we looking at more power consumption from GPUs, but CPUs are headed there as well. 95-105W used to be enthusiast-grade; next gen (Alder Lake and Zen 4) is rumored to have 165W enthusiast-level CPUs.
At least on the Alder Lake side, these rumors are most likely not accurate. The document with the tables is a PSU design guide, and 165W has been there for several generations already, as an example of an HEDT platform CPU. Not sure about Zen 4, but more likely than not this is also a misread. I have not been able to find the document it is from, or the same document for a previous gen. If someone has a link, please share.
 
Your only positive is... the look of a PC (cables)... which in itself is already baffling, but then you turn it into us being afraid of change? Could you at least read what people are saying before responding in such a... way?
You use a 2600K and an RX 480, yet complain about a standard for 2022+ GPUs? There's nothing baffling about appreciating improvements in form factor and ease of installation. Have fun routing 4 separate 8-pin cables for potential future GPUs.
 
Ready to deliver what the market wants. 3090s and 6900 XTs with power limits raised way above reference seem to be flying off the shelves, and pro users will take the hit to get that render/calculation done faster too, so here we go. Personally, I see no issue with it. Hopefully the world wakes up to nuclear energy again, though; we might need more power if that's the way it's gonna go, lol
 
Despite my rank of Captain Oblivious, even I caught the sarcasm in your quoted post.
Then we have different sorts of reading comprehension, because what I'm reading in the quoted post is "Mining and playing games both use energy, so if you want to regulate mining, you need to regulate everything else that uses energy, including things that serve the purpose of having fun / playing"

Which is an utterly ridiculous train of thought, underlined by my link up there.
 
Then we have different sorts of reading comprehension, because what I'm reading in the quoted post is "Mining and playing games both use energy, so if you want to regulate mining, you need to regulate everything else that uses energy, including things that serve the purpose of having fun / playing"

Which is an utterly ridiculous train of thought, underlined by my link up there.

Clearly, because I interpret the post in question as trying to make the same point you are, that limits of that nature are pointless. (AFAICT, mining wasn't even mentioned anywhere in the chain.)
 
Clearly, because I interpret the post in question as trying to make the same point you are, that limits of that nature are pointless. (AFAICT, mining wasn't even mentioned anywhere in the chain.)

And my point is, some things do need limitations, and one is not the same as the other. But - you are correct, I misinterpreted the post because it is merely about the power consumption and not mining :D

My bad! However, earlier up in the topic, mining was indeed brought up as a main reason to up the power limit, I kinda ran with that somehow.
 
When California put limitations on cars' mileage per gallon, innovation increased, consumption went down, and we got better cars and saved money. Sometimes legislation is needed to give innovation a push, because these companies can just happily increase power consumption instead of innovating and pushing the technology.
 
You might need the performance; what you don't need/want is the consumption. We evolve in many areas by making products more efficient, and while GPUs are also getting more efficient, they also consistently use more power.
I am in favor of some law that puts a limit on the power consumption of such a product; let the manufacturers find other ways to squeeze out more performance.


No new regulations please. If you don't want a 600 watt card, don't buy it. Let consumer demand dictate the products.
 
At least on the Alder Lake side, these rumors are most likely not accurate. The document with the tables is a PSU design guide, and 165W has been there for several generations already, as an example of an HEDT platform CPU. Not sure about Zen 4, but more likely than not this is also a misread. I have not been able to find the document it is from, or the same document for a previous gen. If someone has a link, please share.
Yeah, speculation for sure, but we’ve already seen Rocket Lake peak at around 430W with Adaptive Boost. While that’s worst-case, it’s ultimately what motherboard makers have to engineer around, and what system builders need to account for when selecting parts. Since Alder Lake’s spec only ups the ante, it stands to reason that it will pull at least Rocket Lake-like peak current. Even if the spec is partly about headroom for a future generation, the direction at the high end still appears to be “up.” And while I mention 165W parts, the spec was raised for all product segments, including the non-HEDT lines. My point is mainly that buying high-end on either front is going to put some strain on the delivery system.

I guess if there’s a positive to these modern designs, it’s that the GPUs and CPUs appear to take that extra current only when conditions allow, so the end result is just less performance when the motherboard or PSU is overwhelmed. More of our components are adaptively overclocking themselves now, to the point where the system builder mostly just needs to “feed the beast.”
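Quick back-of-envelope on why the VRMs are the pain point: package power arrives at core voltage, not 12V, so the current the VRM must deliver is enormous. A minimal sketch using the ~430W Rocket Lake peak figure; the ~1.3V Vcore is my assumption, not a number from the thread:

```python
# Back-of-envelope VRM math (assumed numbers, not measurements).
peak_watts = 430   # worst-case package power (Rocket Lake w/ Adaptive Boost)
vcore = 1.3        # assumed core voltage under load

vrm_output_amps = peak_watts / vcore  # current the VRM must deliver (~330 A)
input_amps_12v = peak_watts / 12.0    # current drawn over the 12 V EPS cables (~36 A)

print(f"VRM output: {vrm_output_amps:.0f} A, 12 V input: {input_amps_12v:.0f} A")
```

The roughly 10:1 step-down from 12V to Vcore is why a board can sip tens of amps from the EPS connectors yet need hundreds of amps of VRM output capacity.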
 
Those are rookie numbers, I'm holding out for 750W.

Oh yeah, so I finally got an Evga RTX 3080 Ti FTW3 Ultra last week. Had to order a new PSU because my older SS 850W has a sensitive OCP (over current protection) circuit, and the gfx card has a flaky RGB controller that flickers. Not going to send it back, who wants a $1420 refurbished used card? I'll just leave the RGB off.
 
I'm not keen on 9A per pin; 14AWG is already bulky and awkward wire to work with, and that only carries 5.9A.

12-gauge wire is horrible stuff for the tight turns of PC cable routing; can we perhaps just not have 600W graphics cards instead?
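For what it's worth, the per-pin figure falls out of simple division. A sketch, assuming the 600W load splits evenly across six 12V power pins (the even split is an idealization; the 9A rating adds margin on top):

```python
# Idealized per-pin current for a 600 W, 12 V connector with 6 power pins.
WATTS = 600
VOLTS = 12.0
POWER_PINS = 6

total_amps = WATTS / VOLTS              # 50 A total on the 12 V rail
amps_per_pin = total_amps / POWER_PINS  # ~8.33 A per pin if shared evenly

print(f"total: {total_amps:.0f} A, per pin: {amps_per_pin:.2f} A")
```

In practice contact resistance never matches perfectly across pins, which is one reason the rated figure sits above the idealized average.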
 
No new regulations please. If you don't want a 600 watt card, don't buy it. Let consumer demand dictate the products.

If there is one universal truth, it's that unregulated companies always put the consumer's needs and best interests first.
Also, some consumers are nuts, like driving a gigantic pickup truck to take the kids to school.
 
If there is one universal truth, it's that unregulated companies always put the consumer's needs and best interests first.
Man.

Companies put THEIR OWN interests first, not those of the consumer. Otherwise you wouldn't need marketing. Companies put profit first.
Yes, what consumers want is important, but not the primary motivation of a company.
Also, some consumers are nuts, like driving a gigantic pickup truck to take the kids to school.
A person is going to look for convenience first.
If a pickup truck is what's available, then they use that.
Usually the alternative is that they get a pickup truck AND an SUV and use them for different things.
Barring some people (who are in the know), mileage is not a concern for the average consumer.
 
Man, that's some lost irony.
 
These connectors have been under-engineered for years; they weren't enough even back in 2012. I have a few 7970s in "SLI" for mining and other work, and the connectors were always hot, even on dedicated mining machines like Antminers. It was downright hazardous.
So hopefully they engineer this one with enough headroom for the next generations, and for uses of graphics cards other than gaming.
I also remember my two Nvidia 690s in SLI... crazy, they ran so hot.
 
It's just a connector. It's capable of up to 600W, that doesn't mean the card will draw 600W. A card that draws 150W could use this connector as well.

What this means is instead of having a random number of six and eight pin connectors between different SKUs you can have a standardized single cable for any PCIe device.
 
""What this means is instead of having a random number of six and eight pin connectors between different SKUs you can have a standardized single cable for any PCIe device.""" yes great idea agree, , single or two for extra powerfull cards..

funny: or they have to make simply just another one/two 24 pin for each gf card :D ... even if they(companys) dropping support for sli (for incomming 3090 ++)
"one more 24 pin " cannot by problem for PSU producers ....:laugh:
 
Can someone explain to me why there's a need for 'signal' wires? I don't understand why it's technically unviable to 'simply' stick a 12V+ supercap array in the PSU, dedicated to the new heavy-duty power lead.
While it'd be a VERY bad idea, theoretically an 11.4-12.6V *battery bank* could power a GPU. It just needs available current, minimal ripple, and to stay within voltage spec.
Speaking of current, that's probably why this connector was even proposed. Average and load power consumption on GPUs is still 65-250W, but momentary peak draw on 6900s and 3090s has been exceeding 30-35A, which has been crashing some otherwise seemingly 'excessive' PSUs.
All in all, though, why does my PSU need a data-line connection to my GPU? This is asking for a security risk, if only in malware that damages hardware.
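To put those transient figures in watts (just arithmetic on the numbers already quoted, assuming the spike rides entirely on the 12V rail):

```python
# Converting the quoted 30-35 A transients to watts on the 12 V rail.
RAIL_VOLTS = 12.0

for transient_amps in (30.0, 35.0):
    transient_watts = RAIL_VOLTS * transient_amps
    print(f"{transient_amps:.0f} A transient = {transient_watts:.0f} W")
```

So a card with a 250W average draw can still present the PSU with momentary 360-420W spikes, which is exactly the kind of excursion that trips a twitchy OCP circuit.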
 
Can someone explain to me why there's a need for 'signal' wires? I don't understand why it's technically unviable to 'simply' stick a 12V+ supercap array in the PSU, dedicated to the new heavy-duty power lead.
While it'd be a VERY bad idea, theoretically an 11.4-12.6V *battery bank* could power a GPU. It just needs available current, minimal ripple, and to stay within voltage spec.
Speaking of current, that's probably why this connector was even proposed. Average and load power consumption on GPUs is still 65-250W, but momentary peak draw on 6900s and 3090s has been exceeding 30-35A, which has been crashing some otherwise seemingly 'excessive' PSUs.
All in all, though, why does my PSU need a data-line connection to my GPU? This is asking for a security risk, if only in malware that damages hardware.

Despite not being an electrical engineer, here I go anyway. Right now, the PSU doesn't have any idea what's going on at the end of any of its connections. PSUs are rather dumb devices that simply provide what is asked of them within design thresholds (OCP, OTP, etc.). Feedback from a high-draw, high-variance load like a graphics card could help a more intelligent PSU decide what to do beyond its current two options, which are to shut down or fail.

It's not likely anything resembling what we consider "data" will be going over the signal wires. It could be something as mundane as additional voltage monitoring, or more likely an extremely simple serial protocol like I2C. But even assuming more complex communication, I'm not sure how much could go wrong security-wise. Nothing else talks to the PSU (yet), so you wouldn't have a vector from PSU --> GPU. I suppose if someone tried hard enough, it might be possible to send bad data back to the PSU through the graphics card to force the PSU to misbehave, but that's making quite a few assumptions about what kind of attack surface even exists.
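One way sideband pins can work without carrying anything like "data" at all: simple pin straps, where each sense pin is either grounded or left open, and the card reads the combination as a power budget. A purely illustrative sketch; the pin names and wattage table here are my assumption, not something from the article:

```python
# Hypothetical pin-strap sideband: no protocol, just grounded/open pins.
# 0 = pin grounded by the PSU/cable, 1 = pin left open.
SENSE_TABLE = {
    (0, 0): 600,  # both grounded: PSU advertises full capability
    (0, 1): 450,
    (1, 0): 300,
    (1, 1): 150,  # both open (e.g. unwired adapter): most conservative limit
}

def advertised_watts(sense1: int, sense0: int) -> int:
    """Power limit the card should honor for a given pin state."""
    return SENSE_TABLE[(sense1, sense0)]

print(advertised_watts(1, 1))  # a cable with no sense wiring reads as 150 W
```

The nice property of a scheme like this is that it's fail-safe: a cheap adapter that wires nothing gets read as the lowest limit, with no firmware or bus traffic to attack.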
 