Monday, October 11th 2021

PCIe Gen5 "12VHPWR" Connector to Deliver Up to 600 Watts of Power for Next-Generation Graphics Cards

The upcoming graphics cards based on the PCIe Gen5 standard will use a connector with double the bandwidth of the Gen4 interface we use today, and they will also bring a new power connector. According to information exclusive to Igor's Lab, the new connector is called 12VHPWR and carries as many as 16 pins. Twelve of those pins deliver 12 V power, hence the name, while the remaining four are signal pins that coordinate the delivery. The connector is rated to carry as much as 600 Watts of power over its 16 pins.

The new 12VHPWR connector should work exclusively with PCIe Gen5 graphics cards and is not backward compatible with anything else. It is said to replace the three standard 8-pin power connectors found on some high-end graphics cards, and power supply manufacturers will likely adopt the new standard. The official PCI-SIG specification rates each pin for up to 9.2 Amps, which translates to a total of 55.2 Amps at 12 Volts across the six supply pins. Theoretically, this works out to 662 Watts; however, Igor's Lab notes that the connector is limited to 600 Watts. Additionally, the 12VHPWR power pins sit on a 3.00 mm pitch, while the contacts in a legacy 2×3 (6-pin) and 2×4 (8-pin) connector sit on a larger 4.20 mm pitch.
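The per-pin arithmetic above can be checked quickly. The sketch below assumes the quoted 9.2 A per-pin rating across six 12 V supply pins (the other six power pins being grounds), which matches the 55.2 A total in the specification:

```python
# Sanity-check the 12VHPWR power figures quoted above.
# Assumption: the 16-pin connector pairs six 12 V supply pins with six
# grounds, so total current is carried by six supply pins.
PINS_12V = 6
AMPS_PER_PIN = 9.2   # per the quoted PCI-SIG figure
VOLTS = 12.0

total_amps = PINS_12V * AMPS_PER_PIN          # 55.2 A
theoretical_watts = total_amps * VOLTS        # 662.4 W
rated_watts = 600                             # rated limit per Igor's Lab

print(f"Total current: {total_amps:.1f} A")
print(f"Theoretical power: {theoretical_watts:.1f} W (rated limit {rated_watts} W)")
```

This is where the 662 W theoretical figure comes from, and why the spec leaves roughly 10% of headroom below it at the 600 W rating.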
There are already implementations of this connector, and one comes from Amphenol ICC. The company has designed a 12VHPWR connector and listed it as ready for sale; you can check it out on the company website.
Source: Igor's Lab

97 Comments on PCIe Gen5 "12VHPWR" Connector to Deliver Up to 600 Watts of Power for Next-Generation Graphics Cards

#51
Ferd
Good luck to the VRMs powering these chips
Posted on Reply
#53
Space Lynx
Astronaut
FerdGood luck to the VRMs powering these chips
Yeah, I learned my lesson with motherboards in recent years; it's always worth spending extra for the high-end boards with better VRMs. That's the future of GPUs now: what is already more expensive needs to be more expensive if you want longevity.

This is why Steve's GamersNexus review of the PS5's VRMs hitting 90+ Celsius worried me... longevity... It's like when a car salesman tells you that you don't have to change your transmission fluid because it will last for the life of the car anyway. The salesman means the length of the powertrain warranty, but it can last longer than that if you just change the fluid every 50k miles... LOL
Posted on Reply
#54
londiste
Darmok N JaladThe other half of this conversation is that not only are we looking at more power consumption with GPUs, but CPUs are going there as well. 95-105W used to be the enthusiast grade chips, but next gen (Alder Lake and Zen 4) are rumored to have 165W enthusiast level CPUs.
At least on Alder Lake side, these rumors are most likely not accurate. The document with the tables is a PSU design guide and 165W has been there for several generations already - as an example of HEDT platform CPU. Not sure about Zen4 but more likely than not this is also misread. Have not been able to find the document it is from or same document for previous gen. If someone has a link, please share.
Posted on Reply
#55
dgianstefani
TPU Proofreader
ZoneDymoYour only positive is... the look of a PC (cable)... which in itself is already baffling, but then you turn it into us being afraid of change? Could you at least read what people are saying before responding in such a... way?
You use a 2600K and an RX 480 yet complain about a standard for 2022+ GPUs? There's nothing baffling about appreciating improvements in form factor and ease of installation. Have fun routing 4 separate 8-pin cables for potential future GPUs.
Posted on Reply
#57
Dristun
Ready to deliver what the market wants. 3090s and 6900 XTs with power limits raised way above reference seem to be flying off the shelves, and pro users will take the hit to get that render/calculation done faster too, so here we go. Personally I see no issue with it. Hopefully the world wakes up to nuclear energy again though; we might need more power if that's the way it's going to go, lol
Posted on Reply
#58
Vayra86
80-watt HamsterDespite my rank of Captain Oblivious, even I caught the sarcasm in your quoted post.
Then we have different sorts of reading comprehension, because what I'm reading in the quoted post is "Mining and playing games both use energy, so if you want to regulate mining, you need to regulate everything else that uses energy, including things that serve the purpose of having fun / playing"

Which is an utterly ridiculous train of thought, underlined by my link up there.
Posted on Reply
#59
80-watt Hamster
Vayra86Then we have different sorts of reading comprehension, because what I'm reading in the quoted post is "Mining and playing games both use energy, so if you want to regulate mining, you need to regulate everything else that uses energy, including things that serve the purpose of having fun / playing"

Which is an utterly ridiculous train of thought, underlined by my link up there.
Clearly, because I interpret the post in question as trying to make the same point you are, that limits of that nature are pointless. (AFAICT, mining wasn't even mentioned anywhere in the chain.)
Posted on Reply
#60
Vayra86
80-watt HamsterClearly, because I interpret the post in question as trying to make the same point you are, that limits of that nature are pointless. (AFAICT, mining wasn't even mentioned anywhere in the chain.)
And my point is, some things do need limitations, and one is not the same as the other. But - you are correct, I misinterpreted the post because it is merely about the power consumption and not mining :D

My bad! However, earlier up in the topic, mining was indeed brought up as a main reason to up the power limit, I kinda ran with that somehow.
Posted on Reply
#61
Bomby569
When California put limitations on cars' miles per gallon, innovation increased, consumption went down, we got better cars and we saved money. Sometimes legislation is needed to give innovation a push, because these companies can just happily increase power consumption instead of innovating and pushing the technology.
Posted on Reply
#62
itguy2003
ZoneDymoYou might need the performance; what you don't need/want is the consumption. We evolve in many areas by making products more efficient, and while GPUs are also getting more efficient, they consistently use more power too.
I am in favor of some law that puts a limit on the power consumption of such a product; let the manufacturers find other ways to squeeze out more performance.
No new regulations please. If you don't want a 600 watt card, don't buy it. Let consumer demand dictate the products.
Posted on Reply
#63
Darmok N Jalad
londisteAt least on Alder Lake side, these rumors are most likely not accurate. The document with the tables is a PSU design guide and 165W has been there for several generations already - as an example of HEDT platform CPU. Not sure about Zen4 but more likely than not this is also misread. Have not been able to find the document it is from or same document for previous gen. If someone has a link, please share.
Yeah, speculation for sure, but we've already seen Rocket Lake peak at around 430 W with Adaptive Boost. While that's the worst case, that's ultimately what motherboard makers have to engineer around, and what system builders need to account for when selecting parts. Since Alder Lake's spec only ups the ante, it stands to reason that it's going to have at least Rocket Lake-like peak current. Even if it's a case of preparing for the future readiness of another generation, the direction still appears to be "up" at the high end. And while I mention 165 W parts, the spec was raised for all product segments, including the non-HEDT lines. My point is mainly that buying high-end on either front is going to put some strain on the delivery system.

I guess if there’s a positive on these modern designs, the GPUs and CPUs appear to only take that extra current when conditions allow, so I suppose the end result is less performance when the motherboard or PSU are overwhelmed. More of our components are adaptively overclocking themselves now, to where the system builder mostly just needs to “feed the beast.“
Posted on Reply
#64
xorbe
Those are rookie numbers, I'm holding out for 750W.

Oh yeah, so I finally got an EVGA RTX 3080 Ti FTW3 Ultra last week. Had to order a new PSU because my older SS 850W has a sensitive OCP (over-current protection) circuit, and the gfx card has a flaky RGB controller that flickers. Not going to send it back; who wants a $1420 refurbished used card? I'll just leave the RGB off.
Posted on Reply
#65
Chrispy_
I'm not keen on 9 A per pin; 14 AWG is already bulky and awkward wire to work with, and that only carries 5.9 A.

12-gauge wire is horrible stuff for the tight turns of PC cable routing. Can we perhaps not have 600 W graphics cards instead?
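For context on the gauges being compared here, the standard AWG formula gives the conductor sizes. This is a rough sketch only: real current ratings depend on insulation, bundling, and allowed temperature rise, not just cross-section.

```python
import math

def awg_diameter_mm(n: int) -> float:
    """Diameter of an AWG-n solid conductor, per the standard AWG formula."""
    return 0.127 * 92 ** ((36 - n) / 39)

def awg_area_mm2(n: int) -> float:
    """Cross-sectional area in mm^2."""
    return math.pi * (awg_diameter_mm(n) / 2) ** 2

# Compare the gauges mentioned in the thread.
for gauge in (12, 14, 16):
    d = awg_diameter_mm(gauge)
    a = awg_area_mm2(gauge)
    print(f"{gauge} AWG: {d:.2f} mm dia, {a:.2f} mm^2")
```

Stepping from 16 AWG (~1.3 mm²) up to 12 AWG (~3.3 mm²) more than doubles the copper, which is why thick-gauge cable is so stiff in tight PC routing.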
Posted on Reply
#66
Bomby569
itguy2003No new regulations please. If you don't want a 600 watt card, don't buy it. Let consumer demand dictate the products.
If there is a universal truth, it is that unregulated companies always put the consumers' needs and best interests first.
Also, some consumers are nuts, like driving a gigantic pickup truck to take the kids to school.
Posted on Reply
#67
cst1992
Bomby569If there is a universal truth, it is that unregulated companies always put the consumers' needs and best interests first
Man.

Companies put THEIR OWN interests first, not the consumer's. Otherwise you wouldn't need marketing. Companies put profit first.
Yes, what consumers want is important, but it's not the primary motivation of a company.
Bomby569Also, some consumers are nuts, like driving a gigantic pickup truck to take the kids to school
A person is going to look for convenience first.
If a pickup truck is what's available, then they use that.
Usually the alternative is that they get a pickup truck AND an SUV and use them for different things.
Barring some people (who are in the know), mileage is not a concern for the average consumer.
Posted on Reply
#69
klf
These connectors have been under-engineered for years... not enough even back in 2012... I have a few 7970s in "SLI" for mining and other uses, and the connectors were always hot, even on mining machines like Antminers )) Totally hazardous.
So hopefully they engineer enough headroom for the next generations, or for uses of graphics cards other than gaming ))
I also remember my two NVIDIA GTX 690s in SLI... crazy )) they ran so hot
Posted on Reply
#70
Gameslove
600 W gaming GPU? No no no......nope....
Posted on Reply
#71
Blueberries
It's just a connector. It's capable of up to 600 W, but that doesn't mean the card will draw 600 W. A card that draws 150 W could use this connector as well.

What this means is instead of having a random number of six and eight pin connectors between different SKUs you can have a standardized single cable for any PCIe device.
Posted on Reply
#72
klf
"What this means is instead of having a random number of six and eight pin connectors between different SKUs you can have a standardized single cable for any PCIe device." Yes, great idea, agreed: a single one, or two for extra-powerful cards.

Funny thought: or they could simply make another one or two 24-pin connectors for each graphics card :D ... even if the companies are dropping SLI support (for the incoming 3090 and beyond).
"One more 24-pin" can't be a problem for PSU producers... :laugh:
Posted on Reply
#73
LabRat 891
Can someone explain to me why there's a need for 'signal' wires? I don't understand why it's technically unviable to 'simply' stick a 12 V supercap array in the PSU dedicated to the new heavy-duty power lead.
While it'd be a VERY bad idea, theoretically an 11.4-12.6 V *battery bank* could power a GPU. It just needs available current, minimal ripple, and to stay within voltage spec.
Speaking of current, that's probably why this connector was even proposed. Average and load power consumption on GPUs is still 65-250 W, but momentary peak current draw on 6900s and 3090s has been exceeding 30-35 A, which has been crashing some otherwise seemingly 'excessive' PSUs.
All in all though, why does my PSU need a data-line connection to my GPU? This is asking for a security risk, if only in malware that damages hardware.
Posted on Reply
#74
80-watt Hamster
LabRat 891Can someone explain to me why there's need for 'signal' wires? I don't understand why it's technically unviable to 'simply' stick a 12V+ supercap array in the PSU dedicated to the new Heavy Duty power lead?
While it'd be a VERY bad idea, theoretically an 11.4v-12.6v *battery bank* could power a GPU. It just needs available current, minimal ripple, and sticking within Voltage spec.
Speaking of current, that's probably why this connector even was proposed. Average and Load power consumption on GPUs are still 65-250W; but momentary peak power consumption on 6900s and 3090s have been exceeding 30-35A. Which, has been crashing some otherwise seemingly 'excessive' PSUs.
All in all though, why does my PSU need a dataline connection to my GPU? This is asking for a security risk; if only in malware that damages hardware.
Despite not being an electrical engineer, here I go anyway. Right now, the PSU doesn't have any idea what's going on at the end of any of its connections. PSUs are rather dumb devices that simply provide what is asked of them within design thresholds (OCP, OTP, etc.). Feedback from a high-draw, high-variance load like a graphics card could help a more intelligent PSU decide what to do beyond what it can currently do, which is either shut down or fail.

It's not likely anything resembling what we consider "data" will be going over the signal wires. It could be something as mundane as additional voltage monitoring, or more likely an extremely simple serial protocol like I2C. But even assuming more complex communication, I'm not sure how much could go wrong re: security. Nothing else talks to the PSU (yet), so you wouldn't have a vector from PSU --> GPU. I suppose if someone tried hard enough, it might be possible to send bad data back to the PSU through the graphics card to force the PSU to misbehave, but that's making quite a few assumptions about what kind of attack surface might even exist.
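As an illustration of how simple such sideband signaling can be, here is a hypothetical sketch: two sense lines that are each either open or pulled to ground, decoded into an advertised power budget. The pin names and wattage table are invented for illustration and are not the actual PCI-SIG encoding:

```python
# Hypothetical sketch of dumb sideband signaling: two sense lines,
# each either open or grounded, yield four advertised power levels.
# This table is illustrative only, not the real specification.
POWER_TABLE = {
    (False, False): 150,  # both lines open
    (False, True):  300,
    (True,  False): 450,
    (True,  True):  600,  # both lines grounded
}

def advertised_watts(sense0_grounded: bool, sense1_grounded: bool) -> int:
    """Decode the two sense lines into a power budget in watts."""
    return POWER_TABLE[(sense0_grounded, sense1_grounded)]

# A card that sees both lines grounded may budget for the full 600 W.
print(advertised_watts(True, True))   # 600
```

Nothing here needs a real data bus: the "protocol" is just which pins are tied to ground, which is in line with looniam's point below about jumpered signal wires.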
Posted on Reply
#75
looniam
TheLostSwedeSignal wires = very low power usually...
b4psm4mNot the same gauge as the signal wires are not carrying any significant power.
You're both missing that this is nothing different. Per PCI-SIG specs, the signal wires can be jumpered to ground (just look at your PSU); those long thin wires are totally unnecessary. The design reeks of thumbing their nose at NVIDIA, which they can do.
Posted on Reply