Sunday, October 30th 2022

PSA: Don't Just Arm-wrestle with 16-pin 12VHPWR for Cable-Management, It Will Burn Up

Despite sticking with PCI-Express Gen 4 as its host interface, the NVIDIA GeForce RTX 4090 "Ada" graphics card standardizes the new 12+4 pin ATX 12VHPWR power connector, even across custom designs by NVIDIA's add-in card (AIC) partners. This tiny connector is capable of delivering 600 W of power continuously, and of briefly sustaining 200% excursions (spikes). Normally, it should make your life easier, as it condenses multiple 8-pin PCIe power connectors into one neat little connector; in reality, the connector is proving to be quite impractical. For starters, most custom RTX 4090 graphics cards have PCBs spanning only two-thirds of the actual card length, which puts the power connector closer to the middle of the card and makes it aesthetically unappealing. But there's a bigger problem, as uncovered by Buildzoid of Actually Hardcore Overclocking, an expert on PC hardware power-delivery design.
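
For a sense of the currents involved, here is a quick back-of-the-envelope sketch (an editorial illustration, not from the ATX spec): the six +12 V pins of the 12+4 pin connector share the full load, and the per-pin rating used below is the figure commonly cited for Micro-Fit-style terminals, an assumption rather than an official number.

```python
# Rough per-pin current math for the 12VHPWR connector.
# Assumption (not from the article): ~9.5 A per pin, the rating
# commonly cited for Micro-Fit-style terminals.
RAIL_VOLTAGE = 12.0   # volts
POWER_PINS = 6        # +12 V pins in the 12+4 pin connector
PIN_RATING_A = 9.5    # assumed per-pin rating, in amps

for watts in (450, 600, 1200):  # typical load, rated load, 200% excursion
    total_a = watts / RAIL_VOLTAGE
    per_pin_a = total_a / POWER_PINS
    print(f"{watts:>5} W -> {total_a:5.1f} A total, "
          f"{per_pin_a:4.1f} A per pin (assumed rating {PIN_RATING_A} A)")
```

At the rated 600 W, each pin carries roughly 8.3 A, which leaves little margin on that assumed rating; the 200% excursions rely on being brief.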

CableMod, a company that specializes in custom modular-PSU cables targeting the case-modding community and PC enthusiasts, has designed a custom 12VHPWR cable that plugs into multiple 12 V output points on a modular PSU, converting them to a 16-pin 12VHPWR. It comes with a pretty exhaustive set of dos and don'ts; the latter are more relevant: apparently, you should not try to arm-wrestle with a 12VHPWR connector. Do not attempt to bend the cable horizontally or vertically close to the connector; leave a distance of at least 3.5 cm (1.38 in). This reduces the pressure on the contacts in the connector. Combine this with the already tall RTX 4090 graphics cards, and you have yourself a power connector that's impractical for most standard-width mid-tower cases (chassis), with no room for cable-management. Attempting to "wrestle" with the connector and somehow bend it to your desired shape will cause improper contact, which poses a fire hazard.
Update Oct 26th: There are multiple updates to this story; see below.

The 12VHPWR connector is a new standard, which means most PSUs in the market lack it, much in the same way that PSUs some 17 years ago lacked PCIe power connectors, and graphics cards included 4-pin Molex-to-PCIe adapters. NVIDIA probably figured out early on when implementing this connector that it could not rely on adapters by AICs or PSU vendors to perform reliably (i.e., not cause problems with its graphics cards, resulting in a flood of RMAs), and so took it upon itself to design an adapter that converts 8-pin PCIe connectors to a 12VHPWR, which all AICs are required to include with their custom-design RTX 4090 cards. This adapter is rightfully overengineered by NVIDIA to be as reliable as possible, and NVIDIA even specifies a rather short service life of 30 connect-disconnect (mating) cycles before the contacts of the adapter begin to wear out and become unreliable. The only problem with NVIDIA's adapter is that it is ugly and ruins the aesthetics of the otherwise brilliant RTX 4090 custom designs, which creates a market for custom adapters.
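
The adapter's sizing also illustrates how much harder the new connector works each pin than the 8-pin it replaces. A small editorial sketch (the 150 W per 8-pin figure is the PCIe CEM rating; the pin counts are each connector's +12 V conductors):

```python
# Sketch: power budget of a 4x 8-pin PCIe -> 12VHPWR adapter, and the
# per-pin load on each side. 150 W per 8-pin is the PCIe CEM rating.
RAIL_V = 12.0

budget_w = 4 * 150                    # four 8-pin inputs, watts
print(f"Adapter input budget: {budget_w} W vs. the 600 W 12VHPWR rating")

pcie_per_pin = 150 / RAIL_V / 3       # 8-pin PCIe: three +12 V pins
hpwr_per_pin = 600 / RAIL_V / 6       # 12VHPWR: six +12 V pins
print(f"8-pin PCIe: {pcie_per_pin:.1f} A per +12 V pin")
print(f"12VHPWR:    {hpwr_per_pin:.1f} A per +12 V pin")
# ~4.2 A vs. ~8.3 A: each (smaller) 12VHPWR pin works about twice as
# hard as an 8-pin's, which is why contact quality matters so much.
```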

Update 15:59 UTC: A user on Reddit who goes by "reggie_gakil" posted pictures of a GeForce RTX 4090 graphics card with a burnt-out 12VHPWR connector. While the card itself is "fine" (functional), the NVIDIA-designed adapter that converts 4x 8-pin PCIe to 12VHPWR has a few melted pins, probably caused by improper contact that made them overheat or short. "I don't know how it happened but it smelled badly and I saw smoke. Definetly the Adapter who had Problems as card still seems to work," goes the caption with these images.

Update Oct 26th: Aris Mpitziopoulos, our associate PSU reviewer and editor of Hardware Busters, did an in-depth video presentation on the issue, in which he argues that the 12VHPWR design may not be at fault, but rather extreme abuse by end-users attempting to cable-manage their builds. Mpitziopoulos demonstrates the durability of the connector in its normal straight form versus when tightly bent. You can catch the presentation on YouTube here.

Update Oct 26th: In related news, AMD confirmed that none of its upcoming Radeon RX 7000 series RDNA3 graphics cards features the 12VHPWR connector, and that the company will stick to 8-pin PCIe connectors.

Update Oct 30th: Jon Gerow, aka Jonny Guru, has posted a write-up about the 12VHPWR connector on his website. It's an interesting read with great technical info.
Sources: Buildzoid (Twitter), reggie_gakil (Reddit), Hardware Busters (YouTube)

230 Comments on PSA: Don't Just Arm-wrestle with 16-pin 12VHPWR for Cable-Management, It Will Burn Up

#101
mechtech
Proper 4090 cable and connectors

#103
Crackong
I don't think anyone at AMD expected this would become a selling point.


#104
gasolina
I would rather have 3x 8-pin or 4x 8-pin than the shitty NVIDIA 12-pin connector
#105
Unregistered
Terrible design. Put out a card that is so huge that it will literally have issues fitting into a number of cases, and then design a power cable for it that is so unreliable it actually has a connect / disconnect limit, and furthermore is a fire hazard.

As some others said, great opportunity for AMD to potentially capitalize on.
#106
Arkz
In the various batteries and EV rides I've built, I use 5.5 mm bullet plugs; they can handle pretty high currents. They should have done something like an XT120 connector, with a sense pin as an optional extra for high-powered PSUs and cards. Given that the pins in these things are still pretty weak and not rated that high, there were bound to be issues on cards that can pull so much power. The connector should never have been rated for anything more than 450 W constant.

I recall someone pointing out that the pins are rated for about 8 amps each or something similar, and at 600 W the draw would be more than that. So right off the bat, having cards that can pull more than the connectors are rated for is just bad. They probably thought it would just be for spikes, and that the rest of the time the card would be pulling nowhere near its limit, forgetting people playing with unlocked frame rates and people rendering stuff for hours on end.
gasolina: I would rather have 3x 8-pin or 4x 8-pin than the shitty NVIDIA 12-pin connector
It's not NVIDIA's connector, as has been pointed out many times.
medi01: So, it's an NV f-up, top to bottom.
Intel doesn't deserve even part of the blame:

www.anandtech.com/show/16038/nvidia-confirms-12pin-gpu-power-connector


PCI SIG analysis (prior to meltdowns)



It's the same connector minus the sense pins; instead, NV put a sense chip in the wire to tell the GPU how many cables are connected.

The connector was always gonna be fine for lower-powered stuff; I had no issues with my 3080 FE roaring away for hours on end.

Yes, it is NV's fault though: they're running too many amps through these pins, so, as the picture shows, without a solid connection they will fry. Even with a solid connection they're going higher than the pins are rated for at the full 600 W. They're basically Dupont connectors on the wire end of the port, and have never been rated that high compared to loads of different types of connectors. But they are cheap as chips compared to anything slightly more solid. The 4080 should have no issues with it; it's the 4090's peak power draw that's the main issue, and really NV should have had that card alone use a 16-pin connector instead to spread the load more, or two 12-pins to ensure no issues. 600 W is just beyond what's safe for those tiny pins to handle. That's 50 amps spread across those teeny tiny things.
#107
Bwaze
LabRat 891: Pretty standard advice. But, you wouldn't know that unless you've wrangled a lot of SATA, coaxial, fibre, or other 'bend radius'-sensitive fine-pitch cabling.

All the little oversights in moving to this new connector exemplify and confirm my 'feelings' on modern engineering at large: practical considerations are set well below 'the book'.
Who cares, right? Not like they're liable for damages; that's on the manufacturer and end user...
A lot of people place the blame solely on the end user, even though Ada cards are uncommonly wide, most PC cases weren't made with consideration for them, and the new adapter cable even increases the problem by having a long, stiff "strain relief".
#108
Makaveli
ThrashZone: Hi,
Yeah, but you have to laugh at someone buying a $1,600+ GPU and trying to put it in a mid-tower :laugh:
That would easily fit in my mid-tower :)
#109
jonnyGURU
Dirt Chip: This is not NV's idea, it's a new general standard.
4x 8-pin is not better; I think it's even worse.
Using the 12VHPWR-to-4x-8-pin adapter makes life harder.
AMD needs to also adopt the 12VHPWR, but position it better on the GPU.


Because when you need 2x 12VHPWR to feed next-gen GPUs, that's 4x whatever you suggested.
The heat is not the problem here; the bending force on the connector is.
Nihillim: Wasn't this a collaborative effort between Intel and PCI-SIG?
This wasn't an Intel idea. This was an Nvidia idea that was passed through the PCI-SIG consortium and got approval. Intel only added it to the ATX spec AFTER it was passed through PCI-SIG.

The connector works fine in most cases. But there are caveats (don't bend within 30 mm of the connector, etc.).

Remember, this is the same connector as on the 30-series FE. That was only 450 W.

I think the problems started coming up when Nvidia said "hey look.. if you use a 600W cable you can clock the 4090 card higher", which is why I went on the profanity-laden rant that got me exiled from GamersNexus (because he chose to quote me out of context instead of addressing the actual issues, since he had Nvidia in house at the time).

I have personally used this connector upwards of 55 A at 50°C. But you CANNOT put a bend in the cable less than 30 mm from the connector, or run it hotter than 50°C. This is shown in the PCI-SIG report that was leaked, but that report only talked about the connector on the PSU side (which is why Corsair ATX 3.0 PSUs don't have the 12VHPWR connector on the PSU) and never talked about the GPU side.
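
(Editor's arithmetic, for context: 55 A on a 12 V rail is more power than the connector's official rating. A one-line check:)

```python
# What 55 A on a 12 V rail means against the 600 W connector rating.
amps, volts, rated_w = 55.0, 12.0, 600.0
watts = amps * volts
print(f"{amps:.0f} A x {volts:.0f} V = {watts:.0f} W "
      f"= {watts / rated_w:.0%} of the 600 W rating")
# -> 660 W, 110%: beyond-spec current is survivable when the cable is
#    straight and cool, hence the 30 mm bend and 50 degC caveats.
```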
#110
OneMoar
There is Always Moar
If your connector has restrictions on how you can plug it in, then the connector is bad.
For example, there is no possible way I could use a 12VHPWR connector in my case; there isn't room. There is barely room for the two 8-pins on my 3070 Ti, and those make a hard bend to clear the side panel.
To get a safe bend while keeping the last 1.2 inches straight, the cable needs to make a massive arc; nobody wants that.

NVIDIA needs to revise this connector ASAP, maybe using a longer pin or a differently shaped (T or +) pin.
For right now, right-angle adapters seem to be the solution; hopefully more PSU vendors will make 90°/L-shaped adapters.
Edit: Reddit has once again done NVIDIA's job for them. Fixed.
We can close this thread now.
#111
docnorth
Maybe it was designed in collaboration with NZXT :D...
Joking aside, both sides are to blame. The user of the top GPU (at least for the moment) should avoid stressing a 600 W cable or connector. On the other hand, NVIDIA knows that this huuuge card will be difficult to fit in many cases, and must put the needed clearance as a red flag in the specs. It's another 20% of added height on a 4-slot GPU, taller than most tower coolers...
#112
noname00
Why is the power connector on top of the card, with these cards being extremely tall? Putting the connector on the back at least would remove the need for a sharp bend. And I would like that even for cards with multiple 8-pin connectors.
#113
Darller
OneMoar: If your connector has restrictions on how you can plug it in, then the connector is bad
Damn... polarized AC outlets are gonna screw up your worldview. What a stupid thing to say.
#114
Bwaze
noname00: Why is the power connector on top of the card, with these cards being extremely tall? Putting the connector on the back at least would remove the need for a sharp bend. And I would like that even for cards with multiple 8-pin connectors.
RTX 4090 cards are also extremely long, so having a connector with a long, stiff strain relief and cables that you shouldn't bend too tightly on top of that also excludes a lot of cases.

I think all this mess wouldn't have happened if they had made the adapter with a 90-degree bend. I know it's not practical for those who mount their cards vertically - but they are a minority, and they already have to buy a riser cable - so they can invest in another unnecessary nice cable.
#115
medi01
Arkz: Yes, it is NV's fault though: they're running too many amps...
They:

1) Designed that shit (Intel designed the sensing pins only)
2) Tested it and figured out it is HIGHLY PROBLEMATIC (see the PCI-SIG report in my previous post)
3) STILL found it OK to push it out to the market
jonnyGURU: The connector works fine in most cases. But there are caveats (don't bend within 30 mm of the connector, etc.).
Except it doesn't, even per docs submitted to PCI-SIG by NV itself.

#116
Dirt Chip
medi01There is no "general standard" of "ship home made adapter that cannot fit properly in 93% of f the cases.

This issue is absolutely NV's creation and doesn't have anything to do with 12 pin socket.

IF NV was too greedy for a proper 90 degree angle adapter, it could have located the socket differently.
I mostly agree, nv implement the new power standard and the position plus adapter make it very space constraint.
But NV didn't invented anything, they just adopted it.
erockerNot in the slightest.
Maybe not this gen but it's the way going forward for high end high wattage GPU at least. I just hope they will learn from NV (not so good) way of implamenting new this standard.
#117
medi01
Dirt Chip: But NV didn't invent anything, they just adopted it.
No. Per their own words (links have been shared several times), it was SPECIFICALLY NVIDIA that designed the power-delivery aspect of that socket. Intel only did the sensing part.
#118
Readlight
Looks like my home input power line connection.
#119
Arkz
medi01: They:

1) Designed that shit (Intel designed the sensing pins only)
2) Tested it and figured out it is HIGHLY PROBLEMATIC (see the PCI-SIG report in my previous post)
3) STILL found it OK to push it out to the market

Except it doesn't, even per docs submitted to PCI-SIG by NV itself.

It's still perfectly fine for lower-current cards. Again, it's been in use for 2 years already; the 3080, 3090, and 3090 Ti have had no problems with it. It's only now, with the 4090, that it's an issue. If you look at the PCI-SIG test showing the failure, that's when drawing 55 A constant: that's 660 W for it to fail in their test. And NVIDIA may claim they made it, but it's just a Molex Micro-Fit 3.0 BMI dual-row header.
#120
medi01
Arkz: It's still perfectly fine for lower-current cards
Which need it as much as birds need pigtails.
Arkz: 3080, 3090, 3090 Ti have had no problems with it
Yeah. Why do you think that could be:


let alone the size...
the54thvoid: ...to bash Nvidia as though it's their fault.
How is "it" not NV's fault???

Who designed that thing? NV (no, not Intel; Intel designed only the sensing pins)
Who KNEW from testing that it was terrible? NV (yes, they've even submitted it to PCI-SIG)
Who cheaped out on 90-degree connectors for a GPU that costs 2,000+ Euro and does not fit into 93% of cases?

I'm pretty sure it wasn't my grandma, nor was it Intel or AMD.
#121
the54thvoid
Intoxicated Moderator
medi01: Which need it as much as birds need pigtails.

Yeah. Why do you think that could be:

let alone the size...

How is "it" not NV's fault???

Who designed that thing? NV (no, not Intel; Intel designed only the sensing pins)
Who KNEW from testing that it was terrible? NV (yes, they've even submitted it to PCI-SIG)
Who cheaped out on 90-degree connectors for a GPU that costs 2,000+ Euro and does not fit into 93% of cases?

I'm pretty sure it wasn't my grandma, nor was it Intel or AMD.
Reading comprehension failure. Go and reread exactly what I posted about mating cycles (not being an Nvidia thing). Then read the part about shitty bending angles where I say it is a problem.

Then post in context.
#122
docnorth
QUANTUMPHYSICS: So who wants to fix this by building a hardened, angled adapter?
After that, users will complain the GPU won't fit...
#123
Veseleil
mechtech: Proper 4090 cable and connectors

Or one of these:
#124
jonup
Arkz: In the various batteries and EV rides I've built, I use 5.5 mm bullet plugs; they can handle pretty high currents. They should have done something like an XT120 connector, with a sense pin as an optional extra for high-powered PSUs and cards. Given that the pins in these things are still pretty weak and not rated that high, there were bound to be issues on cards that can pull so much power. The connector should never have been rated for anything more than 450 W constant.

I recall someone pointing out that the pins are rated for about 8 amps each or something similar, and at 600 W the draw would be more than that. So right off the bat, having cards that can pull more than the connectors are rated for is just bad. They probably thought it would just be for spikes, and that the rest of the time the card would be pulling nowhere near its limit, forgetting people playing with unlocked frame rates and people rendering stuff for hours on end.

It's not NVIDIA's connector, as has been pointed out many times.

It's the same connector minus the sense pins; instead, NV put a sense chip in the wire to tell the GPU how many cables are connected.

The connector was always gonna be fine for lower-powered stuff; I had no issues with my 3080 FE roaring away for hours on end.

Yes, it is NV's fault though: they're running too many amps through these pins, so, as the picture shows, without a solid connection they will fry. Even with a solid connection they're going higher than the pins are rated for at the full 600 W. They're basically Dupont connectors on the wire end of the port, and have never been rated that high compared to loads of different types of connectors. But they are cheap as chips compared to anything slightly more solid. The 4080 should have no issues with it; it's the 4090's peak power draw that's the main issue, and really NV should have had that card alone use a 16-pin connector instead to spread the load more, or two 12-pins to ensure no issues. 600 W is just beyond what's safe for those tiny pins to handle. That's 50 amps spread across those teeny tiny things.
My math doesn't agree with your conclusion. Assuming you are correct that each pin is good for 8 A, 12 pins at 12 V are good for 1,152 W. At 600 W they should each be handling a little over 4 A if the load is evenly spread, which it probably isn't. We still have plenty of headroom, though.
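
(Editor's note: the two calculations differ in how the pins are counted. The 12+4 connector has twelve current-carrying pins, but only six are +12 V supply pins; the other six are grounds carrying the same current back, so every mated contact still sees the six-pin figure. A sketch of both counts, using the 8 A rating assumed in the posts above:)

```python
# Per-pin current at 600 W, counted two ways. The 6 supply / 6 ground
# split is the 12VHPWR pinout; 8 A per pin is the rating assumed above.
POWER_W, RAIL_V, PIN_RATING_A = 600.0, 12.0, 8.0

total_a = POWER_W / RAIL_V   # 50 A of +12 V supply current

# Counting all 12 current-carrying pins:
print(f"12-pin count: {total_a / 12:.2f} A per pin")  # ~4.2 A
# Counting only the 6 +12 V pins (each ground pin returns the same
# current, so this is the real load on every mated contact):
print(f" 6-pin count: {total_a / 6:.2f} A per pin")   # ~8.3 A
print(f"Assumed rating: {PIN_RATING_A:.1f} A per pin")
```

Either way you count the rail, each physical contact carries about 8.3 A at a sustained 600 W, at or above the 8 A per-pin rating assumed above.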
#125
Punkenjoy
It's not a problem of power per pin; it's just that the pins aren't secured enough. If they had the same setup but with a more secure socket for the pins, there would be no issue.

At those wattages (300 W+), you need to make sure your socket is secured and held in place properly. This is just too flimsy. Redo the same setup with something that locks the connection in place properly, and everyone would be fine.