Sunday, October 30th 2022

PSA: Don't Just Arm-wrestle with 16-pin 12VHPWR for Cable-Management, It Will Burn Up

Despite sticking with PCI-Express Gen 4 as its host interface, the NVIDIA GeForce RTX 4090 "Ada" graphics card standardizes the new 12+4 pin ATX 12VHPWR power connector, even across custom designs by NVIDIA's add-in card (AIC) partners. This tiny connector is capable of delivering 600 W of power continuously, and can briefly take 200% power excursions (spikes). In principle, it should make your life easier by condensing multiple 8-pin PCIe power connectors into one neat little plug; in reality, the connector is proving to be quite impractical. For starters, most custom RTX 4090 graphics cards have PCBs spanning only two-thirds of the actual card length, which puts the power connector closer to the middle of the card and makes it aesthetically unappealing. But there's a bigger problem, as uncovered by Buildzoid of Actually Hardcore Overclocking, an expert in PC hardware power-delivery design.
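To put those ratings in perspective, here is a back-of-the-envelope estimate of what each contact carries, assuming a 12 V rail, six live pins, and ideal current sharing (idealized assumptions on our part, not figures from the spec):

```python
# Back-of-the-envelope per-pin current for the 12VHPWR connector.
# Assumptions (ours, not from the spec): 12 V rail, six 12 V supply pins
# (the other six are grounds), ideal current sharing across pins.
RAIL_VOLTAGE = 12.0  # volts
POWER_PINS = 6       # number of 12 V supply pins

for watts in (600, 1200):  # continuous rating, and a 200% excursion
    total_amps = watts / RAIL_VOLTAGE
    per_pin_amps = total_amps / POWER_PINS
    print(f"{watts} W -> {total_amps:.1f} A total, {per_pin_amps:.2f} A per pin")

# 600 W  -> 50.0 A total, 8.33 A per pin
# 1200 W -> 100.0 A total, 16.67 A per pin
```

Per-pin currents in that range leave little margin for a contact that isn't seated perfectly, which is what makes the mechanical concerns below matter.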

CableMod, a company that specializes in custom modular-PSU cables targeting the case-modding community and PC enthusiasts, has designed a custom 12VHPWR cable that plugs into multiple 12 V output points on a modular PSU, converting them to a 16-pin 12VHPWR. It comes with a fairly exhaustive set of dos and don'ts, of which the don'ts are more relevant: you should not try to arm-wrestle a 12VHPWR connector. Do not bend the cable horizontally or vertically close to the connector; leave a straight run of at least 3.5 cm (1.38 inches) to reduce pressure on the contacts inside the connector. Combine this with the already tall RTX 4090 graphics cards, and you have a power connector that's impractical for most standard-width mid-tower cases (chassis), with no room for cable-management. Attempting to "wrestle" with the connector and somehow bend it into your desired shape will cause improper contact, which poses a fire hazard.
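The clearance problem can be sanity-checked with rough numbers. Every dimension in the sketch below is an illustrative assumption (a hypothetical mid-tower and card), not a measurement from CableMod or NVIDIA:

```python
# Rough fit check: tall card plus the 35 mm straight run vs. a mid-tower.
# All dimensions here are illustrative assumptions, not measurements.
CASE_INTERNAL_WIDTH_MM = 200  # hypothetical motherboard-tray-to-side-panel space
MOBO_AND_STANDOFF_MM = 10     # motherboard PCB plus standoffs
CARD_HEIGHT_MM = 150          # slot bracket to top edge of a large RTX 4090
PLUG_HOUSING_MM = 15          # connector body protruding from the card
NO_BEND_RUN_MM = 35           # CableMod's minimum straight cable run

needed = MOBO_AND_STANDOFF_MM + CARD_HEIGHT_MM + PLUG_HOUSING_MM + NO_BEND_RUN_MM
print(f"needed {needed} mm, available {CASE_INTERNAL_WIDTH_MM} mm")
if needed > CASE_INTERNAL_WIDTH_MM:
    print("side panel forces a bend inside the no-bend zone")
```

With numbers like these, the side panel closes onto the cable well inside the no-bend zone, which is exactly the scenario the warnings are about.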
Update Oct 26th: There are multiple updates to this story below.

The 12VHPWR connector is a new standard, which means most PSUs on the market lack it, much as PSUs some 17 years ago lacked PCIe power connectors and graphics cards shipped with 4-pin Molex-to-PCIe adapters. NVIDIA probably figured out early on that it could not rely on adapters from AICs or PSU vendors to perform reliably (i.e., not cause problems with its graphics cards and trigger a flood of RMAs), and so took it upon itself to design an adapter that converts 8-pin PCIe connectors to a 12VHPWR, which all AICs are required to include with their custom-design RTX 4090 cards. This adapter is rightfully overengineered by NVIDIA to be as reliable as possible, yet even NVIDIA rates it for a rather short service life of 30 connect-disconnect cycles, after which the contacts begin to wear out and become unreliable. The only problem with NVIDIA's adapter is that it is ugly and ruins the aesthetics of the otherwise brilliant custom RTX 4090 designs, which creates a market for custom adapters.

Update 15:59 UTC: A user on Reddit who goes by "reggie_gakil" posted pictures of a GeForce RTX 4090 graphics card with a burnt-out 12VHPWR connector. While the card itself is "fine" (functional), the NVIDIA-designed adapter that converts 4x 8-pin PCIe to 12VHPWR has a few melted pins, probably caused by improper contact that made them overheat or short. "I don't know how it happened but it smelled badly and I saw smoke. Definetly the Adapter who had Problems as card still seems to work," goes the caption with these images.

Update Oct 26th: Aris Mpitziopoulos, our associate PSU reviewer and editor of Hardware Busters, did an in-depth video presentation on the issue, in which he argues that the 12VHPWR design may not be at fault, but rather extreme abuse by end-users attempting to cable-manage their builds. Mpitziopoulos demonstrates the durability of the connector in its normal straight form versus when tightly bent. You can catch the presentation on YouTube here.

Update Oct 26th: In related news, AMD confirmed that none of its upcoming Radeon RX 7000 series RDNA3 graphics cards features the 12VHPWR connector, and that the company will stick to 8-pin PCIe connectors.

Update Oct 30th: Jon Gerow, aka Jonny Guru, has posted a write-up about the 12VHPWR connector on his website. It's an interesting read with great technical info.
Sources: Buildzoid (Twitter), reggie_gakil (Reddit), Hardware Busters (YouTube)

230 Comments on PSA: Don't Just Arm-wrestle with 16-pin 12VHPWR for Cable-Management, It Will Burn Up

#51
ThrashZone
Hi,
Yeah but you have to laugh at someone buying a $1,600+ GPU and trying to put it in a mid-tower :laugh:
#52
JustBenching
TheDeeGee: I also wonder how many people are using only 2 PSU cables to connect the 600 watt adapter.

Cuz you know, some PSUs have daisy chain PCI-E 8-Pin cables... *yikes*
There is absolutely nothing wrong with daisy-chained PCIe 8-pin, why are you saying yikes?
ThrashZone: Hi,
Sorry, but when a company says 30 times is the lifetime of a connector, it's a POS.
That's the same rating as the normal 8-pin PCIe though
#53
TheDeeGee
Aquinus: Maybe instead of making new connectors and treating the symptom, we should probably invest time into having hardware detect these high-resistance situations so a user can take action before stuff starts melting or catching fire. Ultimately this is a state that needs immediate action, and even with the best of connectors, something can still go wrong. Regardless of connector, I'd like to be aware of this situation should it arise, before it causes damage.
True; as it is, this connector can never be safe.

Only safe way to connect is to solder the wires directly to the GPU, I'm sure DIYers will attempt this.
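For illustration, here's a minimal sketch of the kind of detection Aquinus describes, assuming hypothetical telemetry (supply-side voltage, card-side voltage, and current through the connector). No shipping GPU reports these readings, so treat the names and threshold as made up:

```python
# Minimal sketch of contact-resistance monitoring via voltage drop.
# Telemetry inputs are hypothetical; no current graphics card exposes them.

DANGER_OHMS = 0.010  # assumed alert threshold: ~10 milliohms of contact resistance

def contact_resistance(v_psu: float, v_gpu: float, amps: float) -> float:
    """Estimate connector resistance from the voltage drop across it (R = dV / I)."""
    return (v_psu - v_gpu) / amps

def check(v_psu: float, v_gpu: float, amps: float) -> None:
    r = contact_resistance(v_psu, v_gpu, amps)
    heat = amps ** 2 * r  # power dissipated inside the connector itself
    status = "ALERT: high-resistance contact" if r > DANGER_OHMS else "ok"
    print(f"R = {r * 1000:.1f} mOhm, {heat:.1f} W in connector -> {status}")

check(12.00, 11.95, 50.0)  # healthy: 1 mOhm, 2.5 W of heat
check(12.00, 11.40, 50.0)  # degraded: 12 mOhm, 30 W -- melting territory
```

The point of the sketch: a few milliohms of extra resistance at 50 A turns into tens of watts dissipated right at the contacts, which is why detection before melting would be valuable.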
#54
ThrashZone
Hi,
Big difference is normal pci-e/... cables aren't as fragile as this adapter seems to be.
#55
Punkenjoy
A well-made connector should be enough to deliver 600 W of power. The thing is, these are not well designed.

To me it's not the number of cables or the number of connection cycles, it's really the position and the locking mechanism of the thing. Something that handles 600 W of power should be locked firmly in place. That 35 mm no-bend rule is just hilarious.

That doesn't mean they won't improve it. For example, PCIe x16 slots now have a locking mechanism. They could do something similar here. It's indeed additional cost, but it's better to be safe than sorry.

I think the 35 mm no-bend rule is stupid, but not because of the side panel. It's just a dangerous risk if your cable moves for whatever reason. If you have 4090 money but not enough money to spend on a larger case, well, you should revise your priorities. A case too small will restrict airflow. It could still work if you reduce the power limits, but still.

What I would do is a 90° connector with a clamp on it to secure it in place, and no one would ever have a problem again.

I think this situation is ridiculous, but at some point there was a need for a better connector to replace four 8-pins.

The location of the connector should also be improved. If it faced up or down (in a standard GPU mount), way less bending would be required. I would actually have it facing up. This would allow someone with a vertical GPU mount to totally hide the cable.
#56
TheDeeGee
fevgatos: There is absolutely nothing wrong with daisy-chained PCIe 8-pin, why are you saying yikes?

That's the same rating as the normal 8-pin PCIe though
For the 600 W adapter it's the difference between pulling 300 W through each cable if daisy-chained, or 150 W each with four.

Sure, PCI-E 8-pin is rated for a little over 300 W, but would you be comfortable with that?

But I guess some people like to live on the edge.
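For illustration, the arithmetic behind that, assuming a 12 V rail and even load sharing across the PSU-side leads:

```python
# Per-cable load for the 600 W adapter, assuming a 12 V rail and
# the adapter drawing evenly from its PSU-side leads.
ADAPTER_WATTS = 600
RAIL_VOLTAGE = 12.0

for cables in (2, 4):  # daisy-chained pair vs. four separate leads
    watts_per_cable = ADAPTER_WATTS / cables
    amps_per_cable = watts_per_cable / RAIL_VOLTAGE
    print(f"{cables} cables: {watts_per_cable:.0f} W, "
          f"{amps_per_cable:.1f} A per cable")

# 2 cables: 300 W, 25.0 A per cable
# 4 cables: 150 W, 12.5 A per cable
```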
#57
JustBenching
TheDeeGee: For the 600 W adapter it's the difference between pulling 300 W through each cable if daisy-chained, or 150 W each with four.

Sure, PCI-E 8-pin is rated for a little over 300 W, but would you be comfortable with that?

But I guess some people like to live on the edge.
Again, there is absolutely nothing wrong with daisy-chained 2x 8-pins, unless your PSU was bought on discount from Lidl. In which case, I don't think the cable should be your main concern anyway, right?

Even the 12VHPWR cable that Corsair sells has 2 connectors on the PSU side. So they have to know something, right?
#58
medi01
Dirt Chip: This is not NV's idea, it's a new general standard.
There is no "general standard" that says "ship a home-made adapter that cannot fit properly in 93% of the cases."

This issue is absolutely NV's creation and doesn't have anything to do with the 12-pin socket.

If NV was too greedy to include a proper 90-degree adapter, it could have located the socket differently.
#59
Vayra86
ThrashZone: Hi,
Yeah but you have to laugh at someone buying a $1,600+ GPU and trying to put it in a mid-tower :laugh:
Oh? We have numerous powerful ITX builds with high-end components going about. Smaller cases can dissipate heat fine...

And that's the core of the issue here: a trend happening with PC components where higher power draw changes the old rules regarding what is possible and what is not. There is no guidance on that, from NVIDIA either. They just assume you will solve the new DIY-build problems that might arise from the specs they devised.

The very same thing is happening with CPUs. And for what? To run the hardware way outside its efficiency curve, they are skirting the limits of what is possible out of the box, to justify a ridiculous price point for a supposed performance edge you might never reach.

Components have landed in nonsense territory at the top end to keep the insatiable hunger of commerce afloat.
FinlandApollo: Do you have a better producer than Molex?

And no, that "molex connector" in the PC is not made by Molex, it's a Mate-N-Lok by TE Connectivity. It's actually one of the few connectors that is NOT made by Molex.
Wha... molex cables, buddy. I never did use a capital letter, and if I did, it was by accident.

These
#61
TheDeeGee
fevgatos: Again, there is absolutely nothing wrong with daisy-chained 2x 8-pins, unless your PSU was bought on discount from Lidl. In which case, I don't think the cable should be your main concern anyway, right?

Even the 12VHPWR cable that Corsair sells has 2 connectors on the PSU side. So they have to know something, right?
Corsair knows, they also know how the sense wires work... not :D
medi01: Hehe:

If the lower-tier AIB cards use this dumpster-fire 12VHPWR connector as well, I will either get myself a 3070 Strix or go to AMD (something I really want to avoid).
#62
dinmaster
What a shit show, gg NVIDIA. They just couldn't stick with PCIe connectors, or use a bigger connector and wires, so the 30-reconnect limit wouldn't exist, and neither would the fires.
#63
sephiroth117
The Quim Reaper: That's alright, if they burn up their card, they're rich, they can just buy another...
I don't have a 4090, but for some, paying $1,000 for a GPU a few months ago was a good deal and normal, but now at $1,500 it's like, OK, you're filthy rich, have three Porsches and can afford ten 4090s, lmao.

No, that's not alright, because it could happen on an upcoming, more affordable 4080/4070... or a 3090 Ti, since they have the 12VHPWR.
#64
Toss
I prefer my old AMD with normal 8-pins. F K THAT
What's the problem for them to go 4x 8-pin instead of this garbage? Same 600 W TDP.
#65
evernessince
the54thvoid: This is the same industry-standard cycle as for many molex connectors, i.e., not an issue for the normal end-user.

Scare-mongering (or lack of due diligence) isn't helpful when trying to remain a reliable tech site.
To be fair, Molex connectors were a PITA. Even during the first connect, the pins often weren't aligned and you had to fiddle with it to get everything lined up. Pretty much had to fiddle with each reconnect after that as well. Molex wasn't carrying 600 W either.

I agree with you that problems are unlikely for most people, but when you are talking about a cable that carries this much power, you don't really want quality on par with Molex.
#66
Solaris17
Super Dainty Moderator
sephiroth117: I don't have a 4090, but for some, paying $1,000 for a GPU a few months ago was a good deal and normal, but now at $1,500 it's like, OK, you're filthy rich, have three Porsches and can afford ten 4090s, lmao.
Man, comments like this really bring nothing to the table. I cannot stand it when people do it. This is totally off topic, but I just want to throw some things out really quick before my meeting.

If you upgrade even every 2 years, in MY experience in consumer land, you spend pretty much the same amount of money keeping up with your build as someone that blows it all at once.

I bought 2x 4090s and 2x Z690s, including all the other parts, coolers, fans, cases, and RAM for two platform upgrades. All at once. I probably just dropped 1/4 of the salary of what some make here on this forum.

Because I saved. Since 2017. The week after I finished our X299 builds. For the next platform jump. 5 years.

It shouldn't make me out to be, or include me in, the demographic of people considered hardware snobs just because I can drop 3x your mortgage on PC parts in one night and still eat dinner. Your logic is flawed.

Also, I LOVE Porsches.

And they don't need to be $180k cars. You can choose to spend that much though if you want.

For the record, if it helps, I know a few others that do it like me. At the very least it's a waste of your time (not sure you know how much that's worth yet), because what people like this think of how I spend my money doesn't affect how I sleep at night.
#67
the54thvoid
Super Intoxicated Moderator
evernessince: To be fair, Molex connectors were a PITA. Even during the first connect, the pins often weren't aligned and you had to fiddle with it to get everything lined up. Pretty much had to fiddle with each reconnect after that as well. Molex wasn't carrying 600 W either.

I agree with you that problems are unlikely for most people, but when you are talking about a cable that carries this much power, you don't really want quality on par with Molex.
I feel as though I'm banging my head into a brick wall.

It doesn't matter whether Molex is a PITA. It matters that people are using this to bash NVIDIA as though it's their fault. AMD uses the same mini-Molex 6- and 8-pin connectors (from the PSU), which all follow certain standards, namely the 30-cycle mating rating. The 30-cycle thing is not the issue.

The issue is the shitty bend mechanics and pin-contact failure.
#68
Star_Hunter
If NVIDIA had just set this card's TDP to 350 W instead of 450 W, it would have had 97% of the 450 W level of performance. That would have enabled a smaller cooler, and therefore more room for the power connection, avoiding all this mess. Not sure if they did this out of concern over RDNA3, but I feel they really should have picked a better spot on the card's power-efficiency curve. If someone wants more performance, simply have them use water cooling and overclock.
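For illustration, taking the 97%-at-350-W figure at face value (it's a claim, not a measurement), the trade-off works out as:

```python
# Power vs. performance trade-off, using the 97%-at-350-W figure quoted above.
full_power, full_perf = 450.0, 1.00
capped_power, capped_perf = 350.0, 0.97

perf_per_watt_gain = (capped_perf / capped_power) / (full_perf / full_power) - 1
print(f"Power saved: {full_power - capped_power:.0f} W "
      f"({(1 - capped_power / full_power) * 100:.0f}%)")
print(f"Performance lost: {(full_perf - capped_perf) * 100:.0f}%")
print(f"Perf-per-watt improvement: {perf_per_watt_gain * 100:.1f}%")
# ~22% less power, 3% less performance, ~25% better perf/W
```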
#69
kapone32
I just hope that with no EVGA around, all these buyers have fun getting warranty service, especially from Asus.
#70
Solaris17
Super Dainty Moderator
I wonder if cards will come with dielectric grease now. I think I have some left over from my starter. lol
kapone32: I just hope that with no EVGA around, all these buyers have fun getting warranty service, especially from Asus.
>:( worst CX experience of my life
#71
ThrashZone
kapone32: I just hope that with no EVGA around, all these buyers have fun getting warranty service, especially from Asus.
Hi,
Indeed.
I'm now without a GPU manufacturer; luckily I'm not in the market.
#72
kapone32
Star_Hunter: If NVIDIA had just set this card's TDP to 350 W instead of 450 W, it would have had 97% of the 450 W level of performance. That would have enabled a smaller cooler, and therefore more room for the power connection, avoiding all this mess. Not sure if they did this out of concern over RDNA3, but I feel they really should have picked a better spot on the card's power-efficiency curve. If someone wants more performance, simply have them use water cooling and overclock.
With the size of these cards there would have been no issue using four 8-pin connectors. I guess NVIDIA didn't want that image in people's heads, so at a time when we are just trying to get the supply chain back in order, they made a brand-new standard that intuitively sounds dangerous. By the way, I have never heard of shouting a design warning on a baseline product like a PSU cable, but with NVIDIA nothing surprises me.
#73
MachineLearning
This article's tone is pretty condescending. It doesn't take "arm wrestling" to make the connector burn up; it's just poorly designed.

How exactly are users supposed to prevent bending within 35 mm of the terminals? Most people won't have problems, but virtually nobody would have issues if they had just coughed up the extra PCB space and gone with 8-pins.
#74
ThrashZone
Hi,
Just the cooler sticking that far past the card is dumb imho.
#75
mechtech
99 problems……but a cable isn’t one…..