Sunday, October 30th 2022
PSA: Don't Just Arm-wrestle with 16-pin 12VHPWR for Cable-Management, It Will Burn Up
Despite sticking with PCI-Express Gen 4 as its host interface, the NVIDIA GeForce RTX 4090 "Ada" graphics card standardizes the new 12+4 pin ATX 12VHPWR power connector, even across custom designs by NVIDIA's add-in card (AIC) partners. This tiny connector is capable of delivering 600 W of power continuously, and of briefly sustaining 200% power excursions (spikes). In theory, it should make your life easier by condensing multiple 8-pin PCIe power connectors into one neat little connector; in practice, the connector is proving to be quite impractical. For starters, most custom RTX 4090 graphics cards have PCBs only about two-thirds of the card's actual length, which puts the power connector closer to the middle of the card and makes it aesthetically unappealing. But there's a bigger problem, as uncovered by Buildzoid of Actually Hardcore Overclocking, an expert in PC hardware power-delivery design.
CableMod, a company that specializes in custom modular-PSU cables for the case-modding community and PC enthusiasts, has designed a custom 12VHPWR cable that plugs into multiple 12 V output points on a modular PSU, converting them to a 16-pin 12VHPWR. It comes with a fairly exhaustive set of dos and don'ts; the latter are more relevant: apparently, you should not arm-wrestle with a 12VHPWR connector. Do not attempt to bend the cable horizontally or vertically close to the connector; leave a distance of at least 3.5 cm (1.38 in), which reduces pressure on the contacts inside the connector. Combine this with the already tall RTX 4090 graphics cards, and you have a power connector that's impractical for most standard-width mid-tower cases (chassis), with no room for cable management. Attempting to "wrestle" with the connector and somehow bend it into your desired shape will cause improper contact, which poses a fire hazard.

Update Oct 26th: There are multiple updates to this story.
The 12VHPWR connector is a new standard, which means most PSUs on the market lack it, much as PSUs some 17 years ago lacked PCIe power connectors and graphics cards shipped with 4-pin Molex-to-PCIe adapters. NVIDIA probably figured out early on that it could not rely on adapters from AICs or PSU vendors to perform reliably (i.e., not cause problems with its graphics cards, resulting in a flood of RMAs), and so took it upon itself to design an adapter that converts 8-pin PCIe connectors to a 12VHPWR, which all AICs are required to include with their custom-design RTX 4090 cards. This adapter is rightfully overengineered by NVIDIA to be as reliable as possible, yet NVIDIA specifies a rather short service life of 30 connect-disconnect cycles before the adapter's contacts begin to wear out and become unreliable. The only problem with NVIDIA's adapter is that it is ugly and ruins the aesthetics of the otherwise brilliant custom RTX 4090 designs; which creates a market for custom adapters.
Update 15:59 UTC: A user on Reddit who goes by "reggie_gakil" posted pictures of a GeForce RTX 4090 graphics card with a burnt-out 12VHPWR connector. While the card itself is "fine" (functional), the NVIDIA-designed adapter that converts 4x 8-pin PCIe to 12VHPWR has a few melted pins, probably caused by improper contact that made them overheat or short. "I don't know how it happened but it smelled badly and I saw smoke. Definetly the Adapter who had Problems as card still seems to work," goes the caption accompanying the images.
Update Oct 26th: Aris Mpitziopoulos, our associate PSU reviewer and editor of Hardware Busters, made an in-depth video presentation on the issue, in which he argues that the 12VHPWR design may not be at fault, but rather extreme abuse by end-users attempting to cable-manage their builds. Mpitziopoulos compares the durability of the connector in its normal straight form versus when tightly bent. You can catch the presentation on YouTube here.
Update Oct 26th: In related news, AMD confirmed that none of its upcoming Radeon RX 7000 series RDNA3 graphics cards features the 12VHPWR connector, and that the company will stick to 8-pin PCIe connectors.
Update Oct 30th: Jon Gerow, aka Jonny Guru, has posted a write-up about the 12VHPWR connector on his website. It's an interesting read with great technical info.
Sources:
Buildzoid (Twitter), reggie_gakil (Reddit), Hardware Busters (YouTube)
230 Comments on PSA: Don't Just Arm-wrestle with 16-pin 12VHPWR for Cable-Management, It Will Burn Up
Intel doesn't deserve even part of the blame:
www.anandtech.com/show/16038/nvidia-confirms-12pin-gpu-power-connector PCI SIG analysis (prior to meltdowns)
As some others said, great opportunity for AMD to potentially capitalize on.
I recall someone pointing out that the pins are rated for about 8 amps each or something similar, and that at 600 W the draw would exceed that. So right off the bat, having cards that can pull more than the connector is rated for is just bad. They probably assumed it would only matter during spikes, and that the rest of the time the card would be pulling nowhere near its limit, forgetting about people playing with unlocked framerates or rendering for hours on end. It's not Nvidia's connector, as has been pointed out many times; it's the same connector minus the sense pins. Instead, NV put a sense chip in the wire to tell the GPU how many cables are connected.
The connector was always gonna be fine for lower powered stuff, had no issues on my 3080FE roaring away for hours on end.
Yes, it is NV's fault though: they're running too many amps through these pins, so, as the picture shows, anything without a solid connection will fry. Even with a solid connection, they're going higher than the pins are rated for at the full 600 W. They're basically Dupont-style connectors at the wire end of the port, and those have never been rated that high compared to loads of other connector types. But they are cheap as chips compared to anything slightly more solid. The 4080 should have no issues with it; it's the 4090's peak power draw that's the main issue, and really NV should have had that card alone use a larger connector to spread the load more, or two 12-pins to ensure no issues. 600 W is just beyond what's safe for those tiny pins to handle. That's 50 amps spread across those teeny tiny things.
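The arithmetic behind these two comments is simple to sketch. A minimal Python estimate, assuming the six +12 V pins share the load evenly and using an illustrative per-pin rating of 9.5 A (actual terminal ratings vary by vendor and crimp quality; the ~8 A figure quoted above is another commonly cited number):

```python
# Rough per-pin current estimate for a 12VHPWR connector.
# Assumptions (illustrative, not from any datasheet): the load splits
# evenly across the 6 live (+12 V) pins, and each pin is rated ~9.5 A.
LIVE_PINS = 6
VOLTAGE_V = 12.0
PIN_RATING_A = 9.5  # assumed rating; quoted figures range ~8-9.5 A

def per_pin_current(watts: float, pins: int = LIVE_PINS) -> float:
    """Total current at 12 V, divided evenly across the live pins."""
    return watts / VOLTAGE_V / pins

# Sustained draw, the 600 W maximum, and a 200% excursion:
for watts in (450, 600, 1200):
    amps = per_pin_current(watts)
    status = "over" if amps > PIN_RATING_A else "within"
    print(f"{watts:>5} W -> {amps:5.2f} A per pin ({status} assumed rating)")
```

At 600 W this works out to about 8.3 A per pin, so whether the connector is "over its rating" depends entirely on which terminal spec you believe; an uneven split caused by one poorly seated pin pushes the remaining pins well past any of the quoted figures.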
The connector works fine in most cases. But there are caveats (don't bend before 30mm, etc.).
Remember, this is the same connector as the 30 series FE. That was only 450W.
I think the problems started coming up when Nvidia said "hey look.. if you use a 600W cable you can clock the 4090 card higher", which is why I went on the profanity laden rant that got me exiled from GamersNexus (because he chose to quote me out of context instead of addressing the actual issues because he had Nvidia in house at the time).
I have personally used this connector upwards of 55 A at 50°C. But you CANNOT put a bend in the cable less than 30 mm from the connector, or run it higher than 50°C. This is shown in the PCI-SIG report that was leaked, but that report only talked about the connector on the PSU side (which is why Corsair ATX 3.0 PSUs don't have the 12VHPWR connector on the PSU) and never about the GPU side.
For example, there is no possible way I could use a 12VHPWR connector in my case. There isn't room; there is barely room for the two 8-pins on my 3070 Ti, and those make a hard bend to clear the side panel.
To get a safe bend while keeping the last 1.2 inches straight, the cable needs to make a massive arc, and nobody wants that.
Nvidia needs to revise this connector ASAP, maybe with a longer pin or a differently shaped (T or +) pin.
For right now, right-angle adapters seem to be the solution; hopefully more PSU vendors will make 90°/L-shaped adapters.
edit: Reddit has once again done Nvidia's job for them; fixed.
We can close this thread now.
Joke aside, both sides are to blame. The user of the top GPU (at least for the moment) should avoid stressing a 600 W cable or connector. On the other hand, Nvidia knows that this huuuge card will be difficult to fit in many cases and must put the needed clearance as a red flag in the specs. It's another 20% added height on a 4-slot GPU, taller than most tower coolers...
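That clearance complaint can be put in rough numbers. A back-of-the-envelope Python check, using the RTX 4090 Founders Edition's ~137 mm height and assumed values for the plug stub and the straight run CableMod recommends (the stub length and typical case clearance here are illustrative guesses, not measurements):

```python
# Back-of-the-envelope side-panel clearance check for a tall card
# plus a straight 12VHPWR run. Numbers other than MIN_STRAIGHT_MM
# are illustrative assumptions.
CARD_HEIGHT_MM = 137     # RTX 4090 FE height; many customs are taller
CONNECTOR_STUB_MM = 10   # plug body above the card edge (assumed)
MIN_STRAIGHT_MM = 35     # straight cable before any bend, per CableMod

def needed_clearance(card_mm: int,
                     stub_mm: int = CONNECTOR_STUB_MM,
                     straight_mm: int = MIN_STRAIGHT_MM) -> int:
    """Vertical space needed above the motherboard before the cable may bend."""
    return card_mm + stub_mm + straight_mm

print(f"~{needed_clearance(CARD_HEIGHT_MM)} mm needed above the motherboard")
```

That lands around 180 mm before the cable is even allowed to start curving toward the side panel, which is more room than many standard-width mid-towers leave above the motherboard tray.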
I think all this mess wouldn't have happened if they made the adapter with a 90-degree bend. I know it's not practical for those who mount their cards vertically, but they are a minority, and they already have to buy a riser cable, so they can invest in another nice (if unnecessary) cable.
1) Designed that shit (Intel designed the sensing pins only)
2) They have tested it and figured it is HIGHLY PROBLEMATIC (see PCI SIG report in my previous post)
3) They STILL found it OK to push it out to the market.
Except it doesn't even, per docs submitted to PCI-SIG by NV itself.
But NV didn't invent anything; they just adopted it. Maybe not this gen, but it's the way forward for high-end, high-wattage GPUs at least. I just hope they will learn from NV's (not so good) way of implementing this new standard.
let alone the size... How is "it" not NV's fault???
Who designed that thing? NV (no, not Intel, Intel designed only sensing pins)
Who KNEW from testing it was terrible? NV (yes, they've even submitted it to PCI-SIG)
Who cheaped out on 90-degree connectors for a GPU that costs 2000+ Euro and does not fit into 93% of cases?
I'm pretty sure it wasn't my grandma, nor was it Intel or AMD.
Then post in context.
At those wattages (300 W+), you need to make sure your socket is secured and held in place properly. This is just too flimsy. Redo the same setup with something that locks the connection in place properly, and everyone would be fine.