Sunday, October 30th 2022

PSA: Don't Just Arm-wrestle with 16-pin 12VHPWR for Cable-Management, It Will Burn Up

Despite sticking with PCI-Express Gen 4 as its host interface, the NVIDIA GeForce RTX 4090 "Ada" graphics card standardizes the new 12+4 pin ATX 12VHPWR power connector, even across custom designs by NVIDIA's add-in card (AIC) partners. This tiny connector is capable of delivering 600 W of power continuously, and of briefly sustaining power excursions (spikes) of up to 200%. Normally, it should make your life easier by condensing multiple 8-pin PCIe power connectors into one neat little connector; but in reality the connector is proving to be quite impractical. For starters, the PCBs of most custom RTX 4090 graphics cards span only two-thirds of the card's length, which puts the power connector closer to the middle of the card and makes it aesthetically unappealing. But there's a bigger problem, as uncovered by Buildzoid of Actually Hardcore Overclocking, an expert in PC hardware power-delivery design.
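For perspective on those numbers, here is a quick back-of-the-envelope current calculation. This is an illustrative sketch only: the six-pin current path follows the 12VHPWR pinout, but the exact per-terminal ratings vary by connector vendor and are not an official NVIDIA figure.

```python
# Back-of-the-envelope per-pin current for a 12VHPWR connector at its
# rated load. The connector carries current on six 12 V pins (returned
# on six ground pins); the four small pins are sideband signals and
# carry no power.
RAIL_VOLTAGE = 12.0   # volts
RATED_POWER = 600.0   # watts, continuous rating
CURRENT_PINS = 6      # 12 V pins sharing the load

total_current = RATED_POWER / RAIL_VOLTAGE        # 50 A total
per_pin_current = total_current / CURRENT_PINS    # ~8.3 A per pin

print(f"{total_current:.0f} A total, {per_pin_current:.2f} A per pin")
```

At roughly 8.3 A per pin against the ~9.2 A terminal rating commonly cited for this connector family, the margin is slim, which is why poor contact on even one or two pins can push the remaining pins past their limits.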

CableMod, a company that specializes in custom modular-PSU cables targeting the case-modding community and PC enthusiasts, has designed a custom 12VHPWR cable that plugs into multiple 12 V output points on a modular PSU, converting them to a 16-pin 12VHPWR connector. It comes with a fairly exhaustive set of dos and don'ts; the latter are more relevant: apparently, you should not try to arm-wrestle with a 12VHPWR connector. Do not attempt to bend the cable horizontally or vertically close to the connector; leave a distance of at least 3.5 cm (1.38 inches) before any bend. This reduces the pressure on the contacts inside the connector. Combine this with the already tall RTX 4090 graphics cards, and you have a power connector that's impractical for most standard-width mid-tower cases (chassis), with no room for cable-management. Attempting to "wrestle" with the connector and somehow bend it into your desired shape will cause improper contact, which poses a fire hazard.
Update Oct 26th: There are multiple updates to this story, appended below.

The 12VHPWR connector is a new standard, which means most PSUs on the market lack it, much in the same way PSUs some 17 years ago lacked PCIe power connectors and graphics cards included 4-pin Molex-to-PCIe adapters. NVIDIA probably figured out early on when implementing this connector that it cannot rely on adapters from AICs or PSU vendors to perform reliably (i.e., not cause problems with its graphics cards, resulting in a flood of RMAs), and so took it upon itself to design an adapter that converts 8-pin PCIe connectors to a 12VHPWR, which all AICs are required to include with their custom-design RTX 4090 cards. This adapter is rightfully overengineered by NVIDIA to be as reliable as possible, yet NVIDIA specifies a rather short service life of 30 connect/disconnect cycles before the adapter's contacts begin to wear out and become unreliable. The only problem with NVIDIA's adapter is that it is ugly, and ruins the aesthetics of the otherwise brilliant custom RTX 4090 designs; which means a market is created for custom adapters.

Update 15:59 UTC: A user on Reddit who goes by "reggie_gakil" posted pictures of a GeForce RTX 4090 graphics card with a burnt-out 12VHPWR connector. While the card itself is "fine" (functional), the NVIDIA-designed adapter that converts 4x 8-pin PCIe to 12VHPWR has a few melted pins, probably caused by improper contact making them overheat or short. "I don't know how it happened but it smelled badly and I saw smoke. Definetly the Adapter who had Problems as card still seems to work," goes the caption accompanying the images.

Update Oct 26th: Aris Mpitziopoulos, our associate PSU reviewer and editor of Hardware Busters, did an in-depth video presentation on the issue, in which he details how the 12VHPWR design may not be at fault, but rather extreme abuse by end-users attempting to cable-manage their builds. Mpitziopoulos compares the durability of the connector in its normal straight form versus when tightly bent. You can catch the presentation on YouTube here.

Update Oct 26th: In related news, AMD confirmed that none of its upcoming Radeon RX 7000 series RDNA3 graphics cards features the 12VHPWR connector, and that the company will stick to 8-pin PCIe connectors.

Update Oct 30th: Jon Gerow, aka Jonny Guru, has posted a write-up about the 12VHPWR connector on his website. It's an interesting read with great technical info.
Sources: Buildzoid (Twitter), reggie_gakil (Reddit), Hardware Busters (YouTube)

230 Comments on PSA: Don't Just Arm-wrestle with 16-pin 12VHPWR for Cable-Management, It Will Burn Up

#201
jigar2speed
The Quim ReaperThat's alright, if they burn up their card, they're rich, they can just buy another...
This is such a wrong notion; not everyone who purchases an RTX 4090 is rich. Some people literally use their savings to buy high-end products once in a decade, and losing a product like this to bad design is sad.

Never encourage bad engineering; it's outright sad.
Posted on Reply
#202
Mussels
Freshwater Moderator
NaterNow I'm a Design Engineer, CAD Tech, Machinist, CAM Programmer, etc etc...but I'm no electrical engineer, yet I see a simple SIMPLE solution staring them right in the face.

Update the PCIe slots and mainboard layouts. Quit trying to fit the large square peg in the small round hole. The card is already taking up 4 PCIe slots in most designs, so have it pop into 4 rigid PCIe slots and get yourself 4x 75w of power right off the top without changing much of anything.

And I know some of you will reply "but that will make it too expensive!"

:wtf:
That'd be useless?
It'd need entirely new motherboards, CPUs and PCI-E standards

The slots are 75 W each, so even making it physically use two slots, you're still far short of the 500 W these cards can use
#203
Raiden85
TheDeeGeeFor the 600 Watt adapter it's the difference between pulling 300 Watt through a cable if daisy chained, or 150 with 4.

Sure PCI-E 8-Pin is rated for little over 300 Watt, but would you be comfortable with that?

But i guess some people like to live on the edge.
Corsair and Seasonic are certainly fine with it, as their official 600 W 12VHPWR adapters use just two connectors from the PSU, so 300 W per cable. While 150 W may be the official limit, the connector is simply overbuilt if you're using a PSU from a good company.

www.corsair.com/uk/en/Categories/Products/Accessories-%7C-Parts/PC-Components/Power-Supplies/600W-PCIe-5-0-12VHPWR-Type-4-PSU-Power-Cable/p/CP-8920284
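The "300 W through one cable" practice can be roughly sanity-checked. This is a sketch under assumed figures: three 12 V pins per 8-pin PCIe connector is standard, but the 9 A per-pin terminal rating is a commonly quoted HCS figure, not an official PCI-SIG number.

```python
# Rough physical headroom of an 8-pin PCIe power cable vs. its spec limit.
RAIL_VOLTAGE = 12.0    # volts
PINS_12V = 3           # an 8-pin PCIe connector has three 12 V pins
AMPS_PER_PIN = 9.0     # assumed HCS terminal rating (illustrative)
SPEC_LIMIT = 150.0     # official PCIe 8-pin power limit, watts

physical_limit = RAIL_VOLTAGE * PINS_12V * AMPS_PER_PIN   # 324 W
headroom = physical_limit / SPEC_LIMIT                    # ~2.2x

print(f"~{physical_limit:.0f} W physical vs {SPEC_LIMIT:.0f} W spec "
      f"({headroom:.1f}x headroom)")
```

That roughly 2x gap between the conservative spec limit and the physical capability of good terminals is what these two-connector 600 W cables rely on.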
#204
Sisyphus
phanbueyHow much statistical analysis do you need to look at that design and realize it’s bad? Always these apologists trying to defend obviously bad ideas by downplaying.
Statistical data are reasonable. "apologists trying to defend obviously bad ideas by downplaying" is emotional and personal. That says more about you than about the product.
#205
phanbuey
SisyphusStatistical data are reasonable. "apologists trying to defend obviously bad ideas by downplaying" is emotional and personal. That says more about you than about the product.
Why is it emotional -- it's a fact. And don't take it personally; it's not against you. I'm sure you're a fine person; I don't know you, nor do I have anything against you. I do have an issue with the idea that we need mountains of data to identify a mistake -- we really don't.

Sometimes a badly designed product is obvious, especially when it immediately introduces a failure mode that wasn't an issue in virtually the same design last generation. It's not personal or emotional -- it's just a regression. Insisting on more "statistical data (because it's reasonable, and not emotional)" for a clearly failing mechanical design is grossly disingenuous. How many need to catch fire before we reach your failure threshold? 1? 5? 5,000? 50,000? All of them? When is a clear design failure clear enough in the historical data?

I'm a data scientist and process engineer by trade, and I love the scientific process as much as the next guy. But if you know anything about industrial design and process engineering, you know that when you see any process or product that can fail catastrophically in the regular course of its usage - that is a 100% fail guarantee - you fix it right away. It's not a matter of if, it's just a matter of when. In this case - a dongle whose wires can only be bent a certain way, or whose solder points can break and cause it to melt - NVIDIA is reacting quickly because their guys know this, and they know a lawsuit is coming.
#206
maxfly
The real question at this point is how are they going to respond?
They're taking their sweetass time imo.

My take on the latest internal memo at Ngreedia. :D
"We really did engineer a great product...honest! Our design just wasn't followed to our specifications due to a slight oversight. The manufacturing facility is in China but our QC staff is in Taiwan."
"So what you're saying is, communication wires got crossed and things went a bit haywire?"
:P
#207
OneMoar
There is Always Moar
If you are going to use the word "ngreedia", please don't post at all; you are just thread-crapping (and it makes you sound like you're 15), which is my job and you may not have it
#208
Sisyphus
phanbuey[...]I do have an issue with the idea that we need mountains of data to identify a mistake -- we really don't.
1) A few tech blogs are talking about a technical defect. To gauge how serious this is, the RMA rate it causes is required as a minimum. If it is above average, measures are taken to reduce the failure rate. If not, no further measures are necessary. That is what the test field/repair/warranty process, with its precalculated costs, is there for. Failure rates of zero are impossible.
2) Technical products must meet technical specifications. Whether these are met or not is the only objective assessment.
3) Economic criteria such as profit margin.
What caused the error? There are dozens of possibilities. An incorrectly produced batch from a subcontractor, overlooked during incoming goods inspection, is the most common cause of errors in the mass production of complex goods. Bad plug design is possible, but unlikely, as the plugs have to fulfill the technical specifications. It is not possible to make a qualified judgment here without doing some research and without facts/data.
#209
phanbuey
Sisyphus1) A few tech blogs are talking about a technical defect. To gauge how serious this is, the RMA rate it causes is required as a minimum. If it is above average, measures are taken to reduce the failure rate. If not, no further measures are necessary. That is what the test field/repair/warranty process, with its precalculated costs, is there for. Failure rates of zero are impossible.
This is true for regular modes of failure - i.e., power delivery causing the card to crash. IMO melting is in a 'catastrophic' category, since it presents a fire hazard and poses a danger to the user. As far as rates go, typical cable failure rates are generally in the 'several per million' range, especially if the cables are burn-in tested at the manufacturing plant. There are far fewer than a million cards and already 5 reports of cables and ports melting 3 weeks from launch -- so this is very likely far beyond the typical rate of cable failure.

Sisyphus2) Technical products must meet technical specifications. Whether these are met or not is the only objective assessment.
Correct, but a power delivery cable always carries the implicit technical specification of not melting when used in its intended system. The point of these cables is to deliver power without melting or catching fire.

Sisyphus3) Economic criteria such as profit margin.
What caused the error? There are dozens of possibilities. An incorrectly produced batch from a subcontractor, overlooked during incoming goods inspection, is the most common cause of errors in the mass production of complex goods. Bad plug design is possible, but unlikely, as the plugs have to fulfill the technical specifications. It is not possible to make a qualified judgment here without doing some research and without facts/data.
This is a catastrophic failure, which carries with it the risk of lawsuits, damages far beyond the cost of recalling the products, and the risk of regulatory intervention. If this were a card crashing or malfunctioning and requiring an RMA, then yes, an economic analysis could be made. If, however, there is any risk of physical danger to the end user (such as things catching fire that should not be on fire), then economic criteria and profit margins are generally secondary concerns.
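The rate comparison above can be sketched numerically. All inputs are illustrative assumptions (a hypothetical 100,000-card install base standing in for "far fewer than a million", and a 5-per-million baseline cable failure rate); under a simple Poisson model, observing 5 failures would be wildly unlikely at the baseline rate:

```python
from math import exp

# If melted connectors occurred at a typical "several per million" cable
# failure rate, how likely is seeing 5+ failures in a small install base?
# All numbers are illustrative assumptions, not measured data.
CARDS = 100_000        # hypothetical install base
BASE_RATE_PPM = 5      # assumed baseline failures per million cables
OBSERVED = 5           # reported melted connectors

lam = CARDS * BASE_RATE_PPM / 1e6   # expected failures = 0.5

# P(X >= OBSERVED) under Poisson(lam): subtract the first OBSERVED terms
term = exp(-lam)       # P(X = 0)
cumulative = 0.0
for k in range(OBSERVED):
    cumulative += term
    term *= lam / (k + 1)   # advance to P(X = k + 1)
p_at_least = 1.0 - cumulative

print(f"expected: {lam:.2f}, P(>= {OBSERVED} failures): {p_at_least:.6f}")
```

Even under these rough assumptions the probability comes out below 0.02%, which is the quantitative version of the point that five melts in three weeks is not normal cable attrition.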
#210
xBruce88x
Is it too early to call this Bend Gate 2.0?
#211
Sisyphus
phanbuey[...]
We don't know the reason for the failure; we can only speculate. From my point of view, the connection is subjected to mechanical stress, due to short power supply cables or narrow PC cases. A sentence like this should be found in every installation manual: the plug connections must be installed in such a way that they are not exposed to mechanical stress. But I might be wrong, as the law differs from nation to nation. In a modular system environment with components from many independent companies, connected by ordinary consumers, it's not that clear who is responsible. The plugs/cables may simply not have been connected as intended. In the article, NVIDIA asked the board partners to send the damaged cards in for further analysis. The issue will be solved once that analysis is done. The outcome could range from a batch of badly manufactured plugs that get replaced, through more detailed manuals with clear specifications about case dimensions and a list of certified power supplies for the RTX 4090, to a complete overhaul of some PC standards, or even new singular solutions of the kind used in HPC/workstations with higher power demands.
You call this "bad design"; I call this early-adopter problems. Those who don't like to troubleshoot should wait 4-6 months before purchasing the newest PC hardware components.

I have a technical education, but if I invest >$1,000 in new hardware, I prefer to have the PC built by a specialist retailer. It's much more convenient to just take the smoking PC back to where it was built.
#213
mechtech
Bend radius is a thing when it comes to cables..............basically any and all cables..............
#215
Dirt Chip
Some new info (as of 3/11/22) from JonnyGuru: it seems to be mostly human error from not pushing the cable all the way in.
The design (problem) of the connector makes it easy to think you plugged it in correctly; but with bending to the sides, which happened more with the adapter (though not only with it), you can get bad pin contact (mostly on the outermost pins).

To be continued...
#216
RainingTacco
the54thvoidThis is the same industry-standard cycle as for many molex connectors. i.e., not an issue for the normal end-user.

Scare-mongering (or lack of due-diligence) isn't helpful when trying to remain a reliable tech site.
Molex was a total shitshow; I'm glad they are a thing of the past. If you wanted to make a point, it's an extremely bad point.
#217
Bomby569
mechtechBend radius is a thing when it comes to cables..............basically any and all cables..............
Not 8-pins, that I can guarantee you
#218
Dirt Chip
TheDeeGeeThis video explains it all.

TL;DW: Human error by not connecting it all the way in.
#219
Bomby569
Dirt ChipTL;DW: Human error by not connecting it all the way in.
That's still bad design. People have been connecting PC power cables for decades without problems; it isn't human error if the design makes so many people do it wrong.
If that theory is even the right one - I've seen Reddit posts where people clearly had it connected all the way.
#220
Dirt Chip
Bomby569That's still bad design. People have been connecting PC power cables for decades without problems; it isn't human error if the design makes so many people do it wrong.
If that theory is even the right one - I've seen Reddit posts where people clearly had it connected all the way.
How many is "so many" in % of total users with this cable?
#221
Bomby569
Dirt ChipHow many is "so many" in % of total users with this cable?
There are people building PCs, newcomers and old-timers, every day. The old 8-pin has none of these issues, and there should be millions of them in use at any time.
Versus a new product, with relatively few adopters (a drop of water in the ocean of 8-pins) and with so many cases in so little time.
And I don't know what "total users" means; I see a lot of people who disconnected their cards, who bought 3rd-party cables; others' cables may be melted and they don't even know it... yet. We only know about the ones who post on the internet, not the ones who don't.

Clearly a design flaw if it's as the video says, not human error. You should not design things that so many people can't even plug in properly. Worse when not plugging it in properly makes the product self-destruct.
#222
Dirt Chip
Bomby569There are people building PCs, newcomers and old-timers, every day. The old 8-pin has none of these issues, and there should be millions of them in use at any time.
Versus a new product, with relatively few adopters (a drop of water in the ocean of 8-pins) and with so many cases in so little time.
And I don't know what "total users" means; I see a lot of people who disconnected their cards, who bought 3rd-party cables; others' cables may be melted and they don't even know it... yet. We only know about the ones who post on the internet, not the ones who don't.

Clearly a design flaw if it's as the video says, not human error. You should not design things that so many people can't even plug in properly. Worse when not plugging it in properly makes the product self-destruct.
Some, but not all, can be blamed on the design. The user also has his part in the end, because it seems that a very small percentage do it wrong.
What part is the user's fault is yet to be seen.
To be continued...
#223
Bomby569
Dirt ChipSome, but not all, can be blamed on the design. The user also has his part in the end, because it seems that a very small percentage do it wrong.
For a PC part, the catastrophic failure rate seems too high, not too low, but we can agree to disagree.
#224
ThrashZone
Hi,
Clearly it's an NVIDIA issue, designing a shit adapter.
But you've got to love someone trying to fit this large monster in a lunchbox case :laugh:
#225
Dirt Chip
ThrashZoneHi,
Clearly it's an NVIDIA issue, designing a shit adapter.
But you've got to love someone trying to fit this large monster in a lunchbox case :laugh:
I hope to see Intel adapters, if any are made.
It will be a nice comparison.
Bomby569For a PC part, the catastrophic failure rate seems too high, not too low, but we can agree to disagree.
In general, human error is responsible for many things. I just say it is still too early to decide what degree of responsibility the user bears in these cases. From the data so far, zero doesn't seem to be the answer.