Sunday, October 30th 2022
PSA: Don't Just Arm-wrestle with 16-pin 12VHPWR for Cable-Management, It Will Burn Up
Despite sticking with PCI-Express Gen 4 as its host interface, the NVIDIA GeForce RTX 4090 "Ada" graphics card standardizes the new 12+4 pin ATX 12VHPWR power connector, even across custom designs by NVIDIA's add-in card (AIC) partners. This tiny connector is capable of delivering 600 W of power continuously, and can briefly handle 200% excursions (spikes). In theory, it should make your life easier by condensing multiple 8-pin PCIe power connectors into one neat little connector; in reality, the connector is proving to be quite impractical. For starters, most custom RTX 4090 graphics cards have PCBs spanning only two-thirds of the actual card length, which puts the power connector closer to the middle of the card, making it aesthetically unappealing. But there's a bigger problem, as uncovered by Buildzoid of Actually Hardcore Overclocking, an expert in PC hardware power-delivery design.
CableMod, a company that specializes in custom modular-PSU cables targeting the case-modding community and PC enthusiasts, has designed a custom 12VHPWR cable that plugs into multiple 12 V output points on a modular PSU, converting them to a 16-pin 12VHPWR connector. It comes with a fairly exhaustive set of dos and don'ts; the latter are more relevant: apparently, you should not arm-wrestle with a 12VHPWR connector. Do not attempt to bend the cable horizontally or vertically close to the connector; leave a distance of at least 3.5 cm (1.38 inches) before any bend. This reduces pressure on the contacts inside the connector. Combine this with the already tall RTX 4090 graphics cards, and you have a power connector that's impractical for most standard-width mid-tower cases (chassis), with no room for cable-management. Attempting to "wrestle" with the connector and somehow bend it into your desired shape will cause improper contact, which poses a fire hazard.
Update Oct 26th: There are multiple updates to the story.
The 12VHPWR connector is a new standard, which means most PSUs on the market lack it, much in the same way that PSUs some 17 years ago lacked PCIe power connectors and graphics cards included 4-pin Molex-to-PCIe adapters. NVIDIA probably figured out early on when implementing this connector that it could not rely on adapters from AICs or PSU vendors to perform reliably (i.e. not cause problems with its graphics cards, resulting in a flood of RMAs), and so took it upon itself to design an adapter that converts 8-pin PCIe connectors to a 12VHPWR, which all AICs are required to include with their custom-design RTX 4090 cards. This adapter is rightfully overengineered by NVIDIA to be as reliable as possible, and NVIDIA even specifies a rather short service life of 30 connect-disconnect cycles before the contacts of the adapter begin to wear out and become unreliable. The only problem with NVIDIA's adapter is that it is ugly and ruins the aesthetics of otherwise brilliant RTX 4090 custom designs, which creates a market for custom adapters.
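To put those numbers in perspective: spreading 600 W across the connector's six 12 V pins leaves little headroom per terminal. Here is a back-of-the-envelope sketch, not a spec calculation; the six-pin layout follows the 12VHPWR pinout, while the ~9.5 A per-terminal rating is an assumption based on figures commonly cited for this connector family:

```python
# Back-of-the-envelope: per-pin current through a 12VHPWR connector.
# Assumed values: 6 x 12 V current-carrying pins, and a per-terminal
# rating of ~9.5 A (commonly cited for this connector family).

RAIL_VOLTAGE = 12.0   # volts
CURRENT_PINS = 6      # 12 V pins sharing the load
PIN_RATING_A = 9.5    # assumed per-terminal rating, amps

for watts in (450, 600, 1200):  # sustained, spec max, 200% excursion
    total_amps = watts / RAIL_VOLTAGE
    per_pin = total_amps / CURRENT_PINS
    headroom = PIN_RATING_A / per_pin
    print(f"{watts:>5} W -> {per_pin:5.2f} A per pin "
          f"({headroom:4.2f}x of assumed {PIN_RATING_A} A rating)")
```

Under these assumptions, a 600 W load already puts each pin at close to 90% of its terminal rating, so if a tight bend lifts even one contact, the remaining pins must carry the difference.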
Update 15:59 UTC: A user on Reddit who goes by "reggie_gakil" posted pictures of a GeForce RTX 4090 graphics card with a burnt-out 12VHPWR connector. While the card itself is "fine" (functional), the NVIDIA-designed adapter that converts 4x 8-pin PCIe to 12VHPWR has a few melted pins, probably caused by improper contact that made them overheat or short. "I don't know how it happened but it smelled badly and I saw smoke. Definetly the Adapter who had Problems as card still seems to work," goes the caption with these images.
Update Oct 26th: Aris Mpitziopoulos, our associate PSU reviewer and editor of Hardware Busters, posted an in-depth video presentation on the issue, in which he argues that the 12VHPWR design may not be at fault, but rather extreme abuse by end-users attempting to cable-manage their builds. Mpitziopoulos compares the durability of the connector in its normal straight form versus when tightly bent. You can catch the presentation on YouTube here.
Update Oct 26th: In related news, AMD confirmed that none of its upcoming Radeon RX 7000 series RDNA3 graphics cards features the 12VHPWR connector, and that the company will stick to 8-pin PCIe connectors.
Update Oct 30th: Jon Gerow, aka Jonny Guru, has posted a write-up about the 12VHPWR connector on his website. It's an interesting read with great technical info.
Sources:
Buildzoid (Twitter), reggie_gakil (Reddit), Hardware Busters (YouTube)
230 Comments on PSA: Don't Just Arm-wrestle with 16-pin 12VHPWR for Cable-Management, It Will Burn Up
I agree comments like that are unnecessary.
Why then would you make a post justifying how you spend your money? You're making a post about the same topic, just arguing the other side, adding nothing to the topic in an equal fashion. Especially odd when "how someone spends their money" doesn't affect how you sleep at night.
Anyone with half a bit of sense would raise this number, because quite simply, GPUs definitely do get close to that limit or go over it. And these GPUs aren't dealing in low wattages like, say, HDDs or a bunch of fans do. Here we are dealing with loads fat enough to make a PSU sweat. Also, and much more importantly, these cables aren't getting cheaper, and certainly not dirt cheap like Molex is. We're talking about top-of-the-line components here.
Similar things occur with, for example, SATA flat cables. They're too weak, so they break during the normal lifespan if you do more than one or two reinstalls with them; and let's face it, with SSDs that likelihood has only increased, as the devices are much more easily swapped around or taken portable, boards now offer hot-swap sockets for SATA, etc.
And examples like it are rapidly gaining new friends: Intel's IHS bending, thermal pads needed on GPUs to avoid failure, etc. etc.
The issue is shitty bend mechanics, yes, but at the core of all these issues is one simple thing: cost reduction at the expense of OUR safety and the durability of our devices. Be wary of what you cheer for; saying 30 cycles is fine because it's the same as Molex is not appreciating how PC gaming has evolved. Specs should evolve along with it.
Should have made the plastic tab longer if that was the minimum bend point.
Hell, just extend it down so it acts like a leg to hold the big bitch up :laugh:
We all have our lenses to view the world through, (un?)fortunately. Nobody's right. Or wrong. But social feedback is generally how norms and normality are formed.
Still, though, I don't think it's entirely honest to your budgeting or yourself to say buying the top end is price-conscious. It really isn't: all costs increase because you've set the bar that high, $/fps is worst at the top, and that problem only gets bigger if you compare gen-to-gen for similar performance. It's perfectly possible to last a similar number of years with something one or two notches lower in the stack and barely notice the difference, especially today, where the added cost of cooling and other requirements can amount to many hundreds of extra dollars.
That said, buying 'high end' is definitely more price-conscious than cheaping out and then getting forced into an upgrade because you really can't run stuff properly in two to three years' time. But there is nuance here; an x90 was never a good idea, except when they go on sale like AMD's 69xx cards do now. It's the same as buying at launch; tech depreciates too fast to make it worthwhile. You mentioned cars yourself; a similar loss of value applies the moment you drive off... Yeah, or you could decide to offer and design your very own Nvidia-branded cable doing the same thing but with somewhat greater tolerances. One could say their margins and their leadership position make that an expectation, even. Nvidia is always first foot in the door when it comes to pushing tech ahead... They still sell G-Sync modules even though the peasant spec is commonplace now, for example.
Everything just stinks of cutting corners, and in this segment, IMHO, that's instant disqualification.
Also, back of the card? Where exactly? There's no PCB on the better half of it, right?
Yep, seen a few.
Thing is, those are usually vertically mounted GPUs.
I was referring to standard mounting.
First, a 3% difference is enough to decide who is in first place in the charts and who is second. And people on the Internet, at least, are happy to declare the top card a monster and the 3%-slower one a failure, even if that second card is, for example, 30% more efficient. So: Intel first, because their 10 nm problems forced them to compete while still on 14 nm; Nvidia later, which probably decided to pocket all the money and leave nothing to the AIBs; and now also AMD, which has realized that targeting efficiency is suicidal and pushes their factory overclocks as high as they can.
There was no chance in a million that Nvidia would come out with a Founders Edition at 350 W and let its partners produce custom models that go to 450 W or higher.
:)
Clearance OK; PSU? errrrr... nah, no chance in hell :p I changed it recently enough not to care about ATX 3.0 PSUs, which are hardly available anywhere atm (aside from one model from Thermaltake, which is also close to 3.5x the price I paid for the one I have). And it seems AMD will keep the 8-pins for the higher end (6-pins for lower models), and I hope they stick to it, given 1. the price, 2. the issues seen recently.
Although... I HAVE the clearance! That's more important. Oh, and I've known since my first self-build that a tight cable bend is a bad, bad thing... and not only for PCs (especially with low-quality cables/extensions; some cables handle a steep curve better than others...).
That said, why not just shift to using EPS12V? Higher-end 1 kW+ PSUs can power two or more EPS12V lines by default (depending on the modular options), and EPS12V 4-pin can handle 155 watts continuous while the 8-pin can handle 235-250 watts continuous. That would still require 3x 8-pin connectors for 600-700 watt cards, but at least the output per 8-pin goes up from the PCIe limit of 150 watts, and having some PSU makers swap PCIe connectors for extra EPS12V isn't much different from designing ATX 3.0 PSUs with a dedicated but potentially faulty 12VHPWR connection. If anything, Seasonic's higher-end PSUs can already do both EPS12V and PCIe from the same modular port, so it could be adapted quickly. And most importantly, all EPS12V wires and contacts are of a thicker gauge than the 12VHPWR wires and contacts.
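To make the comparison concrete, here is a quick sketch of how many connectors a hypothetical 600 W card would need under each budget; the per-connector figures are the continuous ratings quoted above, treated as nominal:

```python
import math

# Continuous power budgets per connector, in watts (figures quoted
# above; the EPS12V numbers are nominal ratings, not hard limits).
BUDGETS = {
    "PCIe 8-pin": 150,
    "EPS12V 4-pin": 155,
    "EPS12V 8-pin": 235,
    "12VHPWR 16-pin": 600,
}

CARD_POWER = 600  # hypothetical 600 W graphics card

for name, watts in BUDGETS.items():
    needed = math.ceil(CARD_POWER / watts)
    print(f"{name:15s}: {watts:3d} W each -> "
          f"{needed} connector(s) for {CARD_POWER} W")
```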
Pretty sure AMD is/has been competitive.
It's been quite a few years since I went to school, but I didn't catch the news that the laws of physics were broken. Unless someone uses materials with better conductivity, it's very predictable how much heat will be generated by a cable of a given gauge and power draw. That sounds very much like treating a symptom rather than solving the underlying cause. Even if the cause were a small increase in resistance, how would you precisely detect this from either the PSU or the GPU end (keeping in mind both the tolerances in the spec of each part and the fact that the current draw changes very rapidly)? You can't just do this the way a multimeter would, by sending a small current and measuring the voltage drop to calculate resistance. Are there more advanced (and reliable) techniques beyond my knowledge of electronics that would make your proposal feasible?
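That predictability is plain I²R arithmetic. A minimal sketch, assuming 16 AWG copper at roughly 13.2 mΩ/m, a 30 cm adapter pigtail, and the load split evenly across six 12 V wires (all assumed values, not measurements):

```python
# I^2 * R heating in one 12VHPWR supply wire (assumed values).
WIRE_RES_PER_M = 0.01317  # ohms/metre, ~16 AWG copper (assumed gauge)
WIRE_LENGTH_M = 0.30      # assumed adapter pigtail length
PINS = 6                  # 12 V wires sharing the load

for watts in (600, 1200):
    amps_per_wire = watts / 12.0 / PINS
    heat = amps_per_wire**2 * WIRE_RES_PER_M * WIRE_LENGTH_M
    print(f"{watts} W load: {amps_per_wire:.2f} A/wire, "
          f"~{heat:.2f} W dissipated per 30 cm wire")
```

Even at a 1200 W excursion this works out to roughly a watt per wire, which is modest; it also hints at why suspicion falls on the contacts rather than on the wire itself.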
I'm more a fan of doing good engineering to create a robust design rather than overengineering a complex solution to compensate for a poor design.
The one thing that doesn't sound quite right to me is the claim that this whole problem is cables that are not fully seated causing extreme heat that melts down the plug, and I would like to see a proper in-depth analysis rather than jumping to conclusions. From what I've seen of power plugs (of any type/size) over the years, heat issues at the contacts are unusual, unless we are talking about making no connection at all and causing arcing, but that's with higher voltages. With 12 V and this wire gauge, the threshold between "good enough" and no connection will be very tiny, probably less than 1 mm. So if this were the core problem, then engineering a solution would be fairly easy: either make the cables stick better in the plug or make the contact area a tiny bit larger. Keep in mind that with most types of plugs the connection is usually far better than the wire, so unless it's physically damaged, there shouldn't be an issue with connectivity. (Also remember that the electrons move on the outside of the wire, and the surface area of the connection in most plugs is significantly larger than the wire gauge, probably >10x.)
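That intuition can be sanity-checked numerically: the heat released at a single degraded contact is I² times the contact resistance, so tens of milliohms concentrated in a terminal-sized volume add up quickly. A sketch, with the per-pin current taken from the 600 W case above and the resistance values as assumptions, not measurements:

```python
# Heat at a single pin contact vs. contact resistance (assumed values).
AMPS_PER_PIN = 8.33  # ~600 W / 12 V / 6 pins

for r_mohm in (1, 5, 20, 50):  # healthy -> badly degraded contact
    heat = AMPS_PER_PIN**2 * (r_mohm / 1000.0)
    print(f"{r_mohm:3d} mOhm contact -> {heat:5.2f} W at one terminal")
```

A few watts dissipated inside a contact-sized piece of nylon with no airflow is plausible melting territory, which would be consistent with melted pins rather than melted wires.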
So the explanation so far sounds a little bit off to me. Even I, with my thick fingers, would probably manage to do this unintentionally.
I would call this poor engineering, not user error.
But I think better positioning, or an adequate adapter, should have been provided free as essential add-ins.
All the little oversights in moving to this new connector exemplify and confirm my 'feelings' about modern engineering at large: practical considerations are set well below 'the book'.
Who cares, right? Not like they're liable for damages; that's on the manufacturer and end user...