
Reverse engineering a 12VHPWR cable

So, let's say I have absolutely no trust in PCI-e to 12VHPWR adaptors. Let's say I'm in no mood to spend big money on a new PSU whilst I've already got a perfectly fine kW unit. Let's say my future GPU is doomed to come with this melting connector. I don't like it; I would sell anything and anyone to have this clown fiesta ended right here, right now, but here we are.

Anyway... To the chase. Theoretically, if I put all the cables in the correct order, I will have male-to-male PCI-e to 12VHPWR cables, right? So I could insert the PCI-e end into the PSU and the 12VHPWR end into the GPU.

I will use 14 AWG copper wires to be absolutely sure nothing catches fire.

Also, does my idea bode well with some bending? Do I allow myself some slack, or stick to that "no bending within the first inch" rule?
 
Yes, this works fine. Just buy an extension cable, cut off the female part and rig up whatever voltages you like from whatever connector.

I did that for some setups here.

Statistically, it will probably be less safe than just using a 4x 8-pin adapter.
 
Statistically, it will probably be less safe than just using a 4x 8-pin adapter.
But why? I'd use thicker cables certified for much higher loads, I'd have fewer weak links, and everything will be designed around fool-proofing against me personally, and I know exactly how much of a PC building fool I am. Must be perfectly safe.
 
around fool-proofing against me personally
In that case, since your skills exceed those of professional electronics design engineers with dozens of years of experience, you'll be perfectly safe, of course.
 
your skills exceed those of professional electronics design engineers with dozens of years of experience
They do not but whoever made 12VHPWR never hired those in the first place.
 
In my mind, even if you add "bigger" cables to the existing connectors, you need to consider that in the worst-case scenario, only one 12V and one GND wire out of all those cables may have to pass all the current. So it should be around 12 AWG instead of 14: in the worst case you need to push 50 A at 12 V through one wire, and then you need to be sure that ONE pin in the connector can handle 600 W.
Data is from https://www.thevanconversion.com/wire-sizing-calculator
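Back-of-the-envelope version of that in Python, for anyone who wants to poke at the numbers (the ampacity figures are ballpark chassis-wiring values I'm assuming, not pulled from that calculator; charts vary a lot):

```python
# Worst-case arithmetic for a 600 W load over this connector (nominal 12 V).
WATTS = 600
VOLTS = 12.0
CIRCUITS = 6  # six 12V/GND pairs in a 12VHPWR cable

total_amps = WATTS / VOLTS
print(f"Total current:           {total_amps:.1f} A")             # 50.0 A
print(f"Per pair, shared evenly: {total_amps / CIRCUITS:.2f} A")  # ~8.33 A
print(f"Worst case, one pair:    {total_amps:.1f} A")             # all down one wire

# Rough chassis-wiring ampacities; treat as ballpark only.
ampacity = {18: 16, 16: 22, 14: 32, 12: 41}  # AWG -> amps
for awg, amps in sorted(ampacity.items(), reverse=True):
    print(f"{awg} AWG is good for roughly {amps} A by this chart")
```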
 
Hi, Nvidia doesn't steer the power rails, or the way I read it, there is no separation between them anymore anyway, right?
So just for fun, one could take two fat automotive starter cables and use one for +12V and the other for ground.
That would be safer than 12 separate wires which won't be load-controlled and can burn off one after another, right?
 
They do not but whoever made 12VHPWR never hired those in the first place.
No? If I look around the average office space there's a LOOOOT of money sitting in chairs doing fuck all every day. And what they actually DO produce is often of questionable quality - but, it 'fits the requirements'. We underestimate how many poor performers there are and how the world works when push comes to shove. If the boss says this is where we go, that's where 98% of people will go. The 2% can't defend against that.

For one, I think the current design parameters / goals were wrong to begin with. The focus was clearly on reducing cost of materials. And lo and behold, we have a connector that is mostly plastic and minimal copper.

In my mind, even if you add "bigger" cables to the existing connectors, you need to consider that in the worst-case scenario, only one 12V and one GND wire out of all those cables may have to pass all the current. So it should be around 12 AWG instead of 14: in the worst case you need to push 50 A at 12 V through one wire, and then you need to be sure that ONE pin in the connector can handle 600 W.
Data is from https://www.thevanconversion.com/wire-sizing-calculator
This gets me wondering, why not just run a single wire from end to end to begin with then.
 
This gets me wondering, why not just run a single wire from end to end to begin with then.
So one cable, but what about the connector that needs 12 pins plus the sense pins?
I thought an XT60 or something else that could handle 600 W could be used instead of that rubbish 6x2, but the PSU and GPU have soldered connectors, and if I'm not wrong @Macro Device wants to leave the connectors intact on PSU and GPU, only cable mods.
 
probably the largest part of the issue with these is in the contacts (once you get past the idea and usage), so going with bigger wires may actually cause more problems if they're really stiff. Bending is definitely a problem. What you want is for the contacts to sit as straight as possible, as fully engaged as possible, and never move...THEN for them to have the same impedance to wherever they're going too lol
 
12VHPWR's weakness isn't the wire gauge - it's the lack of monitoring 6 individual circuits on the GPU end.

8-pin connectors are monitored per connector, so on a GPU with multiple 8-pin connectors, even if two of the 12V wire circuits in any connector fail, it's still only going to be a maximum of 12.5 A down the remaining wire pair, which is above spec, but only just. Realistically it means it would run hotter than it should, and extended time at high temperature would probably make the plastic more brittle and prone to cracking, but at least it won't melt causing a short, or ignite.

16-pin 12VHPWR / 12V6X2 are often not monitored well enough on the GPU. It's possible for all the current to end up going down one wire pair, which at 600 W is 50 A. Normally the connector or the wire or both fail long before this actually happens, as the plastic probably starts to melt above 20 A.
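Putting those two cases side by side (a trivial sketch, assuming a nominal 12 V):

```python
# Failure-mode comparison from the figures above.
V = 12.0

# 8-pin PCIe: 150 W over three 12V circuits, per-connector monitoring.
pcie8_worst = 150 / V   # even with two of three circuits dead: 12.5 A
# 16-pin 12VHPWR: 600 W over six circuits, often no per-circuit monitoring.
hpwr_worst = 600 / V    # if balancing fails completely, one pair sees it all

print(f"8-pin worst case on one pair:  {pcie8_worst:.1f} A")   # 12.5 A
print(f"16-pin worst case on one pair: {hpwr_worst:.1f} A")    # 50.0 A
```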
 
In My mind Even if You add "bigger" cables to existing connectors You need to think that worst case scenario from all those cables only 1x12v and 1xGND need to pass all the current. So it should be around awg12 instead 14, You need to transfer through 1 wire 12V 50A in worst case scenario, and then You need to be sure that ONE pin in connector can handle 600W.
data is from https://www.thevanconversion.com/wire-sizing-calculator
No, I won't use a 5090 or anything remotely close in terms of wattage. I'm targeting a 5070 at roughly a 225 W power budget. That's why I'm adamant my cable will work (see the arithmetic at the end of this post), unless there's some voodoo magic I need to apply to it that you normally wouldn't do to a regular power cable.
and if I'm not wrong @Macro Device wants to leave the connectors intact on PSU and GPU, only cable mods.
You're not mistaken.
probably the largest part of the issue with these is in the contacts (once you get past the idea and usage), so going with bigger wires may actually cause more problems if they're really stiff. Bending is definitely a problem. What you want is for the contacts to sit as straight as possible, as fully engaged as possible, and never move...THEN for them to have the same impedance to wherever they're going too lol
I only need to know if I can. If I can then cool, I'll bend those. If not, fine, I'll get a PC case that doesn't require me to think about it.


Well. Once I buy a GPU I'm doing this frankencable and posting this mess on TPU. It'll get as ghetto as possible.
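For the record, here's the arithmetic behind that confidence (a quick sketch, assuming a nominal 12 V and six wire pairs):

```python
# Sanity check of a ~225 W target on a 12VHPWR cable.
V = 12.0
WATTS = 225

total = WATTS / V
print(f"Total draw:             {total:.2f} A")      # ~18.8 A
print(f"Per pair, even sharing: {total / 6:.2f} A")  # ~3.1 A
print(f"Worst case, one wire:   {total:.2f} A")      # still modest for 14 AWG
```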
 
probably the largest part of the issue with these is in the contacts (once you get past the idea and usage), so going with bigger wires may actually cause more problems if they're really stiff. Bending is definitely a problem. What you want is for the contacts to sit as straight as possible, as fully engaged as possible, and never move...THEN for them to have the same impedance to wherever they're going too lol
Exactly. The problem with this connector is the pins, not the wires. Increasing the wire size does nothing to the current rating of the pins and only increases the chances of having pin contact issues.
13 amps is your maximum. No need to oversize wires and try to exceed 13 amps.
Additionally, the pins are physical items with physical limitations. Using the proper crimp tool and given the physical size of the metal tabs that fold around and crimp to the wires, 16 AWG is the biggest wire supported.
When you try, you'll see...
From my experience, even the maximum supported wire size sometimes leaves results I don't fully trust. Probably 18 AWG will crimp better and will handle 13 amps just fine, but go with 16 AWG if it makes you feel better.
 
Thicker cables may put more strain on the connectors.
 
I only need to know if I can. If I can then cool, I'll bend those. If not, fine, I'll get a PC case that doesn't require me to think about it.
I think that if you pre-bend the wires and make sure the connector is seated correctly, it shouldn't be a problem.
But as @ty_ger said, the pins you crimp are rated for a specific cable AWG, so it might not fit. If you solder the wire directly to the pin it should work, but then you would need a good way to take the pins out of the connector without destroying them while handling.
 
So...let's talk failure mode.
1) Where are the failures occurring?
2) Why are they failing?
3) How are they failing?

The short is that the wires are not failing... because they are functionally thick enough. You are getting failures at your pin connection. Why it fails there is that the cross-sectional area of the pin is much less than that of the wire. How it fails is that the current flowing through that small area, combined with the resistance of the metal, creates more heat than can be dissipated into the environment. Said runaway thermal event melts the plastic, which then decays into reactive hydrocarbons, which burst into flame. Note that the thick wires are not the issue; what you need is a thicker pin and docking mechanism (or sensing to prevent uneven power delivery over each conductor).

As such...your solution is maximum effort with no results. You're more than welcome to deliver that...but I wouldn't spend my time doing this. It doesn't increase the connector cross sectional area, so it's not addressing the core issue.
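To put numbers on where the heat shows up, here's the I²R arithmetic at a single contact (the resistance values are assumed for illustration, not measurements from any connector):

```python
# I^2 * R heating at one pin contact. Resistances are illustrative guesses
# for a healthy vs. a degraded contact.
def contact_heat_watts(amps: float, contact_milliohms: float) -> float:
    """Power dissipated in the contact itself: P = I^2 * R."""
    return amps ** 2 * (contact_milliohms / 1000.0)

for r_mohm in (2, 10, 30):          # healthy -> worn -> badly seated
    for amps in (8.3, 25, 50):      # even share -> unbalanced -> worst case
        w = contact_heat_watts(amps, r_mohm)
        print(f"{amps:>4} A through {r_mohm:>2} mOhm contact -> {w:6.1f} W of heat")
```

At an even 8.3 A share a healthy contact dissipates a harmless fraction of a watt; at 50 A through a degraded 30 mOhm contact it's 75 W concentrated in a sliver of metal, which is exactly the runaway described above.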
 
You are getting failures at your pin connection.
I agree with that. What people often ignore is that no matter how thick the PCIe power connector cables are, eventually those cables connect to the graphics card through some tiny pins no wider than 1 mm, held in place by plastic. That's where the problem occurs. My theory is that even if there's only one cable, the current will still pass through those tiny little pins, causing them to heat up and melt everything near them. This is why increasing the diameter of the cable can only bring limited benefits: there will always be a point where the current is forced to pass through pins with a diameter of less than a millimetre.

Notice the point of failure in all those PCIe power connector melting incidents: they usually melt at the connector instead of the cable, because that's where the current needs to pass through those pins (except those that use crap instead of proper cables).
 
NO this won't work!

In terms of current capability, sure, this will work. But 14awg wires are HUGE!!! See this.

The problem with this connector is the pins, not the wires.
Totally agree the pins are the major concern, but the wires are a problem too because of their large size. No way will a 14awg wire fit into those pins. Solid copper is out of the question (too rigid anyway), so you will have to use stranded cable. But because of the size of 14awg, it will be necessary to cut many, if not most, of the strands from the wire to fit the pin, defeating the purpose of using 14awg in the first place.
Increasing the wire size does nothing to the current rating of the pins
Very true.

Thicker cables may put more strain on the connectors.
This is very true too.

Again, 14awg is HUGE!!!! They used to wire houses with 14awg when 15A circuits were common - it is still used for dedicated lighting circuits. Note a standard lamp cord is 18awg. In fact, 18/2 wire is even commonly called "Lamp Wire".

If it were me, I might try 16awg wire to see if it fits without cutting too many strands. But it may also be too big, and you may be limited to 18awg - which is still plenty big for this project "IF" your soldering skills are good and each of your connections is done properly.
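For scale, the standard AWG formula shows just how much bigger those conductors get (bare copper only; insulation adds more on top):

```python
import math

# Standard AWG formula for bare conductor diameter.
def awg_diameter_mm(awg: int) -> float:
    return 0.127 * 92 ** ((36 - awg) / 39)

for awg in (18, 16, 14, 12):
    d = awg_diameter_mm(awg)
    area = math.pi * (d / 2) ** 2
    print(f"{awg} AWG: {d:.2f} mm diameter, {area:.2f} mm^2 cross-section")
```

That's roughly 0.82 mm² of copper at 18awg versus 2.08 mm² at 14awg - two and a half times the metal you're trying to stuff into the same crimp.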
 
Stupid question (maybe), but say you don't have a native ATX 3.0 PSU but one of the old ones. Isn't it safer, since you have a 12VHPWR on the GPU side that goes into 2 sockets on the PSU side? Each socket on the PSU provides 300 watts, so I'd assume if one pin tried to pull more than 300 W, the PSU would trip.

Am I correct, am I missing something?
 
The wire has 0 issue with it.
The issue is the connector/pin itself.
 
I agree with that. What people often ignore is that no matter how thick the PCIe power connector cables are, eventually those cables connect to the graphics card through some tiny pins no wider than 1 mm, held in place by plastic. That's where the problem occurs. My theory is that even if there's only one cable, the current will still pass through those tiny little pins, causing them to heat up and melt everything near them. This is why increasing the diameter of the cable can only bring limited benefits: there will always be a point where the current is forced to pass through pins with a diameter of less than a millimetre.

Notice the point of failure in all those PCIe power connector melting incidents: they usually melt at the connector instead of the cable, because that's where the current needs to pass through those pins (except those that use crap instead of proper cables).
They have solutions for this you know. Bigger pins :)

Why not just a slim version of a standard socket? Boom, straight-up capacity with tolerances for 3.5 kW, one cable :D You can realistically fit a socket on the shroud somewhere, given that most of these GPUs are cinder blocks anyway...

On a more serious but not unrelated note...
The core of the issue is unfixable within the design of 12VHPWR. You're either going to be adding more of them to divide the load and stay royally under the 600 W number no matter what, or you're going to make them bigger so the tiny pin is no longer the weakest link. Either way, an improved solution will not look remotely like a 12VHPWR connection. And the intended goal of this tiny connector, reduced material cost and cable count versus 8-pin PCIe, is not achieved with any of these improvements either.

Bottom line, this is a piece of shit with a finite shelf life, because GPUs are already tickling the top end of its capability, and doing so with added risk. It will be replaced very soon; it has no future whatsoever.

Heck even the supposed 'flexibility' of a thin gauge wire isn't achieved because you're not allowed to bend them proper :P The whole thing stinks from front to back.

The wire has 0 issue with it.
The issue is the connector/pin itself.
The issue is the current and the tolerances, and those hit the whole combination: wire and connector.
 
I thought the cable had the problem of inconsistent contact with the pins and not much to do with the gauge of wires.
 
I thought the cable had the problem of inconsistent contact with the pins and not much to do with the gauge of wires.
It did with the original 12VHPWR, which is why the connectors were modified with longer pins for the second revision (12V6X2) but I honestly believe that the connectors were used as a scapegoat.

Melting cables/connectors were just symptoms of uneven current balance causing a failure cascade, something that the 16-pin connector enabled and that 8-pin connectors prevented simply by needing per-connector monitoring at a bare minimum.

IMO, the four sense wires in a 16-pin connector should have been balance/monitoring wires for pairs of 12V circuits, not wasted on a 450W/600W sense circuit. Just rate all 16-pins at 600W and re-use the 4 sense pins for proper current monitoring, mandated in an updated PCI-SIG standard with the onus on the device drawing current to do so safely and responsibly.
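In code terms, the sort of per-pair supervision I'm describing might look like this (a hypothetical sketch; the limit and ratio values are made up for illustration, not from any spec):

```python
# Hypothetical per-pair monitoring: names, limits and ratios are invented
# for illustration; nothing here comes from the PCI-SIG spec.
PAIR_LIMIT_A = 9.5      # assumed safe ceiling per 12V circuit
IMBALANCE_RATIO = 2.0   # flag if one pair carries twice the average

def check_pairs(pair_amps: list[float]) -> str:
    avg = sum(pair_amps) / len(pair_amps)
    if any(a > PAIR_LIMIT_A for a in pair_amps):
        return "SHUTDOWN: a circuit is over its per-pin limit"
    if avg > 0 and max(pair_amps) > IMBALANCE_RATIO * avg:
        return "THROTTLE: severe current imbalance"
    return "OK"

print(check_pairs([8.3] * 6))                        # balanced 600 W load -> OK
print(check_pairs([20.0, 6.0, 6.0, 6.0, 6.0, 6.0]))  # one bad contact -> flagged
```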
 
It did with the original 12VHPWR, which is why the connectors were modified with longer pins for the second revision (12V6X2) but I honestly believe that the connectors were used as a scapegoat.

Melting cables/connectors were just symptoms of uneven current balance causing a failure cascade, something that the 16-pin connector enabled and that 8-pin connectors prevented simply by needing per-connector monitoring at a bare minimum.

IMO, the four sense wires in a 16-pin connector should have been balance/monitoring wires for pairs of 12V circuits, not wasted on a 450W/600W sense circuit. Just rate all 16-pins at 600W and re-use the 4 sense pins for proper current monitoring, mandated in an updated PCI-SIG standard with the onus on the device drawing current to do so safely and responsibly.
Think them Asus cards had the per-pin monitoring, so you could actually see if any of the pins are not making good contact.
 
Think them Asus cards had the per-pin monitoring, so you could actually see if any of the pins are not making good contact.
Yep. And now third-party 12V6X2 adapters have been announced or released that do the same thing.

Nvidia forcing the use of the 16-pin connector onto AIBs, whilst also squeezing them so hard for margins and not enforcing adequate current monitoring, is the reason the connectors are melting.
 