
It's happening again, melting 12v high pwr connectors

Again.... another one of these.... people really love to beat the horse long after it's dead and pummeled beyond recognition.
It's a little different in this case.

Every generation since the new 12-pin was introduced we've had the industry-wide back-and-forth of:
>problems reported
>problems denied
>problems 'fixed'

As I recall, we're on *at least* the 3rd revision of Intel/Dell/nVidia's genius new power cable.
The concept is fundamentally flawed, and each revision is a 'band-aid'.
 
That many amps going through tiny gauge wires and tiny connectors. What could go wrong?
 
That many amps going through tiny gauge wires and tiny connectors. What could go wrong?
Every, single, time these reports come up,
I can't help but imagine the practical current-carrying capacity of the good ol' 80-conductor, 40-pin IDE/PATA ribbon cable. :laugh:
 
And here I am rocking with Cablemod 8pin cables without issues

Seriously, why couldn't Nvidia just use the traditional PCIe cables?
Because you can't fit 4x8pin connectors on this for a total of 600W


Why they chose this design instead of the more traditional one, I cannot tell.
I don't have a problem with pushing tech to new things, as long as it's done right and with proper safety mechanisms, like monitoring the current of such a small, high-power connector.
It's just a few extra $$, but that would've pushed them to stick with a larger PCB.

I am just stating the facts

Not all that many amps.

600W ÷ 12V = 50A. If spread evenly across 6 pins, 8.3A each.
Exactly double per pin compared to the traditional 8-pin, and that is a lot more
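For anyone who wants to check the arithmetic, here's a quick sketch. The pin counts are the nominal figures (6x 12V pins for 12VHPWR at 600W, 3x 12V pins for the classic 8-pin at 150W); everything else is from the posts above:

```python
# Back-of-the-envelope per-pin current comparison (nominal pin counts assumed).
def amps_per_pin(watts, volts, power_pins):
    """Total current, split evenly across the 12V power pins."""
    return watts / volts / power_pins

hpwr = amps_per_pin(600, 12, 6)    # 12VHPWR: 6 x 12V pins
pcie8 = amps_per_pin(150, 12, 3)   # classic 8-pin: 3 x 12V pins

print(f"12VHPWR: {hpwr:.2f} A/pin")   # 8.33 A/pin
print(f"8-pin:   {pcie8:.2f} A/pin")  # 4.17 A/pin
print(f"ratio:   {hpwr / pcie8:.1f}x")  # 2.0x
```

This assumes a perfectly even current split, which is exactly the assumption the later posts in this thread poke holes in.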
 
In a recent YouTube video by JayzTwoCents on 5090 BIOS issues, he found his Zotac 5090 drawing 630 watts steady-state (overclocked) and spiking even higher, to 700 watts. So a cable rated at 600 watts is an invitation for melting. He had a flaky BIOS after a driver update, so it may be a driver issue. He also had professional equipment hooked up for accurate power measurement. So maybe the extension cable is not the problem.
 
Why they chose this design instead of the more traditional one, I cannot tell.
The smaller the PCB, the lower the signal losses; it's more stable than a bigger PCB.
Improperly seated connectors resulting in poor mating contact surfaces are the most likely cause of the overheating.
The fact that it's so easy to seat them improperly makes for a very bad design.

Just enforcing AWG14 would've been enough to make the classic 8-pin great again, but alas.
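To get a rough feel for what a thicker gauge buys you, here's a sketch of the I²R loss per meter of a single copper conductor. The Ω/m values are standard handbook figures for copper at 20 °C, and 8.33 A is the per-pin current from the 600W ÷ 12V ÷ 6 math discussed in this thread:

```python
# Rough resistive loss per meter of one conductor; resistance figures are
# standard copper values at 20 °C (Ω per meter).
R_PER_M = {"AWG14": 0.00829, "AWG16": 0.01317, "AWG18": 0.02095}

def watts_per_meter(amps, gauge):
    """I^2 * R dissipation in one meter of wire of the given gauge."""
    return amps ** 2 * R_PER_M[gauge]

for gauge in ("AWG14", "AWG16", "AWG18"):
    print(f"{gauge}: {watts_per_meter(8.33, gauge):.2f} W/m at 8.33 A")
```

The point is simply that loss (and heat) scales linearly with resistance and with the square of current, so thicker wire has real headroom, while the connector contact resistance remains the weak link either way.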
 
A lot of these claims are proving to be BS, and perhaps all of them are. From what I see, a lot, if not all, of these guys are using aftermarket cable adapters from MODDIY and the like.
 
A lot of these claims are proving to be BS, and perhaps all of them are. From what I see, a lot, if not all, of these guys are using aftermarket cable adapters from MODDIY and the like.
Like I said...
PC hardware used to be foolproof, at least in everything related to power.
Not anymore, I guess.

Statistically, mistakes (one way or the other) happen everywhere, and you cannot rely on the human factor. That's a major principle in a lot of fields.
In this case, when a connector is pushed close to the edge of what a physical connection of this size can sustain, like 600W, and there is little or no room for mistakes, you have to force the user to do the right thing. You can't just hope for the best.
Adding monitoring to each of the pins is a sensible thing to do, just like @buildzoid suggests in this video, especially on a $2000+ product.
I don't think he is being unreasonable here.

It's simple... The second the GPU sees a deviation above a certain level between the currents on the pins, it prevents the user from running anything that could damage the card.
This is the simplest way to force the user into a proper connection.
If a $2000+ product can't protect itself from ignorance, idiocy, laziness, or whatever you want to call it, then it's a flawed design from the beginning.
With just a few extra $$ on the cost of the card it can be done. The video in the OP shows this, and it's very simple.
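As a toy sketch of that idea — the function name, the sample readings, and the 20% threshold are all made up for illustration, not any vendor's actual firmware logic:

```python
# Hypothetical per-pin balance check: flag the connector if any pin's
# current deviates from the mean by more than a set fraction.
def check_pin_balance(pin_currents, max_deviation=0.20):
    """Return True if every pin is within max_deviation of the mean current."""
    mean = sum(pin_currents) / len(pin_currents)
    return all(abs(i - mean) <= max_deviation * mean for i in pin_currents)

balanced = [8.3, 8.4, 8.2, 8.3, 8.1, 8.4]     # healthy connector
skewed   = [2.1, 2.3, 2.0, 14.8, 14.5, 14.3]  # poor contact on half the pins

print(check_pin_balance(balanced))  # True
print(check_pin_balance(skewed))    # False -> throttle / refuse to run
```

Sensing this requires one shunt per pin (or per pin pair) instead of the single shared shunt most boards use, which is the "few extra $$" being argued about.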
 
Like I said...

If a $2000+ product can't protect itself from ignorance, idiocy, laziness, or whatever you want to call it, then it's a flawed design from the beginning.
With just a few extra $$ on the cost of the card it can be done. The video in the OP shows this, and it's very simple.
The one above you is deflecting, just like NV is.

Pay attention to what Alex is saying at timestamp 0:05, everyone.

Just face it: the cards have a defective, faulty design because of that connector, so quit trying to sweep it under the rug. NV is at fault here for the bad design.

Call it planned obsolescence.
 
I know I’m a broken record here, but the ATX format has simply worn out its practical design life. Heavy GPUs, massive CPU coolers, cooling issues, delivering sufficient power safely through a random array of cables and adapters. Considering the amount of money consumers dump into this high-end part of the industry, I’m really surprised we have such ugly solutions to very well known situations. This method of delivering 600W seems to have prioritized form over function.
 
I know I’m a broken record here, but the ATX format has simply worn out its practical design life. Heavy GPUs, massive CPU coolers, cooling issues, delivering sufficient power safely through a random array of cables and adapters. Considering the amount of money consumers dump into this high-end part of the industry, I’m really surprised we have such ugly solutions to very well known situations. This method of delivering 600W seems to have prioritized form over function.
No, the GPUs the green team is pushing are getting out of control with their current draw.
 
Because you can't fit 4x8pin connectors on this for a total of 600W


Why they chose this design instead of the more traditional one, I cannot tell.
I don't have a problem with pushing tech to new things, as long as it's done right and with proper safety mechanisms, like monitoring the current of such a small, high-power connector.
It's just a few extra $$, but that would've pushed them to stick with a larger PCB.

I am just stating the facts


Exactly double per pin compared to the traditional 8-pin, and that is a lot more

They could have put the power connectors on a separate PCB, like the I/O and PCIe slot fingers already are. They could also have added shunt resistors to the pins like ASUS did with the 5090 Astral (although that doesn't fix the issue, it will just prevent the card from melting. If your connector pins are borked, they are still going to be borked). The 5090 FE, despite its price increase over the 4090, feels like a cost-savings experiment. They probably asked their engineers to reduce the cost of the design compared to the 4000 series. The result is a smaller cooler that, while doing a good job, relies more on its fans, is louder, doesn't adequately cool the memory, and does nothing to improve a connector that was already a problem at 450W.

You have to wonder how many more burnt connectors there would be if this weren't a known issue. Plenty of people are power-limiting their 4090, and I'm sure the same will apply to the 5090. The incident rate of the 8-pin connector is drastically lower even though 12VHPWR and 12V-2x6 users are taking active measures to prevent issues. That speaks volumes.
 
Guess we're gonna need some kind of cable coolers soon, haha.
 
I guess it's debatable whether there is a problem with the design of the cables and connectors. Maybe they would be adequate if these pigs were only pulling a reasonable amount of power for a high-end card, you know, like 250W.

The point when they lost their fucking minds was designing cards over 300W.
 
I used a 1200W ATX 3.0 PSU at first with my 4090, never had issues, always ran the default 450 watts. Last month I got a Seasonic Prime TX 1600-watt ATX 3.1, in case I end up with a 5090. Besides, why would you cheap out and not get the latest ATX PSU standard (3.1 / PCIe 5.1, 12V-2x6)? Spending over 2000 dollars and then skimping makes no sense at all, knowing the 5090 uses 600 watts.

Peace of mind, if and when I get a 5090.

Seasonic Support Desk <supportdesk@seasonic.com>
To: You


Dear Harmon,

Thank you for your reply.

The PSU is ATX 3.1 and PCIe 5.1. Intel simplified the naming of ATX 3.1 to ATX 3 and PCIe 5 to avoid confusion. Therefore, our PRIME TX-1600, which follows ATX 3.1 and PCIe 5.1 standards, will include the 12V-2x6 cable and connector on the PSU side. You can check the connector on the PSU; it should have shorter sense wires and longer power pins. The packaging will also feature a 12V-2x6 logo to differentiate it from previous ATX 3.0 and PCIe 5.0 (12VHPWR) versions.

If you have any other questions, please let us know.
Thank you.
 
So, another case of a dumbass that didn't RTFM.

You can't fix stupid.
 
Re-using a 3rd-party cable that has probably gone through many plug and unplug cycles, what could go wrong, LOL.

Hint: the receptacles in the female connector are probably not in good shape anymore, causing a power imbalance between the pins.
 
What is surprising here is why they needed to make up a new high-power connector when there are plenty of connectors for high currents, even mixed with data. Take a look at these:


It's called a money grab

Maybe, but it also looks like incompetence. Looking at high-power pins, they all share common features, like a large contact surface area. In contrast, 12VHPWR and 12V-2x6 look like they simply reused pins from earlier, lower-power connectors.

The idea is that you have several pins that all share the current. The problem is: how can you make sure they really share it?

Just connecting pins together is not a good idea, because in a parallel circuit the current through each branch is inversely proportional to its resistance, and a large portion of that resistance comes from the contact. Now suppose 3 pins have a slightly higher contact resistance of 0.03 Ω due to oxidation, versus the 0.01 Ω you get on the other pins. Suddenly those other pins carry 50% more current than they should. Meltdown, both at the GPU and at the power supply.
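A quick sanity check of that current-division claim, using the 0.01 Ω / 0.03 Ω contact resistances and the 50 A total (600 W / 12 V) from this thread:

```python
# Current division across parallel pins: each branch carries a share of the
# total proportional to its conductance (1/R).
def pin_currents(total_amps, resistances):
    conductances = [1 / r for r in resistances]
    g_total = sum(conductances)
    return [total_amps * g / g_total for g in conductances]

pins = [0.01, 0.01, 0.01, 0.03, 0.03, 0.03]  # 3 good pins, 3 oxidized pins
currents = pin_currents(50.0, pins)          # 600 W / 12 V = 50 A total

for r, i in zip(pins, currents):
    print(f"{r:.2f} Ω pin: {i:.2f} A")
# good pins:     12.50 A each (vs the 8.33 A nominal, i.e. 50% over)
# oxidized pins:  4.17 A each
```

So a 3x difference in contact resistance on half the pins is enough to push the good pins 50% over their nominal share, exactly as described above.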
 
The adapter being sturdy, the cable was clearly so twisted inside the connector that it had the same effect as a welding rod.
You have to line them up perfectly, check the pins first, and not zip-tie them in a twisted position.
 
I'd said it before on other forums: that plug cannot safely be used for more than 300W. Add 75W from the PCIe slot and there you go, 375W max.
 