
It's happening again, melting 12v high pwr connectors

Did anyone make a bingo card for these kinda threads yet?
I can contribute some items:

It's not a bad connector, it's:
  • the user's fault - victim blaming
  • happened to other types of cables - whataboutism
  • not the 'right' cable/PSU - no true Scotsman
If you think otherwise you're:
  • too poor to afford one - appeal to wealth

the vendors say it's safe/fixed so it must be - appeal to authority

12v 2x6 is fixed because we haven't seen melting yet - bad sample size

and any and all of the "you're holding it wrong" antennagate excuses - defending a megacorporation, for free.


And if you think these are strawmen, you have plenty of threads in which to find all the bingo spots :D


I kinda-sorta wanted a 5090, and the 4090 before it, but I've continued to dodge a bullet, and I guess this 3090 gets another few years of life until AMD catches up and can bless my computer with something that isn't a safety hazard. Or maybe Nvidia already made the choice for me by making the 5090 a paper launch, where I couldn't buy one even if I wanted to.
 
Incredibly, Nvidia gets away with this. If it were Intel or AMD, all hell would break loose; the influencers would have brought out the pitchforks years ago. You have to wonder how Nvidia compensates all these people so well.
Who said they got away with it just yet?

This saga simply continues. It took us quite a while to realize asbestos wasn't the best idea for everything we use, either. That is why we need to voice concerns, and this needs a better solution.
 
If you have the money for the 5090 then you should get an alerting system based on IR/thermal vision.
 
Anyway, Der8auer's video has now proven that the FE has a design flaw: it lacks the per-pin sensing that the ASUS Astral 5090 does have, which lets it balance the load over all six 12V wires.
Really? Makes you wonder how those 3x8 pin (24 wires) managed all these years...

I'd rather think this proves the cable ain't up to spec

If you have the money for the 5090 then you should get an alerting system based on IR/thermal vision.
Come the 9090 they'll give you a free bunker shelter with your $45,000 enthusiast gpu

Yeah, not a single 4090 build I've put together has melted... That being said, after watching the Buildzoid video, the only 5090 I'd be comfortable owning is the Astral, and I can't believe a company as technically capable as Nvidia doesn't have a failsafe for when multiple pins have poor contact.
Yeah, feels real premium, doesn't it, buying the top end of the stack these days...
 
Looks like there might already be a 2nd occurrence of this issue with a 5090, this time with the stock cable that came with the PSU:
It's a design flaw where the first pin gets more than its fair share of the amps: half of them. So we can relax now. It's not a pandemic.
 
It's a design flaw where the first pin gets more than its fair share of the amps: half of them. So we can relax now. It's not a pandemic.
Back to the design table: implement proper circuitry so the GPU knows what each of the six 12V connections is pulling, just like it was on the 3090 Ti.
 
Stumbled on this X post.
[image attachment]
 
Also, remember the GTX 680 with its unique stacked 6+6-pin?

Two of those could have worked for the 4090 and 5090 to save space.

[image: the GTX 680's stacked 6+6-pin connector]
 
Only one shunt resistor on the 4090 and 5090.

That's a big mistake. Cables like this WILL have unbalanced loads and the card should be compensating for that. If you fail to balance the current across cables, you get melted cables and fire hazards.

This doesn't look like a connector problem to me. No one can possibly expect a little plastic thing to evenly apply pressure to all 12 pins in all scenarios. The EEs should have built in some degree of sensing and compensation.
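To make that concrete, here's a minimal sketch (Python, with made-up shunt and rating values, nothing from a real card) of the kind of per-wire sensing logic I mean: convert each shunt's voltage drop to a current, then flag any wire that exceeds its rating or hogs the load.

```python
# Hypothetical per-wire current monitoring via shunt resistors.
# Shunt value, pin rating, and readings are illustrative only.

SHUNT_OHMS = 0.005          # assumed 5 mOhm shunt on each 12V wire
MAX_WIRE_AMPS = 9.5         # assumed per-pin rating for a 12V-2x6 terminal

def wire_currents(shunt_drops_mv):
    """Convert each shunt's measured voltage drop (mV) to amps (I = V/R)."""
    return [mv / 1000 / SHUNT_OHMS for mv in shunt_drops_mv]

def check_balance(currents):
    """Flag any wire over its rating or carrying an outsized share."""
    total = sum(currents)
    for i, amps in enumerate(currents):
        if amps > MAX_WIRE_AMPS:
            print(f"wire {i}: {amps:.1f} A exceeds the {MAX_WIRE_AMPS} A rating")
        elif total > 0 and amps / total > 0.3:   # >30% of the total on one of six
            print(f"wire {i}: carrying {amps / total:.0%} of the load")

# Drops corresponding to Der8auer-style readings: 23, 11, 8, 5, 3, 2 A.
check_balance(wire_currents([115, 55, 40, 25, 15, 10]))
```

A real card would feed this into the VRM or power limiter instead of printing, but the sensing is the part the FE simply doesn't have.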
This does make me curious, because I really don't know: how did GPUs do this on PCIe cabling, then? It's not like we didn't pull 500+ W into a GPU before today.

I mean sure, I reckon you already have the load divided by three on a 3x 8-pin? But then there could still be an unbalanced load? Also, let's appreciate the fact the GPU's not dead. It's just melted at the power socket. Is the problem the uneven draw? Because the GPU seems to handle that fine; it just wants the juice. The cause of the uneven draw isn't the GPU or the PSU... they're just sending and receiving the power. In my simple mind that just leaves the cable being insufficient, its size inviting too much variance.

Isn't it also far more economical to just upscale the cable a good bit so it can handle the unbalanced load (and likely see less of it, as there is more surface area), instead of adding even more hardware to a GPU where the desire is to reduce PCB sizes?
 
Having more control over load balancing could also improve efficiency, and with a good quality PSU you can keep it for a decade; paying a few bucks more for better power delivery and safety is worth it, in my opinion...
In practicality, how would you load-balance the PSU end? Think about it.
You would have to increase or decrease the voltage on various pins to encourage the current to flow, or not flow, where you need more or less of it. How would you do that? With a resistive load? Terrible. Heat and inefficiency. With a buck/boost converter? More inefficiency, another big stage of parts, and more ripple to filter.

It's easy to just say stuff like that, but when you think it through in practice, it falls apart. A warning about imbalance is about the best you could do at that stage. It would not be a root solution. The root solution would still need to be the card somehow ensuring that it draws an even load. That is the smarter solution, because the card already has most of the needed circuitry, and it is the load, which is the dynamic element.
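For a sense of scale, here's a rough back-of-the-envelope sketch (assuming ~10 mΩ of resistance per wire path, an illustrative figure, not a measurement) of how little voltage headroom a PSU would have to play with:

```python
# How much per-pin voltage offset would a PSU need to steer current?
# Assumes ~10 mOhm of wire + contact resistance per path (illustrative).

R_PATH = 0.010              # ohms, assumed per-wire path resistance
current_shift = 5.0         # amps we'd want to move off an overloaded wire

# Ohm's law: moving 5 A through a 10 mOhm path takes only a 50 mV offset.
# That's tiny next to 12 V, so the regulation would have to be extremely
# precise, and every milliohm of contact variance fights against it.
delta_v = current_shift * R_PATH
print(f"needed offset: {delta_v * 1000:.0f} mV")   # -> 50 mV
```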

This does make me curious, because I really don't know: how did GPUs do this on PCIe cabling, then? It's not like we didn't pull 500+ W into a GPU before today.

I mean sure, I reckon you already have the load divided by three on a 3x 8-pin? But then there could still be an unbalanced load?
Well, simply, it was a way more robust connector, capable of safely delivering way more current than its rating. That was usually it. Some cards had fancy features; most did not.

Check it out:

TLDW: a single 6-pin connector could deliver enough power for a card with 2x 8-pin connectors to run without issue, with no wires or connections overheating. 8-pin connectors were usually actually rated some 150% above ATX's advertised figure, given the wires were spec'd as needed; and that doesn't even include any additional safety margin engineered into that rating.
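As a rough illustration of that headroom (assuming ~8 A per pin for a Mini-Fit Jr style terminal, a commonly cited ballpark rather than an official figure):

```python
# Rough headroom math for a PCIe 8-pin connector (three 12V pins).
# Assumes ~8 A per Mini-Fit Jr style pin, a commonly cited ballpark.

PINS_12V = 3
AMPS_PER_PIN = 8.0          # assumed terminal rating
VOLTS = 12.0

capacity_w = PINS_12V * AMPS_PER_PIN * VOLTS    # ~288 W of raw capacity
spec_w = 150.0                                   # what ATX actually allows
print(f"~{capacity_w:.0f} W capacity vs {spec_w:.0f} W spec "
      f"-> ~{capacity_w / spec_w:.1f}x headroom")
```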
 
Also, remember the GTX 680 with its unique stacked 6+6-pin?

Two of those could have worked for the 4090 and 5090 to save space.
Wow, memory unlocked.
And yeah, with how tall cards are now, I imagine there is no real disadvantage to this layout aside from uncommon manufacturing.
 
This does make me curious, because I really don't know: how did GPUs do this on PCIe cabling, then? It's not like we didn't pull 500+ W into a GPU before today.

I mean sure, I reckon you already have the load divided by three on a 3x 8-pin? But then there could still be an unbalanced load? Also, let's appreciate the fact the GPU's not dead. It's just melted at the power socket. Is the problem the uneven draw? Because the GPU seems to handle that fine; it just wants the juice. The cause of the uneven draw isn't the GPU or the PSU... they're just sending and receiving the power. In my simple mind that just leaves the cable being insufficient, its size inviting too much variance.

Isn't it also far more economical to just upscale the cable a good bit so it can handle the unbalanced load (and likely see less of it, as there is more surface area), instead of adding even more hardware to a GPU where the desire is to reduce PCB sizes?

You were only pulling 75 watts with a 6-pin PCIe and 150 watts with an 8-pin PCIe; if you were pulling 500 watts, it was with 3x 8-pin PCIe, like on the Asus 3090 Strix. 150 watts divided by 12 volts means that even if one wire carried the entire load, the most any single wire on a PCIe 8-pin would see was about 12.5 amps... and that was with a much thicker gauge. So basically a non-issue. Der8auer was getting 23 amps on a single wire of his 12-pin with the 5090, for comparison.

And yes, upscaling the 12-pin connector and cable to double the gauge would most likely have solved it.
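To put the two connectors side by side, a quick worked comparison using the nominal figures above:

```python
# Nominal per-wire current: PCIe 8-pin vs 12VHPWR / 12V-2x6.

VOLTS = 12.0

# PCIe 8-pin: 150 W spec across three 12V wires.
pcie_total = 150.0 / VOLTS                  # 12.5 A for the whole plug
pcie_per_wire = pcie_total / 3              # ~4.2 A nominal per wire

# 12VHPWR: 600 W spec across six 12V wires.
hpwr_total = 600.0 / VOLTS                  # 50 A for the whole plug
hpwr_per_wire = hpwr_total / 6              # ~8.3 A nominal per wire

print(f"8-pin:   {pcie_total:.1f} A total, {pcie_per_wire:.1f} A/wire nominal")
print(f"12VHPWR: {hpwr_total:.1f} A total, {hpwr_per_wire:.1f} A/wire nominal")
# Even a worst-case 8-pin wire tops out at 12.5 A; Der8auer measured 23 A
# on one 12VHPWR wire, nearly double that, through a thinner conductor.
```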
 
In practicality, how would you load-balance the PSU end? Think about it.
You would have to increase or decrease the voltage on various pins to encourage the current to flow, or not flow, where you need more or less of it. How would you do that? With a resistive load? Terrible. Heat and inefficiency. With a buck/boost converter? More inefficiency, another big stage of parts, and more ripple to filter.

It's easy to just say stuff like that, but when you think it through in practice, it falls apart. A warning about imbalance is about the best you could do at that stage. It would not be a root solution. The root solution would still need to be the card somehow ensuring that it draws an even load. That is the smarter solution, because the card already has most of the needed circuitry, and it is the load, which is the dynamic element.


Well, simply, it was a way more robust connector, capable of safely delivering way more current than its rating. That was usually it. Some cards had fancy features; most did not.

Check it out:

TLDW: a single 6-pin connector could deliver enough power for a card with 2x 8-pin connectors to run without issue, with no wires or connections overheating. 8-pin connectors were usually actually rated some 150% above ATX's advertised figure, given the wires were spec'd as needed; and that doesn't even include any additional safety margin engineered into that rating.

Two thicker wires instead of 12 thinner ones.
 
In practicality, how would you load-balance the PSU end? Think about it.
You would have to increase or decrease the voltage on various pins to encourage the current to flow, or not flow, where you need more or less of it. How would you do that? With a resistive load? Terrible. Heat and inefficiency. With a buck/boost converter? More inefficiency, another big stage of parts, and more ripple to filter.

It's easy to just say stuff like that, but when you think it through in practice, it falls apart. A warning about imbalance is about the best you could do at that stage. It would not be a root solution. The root solution would still need to be the card somehow ensuring that it draws an even load. That is the smarter solution, because the card already has most of the needed circuitry, and it is the load, which is the dynamic element.
Yeah, I admit I didn't think that far... Would a system that shuts off power delivery on over-current on a pin (or several) at the PSU end be doable? Or is it just better to force the GPU maker (Nvidia) and the AIBs to do a better job of designing their end?
 
Ideally speaking, we need more safety margin on current (for 600 W we need eight 16AWG wires) plus current load management and monitoring circuitry on both ends (PSU & GPU). That means we need a new connector, and a new PSU standard as well...
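A quick sanity check on those numbers (assuming ~10 A as a conservative continuous rating for 16AWG; real ratings vary with insulation and bundling):

```python
# Checking the "eight 16AWG wires for 600 W" suggestion.

VOLTS = 12.0
AMPS_16AWG = 10.0           # assumed conservative per-wire rating
WIRES = 8

total_amps = 600.0 / VOLTS              # 50 A total at 600 W
per_wire = total_amps / WIRES           # 6.25 A per wire nominal
margin = AMPS_16AWG / per_wire          # ~1.6x headroom per wire
print(f"{per_wire:.2f} A/wire nominal, ~{margin:.1f}x headroom")
```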
This!

There are so many users on Reddit repeating the third-party cables argument. Gosh, I guess only Nvidia can construct proper 12V cables nowadays. Any third-party cable, be it from Asus, Corsair, or any other vendor, is simply not up to the task.

The real problems imho:
1. Blackwell was most probably not adequately budgeted and was rushed in R&D, because resources were focused on other products. Don't get fooled by the FE cooler: there is no real innovation in this gen. 9 out of every 10 USD of revenue at Nvidia nowadays comes from the data center business. The gaming department is not that important anymore (the CEO likes the stage time in Las Vegas and will still use the spotlight to present a new leather jacket, but the gaming branch is not important anymore).
2. Then it was still built on the same process as the last gen (which was cheaper to order at TSMC; see no. 1 and their priorities), so they had to use the old Intel CPU innovation trick to make a difference: push more power into the GPU, getting near the limit of the specifications and narrowing the headroom.
3. And they do not need to change a thing, because their loyal customer base is not aware that things have changed (well, I am one of those customers, but I'm starting to come around: I will stay with my 4090 for two more years, skipping the upgrade for the first time in years).
I fear that's the truth.
 
Wow, memory unlocked.
And yeah, with how tall cards are now, I imagine there is no real disadvantage to this layout aside from uncommon manufacturing.
Actually, nothing makes sense with GPU connectors right now.

Like, why does a 300+ watt GPU need 3 or 4 8-pins, when 12V-2x6 is rated for 600 watts and has smaller pins?

A stacked 8+8-pin connector would have worked just fine... And as you said, GPUs are no longer 1-slot anyway, and that stacked connector would be small enough.

The choices that have been made these past years are mind-boggling...
 
This!

There are so many users on Reddit repeating the third-party cables argument. Gosh, I guess only Nvidia can construct proper 12V cables nowadays. Any third-party cable, be it from Asus, Corsair, or any other vendor, is simply not up to the task.

The real problems imho:
1. Blackwell was most probably not adequately budgeted and was rushed in R&D, because resources were focused on other products. 9 out of every 10 USD of revenue at Nvidia nowadays comes from the data center business. The gaming department is not that important anymore (the CEO likes the stage time in Las Vegas and will still use the spotlight to present a new leather jacket, but the gaming branch is not important anymore).
2. Then it was still built on the same process as the last gen (which was cheaper to order at TSMC; see no. 1 and their priorities), so they had to use the old Intel CPU innovation trick to make a difference: push more power into the GPU, getting near the limit of the specifications and narrowing the headroom.
3. And they do not need to change a thing, because their loyal customer base is not aware that things have changed (well, I am one of those customers, but I'm starting to come around: I will stay with my 4090 for two more years, skipping the upgrade for the first time in years).
I fear that's the truth.
Blackwell, honestly, is really meh on several levels. It really seems Nvidia put minimal effort behind this. All those years people were saying Nvidia would never become complacent like Intel was, but right now, with AI and the data center being their bread and butter, I am not so sure anymore.
 
Actually, nothing makes sense with GPU connectors right now.

Like, why does a 300+ watt GPU need 3 or 4 8-pins, when 12V-2x6 is rated for 600 watts and has smaller pins?

A stacked 8+8-pin connector would have worked just fine... And as you said, GPUs are no longer 1-slot anyway, and that stacked connector would be small enough.

The choices that have been made these past years are mind-boggling...
I don't know why, but the answer is that Nvidia is acting like Apple, in that they are doing novel packaging solutions for no real reason and at the expense of all else. It's some misplaced desire to make things "look good".
Their actions have been to the detriment of everyone lately.
Put aside the connector for a moment (which appears to be mandated by Nvidia upon the board partners):
the board partners are having a rough time in other ways. During the 4000 series, they complained that Nvidia was a bad company to cooperate with, because Nvidia's FE was competing against them, for less money, and with a superior packaging solution that Nvidia did not share.
Now on the 5000 series, it's the same story but worse, in that the board partners had a month or less to make and validate a product before the 5090 release.
I can't think of any reason why Nvidia is doing any of this, except that they wish they didn't have board partners anymore, could be the sole producer of cards, and could reduce the bill of materials as much as possible to maximize profit.

It's weird, because I don't get the feeling AMD is a good company to partner with either, but they don't seem to be trying to harm their partners.
 
One could also run the 5090 at a 100 watt power limit, in case 5 out of 6 cables make a bad connection... lol

What a clusterfuck.
 
Really? Makes you wonder how those 3x8 pin (24 wires) managed all these years...

I'd rather think this proves the cable ain't up to spec

The problem is that it doesn't make sense: it isn't the cable that decides where the load goes, it's just a cable.
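Exactly: the split is set purely by Ohm's law across the parallel paths. A toy model (with made-up contact resistances) shows how one well-seated, low-resistance pin ends up eating most of the current:

```python
# Toy model: six 12V wires in parallel between the PSU and GPU planes.
# Current divides in proportion to each path's conductance (1/R).
# The resistances are invented examples of contact + wire variance.

TOTAL_AMPS = 50.0           # ~600 W at 12 V

# One well-seated pin (4 mOhm) among five progressively worse ones.
resistances = [0.004, 0.010, 0.012, 0.015, 0.020, 0.030]   # ohms

conductances = [1 / r for r in resistances]
g_total = sum(conductances)
for i, g in enumerate(conductances):
    print(f"wire {i}: {TOTAL_AMPS * g / g_total:.1f} A")
# The 4 mOhm path carries ~21 A while the 30 mOhm path carries ~3 A,
# with no "decision" made by the cable, the PSU, or the GPU.
```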
 
Here guys, Der8auer has his say:


His conclusion is so ON POINT... Let's hope Nvidia is watching and learning! It is totally unacceptable that, more than 2 years after the RTX 4090 release, we're still at the same point!
Der8auer checked and found that one wire goes up to 23 amps while others sit at 2-3 amps, when they should all be between 6 and 8 amps max! What is Nvidia doing?!
 
His conclusion is so ON POINT... Let's hope Nvidia is watching and learning! It is totally unacceptable that, more than 2 years after the RTX 4090 release, we're still at the same point!
Der8auer checked and found that one wire goes up to 23 amps while others sit at 2-3 amps, when they should all be between 6 and 8 amps max! What is Nvidia doing?!
They got away with it by deflecting, claiming that even professionals are plugging them in wrong...
 

How Nvidia made the 12VHPWR connector even worse.

This video enhanced my understanding of the problem: it explains why, as came up in prior discussions on TPU, simply doubling up the connector won't solve the problem this time around.
 
If you look at the high-res PCB shots from the review on here, it looks like it only has 2 shunt resistors (which is double the Founders Edition), but it doesn't have the 6 in front of them that the ASUS cards have (again, these can be seen on the rear of the PCB in the high-res shots from the card review).
MSI 5090 Suprim: [attachment 384366]
ASUS Astral (front): [attachment 384367]
ASUS Astral (rear): [attachment 384368]



Der8auer got his hands on it and did some testing with his own 12VHPWR connector, and he was seeing one wire on the FE pulling 22 amps.
That said, he had the 12VHPWR connector attached to 2 PCIe outputs of a Corsair AX1600i, whereas the official adapter attaches to 4.

Effectively he's trying to pull 300 W over each of the PCIe connectors, which is double what they are rated at, and that would explain some of the issues; but it doesn't completely explain such an uneven current distribution across the 12V wires: 23 amps vs 11, 8, 5, 3 & 2 amps.
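As a sanity check, those readings do add up to roughly what the card draws:

```python
# Do the per-wire readings sum to the card's power draw?

VOLTS = 12.0
measured_amps = [23, 11, 8, 5, 3, 2]    # per-wire readings quoted above

total_amps = sum(measured_amps)          # 52 A
total_watts = total_amps * VOLTS         # ~624 W
print(f"{total_amps} A total ~= {total_watts:.0f} W")
# ~624 W is right around a 5090's peak draw, so the figures are
# self-consistent: the card gets all its power, one wire just pays for it.
```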

Both of the issues I've seen have been with the FE model of the card, not with any of the AIB cards; they have also been with 12VHPWR ATX 3.0 PSUs, rather than ATX 3.1 PSUs with the 12V-2x6 connector.


Okay, well, that's good to know. I just bought a new ATX 3.1 NZXT C1500 PSU.

Guess I'm wondering about that video from Der8auer, and then Buildzoid's. He basically says Nvidia made the 12VHPWR even worse (he was detecting 150°C on the PSU side while the card was running, which is unbelievable, and that's on a test bench), and that there is nothing on the PCB to tell the card that the pins are not fully connected. Lots of comments on those videos and on Reddit feel like we're going to be getting a lot more failures. Good to hear that the Suprim SOC has 2 shunt resistors, but now I'm worried about the card. I haven't received it yet, I have a 4090, and I'm wondering if it makes sense to return the 5090.

Also, I'm using a right-angled connector from CableMod, not the adapter (the right-angled ones have had no problems reported), and I'm wondering if I should just switch to the cable that came with my PSU. I've had the card for 2 years and have had no issues, so the cable is likely fine; just wondering what you guys think. I also have an alert set up in HWiNFO for if the 12VHPWR rail goes below 11.8 V.
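For what it's worth, here's roughly what that 11.8 V threshold implies, assuming the sensor reads at the card side, the whole drop happens in the cable/connector path, and the card pulls around 575 W (all illustrative figures):

```python
# What does a 12V rail sagging to 11.8 V at the card imply?

VOLTS_NOMINAL = 12.0
VOLTS_ALERT = 11.8
WATTS = 575.0               # assumed card power draw

amps = WATTS / VOLTS_NOMINAL                    # ~48 A total
r_path = (VOLTS_NOMINAL - VOLTS_ALERT) / amps   # ~4.2 mOhm in the path
heat_w = amps ** 2 * r_path                     # ~9.6 W dissipated there
print(f"~{r_path * 1000:.1f} mOhm in the path, ~{heat_w:.1f} W of heat")
# Roughly ten watts cooking a small connector is plenty to soften plastic,
# so a voltage-sag alert is a reasonable (if indirect) canary.
```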

Thanks for the advice!



That was an isolated situation involving a third-party cable that seemingly was not built to proper specs.

As much as I've complained about and hated on this new connector, it does seem that the problems have been ironed out.

Thanks for the reply. Please see my post above for additional concerns; I'm wondering if you saw those videos and still feel that the issues have been resolved.

Also, all the comments on here seem to suggest that this is something that will affect all cards, not just the Founders Edition...

 