
PSA: Don't Just Arm-wrestle with 16-pin 12VHPWR for Cable-Management, It Will Burn Up

Proper 4090 cable and connectors

View attachment 267221
Or one of these:
IMG_20221026_132349.jpg
 
In the various batteries and EV rides I've built, I use 5.5 mm bullet plugs; they can handle pretty high currents. They should have done something like an XT120 connector, with a sense pin as an optional extra for high-powered PSUs and cards. Given that the pins in these things are still pretty weak and not rated that high, there were bound to be issues on cards that can pull so much power. The connector should never have been rated for anything more than 450 W continuous.

I recall someone pointing out that the pins are rated for about 8 amps each, or something similar, and at 600 W the card would be pulling more than that per pin. So right off the bat, having cards that can pull more than the connectors are rated for is just bad. They probably thought it would only matter for spikes and that the rest of the time the card would be pulling nowhere near its limit, forgetting about people playing with unlocked framerates or rendering stuff for hours on end.
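As a rough sanity check (my own numbers, assuming a nominal 12 V rail and the connector's six +12 V pins sharing the load evenly):

```python
# Rough per-pin current at a sustained 600 W (my own numbers, not a spec
# figure). The 12VHPWR connector has six +12 V pins and six ground returns,
# so six pins carry the supply current in each direction.
RAIL_VOLTAGE = 12.0   # volts, nominal
BOARD_POWER = 600.0   # watts, assumed worst-case sustained draw
LIVE_PINS = 6         # +12 V pins in the connector

total_current = BOARD_POWER / RAIL_VOLTAGE   # ~50 A
per_pin = total_current / LIVE_PINS          # ~8.3 A if shared evenly

print(f"Total current: {total_current:.0f} A")
print(f"Per-pin current: {per_pin:.2f} A")
```

If the ~8 A per-pin figure quoted above is right, that leaves essentially no margin once one pin seats badly and the others have to take up the slack.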


It's not Nvidia's connector, as has been pointed out many times.


It's the same connector minus the sense pins; instead, NV put a sense chip in the adapter cable to tell the GPU how many cables are connected.

The connector was always gonna be fine for lower-powered stuff; I had no issues with my 3080 FE roaring away for hours on end.

Yes, it is NV's fault though: they're running too many amps through these pins, so as the picture shows, without a solid connection they will fry. Even with a solid connection they're going higher than the pins are rated for at the full 600 W. They're basically Dupont-style connectors on the wire end of the port, and have never been rated that high compared to loads of other connector types. But they are cheap as chips compared to anything slightly more solid. The 4080 should have no issues with it; it's the 4090's peak power draw that's the main issue, and really NV should have had that card alone use a 16-pin connector instead to spread the load more, or two 12-pins to ensure no issues. 600 W is just beyond what's safe for those tiny pins to handle. That's 50 amps spread across those teeny tiny things.
My math doesn't agree with your conclusion. Assuming you are correct that each pin is good for 8 amps, 12 pins at 12 volts are good for 1152 W. At 600 W they should each be handling a little over 4 amps if the load is evenly spread, which it probably isn't. We still have plenty of headroom though.
 
It's not a problem of power per pin; it's just that the pins aren't secured well enough. If they had the same setup but with a more secure socket for each pin, there would be no issue.

At those wattages (300 W+), you need to make sure your socket is secured and held in place properly. This one is just too flimsy. Redo the same setup with something that locks the connection in place properly and everyone would be fine.
 
Does this make anyone else wish that all PCIe GPU power connectors were right-angle connectors in the first place?

Cable training/management has always put some kind of force on GPUs in opposition to the PCIe slot in my experience, especially in smaller mid-tower cases and SFF builds.

I'm guessing there's a good reason they haven't. I wonder if even changing the orientation of the plug would have been a better solution in the long run (vertically off the PCB in either the up or down direction, like a motherboard power connector). Seems like an odd limitation to stand by when I think about it.
 
Does this make anyone else wish that all PCIe GPU power connectors were right-angle connectors in the first place?

Cable training/management has always put some kind of force on GPUs in opposition to the PCIe slot in my experience, especially in smaller mid-tower cases and SFF builds.

I'm guessing there's a good reason they haven't. I wonder if even changing the orientation of the plug would have been a better solution in the long run (vertically off the PCB in either the up or down direction, like a motherboard power connector). Seems like an odd limitation to stand by when I think about it.

Yes! This. But the focus was always on getting the best cooler, and that position is the one that restricts airflow and radiator size the least. Now that they have massively oversized coolers, they could do something better, but if you have a 4-slot GPU, it's quite a deep reach to plug in your card if the connector sat flat on the board instead of at a 90° angle (unless they made it extra long).

As for having it facing the top, that's another story: you would need a mezzanine card to prevent it from sticking out of the back of the card. Not impossible, but way more complex, and again they would need to secure that in place to avoid frying your board.

But god, that would look way better than what we have now.

The best alternative would probably be at the end of the card, but again, due to cooling, the PCB no longer extends to the end of the card, since they let air flow through there.

At this point, why not redo the whole PCIe connector to allow it to deliver 600+ W and just have a beefier connector on the motherboard?

No simple solution right now.
 
Yes! This. But the focus was always on getting the best cooler, and that position is the one that restricts airflow and radiator size the least. Now that they have massively oversized coolers, they could do something better, but if you have a 4-slot GPU, it's quite a deep reach to plug in your card if the connector sat flat on the board instead of at a 90° angle (unless they made it extra long).

As for having it facing the top, that's another story: you would need a mezzanine card to prevent it from sticking out of the back of the card. Not impossible, but way more complex, and again they would need to secure that in place to avoid frying your board.

But god, that would look way better than what we have now.

The best alternative would probably be at the end of the card, but again, due to cooling, the PCB no longer extends to the end of the card, since they let air flow through there.

At this point, why not redo the whole PCIe connector to allow it to deliver 600+ W and just have a beefier connector on the motherboard?

No simple solution right now.
Because PCBs are complicated and cramped enough without figuring out how to run traces big enough to reliably carry 50 A the required distance without causing signalling problems.
Also, redesigning the PCIe spec because of a single GPU from a single vendor? Are you outta your mind?
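For a sense of scale, here's a rough IPC-2221-style estimate of what a single 50 A trace would take (my own assumptions on copper weight and allowed temperature rise, so treat the numbers as ballpark only):

```python
# Rough IPC-2221-style estimate of the copper needed for a single 50 A trace.
# Assumptions (mine): outer layer, 20 °C temperature rise, 1 oz/ft² copper.
# IPC-2221 external-layer fit: I = 0.048 * dT**0.44 * A**0.725, A in sq mils.
CURRENT = 50.0          # amps, 600 W / 12 V
K_EXTERNAL = 0.048
TEMP_RISE = 20.0        # °C above ambient, assumed
OZ_COPPER_MILS = 1.378  # thickness of 1 oz/ft² copper in mils

area_sq_mils = (CURRENT / (K_EXTERNAL * TEMP_RISE ** 0.44)) ** (1 / 0.725)
width_mils = area_sq_mils / OZ_COPPER_MILS
print(f"Cross-section needed: ~{area_sq_mils:.0f} sq mils")
print(f"Trace width on 1 oz copper: ~{width_mils:.0f} mils (~{width_mils * 0.0254:.0f} mm)")
```

That works out to a trace on the order of 4 cm wide on standard 1 oz copper, which is exactly why nobody is keen on routing GPU-class power through the motherboard.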
 
This is the same industry-standard cycle rating as for many Molex connectors, i.e., not an issue for the normal end user.

View attachment 267011

Scare-mongering (or lack of due diligence) isn't helpful when trying to remain a reliable tech site.

I actually noticed that one of my PCIe cables (8-pin) had a good amount of corrosion. The way I detected it was HWiNFO reporting only 11 V on the 12 V VRM input rail. That couldn't be good. Once I swapped out the cable it was a clean 12 V again.

So yeah, it's real. They wear out with the number of times they're installed and removed, I guess at the cost of whatever coating is on top of the metal pins.
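For what it's worth, a quick back-of-envelope on why a 1 V drop like that is alarming (the load current here is my own assumption, not a measured figure):

```python
# What a ~1 V drop on the 12 V rail implies for the cable/connector.
# Assumed numbers: ~300 W card, so ~25 A through the 8-pin cable.
RAIL_VOLTAGE = 12.0   # volts, nominal
LOAD_POWER = 300.0    # watts, assumed draw
MEASURED_DROP = 1.0   # volts lost between PSU and GPU VRM input

current = LOAD_POWER / RAIL_VOLTAGE      # ~25 A
resistance = MEASURED_DROP / current     # ~40 milliohms of extra resistance
heat = MEASURED_DROP * current           # ~25 W dissipated in cable/contacts

print(f"Current: {current:.1f} A")
print(f"Implied series resistance: {resistance * 1000:.0f} mΩ")
print(f"Heat in the cable/contacts: {heat:.1f} W")
```

Tens of milliohms of extra resistance doesn't sound like much until you realise it's dumping on the order of 25 W into the cable and contacts.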
 
This is the same industry-standard cycle rating as for many Molex connectors, i.e., not an issue for the normal end user.

View attachment 267011

Scare-mongering (or lack of due diligence) isn't helpful when trying to remain a reliable tech site.
Hi,
Every PSU I've bought to date has included six VGA cables.
When will Nvidia include six adapters?
Or will PSU makers ship the same six cables for new GPUs?
 
Because PCBs are complicated and cramped enough without figuring out how to run traces big enough to reliably carry 50 A the required distance without causing signalling problems.
Also, redesigning the PCIe spec because of a single GPU from a single vendor? Are you outta your mind?
Didn't Apple manage it with Vega?
All on the PCB.

I.e. PCIe-ish, but with an additional power connector.
 
Didn't Apple manage it with Vega?
All on the PCB.

I.e. PCIe-ish, but with an additional power connector.

Pretty much all the hardware Apple releases is proprietary, and they can design their own standards. They don't have to stick to ATX or PCIe specs.
 
Pretty much all the hardware Apple releases is proprietary, and they can design their own standards. They don't have to stick to ATX or PCIe specs.
Does that answer my question with a yes?

I had heard of Apple and their walled garden, so nothing you just said was new to me.

So if it's been done, it could be done again, or did Apple patent inline PCB power connection somehow?
 
Very simple: Nvidia overstepped what is acceptable power draw in an ATX PC. This is a design failure.
 
Does that answer my question with a yes?

I had heard of Apple and their walled garden, so nothing you just said was new to me.

So if it's been done, it could be done again, or did Apple patent inline PCB power connection somehow?

Apple is pretty much computers ready to go. You buy one and you don't have to look for ways to upgrade or do the "build it yourself" type of thing. It's very easy right from the start. However, Apple is so different in terms of software that most Windows users couldn't manage themselves on an Apple machine in the first place.

And inline PCB power isn't something new.

105601_hpe-cray-ex-amd-instinct-mi250x-at-sc21-close.jpg

Look up the OAM form factor. It's capable of more than 500 W of power delivery per "card". There are servers out there with eight of those things stacked into them:

105600_hpe-cray-ex-amd-instinct-mi250x-at-sc21-side.jpg

Servers that require almost 4 kW of power at full load.
 
Apple is pretty much computers ready to go. You buy one and you don't have to look for ways to upgrade or do the "build it yourself" type of thing. It's very easy right from the start. However, Apple is so different in terms of software that most Windows users couldn't manage themselves on an Apple machine in the first place.

And inline PCB power isn't something new.

View attachment 267329

Look up the OAM form factor. It's capable of more than 500 W of power delivery per "card". There are servers out there with eight of those things stacked into them:

View attachment 267330

Servers that require almost 4 kW of power at full load.
Next you'll tell me PCBs were invented for building circuits and that Apple isn't a fruit.

Wtaf, do you really think I asked without knowing this stuff?!

I just wasn't sure if Apple used something like it on the dual Vega card they had.

And despite two replies I'm still not 100% sure.
 
Because PCBs are complicated and cramped enough without figuring out how to run traces big enough to reliably carry 50 A the required distance without causing signalling problems.
Also, redesigning the PCIe spec because of a single GPU from a single vendor? Are you outta your mind?
Well, if you use one trace to carry the whole 12 V load that's a problem, but if you use 12 traces, each one only has to carry around 4-5 A.

Like said above, carrying that amount of power on very complex circuit boards is already done on the server side. This is not impossible, far from it.

The biggest challenge is how you handle the transition to the new standard: cards with 8-pin/16-pin plus board power until enough motherboards have the new GPU slot. Not impossible, it just needs multiple corporations to agree on a timeline.
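To put the "split it across traces" idea in numbers (same rough IPC-2221 ballpark as above, with my own assumed copper weight and temperature rise):

```python
# Rough per-trace width once 50 A is split across 12 parallel paths
# (assumptions: 600 W at 12 V, external 1 oz copper, 20 °C rise, IPC-2221 fit).
CURRENT_TOTAL = 600.0 / 12.0      # 50 A
PATHS = 12
K_EXTERNAL = 0.048
TEMP_RISE = 20.0
OZ_COPPER_MILS = 1.378

per_path = CURRENT_TOTAL / PATHS                                     # ~4.2 A
area = (per_path / (K_EXTERNAL * TEMP_RISE ** 0.44)) ** (1 / 0.725)  # sq mils
width_mm = area / OZ_COPPER_MILS * 0.0254
print(f"~{per_path:.1f} A per path -> ~{width_mm:.1f} mm wide trace each")
```

Millimetre-scale traces (or planes) per path are routable, so the copper itself isn't the blocker; as said above, the hard part is the standards transition.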
 
No. Per their own words (links have been shared several times), it was SPECIFICALLY NVIDIA that designed the power-delivery aspect of that socket. Intel only did the sensing part.
Well, if it's only NV's work then OK, they are responsible from top to bottom. I really thought it was part of the ATX 3.0 spec.

Anyway, it would still be interesting to know the details (if there are any) of how it was connected: was it fully clicked on? Were the wires bent, and if so, was any force applied to the connector that made it tilt as a result?

Unless we see multiple incidents of non-bent yet melted connectors, it can be classified as simple 'human error'.

Also, I can totally see this being a deal-breaker for some and, more than that, the holy grail of bashing ammo.
The Samsung exploding-battery kind of thing.

All of this, and the 4090 Ti with its 525 W stock power is yet to come. Yummy!
 
Not going to risk it and leave the cable straight with an open case; I had the cables tucked away with a slight curved bend in my H210 ITX case. Can someone do a thermal test directly on the cables, with and without a bend, at max load, obviously for a short period of time?
 
Not going to risk it and leave the cable straight with an open case; I had the cables tucked away with a slight curved bend in my H210 ITX case. Can someone do a thermal test directly on the cables, with and without a bend, at max load, obviously for a short period of time?
Have a word with yourself.

You imply you have one (a 4090).

Why would anyone, though, other than a journalist, do this to their £1600 / 260,000$ :p purchase?

Just to see.
 
Nice. The cards are already huge bricks, and yet they still need even more space on their sides? After all, it wasn't that bad when the power connector(s) were at the back of the card like in the old days.
 
Have a word with yourself.

You imply you have one (a 4090).

Why would anyone, though, other than a journalist, do this to their £1600 / 260,000$ :p purchase?

Just to see.
An ITX case has its challenges, and I game with headphones on, so it doesn't bother me. I school the journalists. :rockout::cool:
 
My math doesn't agree with your conclusion. Assuming you are correct that each pin is good for 8 amps, 12 pins at 12 volts are good for 1152 W. At 600 W they should each be handling a little over 4 amps if the load is evenly spread, which it probably isn't. We still have plenty of headroom though.
You're counting all of that power as going in through 12 pins. There are six 12 V pins sharing 50 A, then six ground pins returning it.
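Spelling out the two ways of counting (assuming 600 W at a nominal 12 V and the ~8 A per-pin rating quoted earlier in the thread, which is not an official figure):

```python
# The disagreement above in numbers: only the six +12 V pins carry supply
# current; the other six are ground returns, so they don't add headroom.
POWER = 600.0          # watts, assumed sustained draw
VOLTS = 12.0           # nominal rail
PIN_RATING = 8.0       # amps per pin, figure quoted earlier in the thread

total = POWER / VOLTS                  # 50 A
split_over_12 = total / 12             # ~4.2 A -- the optimistic reading
split_over_6 = total / 6               # ~8.3 A -- the six live pins only

print(f"Total: {total:.0f} A")
print(f"Naive split over 12 pins: {split_over_12:.1f} A per pin")
print(f"Split over 6 live pins: {split_over_6:.1f} A per pin (rating ~{PIN_RATING:.0f} A)")
```

On that reading, the connector is running at or slightly above the quoted per-pin rating at a sustained 600 W, which is why poor contact on even one pin matters so much.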
 