
PSA: Don't Just Arm-wrestle with 16-pin 12VHPWR for Cable-Management, It Will Burn Up

Well done, Huang, I hope this costs a bomb to reengineer. The RDNA 7900XT is looking better every day. I don't really care about the numbers; I know it will destroy RDNA2 and Ampere and my 2080 Super and 1080 Ti, so that is all that matters to me. I don't need it to outdo the 4090 at all. With RT being improved over 100% and with FSR, it'll be fine for anything I can throw at it.
 
The comments count is accompanied by a fire emoji, how appropriate.
 
Pretty standard advice. But you wouldn't know that unless you've wrangled a lot of SATA, coaxial, fibre, or other bend-radius-sensitive fine-pitch cabling.

All the little oversights in moving to this new connector exemplify and confirm my 'feelings' on modern engineering at large: practical considerations are set well below 'the book'.
Who cares, right? It's not like they're liable for damages; that's on the manufacturer and end user...
 
Proper 4090 cable and connectors

 
I don't think anyone on the AMD side expected this would become a selling point.


 
I would like 3x 8-pin or 4x 8-pin rather than the shitty Nvidia 12-pin connector.
 
Terrible design. Put out a card that is so huge that it will literally have issues fitting into a number of cases, and then design a power cable for it that is so unreliable it actually has a connect / disconnect limit, and furthermore is a fire hazard.

As some others said, great opportunity for AMD to potentially capitalize on.
 
In the various batteries and EV rides I've built, I use 5.5 mm bullet plugs; they can handle pretty high currents. They should have done something like an XT120 connector, with a sense pin as an optional extra for high-powered PSUs and cards. Given the pins in these things are still pretty weak and not rated that high, there were bound to be issues on cards that can pull so much power. The connector should never have been rated for anything more than 450 W constant.

I recall someone pointing out that the pins are rated for about 8 amps each, and at 600 W each pin would be carrying more than that. So right off the bat, having cards that can pull more than the connectors are rated for is just bad. They probably thought it would only matter during spikes, and that the rest of the time the card would be pulling nowhere near its limit, forgetting people playing with unlocked framerates and people rendering stuff for hours on end.
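For what it's worth, the arithmetic behind that claim checks out. A quick back-of-the-envelope sketch (the ~8 A per-pin rating is the figure quoted above, so treat it as an assumption; the six current-carrying 12 V pins are standard for this connector):

```python
# Rough per-pin current at full 12VHPWR load.
# Assumptions: 12 V rail, 6 current-carrying 12 V pins,
# ~8 A per-pin rating (the figure quoted above).
POWER_W = 600.0
RAIL_V = 12.0
POWER_PINS = 6
PIN_RATING_A = 8.0

total_a = POWER_W / RAIL_V        # 50.0 A total
per_pin_a = total_a / POWER_PINS  # ~8.33 A per pin

print(f"Total current:   {total_a:.1f} A")
print(f"Per-pin current: {per_pin_a:.2f} A (rated ~{PIN_RATING_A:.0f} A)")
```

At 600 W that works out to roughly 8.3 A per pin, slightly over the quoted 8 A rating, with zero margin for uneven contact resistance.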

I would like 3x 8-pin or 4x 8-pin rather than the shitty Nvidia 12-pin connector.
It's not Nvidia's connector, as has been pointed out many times.

So it's an NV f-up, top to bottom.
Intel doesn't deserve even part of the blame:



PCI SIG analysis (prior to meltdowns)


It's the same connector minus the sense pins; instead, NV put a sense chip in the wire to tell the GPU how many cables are connected.
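For context, the sense sideband defined in the ATX 3.0 / PCIe 5.0 spec signals the cable's power capability to the card by tying the two sense pins to ground or leaving them open. A rough sketch of the commonly reported encoding (the exact mapping here is from memory of published spec summaries, so treat it as illustrative):

```python
# Illustrative sketch of the 12VHPWR sideband encoding (SENSE1, SENSE0)
# as commonly reported from the ATX 3.0 / PCIe 5.0 spec.
# "gnd" = pin tied to ground inside the cable; "open" = not connected.
SENSE_TO_SUSTAINED_W = {
    ("gnd",  "gnd"):  600,  # cable advertises the full 600 W
    ("open", "gnd"):  450,
    ("gnd",  "open"): 300,
    ("open", "open"): 150,  # default / unkeyed cable
}

def cable_limit(sense1: str, sense0: str) -> int:
    """Sustained power the cable advertises to the GPU."""
    return SENSE_TO_SUSTAINED_W[(sense1, sense0)]

print(cable_limit("gnd", "gnd"))  # 600
```

The point of the quoted post is that Nvidia's adapter drives these signals with a small chip in the cable, based on how many 8-pin leads are plugged in.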

The connector was always gonna be fine for lower-powered stuff; it had no issues on my 3080 FE roaring away for hours on end.

Yes, it is NV's fault though: they're running too many amps through these pins, so, as the picture shows, without a solid connection they will fry. Even with a solid connection they're going higher than the pins are rated for at the full 600 W. They're basically Dupont-style connectors on the wire end of the plug, and have never been rated that high compared to loads of other connector types, but they are cheap as chips compared to anything slightly more solid. The 4080 should have no issues with it; it's the 4090's peak power draw that's the main issue, and really NV should have had that card alone use a 16-pin connector instead to spread the load more, or two 12-pins to ensure no issues. 600 W is just beyond what's safe for those tiny pins to handle. That's 50 amps spread across those teeny tiny things.
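To put numbers on that, here's a rough comparison of per-pin load against the classic 8-pin PCIe plug (the 150 W 8-pin rating and the pin counts are standard figures; the comparison is a sketch, not a spec quote):

```python
# Per-pin current: 8-pin PCIe (Mini-Fit style) vs 12VHPWR (Micro-Fit style).
# An 8-pin carries 150 W over 3 x 12 V pins; 12VHPWR carries up to
# 600 W over 6 x 12 V pins.
RAIL_V = 12.0

def per_pin_amps(watts: float, power_pins: int) -> float:
    return watts / RAIL_V / power_pins

print(f"8-pin PCIe @ 150 W: {per_pin_amps(150, 3):.2f} A/pin")  # ~4.17
print(f"12VHPWR    @ 600 W: {per_pin_amps(600, 6):.2f} A/pin")  # ~8.33
```

So the new connector asks each physically smaller pin to carry roughly twice the current of an 8-pin's pins.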
 
Pretty standard advice. But you wouldn't know that unless you've wrangled a lot of SATA, coaxial, fibre, or other bend-radius-sensitive fine-pitch cabling.

All the little oversights in moving to this new connector exemplify and confirm my 'feelings' on modern engineering at large: practical considerations are set well below 'the book'.
Who cares, right? It's not like they're liable for damages; that's on the manufacturer and end user...
A lot of people place the blame solely on the end user, even though Ada cards are uncommonly wide and most PC cases weren't made with them in mind; the new adapter cable compounds the problem with its long, stiff "strain relief".
 
Hi,
Yeah, but you have to laugh at someone buying a $1,600+ GPU and trying to put it in a mid-tower :laugh:
That would easily fit in my mid-tower :)
 
This is not NV's idea; it's a new general standard.
4x 8-pin is not better; I think it's even worse.
Using the 12VHPWR-to-4x-8-pin adapter makes life harder.
AMD needs to adopt the 12VHPWR too, but position it better on the GPU.


Because when you need 2x 12VHPWR to feed next-gen GPUs, that's 4x whatever you suggested.
The heat is not the problem here; the bending force on the connector is.

Wasn't this a collaborative effort between Intel and PCI-SIG?

This wasn't an Intel idea. This was an Nvidia idea that was passed through the PCI-SIG consortium and got approval. Intel only added it to the ATX spec AFTER it was passed through PCI-SIG.

The connector works fine in most cases. But there are caveats (don't put a bend within 30 mm of the connector, etc.).

Remember, this is the same connector as the 30 series FE. That was only 450W.

I think the problems started coming up when Nvidia said "hey look.. if you use a 600W cable you can clock the 4090 card higher", which is why I went on the profanity-laden rant that got me exiled from GamersNexus (he chose to quote me out of context instead of addressing the actual issues, because he had Nvidia in house at the time).

I have personally used this connector upwards of 55 A at 50°C. But you CAN NOT put a bend on the cable within 30 mm of the connector, or run it hotter than 50°C. This is shown in the PCI-SIG report that was leaked, but that report only talked about the connector on the PSU side (which is why Corsair ATX 3.0 PSUs don't have the 12VHPWR connector on the PSU end) and never talked about the GPU side.
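That 30 mm no-bend zone translates directly into case clearance. A minimal sketch of the check (every dimension below is a hypothetical placeholder; measure your own case and card):

```python
# Will the cable clear the side panel without bending inside the
# no-bend zone? All dimensions are hypothetical examples.
CASE_INNER_WIDTH_MM = 230   # motherboard tray to side panel (example)
MOBO_PLUS_CARD_MM = 175     # standoffs + card height to the socket (example)
PLUG_BODY_MM = 20           # protrusion of the plug itself (assumption)
NO_BEND_ZONE_MM = 30        # straight run required past the plug

needed = MOBO_PLUS_CARD_MM + PLUG_BODY_MM + NO_BEND_ZONE_MM
print(f"Needed: {needed} mm, available: {CASE_INNER_WIDTH_MM} mm")
print("OK" if needed <= CASE_INNER_WIDTH_MM
      else "Side panel forces a bend inside the no-bend zone")
```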
 
If your connector has restrictions on how you plug it in, then the connector is bad.
For example, there is no possible way I could use a 12VHPWR connector in my case; there isn't room. There is barely room for the two 8-pins on my 3070 Ti, and those make a hard bend to clear the side panel.
To get a safe bend while keeping the last 1.2 inches straight, the cable needs to make a massive arc; nobody wants that.

Nvidia needs to revise this connector ASAP, maybe using a longer pin or a differently shaped pin (T- or +-shaped).
For right now, right-angle adapters seem to be the solution; hopefully more PSU vendors will make 90°/L-shaped adapters.
Edit: Reddit has once again done Nvidia's job for them. Fixed.
We can close this thread now.
 

Maybe it was designed in collaboration with NZXT :D...
Joke aside, both sides are to blame. The user of the top GPU (at least for the moment) should avoid stressing a 600 W cable or connector. On the other hand, Nvidia knows that this huuuge card will be difficult to fit in many cases, and must put the needed clearance as a red flag in the specs. It's another 20% of added height on a 4-slot GPU that's already taller than most tower coolers...
 
Why is the power connector on top of the card when these cards are extremely tall? Putting the connector on the back would at least remove the need for a sharp bend. And I would like that even for cards with multiple 8-pin connectors.
 
If your connector has restrictions on how you plug it in, then the connector is bad.
Damn... polarized AC outlets are gonna screw up your worldview. What a stupid thing to say.
 
Why is the power connector on top of the card when these cards are extremely tall? Putting the connector on the back would at least remove the need for a sharp bend. And I would like that even for cards with multiple 8-pin connectors.
RTX 4090 cards are also extremely long, so having a connector with a long, stiff stress relief, plus cables that you shouldn't bend too tightly, also excludes a lot of cases.

I think none of this mess would have happened if they had made the adapter with a 90-degree bend. I know that's not practical for those who mount their cards vertically, but they are a minority, and they already have to buy a riser cable, so they can invest in another nice (if unnecessary) cable.
 
Yes, it is NV's fault though: they're running too many amps
They:

1) Designed that shit (Intel designed the sensing pins only)
2) Tested it and found it HIGHLY PROBLEMATIC (see the PCI-SIG report in my previous post)
3) STILL found it OK to push it out to the market
The connector works fine in most cases. But there are caveats (don't put a bend within 30 mm of the connector, etc.).
Except it doesn't, even per the docs NV itself submitted to PCI-SIG.

 
There is no "general standard" of "ship a home-made adapter that cannot fit properly in 93% of the cases".

This issue is absolutely NV's creation and doesn't have anything to do with the 12-pin socket.

IF NV was too greedy to include a proper 90-degree adapter, it could have located the socket differently.
I mostly agree; NV implemented the new power standard, and the position plus the adapter make it very space-constrained.
But NV didn't invent anything, they just adopted it.

Not in the slightest.
Maybe not this gen, but it's the way forward for high-end, high-wattage GPUs at least. I just hope they will learn from NV's (not so good) way of implementing this new standard.
 
But NV didn't invent anything, they just adopted it.
No. Per their own words (links have been shared several times), it was SPECIFICALLY NVIDIA that designed the power-delivery aspect of that socket. Intel only did the sensing part.
 
Looks like the incoming power line connection at my home.
 
They:

1) Designed that shit (Intel designed the sensing pins only)
2) Tested it and found it HIGHLY PROBLEMATIC (see the PCI-SIG report in my previous post)
3) STILL found it OK to push it out to the market

Except it doesn't, even per the docs NV itself submitted to PCI-SIG.

It's still perfectly fine for lower-current cards. Again, it's been in use for two years already; the 3080, 3090, and 3090 Ti have had no problems with it. It's only now, with the 4090, that it's an issue. If you look at PCI-SIG's test showing the failure, that's when drawing 55 A constant, i.e. 660 W, for it to fail in their test. And Nvidia may claim they made it, but it's just a Molex Micro-Fit 3.0 BMI dual-row header.
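Sanity-checking those numbers with simple Ohm's-law arithmetic (assuming a 12 V rail, which is what this connector carries):

```python
# Failure point in the leaked PCI-SIG test: 55 A sustained.
RAIL_V = 12.0
FAIL_CURRENT_A = 55.0
print(f"Failure power: {FAIL_CURRENT_A * RAIL_V:.0f} W")  # 660 W

# A 4090 at its 600 W limit draws 600 / 12 = 50 A, only ~10% below
# the current at which the connector failed in that test.
margin_a = FAIL_CURRENT_A - 600.0 / RAIL_V
print(f"Margin vs a 600 W load: {margin_a:.0f} A")        # 5 A
```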
 
It's still perfectly fine for lower-current cards
Which need it about as much as birds need pigtails.

the 3080, 3090, and 3090 Ti have had no problems with it
Yeah. Why do you think that could be:


let alone the size...

to bash Nvidia as though it's their fault.
How is "it" not NV's fault???

Who designed that thing? NV (no, not Intel; Intel designed only the sensing pins)
Who KNEW from testing it was terrible? NV (yes, they even submitted it to PCI-SIG)
Who cheaped out on 90-degree connectors for a GPU that costs 2,000+ euros and does not fit into 93% of cases?

I'm pretty sure it wasn't my grandma, nor was it Intel or AMD.
 
Which need it about as much as birds need pigtails.


Yeah. Why do you think that could be:


let alone the size...


How is "it" not NV's fault???

Who designed that thing? NV (no, not Intel; Intel designed only the sensing pins)
Who KNEW from testing it was terrible? NV (yes, they even submitted it to PCI-SIG)
Who cheaped out on 90-degree connectors for a GPU that costs 2,000+ euros and does not fit into 93% of cases?

I'm pretty sure it wasn't my grandma, nor was it Intel or AMD.

Reading comprehension failure. Go and reread exactly what I posted about mating cycles (not being an Nvidia thing). Then read the part about shitty bending angles where I say it is a problem.

Then post in context.
 