
8-pin PCIe to ATX 12VHPWR Adapter Included with RTX 40-series Graphics Cards Has a Limited Service-Life of 30 Connect-Disconnect Cycles

Nope. Not even poor pin connections in the adapters. Pin connections in the 12VHPWR connector itself.

Which is funny, because JayZ noticed it... touched on it... showed a picture of the connector melting... but still went on about how it's the adapter's fault. Yes, there is A LOT of tape wrapped around the 12VHPWR connector on the GPU side to limit the bend radius. Correct: to limit the bend radius at the 12VHPWR connector. Come on, Jay... use your head... what does that have to do with the 8-pin side, or with the fact that the cable you're holding is an adapter?

He's telling people "you better buy a PSU that's ATX 3.0" (which to him means it has the 12VHPWR connector on the modular interface), and that is LITERALLY the part that is melting!
Wait, so you're saying the fault lies in the 12VHPWR connector at the PCB side, not the cable side? 'Cause when I say "adapter", that's only because those are the cables we're talking about, not due to there necessarily being something wrong with them being adapters (outside of the possibility of worse crimps with multiple wires, but that should be a solvable QC issue).
 
The 150W "spec" comes from PCI-SIG, driven by Nvidia, as well. The had already defined that the reason for the 6-pin on a GPU was because the GPU needed 75 more watts of power than what the slot could deliver. Needs 150W? Put in a second 6-pin. But when cards started needing 225W additional, Nvidia thought it would be silly to put a third 6-pin on the card. So they asked PCI-SIG if they could put two sense pins on an 8-pin connector so, if grounded, the card would "know" that it was safe to demand 150W from a single connector.

Think about it logically. Surely you would expect 6 conductors on a Mini-Fit Jr. connector to deliver more power than 4 tiny pins in a PCIe slot, not just the same amount. And you don't magically double the capacity of a 6-conductor connector by simply adding two sense pins. If you want to double the capacity, you MORE THAN double the conductors. But that wasn't the goal here. The goal here was to specify the power demands of the card: 75W, 150W, 225W, 300W, and so on.
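Just to make that tiering concrete, here's a minimal sketch (mine, not anything from the PCI-SIG documents) of how a card's power budget adds up from the slot plus its auxiliary connectors, using the 75W/75W/150W figures described above:

```python
# Illustrative sketch of the PCIe power-budget tiers described above.
# Assumptions: 75 W from the slot, +75 W per 6-pin, +150 W per 8-pin
# (the 8-pin's two extra pins are sense pins telling the card the
# higher draw is allowed, not extra current-carrying conductors).

SLOT_W = 75
AUX_W = {"6-pin": 75, "8-pin": 150}

def max_board_power(aux_connectors):
    """Total power budget for a card with the given aux connectors."""
    return SLOT_W + sum(AUX_W[c] for c in aux_connectors)

if __name__ == "__main__":
    for combo in ([], ["6-pin"], ["6-pin", "6-pin"], ["8-pin"],
                  ["8-pin", "6-pin"], ["8-pin", "8-pin"]):
        print(combo or ["slot only"], "->", max_board_power(combo), "W")
```

Running it spits out the 75/150/225/300/375W steps the spec is really encoding.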

The cheaper terminals from reputable brands are brass with tin plating, used with 18 AWG wire. In a 2x4 configuration with 2x3 terminations, those terminals support 8A per conductor. So your "typical" 6-pin PCIe, or even 8-pin PCIe (since it's still technically only 6 power conductors), is capable of 24A. So at 11.4V (because we always work +/-5%), you're talking about 273.6W per connector. Fine for a 6-pin and an 8-pin on the same cable, but not good for two 8-pins, which is why they tell you not to daisy chain. It's also why Nvidia's squid adapter has three 8-pins and not just two.

Some manufacturers use Mini-Fit HCS terminals. These are rated at 10A per terminal in a 2x3 configuration. So, using the same math, you have 342W per connector, assuming the voltage drops to 11.4V.

The cable Corsair made here: https://www.corsair.com/us/en/Categories/Products/Accessories-|-Parts/PC-Components/Power-Supplies/600W-PCIe-5-0-12VHPWR-Type-4-PSU-Power-Cable/p/CP-8920284 uses 6x Mini-Fit HCS terminals per 8-pin connector. So two of those Type 4 connectors are capable of a total of 60A, which is, obviously, GREATER than the capability of the 12VHPWR connector.
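If it helps to see that arithmetic in one place, here's a quick sketch redoing the numbers quoted above: 3 current-carrying +12V conductors per connector, 8A per terminal for the cheaper brass/tin terminals versus 10A for Mini-Fit HCS, and a worst-case rail of 11.4V (12V minus 5%). The ratings are the ones stated in this post, not pulled from a Molex datasheet here:

```python
# Recalculation of the connector-capacity figures from the post above.
# A 6-pin or 8-pin PCIe connector carries current on 3 x +12V conductors.
POWER_CONDUCTORS = 3
V_MIN = 12.0 * 0.95          # worst-case rail: 12 V minus 5 % tolerance = 11.4 V

terminal_ratings = {
    "brass/tin Mini-Fit Jr.": 8.0,   # amps per terminal (as stated in the post)
    "Mini-Fit HCS": 10.0,
}

for name, amps in terminal_ratings.items():
    total_amps = amps * POWER_CONDUCTORS
    watts = total_amps * V_MIN
    print(f"{name}: {total_amps:.0f} A -> {watts:.1f} W per connector")

# Two HCS-terminated 8-pins (e.g. the Corsair Type 4 cable linked above):
print("2x HCS 8-pin:", 2 * 10.0 * POWER_CONDUCTORS, "A total")
```

That reproduces the 273.6W, 342W, and 60A figures above.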
I was basing it on what PCI-SIG said for the 150W per connector, as they are supposed to have done all the testing prior.

If that's true, why is the spec not 180 or 200W per 6/8-pin? If the worst case is 273W, that's leaving a lot on the table.

Also, how do you fix this? Obviously tape on the connector to prevent it from flexing is not going to fly.
People are going to flex those cables, or bend them, or cram them into places.
Whoever thought tape was a good fix is a moron.
 
Wait, so you're saying the fault lies in the 12VHPWR connector at the PCB side, not the cable side? 'Cause when I say "adapter", that's only because those are the cables we're talking about, not due to there necessarily being something wrong with them being adapters (outside of the possibility of worse crimps with multiple wires, but that should be a solvable QC issue).
Yes.

The 12VHPWR connector itself.

BUT... Look at the test case and you'd see why Nvidia didn't delay the 40 series launch, yet the PCI-SIG thought there was enough of an issue to send out an email: 30mm bend. 55A. For hours of continuous use.

That's why leaks like these are so bad. Wccftech only leaked the body of the email without any context or any content from the attachment. Wccftech should be nuked off the internet for such irresponsible journalism.

I was basing it on what PCI-SIG said for the 150W per connector, as they are supposed to have done all the testing prior.

If that's true, why is the spec not 180 or 200W per 6/8-pin? If the worst case is 273W, that's leaving a lot on the table.
Lack of hindsight. Back then, Nvidia didn't think they'd ever ship a card with a power requirement of more than 300W. Surely the standard would've changed in the 15 years since! LOL!
Also, how do you fix this? Obviously tape on the connector to prevent it from flexing is not going to fly.
People are going to flex those cables, or bend them, or cram them into places.
Whoever thought tape was a good fix is a moron.
Yeah. The tape is silly. There is more flexible insulation that would let the cable bend more easily, but it's more expensive, sooo... But given the price of these cards, SURELY there's enough money to go around to pay for some better cable materials. LOL!

Really, it's quite typical of your average PC builder today, so why should anything change? Blows their wad on a 4090 Ti, then buys an Aerocool KCAS 1000W PSU and some cheap Asiahorse extensions to power it.
 
Yes.

The 12VHPWR connector itself.

BUT... Look at the test case and you'd see why Nvidia didn't delay the 40 series launch, yet the PCI-SIG thought there was enough of an issue to send out an email: 30mm bend. 55A. For hours of continuous use.

That's why leaks like these are so bad. Wccftech only leaked the body of the email without any context or any content from the attachment. Wccftech should be nuked off the internet for such irresponsible journalism.


Lack of hindsight. Back then, Nvidia didn't think they'd ever ship a card with a power requirement of more than 300W. Surely the standard would've changed in the 15 years since! LOL!

Yeah. The tape is silly. There is more flexible insulation that would let the cable bend more easily, but it's more expensive, sooo... But given the price of these cards, SURELY there's enough money to go around to pay for some better cable materials. LOL!

Really, it's quite typical of your average PC builder today, so why should anything change? Blows their wad on a 4090 Ti, then buys an Aerocool KCAS 1000W PSU and some cheap Asiahorse extensions to power it.
So what we need is a 36-pin connector rated for at least 1500W, so we don't have to do this again in 5 years at the rate Nvidia is scaling power consumption.
Fk it, let's just make it two 4/0 gauge wires with lugs and terminals and two 18 AWG sense wires.
Would probably look better to boot.
 
So what we need is a 36-pin connector rated for at least 1500W, so we don't have to do this again in 5 years at the rate Nvidia is scaling power consumption.
Fk it, let's just make it two 4/0 gauge wires with lugs and terminals and two 18 AWG sense wires.
Would probably look better to boot.
LOL! Actually, the lack of sideways flexibility is BECAUSE there are too many columns of pins.

Sorry for shitty MSPAINT...
[MS Paint sketch: the connector bending sideways, pulling on the wires at the outside of the bend]

So... as you can see, as you bend to the left, the wires on the right get pulled more. If there were fewer wires, there'd be less pull. If they wanted to change the standard, they should've gone with a 6- or 8-pin Mini-Fit type connector and required two of them for higher-wattage power requirements.
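To put rough numbers on that, here's a small sketch of the geometry: bend the connector sideways through some angle and the wires on the outside of the bend have to cover extra length roughly equal to the connector's width times the bend angle, so a wide 6-column body pulls harder on its outer wires than a narrower 3- or 4-column Mini-Fit housing would. The 3.0mm and 4.2mm pin pitches below are my assumptions for illustration, not figures from this post:

```python
import math

# Rough model: for a sideways bend of `angle` degrees, the outermost wire
# must take up roughly (connector width) x (angle in radians) more length
# than the innermost wire, which shows up as extra pull on that side.
# Pitch values are assumptions for illustration only.

def extra_pull_mm(columns, pitch_mm, angle_deg=90):
    width = (columns - 1) * pitch_mm          # distance between outer columns
    return width * math.radians(angle_deg)    # extra arc length on the outside

print("12VHPWR, 6 columns @ 3.0 mm:", round(extra_pull_mm(6, 3.0), 1), "mm")
print("8-pin PCIe, 4 columns @ 4.2 mm:", round(extra_pull_mm(4, 4.2), 1), "mm")
print("6-pin PCIe, 3 columns @ 4.2 mm:", round(extra_pull_mm(3, 4.2), 1), "mm")
```

The fewer columns there are, the less the outer wires have to stretch for the same bend.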

And actual sense pins, like you have in the 24-pin, would've been a great idea. It's funny, I made the same complaint when the 6+2-pin came out and they only terminated the senses to ground. I said, "hey guys... how about implementing an actual +12V sense and ground sense to feed voltages back to the PSU?" Of course... who would want to listen to an actual PSU guy about PSUs. LOL!
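For what it's worth, the remote sensing being suggested works roughly like this: the sense wires carry almost no current, so they report the true voltage at the load end, and the PSU raises its output until the load-side voltage, not the PSU-side voltage, hits the target. A toy sketch with made-up cable resistance and current values:

```python
# Toy illustration of remote (Kelvin) voltage sensing over a power cable.
# R_CABLE and I_LOAD are made-up example values.

R_CABLE = 0.01   # ohms, round-trip resistance of the +12V/ground run (assumed)
I_LOAD = 40.0    # amps drawn by the GPU (assumed)
V_TARGET = 12.0  # volts we want *at the load*

# Without remote sense: the PSU regulates its own output to 12 V,
# so the load sees 12 V minus the I*R drop across the cable.
v_load_no_sense = V_TARGET - I_LOAD * R_CABLE

# With remote sense: the sense wires report the load-side voltage,
# so the PSU raises its output by the measured drop to compensate.
v_psu_with_sense = V_TARGET + I_LOAD * R_CABLE
v_load_with_sense = v_psu_with_sense - I_LOAD * R_CABLE

print(f"No remote sense:   load sees {v_load_no_sense:.2f} V")
print(f"With remote sense: PSU outputs {v_psu_with_sense:.2f} V, load sees {v_load_with_sense:.2f} V")
```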
 
Yes.

The 12VHPWR connector itself.

BUT... Look at the test case and you'd see why Nvidia didn't delay the 40 series launch, yet the PCI-SIG thought there was enough of an issue to send out an email: 30mm bend. 55A. For hours of continuous use.

That's why leaks like these are so bad. Wccftech only leaked the body of the email without any context or any content from the attachment. Wccftech should be nuked off the internet for such irresponsible journalism.
Aren't the PCB-side and cable-side connectors both technically 12VHPWR connectors, just different genders? Or is the cable-side connector actually called something else?

And yeah, I saw the amperages and times required for those melted connectors and pretty much dismissed it there and then - seems like the rest of the internet did differently, though.
 
LOL! Actually, the lack of sideways flexibility is BECAUSE there are too many columns of pins.

Sorry for shitty MSPAINT...
[MS Paint sketch: the connector bending sideways, pulling on the wires at the outside of the bend]
So... as you can see, as you bend to the left, the wires on the right get pulled more. If there were fewer wires, there'd be less pull. If they wanted to change the standard, they should've gone with a 6- or 8-pin Mini-Fit type connector and required two of them for higher-wattage power requirements.

And actual sense pins, like you have in the 24-pin, would've been a great idea. It's funny, I made the same complaint when the 6+2-pin came out and they only terminated the senses to ground. I said, "hey guys... how about implementing an actual +12V sense and ground sense to feed voltages back to the PSU?" Of course... who would want to listen to an actual PSU guy about PSUs. LOL!
Seems like a right-angle PCB adapter, like EVGA's PowerLink thing, would mitigate some of that.
What a terrible design.
 
Aren't the PCB-side and cable-side connectors both technically 12VHPWR connectors, just different genders? Or is the cable-side connector actually called something else?

And yeah, I saw the amperages and times required for those melted connectors and pretty much dismissed it there and then - seems like the rest of the internet did differently, though.
They are. The tendency to bend at a sharper angle is more of a PSU-side thing, since people are trying to route the cable from under a PSU shroud, around a stack of 5.25" bays and then up the back of the mobo tray, whereas on the GPU side you're just trying to avoid the side panel. Not saying it's never going to happen on the GPU side; given the size of the cards and how little clearance there is between the card and the side panel in some cases, we might see some failures there too.
 
Solution: put screws on the plug, two on each side, to secure the connector.
If that doesn't do it, make a right-angle adapter and, again, use screws.
 
Solution: put screws on the plug, two on each side, to secure the connector.
If that doesn't do it, make a right-angle adapter and, again, use screws.

The problem with 90° adapters is that you don't know which direction Joe End-user is going to route their cables. I have a 4000D, and its cable management requires the PCIe cables to go up and down the heatplate of the GPU. I also have a Fractal Meshify, and its cable management requires the PCIe cables to flop down over the GPU fan. And then you have all of the smaller cases out there where the cables need to go either left or right.

An "attachable" collar mechanism or a longer collar on the connector itself would be a better solution.
 
The problem with 90° adapters is that you don't know which direction Joe End-user is going to route their cables. I have a 4000D, and its cable management requires the PCIe cables to go up and down the heatplate of the GPU. I also have a Fractal Meshify, and its cable management requires the PCIe cables to flop down over the GPU fan. And then you have all of the smaller cases out there where the cables need to go either left or right.

An "attachable" collar mechanism or a longer collar on the connector itself would be a better solution.
Just make the connector rotate: double slip rings :D
 
I've got a great idea:

Better efficiency on GPUs
 
I've got a great idea:

Better efficiency on GPUs
That is what impresses me the most about the 6500XT and the 6600M. The size of the chip is crazy small but the performance is crazy for what they draw. I swear my 6500XT doesn't even need the 6 pin that is attached to it.
 
This company wants power... more power, all the power, and nothing for the people; let them die after we've taken their money.
Such nice prices for the EU... 2500€... must be like the Strix... you must pay for the box.
Now seriously... where is this going? Check the PCB size... now check the card's length... now check the card's weight... now get rid of the motherboard, CPU and RAM, because all that matters is selling these monsters to dominate the PC concept... it's time for NVIDIA to rule the world...
 
That is what impresses me the most about the 6500XT and the 6600M. The size of the chip is crazy small but the performance is crazy for what they draw. I swear my 6500XT doesn't even need the 6 pin that is attached to it.
Is your 6500XT tuned lower than stock? 'Cause the 6500XT is the least efficient RDNA2 GPU out there. It consumes nearly as much power as the 6600 while delivering barely more than half the performance.

You're not wrong in principle though. RDNA2 is ridiculously efficient in its best tuned versions. Heck, even my 6900 XT with my moderate underclock (2250MHz, no undervolt) is stupidly efficient. At that, I'm really looking forward to RDNA3 and its promised +50% perf/W. That'll be interesting for sure.
 
Oddly enough, there seems to be only one power supply currently available for purchase that meets the new PCIe 5.0 / ATX 3.0 standard. It's a 1000W MSI MPG A1000G 80+ Gold PSU. And the crazy thing is that it won't ship until the end of October... So those super early adopters of an RTX 4090 will have to use the included 3- or 4-way 8-pin to 12-pin adapter until they get their new power supply in the mail, lol. And this may cause some issues for some...

Annnnd, of course, I'm trying to do exactly this... But I imagine my odds of actually getting a 4090 are pretty darn low. I'll keep selling my old stuff in the meantime just in case I can get lucky.
 
Is your 6500XT tuned lower than stock? 'Cause the 6500XT is the least efficient RDNA2 GPU out there. It consumes nearly as much power as the 6600 while delivering barely more than half the performance.

You're not wrong in principle though. RDNA2 is ridiculously efficient in its best tuned versions. Heck, even my 6900 XT with my moderate underclock (2250MHz, no undervolt) is stupidly efficient. At that, I'm really looking forward to RDNA3 and its promised +50% perf/W. That'll be interesting for sure.
I am not talking about the power draw. I have been using high-end GPUs since Vega 64, and even back with Polaris I would take the card apart to re-apply thermal paste or fit a waterblock. When I took apart my Gigabyte 6500XT because of high temps (it's a Gigabyte card), I was shocked at how small the GPU is; I mean literally about 17% of a 6800XT. Then I overclock it to 2985 MHz with a 2300 MHz memory clock. As for power draw: I first had it with a 5600G and the GPU would draw about the same power, but that was only PCIe 3.0 connectivity, so I put it with a 5900X that I was going to sell, and that draws about 2/5 more power than the GPU. As long as you play at 1080p, there is no game where the 6500XT feels slow or less than satisfactory. When I look at my 6800XT next to it, I really appreciate what AMD has achieved, and whatever AMD's next GPU is will be a for-sure purchase, as I am unabashedly an ardent AMD GPU user since my experience with the GTS 450.
 
Oddly enough, there seems to be only one power supply currently available for purchase that meets the new PCIe 5.0 / ATX 3.0 standard. It's a 1000W MSI MPG A1000G 80+ Gold PSU. And the crazy thing is that it won't ship until the end of October... So those super early adopters of an RTX 4090 will have to use the included 3- or 4-way 8-pin to 12-pin adapter until they get their new power supply in the mail, lol. And this may cause some issues for some...

Annnnd, of course, I'm trying to do exactly this... But I imagine my odds of actually getting a 4090 are pretty darn low. I'll keep selling my old stuff in the meantime just in case I can get lucky.
You need to catch up with a number of threads here, including this one, as it seems you only read the "article" and none of the posts that followed.

You don't need an "ATX 3.0" PSU with "PCIe 5.0" power connectors on the PSU's modular interface. Only Jayz2cents ever said that. You can use the included adapter, but I'm willing to bet that either someone like CableMod or whoever made your PSU has a more elegant solution in a native cable.
 
Class action lawsuit against the house-burning-down connector, or against its originator/promoter.

Something always gives when corners get cut, and quite a few have been.

I am afraid the new Nvidia power connector standard has no useful life as it is. I won't feel bad if proven wrong (kudos to them if so).

Not if you knew of this potential problem beforehand. The law will look at it as you having taken a risk while knowing of possible issue(s) with the cable.

If, on the other hand, you knew nothing about the cable issue(s) and just bought a card off the shelf, then you stand a better chance in a lawsuit. But I'm pretty sure the instructions/installation guide is going to say something about the power cable.

If you follow the instructions/installation guide to the letter, then you stand a chance in court. But you may have to prove, with evidence, that you followed them to the letter, and there may not be any evidence if there is a big fire.
 
So many reports of these catching fire and melting on social media - and not the same one over and over


It seems that if they're bent in any way they become a massive fire risk, and because the GPUs are huge, it's almost impossible for them to NOT bend.

 