
It's happening again, melting 12v high pwr connectors

Status
Not open for further replies.
He probably means that when you use 2-3 rail wiring, it will balance through the PSU - and yes, it will probably work.
Why would it work?

The PSU cannot dictate the power need.

In a multi-rail setup, the voltage of the high-current rail would dip and that would do something, but no sane person would design a PSU with a rail per pin on the back of the PSU.
 
What baffles me the most about this thread is the amount of people choosing to deny an obvious situation in order to side with a multi-billion dollar company that could choose to do better for the end user, has a monopoly in the high-end space and has been continuously charging abusive prices to both the consumers and its AIB partners. You guys really deserve to be named Nvidia Ambassadors, you might even get $10 off in your next Nvidia purchase!
 
You people don't get it. If a properly-handled cable fails after a few mating cycles, the issue is the quality of the components. There's nothing about the design of a metal pin fitting inside another that limits the number of insertions.
Sure.
There's a safety margin. Not a large one -- but again, these margins only exist to protect against manufacturing defects or user error. Without those, there would be no margin required.
So the margins aren't enough. Isn't that the entire point?
 
What baffles me the most about this thread is the amount of people choosing to deny an obvious situation in order to side with a multi-billion dollar company that could choose to do better for the end user, has a monopoly in the high-end space and has been continuously charging abusive prices to both the consumers and its AIB partners. You guys really deserve to be named Nvidia Ambassadors, you might even get $10 off in your next Nvidia purchase!
I think it's a mix of sunk-cost fallacy and team-ball. "If you're not with me, then you're my enemy" to quote Revenge of the Sith.

Anyway, like I said, it's their money to spend and their risk to take. We can only hope that every single case so far has been user error and/or a damaged/inferior product, as has been posited. Because this connector is getting more use with this gen (and probable future gens), GPUs are not coming down in cost (or power draw), and AIBs/suppliers are not getting any friendlier with their RMA/warranty processes.

I personally won't be gambling on any hardware using the connector and I'll be incredibly disappointed if AMD and Intel switch to it too.

EDIT: An addendum I forgot to add. I think you'll find the majority of people arguing against the connector in this thread aren't "Team AMD" or hoping that Nvidia will "take a big hit" over this because again, that's tribalist nonsense. They're either concerned about a potentially faulty standard becoming more widespread OR they're like me in that we don't like the idea of our fellow geeks spending thousands of dollars on a part that could catastrophically fail on you.

Supporting GPU companies like a sports team is childish nonsense. They're not your friend, they won't appreciate it, they won't do you any favors and they'll take any opportunity they can to wring you out of your hard-earned cash and time.
 
Most PSUs have 1 rail nowadays, there is also a niche which has multiple rail so it can deliver X watts across each rail. He probably means that when you use 2-3 rail wiring, it will balance through the PSU - and yes, it will probably work. The point is that if it can't deliver current through any of the rails due to a bad contact, and can't deliver it through the others due to rail restriction, it will probably crash the whole machine and it will be hard to figure out why.
But here comes the question: why do we need to change our PSUs every generation just because Nvidia changes something about power usage every generation? This is insane.
PSUs with multi rail modes generally don’t do load balancing. How would they know that certain loads need to be balanced?

Multi-rail modes allow each rail's over-current protection (OCP) limit to be set low enough to matter. I know because multi-rail mode saved one of my builds: the GPU was overdrawing its 6-pin and 8-pin PCI Express Graphics connectors badly enough that the power cord feeding both literally warmed up from the excess amps during a heavy GPGPU workload, and the PSU shut down once the OCP limit was exceeded. I split the power connectors to one per rail, and the cords no longer heated up when the GPU was heavily loaded. That saved my rig and removed the risk of an apartment fire from an overloaded power cable. The GPU was a reference Radeon HD 6970, which violated the PCI Express Graphics wattage limits by overdrawing the connectors under real-world maximum loads. Sadly, many other GPUs that I have worked with violate those limits as well. Had the GPU met the specification, one power cable could have fed both connectors without heating up or tripping the rail's OCP limit.

Single rail modes share a common high-amperage over-current protection limit that is too high to provide any meaningful protection, so I consider single-rail-only PSUs to be recall-worthy unsafe trash. My old rig could have caught fire if my PSU provided only a single rail.
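For reference, the connector limits the post above refers to work out to fairly small currents at 12 V. A minimal arithmetic sketch (the 180 W draw is a hypothetical example, not the HD 6970's measured figure, and treating all slot power as 12 V is a simplification):

```python
# Back-of-the-envelope numbers for the PCI Express Graphics (PEG) power
# limits discussed above. Wattages are the spec's connector-level limits;
# currents assume everything is drawn from the 12 V supply (a simplification
# for the slot, which also provides 3.3 V power).

SPEC_LIMITS_W = {"PCIe slot": 75, "6-pin PEG": 75, "8-pin PEG": 150}
V = 12.0

for name, watts in SPEC_LIMITS_W.items():
    print(f"{name}: {watts} W -> {watts / V:.2f} A at 12 V")

# A hypothetical card pulling 180 W through its 8-pin connector alone is 20%
# over that connector's limit -- enough to warm the cable, as described above.
overdraw = 180 / SPEC_LIMITS_W["8-pin PEG"] - 1
print(f"8-pin overdraw in that example: {overdraw:.0%}")
```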
 
I think it's a mix of sunk-cost fallacy and team-ball. "If you're not with me, then you're my enemy" to quote Revenge of the Sith.

(...)

EDIT: An addendum I forgot to add. I think you'll find the majority of people arguing against the connector in this thread aren't "Team AMD" or hoping that Nvidia will "take a big hit" over this because again, that's tribalist nonsense. They're either concerned about a potentially faulty standard becoming more widespread OR they're like me in that we don't like the idea of our fellow geeks spending thousands of dollars on a part that could catastrophically fail on you.

Supporting GPU companies like a sports team is childish nonsense. They're not your friend, they won't appreciate it, they won't do you any favors and they'll take any opportunity they can to wring you out of your hard-earned cash and time.

Couldn't have said it better.

I have a 4090. That doesn't make me feel like I'm "Team Green". A manufacturer is not my sports team, they're not my friend. I paid them big money and I expect them to provide me with a high quality standard in return. It's really that simple. Tribalistic fanboyism is a dumb mindset.
 
Single rail modes share a common high-amperage over-current protection limit that is too high to provide any meaningful protection, so I consider single-rail-only PSUs to be recall-worthy unsafe trash. My old rig could have caught fire if my PSU provided only a single rail.
It does not necessarily work like that though. Many single rail PSUs have per connector OCP. All it needs is a shunt per connector and circuitry to read the voltage drop.

And a multi-rail PSU does not necessarily do any connector-level OCP either. It's a completely different thing.
 
Interesting new post on Reddit (not a 5090 topic, but a 4090 OC experience) from the winner of last year’s OC World Cup:

I think we can safely rule out user error with this guy.

My Takeaway:
These 12VHPWR cables and connectors seem to age pretty fast. He mentioned doing the same thing with 1000W in 2023 with no issues, but now with 900W… well, not so much.

[Screenshot of the Reddit post]


Also, putting some (cold) air on the 12VHPWR connector when overclocking your GPU isn’t a bad idea. It reminded me of that GN video (the livestream breaking the 3DMark record) with the ASUS Astral. Steve and Joe mentioned at some point that cooling the connector helped, and for that, the fourth fan on the back of the Astral was great. Combined with the WireView Pro, its airflow reached the connector area.
So, the loud fourth fan on the Astral actually serves a good purpose :D


We probably should start calculating 2 different failure rates for the 5090:
1️⃣ Mainstream and novice users – buys the GPU, connects it with a new cable, never touches the thing again - probably low failure rates
2️⃣ Enthusiasts – constantly swaps components, using gadgets like Wireview, changing cables, changing PSUs, changing rigs etc. - unknown failure rate (probably a lot higher than no.1 - we don't know yet)

I’d bet those two failure rates will be very different with the 5090 :)

It was clear with the 4090 and is even clearer now: We need something better than the 12VHPWR standard.
 
It does not necessarily work like that though. Many single rail PSUs have per connector OCP. All it needs is a shunt per connector and circuitry to read the voltage drop.

And multi rail PSU does not necessarily do any connector level OCP either. It’s a different thing completely.
Having per connector OCP is multi virtual rail mode. Thanks for the correction. I forgot the word “virtual” in my previous post. Multi virtual rail mode is what saved my previous rig and my apartment from burning. Single rail mode shuts down the connector level OCP and only uses the OCP for the entire physical rail whose limit is set way too high to matter. Multi virtual rail mode combines the best of single rail mode and multiple physical rails by having one big efficient physical rail whose output is split and regulated with effective low limit OCP on each virtual rail.
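A toy sketch of the distinction described above (the limits and load numbers are made up for illustration, not taken from any real PSU):

```python
# Single-rail mode: one high OCP limit covering the whole 12 V output.
# Multi-virtual-rail mode: a low OCP limit checked per group of connectors.
# All limits below are illustrative assumptions.

SINGLE_RAIL_OCP_A = 80.0     # one big limit for the entire 12 V output
VIRTUAL_RAIL_OCP_A = 25.0    # per-virtual-rail limit

def trips_single_rail(loads_a):
    return sum(loads_a) > SINGLE_RAIL_OCP_A

def trips_virtual_rails(loads_a):
    return any(load > VIRTUAL_RAIL_OCP_A for load in loads_a)

# One cable overdrawn to 35 A (enough to heat the cord) while total system
# draw stays modest:
loads = [35.0, 10.0, 8.0]
print(trips_single_rail(loads))    # False: 53 A total is under the big limit
print(trips_virtual_rails(loads))  # True: 35 A exceeds the per-rail limit
```

This is why the single-rail limit "is set way too high to matter": it only sees the total, never the one overloaded cable.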
 
Having per connector OCP is multi virtual rail mode. Thanks for the correction. I forgot the word “virtual” in my previous post. Multi virtual rail mode is what saved my previous rig and my apartment from burning. Single rail mode shuts down the connector level OCP and only uses the OCP for the entire physical rail whose limit is set way too high to matter. Multi virtual rail mode combines the best of single rail mode and multiple physical rails by having one big efficient physical rail whose output is split and regulated with effective low limit OCP on each virtual rail.
Ah, it seems that rails do not nominate separate voltage rails anymore, but just groups of leads behind an OCP circuit. My bad. ”Virtual” prefix makes sense, thanks for the clarification!

Still, a "multi-rail" PSU can have multiple connectors behind a single OCP circuit -- for example, two rails rated at 20 A each, with a mixture of connectors on the back of the PSU assigned to each.

Per-connector OCP makes sense, and hopefully gets implemented in more PSUs in the future. In the case of this pos gpu, however, it would do nothing to help, as per-pin OCP would be needed to shut down the system if a pin is pulling more than the specified 9A or whatever. Fun times for users if one were made, as most cables would fail that over time and PCs would just randomly crash every now and then.
 
Well, someone is confused for sure.

Name one computer PSU ever which could adjust for per lead current draw, or even per connector power draw. Bonus points if you can describe how it did that.
Always happy to assist in the education of others. Old-style PSUs with multiple physical 12V rails are just that -- separate power supplies feeding different connectors. In this case, two equal-demand loads (equivalent resistance) can draw different currents. Newer multi-rail PSUs don't do it with physical components but rather with sensors. Most simply shut down if a single rail gets overloaded, but it's certainly possible for them to instead drop the voltage on an overloaded rail, which would again cause a current differential. I can provide a diagram if you like.

I can name thousands of GPU’s which did current balancing.
You're still confused. GPUs don't do current balancing. An AIB may ... but if you think that's relevant to power leads connected to a common backplane, you're confused as to what current balancing even is.

They make as much money off of this as off of anything else.
You seriously cannot believe this. These influencers make money off clicks. People click on outrageous, controversial stories then -- like all of you here -- post links to those videos in countless forums to get others to click too.

He probably means that when you use 2-3 rail wiring, it will balance through the PSU - and yes, it will probably work...But here comes the question - why do we need to change our PSUs for each generation because Nvidia changes something in power usage for each generation
You don't need a new PSU. You simply need a power cable that isn't faulty.
 
Most simply shut down if a single rail gets overloaded, but its certainly possible for them to instead drop the voltage on an overloaded rail, which would again cause a current differential. I can provide a diagram if you like.
Please do. PSU make and model to go with the diagram as well, please.
You're still confused. GPUs don't do current balancing.
MOST do. Otherwise there would be huge problems with any card that has both 8-pin and 6-pin connectors, and especially those that use the 75W available from the PCIe slot connector.
4090 and 5090 are the only ones in living memory that do not do any load balancing.
An AIB may
No. A proper load balancing needs to be a firmware level feature. My understanding is that an AIB cannot re-write the portion of the nvidia firmware to enable proper load balancing on modern cards.

You simply need a power cable that isn't faulty.
A non-faulty power cable can easily provide enough resistance differential to load a single pin with 20A of current. Read the damn spec.
 
My Takeaway:
These 12VHPWR cables and connectors seem to age pretty fast. He mentioned doing the same thing with 1000W in 2023 with no issues, but now with 900W… well, not so much.

In my opinion, it's probably not so much a problem of poor aging as one of inconsistency. This is what makes it seemingly hard to replicate. The first ever recorded case of a melting 4090 dates back to October 24th, 2022, 12 days after the release of the card. Maybe they're aging fast from constant operation near failure, but that's not the main reason.

I think the root cause of the melting is those uneven, atrocious quantities of amps we've seen flowing through individual lanes, e.g. in der8auer's readings. That's why you usually see just 1-3 burnt pins. Of course, if you pulled 1000W through a 12VHPWR connector for long enough, you'd eventually have it fail even with perfect distribution, since that's way out of spec (~13.9 amps per lane) and the safety margins are already too tight by default. The problem is that the slightest defect at the connection site, or wear on the cable heads, creates severe load imbalances, and you can barely check whether they're happening or know when it will fail.
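The imbalance mechanism described here (one low-resistance path hogging current) can be illustrated with a toy current-divider model. All resistance values below are assumptions for illustration, not measurements:

```python
# Toy model of a 12VHPWR cable: six 12 V paths in parallel between the PSU
# and the GPU's common backplane. Every path sees the same voltage drop, so
# I_i = V / R_i, and a modest resistance spread yields a large current spread.

def path_currents(total_a, resistances_ohm):
    conductances = [1.0 / r for r in resistances_ohm]
    v_drop = total_a / sum(conductances)   # common drop across all paths
    return [v_drop * g for g in conductances]

total_a = 50.0                             # ~600 W at 12 V
healthy = [0.025] * 6                      # 25 mOhm per path (wire + contacts)
worn = [0.025] + [0.070] * 5               # one good pin, five degraded ones

print([f"{i:.1f} A" for i in path_currents(total_a, healthy)])
print([f"{i:.1f} A" for i in path_currents(total_a, worn)])
# In the worn case the single good path carries ~18 A against a per-pin
# rating of about 9.5 A, even though no individual part looks "faulty".
```

The point the model makes: the card sees the same 50 A either way, so only per-pin sensing or balancing can tell the two cases apart.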

You seriously cannot believe this. These influencers make money off clicks. People click on outrageous, controversial stories then -- like all of you here -- post links to those videos in countless forums to get others to click too.

So it's all a conspiracy staged to stain the immaculate reputation of good old Jensen. I'll bring my tin foil hat.
There's a difference between taking advantage of a controversy for clicks and creating a whole narrative out of thin air. These problems are well diagnosed and documented.
 
A non-faulty power cable can easily provide enough resistance differential to load a single pin with 20A of current. Read the damn spec.
Learn the damn basics of electrical engineering. The only way that can happen is if that single pin has much less resistance than all the rest, which means the pin will generate much LESS heat than the others. Do you not even understand what (I^2)r even means?

Please do. PSU make and model to go with the diagram as well, please.
As you've acknowledged the other posters who pointed out you're wrong here, I can assume this is intended to be snark.

MOST do. Otherwise there would be huge problems with any that have both 8pin and 6pin connectors,
Again -- learn at least Kirchhoff's and Ohm's laws if you want to discuss this. No GPU in the world does current balancing. An AIB can -- BUT for power leads terminating in a common backplane, there is nothing in the world the board itself can do to influence individual lead current imbalance. It's strictly a matter of the lead resistance itself.
 
you people don't get it. any reason that allows this failure mode to exist is unacceptable

plugging in a cable more than a handful of times is not abuse not misuse not anything other than piss poor design

the card not current balancing is also you guessed it piss poor design

designing anything with no margin for error/safety is once again piss poor design
^this is completely correct.

You people don't get it. If a properly-handled cable fails after a few mating cycles, the issue is the quality of the components. There's nothing about the design of a metal pin fitting inside another that limits the number of insertions.


There's a safety margin. Not a large one -- but again, these margins only exist to protect against manufacturing defects or user error. Without those, there would be no margin required.
It's already been said, but it's worth repeating. ^this statement is wrong in every way. Here's an illustration showing corrosion build-up on plated material after the plating is worn through (which happens after several insertions):
[Illustration: corrosion build-up on a plated contact after the plating wears through]

This is just one example, but that corrosion build-up can be hard to see, especially inside a connector shell, and the result is increased impedance. That increased impedance causes a higher voltage drop, which means the current will take the path of least resistance; if one pin is worse than the others, this directly results in a large current imbalance across the wires and pins. That removal of material on every insertion cycle is called fretting:
[Diagram: fretting wear on contact surfaces]

What baffles me the most about this thread is the amount of people choosing to deny an obvious situation in order to side with a multi-billion dollar company that could choose to do better for the end user, has a monopoly in the high-end space and has been continuously charging abusive prices to both the consumers and its AIB partners. You guys really deserve to be named Nvidia Ambassadors, you might even get $10 off in your next Nvidiapurchase!
Honestly, I'm not sure it's even about that. Maybe some, but what I see a lot more of on just about every topic lately is selfishness and an unwillingness to consider other people or other situations. Basically a "well, it didn't happen to me" mentality. People get combative and argumentative because something didn't happen to them. If they think that just plugging in the cable correctly will definitely result in no issues, they'll die on that hill... until it happens to them. Then they'll switch sides because they've seen it first hand, and everyone who was on their side before will say "oh, well, you did it wrong or you had a faulty cable."

It's easy to say "every failure is a faulty cable or misuse" when the design is so poor that it's absurdly easy to end up with a "faulty" cable even though the cables meet the specification (which has essentially no margin). Maybe there's tribalism too, but I think you'll find just as many people arguing that the cable is fine who don't even own one, because it's not an issue that even affects them, so they can't put themselves in the shoes of someone it might affect. I mean, we're all arguing about cables that also affect the 40-series, but mostly the 4090, 5080, and 5090, and they've probably sold fewer than 100 5090s worldwide at this point lol (and a huge number of those went to reviewers).
 
BUT for power leads terminating in a common backplane, there is nothing in the world the board itself can do to influence individual lead current imbalance.
That's the point. Stop dumping them all into one backplane without a shunt sense resistor in line first. It's really, really easy: one resistor per pin before anything else. An extremely cheap solution that doesn't need much else. I don't think it even needs special firmware, because you can do it all in hardware and simply interrupt the existing power-good signal if an imbalance reaches dangerous levels. You could do this with shunt resistors and op amps for $1. Balancing would be nice, but a bit harder and a bit more expensive. They have done balancing before, but it's all about profitability, right? Otherwise they wouldn't have stopped doing it.

If not balancing the load, at least they could incorporate incredibly cheap and easy protection instead of just making it someone else's problem. Don't say it is impossible.

Or use a different connector.

Or use a much higher margin of safety with the knowledge that the existing safety margin sucks and the existing engineering controls aren't working.
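The shunt-and-op-amp idea above could be sketched roughly like this (part values, thresholds, and the `power_good` logic are all hypothetical, not any shipping design):

```python
# Hypothetical per-pin shunt sensing: a small sense resistor in each 12 V
# path, read the voltage drop across it, and interrupt the power-good signal
# if any pin is overloaded or the spread across pins gets dangerous.
# All values below are illustrative assumptions.

SHUNT_OHM = 0.001            # 1 mOhm sense resistor per pin
PER_PIN_LIMIT_A = 9.5        # roughly the connector's per-pin rating
MAX_SPREAD_A = 5.0           # allowed current difference between pins

def power_good(shunt_drops_v):
    currents = [v / SHUNT_OHM for v in shunt_drops_v]
    if max(currents) > PER_PIN_LIMIT_A:
        return False                       # one pin hogging current
    if max(currents) - min(currents) > MAX_SPREAD_A:
        return False                       # dangerous imbalance
    return True

print(power_good([0.008] * 6))             # ~8 A per pin, balanced -> True
print(power_good([0.018] + [0.006] * 5))   # one pin at 18 A -> False
```

In hardware this would be comparators rather than Python, but the decision logic is the same: sense per pin, trip on the worst pin, not on the total.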
 
Learn the damn basics of electrical engineering. The only way that can happen is if that single pin has much less resistance than all the rest, which means the pin will generate much LESS heat than the others. Do you not even understand what (I^2)r even means?
Yet it will heat up much more due to the cable resistance, etc. der8auer showed pretty unsafe temperatures coming from the cables alone. (I^2)r, baby.
As you've acknowledged the other posters who pointed out you're wrong here, I can assume this is intended to be snark.
So you do not have any clue as to how that would happen. You could have said so in the very beginning.
Again -- learn at least Kirchhoff's and Ohm's laws if you want to discuss this. No GPU in the world does current balancing.
And yet they do. The whole point of XOC shunt mods was to prevent the current balancing and OCP circuitry from doing its job. Look it up. Fun times before the power delivery control was integrated into the GPU firmware blob.
BUT for power leads terminating in a common backplane, there is nothing in the world the board itself can do to influence individual lead current imbalance. It's strictly a matter of the lead resistance itself.
Yes. Bad design choice from Nvidia. My point exactly. They should drive a subset of the power delivery components from each pin and balance the connector, like on the 3090 Ti (or almost any GPU made prior to the 4090). Look it up.
 
^this statement is wrong in every way. Here's an illustration showing corrosion build-up on plated material after the plating is worn through (which happens after several insertions):
You're using an illustration of corrosion affecting gold-plating wear-through, to attempt to claim why the non-plated pins are a bad idea? Where does the insanity end?

Since you obviously didn't read my post, let me repeat it: "nothing about the design of a metal pin fitting inside another that limits the number of insertions." I have a 100-amp, 0-48 V variable bench supply that feeds power through metal pins fitting inside a socket. There is no gold plating or "load balancing across pins", and yet it's survived a good 10,000+ insertion cycles over the last 30 years. If a plug fails after just a few insertions, then it was made cheaply. Period.
Yet it will heat up much more due to the cable resistance etc. Derbauer showed pretty unsafe temperatures coming from the cables alone. (I^2)r baby.
This is seriously like explaining tensor calculus to a five-year-old. The "unsafe temperatures" from the cable caused no pin melting. When connected across a common backplane, the only way one wire carries more current than its neighbors is if its pin connections are lower resistance. Which means those pins generate much less heat, and any extra heat is dissipated along the entire length of the cable.

The pin area melting we've seen is from the pins having too MUCH resistance, meaning the cable itself will carry less.

And yet they do. The whole thing with XOC shunt mods was to prevent the current balancing and OCP circuitry from doing it’s [sic] job.
Good grief, give it up. That's a shunt mod to the BOARD, not the GPU. Do you even know the difference?
 
Learn the damn basics of electrical engineering. The only way that can happen is if that single pin has much less resistance than all the rest, which means the pin will generate much LESS heat than the others. Do you not even understand what (I^2)r even means?
You're almost there. Keep going with the logic: pin/socket pair A has less resistance than pin/socket pair B, right? So the higher-resistance pair carries less current, which means the lowest-resistance pair carries the most. That creates the imbalance: the higher-resistance pairs are doing jack-all, and the best pairs you have left are doing all the work. So even though they have the lowest resistance of the bunch, they're carrying nearly the whole load, hence heating up and melting. You seem to have gotten turned around somewhere, maybe assuming that the pairs that are better than the others have somehow been improved and won't heat up even though they're now carrying too much current. They haven't. Being better than the other contacts doesn't mean they can handle more than they're rated for; oddly enough, it's the best pairs that end up melting, because they're the last ones still carrying the load.
 
Having per connector OCP is multi virtual rail mode. Thanks for the correction. I forgot the word “virtual” in my previous post. Multi virtual rail mode is what saved my previous rig and my apartment from burning. Single rail mode shuts down the connector level OCP and only uses the OCP for the entire physical rail whose limit is set way too high to matter. Multi virtual rail mode combines the best of single rail mode and multiple physical rails by having one big efficient physical rail whose output is split and regulated with effective low limit OCP on each virtual rail.
OK, so is that a thing, or did you just make it up for the New Year's wish list? :wtf:
 
You're using an illustration of corrosion affecting gold-plating wear-through, to attempt to claim why the non-plated pins are a bad idea? Where does the insanity end?

Since you obviously didn't read my post, let me repeat it: "nothing about the design of a metal pin fitting inside another that limits the number of insertions." I have a 100 amp 0-48v variable bench supply that feeds power through metal pins fitting inside a socket. There is no gold plating or "load balancing across pins", and yet it's survived a good 10,000+ insertion cycles over the last 30 years. If a plug fails after just a few insertions, then it was made cheaply. Period.
Corrosion to plated pins after fretting was one example of how pins can go bad as a result of insertions, which you said cannot happen. So yes, it is a valid example and no, I did not claim that pins should be plated or non-plated.

Also, yes, this is a good point (the bolded part). These are cheaply made cables unfit for purpose. This is the whole discussion point that you seem to be missing by making a really bad assumption. Your argument above the bolded part suggests that because you have a good connector, all pin/socket connectors are good. This is ridiculous. As I've suggested previously in this thread (no idea how many pages ago at this point), the connector we're talking about would be totally fine if it was used for a lower powered application where there's more margin. The current usage is too close to the rated specification and there aren't good enough safeguards. They are cheaply made in bulk and too prone to issues. That's it. That's the whole situation.
 
Corrosion to plated pins after fretting was one example of how pins can go bad as a result of insertions, which you said cannot happen.
I said no such thing. Read my post again.

...you seem to be missing by making a really bad assumption. Your argument above the bolded part suggests that because you have a good connector, all pin/socket connectors are good.
Again: I said no such thing. Is English not your native language? The connector, the cable pins, the cable itself -- all obviously need to be of good quality. And the quality of those cable components has absolutely nothing to do with Nvidia.

Keep going with the logic here...so Pin/Socket pair A has less resistance than Pin/Socket pair B, right? So the higher resistance pair will have less current, right? That means the pair that still has the lowest resistance will have the most current. That creates imbalance...which means that the lowest resistance pairs will have the most current...so the higher resistance pairs are doing jack-all and the best pairs you have left are doing all the work...so even if they have the lowest resistance of the bunch, they're carrying all that load, hence heating up and causing melting to happen. You seem to have gotten turned around somewhere maybe assuming that the pairs that are better than others somehow have been improved and won't heat up even though they're now carrying too much current. This is incorrect. Just because they're now better than the other contacts doesn't mean they've been improved or can do more than they're supposed to...it just means that oddly enough the best pairs end up melting because they're the last ones still carrying all the load.
Your word salad doesn't change the basic math here. You only get 20A on one wire by assuming its pin resistance is zero and the other pins at most 10 mOhm. That means the heat generated on that pin is ALSO zero, whereas the wires carrying only 6A are developing a third of a watt in a very tiny area.
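A quick numeric check of both sides of this exchange (contact resistances are assumed for illustration): heat at a pin contact is I^2 times the contact resistance, so the zero-heat conclusion holds only in the idealized zero-resistance limit. A real "good" pin with small-but-nonzero resistance carrying 20 A can still out-heat its 6 A neighbors.

```python
# Contact heating for a few assumed pin scenarios. The "zero resistance"
# pin from the argument above produces zero contact heat by construction,
# but a realistic low-resistance pin at high current does not.

def contact_heat_w(current_a, r_contact_ohm):
    return current_a ** 2 * r_contact_ohm    # P = I^2 * R at the contact

print(f"degraded pin, 6 A at 10 mOhm:  {contact_heat_w(6, 0.010):.2f} W")
print(f"idealized pin, 20 A at 0 Ohm:  {contact_heat_w(20, 0.000):.2f} W")
print(f"realistic pin, 20 A at 2 mOhm: {contact_heat_w(20, 0.002):.2f} W")
```

With these numbers, the 20 A pin at 2 mOhm dissipates more at the contact (0.80 W) than the 6 A pin at 10 mOhm (0.36 W), so which pin runs hotter depends entirely on the actual resistance values, not on the direction of the imbalance alone.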
 
The pin area melting we've seen is from the pins having too MUCH resistance, meaning the cable itself will carry less.
But then they will heat MUCH less, due to (I^2)r. You see, the I is to the power of two. Do the math and stop complaining.
Good grief, give it up. That's a shunt mod to the BOARD, not the GPU. Do you even know the difference?
GPU is the card. Do you mean the ASIC when you write GPU?

I thought it was PRETTY FUCKING OBVIOUS, considering what I have been writing here, that "GPU" refers to the damn card.

OMFG.

And as you well know, the GPU firmware decides how the power delivery components are utilized, so it ACTUALLY does the load balancing. It just requires a BOARD design that is not pure garbage.

Nvidia is at fault with their shitty BOARDS.
Not surf boards, if that is your next hole to dig to.
 
There's a safety margin. Not a large one -- but again, these margins only exist to protect against manufacturing defects or user error. Without those, there would be no margin required.

I'm sorry to go into disrespect territory, but this is the most outrageously dumb comment I've read so far. Amounts of copium are over 9000.

Guess what: user error and, especially, manufacturing defects do happen. If Corsair makes a shitty cable, that's not the user's fault, and their safety must be a primary concern. Downplaying the role of safety margins and stating they're barely even required should be enough to revoke an engineer's license. Nvidia can't just wash their hands of this. ANY electrical application has a technical and ETHICAL obligation to include a solid safety margin, LET ALONE a $2000+ premium product from the richest company in the world that's grossly overpriced because they hold a monopoly. It's not like they can't include more copper and plastic because the margin is too tight. What the fuck are you guys defending?

Honestly, I'm not sure it's even about that. Maybe some, but what I see a lot more of on just about every topic lately is a selfishness and an unwillingness to consider other people or other situations. (...)

This. :clap:
 
I said no such thing. Read my post again.
There's nothing about the design of a metal pin fitting inside another that limits the number of insertions.
OK, I read it again. That's what you said. Insertions cause fretting; fretting makes connections go bad. I'm not going to continue this argument, as you're talking nonsense.
Again: I said no such thing. Is English not your native language? The connector, the cable pins, the cable itself -- all obviously need to be of good quality. And the quality of those cable components has absolutely nothing to do with NVidia.
I know you've made it your goal to argue with everyone in this thread about everything just for the sake of arguing, but when did I blame Nvidia? I would say it was a stupid decision for them to use this connector for these cards (unless they used two), but it was a group effort from several groups to design this connector, and I'd blame them to some degree as well. Ultimately, though, the use of this design for this purpose is the biggest issue here, so I guess that falls on Nvidia more than on anyone else at this point. So I don't blame Nvidia for the connector or cable issues that are design-related; I would just say it was a bad decision to use it for this application. The level of cheap manufacturing of this spec is not an unknown quantity, and assuming every single pin/socket/molded connector will be perfect would be a pretty dumb thing to do.

You only get 20A on one wire by assuming its pin resistance is zero, and the other pins at max 10mOhm. That means the heat generated on that pin is ALSO zero, whereas the wires carrying only 6A are developing a third of a watt in a very tiny area.
This is wrong again. That's not the only way, and the resistance is never "zero". Again, just because some resistance values go up on some of the connections, it doesn't mean the others go down. I'm done arguing with you. You're being intentionally obstinate, even though this whole argument boils down to a connector spec that's unfit for the purpose it's used for, which always leads to an unacceptably high failure rate -- which is what we're seeing. There's no fact-based argument against that.
 