
PSA: Don't Just Arm-wrestle with 16-pin 12VHPWR for Cable-Management, It Will Burn Up

If Nvidia had just set this card's TDP to 350W instead of 450W, it would have had 97% of the 450W level of performance. That would have allowed a smaller cooler and therefore more room for the power connection, avoiding all this mess. Not sure if they did this out of concern over RDNA3, but I feel they really should have picked a better spot on the card's power efficiency curve. If someone wants more performance, let them water-cool and overclock.
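To put those numbers in perspective, here's a quick back-of-the-envelope sketch, taking the 97%-at-350W figure above at face value (the exact scaling will vary by workload and sample):

```python
# Rough perf/W comparison using the figures quoted above (illustrative only).
tdp_stock, tdp_capped = 450.0, 350.0      # watts
perf_stock, perf_capped = 1.00, 0.97      # relative performance

eff_stock = perf_stock / tdp_stock        # perf per watt at 450 W
eff_capped = perf_capped / tdp_capped     # perf per watt at 350 W

print(f"performance lost: {(1 - perf_capped) * 100:.0f}%")             # ~3%
print(f"power saved:      {(1 - tdp_capped / tdp_stock) * 100:.0f}%")  # ~22%
print(f"perf/W gained:    {(eff_capped / eff_stock - 1) * 100:.0f}%")  # ~25%
```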
With the size of these cards there would have been no issue using four 8-pin connectors. I guess Nvidia didn't want that image in people's heads, so at a time when we're just trying to get the supply chain back in order, they create a brand new standard that intuitively sounds dangerous. By the way, I have never heard of a design warning being shouted about a baseline product like a PSU cable, but with Nvidia nothing surprises me.
 
This article's tone is pretty condescending. It doesn't take "arm wrestling" to make the connector burn up, it's just poorly designed.

How exactly are users supposed to prevent bending within 35mm of the terminals? Most people won't have problems - but virtually nobody would have issues if they just coughed up the extra PCB space and went for 8-pins.
 
Hi,
The cooler sticking that far past the card is just dumb imho
 
99 problems……but a cable isn’t one…..
 
Man, comments like this really bring nothing to the table. I cannot stand it when people do it. This is totally off topic, but I just want to throw some things out really quickly before my meeting.

In MY experience in consumer land, if you upgrade even every 2 years and keep up with your build, you spend pretty much the same amount of money as someone who blows it all at once.

I bought 2x 4090s and 2x Z690s, including all the other parts, coolers, fans, cases, and RAM for 2x platform upgrades. All at once. I probably just dropped a quarter of the yearly salary of some people here on this forum.

Because I saved. Since 2017. The week after I finished our x299 builds. For the next platform jump. 5 years.

I don't think that should make me out to be, or include me in, the demographic of people considered hardware snobs just because I can drop 3x your mortgage on PC parts in one night and still eat dinner. Your logic is flawed.

Also, I LOVE Porsches.

And they don't need to be $180k cars. You can choose to spend that much if you want, though.


For the record, if it helps: I know a few others that do it like me. At the very least it's a waste of your time (not sure you know how much that's worth yet), because what people like this think of how I spend my money doesn't affect how I sleep at night.

OT:

I agree comments like that are unnecessary.

Why, then, would you make a post justifying how you spend your money? You're making a post about the same topic, just arguing the other side, adding nothing to the topic in equal fashion. Especially odd when "how someone spends their money" doesn't affect how you sleep at night.
 
You should never pass up the opportunity to enlighten. Within reason of course. There is always the chance it expands his thinking.
 
What they basically said here is "we are making connectors as cheap as possible, so quality is very low and the plastic bends and tears easily; please don't touch it".
 
You know the connector and the decision were not Nvidia's, but I do have to wonder what their engineering team was thinking. Certainly they were given the same kind of information (the +35mm rule). I mean, someone SOMEWHERE had to have put this in a case and been like "Man, we should, idk, put this at the back of the card, right??"
 
I feel as though I'm banging my head into a brick wall.

It doesn't matter whether Molex is a PITA. It matters that people are using this to bash Nvidia as though it's their fault. AMD uses the same mini-Molex 6- and 8-pin connectors (from the PSU), which all follow certain standards - namely the 30-cycle mating rating. The 30-cycle thing is not the issue.

The issue is the shitty bend mechanics and pin contact failure.
The 30 cycles are another symptom of a business that is constantly eating away at what could be called 'headroom' at large.

Anyone with half a bit of sense would raise this number because, simply enough, GPUs definitely do get close to that limit or go over it. And these GPUs aren't dealing in low wattages like, say, HDDs or a bunch of fans. Here we are dealing with loads fat enough to make a PSU sweat. Also, and much more importantly, these cables aren't getting cheaper, and they're certainly not dirt cheap like Molex. We're talking about top-of-the-line components here.

Similar things occur with, for example, SATA flat cables. They're too weak, so they break within their normal lifespan if you do more than one or two reinstalls with them; and let's face it, with SSDs this likelihood has only increased, as the devices are much more easily swapped around or taken portable, boards now offer hot-swap SATA sockets, etc.

And examples like it are rapidly gaining new friends: Intel's IHS bending, thermal pads needed on GPUs to avoid failure, etc.

The issue is shitty bend mechanics, yes, but at the core of all these issues is one simple thing: cost reduction at the expense of OUR safety and the durability of our devices. Be wary of what you cheer for - saying 30 cycles is fine because it's the same as Molex is not appreciating how PC gaming has evolved. Specs should evolve along with it.
 
Hi,
Should have made the plastic tab longer if that was the minimum bend point
Hell, just extend it down so it acts like a leg to hold the big bitch up :laugh:
 
You should never pass up the opportunity to enlighten. Within reason of course. There is always the chance it expands his thinking.
That also goes both ways. I have to agree that it is pretty odd seeing the comments people make here about products, and a lot of that is happening in the 'top end' segment - but then, that's my view. It all depends on your perspective: some want the latest and greatest no matter what, and there is no common sense involved. The fact that you are different does not make it a rule; and yes, I think the lack of sense in some minds is also an opportunity to enlighten.

We all have our lenses to view the world through, (un?)fortunately. Nobody's right. Or wrong. But social feedback is generally how norms and normality are formed.

Still, though, I don't think it's entirely honest to your budgeting or yourself to say buying the top end is price conscious. It really isn't: all costs increase because you've set the bar that high, the $/fps is worst at the top, and that problem only gets bigger if you compare gen-to-gen for similar performance. It's perfectly possible to last a similar number of years with something one or two notches lower in the stack and barely notice the difference. Especially today, where the added cost of cooling and other requirements can amount to many hundreds of extra dollars.

That said, buying 'high end' is definitely more price conscious than cheaping out and then getting forced into an upgrade because you really can't run stuff properly in two to three years' time. But there is nuance here; an x90 was never a good idea - only when they go on sale, like AMD's 69xx cards do now. It's the same as buying at launch; tech depreciates too fast to make it worthwhile. You mentioned cars yourself. A similar loss of value applies - the moment you drive off...

You know the connector and the decision were not Nvidia's, but I do have to wonder what their engineering team was thinking. Certainly they were given the same kind of information (the +35mm rule). I mean, someone SOMEWHERE had to have put this in a case and been like "Man, we should, idk, put this at the back of the card, right??"
Yeah, or you could decide to design and offer your very own Nvidia-branded cable doing the same thing but with somewhat greater tolerances. One could say their margins and their leadership position kind of make that an expectation, even. Nvidia is always first foot in the door when it comes to pushing tech ahead... They still sell G-Sync modules even though the peasant spec is commonplace now, for example.

Everything just stinks of cutting corners, and in this segment, IMHO, that's instant disqualification.

Also, back of the card? Where exactly? There's no PCB on the better half of it, right?
 
Oh? We have numerous powerful ITX builds with high-end components going around. Smaller cases can dissipate heat fine...

And that's the core of the issue here: a trend happening with PC components where higher power draw changes the old rules regarding what is possible and what is not. There is no guidance on that, from Nvidia either. They just assume you will solve the new DIY build problems that might arise from the specs they devised.

The very same thing is happening with CPUs. And for what? To run the hardware way outside its efficiency curve, they are skirting the limits of what is possible out of the box to justify a ridiculous price point for a supposed performance edge you might never reach.

Components have landed in nonsense territory on the top end to keep the insatiable hunger of commerce afloat.
Hi,
Yep, seen a few.
Thing is, they are usually vertically mounted GPUs.

I was referring to standard mounting.
 
If Nvidia had just set this card's TDP to 350W instead of 450W, it would have had 97% of the 450W level of performance. That would have allowed a smaller cooler and therefore more room for the power connection, avoiding all this mess. Not sure if they did this out of concern over RDNA3, but I feel they really should have picked a better spot on the card's power efficiency curve. If someone wants more performance, let them water-cool and overclock.
Well, it makes sense, but today it couldn't happen.
First, a 3% difference is enough to decide who is in 1st place in the charts and who is second, and people on the Internet, at least, are happy to call the top card a monster and the one 3% slower a failure, even if that second card is, for example, 30% more efficient. So Intel first (because of their 10nm problems, which forced them to compete while still on 14nm), Nvidia later (which probably decided to pocket all the money and leave nothing to AIBs), and now also AMD (which has realized that targeting efficiency is suicidal) all push their factory overclocks as high as they can.
There was no chance in a million that Nvidia would come out with a Founders Edition at 350W and let its partners produce custom models that go to 450W or higher.
 
Mwa-ha-ha-ha-ha! :roll:

 
Oh, I am fine, then ...

clearance OK; PSU? errrrr..... nah, no chance in hell :p I changed it recently enough not to care about ATX 3.0 PSUs, which are all but unavailable anywhere atm (aside from one model from Thermaltake, which is also close to 3.5x the price I paid for the one I currently have), and it seems AMD will stick with 8-pins for the higher end (6-pins for lower models), and I hope they keep to it given 1. the price, 2. the issues seen recently

although ... I HAVE the clearance! that's more important. Oh, and I've known since my first self-build that a tight cable bend is a bad, bad thing ... and not only for PCs (especially with low-quality cables/extensions; some cables handle a steep curve better than others ... )
as seen on a 6+8-pin Zotac GTX 770 some years ago
 
AMD will need to fallow this power standard design if they want to stay competitive in the high-end.
AMD won't need to "fallow" this, considering the vast majority of PSUs still use 8-pin connectors. They can just use 8-pins and offer a 12VHPWR to 8-pin adapter for all the suckers that ran out and bought a new PSU
 
well they ... "Fall O[ut of that] W[hacko]" new connector, i guess ...
 
I find it humorous that, despite this being a joint NVIDIA/Intel design, even Intel didn't use it for their higher-end Arc cards, even though there are "lower power" versions of the 12-pin connector.

That said, why not just shift to EPS12V? Higher-end 1 kW+ PSUs can power 2 or more EPS12V lines by default (depending on the modular options), and EPS12V 4-pin can handle 155 watts continuous while the 8-pin can handle 235-250 watts continuous. It would still require 3x 8-pin connectors for 600-700 watt cards, but at least the output per 8-pin goes up from the PCIe limit of 150 watts, and having some PSU makers swap PCIe connectors for extra EPS12V isn't much different from designing ATX 3.0 PSUs with a dedicated but potentially faulty 12VHPWR connection. If anything, Seasonic's higher-end PSUs can do either EPS12V or PCIe from the same modular port, so it could be adapted quickly. And most importantly, all EPS12V wires and contacts are of a thicker gauge than the 12VHPWR wires and contacts.
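To make that connector count concrete, here's a rough sketch using the per-connector figures quoted above and assuming the usual 75W slot budget (illustrative values, not official spec limits):

```python
import math

# Continuous per-connector ratings as quoted above (watts); illustrative only.
RATINGS = {
    "PCIe 8-pin":   150,
    "EPS12V 8-pin": 235,   # low end of the 235-250 W range mentioned
}

def connectors_needed(board_power_w, slot_power_w=75):
    """Count how many of each connector type a card would need,
    assuming the PCIe slot itself supplies up to slot_power_w watts."""
    cable_power = max(board_power_w - slot_power_w, 0)
    return {name: math.ceil(cable_power / watts) for name, watts in RATINGS.items()}

for power in (450, 600, 700):
    print(f"{power} W card -> {connectors_needed(power)}")
# 450 W -> 3x PCIe 8-pin vs 2x EPS12V 8-pin
# 600 W -> 4x PCIe 8-pin vs 3x EPS12V 8-pin
# 700 W -> 5x PCIe 8-pin vs 3x EPS12V 8-pin
```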
 
According to who?
Because you have any number of examples that work without a problem, and the tiny thing of a long validation process by electrical engineers.
You know, it can be smaller and better. CPUs do that all the time.
So you think that cables can just get smaller and smaller while drawing more and more current?
It's been quite a few years since I went to school, but I didn't catch the news that the laws of physics were broken. So unless someone uses materials with better conductivity, it's very predictable how much heat will be generated by a cable of a given gauge and power draw.
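And the margin per pin is thinner than it looks. A quick I²R sketch, with assumed (not measured) contact resistances, shows why a slightly worn or bent contact turns into a heater:

```python
# P = I^2 * R per contact at full 12VHPWR load; the resistance values are
# assumed for illustration, not measurements of any specific connector.
TOTAL_POWER_W = 600.0    # worst-case 12VHPWR load
RAIL_V = 12.0
POWER_PINS = 6           # the 12 V current is shared across 6 supply pins

amps_per_pin = TOTAL_POWER_W / RAIL_V / POWER_PINS    # ~8.3 A per pin

for r_contact in (0.005, 0.010, 0.050):               # 5, 10, 50 milliohm
    heat_w = amps_per_pin ** 2 * r_contact
    print(f"{r_contact * 1000:>4.0f} mOhm contact -> {heat_w:.2f} W in one tiny terminal")
# A healthy few-milliohm crimp sheds a fraction of a watt; let it drift toward
# tens of milliohms and a single pin is dissipating watts inside the housing.
```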

Maybe instead of making new connectors and thereby treating the symptom, we should invest time into having hardware detect these high-resistance situations so a user can take action before stuff starts melting or catching fire. Ultimately this is a state that needs immediate action, and even with the best of connectors something can still go wrong. Regardless of connector, I'd like to be aware of this situation, should it arise, before it causes damage.
That sounds very much like treating a symptom rather than solving the underlying cause. Even if the cause were a small increase in resistance, how would you precisely detect this from either the PSU or the GPU end (keeping in mind both the tolerances in the spec of each part and the fact that the current draw changes very rapidly)? You can't just do this the same way a multimeter would, by sending a small current and measuring the voltage drop to calculate resistance. Are there more advanced (and reliable) techniques, beyond my knowledge of electronics, that would make your proposal feasible?
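Not that I can see a clean way either, but just to make the idea concrete: below is a purely hypothetical sketch of what such monitoring could look like if the sense points existed on the card. The voltages, threshold, and function names are all made up for illustration; to my knowledge no shipping GPU or PSU exposes telemetry like this, and a single bad pin hidden among good ones would still be hard to catch.

```python
# Hypothetical: estimate the effective resistance of the 12 V power path from
# a voltage reading at the connector, one at the VRM input, and total current.
ALARM_OHM = 0.015        # arbitrary threshold, ~15 mOhm end to end
MIN_CURRENT_A = 10.0     # below this the drop is too small to measure reliably

def path_resistance(v_at_connector, v_at_vrm, current_a):
    """Effective resistance (ohms) between connector entry and VRM input."""
    if current_a < MIN_CURRENT_A:
        return None
    return (v_at_connector - v_at_vrm) / current_a

def check(v_at_connector, v_at_vrm, current_a):
    r = path_resistance(v_at_connector, v_at_vrm, current_a)
    if r is None:
        return
    heat_w = r * current_a ** 2                       # power lost in that path
    status = "WARNING" if r > ALARM_OHM else "ok"
    print(f"{status}: ~{r * 1000:.1f} mOhm, ~{heat_w:.0f} W lost between plug and VRM")

check(12.05, 11.95, 40.0)   # healthy harness: ~2.5 mOhm, a few watts total
check(12.05, 11.15, 40.0)   # degraded contact: ~22.5 mOhm, plastic-melting territory
```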

I'm more a fan of doing good engineering to create a robust design rather than overengineering a complex solution to compensate for a poor design.

The one thing that doesn't sound quite right to me is the claim that this whole problem is cables that are not fully seated causing extreme heat that melts down the plug, and I would like to see a proper in-depth analysis rather than jumping to conclusions. From what I've seen of power plugs (of any type/size) over the years, heat issues at the connecting part are unusual, unless we are talking about making no connection at all and causing arcing, but that's with higher voltages. With 12V and this wire gauge, the threshold between "good enough" and no connection will be very tiny, probably less than 1 mm. So if this were the core problem, engineering a solution would be fairly easy, by either making the cables stick better in the plug or making the connecting area a tiny bit larger. Keep in mind that with most types of plugs the connection is usually far better than the wire, so unless it's physically damaged there shouldn't be an issue with connectivity. (Also remember that the electrons move on the outside of the wire, and in most plugs the surface area of the connection is significantly larger than the wire's cross-section, probably >10x.)
So the explanation so far sounds a little bit off to me.

This article's tone is pretty condescending. It doesn't take "arm wrestling" to make the connector burn up, it's just poorly designed.
Even I, with my thick fingers, would probably manage to do this unintentionally.
I would call this poor engineering, not user error.
 
I agree, though not because the parts are in any way bad IMHO; and with my limited experience, that opinion counts for even less than its usual nothing.

But I think better positioning was needed, or an adequate adapter should have been provided for free as an essential add-in.
 
How long before the industry discovers that 12 volts is far too low a voltage, for GPUs and CPUs alike?
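For what it's worth, the arithmetic is the whole argument: for the same power, total current (and with it connector and cable stress) scales inversely with the rail voltage. A tiny illustration:

```python
# I = P / V: the same 600 W load at different hypothetical rail voltages.
POWER_W = 600.0

for volts in (12, 24, 48):
    amps = POWER_W / volts
    print(f"{POWER_W:.0f} W at {volts:2d} V -> {amps:5.1f} A total")
# 600 W at 12 V ->  50.0 A
# 600 W at 24 V ->  25.0 A
# 600 W at 48 V ->  12.5 A
```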
 