# PSA: Don't Just Arm-wrestle with 16-pin 12VHPWR for Cable-Management, It Will Burn Up



## btarunr (Oct 25, 2022)

Despite sticking with PCI-Express Gen 4 as its host interface, the NVIDIA GeForce RTX 4090 "Ada" graphics card standardizes the new 12+4 pin ATX 12VHPWR power connector, even across custom designs by NVIDIA's add-in card (AIC) partners. This tiny connector is capable of delivering 600 W of power continuously, and of briefly taking 200% excursions (spikes). Normally, it should make your life easier, as it condenses multiple 8-pin PCIe power connectors into one neat little connector; but in reality the connector is proving to be quite impractical. For starters, most custom RTX 4090 graphics cards have PCBs spanning only two-thirds of the actual card length, which puts the power connector closer to the middle of the graphics card, making it aesthetically unappealing; but then there's a bigger problem, as uncovered by Buildzoid of Actually Hardcore Overclocking, an expert on PC hardware power-delivery design.

CableMod, a company that specializes in custom modular-PSU cables targeting the case-modding community and PC enthusiasts, has designed a custom 12VHPWR cable that plugs into multiple 12 V output points on a modular PSU, converting them to a 16-pin 12VHPWR. It comes with a pretty exhaustive set of dos and don'ts; the latter are more relevant: apparently, you should not try to arm-wrestle with a 12VHPWR connector. Do not attempt to bend the cable horizontally or vertically close to the connector; leave a distance of at least 3.5 cm (1.38 inches). This reduces pressure on the contacts in the connector. Combine this with the already tall RTX 4090 graphics cards, and you have yourself a power connector that's impractical for most standard-width mid-tower cases (chassis), with no room for cable-management. Attempting to "wrestle" with the connector and somehow bending it to your desired shape will cause improper contact, which poses a fire hazard.
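For a rough sense of why standard mid-towers run out of room, here is a back-of-the-envelope clearance check; every dimension below is an illustrative assumption, not a measured value:

```python
# Rough clearance check for a 12VHPWR cable in a mid-tower case.
# All figures are illustrative assumptions, not measured values.

MOBO_TRAY_TO_PANEL_MM = 170   # usable width in a typical mid-tower (assumed)
CARD_HEIGHT_MM = 140          # PCIe bracket to top edge of a 4090-class card (assumed)
CONNECTOR_MM = 10             # protrusion of the 12VHPWR plug body (assumed)
STRAIGHT_RUN_MM = 35          # no-bend zone required past the connector

required = CARD_HEIGHT_MM + CONNECTOR_MM + STRAIGHT_RUN_MM
print(f"Required clearance: {required} mm, available: {MOBO_TRAY_TO_PANEL_MM} mm")
print("Fits without bending early" if required <= MOBO_TRAY_TO_PANEL_MM
      else "Side panel forces a bend inside the no-bend zone")
```

With these assumed numbers the straight run alone pushes the cable past where the side panel would sit, which is exactly the scenario CableMod warns against.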




*Update Oct 26th*: There are multiple updates to the story.



The 12VHPWR connector is a new standard, which means most PSUs on the market lack it, much in the same way that PSUs some 17 years ago lacked PCIe power connectors, and graphics cards included 4-pin Molex-to-PCIe adapters. NVIDIA probably figured out early on when implementing this connector that it cannot rely on adapters by AICs or PSU vendors to perform reliably (i.e. not cause problems with its graphics cards, resulting in a flood of RMAs), and so took it upon itself to design an adapter that converts 8-pin PCIe connectors to a 12VHPWR, which all AICs are required to include with their custom-design RTX 4090 cards. This adapter is rightfully overengineered by NVIDIA to be as reliable as possible, and NVIDIA even specifies a rather short service span of 30 connections and disconnections before the contacts of the adapter begin to wear out and become unreliable. The only problem with NVIDIA's adapter is that it is ugly and ruins the aesthetics of the otherwise brilliant RTX 4090 custom designs, which creates a market for custom adapters.

*Update 15:59 UTC*: A user on Reddit who goes by "reggie_gakil" posted pictures of a GeForce RTX 4090 graphics card with a burnt-out 12VHPWR. While the card itself is "fine" (functional), the NVIDIA-designed adapter that converts 4x 8-pin PCIe to 12VHPWR has a few melted pins, probably caused by improper contact making them overheat or short. "I don't know how it happened but it smelled badly and I saw smoke. Definetly the Adapter who had Problems as card still seems to work," goes the caption with these images.

*Update Oct 26th*: Aris Mpitziopoulos, our associate PSU reviewer and editor of Hardware Busters, did an in-depth video presentation on the issue, where he details how the 12VHPWR design may not be at fault, but extreme abuse by end-users attempting to cable-manage their builds. Mpitziopoulos details the durability of the connector in its normal straight form, versus when tightly bent. You can catch the presentation on YouTube here.

*Update Oct 26th*: In related news, AMD confirmed that none of its upcoming Radeon RX 7000 series RDNA3 graphics cards features the 12VHPWR connector, and that the company will stick to 8-pin PCIe connectors.

*Update Oct 30th*: Jon Gerow, aka Jonny Guru, has posted a write-up about the 12VHPWR connector on his website. It's an interesting read with great technical info.

*View at TechPowerUp Main Site* | Source


----------



## phanbuey (Oct 25, 2022)

what a horrible design... hoping amd is competitive this time.


----------



## The Quim Reaper (Oct 25, 2022)

That's alright, if they burn up their card, they're rich, they can just buy another...


----------



## Dirt Chip (Oct 25, 2022)

One must learn the hard way I guess...
When "how it looks" is all that matters, blended with low common sense and without minimal 'RTFM ability', you'll get burned.


----------



## Crackong (Oct 25, 2022)

Well

I am NOT surprised


----------



## Dirt Chip (Oct 25, 2022)

phanbuey said:


> what a horrible design... hoping amd is competitive this time.


AMD will need to follow this power standard design if they want to stay competitive in the high-end.


----------



## natr0n (Oct 25, 2022)

nvidia should stop all the proprietary bullshit cables.

a few 6 or 8 pin is perfect nobody wants an electronic octopus hanging off a gpu.


----------



## the54thvoid (Oct 25, 2022)

> This adapter is rightfully overengineered by NVIDIA to be as reliable as possible, and NVIDIA even specifies a rather short service span of 30 connections and disconnections before the contacts of the adapter begin to wear out and become unreliable.



This is the same industry-standard cycle as for many molex connectors. i.e., not an issue for the normal end-user. 






Scare-mongering (or lack of due-diligence) isn't helpful when trying to remain a reliable tech site.


----------



## phanbuey (Oct 25, 2022)

Dirt Chip said:


> AMD will need to follow this power standard design if they want to stay competitive in the high-end.



They don't need to stick the 2" connector at 90 degrees on a 5 inch wide card tho.


----------



## Dirt Chip (Oct 25, 2022)

phanbuey said:


> They don't need to stick the 2" connector at 90 degrees on a 5 inch wide card tho.


Yep. 4090s have a very bad, beefy design that will make you replace your case in order to fit them properly.
A simple L-shaped 12VHPWR adapter would solve many problems, but then again: more adapters...


----------



## Pepamami (Oct 25, 2022)

Dirt Chip said:


> AMD will need to follow this power standard design if they want to stay competitive in the high-end.


Or maybe just use 4 8-pin connectors instead of 2, if they're gonna pull out a 600 W GPU. I don't believe that adding a few more 8-pin connectors is gonna make life harder than this new NVIDIA one.


----------



## Xex360 (Oct 25, 2022)

Pepamami said:


> Or maybe just use 4 8-pin connectors instead of 2, if they're gonna pull out a 600 W GPU. I don't believe that adding a few more 8-pin connectors is gonna make life harder than this new NVIDIA one.


Why not make 2 of the new connectors? Seems to me 600 W on one cable is a bit too much heat-wise; we need 4 8-pin connectors to get the same power.


----------



## Dirt Chip (Oct 25, 2022)

Pepamami said:


> Or maybe just use 4 8-pin connectors instead of 2, if they're gonna pull out a 600 W GPU. I don't believe that adding a few more 8-pin connectors is gonna make life harder than this new NVIDIA one.


This is not NV's idea; it's a new general standard.
4x 8-pin is not better; I think it's even worse.
Using the 12VHPWR to 4x 8-pin adapter makes life harder.
AMD needs to also adopt the 12VHPWR, but position it better on the GPU.



Xex360 said:


> Why not make 2 of the new connectors? Seems to me 600 W on one cable is a bit too much heat-wise; we need 4 8-pin connectors to get the same power.


Because when you need 2x 12VHPWR to feed next-gen GPUs, that's 4x whatever you suggested.
The heat is not the problem here; the bending force on the connector is.


----------



## Guwapo77 (Oct 25, 2022)

Dirt Chip said:


> This is not NV's idea; it's a new general standard.
> 4x 8-pin is not better; I think it's even worse.
> Using the 12VHPWR to 4x 8-pin adapter makes life harder.
> AMD needs to also adopt the 12VHPWR, but position it better on the GPU.
> ...


1.  I have 3*8 pins in use and adding one more wouldn't be a problem even in the slightest.  

2. You have one of these on hand and have tested this personally? Everything I'm seeing is a mixture of both. If you've tested this yourself, I'll certainly take your word under consideration.


----------



## Pepamami (Oct 25, 2022)

Dirt Chip said:


> I think it's even worse.


It's not worse: 4x 8-pin puts half the load on a single pin compared to 12VHPWR.
12VHPWR is not suited for 600 W; it's just a better version of 2x 8-pin, where you move the "unnecessary signals" to small pins, saving space (12 power pins instead of 16).
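The per-pin claim checks out arithmetically. A quick sketch, idealizing the load as perfectly shared across the 12 V pins (6 power pins for 12VHPWR, 3 per 8-pin PCIe connector):

```python
# Idealized per-pin current at 12 V, assuming a perfectly shared load.
def amps_per_pin(watts, power_pins, volts=12.0):
    return watts / volts / power_pins

# 12VHPWR: 6 x 12 V pins carrying up to 600 W
hpwr = amps_per_pin(600, 6)   # ~8.33 A per pin
# 8-pin PCIe: 3 x 12 V pins at 150 W per connector (so 4 connectors = 600 W)
pcie = amps_per_pin(150, 3)   # ~4.17 A per pin

print(f"12VHPWR: {hpwr:.2f} A/pin, 8-pin PCIe: {pcie:.2f} A/pin")
print(f"Ratio: {hpwr / pcie:.1f}x")
```

Real connectors won't share current perfectly, which is exactly why a poorly seated pin can end up carrying far more than its idealized share.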


----------



## Dirt Chip (Oct 25, 2022)

Pepamami said:


> It's not worse: 4x 8-pin puts half the load on a single pin compared to 12VHPWR.
> *12VHPWR is not suited for 600 W;* it's just a better version of 2x 8-pin, where you move the "unnecessary signals" to small pins, saving space (12 power pins instead of 16).


According to whom?
Because you have any number of examples that work without a problem, plus the small matter of a long validation process by electrical engineers.
You know, it *can* be smaller and better. CPUs do that all the time.


----------



## fevgatos (Oct 25, 2022)

Dirt Chip said:


> According to whom?
> Because you have any number of examples that work without a problem, plus the small matter of a long validation process by electrical engineers.
> You know, it *can* be smaller and better. CPUs do that all the time.


The 12VHPWR has fewer voltage pins, man; the 8-pin cable standard was overbuilt. You could literally power a 350 W card with one single connector that splits off to 2x 6+2-pin.


----------



## Nihillim (Oct 25, 2022)

natr0n said:


> nvidia should stop all the proprietary bullshit cables.


Wasn't this a collaborative effort between Intel and PCI-SIG?


----------



## ZoneDymo (Oct 25, 2022)

If you need to include this, it's a bad design.

Hell, all they needed to do was add extra plastic at the end there (a sleeve, if you will) so you can't bend it there and are forced to bend it 35 mm away...


----------



## Vayra86 (Oct 25, 2022)

the54thvoid said:


> This is the same industry-standard cycle as for many molex connectors. i.e., not an issue for the normal end-user.
> 
> View attachment 267011
> 
> Scare-mongering (or lack of due-diligence) isn't helpful when trying to remain a reliable tech site.


30 cycles over the course of 5~7 years of usage is not a whole lot, is it... and this is a connector that is not only thinner but also carries higher power.

And molex is certainly not known for its fantastic quality either. It looks and works like bottom barrel junk. Great comparison when we're talking about top end GPU power delivery 

As always, cost considerations are clearly in play here and not to greater benefit of end user (safety) or product (longevity).


----------



## TheDeeGee (Oct 25, 2022)

Is anyone surprised?

Going from 24-Pin (3x 8-Pin) to 12-Pin with 50% smaller terminals delivering the same 450 Watt.

Now in a perfect world that may be fine, but the bending of the pins can't be avoided with the current design.


----------



## john_ (Oct 25, 2022)

Was there ever a case in the past where someone had to be careful not to bend a cable? I think never. So we can assume that we will see a big number of cases, and maybe even see the 12VHPWR changing in the near future. Or maybe we will see cables that fix this problem. I have sub-2-euro USB cables with a metallic shield on them; I am pretty sure putting a metallic shield on a much more expensive cable like the 12VHPWR is not really a cost problem.



I am reading on the internet people saying that this is a user error, but I believe it is not. There was never a case in the past where you have to be careful with a power cable, so how is this a user error? People never had to even consider something like this, and even if the cables come with a warning in the installation manual, do people even know when the cable is bent more than it should be? People are not engineers.


----------



## Carlyle2020hs (Oct 25, 2022)

So the cable needs 35mm space before bending.

... If 93% of all cases do not have the space for it, is it legal to include it?


----------



## Dirt Chip (Oct 25, 2022)

Nihillim said:


> Wasn't this a collaborative effort between Intel and PCI-SIG?


Yep, but the bashing contest is on.
It is anything but "proprietary".


----------



## the54thvoid (Oct 25, 2022)

Vayra86 said:


> 30 cycles over the course of 5~7 years of usage is not a whole lot, is it... and this is a connector that is not only thinner but also carries higher power.
> 
> And molex is certainly not known for its fantastic quality either. It looks and works like bottom barrel junk. Great comparison when we're talking about top end GPU power delivery
> 
> As always, cost considerations are clearly in play here and not to greater benefit of end user (safety) or product (longevity).


Is there a higher standard than 30?

If so, what, and what specs? No point shooting something down without at least giving evidence the other way. Then at least I can change my stance and support yours. But without the evidence, your post sounds like mere opinion.

Not having a go at you, simply asking for something to see that will enable me to change my mind.


----------



## Dirt Chip (Oct 25, 2022)

fevgatos said:


> The 12VHPWR has fewer voltage pins, man; the 8-pin cable standard was overbuilt. You could literally power a 350 W card with one single connector that splits off to 2x 6+2-pin.


Yep, that's the whole idea. Fewer pins, less space.
12VHPWR will not pull more than 600 W unless modded, but it sure can.
If you bend the wire near the connector, you can put excess pressure on the contacts, which can cause such burn phenomena.
Interesting to know what the reason was in this case.



john_ said:


> I am reading on the internet people saying that this is a user error, but I believe it is not. *There was never a case in the past where you have to be careful with a power cable*, so how is this a user error? People never had to even consider something like this and even if the cables come with a warning on the installation manual, do even people know when the cable is bent more than it should? People are not engineers.


Right.



TheDeeGee said:


> Is anyone surprised?
> 
> Going from 24-Pin (3x 8-Pin) to 12-Pin with 50% smaller terminals delivering the same 450 Watt.
> 
> Now in a perfect world that may be fine, but the bending of the pins can't be avoided with the current design.


I agree that it is more a problem of the connector's location on the GPU (and the card's size leaving you space-limited), and of the adapter shrinking the room to maneuver even further, than a problem with the 12VHPWR spec itself.

There is no problem engineering pins that can withstand the current/voltage. Also, only *3* pins of each 8-pin carry 12 V, so it's 3x3 = *9* in total, not 24... FYI, 12VHPWR has *6* voltage pins; see pic.

We need to ask about the circumstances of this incident and not just bash the 12VHPWR's brains out.


----------



## TheDeeGee (Oct 25, 2022)

Dirt Chip said:


> Yep, that's the whole idea. Fewer pins, less space.
> 12VHPWR will not pull more than 600 W unless modded, but it sure can.
> If you bend the wire near the connector, you can put excess pressure on the contacts, which can cause such burn phenomena.
> Interesting to know what the reason was in this case.
> ...


They should have looked into combining a traditional 2x 6-pin or 2x 8-pin and keeping the size. That would still be a reduced footprint on the PCB and a more robust connector.

It's clearly the fragile pins that are the problem; the cables can handle it just fine.


----------



## Vayra86 (Oct 25, 2022)

the54thvoid said:


> Is there a higher standard than 30?
> 
> If so, what, and what specs? No point shooting something down without at least giving evidence the other way. Then at least I can change my stance and support yours. But without the evidence, your post sounds like mere opinion.
> 
> Not having a go at you, simply asking for something to see that will enable me to change my mind.











Mating Connector: What to Know About Connector Mating Cycles (www.iconnsystems.com)




Good for perspective; 30 just isn't a whole lot, especially as GPUs get switched, cleaned, resold, etc.


----------



## Gundem (Oct 25, 2022)

So now I need a new case as well. Cool.

*hypothetically speaking of course*


----------



## the54thvoid (Oct 25, 2022)

Vayra86 said:


> Mating Connector: What to Know About Connector Mating Cycles
> 
> 
> A mating connector is any method of assembling of two or more component parts with mutually complementing shapes that. Any electrical connector, bolted joint, and jigsaw puzzle is an example of assembling based on mating connection. Mating cycles are important when selecting a connector. Learn...
> ...



30 still appears fine for normal usage. I've disconnected my graphics card zero times in the past year. A reviewer may need more but then, that's a business use with other considerations.

The construction and bend elements are fair game for criticism but, IMO, the 30 (minimum) rating, which may be much higher IRL, isn't an issue.


----------



## sephiroth117 (Oct 25, 2022)

This goes far above Nvidia and RTX 4090.

This is a standard, one that can and will be found in all GPUs at some point; AMD is part of PCI-SIG.
If the 7000 series still uses PCIe 8-pin power (which clearly was the best choice, since I have yet to find an ATX 3.0 PSU in a store), maybe the 8000 series won't.

We need to get to the bottom of this, thoroughly and scientifically. Leave no room for doubt, people should be using their card without thinking it will burn their PC!

PCI-SIG already warned about thermal variance in adapters and non-ATX 3.0 PSUs back in September, I believe.


----------



## swirl09 (Oct 25, 2022)

Carlyle2020hs said:


> So the cable needs 35mm space before bending.
> 
> ... If 93% of all cases do not have the space for it, is it legal to include it?


That particular aspect wouldn't really fall into a legal category IMO.

What absolutely should is that there are no warnings or documentation of this point in/on the packaging. I don't believe "it's been talked about online" is going to hold much weight in court.

If you don't want something to be bent, don't make it highly bendable with zero warnings/instructions. (I've checked everything that came in the box, and there was NOTHING about how it should or shouldn't be bent.)


----------



## TheDeeGee (Oct 25, 2022)

Here is another one.


----------



## xtreemchaos (Oct 25, 2022)

The way GPU TDP is going, in a few years we are going to need a busbar to connect the PCIe power.


----------



## TheoneandonlyMrK (Oct 25, 2022)

Carlyle2020hs said:


> So the cable needs 35mm space before bending.
> 
> ... If 93% of all cases do not have the space for it, is it legal to include it?


Exactly, I mentioned this yesterday but people gloss over stuff 

This makes a 4090 impractical in more than 93% of cases IMHO.

But buyers be like nah I can jam it in soooo.

IMHO this is going to be a drama.


----------



## TheDeeGee (Oct 25, 2022)

I also wonder how many people are using only 2 PSU cables to connect the 600 watt adapter.

Cuz you know, some PSUs have daisy chain PCI-E 8-Pin cables... *yikes*


----------



## damric (Oct 25, 2022)

Blame Raja


----------



## SOAREVERSOR (Oct 25, 2022)

Vayra86 said:


> 30 cycles over the course of 5~7 years of usage is not a whole lot, is it... and this is a connector that is not only thinner but also carries higher power.
> 
> And molex is certainly not known for its fantastic quality either. It looks and works like bottom barrel junk. Great comparison when we're talking about top end GPU power delivery
> 
> As always, cost considerations are clearly in play here and not to greater benefit of end user (safety) or product (longevity).



This!  How many times did the molex pins just come right the fuck out of the connector!  Molex fucking blows goats and we all hated it.  One of the great things about SATA and PCIE was being done with that for GPUs and HDDs.



the54thvoid said:


> 30 still appears fine for normal usage. I've disconnected my graphics card zero times in the past year. A reviewer may need more but then, that's a business use with other considerations.
> 
> The construction and bend elements are fair game for criticism but, IMO, the 30 (minimum) rating, which may be much higher IRL, isn't an issue.


That's max. In the case of molex, the first time you removed it you could rip all the pins out, stuck in the female socket, or they would be misaligned and you had to monkey them back into place. It was a giant parade of fail, bad, suck, miserable, wasted time, and much cursing.


----------



## Pepamami (Oct 25, 2022)

Dirt Chip said:


> According to who?
> Because you have many number of examples that works without a problem and the tiny thing of a long validation process by electrical engineers.
> You know, it *can* be smaller and better. CPU`s does that all the time.


Because you use smaller, thicker-gauge (lower-AWG) cables does not mean the same pins are suited for this.
Now we have "please don't bend it, OK" situations because of pin overload.


----------



## the54thvoid (Oct 25, 2022)

SOAREVERSOR said:


> This!  How many times did the molex pins just come right the fuck out of the connector!  Molex fucking blows goats and we all hated it.  One of the great things about SATA and PCIE was being done with that for GPUs and HDDs.
> 
> 
> That's max.  In the case of molex the first time you removed it you could rip all the pins out stuck in the female socket or they would be non aligned you had to monkey them back into place.  It was a giant parade of fail, bad, suck, miserable, wasted time, and much cursing.



But last Gen's cards, and the ones before, using the mini-molex design had the same rating. I had to Google into the rabbit hole but this pdf sort of covers it.

molex.com/pdm_docs/ps/PS-5556-001-001.pdf

Whether or not that blows is irrelevant. It's the false drama over the 30 cycles.


----------



## Chomiq (Oct 25, 2022)

I connected my cable once, haven't disconnected it since and it's been almost a year now since I got my 3080 Ti FE. Granted, it's 350W GPU, not a 450W+.


----------



## catulitechup (Oct 25, 2022)

damric said:


> Blame Raja



.................


----------



## Aquinus (Oct 25, 2022)

Chomiq said:


> I connected my cable once, haven't disconnected it since and it's been almost a year now since I got my 3080 Ti FE. Granted, it's 350W GPU, not a 450W+.


In the 11-ish years of having my 1kW Seasonic, I've disconnected and reconnected cables less than once a year.


----------



## Blueberries (Oct 25, 2022)

I know it generates clicks or video views or whatever, but the fear-mongering has to stop. There is exactly n=1 case of this happening in the wild, and who knows how they treated the cable or what other elements could have caused this. A quick Google search of "burned 8 pin connector" will yield numerous results of this happening throughout history; it's not unique to 12VHPWR.

It seems to me you just need to be a little careful and follow the recommendations when using these cables instead of slapping things together like a monkey.


----------



## TheDeeGee (Oct 25, 2022)

The only other option I see, if they want to keep this design, is to mould the terminals into the plastic. That would ensure they can no longer wiggle around when the cable is being pulled.

----------------------------------------------------------------------------------

Well, slowly getting more and more reports.

Vectral on Twitter: "Uh oh... Another 4090 (ASUS 4090 TUF OC) with a damaged 12VHPWR power connector just got posted in the same reddit thread as the original Gigabyte card post from today. This is slightly getting out of hand... @Buildzoid1 @Sebasti66855537 https://t.co/oQnaywqBzq" / Twitter


----------



## Aquinus (Oct 25, 2022)

TheDeeGee said:


> The only other option i see if they want to keep this design, is to mould the terminals into the plastic. That would ensure they can no longer wiggle around when the cable is being pulled.
> 
> ----------------------------------------------------------------------------------
> 
> ...


Maybe instead of making new connectors and treating the symptom, we should invest time into having hardware detect these high-resistance situations so a user can take action before stuff starts melting or catching fire. Ultimately this is a state that needs immediate action, and even with the best of connectors, something can still go wrong. Regardless of connector, I'd like to be aware of this situation should it arise _before it causes damage_.
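A minimal sketch of what such detection could look like, assuming (purely hypothetically) that the board could report per-pin current and contact voltage drop; the pin names, readings, and 10 mOhm threshold below are all illustrative:

```python
# Hypothetical per-pin telemetry check: flag contacts whose resistance
# (V_drop / I) exceeds a threshold before heat damage occurs.
# Pin names, readings, and the threshold are illustrative only.

THRESHOLD_OHMS = 0.010  # 10 mOhm, arbitrary illustrative limit

def flag_bad_contacts(readings):
    """readings: dict of pin -> (current_amps, voltage_drop_volts)."""
    bad = {}
    for pin, (amps, v_drop) in readings.items():
        if amps > 0 and v_drop / amps > THRESHOLD_OHMS:
            bad[pin] = v_drop / amps
    return bad

telemetry = {
    "P1": (8.3, 0.020),  # ~2.4 mOhm: fine
    "P2": (8.3, 0.150),  # ~18 mOhm: poor contact, ~1.2 W dissipated in the pin
    "P3": (8.3, 0.025),
}
print(flag_bad_contacts(telemetry))  # only P2 is flagged
```

The point is that a degraded contact announces itself electrically (as excess voltage drop) well before it announces itself as smoke.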


----------



## FinlandApollo (Oct 25, 2022)

Vayra86 said:


> 30 cycles over the course of 5~7 years of usage is not a whole lot, is it... and this is a connector that is not only thinner but also carries higher power.
> 
> And molex is certainly not known for its fantastic quality either. It looks and works like bottom barrel junk. Great comparison when we're talking about top end GPU power delivery
> 
> As always, cost considerations are clearly in play here and not to greater benefit of end user (safety) or product (longevity).


Do you have a better producer than Molex?

And no, that "molex connector" in the PC is not made by Molex; it's a Mate-N-Lok by TE Connectivity. It's actually one of the few connectors that is NOT made by Molex.


----------



## ThrashZone (Oct 25, 2022)

the54thvoid said:


> 30 still appears fine for normal usage. I've disconnected my graphics card zero times in the past year. A reviewer may need more but then, that's a business use with other considerations.
> 
> The construction and bend elements are fair game for criticism but, IMO, the 30 (minimum) rating, which may be much higher IRL, isn't an issue.


Hi,
Sorry but when a company says 30 times is a lifetime of a connector it's a pos.


----------



## dinmaster (Oct 25, 2022)

Problem is, they made the cables and the connectors too small (probably why it's 30 uses). The wire should be a heavier gauge and the connector should be bigger. The designer of this connector thought, hey, more 12 V wires will divide the power and make it safe... then NVIDIA comes along: let's go more power, 600 W over small wires that were designed for 400 W. Regular PCIe power connectors are the right design, and if you need 3 of them, let that be. I personally hope that AMD does not go this way in terms of power connectors; don't mess with something that has been working for years. If anything, just add wires: a 10-pin PCIe connector. We've got 6/8-pin ones already that deliver a good amount of power with a good gauge of wire, and like I said, have 2 or 3 connectors on the cards like we currently have.


----------



## ThrashZone (Oct 25, 2022)

Hi,
Funny tried to end with a joke


----------



## user556 (Oct 25, 2022)

Performance is king ... but only if it's reliable.


----------



## Colddecked (Oct 25, 2022)

Blueberries said:


> I know it generates clicks or video views or whatever, but the fear-mongering has to stop. There is exactly n=1 case of this happening in the wild, and who knows how they treated the cable or what other elements could have caused this. A quick Google search of "burned 8 pin connector" will yield numerous results of this happening throughout history; it's not unique to 12VHPWR.
> 
> It seems to me you just need to be a little careful and follow the recommendations when using these cables instead of slapping things together like a monkey.



There's more than 1 case of this. And we are talking, what, 2 weeks after launch? Just running benchmarks? Sure, it can happen to 8-pins also, but I'm sure those were running out of spec. If you are counting on people bending the cable exactly at 35 mm after the connector, it's a design error.


----------



## ThrashZone (Oct 25, 2022)

Hi,
Yeah but you have to laugh at someone buying a 1600.us+ gpu and trying to put it in a midtower


----------



## fevgatos (Oct 25, 2022)

TheDeeGee said:


> I also wonder how many people are using only 2 PSU cables to connect the 600 watt adapter.
> 
> Cuz you know, some PSUs have daisy chain PCI-E 8-Pin cables... *yikes*


There is absolutely nothing wrong with daisy-chained PCIe 8-pin; why are you saying yikes?



ThrashZone said:


> Hi,
> Sorry but when a company says 30 times is a lifetime of a connector it's a pos.


That's the same rating as the normal 8-pin PCIe though.


----------



## TheDeeGee (Oct 25, 2022)

Aquinus said:


> Maybe instead of making new connectors and treating the symptom, we should invest time into having hardware detect these high-resistance situations so a user can take action before stuff starts melting or catching fire. Ultimately this is a state that needs immediate action, and even with the best of connectors, something can still go wrong. Regardless of connector, I'd like to be aware of this situation should it arise _before it causes damage_.


True; as-is, this connector can never be safe.

The only safe way to connect is to solder the wires directly to the GPU; I'm sure DIYers will attempt this.


----------



## ThrashZone (Oct 25, 2022)

Hi,
Big difference is normal pci-e/... cables aren't as fragile as this adapter seems to be.


----------



## Punkenjoy (Oct 25, 2022)

A well-made connector should be enough to deliver 600 W of power. The thing is, these are not well designed.

To me it's not the amount of cable or the number of connection cycles; it's really the position and the locking mechanism of the thing. Something that handles 600 W of power should be well locked in place. That 35 mm no-bend rule is just hilarious.

That doesn't mean they won't improve it. For example, PCIe x16 slots now have a locking mechanism. They could do something similar here. It's indeed additional cost, but it's better to be safe than sorry.

I think the 35 mm no-bend zone is a stupidity, but not because of the side panel. It's just a dangerous risk if your cable moves for whatever reason. If you have 4090 money but not enough money to spend on a larger case, well, you should revise your priorities. A case that's too small will restrict airflow. It could always work if you reduce the power limits, but still.

What I would do is a 90° connector with a clamp on it to secure it in place, and no one would have a problem ever again.

I think this situation is ridiculous, but at some point there was a need for a better connector instead of four 8-pins.

The location of these connectors should also be improved. If it was facing up or down (in a standard GPU mount), there would be way less bending required. I would actually have it facing up. This would allow someone with a vertical GPU mount to totally hide the cable.


----------



## TheDeeGee (Oct 25, 2022)

fevgatos said:


> There is absolutely nothing wrong with daisy-chained PCIe 8-pin; why are you saying yikes?
> 
> 
> That's the same rating as the normal 8-pin PCIe though.


For the 600 W adapter it's the difference between pulling 300 W through a single cable if daisy-chained, or 150 W per cable with four.

Sure, PCIe 8-pin is rated for a little over 300 W, but would you be comfortable with that?

But I guess some people like to live on the edge.
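For what it's worth, the split being argued about here is simple arithmetic. A quick sketch (assuming a perfectly even current split across leads, which real-world contact and wire resistance only approximate):

```python
# Per-lead load for a 600 W 12VHPWR adapter, comparing a daisy-chained
# 2-lead hookup against four separate leads. Assumes an even split,
# which real cables only approximate.
TOTAL_W = 600
VOLTS = 12.0

for leads in (2, 4):
    watts = TOTAL_W / leads
    amps = watts / VOLTS
    print(f"{leads} leads: {watts:.0f} W ({amps:.1f} A) per lead")
```

At 12 V, halving the number of leads doubles both the wattage and the current each one carries, which is the whole disagreement in a nutshell.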


----------



## fevgatos (Oct 25, 2022)

TheDeeGee said:


> For the 600 W adapter it's the difference between pulling 300 W through a single cable if daisy-chained, or 150 W per cable with four.
> 
> Sure, PCIe 8-pin is rated for a little over 300 W, but would you be comfortable with that?
> 
> But I guess some people like to live on the edge.


Again, there is absolutely nothing wrong with daisy-chained 2x 8-pins, unless your PSU was bought at a discount from Lidl. In which case, I don't think your main concern should be the cable anyway, right?

Even the 12VHPWR cable that Corsair sells has two connectors on the PSU side. So they have to know something, right?


----------



## medi01 (Oct 25, 2022)

Dirt Chip said:


> This is not NV's idea, it's a new general standard.


There is no "general standard" that says "ship a home-made adapter that cannot fit properly in 93% of cases."

This issue is absolutely NV's creation and has nothing to do with the 12-pin socket itself.

If NV was too greedy to provide a proper 90-degree adapter, it could have located the socket differently.


----------



## Vayra86 (Oct 25, 2022)

ThrashZone said:


> Hi,
> Yeah but you have to laugh at someone buying a $1,600+ GPU and trying to put it in a midtower


Oh? We have numerous powerful ITX builds with high-end components going about. Smaller cases can dissipate heat fine...

And that's the core of the issue here: a trend in PC components where higher power draw changes the old rules about what is possible and what is not. There is no guidance on that, from Nvidia either. They just assume you will solve the new DIY build problems that might arise from the specs they devised.

The very same thing is happening with CPUs. And for what? To run the hardware way outside its efficiency curve; they are skirting the limits of what is possible out of the box to justify a ridiculous price point for a supposed performance edge you might never reach.

Components have landed in nonsense territory at the top end to keep the insatiable hunger of commerce afloat.



FinlandApollo said:


> Do you have better producer than Molex?
> 
> And no, that "molex connector" in the PC is not made by Molex, it's a Mate-N-Lok by TE-Connectivity. It's actually one of the few connectors that is NOT made by Molex.


Wha... molex cables, buddy. I never used a capital letter, and if I did, it was by accident.

These


----------



## medi01 (Oct 25, 2022)

Hehe:


__ https://twitter.com/i/web/status/1584931430483705859


----------



## TheDeeGee (Oct 25, 2022)

fevgatos said:


> Again, there is absolutely nothing wrong with daisy-chained 2x 8-pins, unless your PSU was bought at a discount from Lidl. In which case, I don't think your main concern should be the cable anyway, right?
> 
> Even the 12VHPWR cable that Corsair sells has two connectors on the PSU side. So they have to know something, right?


Corsair knows, they also know how the sense wires work... not 



medi01 said:


> Hehe:
> 
> 
> __ https://twitter.com/i/web/status/1584931430483705859


If AIBs' lower-tier cards use this dumpster-fire 12VHPWR connector as well, I will either get myself a 3070 Strix or go to AMD (something I really want to avoid).


----------



## medi01 (Oct 25, 2022)

TheDeeGee said:


> a 3070


NV: "users stick to series, not price tier" <= check
Was your earlier card a 970?



TheDeeGee said:


> or go to AMD (something i really want to avoid).


NV: "customers keep buying our stuff no matter what" <= check


----------



## dinmaster (Oct 25, 2022)

What a shit show, GG Nvidia. They just couldn't stick with PCIe connectors, or use a bigger connector and thicker wires, so that the 30-reconnect limit wouldn't exist, and neither would the fires.


----------



## sephiroth117 (Oct 25, 2022)

The Quim Reaper said:


> That's alright, if they burn up their card, they're rich, they can just buy another...



I don't have a 4090, but for some, paying $1,000 for a GPU a few months ago was a good deal and normal; now at $1,500 it's like, OK, you are filthy rich, have three Porsches and can afford ten 4090s, lmao.

No, that's not alright, because it could happen on an upcoming, more affordable 4080/4070... or a 3090 Ti, since they have the 12VHPWR too.


----------



## TheDeeGee (Oct 25, 2022)

medi01 said:


> NV: "users stick to series, not price tier" <= check
> Was your earlier card a 970?
> 
> 
> NV: "customers keep buying our stuff no matter what" <= check


Troll confirmed, ignored bai!


----------



## Toss (Oct 25, 2022)

I prefer my old AMD with normal 8-pins. F K THAT.
What's the problem with them going 4x 8-pin instead of this garbage? Same 600 W TDP.


----------



## evernessince (Oct 25, 2022)

the54thvoid said:


> This is the same industry-standard mating-cycle rating as for many Molex connectors, i.e., not an issue for the normal end-user.
> 
> View attachment 267011
> 
> Scare-mongering (or lack of due diligence) isn't helpful when trying to remain a reliable tech site.



To be fair, Molex connectors were a PITA. Even on the first connect the pins often weren't aligned, and you had to fiddle with it to get everything seated. Pretty much had to fiddle with each reconnect after that as well. Molex wasn't carrying 600 W either.

I agree with you that problems are unlikely for most people, but when you are talking about a cable that carries this much power, you don't really want quality on par with Molex.


----------



## Solaris17 (Oct 25, 2022)

sephiroth117 said:


> I don't have a 4090, but for some, paying $1,000 for a GPU a few months ago was a good deal and normal; now at $1,500 it's like, OK, you are filthy rich, have three Porsches and can afford ten 4090s, lmao.



Man, comments like this really bring nothing to the table. I cannot stand it when people do it. This is totally off topic, but I just want to throw some things out really quick before my meeting.

If you upgrade even every two years, in MY experience in consumer land you spend pretty much the same amount of money keeping up with your build as someone who blows it all at once.

I bought 2x 4090s and 2x Z690s, including all the other parts, coolers, fans, cases, and RAM for two platform upgrades. All at once. I probably just dropped 1/4 of the salary of what some make here on this forum.

Because I saved. Since 2017. The week after I finished our X299 builds. For the next platform jump. Five years.

I do not think it should make me out to be, or include me in, the demographic of people considered hardware snobs just because I can drop 3x your mortgage on PC parts in one night and still eat dinner. Your logic is flawed.

Also, I LOVE Porsches. And they don't need to be $180k cars. You can choose to spend that much though, if you want.


For the record, if it helps: I know a few others that do it like me. At the very least it's a waste of your time (not sure you know how much that's worth yet), because what people like this think of how I spend my money doesn't affect how I sleep at night.


----------



## the54thvoid (Oct 25, 2022)

evernessince said:


> To be fair, Molex connectors were a PITA. Even on the first connect the pins often weren't aligned, and you had to fiddle with it to get everything seated. Pretty much had to fiddle with each reconnect after that as well. Molex wasn't carrying 600 W either.
> 
> I agree with you that problems are unlikely for most people, but when you are talking about a cable that carries this much power, you don't really want quality on par with Molex.



I feel as though I'm banging my head into a brick wall.

It doesn't matter whether Molex is a PITA. It matters that people are using this to bash Nvidia as though it's their fault. AMD uses the same Molex-style 6- and 8-pin connectors (from the PSU), which all follow certain standards, namely the 30-cycle mating rating. The 30-cycle thing is not the issue.

The issue is the shitty bend mechanics and pin contact failure.


----------



## Star_Hunter (Oct 25, 2022)

If Nvidia had just set this card's TDP to 350 W instead of 450 W, it would have had 97% of the 450 W level of performance. That would have enabled a smaller cooler, and therefore more room for the power connection, avoiding all this mess. Not sure if they did this out of concern about RDNA3, but I feel they really should have picked a better spot on the card's power-efficiency curve. If someone wants more performance, simply have them use water cooling and overclock.


----------



## kapone32 (Oct 25, 2022)

I just hope that, with no EVGA around, all these buyers have fun getting warranty service, especially from Asus.


----------



## Solaris17 (Oct 25, 2022)

I wonder if cards will come with dielectric grease now. I think I have some left over from my starter. lol



kapone32 said:


> I just hope that, with no EVGA around, all these buyers have fun getting warranty service, especially from Asus.



Worst CX experience of my life.


----------



## ThrashZone (Oct 25, 2022)

kapone32 said:


> I just hope that, with no EVGA around, all these buyers have fun getting warranty service, especially from Asus.


Hi,
Indeed.
I'm now without a GPU manufacturer; luckily I'm not in the market.


----------



## kapone32 (Oct 25, 2022)

Star_Hunter said:


> If Nvidia had just set this card's TDP to 350 W instead of 450 W, it would have had 97% of the 450 W level of performance. That would have enabled a smaller cooler, and therefore more room for the power connection, avoiding all this mess. Not sure if they did this out of concern about RDNA3, but I feel they really should have picked a better spot on the card's power-efficiency curve. If someone wants more performance, simply have them use water cooling and overclock.


With the size of these cards there would have been no issue using four 8-pin connectors. I guess Nvidia didn't want that image in people's heads, so at a time when we are just trying to get the supply chain back in order, they made a brand-new standard that intuitively sounds dangerous. By the way, I have never heard of having to shout a design warning about a baseline product like a PSU cable, but with Nvidia nothing surprises me.


----------



## MachineLearning (Oct 25, 2022)

This article's tone is pretty condescending. It doesn't take "arm wrestling" to make the connector burn up; it's just poorly designed.

How exactly are users supposed to prevent bending within 35 mm of the terminals? Most people won't have problems, but virtually nobody would have issues if they had just coughed up the extra PCB space and gone with 8-pins.


----------



## ThrashZone (Oct 25, 2022)

Hi,
The cooler sticking that far past the card is just dumb imho.


----------



## mechtech (Oct 25, 2022)

99 problems……but a cable isn’t one…..


----------



## rv8000 (Oct 25, 2022)

Solaris17 said:


> Man, comments like this really bring nothing to the table. I cannot stand it when people do it. This is totally off topic, but I just want to throw some things out really quick before my meeting.
> 
> If you upgrade even every two years, in MY experience in consumer land you spend pretty much the same amount of money keeping up with your build as someone who blows it all at once.
> 
> ...



OT:

I agree comments like that are unnecessary.

Why then would you make a post justifying how you spend your money? You're making a post about the same topic, just arguing the other side, adding nothing to the topic in equal fashion. Especially odd when "how someone spends their money" doesn't affect how you sleep at night.


----------



## Solaris17 (Oct 25, 2022)

rv8000 said:


> OT:
> 
> I agree comments like that are unnecessary.
> 
> Why then would you make a post justifying how you spend your money? You're making a post about the same topic, just arguing the other side, adding nothing to the topic in equal fashion. Especially odd when "how someone spends their money" doesn't affect how you sleep at night.



You should never pass up the opportunity to enlighten. Within reason of course. There is always the chance it expands his thinking.


----------



## randomUser (Oct 25, 2022)

What they're basically telling us here is "we make the connectors as cheap as possible, so quality is very low; the plastic bends and tears easily, so please don't touch it".


----------



## Solaris17 (Oct 25, 2022)

randomUser said:


> What they're basically telling us here is "we make the connectors as cheap as possible, so quality is very low; the plastic bends and tears easily, so please don't touch it".



You know the connector and the decision were not Nvidia's, but I do have to wonder what their engineering team was thinking. Certainly they were given the same kind of information (the 35 mm rule). I mean, someone SOMEWHERE had to have put this in a case and been like, "Man, we should, idk, put this at the back of the card, right??"


----------



## Vayra86 (Oct 25, 2022)

the54thvoid said:


> I feel as though I'm banging my head into a brick wall.
> 
> It doesn't matter whether Molex is a PITA. It matters that people are using this to bash Nvidia as though it's their fault. AMD use the same mini-molex 6 and 8-pin connectors (from the PSU), which all follow certain standards--which is namely the 30 cycle mating. The 30 cycle thing is not the issue.
> 
> The issue is the shitty bend mechanics and pin contact failure.


The 30 cycles are another _symptom_ of a business that is constantly eating away at what could be called 'headroom' at large.

Anyone with half a bit of sense would raise this number, because, simply enough, GPUs definitely do get close to that limit or go over it. And these GPUs aren't dealing in low wattages like, say, HDDs do, or a bunch of fans; here we are dealing with loads fat enough to make a PSU sweat. Also, and much more importantly, these cables aren't getting cheaper, and certainly not dirt cheap like Molex is. We're talking about top-of-the-line components here.

Similar things occur with, for example, SATA flat cables. They're too weak, so they break during the normal lifespan if you do more than one or two reinstalls with them; and let's face it, with SSDs this likelihood has only increased, as the devices are much more easily swapped around or taken portable, boards now offer hotswap sockets for SATA, etc.

And examples like it are rapidly getting new friends: Intel's IHS bending, thermal pads needed on GPUs to avoid failure, etc., etc.

The issue is shitty bend mechanics, yes, but at the core of all these issues is one simple thing: cost reduction at the expense of OUR safety and the durability of devices. Be wary of what you cheer for; saying 30 cycles is fine because it's the same as Molex is not appreciating how PC gaming has evolved. Specs should evolve along with it.


----------



## ThrashZone (Oct 25, 2022)

Hi,
Should've made the plastic tab longer if that was the minimum bend point.
Hell, just extend it down so it acts like a leg to hold the big bitch up, too.


----------



## Vayra86 (Oct 25, 2022)

Solaris17 said:


> You should never pass up the opportunity to enlighten. Within reason of course. There is always the chance it expands his thinking.


That also goes both ways. I have to agree that it is pretty odd seeing the comments of people here wrt products, and a lot of that is happening in the 'top end' segment; but then, that's _my_ view. It all depends on your perspective: some want the latest and greatest no matter what, and there is no common sense involved. The fact that you are different does not make it a rule; and yes, I think the lack of sense in some minds is also an opportunity to enlighten.

We all have our lenses to view the world through, (un?)fortunately. Nobody's right. Or wrong. But social feedback is generally how norms and normality are formed.

Still, I don't think it's entirely honest to your budgeting or yourself to say buying the top end is _price conscious_. It's really not: all costs increase because you've set the bar that high, the $/fps is worst at the top, and that problem only gets bigger if you compare gen-to-gen for similar performance. It's perfectly possible to last a similar number of years with something one or two notches lower in the stack and barely notice the difference, especially today, where the added cost of cooling and other requirements can amount to many hundreds of extra dollars.

That said, buying 'high end' is definitely more price conscious than cheaping out and then getting _forced_ into an upgrade because you really can't run stuff properly in two or three years' time. But there is nuance here; an x90 was never a good idea, except when they go on sale like AMD's 69xx cards do now. It's the same as buying at launch; tech depreciates too fast to make it worthwhile. You mentioned cars yourself: a similar loss of value applies the moment you drive off...



Solaris17 said:


> You know the connector and the decision were not Nvidia's, but I do have to wonder what their engineering team was thinking. Certainly they were given the same kind of information (the 35 mm rule). I mean, someone SOMEWHERE had to have put this in a case and been like, "Man, we should, idk, put this at the back of the card, right??"


Yeah, or you could decide to offer and design your very own Nvidia-branded cable doing the same thing but with somewhat greater tolerances. One could say their margins and their leadership position make that an expectation, even. Nvidia is always first foot in the door when it comes to pushing tech ahead... They still sell G-Sync modules even though the peasant spec is commonplace now, for example.

Everything just stinks of cutting corners, and in this segment, IMHO, that's instant disqualification.

Also, back of the card? Where exactly? There's no PCB on the better half of it, right?


----------



## ThrashZone (Oct 25, 2022)

Vayra86 said:


> Oh? We have numerous powerful ITX builds with high-end components going about. Smaller cases can dissipate heat fine...
> 
> And that's the core of the issue here: a trend in PC components where higher power draw changes the old rules about what is possible and what is not. There is no guidance on that, from Nvidia either. They just assume you will solve the new DIY build problems that might arise from the specs they devised.
> 
> ...


Hi,
Yep, seen a few.
Thing is, they usually have vertically mounted GPUs.

I was referring to standard mounting.


----------



## john_ (Oct 25, 2022)

Star_Hunter said:


> If Nvidia had just set this card's TDP to 350 W instead of 450 W, it would have had 97% of the 450 W level of performance. That would have enabled a smaller cooler, and therefore more room for the power connection, avoiding all this mess. Not sure if they did this out of concern about RDNA3, but I feel they really should have picked a better spot on the card's power-efficiency curve. If someone wants more performance, simply have them use water cooling and overclock.


Well, it makes sense, but today it couldn't happen.
First, a 3% difference is enough to decide who is in first place in the charts and who is second. And people, on the Internet at least, are happy to declare the top card a monster and the 3% slower one a failure, even if that second card is, for example, 30% more efficient. So Intel first (whose 10 nm problems forced them to compete while still on 14 nm), Nvidia later (probably deciding to pocket all the money and leave nothing to AIBs), and now also AMD (having realized that targeting efficiency is suicidal) all push their factory overclocks as high as they can.
There was not a chance in a million that Nvidia would come out with a Founders Edition at 350 W and let its partners produce custom models at 450 W or higher.


----------



## zlobby (Oct 25, 2022)

Mwa-ha-ha-ha-ha!


----------



## catulitechup (Oct 25, 2022)

More jokes on 4090


__ https://twitter.com/i/web/status/1584676185916600320


----------



## GreiverBlade (Oct 25, 2022)

Oh, I am fine, then ...

Clearance OK. PSU? Errrrr... nah, no chance in hell; I changed it recently enough not to care about ATX 3.0 PSUs, which are all but unavailable anywhere atm (aside from one model from Thermaltake, which is also close to 3.5x the price I paid for my current one), and it seems AMD will keep the 8-pins for the higher end (6-pins for lower models); I hope they stick to that, given 1. the price, 2. the issues seen recently.

Although ... I HAVE the clearance! That's what's more important. Oh, and I've known since my first self-build that a tight cable bend is a bad, bad thing ... and not only for PCs (especially with low-quality cables/extensions; some cables handle a steep curve better than others ...)


Spoiler: the bendiest bend I ever did was ...



on a 6+8-pin Zotac GTX 770 some years ago


----------



## TheinsanegamerN (Oct 25, 2022)

Dirt Chip said:


> AMD will need to fallow this power standard design if they want to stay competitive in the high-end.


AMD won't need to "fallow" this, considering the vast majority of PSUs still use 8-pin connectors. They can just use 8-pins and offer a 12VHPWR-to-8-pin adapter for all the suckers that ran out and bought a new PSU.


----------



## GreiverBlade (Oct 25, 2022)

TheinsanegamerN said:


> AMD won't need to "fallow" this, considering the vast majority of PSUs still use 8-pin connectors. They can just use 8-pins and offer a 12VHPWR-to-8-pin adapter for all the suckers that ran out and bought a new PSU.


well they ... "Fall O[ut of that] W[hacko]" new connector, I guess ...


----------



## TechLurker (Oct 25, 2022)

I find it humorous that, despite it being a joint NVIDIA/Intel design, even Intel didn't use it for their higher-end Arc cards, despite there being "lower power" versions of the 12-pin connector.

That said, why not just shift to EPS12V? Higher-end 1 kW+ PSUs can power two or more EPS12V lines by default (depending on the modular options), and EPS12V 4-pin can handle 155 W continuous while the 8-pin can handle 235-250 W continuous. It would still require three 8-pin connectors for 600-700 W cards, but at least the output per 8-pin goes up from the PCIe limit of 150 W, and having some PSU makers swap PCIe connectors for extra EPS12V isn't much different from designing ATX 3.0 PSUs with a dedicated but potentially faulty 12VHPWR connection. If anything, Seasonic's higher-end PSUs can do either EPS12V or PCIe from the same modular port, so it could be adapted quickly. And most importantly, all EPS12V wires and contacts are of a thicker gauge than the 12VHPWR wires and contacts.


----------



## erocker (Oct 25, 2022)

Dirt Chip said:


> AMD will need to fallow this power standard design if they want to stay competitive in the high-end.


Not in the slightest.


----------



## Dave65 (Oct 25, 2022)

phanbuey said:


> what a horrible design... hoping amd is competitive this time.


This time?
Pretty sure AMD is/has been competitive.


----------



## efikkan (Oct 25, 2022)

Dirt Chip said:


> According to who?
> Because you have any number of examples that work without a problem, and the tiny thing of a long validation process by electrical engineers.
> You know, it *can* be smaller and better. CPUs do that all the time.


So you think that cables can just get smaller and smaller while drawing more and more current?
It's been quite a few years since I went to school, but I didn't catch the news that the laws of physics were broken. So unless someone uses materials with better conductivity, it's very predictable how much heat will be generated by a cable of a given gauge and power draw.
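That predictability is just Joule heating, P = I²R. A rough sketch; the 0.6 m lead length and the even six-way current split are illustrative assumptions, while the resistance figure is the standard value for 16 AWG copper:

```python
# Joule heating in one 16 AWG conductor of a 12VHPWR cable at the full
# 600 W rating. 0.0132 ohm/m is the standard 16 AWG copper figure;
# the 0.6 m length and even six-way split are illustrative assumptions.
OHM_PER_M_16AWG = 0.0132
length_m = 0.6
current_a = (600 / 12.0) / 6       # 50 A total over six +12 V conductors

resistance = OHM_PER_M_16AWG * length_m
heat_w = current_a ** 2 * resistance
print(f"~{heat_w:.2f} W dissipated per conductor as heat")
```

Roughly half a watt per conductor is modest in free air, which supports the point being made: an intact cable at full load behaves predictably, and it's the contact interface, not the wire, where things go wrong.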



Aquinus said:


> Maybe instead of making new connectors and by treating the symptom, we should probably invest time into having hardware detect these high resistance situations so a user can take action before stuff starts melting or catching fire. Ultimately this is a state that needs immediate action and even with the best of connectors, something can still go wrong. Regardless of connector, I'd like be aware of this situation should it arise _before it causes damage_.


That sounds very much like treating a symptom rather than solving the underlying cause. Even if the cause was a small increase in resistance, how would you precisely detect this from either the PSU or the GPU end (keeping in mind both the tolerances in the spec of each part and the fact that the current draw changes very rapidly)? You can't just do this the way a multimeter would, by sending a small current and measuring the voltage drop to calculate resistance. Are there more advanced (and reliable) techniques, beyond my knowledge of electronics, that would make your proposal feasible?

I'm more a fan of doing good engineering to create a robust design than of overengineering a complex solution to compensate for a poor one.

The one thing that doesn't sound quite right to me is the claim that the whole problem is cables that are not fully seated causing extreme heat that melts the plug, and I would like to see a proper in-depth analysis of this rather than jumping to conclusions. From what I've seen of power plugs (of any type/size) over the years, heat trouble at the contact area is unusual, unless we are talking about making no connection at all and causing arcing, but that's with higher voltages. With 12 V and this wire gauge, the threshold between "good enough" and no connection will be very tiny, probably less than 1 mm. So if this were the core problem, then engineering a solution would be fairly easy: make the cables stick better in the plug, or make the contact area a tiny bit larger. Keep in mind that with most types of plugs the connection is usually far better than the wire, so unless it's physically damaged there shouldn't be an issue with connectivity. (Also remember that the electrons move on the outside of the wire, and the contact surface area in most plugs is significantly larger than the wire gauge, probably >10x.)
So the explanation so far sounds a little bit off to me.



MachineLearning said:


> This article's tone is pretty condescending. It doesn't take "arm wrestling" to make the connector burn up, it's just poorly designed.


Even I, with my thick fingers, would probably manage to do this unintentionally.
I would call this poor engineering, not user error.


----------



## TheoneandonlyMrK (Oct 25, 2022)

efikkan said:


> So you think that cables can just get smaller and smaller while drawing more and more current?
> It's been quite a few years since I went to school, but I didn't catch the news that the laws of physics were broken. So unless someone uses materials with better conductivity, it's very predictable how much heat will be generated by a cable of a given gauge and power draw.
> 
> 
> ...


I agree, but not because the parts are in any way bad, IMHO; though with my limited experience that counts for even less than its usual nothing.

But better positioning, or an adequate adapter, should have been an essential, free, provided add-in.


----------



## Wirko (Oct 25, 2022)

How long before the industry discovers that 12 volts is far too low a voltage, for GPUs and CPUs alike?
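The case for a higher rail voltage is easy to quantify: at fixed power, current scales inversely with voltage, and conductor heating falls with the square of the current. A quick sketch (the 48 V figure is purely illustrative, borrowed from server/telecom practice, not anything the ATX spec proposes for consumer GPUs):

```python
# For a fixed 600 W load, higher supply voltage means proportionally
# less current; I^2*R heating falls with the square of that current.
# 48 V is an illustrative comparison point, not a proposed standard.
POWER_W = 600
for volts in (12, 48):
    amps = POWER_W / volts
    print(f"{volts} V: {amps:.1f} A, relative I^2 loss x{(amps / 50) ** 2:.2f}")
```

Quadrupling the voltage cuts the current to a quarter and the resistive loss to a sixteenth, which is why the question keeps coming up as GPU power climbs.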


----------



## Sisyphus (Oct 26, 2022)

Electrical plug connections should not be exposed to any mechanical stress. The consequences: poor contact and increased resistance, broken cables, cracks in the solder joints. Whoever does not know this should not build PCs.


----------



## Minus Infinity (Oct 26, 2022)

Well done, Huang; I hope this costs a bomb to re-engineer. The RDNA3 7900XT is looking better every day. I don't really care about the numbers; I know it will destroy RDNA2 and Ampere and my 2080 Super and 1080 Ti, so that is all that matters to me. I don't need it to outdo the 4090 at all. With RT improved by over 100%, and with FSR, it'll be fine for anything I can throw at it.


----------



## JAB Creations (Oct 26, 2022)

The comments count is accompanied by a fire emoji, how appropriate.


----------



## QUANTUMPHYSICS (Oct 26, 2022)

So who wants to fix this by building a hardened, angled adapter?


----------



## LabRat 891 (Oct 26, 2022)

Pretty standard advice, but you wouldn't know that unless you've wrangled a lot of SATA, coaxial, fibre, or other bend-radius-sensitive fine-pitch cabling.

All the little oversights in moving to this new connector exemplify and confirm my 'feelings' about modern engineering at large: practical considerations are set well below 'the book'.
Who cares, right? Not like they're liable for damages; that's on the manufacturer and end user...


----------



## mechtech (Oct 26, 2022)

Proper 4090 cable and connectors


----------



## medi01 (Oct 26, 2022)

So, it's NV's f-up, top to bottom.
Intel doesn't deserve even part of the blame:









NVIDIA Confirms 12-pin GPU Power Connector (www.anandtech.com)





> NVIDIA states in the video that this 12-pin design is of its own creation.



PCI-SIG analysis (prior to the meltdowns):

__ https://twitter.com/i/web/status/1584950589393293312


----------



## Crackong (Oct 26, 2022)

I don't think anyone on the AMD side thought this would become a selling point.


----------



## gasolina (Oct 26, 2022)

I would rather have three or four 8-pins than the shitty Nvidia 12-pin connector.


----------



## Razrback16 (Oct 26, 2022)

Terrible design. They put out a card so huge that it literally has issues fitting into a number of cases, and then designed a power cable for it so unreliable that it actually has a connect/disconnect limit and, furthermore, is a fire hazard.

As some others said, great opportunity for AMD to potentially capitalize on.


----------



## Arkz (Oct 26, 2022)

In the various batteries and EV rides I've built I use 5.5 mm bullet plugs; they can handle pretty high currents. They should have done something like an XT120 connector, with a sense pin as an optional extra for high-powered PSUs and cards. Given that the pins in these things are still pretty weak and not rated that high, there were bound to be issues on cards that can pull so much power. The connector should never have been rated for anything more than 450 W continuous.

I recall someone pointing out that the pins are rated for about 8 amps each or something similar, and at 600 W they'd be carrying more than that. So right off the bat, having cards that can pull more than the connectors are rated for is just bad. They probably thought it would only matter during spikes, with the card pulling nowhere near its limit the rest of the time, forgetting about people playing with unlocked frame rates or rendering for hours on end.
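The per-pin figure recalled here checks out against the connector's basic geometry. A sanity-check sketch; the 8.5 A/pin rating used below is the commonly cited figure for this terminal family and should be treated as an assumption, not a datasheet quote:

```python
# Current per pin for a 12VHPWR connector at its 600 W rating: twelve
# power pins, six carrying +12 V (the other six are returns), so each
# +12 V pin carries one sixth of the total current.
# RATED_A_PER_PIN is an assumed figure, not quoted from a datasheet.
TOTAL_W = 600
VOLTS = 12.0
PINS_PER_RAIL = 6
RATED_A_PER_PIN = 8.5

total_a = TOTAL_W / VOLTS              # 50 A overall
per_pin_a = total_a / PINS_PER_RAIL    # ~8.3 A per pin
print(f"{total_a:.0f} A total, {per_pin_a:.2f} A per pin "
      f"({per_pin_a / RATED_A_PER_PIN:.0%} of assumed rating)")
```

Running at essentially 100% of the per-pin rating leaves no margin for a pin that isn't seated squarely, which is the scenario the whole thread is about.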



gasolina said:


> I would rather have three or four 8-pins than the shitty Nvidia 12-pin connector.


It's not Nvidia's connector, as has been pointed out many times.



medi01 said:


> So, it's NV's f-up, top to bottom.
> Intel doesn't deserve even part of the blame:
> 
> 
> ...


It's the same connector minus the sense pins; instead, NV put a sense chip in the wire to tell the GPU how many cables are connected.

The connector was always going to be fine for lower-powered stuff; I had no issues with my 3080 FE roaring away for hours on end.

Yes, it is NV's fault though: they're running too many amps through these pins, so, as the picture shows, without a solid connection they will fry. Even with a solid connection they're going higher than the pins are rated for at the full 600 W. They're basically Dupont-style connectors on the wire end of the port, and those have never been rated that high compared to loads of other connector types. But they are cheap as chips compared to anything slightly more solid. The 4080 should have no issues with it; it's the 4090's peak power draw that's the main issue, and really NV should have had that card alone use a 16-pin connector instead to spread the load more, or two 12-pins to ensure no issues. 600 W is just beyond what's safe for those tiny pins to handle; that's 50 amps spread across those teeny tiny things.


----------



## Bwaze (Oct 26, 2022)

LabRat 891 said:


> Pretty standard advice. But, you wouldn't know that unless you've wrangled a lot of SATA, coaxial, fibre, or other 'bend radius'-sensitive fine pitch cabling.
> 
> All the little oversights in moving to this new connector exemplify and confirm my 'feelings' on modern engineering at large. -practical considerations are set well below 'the book'.
> Who cares, right? Not like they're liable for damages, that's on the manufacturer and end user...


A lot of people place the blame solely on the end user, even though Ada cards are uncommonly wide, most PC cases weren't made with them in mind, and the new adapter cable makes the problem worse with its long, stiff "strain relief".


----------



## Makaveli (Oct 26, 2022)

ThrashZone said:


> Hi,
> Yeah but you have to laugh at someone buying a 1600.us+ gpu and trying to put it in a midtower


That would easily fit in my mid tower


----------



## jonnyGURU (Oct 26, 2022)

Dirt Chip said:


> This is not NV idea, it`s a new general standard.
> 4*8pin is not better, I think it`s even worse.
> Using the 12VHPWR to 4*8pin adapter makes life harder.
> AMD need to also adopt the 12VHPWR but to better position it on the GPU.
> ...





Nihillim said:


> Wasn't this a collaborative effort between Intel and PCI-SIG?



This wasn't an Intel idea. This was an Nvidia idea that was passed through the PCI-SIG consortium and got approval.  Intel only added it to the ATX spec AFTER it was passed through PCI-SIG.

The connector works fine in most cases.  But there are caveats (don't bend before 30mm, etc.).

Remember, this is the same connector as the 30 series FE.  That was only 450W.  

I think the problems started coming up when Nvidia said "hey look... if you use a 600 W cable you can clock the 4090 card higher", which is why I went on the profanity-laden rant that got me exiled from GamersNexus (because he chose to quote me out of context instead of addressing the actual issues, since he had Nvidia in house at the time).

I have personally used this connector upwards of 55 A at 50°C. But you CAN NOT put a bend on the cable within 30 mm of the connector, or run it hotter than 50°C. This is shown in the PCI-SIG report that was leaked, but it only talked about the connector on the PSU side (which is why Corsair ATX 3.0 PSUs don't have the 12VHPWR connector on the PSU) and never about the GPU side.


----------



## OneMoar (Oct 26, 2022)

if your connector has restrictions on how you plug in a connector then the connector is bad
example: there is no possible way I could use a 12VHPWR connector in my case. there isn't room; there is barely room for the two 8-pins on my 3070 Ti, and those make a hard bend to clear the side panel
to get a safe bend while keeping the last 1.2 inches straight, the cable needs to make a massive ARC, and nobody wants that

nvidia needs to revise this connector asap, maybe use a longer pin or a different-shaped pin, T- or +-shaped
for right now, right-angle adapters seem to be the solution; hopefully more psu vendors will make 90°/L-shaped adapters
edit: reddit has once again done nvidia's job for them. fixed
we can close this thread now


----------



## docnorth (Oct 26, 2022)

Maybe it was designed in collaboration with NZXT...
Jock aside both sides are to blame. The user of *the* top GPU (at least for the moment) should avoid stressing a 600w cable or connector. On the other hand Nvidia knows that this huuuge card will be difficult to fit in many cases and *must* put the needed clearance as a red flag in the specs. It's another 20% added height on a 4 slot GPU, higher than most tower coolers...


----------



## noname00 (Oct 26, 2022)

Why is the power connector on top of the card with these cards that are extremely tall? Putting the connector on the back at least would remove the need of a sharp bend. And I would like that even for cards with multiple 8 pins connectors.


----------



## Darller (Oct 26, 2022)

OneMoar said:


> *if your connector has restrictions on how you plug in a connector then the connector is bad*


Damn... polarized AC outlets are gonna screw up your worldview.  What a stupid thing to say.


----------



## Bwaze (Oct 26, 2022)

noname00 said:


> Why is the power connector on top of the card with these cards that are extremely tall? Putting the connector on the back at least would remove the need of a sharp bend. And I would like that even for cards with multiple 8 pins connectors.


RTX 4090 cards are also extremely long, so a connector with a long, stiff stress relief and cables you shouldn't bend too tightly on top of that also excludes a lot of cases.

I think none of this mess would have happened if they had made the adapter with a 90-degree bend. I know it's not practical for those who mount their cards vertically, but they are a minority, and they already have to buy a riser cable, so they can invest in another not-strictly-necessary nice cable.


----------



## medi01 (Oct 26, 2022)

Arkz said:


> Yes it is NVs fault though, they're running too many amp


They:

1) Designed that shit (Intel designed the sensing pins only)
2) They have tested it and figured it is HIGHLY PROBLEMATIC (see PCI SIG report in my previous post)
3) They STILL found it OK to push it out to the market


jonnyGURU said:


> The connector works fine in most cases. But there are caveats (don't bend before 30mm, etc.).


Except it doesn't, even per the docs submitted to PCI-SIG by NV itself.


----------



## Dirt Chip (Oct 26, 2022)

medi01 said:


> There is no "general standard" of "ship a home-made adapter that cannot fit properly in 93% of the cases".
> 
> This issue is absolutely NV's creation and doesn't have anything to do with 12 pin socket.
> 
> IF NV was too greedy for a proper 90 degree angle adapter, it could have located the socket differently.


I mostly agree: NV implemented the new power standard, and the position plus the adapter make it very space-constrained.
But NV didn't invent anything, they just adopted it.



erocker said:


> Not in the slightest.


Maybe not this gen, but it's the way forward for high-end, high-wattage GPUs at least. I just hope they will learn from NV's (not so good) way of implementing this new standard.


----------



## medi01 (Oct 26, 2022)

Dirt Chip said:


> But NV didn't invented anything, they just adopted it.


No. Per their own words (links have been shared several times), it was SPECIFICALLY NVIDIA that designed the power-delivery aspect of that socket. Intel only did the sensing part.


----------



## Readlight (Oct 26, 2022)

Looks like my home input power line connection.


----------



## Arkz (Oct 26, 2022)

medi01 said:


> They:
> 
> 1) Designed that shit (Intel designed the sensing pins only)
> 2) They have tested it and figured it is HIGHLY PROBLEMATIC (see PCI SIG report in my previous post)
> ...


It's still perfectly fine for lower-current cards. Again, it's been in use for two years already: the 3080, 3090 and 3090 Ti have had no problems with it. It's only now, with the 4090, that it's an issue. If you look at PCI-SIG's test showing the failure, that's when drawing 55 A constant; that's 660 W for it to fail in their test. And Nvidia may claim they made it, but it's just a Molex Micro-Fit 3.0 BMI dual-row header.


----------



## medi01 (Oct 26, 2022)

Arkz said:


> It's still perfectly fine for lower current cards


Which need it as much as birds need pig tails



Arkz said:


> 3080, 3090, 3090Ti have had no problems with i


Yeah. Why do you think that could be:


__ https://twitter.com/i/web/status/1584950589393293312
let alone the size...



the54thvoid said:


> to bash Nvidia as though it's their fault.


How is "it" not NV's fault???

Who designed that thing? NV (no, not Intel; Intel designed only the sensing pins).
Who KNEW from testing that it was terrible? NV (yes, they even submitted it to PCI-SIG).
Who cheaped out on 90-degree connectors for a GPU that costs 2,000+ Euro and does not fit into 93% of cases?

I'm pretty sure it wasn't my grandma, nor was it Intel or AMD.


----------



## the54thvoid (Oct 26, 2022)

medi01 said:


> Which need it as much as birds need pig tails
> 
> 
> Yeah. Why do you think that could be:
> ...



Reading comprehension failure. Go and reread exactly what I posted about mating cycles (not being an Nvidia thing). Then read the part about shitty bending angles where I say it is a problem.

Then post in context.


----------



## docnorth (Oct 26, 2022)

QUANTUMPHYSICS said:


> So who wants to fix this by building a hardened, angled adapter?


After that users will complain the GPU won't fit...


----------



## Veseleil (Oct 26, 2022)

mechtech said:


> Proper 4090 cable and connectors
> 
> View attachment 267221


Or one of these:


----------



## jonup (Oct 26, 2022)

Arkz said:


> In the various batteries and EV rides I've built I use 5.5mm bullet plugs, they can handle pretty high currents. They should have done something like an XT120 connector with a sense pin as an optional extra for high powered PSUs and cards. Given the pins in these things are still pretty weak and not rated that high, there were bound to be issues on cards that can pull so much power. The connector should never have been rated for anything more than 450w constant.
> 
> I recall someone pointing out that the pins are rated for about 8 amps each or something similar, and at 600w would be more than that. So right off the bat having cards that can pull more than the connectors are rated for is just bad. They probably thought it would just be for spikes and for the rest of the time the card would be pulling no where near its limit, forgetting people playing with unlocked framerates, people rendering stuff for hours on end.
> 
> ...


My math doesn't agree with your conclusion. Assuming you are correct that each pin is good for 8 amps, 12 pins at 12 volts are good for 1152 W. At 600 W each should be handling a little over 4 amps if the load is evenly spread, which it probably isn't. We still have plenty of headroom though.


----------



## Punkenjoy (Oct 26, 2022)

It's not a problem of power per pin; it's just that the pins aren't secured enough. If they had the same setup but with a more secure socket for the pins, there would be no issue.

At those wattages (300 W+), you need to make sure your socket is secured and held in place properly. This is just too flimsy. Redo the same setup with something that locks the connection in place properly and everyone would be fine.


----------



## rv8000 (Oct 26, 2022)

Does this make anyone else wish that all PCIe GPU power connectors were right-angle connectors in the first place?

Cable training/management has always put some kind of force on GPUs in opposition to the PCIe slot in my experience, especially in smaller mid-tower cases and SFF builds.

I'm guessing there's a good reason they haven't. I wonder if even changing the orientation of the PCIe plug would have been a better solution in the long run (vertically off the PCB in either the up or down direction, like a motherboard power connector). Seems like an odd limitation to stand by when I think about it.


----------



## Punkenjoy (Oct 26, 2022)

rv8000 said:


> Does this make anyone else wish that all pcie gpu based power connectors where right angle connectors in the first place.
> 
> Cable training/management has always put some kind of force on gpus in opposition to the pcie slot in my experience, especially in smaller mid tower cases and sff builds.
> 
> Im guessing there’s a good reason they haven’t. I wonder if even changing the orientation of the pcie plug would have been a better solution in the long run (vertically off the pcb in either up/down direction like a motherboard power connector). Seems like an odd limitation to stand by when I think about it.



Yes! This. But the focus was always on getting the best cooler, and that position limits airflow and radiator size the least. Now that they have massively oversized coolers they could do something better, but with a 4-slot GPU it's quite a reach to plug in your card if the connector sat flat on the board instead of at a 90° angle (unless they made it extra long).

Having it face the top is another story: you would need a mezzanine card to keep it from sticking out of the back of the card. Not impossible, but way more complex, and again they would need to secure it in place to avoid frying your board.

But god, that would look way better than this.

The best alternative would probably be the end of the card but, again, due to cooling the PCB no longer extends to the end of the card, since they let air pass through there.

At this point, why not redo the whole PCI-E connector to let it deliver 600+ W and just have a beefier connector on the motherboard?

No simple solution right now.


----------



## OneMoar (Oct 26, 2022)

Punkenjoy said:


> Yes ! this, but the focus was always on getting the best cooler and it's the position that limit the less airflow and radiator size for cooling. Now that they have massively oversize cooler, they could do something better, but if you have a 4 slot GPU, that is quite deep to go plug your card if the connector was flat on the board instead of at a 90° angle. (unless they make it extra long).
> 
> for having it facing the top, it's another story, you would have to have a mezzanine card to prevent it from sticking out on the back of the card. not impossible but way more complex and again, they would need to secure that in place to avoid frying your board.
> 
> ...


because pcbs are complicated and cramped enough without figuring out how to run traces big enough to reliably carry 50A the required distance without causing signaling problems
also, redesigning the pci-e spec because of a single gpu from a single vendor? are you outta your mind


----------



## jonnyGURU (Oct 26, 2022)

Nihillim said:


> Wasn't this a collaborative effort between Intel and PCI-SIG?


No. Not Intel.  They didn't add the connector to ATX 3.0 until after it was finalized by the consortium.


----------



## Jism (Oct 26, 2022)

the54thvoid said:


> This is the same industry-standard cycle as for many molex connectors. i.e., not an issue for the normal end-user.
> 
> View attachment 267011
> 
> Scare-mongering (or lack of due-diligence) isn't helpful when trying to remain a reliable tech site.



I actually noticed that one of my PCI-E cables (8-pin) had a good amount of corrosion. The way I detected it was HWInfo reporting as low as 11 V on the 12 V VRM input rail. That couldn't be good. Once I swapped out the cable it was a clean 12 V again.

So yeah, it's real. They get worn out by the number of times they're installed and removed, I guess at the cost of whatever coating is on top of the metal pins.


----------



## ThrashZone (Oct 26, 2022)

the54thvoid said:


> This is the same industry-standard cycle as for many molex connectors. i.e., not an issue for the normal end-user.
> 
> View attachment 267011
> 
> Scare-mongering (or lack of due-diligence) isn't helpful when trying to remain a reliable tech site.


Hi,
Every PSU I've bought to date included 6 VGA cables.
When will Nvidia include 6 adapters?
Or will PSU makers ship the same 6 cables for new GPUs?


----------



## TheoneandonlyMrK (Oct 26, 2022)

OneMoar said:


> because pcbs are complicated and cramped enough without figuring out how to run traces big enough to reliably carry 50A the required distance without causing signaling problems
> also redesigning pci-e spec because a single gpu from a single vendor ? are you outta your mind


Didn't Apple manage it with Vega?
All on the PCB.

I.e., PCIe-ish, but with an additional power connector.


----------



## SOAREVERSOR (Oct 26, 2022)

"Will it blend?" is now "will it burn?"


----------



## Jism (Oct 26, 2022)

TheoneandonlyMrK said:


> Didn't apple manage it with Vega.
> All on the PCB.
> 
> IE pciex ish but with additional power connector.



Pretty much all the hardware Apple releases is proprietary, so they can design their own standards. They don't have to stick to ATX or PCI-E specs.


----------



## TheoneandonlyMrK (Oct 26, 2022)

Jism said:


> Pretty much all the hardware apple releases, it's proprietary and they can design their own standard pretty much. They dont have to stick to ATX or PCI-E specs.


Does that answer my question with a yes?

I had heard of Apple and their walled garden, so nothing you just said is news to me.

So if it's been done, it could be done again, or did Apple somehow patent inline PCB power connections?


----------



## Redwoodz (Oct 26, 2022)

Very simple: Nvidia overstepped what is acceptable power draw in an ATX PC. This is a design failure.


----------



## Jism (Oct 26, 2022)

TheoneandonlyMrK said:


> Does that answer my question with a yes.
> 
> I had heard of apple and they're walled garden so nothing you just said did I not know.
> 
> So if it's been done, it could be again or did apple patent inline PCB power connection somehow.



Apple is pretty much computers on the go: you buy one and you don't have to look for ways to upgrade or "build it yourself". It's very easy and works pretty much from the go. However, Apple is so different in terms of software that most Windows users couldn't manage themselves on an Apple in the first place.

And inline PCB power isn't something new.





Look up the OAM form factor. It's capable of more than 500 W of power delivery per "card", pretty much. There are servers out there with 8 of those things stacked into them:





Servers that require almost 4 kW of power at full operation.


----------



## TheoneandonlyMrK (Oct 26, 2022)

Jism said:


> Apple is pretty much computers on the go. You buy and you dont have to look for ways to upgrade or "build it yourself" type of thing. It's very easy and pretty much from the go. However apple is so different in terms of software that most windows users coud'nt manage themself on a apple in the first place.
> 
> And inline PCB power is'nt something new.
> 
> ...


Next you'll tell me PCBs were invented for building circuits and that Apple isn't a fruit.

Wtaf, do you really think I asked without knowing this stuff?!

I just wasn't sure if Apple used something like it on the dual-Vega card they had.

And despite two replies, I'm still not 100% sure.


----------



## Jism (Oct 26, 2022)

PCBs have been invented for building circuits.


----------



## Punkenjoy (Oct 26, 2022)

OneMoar said:


> because pcbs are complicated and cramped enough without figuring out how to run traces big enough to reliably carry 50A the required distance without causing signaling problems
> also redesigning pci-e spec because a single gpu from a single vendor ? are you outta your mind


Well, if you use one trace you have to carry the whole 50 A at 12 V, but if you use 12 traces, each only has to carry about 5 A.

As said above, carrying that amount of power on a very complex circuit board is already done on the server side. This is not impossible; very far from it.

The biggest challenge is how you handle the transition to the new standard: cards with 8-pin/16-pin plus board power until enough motherboards have the new GPU slot. Not impossible, but it needs multiple corporations to agree on a timeline.
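To put rough numbers on the one-trace-versus-twelve idea, here is a sketch using the commonly published IPC-2221 external-trace approximation (I = 0.048 · ΔT^0.44 · A^0.725, with A in mil²). The copper weight and allowed temperature rise are assumptions; a real board would need proper thermal analysis:

```python
# Rough external-layer trace width per the IPC-2221 approximation.
# Assumed: 2 oz copper, 20 C allowed temperature rise. Not a
# substitute for a real thermal simulation.
def trace_width_mm(current_a, temp_rise_c=20.0, copper_oz=2.0):
    k = 0.048                              # IPC-2221 constant, outer layers
    area_mil2 = (current_a / (k * temp_rise_c ** 0.44)) ** (1 / 0.725)
    thickness_mil = 1.378 * copper_oz      # 1 oz copper ~ 1.378 mil thick
    return area_mil2 / thickness_mil * 0.0254  # mil -> mm

# one fat trace vs. twelve smaller ones for the same 50 A total
print(f"single 50 A trace: ~{trace_width_mm(50.0):.1f} mm wide")
print(f"each of 12 traces: ~{trace_width_mm(50.0 / 12):.2f} mm wide")
```

Under these assumptions a single 50 A trace comes out over 2 cm wide, while each of twelve ~4 A traces stays under a millimetre, which is why splitting the current across many conductors (as server boards do) is the practical route.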


----------



## Dirt Chip (Oct 26, 2022)

medi01 said:


> No. Per their own words (links have been shared several times) it was SPECIFICALLY NVIDIA that designed power delivery aspect of that socket.  Intel only did sensing part.


Well, if it's only NV's work then OK, they are responsible from top to bottom. I really thought it was part of the ATX 3.0 spec.

Anyway, it would still be interesting to know the details (if there are any) of how it was connected: was it fully "clicked" on? Were the wires bent, and if so, was any force applied to the connector that made it tilt as a result?

Unless we see multiple incidents of non-bent yet melted connectors, it can be classified as simple 'human error'.

Also, I can totally see this being a deal-breaker to some and, more than that, the holy grail of bashing ammo.
The Samsung exploding-battery kind of thing.

All of this and the 4090 Ti, with its 525 W stock, is yet to come. Yummy!


----------



## Godrilla (Oct 26, 2022)

Not going to risk it and allow the cable to be straight with an open case; I had the cables tucked away with a slight curved bend in my H210 ITX case. Can someone do a thermal test directly on the cables, with and without a bend, at max load, obviously for a short period of time?


----------



## TheoneandonlyMrK (Oct 26, 2022)

Godrilla said:


> Not going to risk it and allow the cable to be straight with open case had the cables tucked away with slight  curved bend with my h210 itx case.  Can someone do a thermal test directly on cables with and without bend at max load obviously for short period of time?


Have a word with yourself.

You imply you have one(4090).

Why would anyone, though, other than a journalist, do this to their £1,600+ purchase,

Just to see.


----------



## Kissamies (Oct 26, 2022)

Nice. The cards are already huge bricks, and yet they still need even more space at their sides? It wasn't that bad back when the power connector(s) were at the back of the card, like in the old days..


----------



## Godrilla (Oct 26, 2022)

TheoneandonlyMrK said:


> Have a word with yourself.
> 
> You imply you have one(4090).
> 
> ...


An ITX case has its challenges, and I game with headphones on, so it doesn't bother me. I school the journalists.


----------



## Arkz (Oct 27, 2022)

jonup said:


> My math doesn't doesn't agree with your conclusion. Assuming you are correct that each pin is good for 8amp, 12pins at 12volts are good for 1152W. At 600w they should be handling a little over 4amps if the load is evenly spread, which it probably isn't. We still have plenty of headroom though.


You're counting all of that power going in through 12 pins; there are 6 12 V pins sharing the 50 A, then 6 ground pins returning it.


----------



## Crackong (Oct 27, 2022)

The horror has a face - NVIDIA’s hot 12VHPWR adapter for the GeForce RTX 4090 with a built-in breaking point | igor'sLAB (www.igorslab.de)
Igor just confirmed that the "Built by Nvidia" adapter is of such high quality that the wires are held (only) by solder on a very, very thin piece of metal, and can be broken with very little force.
The card doesn't even know one of the solder joints is broken, because all the pins are joined together inside the adapter, so the pin never "disconnects"; the load just spreads to the other wires and pumps up the amps (and wire temps), and it keeps going until it melts.

What a wonderful design!
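Rough numbers for that failure mode, assuming an even 50 A split (600 W at 12 V) and the ~8 A per-terminal figure quoted earlier in the thread:

```python
# When a solder joint lets go, the adapter keeps "working": the
# surviving +12 V wires silently pick up the load. Assumes an even
# 50 A split (600 W at 12 V) and the ~8 A per-terminal figure
# quoted earlier in the thread, both simplifications.
TOTAL_A = 50.0
RATING_A = 8.0

def amps_per_wire(intact_wires, total_a=TOTAL_A):
    return total_a / intact_wires

for wires in (6, 5, 4, 3):
    a = amps_per_wire(wires)
    note = " <- over the quoted rating" if a > RATING_A else ""
    print(f"{wires} intact +12 V wires -> {a:.1f} A each{note}")
```

Even with all six wires intact the per-wire current is marginal at 600 W; lose one joint and the survivors jump to 10 A each, two and they're at 12.5 A, with nothing telling the card anything is wrong.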


----------



## jonnyGURU (Oct 27, 2022)

Yeah.  This is bad.  Really bad.  

Welp! Connectors from Corsair, beQuiet, CableMod, etc. don't use this method; they use a standard crimp. So... buy them up, guys!


----------



## TheDeeGee (Oct 27, 2022)

Glad this is sorted out then.

Now I'd like to know how CableMod does their cables.


----------



## OneMoar (Oct 27, 2022)

this is so incredibly stupid
a 1500-dollar card and they are using bonded tin on the connector

that being said, if I were an AIB I would be installing a thermal probe on the connector to ensure the thing throttles or shuts down if it starts getting hot. set it at 50-60°C; if the connector ever gets that hot, there is a problem
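As a sketch of what that throttle/shutdown logic could look like, here is a minimal hysteresis state machine. The thresholds, the latched shutdown, and the states themselves are assumptions for illustration, not anything from a real VBIOS:

```python
# Hypothetical connector-temperature cutoff with hysteresis: throttle
# at 50 C, latch a shutdown at 60 C, and only return to normal once
# the connector has cooled well below the throttle point, so the
# state doesn't chatter around a single threshold.
THROTTLE_C = 50.0
SHUTDOWN_C = 60.0
RESUME_C = 45.0

def next_state(state, temp_c):
    if temp_c >= SHUTDOWN_C or state == "shutdown":
        return "shutdown"            # latched until a power cycle
    if temp_c >= THROTTLE_C:
        return "throttled"
    if state == "throttled" and temp_c > RESUME_C:
        return "throttled"           # hysteresis band: keep throttling
    return "normal"
```

The hysteresis band (45-50°C here) is the important part: without it, a connector hovering at the threshold would flip between full power and throttled every polling cycle.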


----------



## TheDeeGee (Oct 27, 2022)

So there are probably going to be recalls then.

Adapter gate? NVIDIA briefs all board partners this morning and makes damage an absolute boss issue | igor'sLAB (igorslab.de)


----------



## Mussels (Oct 27, 2022)

This is quite the fail


----------



## LabRat 891 (Oct 28, 2022)

Arkz said:


> ... there's 6 12v pins sharing 50A, then 6 ground pins returning.


Gotta wonder why Intel, nVidia, etc. didn't move to an insertable PCB-type connector.
Reading the layout of the new 12-pin in text reminded me of the 1.2 kW 12 VDC PSUs I have that interface all that amperage over exposed traces on a single long PCB 'finger'.
I'd be concerned about insertion/removal life, but this 12-pin jobby is already limited to 30 cycles by spec.


----------



## OneMoar (Oct 28, 2022)

so I was right: the bonded connector was the problem
I love knowing everything
who wants to celebrate my impending godhood with me

I promise a place by my side to whoever supports my ascension


----------



## jonnyGURU (Oct 28, 2022)

LabRat 891 said:


> Gotta wonder why Intel, nVidia, etc. didn't move to an insertable PCB type connector?
> Reading the layout of the new 12-pin in text reminded me of the 1.2kw 12VDC PSUs I have that interface all that amperage over exposed traces on a single long PCB 'finger'.
> I'd be concerned about insertion/removal life, but this 12-pin jobby is already limited to 30 cycles, by spec.


nVidia and Dell. Not Intel. Intel's a member of the PCI-SIG, of course, but the 12VHPWR spec was sponsored by Nvidia and Dell.


----------



## Totally (Oct 28, 2022)

”8pInZ bUrN uP t00” folks  where y'all at?






So are you all implying 8-pins too have this exemplary build quality?

Seriously, that is insane. How did something like that even make it off paper?


----------



## RJARRRPCGP (Oct 28, 2022)

The limit on disconnections and reconnections reminds me of what Intel said (or reportedly said) for LGA sockets, making some users panic just over swapping CPUs!

Makes me wonder if changing back to the Q9450 (on my Asus Maximus II Gene) will make the socket go bad. Sigh.


----------



## jonnyGURU (Oct 28, 2022)

Totally said:


> ”8pInZ bUrN uP t00” folks  where y'all at?
> 
> 
> 
> ...



The 8-pin side is just as bad.

They take one or two +12 V wires and one ground wire for each 8-pin and solder the wires across the four terminals, left to right. Some of the +12 V leads go to the PCB to power the IC. So while you have four 8-pin connectors, at the end of the day you still only have 6 +12 V and 6 ground wires when all four are plugged in.


----------



## efikkan (Oct 28, 2022)

Crackong said:


> Igor just confirmed the Built by Nvidia adaptor has such a high quality, that the wires are held (only) by soldering onto a very very thin piece of metal , and can be broken with very little force.
> The card doesn't even know one of the soldering is broken, cause all the pins are joint inside the adaptor so the pin didn't "disconnected" , the load just spread to other wires and pump up the Amps (and wire temps) , it just keeps going until it melts.
> 
> What a wonderful design  !


This explanation makes far more sense. The original theory of pins not making full contact sounded strange to me (in #97).

I hope this discovery leads to extensive tests of all such plugs and cables on the market, in case there are even more design flaws.

But I would still prefer it if this standard were abandoned. I think the margins of error are a bit too low, but that's my opinion.

60°C still sounds a little hot to me. What is a comparable result for an 8-pin under full continuous load?


----------



## jonnyGURU (Oct 28, 2022)

efikkan said:


> This explanation makes far more sense. The original theory of pins making full contact sounded strange to me (in #97).
> 
> I hope this discovery leads to extensive tests of all such plugs and cables on the market, in case there are even more design flaws.
> 
> ...


The "theory" of pins losing connectivity isn't strange at all. While they don't come dislodged in an up-or-down direction (referred to in the technical documentation as North and South), East or West bends do cause the terminals to go cock-eyed inside the housing, which increases resistance on that terminal, forcing the current to take the path of least resistance through the others instead. This causes those terminals to eventually burn up. This is why Corsair decided NOT to put 12VHPWR connectors on their ATX 3.0 PSUs and instead continues to use a 2x 8-pin to 1x 12VHPWR cable: the typical use case for a PSU is at the bottom of the case, inside a shroud, with the necessity to make an "Eastern" bend out of the shroud to go up the back of the motherboard tray. And given the "hidden" placement of today's PSUs within shrouds, such damage will likely go unnoticed.

Yes.  60°C is high.  But the spec for the connector is actually 70°C.  

Typical mini-fit jr. connectors are rated at 65°C.  But they also don't tend to get as hot because they're not as "dense" as the 12VHPWR connector's.
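The path-of-least-resistance effect described above can be sketched as a parallel current divider; the contact resistances below are illustrative milliohm-scale values, not measurements:

```python
# Parallel contacts split current in proportion to conductance, so
# when one terminal goes cock-eyed and its resistance rises, the
# remaining pins absorb its share. Resistances are illustrative,
# not measured values.
def split_current(total_a, resistances):
    conductances = [1.0 / r for r in resistances]
    g_sum = sum(conductances)
    return [total_a * g / g_sum for g in conductances]

healthy = split_current(50.0, [0.005] * 6)            # ~8.33 A per pin
degraded = split_current(50.0, [0.05] + [0.005] * 5)  # one contact at 10x R

print([round(a, 2) for a in healthy])
print([round(a, 2) for a in degraded])  # good pins now carry ~9.8 A each
```

With one contact at ten times the resistance, it carries under 1 A while each of its five neighbours climbs to roughly 9.8 A, which is how a single bad terminal overloads the rest without any pin actually disconnecting.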


----------



## Tropick (Oct 28, 2022)

jonnyGURU said:


> But they also don't tend to get as hot because they're not as "dense" as the 12VHPWR connector's.


Sounds like this 12VHPWR connector is proving to be pretty dense in a whole bunch of different ways.


----------



## mechtech (Oct 28, 2022)

Can we get a sign like this?? Except with "Do not bend the 16-pin cable"?


----------



## Totally (Oct 29, 2022)

jonnyGURU said:


> The 8-pin side is just as bad.
> 
> They take one or two +12V wire and one ground wire for each 8-pin and solder the wire across the four terminals, left to right.  Some of the +12V leads go to the PCB to power the IC.  So while you have four 8-pin connectors, at the end of the day you still only have 6 +12V and 6 ground when all four are plugged in.



Yeah, but that's happening on the PSU side, not on a tiny little bit of real estate at the load-side connector.


----------



## Mussels (Oct 29, 2022)

This all greatly reminds me of why I tell everyone not to use any adapters or extensions on modern GPUs.

These melted on an undervolted 3090, locked to under 250 W.

Native direct cables, or risk a fire. Nvidia was dumb to force this new connector so early; they just didn't want the stigma of 4x 8-pin connectors on these GPUs.


----------



## OneMoar (Oct 29, 2022)

the kicker is you don't need 4x 8-pin
you only need two; the 8-pin connector and 80% of PSUs are perfectly capable of delivering 260 W per connector (and that's being conservative)
it's just that nobody wanted to test their units for that
+ you still have idiots that put two 8-pin connectors on a single lead; if they just did away with that we would have plenty of power for even an overclocked card

what we need is a proper solid PCB adapter with heavy-duty power and ground planes and a big fat chunk of copper to dissipate heat

I would also again suggest that AIBs put temp sensors in the connector area so if it starts getting toasty there is no fire; this is a good idea regardless of the quality of the cable or adapter, and negligible cost, as most IC controllers already have the capability

I'd also suggest changing the connector on the GPU side to ABS so that in the event of a failure it doesn't smoke a $1,500 card


----------



## Mussels (Oct 29, 2022)

I mean, WTF, why isn't there a solid right-angled connector from the very beginning?


----------



## medi01 (Oct 29, 2022)

mechtech said:


> Except with "Do not bend the 16-pin cable"?


Yeah. Next to those 93% of PC cases in which it is impossible to fit the card without bending the said adapter.

Brilliant idea, that will completely unf*ckup the f*ckup.


__ https://twitter.com/i/web/status/1586196490422296577


----------



## TheDeeGee (Oct 30, 2022)

I check the official Nvidia reddit daily now, and there are at least 2 reports every 24 hours with pictures of newly melted adapters in various stages. How there hasn't been a recall yet is beyond me.


----------



## ARF (Oct 30, 2022)

TheDeeGee said:


> I check the official Nvidia reddit daily now, and there are atleast 2 reports every 24 hours with pictures of new melted adapters of various stages. How there hasn't been a recall yet is beyond me.



Bizarre.


> Nvidia is calling for all partner-manufactured GeForce RTX 4090 boards affected by the melting power plug problem we’re definitely not calling ‘connectorgate’ to be gathered up and returned to HQ. According to a post on Igor’s Lab, a briefing was sent to all AIB partners this morning (27th) that the cards should be shipped back home for analysis, though it’s not clear if they’re being sent to Nvidia or the AIB manufacturer’s nerve center.


Nvidia Calls for Melted 4090 Cards to Be Returned for Analysis | Tom's Hardware (tomshardware.com)


----------



## TheDeeGee (Oct 30, 2022)

ARF said:


> Bizarre.
> 
> Nvidia Calls for Melted 4090 Cards to Be Returned for Analysis | Tom's Hardware (tomshardware.com)


I think I've yet to see a report from an FE card though, so there is that.

That would mean AIBs are at fault as well, if they stray too far from the guidelines.


----------



## Mussels (Oct 30, 2022)

TheDeeGee said:


> I think i've yet to see a report from a FE card though, so there is that.
> 
> That would mean AIBs are at fault as well, if they stray too much from the guidelines.


Look how Ampere launched - we had all sorts of madness come out.

Partly because NVIDIA gave specs that were "okay to use" that turned out to be false, and partly because the board makers did whatever they could to cut costs on custom PCB designs, despite charging more for them.


----------



## Fasola (Oct 30, 2022)

The plot thickens: It appears there are at least 2 types of adapter, a lower quality one (Igor's) and a higher quality one (GN's). GN hasn't been able to trigger a failure with their adapters even after cutting the side cables.


----------



## Ravenas (Oct 30, 2022)

I don't recall a graphics card where my cable bend had to be a certain length or the connector would burn and my graphics card would be damaged. What an absolute mess.


----------



## dj-electric (Oct 30, 2022)

I have ordered parts to assemble my own 12VHPWR cable. I have crimpers and all the lab equipment to prepare and test it. Might update if people care.


----------



## Nater (Oct 30, 2022)

Now I'm a Design Engineer, CAD Tech, Machinist, CAM Programmer, etc etc...but I'm no electrical engineer, yet I see a simple SIMPLE solution staring them right in the face.

Update the PCIe slots and mainboard layouts.  Quit trying to fit the large square peg in the small round hole.  The card is already taking up 4 PCIe slots in most designs, so have it pop into 4 rigid PCIe slots and get yourself 4x 75w of power right off the top without changing much of anything.

And I know some of you will reply "but that will make it too expensive!"


----------



## ARF (Oct 30, 2022)

Nater said:


> but I'm no electrical engineer



Dr. Lisa Su, AMD's CEO, is an electrical engineer - she knows best.

*Radeon RX 7000 Series Won't Use 16-pin 12VHPWR, AMD Confirms* | TechPowerUp


----------



## thegnome (Oct 30, 2022)

Mussels said:


> This all greatly reminds me of why i tell everyone not to use any adaptors or extensions on modern GPUs
> 
> These melted on an undervolted 3090, locked to under 250W
> View attachment 267657
> ...


It all depends on the extensions used; most cheap ones totally aren't good enough for any load, even under the official 150 W. The extensions on my 250 W GPU from CM (also 2x 8-pin) have had zero problems with melting or heat at all.


----------



## Denver (Oct 30, 2022)

Fasola said:


> The plot thickens:



lol AMD should do marketing on top of the problem, "Unfortunately, our GPUs still don't have the flamethrower / self-destruct function, guys"


----------



## ARF (Oct 30, 2022)

Denver said:


> lol AMD should do marketing on top of the problem, "Unfortunately, our GPUs still don't have the flamethrower / self-destruct function, guys"



Err, this is evil - just stop the sales already and return all (working and burnt) cards to Huang.


----------



## HTC (Oct 30, 2022)

I think i figured out the REAL problem: the temperature of Huang's oven wasn't right for some of the cards ...


----------



## hat (Oct 30, 2022)

Nater said:


> Now I'm a Design Engineer, CAD Tech, Machinist, CAM Programmer, etc etc...but I'm no electrical engineer, yet I see a simple SIMPLE solution staring them right in the face.
> 
> Update the PCIe slots and mainboard layouts.  Quit trying to fit the large square peg in the small round hole.  The card is already taking up 4 PCIe slots in most designs, so have it pop into 4 rigid PCIe slots and get yourself 4x 75w of power right off the top without changing much of anything.
> 
> And I know some of you will reply "but that will make it too expensive!"


I can see a number of problems with this. First, you are correct that it would make the design more expensive. There may also be some challenges stacking multiple PCI-E ports on a single card, even if they're only there for power. That would make the card a fair bit more prone to breaking somewhere along all those PCI-E pins, and getting power to the main PCB would be a challenge in itself. Beyond that, you would need to ensure that your motherboard actually has 4 PCI-E slots all lined up in a row to fit such a card. And then you run into the problem of supplying up to 300 W of power from the slots alone, something that burned up 24-pin ATX connectors in the past. We've had a few mentions of exactly that happening here on these forums when people were jamming many cards into a single machine for Folding@Home. You would need motherboards to be specifically designed for this. Take a look at the cryptocurrency mining motherboards that had asstons of PCI-E slots for jamming as many cards into a single machine as possible... they all had tons of extra power connectors to feed the slots.
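The slot-power arithmetic above is easy to verify. A minimal sketch, assuming the PCIe CEM budget of 75 W per slot and delivery on the 12 V rail:

```python
# Sanity-check of the slot-power idea (assumes the PCIe CEM limit of
# 75 W per slot and a 12 V rail; illustrative, not a board-level model).
SLOT_LIMIT_W = 75.0   # per-slot power budget
RAIL_V = 12.0

def slot_power(n_slots: int) -> float:
    """Total power available from n physical slots."""
    return n_slots * SLOT_LIMIT_W

def rail_current(power_w: float, volts: float = RAIL_V) -> float:
    """Current the motherboard planes must carry at the given voltage."""
    return power_w / volts

total = slot_power(4)       # 300 W from four slots
amps = rail_current(total)  # 25 A through the board's 12 V path
print(f"{total:.0f} W -> {amps:.1f} A at {RAIL_V:.0f} V")
```

At 25 A pushed through the motherboard's 12 V path, it's easy to see why the 24-pin connector struggled in those multi-card folding rigs.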


----------



## Sisyphus (Oct 30, 2022)

Technical transition problem. Power consumption has reached the limit of the existing connection structure. NVIDIA wanted to preserve backward compatibility in a highly fragmented, modular system; bad decision. It did not work sufficiently reliably. Adapters are always a bad solution, as every additional mechanical contact brings in new contact resistance and mechanical issues, more than doubling the failure rate.
New standards must be used; old cables and plugs must be replaced. If old power supplies are made compatible using an adapter, the adapters/connectors must be certified, together with a correct installation without bending/mechanical stress on the plug.
600 W is not difficult to connect, as long as you use cables/connectors originally intended for 600 W and follow quality control. In the field of household appliances, 500-1500 W are normal power ranges. No issue if the manufacturer installs all parts. A problem when customers combine hundreds of different plugs and power supplies of different origin, with very limited ability to check the quality before installation.

Typical early-adopter issue. Presumably people will still improvise with the 4090; with the 5090, new power supply units/connectors will be necessary and no guarantee granted when using old cables or old/no-name adapters.
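The contact-resistance point can be put in rough numbers. The resistance values below are illustrative assumptions, not measurements of any real connector:

```python
# Joule heating in a single pin contact: P = I^2 * R.
# Resistances are made-up illustrative values, not measured data.
def contact_heat_w(current_a: float, resistance_ohm: float) -> float:
    """Power dissipated as heat inside one contact."""
    return current_a ** 2 * resistance_ohm

I_PER_PIN = 8.33  # ~600 W / 12 V spread evenly over 6 current pins

good = contact_heat_w(I_PER_PIN, 0.001)  # ~1 mOhm healthy contact
bad = contact_heat_w(I_PER_PIN, 0.010)   # ~10 mOhm degraded contact
print(f"healthy contact: {good:.3f} W, degraded contact: {bad:.3f} W")
```

A tenfold increase in contact resistance means roughly ten times the heat concentrated in that one contact, which is why a single degraded pin can char while its neighbours look fine.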


----------



## Aquinus (Oct 30, 2022)

Mussels said:


> they just didnt want the stigma of 4x 8 pin connectors on these GPUs


Now they have the stigma of melting power connectors instead. Well played, nVidia.


----------



## nangu (Oct 30, 2022)

Crackong said:


> The horror has a face - NVIDIA’s hot 12VHPWR adapter for the GeForce RTX 4090 with a built-in breaking point | igor'sLAB
> 
> 
> Those who are now beating up on the new 12VHPWR (although I don't really like the part either) may generate nice traffic with it, but they simply haven't recognized the actual problem with the…
> ...



Nvidia high quality overengineering!!


----------



## Sisyphus (Oct 30, 2022)

Aquinus said:


> Now they have the stigma of melting power connectors instead. Well played, nVidia.


All companies have the "stigma" of some disappointed users.



nangu said:


> Nvidia high quality overengineering!!


Yes, 4090 is constantly sold out.


----------



## cyberloner (Oct 30, 2022)

don't buy it~!


----------



## nangu (Oct 30, 2022)

Sisyphus said:


> All companies have the "stigma" of some disappointed users.
> 
> 
> Yes, 4090 is constantly sold out.



 for you too!


----------



## arsh666 (Oct 30, 2022)

I think the main problem here is the card. Yes it's fast, but it's a horrible design: it's the size of an Xbox and draws way too much power, which causes other problems like heat, and now a failing power connector. As an electrician, you can do the math: this card will pull around 50-60 amps, and normally for this kind of connection we use #6 copper and lugs. I would be surprised if this power connection meets UL safety standards.
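The amperage claim above can be sanity-checked with simple Ohm's-law arithmetic (assuming a steady 12 V rail and the six 12 V pins of the 12VHPWR connector):

```python
# Current at the connector for a given power draw (assumes 12 V rail).
def total_current_a(power_w: float, volts: float = 12.0) -> float:
    """Total current drawn through the connector."""
    return power_w / volts

def per_pin_current_a(power_w: float, pins: int = 6, volts: float = 12.0) -> float:
    """Current per pin, assuming an even split across the 12 V pins."""
    return total_current_a(power_w, volts) / pins

print(f"{total_current_a(600):.1f} A total, "
      f"{per_pin_current_a(600):.2f} A per pin at 600 W")
```

At the rated 600 W this works out to 50 A continuous, with each tiny pin carrying over 8 A; the spec's brief 200% excursions push those numbers higher still.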


----------



## Sisyphus (Oct 30, 2022)

arsh666 said:


> I think the main problem here is the card, Yes its fast, but horrible design, its the size of a xbox and draws way to much power which causes other problems like heat, and now failed power connector. As an electrician, you can do the math, and this card will pull 60 amps, normally for this type of connection we use #6 copper and lugs to connect. I would be surprised that this power connection meets UL safety standards.


Anyone who buys a 4090 should invest $200-400 more in a new power supply with proper cables/plugs and a midi/big tower with enough space; no adapters. Simple as that.


----------



## ARF (Oct 30, 2022)

> *Update Oct 30th*: Jon Gerow, aka Jonny Guru, has posted a write-up about the 12VHPWR connector on his website. It's an interesting read with great technical info.



I see the following illustration:





But I don't think the bend is that bad. How can a bend exist when the connector has a locking mechanism which makes sure that the male and female parts align ideally?


----------



## Wirko (Oct 30, 2022)

jonnyGURU said:


> which increases resistance on that terminal forcing the current to take *the path of least resistance* instead


That's how we've always been taught. May I suggest a better version, clearer to noobs and more adequate for professionals: _[all] the paths of least resistance_.
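The point about *all* the paths of least resistance can be illustrated with a quick current-divider sketch; the pin resistances below are made-up illustrative values:

```python
# Current does not take *the* path of least resistance: it divides
# across all parallel paths in proportion to their conductance.
def branch_currents(total_a: float, resistances: list[float]) -> list[float]:
    """Split a total current among parallel branch resistances."""
    conductances = [1.0 / r for r in resistances]
    g_total = sum(conductances)
    return [total_a * g / g_total for g in conductances]

# Six healthy 1 mOhm pins share 50 A evenly...
even = branch_currents(50.0, [0.001] * 6)
# ...but if one contact degrades to 10 mOhm, its share shifts onto the rest.
skewed = branch_currents(50.0, [0.010] + [0.001] * 5)
print([round(i, 2) for i in even])
print([round(i, 2) for i in skewed])
```

Note how the remaining good pins end up carrying nearly 10 A each, which is how one bad contact overloads its neighbours.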



Sisyphus said:


> Technical transition problem. Power consumption has reached the limit of existing connection structure.


Looking a bit farther into the future: it's the technical transition to *24 or 48 volts* that seems inevitable in the longer term. 12VHPWR is already obsolete. 12VO for motherboards can carry 288 W, so it is equally obsolete. But the transition hasn't even started yet, because it would break compatibility with all existing PSUs.
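The voltage argument in numbers: for the same 600 W, raising the rail voltage proportionally lowers the current the connector must carry (a simple P = V x I sketch):

```python
# Same power, different rail voltages: I = P / V.
def current_a(power_w: float, volts: float) -> float:
    """Current required to deliver power_w at the given voltage."""
    return power_w / volts

for v in (12, 24, 48):
    print(f"{v:>2} V -> {current_a(600, v):5.1f} A")
```

Going from 12 V to 48 V cuts the current to a quarter, so the same connector geometry would run far cooler, or a much smaller one could carry the load.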


----------



## QUANTUMPHYSICS (Oct 30, 2022)

The REAL ISSUE I see is that PC design has changed and people are building computers for looks and show with form over function.

Secondarily, the power demands of the components have increased and there isn't enough communication between GPU makers, PSU makers and consumers/ modders. 

This is a CATASTROPHE waiting to happen as many younger builders don't understand the dangers of running this much power through their cutesy PC cases without proper grounding, shielding and wire management. 

I think the entire PC layout needs redesigning.


----------



## ARF (Oct 30, 2022)

Wirko said:


> Looking a bit farther into the future: It's the technical transition to *24 or 48 volts*, which seems inevitable in the longer term. 12VHPWR is already obsolete. 12VO for motherboards can carry 288 W and so it is equally obsolete. But the transition hasn't been done yet, and hasn't even started yet, because it would break compatibility with any existing PSUs.



But the GPU die and memory dies run at around 1 volt, which would mean the requirements on the voltage regulators on the PCB would be much higher.


----------



## phanbuey (Oct 30, 2022)

This is just a complete garbage card design.  Card is unnecessarily bulky; the absolutely unnecessary dongle connector is unnecessarily intrusive and protruding at 90 degrees.  And now it's lighting on fire if you bend the cable incorrectly.  What a waste of a phenomenal GPU.


----------



## Sisyphus (Oct 30, 2022)

Wirko said:


> [...]
> Looking a bit farther into the future: It's the technical transition to *24 or 48 volts*, which seems inevitable in the longer term. 12VHPWR is already obsolete. 12VO for motherboards can carry 288 W and so it is equally obsolete. But the transition hasn't been done yet, and hasn't even started yet, because it would break compatibility with any existing PSUs.


Higher voltage or thicker cables/plugs. I am fine with both. 2-3 8-pin plugs will soon be gone for high-end GPUs. Even motherboard and case design will change if 3-4-slot GPUs become normal. But don't overdo it either; the problem only affects consumers who think they need a high-end GPU, which is at best 1-2% of all PCs. It is therefore quite possible that an isolated solution will be developed here.



phanbuey said:


> This is just a complete garbage card design.  Card is unnecessarily bulky; the absolutely unnecessary dongle connector is unnecessarily intrusive and protruding at 90 degrees.  And now it's lighting on fire if you bend the cable incorrectly.  What a waste of a phenomenal GPU.


Without statistical data and a complete error analysis, such conclusions are not possible. Always these dramatizations. Apart from that: thanks to all early adopters.


----------



## RogueSix (Oct 30, 2022)

arsh666 said:


> I think the main problem here is the card, Yes its fast, but horrible design, its the size of a xbox and draws way to much power which causes other problems like heat, and now failed power connector. As an electrician, you can do the math, and this card will pull 60 amps, normally for this type of connection we use #6 copper and lugs to connect. I would be surprised that this power connection meets UL safety standards.



Do you own a RTX 4090? Doesn't sound like it. Because if you did, you would know that the RTX 4090 is a very cool running card. With some exceptions (ASUS ROG STRIX OC) it is also not *that* huge. I own the MSI Suprim X and it's a big card, alright, but by far not as crazy as the stupid clickbait YouTubers would like people to believe. It easily fit in my beQuiet DarkBase 900 (non-Pro) and there was also zero issue putting the side panel back on. I never had to bend the adapter or anything.

The card is a powerhouse but it remains cool exactly *because* of its sizable cooling solution. That is not the issue at all. We will have to wait and see what the official investigations by NVIDIA, the AIC partners and the maker of the adapter(s) turn up. The YouTubers are poking around in the dark for clicks. They cannot be taken seriously. Let's wait for the results of the official investigation...


----------



## TheDeeGee (Oct 30, 2022)

Seems there are 3 different adapters in the wild now.

- 300V with 4 solder joints (Nvidia reddit user)
- 150V with 4 solder joints (IgorsLab)
- 300V with 2 solder joints (GamersNexus)


----------



## phanbuey (Oct 31, 2022)

Sisyphus said:


> Without the statistical data, complete error analysis, such conclusions are not possible. Always these dramatizations. Apart from that: Thanks to all early adopters.


How much statistical analysis do you need to look at that design and realize it’s bad?  Always these apologists trying to defend obviously bad ideas by downplaying.


----------



## arsh666 (Oct 31, 2022)

RogueSix said:


> Do you own a RTX 4090? Doesn't sound like it. Because if you did, you would know that the RTX 4090 is a very cool running card. With some exceptions (ASUS ROG STRIX OC) it is also not *that* huge. I own the MSI Suprim X and it's a big card, alright, but by far not as crazy as the stupid clickbait YouTubers would like people to believe. It easily fit in my beQuiet DarkBase 900 (non-Pro) and there was also zero issue putting the side panel back on. I never had to bend the adapter or anything.
> 
> The card is a powerhouse but it remains cool exactly *because* of its sizable cooling solution. That is not the issue at all. We will have to wait and see what the official investigations by nVidia, the AIC partners and the maker of the adapter(s) turn up. The YouTubers are poking around in the dark for the clicks. They can not be taken serious. Let's wait and see for the results of the official investigation...


Well, I don't own one, but I did own a 3dfx Voodoo 5500, and I remember how that all turned out. I would also like to see NVIDIA do its own investigation into it. I'm sure they will be blaming everyone but themselves. They will come up with a solution, and it will probably be something like a water cooler that sprays water on the power connector once it detects fire.


----------



## Icon Charlie (Oct 31, 2022)

nangu said:


> Nvidia high quality overengineering!!


 BURN BABBY BURN!!!!


----------



## jigar2speed (Oct 31, 2022)

The Quim Reaper said:


> That's alright, if they burn up their card, they're rich, they can just buy another...


This is such a wrong notion; not everyone who purchases an RTX 4090 is rich. Some people literally use their savings to buy high-end products once in a decade, and losing a product like this due to bad design is sad.

Never encourage bad engineering; it's outright sad.


----------



## Mussels (Oct 31, 2022)

Nater said:


> Now I'm a Design Engineer, CAD Tech, Machinist, CAM Programmer, etc etc...but I'm no electrical engineer, yet I see a simple SIMPLE solution staring them right in the face.
> 
> Update the PCIe slots and mainboard layouts.  Quit trying to fit the large square peg in the small round hole.  The card is already taking up 4 PCIe slots in most designs, so have it pop into 4 rigid PCIe slots and get yourself 4x 75w of power right off the top without changing much of anything.
> 
> And I know some of you will reply "but that will make it too expensive!"


That'd be useless?
It'd need entirely new motherboards, CPUs and PCI-E standards.

The slots are 75 W each, so even making the card physically use two slots you're still far short of the 500 W these cards can use.


----------



## Raiden85 (Oct 31, 2022)

TheDeeGee said:


> For the 600 Watt adapter it's the difference between pulling 300 Watt through a cable if daisy chained, or 150 with 4.
> 
> Sure PCI-E 8-Pin is rated for little over 300 Watt, but would you be comfortable with that?
> 
> But i guess some people like to live on the edge.



Corsair and Seasonic are certainly fine with it, as their official 600 W 12VHPWR adapters use just two connectors from the PSU, so 300 W per cable. While 150 W may be the official limit, the connector is just overbuilt if you're using a PSU from a good company.



https://www.corsair.com/uk/en/Categories/Products/Accessories-%7C-Parts/PC-Components/Power-Supplies/600W-PCIe-5-0-12VHPWR-Type-4-PSU-Power-Cable/p/CP-8920284


----------



## Sisyphus (Nov 1, 2022)

phanbuey said:


> How much statistical analysis do you need to look at that design and realize it’s bad?  Always these apologists trying to defend obviously bad ideas by downplaying.


Statistical data are reasonable. "apologists trying to defend obviously bad ideas by downplaying" is emotional and personal. That says more about you than about the product.


----------



## phanbuey (Nov 1, 2022)

Sisyphus said:


> Statistical data are reasonable. "apologists trying to defend obviously bad ideas by downplaying" is emotional and personal. That says more about you than about the product.


Why is it emotional -- it's a fact, and don't take it personally; it's not against you. I'm sure you're a fine person; I don't know you, nor do I have anything against you. I do have an issue with the idea that we need mountains of data to identify a mistake -- we really don't.

Sometimes a badly designed product is obvious, especially when it immediately introduces a failure mode that wasn't an issue in virtually the same design from the last generation. It's not personal or emotional -- it's just a regression. The fact that you're obtusely demanding more "statistical data (because it's reasonable, and not emotional)" on a clearly failing mechanical design is grossly disingenuous. How many need to catch fire before we reach your failure threshold? 1? 5? 5,000? 50,000? All of them? When is a clear design failure clear enough in the historical data?

I'm a data scientist and process engineer by trade, and I love the scientific process as much as the next guy. But if you know anything about industrial design and process engineering, you know that when you see any process or product that can fail catastrophically in the regular course of its usage - that is a 100% fail guarantee - that's something you fix right away. It's not a matter of if, it's just a matter of when. In this case, when you see a dongle whose wires can only be bent a certain way, or whose solder points can break and cause it to melt - NVIDIA is reacting quickly to this because their guys know this, and they know a lawsuit is coming.


----------



## maxfly (Nov 1, 2022)

The real question at this point is how are they going to respond? 
They're taking their sweetass time imo.

My take on the latest internal memo at Ngreedia. 
"We really did engineer a great product...honest! Our design just wasn't followed to our specifications due to a slight oversight. The manufacturing facility is in China but our QC staff is in Taiwan."
"So what your saying is, communication wires got crossed and things went a bit haywire?"


----------



## OneMoar (Nov 1, 2022)

If you are going to use the word "Ngreedia", please don't post at all; you are just thread-crapping (and it makes you sound like you're 15).
Which is my job, and you may not have it.


----------



## Sisyphus (Nov 1, 2022)






phanbuey said:


> [...]I do have an issue with the idea that we need mountains of data to identify a mistake -- we really don't.


1) A few tech blogs are talking about a technical defect. In order to gauge how serious this is, the resulting RMA rate is required as a minimum. If it is above average, measures are taken to reduce the failure rate. If not, no further measures are necessary. The test field/repair/guarantee is there for that, with its precalculated costs. Failure rates of zero are impossible.
2) Technical products must meet technical specifications. Whether these are met or not is the only objective assessment.
3) Economic criteria such as profit margin.
What caused the error? There are dozens of possibilities. An incorrectly produced batch from a subcontractor that was overlooked during the incoming goods inspection is the most common reason for errors in the mass production of complex goods. Bad plug design could be the cause, but it is unlikely, as the plugs have to fulfill the technical specifications. It is not possible to make a qualified judgment without doing some research here and without facts/data.


----------



## phanbuey (Nov 1, 2022)

1) A few tech blogs are talking about a technical defect. In order to record how serious this is, the RMA caused is required as a minimum. If it is above average, measures are taken to reduce the failure rate. If not, no further measures are necessary. The test field/repair/guarantee is there for that with its precalculated costs. Failure rates of zero are impossible.
*This is true for regular modes of failure, i.e. power delivery causing the card to crash. IMO melting is in a 'catastrophic' category, since it presents a fire hazard and poses a danger to the user. As far as rates go, typical failure rates of cables are generally in the 'several per million' range, especially if they are burn-in tested at the manufacturing plant. There are far fewer than a million cards and already 5 reports of cables and ports melting 3 weeks from launch -- as such it's very likely far beyond the typical rate of cable failure.*

2) Technical products must meet technical specifications. Whether these are met or not is the only objective assessment.
*Correct, but the technical specification for a power delivery cable not to melt when used in its intended system is always present. The point of these cables is to deliver power without melting or catching fire.*

3) Economic criteria such as profit margin.
What caused the error? There are dozens of possibilities. An incorrectly produced batch from a subcontractor that was overlooked during the incoming goods inspection is the most common reason for errors in the mass production of complex goods. Bad plug design could be, but is unlikely, as the plugs have to fulfill the technical specifications. It is not possible to make a qualified judgment before doing some research here and without facts/data.
*This is a catastrophic failure which carries with it the risk of lawsuits, damages far beyond the costs of recalling the products, and the risk of regulatory intervention. If this were a card crashing or malfunctioning and requiring an RMA, then yes, an economic analysis could be made. If however there is any risk of physical danger to the end user (such as things catching fire which should not be on fire), then the economic criteria and profit margins are generally secondary concerns.*
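The rate comparison in point 1 can be put in back-of-envelope numbers. Both the report count and the install base below are hypothetical assumptions for illustration, not official figures:

```python
# Back-of-envelope failure-rate comparison. The 5 reports and the
# 130,000-unit install base are hypothetical illustrative inputs.
def failures_per_million(reports: int, units: int) -> float:
    """Observed failure rate normalized to failures per million units."""
    return reports / units * 1_000_000

observed = failures_per_million(5, 130_000)  # assumed early install base
baseline = 5.0                               # "several per million" cables
print(f"observed ~{observed:.0f}/M vs baseline ~{baseline:.0f}/M")
```

Even with generous assumptions, the observed rate lands well above a typical per-million cable failure rate, which is the crux of the argument.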


----------



## xBruce88x (Nov 1, 2022)

Is it too early to call this Bend Gate 2.0?


----------



## Sisyphus (Nov 1, 2022)

phanbuey said:


> [...]


We don't know the reason for the failure; we can only speculate. From my point of view, the connection is subjected to mechanical stress due to short power supply cables or narrow PC cases. A sentence like this should be found in every installation manual: the plug connections must be installed in such a way that they are not exposed to mechanical stress. But I might be wrong, as the law differs from nation to nation. In a modular system environment with components from many independent companies, connected by ordinary consumers, it's not that clear who is responsible. The plugs/cables may simply not be connected as intended. In the article, NVIDIA asked the board partners to send the damaged cards in for further analysis. The issue will be solved once the analysis is done. The outcome could range from a batch of badly manufactured plugs which are to be replaced, through more detailed manuals with clear specifications about case dimensions and a list of certified power supplies for the RTX 4090, to a complete overhaul of some PC standards or even new singular solutions of the kind used in HPC/workstations with higher power demands.
You call this "bad design"; I call this early-adopter problems. Those who don't like to troubleshoot should wait 4-6 months before purchasing the newest PC hardware components.

I have a technical education, but if I invest >$1000 in new hardware, I prefer to have the PC built by a specialist retailer. It's much more convenient to just take the smoking PC back to where it was built.


----------



## chispy (Nov 4, 2022)

RTX 4090 Woes Get Worse: Native 16-pin reportedly *Melts* as well ...

" By now, you may have read many horror stories about Nvidia's 16-pin power adapter melting on the GeForce RTX 4090. However, the terror doesn't stop there. A GeForce RTX 4090 owner has reported the first alleged case of a 12VHPWR power connector meltdown from a native ATX 3.0 power supply. " ...


Complete Story and Source - https://www.tomshardware.com/news/rtx-4090-native-16-pin-melting


----------



## mechtech (Nov 5, 2022)

Bend radius is a thing when it comes to cables..............basically any and all cables..............


----------



## TheDeeGee (Nov 6, 2022)

This video explains it all.


----------



## Dirt Chip (Nov 10, 2022)

Some new info (as of 3/11/22) from JonnyGuru: it seems mostly a human error of not connecting the cable all the way in.
The design (problem) of the connector makes it easy to think you plugged it in right, but with bending to the sides (which happened more with the adapter, but not only with it) you can get bad pin contact (mostly on the outermost pins).

To be continued...


----------



## RainingTacco (Nov 11, 2022)

the54thvoid said:


> This is the same industry-standard cycle as for many molex connectors. i.e., not an issue for the normal end-user.
> 
> View attachment 267011
> 
> Scare-mongering (or lack of due-diligence) isn't helpful when trying to remain a reliable tech site.



Molex was a total shitshow; I'm glad they're a thing of the past. If you wanted to make a point, it's an extremely bad one.


----------



## Bomby569 (Nov 13, 2022)

mechtech said:


> Bend radius is a thing when it comes to cables..............basically any and all cables..............



Not 8-pins, that I can guarantee you.


----------



## Dirt Chip (Nov 13, 2022)

TheDeeGee said:


> This video explains it all.


TL;DW: Human error by not connecting it all the way in.


----------



## Bomby569 (Nov 13, 2022)

Dirt Chip said:


> TL;DW: Human error by not connecting it all the way in.



That's still bad design. People have been connecting PC power cables for decades without problems; it isn't human error if the design makes so many people do it wrong.
If that theory is even the right one - I've seen Reddit posts where people clearly had it connected all the way.


----------



## Dirt Chip (Nov 13, 2022)

Bomby569 said:


> that's still bad design, people have been connecting pc power cables for decades without problems, it isn't human error if the design makes so many people do it wrong.
> If that theory is even the right one, i've seen reddit posts and people clearly had it connected all the way


How many is "so many" in % of total users with this cable?


----------



## Bomby569 (Nov 13, 2022)

Dirt Chip said:


> How many is "so many" in % of total users with this cable?



There are people building PCs, newcomers and old-timers, every day. The old 8-pin has none of these issues, and there should be millions of them in use at any time.
Versus a new product, with relatively few adopters (a drop of water in the ocean of 8-pins), and with so many cases in so little time.
And I don't know what "total users" means; I see a lot of people that disconnected their cards, that bought 3rd-party cables; others' may be melted and they don't even know it... yet. We only know about the ones that post on the internet, not the ones that don't.

Clearly a design flaw if it's as the video says, not human error. You should not design things that so many people can't even plug in properly. Worse when not plugging it in properly makes the product self-destruct.


----------



## Dirt Chip (Nov 13, 2022)

Bomby569 said:


> There's people building pc's, newcomers and old timers, every day. The old 8 pin has none of this issues, and there should be millions of them in use at any time.
> Versus a new product, with relatively few adopters (a drop of water in the ocean of 8 pin's) and with so many cases in so little time.
> And i don't know what the total users mean, i see a lot of people that disconnected the cards, that bought 3rd party cables, others can be melted and they don't even know it... yet, we only know about the ones that post on the internet, not the ones that don't.
> 
> Clearly a design flaw it it's as the video say, not human error. You should not design things that so many can't even plug it in properly. Worst when not plug it in properly makes the product self destroy


Some, but not all, can be blamed on the design. The user also has his part in the end, because it seems that a very small percentage do it wrong.
What part is the user's fault is yet to be seen.
To be continued...


----------



## Bomby569 (Nov 13, 2022)

Dirt Chip said:


> Some, but not all, can be blamed on the design. The user also has his part in the end because it seems than very small percentage do it wrong.



For a PC part the catastrophic failure rate seems too high, not too low, but we'll agree to disagree.


----------



## ThrashZone (Nov 13, 2022)

Hi,
Clearly it's an NVIDIA issue, designing a shit adapter.
But you've got to love someone trying to fit this large monster in a lunchbox case.


----------



## Dirt Chip (Nov 13, 2022)

ThrashZone said:


> Hi,
> Clearly it's an nvidia's issue designing a shit adapter
> But got to love someone trying to fit this large monster in a lunchbox case


I hope to see Intel adapters, if any will be made.
It would be a nice comparison.



Bomby569 said:


> for a PC part the catastrophic failure rate seems to high, not to low, but we agree to disagree.


In general, human error is responsible for many things. I just say it is still too early to decide what the degree of user responsibility is in those cases. From the data so far, zero doesn't seem to be the answer.


----------



## modmax (Nov 23, 2022)

But what if they had kept the old connectors? I don't understand what the difference would be, except that the new ones catch fire.


----------



## Chomiq (Nov 23, 2022)

modmax said:


> but what if they left the old connectors? I don't understand what the difference would be, except that the new ones catch fire.


Only if inserted incorrectly.


----------

