
RTX 4000 series burning cables thread

Status
Not open for further replies.
I promise you someone burned up 8-pins within a couple of weeks too.
My point flew over your head, apparently. I made no claims about the connector; the only way to know for sure is to look at the failure rate. The 8-pin failure rate is (apparently) acceptable, so the people who do get one burned up are just part of that small percentage. We'll know for sure whether the same holds for the 16-pin in the long term.
My point was don’t blindly trust experts.
 
My point was don’t blindly trust experts.
I guess I disagree with that then, at least until you have some better stats. I wouldn't call it "blind trust" though, they are "experts" for a reason. (And I'm not talking about techtubers)
 
That article was, again, a complete WCCFtech fabrication. Otherwise, yeah, it seems QA is the main concern here.
I think it was posted later, but I may be mixing things up. So far only the metal-pin QA looks bad, although I have to admit it isn't anything particularly new; I saw it myself in a Define R4 build when my Molex connector melted a bit. Still, I would hate for that to become the standard.


This is a good way to go about it; good job finding it. Community research is the best we can manage right now. Do remember that despite the individual incidents, many, many such adapters have been sold, so I am curious how common this really is. It could be anything at this point; we badly need stats. Still, verifiable community polling is a good place to start.
This thread is basically meant to gather all those accidents, but Reddit did a better job by making their thread sticky and requiring reports of functioning cards/adapters too. As they say, though, that thread only exists to supplement professional reviews and investigations.


This is pretty much true, btw. They just write the standard; anyone can make a cheap knock-off of it. It's probably up to Nvidia or similar to enforce stricter connector requirements on the AIB makers themselves.
I'm not sure, but PCI-SIG could have required a certain cable AWG or a certain thickness for the female end of the connector, which so far seems to be the problem: after a few insertions the female end becomes permanently widened, and it can also bend if the wires themselves are bent at an angle. Perhaps their standard is at fault, or perhaps Nvidia used material too thin to hold up. I'm more concerned that with an ATX 3.0 power supply, the four new sense pins fail to do their job. They were added to reduce or eliminate power spikes, but not only did they fail to do that in TPU's review, they also let the card draw more power than its stated TDP, which was the whole reason we got a new connector and standard.
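On those sideband pins: as I understand the ATX 3.0 / PCIe CEM 5.0 tables, the two SENSE lines don't do any dynamic spike limiting at all; they just statically advertise the PSU's power budget by being tied to ground or left open. A tiny sketch of that lookup (the wattage values are from memory of the published tables and worth double-checking against the actual spec; `SENSE_TABLE` and `allowed_power` are my own illustrative names, nothing official):

```python
# Sketch of the 12VHPWR sideband power advertisement, as I understand the
# ATX 3.0 / PCIe CEM 5.0 tables. The PSU ties SENSE0/SENSE1 to ground or
# leaves them open; the card reads the combination to learn its budget.
# Values are from memory of the published tables and may be off.

# (SENSE0, SENSE1) -> (initial power at boot, max sustained power), in watts
SENSE_TABLE = {
    ("gnd",  "gnd"):  (375, 600),
    ("gnd",  "open"): (225, 450),
    ("open", "gnd"):  (150, 300),
    ("open", "open"): (100, 150),
}

def allowed_power(sense0, sense1):
    """Return the (initial, sustained) wattage the card should respect."""
    return SENSE_TABLE[(sense0, sense1)]

print(allowed_power("gnd", "gnd"))   # both grounded: the full 600 W budget
```

If those limits are only advertised rather than enforced, it's entirely on the card's own power management to respect them, which would square with TPU measuring draw above the stated TDP.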
 
I'm not sure what to make of this entirely, but if there's enough resistance occurring to melt the plug, maybe there should be something on the card sensing the current spike and reducing the load. Electronics generally should be designed with safety measures to ensure they can't exceed their rated amperage. The other thought is that if the adapters aren't meant to be bent, they should be designed accordingly, especially since these cards are so big that you can run into side-clearance issues in many cases. Bending seems almost inevitable with all the variables involved.
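The melting itself is plain resistive heating: any extra contact resistance from a loose or widened female terminal dissipates P = I²R right at the contact. A back-of-envelope sketch, where the resistance figures are illustrative guesses rather than measurements:

```python
# Back-of-envelope I^2*R estimate for one 12VHPWR power pin.
# All numbers here are illustrative assumptions, not measurements.

def pin_current(total_watts, volts=12.0, power_pins=6):
    """Per-pin current, assuming the load shares evenly across the 12 V pins."""
    return total_watts / volts / power_pins

def contact_heat(current_a, contact_resistance_ohm):
    """Watts dissipated inside a single contact: P = I^2 * R."""
    return current_a ** 2 * contact_resistance_ohm

amps = pin_current(600)            # 600 W card -> roughly 8.3 A per pin
good = contact_heat(amps, 0.005)   # healthy contact, assumed ~5 milliohm
bad  = contact_heat(amps, 0.050)   # loose/widened contact, assumed ~50 milliohm

print(f"{amps:.1f} A/pin: {good:.2f} W in a good contact, {bad:.2f} W in a loose one")
```

A few watts concentrated in a contact the size of a grain of rice, inside a plastic housing with no airflow, is plenty to soften the housing, which fits the 100 °C+ readings reported later in the thread.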

I know I’ve angled 8-pin cables significantly before in the name of cable management, and I doubt I’m the only one. This should have been considered in this new design, especially since one intent of the design was to reduce the number of cable feeds to the card! It seems as though it was designed to solve certain problems, but it only made those problems worse, at least in the short term. Adapters add a lot more connection challenges. How many terminations are on the other end where all those 8 pin cables go in?
 
Nvidia's FE isn't too tall as a card, which means there's a fair bit of leftover case width in a typical ATX case to put in a gentle bend radius.

Cards like this, OTOH, are just asking for trouble in a regular case. Even some of the largest cases like the Corsair 7000D are only 245mm wide, which are going to struggle for clearance with the new wave of utterly ridiculous GPUs large enough to use as (very bad, very expensive) cricket bats.
 
I guess I disagree with that then, at least until you have some better stats. I wouldn't call it "blind trust" though, they are "experts" for a reason. (And I'm not talking about techtubers)
If the last few years have made anything clear, it's that "experts" in any field hardly know what they are talking about and make mistakes constantly. That's why they need to keep revising and changing what they stated was settled fact not five minutes ago.

It seems to be common sense that smaller connectors can handle less amperage and are more prone to overheating than connectors with more metal in them. This is why large charging sockets for, say, EVs are so heavy, have so much metal in them, and get larger as the wattage goes up. "Experts" in the field of high-power electronics will tell you that any day of the week. I guess either those "experts" are wrong or the "experts" at Nvidia are wrong; we'll find out either way as more 4090s melt down.

there's nothing wrong with the 8-pin; there IS something wrong with Nvidia's power-slurping silicon "requiring" this proprietary garbage.
 
If the last few years have made anything clear, it's that "experts" in any field hardly know what they are talking about and make mistakes constantly.
No, I don't subscribe to that anti-intellectual, social-media-driven idea. You earn credentials by learning something, believe it or not. Expertise means something.
 
Seasonic proves their worth.

Something of note that I haven't heard until now.
"We've checked the GeForce RTX 4090's documentation and online user manuals, and there are no instructions or guidelines on manipulating the 16-pin power connector or its cable. Did Nvidia honestly not know about the issue? If so, it may want to rethink its procedures."

Not surprised.
Edit #2: using a Seasonic or CableMod adapter voids your warranty.
 
The H100 isn't even in the same product segment; of course that's not what I mean! VDI is just one of a few use-cases that require datacenter GPUs based on the desktop graphics architecture; the other is GPU render farms. VDI in particular is about a 25-billion-dollar industry annually, the fastest-growing datacenter market at the moment, and likely to hit 33 billion USD in 2025.

3090 = RTX A5000 and A5500, running at 230W.
4090 = RTX L5000 and L5500? I'm guessing they'll use L for Lovelace....

We're not talking about one or two cards and one or two racks of niche-use servers here, we're talking tens of millions of units.

They removed the A and are calling it RTX 6000 Ada

NVIDIA RTX 6000 Graphics Card | NVIDIA

I'm talking about this, not in relation to 8-pins.
I know stuff gets memory-holed fast these days, but it's been like a couple of weeks.

This has been hashed out in the media, hasn't it? I'm fairly sure GN and Jon had a back-and-forth on this and settled on Jon being wrong for once. Which can happen; this is a new specification, and a lot of the details have been muddy even for people in the industry. It's really a non-issue.

I think it's fair game to not trust the 12VHPWR connector yet. It's clearly an early adopter issue, just like when the 6-pin (2x3) connectors first began appearing with the GeForce 7 and 8 series. The 6800 Ultra (NV40) ran off two 4-pin Molex connectors, the 7800 GTX (G70) required one 6-pin and by the time the 8800 GTX (G80) came around the cards were already needing two, power supplies of the time obviously didn't have the cables for this connector day one. Transitional periods in hardware standards always have these minor problems.

You have 3 options, really:

1. Ensure your adapters are of very high quality or buy native cables for your PSU if available
2. Upgrade power supply if yours is already a little long in the tooth
3. Purchase a Radeon instead since they are still sticking to the 8-pin (2x4) connector for the time being (and perhaps wisely)
 
Update, some more testing done:

A loose connection makes the connector heat to over 100 degrees Celsius. Again, the connector on the adapter (maybe on the native ATX 3.0 cable too) seems to be weak and doesn't handle insertion force very well.


So despite the user's best efforts while plugging in the power cable, wires can bend and come out of the plug. Again, be careful and make sure that doesn't happen.

At this point it seems safe to say that this isn't a one-off situation and that the pin design on those Nvidia adapters is too weak. Not much is known yet about whether native (non-adapter) cables and connectors are more robust; nonetheless, neither can be trusted too much, and both must be checked.

The CableMod sell-out posted a video about cables; nothing happened:
 
Nvidia's FE isn't too tall as a card, which means there's a fair bit of leftover case width in a typical ATX case to put in a gentle bend radius.

Cards like this, OTOH, are just asking for trouble in a regular case. Even some of the largest cases like the Corsair 7000D are only 245mm wide, which are going to struggle for clearance with the new wave of utterly ridiculous GPUs large enough to use as (very bad, very expensive) cricket bats.
Good one! Now I'm thinking about a 90+ mph cricket ball striking a 4090. :roll:
 
Jayz2c has just posted a video. I'm still watching/listening to it as I type this, but he's already shown multiple people from the small subset that is "his subscribers" with melted HPWR connectors.

This is not a non-issue. He didn't need to make a video about the PCIe 8-pin because that connector didn't have a problem.

I know it's Jay, so pinch of salt etc, but that's four more independent people with connectors failing just among the small sample size that is "Jay's subscribers". Clearly, the melty cable that sparked this thread is not an isolated incident.

If the last few years have made anything clear, it's that "experts" in any field hardly know what they are talking about and make mistakes constantly. That's why they need to keep revising and changing what they stated was settled fact not five minutes ago.

It seems to be common sense that smaller connectors can handle less amperage and are more prone to overheating than connectors with more metal in them. This is why large charging sockets for, say, EVs are so heavy, have so much metal in them, and get larger as the wattage goes up. "Experts" in the field of high-power electronics will tell you that any day of the week. I guess either those "experts" are wrong or the "experts" at Nvidia are wrong; we'll find out either way as more 4090s melt down.

there's nothing wrong with the 8-pin; there IS something wrong with Nvidia's power-slurping silicon "requiring" this proprietary garbage.
This.
The new connectors are made of the same materials as the old connectors but they're smaller AND carry more current per pin.
Regardless of how much safety margin is built-in, they're still a huge downgrade in terms of current-per-contact-area, so it's no wonder we're seeing a few people melting adapters.
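The "more current per pin" point is easy to sanity-check from the nominal ratings: a PCIe 8-pin is specced for 150 W across three 12 V pins, while 12VHPWR allows up to 600 W across six. A quick sketch (assuming even current sharing, which a worn or partially seated contact breaks):

```python
# Per-pin current from the nominal ratings: PCIe 8-pin carries 150 W over
# three 12 V pins, 12VHPWR up to 600 W over six. Assumes even sharing;
# a loose pin pushes its share onto the others (or takes more itself).

def amps_per_pin(watts, pins_12v, volts=12.0):
    return watts / volts / pins_12v

eight_pin = amps_per_pin(150, 3)   # about 4.2 A per pin
hpwr = amps_per_pin(600, 6)        # about 8.3 A per pin

print(f"8-pin: {eight_pin:.1f} A/pin, 12VHPWR: {hpwr:.1f} A/pin, "
      f"ratio {hpwr / eight_pin:.1f}x through a smaller contact")
```

So roughly double the current per pin through a physically smaller contact, which is exactly the margin-erosion argument being made above.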

They removed the A and are calling it RTX 6000 Ada

NVIDIA RTX 6000 Graphics Card | NVIDIA
...aaaand it's only 300W. Because I, and several others, have constantly said that 450-600W is utterly fucking ridiculous.
At 300W they're probably 95% as fast, too.
 
...aaaand it's only 300W. Because I, and several others, have constantly said that 450-600W is utterly fucking ridiculous.
At 300W they're probably 95% as fast, too.

Notice the single 8-pin connector on the render, too, though that is probably an oversight, it should use 12VHPWR, too.

The TPU DB has it at an estimated 17% or so faster than the RTX 4090, which is in the ballpark I expect the RTX 4090 Ti to land in. This card should have the fully enabled AD102 processor with all 142 SMs / 18176 shaders enabled.

NVIDIA RTX 6000 Ada Specs | TechPowerUp GPU Database
 
No, I don't subscribe to that anti-intellectual social-media-driven idea. You have credentials via learning something, believe it or not. Expertise means something.
It means you're on the bell curve, yes, but where?
Most don't apply their degrees to their profession. I work with many like this in engineering, and they do sometimes have strange ideas.
They like to please and climb ladders, and we all make mistakes.
But nonetheless, within specific work roles we all start at the left of the bell curve; it's up to the individual to realize where he sits on it.
And Nvidia self-describes as a software company.
 
And Nvidia self-describes as a software company.
Not really. Nvidia describes itself as an AI or datacenter company at GTC; that's what they talk about the most. However, annual reports show that a bit over 50% of their revenue comes from gaming (and that's excluding clients, aka OEMs like Dell and HP). Only this year did datacenter revenue shoot up from 10% to 50%, and very unsurprisingly that is now what they mostly talk about. Nvidia doesn't talk much about software, basically only when it supports their hardware.
 
I'm hearing and seeing more reports of improper insertion too... not pushing it all the way in until you get a click.

It's becoming more and more apparent that the cards themselves are A-OK and that the adapter is the cause, whether poorly designed, poorly manufactured, not quite perfectly used, or any combination of the three.
 
It means you're on the bell curve, yes, but where?
Above average in the field of your choosing, I believe. That's kind of the point. There may be exceptions, of course, but it certainly raises the average.
 
This should have been considered in this new design, especially since one intent of the design was to reduce the number of cable feeds to the card! It seems as though it was designed to solve certain problems, but it only made those problems worse, at least in the short term. Adapters add a lot more connection challenges. How many terminations are on the other end where all those 8 pin cables go in?
Nvidia designed this one and only connector for a reason, and that reason is not user convenience.
For their own reasons they designed the RTX 30/40 series in a way that does not allow too many connectors.
There is not enough space because the PCB is too small, so they had to come up with something as power usage went up significantly after Turing.

They couldn't care less about how many connections there are outside the card itself. To be fair, there are no (known) issues with the multiple connections of the adapter.
Sure, those multi-connections don't help space-wise, but the problem seems to be on the one and only connection that all the power comes through, and it doesn't seem to be the multiple terminations either. It's at the front, exactly where the male and female pins meet.
I'm not sure if it's poor design (like too small a size) or poor execution (like too much clearance on the connection pins and/or housing, leaving it loose in certain situations).

Maybe someone should have enforced high requirements and standards for this single HPWR connector's manufacturing, but didn't...
Maybe testing wasn't sufficient and they didn't take into account every situation most users have to deal with, like bending and/or anything else.

Nevertheless, it seems like a sloppy job overall, no matter who is responsible for it.
 
It means you're on the bell curve, yes, but where?
Most don't apply their degrees to their profession. I work with many like this in engineering, and they do sometimes have strange ideas.
They like to please and climb ladders, and we all make mistakes.
The pleasing is what is killing business approaches post-launch, and it's the commercial push that generally inspires the pleasing. And... this is a slowly escalating process. It expands like a cancer, and by the time you notice it, you stumble upon unfixable issues. Climate is a fantastic example: we don't even want to know the truth, it's too painful, but we've known it for decades now. The 'trust' in other people is similar. It's very uncomfortable when the people you trusted suddenly appear to be untrustworthy. It shakes up your view of the world, creates FUD, and we hate it.

Experts cannot be trusted if they are not truly independent; any expert with a brand name attached to his current work, or strong ties to a certain industry, should be treated with caution. It happens in business, it happens in science (all the time! Scientific 'evidence' of something is commercially very strong, regardless of whether it is bullshit), and it happens on social media. Any 'social media expert' is by definition an expert in generating ad revenue through those media; everything else is secondary.

So who can you trust? It's getting increasingly difficult to determine these days. But anything and anyone in the business of generating income through exposure is suspect. Any commercial enterprise is suspect. Even individuals who just love attention are suspect.

We're screwed :) The best and only things you've really got these days are your own life experience, filtering methods, and common sense, hopefully backed by solid theory. I honestly always try to type up responses that use those elements first and foremost, instead of reposting another nonsense video.

On the current topic, though: yes, it was crystal clear to me that this whole new connector/spec is a massive clusterfuck of bad ideas turned into a product. It looked weak before and it proves its weakness now; any fool can see the differences between the 6/8-pin and this, and history should have supplied the lesson, in the design phase, that small means weak and low on tolerance.
 
:roll:


First users report NVIDIA RTX 4090 GPUs with melted 16-pin power connectors - VideoCardz.com
 
Update, adapter was investigated by Igor from Igorslab:

TL;DR design of adapter is bad, the 12VHPWR standard is fine.
Yes indeed, but the positioning on the card plus the 35 mm clearance allowance is still a thing.
And to me that's poor placement; the cards need to fit in cases, IMHO, not require buying a special case or a vertical mount just for the card.

They need to rethink the whole power-input design on these massive cards, IMHO; the connector is in the wrong place.
 
Update, adapter was investigated by Igor from Igorslab:

TL;DR design of adapter is bad, the 12VHPWR standard is fine.
Indeed. Turns out that Nvidia is the problem (still), and their concern for their customers is (where it has been for ages) very low. They use four wires in the adapter, soldered by kindergartners, with connectors apparently made out of tin foil; no wonder the adapters are flaky.
Nvidia needs another, bigger slap in the face, and promptly!
 
Yes indeed, but the positioning on the card plus the 35 mm clearance allowance is still a thing.
And to me that's poor placement; the cards need to fit in cases, IMHO, not require buying a special case or a vertical mount just for the card.

They need to rethink the whole power-input design on these massive cards, IMHO; the connector is in the wrong place.
The RTX 4090 is a dumpster fire of a product that frankly needs a full recall, a redesign, and some people in management fired. Things that have to be fixed:
1) TDP needs to come down to no more than 350 watts, with more variance allowed in clock speed so that the loss of performance is minimal
2) The cooler should shrink and perhaps get new aesthetics, because right now it just looks like a clone of the RTX 3080, a much lesser card, when this is supposed to be a huge upgrade
3) Power connectors nowadays should all be angled at 90 degrees; the early 2000s are over, and putting the power connector in an actually convenient place isn't black magic anymore. It's cheap and makes our lives better
4) No more sparky adapters
5) The RTX 4090 needs to cost no more than 1400 dollars
6) DP 2.0 is a necessity, not an option, on your top-tier card
7) Nobody cares about 3 GHz clock speeds in some lab; to be fair, nobody really cares much about graphics card clock speeds anyway. They're nearly irrelevant in the era of dynamic clocks, and the audience has seemingly started to care about perf/watt
8) Stop being dicks:
 
Update, adapter was investigated by Igor from Igorslab:

TL;DR design of adapter is bad, the 12VHPWR standard is fine.
Wow. Cost cutting taken to a new extreme, courtesy of Nvidia alongside their top end product.

You can't even make it up. This is a new low, and one wonders why. Surely the cost/risk benefit wasn't positive here; I mean, why even remotely risk this when you're pushing a new power level and a new GPU to match? It makes absolutely no sense.

Brand damage to Nvidia: for me, big enough to reconsider ever buying their stuff again. They'd better come up with an eye-watering solution right now. Hard-gained trust, built over the course of many years, shattered in an instant, right here. They already had their asses on the line with the relentless RT push that just isn't making waves, but now... It's one thing to have doubts about company strategy, but when doubts arise over quality... phew.

The RTX 4090 is a dumpster fire of a product that frankly needs a full recall, a redesign, and some people in management fired. Things that have to be fixed:
1) TDP needs to come down to no more than 350 watts, with more variance allowed in clock speed so that the loss of performance is minimal
2) The cooler should shrink and perhaps get new aesthetics, because right now it just looks like a clone of the RTX 3080, a much lesser card, when this is supposed to be a huge upgrade
3) Power connectors nowadays should all be angled at 90 degrees; the early 2000s are over, and putting the power connector in an actually convenient place isn't black magic anymore. It's cheap and makes our lives better
4) No more sparky adapters
5) The RTX 4090 needs to cost no more than 1400 dollars
6) DP 2.0 is a necessity, not an option, on your top-tier card
7) Nobody cares about 3 GHz clock speeds in some lab; to be fair, nobody really cares much about graphics card clock speeds anyway. They're nearly irrelevant in the era of dynamic clocks, and the audience has seemingly started to care about perf/watt
8) Stop being dicks:
Honestly, I think this whole affair, and the massive gap between the x90 and the lower cards, is going to inspire a pull or delay of further Ada GPUs.

They'll ride out Ampere and refresh their stuff first. Now we know why they staggered it like this... apparently this was a gamble all the way, and AMD is slowly but surely showing that it will match raster performance without a problem. They already pulled a 4080... the writing's on the wall.

I mean, what have they got, apart from Ampere massively undercutting it on price? Perf/watt improvements and DLSS 3. That's absolutely worthless without a major price cut. This, alongside Turing and Ampere, is yet another gen where I really don't feel any pull toward their new features. It's just not worth the hassle or the price.
 