
It's happening again, melting 12v high pwr connectors

Sure, let's take the inane fearmongering up another notch. Why not skip the middleman, grab a torch and pitchfork, and head to Santa Clara to burn down evil Doctor NVidia's fortress of power? That's about as sensible as telling people to contact the FTC because they read a few stories on the Interwebs.

As for this "violation of UL" idiocy: there is no legal requirement for UL certification, and UL doesn't certify "standards" anyway, only individual products, so it doesn't affect NVidia whatsoever.
I mean, a graphics card burning your house down unattended, with your child or pets inside, would be something worth pitchforkery.
 
posted over the weekend

So instead of a melted connector, you get corrupted save and/or system files while in the heat of a battle, because the PSU decides to shut the system off due to a hot cable.

Awesome...
 
What about the BIOS lock? :)
Why do you think there is no MPT for the 7000 series? ;)
Why do you think there is telemetry for in/out watts on all cards?

There is no conspiracy - it's just reality.

I hear your conspiracy...and raise you a common sense.

Let me be less obtuse. If AMD and Nvidia were to release a card, do you think they'd want it to have 20% of its performance left on the table? No. Common sense says they'd release a card showing 100% of its capability and charge for it. The common-sense part is that manufacturing has variation. The reason they produce cards and specify them to a performance level is that they can be reasonably assured the cards will run at that level. This means you set a minimum level, bin anything below it as bad, and pass everything else. E.g., the cut-off is 2.8 GHz, the three cards in question can run at 2.7, 2.9, and 3.1 GHz, so you pass 2 of 3 with them set to 2.8 GHz and fuse off the bad bits of the third to make the next product down the stack.
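To make that binning example concrete, here's a toy sketch in Python; the 2.8 GHz cut-off and the three sample clocks are just the illustrative numbers from the paragraph above, not anything AMD or Nvidia publishes:

# Toy model of speed binning: anything that validates at or above the cut-off
# ships at the rated clock; anything below gets cut down for a lower SKU.
RATED_CLOCK_GHZ = 2.8

def bin_samples(max_stable_clocks_ghz):
    passed, cut_down = [], []
    for clock in max_stable_clocks_ghz:
        if clock >= RATED_CLOCK_GHZ:
            passed.append(RATED_CLOCK_GHZ)   # ships at the rated 2.8 GHz
        else:
            cut_down.append(clock)           # fuse off bad bits, sell further down the stack
    return passed, cut_down

print(bin_samples([2.7, 2.9, 3.1]))   # ([2.8, 2.8], [2.7]) -> 2 of 3 pass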

The thing is, this used to have a lot of variation. "Golden" chips overclocked like crazy, and the internal cut-off was set at a point where production yield matched the required output. Depending on what you got, you could overclock to 3.1 GHz or be stuck at the rated 2.8 GHz. The "conspiracy" is that with better processes and more consistent output, the cards now come out at 2.7, 2.8, and 2.9 GHz...so the 2.8 GHz rating leaves almost no manufacturing-variation swing to serve as overclocking headroom. Better manufacturing means they come out of the box clocked higher and with less OC headroom.

So...yeah. It's not a conspiracy so much as improvements in production removing inefficiencies, leaving less performance on the proverbial table for you to claw back by overclocking. That sucks...but it's not something to whine about. Note that Nvidia states these cards run at 575 watts. They can pull 900 watts for 1 ms...or 1,000,000 ns. This is all fine and dandy...except they can do this quite often during high usage spikes...and it happens over a connection where the 900 watts/575 watts is not forcibly balanced across conductors and therefore can (and apparently does) peak on a limited number of them...exceeding the 9.2 amps the wire connector is theoretically rated for under presumably ideal conditions. That's less a failure of manufacturing and more somebody designing a connector that wasn't meant for what it experiences in practice.
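A back-of-the-envelope sketch of that per-pin math, assuming a 12 V rail and six current-carrying +12 V pins; the skewed split below is purely hypothetical, just to show how an imbalance blows past the ~9.2 amp figure while the average still looks fine:

# Per-pin current when six +12 V pins share the load, balanced vs. skewed.
VOLTS = 12.0
HOT_PINS = 6

def per_pin_amps(total_watts, share_per_pin):
    # share_per_pin: fraction of the total load each pin carries (sums to 1.0)
    return [round(total_watts * share / VOLTS, 1) for share in share_per_pin]

balanced = [1 / HOT_PINS] * HOT_PINS
skewed = [0.30, 0.30, 0.10, 0.10, 0.10, 0.10]   # hypothetical imbalance

print(per_pin_amps(575, balanced))   # [8.0, 8.0, ...] -- within the per-pin rating
print(per_pin_amps(575, skewed))     # two pins at ~14.4 A -- well past ~9.2 A
print(per_pin_amps(900, balanced))   # a 900 W spike is ~12.5 A per pin even when balanced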

Ahh...but there are those who still believe the engineers specified everything perfectly. Again, engineers of vehicles didn't (historically) design them to be destroyed. This is how the Ford Pinto became a thing, and it's one of the easiest situations where some common sense would have fixed everything. Maybe a bolt sticking proud from a hole right next to a gas tank is a bad idea, just like taking a connector designed for a 600 watt balanced load and running an unbalanced 900 watts through it was silly. As was stated way back, if they had installed overcurrent protection via a shunt resistor on each conductor they'd have detected this and stopped the connectors from burning...but magical engineering specs that assume situations other than what happens in reality don't usually fix issues unless they overspec things to an insane degree...and that would cost more money, when the goal for this connector was to save money. That's a no-no of common sense.
 
I hear your conspiracy...and raise you a common sense.
All correct except the part about binning. The actual reason is Moore's Law wall: because transistors are barely getting smaller anymore, the free performance boost that IC designers used to get from a node shrink (via cramming more transistors in the same area) is gone. That boost was by and large responsible for the generational performance improvements we became used to.

Without that free lunch the designers had to turn to other avenues to maintain the expected performance cadence. What they did was to throw efficiency out of the window, so instead of clocking at a sweet spot between performance and power consumption, silicon is now clocked from the factory at the maximum it will do without catching fire. The underclocking craze is literally just people doing manually what designers used to.

The problem is that after the designers pushed the power lever all the way to the right, there are no more levers to pull. It's why RTX 5090 draws ~600W compared to RTX 4090 which pulls ~400W; ~50% more power for ~25% more performance. It's why power consumption is going to continue to go up, die sizes are going to continue to increase, leading-edge silicon is going to continue getting more expensive, and performance increases are going to taper off and eventually plateau.
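Using the rough numbers above (~400 W vs ~600 W and ~25% more performance; ballpark figures from this thread, not measurements), the perf-per-watt math works out to something like:

# Perf-per-watt comparison with the ballpark figures quoted above.
power_4090_w, power_5090_w = 400, 600      # approximate board power
perf_4090, perf_5090 = 1.00, 1.25          # relative performance (~+25%)

power_increase = power_5090_w / power_4090_w - 1
efficiency_change = (perf_5090 / power_5090_w) / (perf_4090 / power_4090_w) - 1

print(f"{power_increase:.0%} more power")        # 50% more power
print(f"{efficiency_change:.0%} perf per watt")  # about -17%: efficiency regressed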

The entire silicon microprocessor industry isn't just staring down the barrel; said barrel is firmly pressed against their collective eyeball. It is going to be a very long, very hard winter until either some new silicon breakthrough is reached and commercialised, or an economical alternative to that material is demonstrated; TSMC and ASML trying to get closer and closer to the inevitable limits of physics is neither of these. As enthusiasts, we are all gonna have a bad time.
 
All correct except the part about binning.
Very well said. There is also the issue that switching losses increase with frequency. People have demonstrated THz transistors, but those live in chips housing only a few at a time while dissipating a lot of power. It's similar to how the DDR interface drivers in SSDs and memory are often large contributors to power dissipation.

One possibility for improvement is to switch away from silicon. We have known of better alternatives to silicon for decades, but the industry built on silicon substrates had so much momentum that by the time you designed a CPU on a different substrate, you would have lost to the next silicon node shrink. Now that node shrinks are getting harder, people might look at other substrates again.
 
Sure, but so can the 12VHPWR. We don't see burned connectors on the 4070; it's mostly the 90-class parts and some 80s (which can also be pushed to 450 W and beyond, by the way).
The 12VHPWR might be able to, but not the way it is implemented now.

So we can play this game all day, is the cable bad or is the GPU bad...the reality is simply that, at some wattage, the GPU and the cable are no longer a good combination. You will need two connectors or more; realistically I'd say every 300 W would need another connector. Then we're getting near PCIe cabling tolerances, and even the shitty cables can probably do fine, too.
 
Maybe a bolt sticking proud from a hole right next to a gas tank is a bad idea, just like taking a connector designed for a 600 watt balanced load and running an unbalanced 900 watts through it was silly. As was stated way back, if they had installed overcurrent protection via a shunt resistor on each conductor they'd have detected this and stopped the connectors from burning...but magical engineering specs that assume situations other than what happens in reality don't usually fix issues unless they overspec things to an insane degree...and that would cost more money, when the goal for this connector was to save money. That's a no-no of common sense.

It's bad circuit design. Not more, not less. I doubt the NVIDIA graphics card connector provides 1.2 V DC; most likely 12 V DC. Someone screwed up reading the specs. Someone screwed up learning the basics. Or do you want to tell me they did not know what connectors and other parts were placed on the PCB? There is for sure some sort of DC-to-DC converter circuitry on those boards, and someone made mistakes designing it. You, with your shunt resistors: they are not magic. They are also not fuses. My Windows 11 Pro 24H2 AMD GPU driver has a slider for the core voltage which, converting the units, goes from about 0.8 V DC to 1.2 V DC. You need some sort of circuit to provide that to the card.
They saved money on the circuit design and on the parts being used.
Up to 800 watts is not an excuse for bad circuit design, bad PCB design, and a bad choice of components. They should fire those people and get proper people who know their field.
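For context on those on-board DC-to-DC (VRM) stages, here's a minimal sketch of the ideal buck-converter relationships; real VRMs are multi-phase and roughly 90% efficient, and the 0.9 V core voltage and 575 W load below are placeholder numbers, not anything from a datasheet:

# Ideal (lossless) buck conversion from the 12 V input to a ~1 V GPU core rail.
V_IN = 12.0

def buck(v_out, p_out, efficiency=1.0):
    duty_cycle = v_out / V_IN              # e.g. 0.9 V / 12 V = 7.5% duty
    i_core = p_out / v_out                 # current delivered on the core side
    i_input = p_out / (V_IN * efficiency)  # current drawn through the 12 V connector
    return duty_cycle, i_core, i_input

print(buck(v_out=0.9, p_out=575))   # ~0.075 duty, ~639 A core side, ~48 A from the connector

The connector only ever sees the 12 V side, so what stresses it is total board power, not whatever the core-voltage slider is set to.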

--

These products are still sold and bought on a daily basis. PSUs are still sold and bought on a daily basis. The usual one-year RMA warranty period most likely costs less than making and selling a proper product. Who cares about a few defective cables, cards and PSU connectors? This is not even worth a news article on television or in the newspapers.

While I see there are users here that have used the connector for an extended period of time without issue,

No one knows how many hours these cards were used. No one knows the average wattage either.

I doubt these cards are running at 600 watts, 24 hours a day, 7 days a week, for years.

My 7800 XT usually runs around 80-130 watts according to the (uncalibrated) AMD total board power readout in the Windows 11 Pro 24H2 AMD GPU driver. I think the PowerColor 7800 XT Hellhound tops out somewhere around 250/280 watts. The card idles at around 6 to 12 watts with a single WQHD screen.

Running a card below its maximum makes the statement "there are users here that have used the connector for an extended period of time without issue" irrelevant.
The few failures that pop up are the important ones, not the 90% (or slightly more) of users without issues; that is why the statement "there are users here that have used the connector for an extended period of time without issue" is not relevant.

So when 90 of 100 airplanes do not crash, everything is okay, right? Whether it is 90 or 99 airplanes does not change the point the numbers illustrate.
 
All correct except the part about binning. The actual reason is Moore's Law wall: because transistors are barely getting smaller anymore, the free performance boost that IC designers used to get from a node shrink (via cramming more transistors in the same area) is gone.

I agree, but binning is the easiest way to explain it. Trying to explain that performance per watt is what was balanced for, that you need a rather large sample size, etc., was too much to get into (in my opinion). Realistically, cards could be pushed to most of those numbers by increasing voltage...but that ruins a simple explanation.

It's also fun to consider that all of this has come about because the existing 6- and 8-pin connectors are "expensive" to install, so Nvidia ejected their protection system too. Those shunt resistors are a couple of cents on a product selling for thousands of dollars. Way back I stated the failure rate would have had to be about 1 in 1,500,000 to be considered good...but you can tolerate a much higher rate if you build in detection that basically prevents the failure...like the shunt resistors. Oh boy...it's almost like I agree with plenty of other people here that this is an Nvidia issue.


I...look forward to a future built on something other than silicon. Optical processors are moving forward, quantum chips exist but are still impractical, and materials science is pushing us ahead ever so slowly. It might be nice, for a change, to keep a computer for 5-7 years without any upgrades, with the main improvements coming from software getting more efficient with resources that can no longer simply be thrown at the problem to paper over inefficiencies...and I'm glad that I'm a grease monkey rather than a code monkey.

It's bad circuit design. Not more, not less. You, with your shunt resistors: they are not magic. They are also not fuses.

Flip back through this discussion. Page 26. The point of the shunt resistors would be to detect and mitigate the individual conductors pulling more than their specified loads. Yes, 600 watts / 12 conductors averages to 50 watts a conductor, and at 12 volts would be 4.17 amps. The problem is that when one of the lines is insanely unbalanced, and many of those 12 lines are for signal rather than power load, as per a bunch of reporting, you get more than 9.2 amps per conductor.

Would the shunt resistors alone be enough? No. They aren't breakers. You'd still need to monitor them in software or firmware and have some response when one hits a huge load...but Nvidia's spec allowed for none of this to be a viable answer to the problem. Yep, no protection via shunt resistors was a way to save money, along with a connector with fewer actual conductors. It seems like they did a fine job of cost-reducing their product...assuming, of course, it never experienced anything but relatively ideal loading scenarios.
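For what it's worth, a minimal sketch of the kind of per-conductor monitoring being described; the 9.2 A limit is the figure quoted in this thread, while the margin, the polling loop, and the read_shunt_amps() and throttle_gpu() helpers are all made-up placeholders, not anything from Nvidia's or PCI-SIG's design:

# Sketch of per-conductor overcurrent handling built on per-pin shunt readings.
import time

PIN_LIMIT_A = 9.2        # per-pin figure quoted in this thread
TRIP_FRACTION = 0.9      # assumed safety margin before reacting

def read_shunt_amps(pin):
    raise NotImplementedError("placeholder for a per-pin shunt ADC reading")

def throttle_gpu(reason):
    raise NotImplementedError("placeholder for cutting the power limit / raising a fault")

def monitor_loop(hot_pins=range(6), poll_s=0.01):
    while True:
        currents = [read_shunt_amps(pin) for pin in hot_pins]
        worst = max(currents)
        if worst > PIN_LIMIT_A * TRIP_FRACTION:
            throttle_gpu(f"pin current {worst:.1f} A is over the margin")
        time.sleep(poll_s)

Whether that logic lives in the VRM controller's firmware or the driver is a separate argument; the point is simply that a shunt only helps if something acts on its reading.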



So again, my money has been on a bad design from day one. Initial impressions centered on the 600 watt rating and the ability to pull 900 watts in a 1 ms surge. That's evolved into insanely unbalanced loading. It's now about consistently stupid decisions meant to save money, on a premium product, that seem to have been tested about as poorly as possible. But what do I know? It's not like I've ever been in the middle of a recall because of a stupid cost-savings measure before... Like regrinding black plastic and reshooting it without UV protectant in the color dye, then sticking that plastic on an outdoor vehicle...that sees plenty of sun. Sigh.
 
Flip back through this discussion. Page 26. The point of the shunt resistors would be to detect and mitigate the individual conductors pulling more than their specified loads. Yes, 600 watts / 12 conductors averages to 50 watts a conductor, and at 12 volts would be 4.17 amps. The problem is that when one of the lines is insanely unbalanced, and many of those 12 lines are for signal rather than power load, as per a bunch of reporting, you get more than 9.2 amps per conductor.
Except you don't have 12 lines - you have 6. 6 hot and 6 ground wires, so 6 circuits. That makes it 100W per circuit, 8.33A. Is that below the maximum rating? Yes, but that's with a perfectly balanced sharing between your 6 circuits.

I truly find it hard to believe carrying 100W over a single pair (assuming equal load) inside the 12V-2x6 connector to be safe, when 8-pin PEG is rated 150W for the whole connector which uses thicker pins.
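Put side by side, using the ratings mentioned in this exchange (600 W over six pairs for 12V-2x6, 150 W over an 8-pin PEG connector with three current-carrying +12 V pins) and assuming a perfectly even split at 12 V:

# Per-pin current at each connector's rated load, assuming an even split.
def amps_per_hot_pin(connector_watts, hot_pins, volts=12.0):
    return connector_watts / volts / hot_pins

print(amps_per_hot_pin(600, 6))   # 12V-2x6: ~8.33 A per (smaller) pin at 600 W
print(amps_per_hot_pin(150, 3))   # 8-pin PEG: ~4.17 A per (larger) pin at 150 W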
 
i mean a graphics card burning your house down unattended with your child or pets inside would be something worth pitchforkery.
That happened to you? Or do you find it fun to spread absurd disinformation? Fun fact -- the 120V 3-pin outlet standard has been around for decades, yet it still causes several thousand home fires per year ... many of which kill children. Do you complain to any forum or government agency about that?

Yeah, I thought not.
 
i mean a graphics card burning your house down unattended with your child or pets inside would be something worth pitchforkery.
This shouldn't be happening if the PC is turned off, unlike with the 7800X3D, where you would think the PC is turned off but instead it's burning with 36 A.
Maybe we should do a protest like that useless Reddit blackout.
 
That happened to you? Or do you find it fun to spread absurd disinformation? Fun fact -- the 120V 3-pin outlet standard has been around for decades, yet it still causes several thousand home fires per year ... many of which kill children. Do you complain to any forum or government agency about that?

Yeah, I thought not.
little jimmy has never been the same since the accident. they are still picking pieces of leather jacket out of his burns.
 
Except you don't have 12 lines - you have 6. 6 hot and 6 ground wires, so 6 circuits. That makes it 100W per circuit, 8.33A. Is that below the maximum rating? Yes, but that's with a perfectly balanced sharing between your 6 circuits.

I truly find it hard to believe carrying 100W over a single pair (assuming equal load) inside the 12V-2x6 connector to be safe, when 8-pin PEG is rated 150W for the whole connector which uses thicker pins.

I'm gonna ask you to read the line you quoted. The part after, where I say many of those 12 conductors are for signal...

Do you maybe want to take a second and read what you quote before claiming I said something different, when the quote proves otherwise?

I...seriously, come on. You quoted me saying that exact thing on the very next line...I didn't even have to add it. Seriously?
 
as if it's no big deal.
It’s not a big deal. In a week it won’t even be news outside of a few AMD fans that need attention.
 
It’s not a big deal. In a week it won’t even be news outside of a few AMD fans that need attention.
I don't know why you tried to make this some tribalist nonsense, but alright, if you say so. All I can do is wish you and the other 4000/5000 series owners the best and hope that it doesn't happen to you. It's your money to spend and your risk to take.
 
I don't know why you tried to make this some tribalist nonsense, but alright, if you say so. All I can do is wish you and the other 4000/5000 series owners the best and hope that it doesn't happen to you. It's your money to spend and your risk to take.

I don’t have to hope that it doesn’t happen to me. I know how to RTFM and plug something in correctly.
 
OK, and what about the pros who plug it in the same way you do and the card still burns up? Utter bullshit. RTX 4000 and 5000 failures are not isolated incidents.
 
It's more a case of the pins never getting more than 150 W each. Lots of high-power GPUs were made without load balancing.
Instead, the problem was mitigated by just adding more connectors. 2x/3x 8-pin already does load balancing mechanically.
No, it does not. The balancing is done on the GPU via active components, not "mechanically".

They have shunts per connector, tied to specific power delivery components, so that the card can balance the load between the different connectors. Otherwise, for example, an 8-pin + 6-pin card could easily carry more than 75 W over the 6-pin, breaking the spec for that connection.
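A rough sketch of what that per-connector budgeting amounts to, ignoring the PCIe slot's own 75 W for simplicity; the limits and the proportional split are just an illustration of the idea, not how any specific card's VRM controller actually implements it:

# Split the board power across inputs in proportion to their rated limits,
# so an 8-pin + 6-pin card never budgets more than 75 W to the 6-pin input.
# The per-connector shunts are what let the card verify the split in practice.
CONNECTOR_LIMITS_W = {"8pin": 150, "6pin": 75}

def connector_budgets(total_board_power_w):
    total_limit = sum(CONNECTOR_LIMITS_W.values())
    return {
        name: round(min(total_board_power_w * limit / total_limit, limit), 1)
        for name, limit in CONNECTOR_LIMITS_W.items()
    }

print(connector_budgets(200))   # {'8pin': 133.3, '6pin': 66.7}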
 
I hear your conspiracy...and raise you a common sense.
The manufacturers want to sell products in a specific price/performance range; that's understandable. But if you watch Buildzoid's video, you'll see he explains it very well: with each new generation, the 16-pin power delivery gets weaker on the GPU side. If you say a multi-billion-dollar company doesn't have the experts to understand this and they do it because they are fools, and that the consumer must accept that as something normal, I don't think so :)

About binning: a lot of 3DMark records are set with 700 W+ mods, so the chips/cables/connectors are not the problem here.
 
Would the shunt resistors alone be enough? No. They aren't breakers. You'd still need to monitor them in software or firmware and have some response when one hits a huge load.

I know why I do not read such posts and responses.

No and no. Definitely not.

I think you misunderstood a good design principle.

You do that at the hardware level, at the component level in firmware. Not in userspace software.

There are microcontrollers for this sort of thing: logic with functions and flash memory. That firmware and that circuitry should limit the current flow, assuming you are not the "smart guy" who designed that circuit. You have those parts in any power supply unit, television set, and so on.

Someone was lazy designing that power circuit.

#719 - That guy is also right. Shunt mods and such put 700 watts or more over the same connector. I have a PSU with the older NVIDIA GPU connector and two PSUs with the usual AMD graphics card PSU connectors. 600 watts over such small connectors, compared with the other connectors, is definitely not to my liking.

#719 - I would not give a free pass to all of the chips/cables/connectors, though.

-- I have to admit something is wrong. My conclusion is a do-not-buy badge on all NVIDIA-related products.

Anyway, I'm lucky that I was not affected, because I did not buy Intel 13th or 14th gen processors or an NVIDIA graphics card.

It's sad that there is still no solution or statement from NVIDIA. I want a real statement, not a bullshit statement. I want to see action. Real action that solves this permanently.

This was bullshit, and easy-to-see bullshit, by NVIDIA.

Nice whataboutism: Intel can pull around 460 watts continuously through the CPU socket on consumer mainboards, with no cable issues on those mainboards. That's a big increase. Maybe it is valid to compare an Intel CPU at 460 watts with an NVIDIA 4090.
 
This shouldn't be happening if the PC is turned off, unlike with the 7800X3D, where you would think the PC is turned off but instead it's burning with 36 A.
Maybe we should do a protest like that useless Reddit blackout.

?????

Did I miss something? Since when is that happening?
 
Nobody ever demonstrated that a new, correctly installed cable (or a used cable with undamaged plugs) causes any problems.

NOBODY. EVER.

Yet this thread and its bizarre comments still go on for some reason.

I myself tested two new and like-new cables at a 400 W load and they performed perfectly.
You are free to report your own correct and factual information, including how much the plugs have been used, here:

 
Nobody ever demonstrated that a new, correctly installed cable (or a used cable with undamaged plugs) causes any problems.

NOBODY. EVER.

Yet this thread and its bizarre comments still go on for some reason.

I'll never understand the pathological urge some of you have to speak up for a megacorporation that refuses to change a design that's been proven to introduce serious problems for the end user. If you don't own a 4090, I can only take this as fanboyism. If you do (like I do), you should be mad about your property and safety not being respected.

12VHPWR should've never been a thing.
 
So do only the third-party and Corsair cables burn? I see a lot of hate for Corsair (just like the ASUS hate over burning X3Ds, even though ASRock was the first and other brands burned too; 9800X3Ds still burn on ASRock boards to this day). Or like with Samsung, where people were running 1-2 year old firmware.
 