
It's happening again: melting 12V high-power connectors

Joined
Oct 17, 2021
Messages
112 (0.09/day)
System Name Nirn
Processor Amd Ryzen 7950X3D
Motherboard MSI MEG ACE X670e
Cooling Noctua NH-D15
Memory 128 GB Kingston DDR5 6000 (running at 4000)
Video Card(s) Radeon RX 7900XTX (24G) + Geforce 4070ti (12G) Physx
Storage SAMSUNG 990 EVO SSD 2TB Gen 5 x2 (OS) + SAMSUNG 980 SSD 1TB PCIe 3.0 x4 (Primocache) + 2x 22TB WD Gold
Display(s) Samsung UN55NU8000 (Freesync)
Case Corsair Graphite Series 780T White
Audio Device(s) Creative Soundblaster AE-7 + Sennheiser GSP600
Power Supply Seasonic PRIME TX-1000 Titanium
Mouse Razer Mamba Elite Wired
Keyboard Razer BlackWidow Chroma v1
VR HMD Oculus Quest 2
Software Windows 10
Sure, let's take the inane fearmongering up another notch. Why not skip the middleman and just grab a torch and pitchfork and head to Santa Clara to burn down evil Doctor NVidia's fortress of power? That's about as sensible as telling people to contact the FTC because they read a few stories on the Interwebs.

As for this "violation of UL" idiocy: there is no legal requirement for UL certification, and UL certifies only individual products, not "standards", so it doesn't affect NVidia whatsoever.
I mean, a graphics card burning your house down unattended, with your child or pets inside, would be something worth pitchforkery.
 
Joined
Aug 2, 2012
Messages
2,135 (0.47/day)
Location
Netherlands
System Name TheDeeGee's PC
Processor Intel Core i7-11700
Motherboard ASRock Z590 Steel Legend
Cooling Noctua NH-D15S
Memory Crucial Ballistix 3200/C16 32GB
Video Card(s) Nvidia RTX 4070 Ti 12GB
Storage Crucial P5 Plus 2TB / Crucial P3 Plus 2TB / Crucial P3 Plus 4TB
Display(s) EIZO CX240
Case Lian-Li O11 Dynamic Evo XL / Noctua NF-A12x25 fans
Audio Device(s) Creative Sound Blaster ZXR / AKG K601 Headphones
Power Supply Seasonic PRIME Fanless TX-700
Mouse Logitech G500S
Keyboard Keychron Q6
Software Windows 10 Pro 64-Bit
Benchmark Scores None, as long as my games run smooth.
posted over the weekend

So instead of a melted connector, you get corrupted save and/or system files while in the heat of a battle, because the PSU decides to shut the system off due to a hot cable.

Awesome...
 
Joined
Apr 2, 2011
Messages
2,924 (0.58/day)
What about the BIOS lock? :)
Why do you think there is no MPT for the 7000 series? ;)
Why do you think there is telemetry for in/out watts on all cards?

There is no conspiracy - it's just reality.

I hear your conspiracy...and raise you some common sense.

Let me be less obtuse. If AMD and Nvidia were to release a card, do you think they'd want it to have 20% of its performance left on the table? No. Common sense says they'd release a card showing 100% of its capability and charge for it. It's also common sense that manufacturing has variation. The reason they produce cards and specify them to a performance level is that they can be reasonably assured the cards will run at that level. This means you set up a minimum level, bin anything below that as bad, and pass everything else. I.e., the cut-off is 2.8 GHz, the three cards in question can run at 2.7, 2.9, and 3.1, and you pass two of the three, set to 2.8 GHz, and fuse off the bad bits of the third to make the next product down in the stack.
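To make that binning logic concrete, here's a toy sketch in Python (the clocks and cut-off are the illustrative numbers from the example above, not real bin limits):

```python
# Toy binning model: chips below the cut-off get fused down to the
# next product in the stack; everything else ships at the rated clock.
# All numbers are illustrative, not actual bin limits.
RATED_CLOCK = 2.8  # GHz, the advertised spec

def bin_card(max_stable_clock_ghz: float) -> str:
    if max_stable_clock_ghz >= RATED_CLOCK:
        return f"pass: ship locked to {RATED_CLOCK} GHz"
    return "fail: fuse off defective units, sell as lower-tier part"

for chip in (2.7, 2.9, 3.1):
    print(f"{chip} GHz -> {bin_card(chip)}")
```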

The thing is, this used to have a lot of variation. "Golden" chips overclocked like crazy, and the internal cut-off was set at a point where production matched output. In this way, depending on what you got, you could overclock to 3.1 GHz or be stuck at the rated 2.8 GHz. The "conspiracy" is that with better processes and more consistent output, the cards are now at 2.7, 2.8, and 2.9 GHz...so the 2.8 rating leaves no production-variation swing for overclocking headroom. Better manufacturing means they come out of the box clocked higher and with less OC headroom.

So...yeah. It's not a conspiracy so much as improvements in production removing inefficiencies, leaving less performance on the proverbial table for you to get back by overclocking. That sucks...but it's not something to whine about. Note that Nvidia states these cards run at 575 watts. They can pull 900 watts for 1 ms...or 1,000,000 ns. This is all fine and dandy...except they can do this very often during high usage spikes...and it's over a connection where that 900/575 watts is not forcibly balanced across conductors, and therefore can (and apparently does) peak on a limited number of them...exceeding the 9.2 amps the wire connector is theoretically rated for under presumably ideal conditions. That's less about a failure of manufacturing and more about somebody designing a connector that wasn't meant for what it experiences in practice.
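For the arithmetic behind that claim, a quick sketch (the skewed split below is hypothetical; only the 575 W and 9.2 A figures come from the discussion above):

```python
# Worked example of the unbalanced-load concern: 12V-2x6 has six
# 12 V supply pins. Shared evenly, 575 W is tame per pin; if most of
# the current crowds onto two pins (hypothetical split), the per-pin
# figure cited above (9.2 A) is blown through easily.
VOLTS = 12.0
PIN_RATING_A = 9.2  # per-conductor rating cited in the post

def per_pin_amps(watts: float, share: list[float]) -> list[float]:
    total_amps = watts / VOLTS
    return [total_amps * s for s in share]

balanced = per_pin_amps(575, [1/6] * 6)
skewed   = per_pin_amps(575, [0.35, 0.35, 0.12, 0.08, 0.06, 0.04])

for name, amps in (("balanced", balanced), ("skewed", skewed)):
    worst = max(amps)
    print(f"{name}: worst pin {worst:.1f} A "
          f"({'OVER' if worst > PIN_RATING_A else 'within'} 9.2 A rating)")
```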

Ahh...but there are those who still believe the engineers specified everything perfectly. Again, engineers of vehicles didn't (historically) design them to be destroyed. This is how the Ford Pinto became a thing, and it's one of the easiest cases where some common sense would have fixed everything. Maybe a bolt sticking proud from a hole right next to a gas tank is a bad idea, just as taking a connector designed for a 600 watt balanced load and running an unbalanced 900 watts through it was silly. As was stated way back, if they'd installed overcurrent protection via a shunt resistor on each conductor, they'd have detected this and stopped the connectors from burning...but magical engineering specs that assume situations other than what happens in reality don't usually fix issues unless they overspec things to an insane degree...and that would cost more money, when the goal of this connector was to save money. That's a no-no of common sense.
 
Joined
Feb 18, 2005
Messages
6,134 (0.84/day)
Location
Ikenai borderline!
System Name Firelance.
Processor Threadripper 3960X
Motherboard ROG Strix TRX40-E Gaming
Cooling IceGem 360 + 6x Arctic Cooling P12
Memory 8x 16GB Patriot Viper DDR4-3200 CL16
Video Card(s) MSI GeForce RTX 4060 Ti Ventus 2X OC
Storage 2TB WD SN850X (boot), 4TB Crucial P3 (data)
Display(s) Dell S3221QS(A) (32" 38x21 60Hz) + 2x AOC Q32E2N (32" 25x14 75Hz)
Case Enthoo Pro II Server Edition (Closed Panel) + 6 fans
Power Supply Fractal Design Ion+ 2 Platinum 760W
Mouse Logitech G604
Keyboard Razer Pro Type Ultra
Software Windows 10 Professional x64
I hear your conspiracy...and raise you some common sense.

Let me be less obtuse. If AMD and Nvidia were to release a card, do you think they'd want it to have 20% of its performance left on the table? No. Common sense says they'd release a card showing 100% of its capability and charge for it. It's also common sense that manufacturing has variation. The reason they produce cards and specify them to a performance level is that they can be reasonably assured the cards will run at that level. This means you set up a minimum level, bin anything below that as bad, and pass everything else. I.e., the cut-off is 2.8 GHz, the three cards in question can run at 2.7, 2.9, and 3.1, and you pass two of the three, set to 2.8 GHz, and fuse off the bad bits of the third to make the next product down in the stack.

The thing is, this used to have a lot of variation. "Golden" chips overclocked like crazy, and the internal cut-off was set at a point where production matched output. In this way, depending on what you got, you could overclock to 3.1 GHz or be stuck at the rated 2.8 GHz. The "conspiracy" is that with better processes and more consistent output, the cards are now at 2.7, 2.8, and 2.9 GHz...so the 2.8 rating leaves no production-variation swing for overclocking headroom. Better manufacturing means they come out of the box clocked higher and with less OC headroom.

So...yeah. It's not a conspiracy so much as improvements in production removing inefficiencies, leaving less performance on the proverbial table for you to get back by overclocking. That sucks...but it's not something to whine about. Note that Nvidia states these cards run at 575 watts. They can pull 900 watts for 1 ms...or 1,000,000 ns. This is all fine and dandy...except they can do this very often during high usage spikes...and it's over a connection where that 900/575 watts is not forcibly balanced across conductors, and therefore can (and apparently does) peak on a limited number of them...exceeding the 9.2 amps the wire connector is theoretically rated for under presumably ideal conditions. That's less about a failure of manufacturing and more about somebody designing a connector that wasn't meant for what it experiences in practice.

Ahh...but there are those who still believe the engineers specified everything perfectly. Again, engineers of vehicles didn't (historically) design them to be destroyed. This is how the Ford Pinto became a thing, and it's one of the easiest cases where some common sense would have fixed everything. Maybe a bolt sticking proud from a hole right next to a gas tank is a bad idea, just as taking a connector designed for a 600 watt balanced load and running an unbalanced 900 watts through it was silly. As was stated way back, if they'd installed overcurrent protection via a shunt resistor on each conductor, they'd have detected this and stopped the connectors from burning...but magical engineering specs that assume situations other than what happens in reality don't usually fix issues unless they overspec things to an insane degree...and that would cost more money, when the goal of this connector was to save money. That's a no-no of common sense.
All correct except the part about binning. The actual reason is the Moore's Law wall: because transistors are barely getting smaller anymore, the free performance boost that IC designers used to get from a node shrink (by cramming more transistors into the same area) is gone. That boost was by and large responsible for the generational performance improvements we became used to.

Without that free lunch, the designers had to turn to other avenues to maintain the expected performance cadence. What they did was throw efficiency out the window: instead of clocking at a sweet spot between performance and power consumption, silicon is now clocked from the factory at the maximum it will do without catching fire. The underclocking craze is literally just people doing manually what designers used to do.

The problem is that after the designers pushed the power lever all the way to the right, there are no more levers to pull. It's why the RTX 5090 draws ~600W while the RTX 4090 pulls ~400W: ~50% more power for ~25% more performance. It's why power consumption is going to keep going up, die sizes are going to keep increasing, leading-edge silicon is going to keep getting more expensive, and performance increases are going to taper off and eventually plateau.
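Putting rough numbers on that (using the approximate wattages and ~25% figure above), perf-per-watt actually goes backwards:

```python
# Quick perf-per-watt check using the rough figures from the post.
p_4090, p_5090 = 400.0, 600.0          # watts, approximate
perf_4090, perf_5090 = 1.00, 1.25      # normalized performance

eff_4090 = perf_4090 / p_4090
eff_5090 = perf_5090 / p_5090
print(f"power: +{(p_5090/p_4090 - 1)*100:.0f}%, perf: +25%, "
      f"perf/W: {eff_5090/eff_4090:.2f}x (i.e. ~17% worse)")
```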

The entire silicon microprocessor industry isn't just staring down the barrel; said barrel is firmly pressed against their collective eyeball. It is going to be a very long, very hard winter until either some new silicon breakthrough is reached and commercialised, or an economical alternative to that material is demonstrated; TSMC and ASML trying to get closer and closer to the inevitable limits of physics is neither of these. As enthusiasts, we are all gonna have a bad time.
 

qxp

Joined
Oct 27, 2024
Messages
168 (1.44/day)
All correct except the part about binning. The actual reason is the Moore's Law wall: because transistors are barely getting smaller anymore, the free performance boost that IC designers used to get from a node shrink (by cramming more transistors into the same area) is gone. That boost was by and large responsible for the generational performance improvements we became used to.

Without that free lunch, the designers had to turn to other avenues to maintain the expected performance cadence. What they did was throw efficiency out the window: instead of clocking at a sweet spot between performance and power consumption, silicon is now clocked from the factory at the maximum it will do without catching fire. The underclocking craze is literally just people doing manually what designers used to do.

The problem is that after the designers pushed the power lever all the way to the right, there are no more levers to pull. It's why the RTX 5090 draws ~600W while the RTX 4090 pulls ~400W: ~50% more power for ~25% more performance. It's why power consumption is going to keep going up, die sizes are going to keep increasing, leading-edge silicon is going to keep getting more expensive, and performance increases are going to taper off and eventually plateau.

The entire silicon microprocessor industry isn't just staring down the barrel; said barrel is firmly pressed against their collective eyeball. It is going to be a very long, very hard winter until either some new silicon breakthrough is reached and commercialised, or an economical alternative to that material is demonstrated; TSMC and ASML trying to get closer and closer to the inevitable limits of physics is neither of these. As enthusiasts, we are all gonna have a bad time.
Very well said. There is also the fact that power loss increases with frequency. People have demonstrated THz transistors, but only in chips housing a few at a time and dissipating lots of power. It's similar to how the DDR drivers in SSDs and memory are often large contributors to power dissipation.
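For reference, that frequency scaling is the standard CMOS dynamic power relation, P = alpha * C * V^2 * f, and since higher clocks usually need higher voltage, power grows faster than linearly. A small illustration with made-up capacitance and voltage values:

```python
# CMOS dynamic switching power: P = alpha * C * V^2 * f.
# C (switched capacitance) and the V(f) pairs below are made up
# purely to illustrate the super-linear growth with frequency.
ALPHA = 0.2          # activity factor (illustrative)
C = 2e-9             # farads of switched capacitance (illustrative)

def dynamic_power(freq_hz: float, volts: float) -> float:
    return ALPHA * C * volts**2 * freq_hz

for ghz, v in [(2.0, 0.85), (3.0, 1.00), (4.0, 1.20)]:
    print(f"{ghz} GHz @ {v} V -> {dynamic_power(ghz * 1e9, v):.1f} W")
```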

One possibility for improvement is to switch away from silicon. We have known of better alternatives to silicon for decades, but the silicon-based industry had so much momentum that by the time you designed a CPU on a different substrate, you would lose to the next node shrink. Now that node shrinks are getting harder, people might look at other substrates again.
 
Joined
Sep 17, 2014
Messages
23,320 (6.12/day)
Location
The Washing Machine
System Name Tiny the White Yeti
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling CPU: Thermalright Peerless Assassin / Case: Phanteks T30-120 x3
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
VR HMD HD 420 - Green Edition ;)
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
Sure, but so can the 12VHPWR. We don't see burned connectors on the 4070; it's mostly the 90-class parts and some 80s (which can also be pushed to 450W and beyond, btw).
The 12VHPWR might be able to, but not the way it is implemented now.

So we can play this game all day: is the cable bad or is the GPU bad? The reality is simply that above some wattage the GPU and the cable are no longer a good combination, and you will need two connectors or more. Realistically I'd say every 300W needs another connector. Then we're getting near PCIe cabling tolerances, and even the shitty cables can probably do fine, too.
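Taking that one-connector-per-~300W rule of thumb (the poster's suggestion, not any spec), the arithmetic is just a ceiling division:

```python
import math

# Rule of thumb from the post: budget one connector per ~300 W.
WATTS_PER_CONNECTOR = 300  # poster's suggestion, not a spec value

def connectors_needed(board_power_w: float) -> int:
    return max(1, math.ceil(board_power_w / WATTS_PER_CONNECTOR))

for tdp in (220, 450, 575):
    print(f"{tdp} W card -> {connectors_needed(tdp)} connector(s)")
```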
 
Joined
Jul 31, 2024
Messages
883 (4.33/day)
Ahh...but there are those who still believe the engineers specified everything perfectly. Again, engineers of vehicles didn't (historically) design them to be destroyed. This is how the Ford Pinto became a thing, and it's one of the easiest cases where some common sense would have fixed everything. Maybe a bolt sticking proud from a hole right next to a gas tank is a bad idea, just as taking a connector designed for a 600 watt balanced load and running an unbalanced 900 watts through it was silly. As was stated way back, if they'd installed overcurrent protection via a shunt resistor on each conductor, they'd have detected this and stopped the connectors from burning...but magical engineering specs that assume situations other than what happens in reality don't usually fix issues unless they overspec things to an insane degree...and that would cost more money, when the goal of this connector was to save money. That's a no-no of common sense.

It's bad circuit design. No more, no less. I doubt the NVIDIA graphics card connector provides 1.2 volts DC; most likely 12V DC. Someone screwed up reading the specs. Someone screwed up learning the basics. Or do you want to tell me they did not know what connectors and other parts were placed on the PCB? There are for sure DC-to-DC converter circuits on those boards, and someone made mistakes designing them. You, with your shunt resistors: they are not magic. They are also not magic like fuses. My Windows 11 Pro 24H2 AMD GPU driver has a slider for the voltage which, converting the units, goes from about 0.8 V DC to 1.2 V DC. You need some sort of circuit to provide that to the card.
They saved money on the circuit design and on the parts being used.
Up to 800 watts is not an excuse for bad circuit design, bad PCB design, and a bad choice of components. They should fire those people and get proper people who know their tech field.
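On the DC-to-DC point: the card's VRM steps 12 V down to the roughly 0.8-1.2 V core range that slider exposes. For an ideal buck converter the duty cycle is just D = Vout / Vin; a textbook sketch, not any specific card's VRM:

```python
# Ideal buck converter: Vout = D * Vin, so D = Vout / Vin.
# Real GPU VRMs are multi-phase and lossy; this is the textbook case.
V_IN = 12.0

def duty_cycle(v_out: float) -> float:
    return v_out / V_IN

for v_core in (0.8, 1.0, 1.2):
    print(f"{v_core:.1f} V core -> duty cycle {duty_cycle(v_core) * 100:.1f}%")
```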

--

These products are still sold and bought on a daily basis. PSUs are still sold and bought on a daily basis. The usual 1-year RMA warranty period most likely costs less than making and selling a proper product. Who cares about a few defective cables, cards, and PSU connectors? This is not even worth a news article on television or in the newspapers.

While I see there are users here that have used the connector for an extended period of time without issue,

No one knows how many hours these cards were used. No one knows the average wattage.

I doubt these cards are running at 600 watts, 24 hours a day, 7 days a week, for years.

My 7800 XT usually runs at around 80-130 watts according to the (uncalibrated) AMD total board power readout in the Windows 11 Pro 24H2 AMD GPU driver. I think the PowerColor 7800 XT Hellhound maxes out somewhere around 250-280 watts. The card idles at around 6 to 12 watts with a single WQHD screen.

Running a card below its maximum makes the statement "there are users here that have used the connector for an extended period of time without issue" irrelevant.
The few issues that pop up are the important ones, not the 90-or-slightly-more percent of users without issues; that is why the statement "there are users here that have used the connector for an extended period of time without issue" is not relevant.

So when 90 of 100 airplanes do not crash, everything is okay, right? Whether it is 90 or 99 airplanes does not change the point the numbers make.
 
Last edited:
Joined
Apr 2, 2011
Messages
2,924 (0.58/day)
All correct except the part about binning. The actual reason is Moore's Law wall: because transistors are barely getting smaller anymore, the free performance boost that IC designers used to get from a node shrink (via cramming more transistors in the same area) is gone. That boost was by and large responsible for the generational performance improvements we became used to.

Without that free lunch the designers had to turn to other avenues to maintain the expected performance cadence. What they did was to throw efficiency out of the window, so instead of clocking at a sweet spot between performance and power consumption, silicon is now clocked from the factory at the maximum it will do without catching fire. The underclocking craze is literally just people doing manually what designers used to.

The problem is that after the designers pushed the power lever all the way to the right, there are no more levers to pull. It's why RTX 5090 draws ~600W compared to RTX 4090 which pulls ~400W; ~50% more power for ~25% more performance. It's why power consumption is going to continue to go up, die sizes are going to continue to increase, leading-edge silicon is going to continue getting more expensive, and performance increases are going to taper off and eventually plateau.

The entire silicon microprocessor industry isn't just staring down the barrel; said barrel is firmly pressed against their collective eyeball. It is going to be a very long, very hard winter until either some new silicon breakthrough is reached and commercialised, or an economical alternative to that material is demonstrated; TSMC and ASML trying to get closer and closer to the inevitable limits of physics is neither of these. As enthusiasts, we are all gonna have a bad time.

I agree, but it's the easiest way to explain it. Trying to explain that performance per watt was balanced for, that you'd need a rather large sample size, etc... was too much to try and explain (in my opinion). Realistically, cards could be pushed to most of those numbers by increasing voltage...but that ruins a simple explanation.

It's also fun to consider that all of this came about because the existing 6- and 8-pin connectors are "expensive" to install, so Nvidia ejected their protection system too. Those shunt resistors are a couple of cents on a product selling for thousands of dollars. Way back I stated the failure rate would have had to be 1 in 1,500,000 to be considered good...but you can tolerate a much higher rate if you build in detection that basically prevents the failure...like the shunt resistors. Oh boy...it's almost like I agree with plenty of other people here that this is an Nvidia issue.


I...look forward to a future built on something other than silicon. Optical processors are moving forward, quantum chips exist but are still impractical, and materials science is pushing us ahead ever so slowly. It might be nice, for a change, to keep a computer for 5-7 years without any upgrades, with the main improvements coming from software getting more efficient at using resources that can no longer simply be thrown at the problem to paper over inefficiencies...and I'm glad that I'm a grease monkey rather than a code monkey.

It's bad circuit design. No more, no less. I doubt the NVIDIA graphics card connector provides 1.2 volts DC; most likely 12V DC. Someone screwed up reading the specs. Someone screwed up learning the basics. Or do you want to tell me they did not know what connectors and other parts were placed on the PCB? There are for sure DC-to-DC converter circuits on those boards, and someone made mistakes designing them. You, with your shunt resistors: they are not magic. They are also not magic like fuses. My Windows 11 Pro 24H2 AMD GPU driver has a slider for the voltage which, converting the units, goes from about 0.8 V DC to 1.2 V DC. You need some sort of circuit to provide that to the card.
They saved money on the circuit design and on the parts being used.
Up to 800 watts is not an excuse for bad circuit design, bad PCB design, and a bad choice of components. They should fire those people and get proper people who know their tech field.

--

These products are still sold and bought on a daily basis. PSUs are still sold and bought on a daily basis. The usual 1-year RMA warranty period most likely costs less than making and selling a proper product. Who cares about a few defective cables, cards, and PSU connectors? This is not even worth a news article on television or in the newspapers.



No one knows how many hours these cards were used. No one knows the average wattage.

I doubt these cards are running at 600 watts, 24 hours a day, 7 days a week, for years.

My 7800 XT usually runs at around 80-130 watts according to the (uncalibrated) AMD total board power readout in the Windows 11 Pro 24H2 AMD GPU driver. I think the PowerColor 7800 XT Hellhound maxes out somewhere around 250-280 watts. The card idles at around 6 to 12 watts with a single WQHD screen.

Flip back through this discussion. Page 26. The point of the shunt resistors would be to detect and mitigate individual conductors pulling more than their specified loads. Yes, 600 watts / 12 conductors averages to 50 watts per conductor, which at 12 volts would be 4.17 amps. The problem is that when one of the lines is insanely unbalanced, and many of those 12 lines are for signal rather than power load, as per a bunch of reporting, you get more than 9.2 amps on a conductor.

Would the shunt resistors alone be enough? No. They aren't breakers. You'd still need to monitor them in software and have some response when one hit a huge load...but Nvidia's spec allowed for none of this as a viable answer to the problem. Yep, skipping protection via shunt resistors was a way to save money, along with a connector with fewer actual conductors. It seems they did a fine job of cost-reducing their product...assuming, of course, it never experienced anything but relatively ideal loading scenarios.
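As a rough sketch of what that monitoring-plus-response could look like (hypothetical read/throttle stubs; a real card would do this in the VRM controller or firmware, not host software):

```python
# Sketch of per-conductor overcurrent handling via shunt readings.
# read_shunt_amps() is a hypothetical stand-in for an ADC read across
# each shunt resistor; throttle_power() stands in for whatever
# response the card takes (clock/power limit, or shutdown).
PIN_LIMIT_A = 9.2       # per-conductor figure cited earlier
HEADROOM = 0.9          # act at 90% of the limit

def check_rails(read_shunt_amps, throttle_power):
    amps = read_shunt_amps()            # one reading per 12 V pin
    worst = max(amps)
    if worst > PIN_LIMIT_A * HEADROOM:
        throttle_power(reason=f"pin at {worst:.1f} A")

# Example wiring with fake data:
check_rails(lambda: [4.1, 4.3, 11.0, 3.9, 4.0, 4.2],
            lambda reason: print("throttling:", reason))
```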



So again, my money has been on a bad design from day one. Initial impressions were the 600 watt rating and the ability to pull 900 watts in a 1 ms surge. That's evolved into insanely unbalanced loading. It now includes stupid decisions meant to consistently save money on a premium product, which seem to have been tested about as poorly as possible. But what do I know? It's not like I've ever been in the middle of a recall because of a stupid cost-saving measure before... Like regrinding black plastic and reshooting it without UV protectant in the color dye, then sticking that plastic on an outdoor vehicle...that sees plenty of sun. Sigh.
 
Joined
Sep 26, 2022
Messages
2,375 (2.70/day)
Location
Braziguay
System Name G-Station 2.0 "YGUAZU"
Processor AMD Ryzen 7 5700X3D
Motherboard Gigabyte X470 Aorus Gaming 7 WiFi
Cooling Freezemod: Pump, Reservoir, 360mm Radiator, Fittings / Bykski: Blocks / Barrow: Meters
Memory Asgard Bragi DDR4-3600CL14 2x16GB
Video Card(s) Sapphire PULSE RX 7900 XTX
Storage 240GB Samsung 840 Evo, 1TB Asgard AN2, 2TB Hiksemi FUTURE-LITE, 320GB+1TB 7200RPM HDD
Display(s) Samsung 34" Odyssey OLED G8
Case Lian Li Lancool 216
Audio Device(s) Astro A40 TR + MixAmp
Power Supply Cougar GEX X2 1000W
Mouse Razer Viper Ultimate
Keyboard Razer Huntsman Elite (Red)
Software Windows 11 Pro, Garuda Linux
Flip back through this discussion. Page 26. The point of the shunt resistors would be to detect and mitigate individual conductors pulling more than their specified loads. Yes, 600 watts / 12 conductors averages to 50 watts per conductor, which at 12 volts would be 4.17 amps. The problem is that when one of the lines is insanely unbalanced, and many of those 12 lines are for signal rather than power load, as per a bunch of reporting, you get more than 9.2 amps on a conductor.
Except you don't have 12 lines - you have 6: 6 hot and 6 ground wires, so 6 circuits. That makes it 100W per circuit, 8.33A. Is that below the maximum rating? Yes, but only with perfectly balanced sharing between your 6 circuits.

I truly find it hard to believe that carrying 100W over a single pair (assuming equal load) inside the 12V-2x6 connector is safe, when the 8-pin PEG connector, which uses thicker pins, is rated at 150W for the whole connector.
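For comparison, the per-pair loading each spec implies, assuming a perfectly even split (simple arithmetic, same numbers as above):

```python
# Per-circuit (hot/ground pair) wattage implied by each connector's
# rating, assuming a perfectly even split across pairs.
specs = {
    "8-pin PEG (150 W, 3 pairs)": (150, 3),
    "12V-2x6 (600 W, 6 pairs)":   (600, 6),
}
for name, (watts, pairs) in specs.items():
    w = watts / pairs
    print(f"{name}: {w:.0f} W/pair, {w / 12:.2f} A/pair at 12 V")
```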
 
Joined
Sep 29, 2020
Messages
200 (0.12/day)
I mean, a graphics card burning your house down unattended, with your child or pets inside, would be something worth pitchforkery.
That happened to you? Or do you find it fun to spread absurd disinformation? Fun fact -- the 120V 3-pin outlet standard has been around for decades, yet it still causes several thousand home fires per year... many of which kill children. Do you complain to any forum or government agency about that?

Yeah, I thought not.
 
Joined
Aug 28, 2023
Messages
261 (0.48/day)
I mean, a graphics card burning your house down unattended, with your child or pets inside, would be something worth pitchforkery.
This shouldn't be happening if the PC is turned off, unlike with the 7800X3D, where you would think the PC is turned off but it's instead burning with 36A.
Maybe we should do a protest like that useless Reddit blackout.
 
Joined
Oct 17, 2021
Messages
112 (0.09/day)
System Name Nirn
Processor Amd Ryzen 7950X3D
Motherboard MSI MEG ACE X670e
Cooling Noctua NH-D15
Memory 128 GB Kingston DDR5 6000 (running at 4000)
Video Card(s) Radeon RX 7900XTX (24G) + Geforce 4070ti (12G) Physx
Storage SAMSUNG 990 EVO SSD 2TB Gen 5 x2 (OS) + SAMSUNG 980 SSD 1TB PCIe 3.0 x4 (Primocache) + 2x 22TB WD Gold
Display(s) Samsung UN55NU8000 (Freesync)
Case Corsair Graphite Series 780T White
Audio Device(s) Creative Soundblaster AE-7 + Sennheiser GSP600
Power Supply Seasonic PRIME TX-1000 Titanium
Mouse Razer Mamba Elite Wired
Keyboard Razer BlackWidow Chroma v1
VR HMD Oculus Quest 2
Software Windows 10
That happened to you? Or do you find it fun to spread absurd disinformation? Fun fact -- the 120V 3-pin outlet standard has been around for decades, yet it still causes several thousand home fires per year... many of which kill children. Do you complain to any forum or government agency about that?

Yeah, I thought not.
Little Jimmy has never been the same since the accident. They are still picking pieces of leather jacket out of his burns.
 
Joined
Apr 2, 2011
Messages
2,924 (0.58/day)
Except you don't have 12 lines - you have 6: 6 hot and 6 ground wires, so 6 circuits. That makes it 100W per circuit, 8.33A. Is that below the maximum rating? Yes, but only with perfectly balanced sharing between your 6 circuits.

I truly find it hard to believe that carrying 100W over a single pair (assuming equal load) inside the 12V-2x6 connector is safe, when the 8-pin PEG connector, which uses thicker pins, is rated at 150W for the whole connector.

I'm gonna ask you to read the line you quoted. The part after, where I say many of those 12 conductors are for signal...

Do you maybe wanna take a second and read what you quote before claiming I said something different, when the quote proves otherwise?

I...seriously, come on. You quoted me saying that exact thing on the very next line...I didn't even have to add it. Serious?
 
Joined
Jun 19, 2024
Messages
466 (1.89/day)
System Name XPS, Lenovo and HP Laptops, HP Xeon Mobile Workstation, HP Servers, Dell Desktops
Processor Everything from Turion to 13900kf
Motherboard MSI - they own the OEM market
Cooling Air on laptops, lots of air on servers, AIO on desktops
Memory I think one of the laptops is 2GB, to 64GB on gamer, to 128GB on ZFS Filer
Video Card(s) A pile up to my knee, with a RTX 4090 teetering on top
Storage Rust in the closet, solid state everywhere else
Display(s) Laptop crap, LG UltraGear of various vintages
Case OEM and a 42U rack
Audio Device(s) Headphones
Power Supply Whole home UPS w/Generac Standby Generator
Software ZFS, UniFi Network Application, Entra, AWS IoT Core, Splunk
Benchmark Scores 1.21 GigaBungholioMarks
as if it's no big deal.
It’s not a big deal. In a week it won’t even be news outside of a few AMD fans that need attention.
 
Joined
Jan 12, 2023
Messages
275 (0.36/day)
System Name IZALITH (or just "Lith")
Processor AMD Ryzen 7 7800X3D (4.2Ghz base, 5.0Ghz boost, -30 PBO offset)
Motherboard Gigabyte X670E Aorus Master Rev 1.0
Cooling Deepcool Gammaxx AG400 Single Tower
Memory Corsair Vengeance 64GB (2x32GB) 6000MHz CL40 DDR5 XMP (XMP enabled)
Video Card(s) PowerColor Radeon RX 7900 XTX Red Devil OC 24GB (2.39Ghz base, 2.99Ghz boost, -30 core offset)
Storage 2x1TB SSD, 2x2TB SSD, 2x 8TB HDD
Display(s) Samsung Odyssey G51C 27" QHD (1440p 165Hz) + Samsung Odyssey G3 24" FHD (1080p 165Hz)
Case Corsair 7000D Airflow Full Tower
Audio Device(s) Corsair HS55 Surround Wired Headset/LG Z407 Speaker Set
Power Supply Corsair HX1000 Platinum Modular (1000W)
Mouse Logitech G502 X LIGHTSPEED Wireless Gaming Mouse
Keyboard Keychron K4 Wireless Mechanical Keyboard
Software Arch Linux
It’s not a big deal. In a week it won’t even be news outside of a few AMD fans that need attention.
I don't know why you tried to make this some tribalist nonsense, but alright, if you say so. All I can do is wish you and the other 4000/5000 series owners the best and hope that it doesn't happen to you. It's your money to spend and your risk to take.
 
Joined
Jun 19, 2024
Messages
466 (1.89/day)
System Name XPS, Lenovo and HP Laptops, HP Xeon Mobile Workstation, HP Servers, Dell Desktops
Processor Everything from Turion to 13900kf
Motherboard MSI - they own the OEM market
Cooling Air on laptops, lots of air on servers, AIO on desktops
Memory I think one of the laptops is 2GB, to 64GB on gamer, to 128GB on ZFS Filer
Video Card(s) A pile up to my knee, with a RTX 4090 teetering on top
Storage Rust in the closet, solid state everywhere else
Display(s) Laptop crap, LG UltraGear of various vintages
Case OEM and a 42U rack
Audio Device(s) Headphones
Power Supply Whole home UPS w/Generac Standby Generator
Software ZFS, UniFi Network Application, Entra, AWS IoT Core, Splunk
Benchmark Scores 1.21 GigaBungholioMarks
I don't know why you tried to make this some tribalist nonsense, but alright, if you say so. All I can do is wish you and the other 4000/5000 series owners the best and hope that it doesn't happen to you. It's your money to spend and your risk to take.

I don’t have to hope that it doesn’t happen to me. I know how to RTFM and plug something in correctly.
 

eidairaman1

The Exiled Airman
Joined
Jul 2, 2007
Messages
43,714 (6.78/day)
Location
Republic of Texas (True Patriot)
System Name PCGOD
Processor AMD FX 8350@ 5.0GHz
Motherboard Asus TUF 990FX Sabertooth R2 2901 Bios
Cooling Scythe Ashura, 2×BitFenix 230mm Spectre Pro LED (Blue,Green), 2x BitFenix 140mm Spectre Pro LED
Memory 16 GB Gskill Ripjaws X 2133 (2400 OC, 10-10-12-20-20, 1T, 1.65V)
Video Card(s) AMD Radeon 290 Sapphire Vapor-X
Storage Samsung 840 Pro 256GB, WD Velociraptor 1TB
Display(s) NEC Multisync LCD 1700V (Display Port Adapter)
Case AeroCool Xpredator Evil Blue Edition
Audio Device(s) Creative Labs Sound Blaster ZxR
Power Supply Seasonic 1250 XM2 Series (XP3)
Mouse Roccat Kone XTD
Keyboard Roccat Ryos MK Pro
Software Windows 7 Pro 64
OK, and what about the pros who do it the same way you do, and the card still burns up? Utter bullshit. The RTX 4000 and 5000 failures are not isolated incidents.
 
Joined
Oct 15, 2019
Messages
635 (0.32/day)
It's more a case of the pins not ever getting more than 150W each. Lots of high-power GPUs were made without load balancing.
Instead, the problem was mitigated by just adding more connectors. 2x/3x 8-pin already does load balancing mechanically.
No, it does not. The balancing is done on the GPU via active components, not "mechanically".

They have shunts per connector, tied to specific power delivery components, so that the card can balance the load between the different connectors. Otherwise, for example, an 8-pin + 6-pin card could easily carry more than 75W over the 6-pin, breaking the spec for that connection.
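A rough sketch of what that shunt-based balancing amounts to (hypothetical helper with proportional allocation; the real logic lives in the VRM controller and its phase groups):

```python
# Sketch: allocate board power across connectors in proportion to
# their spec limits, so no input exceeds its rating. Hypothetical,
# but this is roughly what per-connector shunts plus separate VRM
# phase groups let a card do.
def allocate(total_w: float, limits_w: list[float]) -> list[float]:
    cap = sum(limits_w)
    if total_w > cap:
        raise ValueError(f"{total_w} W exceeds {cap} W of connector budget")
    return [total_w * lim / cap for lim in limits_w]

# 8-pin (150 W) + 6-pin (75 W) card drawing 180 W:
print(allocate(180, [150.0, 75.0]))   # -> [120.0, 60.0], both in spec
```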
 
Joined
Sep 20, 2021
Messages
566 (0.45/day)
Processor Ryzen 7 9700x
Motherboard Asrock B650E PG Riptide WiFi
Cooling Underfloor CPU cooling
Memory 2x32GB 6200MT/s
Video Card(s) RTX 4080 SUPER Noctua OC Edition
Storage Kingston Fury Renegade 1TB, Seagate Exos 12TB
Display(s) MSI Optix MAG301RF 2560x1080@200Hz
Case Phanteks Enthoo Pro
Power Supply NZXT C850 850W Gold
Mouse Bloody W95 Max Naraka
I hear your conspiracy...and raise you some common sense.

Let me be less obtuse. If AMD and Nvidia were to release a card, do you think they'd want it to have 20% of its performance left on the table? No. Common sense says they'd release a card showing 100% of its capability and charge for it. It's also common sense that manufacturing has variation. The reason they produce cards and specify them to a performance level is that they can be reasonably assured the cards will run at that level. This means you set up a minimum level, bin anything below that as bad, and pass everything else. I.e., the cut-off is 2.8 GHz, the three cards in question can run at 2.7, 2.9, and 3.1, and you pass two of the three, set to 2.8 GHz, and fuse off the bad bits of the third to make the next product down in the stack.

The thing is, this used to have a lot of variation. "Golden" chips overclocked like crazy, and the internal cut-off was set at a point where production matched output. In this way, depending on what you got, you could overclock to 3.1 GHz or be stuck at the rated 2.8 GHz. The "conspiracy" is that with better processes and more consistent output, the cards are now at 2.7, 2.8, and 2.9 GHz...so the 2.8 rating leaves no production-variation swing for overclocking headroom. Better manufacturing means they come out of the box clocked higher and with less OC headroom.

So...yeah. It's not a conspiracy so much as improvements in production removing inefficiencies, leaving less performance on the proverbial table for you to get back by overclocking. That sucks...but it's not something to whine about. Note that Nvidia states these cards run at 575 watts. They can pull 900 watts for 1 ms...or 1,000,000 ns. This is all fine and dandy...except they can do this very often during high usage spikes...and it's over a connection where that 900/575 watts is not forcibly balanced across conductors, and therefore can (and apparently does) peak on a limited number of them...exceeding the 9.2 amps the wire connector is theoretically rated for under presumably ideal conditions. That's less about a failure of manufacturing and more about somebody designing a connector that wasn't meant for what it experiences in practice.

Ahh...but there are those who still believe the engineers specified everything perfectly. Again, engineers of vehicles didn't (historically) design them to be destroyed. This is how the Ford Pinto became a thing, and it's one of the easiest cases where some common sense would have fixed everything. Maybe a bolt sticking proud from a hole right next to a gas tank is a bad idea, just as taking a connector designed for a 600 watt balanced load and running an unbalanced 900 watts through it was silly. As was stated way back, if they'd installed overcurrent protection via a shunt resistor on each conductor, they'd have detected this and stopped the connectors from burning...but magical engineering specs that assume situations other than what happens in reality don't usually fix issues unless they overspec things to an insane degree...and that would cost more money, when the goal of this connector was to save money. That's a no-no of common sense.
Manufacturers want to sell products in a specific price/performance range; that's understandable. But if you watch Buildzoid's video you'll see he explains it very well: with each new generation, the 16-pin gets weaker on the GPU side. If you say a multi-billion-dollar company doesn't have the experts to understand this and does it because they are fools, and that the consumer must accept that as something normal, then I don't think so :)

About binning: a lot of 3DMark records are set with 700W+ mods, so the chips/cables/connectors are not the problem here.
 
Joined
Jul 31, 2024
Messages
883 (4.33/day)
Would the shunt resistors alone be enough? No. They aren't breakers. You'd still need to monitor them in software and have some response when one hit a huge load.

I know why I do not read such posts and responses.

No and No. Definitely not.

I think you misunderstood a good design principle.

You do that at the hardware level: at the component level, in the firmware. Not in userspace software.

There are microcontrollers for this: logic with functions and flash memory. That firmware and that circuitry should limit the current flow, assuming you are not the "smart guy" who designed that circuit. You have those parts in any power supply unit, television set, and so on.
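In that spirit, protection like this is normally a dumb, always-running loop on the controller itself that latches a fault and cuts the output, with no OS involvement; a sketch with hypothetical stand-in functions:

```python
# Firmware-style overcurrent protection loop: latch a fault and cut
# the output the moment any sensed rail exceeds its limit. No OS or
# driver involvement. read_rail_amps() / open_power_stage() are
# hypothetical stand-ins for what a VRM/PSU microcontroller does.
LIMIT_A = 9.2
latched_fault = False

def ocp_tick(read_rail_amps, open_power_stage):
    global latched_fault
    if latched_fault:
        return
    if any(a > LIMIT_A for a in read_rail_amps()):
        open_power_stage()     # hard cut, like a PSU tripping OCP
        latched_fault = True   # stay off until power-cycled

ocp_tick(lambda: [4.0, 9.9, 4.1], lambda: print("output disabled"))
```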

Someone was lazy designing that power circuit.

#719 - That guy is also right. Shunt mods and such, 700 watts or more over the same connector. I have a PSU with the older Nvidia GPU connector, and I have two PSUs with the usual AMD graphics card connectors. 600 watts over such small connectors, compared with the other connectors... definitely not to my liking.

#719: I would not give a free pass to any chips/cables/connectors.

-- I have to admit something is wrong. My conclusion is a "do not buy" badge on all Nvidia-related products.

Anyway, I'm lucky that I was not affected, because I did not buy an Intel 13th or 14th gen processor or an Nvidia graphics card.

It's sad that there is still no solution or statement from NVIDIA. I want a real statement, not a bullshit statement. I want to see actions. Real actions that solve this permanently.

This was bullshit by Nvidia, and easy-to-see bullshit at that.


Nice whataboutism: Intel can pull around 460 watts continuously through the CPU socket on consumer mainboards, with no cable issues on those mainboards. That's a big increase too. Maybe it is valid to compare an Intel CPU at 460 watts with an NVIDIA 4090.
 
Joined
Feb 18, 2025
Messages
21 (7.00/day)
Location
Spain
System Name "Nave Espacial"
Processor AMD Ryzen 7 7800X3D
Motherboard MSI B650M Project Zero
Cooling Corsair H150i Elite LCD XT
Memory Corsair Vengeance RGB 64GB (2x32GB) DDR5 6000MT/s CL30
Video Card(s) MSI GeForce RTX 4090 GAMING X SLIM
Storage Samsung 990 PRO 4TB + Acer Predator GM7 4TB
Display(s) Corsair Xeneon 27QHD240
Case Corsair 2500X (black)
Audio Device(s) Corsair HS80 Wireless Headset
Power Supply Corsair RM1200x Shift
Mouse Corsair Darkstar Wireless
Keyboard Corsair K65 Pro Mini 65%
Software Windows 11, iCUE
This shouldn't be happening if the PC is turned off, unlike with the 7800X3D, where you would think the PC is turned off but it's instead burning with 36A.
Maybe we should do a protest like that useless Reddit blackout.

?????

Did I miss something? Since when is that happening?
 
Joined
May 24, 2023
Messages
1,114 (1.74/day)
Nobody has ever demonstrated that a new, correctly installed cable (or a used cable with undamaged plugs) causes any problems.

NOBODY. EVER.

Yet this thread and its bizarre comments still go on for some reason.

I myself tested two new and like-new cables with a 400W load, and they performed perfectly.
You are free to report your correct and factual information, including how much the plugs have been used, here:

 
Last edited:
Joined
Feb 18, 2025
Messages
21 (7.00/day)
Location
Spain
System Name "Nave Espacial"
Processor AMD Ryzen 7 7800X3D
Motherboard MSI B650M Project Zero
Cooling Corsair H150i Elite LCD XT
Memory Corsair Vengeance RGB 64GB (2x32GB) DDR5 6000MT/s CL30
Video Card(s) MSI GeForce RTX 4090 GAMING X SLIM
Storage Samsung 990 PRO 4TB + Acer Predator GM7 4TB
Display(s) Corsair Xeneon 27QHD240
Case Corsair 2500X (black)
Audio Device(s) Corsair HS80 Wireless Headset
Power Supply Corsair RM1200x Shift
Mouse Corsair Darkstar Wireless
Keyboard Corsair K65 Pro Mini 65%
Software Windows 11, iCUE
Nobody has ever demonstrated that a new, correctly installed cable (or a used cable with undamaged plugs) causes any problems.

NOBODY. EVER.

Yet this thread and its bizarre comments still go on for some reason.

I'll never understand the pathological urge some of you have to speak up for a megacorporation that refuses to change a design that's been proven to introduce serious problems for the end user. If you don't own a 4090, I can only take this as fanboyism. If you do (like I do), you should be mad about your property and safety not being respected.

12VHPWR should've never been a thing.
 
Last edited:
Joined
Aug 28, 2023
Messages
261 (0.48/day)
So do only the third-party and Corsair cables burn? I see a lot of hate for Corsair (just like the Asus hate over burning X3Ds, even though ASRock was the first and other brands burned too; 9800X3Ds still burn on ASRock to this day). Or like with Samsung, where people run 1-2 year old firmware.
 