Thursday, February 8th 2024

CPSC Demands a Recall of CableMod GPU Angled Adapters, Estimates $74.5K of Damaged Property

CableMod issued a statement just before last Christmas detailing a safety recall of its 16-pin 12VHPWR angled adapters, versions 1.0 and 1.1. The announcement received widespread media coverage (at least in tech circles), but some unfortunate customers have still not received the memo about the faulty adapters: CableMod's 90° angled and 180° adapters can overheat and, in worst-case scenarios, actually melt. HotHardware (amusingly named, given the context) was the first hardware news outlet to notice that the Consumer Product Safety Commission (CPSC) had published a "GPU Angled Adapter" recall notice on its website earlier today, under recall number 24-112.

The US government body's listing outlines the aforementioned hazardous conditions, along with an estimated 25,300 affected units. The CPSC's recommended "Remedy" is as follows: "Consumers should immediately stop using the recalled angled adapters and contact CableMod for instructions on how to safely remove their adapter from the GPU and for a full refund, including cost of shipping, or a $60 store credit for non-customized products, with free standard shipping. Consumers will be asked to destroy the adapter and upload a photo of the destroyed product to cablemod.com/adapterrecall/. The instructions on how to safely remove the adapter are also located on that site. Once destroyed, consumers should discard the adapter in accordance with local laws." The Commission has also gathered customer feedback on the matter: "The firm (CableMod Ltd., of China) has received 272 reports of the adapters becoming loose, overheating and melting into the GPU, with at least $74,500 in property damage claims in the United States. No injuries have been reported."
HotHardware believes that the recall of faulty CableMod parts will not spare every owner of a flagship Ada Lovelace graphics card from scary, melty incidents: "Interestingly enough, YouTuber Northridgefix also posted videos on various GeForce RTX 4090 GPUs that have had issues with damage. While this may or may not be related to any potential adapters, it surely adds to the perceived issues at hand. This makes owners of these GPUs want to double check their expensive piece of hardware more frequently out of caution, even if incidence rates are low."

Sources: CPSC, Hot Hardware News, VideoCardz

63 Comments on CPSC Demands a Recall of CableMod GPU Angled Adapters, Estimates $74.5K of Damaged Property

#26
SOAREVERSOR
It sounds like a lot of money, till you factor in the cost of the 4090 /nut kick.
Posted on Reply
#27
Daven
The main problem here is not the tears but the fact that many can't accept any wrongdoing on Nvidia's part. Flawed hardware designs happen all the time. It doesn't mean anything. It's just life.

What isn’t part of life and a more recent phenomenon is outright denial by brand loyalists and their rage when anyone points out that hey this might not be a good idea.

This is not Nvidia’s first screw up and it won’t be their last. Hell it wasn’t even that bad of a screw up and few out of the whole were affected. So what’s the big deal in calling them out and asking them to do better in the future?
Posted on Reply
#28
SOAREVERSOR
DavenThe main problem here is not the tears but the fact that many can't accept any wrongdoing on Nvidia's part. Flawed hardware designs happen all the time. It doesn't mean anything. It's just life.

What isn’t part of life and a more recent phenomenon is outright denial by brand loyalists and their rage when anyone points out that hey this might not be a good idea.

This is not Nvidia’s first screw up and it won’t be their last. Hell it wasn’t even that bad of a screw up and few out of the whole were affected. So what’s the big deal in calling them out and asking them to do better in the future?
Most people have called out Nvidia on this. However, it's also the case that a ton of people building computers now have no business doing so and screwed it up.

These are not the days where you had to set jumpers, IRQs, master and slave drives, and all sorts of other stuff. Building computers is now Lego-blocks-level simple. It's been getting easier and easier from the get-go. It's idiot-proof to the point where the part of building a PC today that consumes the most brain cells is how to configure your RGB. That's not a bad thing. Nobody wants to go back to the dark days. But the problem is that if you idiot-proof something, and then due to a screw-up something is not idiot-proof, you're going to have a shit show.

That's fine and that happens all the time in all sorts of things. But there is a strong tendency among PC Gamers to blame everything and everyone but PC Gamers. It's snowflakery.
Posted on Reply
#29
tpa-pr
SOAREVERSORMost people have called out Nvidia on this. However, it's also the case that a ton of people building computers now have no business doing so and screwed it up.

These are not the days where you had to set jumpers, IRQs, master and slave drives, and all sorts of other stuff. Building computers is now Lego-blocks-level simple. It's been getting easier and easier from the get-go. It's idiot-proof to the point where the part of building a PC today that consumes the most brain cells is how to configure your RGB. That's not a bad thing. Nobody wants to go back to the dark days. But the problem is that if you idiot-proof something, and then due to a screw-up something is not idiot-proof, you're going to have a shit show.

That's fine and that happens all the time in all sorts of things. But there is a strong tendency among PC Gamers to blame everything and everyone but PC Gamers. It's snowflakery.
I still think it's fair to place blame on the connector design, though. This connector seems relatively unique in that it fails catastrophically if it's not seated properly, and it seems some have failed through no fault of the user. The more traditional 8-pin connectors generally don't do that. As for whether Nvidia can be "blamed" for using the connector, I don't think they designed it, did they? They hold some responsibility for using it if they knew about its finicky nature, but if not, they may simply have thought it was superior to the more traditional method.

As I've said before, I feel for any Nvidia users who had the connector fail and damage their card. Spending thousands of dollars on a high-end product only for a cheap cable/connector to destroy it has got to be infuriating. With any luck, the next-generation connector will be more robust.
Posted on Reply
#30
R-T-B
DavenThe main problem here is not the tears but the fact that many can’t accept any wrong doing on Nvidia’s part.
I absolutely do. NVIDIA needs to come down hard on substandard vendors in the supply chain. As it is they aren't doing enough. There you go. They did something wrong. Happy?

That isn't actually a problem with the spec itself, though. Nor does it make this a highly widespread issue, as claimed.
Posted on Reply
#31
Papusan
AssimilatorHere comes another thread of babies crying about how ATX12VHPWR is flawed... despite the fact that neither NVIDIA nor PSU manufacturers have been ordered to recall anything.
No wonder. The failure rate will look a lot lower when Nvidia forced the 12VHPWR connector onto almost all 4000-series SKUs. And you have forgotten that Nvidia was pushed to replace the 12+4-pin power connector with a revised one. Why would you change the original specs if the old one was good enough, LOOL
Posted on Reply
#32
trsttte
Chrispy_The PCIe Mini-Fit Jr can absolutely handle far more power than it's rated for. I believe Der8auer did the math on the worst-case scenario, assuming the lowest-tolerance wire gauge and the cheapest, nastiest pins available: I think it was 288W minimum per 6+2 pin PCIe connector. That's almost double what it's actually rated for, and why it's such a safe, problem-free connector.
I don't remember if that was the exact number he came up with, but it's the rating from Molex for the PCIe Mini-Fit version: 8A per line x 3 lines x 12V = 288W. That's for the PCIe version specifically; there are versions of Mini-Fit going up to 13A per line.

In contrast, the Micro-Fit in general only goes up to 8.5A per line (I can't find a specific PCIe spec from a good source), so what exactly did we gain besides a much more expensive and harder-to-implement connector? Some board space?
R-T-BIt's actually the opposite: it has much higher quality requirements
That doesn't make it better. We traded something cheap and reliable, with a huge amount of headroom, for something more expensive, harder to manufacture, and running full tilt with no margin whatsoever. All so we could save the space of a single connector, which amounts to less than 1% of the total size of a GPU. Who in their right mind made this stupid-ass decision!?
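The headroom argument above boils down to a few lines of arithmetic. A minimal sanity-check sketch (Python), using the per-pin currents quoted in this thread as assumptions rather than official datasheet values:

```python
# Rough connector-headroom comparison using per-pin currents quoted in this
# thread (assumed values, not taken from official Molex/Amphenol datasheets).

def max_power_watts(amps_per_pin: float, power_pins: int, volts: float = 12.0) -> float:
    """Theoretical ceiling: per-pin current x number of 12 V pins x voltage."""
    return amps_per_pin * power_pins * volts

pcie_8pin = max_power_watts(8.0, 3)   # 6+2-pin PCIe: 3 power pins at 8 A -> 288 W
hpwr_12v  = max_power_watts(9.5, 6)   # 12VHPWR: 6 power pins at 9.5 A -> 684 W

# Headroom relative to what each connector is actually rated to deliver.
print(pcie_8pin / 150.0)  # 8-pin rated 150 W -> factor of 1.92
print(hpwr_12v / 600.0)   # 12VHPWR rated 600 W -> factor of 1.14
```

On these assumed numbers, the old connector can carry roughly twice its rating before hitting the pin ceiling, while 12VHPWR has only about 14% on top of its 600W rating, which is the headroom gap being argued about here.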
Posted on Reply
#33
R-T-B
trsttteAll so we could save the space of a single connector,
*Two connectors for 600W.
trsttteThat doesn't make it better.
No. The size makes it better. At least in concept.

Honestly, it's not a big deal to me either way. I've used both. I slightly prefer 12VHPWR over 3x PCIe 8-pin, but no biggie either way. Never had an issue with several adapters and native PSU connectors.
Posted on Reply
#34
Jism
Event Horizon12VHPWR almost makes me nostalgic for Molex.


These were required back in the day, running a 9700 Pro or so.

Abandon the above cable. Go back to 8-pin. Much more solid and robust.
Posted on Reply
#35
PLAfiller
TheLostSwede.....what is a fairly small company started by a German and a Taiwanese guy in Taipei. (Yes, I know the founders)
Wow, that's cool! Hope they make it through the storm. A lot of people just don't bother with warranties, so it's possible not that many will come looking for reimbursement.

Looking at Amazon reviews, though, 20% are 1-star right off the bat... so they had better get ready with everything (marketing, PR, etc.) to handle the wave.
Posted on Reply
#36
Onasi
R-T-BNo. The size makes it better. At least in concept.
Honestly, it’s not even the size so much as it’s the idea of having a single cable from the PSU to any GPU with the card negotiating what it needs. It just makes sense as a more modern, less cluttery way of doing things. It’s the same reason I pray that ATX12VO actually takes off.
Posted on Reply
#37
Vayra86
R-T-BI absolutely do. NVIDIA needs to come down hard on substandard vendors in the supply chain. As it is they aren't doing enough. There you go. They did something wrong. Happy?

That isn't actually a problem with the spec itself though. And nor does it make this a highly widespread issue as claimed.
Sorry, but it really is. A spec for consumer use needs to be far more tolerant. Just because you can manage fine with it as-is says nothing. The first products in the wild with this connector have provided proof that it's not sufficient, not idiot-proof, and not robust enough for cost-effective commercial production.

It's really that simple.

Also, what is 'widespread'? I think seeing repeated instances is enough to call it that, but apparently you do not. So it's only an issue if they catch fire left and right? I beg to differ... it should simply be impossible for it to fail so catastrophically.
Posted on Reply
#38
Chrispy_
trsttteI don't remember if that was the exact number he came up with but it's the rating from molex for the pcie mini fit version, 8A per line x3 = 288W - this for the pcie version specifically, there's versions of micro fit going up to 13A per line.

In contrast the micro fit in general only goes up to 8.5A per line (can't find a specific spec for pcie from a good source), so what exactly did we gain beside a much more expensive and harder to implement connector? Some board space?



That doesn't make it better. We traded something cheap and reliable, with a huge ammount of headroom for something more expensive, harder to manufacture and running full tilt with no margin whatsoever. All so we could save the space of a single connector, which ammounts to about less than 1% the total size of a gpu. Who in their right mind made this stupid ass decision!?
I think that was it, yes.
Like I said, 288W was the worst case using the lowest possible values, i.e. the lower Molex 8A-per-line rating, the smaller AWG18 cabling rather than the AWG16 that is recommended but not always used, plus additional concerns about daisy-chained PCIe cables coming from the PSU itself.

288W per 6+2 pin connector already includes a massive safety margin; a 13A Molex with AWG16 would rarely struggle to achieve double that while keeping the same safety margin.

I'm not seeing that safety margin on 12VHPWR, and when it melts, people blame the adapters, the bend radius, the manufacturing quality... Nope: it's that the design itself has no safety margin at all. You can look up the temperature delta on stranded cable of any given size, and Molex publishes exact specs for what temperatures its connectors reach at any given current. This is all in the public domain and no secret.
Posted on Reply
#39
R-T-B
Vayra86The first products in the wild with this connector have provided proof that it's not sufficient
By the numbers I have seen I remain unconvinced of that.
Posted on Reply
#40
Vayra86
R-T-BBy the numbers I have seen I remain unconvinced of that.
Guess I'm easier to impress than you, then. Still: it's barely out, and we have issues, and they're repeatedly happening. I'm dead certain some of them are 'false positives' in one way or another. But still, they happen. They don't happen with 6/8-pin, and there's a whole lot more of those out there.

Another thing I compare this sort of stuff with is normal wall sockets/plugs: yet another example of something heavily overbuilt in terms of tolerances, because it was built with the knowledge that a lot of stupid people are going to screw around with it. That really is the expectation I have for consumer-grade electrics. And even here, if you look across the globe, there are marked differences in the quality of these systems between countries/continents. In this area, I really do expect something as close to perfect as it can get.
Posted on Reply
#41
R-T-B
Vayra86They don't happen with 6/8-pin,
They did; there was just no media circus to follow them.
Posted on Reply
#42
Vayra86
R-T-BThey did; there was just no media circus to follow them.
Well, let's not be unrealistic now: you can mismanage a PCIe cable a lot more than you can 12VHPWR before the shit hits the fan.

Btw: *Two connectors for 600W.
Yeah, another interesting point. We're already up to two plugs in the very first generation it's introduced. Let's gooo
Posted on Reply
#43
efikkan
Even though CableMod is liable for these products, the underlying problem is still the lower tolerances. And while some of you excuse this by pointing to the stricter quality requirements, that is only correct to a certain extent, because you reach a point where even higher-precision production equipment and more QA isn't enough (or is far too costly to be realistic). So even if the standard is "good enough" in theory, real-world products will have a lot of variance, and the tolerances need to account for this. This is the elephant in the room that I haven't seen people address. Despite all the excuses, this is bad engineering, and this standard needs to be abandoned.
Franzen4RealI am curious as to why the adaptors were recalled but not the cables. Both have the same plug on the GPU end.
Probably because the cables can ease some of the strain, reducing the chance of failure.
Posted on Reply
#44
sLowEnd
Jism

These were required back in the day, running a 9700 Pro or so.

Abandon the above cable. Go back to 8-pin. Much more solid and robust.
How about even further back to a barrel jack external power brick? :laugh:

Posted on Reply
#45
Chrispy_
R-T-BNo. The size makes it better. At least in concept.
Slightly prefer 12VHPWR vs 3xPCIex8 but no biggie either way.
OnasiHonestly, it’s not even the size so much as it’s the idea of having a single cable from the PSU to any GPU with the card negotiating what it needs. It just makes sense as a more modern, less cluttery way of doing things.
Using 3xPCIe cables is definitely clunky, but I also don't want a future where GPUs routinely need 450W+

12VHPWR should have just kept Mini-Fit Jr but ditched all the sense wires to make the connector smaller than two 8-pins, and also mandated the Molex 13A connectors. Without changing anything dramatic, a hypothetical 10-pin 12VHPWR done properly could carry 406W with the exact same safety margins as the existing 6+2 pin PCIe connectors, rather than the total absence of safety margins in the actual 12VHPWR.

Like I said, I don't want a future where we need half a kilowatt for a single GPU, but there's a lot of room to expand the pin count using Mini-Fit Jr before we get to 20+4 pin connectors like on motherboards.
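The 406W figure above can be reproduced from the numbers quoted in the thread: take the safety factor of the existing 6+2-pin connector and apply it to five power pins at the 13A rating. A sketch under those assumptions (the 8A and 13A per-pin values come from this discussion, not from an official datasheet):

```python
# Reproducing the hypothetical 10-pin rating from the figures in this thread.
VOLTS = 12.0

# Existing 6+2-pin PCIe: 3 power pins at 8 A -> 288 W capacity vs a 150 W rating.
margin = (8.0 * 3 * VOLTS) / 150.0   # safety factor of 1.92

# Hypothetical 10-pin (5 power + 5 ground) using 13 A Mini-Fit pins.
capacity = 13.0 * 5 * VOLTS          # 780 W theoretical ceiling
rating = capacity / margin           # rate it with the same safety factor

print(round(rating))  # -> 406
```

In other words, 780W of pin capacity divided by the 1.92x safety factor of the old connector lands on the 406W rating mentioned in the post.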
Posted on Reply
#46
Onasi
Chrispy_Using 3xPCIe cables is definitely clunky, but I also don't want a future where GPUs routinely need 450W+

Like I said, I don't want a future where we need half a Kilowatt for a single GPU, but there's a lot of room to expand pin-count using MiniFit Jr before we get to 20+4 pin connectors like on motherboards.
I agree here, actually. For years I've pretty much refused to use any GPU above 250 watts, because I find the giant monstrosities silly. In my mind, if a card can't be effectively cooled by a 2-2.5-slot dual-fan cooler, then the design and power envelope are just too much. When I see enormous cards held up by anti-sag brackets, I feel we have gone long past the sanity and usefulness of the PCIe AIB design, and if GPU vendors insist on continuing with this (and if NV and AMD want to keep pushing thermals and power for the sake of performance), then we need a new form factor for GPUs, since the current one isn't working. I mean, PCI actually has a freaking spec for add-in card sizes. Reference designs from both teams used to follow it, but now even those are wildly out of spec. It's a mess.
Posted on Reply
#47
kapone32
Well, if there was nothing wrong, it wouldn't have been revised so quickly. We did not see a redesign of the 8-pin connector when it replaced the 6-pin on PCIe power cables. Nvidia championed the first iteration, and it has issues. Whether the reports are anecdotal or real is not the important part. What is important is that the rest of the group created ATX 3.1 for CES 2024 to stifle the narrative. There is also the fact that if I paid $2500 for a GPU, I would be pretty cheesed if something like this happened.
Posted on Reply
#48
Chrispy_
kapone32Well, if there was nothing wrong, it wouldn't have been revised so quickly. We did not see a redesign of the 8-pin connector when it replaced the 6-pin on PCIe power cables. Nvidia championed the first iteration, and it has issues. Whether the reports are anecdotal or real is not the important part. What is important is that the rest of the group created ATX 3.1 for CES 2024 to stifle the narrative. There is also the fact that if I paid $2500 for a GPU, I would be pretty cheesed if something like this happened.
There's plenty wrong with the connector. It greatly exceeds the current rating of the Micro-Fit connector as defined by Molex, the manufacturer of the Micro-Fit connector.


Intel and Nvidia created a spec that violates the manufacturer's guidelines right out of the gate, and everyone's wondering why there are problems.

Posted on Reply
#49
Panther_Seraphin
Chrispy_There's plenty wrong with the connector. It greatly exceeds the current rating of the Micro-Fit connector as defined by Molex, the manufacturer of the Micro-Fit connector.
Looking at some of the documentation, they are using Amphenol connectors rather than Molex for the specification, and there they do seem to be within the stated specification.

However... some things I noticed in the documentation:

[INDENT]
  1. The connector is rated for an operational power of 600 watts max, nothing more.
  2. However, they say there is the possibility of drawing 9.5 amps per pin (so you would expect 6x 12V and 6 ground pins), which actually maths out at 684 watts. Also, this is at 12VDC, nothing more. Your PSU doesn't output at least 12V on that rail, especially under vDroop? Well, you're actually rated for even less power now.
  3. It is only rated for 600 watts of power draw from -40 to 105 degrees, with no mention of the temperature derating you would expect.
  4. All testing will be accomplished at 25 ±5 degrees Celsius.
[/INDENT]

Just some areas that make me dubious about the overall ratings.
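Point 2 above is easy to quantify: at a fixed per-pin current ceiling, any sag below 12V directly shrinks the deliverable wattage. A small sketch using the 9.5A / 6-power-pin figures from the post (assumed values, not datasheet facts):

```python
# Deliverable power at a fixed per-pin current ceiling, as a function of rail
# voltage (9.5 A per pin and 6 power pins are the figures quoted above).

def deliverable_watts(volts: float, amps_per_pin: float = 9.5, power_pins: int = 6) -> float:
    return volts * amps_per_pin * power_pins

print(deliverable_watts(12.0))  # nominal 12 V rail -> 684.0 W
print(deliverable_watts(11.4))  # 5% droop -> about 650 W, ~34 W of ceiling gone
```

So a PSU sagging to 11.4V under load quietly shaves roughly 34W off the connector's theoretical ceiling, which is the derating concern the list raises.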
Posted on Reply
#50
R-T-B
Chrispy_Using 3xPCIe cables is definitely clunky, but I also don't want a future where GPUs routinely need 450W+
No one does but we also want advancement without significant silicon advances, so something's gotta give.
Chrispy_There's plenty wrong with the connector. It greatly exceeds the current rating of the Micro-Fit connector as defined by Molex, the manufacturer of the Micro-Fit connector.
You want to quit bullshitting?

Nvm, I need to quit playing electrical pretendo-engineer. Stuff below is invalid math as discussed below.

www.molex.com/content/dam/molex/molex-dot-com/products/automated/en-us/productspecificationpdf/430/43045/PS-43045-001.pdf?inline


Do your own wattage math. With 18AWG wire, the 12-pin connector is capable of 12V x 5.5A x 12 pins, or 792W. They just need to build it to spec. Heck, even 20AWG makes it to 642W.
Posted on Reply
