Wednesday, July 12th 2023

NVIDIA Now Ships GeForce RTX 4090 Founders Edition with Updated Power Connector

A few weeks ago, we reported that NVIDIA was already shipping its GeForce RTX 4070 Founders Edition cards with an improved version of the 12VHPWR connector, called 12V-2x6. Today we learn that NVIDIA is now shipping the GeForce RTX 4090 Founders Edition with the improved 12V-2x6 connector as well. Reddit user u/prackprackprack, posting in r/NVIDIA, reports that their Founders Edition RTX 4090 has shortened sensing pins on the connector. If the plug is not fully seated, the shortened sensing pins lose contact first, so the card will not draw full power and melt the connector. Besides the RTX 4070 FE, the RTX 4090 FE is now updated as well, which makes sense, as it is the most power-hungry card in the family. However, this may be a partial 12V-2x6 implementation. Below, you can see images showing the shortened sensing pins.
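To illustrate how the shortened sensing pins are meant to protect the connector, here is a minimal, hypothetical sketch of the idea (illustrative only; the function name and the 600 W budget are assumptions, and this is not NVIDIA's actual firmware logic). Because the sense pins are recessed relative to the power pins, a partially seated plug mates the power pins but leaves the sense circuit open, so the card can refuse to draw its full power budget:

```python
# Hypothetical model of the recessed sense-pin behavior (illustrative sketch,
# not NVIDIA's implementation; names and the 600 W budget are assumptions).

def allowed_power_w(power_pins_mated: bool, sense_pins_mated: bool) -> int:
    """Return the power, in watts, the card should allow itself to draw."""
    if not power_pins_mated:
        return 0    # no 12 V contact at all: the card cannot power up
    if not sense_pins_mated:
        return 0    # plug only partially seated: sense circuit open, refuse full load
    return 600      # fully seated: full connector budget available

# A half-inserted plug still mates the longer power pins but not the shortened
# sense pins, so the card refuses the load instead of pulling full power
# through a poor contact and melting the connector.
assert allowed_power_w(power_pins_mated=True, sense_pins_mated=False) == 0
assert allowed_power_w(power_pins_mated=True, sense_pins_mated=True) == 600
```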
Sources: r/NVIDIA (Reddit), via VideoCardz

62 Comments on NVIDIA Now Ships GeForce RTX 4090 Founders Edition with Updated Power Connector

#26
bug
Bomby569: no, it is not, you need your reading glasses. July 3rd.
When news broke out about the revised connector, it was said 4070 and later cards are already using it. It was only the 4090 and 4080 that people were left wondering about.
Posted on Reply
#27
Object55
They should've just abandoned this connector; everybody gets PTSD when they see it.
Posted on Reply
#28
Bomby569
bug: When news broke out about the revised connector, it was said 4070 and later cards are already using it. It was only the 4090 and 4080 that people were left wondering about.
I just quoted the news, and that's not correct in any way.
Posted on Reply
#29
Darmok N Jalad
Curious to see what connector they use next generation. It was too late to redesign the 40x0 series, other than to revise the existing connector.

I've felt for some time that these unsightly cables hanging out of the side of an increasingly heavy GPU need to be entirely rethought. I know that's easier said than done, but it seems like we're reaching the point where a more drastic design change needs to occur. You either get this new connector and its potential pitfalls, or dual or triple 8-pins. They ditched AGP years ago for PCIe; maybe there needs to be a revised PCIe slot setup where more power can be fed to GPUs through the motherboard, delivering more power through more connections, with more safety mechanisms. Even if it makes motherboards longer, it's all the same when you are selecting a massive GPU for your case. Honestly, perhaps the entire system build needs to be reimagined to accommodate the cooling requirements of everything we have today. ATX started with nothing requiring cooling, and it's had a good run, but maybe it's time for a change. Maybe it would even drive sales for a while.
Posted on Reply
#30
Crackong
Darmok N Jalad: Curious to see what connector they use next generation. It was too late to redesign the 40x0 series, other than to revise the existing connector.

I've felt for some time that these unsightly cables hanging out of the side of an increasingly heavy GPU need to be entirely rethought. I know that's easier said than done, but it seems like we're reaching the point where a more drastic design change needs to occur. You either get this new connector and its potential pitfalls, or dual or triple 8-pins. They ditched AGP years ago for PCIe; maybe there needs to be a revised PCIe slot setup where more power can be fed to GPUs through the motherboard, delivering more power through more connections, with more safety mechanisms. Even if it makes motherboards longer, it's all the same when you are selecting a massive GPU for your case. Honestly, perhaps the entire system build needs to be reimagined to accommodate the cooling requirements of everything we have today. ATX started with nothing requiring cooling, and it's had a good run, but maybe it's time for a change. Maybe it would even drive sales for a while.
The EPS12V is always there.
300W a piece and Server market proven:)
Posted on Reply
#31
persondb
Assimilator: There neither is nor was a design flaw; it was always user error. The updated design simply makes it more difficult for user error to cause physical hardware damage - now the card will more reliably fail to boot when the connector isn't properly seated, as opposed to an improper connection causing melting and/or burning.
So it's a design flaw. Allowing the user to make an error is a design flaw; those connectors should be designed so that they only really plug in when it's actually safe to do so.

You have to realize that 'user error' isn't a blanket term to absolve all mistakes. This one was to be expected and should absolutely have been accounted for in the design.
Posted on Reply
#32
bug
Darmok N Jalad: Curious to see what connector they use next generation. It was too late to redesign the 40x0 series, other than to revise the existing connector.

I've felt for some time that these unsightly cables hanging out of the side of an increasingly heavy GPU need to be entirely rethought. I know that's easier said than done, but it seems like we're reaching the point where a more drastic design change needs to occur. You either get this new connector and its potential pitfalls, or dual or triple 8-pins. They ditched AGP years ago for PCIe; maybe there needs to be a revised PCIe slot setup where more power can be fed to GPUs through the motherboard, delivering more power through more connections, with more safety mechanisms. Even if it makes motherboards longer, it's all the same when you are selecting a massive GPU for your case. Honestly, perhaps the entire system build needs to be reimagined to accommodate the cooling requirements of everything we have today. ATX started with nothing requiring cooling, and it's had a good run, but maybe it's time for a change. Maybe it would even drive sales for a while.
Well, that's an intriguing idea.

Though the underlying problem would still remain: either make the motherboard deliver all the power we need (which, as @Assimilator pointed out, would make motherboards more expensive, even for those that do not need a discrete video card), or design yet another power connector that can jumpstart your car, yet can be easily inserted and not bigger than what we have today. It really feels like we're trying to mock physics in a way.
persondb: So it's a design flaw. Allowing the user to make an error is a design flaw; those connectors should be designed so that they only really plug in when it's actually safe to do so.

You have to realize that 'user error' isn't a blanket term to absolve all mistakes. This one was to be expected and should absolutely have been accounted for in the design.
The auto industry would like to have a word with you.
Posted on Reply
#33
Darmok N Jalad
Crackong: The EPS12V is always there.
300W a piece and Server market proven:)
Apple had a solution with the 2019 Mac Pro, but it was really loooong and certainly would be hard to adopt. I think it could drive up to 400W GPUs, and it was able to power dual-Vega64s and dual-RDNA GPUs. It's a shame that concept died, because even those custom GPUs had no fans, but used case fans to cool instead. That's the sort of design changes I wouldn't mind seeing. The entire platform reimagined to accommodate today's hardware, as opposed to new hardware being shoehorned to work on a platform standard that is well past its prime.
Posted on Reply
#34
TheinsanegamerN
bug: Well, that's an intriguing idea.
It's been tried before with BTX. Problem is, you gotta get all the case manufacturers and motherboard manufacturers on board. ATX required Intel inventing the new standard and enforcing it with new designs for their latest chips, much like the NUC did for mini PCs and the ultrabook did for laptops.

Intel tried with BTX but didn't fully commit.
bug: Though the underlying problem would still remain: either make the motherboard deliver all the power we need (which, as @Assimilator pointed out, would make motherboards more expensive, even for those that do not need a discrete video card), or design yet another power connector that can jumpstart your car, yet can be easily inserted and not bigger than what we have today. It really feels like we're trying to mock physics in a way.
We already have the answer. The 8-pin is easy to manage; just make that into 12 or 16 pins. It was making the pins smaller and weaker that caused this whole issue. Point the cables towards the front of the case and let them rest on the GPU instead of being crammed into the side panel, like GPUs of old.
Darmok N Jalad: Apple had a solution with the 2019 Mac Pro, but it was really loooong and certainly would be hard to adopt. I think it could drive up to 400W GPUs, and it was able to power dual-Vega64s and dual-RDNA GPUs. It's a shame that concept died, because even those custom GPUs had no fans, but used case fans to cool instead. That's the sort of design changes I wouldn't mind seeing. The entire platform reimagined to accommodate today's hardware, as opposed to new hardware being shoehorned to work on a platform standard that is well past its prime.
That's great and all, except such a design is proprietary. Not just the Apple part, but having GPUs like that cooled by the chassis requires specific designs that don't work generation to generation, or brand to brand. See also: the trouble with MXM GPU swaps in old gaming laptops.

New designs are nice, but I don't want to sacrifice the ability to maintain or upgrade my hardware in the process.
Posted on Reply
#35
Assimilator
Darmok N Jalad: Apple had a solution with the 2019 Mac Pro, but it was really loooong and certainly would be hard to adopt. I think it could drive up to 400W GPUs, and it was able to power dual-Vega64s and dual-RDNA GPUs. It's a shame that concept died, because even those custom GPUs had no fans, but used case fans to cool instead. That's the sort of design changes I wouldn't mind seeing. The entire platform reimagined to accommodate today's hardware, as opposed to new hardware being shoehorned to work on a platform standard that is well past its prime.
It died because it was stupid proprietary nonsense that required the components to be designed to fit a once-off, custom form factor. Which is the exact issue that the infinitely extensible PC, and ATX form factor, was designed to overcome. So decide whether you want extensibility, or "clever" design features - because you can't have both.
Crackong: The EPS12V is always there.
300W a piece and Server market proven:)
Two EPS12V connectors for 600W are still 2 connectors, not one.
Posted on Reply
#36
Timbaloo
TheoneandonlyMrK: You don't spend money fixing something that doesn't need it...
Only decades of quality management will disagree.
Posted on Reply
#37
Bomby569
Timbaloo: Only decades of quality management will disagree.
Months after release, for the exact same product, they created two different SKUs, obviously adding cost to the product - and that's just on the NVIDIA side.
On the PCI-SIG side, there was a revision a couple of months later to what was supposed to be an established standard, after numerous cases of cables melting.

You sure are drinking the Kool-Aid if you call this regularly scheduled quality management improvement. More like damage control after no quality control.
Posted on Reply
#38
BorisDG
DIY - "Cut the sense pins a bit. Congratulations, you have the new connector!"
Posted on Reply
#39
N/A
BorisDGDIY - "Cut the sense pins a bit. Congratulations, you have the new connector!"
And elongate the big ones by 0,25mm also part of the spec. Wiggles less that way. 3090 Ti was supposed to test this new connector. 4090 is on the way out anyway. All served as beta tests for the 5090.
Posted on Reply
#40
BorisDG
N/A: And elongating the big ones by 0.25 mm is also part of the spec. It wiggles less that way. The 3090 Ti was supposed to test this new connector. The 4090 is on the way out anyway. All served as beta tests for the 5090.
Like I said in another post, I tried to wiggle my cable. It won't budge. Rock solid.
Posted on Reply
#41
ARF
TheinsanegamerN: While his numbers are off, he's not technically wrong.

The 8-pin connectors could handle double their rating. E.g. the 8-pin could tolerate 310 watts, but was spec'd for 150 watts in official capacity. The 12-pin is rated for 684 watts and spec'd for 600. That's not a lot of safety margin in the design, which naturally means that if the connection is not 100% perfect, the pins will begin to overheat.

It is, simply put, a poor design that was shrunk to be trendy-looking rather than functional. They should have worked with the old 8-pin design and adapted that size of pin to a 12- or even 16-pin config to handle these 600 W throughputs.
There are power spikes over 600 watts (20 ms measurements).
I want to know how these rated numbers are obtained. Do they account for the fact that the cards also heat the connectors up, so the melting point is reached that much more easily?

I don't trust the "connector".
The pin area needs to be much more robust.
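For context, here is a quick back-of-the-envelope on the headroom figures in the quoted post. This is only a sketch using the poster's numbers (310 W tolerated vs. 150 W spec for the 8-pin, 684 W vs. 600 W for 12VHPWR), not official spec-sheet values:

```python
# Rough safety-margin comparison using the wattages quoted above
# (the poster's figures, not official spec-sheet numbers).
margin_8pin    = 310 / 150   # classic 8-pin: ~2.07x headroom over its 150 W rating
margin_12vhpwr = 684 / 600   # 12VHPWR: ~1.14x headroom over its 600 W rating

print(f"8-pin headroom:   {margin_8pin:.2f}x")
print(f"12VHPWR headroom: {margin_12vhpwr:.2f}x")
```

Roughly a 2x margin for the old connector versus about 1.14x for the new one, which is the gap the quoted post is pointing at.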
Posted on Reply
#42
TheDeeGee
the54thvoid: I've got a 4070 Ti, and I'm not going to lie: if I didn't know about the issue with the 12V cable, I wouldn't have pushed it in as hard as I did. The pressure needed was excessive. On my card, with the PSU-supplied cable, it's easy to see how folk might not fully clip it in. Of note, I discerned no click of the clip. I used a torch to look at the seam to ensure it was tight. Never needed that before.

Edit: Also, let's avoid personal barbs, please.
In a way, that means it's a better design; the traditional 8-pin is actually not that tight a fit at all and doesn't need any wiggling to remove. You can just release the clip and it's loose.

An electrical connection should fit tightly and not work its way out.

Both the NVIDIA adapter and my new Seasonic cable fit really snugly, and both have a pretty clear-sounding click.
Darmok N Jalad: Curious to see what connector they use next generation. It was too late to redesign the 40x0 series, other than to revise the existing connector.

I've felt for some time that these unsightly cables hanging out of the side of an increasingly heavy GPU need to be entirely rethought. I know that's easier said than done, but it seems like we're reaching the point where a more drastic design change needs to occur. You either get this new connector and its potential pitfalls, or dual or triple 8-pins. They ditched AGP years ago for PCIe; maybe there needs to be a revised PCIe slot setup where more power can be fed to GPUs through the motherboard, delivering more power through more connections, with more safety mechanisms. Even if it makes motherboards longer, it's all the same when you are selecting a massive GPU for your case. Honestly, perhaps the entire system build needs to be reimagined to accommodate the cooling requirements of everything we have today. ATX started with nothing requiring cooling, and it's had a good run, but maybe it's time for a change. Maybe it would even drive sales for a while.
The same; 12VHPWR isn't going anywhere. It will get updated, but it's here to stay.

I won't be surprised if the next-gen Intel GPUs use it as well, seeing this was Intel's idea after all. All NVIDIA did was design the sense chip and pins.
Posted on Reply
#43
TheoneandonlyMrK
Timbaloo: Only decades of quality management will disagree.
What kind of political statement is that?

No, it took weeks to find failures.
Posted on Reply
#44
SuperConker
Chomiq: Tbf, the 4090 FE price has been in a steady 3% decline per month. We just need to wait 2 years for it to come down to reasonable levels for 2023.
It's not a real price drop until NVIDIA officially drops the MSRP on their own webpage (which hasn't happened yet).

I'm tired of only seeing coupons and Steam gift cards from certain retailers, and only on certain models like the MSI Ventus;
we need a price drop across the board.
Posted on Reply
#45
Darmok N Jalad
TheinsanegamerN: It's been tried before with BTX. Problem is, you gotta get all the case manufacturers and motherboard manufacturers on board. ATX required Intel inventing the new standard and enforcing it with new designs for their latest chips, much like the NUC did for mini PCs and the ultrabook did for laptops.

Intel tried with BTX but didn't fully commit.

We already have the answer. The 8-pin is easy to manage; just make that into 12 or 16 pins. It was making the pins smaller and weaker that caused this whole issue. Point the cables towards the front of the case and let them rest on the GPU instead of being crammed into the side panel, like GPUs of old.

That's great and all, except such a design is proprietary. Not just the Apple part, but having GPUs like that cooled by the chassis requires specific designs that don't work generation to generation, or brand to brand. See also: the trouble with MXM GPU swaps in old gaming laptops.

New designs are nice, but I don't want to sacrifice the ability to maintain or upgrade my hardware in the process.
Assimilator: It died because it was stupid proprietary nonsense that required the components to be designed to fit a once-off, custom form factor. Which is the exact issue that the infinitely extensible PC, and ATX form factor, was designed to overcome. So decide whether you want extensibility, or "clever" design features - because you can't have both.


Two EPS12V connectors for 600W are still 2 connectors, not one.
Don’t hear me wrong. I wasn’t saying Apple’s solution is the answer, but rather to imagine if that was what the new standard was like, and you could easily pick cards off the shelf that supported this design idea. It wouldn’t be impossible to migrate to, as the 2019 Mac Pro supported standard graphics cards as well. The motherboard would simply need to have the new port to allow for long-term adoption. PCIe adoption did just that as well, having both PCIe and PCI slots for a period of time. In this case, we wouldn’t even be talking about replacing PCIe, but introducing an extension of it.
Posted on Reply
#46
bug
Darmok N Jalad: Don’t hear me wrong. I wasn’t saying Apple’s solution is the answer, but rather to imagine if that was what the new standard was like, and you could easily pick cards off the shelf that supported this design idea. It wouldn’t be impossible to migrate to, as the 2019 Mac Pro supported standard graphics cards as well. The motherboard would simply need to have the new port to allow for long-term adoption. PCIe adoption did just that as well, having both PCIe and PCI slots for a period of time. In this case, we wouldn’t even be talking about replacing PCIe, but introducing an extension of it.
Fwiw, for a while we had PCI, AGP and PCIe. And before that, it was VLB... Power delivery wasn't a main concern for any of these, though.
Posted on Reply
#47
Soul_
I still don't see the need for a mandatory requirement on this connector. I, and probably most of us here, are OK connecting 4 cables to our GPU.
Posted on Reply
#48
SuperConker
And why are the 4070 Ti and 4080 being left out? They're more powerful than the 4070
(for those who don't know, the 4070 is getting this new power connector too).
Posted on Reply
#49
bug
Soul_: I still don't see the need for a mandatory requirement on this connector. I, and probably most of us here, are OK connecting 4 cables to our GPU.
I love the assumption, but I still haven't got used to having to plug in the second cable.
Posted on Reply
#50
Bomby569
Soul_: I still don't see the need for a mandatory requirement on this connector. I, and probably most of us here, are OK connecting 4 cables to our GPU.
It does help PSU sales, and NVIDIA seems to be doing premium stuff and things :rolleyes:
Posted on Reply