
NVIDIA Now Ships GeForce RTX 4090 Founders Edition with Updated Power Connector

The actual technical issue is that, as usual, you're making things up to suit your narrative. If each 12VHPWR connector pin were only rated for 4A @ 12V, it would be physically impossible for the connector to provide up to 600W; that's basic math. Since it can provide up to 600W, your completely unsubstantiated claim of only 4A per pin is quite obviously incorrect.
While his numbers are off, he's not technically wrong.

The 8-pin connectors could handle roughly double their rating: e.g., the 8-pin could tolerate 310 watts but was spec'd for 150 watts in official capacity. The 12-pin is rated for 684 watts and spec'd for 600. That's not a lot of safety margin in the design, which naturally means that if the connection is not 100% perfect, the pins will begin to overheat.

It is, simply put, a poor design that was shrunk to be trendy-looking rather than functional. They should have worked with the old 8-pin design and adapted that size of pin to a 12- or even 16-pin config to handle these 600W throughputs.
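To put numbers on that margin, here's a quick back-of-the-envelope sketch using the wattages quoted above (the 310W and 684W tolerance figures are the poster's, not official ratings):

```python
# Back-of-the-envelope connector headroom, using the figures quoted above.
# (310 W and 684 W are the tolerance numbers claimed in this thread,
# not official Molex/PCI-SIG ratings.)

def headroom(spec_w: float, tolerance_w: float) -> float:
    """Ratio of what the connector can tolerate to what it is spec'd for."""
    return tolerance_w / spec_w

print(f"8-pin PCIe: {headroom(150, 310):.2f}x")   # ~2.07x
print(f"12VHPWR:    {headroom(600, 684):.2f}x")   # ~1.14x

# Sanity check on the per-pin claim above: 12VHPWR has six 12 V supply
# pins, so 6 * 4 A * 12 V = 288 W -- far short of 600 W, which is why
# 4 A per pin can't be right. At the often-cited 9.5 A per pin,
# 6 * 9.5 A * 12 V = 684 W, which matches the tolerance figure.
```

In other words, at 600W the 12VHPWR pins run at roughly 88% of that 684W ceiling, versus under 50% for an 8-pin running at spec.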
 
No, it is not; you need your reading glasses. July 3rd.
When news broke out about the revised connector, it was said 4070 and later cards are already using it. It was only the 4090 and 4080 that people were left wondering about.
 

I just quoted the news, and that's not correct in any way.
 
Curious what connector we'll see them use next generation. It was too late to redesign the 40x0 series, other than to revise the existing connector.

I've felt for some time that these unsightly cables hanging out of the side of an increasingly heavy GPU need to be entirely rethought. I know that's easier said than done, but it seems like we're reaching the point where a more drastic design change needs to occur. You either get this new connector and its potential pitfalls, or dual or triple 8-pins. They ditched AGP years ago for PCIe; maybe there needs to be a revised PCIe slot setup where more power can be fed to GPUs through the motherboard, delivering more power through more connections, with more safety mechanisms. Even if it makes motherboards longer, it's all the same when you are selecting a massive GPU for your case. Honestly, perhaps the entire system build needs to be reimagined to accommodate the cooling requirements of everything we have today. ATX started with nothing requiring cooling, and it's had a good run, but maybe it's time for a change. Maybe it would even drive sales for a while.
 

The EPS12V is always there.
300W apiece, and server-market proven. :)
 
There neither is nor was a design flaw; it was always user error. The updated design simply makes it more difficult for user error to cause physical hardware damage: now the card will more reliably fail to boot when the connector isn't properly seated, as opposed to an improper connection causing melting and/or burning.

So it's a design flaw. Allowing the user to make an error is a design flaw; those connectors should be designed so that they only really plug in when it's actually safe to do so.

You have to realize that 'user error' isn't a blanket term to absolve all mistakes. This one was to be expected and should have absolutely been a part of the design.
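For background on that "fail to boot" behaviour: 12VHPWR carries two sense pins whose grounded/open state tells the card how much power the cable supports, and an unseated plug reads as all-open. A minimal sketch of that lookup follows; the exact combination-to-wattage mapping is as commonly reported from the PCIe CEM 5.0 spec, so treat it as an assumption:

```python
# Minimal sketch of the 12VHPWR sense-pin power lookup. The mapping is
# as commonly reported from PCIe CEM 5.0 -- an assumption, not a
# verified spec excerpt.
SENSE_TABLE = {
    ("gnd", "gnd"): 600,    # full 600 W cable
    ("open", "gnd"): 450,
    ("gnd", "open"): 300,
    ("open", "open"): 150,  # also what an unmated/unseated cable reads as
}

def allowed_power(sense0: str, sense1: str) -> int:
    """Sustained power (watts) the GPU may draw for a sense-pin state."""
    return SENSE_TABLE[(sense0, sense1)]

# A half-seated connector whose sense pins never make contact reads as
# open/open, so the card limits itself to 150 W (or refuses to run)
# instead of pulling 600 W through bad contacts.
print(allowed_power("open", "open"))  # 150
```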
 
Honestly, perhaps the entire system build needs to be reimagined to accommodate the cooling requirements of everything we have today…
Well, that's an intriguing idea.

Though the underlying problem would still remain: either make the motherboard deliver all the power we need (which, as @Assimilator pointed out, would make motherboards more expensive, even for those that do not need a discrete video card), or design yet another power connector that can jumpstart your car, yet can be easily inserted and is no bigger than what we have today. It really feels like we're trying to defy physics.
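To put numbers on the motherboard-delivery idea: a standard PCIe x16 slot is budgeted for 75W (about 5.5A on its 12V pins), while a 600W card needs 50A at 12V. A quick sketch (slot figures are the commonly cited CEM numbers; treat them as approximate):

```python
# Rough sizing of "feed the GPU through the slot" against today's slot.
# Slot numbers are the commonly cited PCIe CEM figures (approximate).
slot_12v_amps = 5.5      # 12 V current a standard x16 slot may deliver
gpu_watts = 600

amps_needed = gpu_watts / 12           # 50 A at 12 V
scale = amps_needed / slot_12v_amps    # ~9x today's slot current

print(f"{amps_needed:.0f} A needed vs {slot_12v_amps} A available "
      f"(~{scale:.0f}x the copper, or a higher-voltage rail)")
```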

Allowing the user to make an error is a design flaw…
The auto industry would like to have a word with you.
 
Apple had a solution with the 2019 Mac Pro, but it was really long and certainly would be hard to adopt. I think it could drive up to 400W GPUs, and it was able to power dual Vega 64 and dual RDNA GPUs. It's a shame that concept died, because even those custom GPUs had no fans, but used case fans to cool instead. That's the sort of design change I wouldn't mind seeing: the entire platform reimagined to accommodate today's hardware, as opposed to new hardware being shoehorned to work on a platform standard that is well past its prime.
 
Well, that's an intriguing idea.
It's been tried before with BTX. Problem is, you gotta get all the case manufacturers and motherboard manufacturers on board. ATX required Intel inventing the new standard and enforcing it with new designs for their latest chips, much like the NUC did for mini PCs and the ultrabook did for laptops.

Intel tried with BTX but didn't fully commit.
…design yet another power connector that can jumpstart your car, yet can be easily inserted and is no bigger than what we have today.
We already have the answer: the 8-pin is easy to manage, just make that into a 12- or 16-pin. It was making the pins smaller and weaker that caused this whole issue. Point the cables towards the front of the case and let them rest on the GPU instead of being crammed into the side panel, like GPUs of old.
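For a sense of scale on that suggestion: the 8-pin spec pushes its 150W through three 12V pins, about 4.2A each, so simply adding more of the same pins gets to 600W while keeping per-pin loads modest. A sketch below; the 16-pin layout is hypothetical:

```python
# Scaling the 8-pin approach instead of shrinking the pins.
# The 8-pin PCIe spec delivers 150 W over three 12 V pins,
# i.e. ~4.2 A per pin at spec.
spec_amps_per_pin = 150 / 3 / 12        # ~4.17 A

# Hypothetical 16-pin built from the same terminals: 8 supply + 8 ground.
supply_pins = 8
watts_at_spec_load = supply_pins * spec_amps_per_pin * 12
print(f"{watts_at_spec_load:.0f} W at the 8-pin's conservative load")  # 400 W

# Hitting 600 W needs 6.25 A per pin -- still well under the ~8-10 A
# these larger Mini-Fit style terminals are typically quoted for.
print(f"{600 / supply_pins / 12:.2f} A per pin at 600 W")  # 6.25
```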
Apple had a solution with the 2019 Mac Pro… the entire platform reimagined to accommodate today's hardware.
That's great and all, except such a design is proprietary. Not just the Apple part, but having GPUs like that cooled by the chassis requires specific designs that don't work generation to generation, or brand to brand. See also: the trouble with MXM GPU swaps in old gaming laptops.

New designs are nice, but I don't want to sacrifice the ability to maintain or upgrade my hardware in the process.
 
Apple had a solution with the 2019 Mac Pro… It's a shame that concept died…
It died because it was stupid proprietary nonsense that required the components to be designed to fit a once-off, custom form factor. Which is the exact issue that the infinitely extensible PC, and ATX form factor, was designed to overcome. So decide whether you want extensibility, or "clever" design features - because you can't have both.

Two EPS12V connectors for 600W are still 2 connectors, not one.
 
Only decades of quality management will disagree.

Months after release, for the exact same product, they created two different SKUs, obviously adding cost to the product; this on the NVIDIA side.
On the PCI-SIG side, a revision a couple of months later to what was supposed to be an established standard, after numerous cases of cables melting.

You sure are drinking the Kool-Aid if you call this regularly scheduled quality management. More like damage control after no quality control.
 
DIY - "Cut the sense pins a bit. Congratulations, you have the new connector!"
 
DIY - "Cut the sense pins a bit. Congratulations, you have the new connector!"

And elongating the big ones by 0.25mm is also part of the spec; it wiggles less that way. The 3090 Ti was supposed to test this new connector, and the 4090 is on the way out anyway. All served as beta tests for the 5090.
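For the curious, that geometry change is the whole trick: longer power pins and recessed sense pins mean the sense circuit is the first thing to break contact as a plug backs out. A toy model below; all depths are made-up illustrative numbers, not measured dimensions:

```python
# Toy model of the revised pin geometry: power pins mate early, recessed
# sense pins mate only near full seating. All depths are illustrative
# assumptions, not measured dimensions.
POWER_ENGAGE_MM = 4.0   # lengthened power pins touch from here
SENSE_ENGAGE_MM = 5.5   # recessed sense pins touch only near full depth

def pin_state(depth_mm: float) -> str:
    power = depth_mm >= POWER_ENGAGE_MM
    sense = depth_mm >= SENSE_ENGAGE_MM
    if power and sense:
        return "fully seated: full power allowed"
    if power:
        return "half seated: sense open, card limits power / won't boot"
    return "unplugged: no contact"

for depth in (3.0, 4.5, 6.0):
    print(f"{depth:.1f} mm -> {pin_state(depth)}")
```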
 
Like I said in another post, I tried to wiggle my cable. It won't budge. Rock solid.
 
The 12-pin is rated for 684 watts and spec'd for 600. That's not a lot of safety margin in the design…

There are power spikes over 600 watts (20ms measurement).
I want to know how these rated numbers are obtained. Do they account for the fact that the cards additionally heat up the connectors, so the melting point is reached pretty easily?

I don't trust the "connector".
The pin area needs to be much more robust.
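On the heating question: the heat generated in each contact is I²R, so the published ratings stand or fall on contact resistance staying low. A rough sketch with illustrative resistance values (the milliohm figures are assumptions, not measurements):

```python
# I^2 * R heating per contact: why a degraded contact melts. The
# resistance values are illustrative assumptions, not measurements.
amps_per_pin = 600 / 12 / 6     # ~8.3 A per supply pin at 600 W

for label, r_ohms in [("healthy contact", 0.002),
                      ("worn/partial contact", 0.020),
                      ("badly seated contact", 0.100)]:
    heat_w = amps_per_pin ** 2 * r_ohms
    print(f"{label}: ~{heat_w:.1f} W dissipated in the contact")

# ~0.1 W is harmless; ~7 W concentrated in one tiny plastic-housed
# contact is how connectors melt. The >600 W spikes (20 ms scale)
# push these numbers higher still, if only briefly.
```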
 

I've got a 4070 Ti, and I'm not going to lie: if I didn't know about the issue with the 12V cable, I'd not have pushed it in as hard as I did. The pressure needed was excessive. On my card, with the PSU-supplied cable, it's easy to see how folk might not fully clip it in. Of note, I discerned no click from the clip. I used a torch to look at the seam to ensure it was tight. Never needed that before.


Edit: Also, let's avoid personal barbs, please.
In a way, that means it's a better design; the traditional 8-pin is actually not that tight a fit at all and doesn't need a wiggle to remove it. You can just release the clip and it's loose.

An electrical connection should fit tight and not work its way out.

Both the NVIDIA adapter and my new Seasonic cable fit really snug, and both have a pretty clear-sounding click.

Curious what connector we'll see them use next generation…
The same; 12VHPWR isn't going anywhere. It will get updated, but it's here to stay.

Won't be surprised if the next-gen Intel GPUs use it as well, seeing this was Intel's idea after all. All NVIDIA did was design the sense chip and pins.
 
Only decades of quality management will disagree.
What kind of political statement is that?

No, it took weeks to find failures.
 
Tbf, 4090 FE price has been in steady 3% decline per month. We just need to wait 2 years for it to come down to reasonable levels for 2023.
It's not a real price drop until NVIDIA officially drops the MSRP on their own webpage (which hasn't happened yet).

I'm tired of only seeing coupons and Steam gift cards from certain retailers on certain models only, like the MSI Ventus;
we need a price drop across the board.
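FWIW, that 3% per month compounds. A quick sketch of where a steady decline would put the FE price, assuming the $1,599 launch MSRP as the starting point:

```python
# Compound a steady 3%/month decline from the 4090 FE's $1,599 MSRP.
# (Assumes the quoted rate holds, which real street prices may not.)
msrp = 1599
for months in (6, 12, 24):
    price = msrp * 0.97 ** months
    print(f"{months:>2} months: ~${price:,.0f}")
# -> 6 months: ~$1,332 | 12 months: ~$1,110 | 24 months: ~$770
```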
 
It died because it was stupid proprietary nonsense that required the components to be designed to fit a once-off, custom form factor…

Two EPS12V connectors for 600W are still 2 connectors, not one.
Don't hear me wrong. I wasn't saying Apple's solution is the answer, but rather imagine if that was what the new standard was like, and you could easily pick cards off the shelf that supported this design idea. It wouldn't be impossible to migrate to, as the 2019 Mac Pro also supported standard graphics cards. The motherboard would simply need to have the new port to allow for long-term adoption. PCIe adoption did just that as well, having both PCIe and PCI slots for a period of time. In this case, we wouldn't even be talking about replacing PCIe, but introducing an extension of it.
 
FWIW, for a while we had PCI, AGP, and PCIe. And before that, it was VLB... Power delivery wasn't a main concern for any of these, though.
 
I still don't see the need to make this connector a mandatory requirement. I, and probably most of us here, are OK connecting 4 cables to our GPU.
 
And why are the 4070 Ti and 4080 being left out? They're more powerful than the 4070
(for those who don't know, the 4070 is getting this new power connector too).
 
I still don't see the need to make this connector a mandatory requirement. I, and probably most of us here, are OK connecting 4 cables to our GPU.
I love the assumption, but I still haven't got used to having to plug in the second cable.
 