
Proposed new Power Connector

a couple of resistors fixes this problem
 
a couple of resistors fixes this problem

but some thicker wires and a second connector or extra wires wouldn't hurt. To be clear, I also think they are not the issue, but this connector is stretched a bit too much, so some extra precaution couldn't hurt.
 
They did work, which is good, but they were a kludge solution. You are providing +12V using 3 wires instead of one. Why? No reason. The right way is to up the voltage and use one wire for positive and one for ground.
Upping the voltage isn't possible though; we've already covered that it needs to stay 12V.

12V power is enough to start a car at 300A, provided the wiring is correctly gauged and connected - so it's not a case of 12V being unsuitable, it's a question of picking the appropriate wiring, connectors, and current-balancing across multiple cables.

Yes, 48V would solve the issue of high current over cheap wiring and cheap connectors nicely, but it would cost a lot to implement, cause huge disruption to people who don't benefit in any way, and be of actual use to a vanishingly tiny proportion of people. Clearly, the answer to GPU power for the rich and wealthy is to stop penny-pinching on these stupid cheap wires and cheap connectors.

14AWG wiring and more 12V pairs bring us back to the same safety margins as PCIe and EPS12V
 
Upping the voltage isn't possible though; we've already covered that it needs to stay 12V.
No it doesn't.

12V power is enough to start a car at 300A, provided the wiring is correctly gauged and connected - so it's not a case of 12V being unsuitable, it's a question of picking the appropriate wiring, connectors, and current-balancing across multiple cables.
With a cable as thick as a finger, which isn't a problem in a car's form factor. A cable that thick is a problem in the PC form factor.

Yes, 48V would solve the issue of high current over cheap wiring and cheap connectors nicely, but it would cost a lot to implement, cause huge disruption to people who don't benefit in any way, and be of actual use to a vanishingly tiny proportion of people.
Wrong, wrong, wrong. 48V wouldn't be an opportunity to increase the current over existing connectors, it would be an opportunity to do away with all of the legacy connectors and their cruft. 3.3V wires? Gone. 5V wires? Gone. EPS12V? Gone. PCIe 6- and 8-pin? Gone. Your PSU has one connector that plugs into the motherboard and drives everything (including USB-C power delivery), it has at least one connector you can plug into your high-end GPU, and that is it. It's what ATX12VO should have been.

Fewer, safer connectors are good for everyone, not just enthusiasts.

14AWG wiring and more 12V pairs bring us back to the same safety margins as PCIe and EPS12V
This is still a band-aid over the underlying problem. Stop it with the band-aids! They are exactly how we got into this nonsensical situation!
 
No it doesn't.


With a cable as thick as a finger, which isn't a problem in a car's form factor. A cable that thick is a problem in the PC form factor.


Wrong, wrong, wrong. 48V wouldn't be an opportunity to increase the current over existing connectors, it would be an opportunity to do away with all of the legacy connectors and their cruft. 3.3V wires? Gone. 5V wires? Gone. EPS12V? Gone. PCIe 6- and 8-pin? Gone. Your PSU has one connector that plugs into the motherboard and drives everything (including USB-C power delivery), it has at least one connector you can plug into your high-end GPU, and that is it. It's what ATX12VO should have been.

Fewer, safer connectors are good for everyone, not just enthusiasts.


This is still a band-aid over the underlying problem. Stop it with the band-aids! They are exactly how we got into this nonsensical situation!
Introducing a brand new standard to serve the 0.01% is dumb when the issue is that Nvidia just wanted to use a smaller connector for no good reason. You can't just throw away the compatibility of all existing ATX PSUs to cater to this minuscule, largely irrelevant niche of wealthy xx90 owners just because Nvidia made a stupid decision.

As for the starter motor example - you're missing the point of that example. 300A is safe if the wire is thick enough; 12VHPWR cables and connectors are melting because there's not enough safety margin to cover a malfunction that sends all 500+ watts down one wire pair. We don't need 300A; it's merely proof that any amount of current can be delivered as long as the wire gauge is sized appropriately. AWG14 is perfectly usable in a PC case and is rated to 20A (240W) per wire, with a generous safety margin that covers running long distances through drywall cavities, flammable insulation, and zero ventilation. I'd expect an AWG14 12V pair to handle 600W in a ventilated PC case as a worst-case scenario during a malfunction of the PSU/connector/GPU. It wouldn't be happy, but it wouldn't melt everything like the tiny pins and wires in 12VHPWR do.
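If you want to sanity-check those numbers yourself, here's a rough back-of-the-envelope I²R sketch (my own illustration, not from any spec; the resistance-per-metre values are standard figures for copper, and the 0.5 m cable run is an assumption):

```python
# Back-of-the-envelope I^2*R check for a 0.5 m GPU power lead (copper).
R_PER_M = {"16AWG": 0.0132, "14AWG": 0.0083}  # ohms per metre, standard values

LENGTH_M = 0.5   # assumed PSU-to-GPU cable run
VOLTS = 12.0

def heat_per_pair(watts, gauge, pairs):
    """Current through one pair, and the heat dissipated in its two conductors."""
    amps = watts / VOLTS / pairs               # each pair carries its share
    r_loop = 2 * R_PER_M[gauge] * LENGTH_M     # out on 12V, back on ground
    return amps, amps**2 * r_loop

print(heat_per_pair(600, "16AWG", pairs=6))  # normal: ~8.3 A, ~0.9 W per pair
print(heat_per_pair(600, "16AWG", pairs=1))  # fault:  50 A, ~33 W in one thin pair
print(heat_per_pair(600, "14AWG", pairs=1))  # fault:  50 A, ~21 W - unhappy, not molten
```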

In an ideal world we'd probably have a 20V or 48V standard, but if you've been following the PC industry for any significant amount of time, you'll know from experience how poorly new standards are adopted: the lowest common denominator is always the sole survivor, because cost is the driving factor in popularity and popularity drives economic viability. New standards are rarely economically viable. At best, what tends to happen is a fork in the standard, where there's now the burden of two conflicting standards, and the number of accidents from someone using the wrong standard likely outweighs the number of accidents the new standard is supposed to prevent!

We're here discussing how to fix the silly undersized connector, and that doesn't realistically involve attempting to force every ATX PSU and motherboard on the planet into obsolescence just to appease a few concerned 5090 owners.

This is still a band-aid over the underlying problem. Stop it with the band-aids! They are exactly how we got into this nonsensical situation!
We got into this problem because Nvidia threw the safety margin away and doubled the current per wire, while throwing it all into a much denser, smaller connector.
The band-aid is 12V-2x6. 4x 8-pin is clunky but safe; we can simply improve on that rather than trying to fix the failed 12VHPWR/12V-2x6, which is dumb by design.
 
Introducing a brand new standard to serve the 0.01% is dumb when the issue is that Nvidia just wanted to use a smaller connector for no good reason. You can't just throw away the compatibility of all existing ATX PSUs to cater to this minuscule, largely irrelevant niche of wealthy xx90 owners just because Nvidia made a stupid decision.
[...]

But don't Nvidia GPUs nowadays serve much less than 0.01%? At least that's what the 5xxx series launch looked like.

On a more serious note, the industry went through several such changes, including during the transition to ATX PSUs. Only now there might be a window for us to influence the next standard. If the change were coming, what would you like to see in the connectors and PSUs?
 
Nobody has really given me a real reason why we ever had to abandon the PCIe 8-pin.
In hydraulics, having multiple pipes instead of a single, larger one is bad design. At the same throughput, you'd have more losses.
The couple of courses I had on electrical engineering were more than a decade ago, but from what little I remember, the story should be the same.

Having a single cable is actually more KISS-compliant. Multiple cables add, well, more cables... More degrees of freedom = You're going the wrong way, simplicity-wise.
 
But don't Nvidia GPUs nowadays serve much less than 0.01%? At least that's what the 5xxx series launch looked like.

On a more serious note, the industry went through several such changes, including during the transition to ATX PSUs. Only now there might be a window for us to influence the next standard. If the change were coming, what would you like to see in the connectors and PSUs?
5090s are vanishingly rare right now, 4090s are reasonably common but still comfortably under 1% of gamers, and therefore likely to be under 0.1% of the wider home PC industry. I don't have an exact figure, but 99.x% are served just fine by existing, compatible, ubiquitous 12V ATX PSUs.

The problem isn't that 48V wouldn't be a good idea, it's that 48V isn't needed by the mass-market, so they won't shoulder the cost of adoption or change. These are basic, proven economic behaviours that I have no control over, it's just the way the world works.

OEMs switched to ATX12VO because it saved costs overall when they were paying for both the PSU and the motherboard. 48V has no economic incentive, so it cannot survive. I don't make the rules of economics, I'm simply telling you what they are. Changes that don't result in profit increases either never make it to market in the first place, or fail and get withdrawn from the market. The last time we drastically changed the PSU standard was 30 years ago, and that was a direct result of the prior standard becoming incompatible with CPU coolers and longer ISA/PCI cards. It was a combined case/motherboard/PSU redesign, and it only worked because the DIY PC industry barely existed at the time; it was a fraction of the size it is today, and most people bought prebuilts from companies like Compaq, Dell, Packard Bell, Gateway 2000, etc. OEMs adopted the change out of necessity, and since they controlled 90% of the market, the DIY industry just followed.

As you can probably tell, this situation no longer applies, and it's why attempts to move to other standards like BTX, DTX, and ATX12VO either failed, or are struggling to gain traction outside of niche markets like low-end prebuilts.
 
How about all GPUs just receive power the way they do on BTF motherboards, eliminating the cable altogether?
 
How about all GPUs just receive power the way they do on BTF motherboards, eliminating the cable altogether?
Because all you're doing is moving the problem to the motherboard, and nobody wants stupidly long PCIe slots that take up all the space on the board.

AWG14 is perfectly usable in a PC case
Consumers don't want more connectors with thicker wires. They want fewer connectors with thinner wires that are less hassle to remember to plug in and can be more easily routed.

it's that 48V isn't needed by the mass-market
But it is, because the USB-C power delivery specification currently allows up to 240W to be drawn from a single port, and does so completely safely over a single cable because it runs at 48V and therefore only needs 5A. This leaves room in the standard for future extensibility, e.g. 480W at 10A. We are moving to a USB-C-everything future, and people are going to expect every USB-C port, everywhere, to be able to put out 240W.
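The arithmetic is just I = P / V - a trivial sketch to make the point:

```python
# I = P / V: the same wattage needs far less current at a higher bus voltage.
for watts in (240, 600):
    for volts in (12, 48):
        print(f"{watts:>3} W at {volts:>2} V -> {watts / volts:4.1f} A")
# 240 W at 48 V is today's USB-C EPR limit: just 5 A down the cable.
# 600 W at 12 V (a 5090-class GPU) needs 50 A; at 48 V it would need 12.5 A.
```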

Currently there is no way for motherboards or expansion cards to support these power demands except via extra EPS/PCIe power connectors, which robs valuable space and is another point of failure for melting - or for users who just forget to plug the damn things in. Every connector band-aided onto the ATX standard - first the extra 4 pins, then EPS12V, 6-pin PCIe, 8-pin PCIe, and now 12VHPWR and 12V-2x6 - has been an indication that the standard needs to be reworked to a higher base voltage.

We can accomplish this trivially by building PSUs that are capable of both 12V and 48V output and using a new single connector for 48V motherboards that has a sense pin that signals to the PSU to operate in 48V mode; if that pin isn't grounded when the PSU turns on, then it works in ye olde 12V mode. This also allows for the same modular connector to be used on the PSU side for either the 12V/24-pin, or 48V cable to the board.
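As a rough sketch of that handshake logic (purely hypothetical - no such standard exists yet, and all the names here are made up):

```python
from enum import Enum

class BusMode(Enum):
    LEGACY_12V = 12
    HIGH_48V = 48

def select_output_mode(sense_pin_grounded: bool) -> BusMode:
    """Hypothetical power-on handshake for a dual-voltage PSU.

    A 48V-aware motherboard ties the sense pin to ground in its connector;
    a legacy 12V harness leaves it floating, so the PSU falls back to the
    safe, backwards-compatible 12V mode.
    """
    return BusMode.HIGH_48V if sense_pin_grounded else BusMode.LEGACY_12V

assert select_output_mode(False) is BusMode.LEGACY_12V  # old cable -> 12 V
assert select_output_mode(True) is BusMode.HIGH_48V     # new 48 V board
```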
 
Consumers don't want more connectors with thicker wires. They want fewer connectors with thinner wires that are less hassle to remember to plug in and can be more easily routed.
And there's the problem. That scenario of fewer connectors with thinner wires is why they're melting, as per the laws of physics. We MUST have either more wires or thicker wires, otherwise it'll all overheat and melt/burn.
We can accomplish this trivially by building PSUs that are capable of both 12V and 48V output and using a new single connector for 48V motherboards that has a sense pin that signals to the PSU to operate in 48V mode; if that pin isn't grounded when the PSU turns on, then it works in ye olde 12V mode. This also allows for the same modular connector to be used on the PSU side for either the 12V/24-pin, or 48V cable to the board.
In theory, great. In practice, that's not going to happen for a while, because dual-voltage PSUs are going to be crazy expensive compared to regular ATX 3.x PSUs. The only reason they'll gain traction is if the motherboards that need them gain traction, and those motherboards will only make it if the GPUs that need them gain traction.

Given the astronomical price of >500W GPUs and their near-irrelevance to the wider PC industry (Steam Hardware Survey), it's hard to see how they're going to make it under free-market capitalism that's driven by cost and profit.

Meanwhile, AMD is using standard 8-pin connectors that will either prove or disprove that we need a new standard at all. When the 400W+ factory-overclocked 9070XT Turbo Plus Nitro XXX editions start to melt their three 8-pins, maybe we'll see that we need a new standard that is neither 8-pin nor 12VHPWR.
 
What truly needs to happen is for GPUs to stop drawing more and more current

Correct @eidairaman1. I wouldn't mind if AMD and NV said that over 1-2 years performance would stay the same while they cut the power draw of their high-end cards in half. That would be an innovation, instead of pushing their cards to the absolute limit, which doesn't benefit anyone.

It's been shown before that lowering the power limit on the higher-end cards and locking the clocks actually helps with power draw, and on some cards it's really like, wow, how much does it actually need to pull? It's not like the 5% difference from lowering the power draw by 100W makes a difference in-game for the majority of users.
 
The other thing to remember about these insanely power-hungry cards is that they're dinosaurs. The PC industry has been moving to laptops at the expense of desktops. Like it or not, desktops are a dying breed, and laptops now outsell them 2:1.

600W graphics cards aren't going to catch on in the laptop market. Laptop GPUs typically top out at 145-150W, and some models with dynamic boost can push 25W of spare cooling headroom from the CPU to the GPU. But 175W is kind of a hard wall for laptops - where are you going to fit the power delivery and cooling for anything needing 600W like the 5090?
 
But don't Nvidia GPUs nowadays serve much less than 0.01%? At least that's what the 5xxx series launch looked like.

On a more serious note, the industry went through several such changes, including during the transition to ATX PSUs. Only now there might be a window for us to influence the next standard. If the change were coming, what would you like to see in the connectors and PSUs?
My PSU came with a 12-year warranty about 2 years ago. I've still got about 10 years of it left. So I don't want any new standard, thank you.
 
In hydraulics, having multiple pipes instead of a single, larger one is bad design. At the same throughput, you'd have more losses.
The couple of courses I had on electrical engineering were more than a decade ago, but from what little I remember, the story should be the same.

Having a single cable is actually more KISS-compliant. Multiple cables add, well, more cables... More degrees of freedom = You're going the wrong way, simplicity-wise.
And yet the apparent solution for 12V-2x6 is using more than one cable to power, say, a 5090. So yeah, sure. There is a utopia, and then there is reality.
 
In hydraulics, having multiple pipes instead of a single, larger one is bad design. At the same throughput, you'd have more losses.
The couple of courses I had on electrical engineering were more than a decade ago, but from what little I remember, the story should be the same.

Having a single cable is actually more KISS-compliant. Multiple cables add, well, more cables... More degrees of freedom = You're going the wrong way, simplicity-wise.
The problem isn't the fewer cables. The problem is that those fewer cables are also thinner and the connector less robust than the old one.
 
And there's the problem. That scenario of fewer connectors with thinner wires is why they're melting, as per the laws of physics. We MUST have either more wires or thicker wires, otherwise it'll all overheat and melt/burn.

In theory, great. In practice, that's not going to happen for a while, because dual-voltage PSUs are going to be crazy expensive compared to regular ATX 3.x PSUs. The only reason they'll gain traction is if the motherboards that need them gain traction, and those motherboards will only make it if the GPUs that need them gain traction.

Given the astronomical price of >500W GPUs and their near-irrelevance to the wider PC industry (Steam Hardware Survey), it's hard to see how they're going to make it under free-market capitalism that's driven by cost and profit.

Meanwhile, AMD is using standard 8-pin connectors that will either prove or disprove that we need a new standard at all. When the 400W+ factory-overclocked 9070XT Turbo Plus Nitro XXX editions start to melt their three 8-pins, maybe we'll see that we need a new standard that is neither 8-pin nor 12VHPWR.
3x 8-pin plus slot power is 525W... and then you still have more safety margin left than anything x80 and up has on 12V-2x6.

Nvidia should've just not made a boutique 5090 and continued the cinder-block design, simply placing either 2-3x 12V-2x6 or 4x 8-pin. It's really quite simple, and they would then probably even get away with the no-shunt cheap-out design.
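For what it's worth, the margin argument can be put into rough numbers using commonly cited per-pin ratings (~8A for the 8-pin's Mini-Fit terminals, 9.5A for 12V-2x6 pins - ballpark figures, not official spec text):

```python
# Rough safety-margin comparison on the 12 V rail (ballpark per-pin ratings).
def margin(rated_watts, pairs, amps_per_pin, volts=12.0):
    return pairs * amps_per_pin * volts / rated_watts

print(f"8-pin PCIe (150 W): {margin(150, pairs=3, amps_per_pin=8.0):.2f}x")  # ~1.92x
print(f"12V-2x6   (600 W): {margin(600, pairs=6, amps_per_pin=9.5):.2f}x")  # ~1.14x
# Three 8-pins plus the 75 W slot cover 525 W with ~1.9x headroom per connector;
# a single 12V-2x6 at 600 W is already running close to its rated limit.
```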
 
Correct @eidairaman1. I wouldn't mind if AMD and NV said that over 1-2 years performance would stay the same while they cut the power draw of their high-end cards in half. That would be an innovation, instead of pushing their cards to the absolute limit, which doesn't benefit anyone.

It's been shown before that lowering the power limit on the higher-end cards and locking the clocks actually helps with power draw, and on some cards it's really like, wow, how much does it actually need to pull? It's not like the 5% difference from lowering the power draw by 100W makes a difference in-game for the majority of users.
Yeah, we're quickly approaching i9-9900K-and-beyond levels of insane bumps in power draw vs previous generations, and it's getting kind of scary. I'm worried that NVIDIA, AMD and Intel are gonna pigeonhole themselves like Intel did with the 9900K and beyond, where they kept just increasing the power and the power and the power until we got wattage monsters like the 14900KS, which consumes more watts than some GPUs. Which is utterly insane. Credit where it's due, NVIDIA's 40 series was relatively power efficient, but then they completely 180'd that with the 50 series, which isn't really that power efficient (in fact, I've heard claims that the 5090 draws 30 to 35 watts more on average from the wall than what pops up in software. Other versions of the 5090, especially watercooled ones, can apparently pull up to 100+ watts more than spec, but I can't really verify that one, so take it with salt.)

Efficiency is something I'm hoping AMD focuses a little bit on for the 9070, on an unrelated note. Hoping the Intel B770, if it ever exists, is also pretty efficient. I don't think NVIDIA will really care about efficiency anymore, which is really disappointing, because if you're gonna do a software generation (which is basically what the 50 series is) you'd *think* that efficiency improvements would be best implemented now rather than later, but nope.

On a more serious note, the industry went through several such changes, including during the transition to ATX PSUs. Only now there might be a window for us to influence the next standard. If the change were coming, what would you like to see in the connectors and PSUs?
I don't think it's the right time to try and introduce a new standard yet. 12VHPWR has only been widely adopted since, realistically, the 40 series launch; whether we like it or not, we will probably have to pass on any opportunities to influence a new design because of that. If you asked this sort of stuff in maybe 4 or 6 years' time, I think people would be all over it.

In hydraulics, having multiple pipes instead of a single, larger one is bad design. At the same throughput, you'd have more losses.
The couple of courses I had on electrical engineering were more than a decade ago, but from what little I remember, the story should be the same.
I think that was some of the idea behind the 12VHPWR connector, but it flopped in that regard, primarily due to safety concerns (which ironically make it harder to justify using it over 6- or 8-pin). It's not as straightforward as 'simplifying' it, because not everything needs to be super simplified. If you simplify something too much, you're ultimately hurting it more than you're helping it.

What do you think?
To kind of elaborate on what I was talking about earlier in another reply to you:

I don't think we're ready for a new standard, even if I think many people are in agreement that the current standard:
  1. Sucks.
  2. Needs to be redesigned (again), or completely gotten rid of.
But I don't think we're at that point yet, nor ready yet. And I'm not sure that consumers are realistically ready to jump onto that. We as enthusiasts can say what we want, but 12VHPWR is still simpler than 6- or 8-pin despite the safety concerns and everything; and unfortunately, many consumers are pretty ignorant and don't really care about the safety stuff as long as it's simpler. Consumers vastly prefer simpler products over ones that can seem more complicated, and it's understandable, but it opens a lot of room for carelessness on companies' part, like we see with the 12VHPWR connector.
I also don't think motherboard manufacturers or GPU manufacturers would really jump on that, and you can try to introduce a new standard, sure, but it won't go anywhere if motherboard manufacturers and GPU manufacturers aren't on board.

Your suggestion is a noble idea, and I like the thinking behind it. This whole thread has been really intriguing, but I think of it more as a pipe dream than anything.
 
With all the attention on melting graphics card connectors, this might be an opportunity to switch manufacturers to a connector that we like. Here is my proposal - please comment on whether you like the idea or not, and on any changes, and if we get enough people behind it we'll try to make them listen.

The power connector and cable shall:
  • have only two power delivery wires, ground and positive voltage
  • the voltage should be increased to 48V (same as the maximum for USB-C)
  • the pins for power delivery should be designed to handle large current - paddle-shaped or nub, according to the best state of the art.
  • there should be four auxiliary wires.
  • two of the auxiliary wires can be used for communication between the GPU (or other accessory) board and the power supply.
  • two of the auxiliary wires should be tied, on the GPU (or other accessory) board, to power ground and power voltage. They will be used by the power supply to determine the actual voltage drop, resistance and power dissipation of the power delivery wires via Kelvin measurement (a sketch of this check follows the list). Excessive power dissipation in the cable should result in the power supply disconnecting this power port.
  • There should be a standard data connection from the power supply to the motherboard to transfer real-time data on current consumption and voltage, ideally per port.
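Here is a minimal sketch of how the power supply could use that Kelvin measurement (hypothetical logic; the names, voltages and the 10W threshold are illustrative only):

```python
def check_cable(v_psu, v_sense, amps, max_cable_watts=10.0):
    """Kelvin-sense health check for one power port (hypothetical firmware logic).

    v_psu   - voltage at the PSU-side power pins
    v_sense - voltage reported back over the (currentless) sense pair,
              i.e. what actually arrives at the GPU board
    amps    - port current from the PSU's own measurement

    Because the sense pair carries no load current, v_psu - v_sense is the
    true drop across the delivery wires and contacts.
    """
    v_drop = v_psu - v_sense
    cable_watts = v_drop * amps          # heat dissipated in cable + contacts
    ok = cable_watts <= max_cable_watts
    print(f"drop {v_drop:.2f} V, heat {cable_watts:.1f} W -> {'OK' if ok else 'CUT PORT'}")
    return ok

check_cable(48.0, 47.8, 12.5)   # healthy cable: ~2.5 W of heat
check_cable(48.0, 46.5, 12.5)   # degraded contacts: ~19 W -> disconnect
```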
What do you think?

Only two power delivery wires means they will be very stiff and hard to bend, creating more tension on connectors, and it offers zero fault tolerance: for example, if the ground failed for any reason, there would still be a path to ground through the PCIe socket for the 600W load, and now the GPU, motherboard and possibly other hardware have just been cooked. Having built harnesses for machinery with varying power demands for years, wired homes and audio systems, and built hundreds of computers, I would say 6 wires with very close tolerances per connector, in 8-10 strand 14 ga copper, should allow for the standard 15A per pair, so two 6-pin connectors could handle 90A. The redundancy is that if any set on a connector failed, the other sets could handle the current load for a safe shutdown.
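The arithmetic behind that, as a quick sketch (using the 15A-per-pair figure above):

```python
# Arithmetic for the proposed scheme: 14 ga pairs at the standard 15 A each.
PAIRS_PER_6PIN = 3
AMPS_PER_PAIR = 15

per_connector = PAIRS_PER_6PIN * AMPS_PER_PAIR   # 45 A
total = 2 * per_connector                        # 90 A -> 1080 W at 12 V
print(per_connector, total, total * 12)

# Redundancy: lose one pair and the remaining five carry ~18 A each -
# an overload, but a survivable one for long enough to shut down safely.
print(round(total / (2 * PAIRS_PER_6PIN - 1), 1))
```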

Increasing the voltage may help with the short transmission run, but it will mean a new standard that costs to adopt in VRMs, PSUs, mobos, GPUs, etc., and it will encourage MFGs to use thinner wire since the voltage is higher, exacerbating the problem of broken strands in wires.

There are plenty of connectors. The issues I see with the new-design power connector are that the pins are thin and weak, and the total connector seating area is small and weak, allowing pins to move too much and deform the contact surfaces. A blade design is fine, but it may need a complex design to endure multiple cycles without losing contact force. The existing PCIe connectors are just fine; Nvidia is the only company having these issues that I know of, and if they engineered a better implementation of the harness, or themselves provided a couple of adapters to take stress off a connector that's frankly weak and crappy, that would help... but they won't.
 
Literally just pump more current through existing 8-pin connectors. The safety factor on them is higher than 12-pin at the same wattage.
 
Efficiency is something I'm hoping AMD focuses a little bit on for the 9070- on a unrelated note. Hoping the Intel B770, if it ever exists, is also pretty efficient. I don't think NVIDIA will really care for efficiency anymore, which is really disappointing because if your gonna do a software generation (which is basically what the 50 series is) you'd *think* that efficiency improvements would be best implemented now than later, but nope.

The RX 9070 XT is set to draw about 330W and the non-XT about 225W, but this is not final yet.
 
Well, progress usually means using more energy in more efficient ways. It would be interesting to see if people start modding houses to provide higher-amperage and higher-voltage outlets.
We already have 20A 120V outlets in nearly all new homes stateside, so that started a while ago.

Don't think we want to go higher than 240V, and most houses in the US have such voltages available.
Not anywhere but appliance circuits, usually, and those tend to be away from the main living spaces.
 
Literally just pump more current through existing 8-pin connectors. The safety factor on them is higher than 12-pin at the same wattage.
I mean, that's not the silliest suggestion. It's still less dangerous than 12V-2x6 if Nvidia is going to design its power rail with a single shunt resistor. 600W could go down a single pair, and the bigger the physical connector for that eventuality, the better.

Lots of small wires is a better way to deal with the situation, but it requires some intelligence at either end, and if Nvidia (the single richest company on earth) can't be arsed to add just two more* $0.05 components as a safety feature to graphics cards costing $2000, then that tells you everything you need to know about greed.


* - I'm sure there's a little more to it than that, because unmonitored shunts achieve nothing - but presumably the voltage-monitoring controller has spare capacity to monitor two more points, and if not, we're still talking about additional hardware that is both minuscule and ludicrously cheap.
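To make the footnote concrete, per-pair shunt monitoring amounts to something like this (a hypothetical sketch; the shunt value and trip threshold are illustrative):

```python
# Per-pair current monitoring via shunt resistors (hypothetical sketch).
SHUNT_OHMS = 0.005       # illustrative 5 mOhm shunt per 12 V input pair
MAX_PAIR_AMPS = 9.5      # per-pin connector rating

def pair_amps(shunt_mv):
    """Convert each shunt's voltage drop (mV) to amps: I = V / R."""
    return [mv / 1000 / SHUNT_OHMS for mv in shunt_mv]

def balanced(shunt_mv):
    """False means one pair is hogging the load: throttle or shut down."""
    return all(a <= MAX_PAIR_AMPS for a in pair_amps(shunt_mv))

print(balanced([41, 42, 40, 43, 41, 42]))  # ~8.3 A each -> True, all is well
print(balanced([250, 0, 0, 0, 0, 0]))      # 50 A down one pair -> False, trip!
```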
 