
NVIDIA RTX 50-series "Blackwell" to Debut 16-pin PCIe Gen 6 Power Connector Standard

btarunr

Editor & Senior Moderator
NVIDIA is reportedly looking to change the power connector standard for the fourth successive time in a span of three years with its upcoming GeForce RTX 50-series "Blackwell" GPUs, Moore's Law is Dead reports. NVIDIA began its post-8-pin PCIe journey with the 12-pin Molex MicroFit connector for the GeForce RTX 3080 and RTX 3090 Founders Edition cards. The RTX 3090 Ti would go on to standardize the 12VHPWR connector, which the company would debut across a wider section of its GeForce RTX 40-series "Ada" product stack (all SKUs with TGP of over 200 W). In the face of rising complaints about the reliability of 12VHPWR, some partner RTX 40-series cards are beginning to implement the pin-compatible but sturdier 12V-2x6. The implementation of the 16-pin PCIe Gen 6 connector would be the fourth power connector change, if the rumors are true. A different source says that rival AMD has no plans to move away from the classic 8-pin PCIe power connectors.

Update 15:48 UTC: Our friends at Hardware Busters have reliable sources in the power supply industry with the same access to the PCIe CEM specification as NVIDIA, and they say the story of NVIDIA adopting a new power connector with "Blackwell" is likely false. NVIDIA is expected to debut the new GPU series toward the end of 2024, and if a new power connector were in the offing, the power supply industry would have some clue by now. It doesn't. Read more about this in the Hardware Busters article in the source link below.

Update Feb 20th: In an earlier version of the article, it was incorrectly reported that the "16-pin connector" is fundamentally different from the current 12V-2x6, with 16 pins dedicated to power delivery. We have since been corrected by Moore's Law is Dead: it is in fact the same 12V-2x6, but under an updated PCIe 6.0 CEM specification.



View at TechPowerUp Main Site | Source
 
With 16 pins, does that mean the connector supplies up to 800 W? Since the 4090 goes up to almost 500 W, does that mean the 5090 will need over 600 W? I would not be surprised by this, as the 3 nm node is nowhere near ready for this kind of chip, which means Blackwell will be on the same 4 nm process.
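For rough context: if the "16-pin" connector is really the same 12V-2x6, as the Feb 20 update above says, only 12 of the 16 pins carry current; the other four are sense pins. A back-of-the-envelope sketch in Python, where the 9.5 A per-pin figure is an assumption in line with commonly cited Micro-Fit ratings, not a quoted spec value:

```python
# Back-of-the-envelope capacity math for the "16-pin" connector. Assumption:
# it is the same 12V-2x6 layout described in the Feb 20 update, i.e. 12
# current-carrying pins (6x +12 V, 6x GND) plus 4 sense pins that carry no power.
PER_PIN_CURRENT_A = 9.5  # assumed per-pin rating, not a quoted spec value
RAIL_VOLTAGE_V = 12.0
POWER_PAIRS = 6          # only 12 of the 16 pins carry current

theoretical_w = POWER_PAIRS * PER_PIN_CURRENT_A * RAIL_VOLTAGE_V
print(f"Theoretical ceiling: {theoretical_w:.0f} W")  # ~684 W, derated to 600 W in the spec
```

Under those assumptions the four extra pins add no capacity; the ceiling stays around the 600 W the current spec already allows, not 800 W.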
 
I won't be upgrading my GPU and PSU until GTA 6 is out for PC, so I'll see what connector we have then... Hopefully it's something better than what we have now...
 
Why not a PCIe Gen 7 20-pin MicroFit power connector, with all 20 pins (2x10) dedicated to power and no signaling pins? They unnecessarily raise the height of the connector and its complexity.
 
With 16 pins, does that mean the connector supplies up to 800 W? Since the 4090 goes up to almost 500 W, does that mean the 5090 will need over 600 W? I would not be surprised by this, as the 3 nm node is nowhere near ready for this kind of chip, which means Blackwell will be on the same 4 nm process.
Let's hope the extra pins are only to lessen the burden on the pins themselves (and by consequence the fire hazard), rather than for allowing bigger loads.
 
So anyone who bought a new PSU with the "new" 12V connector basically got Jebaited.
 
I will be taking an AMD GPU for my next rig. Just smiling about NVIDIA. How many tries will they need to get a new, solid connector? AMD needs the same power and still uses the old, reliable connector.
 
I will be taking an AMD GPU for my next rig. Just smiling about NVIDIA. How many tries will they need to get a new, solid connector? AMD needs the same power and still uses the old, reliable connector.
Progress takes revision sometimes.

How many variants of form factor/connector did we use before settling on ATX, or USB, for example?

Besides, as people seem to miss, this isn't NVIDIA making these connectors; it's PCI-SIG or other standardization consortiums.
 
Besides, as people seem to miss, this isn't NVIDIA making these connectors; it's PCI-SIG or other standardization consortiums.
Dell and nVidia are the ones paying and collaborating with PCI-SIG to create these connectors. PCI-SIG isn't doing anything nV wouldn't like.
 
Dell and nVidia are the ones paying and collaborating with PCI-SIG to create these connectors. PCI-SIG isn't doing anything nV wouldn't like.
The connectors make sense. They're used in industry and datacentres at massive scale without apparent issue.

The fact that connectors evolve harms who? You get a free adapter in the box.

I like the modern tiny PCBs with a single connector, makes waterblocking cards nice and compact.
 
The connectors make sense. They're used in industry and datacentres at massive scale without apparent issue.

The fact that connectors evolve harms who? You get a free adapter in the box.
What does that have to do with my reply to your post? I simply said your statement "this isn't NVIDIA making these connectors" is false; nV is/was directly involved in making the 12VHPWR standard with PCI-SIG.
 
What does that have to do with my reply to your post? I simply said your statement "this isn't NVIDIA making these connectors" is false; nV is/was directly involved in making the 12VHPWR standard with PCI-SIG.
NVIDIA being part of a consortium and NVIDIA doing something on its own that its main competitor is ignoring for now (which is the news here) are two different things. AMD has literally used the conservative 8-pin approach in its marketing, so they'll probably jump ship once a new standard is widely adopted and proven, just like they did with ray tracing.

NVIDIA is the biggest player in GPUs, so them being involved isn't surprising. But it seems people think NVIDIA is the sole reason these connectors exist (probably because the only consumer hardware they see with them is NVIDIA cards), and that it's some kind of joke, i.e. everyone else is too smart to use such worthless connectors when the old ones are perfect. The reality is that a bunch of companies (including NVIDIA) are involved in making more suitable standards that evolve over time.

https://pcisig.com/membership/member-companies As you can see, quite a long list of companies, including both Intel and AMD.
 
I will be taking an AMD GPU for my next rig. Just smiling about NVIDIA. How many tries will they need to get a new, solid connector? AMD needs the same power and still uses the old, reliable connector.
Having a more reliable power connector is one reason I went with an AMD GPU; I got tired of NVIDIA's greed and planned obsolescence with VRAM.
It seems like NVIDIA went with a new connector for aesthetic reasons, because they couldn't fit 3x 8-pin on their already cost-cut GPU design with its triangular cutout on the board.
 
With 16 pins, does that mean the connector supplies up to 800 W? Since the 4090 goes up to almost 500 W, does that mean the 5090 will need over 600 W? I would not be surprised by this, as the 3 nm node is nowhere near ready for this kind of chip, which means Blackwell will be on the same 4 nm process.
Having the ability to supply X amount of power doesn't automatically mean components using the connector will max it out. The 4090 uses a power connector rated for 600 W, but its typical max power draw is ~450 W, and the 4080, which uses the same connector, stays below 350 W.
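A trivial illustration of that headroom (a sketch; the draw figures are the approximate ones from the post above, not measured values):

```python
# Rated connector capacity vs. typical board power draw. The figures are
# approximations from the post above; treat them as illustrative.
CONNECTOR_RATING_W = 600

for card, typical_draw_w in [("RTX 4090", 450), ("RTX 4080", 320)]:
    headroom_pct = 100 * (1 - typical_draw_w / CONNECTOR_RATING_W)
    print(f"{card}: {typical_draw_w} W typical draw, {headroom_pct:.0f}% headroom")
```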
 
https://pcisig.com/membership/member-companies As you can see, quite a long list of companies, including both Intel and AMD.
I'm talking specifically about 12VHPWR, not other PCI-SIG standards in which AMD and Intel are directly involved.

First off, PCIe 5.0 and the 12VHPWR connector are not an Intel spec. It was developed by the PCI-SIG for a spec sponsored by Nvidia and Dell. It appears in the Intel spec after the fact, because Intel had to make it part of the spec since the PCI-SIG were requiring consumers to use the connector for powering graphics cards.
 
Let's hope the extra pins are only to lessen the burden on the pins themselves (and by consequence the fire hazard), rather than for allowing bigger loads.
Wouldn't really make sense, unless it's a 4x 8-pin to 16-pin adapter.
 
The connectors make sense. They're used in industry and datacentres at massive scale without apparent issue.

The fact that connectors evolve harms who? You get a free adapter in the box.

I like the modern tiny PCBs with a single connector, makes waterblocking cards nice and compact.
The manufacturing tolerances in datacenter hardware aren't the same as in consumer hardware. 12VHPWR isn't suitable for consumer use when there is no room for error, unlike the 8-pin and 6+2-pin Molex connectors, where even the cheapest garbage connector will fit and you can tell when it clicks into place.
Having another new connector harms anyone who paid a premium for an ATX 3.0 power supply. I'd also rather have a PCB be another few inches longer with the tradeoff of knowing the connector won't melt.
 
The manufacturing tolerances in datacenter hardware aren't the same as in consumer hardware. 12VHPWR isn't suitable for consumer use when there is no room for error, unlike the 8-pin and 6+2-pin Molex connectors, where even the cheapest garbage connector will fit and you can tell when it clicks into place.
Having another new connector harms anyone who paid a premium for an ATX 3.0 power supply. I'd also rather have a PCB be another few inches longer with the tradeoff of knowing the connector won't melt.
The actual plug socket would be manufactured to the same tolerances, likely by the same factory and assembly line. The difference is probably as simple as technicians trusted to work in datacentres knowing how to fully plug a connector in and install things to spec. Your average consumer probably does as well, but there are always special cases... The pin-compatible 12V-2x6 revision to make the connector more idiot-proof is evidence of this.

The manufacturing tolerances in datacenter hardware aren't the same as in consumer hardware. 12VHPWR isn't suitable for consumer use when there is no room for error, unlike the 8-pin and 6+2-pin Molex connectors, where even the cheapest garbage connector will fit and you can tell when it clicks into place.
Having another new connector harms anyone who paid a premium for an ATX 3.0 power supply. I'd also rather have a PCB be another few inches longer with the tradeoff of knowing the connector won't melt.
The ATX 3.0 spec isn't just a connector standard; it requires much higher ratings from the PSU, for example handling 200% spike loads. Using an adapter from your PSU connectors to the PCIe Gen 6 connector doesn't take away the basic advantages that ATX 3.0 offers over previous-generation PSUs.
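A minimal sketch of what that 200% figure means in practice (the ~100 µs excursion window comes from public ATX 3.0 summaries and should be treated as an assumption here, not quoted spec text):

```python
# Sketch of the ATX 3.0 power-excursion requirement: a compliant PSU must ride
# through brief load spikes well above its continuous rating. The ~100 us
# window is an assumption from public ATX 3.0 summaries, not spec text.
def max_spike_w(rated_w: float, excursion_factor: float = 2.0) -> float:
    """Peak transient load (in watts) an ATX 3.0 PSU must briefly tolerate."""
    return rated_w * excursion_factor

print(max_spike_w(850))  # an 850 W ATX 3.0 unit: ~1700 W spikes for ~100 us
```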
 
So anyone who bought a new PSU with the "new" 12V connector basically got Jebaited.
Between every card being offered with an adapter and PSUs being modular these days, I fail to see a problem here.

If there's a problem here, it's the apparent drive to deliver increasingly more amps via ever-shrinking connectors. There was something in physics I learned about that...
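The physics in question is resistive contact heating: conducted power is V × I, but the heat dissipated in each contact scales with I² × R, so pushing more amps through fewer or smaller contacts heats them disproportionately. A rough sketch, where the contact resistance is an assumed illustrative value, not a measurement:

```python
# Joule heating per contact: P_heat = I^2 * R. The contact resistance is an
# assumed illustrative value, not a measured one.
def per_pin_heating_w(total_w: float, power_pairs: int,
                      contact_res_ohm: float = 0.005, rail_v: float = 12.0) -> float:
    amps_per_pin = (total_w / rail_v) / power_pairs
    return amps_per_pin ** 2 * contact_res_ohm

print(per_pin_heating_w(600, 6))  # 12V-2x6 at 600 W: ~0.35 W dissipated per contact
print(per_pin_heating_w(150, 3))  # one 8-pin at 150 W: ~0.09 W per contact
```

A poorly seated plug raises the effective resistance, and because current concentrates in the remaining contact area, heating climbs quickly; that is the melting failure mode in a nutshell.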
 
Between every card being offered with an adapter and PSUs being modular these days, I fail to see a problem here.

If there's a problem here, it's the apparent drive to deliver increasingly more amps via ever-shrinking connectors. There was something in physics I learned about that...
One of the old issues was that some AIBs weren't using the correct type of contacts dictated by the spec in their plugs (not the PSU, not the GPU, but the plug on the cable included with the GPU), so if the connector wasn't fully seated and sat at an angle, the actual contact area was very low, and this caused issues. The root cause was fixed in the 12V-2x6 revision, which recessed the sense pins so that angled connections would limit the power delivered, even if the plug was using contacts not up to spec.
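A sketch of that sideband behaviour (the power limits follow commonly published PCIe CEM summaries; treat the exact pin-state mapping as an assumption):

```python
# Sketch of the 12VHPWR/12V-2x6 sense-pin logic described above. The cable pulls
# SENSE0/SENSE1 to ground or leaves them open to advertise the PSU's capability;
# the exact mapping here follows published CEM summaries and is an assumption.
SENSE_TO_LIMIT_W = {
    ("gnd", "gnd"):   600,
    ("open", "gnd"):  450,
    ("gnd", "open"):  300,
    ("open", "open"): 150,
}

def allowed_power_w(sense0: str, sense1: str) -> int:
    # 12V-2x6 recesses the sense pins: a partially seated plug leaves them open,
    # so the card falls back to the lowest limit instead of pulling full power
    # through a marginal contact.
    return SENSE_TO_LIMIT_W.get((sense0, sense1), 150)

print(allowed_power_w("gnd", "gnd"))    # fully seated 600 W cable -> 600
print(allowed_power_w("open", "open"))  # partially seated -> 150
```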
 
Between every card being offered with an adapter and PSUs being modular these days, I fail to see a problem here.

If there's a problem here, it's the apparent drive to deliver increasingly more amps via ever-shrinking connectors. There was something in physics I learned about that...
I know :D

I was just pointing out that everyone seems to be trying to jump onto the latest thing as fast as possible to get ahead of the market, without due diligence being done.

Also, while most of the failure points have been on the GPU side due to cable bending, the PSU side uses exactly the same connector, so that failure point exists there as well. It just isn't as apparent, because most people don't have a bend coming immediately out of the PSU.
The ATX 3.0 spec isn't just a connector standard; it requires much higher ratings from the PSU, for example handling 200% spike loads. Using an adapter from your PSU connectors to the PCIe Gen 6 connector doesn't take away the basic advantages that ATX 3.0 offers over previous-generation PSUs.
This is something we have needed for a while, with the power spikes being apparent since the Radeon VII/RTX 2080 days.
 
I won't be upgrading my GPU and PSU until GTA 6 is out for PC, so I'll see what connector we have then... Hopefully it's something better than what we have now...
Probably a 24-pin by then, requiring a 25 A breaker to run a top-of-the-line gaming PC.
 