Monday, February 19th 2024
NVIDIA RTX 50-series "Blackwell" to Debut 16-pin PCIe Gen 6 Power Connector Standard
NVIDIA is reportedly looking to change the power connector standard for the fourth successive time in a span of three years with its upcoming GeForce RTX 50-series "Blackwell" GPUs, Moore's Law is Dead reports. NVIDIA began its post 8-pin PCIe journey with the 12-pin Molex MicroFit connector for the GeForce RTX 3080 and RTX 3090 Founders Edition cards. The RTX 3090 Ti would go on to standardize the 12VHPWR connector, which the company would debut across a wider section of its GeForce RTX 40-series "Ada" product stack (all SKUs with a TGP of over 200 W). In the face of rising complaints about the reliability of 12VHPWR, some partner RTX 40-series cards are beginning to implement the pin-compatible but sturdier 12V-2x6. The implementation of the 16-pin PCIe Gen 6 connector would be the fourth power connector change, if the rumors are true. A different source says that rival AMD has no plans to move away from the classic 8-pin PCIe power connectors.
Update 15:48 UTC: Our friends at Hardware Busters have reliable sources in the power supply industry with the same access to the PCIe CEM specification as NVIDIA, and they say the story of NVIDIA adopting a new power connector with "Blackwell" is likely false. NVIDIA is expected to debut the new GPU series toward the end of 2024, and if a new power connector were in the offing, the power supply industry would have some clue by now. It doesn't. Read more about this in the Hardware Busters article in the source link below.
Update Feb 20th: In an earlier version of the article, it was incorrectly reported that the "16-pin connector" is fundamentally different from the current 12V-2x6, with 16 pins dedicated to power delivery. We have since been corrected by Moore's Law is Dead: it is in fact the same 12V-2x6, but under an updated PCIe 6.0 CEM specification.
Sources:
Moore's Law is Dead, Hardware Busters
106 Comments on NVIDIA RTX 50-series "Blackwell" to Debut 16-pin PCIe Gen 6 Power Connector Standard
2. AMD has already indicated its interest in using the 16-pin in their future products.
Well, obviously NV had zero QC over any of this connector crap.
Hell, are they going to color code this one so users know which one they get? lol
And now they go out with a "possibly" new design, and still might look like a savior. Exactly. There's no way such a huge company, which sells millions of cards, can't find the resources to test the effin' plug and socket that go into this connector. Instead they probably put money into getting the "techtuber gang" to patter this issue away. Looks like a valid point, since they have no incentive to even put any effort into making a new connector. And even if there is one, it's more likely to appear in AI/enterprise first this time. Rumour or not, this story might have happened, like dgianstefani mentioned. But I guess even if it was real, no PSU manufacturer would confess, because it would cause an even bigger outrage for bringing yet another connector after selling more expensive PSUs that turn out to have no future-proofing. They might not want to step into the same cr*p twice, and this time let the "GPU vendor" get their sh*t sorted out first before pushing it onto others. After the sh*tfest the USB "standardising" consortium made the entire world deal with, the PCI-SIG seems inclined to join the clown show. There's no trust left for these organisations that let corporations push their stuff at high margins and do nothing to prevent these issues from happening. Same goes for numerous regulating entities.
And I'm sure you want to take a very expensive GPU, but try not to get caught when you take it.
The only upgrade from that is procuring a native cable that is directly compatible with your power supply. Fortunately, most high-end power supplies have third-party cables available; for example, the EVGA G2/P2/T2 and corresponding first-generation Super Flower Leadex power supplies (from which they are derived) will work just fine with the CableMod E-series cable, and I'm sure options are available if you don't like that company for whatever reason. Corsair provides first-party cables for most of their high-capacity (750 W+) units. In the absence of compatible cables (for example, some CWT/HEC/Andyson low or midrange power supplies by EVGA or Thermaltake), your best bet is to use the supplied 3- or 4-way 8-pin to 12VHPWR adapter cable.
The 8-pin spec is 216-288 W / 150 W = 44%-92% additional wattage capacity.
And 12VHPWR is 660 W / 600 W = 10%.
So the 12VHPWR is trading safety factor for smaller size.
As a result the overprovision is cut in half when four 8-pins are replaced by a single 16-pin.
For a 300-450 W GPU, 150-300 W of safety margin remains,
but if quality terminals are used it is really 13 A × 12 V × 6 = 936 W, woot. 2x-3x safety overprovision.
660 / (300-450) = 2.2 to 1.46 safety factor,
while 300-450 W is 0.5 to 0.75 of the maximum rated power of the 12VHPWR connector.
The comparison will be 75-112.5 W for the regular 8-pin:
288 / (75-112.5) = 3.84 to 2.56 safety factor.
Still, the 8-pin is much, much safer than the 12VHPWR.
And for your "13 A × 12 V × 6 = 936 W" calculation:
first, the 12VHPWR connector doesn't use 13 A pins within spec,
so your 13 A calculation is out-of-spec in the first place.
The safety factor will be:
936 / 600 = 1.56.
In a fair comparison, the 8-pin with 13 A rated pins is now boosted to 468 W out-of-spec,
and its safety factor will be:
468 / 150 = 3.12.
You just made the 8-pin twice as safe as the 12VHPWR.
That is what you are dealing with.
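Since the back-and-forth above is all arithmetic, here is a minimal Python sketch that recomputes both posters' figures from the capacity and load numbers they quote (216-288 W and 468 W for the 8-pin, 660 W and 936 W for the 12VHPWR); these are the commenters' assumptions, not values pulled from the PCIe CEM specification.

```python
# Recomputing the safety-factor figures quoted in the exchange above.
# The capacity numbers (216-288 W, 660 W, 936 W, 468 W) and loads are the
# posters' own; they are not taken from the PCIe CEM specification.

def safety_factor(capacity_w: float, load_w: float) -> float:
    """Ratio of what the connector can physically carry to the load drawn through it."""
    return capacity_w / load_w

# Headroom over the official connector rating
print(f"8-pin capacity vs 150 W rating:   {216 / 150:.0%} - {288 / 150:.0%}")  # 144% - 192%
print(f"12VHPWR capacity vs 600 W rating: {660 / 600:.0%}")                    # 110%

# Safety factor for a 300-450 W GPU on a single 12VHPWR (660 W capacity)
for load in (300, 450):
    print(f"12VHPWR at {load} W: {safety_factor(660, load):.2f}x")

# The same GPU split across 8-pins loads each connector to only 75-112.5 W
for load in (75, 112.5):
    print(f"8-pin at {load} W:   {safety_factor(288, load):.2f}x")

# Out-of-spec comparison: 13 A terminals on both connectors vs their rated power
print(f"12VHPWR with 13 A pins: {safety_factor(936, 600):.2f}x")  # 1.56
print(f"8-pin with 13 A pins:   {safety_factor(468, 150):.2f}x")  # 3.12
```

The only difference from the figures quoted above is rounding (660/450 comes out to 1.47 rather than 1.46).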
But since they can't cherry-pick their customers, the only thing they could do is assume everyone is an idiot and make things more idiot-proof.
And instead of making a more idiot-proof connector they made an 'idiot-prone' one, and so it backfires...
Also, I guess the cables of my newly bought Corsair HX1500i PSU, and their quality, are not that bad. Because of the size of my case I needed extensions; those are CableMod ones. I have had the same extensions in use inside my current rig for 4 years. If NVIDIA is not able to know what a standard is? I'm sorry, then they are not worth being in my focus. The older PCIe power norm has shown its reliability and trustworthiness for years. To make something new only because it is new is really stupid. A standardization process is started to ensure that the described thing remains available for use over a longer time frame. Just imagine a screw producer introducing new screw sizes each year: M2.7, M3.3, ...
I'm not some little boy building a PC. I don't need to get a "good boy" to compensate for low or non-existent self-esteem. Aside from all that, NVIDIA is more or less trash in the world of Linux.
The 12VHPWR connector doesn't automatically translate to "I chug 600 W, hear me roar!". Using DLSS and especially DLSS-G in supported games would help you retain image quality while lowering power consumption even further. Starfield with DLAA-G (native-resolution DLSS with frame generation) at ultra-high settings at 4K/120 runs sub-200 W on my 4080, and that's on a Strix with a sky-high power limit, without any undervolting or curve optimization involved.
1. It's quite easy. I decide which card I buy because I pay for it. Also, the power consumption story is a fairy tale. The 4080 Super needs up to 340 W. The 7900 XT needs 315 W. Both with stock OC. The AMD card could use a lot more power before it ever made up for the higher purchase price of an NVIDIA 4080/4080S, and with the higher estimated power needs of a 4080S that will never happen. Btw, in my area the 4080 and the 4080S are at the same price level. The 4080 uses up to 320 W, so 20 W less than the 4080S. I definitely don't see (in your words) significantly less power every time and in any circumstances. All the wattage figures are taken from Geizhals, a German price search engine. Btw, I also own a brand new MSI 1,000 W PSU. But that doesn't suit the power needs.
2. Whether the 4080 Super is more powerful doesn't matter to me, as I need the card to drive my monitors. I don't use programs that utilize the NVIDIA-exclusive techniques, "blablabla". I don't play games on my PC. If I want to play games, I invite friends over, sit together with them and play analog games such as Monopoly. I'm a member of Gen X. I learned to use my head; nowadays everybody seems to use AI instead. To take my example from above: I don't need that huge overland truck. It is enough that the truck is able to carry the weight downtown.
Far away from your experiences, my build runs mainly on Linux, showing the desktop on two parallel UWQHD (3440x1440 px) monitors at a 155 Hz refresh rate. I don't give a sh*t about how good which card is at games. I work on the rig and it has to make money. I develop software (Free Pascal/Lazarus, Gambas), calculate in spreadsheets (Excel/LibreOffice) with a hell of a lot of self-written macros, scan objects in 3D, prepare those scans for 3D printing, slice them, etc. If I did photogrammetry I could use the power of a 4080S. But I don't. All of my workload is far, far away from anything that you are thinking about.
Two 8 pins are more than enough for a 4090 Strix OC
www.techpowerup.com/review/thermal-grizzly-kryosheet-amd-gpu/
Anyway, since you're a Linux user, that means you don't really have the option to run NVIDIA, IMHO. Not only are the things that make having an NVIDIA card worth it unavailable to you, maintaining them under Linux sucks as well. They are more power efficient, but they also mean you need to run Windows. Bit of a moot point. Unlikely IMO, and the only reason the 7900 XTX didn't have them is that the hardware design was finalized by the time those began to roll out. Other cards just rode the wave primarily as a publicity stunt: "hey look, ours don't blow up."
I guess NVIDIA is scared of Intel's 400 W CPUs, so they need to assert their dominance.