Tuesday, October 25th 2022
Radeon RX 7000 Series Won't Use 16-pin 12VHPWR, AMD Confirms
AMD has officially confirmed that its upcoming Radeon RX 7000 series of next-generation graphics cards will not use the 12+4 pin ATX 12VHPWR connector anywhere in the product stack. Scott Herkelman, SVP and GM of the AMD Radeon product group, confirmed on Twitter that the current RX 6000 series and future GPUs based on the RDNA3 graphics architecture will not use this power connector. This means that even AMD's add-in board (AIB) partners won't find the connector on AMD's list of qualified parts. The Radeon RX 7000 series will therefore stick with 8-pin PCIe power connectors on the card, each drawing up to 150 W. For higher-end products with a typical board power above 375 W, this will mean more than two 8-pin connectors. AMD is expected to debut RDNA3 on November 3, 2022.
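The 375 W threshold follows from per-connector power limits. A minimal sketch of that arithmetic, assuming the PCIe CEM figures of up to 75 W delivered through the x16 slot and up to 150 W per 8-pin connector (the function name is illustrative, not from any AMD source):

```python
import math

SLOT_W = 75        # power available from the PCIe x16 slot (PCIe CEM spec)
EIGHT_PIN_W = 150  # power per 8-pin PCIe power connector (PCIe CEM spec)

def eight_pin_connectors_needed(board_power_w: float) -> int:
    """Minimum number of 8-pin connectors for a given typical board power."""
    from_cables = max(0.0, board_power_w - SLOT_W)
    return math.ceil(from_cables / EIGHT_PIN_W)

print(eight_pin_connectors_needed(375))  # 2 -- slot + two 8-pins cover exactly 375 W
print(eight_pin_connectors_needed(450))  # 3 -- anything above 375 W needs a third
```

This is why cards with a typical board power above 375 W would need three 8-pin connectors.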
Source: Scott Herkelman (Twitter)
49 Comments on Radeon RX 7000 Series Won't Use 16-pin 12VHPWR, AMD Confirms
It's more around December/Q1 2023 that we will see the first broadly available PSUs with this connector.
Maybe with the AMD 8000-series GPUs, once PSUs are ATX 3.0 and the whole connector is "stabilised" and well tested.
I don't think Nvidia can afford to leave the 12VHPWR issue as-is anyway; they'll probably clarify this and/or recall products. I'm not really making the RDNA 3 vs. 4090 decision based solely on the connectors though, I'll see on November 3rd.
Huh???
I don't understand the sentiment at all. Do you want Nvidia to use magic and pull performance out of their ass like a rabbit while still maintaining 2 slot cards?
A chiplet approach could mean better yields, lower cost, and lower power consumption if they don't push the silicon to the limit.
What interests me (and most people) is what they do in the midrange: how can AMD's chiplet design compete with a not-so-big monolithic chip from Nvidia, which is also made at TSMC but on 4 nm?
One thing is sure: GPU sales continue to be very low.
The thing I don't understand is that the 4090 has about the same power draw as the 3090 Ti, yet those cards are three slots wide. I can understand SFF builds, but to be constrained by the size of the GPU in an ATX build is crazy.
Price is already what separates motherboards with flexible PCIe layouts, but given what Nvidia has done there is no need for ATX, as Micro ATX does the same thing. I will be interested to see how boards with two NVMe slots populated between the x16 slots will handle heat dissipation with a giant GPU sitting above them. When (if) DirectStorage becomes a thing, you could have those NVMe drives singing along with the GPU, and that would be quite the heat soak.
Of course, we cannot forget the heat that PCIe 5.0 drives will produce, judging by the exotic heatsinks motherboards already carry for that standard.
Slot count really isn't the bad thing here; I do remember the silly-looking Asus release with the Noctua heatsink/fans :laugh:
It's the length and height that is getting really stupid.
A new bar has been set, and it's okay to be sentimental about that. You can wait for this size to arrive at the midrange. One gen? Two?
It's pretty clear that the 4090 cooler was overbuilt and Nvidia cut down the power limit at the very end, maybe expecting outrage, maybe because the performance gains were not enough to justify it. But it gives you the benefit of the card being very quiet. So then you go buy the low end if you do not want leading edge performance.
Did you guys forget to look at performance per watt?
4090 is the most power efficient card on the planet by a wide margin. You can expect lower end cards using the same architecture to also be very fast and small.
One such connector cannot deliver 600 watts. The pins are too few and too thin to sustain the load - both electrical current through them and the dissipated heat from the enormous heatsink nearby.
Just think about it - you need a 2-kilo heatsink to dissipate that energy which you want to squeeze through super thin pins - not gonna happen.
Look at the second image - the normal PCI-e 6-pin and 8-pin connectors have a good size for the current that passes through them.
Whoever made the decision to call the new connector "600W-ready" should be fired, have their degree in electrical engineering and heat transfer revoked, and be publicly humiliated.
This said, the RTX 4090 needs not 1 but 3 such connectors to function safely.
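The per-pin current the commenter is worried about can be estimated directly. A rough sketch, assuming a 12 V rail and the connector's six 12 V pins sharing the load perfectly evenly (an idealisation that ignores contact resistance and uneven seating, which are exactly where real problems arise):

```python
# Idealised per-pin current estimate for a 12VHPWR connector.
# Assumptions: 12 V rail, 6 current-carrying 12 V pins, even load sharing.
def current_per_pin(total_watts: float, volts: float = 12.0, pins: int = 6) -> float:
    """Amps through each power pin under an evenly shared load."""
    return total_watts / volts / pins

print(round(current_per_pin(600), 2))  # 8.33 A per pin at 600 W
print(round(current_per_pin(450), 2))  # 6.25 A per pin at 450 W
```

About 8.3 A per pin at 600 W is within typical Mini-Fit-class terminal ratings on paper; the debate in this thread is over how much margin that leaves in practice.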
The issue with the connector has nothing to do with the current or the heat; the only issue is mechanical: it doesn't have a proper latch to resist being partially pulled out if someone really bends the cables.
Even the 4070 might be two slot, considering 3080 Ti was.
I don't see increased slot size for increased performance as progress. It's just about going bigger.
And I do agree, a 30% improvement in perf/W is impressive, but then that also confirms how shitty Samsung's node was.