Tuesday, October 25th 2022

Radeon RX 7000 Series Won't Use 16-pin 12VHPWR, AMD Confirms

AMD just officially confirmed that its upcoming Radeon RX 7000 series of next-generation graphics cards will not use the 12+4 pin ATX 12VHPWR connector anywhere in the product stack. Scott Herkelman, SVP and GM of the AMD Radeon product group, confirmed on Twitter that the current RX 6000 series and future GPUs based on the RDNA3 graphics architecture will not use this power connector. This means that even its add-in board (AIB) partners won't find the connector qualified by AMD as a part to opt for. The Radeon RX 7000 series will therefore stick with 8-pin PCIe power connectors on the card, each drawing up to 150 W of power. For some of the higher-end products with typical board power of over 375 W, this will mean more than two 8-pin connectors. AMD is expected to debut RDNA3 on November 3, 2022.
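For illustration, the connector math can be sketched out as follows (a rough back-of-the-envelope calculation, assuming the standard ATX values of up to 75 W from the PCIe slot and 150 W per 8-pin connector):

```python
import math

# Rough sketch (not from AMD): minimum number of 8-pin PCIe connectors for a
# given typical board power, assuming the slot supplies up to 75 W and each
# 8-pin connector is rated for 150 W continuous.
SLOT_POWER_W = 75
EIGHT_PIN_W = 150

def eight_pin_count(board_power_w: float) -> int:
    """Minimum 8-pin connectors needed once the slot's 75 W is accounted for."""
    return math.ceil(max(board_power_w - SLOT_POWER_W, 0) / EIGHT_PIN_W)

for tbp in (225, 300, 375, 450):
    print(f"{tbp} W board power -> {eight_pin_count(tbp)} x 8-pin")
# 225 W -> 1, 300 W -> 2, 375 W -> 2, 450 W -> 3: anything over 375 W needs
# more than two 8-pin connectors, matching the figure above.
```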
Source: Scott Herkelman (Twitter)

49 Comments on Radeon RX 7000 Series Won't Use 16-pin 12VHPWR, AMD Confirms

#1
kiddagoat
This seems like a smart decision, in my opinion.
Posted on Reply
#2
sephiroth117
Regardless of the current issue (AMD made that decision months before the 4090 controversies), ATX 3.0 is just not ready.

It's more around December/Q1 2023 that we will see the first broadly available PSUs.

Maybe with the AMD 8000 GPUs, once PSUs are ATX 3.0 and the whole connector is "stabilised" and well tested.

I don't think Nvidia can afford to leave the 12VHPWR issue as is anyway, and they'll probably clarify this and/or recall products. I'm not really making the RDNA 3 vs. 4090 decision based solely on the connectors though; I'll see on November 3rd.
Posted on Reply
#3
Hyderz
let's hope we don't see 4-slot behemoths on the new Radeon GPUs
Posted on Reply
#4
ZetZet
Hyderzlet's hope we don't see 4-slot behemoths on the new Radeon GPUs
Why do you care about that? Just buy lower tier cards if that's an issue. Man people get so weird, it's like religion all over again.
Posted on Reply
#5
thewan
sephiroth117Regardless of the current issue (AMD made that decision months before the 4090 controversies), ATX 3.0 is just not ready.

It's more around December/Q1 2023 that we will see the first broadly available PSUs.

Maybe with the AMD 8000 GPUs, once PSUs are ATX 3.0 and the whole connector is "stabilised" and well tested.

I don't think Nvidia can afford to leave the 12VHPWR issue as is anyway, and they'll probably clarify this and/or recall products. I'm not really making the RDNA 3 vs. 4090 decision based solely on the connectors though; I'll see on November 3rd.
Define broadly available. In my 3rd-world country with a currency that's slowly being obliterated against the USD, there are already three brands of ATX 3.0 PSUs with the 12VHPWR (PCIe 5.0) connector: FSP, Thermaltake, and (I pray that this brand gets its priorities straight for the sake of whoever buys their PSUs) Gigabyte. The FSP even comes in 850 W and 1000 W flavors; the other two are only 1000 W so far. Some of the listings are two weeks or more old. And all of this is ready stock mind you, no back order or preorder rubbish.
Posted on Reply
#6
btarunr
Editor & Senior Moderator
Hyderzlet's hope we don't see 4-slot behemoths on the new Radeon GPUs
Remember this double-decker connector from the GTX 680? This could be one way for AIBs to avoid having a row of four 8-pin connectors. I doubt they'll use it, but it exists.

Posted on Reply
#7
Lionheart
ZetZetWhy do you care about that? Just buy lower tier cards if that's an issue. Man people get so weird, it's like religion all over


Huh???
Posted on Reply
#8
ks2
thewanDefine broadly available. In my 3rd-world country with a currency that's slowly being obliterated against the USD, there are already three brands of ATX 3.0 PSUs with the 12VHPWR (PCIe 5.0) connector: FSP, Thermaltake, and (I pray that this brand gets its priorities straight for the sake of whoever buys their PSUs) Gigabyte. The FSP even comes in 850 W and 1000 W flavors; the other two are only 1000 W so far. Some of the listings are two weeks or more old. And all of this is ready stock mind you, no back order or preorder rubbish.
And in the US there is only one single ATX 3.0 PSU with 12VHPWR under 1200 W that's in stock.
Posted on Reply
#9
kapone32
ZetZetWhy do you care about that? Just buy lower tier cards if that's an issue. Man people get so weird, it's like religion all over again.
This sentiment is because there are people that like to populate their MBs with other peripherals. The weight of these cards is also a worry; 2 kg is no joke hanging in a PCIe slot. There is no long-term data to show this is not an issue, especially given the size of the actual card.
Posted on Reply
#10
ZetZet
kapone32This sentiment is because there are people that like to populate their MBs with other peripherals. The weight of these cards is also a worry; 2 kg is no joke hanging in a PCIe slot. There is no long-term data to show this is not an issue, especially given the size of the actual card.
Yes, so why not buy a lower tier card then?

I don't understand the sentiment at all. Do you want Nvidia to use magic and pull performance out of their ass like a rabbit while still maintaining 2 slot cards?
Posted on Reply
#11
TechLurker
I'm curious if AMD might instead look towards adopting EPS12V power connectors should the power requirements increase further. IIRC, there was some discussion on the subject around the time NVIDIA and Intel came up with the 12VHPWR plug of using EPS12V connectors to replace or supplement 8-pin PCIe connectors, as the 4-pin EPS12V can output a continuous 155 W, while the 8-pin EPS12V can output a continuous 235 W (depending on wire quality), versus the 8-pin PCIe connector's limit of 150 W continuous. Further, many high-end 1 kW+ PSUs aimed at energy-intensive rigs usually have 2, sometimes 3, EPS 8-pin cables included in the box. And some modular PSUs, such as Seasonic's, can provide EPS12V or PCIe output from the same modular port, while others, such as EVGA's, have one spare dedicated EPS12V port.
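To put those figures side by side, here is a quick sketch for a hypothetical 450 W card (using the continuous ratings quoted above as assumptions, since real limits depend on wire gauge, and assuming up to 75 W from the slot):

```python
import math

# Sketch comparing connector counts for a hypothetical 450 W card, using the
# continuous ratings quoted above (assumed values; real limits vary with wire
# gauge) and assuming the PCIe slot itself contributes up to 75 W.
RATINGS_W = {
    "8-pin PCIe":   150,
    "4-pin EPS12V": 155,
    "8-pin EPS12V": 235,
}
SLOT_W = 75
BOARD_POWER_W = 450

for name, rating in RATINGS_W.items():
    needed = math.ceil((BOARD_POWER_W - SLOT_W) / rating)
    print(f"{BOARD_POWER_W} W card: {needed} x {name}")
# 3 x 8-pin PCIe, 3 x 4-pin EPS12V, or 2 x 8-pin EPS12V
```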
Posted on Reply
#12
Luminescent
I think AMD won't have a better architecture than Nvidia, so if they were to make a monolithic chip they would probably lose in performance/Watt.
A chiplet approach could mean better yields, lower cost, and lower power consumption if they don't push the silicon to the limit.
What interests me most is what they do in the midrange: how can an AMD chiplet design compete with a not-so-big monolithic chip from Nvidia, which is also made at TSMC but at 4 nm?
One thing is sure: GPU sales continue to be very low.
Posted on Reply
#13
kapone32
ZetZetYes, so why not buy a lower tier card then?

I don't understand the sentiment at all. Do you want Nvidia to use magic and pull performance out of their ass like a rabbit while still maintaining 2 slot cards?
So let me understand this. If you get a 13900K/7950X system and want the best performance and to take advantage of the PCIe lanes available, you should gimp your GPU purchase because the manufacturers have made cards too big for that, and that's OK?

The thing I don't understand is that the 4090 has about the same power draw as the 3090 Ti, yet those cards are 3 slots wide. I can understand SFF builds, but to be compromised by the size of the GPU in an ATX build is crazy.

MBs that support PCIe flexibility already carry a price premium, but because Nvidia has done this there is no need for ATX, as Micro ATX does the same thing. I will be interested to see how boards that have 2 NVMe slots populated in between the x16 slots will handle heat dissipation with a giant GPU sitting above them as well. When (if) DirectStorage becomes a thing, you could have those NVMe drives singing along with the GPU, and that would be quite the heat soak.

Of course, we cannot forget the heat that PCIe 5.0 drives will produce, judging by the exotic heatsinks on the MBs for that protocol.
Posted on Reply
#14
Devon68
kapone32This sentiment is because there are people that like to populate their MBs with other peripherals. The weight of these cards is also a worry; 2 kg is no joke hanging in a PCIe slot.
TBH if I could get that card, I could probably afford a GPU support, or maybe one of those little figurines. Maybe a mini Hulk holding up the card.
Posted on Reply
#15
ThrashZone
ZetZetYes, so why not buy a lower tier card then?

I don't understand the sentiment at all. Do you want Nvidia to use magic and pull performance out of their ass like a rabbit while still maintaining 2 slot cards?
Hi,
Slot count really isn't the bad thing here; I do remember the silly-looking Asus with the Noctua heatsink/fans release :laugh:
It's the length and height that are getting really stupid.
Posted on Reply
#16
Denver
LuminescentI think AMD won't have a better architecture than Nvidia, so if they were to make a monolithic chip they would probably lose in performance/Watt.
A chiplet approach could mean better yields, lower cost, and lower power consumption if they don't push the silicon to the limit.
What interests me most is what they do in the midrange: how can an AMD chiplet design compete with a not-so-big monolithic chip from Nvidia, which is also made at TSMC but at 4 nm?
One thing is sure: GPU sales continue to be very low.
Despite the name, Nvidia's GPUs are made on 5 nm. I think AMD has a huge chance to beat Nvidia by a considerable margin; after all, it will have more than twice as many compute units. My only question is the RT performance.
Posted on Reply
#17
Vayra86
ZetZetI don't understand the sentiment at all. Do you want Nvidia to use magic and pull performance out of their ass like a rabbit while still maintaining 2 slot cards?
That's ironically what they've been doing for the better part of the last 20 years, yes.

A new bar has been set and it's okay to be sentimental about that. You can wait for this size to arrive at midrange. One gen? Two?
Posted on Reply
#18
ZetZet
kapone32So let me understand this. If you get a 13900K/7950X system and want the best performance and to take advantage of the PCIe lanes available, you should gimp your GPU purchase because the manufacturers have made cards too big for that, and that's OK?

The thing I don't understand is that the 4090 has about the same power draw as the 3090 Ti, yet those cards are 3 slots wide. I can understand SFF builds, but to be compromised by the size of the GPU in an ATX build is crazy.
If your build cannot support a thicker card, then you buy a slimmer card and sacrifice performance. Not to mention PCIe lanes are really not an issue anymore for desktops: SLI is dead, and storage doesn't benefit much from speeds in the real world.

It's pretty clear that the 4090 cooler was overbuilt and Nvidia cut down the power limit at the very end, maybe expecting outrage, maybe because the performance gains were not enough to justify it. But it gives you the benefit of the card being very quiet.
Vayra86That's ironically what they've been doing for the better part of the last 20 years, yes.

A new bar has been set and it's okay to be sentimental about that. You can wait for this size to arrive at midrange. One gen? Two?
So then you go buy the low end if you do not want leading-edge performance.

Did you guys forget to look at performance per watt?

The 4090 is the most power-efficient card on the planet by a wide margin. You can expect lower-end cards using the same architecture to also be very fast and small.
Posted on Reply
#19
ARF
kiddagoatThis seems like a smart decision, in my opinion.
There is no problem with the hybrid 12+4-pin 12VHPWR connector in itself. But... only as long as one uses as many of them as needed.
One such connector cannot deliver 600 watts. The pins are too few and too thin to sustain the load - both electrical current through them and the dissipated heat from the enormous heatsink nearby.
Just think about it - you need a 2-kilo heatsink to dissipate that energy which you want to squeeze through super thin pins - not gonna happen.

Look at the second image - the normal PCI-e 6-pin and 8-pin connectors have a good size for the current that passes through them.
Whoever made the decision to call the new connector "600 W-ready" must be fired, his degree in electrical engineering and heat transfer revoked, and he should be publicly humiliated.

This said, the RTX 4090 needs not 1 but 3 such connectors to function safely.
Posted on Reply
#20
ZetZet
ARFOne such connector cannot deliver 600 watts. The pins are too few and too thin to sustain the load - both electrical current through them
That's just wrong. The connector itself is overkill for 50 amps. You only have about 8 amps per pin, and you can do 8 amps on a hair.

The issue with the connector has nothing to do with the current or the heat; the only issue is mechanical: it doesn't have a proper latch to resist getting partially pulled out if someone really bends the cables.
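For reference, the arithmetic behind that per-pin figure looks like this (a sketch assuming a 12 V rail and an even split across the connector's six 12 V pins):

```python
# Sketch: per-pin current for a 600 W load through the 12VHPWR connector,
# assuming 12 V and an even split across its six 12 V power pins.
POWER_W = 600
RAIL_V = 12
POWER_PINS = 6

total_current_a = POWER_W / RAIL_V                # 50 A in total
per_pin_current_a = total_current_a / POWER_PINS  # ~8.3 A per pin

print(f"Total current:   {total_current_a:.1f} A")
print(f"Per-pin current: {per_pin_current_a:.1f} A")
# This lines up with the ~8 A per pin figure mentioned above.
```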
Posted on Reply
#21
Vayra86
ZetZetSo then you go buy the low end if you do not want leading-edge performance.
That is a possibility, but that wasn't the subject. You say Nvidia can't pull performance out of a similar slot size, but that's exactly what they've done.
Posted on Reply
#22
ZetZet
Vayra86That is a possibility, but that wasn't the subject. You say Nvidia can't pull performance out of a similar slot size, but that's exactly what they've done.
But that's my point exactly: wait for the 4060 and buy that if you want 2-slot cards... I'm sure it will be faster than a 2080 Ti.

Even the 4070 might be two-slot, considering the 3080 Ti was.
Posted on Reply
#23
Vayra86
ZetZetBut that's my point exactly: wait for the 4060 and buy that if you want 2-slot cards... I'm sure it will be faster than a 2080 Ti.
The point is people are sentimental about a change in the definition of 'progress'.

I don't see increased slot size for increased performance as progress. It's just about going bigger.
Posted on Reply
#24
ZetZet
Vayra86The point is people are sentimental about a change in the definition of 'progress'.

I don't see increased slot size for increased performance as progress. It's just about going bigger.
The progress is in the performance-per-watt department; total performance is absolutely irrelevant in that sense. They could have pushed the cards to use that amount of power for a long time; it's just that there was no reason to, since they couldn't gain any performance. Now they can get gains, so they do. The only people that benefit are the consumers: you don't have to wait two more years for that performance in a two-slot card.
Posted on Reply
#25
Vayra86
ZetZetThe progress is in the performance-per-watt department; total performance is absolutely irrelevant in that sense. They could have pushed the cards to use that amount of power for a long time; it's just that there was no reason to, since they couldn't gain any performance. Now they can get gains, so they do. The only people that benefit are the consumers: you don't have to wait two more years for that performance in a two-slot card.
A good point. Still, that point does not change the fact that a new bar has been set, and it will bleed down to lower parts of the stack. Ampere was already fat; Ada went for the full brick.

And I do agree, a 30% improvement in perf/W is impressive, but then that also confirms how shitty Samsung was.
Posted on Reply