Friday, May 31st 2024

ASRock Innovates First AMD Radeon RX 7000 Graphics Card with 12V-2x6 Power Connector

ASRock is ready with the first Radeon RX gaming graphics card to feature a modern 12V-2x6 power connector, replacing the up to three 8-pin PCIe power connectors it previously took to power a Radeon RX 7900 series graphics card. The ASRock RX 7900 series WS graphics cards are also the first 2-slot RX 7900 series graphics cards. They target workstations and GPU rendering farms that stack multiple graphics cards into 4U or 5U rackmount cases, with no spacing between 2-slot graphics cards. ASRock is designing cards based on both the RX 7900 XT and the flagship RX 7900 XTX.

The ASRock RX 7900 series WS graphics cards appear long, and no more than 2 slots thick. To achieve these dimensions, a lateral-flow cooling solution is used, which combines a dense aluminium or copper channel heatsink with a lateral blower. Remember we said these cards are meant for workstations or rendering farms? The noise output will be deafening, though acceptable by datacenter standards. The most striking aspect of these cards, of course, is their 12+4 pin ATX 12V-2x6 power input, which is capable of drawing 600 W of continuous power from a single cable. It's located at the card's tail-end, where it would have been an engineering challenge to fit three 8-pin connectors.
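As a quick back-of-the-envelope check of that single-cable claim, here is a minimal Python sketch using only the spec-sheet figures (150 W per 8-pin PCIe connector, 600 W continuous for 12V-2x6), not any measured values:

```python
# Spec-sheet power figures (not measurements): one 12V-2x6 input vs. the
# up-to-three 8-pin PCIe inputs it replaces on RX 7900 series cards.
PCIE_8PIN_W = 150   # PCIe CEM limit per 8-pin connector
V12_2X6_W = 600     # continuous rating of a 12V-2x6 connector

three_8pin = 3 * PCIE_8PIN_W
print(f"Three 8-pin connectors: {three_8pin} W")              # 450 W
print(f"One 12V-2x6 connector:  {V12_2X6_W} W")               # 600 W
print(f"Headroom over 3x 8-pin: {V12_2X6_W - three_8pin} W")  # 150 W
```

So a single 12V-2x6 cable covers everything three 8-pin inputs could deliver, with 150 W to spare.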

94 Comments on ASRock Innovates First AMD Radeon RX 7000 Graphics Card with 12V-2x6 Power Connector

#26
TheDeeGee
TomorrowAnd most people are not. They're not gonna buy a new PSU.
You don't even have to, since most well-known brands sell a 2x 8-pin to 12VHPWR cable.

I use one on my Seasonic PRIME Fanless TX-700 without any issues.
Posted on Reply
#27
trsttte
ZoneDymoImagine though if we can do 300 watt pci-e slots with 300 watt high end gpu's that then dont need a power connector at all, that would be beautiful.
Not beautiful at all. You're adding a middleman to the majority of the power delivery and would run into the same problems as with this stupid Micro-Fit connector: very little surface to move all that power through.
GuckyThe 12V-2x6 is the next industry standard, whether we want it or not. The moment all PSU makers included it in their products, it was set in stone.
Nothing about the old 8pin connectors was discontinued. There are some new possible features with the 12VHPWR connector but so far no one has used them or even showed any intention to do so.

The only one pushing for this stupid standard is Nvidia, and since they're by far the market leader in GPUs, PSU makers had to start including this fire hazard of a connector, but nothing about ATX 3.0 requires them to do so. Funnily enough, a lot of the designs use the same regular 8-pin Molex on the PSU side, because they know it's a better solution both in terms of surface area to move the power through and for connecting a cable in a cramped space, and only use the 12VHPWR connector on the GPU side to appease Nvidia/Nvidia clients.

ASRock clearly failed to read the room. Every discussion about this connector is super negative because everyone fucking hates the thing, and with good reason. Since they only work with AMD they could have avoided this whole thing, but here they go deciding to jump on the hate train.
Posted on Reply
#28
R-T-B
I guess ASRock failed to estimate AMD fans' hate for this connector, I will grant. It's borderline irrational. Case in point: this thread.
Posted on Reply
#29
Tomorrow
GuckyOf course they are present. There is always a transition phase between 2 standards. That is why 12VO never came to the consumer market, it is impossible to make a transition phase for that.
And there are other things beside GPUs that might use the 6 or 8-Pins.
PSUs released in 2023: out of 370, 91 still included the floppy connector. Of the 142 released so far in 2024, 28 still include it.
6-pin and 8-pin are not going anywhere for decades, not least because of the huge backlog of GPUs that use them.

Aside from GPUs, very few devices actually need more power than the PCIe slot can provide. I've seen some SSD add-on cards and some motherboards use one, but that's about it.

There will be no transition period. Once something better comes along this "experiment" will be dropped faster than a hot potato.
Even among Nvidia cards (AIBs included) released in 2024: out of 196, 142 used this new connector. So even Nvidia has not fully committed to, or mandated, this for all their cards.
R-T-BI guess ASRock failed to estimate AMD fans' hate for this connector, I will grant. It's borderline irrational. Case in point: this thread.
Oh of course. Nvidia fans and card owners totally LOVE this connector. /s
In another thread a Nvidia fanboy told me how Nvidia owners hated FSR Frame Generation. Except 30 series and earlier apparently...
Posted on Reply
#30
Ferrum Master
Agh... the flow of hate in this thread... it almost feels like you could slice it.
Posted on Reply
#31
kondamin
TomorrowEasier? You call this easier? Having to watch cable bends, unplugging periodically to check for damage and stupid "dongles".
There's nothing "easier" about the new standard. Easier would have been to adopt 8pin EPS already used on workstation cards.
They can make an L-connector; heck, they could even do one with a hinge. And I'm talking about the power capacity of the cable, as it can replace 2~3 of the old ones.
Posted on Reply
#32
Random_User
TomorrowPSUs released in 2023: out of 370, 91 still included the floppy connector. Of the 142 released so far in 2024, 28 still include it.
At least it is possible to power some ancient peripherals, e.g. a sound card, with the FDD connector, and the same goes for 6-pin PCIe. I hardly see any use for the "compact" 600 W connector in this scenario (outside of the power-hog "compact internal space heater" graphics cards), other than the space it saves on the PSU's connection panel.
trsttteNot beautiful at all, you're adding a middle man to the majority of the power delivery and would run into the same problems as with this stupid microfit connector: very small surface to move all that power through.



Nothing about the old 8pin connectors was discontinued. There are some new possible features with the 12VHPWR connector but so far no one has used them or even showed any intention to do so.

The only one pushing for this stupid standard is Nvidia, and since they're by far the market leader in GPUs, PSU makers had to start including this fire hazard of a connector, but nothing about ATX 3.0 requires them to do so. Funnily enough, a lot of the designs use the same regular 8-pin Molex on the PSU side, because they know it's a better solution both in terms of surface area to move the power through and for connecting a cable in a cramped space, and only use the 12VHPWR connector on the GPU side to appease Nvidia/Nvidia clients.

ASRock clearly failed to read the room. Every discussion about this connector is super negative because everyone fucking hates the thing, and with good reason. Since they only work with AMD they could have avoided this whole thing, but here they go deciding to jump on the hate train.
R-T-BI guess ASRock failed to estimate AMD fans' hate for this connector, I will grant. It's borderline irrational. Case in point: this thread.
I wholly support your sentiments here. But let's hope this is related to the WS cards only, and the "regular" card design is safe. ASRock is a huge OEM company, so I won't hold my breath, though. They're after where the money is...

On the other hand, AMD said they will eventually move to this connector sometime in the future. Who knows if the future has come.

Also, I have a guess that Nvidia has been preparing for enterprise/datacenter/workstation domination for a long time, and they just wanted to alpha/beta-test this connector on wealthy, gullible consumer "guinea pigs", or were simply too lazy/greedy to differentiate the PCB design between enterprise and consumer products, making it uniform for both.
Posted on Reply
#33
TheinsanegamerN
AusWolfOr just get GPU power consumption back to normal human levels to... I don't know... 300 W for high-end (2x8-pin), 150 W for mid-range (1x8-pin) and 75 W (no power connector) for entry level?
Nobody has taken away your 300w GPUs, or your 150w GPUs, or your 75W GPUs. Go buy as many RX 6400XTs as you'd like!

Hell, why stop there? High end used to mean sub 40w, because that was all AGP could support! We HAVE TO GO BACK! :fear:

Or, we can adapt to the changing world instead.
ZoneDymohilarious, first of all screw ASRock as a company, but apart from that... innovation through a crappy connector, AND then you also dare to throw on a blower-style cooler?

I said it before... would it not be better to just update the now-ancient power delivery of the PCIe slot? It's been 75 watts since its inception... change that to... oh, I don't know, 300 watts?
People already whine and bitch and moan about motherboard pricing. You want to quadruple the power capability on top of all that?
ZoneDymooh 100% agree, im all for it, throwing more power at it is the weakest form of innovation.
Imagine though if we can do 300 watt pci-e slots with 300 watt high end gpu's that then dont need a power connector at all, that would be beautiful.
So, if you can do a high-end GPU with 300 W, why not scale that tech up to 400, or 500? Chip size is not a limiting factor anymore; removing heat is now the limiting factor. Limiting your GPU lineup to 300 W at best didn't work out so well for Alchemist, nor historically has it worked well for AMD. If you don't want a 600 W GPU... don't buy a 600 W GPU? 4060s and 6650 XTs and 7800 XTs still exist.
Posted on Reply
#34
Dirt Chip
Copy and screenshot the comments for when AMD officially uses this connector on future GPUs.
What will the crowd say then...
Posted on Reply
#35
Tomorrow
kondaminthey can make an L connector, heck they could even do one with a hinge and im talking about the power capacity of the cable as it can replace 2~3 of the old ones.
And how did this L-connector work out for CableMod? Having a pre-made 90-degree bend does not solve the problem of safety margins and bad design. It merely resolves one failure point.

Also, most Nvidia cards used either 1x 8-pin or 2x 8-pin before the introduction of this new 12-pin (16 with sense pins) standard. Very few cards used 3x 8-pin, and like I said before, 8-pin EPS could replace 8-pin PCIe while carrying more power, making the new "compact" 16-pin unnecessary.

Also chasing this compactness is meaningless if only the power connector is small but sits smack in the middle of the card with huge coolers taking 3+ slots.
Does anyone really worry about the space 8pin PCIe occupied in a situation like this?

If Nvidia truly wanted a compact card they could have made the coolers smaller or mandated smaller coolers and used HBM2 to further cut down the size of the PCB itself. Like AMD did back in 2015 with the R9 Nano: www.techpowerup.com/gpu-specs/radeon-r9-nano.c2735
Posted on Reply
#36
Why_Me
natr0nwe should online boycott that pos connection
You will bend the knee!
Posted on Reply
#37
TheinsanegamerN
TomorrowAnd how did this L-connector work out for CableMod? Having a pre-made 90-degree bend does not solve the problem of safety margins and bad design. It merely resolves one failure point.

Also, most Nvidia cards used either 1x 8-pin or 2x 8-pin before the introduction of this new 12-pin (16 with sense pins) standard. Very few cards used 3x 8-pin, and like I said before, 8-pin EPS could replace 8-pin PCIe while carrying more power, making the new "compact" 16-pin unnecessary.

Also chasing this compactness is meaningless if only the power connector is small but sits smack in the middle of the card with huge coolers taking 3+ slots.
Does anyone really worry about the space 8pin PCIe occupied in a situation like this?

If Nvidia truly wanted a compact card they could have made the coolers smaller or mandated smaller coolers and used HBM2 to further cut down the size of the PCB itself. Like AMD did back in 2015 with the R9 Nano: www.techpowerup.com/gpu-specs/radeon-r9-nano.c2735
HBM is no advantage. Look how close the GDDR sits to the GPU on modern Ada cards. IDK why people are obsessed with HBM; the R9 Fury/Nano/Vega 56/64 were all failures.

If you dont want a 3 slot card, dont buy one! Plenty of 2 slot cards out there.
Dirt ChipCopy and screenshot the comments for when AMD officially uses this connector on future GPUs.
What will the crowd say then...
If they burn up: "Told you the connector was shit"
If they dont: "Told you Nvidia screwed up".
Posted on Reply
#38
natr0n
Why_MeYou will bend the knee!
Posted on Reply
#39
trsttte
TheinsanegamerNHBM is no advantage. Look how close the GDDR sits to the GPU on modern Ada cards. IDK why people are obsessed with HBM; the R9 Fury/Nano/Vega 56/64 were all failures.
They were failures, but not because of HBM. HBM must be worth something, otherwise it wouldn't have been used on workstation/server cards. Much, much higher bus width has its advantages.
Random_User
Also, I have a guess that Nvidia has been preparing for enterprise/datacenter/workstation domination for a long time, and they just wanted to alpha/beta-test this connector on wealthy, gullible consumer "guinea pigs", or were simply too lazy/greedy to differentiate the PCB design between enterprise and consumer products, making it uniform for both.
You're assuming they want this design; here's the thing, they probably don't. Just like they were using CPU power connectors without the sense pins of PCIe power connectors, they won't have a reason for a more expensive Micro-Fit with more sense signals they have no use for. Every penny counts, and big server OEMs have no reason to spend a couple extra bucks dealing with sense pins and all that.
Posted on Reply
#40
Tomorrow
TheinsanegamerNHBM is no advantage. Look how close the GDDR sits to the GPU on modern Ada cards. IDK why people are obsessed with HBM; the R9 Fury/Nano/Vega 56/64 were all failures.
No matter how close Nvidia places GDDR chips to the GPU die, the GDDR die is still larger and still requires space on the PCB itself.
Not to mention the insane bandwidth advantage HBM has over GDDR. With GDDR7 we will be lucky if the top cards approach 1.5 TB/s.
HBM3 already achieved 3.36 TB/s last year, more so with HBM3e this year and next, and HBM4 is in development.
Not to mention the massive capacity, where a single stack that is physically smaller than a single 2 GB GDDR chip can hold 24 GB.
TheinsanegamerNIf you dont want a 3 slot card, dont buy one! Plenty of 2 slot cards out there.
Out of the 323 cards with this new connector, only 47 are dual-slot, and of these the fastest air-cooled models are 4070 Ti Super models.
So if someone wants a 4080 or faster they have no choice but to go the triple- or quad-slot route, or watercooling via monoblock or AIO, which further adds to the price and simply displaces some of the cooling apparatus to other parts of the case, like the front or top.

I would not say 47 out of 323 is "plenty".
Data is from Geizhals: geizhals.eu/?cat=gra16_512&xf=1481_2%7E5585_1x+16-Pin+5PCIe%7E653_NVIDIA
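For what it's worth, the shares implied by those listing counts (the 47-of-323 figure here and the 142-of-196 figure quoted earlier in the thread; counts are taken from the posts, not re-scraped) work out as follows:

```python
# Shares implied by the Geizhals listing counts quoted in this thread
# (counts taken from the posts, not independently verified).
def share_pct(part: int, total: int) -> float:
    return 100 * part / total

print(f"Dual-slot among 16-pin cards:      {share_pct(47, 323):.1f}%")   # 14.6%
print(f"2024 Nvidia cards with the 16-pin: {share_pct(142, 196):.1f}%")  # 72.4%
```

So roughly one in seven 16-pin cards is dual-slot, even as nearly three quarters of Nvidia's 2024 lineup carries the connector.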
Posted on Reply
#41
Chrispy_
Urgghh.

I guess the new connector is less of a fire hazard when it's only handling ~300W, but I was hoping that 12VHPWR would either move to MiniFit Jr or be officially downrated to 300W instead of 600W.
Posted on Reply
#42
Beer4Myself
TomorrowNo matter how close Nvidia places GDDR chips to the GPU die, the GDDR die is still larger and still requires space on the PCB itself.
Not to mention the insane bandwidth advantage HBM has over GDDR. With GDDR7 we will be lucky if the top cards approach 1.5 TB/s.
HBM3 already achieved 3.36 TB/s last year, more so with HBM3e this year and next, and HBM4 is in development.
Not to mention the massive capacity, where a single stack that is physically smaller than a single 2 GB GDDR chip can hold 24 GB.

Out of the 323 cards with this new connector, only 47 are dual-slot, and of these the fastest air-cooled models are 4070 Ti Super models.
So if someone wants a 4080 or faster they have no choice but to go the triple- or quad-slot route, or watercooling via monoblock or AIO, which further adds to the price and simply displaces some of the cooling apparatus to other parts of the case, like the front or top.

I would not say 47 out of 323 is "plenty".
Data is from Geizhals: geizhals.eu/?cat=gra16_512&xf=1481_2~5585_1x+16-Pin+5PCIe~653_NVIDIA
There is also a 2-slot 4080 Super from Inno3D.
Posted on Reply
#43
Vya Domus
R-T-BIt's borderline irrational.
It's not irrational to not want something with a track record of being a fire hazard, especially when the older option works just fine. It's not like it's an absolute necessity to move away from 8pin PCIe.
Posted on Reply
#45
TheinsanegamerN
trsttteThey were failures but not because of HBM, HBM must be worth something otherwise it wasn't used by workstation/server cards. Much much higher bus width has it's advantages.
Higher bandwidth, significantly higher latency, and it's of no real benefit to consumer workloads. And it won't make the cards smaller. Last I checked, every AMD HBM card was at least the standard full PCI card height. Switching to HBM from GDDR won't make the cards smaller. It WILL raise the price of the GPU and make proper cooling harder, so... yay?
TomorrowNo matter how close Nvidia places GDDR chips to the GPU die, the GDDR die is still larger and still requires space on the PCB itself.
Not to mention the insane bandwidth advantage HBM has over GDDR. With GDDR7 we will be lucky if the top cards approach 1.5 TB/s.
HBM3 already achieved 3.36 TB/s last year, more so with HBM3e this year and next, and HBM4 is in development.
Not to mention the massive capacity, where a single stack that is physically smaller than a single 2 GB GDDR chip can hold 24 GB.
If the consumer loads don't need the bandwidth, they don't need the bandwidth. To date we've yet to see a single game that ran significantly faster on a Vega 64 than a 1080 because of bandwidth.

And as I said above, every HBM GPU from AMD was at least standard PCI card height. Switching to HBM does not make the cards smaller.
TomorrowOut of the 323 cards with this new connector, only 47 are dual-slot, and of these the fastest air-cooled models are 4070 Ti Super models.
So if someone wants a 4080 or faster they have no choice but to go the triple- or quad-slot route, or watercooling via monoblock or AIO, which further adds to the price and simply displaces some of the cooling apparatus to other parts of the case, like the front or top.

I would not say 47 out of 323 is "plenty".
Data is from Geizhals: geizhals.eu/?cat=gra16_512&xf=1481_2~5585_1x+16-Pin+5PCIe~653_NVIDIA
Yes, if you want a high TDP GPU, you will need a big cooler.

Do you want 4080s that throttle under light load? Would putting a dual-slot cooler on a 4080 and giving up 20%+ performance make you happy? IDK what you want. You can't delete physics because you want a dual-slot cooler.
Posted on Reply
#46
Tomorrow
TheinsanegamerNHigher bandwidth, significantly higher latency, and it's of no real benefit to consumer workloads.
Video memory has always been higher latency than system memory so that's irrelevant.
TheinsanegamerNIt WILL raise the price of the GPU and make proper cooling harder, so......yay?
No it won't. AMD was able to sell a 16 GB HBM2 card for $700. And it had the same peak bandwidth the 4090 has today, five years earlier.
Also cooling is easier assuming there is epoxy fill to make the GPU die and HBM the same height. We have seen time and time again how a badly engineered card cooks its GDDR chips.
TheinsanegamerNIf the consumer loads dont need the bandwidth, they dont need the bandwidth. To date we'e yet to see a single game that ran significantly faster on a vega 64 then a 1080 because of bandwidth.
Vega 64 was not "only" bandwidth-starved. It's a false assumption that a game that benefits from massive bandwidth would have run better on Vega 64 merely thanks to HBM. Every consumer GPU benefits from higher bandwidth to some degree, especially at higher resolutions.
TheinsanegamerNAnd as I said above, every HBM GPU from AMD was at least standard PCI card height. Switching to HBM does not make the cards smaller.
It all depends on engineering. And why are we talking about height? We are talking about length and thickness (that's what she said), not how "tall" cards are.
Looking at the 3090 PCB with its stupid vertically placed angled 12-pin, there is massive free space there for 3x 8-pin. Less so on the 4090, but still possible.
TheinsanegamerNYes, if you want a high TDP GPU, you will need a big cooler.
The argument was about the new connector's size and how most cards utilizing this connector are actually huge, negating any benefit from a smaller connector. They may as well have 3x 8-pin and it would make no difference to the cooler size.
TheinsanegamerNDo you want 4080s that throttle under light load? Would putting a dual slot cooler on a 4080 and giving up 20%+ performance make you happy? IDK what you want. You cant delete physics because you want a dual slot cooler.
Why would a dual-slot 4080 throttle under light load? I linked the review of the dual-slot 4080S and there was no mention of throttling in the review. I suspect the noise levels might have been higher than a triple- or quad-slot card's, but performance was on par with other 4080S models.

Even 4090 could be undervolted with minimal performance loss on a dual-slot cooler.
Posted on Reply
#47
Dr. Dro
ChaitanyaAbsolutely stupid decision to put that power connector on a product which was heavily marketed as not having that fire hazard of a connector.
You should know better than this, man. Really.
RavenmasterFirst Radeon card to burst into flames
I triple dare you to get a 2x6 connector to smoke. Aris couldn't do it on a load tester while intentionally straining the connector.
GuckyMaybe they never had any reason to switch. They supply 300W with 8-Pins and can make a native!! adapter with 2x8-Pin to the 12+4-Pin connector. I use such a cable for my GPU.
In the end it reduces some cost for them.
All I needed was a $25 cable to get my EVGA 1300 G2 ready. But buying a cable doesn't stroke anyone's ego or "pride in being an AMD customer".

What a trainwreck of a thread. Here's where you see where people's loyalties lie, to a brand or the trade.
Posted on Reply
#48
Chrispy_
R-T-BI guess ASRock failed to estimate AMD fans hate for this connector, I will grant. It's borderline irrational. Case in point: this thread.
I don't think it has anything to do with AMD fans. It's already been annoying equally-vocal Nvidia fans for two years.

The resistance against it comes from two main points -
  1. people feel they need to buy a new PSU or dedicated PSU>12v2x6 cables because adapters have a poor safety record and they're butt-ugly.
  2. there have been too many examples of cables melting or burning in situations where the user did absolutely nothing wrong; stock GPUs with first-party, genuine cables.
The naysayers will cite examples of old 8-pin cables melting too and that's fair, but almost all of those 8-pin examples are things like mining rigs overloading adapters, overclocks, or faulty GPUs pulling way more than they should. Also, the number of reports of melting 8-pins is far lower per year or per product - remember how much noise there was about melting 12VHPWR in 2022 and 2023? Google has far more results for "12VHPWR melting" than 8-pin cables already, and 8-pin cables have had 16 more years on the market to fail and generate results. Again, most of the "8-pin melting" results are miners abusing cables and adapters, not ordinary people with a single GPU in a PC.

My take on it, as someone with a degree that covered physics and electronics to a decent standard, is that the new connectors are rated to draw much more power than the older cables. The technical drawings and manufacturer specs on pin contact surfaces from both Molex and Amphenol confirm that each pin has slightly less contact area than the older 8-pin Mini-Fit Jr. Then you have a claimed rating of 8.3 A per wire pair going through that newer, smaller pin with less contact area, compared to 4.2 A per wire pair in a 150 W 8-pin connector.

So we have a new connector that (ignoring fanboy loyalty) simply puts twice as much juice through an even smaller connector than we're used to. It's a problem that isn't going away because the basic laws of electricity aren't changing any time soon.

Is the safety-margin on the old 8-pin cable too high? Maybe it is. I can't prove that, but it is very rare that the cable has been blamed for melting or fires. It just seems dumb to reduce the current-carrying capacity of the connector by making it smaller and giving it smaller contact patches, and then double the current running through it as well. IMO the 12V2x6 connector should be rated for 300W with its existing Amphenol/MicroFit connector size. That's still less safe than 8-pin as it's about 1.4x more current per square millimetre of pin contact, but if we assume that 8-pin is overbuilt, it's reasonable. 450W cables are about 2.6x more current-per-area than 8-pin and 600W cables are about 3.5x more current-per-area. To me, and probably to all the people whose GPUs have been burned, that's too big a jump and it's eaten too much of the safety margin that was built into the MiniFit Jr we've been successfully using with minimal drama for almost two decades.

There's nothing physically wrong with the new connector. The problem is the power rating applied to it; it's not a 600 W connector. If they downrated it to 300 W, that would likely shut up all the complainers. Sure, perhaps the 5090 would need two of them, but having multiple connectors on a GPU isn't exactly a new or outrageous idea, and the first ever GPU series to use the older PCIe connector (the 8800 series) launched with dual 6-pin connectors right out of the gate!
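The per-pair current figures above are straightforward to reproduce. A minimal Python sketch, assuming 12 V rails with six current-carrying pairs in a 12V-2x6 connector and three in an 8-pin PCIe connector (the contact-area comparison needs the Molex/Amphenol drawings and is not reproduced here):

```python
def amps_per_pair(watts: float, pairs: int, volts: float = 12.0) -> float:
    """Current through each wire pair, assuming perfectly even current sharing."""
    return watts / volts / pairs

# 8-pin PCIe: 150 W over three 12 V pairs -> ~4.2 A per pair
pcie8 = amps_per_pair(150, pairs=3)

# 12V-2x6 at its sense-pin-signalled ratings, six pairs each
for rating in (300, 450, 600):
    a = amps_per_pair(rating, pairs=6)
    print(f"{rating} W -> {a:.2f} A per pair ({a / pcie8:.2f}x the 8-pin figure)")
```

At 600 W this lands on ~8.3 A per pair, twice the ~4.2 A of the 150 W 8-pin connector, which is where the "twice as much juice through a smaller contact" concern comes from.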
Posted on Reply
#49
AusWolf
TheinsanegamerNNobody has taken away your 300w GPUs, or your 150w GPUs, or your 75W GPUs. Go buy as many RX 6400XTs as you'd like!

Hell, why stop there? High end used to mean sub 40w, because that was all AGP could support! We HAVE TO GO BACK! :fear:

Or, we can adapt to the changing world instead.
If somebody wants to pump 600+ Watts into their Geforce 9090 Ultra Super Ti Übermensch Edition, so be it. I just want to game on high graphics with a GPU that doesn't burn the house down.

If GPU manufacturers want to use more and more power because they don't have a better idea to squeeze more performance out of their architectures, that's one thing, but I can have an opinion on it, surely? ;)
Posted on Reply
#50
ZoneDymo
TheinsanegamerN1. People already whine and bitch and moan about motherboard pricing. You want to quadruple the power capability on top of all that?


2. So, if you can do a high-end GPU with 300 W, why not scale that tech up to 400, or 500? Chip size is not a limiting factor anymore; removing heat is now the limiting factor. Limiting your GPU lineup to 300 W at best didn't work out so well for Alchemist, nor historically has it worked well for AMD. If you don't want a 600 W GPU... don't buy a 600 W GPU? 4060s and 6650 XTs and 7800 XTs still exist.
1. Erm, no? The power still comes from the PSU in the end, just through a bigger main plug: instead of 24-pin, maybe 26 or so, or just make that plug carry more power. It's just that the peripherals pull the power from the board instead of having all kinds of separate cables going everywhere.

2. Why not scale it up? Well, I think I addressed that already in the original comment: it's the weakest form of progress.
We need the devs to get their advancement elsewhere and focus money and resources on that instead of just blasting it with more power consumption.
Posted on Reply