Friday, May 31st 2024
ASRock Innovates First AMD Radeon RX 7000 Graphics Card with 12V-2x6 Power Connector
ASRock is ready with the first Radeon RX gaming graphics card to feature a modern 12V-2x6 power connector, replacing the up to three 8-pin PCIe power connectors it previously took to power a Radeon RX 7900 series graphics card. The ASRock RX 7900 series WS graphics cards are also the first 2-slot RX 7900 series graphics cards. They target workstations and GPU rendering farms that stack multiple graphics cards into 4U or 5U rackmount cases, with no spacing between 2-slot graphics cards. ASRock is designing cards based on both the RX 7900 XT and the flagship RX 7900 XTX.
The ASRock RX 7900 series WS graphics cards appear long and no more than 2 slots thick. To achieve these dimensions, a lateral-flow cooling solution is used, which combines a dense aluminium or copper channel heatsink with a lateral blower. Remember we said these cards are meant for workstations or rendering farms? So the noise output will be deafening, right up to datacenter standards. The most striking aspect of these cards, of course, is their 12+4 pin ATX 12V-2x6 power input, which is capable of drawing 600 W of continuous power from a single cable. It is located at the card's tail-end, where it would have been an engineering challenge to fit three 8-pin connectors.
94 Comments on ASRock Innovates First AMD Radeon RX 7000 Graphics Card with 12V-2x6 Power Connector
I use one on my Seasonic PRIME Fanless TX-700 without any issues.
The only one pushing for this stupid standard is Nvidia, and since they're by far the market leader in GPUs, PSU makers had to start including this fire hazard of a connector, but nothing about ATX 3.0 requires them to do so. Funny enough, a lot of the designs use the same regular 8-pin Molex connector on the PSU side, because they know it's a better solution both in terms of surface area to move the power through and for connecting a cable in a cramped space, and they only use the 12VHPWR connector on the GPU side to appease Nvidia/Nvidia clients.
ASRock clearly failed to read the room. Every discussion about this connector is super negative, because everyone fucking hates the thing, and with good reason. Since they only work with AMD they could have avoided this whole thing, but here they go and decide to hop on the hate train anyway.
6-pin and 8-pin are not going anywhere for decades, not least because of the huge backlog of GPUs that use them.
Aside from GPUs, very few devices actually need more power than the PCIe slot can provide. I've seen some SSD add-on cards and some motherboards use one, but that's about it.
There will be no transition period. Once something better comes along this "experiment" will be dropped faster than a hot potato.
Even among Nvidia cards (AIB included) released in 2024, only 142 out of 196 used this new connector. So even Nvidia has not fully committed to, or mandated, this for all their cards. Oh of course, Nvidia fans and card owners totally LOVE this connector. /s
In another thread an Nvidia fanboy told me how Nvidia owners hated FSR Frame Generation. Except 30 series and earlier, apparently...
On the other hand, AMD said they will eventually move to this connector sometime in the future. Who knows if the future has come.
Hell, why stop there? High end used to mean sub 40w, because that was all AGP could support! We HAVE TO GO BACK! :fear:
Or, we can adapt to the changing world instead. People already whine and bitch and moan about motherboard pricing. You want to quadruple the power capability on top of all that? So, if you can do a high-end GPU with 300 W, why not scale that tech up to 400, or 500? Chip size is not a limiting factor anymore; removing heat is now the limiting factor. Limiting your GPU lineup to 300 W at best didn't work out so well for Alchemist, nor historically has it worked well for AMD. If you don't want a 600 W GPU... don't buy a 600 W GPU? 4060s and 6650 XTs and 7800 XTs still exist.
What will the crowd say then...
Also, most Nvidia cards used either 1x 8-pin or 2x 8-pin before the introduction of this new 12-pin (16 with sense pins) standard. Very few cards used 3x 8-pin, and like I said before, 8-pin EPS could replace 8-pin PCIe while carrying more power, making the new "compact" 16-pin unnecessary.
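To put rough numbers on that claim, here's a quick back-of-the-envelope sketch in Python. The per-pin amperage is a placeholder rather than a spec value (real ratings depend on the terminal type, standard vs. HCS Mini-Fit Jr), but it shows why four +12V pairs beat three at the same current.

```python
# Back-of-the-envelope connector capacity, assuming every +12V pin carries
# the same current. The per-pin amperage is a placeholder, not a spec value;
# real ratings depend on the terminal type (standard vs. HCS Mini-Fit Jr).

def connector_watts(v12_pairs: int, amps_per_pin: float, volts: float = 12.0) -> float:
    """Continuous power = number of +12V wire pairs * current per pin * 12 V."""
    return v12_pairs * amps_per_pin * volts

AMPS = 4.2  # roughly what a 150 W PCIe 8-pin works out to per +12V pin

print(f"PCIe 8-pin (3x +12V): {connector_watts(3, AMPS):.0f} W")  # ~150 W
print(f"EPS 8-pin  (4x +12V): {connector_watts(4, AMPS):.0f} W")  # ~200 W from a same-size connector
print(f"12V-2x6    (6x +12V): {connector_watts(6, 8.3):.0f} W")   # ~600 W at the claimed 8.3 A per pin
```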
Also, chasing this compactness is meaningless if only the power connector is small but it sits smack in the middle of the card with a huge cooler taking up 3+ slots.
Does anyone really worry about the space 8pin PCIe occupied in a situation like this?
If Nvidia truly wanted a compact card they could have made the coolers smaller or mandated smaller coolers and used HBM2 to further cut down the size of the PCB itself. Like AMD did back in 2015 with the R9 Nano: www.techpowerup.com/gpu-specs/radeon-r9-nano.c2735
If you don't want a 3-slot card, don't buy one! Plenty of 2-slot cards out there. If they burn up: "Told you the connector was shit."
If they don't: "Told you Nvidia screwed up."
Not to mention the insane bandwidth advantage HBM has over GDDR. With GDDR7 we will be lucky if the top cards approach 1.5 TB/s.
HBM3 already achieved 3.36 TB/s last year. More so with HBM3e this year and next, and HBM4 is in development.
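If anyone wants to see where figures like these come from, here's a rough sketch: peak bandwidth is just bus width times per-pin data rate. The bus widths and per-pin rates below are illustrative assumptions, not quotes from any particular product's datasheet.

```python
# Peak memory bandwidth ~= (bus width in bits / 8) * per-pin data rate in Gbps.
# The configurations below are illustrative assumptions, not datasheet quotes.

def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s for a given bus width and per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

# GDDR7 at 32 Gbps on a 384-bit bus - a plausible top-end consumer config:
print(bandwidth_gb_s(384, 32))        # 1536 GB/s, i.e. ~1.5 TB/s

# HBM: each stack is a 1024-bit interface, so even a modest per-pin rate
# adds up. Five stacks at an assumed ~5.2 Gbps per pin:
print(bandwidth_gb_s(5 * 1024, 5.2))  # ~3328 GB/s, in the ballpark of the 3.36 TB/s figure
```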
Not to mention the massive capacity, where a single stack that is physically smaller than a single 2 GB GDDR chip can hold 24 GB. Out of the 323 cards with this new connector, only 47 are dual-slot, and of those, the fastest air-cooled models are 4070 Ti Super models.
So if someone wants a 4080 or faster, they have no choice but to go the triple- or quad-slot route, or the watercooling route via a monoblock or AIO, which further adds to the price and simply displaces some of the cooling apparatus to other parts of the case, like the front or top.
I would not say 47 out of 323 is "plenty".
Data is from Geizhals: geizhals.eu/?cat=gra16_512&xf=1481_2%7E5585_1x+16-Pin+5PCIe%7E653_NVIDIA
I guess the new connector is less of a fire hazard when it's only handling ~300W, but I was hoping that 12VHPWR would either move to MiniFit Jr or be officially downrated to 300W instead of 600W.
Review: www.tweaktown.com/reviews/10655/inno3d-geforce-rtx-4080-super-x3/index.html
The review does not mention a dB number - only that the temps were in check and that at max the cooler was running at 62% with 1700 RPM, but I doubt it's the quietest at that speed.
And as I said above, every HBM GPU from AMD was at least standard PCI card height. Switching to HBM does not make the cards smaller. Yes, if you want a high TDP GPU, you will need a big cooler.
Do you want 4080s that throttle under light load? Would putting a dual-slot cooler on a 4080 and giving up 20%+ performance make you happy? IDK what you want. You can't delete physics because you want a dual-slot cooler.
Also, cooling is easier assuming there is epoxy fill to make the GPU die and the HBM the same height. We have seen time and time again how a badly engineered card cooks its GDDR chips. Vega 64 was not "only" bandwidth starved. It's a false assumption that a game that benefits from massive bandwidth would automatically run better on Vega 64 merely thanks to HBM. Every consumer GPU benefits from higher bandwidth to some degree, especially at higher resolutions. It all depends on engineering. And why are we talking about height? We are talking about length and thickness (that's what she said), not how "tall" cards are.
Looking at the 3090 PCB with its stupid vertically placed angled 12-pin, there is massive free space there for 3x 8-pin. Less so on the 4090, but still possible. The argument was about the new connector's size and how most cards utilizing this connector are actually huge - negating any benefit from a smaller connector. They may as well have 3x 8-pin and it would make no difference in the cooler size. Why would a dual-slot 4080 throttle under light load? I linked the review of the dual-slot 4080S and there was no mention of throttling in the review. I suspect the noise levels might have been higher than a triple- or quad-slot card, but performance was on par with other 4080S models.
Even a 4090 could be undervolted with minimal performance loss on a dual-slot cooler.
What a trainwreck of a thread. Here's where you see where people's loyalties lie, to a brand or the trade.
The resistance against it comes from two main points -
- people feel they need to buy a new PSU or dedicated PSU>12v2x6 cables because adapters have a poor safety record and they're butt-ugly.
- there have been too many examples of cables melting or burning in situations where the user did absolutely nothing wrong; stock GPUs with first-party, genuine cables.
The naysayers will cite examples of old 8-pin cables melting too, and that's fair, but almost all of those 8-pin examples are things like mining rigs overloading adapters, overclocks, or faulty GPUs pulling way more than they should. Also, the number of reports of melting 8-pins is far lower per year or per product - remember how much noise there was about melting 12VHPWR in 2022 and 2023? Google has far more results for "12VHPWR melting" than for 8-pin cables already, and 8-pin cables have had 16 more years on the market to fail and generate results. Again, most of the "8-pin melting" results are miners abusing cables and adapters, not ordinary people with a single GPU in a PC.
My take on it, as someone with a degree that covered physics and electronics to a decent standard, is that the new connectors are being rated to draw much more power than the older cables. The technical drawings and manufacturer specs on pin contact surfaces from both Molex and Amphenol confirm that each pin has slightly less contact area than the older 8-pin MiniFit Jr. Then you have a claimed rating of 8.3 A per wire pair going through that newer, smaller pin with less contact area, compared to 4.2 A per wire pair in a 150 W 8-pin connector.
So we have a new connector that (ignoring fanboy loyalty) simply puts twice as much juice through an even smaller connector than we're used to. It's a problem that isn't going away because the basic laws of electricity aren't changing any time soon.
Is the safety-margin on the old 8-pin cable too high? Maybe it is. I can't prove that, but it is very rare that the cable has been blamed for melting or fires. It just seems dumb to reduce the current-carrying capacity of the connector by making it smaller and giving it smaller contact patches, and then double the current running through it as well. IMO the 12V2x6 connector should be rated for 300W with its existing Amphenol/MicroFit connector size. That's still less safe than 8-pin as it's about 1.4x more current per square millimetre of pin contact, but if we assume that 8-pin is overbuilt, it's reasonable. 450W cables are about 2.6x more current-per-area than 8-pin and 600W cables are about 3.5x more current-per-area. To me, and probably to all the people whose GPUs have been burned, that's too big a jump and it's eaten too much of the safety margin that was built into the MiniFit Jr we've been successfully using with minimal drama for almost two decades.
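For anyone who wants to sanity-check the current-per-area argument, here's a rough sketch. The contact areas are placeholder guesses, not Molex or Amphenol datasheet values, so the printed ratios are only illustrative; plug in the real contact areas to reproduce the figures quoted above.

```python
# Rough current-density comparison: 150 W PCIe 8-pin vs. 12V-2x6 at several
# ratings. Contact areas are placeholder assumptions, NOT datasheet values -
# substitute the real Molex/Amphenol figures to get the quoted ratios.

def amps_per_pin(watts: float, v12_pairs: int, volts: float = 12.0) -> float:
    """Current each +12V pin carries if the load is shared evenly."""
    return watts / (v12_pairs * volts)

def current_density(watts: float, v12_pairs: int, contact_area_mm2: float) -> float:
    """Amps per square millimetre of pin contact."""
    return amps_per_pin(watts, v12_pairs) / contact_area_mm2

AREA_MINIFIT_JR = 1.0  # mm^2, placeholder for the 8-pin's pin contact
AREA_12V2X6     = 0.7  # mm^2, placeholder for the smaller 12V-2x6 pin contact

baseline = current_density(150, 3, AREA_MINIFIT_JR)  # the 150 W 8-pin reference
for watts in (300, 450, 600):
    ratio = current_density(watts, 6, AREA_12V2X6) / baseline
    print(f"{watts} W over 12V-2x6: {ratio:.1f}x the current-per-area of a 150 W 8-pin")
```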
There's nothing physically wrong with the new connector. The problem is the power rating applied to it; it's not a 600 W connector. If they downrated it to 300 W that would likely shut up all the complainers. Sure, perhaps the 5090 will need two of them, but having multiple connectors on a GPU isn't exactly a new or outrageous idea, and the first ever GPU series to use the older PCIe power connector (the 8800 series) launched with dual 6-pin connectors right out of the gate!
If GPU manufacturers want to use more and more power because they don't have a better idea to squeeze more performance out of their architectures, that's one thing, but I can have an opinion on it, surely? ;)
2. Why not scale it up? Well, I think I addressed that already in the original comment; it's the weakest form of progress.
We need the devs to get their advancement elsewhere and focus money and resources on that instead of just blasting it with more power consumption.