
ASRock Innovates First AMD Radeon RX 7000 Graphics Card with 12V-2x6 Power Connector

Joined
Jun 1, 2010
Messages
380 (0.07/day)
System Name Very old, but all I've got ®
Processor So old, you don't wanna know... Really!
Maybe they never had any reason to switch. They supply 300 W over 8-pins and can make a native 2x8-pin to 12+4-pin adapter cable. I use such a cable for my GPU.
In the end it reduces some cost for them.
This is still a very new and raw standard for the consumer space, and it may well undergo future changes.
Including just the cable would indeed have been enough. But loads of PSU manufacturers went ahead and built the still-uncertain connector into consumer-grade PSUs. It would have been a non-issue if 12V-2x6 (and its predecessor 12VHPWR) had remained a workstation/enterprise-only connector, as there would be far less risk of blaming users for "unprofessional" or "dumb" behaviour. Stick it with the certified folks, and all would be fine.
 
Joined
Aug 2, 2012
Messages
1,986 (0.44/day)
Location
Netherlands
System Name TheDeeGee's PC
Processor Intel Core i7-11700
Motherboard ASRock Z590 Steel Legend
Cooling Noctua NH-D15S
Memory Crucial Ballistix 3200/C16 32GB
Video Card(s) Nvidia RTX 4070 Ti 12GB
Storage Crucial P5 Plus 2TB / Crucial P3 Plus 2TB / Crucial P3 Plus 4TB
Display(s) EIZO CX240
Case Lian-Li O11 Dynamic Evo XL / Noctua NF-A12x25 fans
Audio Device(s) Creative Sound Blaster ZXR / AKG K601 Headphones
Power Supply Seasonic PRIME Fanless TX-700
Mouse Logitech G500S
Keyboard Keychron Q6
Software Windows 10 Pro 64-Bit
Benchmark Scores None, as long as my games runs smooth.
And most people are not. They're not gonna buy a new PSU.
You don't even have to, since most well-known brands sell a 2x8-pin to 12VHPWR cable.

I use one on my Seasonic PRIME Fanless TX-700 without any issues.
 
Joined
Jun 18, 2021
Messages
2,547 (2.03/day)
Imagine, though, if we could do 300-watt PCIe slots with 300-watt high-end GPUs that then don't need a power connector at all. That would be beautiful.

Not beautiful at all. You're adding a middleman to the majority of the power delivery, and you'd run into the same problems as with this stupid Micro-Fit connector: a very small contact surface to move all that power through.

The 12V-2x6 is the next industry standard, whether we want it or not. The moment all the PSU makers included it in their products, it was set in stone.

Nothing about the old 8-pin connectors has been discontinued. There are some new possible features with the 12VHPWR connector, but so far no one has used them or even shown any intention to do so.

The only one pushing for this stupid standard is Nvidia, and since they're by far the market leader in GPUs, PSU makers had to start including this fire hazard of a connector, even though nothing about ATX 3.0 requires them to do so. Funnily enough, a lot of the designs use the same regular 8-pin Molex on the PSU side, because they know it's a better solution both in terms of contact area to move the power through and for connecting a cable in a cramped space, and only use the 12VHPWR connector on the GPU side to appease Nvidia and Nvidia's customers.

ASRock clearly failed to read the room. Every discussion about this connector is super negative, because everyone fucking hates the thing, and with good reason. Since they only work with AMD they could have avoided this whole thing, but here they go, deciding to board the hate train.
 
Joined
Aug 20, 2007
Messages
21,461 (3.40/day)
System Name Pioneer
Processor Ryzen R9 9950X
Motherboard GIGABYTE Aorus Elite X670 AX
Cooling Noctua NH-D15 + A whole lotta Sunon and Corsair Maglev blower fans...
Memory 64GB (4x 16GB) G.Skill Flare X5 @ DDR5-6000 CL30
Video Card(s) XFX RX 7900 XTX Speedster Merc 310
Storage Intel 905p Optane 960GB boot, +2x Crucial P5 Plus 2TB PCIe 4.0 NVMe SSDs
Display(s) 55" LG 55" B9 OLED 4K Display
Case Thermaltake Core X31
Audio Device(s) TOSLINK->Schiit Modi MB->Asgard 2 DAC Amp->AKG Pro K712 Headphones or HDMI->B9 OLED
Power Supply FSP Hydro Ti Pro 850W
Mouse Logitech G305 Lightspeed Wireless
Keyboard WASD Code v3 with Cherry Green keyswitches + PBT DS keycaps
Software Gentoo Linux x64 / Windows 11 Enterprise IoT 2024
I guess ASRock failed to estimate AMD fans' hate for this connector, I will grant. It's borderline irrational. Case in point: this thread.
 
Joined
Aug 21, 2013
Messages
1,898 (0.46/day)
Of course they are present. There is always a transition phase between two standards. That is why 12VO never came to the consumer market; it is impossible to make a transition phase for it.
And there are other things besides GPUs that might use the 6- or 8-pins.
Of the 370 PSUs released in 2023, 91 still included the floppy connector. Of the 142 released so far in 2024, 28 still did.
The 6-pin and 8-pin are not going anywhere for decades, not least because of the huge backlog of GPUs that use them.

Aside from GPUs, very few devices actually need more power than the PCIe slot can provide. I've seen some SSD add-on cards and some motherboards use it, but that's about it.

There will be no transition period. Once something better comes along, this "experiment" will be dropped faster than a hot potato.
Even among Nvidia cards (AIBs included) released in 2024, only 142 out of 196 used the new connector. So even Nvidia has not fully committed to it or mandated it for all of their cards.

I guess ASRock failed to estimate AMD fans' hate for this connector, I will grant. It's borderline irrational. Case in point: this thread.
Oh, of course. Nvidia fans and card owners totally LOVE this connector. /s
In another thread, an Nvidia fanboy told me how Nvidia owners hated FSR Frame Generation. Except those on the 30 series and earlier, apparently...
 
Joined
Nov 18, 2010
Messages
7,530 (1.47/day)
Location
Rīga, Latvia
System Name HELLSTAR
Processor AMD RYZEN 9 5950X
Motherboard ASUS Strix X570-E
Cooling 2x 360 + 280 rads. 3x Gentle Typhoons, 3x Phanteks T30, 2x TT T140 . EK-Quantum Momentum Monoblock.
Memory 4x8GB G.SKILL Trident Z RGB F4-4133C19D-16GTZR 14-16-12-30-44
Video Card(s) Sapphire Pulse RX 7900XTX. Water block. Crossflashed.
Storage Optane 900P[Fedora] + WD BLACK SN850X 4TB + 750 EVO 500GB + 1TB 980PRO+SN560 1TB(W11)
Display(s) Philips PHL BDM3270 + Acer XV242Y
Case Lian Li O11 Dynamic EVO
Audio Device(s) SMSL RAW-MDA1 DAC
Power Supply Fractal Design Newton R3 1000W
Mouse Razer Basilisk
Keyboard Razer BlackWidow V3 - Yellow Switch
Software FEDORA 41
Agh... the flow of hate in this thread... it almost feels like you could slice it.
 
Joined
Jan 11, 2022
Messages
871 (0.83/day)
Easier? You call this easier? Having to watch cable bends, unplugging periodically to check for damage, and stupid "dongles"...
There's nothing "easier" about the new standard. Easier would have been to adopt the 8-pin EPS already used on workstation cards.
They can make an L-connector; heck, they could even do one with a hinge. And I'm talking about the power capacity of the cable, as it can replace two or three of the old ones.
 
Joined
Jun 1, 2010
Messages
380 (0.07/day)
System Name Very old, but all I've got ®
Processor So old, you don't wanna know... Really!
Of the 370 PSUs released in 2023, 91 still included the floppy connector. Of the 142 released so far in 2024, 28 still did.
At least the FDD connector makes it possible to power some ancient peripherals, e.g. a sound card, and the same goes for 6-pin PCIe. I hardly see what use this "compact" 600 W connector has in that scenario (outside of power-hog "compact internal space heater" graphics cards) beyond saving space on the PSU's connector panel.
Not beautiful at all. You're adding a middleman to the majority of the power delivery, and you'd run into the same problems as with this stupid Micro-Fit connector: a very small contact surface to move all that power through.



Nothing about the old 8-pin connectors has been discontinued. There are some new possible features with the 12VHPWR connector, but so far no one has used them or even shown any intention to do so.

The only one pushing for this stupid standard is Nvidia, and since they're by far the market leader in GPUs, PSU makers had to start including this fire hazard of a connector, even though nothing about ATX 3.0 requires them to do so. Funnily enough, a lot of the designs use the same regular 8-pin Molex on the PSU side, because they know it's a better solution both in terms of contact area to move the power through and for connecting a cable in a cramped space, and only use the 12VHPWR connector on the GPU side to appease Nvidia and Nvidia's customers.

ASRock clearly failed to read the room. Every discussion about this connector is super negative, because everyone fucking hates the thing, and with good reason. Since they only work with AMD they could have avoided this whole thing, but here they go, deciding to board the hate train.
I guess ASRock failed to estimate AMD fans' hate for this connector, I will grant. It's borderline irrational. Case in point: this thread.
I wholly support your sentiments here. But let's hope this is related to the WS cards only, and the "regular" card design is safe. ASRock is a huge OEM company, though, so I'm not holding my breath. They go where the money is...

On the other hand, AMD said they will eventually move to this connector sometime in the future. Who knows whether that future has arrived.

Also, my guess is that Nvidia had been preparing for enterprise/datacenter/workstation domination for a long time, and they just wanted to alpha/beta-test this connector on wealthy, gullible consumer "guinea pigs", or were simply too lazy/greedy to differentiate the PCB design between enterprise and consumer products, making it uniform for both.
 
Joined
Dec 28, 2012
Messages
3,877 (0.89/day)
System Name Skunkworks 3.0
Processor 5800x3d
Motherboard x570 unify
Cooling Noctua NH-U12A
Memory 32GB 3600 mhz
Video Card(s) asrock 6800xt challenger D
Storage Sabarent rocket 4.0 2TB, MX 500 2TB
Display(s) Asus 1440p144 27"
Case Old arse cooler master 932
Power Supply Corsair 1200w platinum
Mouse *squeak*
Keyboard Some old office thing
Software Manjaro
Or just get GPU power consumption back to normal human levels to... I don't know... 300 W for high-end (2x8-pin), 150 W for mid-range (1x8-pin) and 75 W (no power connector) for entry level?
Nobody has taken away your 300 W GPUs, or your 150 W GPUs, or your 75 W GPUs. Go buy as many RX 6400 XTs as you'd like!

Hell, why stop there? High-end used to mean sub-40 W, because that was all AGP could support! We HAVE TO GO BACK! :fear:

Or, we can adapt to the changing world instead.

Hilarious. First of all, screw ASRock as a company, but apart from that... innovation through a crappy connector, AND then you also dare to throw on a blower-style cooler?

I said it before... can we not better just update the now-ancient power delivery of the PCIe slot? It's been 75 watts since its inception... change that to... oh, I don't know, 300 watts?
People already whine and bitch and moan about motherboard pricing. You want to quadruple the power capability on top of all that?

Oh, 100% agree, I'm all for it; throwing more power at it is the weakest form of innovation.
Imagine, though, if we could do 300-watt PCIe slots with 300-watt high-end GPUs that then don't need a power connector at all. That would be beautiful.
So, if you can do a high-end GPU at 300 W, why not scale that tech up to 400, or 500? Chip size is not the limiting factor anymore; removing heat is. Capping your GPU lineup at 300 W didn't work out so well for Alchemist, nor has it historically worked well for AMD. If you don't want a 600 W GPU... don't buy a 600 W GPU? 4060s, 6650 XTs and 7800 XTs still exist.
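For context on why a 300 W slot is a big ask: the PCIe CEM spec only lets an x16 card draw roughly 5.5 A from the slot's 12 V pins (~66 W, with the rest of the 75 W coming from 3.3 V), while a 300 W slot at 12 V would mean 25 A through the same card-edge fingers. A quick back-of-envelope sketch (the 66 W/5.5 A budget is quoted from memory, so treat it as approximate):

```python
# Current drawn through the slot's 12 V pins at a given power level.
def slot_amps(watts, volts=12.0):
    return watts / volts

today = slot_amps(66)      # ~5.5 A: roughly today's x16 12 V slot budget
proposed = slot_amps(300)  # 25.0 A through the same card-edge contacts

print(f"{today:.1f} A today vs {proposed:.1f} A for a 300 W slot")
```

Nearly five times the current through the same edge contacts is why the idea keeps dying on the drawing board.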
 
Joined
Jul 15, 2020
Messages
1,021 (0.64/day)
System Name Dirt Sheep | Silent Sheep
Processor i5-2400 | 13900K (-0.02mV offset)
Motherboard Asus P8H67-M LE | Gigabyte AERO Z690-G, bios F29e Intel baseline
Cooling Scythe Katana Type 1 | Noctua NH-U12A chromax.black
Memory G-skill 2*8GB DDR3 | Corsair Vengeance 4*32GB DDR5 5200Mhz C40 @4000MHz
Video Card(s) Gigabyte 970GTX Mini | NV 1080TI FE (cap at 50%, 800mV)
Storage 2*SN850 1TB, 230S 4TB, 840EVO 128GB, WD green 2TB HDD, IronWolf 6TB, 2*HC550 18TB in RAID1
Display(s) LG 21` FHD W2261VP | Lenovo 27` 4K Qreator 27
Case Thermaltake V3 Black|Define 7 Solid, stock 3*14 fans+ 2*12 front&buttom+ out 1*8 (on expansion slot)
Audio Device(s) Beyerdynamic DT 990 (or the screen speakers when I'm too lazy)
Power Supply Enermax Pro82+ 525W | Corsair RM650x (2021)
Mouse Logitech Master 3
Keyboard Roccat Isku FX
VR HMD Nop.
Software WIN 10 | WIN 11
Benchmark Scores CB23 SC: i5-2400=641 | i9-13900k=2325-2281 MC: i5-2400=i9 13900k SC | i9-13900k=37240-35500
Copy and screenshot these comments for when AMD officially uses this connector on future GPUs.
What will the crowd say then...
 
Joined
Aug 21, 2013
Messages
1,898 (0.46/day)
They can make an L-connector; heck, they could even do one with a hinge. And I'm talking about the power capacity of the cable, as it can replace two or three of the old ones.
And how did that L-connector work out for CableMod? Having a pre-made 90-degree bend does not solve the problems with safety margins and bad design. It merely removes one failure point.

Also, most Nvidia cards used either 1x8-pin or 2x8-pin before the introduction of this new 12-pin (16 with sense pins) standard. Very few cards used 3x8-pin, and like I said before, 8-pin EPS could replace 8-pin PCIe while carrying more power, making the new "compact" 16-pin unnecessary.

Also, chasing this compactness is meaningless if only the power connector is small while it sits smack in the middle of a card with a huge cooler taking 3+ slots.
Does anyone really worry about the space 8-pin PCIe occupied in a situation like this?

If Nvidia truly wanted a compact card they could have made the coolers smaller, or mandated smaller coolers and used HBM2 to further cut down the size of the PCB itself, like AMD did back in 2015 with the R9 Nano: https://www.techpowerup.com/gpu-specs/radeon-r9-nano.c2735
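The EPS point is easy to sanity-check: both connectors use the same Mini-Fit Jr terminals, but 8-pin PCIe has only three live 12 V contacts versus four on 8-pin EPS, so at the same per-contact current EPS carries a third more power. A rough illustration (the ~4.2 A figure is just the current implied by the 150 W PCIe rating, not an official terminal rating):

```python
# Watts a connector can move at a fixed per-contact current (illustrative only).
def connector_watts(live_12v_contacts, amps_per_contact=4.2, volts=12.0):
    return live_12v_contacts * amps_per_contact * volts

pcie_8pin = connector_watts(3)  # 8-pin PCIe: 3x 12 V plus grounds/sense -> ~151 W
eps_8pin = connector_watts(4)   # 8-pin EPS: 4x 12 V plus 4x ground      -> ~202 W

print(f"PCIe 8-pin ~{pcie_8pin:.0f} W, EPS 8-pin ~{eps_8pin:.0f} W")
```

Same terminals, same cable gauge, a third more capacity, which is the core of the argument above.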
 
Joined
Jan 29, 2021
Messages
1,851 (1.33/day)
Location
Alaska USA
we should online-boycott that POS connector
You will bend the knee!



 
Joined
Dec 28, 2012
Messages
3,877 (0.89/day)
System Name Skunkworks 3.0
Processor 5800x3d
Motherboard x570 unify
Cooling Noctua NH-U12A
Memory 32GB 3600 mhz
Video Card(s) asrock 6800xt challenger D
Storage Sabarent rocket 4.0 2TB, MX 500 2TB
Display(s) Asus 1440p144 27"
Case Old arse cooler master 932
Power Supply Corsair 1200w platinum
Mouse *squeak*
Keyboard Some old office thing
Software Manjaro
And how did that L-connector work out for CableMod? Having a pre-made 90-degree bend does not solve the problems with safety margins and bad design. It merely removes one failure point.

Also, most Nvidia cards used either 1x8-pin or 2x8-pin before the introduction of this new 12-pin (16 with sense pins) standard. Very few cards used 3x8-pin, and like I said before, 8-pin EPS could replace 8-pin PCIe while carrying more power, making the new "compact" 16-pin unnecessary.

Also, chasing this compactness is meaningless if only the power connector is small while it sits smack in the middle of a card with a huge cooler taking 3+ slots.
Does anyone really worry about the space 8-pin PCIe occupied in a situation like this?

If Nvidia truly wanted a compact card they could have made the coolers smaller, or mandated smaller coolers and used HBM2 to further cut down the size of the PCB itself, like AMD did back in 2015 with the R9 Nano: https://www.techpowerup.com/gpu-specs/radeon-r9-nano.c2735
HBM is no advantage. Look how close the GDDR sits to the GPU on modern Ada cards. IDK why people are obsessed with HBM; the R9 Fury/Nano/Vega 56/64 were all failures.

If you don't want a 3-slot card, don't buy one! Plenty of 2-slot cards out there.
Copy and screenshot these comments for when AMD officially uses this connector on future GPUs.
What will the crowd say then...
If they burn up: "Told you the connector was shit."
If they don't: "Told you Nvidia screwed up."
 
Joined
Jan 29, 2012
Messages
6,881 (1.47/day)
Location
Florida
System Name natr0n-PC
Processor Ryzen 5950x-5600x | 9600k
Motherboard B450 AORUS M | Z390 UD
Cooling EK AIO 360 - 6 fan action | AIO
Memory Patriot - Viper Steel DDR4 (B-Die)(4x8GB) | Samsung DDR4 (4x8GB)
Video Card(s) EVGA 3070ti FTW
Storage Various
Display(s) Pixio PX279 Prime
Case Thermaltake Level 20 VT | Black bench
Audio Device(s) LOXJIE D10 + Kinter Amp + 6 Bookshelf Speakers Sony+JVC+Sony
Power Supply Super Flower Leadex III ARGB 80+ Gold 650W | EVGA 700 Gold
Software XP/7/8.1/10
Benchmark Scores http://valid.x86.fr/79kuh6
Joined
Jun 18, 2021
Messages
2,547 (2.03/day)
HBM is no advantage. Look how close the GDDR sits to the GPU on modern Ada cards. IDK why people are obsessed with HBM; the R9 Fury/Nano/Vega 56/64 were all failures.

They were failures, but not because of HBM. HBM must be worth something, otherwise it wouldn't be used in workstation/server cards. A much, much higher bus width has its advantages.

Also, my guess is that Nvidia had been preparing for enterprise/datacenter/workstation domination for a long time, and they just wanted to alpha/beta-test this connector on wealthy, gullible consumer "guinea pigs", or were simply too lazy/greedy to differentiate the PCB design between enterprise and consumer products, making it uniform for both.

You're assuming they want this design; here's the thing, they probably don't. Just like they were using CPU power connectors without the sense pins of PCIe power connectors, they won't have a reason for a more expensive Micro-Fit with more sense signals they have no use for. Every penny counts, and big server OEMs have no reason to spend a couple of extra bucks dealing with sense pins and all that.
 
Joined
Aug 21, 2013
Messages
1,898 (0.46/day)
HBM is no advantage. Look how close the GDDR sits to the GPU on modern Ada cards. IDK why people are obsessed with HBM; the R9 Fury/Nano/Vega 56/64 were all failures.
No matter how close Nvidia places GDDR chips to the GPU die, the GDDR die is still larger and still requires space on the PCB itself.
Not to mention the insane bandwidth advantage HBM has over GDDR. With GDDR7 we will be lucky if the top cards approach 1.5 TB/s.
HBM3 already achieved 3.36 TB/s last year. More so with HBM3e this year and next, and HBM4 is in development.
Not to mention the massive capacity, where a single stack that is physically smaller than a single 2 GB GDDR chip can hold 24 GB.
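To put rough numbers on the per-package gap, bandwidth is just bus width times per-pin data rate. The pin rates below are ballpark public figures (~6.4 Gb/s per pin for HBM3, ~21 Gb/s for GDDR6X), so treat the result as illustrative:

```python
# Peak bandwidth of one memory package: bus width (bits) x per-pin rate (Gb/s) / 8.
def package_gbs(bus_bits, gbps_per_pin):
    return bus_bits * gbps_per_pin / 8

hbm3_stack = package_gbs(1024, 6.4)  # one 1024-bit HBM3 stack -> 819.2 GB/s
gddr6x_chip = package_gbs(32, 21.0)  # one 32-bit GDDR6X chip  -> 84.0 GB/s

print(f"{hbm3_stack:.1f} GB/s per stack vs {gddr6x_chip:.1f} GB/s per chip")
```

Roughly an order of magnitude per package, which is where the "insane bandwidth advantage" comes from: the width, not the clocks.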
If you don't want a 3-slot card, don't buy one! Plenty of 2-slot cards out there.
Out of the 323 cards with this new connector, only 47 are dual-slot, and of these the fastest air-cooled models are 4070 Ti Super models.
So if someone wants a 4080 or faster, they have no choice but to go triple- or quad-slot, or take the watercooling route via a monoblock or AIO, which further adds to the price and simply displaces some of the cooling apparatus to other parts of the case, like the front or top.

I would not say 47 out of 323 is "plenty".
Data is from Geizhals: https://geizhals.eu/?cat=gra16_512&xf=1481_2~5585_1x+16-Pin+5PCIe~653_NVIDIA
 
Joined
Feb 20, 2019
Messages
8,278 (3.93/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Oddyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
Urgghh.

I guess the new connector is less of a fire hazard when it's only handling ~300W, but I was hoping that 12VHPWR would either move to MiniFit Jr or be officially downrated to 300W instead of 600W.
 
Joined
Jul 28, 2020
Messages
60 (0.04/day)
System Name PC / Laptop
Processor I9 13900K/ Ryzen 5 4600H
Motherboard MSI Z790 Tomahawk Wifi
Cooling Thermalright Peerless Assassin 120
Memory 2x16GB G.Skill F5-7200J3445G16G / 16GB 3200 2x8GB
Video Card(s) MSI RTX 3080 Gaming Z (using rn), Asus 1650 LP (Backup), / 1650 Gddr6
Storage 1TB SN750, 1TB Samsung 980, 1,7 TB HDD, 2x2TB MX500, 4TB 870 Evo/ Toshiba BG4 500GB NVME
Display(s) Gigabyte M27Q, Samsung Syncmaster 2043BW
Case BeQuiet! Pure Base 500DX / Lenovo Ideapad Gaming 3
Audio Device(s) Beyerdynamic DT 990 Pro with Fiio K5 Pro ESS / Philips Fidelio X2HR with Fiio Olympus 2 E10K
Power Supply BeQuiet! Pure Power 12M 850W / Lenovo 135W
Mouse Microsoft Intellimouse Pro / Cooler Master MM731
Keyboard Sharkoon Skiller SGK30
Software W10 Pro (both machines)
No matter how close Nvidia places GDDR chips to the GPU die, the GDDR die is still larger and still requires space on the PCB itself.
Not to mention the insane bandwidth advantage HBM has over GDDR. With GDDR7 we will be lucky if the top cards approach 1.5 TB/s.
HBM3 already achieved 3.36 TB/s last year. More so with HBM3e this year and next, and HBM4 is in development.
Not to mention the massive capacity, where a single stack that is physically smaller than a single 2 GB GDDR chip can hold 24 GB.

Out of the 323 cards with this new connector, only 47 are dual-slot, and of these the fastest air-cooled models are 4070 Ti Super models.
So if someone wants a 4080 or faster, they have no choice but to go triple- or quad-slot, or take the watercooling route via a monoblock or AIO, which further adds to the price and simply displaces some of the cooling apparatus to other parts of the case, like the front or top.

I would not say 47 out of 323 is "plenty".
Data is from Geizhals: https://geizhals.eu/?cat=gra16_512&xf=1481_2~5585_1x+16-Pin+5PCIe~653_NVIDIA
There is also a 2-slot 4080 Super from Inno3D.
 
Joined
Jan 8, 2017
Messages
9,436 (3.28/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
It's borderline irrational.
It's not irrational to not want something with a track record of being a fire hazard, especially when the older option works just fine. It's not like it's an absolute necessity to move away from 8pin PCIe.
 
Joined
Aug 21, 2013
Messages
1,898 (0.46/day)
Joined
Dec 28, 2012
Messages
3,877 (0.89/day)
System Name Skunkworks 3.0
Processor 5800x3d
Motherboard x570 unify
Cooling Noctua NH-U12A
Memory 32GB 3600 mhz
Video Card(s) asrock 6800xt challenger D
Storage Sabarent rocket 4.0 2TB, MX 500 2TB
Display(s) Asus 1440p144 27"
Case Old arse cooler master 932
Power Supply Corsair 1200w platinum
Mouse *squeak*
Keyboard Some old office thing
Software Manjaro
They were failures, but not because of HBM. HBM must be worth something, otherwise it wouldn't be used in workstation/server cards. A much, much higher bus width has its advantages.
Higher bandwidth, significantly higher latency, and it's of no real benefit to consumer workloads. And it won't make the cards smaller. Last I checked, every AMD HBM card was at least the standard full PCI card height. Switching to HBM from GDDR won't make the cards smaller. It WILL raise the price of the GPU and make proper cooling harder, so... yay?
No matter how close Nvidia places GDDR chips to the GPU die, the GDDR die is still larger and still requires space on the PCB itself.
Not to mention the insane bandwidth advantage HBM has over GDDR. With GDDR7 we will be lucky if the top cards approach 1.5 TB/s.
HBM3 already achieved 3.36 TB/s last year. More so with HBM3e this year and next, and HBM4 is in development.
Not to mention the massive capacity, where a single stack that is physically smaller than a single 2 GB GDDR chip can hold 24 GB.
If the consumer loads don't need the bandwidth, they don't need the bandwidth. To date we've yet to see a single game that ran significantly faster on a Vega 64 than a 1080 because of bandwidth.

And as I said above, every HBM GPU from AMD was at least standard PCI card height. Switching to HBM does not make the cards smaller.
Out of the 323 cards with this new connector, only 47 are dual-slot, and of these the fastest air-cooled models are 4070 Ti Super models.
So if someone wants a 4080 or faster, they have no choice but to go triple- or quad-slot, or take the watercooling route via a monoblock or AIO, which further adds to the price and simply displaces some of the cooling apparatus to other parts of the case, like the front or top.

I would not say 47 out of 323 is "plenty".
Data is from Geizhals: https://geizhals.eu/?cat=gra16_512&xf=1481_2~5585_1x+16-Pin+5PCIe~653_NVIDIA
Yes, if you want a high-TDP GPU, you will need a big cooler.

Do you want 4080s that throttle under light load? Would putting a dual-slot cooler on a 4080 and giving up 20%+ performance make you happy? IDK what you want. You can't delete physics because you want a dual-slot cooler.
 
Joined
Aug 21, 2013
Messages
1,898 (0.46/day)
Higher bandwidth, significantly higher latency, and it's of no real benefit to consumer workloads.
Video memory has always been higher latency than system memory, so that's irrelevant.
It WILL raise the price of the GPU and make proper cooling harder, so... yay?
No, it won't. Five years ago, AMD was able to sell a 16 GB HBM2 card for $700, and it had the same peak bandwidth a 4090 has today.
Also, cooling is easier, assuming there is epoxy fill to make the GPU die and the HBM the same height. We have seen time and time again how a badly engineered card cooks its GDDR chips.
If the consumer loads don't need the bandwidth, they don't need the bandwidth. To date we've yet to see a single game that ran significantly faster on a Vega 64 than a 1080 because of bandwidth.
Vega 64 was not "only" bandwidth-starved. It's a false assumption that a game that benefits from massive bandwidth would have run faster on Vega 64 merely thanks to HBM. Every consumer GPU benefits from higher bandwidth to some degree, especially at higher resolutions.
And as I said above, every HBM GPU from AMD was at least standard PCI card height. Switching to HBM does not make the cards smaller.
It all depends on engineering. And why are we talking about height? We are talking about length and thickness (that's what she said), not how "tall" cards are.
Looking at the 3090 PCB with its stupid vertically placed, angled 12-pin, there is massive free space there for 3x8-pin. Less so on the 4090, but still possible.
Yes, if you want a high-TDP GPU, you will need a big cooler.
The argument was about the new connector's size and how most cards utilizing this connector are actually huge, negating any benefit from a smaller connector. They may as well have 3x8-pin and it would make no difference in the cooler size.
Do you want 4080s that throttle under light load? Would putting a dual-slot cooler on a 4080 and giving up 20%+ performance make you happy? IDK what you want. You can't delete physics because you want a dual-slot cooler.
Why would a dual-slot 4080 throttle under light load? I linked the review of the dual-slot 4080S, and there was no mention of throttling in it. I suspect the noise levels might be higher than on a triple- or quad-slot card, but performance was on par with other 4080S models.

Even a 4090 could be undervolted with minimal performance loss on a dual-slot cooler.
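The bandwidth-parity claim above checks out on public spec-sheet numbers, presumably referring to the Radeon VII: 4096-bit HBM2 at ~2.0 Gb/s per pin lands within about 2% of the 4090's 384-bit GDDR6X at 21 Gb/s. A quick sketch (figures quoted from memory, so approximate):

```python
# Aggregate card bandwidth: total bus width (bits) x per-pin data rate (Gb/s) / 8.
def card_gbs(bus_bits, gbps_per_pin):
    return bus_bits * gbps_per_pin / 8

radeon_vii = card_gbs(4096, 2.0)  # 4x HBM2 stacks, 1024-bit each -> 1024 GB/s
rtx_4090 = card_gbs(384, 21.0)    # 12x GDDR6X chips, 32-bit each -> 1008 GB/s

print(f"Radeon VII: {radeon_vii:.0f} GB/s, RTX 4090: {rtx_4090:.0f} GB/s")
```

Same ballpark, five years apart: HBM got there on width, GDDR6X on per-pin speed.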
 
Joined
Dec 25, 2020
Messages
6,734 (4.71/day)
Location
São Paulo, Brazil
System Name "Icy Resurrection"
Processor 13th Gen Intel Core i9-13900KS Special Edition
Motherboard ASUS ROG MAXIMUS Z790 APEX ENCORE
Cooling Noctua NH-D15S upgraded with 2x NF-F12 iPPC-3000 fans and Honeywell PTM7950 TIM
Memory 32 GB G.SKILL Trident Z5 RGB F5-6800J3445G16GX2-TZ5RK @ 7600 MT/s 36-44-44-52-96 1.4V
Video Card(s) ASUS ROG Strix GeForce RTX™ 4080 16GB GDDR6X White OC Edition
Storage 500 GB WD Black SN750 SE NVMe SSD + 4 TB WD Red Plus WD40EFPX HDD
Display(s) 55-inch LG G3 OLED
Case Pichau Mancer CV500 White Edition
Power Supply EVGA 1300 G2 1.3kW 80+ Gold
Mouse Microsoft Classic Intellimouse
Keyboard Generic PS/2
Software Windows 11 IoT Enterprise LTSC 24H2
Benchmark Scores I pulled a Qiqi~
An absolutely stupid decision to put that power connector on a product which was heavily marketed as not having that fire hazard of a connector.

You should know better than this, man. Really.

First Radeon card to burst into flames

I triple dare you to get a 2x6 connector to smoke. Aris couldn't do it on a load tester while intentionally straining the connector.

Maybe they never had any reason to switch. They supply 300 W over 8-pins and can make a native 2x8-pin to 12+4-pin adapter cable. I use such a cable for my GPU.
In the end it reduces some cost for them.

All I needed was a $25 cable to get my EVGA 1300 G2 ready. But buying a cable doesn't stroke anyone's ego or "pride in being an AMD customer".

What a trainwreck of a thread. Here's where you see where people's loyalties lie: to a brand, or to the trade.
 
Joined
Feb 20, 2019
Messages
8,278 (3.93/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Oddyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
I guess ASRock failed to estimate AMD fans' hate for this connector, I will grant. It's borderline irrational. Case in point: this thread.
I don't think it has anything to do with AMD fans. It's already been annoying equally vocal Nvidia fans for two years.

The resistance against it comes from two main points -
  1. people feel they need to buy a new PSU or dedicated PSU>12v2x6 cables because adapters have a poor safety record and they're butt-ugly.
  2. there have been too many examples of cables melting or burning in situations where the user did absolutely nothing wrong; stock GPUs with first-party, genuine cables.
The naysayers will cite examples of old 8-pin cables melting too and that's fair, but almost all of those 8-pin examples are things like mining rigs overloading adapters, overclocks, or faulty GPUs pulling way more than they should. Also, the number of reports of melting 8-pins is far lower per year or per product - remember how much noise there was about melting 12VHPWR in 2022 and 2023? Google has far more results for "12VHPWR melting" than 8-pin cables already, and 8-pin cables have had 16 more years on the market to fail and generate results. Again, most of the "8-pin melting" results are miners abusing cables and adapters, not ordinary people with a single GPU in a PC.

My take on it, as someone with a degree that covered physics and electronics to a decent standard, is that the new connectors are being rated to draw much more power than the older cables. The technical drawings and manufacturer specs on pin contact surfaces from both Molex and Amphenol confirm that each pin has slightly less contact area than the older 8-pin MiniFit Jr. Then you have a claimed rating of 8.3 A per wire pair going through that newer, smaller pin with less contact area, compared to 4.2 A per wire pair in a 150 W 8-pin connector.

So we have a new connector that (ignoring fanboy loyalty) simply puts twice as much juice through an even smaller connector than we're used to. It's a problem that isn't going away because the basic laws of electricity aren't changing any time soon.

Is the safety-margin on the old 8-pin cable too high? Maybe it is. I can't prove that, but it is very rare that the cable has been blamed for melting or fires. It just seems dumb to reduce the current-carrying capacity of the connector by making it smaller and giving it smaller contact patches, and then double the current running through it as well. IMO the 12V2x6 connector should be rated for 300W with its existing Amphenol/MicroFit connector size. That's still less safe than 8-pin as it's about 1.4x more current per square millimetre of pin contact, but if we assume that 8-pin is overbuilt, it's reasonable. 450W cables are about 2.6x more current-per-area than 8-pin and 600W cables are about 3.5x more current-per-area. To me, and probably to all the people whose GPUs have been burned, that's too big a jump and it's eaten too much of the safety margin that was built into the MiniFit Jr we've been successfully using with minimal drama for almost two decades.

There's nothing physically wrong with the new connector. The problem is the power rating applied to it; it's not a 600 W connector. If they downrated it to 300 W, that would likely silence most of the complainers. Sure, perhaps the 5090 would need two of them, but having multiple connectors on a GPU isn't exactly a new or outrageous idea, and the first GPU series to use the older PCIe connector (the 8800 series) launched with dual 6-pin connectors right out of the gate!
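The per-pair figures in the post above are simple arithmetic. A sketch, assuming 12 V and current shared evenly across pairs (contact areas are deliberately left out, since the exact Molex/Amphenol drawings aren't to hand):

```python
# Current per 12 V wire pair, assuming power is shared evenly across live pairs.
def amps_per_pair(watts, live_pairs, volts=12.0):
    return watts / live_pairs / volts

pcie_150w = amps_per_pair(150, 3)  # 8-pin PCIe: 3 live pairs -> ~4.2 A
hpwr_600w = amps_per_pair(600, 6)  # 12V-2x6: 6 live pairs    -> ~8.3 A
hpwr_300w = amps_per_pair(300, 6)  # downrated to 300 W       -> ~4.2 A

print(f"{pcie_150w:.1f} A, {hpwr_600w:.1f} A, {hpwr_300w:.1f} A")
```

At 300 W the new connector's per-pair current matches the old 8-pin's, which is essentially the downrating argument: the geometry is fine, the 600 W rating is what eats the margin.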
 
Joined
Jan 14, 2019
Messages
12,337 (5.76/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE
Nobody has taken away your 300 W GPUs, or your 150 W GPUs, or your 75 W GPUs. Go buy as many RX 6400 XTs as you'd like!

Hell, why stop there? High-end used to mean sub-40 W, because that was all AGP could support! We HAVE TO GO BACK! :fear:

Or, we can adapt to the changing world instead.
If somebody wants to pump 600+ watts into their GeForce 9090 Ultra Super Ti Übermensch Edition, so be it. I just want to game on high graphics settings with a GPU that doesn't burn the house down.

If GPU manufacturers want to use more and more power because they don't have a better idea for squeezing more performance out of their architectures, that's one thing, but I can have an opinion on it, surely? ;)
 