Friday, May 31st 2024

ASRock Innovates First AMD Radeon RX 7000 Graphics Card with 12V-2x6 Power Connector

ASRock is ready with the first Radeon RX gaming graphics card to feature a modern 12V-2x6 power connector, replacing the up to three 8-pin PCIe power connectors previously needed to power a Radeon RX 7900 series graphics card. The ASRock RX 7900 series WS graphics cards are also the first 2-slot RX 7900 series cards. They target workstations and GPU rendering farms that stack multiple graphics cards into 4U or 5U rackmount cases with no spacing between 2-slot cards. ASRock is designing cards based on both the RX 7900 XT and the flagship RX 7900 XTX.

The ASRock RX 7900 series WS graphics cards appear long and no more than 2 slots thick. To achieve these dimensions, a lateral-flow cooling solution is used, which combines a dense aluminium or copper channel heatsink with a lateral blower. Since these cards are meant for workstations and rendering farms, noise output will be high, though unremarkable by datacenter standards. The most striking aspect of these cards, of course, is their 12+4 pin ATX 12V-2x6 power input, which is capable of drawing 600 W of continuous power from a single cable. It is located at the card's tail end, where fitting three 8-pin connectors would have been an engineering challenge.
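The headline figure can be sanity-checked with quick arithmetic. The sketch below assumes ideal load sharing across pins and uses the spec power limits (600 W for 12V-2x6, 150 W for 8-pin PCIe) together with the number of 12 V current-carrying pins in each connector; it is an illustration, not measured data.

```python
# Rough per-pin current comparison: one 12V-2x6 connector vs. the 8-pin
# PCIe connectors it replaces. Power figures are the spec limits; pin
# counts are the 12 V current-carrying pins only.

def per_pin_current(watts: float, volts: float, current_pins: int) -> float:
    """Amps through each 12 V pin, assuming perfectly even load sharing."""
    return watts / volts / current_pins

# 12V-2x6: 600 W across six 12 V pins
hpwr = per_pin_current(600, 12.0, 6)
# 8-pin PCIe: 150 W across three 12 V pins
pcie8 = per_pin_current(150, 12.0, 3)

print(f"12V-2x6:    {hpwr:.2f} A per pin")   # ~8.33 A
print(f"8-pin PCIe: {pcie8:.2f} A per pin")  # ~4.17 A
```

The new connector thus runs each pin roughly twice as hard as an 8-pin PCIe connector does, which is why its thermal headroom is a recurring theme in the comments below.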

94 Comments on ASRock Innovates First AMD Radeon RX 7000 Graphics Card with 12V-2x6 Power Connector

#51
Xajel
john_Fun part.
If these cards don't suffer from burns, it will be absolutely clear that the problem is with Nvidia's design.
Fun part, you missed the part where the RTX 4090 has a 450 W TDP; the XTX has 355 W IIRC. Hell, even the RTX 4080 has 320 W, and it has a much lower chance of having these issues. Some 95%+ of the problems come from the RTX 4090 because it uses so much power, and the connector looked like a prototype product at that time, which is why they released another version of it. They made every RTX 4090 owner a prototype tester.

To be clear, I don't blame NV's design. I blame their decision, and insistence, on using this connector without doing enough QC on it, and then accusing users of not knowing how to plug it in!
#52
ARF
XajelFun part, you missed the part where the RTX 4090 has a 450W TDP
What's the difference if you undervolt and underclock the 4090 to 350W TDP?
Xajelthe XTX has 355W IIRC.. Hell even the RTX 4080 has 320W and it has much lower chances of having these issues, it's like 95%+ of the problems come from RTX 4090 because it uses so much power, and the connector looked like a prototype product at that time, and that's why they released another version of it. They made every RTX 4090 owner a prototype tester.
To be clear, I don't blame NV's design.. I blame their decision -and insistence- of using this connector without doing enough QC of it and then accusing the user of not knowing how to plug it!!
It was a mistake. The RTX 4090 is a very badly engineered card: too large a heatsink, too short a PCB, and the wrong power connector choice. It should have used four of the original 8-pin connectors.
Also, the new 12VHPWR connector should have been used only on entry-level cards, in order to test whether the thin wires can carry that high a current (without melting).
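ARF's undervolting question above can be sanity-checked with the usual first-order scaling of dynamic power, P ∝ V²·f. The voltage and clock ratios below are illustrative assumptions, not measured 4090 figures:

```python
# Back-of-the-envelope for the undervolting point: dynamic power scales
# roughly with V^2 * f, so a small voltage cut buys a disproportionately
# large power drop. The ratios here are hypothetical examples.

def dynamic_power(p_base: float, v_ratio: float, f_ratio: float) -> float:
    """Scale a baseline power figure by (V/V0)^2 * (f/f0)."""
    return p_base * v_ratio**2 * f_ratio

# e.g. a 450 W card at ~93% voltage and ~95% clocks
p = dynamic_power(450, 0.93, 0.95)
print(f"{p:.0f} W")  # ~370 W for only a ~5% clock cut
```

This is why an undervolted 4090 can land near 4080-class power draw while losing only a few percent of performance, though the connector's per-pin loading at stock settings is unchanged by what individual owners choose to do.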
#53
john_
XajelTo be clear, I don't blame NV's design.. I blame their decision -and insistence- of using this connector without doing enough QC of it and then accusing the user of not knowing how to plug it!!
I don't remember Nvidia blaming the user. Only Gamers Nexus blamed the user, and strangely and unfortunately, what a single YouTuber insisted on saying and promoting was enough for almost everybody else to jump to the conclusion that it was simply "user error".
#54
TheinsanegamerN
TomorrowVideo memory has always been higher latency than system memory so that's irrelevant.

No it won't. AMD was able to sell a 16 GB HBM2 card for $700, and it had the same peak bandwidth as the 4090 today - five years ago.
Also, cooling is easier, assuming there is epoxy fill to make the GPU die and HBM the same height. We have seen time and time again how a badly engineered card cooks its GDDR chips.

Vega 64 was not "only" bandwidth-starved. It's a false assumption that a game that benefits from massive bandwidth would have benefited on Vega 64 merely thanks to HBM. Every consumer GPU benefits from higher bandwidth to some degree, especially at higher resolutions.
All those words to say "HBM is better than GDDR" with zero examples of consumer workloads... benefiting? You may have missed something, so allow me to say it simply: GDDR provides sufficient bandwidth for consumer workloads. HBM offers far more, but with far higher latency than GDDR, and as a result, the multiple cards using HBM that were made for consumers showed zero benefit in any consumer workload.

The interposer required for HBM raises costs, and the unequal height of HBM makes heatsink development more difficult, as Gigabyte and MSI discovered during Vega 64's launch. Switching to HBM will not make a 4090 a dual-slot card; that GPU will still be hot and still require sufficient cooling. If you were RIGHT, and HBM were a major improvement, then the GPU would be utilized at a higher rate and... need more cooling to compensate versus current designs anyway. Big GPUs need big cooling; if you don't like that, don't buy a big GPU. Really simple answer.
TomorrowIt all depends on engineering. And why are we talking about height? We are talking about length and thickness (that's what she said), not how "tall" cards are.
Looking at the 3090 PCB with its stupid vertically placed angled 12-pin, there is massive free space there for 3x 8-pin. Less so on the 4090, but still possible.
I agree that there is room for multiple 8-pins; I was speaking to the insistence that HBM will magically make cards smaller, since I couldn't comprehend that someone would think that the current 3-4 slot coolers are for GDDR, and not the 400+ watt GPU chips.
TomorrowThe argument was about the new connector size and how most card utilizing this connector are actually huge - negating any benefit from a smaller connector. They may as well have 3x8pin and it would make no difference in the cooler size.
see above
TomorrowWhy would a dual-slot 4080 throttle under light load? I linked the review of the dual-slot 4080S and there was no mention throttling in the review. I suspect the noise levels might have been higher than triple or quad slot card but performance was on par with other 4080S models.
Consumers have spoken: they do NOT want 70+ dBA jet engines in their PCs. You cannot make a dual-slot 4080S without it sounding like a wailing banshee. If you did, it would not be sufficient for the power draw and heat output... and it would throttle as a result.
TomorrowEven 4090 could be undervolted with minimal performance loss on a dual-slot cooler.
Yeah, sure it could bud. Just like a 4070 could be a low profile single slot GPU if Nvidia really wanted! Manufacturers want to put massive expensive heatsinks on their cards just because they're lazy!

:rolleyes:
#55
Tomorrow
TheinsanegamerNHBM offers far more, but with far higher latency than GDDR memory, and as a result, the multiple cards using HBM that were made for consumers showed 0 benefit in any consumer workloads.
And let me say this again: video memory is not a latency-sensitive use case where you need as low a latency as possible. Our only examples of consumer cards using HBM were the Radeon Fiji and Vega series, and you're basing the "no benefit" angle merely on those. Those cards had much bigger problems that held back their performance, like poor coolers and architecture. You make it sound like HBM provided zero benefit there, and that's just untrue. Those cards would have been even worse with GDDR5.
TheinsanegamerNThe interposer required for HBM raises costs, and the unequal height of HBM makes heatsink development more difficult, as Gigabyte and MSI discovered during Vega 64's launch.
Sure it raises costs, but in case you haven't noticed, costs have gone up already. The unequal-height thing was resolved pretty easily. Teething issues.
And it is not as if manufacturers have not had any trouble with GDDR cooling - because they have. The unequal height applies even more here, and unlike with HBM, they can't fill the entire PCB with epoxy to make the GPU die and GDDR chips the same height. HBM would actually simplify heatsink development.
TheinsanegamerNConsumers have spoken, they do NOT want 70+ dba jet engines in their PCs. You cannot make a dual slot 4080s without it sounding like a wailing banshee. If you did, it would not be sufficient for the power draw and heat output.....and would throttle as a result.
4080S is a 320W card. The highest wattage dual slot card Nvidia has ever released was the 350W 3080 Ti.
Reading TPU's 3080 Ti FE review, I'm not seeing this supposed 70+ dBA and throttling you keep talking about: www.techpowerup.com/review/nvidia-geforce-rtx-3080-ti-founders-edition/32.html

At roughly 39 dBA it's in line with AIB models. Only the massive triple-slot MSI model and the ASUS AIO model at its quiet BIOS setting are noticeably quieter, at 33 dBA.
You are talking like dual-slot and quiet are mutually exclusive. They're not. It's perfectly possible to design a dual-slot cooler that dissipates 350 W without sounding like a jet engine. And there's no throttling either.
#56
Vayra86
ARFAlso, the new 12VHPWR connector should have been used only on the entry level cards in order to test if the thin wires can carry that high a current (without melting).
You do realize that entry-level cards are traditionally very low TDP cards too, right? Not sure how you're going to make that work just yet. Unless you're advocating that we run our 4060 Ti 8 GB at 600 W :) Fireworks guaranteed...
TheinsanegamerNYeah, sure it could bud. Just like a 4070 could be a low profile single slot GPU if Nvidia really wanted! Manufacturers want to put massive expensive heatsinks on their cards just because they're lazy!

:rolleyes:
Yes, they actually do; history is full of repurposed coolers that are oversized for their next-gen alternative. Let's recall all those heatpipes just floating over nothing in the past real quick. It's almost standard procedure, especially when it comes to AMD cards. Design costs money too.

Heatsinks are not expensive. It's a bit of aluminium.
#57
Dr. Dro
ARFIt was a mistake. RTX 4090 is a very badly engineered card. Too large heatsink, too short PCB, wrong power connectors choice. Should have been with 4x 8-pin original connectors.
Also, the new 12VHPWR connector should have been used only on the entry level cards in order to test if the thin wires can carry that high a current (without melting).
Says who? This thread is a gold mine of salt and resentment from armchair semiconductor and electrical engineers, who are in fact little more than disgruntled AMD fans who are attempting to defend their favorite company, no matter the cost and yet again - I'd recognize this brand of resentment anywhere. You have absolutely no qualification to back those claims.

The large heatsink is necessary if you want near-silent operation of high-wattage parts. Furthermore, a smaller PCB is preferable due to trace length, signaling strength and many other variables involved. Reliably producing a small PCB was a technological challenge to overcome. The new connector is fine. It had a couple of blunders in its earliest revision, but it's been fixed and it is now the adopted industry standard whether anyone here likes it or not. Next-gen Radeon cards will use it, and the only reason the RX 7900 series do not is that their design was finalized before the rollout of this standard was complete.
Tomorrow4080S is a 320W card. The highest wattage dual slot card Nvidia has ever released was the 350W 3080 Ti.
Reading from TPU's 3080 Ti FE review im not seeing this supposed 70+ dba and throttling you keep talking about: www.techpowerup.com/review/nvidia-geforce-rtx-3080-ti-founders-edition/32.html

At roughly 39dBa it's in line with AIB models. Only the massive triple slot MSI model and ASUS AIO model at quiet BIOS setting are noticeably quieter at 33dBa.
You are talking like dual-slot and quiet are mutually exclusive. They're not. It's perfectly possible to design a dual slot cooler to dissipate 350W while not sounding like a jet engine. And there's not throttling either.
You're attempting to be reasonable with people who long since have willfully chosen to be irrational. It is quite apparent that posters in this thread have no commitment to the truth. The sheer amount of hyperbole fueled by all sorts of fear, uncertainty and doubt in this thread should be a big red flag; had I been a mod, I'd have locked it long ago.
#58
kapone32
TheinsanegamerNHBM is no advantage. Look how close the GDDR sits to the GPU on modern Ada cards. IDK why people are obsessed with HBM; the R9 Fury/Nano/Vega 56/64 were all failures.

If you don't want a 3-slot card, don't buy one! Plenty of 2-slot cards out there.

If they burn up: "Told you the connector was shit"
If they don't: "Told you Nvidia screwed up".
Well, one thing about HBM is that it made water cooling academic. Everything is in one place, making it a joy to put under water. Vega also fully supported CrossFire for up to 16 GB of HBM. The Radeon VII only failed because AMD did not make enough of them.

The only GPU maker that still makes 2-slot cards is ASRock. They are also the only cards that you don't need a WB for, as they have great temps.

These cards are not for the consumer channel but for enterprise.
Dr. DroSays who? This thread is a gold mine of salt and resentment from armchair semiconductor and electrical engineers, who are in fact little more than disgruntled AMD fans who are attempting to defend their favorite company, no matter the cost and yet again - I'd recognize this brand of resentment anywhere. You have absolutely no qualification to back those claims.

The large heatsink is necessary if you want near-silent operation of high-wattage parts. Furthermore, a smaller PCB is preferable due to trace length, signaling strength and many other variables involved. Reliably producing a small PCB was a technological challenge to overcome. The new connector is fine. It had a couple of blunders in its earliest revision, but it's been fixed and it is now the adopted industry standard whether anyone here likes it or not. Next-gen Radeon cards will use it, and the only reason the RX 7900 series do not is that their design was finalized before the rollout of this standard was complete.



You're attempting to be reasonable with people who long since have willfully chosen to be irrational. It is quite apparent that posters in this thread have no compromise with the truth. The sheer amount of hyperbole fueled by all sorts of fear, uncertainty and doubt in this thread should be a big red flag, were I a mod I'd have locked it so long ago.
Here comes the hyperbole. EK waterblocks are single-slot and even come with an adapter to make the card single-slot. Just because a heatsink and shroud are large does not mean they will offer better performance. I will use the example of the Gigabyte 6500 XT Gaming, which has 3 fans and a 2.2-slot-wide shroud. It still suffered from Gigabyte's manufacturing problem of not applying enough thermal paste to the GPU. Last Black Friday I bought the ASUS Dual 6500 XT, and out of the box it ran 30 °C cooler than the Gigabyte did before I fixed it.

Do you understand how you look like an Nvidia fanboy by calling out AMD fans in an AMD-based thread?
#59
Cheeseball
Not a Potato
Finally, another two-slot 7900 XT and XTX. Aside from the Sapphire Pulse (which is unfortunately too tall), there doesn't seem to be any other cooler that can match the original MBA with just two PCIe 8-pin connectors.

The only problem with this one is that it's a blower, which is naturally loud, but the 12V-2x6 connector is on the back and is just one cable, which is always an advantage.
#60
A Computer Guy
The first opportunity to double the connector count and increase the safety margin by 2x, and they failed.
Why_MeYou will bend the knee!



Just be careful not to bend it too much!
#61
R-T-B
Vya DomusIt's not irrational to not want something with a track record of being a fire hazard, especially when the older option works just fine. It's not like it's an absolute necessity to move away from 8pin PCIe.
So don't buy it. You don't need to start a meme campaign.
Chrispy_I don't think it has anything to do with AMD fans. It's already been annoying equally-vocal Nvidia fans for two years.
Having been on both sides, no, not that I've seen.
#62
Ruru
S.T.A.R.S.
I still don't understand the need for this fire-hazard connector. If its purpose is to have a single connector instead of several of the classic PCIe connectors, I'm sure that upcoming enthusiast-level cards (especially from Nvidia) will have two of these sooner or later.
#63
Dr. Dro
kapone32Well one thing about HBM is that it made Water Cooling academic. Everything is in one place making it a joy to put under water. Vega also fully supported Crossfire for up to 16GB of HBM. Vega 7 only failed because AMD did not make enough of them.

The only GPU maker that still makes 2 slot cards is As Rock. They are also the only cards that you don't need a WB for as they have great temps.

These cards are not for the consumer channel but enterprise.


Here comes the hyperbole. EK waterblocks are single slot and even come with an adapter to make the card single slot. Just because a heatsink and shroud are large does not mean it will offer better performance. I will use the example of the 6500XT Gaming from Gigabyte that has 3 fans and a 2.2 wide shroud, It still suffered from Gigabyte's problem of their creation process putting not enough TP on the GPU. Last Black Friday I bought the Asus Dual 6500XT and out of the box it runs 30 C cooler than the Gigabyte before I fixed it.

Do you understand how you look like an Nvidia fan boy by calling out AMD fans in an AMD based thread.
Do you know what hyperbole means? And dude, a waterblock has no cooling capability of its own. You can't run one dry like you run a heatsink. When I say irrational, this is what I'm referring to. There isn't a shred of good faith in your argument. Enterprise? 6500 XTs? What?
Keullo-eI still don't understand the need for this fire hazard connector. If its purpose is to have a single connector instead of many of the classic PCIe connectors, I'm sure that upcoming enthusiast-level cards (especially from Nvidia) will have two of these, sooner or later.
Shorter PCBs are now possible, which means more tightly integrated circuits, resulting in more advanced GPUs. Easier for the user to wire. Higher reliability, because there are fewer connectors and therefore fewer points of failure. Supports higher wattages. The fact that the original revision had some flaws doesn't discredit any of these points.
#64
kapone32
Dr. DroDo you know what hyperbole means? And dude, a waterblock has no cooling capability of its own. You can't run them dry like you run a heatsink. When I mean irrational, this is what I'm referring to. There isn't a shred of good faith in your argument. Enterprise? 6500 XTs? What?
This from the person who used a Frontier card for gaming and complained about it not working properly. If you do not understand: HBM on Vega sits right next to the GPU on the same package, which means you have one area to cool, not the GPU and the surrounding GDDR chips.

The 6500 XT, if you must know, is to run CPU mining rigs, as the 5000-series chips from AMD do not have an iGPU, but we can go on. BTW, it still does 1080p better than the 8700G, so you can go on anyway.

As far as enterprise goes, when was the last time you saw a blower card like this for consumers? The Raven 02 is no longer relevant for consumers. Do you think we are getting these in the consumer chain? What actual PSU that is affordable comes with two 12VHPWR connectors in the consumer space? I have not seen a blower card since the HD 7950 3 GB. It was a Sapphire reference model.
#65
Dr. Dro
kapone32This from the person that used a Frontier card for Gaming and complained about it not working properly. If you do not understand HBM on Vega sits right underneath the GPU so that means that you have one area to cool. Not the GPU and the surrounding DDR chips.

6500XT, if you must know is to run CPU mining rigs as the 5000 chips from AMD do not have an I GPU but we can go on. BTW it still does 1080P better than the 8700G so you can go on anyway.

As far as enterprise, when was the last time you saw a blower card like this for consumers? The Raven02 is no longer relevant for consumers. Do you think we are getting these in the consumer chain. What Actual PSU that is affordable comes with 2 12vHPWr connectors in the consumer space? I have not seen a blower card since the 7950XT 3GB. It was a Sapphire reference model.
You... do realize that Frontier is simply a Vega 64 with 16 GB and it can run gaming drivers, right? It is EXACTLY the same core and has EXACTLY the same performance as any other Vega 64...

You're trying to preach to the pope here; I know what HBM is. I don't see its relevance to anything I've said to begin with.
#66
kapone32
Dr. DroYou... do realize that Frontier is simply a Vega 64 with 16 GB and it can run gaming drivers, right? It is EXACTLY the same core and has EXACTLY the same performance as any other Vega 64...

You're trying to preach to the pope here, I know what HBM is, I don't see its relevance to anything I've said to begin with
Yep, that is why they had the exact same drivers... It is not like AMD software was then what it is today.

Obviously you are not someone who has water-cooled a Vega card, so you would never appreciate it enough to understand the merit of my point about the HBM sitting with the GPU, making it essentially one die to cool.

Let's keep in mind what started this for context

"Says who? This thread is a gold mine of salt and resentment from armchair semiconductor and electrical engineers, who are in fact little more than disgruntled AMD fans who are attempting to defend their favorite company, no matter the cost and yet again"

The issue with that statement is that your PC has no AMD hardware, so making a claim like this is indeed hyperbole, but the best part is that the 7000 owners' club destroys your feelings about AMD.
#67
Vya Domus
R-T-BSo don't buy it.
Easy to say, soon you're not gonna have a choice.
#68
Dr. Dro
kapone32Yep that is why they had the exact same drivers................It is not like AMD software was what is today.

Obviously you are not someone that has Water cooled a Vega card so you would never appreciate it enough to understand the merit of my point about HBM being under the GPU so becoming essentially 1 Die to cool. If you were

Let's keep in mind what started this for context

"Says who? This thread is a gold mine of salt and resentment from armchair semiconductor and electrical engineers, who are in fact little more than disgruntled AMD fans who are attempting to defend their favorite company, no matter the cost and yet again"

The issue with that statement is your PC has no AMD so making a claim like this is indeed hyperbole but the best is the 7000 owners club destroys your feelings about AMD.
You... do realize that the drivers are and have always been the same, yes?

Anyway, it could be any GPU from any brand, using any kind of memory technology; it's the same thing. Neither is gonna run on a dry waterblock. The last single-slot upper-range GPU I recall was the Galax GTX 1070 Katana, and it was very noisy, not to mention that the 1070 has a much lower power footprint compared to an RX 7900 XTX. Even if you somehow made a single-slot XTX, it would require a 5,000 RPM blower to get it anywhere even close to operable.

You're just being a contrarian for the sake of it. The thing of displaying irrational behavior to defend a brand? You're proving my point. Nothing you said is even remotely connected, let alone coherent.
Vya DomusEasy to say, soon you're not gonna have a choice.
The previous connector was the standard because it was... just there? No one ever asked for that connector specifically, just like no one asked for this one.
#69
Vya Domus
Dr. DroThe previous connector was the standard because it was... just there? No one ever asked for that connector specifically, just like no one asked for this one.
It's weird how some of you are cool with whatever these corporations are doing and are bewildered when you see someone complain about something that not only was unnecessary but turned out to be a genuine hazard.
#70
AusWolf
Dr. DroThe previous connector was the standard because it was... just there? No one ever asked for that connector specifically, just like no one asked for this one.
The previous connector came as a result of graphics cards needing more power than power supplies could push through the ATX connector and PCI-e slot. So you upgraded your GPU and your PSU.

This new connector came as a result of the 4090 needing more power than previous generations, even though most high-end PSUs can easily supply that, so no upgrade would be necessary if not for a darn cable.

If someone can tell me why a 4070 needs this connector with a logical explanation, I'll raise my hat.
#71
Onasi
Dr. DroSays who? This thread is a gold mine of salt and resentment from armchair semiconductor and electrical engineers, who are in fact little more than disgruntled AMD fans who are attempting to defend their favorite company, no matter the cost and yet again - I'd recognize this brand of resentment anywhere. You have absolutely no qualification to back those claims.
The 4090FE is actually a masterstroke of design. It’s trying to be as close to PCI-E AIB spec for a triple-slot card as it physically can be, considering that it’s a 450W card, and almost succeeds in complying. The partner models are honestly an embarrassment in comparison, though I do understand that those had their own limitations.
#72
kapone32
Dr. DroYou... do realize that the drivers are and have always been the same, yes?

Anyway, It could be any GPU from any brand ever, using any kind of memory technology. It's the same thing. Neither are gonna run on a dry waterblock. The last single-slot upper-range GPU I recall was the Galax GTX 1070 Katana, and it was very noisy, not to mention that the 1070 has a much lower power footprint compared to an RX 7900 XTX. Even if you somehow made a single-slot XTX, it would require a 5000 RPM blower to get it anywhere even close to operable.

You're just being a contrarian for the sake of it. The thing of displaying irrational behavior to defend a brand? You're proving my point. Nothing you said is even remotely connected, let alone coherent.



The previous connector was the standard because it was... just there? No one ever asked for that connector specifically, just like no one asked for this one.
Yeah, the drivers are the same. Just different colours, right? It is easier to cool one area than a spread. With those Vega cards, as long as the 4 main screws were installed, the rest of the screws were not an issue - unlike when you use too-thin or too-thick thermal pads on the memory modules. When these modules are up to 2 GB each, that means 10 to 12 modules to think about. I cannot seem to get you to understand that.

Once again you show your ignorance. One of the selling features of EK waterblocks is that they are single-slot. They even come with an adapter for that.

You seem to be the only person who thinks the waterblock would be dry, and before you jump on that, realize that the Alphacool Eisbaer is about $100 for the 360 version, and all you need is another set of quick-connect cables for $11 to get the block filled. Corsair sells its coolant for $20 all over the place. Look Mommy, I am water-cooling. Today it is even easier to get a good deal on a block, given the competition.

Have you bought a pre-built lately? I have, and the MSI card in that machine was actually less than 2 slots. You have been sold on this monster shroud that was introduced for the 3090 and was just reused by most brands for AMD and Intel. My 7900 XT is about 1.4 slots wide, but it is from Alphacool, and I don't miss the noise of GPU fans spinning up during high-intensity gaming - like when fog killed the last 16 hours of the 24 Hours of Nürburgring and I used the 2016 Audi to do a nice 32-lap race on that circuit.

By the way, my Vega 64 is still going strong and is part of a mining rig. Yes, it is single-slot in a Bykski block with a pump/res and some quick-connect cables. There are also 2 more 6800 XTs in that loop. This is about performance, not looks, but even my main rig uses quick connects. It makes life a joy when you get a new GPU or need to do anything to the loop.

The previous connector, really? Do you mean 6-pin and 8-pin? Are you making that seem in some way comparable to 12VHPWR in terms of relevance? The only cards that caught fire on that connector were 2080 Tis. Don't blame me, blame Gamer's Nexus.

Let's get back to the root. These cards are not meant for gamers but for whatever people do with rackmount systems. If you browse the site you will see a 40-series card with a blower. GPUs are used for more than just gaming.
#73
A Computer Guy
Vya DomusIt's weird how some of you are cool with whatever these corporations are doing and are bewildered when you see someone complain about something that not only was unnecessary but turned out to be a genuine hazard.
Especially when doubling the connector count would simply halve the risk - probably eliminating the problem altogether. Unfortunately, PSUs aren't equipped to deal with 2 connectors of this type (to my knowledge, at this time).
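The arithmetic behind that doubling argument can be sketched; the 9.5 A per-pin figure used below is a commonly quoted terminal rating for the connector and is an assumption here, as is the ideal load sharing:

```python
# Sketch: how splitting 600 W across one vs. two 12V-2x6 connectors
# changes the headroom at each pin. PIN_RATING_A is an assumed,
# commonly quoted per-terminal rating, not a verified spec value.

PIN_RATING_A = 9.5   # assumed per-pin current rating
VOLTS = 12.0
PINS = 6             # 12 V current-carrying pins per connector

def margin(total_watts: float, connectors: int) -> float:
    """Ratio of pin rating to actual per-pin current (higher is safer)."""
    amps_per_pin = total_watts / VOLTS / (connectors * PINS)
    return PIN_RATING_A / amps_per_pin

print(f"one connector:  {margin(600, 1):.2f}x headroom")  # ~1.14x
print(f"two connectors: {margin(600, 2):.2f}x headroom")  # ~2.28x
```

Under these assumptions a single connector at 600 W runs each pin at close to its rating, while two connectors sharing the load would roughly double the margin, which is the point being made above.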
AusWolfThe previous connector came as a result of graphics cards needing more power that power supplies couldn't push through the ATX connector and PCI-e slot. So you upgraded your GPU and your PSU.

This new connector came as a result of the 4090 needing more power than previous generations, even though most high-end PSUs can easily supply that, so no upgrade would be necessary if not for a darn cable.

If someone can tell me why a 4070 needs this connector with a logical explanation, I'll raise my hat.
It may have to do with manufacturing economics. Perhaps it's cheaper to unify the production lines around the same power connector?
#74
john_
Keullo-eI still don't understand the need for this fire hazard connector. If its purpose is to have a single connector instead of many of the classic PCIe connectors, I'm sure that upcoming enthusiast-level cards (especially from Nvidia) will have two of these, sooner or later.
Many of the latest PSUs do have this connector, so I am guessing ASRock thought that buyers of those PSUs will want to make use of it. They did pay for a new PSU; I bet they don't want a 600 W power connector sitting useless. So ASRock probably thinks that buyers will see these new models as a way to also utilize their new PSUs. And frankly, if I considered that connector 1000% safe (NOT), I would have preferred a GPU with one power connector over another model with 2-3 8-pin connectors. ASRock is probably targeting that group.
#75
R-T-B
Vya DomusIt's weird how some of you are cool with whatever these corporations are doing and are bewildered when you see someone complain about something that not only was unnecessary but turned out to be a genuine hazard.
The numbers on this hazard convince me it's more meme than real hazard, especially with the revision.