
Intel ATX 3.0 PSU standard has more power for GPUs (article)

https://www.tomshardware.com/news/intel-atx-v3-psu-standard

quotes from article:
"A significant point of interest is that power spikes for PCIe cards rated from 300 to 600W are only allowed when the 12VHPWR is used, not with the legacy 6 pin and 6+2 pin PCIe connectors."
"In any case, you will need 1800-2000W PSUs to exploit 600W graphics cards fully."

So can AMD ignore this standard or create their own? Why should AMD have to follow Intel's lead? Intel doesn't get to dictate policy to Nvidia (and AMD's graphics division) about how they should configure the PCIe power connectors on their videocards, do they?
 
No, AMD can't ignore this, as this is an extension of the ATX standard.
However, AMD doesn't have to use the 12-pin connector on their graphics cards if they don't want/need to.
This seems to be something that Nvidia has pushed for more so than AMD or Intel.
 
Well, they can ignore it and create their own, but (1) it would be a stupid business move and (2) they could not call their product "ATX" compliant which takes us back to (1).

BTW, if a company does try to go their own way, that is NOT creating their own "standard" - that is going "proprietary" - never good for consumers - see (1) above.
 
1800-2000w psu... I've just bought an 850 thinking that was loads!
 
I imagine the average 5A/10A dedicated bedroom circuit in the average house is totally going to be fine with this.
 
So what's the deal with 1800 - 2000 watt PSU's?
Mining may be one [more or less] "legit" reason. Those rigs can be very power hungry.

Another reason may include marketing fluff - "bigger" sells because some falsely believe bigger always means better. And some believe bigger gives them bragging rights. It often means they are just fools with their money.

Intel or Nvidia not trying to save the planet and fry us all instead?
LOL. No. If there wasn't a demand, there would not be the product.
 
Well, they can ignore it and create their own, but (1) it would be a stupid business move and (2) they could not call their product "ATX" compliant which takes us back to (1).

BTW, if a company does try to go their own way, that is NOT creating their own "standard" - that is going "proprietary" - never good for consumers - see (1) above.
Why does Intel get to set the ATX standard? No one gets any input except them? It's almost like Intel's proprietary standard is becoming the industry standard.
 
I imagine the average 5A/10A dedicated bedroom circuit in the average house is totally going to be fine with this.
That's why 220-240 V is superior. Also, most wall outlets are fused for 10 or 16 A in Europe.
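For perspective, a quick sketch of how much continuous power different wall circuits can actually feed a PC; the breaker ratings and the 80% continuous-load derating are illustrative assumptions, not figures quoted from any standard:

```
# Rough continuous-power budget of a wall circuit, to put 1800-2000 W PSUs in
# perspective. Breaker ratings and the 80% derating are assumptions.

def circuit_limit_watts(volts, amps, derate=0.8):
    """Continuous wattage available from a circuit after derating."""
    return volts * amps * derate

print(f"US 120 V / 15 A circuit : {circuit_limit_watts(120, 15):.0f} W")  # ~1440 W
print(f"EU 230 V / 10 A circuit : {circuit_limit_watts(230, 10):.0f} W")  # ~1840 W
print(f"EU 230 V / 16 A circuit : {circuit_limit_watts(230, 16):.0f} W")  # ~2944 W
```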
 
Why does Intel get to set the ATX standard?
They don't get to all by themselves - but they are a major influence. And as one of the biggest (by far!) players in the industry, they have earned it. But a standard does not become an industry "standard" unless the rest of the industry comes to a consensus and either agrees, or concedes then agrees.

Majority rules - just a fact of life.
 
They did it because of power excursions and the mess of 3080/ti cards that would fail on some PSUs.

Setting this standard spells out how that all works for all camps.
 
They don't get to all by themselves - but they are a major influence. And as one of the biggest (by far!) players in the industry, they have earned it. But a standard does not become an industry "standard" unless the rest of the industry comes to a consensus and either agrees, or concedes then agrees.

Majority rules - just a fact of life.
I hope they can also become a big player in the gaming/mining videocard market, I'm tired of the duopoly.

Will Intel's new Arc series of videocards require Intel's ridiculous proposed new PCIe power connector?
 
for all camps
^^^THIS!^^^ I don't think that can be emphasized enough. Had Intel not led the charge way back when IBM "clones" first started to appear, there would have been no "AT" standard, or the subsequent "ATX" standard. And without them, there would have been no "self-build" industry, no NewEgg, MicroCenter, ASUS, Gigabyte (except as OEMs - maybe) and so much more. We would be stuck in the proprietary world of the big factory makers - just as we are today with laptops, where the self-build industry is virtually non-existent.
Will Intel's new Arc series of videocards require Intel's ridiculous proposed new PCIe power connector?

I fail to see the reason for the hostility, or for calling it ridiculous. As SP pointed out, setting this standard tells everyone exactly how it works and how to implement it, ensuring compliance AND COMPATIBILITY regardless of the maker. And that's a very good thing. It is exactly what will allow you to use an ASUS motherboard with an MSI graphics card, all powered by a Seasonic power supply.

I don't know what the Arc series will require. It might depend on whether the power demands need it or not. That said, it typically is cheaper in the long run (for makers and consumers) if there are fewer required connectors.
 
Will Intel's new Arc series of videocards require Intel's ridiculous proposed new PCIe power connector?
So far, nothing suggests it will be used for the first generation of cards.
 
I hope they can also become a big player in the gaming/mining videocard market, I'm tired of the duopoly.

Will Intel's new Arc series of videocards require Intel's ridiculous proposed new PCIe power connector?
Not that I really know much (or anything), but traditionally Intel sets the standard for connectors and (as far as add-in cards go) PCI-SIG sets the standard on how to use them. Just peruse some docs of the standards from each (though PCI-SIG stuff is behind a paywall :banghead: ) and you'll (maybe?) see how each doc refers to the other in the appropriate concerns/areas.

FWIW, the standard set by PCI-SIG allows the max power draw to be 300 watts: a single 6+2, a 6-pin, and slot power. All these cards with two or more 6+2 connectors cannot be certified, and that is a very bad look.
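A rough tally of that budget, using the widely quoted per-connector figures (75 W slot, 75 W 6-pin, 150 W 8-pin / 6+2); treat those numbers as assumptions, since the actual spec sits behind the paywall:

```
# Rough tally of the certified 300 W budget described above. Per-connector
# figures are the commonly quoted ones, used here as assumptions.

SLOT_W = 75    # delivered through the PCIe x16 slot itself
PIN6_W = 75    # one 6-pin auxiliary connector
PIN8_W = 150   # one 8-pin (6+2) auxiliary connector

certified_max = SLOT_W + PIN6_W + PIN8_W
print(f"slot + 6-pin + 6+2 = {certified_max} W")   # 300 W
```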

What good are standards if they can be blatantly ignored?
 
What good are standards if they can be blatantly ignored?
Again - they can't be, unless you are Dell, HP or Apple. But if you want to sell to the self-build or self-upgrade market, you cannot ignore those standards. If you deviate, you must at least ensure compatibility with the standards AND your custom drivers better be bullet proof.
 
^^^THIS!^^^ I don't think that can be emphasized enough. Had Intel not led the charge way back when IBM "clones" first started to appear, there would have been no "AT" standard, or the subsequent "ATX" standard. And without them, there would have been no "self-build" industry, no NewEgg, MicroCenter, ASUS, Gigabyte (except as OEMs - maybe) and so much more. We would be stuck in the proprietary world of the big factory makers - just as we are today with laptops, where the self-build industry is virtually non-existent.


I fail to see the reason for the hostility, or for calling it ridiculous. As SP pointed out, setting this standard tells everyone exactly how it works and how to implement it, ensuring compliance AND COMPATIBILITY regardless of the maker. And that's a very good thing. It is exactly what will allow you to use an ASUS motherboard with an MSI graphics card, all powered by a Seasonic power supply.

I don't know what the Arc series will require. It might depend on whether the power demands need it or not. That said, it typically is cheaper in the long run (for makers and consumers) if there are fewer required connectors.
The new standard will certainly sell more PSUs. I hope it's possible to have adapter cables for the 12VHPWR standard, although from what I read, any videocard capable of using 12VHPWR is supposed to limit its power draw to the minimum if no sense pins are detected (one reason I don't like it).

If someone has a 1200 W Seasonic PSU that doesn't provide the 12VHPWR connector, any videocard that uses 12VHPWR will artificially limit its power draw and performance on that otherwise capable PSU! That's why it's ridiculous.
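A small sketch of the sense-pin behaviour being described; the SENSE0/SENSE1 encoding table below is the commonly reported one, not quoted from the spec, so treat it as an assumption:

```
# Sketch of how a 12VHPWR card reportedly picks its power limit from two
# sideband sense pins. The table is the commonly reported encoding (assumed),
# not taken from the paywalled spec.

# (sense0_grounded, sense1_grounded) -> sustained power limit in watts
SENSE_TABLE = {
    (True,  True):  600,
    (False, True):  450,
    (True,  False): 300,
    (False, False): 150,  # both pins floating: cable/adapter with no sense wiring
}

def card_power_limit(sense0_grounded, sense1_grounded):
    """Power limit a 12VHPWR card would adopt for a given sense-pin state."""
    return SENSE_TABLE[(sense0_grounded, sense1_grounded)]

# An older PSU or bare adapter that leaves both sense pins floating drops the
# card to the minimum -- the behaviour being complained about:
print(card_power_limit(False, False))   # 150
print(card_power_limit(True, True))     # 600
```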
 
Again - they can't be, unless you are Dell, HP or Apple. But if you want to sell to the self-build or self-upgrade market, you cannot ignore those standards. If you deviate, you must at least ensure compatibility with the standards AND your custom drivers better be bullet proof.
Since you do not seem familiar with the standards for 150/225/300 watt add-in cards, I've attached them (and yes, they are 14 y/o, and no, they have NOT been updated!!). Then just look at cards that have three 6+2 connectors (without the PCI-SIG certification on the box!) and then say what can't and can be done.

The frustrating thing about PCI-SIG's paywall is all the misunderstanding perpetuated on forums.
 


''Card Mass Limit'' 1.5kg... :roll:
 
Since you do not seem familiar with the standards for 150/225/300 watt add-in cards, I've attached them.
:( I am familiar with them. All you did was prove my point. "Ignore" is not the same thing as "do nothing about it" when it comes to compliance with industry standards.

If, as you suggest, Acme graphics card maker totally "ignores" all industry standards, designs and requires their own "proprietary" power connections, then that would require Acme also produce their own Acme branded power supply with Acme power connectors designed "exclusively" to support Acme cards. That is NOT happening - at least not in the self-build industry.

Look at Dell, for example. They have a notorious history of modifying motherboards to make them proprietary, such that ONLY Dell power supplies will support them. This results in consumers being forced to buy more expensive replacement parts directly from Dell. Not cool.
 
:( I am familiar with them. All you did was prove my point. "Ignore" is not the same thing as "do nothing about it" when it comes to compliance with industry standards.

If, as you suggest, Acme graphics card maker totally "ignores" all industry standards, designs and requires their own "proprietary" power connections, then that would require Acme also produce their own Acme branded power supply with Acme power connectors designed "exclusively" to support Acme cards. That is NOT happening - at least not in the self-build industry.
Or they simply provide adapters, as all AIBs have done when the connector on the card was newer than the PSU's. Remember before 6+2/8-pin? Geez, what did NV *just* do for their Founders cards?

History would disagree.
Look at Dell, for example. They have a notorious history of modifying motherboards to make them proprietary, such that ONLY Dell power supplies will support them. This results in consumers being forced to buy more expensive replacement parts directly from Dell. Not cool.
No, it's not cool, but it isn't applicable here. Geez, switching two wires is sooooo bad :rolleyes:

Nice try; you fly boys think you're always the sharpest knives in the drawer. :roll:

We are done here.
 
So what's the deal with 1800 - 2000 watt PSU's? Intel or Nvidia not trying to save the planet and fry us all instead?
Nope, this is the classic "I'll scratch your back if you scratch mine" kickback situation (from the PSU makers) for them to push bigger & bigger units, just as the GPU makers are making moar & moar cards that require moar & moar juice.....

not that ANYONE would ever publicly admit to it, but it IS collusion nonetheless, in every way, shape & form.....no if's, ands or butts :D
 
So PCI-SIG supports the concept of three 2x3 connectors, but not three 2x4 connectors? Why not three 2x4 connectors to get to 500+ watts?
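Rough arithmetic on those two layouts, again taking the commonly quoted 75 W (slot and 2x3) and 150 W (2x4) figures as assumptions:

```
# Comparison of the two connector layouts asked about above. Per-connector
# figures are the commonly quoted ones, used here as assumptions.

SLOT_W = 75    # PCIe x16 slot
PIN6_W = 75    # 2x3 (6-pin) connector
PIN8_W = 150   # 2x4 (6+2 / 8-pin) connector

three_2x3 = SLOT_W + 3 * PIN6_W   # 300 W, right at the certified ceiling
three_2x4 = SLOT_W + 3 * PIN8_W   # 525 W, well past it

print(f"slot + three 2x3 connectors: {three_2x3} W")
print(f"slot + three 2x4 connectors: {three_2x4} W")
```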
 
The new standard will certainly sell more PSUs. I hope it's possible to have adapter cables for the 12VHPWR standard, although from what I read, any videocard capable of using 12VHPWR is supposed to limit its power draw to the minimum if no sense pins are detected (one reason I don't like it).

If someone has a 1200 W Seasonic PSU that doesn't provide the 12VHPWR connector, any videocard that uses 12VHPWR will artificially limit its power draw and performance on that otherwise capable PSU! That's why it's ridiculous.
My guess is it will be a situation where you lose 2-3% at most by not being on the new ATX standard; power efficiency gets worse and worse as you scale up. As long as the PSU has enough grunt for average expected usage it will be OK, I think, but I am not excited for this new ATX standard, nor for the newer planned GPUs from Nvidia. I am very energy conscious now due to energy costs here in the UK, e.g. I have abandoned recording via x264 in OBS since CPU encoding consumes so much power, and have had to tinker with trying to get the best out of NVENC (the files are ginormous: a 35 gig file yesterday for 3 hours 42 mins of gameplay footage).
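Quick illustrative arithmetic on that file size (whether "gig" means GB or GiB is assumed both ways):

```
# Average bitrate implied by a 35 "gig" recording over 3 h 42 min.

duration_s = 3 * 3600 + 42 * 60          # 13,320 seconds

for label, size_bytes in (("35 GB", 35 * 10**9), ("35 GiB", 35 * 2**30)):
    mbps = size_bytes * 8 / duration_s / 1e6
    print(f"{label}: ~{mbps:.1f} Mbps average")   # ~21.0 and ~22.6 Mbps
```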
 