Wednesday, July 6th 2016

AMD Updates its Statement on Radeon RX 480 Power Draw Controversy

AMD today provided an update on how it is addressing the Radeon RX 480 power-draw controversy. The company stated that it has assembled a worldwide team of developers to put together a driver update that lowers power draw from the PCIe slot, with minimal performance impact. The driver will be labeled Radeon Software Crimson Edition 16.7.1 and will be released within the next two days (before the weekend). It makes two changes: it redistributes power draw away from the PCIe slot, and it adds a separate "Compatibility" toggle to the Global Settings of the Radeon Settings app, disabled by default, which lowers total board power. AMD is thus giving users a fix while not making a section of its users feel the card has been gimped by a driver update. The driver will also improve game-specific performance by up to 3 percent.

The statement by AMD follows.

We promised an update today (July 5, 2016) following concerns around the Radeon RX 480 drawing excess current from the PCIe bus. Although we are confident that the levels of reported power draws by the Radeon RX 480 do not pose a risk of damage to motherboards or other PC components based on expected usage, we are serious about addressing this topic and allaying outstanding concerns. Towards that end, we assembled a worldwide team this past weekend to investigate and develop a driver update to improve the power draw. We're pleased to report that this driver - Radeon Software 16.7.1 - is now undergoing final testing and will be released to the public in the next 48 hours.

In this driver we've implemented a change to address power distribution on the Radeon RX 480 - this change will lower current drawn from the PCIe bus.

Separately, we've also included an option to reduce total power with minimal performance impact. Users will find this as the "compatibility" UI toggle in the Global Settings menu of Radeon Settings. This toggle is "off" by default.

Finally, we've implemented a collection of performance improvements for the Polaris architecture that yield performance uplifts in popular game titles of up to 3%. These optimizations are designed to improve the performance of the Radeon RX 480, and should substantially offset the performance impact for users who choose to activate the "compatibility" toggle.

AMD is committed to delivering high quality and high performance products, and we'll continue to provide users with more control over their product's performance and efficiency. We appreciate all the feedback so far, and we'll continue to bring further performance and performance/W optimizations to the Radeon RX 480.

77 Comments on AMD Updates its Statement on Radeon RX 480 Power Draw Controversy

#51
revin
buggalugs: The PCI-E spec is designed to handle more than the 75 W reference spec.
o_O Confused :confused: So when it (the SIG) revised the spec up from 25 W to allow 75 W total (12 V + 3.3 V), where does it allow the 12 V draw to exceed 75 W, not counting the 3.3 V that was already there?
From what I've seen, this is the only card found to have an equal draw from both the slot and the cable. [BTW, I really like this card!]
So if the spec has been revised to allow 150 W through the slot, why can't that be found anywhere?
#52
Filip Georgievski
bug: So instead of frying your motherboard, now you can choose to fry your PSU instead.
HAHAHA, nice one, Mr. PSU Expert.
A 6-pin GPU connector can physically provide up to 200 W of power, so even if the RX 480 drew no power from the PCIe slot, the connector would still be enough to run the card without damaging your power plug or PSU.

The 12 V rail can provide all the power the card needs through that 6-pin connector alone.

I for one would try a different solution: why not design a custom PCB for this card that routes all power to the chips from the 6- or 8-pin connector, excluding PCIe slot power?
Asus, MSI, or Sapphire could do that, and go with an 8-pin instead of a 6-pin for better power delivery, since an 8-pin is rated at 150 W per connector and can physically carry up to 300 W.
#53
gupsterg
nemesis.ie: I think they've actually said they are doing two things (as well as the original draw being a non-issue, as you mention):

1. Lowering the power draw from the PCIe slot (they didn't say by how much, or whether there is a penalty for that; benching will confirm), and this will be enabled as standard.

and separately:

2. Adding an option to reduce overall power by "some amount" at a cost to performance that should be offset by a claimed 3% improvement in driver performance, assuming your game of choice is one of the "uplifted" ones.

Number 1 will be the most interesting to see the results of: how much have they reduced PCIe slot power use? Is overall use now the same, just with some amount moved to the card's PCIe power connector? And how has this affected performance? In theory, on the uplifted games, performance should be 3% better than in the launch reviews.

Another question is whether overall TDP/TBP has changed.

We should know more in a couple of days. ;)
For 1, this is the information we currently have: the current reference RX 480 PCB design is such that loop 1 of the IR3567B (6 phases) controls the MOSFETs that supply GPU VDDC. The 6 phases are split 50/50 between the PCI-E slot and the plug, so we have 2 independent power planes; there is no chip on the PCB that can switch the supply source feeding the MOSFETs, but the IR3567B can load-balance independently per phase. So basically, The Stilt's fix lowers the load ratio of the 3 phases supplied by the PCI-E slot and shifts it to the 3 phases supplied by the PCI-E plug. If there were any other method he could employ, I would think he would have used it, especially with his experience and knowledge of AMD products. This is basically what I think AMD will do to reduce current/power drawn from the PCI-E slot.

This does not affect how much power the GPU draws or what voltage it uses, nor will its performance be limited. What it does mean is that 3 phases are being loaded more, so depending on your GPU's properties and how hard you push the card when overclocking, you could hit the OCP limit.
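To make that load-shifting arithmetic concrete, here is a minimal sketch (my own illustration, not AMD's or The Stilt's actual controller logic; the current figure and the weights are assumptions) of how de-rating the slot-fed phases moves power onto the plug-fed phases while total GPU power stays the same:

```python
# Hypothetical illustration only -- not AMD's or The Stilt's actual code.
# Shows how re-weighting the six VRM phases shifts current off the PCIe slot
# while total GPU power is unchanged. All numbers are assumptions.

GPU_CURRENT_A = 13.0   # assumed total VDDC input current at 12 V
RAIL_V = 12.0

def rail_power(slot_weights, plug_weights, total_a=GPU_CURRENT_A):
    """Split the total current across phases by per-phase load weight,
    returning (watts from slot, watts from 6-pin plug)."""
    total_w = sum(slot_weights) + sum(plug_weights)
    slot_a = total_a * sum(slot_weights) / total_w
    plug_a = total_a * sum(plug_weights) / total_w
    return slot_a * RAIL_V, plug_a * RAIL_V

# Stock: all six phases weighted equally -> 50/50 slot/plug split.
print(rail_power([1.0] * 3, [1.0] * 3))   # (78.0, 78.0) -> slot above 75 W

# Fix: de-rate the three slot-fed phases, up-rate the three plug-fed ones.
print(rail_power([0.8] * 3, [1.2] * 3))   # (62.4, 93.6) -> slot within spec
```

The total (156 W in this made-up example) is untouched; only the split moves, which is why the plug-fed phases run hotter and why heavy overclocking could still hit the OCP limit on them.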

For 2, my opinion is this: PowerPlay in the ROM has a PowerTune table, which contains the PowerLimit. These values are TDP/TDC/MPDL.

TDP: "Change TDP limit based on customer's thermal solution"

TDC: "PowerTune limit for maximum thermally sustainable current by VDDC regulator that can be supplied"

Maximum Power Delivery Limit (MPDL): "This power limit is the total chip power that we need to stay within in order to not violate the PCIe rail/connector power delivery"

These values limit the GPU only, not any other board elements (RAM, etc.). TDC does not do any load balancing or differentiate between phases on the VRM. MPDL does not differentiate between where power is drawn (i.e., slot vs. plugs). So AMD may be implementing a tweak of these settings in the OS once the driver loads, as this is what we do when we change the power limit in OverDrive/TriXX/MSI AB.
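As a rough mental model (a sketch of my own; the field names and numbers are purely illustrative assumptions, not AMD's actual PowerPlay structures), the three limits can be thought of like this:

```python
# Conceptual sketch only -- field names and values are assumptions,
# not AMD's actual PowerPlay/PowerTune implementation.
from dataclasses import dataclass

@dataclass
class PowerTuneLimits:
    tdp_w: float    # power target matched to the cooler's thermal capacity
    tdc_a: float    # max sustained current the VDDC regulator may supply
    mpdl_w: float   # max total chip power, meant to keep PCIe rail/connector in spec

def must_throttle(chip_power_w: float, vddc_current_a: float,
                  lim: PowerTuneLimits) -> bool:
    """PowerTune pulls clocks/voltage down when any limit is exceeded.
    Note that none of these limits care WHERE the power comes from
    (slot vs. plug), which is the point made above."""
    return (chip_power_w > lim.tdp_w or
            vddc_current_a > lim.tdc_a or
            chip_power_w > lim.mpdl_w)

stock = PowerTuneLimits(tdp_w=110.0, tdc_a=132.0, mpdl_w=150.0)  # made-up numbers
print(must_throttle(105.0, 100.0, stock))   # False: inside all three limits
print(must_throttle(155.0, 100.0, stock))   # True: exceeds TDP and MPDL -> throttle
```

A driver-side power-limit slider in OverDrive/TriXX/Afterburner effectively scales values like these after the driver loads, which is the kind of tweak being suggested here.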

They may also tweak the PowerTune algorithm itself. Without delving too deep, I'll present an example based on two programs, from my experience on Fiji. At some point, the driver exposed a feature called "Power Efficiency". Before this setting was available, I would raise the power limit in ROM/OS to the max: 3DMark Fire Strike would run at max GPU clock, but Heaven would show some clock dropping; with PE=Off there was no clock drop in Heaven. I was never hitting a current/power/temperature condition that should make the GPU drop clocks in Heaven with PE=On, so this led me to believe an aspect of the PowerTune algorithm is being tweaked when I set PE=Off.

Just like you said, I would agree it will come at a cost to performance, which they are giving back in "some" titles for fix/point 2.

Now getting back to GPU properties, mainly leakage: a higher-leakage ASIC will draw more power. So some cards will not benefit as much from the shift of load between phases to reduce PCI-E slot power usage. Likewise, a higher-leakage ASIC will reach the PowerTune limit sooner under load, so its owner may see more of a performance loss. IMO, owners of lower-leakage ASICs will benefit more from these fixes in the context of the "issue" and will draw somewhat less power as a whole at "stock".

When W1zzard tested The Stilt's tweak, the card drew ~10 W less from the PCI-E slot, headroom which will quickly be consumed by any overclocking.

IMO the fixes are not "ideal" considering this is a card which has a PCI-E plug, and by this I mean no disrespect to The Stilt, W1zzard, and others involved with it. The reference PCB needs a redesign, either to be in line with what recent AMD cards drew from the PCI-E slot or to substantially reduce PCI-E slot power usage.

My apologies to members for mega long post.
#54
TRWOV
bug: I guess math isn't your strong point. The card currently consumes more than the compliant 150 W (~165 W). If you limit PCIe slot input to <75 W, then you end up drawing >75 W from the 6-pin connector, which is still outside the spec. So instead of frying your motherboard, now you can choose to fry your PSU instead.
The only sane thing to do is make the thing draw 150 W as advertised.
The PSU cables can withstand the current. Take a look at your PSU's 6+2-pin cable: the extra 2 pins don't even have their own wires; they are just bridged from the existing ones, so when you use it for an 8-pin card, only 6 wires are connected directly to the PSU.

The 6-pin connector can safely draw an extra 16 W without catching fire or anything.

Also, AMD didn't say that the card would draw 150 W (total board power), just that the card's TDP was 150 W (not the same thing as power consumption).

Anyway, if you're so concerned about going over the PCIe specification, you can turn on the toggle to limit the power draw to 150 W.
#55
$ReaPeR$
TRWOV: The PSU cables can withstand the current. Take a look at your PSU's 6+2-pin cable: the extra 2 pins don't even have their own wires; they are just bridged from the existing ones, so when you use it for an 8-pin card, only 6 wires are connected directly to the PSU.

The 6-pin connector can safely draw an extra 16 W without catching fire or anything.

Also, AMD didn't say that the card would draw 150 W (total board power), just that the card's TDP was 150 W (not the same thing as power consumption).

Anyway, if you're so concerned about going over the PCIe specification, you can turn on the toggle to limit the power draw to 150 W.
The horror will never end...
They have a fix, people... relax. Or don't buy the damn card if you're so concerned. Can we please just move along?
#56
Air
Is it really necessary for AMD to push its reference designs so close to the limit?

The 290X shipped with a jet-engine cooler that couldn't keep temperatures below 90 °C; the RX 480 ships with a 6-pin when it needs an 8-pin, and goes out of spec.

Is the bad press they get from this stuff really worth the savings? :confused:
#57
TRWOV
Some people are saying that this will increase the load on the 6-pin-fed VRM phases and damage them. My gosh, the VRMs are overspecified: each phase can provide 40 W (240 W total), so even if the PCIe-fed phases are underused, the 6-pin-fed phases can take the heat. Heck, AMD could disable one of the phases entirely and the card would still get more than enough power and stay within the VRM's limits.

AMD was very stupid to let this happen, even if the fix was trivial. While I don't think it will impact RX 480 sales a lot, it will just give NVIDIA more fodder for when the GTX 1060 launches. They had a home run in their hands and let it slip away.

I've already bought one; it should ship on the 12th when Amazon gets more stock. I guess I'll also watch eBay for a cheap used one; I'm pretty sure some people will dump their 480s after the 1060 becomes available.
#58
HD64G
TRWOV: Some people are saying that this will increase the load on the 6-pin-fed VRM phases and damage them. My gosh, the VRMs are overspecified: each phase can provide 40 W (240 W total), so even if the PCIe-fed phases are underused, the 6-pin-fed phases can take the heat. Heck, AMD could disable one of the phases entirely and the card would still get more than enough power and stay within the VRM's limits.

AMD was very stupid to let this happen, even if the fix was trivial. While I don't think it will impact RX 480 sales a lot, it will just give NVIDIA more fodder for when the GTX 1060 launches. They had a home run in their hands and let it slip away.

I've already bought one; it should ship on the 12th when Amazon gets more stock. I guess I'll also watch eBay for a cheap used one; I'm pretty sure some people will dump their 480s after the 1060 becomes available.
Who could be so stupid as to sell his RX 480 for a 1060? If he were a green fanboy, he wouldn't have bought it in the first place. If he isn't a fanboy, he'll keep it simply because it's the best value-for-money GPU in many years. :)
#59
TRWOV
I'm sure some will. Perception is reality, as they say. Right now the perception is that AMD is selling hardware that could damage your system... of course, people like you and me know better, but that's not the case for most people.

Anyway, I'm sure it won't affect AMD much, but why give ammunition to your critics in the first place?
#60
$ReaPeR$
TRWOV: Some people are saying that this will increase the load on the 6-pin-fed VRM phases and damage them. My gosh, the VRMs are overspecified: each phase can provide 40 W (240 W total), so even if the PCIe-fed phases are underused, the 6-pin-fed phases can take the heat. Heck, AMD could disable one of the phases entirely and the card would still get more than enough power and stay within the VRM's limits.

AMD was very stupid to let this happen, even if the fix was trivial. While I don't think it will impact RX 480 sales a lot, it will just give NVIDIA more fodder for when the GTX 1060 launches. They had a home run in their hands and let it slip away.

I've already bought one; it should ship on the 12th when Amazon gets more stock. I guess I'll also watch eBay for a cheap used one; I'm pretty sure some people will dump their 480s after the 1060 becomes available.
HD64G: Who could be so stupid as to sell his RX 480 for a 1060? If he were a green fanboy, he wouldn't have bought it in the first place. If he isn't a fanboy, he'll keep it simply because it's the best value-for-money GPU in many years. :)
TRWOV: I'm sure some will. Perception is reality, as they say. Right now the perception is that AMD is selling hardware that could damage your system... of course, people like you and me know better, but that's not the case for most people.

Anyway, I'm sure it won't affect AMD much, but why give ammunition to your critics in the first place?
You can always count on the stupidity of people... ;)
#61
Fluffmeister
Air: Is it really necessary for AMD to push its reference designs so close to the limit?

The 290X shipped with a jet-engine cooler that couldn't keep temperatures below 90 °C; the RX 480 ships with a 6-pin when it needs an 8-pin, and goes out of spec.

Is the bad press they get from this stuff really worth the savings? :confused:
Wonders never cease, my friend.
#62
D007
buggalugs: omg you're annoying.

The toggle is off by default, so reviewers would have to re-run the card in a non-default setting, which they never do. But no doubt they will this time because of the beat-up around this issue.

Like I said from the start, this whole issue is a beat-up. AMD is saying what I said: they are confident the power draw will not damage hardware.

Hardware specs are waaaay on the conservative side. That's why we can overclock the crap out of our computers and not do damage. The PCI-E spec is designed to handle more than the 75 W reference spec. Much more. Same with the 6-pin and 8-pin plugs; they can handle double the power of the spec.

If people think an extra 10% or 15% is going to destroy a motherboard, they have no idea how things work.
Get over it...
#63
Melvis
cryohellinc: Glad for you Red Team owners, hopefully no more burned mobos!
Can you link me to a thread or video showing someone who has actually had a burnt-out mobo? I haven't seen any yet; people talk about it, but I still haven't seen any evidence to support it.
#64
EarthDog
cryohellinc: Glad for you Red Team owners, hopefully no more burned mobos!
Were any burned in the first place???
#65
nem..
"Already they have begun to see cases of damaged motherboards"

Really it is not, so you have to grab it with tweezers, and in some cases of those who have caused so much uproar in the network, then we have seen (when reviewing pictures of the burnt motherboard) riding three graphics cards providing only 2 PCI Express slots with adapters extensors-because I was doing mining bitcoins 24 hours seven days a week, and when installing new graphics board was loaded. This case is one of the most publicized, but it was really out of spec to start was the user.

But of course that did not stop him putting together all possible disturbance in the network, although the logical thing would be that high demand for energy that should have started by buying a plate with three PCI Express slots for the three graphs mounted.

:/
#66
cadaveca
My name is Dave
Melvis: Can you link me to a thread or video showing someone who has actually had a burnt-out mobo? I haven't seen any yet; people talk about it, but I still haven't seen any evidence to support it.
Fiery (the AIDA64 dev) posted a link in an earlier thread.
#67
bug
TRWOV: The PSU cables can withstand the current. Take a look at your PSU's 6+2-pin cable: the extra 2 pins don't even have their own wires; they are just bridged from the existing ones, so when you use it for an 8-pin card, only 6 wires are connected directly to the PSU.

The 6-pin connector can safely draw an extra 16 W without catching fire or anything.

Also, AMD didn't say that the card would draw 150 W (total board power), just that the card's TDP was 150 W (not the same thing as power consumption).

Anyway, if you're so concerned about going over the PCIe specification, you can turn on the toggle to limit the power draw to 150 W.
Maybe they can, but the PCIe spec still says 75 W from the slot and another 75 W from the 6-pin connector, so the card would still be running outside the spec. Of course, this is more of a PCI-SIG failure, because they certified a card that's not actually compliant, but what do they care? They're not the ones selling video cards, so they're not the ones catching flak.
#68
Brusfantomet
revin: o_O Confused :confused: So when it (the SIG) revised the spec up from 25 W to allow 75 W total (12 V + 3.3 V), where does it allow the 12 V draw to exceed 75 W, not counting the 3.3 V that was already there?
From what I've seen, this is the only card found to have an equal draw from both the slot and the cable. [BTW, I really like this card!]
So if the spec has been revised to allow 150 W through the slot, why can't that be found anywhere?
If the connector can handle 75 W across 12 V + 3.3 V, it can handle more at 12 V alone.

The reason is this: electrical transmission loss follows P = I² × R, where R is the resistance in the connector (the same whether the pin carries 3.3 V or 12 V) and I is the current. For the same power transferred, the current at 3.3 V is 12/3.3 ≈ 3.6 times higher than at 12 V, so the loss in a 3.3 V pin is about 3.6² ≈ 13 times higher than in a 12 V pin.

The 3.3 V pins cannot be used as 12 V pins, but without the added heat from the 3.3 V pins, the 12 V pins would have more power headroom, as the whole connector would run cooler.
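A quick back-of-the-envelope check of that I²R argument (the contact resistance here is an assumed, illustrative value, not a spec figure):

```python
# Worked example of the I^2 * R loss argument above.
# The contact resistance is an assumed illustrative value, not a spec number.
P_TRANSFER_W = 10.0   # power moved through a single pin
R_CONTACT = 0.005     # ohms per pin contact (assumption; same for both voltages)

for volts in (3.3, 12.0):
    amps = P_TRANSFER_W / volts          # current needed at this voltage
    loss_w = amps ** 2 * R_CONTACT       # heat dissipated in the contact
    print(f"{volts:>4} V: {amps:.2f} A, {loss_w * 1000:.1f} mW lost in the pin")

# 3.3 V: 3.03 A, ~45.9 mW; 12 V: 0.83 A, ~3.5 mW.
# Ratio = (12 / 3.3)**2 ≈ 13.2, matching the ~13x figure above.
```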
#69
McSteel
Brusfantomet: If the connector can handle 75 W across 12 V + 3.3 V, it can handle more at 12 V alone.

The reason is this: electrical transmission loss follows P = I² × R, where R is the resistance in the connector (the same whether the pin carries 3.3 V or 12 V) and I is the current. For the same power transferred, the current at 3.3 V is 12/3.3 ≈ 3.6 times higher than at 12 V, so the loss in a 3.3 V pin is about 3.6² ≈ 13 times higher than in a 12 V pin.

The 3.3 V pins cannot be used as 12 V pins, but without the added heat from the 3.3 V pins, the 12 V pins would have more power headroom, as the whole connector would run cooler.
The official limit for 3.3 V is 9.9 W, or up to 3 A. The 3.3 V supply line has 4 pins at its disposal, thus 0.75 A per pin maximum.
The 12 V limit is actually officially 66 W, or 5.5 A. Given five pins for power transmission, this is 1.1 A per pin maximum.

Also, take a look at your formula again; it correctly says dissipated power = current squared times resistance.
The 12 V line is allowed more current per pin than the 3.3 V line is...
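For completeness, the per-pin arithmetic above checks out (a trivial verification using the spec numbers quoted in the post):

```python
# Verify the per-pin current budgets quoted above (PCIe slot power pins).
specs = {
    "3.3 V": {"watts": 9.9,  "volts": 3.3,  "pins": 4},
    "12 V":  {"watts": 66.0, "volts": 12.0, "pins": 5},
}
for rail, s in specs.items():
    amps = s["watts"] / s["volts"]
    print(f"{rail}: {amps:.1f} A total, {amps / s['pins']:.2f} A per pin")
# 3.3 V: 3.0 A total, 0.75 A per pin
# 12 V: 5.5 A total, 1.10 A per pin
```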
#70
sutyi
Fiery: I've never had a PC full of dust where the dust caused a PCIe slot to get burnt and become inoperable. Dust is not good, I grant you that, but it rarely causes hardware failures, except when it fills up a heatsink and causes overheating.
That might all be true, but you might want to ask yourself how this individual handles his PC components. I'm guessing with not much care at all, by the looks of it...
I'm sorry, but when the inside of your computer looks like a cat lives in it, with the accompanying fur, piss, and god knows what else deposited on your motherboard, then any sort of HW failure claim is nullified in my eyes.
#71
Brusfantomet
McSteel: The official limit for 3.3 V is 9.9 W, or up to 3 A. The 3.3 V supply line has 4 pins at its disposal, thus 0.75 A per pin maximum.
The 12 V limit is actually officially 66 W, or 5.5 A. Given five pins for power transmission, this is 1.1 A per pin maximum.

Also, take a look at your formula again; it correctly says dissipated power = current squared times resistance.
The 12 V line is allowed more current per pin than the 3.3 V line is...
The loss in one 3.3 V pin will therefore be about 13 = (3.6 × 3.6) times higher than in a 12 V pin at the same power transferred.

I was mostly just thinking it through while typing, so what is there to look at?
Even with the 12 V pins being allowed more current than a 3.3 V pin, I still think this slightly-too-high power draw problem is blown out of proportion.
#73
revin
Brusfantomet: If the connector can handle 75 W across 12 V + 3.3 V, it can handle more at 12 V alone.
Source? It's already been raised from 25 W to 75 W, so that says the connector is maxed out now...
Also, from reading through the spec, they only allowed that because there is better convection cooling in newer PCs.
#74
TRWOV
Just read the link okidna posted. I guess we can put all of this behind us and enjoy our 480s :D

I plan to enable the compatibility mode, undervolt the core, and overclock the memory as much as possible.
#75
sutyi
Brusfantomet: The loss in one 3.3 V pin will therefore be about 13 = (3.6 × 3.6) times higher than in a 12 V pin at the same power transferred.

I was mostly just thinking it through while typing, so what is there to look at?
Even with the 12 V pins being allowed more current than a 3.3 V pin, I still think this slightly-too-high power draw problem is blown out of proportion.
Depends, really. Even on a relatively new budget motherboard, you should have been OK even without the latest driver fix. On older budget boards with cheaper components, however, especially flimsy PCIe connectors, there might be concerns. Having a cheap connector work at 115% of its rated capacity, especially for prolonged periods, is probably not a good idea.

This whole PCIe power malarkey raised two points I think are valid:
  1. PCI-SIG certification methods need to be looked at and probably reworked.
  2. Why on earth isn't there a limit baked into motherboards, so add-in boards (VGA, sound card, etc.) can't draw dangerous amounts of current through slots/pins that are not rated for it?