Wednesday, July 6th 2016

AMD Updates its Statement on Radeon RX 480 Power Draw Controversy

AMD today provided an update on how it is addressing the Radeon RX 480 power-draw controversy. The company stated that it assembled a worldwide team of developers to put together a driver update that lowers power draw from the PCIe slot, with minimal performance impact. This driver will be labeled Radeon Software Crimson Edition 16.7.1, and will be released within the next two days (before the weekend). In addition to that default change in power distribution, the driver adds an optional "Compatibility" toggle in the Global Settings of the Radeon Settings app, disabled by default, which reduces the card's total power draw. AMD is thus giving users a fix while not making a section of users feel that the card has been gimped by a driver update. The driver will also improve performance in specific games by up to 3 percent.

The statement by AMD follows.

We promised an update today (July 5, 2016) following concerns around the Radeon RX 480 drawing excess current from the PCIe bus. Although we are confident that the levels of reported power draws by the Radeon RX 480 do not pose a risk of damage to motherboards or other PC components based on expected usage, we are serious about addressing this topic and allaying outstanding concerns. Towards that end, we assembled a worldwide team this past weekend to investigate and develop a driver update to improve the power draw. We're pleased to report that this driver, Radeon Software 16.7.1, is now undergoing final testing and will be released to the public in the next 48 hours.

In this driver we've implemented a change to address power distribution on the Radeon RX 480 - this change will lower current drawn from the PCIe bus.

Separately, we've also included an option to reduce total power with minimal performance impact. Users will find this as the "compatibility" UI toggle in the Global Settings menu of Radeon Settings. This toggle is "off" by default.

Finally, we've implemented a collection of performance improvements for the Polaris architecture that yield performance uplifts in popular game titles of up to 3%. These optimizations are designed to improve the performance of the Radeon RX 480, and should substantially offset the performance impact for users who choose to activate the "compatibility" toggle.

AMD is committed to delivering high quality and high performance products, and we'll continue to provide users with more control over their product's performance and efficiency. We appreciate all the feedback so far, and we'll continue to bring further performance and performance/W optimizations to the Radeon RX 480.

77 Comments on AMD Updates its Statement on Radeon RX 480 Power Draw Controversy

#26
RejZoR
D007I read twice in that statement that this will affect performance.. ATI users will not be happy about that..
You read it twice and still missed the 3% boost part... Read it for the 3rd time :P
Posted on Reply
#27
nemesis.ie
D007I read twice in that statement that this will affect performance.. ATI users will not be happy about that..
But one of the statements about performance was that it will INCREASE, why would folks not be happy about that? ;)
Posted on Reply
#28
D007
RejZoRYou read it twice and still missed the 3% boost part... Read it for the 3rd time :p
I only needed to read it once, really, because I understand word play and marketing attempts to hide things behind clever wording.. lol

I read 3%.. 3% of what? Seems deliberately vague.
So basically they get to lose what % to gain 3%? lol..
I think this is going to hurt the card.


"In this driver we've implemented a change to address power distribution on the Radeon RX 480 - this change will lower current drawn from the PCIe bus.
Separately, we've also included an option to reduce total power with minimal performance impact."

So am I misunderstanding this?
That to me says, it WILL affect performance if you use that toggle..
Otherwise, why stress "with minimal performance impact"?

"should substantially offset the performance impact for users who choose to activate the "compatibility" toggle."

Clear as day.. Using that toggle will affect performance.
"substantially offset"? How can you get a "substantial" number from 3%?..
Seems they are referring to something else, again intentionally vague..

Something just doesn't add up to me.. Why put something in that will decrease performance? Seems like a necessary evil to offset the overdraw problem that they can't fully fix.

My point being. It could increase 3%, but drop 6% from the "fixes".
So potentially, you lost 3% and gained nothing. I hope that's not the case but I don't trust vague explanations.
Posted on Reply
#29
bug
RejZoRYou read it twice and still missed the 3% boost part... Read it for the 3rd time :p
There's nothing to read again, the fix will lower the performance. The 3% they mention is just optimizations that would have come with the next driver anyway (and it only applies to "popular game titles", whatever that means). 3% is something you have a hard time telling in a benchmark; under normal usage you can't see 3%.
But let's not burn AMD at the stake just yet and wait to see what the impact of their fix is first.
Posted on Reply
#30
ensabrenoir
...AMD would not have invested time and resources into a non-issue, so something was there that could affect not just a few, but all 480 users. The 3% performance increase and the memory unlock were a little sumtin, sumtin for your troubles. Good to see them address and handle it quickly.
Posted on Reply
#31
RejZoR
bugThere's nothing to read again, the fix will lower the performance. The 3% they mention is just optimizations that would have come with the next driver anyway (and it only applies to "popular game titles", whatever that means). 3% is something you have a hard time telling in a benchmark; under normal usage you can't see 3%.
But let's not burn AMD at the stake just yet and wait to see what the impact of their fix is first.
And the 1% or so you'll lose because of this "tweak" will somehow be something you'll notice anywhere? But those 3% are nothing...
Posted on Reply
#32
nemesis.ie
bugThere's nothing to read again, the fix will lower the performance..
Again, it only says the OPTIONAL power toggle will lower performance.

We still do not know how the base fix will improve the power distribution, and the 3% performance gain in "some titles" should be on top of the current performance if you choose not to enable the lower power mode.
Posted on Reply
#33
D007
nemesis.ieAgain, it only says the OPTIONAL power toggle will lower performance.

We still do not know how the base fix will improve the power distribution, and the 3% performance gain in "some titles" should be on top of the current performance if you choose not to enable the lower power mode.
That's exactly my concern.. They spewed out a bunch of words but did not clarify the most important point.. What will the base loss be?
Driver improvements come on all cards, those are secondary and not even worth mentioning imo..
That's a diversionary tactic to draw attention away from the base loss of performance these cards will likely see...
When they use words like "Substantial" you may need to be concerned... Because that generally means something took a substantial hit.
And now we wait for benches.. Fingers crossed. I hope it's not bad for ATI users.
Posted on Reply
#34
laszlo
All this debate is irrelevant.

Once the new driver is out, I think W1zzard will re-test the card and we'll know it all: power draw from PCIe, performance loss/gain...
Posted on Reply
#35
McSteel
ZoneDymoI don't even understand this problem to begin with, why would a motherboard even be allowed to give so much current via a PCIe slot that it could destroy itself?
Why would there not be a hard limit on that?
ArdWarSigh...

The power pin is directly connected to the MB power plane and ground plane; there's nothing to limit it.

It's analogous to overloading a power cord. There's nothing that prevents you from loading a 24 AWG wire with 100 amperes of current.

There are fuses, breakers, etc. But the point of the PCI-e specification in the first place is to ensure that no one exceeds the limit, so there's no need for the system engineer to add unnecessary (i.e. avoidable) components. That reduces costs, simplifies designs and compliance certifications, and fewer components mean higher reliability (if everything is behaving as intended).
ZoneDymoAh thanks for the information.
I still find it an odd choice on the motherboard's part though; I really doubt an extra fuse would make any difference, and there are plenty of safeguards in other areas that I would then consider equally unneeded, but sure.
Thanks again.
Even though @ZoneDymo's question has been answered for the most part, I just need to add that implementing a static "brickwall" limit would cause a lot of grief to the GPU/VGA makers (and the consumers), because of naturally occurring spikes in power draw in multiple scenarios (switching "noise" and dynamic adjustments to transient load changes are the most prominent mechanisms).

If a fuse was used, it would simply blow at some point and would need to be replaced. Apart from interrupting usage (gaming or otherwise), you'd have to have a stock of fuses to hand. Unacceptable.

The other option is OCP (over-current protection), which is already present for USB ports on better motherboards, to prevent damage from shorted pins on a damaged USB port, for example. But USB is easier to deal with as there's not much room for variance in power draw, plus you can individually switch off certain hubs as opposed to the entire system, thus leaving some of the unaffected USB ports functional and the system running.

An OCP circuit for the PCI-E slot power would need to be very robust, able to allow for transient spikes yet react to subtle sustained overdraws, meaning it would have to be a DSP plus a precise measurement tool, which even $600+ motherboards don't have for CPU VRM monitoring (hence the use of multimeters and the exposed measurement points for OC-ers).

The whole issue was raised exactly because the motherboard is obligated to deliver as much current as is asked of it, until it gives out and breaks. Or until something else relying on +12V being supplied by the motherboard stops accepting the lowered voltage due to increased current draw, eventually.

I feel I need to stress that the issue as it stands, without being fixed, wouldn't cause any damage in the short term, that's for sure (assuming only one card is used). With multiple cards and no additional power connector on the board itself, or given enough time with a single card (perhaps on the order of years of usage; it's really hard to say, but it wouldn't be less than a couple of months assuming no preexisting problems), issues could realistically arise.
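To make the "tolerate spikes, catch sustained overdraw" idea above concrete, here is a minimal Python sketch of such a check; the slot limit, trip threshold, sample rate and the overcurrent_tripped helper are all illustrative assumptions, not anything a real board or driver actually implements:

# Hypothetical spike-tolerant over-current check: brief excursions above the
# slot limit are ignored, sustained overdraw trips the protection.
SLOT_LIMIT_A = 5.5        # ~66 W at 12 V, roughly the slot's 12 V share (assumed)
TRIP_CHARGE_AS = 2.0      # amp-seconds of accumulated excess before tripping (assumed)
SAMPLE_PERIOD_S = 0.01    # 100 Hz sampling, purely illustrative

def overcurrent_tripped(samples_a):
    """Leaky integrator: accumulate only the excess above the limit and
    bleed it back off whenever the current drops under the limit."""
    excess_charge = 0.0
    for current in samples_a:
        excess_charge = max(0.0, excess_charge + (current - SLOT_LIMIT_A) * SAMPLE_PERIOD_S)
        if excess_charge > TRIP_CHARGE_AS:
            return True    # sustained overdraw -> cut power / flag a fault
    return False           # transients alone never accumulate enough "charge"

# A short 9 A spike passes, a prolonged 7 A overdraw does not:
print(overcurrent_tripped([5.0] * 50 + [9.0] * 5 + [5.0] * 50))   # False
print(overcurrent_tripped([7.0] * 1000))                          # True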
Posted on Reply
#36
RejZoR
D007That's exactly my concern.. They spewed out a bunch of words but did not clarify the most important point.. What will the base loss be?
Driver improvements come on all cards, those are secondary and not even worth mentioning imo..
That's a diversionary tactic to draw attention away from the base loss of performance these cards will likely see...
When they use words like "Substantial" you may need to be concerned... Because that generally means something took a substantial hit.
And now we wait for benches.. Fingers crossed. I hope it's not bad for ATI users.
Is it really "loss" when it was working harder to begin with? They are just bringing it to what they've been advertising the entire time...
Posted on Reply
#37
ArdWar
McSteelAn OCP circuit for the PCI-E slot power would need to be very robust, able to allow for transient spikes yet react to subtle sustained overdraws, meaning it would have to be a DSP plus a precise measurement tool, which even $600+ motherboards don't have for CPU VRM monitoring (hence the use of multimeters and the exposed measurement points for OC-ers).
If an OCP is needed, a simple shunt current monitor with some filtering and a MOSFET for switching can do the work, no need for a fancy DSP. But that's a couple more $$, a couple mV of voltage drop, a bit more power dissipated, and by the current PCI spec the burden to implement it is on the expansion board (hence why it's AMD's problem, not a MB problem).
McSteelI feel I need to stress that the issue as it stands, without being fixed, wouldn't cause any damage in the short term, that's for sure (assuming only one card is used). With multiple cards and no additional power connector on the board itself, or given enough time with a single card (perhaps on the order of years of usage; it's really hard to say, but it wouldn't be less than a couple of months assuming no preexisting problems), issues could realistically arise.
Exactly! The hordes of people trying to see if their MB will burn/brick/break/crash/whatever when paired with this card always gave me a chuckle. If anything, the problem will only show over time. A new card and motherboard with shiny slots and connectors is actually the best-case scenario. A valid test would be something like an accelerated aging test, where the card is tested with oxidized contact points.
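Rough numbers for the shunt idea above, as a sketch with assumed values (a 1 mOhm sense resistor and the 12 V share of the 75 W slot budget; neither figure is from a real board design):

SHUNT_OHMS = 0.001                        # assumed 1 mOhm current-sense resistor
SLOT_12V_BUDGET_W = 66.0                  # roughly the 12 V portion of the 75 W slot limit
current_a = SLOT_12V_BUDGET_W / 12.0      # ~5.5 A when the slot is at its limit
drop_mv = current_a * SHUNT_OHMS * 1000.0
dissipation_mw = current_a ** 2 * SHUNT_OHMS * 1000.0
print(f"{current_a:.2f} A -> {drop_mv:.1f} mV dropped, {dissipation_mw:.0f} mW burned in the shunt")
# ~5.5 A -> ~5.5 mV of drop and ~30 mW of heat: the "couple mV" and small
# extra dissipation mentioned above.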
Posted on Reply
#38
W1zzard
ZoneDymoI don't even understand this problem to begin with, why would a motherboard even be allowed to give so much current via a PCIe slot that it could destroy itself?
What you are asking is similar to "why would a rubber band be so elastic that it can snap when I pull on it?"
Posted on Reply
#39
McSteel
ArdWarIf an OCP is needed, a simple shunt current monitor with some filtering and a MOSFET for switching can do the work, no need for a fancy DSP. But that's a couple more $$, a couple mV of voltage drop, a bit more power dissipated, and by the current PCI spec the burden to implement it is on the expansion board (hence why it's AMD's problem, not a MB problem).
I was thinking more along the lines of an LM339 comparator with a slew rate limiter and a small ARM M0 with some memory to serve as an integrator, so as to have more than 1-bit quantization for better recognition and tolerance of large transients and noisy VRMs... But yeah, that would be overdoing it on a German + Japanese level I suppose.

Either way, it's much more prudent and pragmatic to simply adhere to the spec.
Posted on Reply
#40
bug
RejZoRAnd the 1% or so you'll lose because of this "tweak" will somehow be something you'll notice anywhere? But those 3% are nothing...
How do you know it's 1%? And why are you ignoring the fact that if you lower the power draw, you lose performance across the board, while those 3% are only available in a few titles?
Is it so hard to stop speculating and wait a few days for a retest instead?
nemesis.ieAgain, it only says the OPTIONAL power toggle will lower performance.

We still do not know how the base fix will improve the power distribution, and the 3% performance gain in "some titles" should be on top of the current performance if you choose not to enable the lower power mode.
Yeah, well, when I have to choose between slightly less performance and running my motherboard outside specs, I have no option. But maybe it's just me.
Posted on Reply
#41
RejZoR
1% is my prediction. I mean, 150W vs 166W at its peak? Do you really think 16W of peak power envelope will turn into any significant performance difference? Especially since gains start to flatline once you are reaching the peaks, be it voltage, frequency or power consumption. For 150mV more you can get 200MHz; for the next 200MHz you could need 500mV. Meaning pumping more power into the GPU doesn't mean you'll exponentially gain more performance. And the same goes for the "loss": they can cut tons of power at minimal loss down to a certain point; from then on, the loss becomes larger. It goes both ways.

They could just enable this by default, work a bit harder on drivers and negate the "loss" entirely by introducing huge gains. Not sure why they even made it optional. But I guess they want to give users options, which is fine as well.
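A rough worked version of the scaling point RejZoR is making, using the common dynamic-power approximation P ~ f x V^2; the clock/voltage pairs are made up for illustration and are not RX 480 data:

def rel_power(freq_mhz, volts):
    # relative dynamic power, P ~ f * V^2 (constant factors cancel out)
    return freq_mhz * volts ** 2

low_point  = (1150, 1.000)   # MHz, V (assumed)
high_point = (1266, 1.150)   # MHz, V (assumed, "stock boost"-ish)

low = rel_power(*low_point)
high = rel_power(*high_point)
extra_clock = (high_point[0] / low_point[0] - 1) * 100
extra_power = (high / low - 1) * 100
print(f"last {extra_clock:.0f}% of clock costs ~{extra_power:.0f}% more power")
# With these made-up numbers the last ~10% of clock costs ~46% more power,
# which is why trimming a handful of watts off the top barely moves performance.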
Posted on Reply
#42
GhostRyder
Well... Interesting solution for those who are really worried about this. I doubt most will use it though; at least they didn't completely lock the voltage down lower.
Posted on Reply
#43
bug
RejZoR1% is my prediction. I mean, 150W vs 166W at its peak? Do you really think 16W of peak power envelope will turn into any significant performance difference? Especially since gains start to flatline once you are reaching the peaks, be it voltage, frequency or power consumption. For 150mV more you can get 200MHz; for the next 200MHz you could need 500mV. Meaning pumping more power into the GPU doesn't mean you'll exponentially gain more performance. And the same goes for the "loss": they can cut tons of power at minimal loss down to a certain point; from then on, the loss becomes larger. It goes both ways.

They could just enable this by default, work a bit harder on drivers and negate the "loss" entirely by introducing huge gains. Not sure why they even made it optional. But I guess they want to give users options, which is fine as well.
I agree.
However, when MSI and Asus sent samples for review running at tens of MHz higher than the retail cards, they were called out for it. By going outside specs, AMD is essentially doing the same, so they deserve the same treatment.
And now, let's just sit back and see how this unfolds.
Posted on Reply
#44
RejZoR
Not quite. There, only review samples had higher clocks. Here, they all have them higher...
Posted on Reply
#45
buggalugs
bugThere's nothing to read again, the fix will lower the performance. The 3% they mention is just optimizations that would have come with the next driver anyway (and it only applies to "popular game titles", whatever that means). 3% is something you have a hard time telling in a benchmark; under normal usage you can't see 3%.
But let's not burn AMD at the stake just yet and wait to see what the impact of their fix is first.
omg you're annoying.

The toggle is off by default, so reviewers would have to re-run the card in a non-default setting, which they never do. But no doubt they will this time because of the beat-up around this issue.

Like I said from the start, this whole issue is a beat-up. AMD is saying what I said: they are confident the power draw will not damage hardware.

Hardware specs are waaaay on the conservative side. That's why we can overclock the crap out of our computers and not do damage. The PCI-E slot is designed to handle more than the 75 W reference spec. Much more. Same with the 6-pin and 8-pin plugs; they can handle double the power of the spec.

If people think an extra 10% or 15% is going to destroy a motherboard, they have no idea how things work.
Posted on Reply
#46
Steevo
I haven't seen any actual reports of damaged or burned, much less unstable, motherboards caused by 480s, so I am genuinely curious how many people are going to download the driver, turn off compatibility mode and still whine about how terrible it could be, while overvolting their processor, which consumes way more power than the 480, through a few wires in the board...... I added sinks to the VRMs that weren't covered on my old board, and even a few on the board itself, due to how hot it would get under stress with 1.5v going through a couple of traces.
buggalugsomg you're annoying.

The toggle is off by default, so reviewers would have to re-run the card in a non-default setting, which they never do. But no doubt they will this time because of the beat-up around this issue.

Like I said from the start, this whole issue is a beat-up. AMD is saying what I said: they are confident the power draw will not damage hardware.

Hardware specs are waaaay on the conservative side. That's why we can overclock the crap out of our computers and not do damage. The PCI-E slot is designed to handle more than the 75 W reference spec. Much more. Same with the 6-pin and 8-pin plugs; they can handle double the power of the spec.

If people think an extra 10% or 15% is going to destroy a motherboard, they have no idea how things work.
Many boards can limit or raise the PCIe power draw as well, and I have had options to provide up to 150W per slot to the X16 slots.


Plus, the board is designed to provide the 75W to ALL slots, so the same way we daisy-chained HDD power to graphics cards and fans, the board daisy-chains power, meaning in some cases they are rated to provide a minimum of 4x 75W or more.
Posted on Reply
#47
nemesis.ie
bugYeah, well, when I have to choose between slightly less performance and running my motherboard outside specs, I have no option. But maybe it's just me.
Um, no. The "base fix" (adjusting where the power comes from) should be the thing that stops the motherboard power being out of spec.

How many times does it need to be repeated that (according to the OP) this is separate from the "low power mode" option?
Posted on Reply
#48
TRWOV
The fix comes in 2 parts:

1) a fix that will limit PCIe slot power to <75W. The card will draw the rest from the 6-pin connector. No performance loss should occur.

2) a toggle that will limit the power draw to 150W total. This will likely lower performance, but it's manageable if you undervolt the card.

Thankfully for AMD the VRM controller allows for this fine-grained control, otherwise they would be in a lot of trouble. (Rough sketch of the split below.)
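A minimal sketch of the split described above; the split_power helper, the 75 W cap and the 150 W clamp are assumptions about the behaviour, not AMD's actual driver code:

SLOT_CAP_W = 75.0        # what the base fix reportedly holds the slot to

def split_power(total_board_w, compatibility_mode=False):
    if compatibility_mode:
        total_board_w = min(total_board_w, 150.0)   # toggle: clamp total board power
    slot_w = min(total_board_w, SLOT_CAP_W)         # base fix: never exceed the slot cap
    sixpin_w = total_board_w - slot_w               # the remainder shifts to the 6-pin
    return slot_w, sixpin_w

print(split_power(165.0))                           # (75.0, 90.0) -> 6-pin above its nominal 75 W
print(split_power(165.0, compatibility_mode=True))  # (75.0, 75.0) -> both rails within 75 W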
Posted on Reply
#49
bug
TRWOVThe fix comes in 2 parts:

1) a fix that will limit PCIe slot power to <75W. The card will draw the rest from the 6-pin connector. No performance loss should occur.

2) a toggle that will limit the power draw to 150W total. This will likely lower performance, but it's manageable if you undervolt the card.

Thankfully for AMD the VRM controller allows for this fine-grained control, otherwise they would be in a lot of trouble.
I guess math isn't your strong point. The card currently consumes more than the compliant 150W (~165W). If you limit the PCIe slot input to <75W, then you end up drawing >75W from the 6-pin connector, which is still outside the spec. So instead of frying your motherboard, you now get to fry your PSU.
The only sane thing to do is make the thing draw 150W as advertised.
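bug's objection in rough numbers on the 12 V rail (the ~165 W total is the ballpark quoted above; the per-connector split is a simplifying assumption):

TOTAL_BOARD_W = 165.0            # approximate stock power draw quoted in the thread
slot_w = 75.0                    # slot capped at spec by the base fix
sixpin_w = TOTAL_BOARD_W - slot_w
print(f"6-pin: {sixpin_w:.0f} W -> {sixpin_w / 12.0:.1f} A vs {75.0 / 12.0:.2f} A nominal")
# 90 W -> 7.5 A against a nominal 6.25 A: the connector has physical headroom,
# but on paper the 6-pin becomes the out-of-spec rail instead of the slot.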
Posted on Reply
#50
bug
RejZoRNot quite. There, only review samples had higher clocks. Here, they all have them higher...
You mean, all cards are advertised as 150W parts while drawing ~165W? And that's ok?
Posted on Reply