Wednesday, July 6th 2016
AMD Updates its Statement on Radeon RX 480 Power Draw Controversy
AMD today provided an update on how it is addressing the Radeon RX 480 power-draw controversy. The company stated that it has assembled a worldwide team of developers to put together a driver update that lowers power draw from the PCIe slot with minimal performance impact. The driver, labeled Radeon Software Crimson Edition 16.7.1, will be released within the next two days (before the weekend). The fix takes the form of a "Compatibility" toggle in the Global Settings of the Radeon Settings app, which is disabled by default. AMD is thus giving users a fix while, at the same time, not making another section of users feel the card has been gimped by a driver update. The drivers will also improve game-specific performance by up to 3 percent. The statement by AMD follows.
We promised an update today (July 5, 2016) following concerns around the Radeon RX 480 drawing excess current from the PCIe bus. Although we are confident that the levels of reported power draws by the Radeon RX 480 do not pose a risk of damage to motherboards or other PC components based on expected usage, we are serious about addressing this topic and allaying outstanding concerns. Towards that end, we assembled a worldwide team this past weekend to investigate and develop a driver update to improve the power draw. We're pleased to report that this driver, Radeon Software 16.7.1, is now undergoing final testing and will be released to the public in the next 48 hours.
In this driver we've implemented a change to address power distribution on the Radeon RX 480 - this change will lower current drawn from the PCIe bus.
Separately, we've also included an option to reduce total power with minimal performance impact. Users will find this as the "compatibility" UI toggle in the Global Settings menu of Radeon Settings. This toggle is "off" by default.
Finally, we've implemented a collection of performance improvements for the Polaris architecture that yield performance uplifts in popular game titles of up to 3%. These optimizations are designed to improve the performance of the Radeon RX 480, and should substantially offset the performance impact for users who choose to activate the "compatibility" toggle.
AMD is committed to delivering high quality and high performance products, and we'll continue to provide users with more control over their product's performance and efficiency. We appreciate all the feedback so far, and we'll continue to bring further performance and performance/W optimizations to the Radeon RX 480.
77 Comments on AMD Updates its Statement on Radeon RX 480 Power Draw Controversy
From what I've seen, this is the only card found to have an equal draw from both the slot and the cable. [BTW, I really like this card!]
So if the spec has been revised to allow 150 W through the slot, why can't that be found?
A 6-pin GPU connector can provide up to 200 W of power, so even if the 480 drew no power from the PCIe slot, the connector would still be enough to run the card without damaging your power plug or PSU.
The 12 V rail can provide all the power the card needs through that 6-pin connector alone.
I for one would try a different solution:
Why not design a custom PCB for this card that routes all chip power from the 6- or 8-pin connector and excludes slot power?
Asus, MSI, or Sapphire could do that, and instead of a 6-pin go for an 8-pin for better power delivery, since an 8-pin is rated at 150 W per connector and at most it can go to 300 W.
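As a quick sanity check on those connector numbers, here's a small Python sketch using the nominal PCIe CEM budget figures (the function and source names are just illustrative; as the post above argues, the physical connectors can often tolerate more than the spec values):

```python
# Nominal PCIe CEM power budgets in watts (spec figures only).
BUDGETS_W = {
    "slot_12v": 66.0,  # 5.5 A on the slot's 12 V rail
    "slot_3v3": 9.9,   # 3 A on the slot's 3.3 V rail
    "6pin": 75.0,
    "8pin": 150.0,
}

def board_budget(*sources: str) -> float:
    """Sum the in-spec power budget for a set of supply sources."""
    return sum(BUDGETS_W[s] for s in sources)

# Reference RX 480: slot plus one 6-pin
print(board_budget("slot_12v", "slot_3v3", "6pin"))  # ~150.9 W
# Hypothetical custom board: slot plus one 8-pin
print(board_budget("slot_12v", "slot_3v3", "8pin"))  # ~225.9 W
```

So on paper, swapping the 6-pin for an 8-pin would lift the in-spec total by 75 W without touching what the slot supplies.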
For point 2, my opinion is this: PowerPlay in the ROM has a PowerTune table, which contains the PowerLimit. These values are TDP/TDC/MPDL.
TDP: "Change TDP limit based on customer's thermal solution"
TDC: "PowerTune limit for maximum thermally sustainable current by VDDC regulator that can be supplied"
Maximum Power Delivery Limit (MPDL): "This power limit is the total chip power that we need to stay within in order to not violate the PCIe rail/connector power delivery"
These values limit the GPU only, not any other board elements (RAM, etc.). TDC does not do any load balancing or differentiate between phases on the VRM. MPDL does not differentiate between where power is drawn (i.e., slot vs. plugs). So AMD may be implementing a tweak of these settings in the OS once the driver loads, as this is what we do when we change the power limit in OverDrive/TriXX/MSI Afterburner.
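To illustrate how such a limit table might act, here is a minimal Python sketch; the field names, numbers, and clamping logic are hypothetical, not AMD's actual PowerPlay layout:

```python
from dataclasses import dataclass

@dataclass
class PowerTuneLimits:
    """Hypothetical limit table (fields are illustrative, not AMD's ROM layout)."""
    tdp_w: float   # thermal limit for the customer's cooling solution
    tdc_a: float   # max sustainable current from the VDDC regulator
    mpdl_w: float  # max power delivery for the PCIe rail/connectors

def clamp_gpu_power(requested_w: float, vddc_v: float, limits: PowerTuneLimits) -> float:
    """Clamp requested GPU power to whichever limit binds first.
    Note: these limits cover the GPU only (not RAM or board losses),
    and MPDL does not distinguish slot power from 6-pin power."""
    tdc_as_power_w = limits.tdc_a * vddc_v  # express the current limit as power
    return min(requested_w, limits.tdp_w, tdc_as_power_w, limits.mpdl_w)

limits = PowerTuneLimits(tdp_w=110.0, tdc_a=100.0, mpdl_w=120.0)
print(clamp_gpu_power(130.0, 1.10, limits))  # clamped to 110.0 W
```

Raising the power limit slider in OverDrive/TriXX/Afterburner would, in this model, simply scale those fields up before the clamp is applied.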
They may also tweak the PowerTune algorithm. Without delving too deeply, I'll give an example based on two programs, which I experienced on Fiji. At some point the driver exposed a feature called "Power Efficiency". Before that setting was available, I would raise the power limit in the ROM/OS to maximum; 3DMark Fire Strike would run at the maximum GPU clock, but Heaven would show some clock dropping. With PE=Off there was no clock drop in Heaven. I was never hitting a current/power/temperature issue that would make the GPU drop clocks in Heaven with PE=On, which has led me to believe an aspect of the PowerTune algorithm is being tweaked when I set PE=Off.
Just like you said, I would agree it comes at a cost of performance, which they are giving back in "some" titles via fix/point 2.
Now getting back to GPU properties, mainly leakage: a higher-leakage ASIC will draw more power. So some cards will not benefit as much from the shift of load between phases to reduce PCI-E slot power usage. Likewise, a higher-leakage ASIC will reach the PowerTune limit sooner under load, so its owner may see more of a performance loss. IMO owners of lower-leakage ASICs will benefit more from these fixes in the context of the "issue" and will draw somewhat less power as a whole at "stock".
When W1zzard tested The Stilt's tweak, it drew ~10 W less from the PCI-E slot, headroom that will soon be consumed by some overclocking.
IMO the fixes are not "ideal" considering this is a card with a PCI-E plug, and by this I mean no disrespect to The Stilt, W1zzard, and others involved. The reference PCB needs a redesign to bring it in line with what recent AMD cards drew from the PCI-E slot, or to substantially reduce slot power usage.
My apologies to members for mega long post.
The 6-pin connector can safely deliver an extra 16 W without catching fire or anything.
Also, AMD didn't say that the card would draw 150 W of total board power, just that the card's TDP was 150 W (not the same thing as power consumption).
Anyway, if you're so concerned about going over the PCIe specification, you can turn on the toggle to limit the power draw to 150 W.
They have a fix, people... relax. Or don't buy the damn card if you are so concerned. Can we please just move along?
290X with a jet-engine cooler that can't keep temperatures below 90 °C; RX 480 with a 6-pin that needs an 8-pin and goes off spec.
Is the bad press they get from this stuff really worth the savings?:confused:
AMD was very stupid for letting this happen, even if the fix was trivial. While I don't think it will impact RX 480 sales a lot, this will just give NVIDIA more fodder for when the GTX 1060 launches. They had a home run in their hands and let it slip through their fingers.
I've already bought one, which should ship on the 12th when Amazon gets more stock. I guess I'll watch eBay and look for a used one for cheap; I'm pretty sure some people will dump their 480s after the 1060 becomes available.
Anyway, I'm sure it won't affect AMD much, but why give ammunition to your critics in the first place?
Really, it is not that serious, so take these reports with a grain of salt. In some of the cases that caused so much uproar online, we later saw (when reviewing pictures of the burnt motherboard) three graphics cards running on a board with only two PCI Express slots via extender adapters, because the owner was mining bitcoin 24 hours a day, seven days a week, and the board was overloaded when the new graphics card was installed. This is one of the most publicized cases, but it was really the user who was out of spec to start with.
Of course, that did not stop him from stirring up every possible fuss on the net, although the logical response to such a high demand for energy would have been to start by buying a board with three PCI Express slots for the three mounted cards.
:/
The reason would be this: electrical transmission losses follow P = I*I*R.
R is the resistance in the plug, and stays the same at 3.3 V or 12 V.
I is the current; at the same power, the current at 3.3 V is 3.6 (= 12 V / 3.3 V) times higher than at 12 V, so the loss in one 3.3 V pin is therefore about 13 (= 3.6 * 3.6) times higher than in a 12 V pin.
The 3.3 V pins cannot be used as 12 V pins, but without the added heat from the 3.3 V pins the 12 V pins would have more power headroom, as the whole plug would run cooler.
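That reasoning can be checked with a few lines of Python; the per-pin resistance and power figures are assumed illustrative values, and only the ratio between the two losses matters:

```python
# Resistive loss in one connector pin: P_loss = I^2 * R.
# At equal delivered power, current scales as 1/V, so per-pin loss
# scales as (V_high / V_low)^2 when moving to the lower voltage.

def pin_loss_w(power_w: float, volts: float, r_ohm: float) -> float:
    """Dissipation in a single pin carrying power_w at volts through r_ohm."""
    current_a = power_w / volts
    return current_a * current_a * r_ohm

R_PIN = 0.01   # assumed contact resistance per pin, ohms (illustrative)
P_PIN = 10.0   # watts delivered through one pin (illustrative)

loss_12v = pin_loss_w(P_PIN, 12.0, R_PIN)
loss_3v3 = pin_loss_w(P_PIN, 3.3, R_PIN)
print(round(loss_3v3 / loss_12v, 1))  # 13.2, i.e. (12/3.3)^2
```

The assumed resistance cancels out of the ratio, which is why the "about 13 times" figure holds regardless of the actual contact resistance.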
The 12V limit is actually officially 66W or 5.5A. Given five pins for power transmission, this is 1.1A per pin maximum.
Also, take a look at your formula again, it correctly says dissipated power = current squared times resistance.
The 12 V line is allowed more current per pin than the 3.3 V line is...
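The arithmetic behind those figures, as a sketch (the five-pin count is taken from the comment above, not independently verified here):

```python
# PCIe x16 slot, 12 V rail: 66 W official limit, per the figure above.
SLOT_12V_LIMIT_W = 66.0
SLOT_12V_PINS = 5  # 12 V power pins in the slot (count per the comment)

total_current_a = SLOT_12V_LIMIT_W / 12.0    # 5.5 A across the rail
per_pin_a = total_current_a / SLOT_12V_PINS  # 1.1 A through each pin
print(total_current_a, per_pin_a)
```

By comparison, the 3.3 V rail's 3 A budget spread over its own pins works out to noticeably less current per pin, consistent with the point above.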
I'm sorry but, when the insides of your computer looks like a cat lives in it, with the accompanying fur, piss and god knows what else deposited on your motherboard, then any sort of HW failure claims are nullified in my eyes.
I was mostly just thinking it out while typing, and what is there to look at?
Even with the 12 V pins being allowed more amps than a 3.3 V pin, I still think the slightly-too-big power draw problem is blown out of proportion.
No noticeable performance change with compatibility mode enabled, well done, AMD.
Also, from reading through the spec, they only allowed that because there is better convection cooling in newer PCs.
I plan to enable the compatibility mode, undervolt the core and overclock the memory as much as possible.
This whole PCIe power malarkey raised two points I think are valid: