Saturday, July 2nd 2016

Official Statement from AMD on the PCI-Express Overcurrent Issue

AMD sent us this statement in response to growing concern among our readers that the Radeon RX 480 graphics card violates the PCI-Express power specification by overdrawing power from its single 6-pin PCIe power connector and the PCI-Express slot. Combined, the card's total power budget should be 150W; however, it was found to draw well over that limit.
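For reference, the 150W figure is the sum of the two spec limits involved. Here is a minimal sketch of that budget arithmetic, assuming the standard 75W slot and 75W 6-pin allocations (the measured value is an illustrative placeholder, not a specific review figure):

```python
# PCIe power-budget arithmetic for a card like the RX 480, assuming the
# standard spec limits: 75 W from the x16 slot, 75 W from one 6-pin plug.
SLOT_LIMIT_W = 75       # x16 slot, combined 12 V and 3.3 V rails
SIX_PIN_LIMIT_W = 75    # single 6-pin PEG connector

budget_w = SLOT_LIMIT_W + SIX_PIN_LIMIT_W
print(f"Total board power budget: {budget_w} W")   # 150 W

measured_w = 165  # hypothetical "well over" reading for illustration
print(f"Over budget by {measured_w - budget_w} W")
```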

AMD has had out-of-spec power designs in the past, the Radeon R9 295X2 for example, but that card is targeted at buyers with reasonably good PSUs. The RX 480's target audience could face trouble powering the card. Below is AMD's statement on the matter. The company stated that it's working on a driver update that could cap the power at 150W. It will be interesting to see how that power limit affects performance.
"As you know, we continuously tune our GPUs in order to maximize their performance within their given power envelopes and the speed of the memory interface, which in this case is an unprecedented 8 Gbps for GDDR5. Recently, we identified select scenarios where the tuning of some RX 480 boards was not optimal. Fortunately, we can adjust the GPU's tuning via software in order to resolve this issue. We are already testing a driver that implements a fix, and we will provide an update to the community on our progress on Tuesday (July 5, 2016)."

358 Comments on Official Statement from AMD on the PCI-Express Overcurrent Issue

#326
newtekie1
Semi-Retired Folder
cdawall said:
Couple of different gens; Fermi is one, but my 470s are water-cooled and consume less power because of it. For reference, though, I had a pair of 480s pulling nearly 900W at the wall by themselves at stock clocks in SLI.
Yeah, but just because the cards are pulling a lot of power doesn't mean they are pulling it through the PCI-E bus. As PCPer showed with some of the cards they tested, when they overvolted the cards to increase power consumption, the extra draw came from the external 8/6-pin connectors while the draw from the PCI-E bus stayed the same. The external connectors are over-built and can handle the extra power draw, so doing it this way isn't a problem.
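To put numbers on that, here is a minimal sketch of the per-rail accounting that clamp-style measurements like PCPer's imply; the amp readings are made-up values for illustration, and the ~66W figure is the slot's 12V allowance (5.5A at 12V):

```python
# Hypothetical per-rail power accounting (P = V * I), in the spirit of
# current-clamp measurements. The amp readings below are made up.
RAIL_V = 12.0  # both the slot's 12 V pins and the 6-pin supply 12 V

def rail_power_w(amps: float, volts: float = RAIL_V) -> float:
    return amps * volts

slot_amps, six_pin_amps = 6.8, 7.0  # hypothetical clamp readings
print(f"slot 12 V: {rail_power_w(slot_amps):.0f} W (spec allows ~66 W)")
print(f"6-pin:     {rail_power_w(six_pin_amps):.0f} W (spec allows 75 W)")
```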
#327
cdawall
where the hell are my stars
newtekie1 said:
Yeah, but just because the cards are pulling a lot of power doesn't mean they are pulling it through the PCI-E bus. As PCPer showed with some of the cards they tested, when they overvolted the cards to increase power consumption, the extra draw came from the external 8/6-pin connectors while the draw from the PCI-E bus stayed the same. The external connectors are over-built and can handle the extra power draw, so doing it this way isn't a problem.
I'm personally just curious at this point.
#328
sith'ari
HD64G said:
But since I am sure your MB is a good quality one...
HD64G said:
Wasn't sure about what MB you had...
Either you were sure or you weren't! :rolleyes: You have to decide eventually!
Hint: there is something in the user control panel called "system specs"; perhaps you should check it again, as it contains useful info such as... the system specs! ;)
#329
newtekie1
Semi-Retired Folder
cdawall said:
I'm personally just curious at this point.
Yeah, me too actually.

As Cadaveca pointed out, Fermi and the HD2900XT had issues. Though I'd like to know if they were right on the edge and multiple cards pushed it over, or how much they were actually pulling from the PCI-E slot. I know I melted my 24-pin with a pair of Fermi cards. But NVIDIA obviously learned several lessons with Fermi, and their last three generations haven't pulled a lot of power through the PCI-E bus.
#330
cdawall
where the hell are my stars
sith'ari said:
Either you were sure or you weren't! :rolleyes: You have to decide eventually!
Hint: there is something in the user control panel called "system specs"; perhaps you should check it again, as it contains useful info such as... the system specs! ;)
Cocky now, aren't you...
newtekie1 said:
Yeah, me too actually.

As Cadaveca pointed out, Fermi and the HD2900XT had issues. Though I'd like to know if they were right on the edge and multiple cards pushed it over, or how much they were actually pulling from the PCI-E slot. I know I melted my 24-pin with a pair of Fermi cards. But NVIDIA obviously learned several lessons with Fermi, and their last three generations haven't pulled a lot of power through the PCI-E bus.
I am curious what the age-old beasts pull: 3870X2/4870X2/GTX 295, etc.
#331
cadaveca
My name is Dave
newtekie1 said:
Yeah, me too actually.

As Cadaveca pointed out, Fermi and the HD2900XT had issues. Though I'd like to know if they were right on the edge and multiple cards pushed it over, or how much they were actually pulling from the PCI-E slot. I know I melted my 24-pin with a pair of Fermi cards. But NVIDIA obviously learned several lessons with Fermi, and their last three generations haven't pulled a lot of power through the PCI-E bus.
It's interesting to see which boards carry 12V PCIe power-adders and which use Molex plugs. There is a good reason for those Molex plugs instead of a PCIe connector. Also, some boards use a PCIe power plug to add supplemental slot power, and some of those have voltage regulation to switch that 12V down to the needed voltages, while some do not.
#332
sith'ari
cdawall said:
Cocky now, aren't you...
I tend to respond in the same manner that other people reply to me! ;) (He was sarcastic with me, so I did the same.)
P.S. I guess it's forbidden for someone to own an old motherboard, because this ruins the "defensive line" for the AMD fanboys. :shadedshu:
#333
cdawall
where the hell are my stars
sith'ari said:
I tend to respond in the same manner that other people reply to me! ;) (He was sarcastic with me, so I did the same.)
P.S. I guess it's forbidden for someone to own an old motherboard, because this ruins the "defensive line" for the AMD fanboys. :shadedshu:
AMD fanboys? I couldn't care less what CPU/GPU you use. Honestly, the sheer extent to which this has been blown out of proportion is astounding. I mean, hell, the 9370/9590s have been blowing the MOSFETs up on $250+ motherboards since release day, yet no one bats an eye. AMD releases a decent bang-for-the-buck GPU that has issues on crap ancient boards and everyone is losing their mind.
#334
sith'ari
cdawall said:
AMD fanboys? I couldn't care less what CPU/GPU you use. ...
What are you talking about, mate? Where did I say that you are an AMD fanboy? My comment was a general one.

EDIT:
cdawall said:
...AMD releases a decent bang-for-the-buck GPU that has issues on crap ancient boards and everyone is losing their mind.
I have to lose my mind, since it's my system. Of course I would care!!
#335
cdawall
where the hell are my stars
sith'ari said:
What are you talking about, mate? Where did I say that you are an AMD fanboy? My comment was a general one.
No one in this thread has really posted any fanboy comments. You are literally one of maybe two people freaking out about something that isn't new and isn't abnormal. The "sheeple", if you will.
#336
sith'ari
cdawall said:
No one in this thread has really posted any fanboy comments. You are literally one of maybe two people freaking out about something that isn't new and isn't abnormal. The "sheeple", if you will.
1. Check the edit in my previous post.
2. Also, check my post #43. I've been clear about my feelings for AMD since my early posts.
#337
cdawall
where the hell are my stars
sith'ari said:
I have to lose my mind, since it's my system. Of course I would care!!
Are you planning on buying this GPU? Or, as the next quote mentions, are you just here to complain?
sith'ari said:
1. Check the edit in my previous post.
2. Also, check my post #43. I've been clear about my feelings for AMD since my early posts.
Yet you still post...

Out of curiosity, did you lose a board to an AMD/NV GPU? Have you met anyone who has?
#338
RejZoR
I wonder how far within PCIe spec the first PCIe graphics cards that were powered entirely from the slot actually were. I'd die laughing if it turned out those old graphics cards were totally out of spec and no one made any big deal about it, but today everyone is freaking out like mad... Would be fun to know.
#339
sith'ari
cdawall said:
Are you planning on buying this GPU? Or, as the next quote mentions, are you just here to complain?
Yet you still post...
Out of curiosity, did you lose a board to an AMD/NV GPU? Have you met anyone who has?
I must have said it 100 times by now! I haven't paid nearly 600€ for top-notch protection hardware (PSU, UPS, surge protectors) only to take even the slightest risk of this GPU causing damage to my system.
Check your post #193. You were the one who told me NOT to buy this GPU because it might destroy my mobo!!!

P.S. No, I wouldn't buy anything from AMD after the Fury X period (though someone else with a similar system could). I simply don't like their policy.
#340
cdawall
where the hell are my stars
RejZoR said:
I wonder how far within PCIe spec the first PCIe graphics cards that were powered entirely from the slot actually were. I'd die laughing if it turned out those old graphics cards were totally out of spec and no one made any big deal about it, but today everyone is freaking out like mad... Would be fun to know.
The 6800 Ultra drew around 80W according to the age-old benchmarks, and even that had a 6-pin... I imagine the really old cards didn't exceed much of anything; they didn't draw enough power for it to be an issue.
sith'ari said:
I must have said it 100 times by now! I haven't paid nearly 600€ for top-notch protection hardware (PSU, UPS, surge protectors) only to take even the slightest risk of this GPU causing damage to my system.
Check your post #193. You were the one who told me NOT to buy this GPU because it might destroy my mobo!!!

P.S. No, I wouldn't buy anything from AMD after the Fury X period (though someone else with a similar system could). I simply don't like their policy.
I believe I also mentioned that you are being ridiculous: 600€ on a UPS/PSU, yet a board that doesn't even fully support the RX 480 (its PCIe x16 slot would be a laughably bad limit).
#341
newtekie1
Semi-Retired Folder
cdawall said:
The 6800 Ultra drew around 80W according to the age-old benchmarks, and even that had a 6-pin... I imagine the really old cards didn't exceed much of anything; they didn't draw enough power for it to be an issue.
Of course, back then the limit on the PCI-E slot was 25W... The slot itself hasn't changed any; PCI-SIG just upped the limit to 75W because that is what the slot was actually capable of, and the high-power card manufacturers asked for more. The 25W limit was just very conservative, kind of like how 75W is a very conservative limit on the PCI-E 6-pin connector.
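For reference, the commonly quoted 75W slot figure is itself the sum of per-rail current limits in the PCIe CEM spec; a quick sketch of that arithmetic:

```python
# Where the ~75 W x16 slot limit comes from: the PCIe CEM per-rail
# current limits of 5.5 A on the 12 V rail plus 3 A on the 3.3 V rail.
slot_12v_w = 5.5 * 12.0   # 66.0 W
slot_3v3_w = 3.0 * 3.3    # 9.9 W
print(f"12 V: {slot_12v_w:.1f} W + 3.3 V: {slot_3v3_w:.1f} W "
      f"= {slot_12v_w + slot_3v3_w:.1f} W")
```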
#343
john_
cadaveca said:
W1zz has been testing PCIe power draw for a LONG time. I have personally been testing motherboards over the 8-pin connector only. Reviewers do look at these things with a critical eye that the normal user does not. So yeah, some people do.

AMD's 2900XT was popping motherboards at the 24-pin.
NVIDIA's GTX 570 did as well.

If you pay attention, sure, there are a few cards that cause motherboard damage fairly consistently. For the most part, that's the whole reason why motherboard makers NOW include additional power for the PCIe slots, but not all boards do. There are MANY boards with three x16 slots that support CrossFire that do not.



People that overclock should be aware of these sorts of issues in the first place, but the general "overclocker" isn't. There is much more that they aren't aware of. That's why I dropped OC, stopped posting on HWBot, and put little focus on OC in my reviews. To me, OC is deep hardware analysis and testing, not a point-based skill competition like it has become. I don't call chasing numbers without a care about what dies "OC'ing"... and so I focused on GAMING as the main selling point. The idea that "stuff dies when you OC" isn't true... stuff dies when you BLINDLY OC.

To me, overclocking is an art. In order to make great art, you need to understand the medium you use, whether it be paint, pencil, music, or hardware. However, mass marketing has hidden all of that as people have used OC as a selling feature.


Do a Google search on "burnt 24-pins". It's a hoot. Nearly every thread will blame the PSU. The real cause? Likely a VGA card or a USB controller stuffed it. Not a single mention of that. Well, that's not entirely true; there are a couple, but still... when the blind lead the blind...
For many, overclocking helps them get important extra performance. People who can't or aren't willing to pay more money will go for the cheaper of two models, thinking that with overclocking they can reach the performance level of the faster model and save money in the process. That was always the idea of overclocking: saving money. It's another matter that today many just do it for the benchmarks, in fact spending more money. And yes, all these people, me included, will, as you say, blindly OC. I am not going to fire up as much voltage as I can on a CPU or a GPU, but others will.

But there are cases where people don't even imagine they are pushing their hardware, and that's why I keep repeating the example of the GTX 950 with no power connector. Putting a two-slot beast with 8-pin connectors and full voltage controls in a PCIe slot can make someone much more nervous than putting in a tiny (compared to the beast), innocent GTX 950 that doesn't even have an extra PCIe connector, probably doesn't give you voltage controls to play with, and is advertised as a power-efficient model. How could something like that be a possible danger to your PCIe slot?

When I saw W1zz's review of the card, I realized that, if I haven't misunderstood something, that card could have - after overclocking - the same power-draw problems as the RX 480, because it can only turn to the PCIe bus for power. That 20% extra performance that W1zz gets after OC can't come out of thin air, and the card is already at 74W at defaults. W1zz's power-draw testing with that card didn't include the overclocking scenario, because his job is to test the card at its defaults, with the OC page being just the icing on the review's cake. But with all this mess with the RX 480, thanks to AMD's stupidity, I believe it's a nice opportunity for professionals to show all those blind overclockers that things aren't as simple as "AMD messed up with the RX 480". AMD shot its own foot, but there are probably others out there with their gun pointed at their feet without knowing it.
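A quick back-of-the-envelope check of that worry, assuming (simplistically) that board power scales with the performance gain:

```python
# Rough check on the connector-less GTX 950 scenario described above,
# assuming (simplistically) board power scales with the ~20% OC gain.
stock_power_w = 74    # default board power cited above
oc_gain = 0.20        # ~20% extra performance after OC
slot_limit_w = 75     # the slot is the card's only power source

oc_power_w = stock_power_w * (1 + oc_gain)
print(f"Estimated OC draw: {oc_power_w:.0f} W vs. {slot_limit_w} W slot limit")
# ~89 W, well past what the slot is rated for
```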
#345
cadaveca
My name is Dave
AMD's statement via Facebook, an hour or so ago:
We promised an update today (July 5, 2016) following concerns around the Radeon RX 480 drawing excess current from the PCIe bus. Although we are confident that the levels of reported power draws by the Radeon RX 480 do not pose a risk of damage to motherboards or other PC components based on expected usage, we are serious about addressing this topic and allaying outstanding concerns. Towards that end, we assembled a worldwide team this past weekend to investigate and develop... a driver update to improve the power draw. We’re pleased to report that this driver—Radeon Software 16.7.1—is now undergoing final testing and will be released to the public in the next 48 hours.

In this driver we’ve implemented a change to address power distribution on the Radeon RX 480 – this change will lower current drawn from the PCIe bus.
Separately, we’ve also included an option to reduce total power with minimal performance impact. Users will find this as the “compatibility” UI toggle in the Global Settings menu of Radeon Settings. This toggle is “off” by default.

Finally, we’ve implemented a collection of performance improvements for the Polaris architecture that yield performance uplifts in popular game titles of up to 3%. These optimizations are designed to improve the performance of the Radeon RX 480, and should substantially offset the performance impact for users who choose to activate the “compatibility” toggle.

AMD is committed to delivering high quality and high performance products, and we’ll continue to provide users with more control over their product’s performance and efficiency. We appreciate all the feedback so far, and we’ll continue to bring further performance and performance/W optimizations to the Radeon RX 480.
#346
RejZoR
Can't wait to see this driver tested and how it behaves. It'll show how committed AMD is regarding this. I just wonder what they mean by this "Compatibility" toggle, which will be off by default. Does this mean they are confident enough that the PCIe power draw won't damage anything that they decided not to enable the fix by default? Hm. Anyway, looking forward to a test of this fix...
#347
john_
RejZoR said:
Can't wait to see this driver tested and how it behaves. It'll show how committed AMD is regarding this. I just wonder what they mean by this "Compatibility" toggle, which will be off by default. Does this mean they are confident enough that the PCIe power draw won't damage anything that they decided not to enable the fix by default? Hm. Anyway, looking forward to a test of this fix...
They are giving two options.

1) The first option is to shift extra load onto the 6-pin PCIe connector, if you trust your PSU.

2) The second is to lower power consumption if you want to stay in spec.

The first option will be used as an argument that their fix didn't cause any performance loss. That's why it will be the default. Hardware sites test cards with top equipment, so I believe AMD will ask tech sites to use the first option if they want to retest the card. If a user has a substandard PSU they don't trust, that's not AMD's fault anyway. I guess that's going to be the logic behind option one, which will be presented as a fix that doesn't affect the card's performance and also doesn't make AMD look like they acknowledge there is a problem. With this option, they are not fixing any problem, because in their view there isn't one; they are just calming users who are nervous about the whole PCIe bus power-draw story.

The second option is what users who are really concerned about the power draw will use in the end. To be fair, this is the real fix that will bring the card within spec. We will probably see a lower GPU frequency, maybe 1200MHz instead of 1266MHz, and 1.1V instead of 1.15V GPU voltage (see the rough estimate below). Some sites will choose, and advise users, to use this option instead of the first one, and will retest the cards. AMD hopes that whatever optimizations and performance increases they manage to achieve this week in their drivers, that 3%, will be enough to make the card look like it hasn't lost any performance at all.
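As a rough sanity check on those guessed figures (the 1200MHz/1.1V numbers are my speculation, as is the starting board power below), dynamic power scales roughly with frequency times voltage squared:

```python
# Rough dynamic-power scaling estimate, P ~ f * V^2, applied to the
# speculative clock/voltage drop above. All inputs are assumptions.
f_old, f_new = 1266, 1200   # MHz
v_old, v_new = 1.15, 1.10   # volts
board_power_w = 164         # hypothetical starting total board power

scale = (f_new / f_old) * (v_new / v_old) ** 2
print(f"Scale: {scale:.3f} -> est. {board_power_w * scale:.0f} W")
# ~0.867 -> roughly 142 W, back near the 150 W combined budget
```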

Then the custom cards will come and everything will go back to normal.
#348
RejZoR
"If you trust your PSU". If PSU is so shit it can't handle more than 75W on a PCIe power connector, then you better not use it entirely because it's so shit it'll most likely blow up by itself.
#349
McSteel
Thought it might be useful to cross-link to the review commentary thread, as The Stilt has managed to find a way to instruct the power controller to redistribute power draw via software (see original thread on OCN here).

The effect is not huge, but it is significant enough to alleviate the problem, especially when combined with undervolting and/or underclocking the card.

I think we can let the issue rest now, knowing it's fully manageable. But we should definitely continue to investigate every aspect of performance - including detailed insight into power draw - of all future VGAs under review.
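For anyone curious what "redistributing power draw" means in practice, here is a toy model of shifting load between slot-fed and connector-fed VRM phases; the even phase split and the wattages are assumptions for illustration, not The Stilt's actual controller settings:

```python
# Toy model of VRM phase redistribution (illustrative only; the real fix
# reprograms the card's voltage controller, and the numbers here are
# assumptions, not The Stilt's actual settings).
TOTAL_GPU_POWER_W = 130.0   # hypothetical GPU power shared by two sources

def split(slot_share: float) -> tuple[float, float]:
    """Return (slot watts, 6-pin watts) for a given slot loading share."""
    slot_w = TOTAL_GPU_POWER_W * slot_share
    return slot_w, TOTAL_GPU_POWER_W - slot_w

print(split(0.5))   # even loading: (65.0, 65.0) -> slot near its limit
print(split(0.4))   # biased to the 6-pin: (52.0, 78.0) -> slot relieved
```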
#350
cadaveca
My name is Dave
McSteel said:
Thought it might be useful to cross-link to the review commentary thread, as The Stilt has managed to find a way to instruct the power controller to redistribute power draw via software (see original thread on OCN here).

The effect is not huge, but it is significant enough to alleviate the problem, especially when combined with undervolting and/or underclocking the card.

I think we can let the issue rest now, knowing it's fully manageable. But we should definitely continue to investigate every aspect of performance - including detailed insight into power draw - of all future VGAs under review.
For me it wasn't a question of whether it was possible; I was pretty sure it was. I just wondered why it wasn't done that way from the start, so that there could NOT be any questions.

But in the end, kudos to AMD for listening to the grumbling and doing something about it. This shows that they are committed to addressing user concerns and truly care what people think about their products. Doing this driver change costs them money. Yet I want AMD, and indeed all companies, to adhere 100% to the specifications of supporting parts. The spec says 75W; you don't overstep it one bit. And now they give the end user the options!