Saturday, July 2nd 2016

Official Statement from AMD on the PCI-Express Overcurrent Issue

AMD sent us this statement in response to growing concern among our readers that the Radeon RX 480 graphics card violates the PCI-Express power specification by overdrawing power from its single 6-pin PCIe power connector and the PCI-Express slot. Combined, the total power budget of the card should be 150W; however, it was found to draw well over that limit.

AMD has had out-of-spec power designs in the past, with the Radeon R9 295X2 for example, but that card is targeted at buyers with reasonably good PSUs. The RX 480's target audience could face trouble powering the card. Below is AMD's statement on the matter. The company stated that it's working on a driver update that could cap the power at 150W. It will be interesting to see how that power limit affects performance.
"As you know, we continuously tune our GPUs in order to maximize their performance within their given power envelopes and the speed of the memory interface, which in this case is an unprecedented 8 Gbps for GDDR5. Recently, we identified select scenarios where the tuning of some RX 480 boards was not optimal. Fortunately, we can adjust the GPU's tuning via software in order to resolve this issue. We are already testing a driver that implements a fix, and we will provide an update to the community on our progress on Tuesday (July 5, 2016)."

358 Comments on Official Statement from AMD on the PCI-Express Overcurrent Issue

#301
john_
newtekie1With normal operation, when the clocks go up the voltage goes up with it. That is why W1z includes voltage/clock tables in his reviews. AMD had to increase the voltage on the RX 480 to keep it stable at the clock speeds they wanted (this is also probably why it overclocks so poorly at stock voltage). However, when W1z does his overclocking he does not increase the voltage, he leaves it at stock. So while he increases the clock speeds, the voltage stays the same, so the current going through the GPU stays the same. So you get no real power consumption increase.

In fact, one of the tricks of overclocking nVidia cards is to actually lower the voltage to get higher clock speeds. If your card is stable but hitting the power limit, you can lower the voltage and raise the clocks to get better performance. It is a commonly used trick, and one I had to use on my GTX970s.
OH MY GOD he is trolling me. I have to believe he is trolling me. 23,000 posts and 11 years on TPU, he can't be this clueless...

@sith'ari You start from 74W, don't forget that. The GTX 950 with the 6-pin power connector will hover above that at defaults, yes, and the GTX 950 with no power connector will have lower power consumption. But you start from 74W, and the higher you push your frequencies, even at a fixed voltage, the higher the power consumption. So the card will start moving past 74W.

If the card is power limited, you will see only minor changes in benchmark scores. If it is not power limited, you will see an almost linear increase in scores the higher you push the frequencies.

Manufacturers will choose not to power limit the card. They will let the user push it even if that means pulling over 75W from the PCIe bus. Why? Because limiting the card is bad publicity, and they will lose the customer if the customer sees that the card is power limited and doesn't overclock, or doesn't perform better after overclocking because of throttling.

That's how AMD thought here. Users already push the power limits with overclocking, so why not push the power limits with the reference RX 480, beat the GTX 970, and at the same time use only a 6-pin power connector. Well, that was a stupid way of thinking, a stupid decision, and AMD is paying the price now with all this negativity.
Posted on Reply
#302
Ungari
sith'ariAMD managed to confuse the entire gaming community with their propaganda vs. the GTX 970 memory size, and made people believe that the card had less than the advertised memory,
I thought it was Scott Wasson prior to his employment at AMD that researched and discovered the 3.5 + .5 VRAM issue due to anomalies in benchmarks?
Posted on Reply
#303
sith'ari
UngariI thought it was Scott Wasson prior to his employment at AMD that researched and discovered the 3.5 + .5 VRAM issue due to anomalies in benchmarks?
I don't care who started it. The point is that, just like in this case, the entire world was informed about NV's "deception". If NV had a similar deception on the power side, AMD would have gladly informed the world again, rest assured!! :p
Posted on Reply
#305
xorbe
newtekie1, power scales linearly with frequency in the ideal scenario, and with the square of voltage. Clock gating also affects total switching activity.
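To put that in numbers, here is a minimal Python sketch of the standard CMOS dynamic-power relation P ≈ C·V²·f. The effective capacitance and the clock/voltage figures are illustrative assumptions, not measured RX 480 values:

```python
# Sketch of the CMOS dynamic-power relation P ~ C_eff * V^2 * f.
# All constants are illustrative assumptions, not measured RX 480 values.

def dynamic_power(c_eff, volts, freq_hz):
    """Switching power in watts: P = C_eff * V^2 * f."""
    return c_eff * volts ** 2 * freq_hz

C_EFF = 6.5e-8  # assumed effective switched capacitance, in farads

stock     = dynamic_power(C_EFF, 1.15, 1.266e9)  # ~109 W
oc_same_v = dynamic_power(C_EFF, 1.15, 1.366e9)  # ~117 W: +8% clock -> +8% power
oc_more_v = dynamic_power(C_EFF, 1.24, 1.366e9)  # ~137 W: voltage enters squared

print(f"stock {stock:.0f} W, OC at stock V {oc_same_v:.0f} W, OC with +V {oc_more_v:.0f} W")
```

Even a fixed-voltage overclock raises power linearly with frequency, which is why a stock-voltage overclock still pulls measurably more power.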
Posted on Reply
#306
GhostRyder
McSteelGood intentions, not quite the most accurate info, though...

1: AMD decided to split the power supply 50/50 between the external power connector (which happens to be 6-pin in this case) and the PCI-E slot.

This is a problem because, while the official spec for the 6-pin connector is 75W, it can realistically provide upwards of 200W continuously without any ill effects.
The PCI-E slot and the card's x16 connector have 5 (five) flimsy pins at their disposal for power transfer. Those cannot physically supply much more than 1A each. The better ones can sometimes handle 1.2A before significantly accelerating oxidation (from both heating and passing current), which increases resistance, necessitating more current to deliver the same power, further increasing the oxidation rate... It's a feedback loop that eventually leads to failure.

2: AMD cannot fix this via drivers, as there are trace breaks, with missing resistors and wires, that would otherwise bridge the PCI-E slot supply to the 6-pin power connector. Bridging them would make the connector the naturally preferred path for current flow, since its path has lower resistance, and that's the path current prefers to take. It can only be permanently fixed by physical modification; no other method. AMD can lower the total power draw and thus, by extension, relieve the stress on the PCI-E slot, but it will probably cost some of the GPU's performance. We'll see.

3: Buying and using this card won't kill your motherboard... straight away. Long-term consequences are unpredictable but cannot be positive. Would driving your car in first gear only, bumping into the RPM limiter all the time, kill your car? Well, not right away, but... Yeah. It's the same here: you're constantly at the realistic limit of an electromechanical system, and constant stress is not going to make it work longer or better, that's for sure.

The AIB partners would do well to design their PCBs such that the PCI-E slot only supplies power once more than 150W is being drawn through the auxiliary power connector, or something like that. Perhaps give one of the six phases to the slot and the remaining five to the connector... Or better yet, power the memory from the slot and the GPU exclusively from the power connector. Breaking the PCI-E spec that way is much less damaging, given the actual capabilities of the Molex Mini-Fit Jr. 2x3-pin connector that we like to call the 6-pin PCI-E power.
Well, I disagree, but I have not looked deeply into the PCB to determine whether this is impossible, which is why I said one or the other.

The other part is about killing the motherboard slowly over time. It's the same principle as overclocking slowly degrading a chip. At these levels you're not going to kill a motherboard quickly. You might with 3-4 of these cards on a cheap motherboard that supports that many and has no extra power input, but in the majority of cases that won't happen. The most likely scenario would be two of these on a cheap board that supports two-way, but even then most boards these days are tough enough for this amount of extra power.

Either way, we just have to wait and see what the fix is.
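For a sense of scale on the pin-current point in the quoted post: the PCI-E CEM spec allots the x16 slot roughly 66W on the 12V rail, carried by five supply pins. A quick Python sketch, assuming the ~1.1A per-pin rating mentioned above:

```python
# Back-of-the-envelope check on the slot's 12V supply pins.
# Assumptions: 5 supply pins, ~1.1 A continuous rating each (per the post
# above); the PCI-E CEM spec allows about 5.5 A total (66 W) on the 12V rail.

SLOT_12V_PINS = 5
PIN_RATING_A = 1.1
RAIL_VOLTS = 12.0

def per_pin_current(watts):
    """Current through each supply pin for a given 12V slot draw."""
    return watts / RAIL_VOLTS / SLOT_12V_PINS

for draw in (66, 75, 82, 90):  # spec, nominal slot limit, and overdraw cases
    amps = per_pin_current(draw)
    flag = "OVER" if amps > PIN_RATING_A else "ok"
    print(f"{draw:3d} W from slot -> {amps:.2f} A per pin [{flag}]")
```

At slot draws in the 80-90W range discussed in this thread, each pin sits well past that rating, which is exactly the slow-degradation scenario described above.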
Posted on Reply
#307
RejZoR
McSteelYeah, you can see that in this video. Ok, the guy in it may not hold a master's in electronics, but it's clear the power phases are completely separated, and the GPU simply draws power 50/50 from them.
A bit more current is drawn from the slot than from the aux connector simply due to higher resistance of the slot power pins...

I'm sure @W1zzard could confirm if he could find a bit of free time to do it :)
Still, I think the GPU has some control over how much current it draws from each phase. Meaning, they could limit the PCIe-fed phases to 75W total and simply push the ones on the 6-pin a bit more. Wasn't there info floating around about the power phases being beefier than the ones on the GTX 1080? Assuming they have such logic on board, this could be a feasible option. Again, just brainstorming; the only one who really knows is AMD (RTG). Tomorrow is the day they'll release more info. Can't wait to see what they come up with and whether it will really be an effective solution, assuming they deliver the driver hotfix they said they're working on.
Posted on Reply
#308
Dippyskoodlez
RejZoROMG FUCKIN' DELUSIONAL OMFG MAH GOD YOU IDIOT NOOB FOOOOOK:
support.amd.com/en-us/kb-articles/Pages/HowtoidentifythemodelofanATIgraphicscard.aspx#DID

The driver can identify it even beyond the basic HW ID and can differentiate between reference and AIB models.
Where does it say it's aware of the exotic cooling systems these cards have? And why does the silent fan operation not require a driver at all? Why do the fans not run full speed 24/7 when powered on?

Please tell me how DOS is capable of handling my GTX970 cooling successfully without the Nvidia driver.
Posted on Reply
#309
cdawall
where the hell are my stars
RejZoRStill, I think the GPU has some control over how much current it draws from each phase. Meaning, they could limit the PCIe-fed phases to 75W total and simply push the ones on the 6-pin a bit more. Wasn't there info floating around about the power phases being beefier than the ones on the GTX 1080? Assuming they have such logic on board, this could be a feasible option. Again, just brainstorming; the only one who really knows is AMD (RTG). Tomorrow is the day they'll release more info. Can't wait to see what they come up with and whether it will really be an effective solution, assuming they deliver the driver hotfix they said they're working on.
Each phase can be independently controlled. They could without a doubt make 3 phases pull more than the others, in the exact same way I can literally turn off half the phases on my several-year-old Crosshair V. This isn't new tech, nor is it a new practice. Does anyone on here really think that the phases are split differently on other cards?
Posted on Reply
#310
R-T-B
john_Does anyone understand basic things here?

By increasing the frequency you don't necessarily gain performance. If the card is limited in how much power it will take from the PCIe bus, remaining at or under 75W, it will throttle. But if overclocking the card yields 20% extra performance, then the card doesn't stop at 75W; it asks for and gets more power from the PCIe bus.
No, it doesn't. The BIOS limits are hard. You'll just get throttled to hell unless you manually raise the power limit. An aggressive overclock with no raised power limit may even hurt your performance.

Until the power limit is manually raised by the user, it will NEVER exceed 75W.
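A toy model of the hard cap described here: firmware estimates board power and steps the clock down until the estimate fits under the limit. Every name and constant below is invented for illustration; real GPUs do this in firmware with far finer-grained telemetry and voltage/frequency states:

```python
# Toy model of a hard power limit enforced by clock throttling.
# All constants are invented for illustration only.

POWER_LIMIT_W = 75.0   # the cap the BIOS enforces until the user raises it
V = 1.10               # fixed stock voltage
C_EFF = 5.0e-8         # illustrative effective capacitance

def power_at(freq_mhz):
    """Estimated dynamic power at a given core clock (P = C * V^2 * f)."""
    return C_EFF * V ** 2 * freq_mhz * 1e6

def effective_clock(requested_mhz):
    """Throttle the requested clock until the power estimate fits the cap."""
    clock = requested_mhz
    while power_at(clock) > POWER_LIMIT_W:
        clock -= 13  # drop one DPM-style step
    return clock

for req in (1100, 1250, 1400):  # an aggressive overclock just gets clamped
    print(f"requested {req} MHz -> runs at {effective_clock(req)} MHz")
```

This is also why an aggressive overclock under an untouched power limit can hurt performance: the card spends its time bouncing off the cap rather than holding the requested clock.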
Posted on Reply
#311
McSteel
RejZoRStill, I think the GPU has some control over how much current it draws from each phase. Meaning, they could limit the PCIe-fed phases to 75W total and simply push the ones on the 6-pin a bit more. Wasn't there info floating around about the power phases being beefier than the ones on the GTX 1080? Assuming they have such logic on board, this could be a feasible option. Again, just brainstorming; the only one who really knows is AMD (RTG). Tomorrow is the day they'll release more info. Can't wait to see what they come up with and whether it will really be an effective solution, assuming they deliver the driver hotfix they said they're working on.
cdawallEach phase can be independently controlled. They could without a doubt make 3 phases pull more than the others, in the exact same way I can literally turn off half the phases on my several-year-old Crosshair V. This isn't new tech, nor is it a new practice. Does anyone on here really think that the phases are split differently on other cards?
Well perhaps, if the GPU input power is all added together into a unified power plane, AMD could potentially disable some or all of the slot-driven phases, and the GPU will naturally compensate by pulling more from the aux connector. But if the power planes are separate for different zones in the chip (which admittedly would be odd), then they don't have the option to do so. I'm not really sure as I'm not privy to engineering blueprints of the Polaris10 GPU. But the traces and contacts on the PCB tell a very unambiguous story - the slot and the aux power are galvanically separated all the way up to the GPU. As such, if and only if they meet up within the GPU AND there is a control bus running between the GPU and the power delivery controller (the IR3567B), will AMD be able to restructure the power distribution without physical modifications to the card. Otherwise the only recourse is to lower consumption by lowering the voltage and then appropriately scaling down boost or even the base clock, depending on the transistor leakage current ("ASIC quality").
Posted on Reply
#312
cdawall
where the hell are my stars
McSteelWell perhaps, if the GPU input power is all added together into a unified power plane, AMD could potentially disable some or all of the slot-driven phases, and the GPU will naturally compensate by pulling more from the aux connector. But if the power planes are separate for different zones in the chip (which admittedly would be odd), then they don't have the option to do so. I'm not really sure as I'm not privy to engineering blueprints of the Polaris10 GPU. But the traces and contacts on the PCB tell a very unambiguous story - the slot and the aux power are galvanically separated all the way up to the GPU. As such, if and only if they meet up within the GPU AND there is a control bus running between the GPU and the power delivery controller (the IR3567B), will AMD be able to restructure the power distribution without physical modifications to the card. Otherwise the only recourse is to lower consumption by lowering the voltage and then appropriately scaling down boost or even the base clock, depending on the transistor leakage current ("ASIC quality").
The spec sheet says it can go as far as disabling all but one power phase on the card.

www.infineon.com/dgdl/pb-ir3567b.pdf?fileId=5546d462533600a4015356803a7228ef

They are also completely configurable, which should in theory mean it could be set up to draw more from whichever phases it chooses.
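If the controller really is that configurable, the rebalancing idea comes down to simple arithmetic. A sketch, assuming a three-slot/three-aux phase split and a uniform load per active phase (both assumptions for illustration only):

```python
# Sketch of rebalancing board power between the slot and the 6-pin by
# reassigning phases. Assumes a 6-phase VRM split between the two sources
# and an even load per active phase; both are illustrative assumptions.

def rail_draw(total_w, slot_phases, aux_phases):
    """Split total input power across phases and sum per source rail."""
    per_phase = total_w / (slot_phases + aux_phases)
    return slot_phases * per_phase, aux_phases * per_phase

# 50/50 split over 6 phases: the slot carries half of 160 W -> out of spec.
print(rail_draw(160, slot_phases=3, aux_phases=3))  # (80.0, 80.0)

# Shift one phase's worth of load to the aux connector: the slot drops back
# near spec, and the 6-pin (realistically good for far more than 75 W, as
# noted earlier in the thread) absorbs the difference.
print(rail_draw(160, slot_phases=2, aux_phases=4))  # (~53.3, ~106.7)
```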
Posted on Reply
#313
RejZoR
I mean, usually they separate power phases between the GPU and memory, and that's it. Then it's entirely down to how clever and flexible the power delivery system is, which seems to be quite advanced on Maxwell 2 and Polaris products and beyond.
Posted on Reply
#314
john_
R-T-BNo, it doesn't. The BIOS limits are hard. You'll just get throttled to hell unless you manually raise the power limit. An aggressive overclock with no raised power limit may even hurt your performance.

Until the power limit is manually raised by the user, it will NEVER exceed 75W.
Look. I am not trying to make the GTX 950 example look like the RX 480 example. If people stopped trying to defend Nvidia, they would realize that I am not defending AMD. They screwed up, because this is the reference design at default clocks. The end.

What I am saying is that there are plenty of cards out there sucking 85-90W from the PCIe bus, all these years, with no drama and no 13 pages of discussion, and no motherboards exploding and killing their owners. In W1zzard's review he gets 20% more performance, so either he raises the power limit manually, with the card giving him that capability, or the card is already set to use extra power when necessary.

So, I believe it wouldn't be a bad idea for sites to use the RX 480 as an occasion to investigate this and try to educate users. If we just keep pointing a finger at AMD, it will be forgotten by tomorrow. Users will go back to overclocking cards and sucking 85-90W from the PCIe bus, thinking this was something limited to the reference RX 480. "Temps are normal, 3DMark finishes, so no problem here." Isn't that the typical routine when overclocking? Was anyone thinking about power draw until today? Does anyone think about PCIe power draw even today?
Posted on Reply
#315
R-T-B
john_Look. I am not trying to make the GTX 950 example look like the RX 480 example. If people stopped trying to defend Nvidia, they would realize that I am not defending AMD. They screwed up, because this is the reference design at default clocks. The end.

What I am saying is that there are plenty of cards out there sucking 85-90W from the PCIe bus, all these years, with no drama and no 13 pages of discussion, and no motherboards exploding and killing their owners. In W1zzard's review he gets 20% more performance, so either he raises the power limit manually, with the card giving him that capability, or the card is already set to use extra power when necessary.

So, I believe it wouldn't be a bad idea for sites to use the RX 480 as an occasion to investigate this and try to educate users. If we just keep pointing a finger at AMD, it will be forgotten by tomorrow. Users will go back to overclocking cards and sucking 85-90W from the PCIe bus, thinking this was something limited to the reference RX 480. "Temps are normal, 3DMark finishes, so no problem here." Isn't that the typical routine when overclocking? Was anyone thinking about power draw until today? Does anyone think about PCIe power draw even today?
I don't know about plenty of cards, but there have certainly been a few. It's only recently that NVIDIA has basically BIOS-locked the cards' wattage at stock, as well (post-Fermi, I think).

I have seen the damage drawing too much from the slot can do (from bitcoin mining). It's not pretty. But I was running 4 heavily overclocked GPUs. The specs do indeed have some wiggle room, I will grant you. But I do believe that, at least at stock, they should be adhered to.

I will grant you the main point I think you are getting at: this is way blown out of proportion.
Posted on Reply
#316
john_
I had motherboards in the past with a molex next to the first PCIe slot. I never really searched to find out why it was there. Better stability, I read: more stable voltages, better overclocking. But maybe it wasn't just for that. Maybe it was also there to provide extra power when necessary. I don't know.
R-T-BI will grant you the main point I think you are getting at: this is way blown out of proportion.
Not exactly my point. My point is that it is blown out of proportion, but only in one direction: that of the RX 480. It should be; they messed up. But it shouldn't JUST be presented as an RX 480 problem that is(?) going to be addressed today(?) with a driver, a BIOS, dark magic, or something, so we can forget about it tomorrow. Sites should take a few cards that are within their power limits, overclock them, and see what happens. Is it just the RX 480 that can overload the bus or the 6-pin, or are there other cards we would never suspect?

We overclock stuff as far as it remains stable, and we only look at temps and whether the benchmark finishes without errors. We usually, if not always, ignore the power load. The only time in my life that I really took into consideration how much power the overclocked part of my system was consuming was when overclocking my 1055T on the MSI 790FX-GD70. A really great motherboard, but in that period MSI's boards for the AMD platform were dying one after the other, if I am not mistaken, because their MOSFETs, or the designs of their AMD motherboards, weren't exactly top quality. So on that board I did a combination of overclocking and undervolting, trying to stay below 140W.

The RX 480 is the best excuse tech sites will ever have to investigate how much power graphics cards pull from the PCIe bus or the 6-pin after we overclock them. That could end up as a very interesting and eye-opening article, and that's what I have been trying to say all these days. AMD is not going to be found innocent if other graphics cards overload the PCIe bus when overclocked, because AMD did it with a reference design at default speeds. But people who overclock their cards could be interested in the results, if they care about their motherboard more than about 100 extra MHz, or if the 600W PSU they are using cost them $20.
Posted on Reply
#317
EarthDog
The extra molex/PCIe power leads on motherboards were intended for MULTI-GPU setups. They had nothing to do with single-GPU setups.
Posted on Reply
#318
newtekie1
Semi-Retired Folder
R-T-BNo, it doesn't. The BIOS limits are hard. You'll just get throttled to hell unless you manually raise the power limit. An aggressive overclock with no raised power limit may even hurt your performance.

Until the power limit is manually raised by the user, it will NEVER exceed 75W.
Yep, that is exactly why every piece of GPU overclocking software had to add a power limit slider. And even then, the max you can set that slider to is hard locked by the BIOS to make sure the card doesn't exceed what the manufacturer wants.

In fact, I just took a look at the GTX 950's BIOS, and sure enough the power limit is set to 75w. The user has the option to up the power limit to 90w, but that is the user's choice, not something set by the manufacturer. If the user wants to risk their board, they can. The manufacturer of the graphics card should not be the one making the decision to risk my motherboard and power supply.
john_What I am saying is that there are plenty of cards out there sucking 85-90W from the PCIe bus
Show me one other card that consistently pulls over 75w from the PCI-E bus.
Posted on Reply
#319
john_
newtekie1Show me one other card that consistently pulls over 75w from the PCI-E bus.
You first go and learn the alphabet. Then come and ask me to show you anything. I have lost enough time with your fanboyism and your ignorance.

I am really thinking of putting this
So while he increases the clock speeds, the voltage stays the same, so the current going through the GPU stays the same. So you get no real power consumption increase.
in my signature with your name on it.
Posted on Reply
#320
cdawall
where the hell are my stars
newtekie1Show me one other card that consistently pulls over 75w from the PCI-E bus.
Hand me the equipment; I have a hunch I have one or two on my shelf.
Posted on Reply
#321
newtekie1
Semi-Retired Folder
john_You first go and learn the alphabet. Then come and ask me to show you anything. I have lost enough time with your fanboyism and your ignorance.

I am really thinking of putting this
in my signature with your name on it.
So judging by your insult-ridden, useless response, I'm going to assume you don't actually have any examples to back up your claim and can't show me a single other card that consistently pulls more than 75w from the PCI-E bus. Got it. You can move along if you don't have anything useful to add to the thread.
cdawallHand me the equipment; I have a hunch I have one or two on my shelf.
I'd guess they were from the Fermi era... and even then, I believe they only did it when overclocked; at stock they didn't.
Posted on Reply
#322
cdawall
where the hell are my stars
newtekie1I'd guess they were from the Fermi era... and even then, I believe they only did it when overclocked; at stock they didn't.
A couple of different gens; Fermi is one, but my 470s are water-cooled and consume less power because of it. For reference, however, I had a pair of 480s in SLI pulling nearly 900W at the wall by themselves at stock clocks.
Posted on Reply
#323
cadaveca
My name is Dave
john_Users will go back to overclocking cards and sucking 85-90W from the PCIe bus, thinking this was something limited to the reference RX 480. Does anyone think about PCIe power draw even today?
W1zz has been testing PCIe power draw for a LONG time. I have personally been testing motherboards over the 8-pin connector only. Reviewers do look at these things with a critical eye that the normal user does not. So yeah, some people do.

AMD's 2900XT was popping motherboards at the 24-pin.
NVidia's GTX570 did as well.

If you pay attention, sure, there are a few cards that cause motherboard damage fairly consistently. For the most part, that's the whole reason why motherboard makers NOW include additional power for the PCIe slots, but not all boards do. There are MANY 3-x16 slot boards that support Crossfire that do not.
john_"Temps are normal, 3DMark finishes, so no problem here". Isn't it the typical routine when overclocking? Was anyone thinking about power draw until today?
People that overclock should be aware of these sorts of issues in the first place, but the general "overclocker" isn't. There is much more that they aren't aware of. That's why I dropped OC and posting on HWBot, and put little focus on OC in my reviews. To me, OC is deep hardware analysis and testing, not a point-based skill competition like it has become. I don't call chasing numbers without a care about what dies OC'ing... and so I focused on GAMING as the main selling point. The idea that "stuff dies when you OC" isn't true... stuff dies when you BLINDLY OC.

To me overclocking is an art. In order to make great art, you need to understand the medium you use, whether it be paint, pencil, music, or hardware. However, mass marketing has hidden all of that as people have used OC as a selling feature.


Do a Google search for "burnt 24-pins". It's a hoot. Nearly every thread will blame the PSU. The real cause? Likely a VGA card or a USB controller stuffed it. Not a single mention of that. Well, that's not entirely true, there are a couple, but still... when the blind lead the blind...
Posted on Reply
#325
HD64G
sith'ariI was replying to a comment so apparently somebody cared. :rolleyes:
I wasn't sure what MB you had. Now I just don't care at all, so have a nice day.
Posted on Reply