# Official Statement from AMD on the PCI-Express Overcurrent Issue



## W1zzard (Jul 2, 2016)

AMD sent us this statement in response to growing concern among our readers that the Radeon RX 480 graphics card violates the PCI-Express power specification by overdrawing power from its single 6-pin PCIe power connector and the PCI-Express slot. Combined, the total power budget of the card should be 150 W; however, it was found to draw well over that limit.

AMD has had out-of-spec power designs in the past, with the Radeon R9 295X2 for example, but that card is targeted at buyers with reasonably good PSUs. The RX 480's target audience could face trouble powering the card. Below is AMD's statement on the matter. The company stated that it's working on a driver update that could cap the power at 150 W. It will be interesting to see how that power limit affects performance.



 



> "As you know, we continuously tune our GPUs in order to maximize their performance within their given power envelopes and the speed of the memory interface, which in this case is an unprecedented 8 Gbps for GDDR5. Recently, we identified select scenarios where the tuning of some RX 480 boards was not optimal. Fortunately, we can adjust the GPU's tuning via software in order to resolve this issue. We are already testing a driver that implements a fix, and we will provide an update to the community on our progress on Tuesday (July 5, 2016)."



*View at TechPowerUp Main Site*


----------



## the54thvoid (Jul 2, 2016)

Good that they're addressing it, but they can't blame the memory speed. The GTX 1070 runs at the 'unprecedented' 8Gbps.

Then again, not a huge issue, as it only really affects much older mobos?


----------



## ZeppMan217 (Jul 2, 2016)

So, they're gonna reduce the voltage? Lower the board's max power draw? What kind of a solution can be cooked up with a driver update?


----------



## chinmi (Jul 2, 2016)

Hahaha.... Dead on arrival... People have more and more reason to wait and buy the 1060 now...


----------



## arnoo1 (Jul 2, 2016)

Good, but this means the card will probably be slower


----------



## WhyCry (Jul 2, 2016)

You don't always see Wizzard making a news post.


----------



## Fragment (Jul 2, 2016)

They could just limit the power draw / undervolt the card.
There are already reports on reddit that this doesn't actually affect performance in any measurable way, because some cards seem to come with a 1.3V vcore out of the box, which is way over what's needed to reach boost clock.

https://www.reddit.com/r/Amd/comments/4qupw4/super_psa_all_rx480_owners_please_attempt_to/

Maybe there is an even more elegant solution; let's wait 'til Tuesday.



WhyCry said:


> You don't always see Wizzard making a news post.


Haha. Or you making any kind of post...


----------



## $ReaPeR$ (Jul 2, 2016)

chinmi said:


> Hahaha.... Dead on arrival... People have more and more reason to wait and buy the 1060 now...



yeah mate.. whatever you say.. 

on topic: this doesn't seem to be such a major problem, and i love how people have blown it way out of proportion. what do you think happens when you oc a card, geniuses?


----------



## Fragment (Jul 2, 2016)

$ReaPeR$ said:


> yeah mate.. whatever you say..
> 
> on topic: this doesnt seem to be such a major problem, and i love how people have blown it way out of proportion, what do you think happens when you oc a card geniuses?



Probably that the extra power comes from all the Wi-Fi signals that are in the air nowadays hue hue.


----------



## Recus (Jul 2, 2016)

RIP AMD

http://seekingalpha.com/article/3985508-amds-polaris-revealed-overhyped-disaster

That's why the AMD community is the worst.

https://www.reddit.com/r/Amd/comments/4qfwd4/rx480_fails_pcie_specification/?sort=new



> *moderators here are now flairing this as 'rumor/FUD' despite plenty of testing having been done to corroborate this.*



Spreading BS that the GTX 960 was burning motherboards too.
http://www.pcper.com/reviews/Graphi...s-Radeon-RX-480/Evaluating-ASUS-GTX-960-Strix


----------



## RejZoR (Jul 2, 2016)

I was expecting AMD to be using input power monitoring and control just like NVIDIA uses on Maxwell 2 (where you can independently control how much power each input can draw). In theory, I could limit my GTX 980 to draw power exclusively from the 6-pin+8-pin, but I'd have to adjust the total power limit to about 225W then (as opposed to the current 250W with some reserve from the PCIe slot, which is still limited to 66W).

Bottom line, clocks may dip a tiny bit due to the slight power restriction imposed by this fix (only on cards that have these problems), but I'm not expecting any noticeable real-world difference. Especially if we consider that AMD, as time progresses, optimizes drivers and gains performance, as opposed to NVIDIA, which seems to drop it over time...

It was a bit of a cock up, but not much bigger than NVIDIA's cocked up fan profile on GTX 1080...


----------



## Ferrum Master (Jul 2, 2016)

What's the VRM IC? Anybody knows?

They can adjust power limits for each, there should be no problems. Later AIB cards will be okay out of the box.


----------



## $ReaPeR$ (Jul 2, 2016)

RejZoR said:


> I was expecting AMD to be using input power monitoring and control just like NVIDIA is using on Maxwell 2 (where you can independently control how much each power input can draw power). In theory, I could limit my GTX 980 to exclusively draw power from 6pin+8pin, but I'd have to adjust the total power limit to about 225W then (oppose to current 250W with some reserve from the PCIe which is still limited to 66W).
> 
> Bottom line, clocks may dip tiny bit due to slight power restriction imposed by this fix (only on cards that have these problems), but I'm not expecting any noticeable real world difference. Especially if we consider the fact that AMD, as time progresses, optimizes drivers and gains performance opposed to NVIDIA which seems to drop it over time...
> 
> It was a bit of a cock up, but not much bigger than NVIDIA's cocked up fan profile on GTX 1080...



is it really that serious of a problem though? because people seem to be panicking about this..


----------



## KainXS (Jul 2, 2016)

It's good to see they addressed it; let's see how they fix it. Will they underclock and undervolt, or just reduce the power limit and cap its maximum, maybe?


----------



## Fiery (Jul 2, 2016)

Ferrum Master said:


> What's the VRM IC? Anybody knows?



IRF IR3567B


----------



## W1zzard (Jul 2, 2016)

WhyCry said:


> You don't always see Wizzard making a news post.


bta seems asleep, I saw the email after crawling out of bed with my gf, so I thought "let's get this out to the people"


----------



## BiggieShady (Jul 2, 2016)

AMD said:
			
		

> Recently, we identified select scenarios where the tuning of some RX 480 boards was not optimal.



Translation: Recently, we were forced to admit someone in our ranks fucked up the complete voltage table for Polaris ... poor bastard thought we were trying to hit 1.3 GHz on a 28 nm chip

seriously, an average sample (72% ASIC quality) gets to 1.3 GHz at 1.15 V ... and new boards are already undervolted in the BIOS version they ship with


----------



## Ferrum Master (Jul 2, 2016)

Fiery said:


> IRF IR3567B



Same as always... lately they can manage power delivery for each phase, so there should be no problem. No downclocking or lower voltages needed, considering there's already a 50% power reserve. What's the fuss, really?


----------



## $ReaPeR$ (Jul 2, 2016)

Fragment said:


> Probably that the extra power comes from all the Wi-Fi signals that are in the air nowadays hue hue.


LOL yeah!!! hehehe


----------



## arbiter (Jul 2, 2016)

Fragment said:


> There are already reports on reddit that this actually doesn't affect performance in measurable ways.
> Because some cards seem to come with 1.3V vcore out of the box. Which is way over that what's needed to reach boost clock.


Likely it's a safe voltage range they can run every chip at to get to xxxx MHz; some can do it well below that voltage, yes, but there are a few that need it to get there.



$ReaPeR$ said:


> is it really that serious of a problem though? because people seem to be panicking about this..


Um, when your computer just goes poof, turns off, and won't turn on again because the board is fried from the power draw, would you panic a bit?
Motherboard power circuits, more so on cheaper boards, can do up to the spec but not really anything past that. It's like an extension cord you pull well above the power it can handle: it gets hot, and well, you know the end.



> As you know, we continuously tune our GPUs in order to maximize their performance within their given power envelopes and the speed of the memory interface, which in this case is an unprecedented 8 Gbps for GDDR5. Recently, we identified select scenarios where the tuning of some RX 480 boards was not optimal.


aka they're gonna enforce clock limits so it doesn't boost to where it does now.


----------



## Fluffmeister (Jul 2, 2016)

$ReaPeR$ said:


> is it really that serious of a problem though? because people seem to be panicking about this..



People here go into meltdown when a fan spins up when it doesn't need to, so this is borderline biblical disaster.


----------



## DeathtoGnomes (Jul 2, 2016)

chinmi said:


> Hahaha.... Dead on arrival... People have more and more reason to wait and buy the 1060 now...


...there's one in every crowd...

ontopic: Sounds like a small batch of cards got the wrong burn.


----------



## $ReaPeR$ (Jul 2, 2016)

arbiter said:


> Likely its a safe range they can run every chip at to get to xxxx mhz some can do it at well below that voltage yes but there are a few that need it to get there.
> 
> 
> Um when your computer just goes poof, off and not turn on again cause the board is fried cause the power draw. Would you panic a bit?
> ...



yeah i know how it works, i've seen it in practice. my point is: since we haven't seen many mobos die from this problem, why all the panic?



Fluffmeister said:


> People here go into meltdown when a fan spins up when it doesn't need to, so this borderline biblical disaster.



hahahahahahahahaha the end is nigh repent!!!!!

back on topic though, i think we need more info on this problem. it is an issue, but since its effects seem to be almost non-existent, i really don't understand the panic. most people here have good to really good hardware, and most of us are used to OCing, and when you OC something it will go beyond the specs anyway, so..


----------



## GC_PaNzerFIN (Jul 2, 2016)

As usual, the technical PR team tries to **** with customers and shove out a load of propaganda. Thanks to above-and-beyond effort from multiple hardware sites, this issue finally _may_ be fixed. When you talk down these issues, you should know it is you, the customer, who is getting fooled here.


----------



## Ferrum Master (Jul 2, 2016)

GC_PaNzerFIN said:


> As usual, technical PR team tries to **** with customers and shove out load of propaganda. Thanks to beyond required effort from multiple hardware sites, this issue finally _may_ be fixed. When you talk down these issues, you should know it is you, customer, who is getting fooled here.



Lately every new tech product is plagued with some sort of issue due to haste... that's the electronics biz... it has always been like that, really.


----------



## arbiter (Jul 2, 2016)

$ReaPeR$ said:


> yeah i know how it works, i have seen it in practice. my point is since we havent seen many mobos die from this problem why all the panic?


The card has been out for what, 3-4 days at most? If MBs have died from it already, imagine what the rate could be in, say, 3 months, or 6 months, etc. That is the real issue.


----------



## $ReaPeR$ (Jul 2, 2016)

Ferrum Master said:


> Lately every new tech product is plagued with some sort of issues due to haste... that's the electronics biz... it has been like that always really.



indeed!!



arbiter said:


> Card has been out for what 3-4 days at most. if MB's died already from it image what rate could be in say 3 months, or 6 months, etc. That is the real issue.



i haven't read anything about dead mobos. i'm not saying this isn't an issue, i just don't think it's a safety hazard either.
this should be helpful


----------



## arbiter (Jul 2, 2016)

$ReaPeR$ said:


> i havent read anything about dead mobos, im not saying that this isnt an issue, i just dont think its a safety hazard either.
> this should be helpful


You haven't read anything about dead mobos? Well, some may not be dead dead, but it has messed up some and killed others. Have a read: https://community.amd.com/thread/202410


----------



## the54thvoid (Jul 2, 2016)

It is a definite problem and it's not so easy to say it's trivial.  However, it's only going to affect very few people (those who own older-spec boards without the required PCI-E magic to allow the extra pull through the lane).  If the fix through drivers sorts it, then the problem is gone, easy as that.  It is a technical oversight on AMD's side: they 'flexed' the power draw rules, or they thought nobody would notice.  This is very similar to the 3.5GB issue, where a technical issue was thought irrelevant, and to be fair it is to most users.
I imagine the most vocal protests will come from people who don't use RX 480s (ahem, Nvidia peeps), in the exact same way it was AMD users who had a field day over the 3.5GB memory 'thing'.

It's a real thing that is being looked at, that affects few people, and is being hijacked by morons.  Just like the 970 non-issue.


----------



## ShurikN (Jul 2, 2016)

Seems to me the custom boards will fix everything in a week's time.


----------



## Massiverod (Jul 2, 2016)

BEST CARD EVER FOR THE PRICE!! Every company has little setbacks with brand new products on the market. I can't wait to get mine!!! I can't wait, I can't wait!!!


----------



## RejZoR (Jul 2, 2016)

the54thvoid said:


> It is a definite problem and it's not so easy to say it's trivial.  However, it's only going to affect a very few people (who own older spec boards without the required PCI-E magic to allow the extra pull through the lane).  If the fix through drivers sorts it then the problem is gone - easy as that.  It is a technical oversight on AMD's side that they 'flexed' power draw rules or they thought nobody would notice.  This is very similar to the 3.5Gb issue where a technical issue was thought irrelevant and to be fair is to most users.
> I imagine the most vocal protests will come from people who don't use RX 480's (ahem, Nvidia peeps) in the exact same way it was AMD users who had a field day over the 3.5Gb memory 'thing'.
> 
> It's a real thing that is being looked at, that affected few people, being hijacked by morons.  Just like the 970 non issue.



You can't ever fix the 3.5GB issue/nonsense. RX 480 glitches can be and will be fixed. Just like the GTX 1080 fan speed nonsense: it was stupid, we joked about it, and they fixed it.


----------



## sith'ari (Jul 2, 2016)

Massiverod said:


> BEST CARD EVER FOR THE PRICE!! Every company has little setbacks with brand new products on the market. I can't wait to get mine!!! I can't wait, I can't wait!!!



Yeah, very cheap, especially if i have to spend another ~$100 to replace my likely damaged/fried motherboard!! 
No matter what people have said in this thread, that's the 1st time i've read a review (2 reviews actually, TPU & Tom's) that recognises the GPU as a possible threat to the rest of the hardware!!


----------



## newtekie1 (Jul 2, 2016)

$ReaPeR$ said:


> on topic: this doesnt seem to be such a major problem, and i love how people have blown it way out of proportion, what do you think happens when you oc a card geniuses?



In a properly designed card that extra power is pulled from the external PCI-E power connectors.  But properly designed cards are also built with extra power headroom, so even if you overclock you aren't likely to exceed the power rating of the connectors.  That is why most cards that draw close to 150w usually come with either 2x6-pin or an 8-pin.  If they had just gone with a single 8-pin connector, the card would have been good for up to 225w, and this wouldn't be an issue.
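As a back-of-the-envelope sketch of the budgets described above (assuming the commonly cited nominal limits of 75 W for the slot, 75 W per 6-pin, and 150 W per 8-pin; the helper names and the 166 W reading are illustrative, not official figures):

```python
# Nominal PCI-E power budgets per connector configuration (commonly cited
# spec values; individual boards and PSUs may be rated differently).
CONNECTOR_LIMITS_W = {"slot": 75, "6-pin": 75, "8-pin": 150}

def board_power_budget(connectors):
    """Total nominal budget for a card: the slot plus its external connectors."""
    return CONNECTOR_LIMITS_W["slot"] + sum(CONNECTOR_LIMITS_W[c] for c in connectors)

def overdraw(measured_w, connectors):
    """How far a measured draw exceeds the nominal budget (0 if within spec)."""
    return max(0, measured_w - board_power_budget(connectors))

# RX 480 reference design: single 6-pin -> 150 W budget, ~166 W measured
print(board_power_budget(["6-pin"]))   # 150
print(overdraw(166, ["6-pin"]))        # 16
# A hypothetical single 8-pin design would have covered the same draw:
print(overdraw(166, ["8-pin"]))        # 0
```

Which is the whole point: the same 166 W draw that busts a slot+6-pin budget fits comfortably under a slot+8-pin one.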



W1zzard said:


> bta seems asleep, I saw the email after crawling out of bed with my gf, so I thought "let's get this out to the people"











$ReaPeR$ said:


> i have seen it in practice. my point is since we havent seen many mobos die from this problem why all the panic?



I have seen a motherboard suffer damage, and a power supply die from this problem.  Just not with the RX 480, rather it was a pair of GTX480s.


----------



## Tatty_One (Jul 2, 2016)

I would think it will possibly be only a matter of hours before something is done, assuming a driver update would alleviate the issue. AMD know better than most that anything negative associated with a new product launch, if not dealt with promptly, could have long-term implications.


----------



## ensabrenoir (Jul 2, 2016)

.....to everyone who said it's a non-issue....please forward the good news to all the users out there who have lost one/all of their pcie lanes.  AMD is known as a value brand, so there are a lot of value boards out there....Do you think AMD is going to cut checks to the "FEW" that this did happen to?  Maybe you all can come together and start a GoFundMe for them to replace their motherboards.   It's only a non-issue until it happens to you.....* AND NO DRIVER UPDATE IS GOING TO FIX THE ALREADY DAMAGED BOARDS.*


----------



## RejZoR (Jul 2, 2016)

sith'ari said:


> Yeah, very cheap, especially if i have to give another +-100$ to replace my likely damaged/fried motherboard!!
> No matter what people have said in this thread, that's the 1st time i read a review (*2 reviews actually, TPU & Tom's) that recognises the GPU as a possible threat for the rest of the hardware !!



It's almost as if NO ONE in this thread has actually read AMD's *OFFICIAL* statement... Drivers will address the excessive current draw from the PCIe slot. Considering I'm fiddling with my GTX 980, which has similar fully configurable power control logic, I know it can also be controlled through the driver (which is just an extension of the BIOS). Meaning what AMD said isn't just a load of BS; it is a valid solution.

As for an 8-pin being the solution, just a small hint, since everyone is screaming EVERYONE SHALL FOLLOW PCIe SPECS!!!!11111oneoneone: the 8-pin is actually not officially supported by the PCI Express certifying body. Meaning, if a card has an 8-pin power connector, it's kinda violating the PCIe specification (well, not violating, just not following it). Dual 6-pin, no problem; a single 8-pin or 6-pin+8-pin isn't officially covered by the PCIe specification.

Also, going with a single 6-pin ensures maximum compatibility with a wide range of PSUs. If you go with a single 8-pin, it's already questionable whether the target PSUs even have it, because I think cheaper ones still only have a 6-pin...


----------



## jg_nwi (Jul 2, 2016)

the54thvoid said:


> Good they're addressing it but they can't blame the memory speed. The GTX 1070 runs at the 'unprecedented' 8Gbps.
> 
> Then again, not a huge issue as only really affected much older mobos?



From everything I've read, it's due to the fact that they attempted to gimp the PSU connection requirements to make the card accessible to the masses. Unfortunately, the systems that don't have the extra power are also the ones whose mobos are going to blow.


----------



## $ReaPeR$ (Jul 2, 2016)

arbiter said:


> you haven't read anything about dead mobo's well some may not be dead dead, but it has messed up some and killed others, have a read. > https://community.amd.com/thread/202410



i read a lot in there; most of it was bullshit that cannot be checked. you can't guarantee objectivity in a forum where anyone can post anything.. again, i'm not saying it's not an issue, i just don't think it's as big as people make it out to be, and it's affecting people with older hardware.


sith'ari said:


> Yeah, very cheap, especially if i have to give another +-100$ to replace my likely damaged/fried motherboard!!
> No matter what people have said in this thread, that's the 1st time i read a review (*2 reviews actually, TPU & Tom's) that recognises the GPU as a possible threat for the rest of the hardware !!



are you serious? your 750ti is already bottlenecked by your ancient system and you would buy this card?! that would be a waste of money, and i don't think you would do that; you are just whining for no reason whatsoever. and since you think that amd is the devil for killing little baby seals..
https://forums.geforce.com/default/...g-samsung-and-lg-notebook-lcd-display-panels/


----------



## sith'ari (Jul 2, 2016)

RejZoR said:


> It's almost as if NO ONE in this thread has actually read AMD's *OFFICIAL* statement... Drivers will address the excessive current draw from PCIe. ...................................




1) I'm sorry, but *until* AMD release these drivers and fix this problem, i will continue to consider it an existing problem.  
2) Through drivers they will likely undervolt the card, right? Won't this reduce the performance? i think that's the obvious outcome.


----------



## Basard (Jul 2, 2016)

WhyCry said:


> You don't always see Wizzard making a news post.



On a Saturday!


----------



## newtekie1 (Jul 2, 2016)

RejZoR said:


> As for 8pin being a solution, just a small hint since everyone is screaming EVERYONE SHALL FOLLOW PCIe SPECS!!!!11111oneoneone. 8pin is actually not officially supported by PCI Express certifying body. Meaning, if card has 8pin power connector it's kinda violating PCIe specifications (well, not violating, just not following it). Dual 6pin, no problem. single 8pin or 6pin+8pin ain't officially supported by PCIe specifications.



You are combining two different statements, made by different people, and trying to say they are both invalid because they contradict each other.  The point is that power specifications for the different connectors should be adhered to, and this isn't the first time AMD has ignored them(to be fair nVidia has done it in the past too).  It doesn't matter that the 8-pin isn't officially supported by PCI-SIG, the 8-pin connector's power capability is specified in the ATX spec, just like the 6-pin connector, and any other connector on your PSU.  Those are the specs we should be following for those connectors.  PCI-SIG set the power limit for the PCI-E connector, because they created it, and for _that_ connector we should follow their specification of max 75w.



RejZoR said:


> Also, going single 6pin ensures maximum compatibility with wide range of PSU's. If you go with single 8pin, it's already questionable if target PSU's even have it. Because I think cheaper ones still only have just 6pin...



There is a lot wrong with this statement.  No one running a PSU with just a single 6-pin should be running this card.  When you have the likes of the bottom-of-the-barrel $30 eVGA 430W coming with an 8-pin, if your PSU doesn't have an 8-pin at this point, it's shit.  Go buy a new one, it's only $30!

Plus, if the PSU only has a single 6-pin it is probably very close to the edge of actually being able to provide enough power to actually use that 6-pin, so going over spec on power consumption is probably a very bad thing.  You're talking about generic shit units that might be rated for 500w, but probably can't do 250w reliably.  Do you really think people with those types of power supplies should be using a card that consumes 170w?  

Do you really think AMD wanted people with those types of units to use the card?  I don't.  I think the real reason behind the single 6-pin was marketing.  They wanted to hype the card, to say "look at how power efficient it is, it only uses a _single 6-pin!"_ But it backfired on them.


----------



## sith'ari (Jul 2, 2016)

$ReaPeR$ said:


> .......................
> are you serious? your 750ti is already bottle-necked by your ancient system and you would buy this card?! that would be a waste of money, and i dont think you would do that, you are just whining with no reason whatsoever. and since you think that amd is the devil for killing little baby seals..
> https://forums.geforce.com/default/...g-samsung-and-lg-notebook-lcd-display-panels/



-Buddy, my system remains ancient for a certain reason. If i wanted to, i could replace it within a heartbeat! it's not a question of money!
-P.S. I won't hide my feelings about AMD. I clearly remember the period before the Fury X's release. They had been brainwashing us for months about the tremendous capabilities of HBM memory, making us believe they would release some kind of rocket of a GPU that would destroy all competition!! And when they finally released this "rocket", it would struggle to surpass a reference 980 Ti!! EDIT: That was the *LAST* time i took them seriously!!


----------



## McSteel (Jul 2, 2016)

RejZoR said:


> It's almost as if NO ONE in this thread has actually read AMD's *OFFICIAL* statement... Drivers will address the excessive current draw from PCIe. Considering I'm fiddling with my GTX 980 that has similar fully configurable power control logic, I know it can be also controlled through driver (which is just an extension of BIOS). Meaning what AMD said isn't just load of BS and that it is a valid solution.
> 
> As for 8pin being a solution, just a small hint since everyone is screaming EVERYONE SHALL FOLLOW PCIe SPECS!!!!11111oneoneone. 8pin is actually not officially supported by PCI Express certifying body. Meaning, if card has 8pin power connector it's kinda violating PCIe specifications (well, not violating, just not following it). Dual 6pin, no problem. single 8pin or 6pin+8pin ain't officially supported by PCIe specifications.
> 
> Also, going single 6pin ensures maximum compatibility with wide range of PSU's. If you go with single 8pin, it's already questionable if target PSU's even have it. Because I think cheaper ones still only have just 6pin...



Actually... 8-pin PCI-E _does_ exist in the PCI specification, but it was never implemented as specified. Namely the additional two pins (as compared with 6-pin) were supposed to be used for the voltage sense and regulation signal return path. However, now both pins carry the GND/COM wires, which don't really help all that much with making power delivery more stable.

The _real_ issue here is that the card seems to draw excessive amounts of power from the PCI-E *slot*, which is at most fed by two +12V wires on the 24-pin ATX connector, and those are meant for all +12V needs of the motherboard and all connected devices, sans the CPU. Add to that the fact that power traces leading to the PCI-E slot aren't normally very beefy; and the fact that there are only 5 flimsy less-than-paper-thin pins on the card accepting the delivered power and you have a situation where you really want to limit PCI-E slot power delivery as much as possible.

It would actually be fine if the card drew 120W from the 6-pin and up to 50W from the slot; the unnecessary drama would be far less pronounced.
The 6-pin may only be declared 75W-capable, but in reality it can handle the full 150W quoted for the 8-pin in the majority of cases... The only time it can't is on a shitty PSU with wires thinner than 18AWG and cheaply made pins (like on a $15 Diablotek).
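A rough sanity check of that claim (assuming three +12 V wires on the 6-pin and roughly 7 A per 18AWG wire, Mini-Fit Jr.-class ballpark figures; these are illustrative assumptions, not connector datasheet values):

```python
# Can a decently built 6-pin physically carry ~150 W, despite its 75 W rating?
# Assumed: three +12 V wires, ~7 A each for 18AWG (derate for heat / thin wire).
WIRES_12V = 3
AMPS_PER_WIRE = 7.0
VOLTS = 12.0

capacity_w = WIRES_12V * AMPS_PER_WIRE * VOLTS
print(capacity_w)               # 252.0 -> well above the nominal 75 W rating

# Current per wire needed to deliver 150 W through the connector:
required_amps = 150 / VOLTS / WIRES_12V
print(round(required_amps, 2))  # 4.17 A per wire, comfortably within 18AWG
```

Under these assumptions the connector itself is nowhere near its physical limit at 150 W; it's the cheap thin-wired PSUs that change the picture.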

Which brings us to...



newtekie1 said:


> There is a lot wrong with this statement.  No one running a PSU with just a single 6-pin should be running this card.  When you have the likes of the bottom of the barrel $30 eVGA 430w that has a 8-pin, if your PSU doesn't have an 8-pin at this point, it's shit.  Go buy a new one, it's only $30!
> 
> Plus, if the PSU only has a single 6-pin it is probably very close to the edge of actually being able to provide enough power to actually use that 6-pin, so going over spec on power consumption is probably a very bad thing.  You're talking about generic shit units that might be rated for 500w, but probably can't do 250w reliably.  Do you really think people with those types of power supplies should be using a card that consumes 170w?
> 
> Do you really think AMD wanted people with those types of units to use the card?  I don't.  I think the real reason behind the single 6-pin was marketing.  They wanted to hype the card, to say "look at how power efficient it is, it only uses a _single 6-pin!"_ But it backfired on them.



Even the aforementioned Diablotek could handle powering one of these, paired with a latest-gen Skylake CPU, a couple sticks of RAM and some storage. It would all easily fit into a 250W envelope (absolute peak power draw, realistically less than that), which even the worst of the worst PSUs can manage, at least for a while.

That being said, everyone should have the common sense not to skimp on the PSU. No need to go crazy, a nice $30-or-so PSU from a reputable manufacturer should do fine, as @newtekie1 pointed out.


----------



## iO (Jul 2, 2016)

Simply lowering the power target to something like -10% by default should do the trick.

And they might drastically limit OC capabilities in Overdrive, as the card could still violate the specs when OCed...


----------



## HD64G (Jul 2, 2016)

iO said:


> Simply lowering the power target to like -10% or so by default should do the trick.
> 
> And they might limit OC capabilities in Overdrive drastically as the card still could violate the specs when OCed...



1st sentence is the solution indeed as detailed tests clearly showed: http://semiaccurate.com/2016/07/01/investigating-thermal-throttling-undervolting-amds-rx-480/


----------



## ZoneDymo (Jul 2, 2016)

Oh well, I was disappointed in the card already....GTX1060 pls be good enough and affordable


----------



## Jism (Jul 2, 2016)

This is such a storm in a glass of water....

If both the 6-pin and the motherboard provide 75W each, for a total of 150 watts, and the card draws 166 watts, that means 16 watts, split by two (8 watts each), is being pulled more than it should be.

I think any motherboard is capable of handling 25 watts on top; otherwise that system or motherboard would already be at its limits.
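The arithmetic above, spelled out (assuming the 166 W figure quoted in the thread and an even split between the slot and the 6-pin, which real boards don't guarantee):

```python
# Overdraw arithmetic from the post above. Assumptions: 150 W nominal budget
# (75 W slot + 75 W 6-pin), 166 W measured draw, even split across both rails.
TOTAL_BUDGET_W = 150
MEASURED_W = 166

excess = MEASURED_W - TOTAL_BUDGET_W   # watts over the combined budget
per_rail = excess / 2                  # extra per source, if split evenly
print(excess, per_rail)                # 16 8.0
```

Measurements reported elsewhere in the thread suggest the split is not actually even and the slot carries more of the excess, which is why the slot side is the contentious part.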


----------



## psyph3r (Jul 2, 2016)

Recus said:


> RIP AMD
> 
> http://seekingalpha.com/article/3985508-amds-polaris-revealed-overhyped-disaster
> 
> ...



Well, basically. It's not a big deal; the 960 does it way worse. You are blowing it out of proportion because people love to hate on AMD. This is like heading the idiots off at the pass, so people like you have one less reason to spread fear and FUD about AMD for no reason. The situation in which the non-issue happens is 4K in a game that runs at 10 fps... which no sane person would play at.  Really, the nvidia community is the worst. Just be happy your cards are cheaper. Your video cards are not dogma.


----------



## ppn (Jul 2, 2016)

Cut to 25 watts from the slot or avoid the card. simple as that. i dun trust "power target -10%" solutions.


----------



## Bansaku (Jul 2, 2016)

Why is this an issue? Both the GTX 750 Ti and the GTX 950 drew significantly more than 75W from the PCI-E bus in short bursts, and there is absolutely no problem with brief power spikes. Shitty deal if you have an older/cheap mobo and this is an issue, but seriously, quality motherboards are dirt cheap!


----------



## ppn (Jul 2, 2016)

Because you don't SLI a 750 Ti (which draws 57 watts, by the way: reviews/NVIDIA/GeForce_GTX_750_Ti/23.html), but CF RX 480 is advertised, and that's 150 watts, plus 25+ watts for DDR3, all of that on the 4-pin +12V.


----------



## Jism (Jul 2, 2016)

The ATX 24-pin delivers one or two 12V wires to the motherboard, providing power for the PCI-Express slots and everything else. Any modern budget motherboard also carries a 4-pin power connector for the CPU. And still, on some motherboards that power line is shared as well.

If you think the extra 16 watts is a problem, then buy a proper motherboard, or a $10 PCI-Express "booster"


----------



## Bansaku (Jul 2, 2016)

Food for thought: Polaris RX 480 - Power Problems or PCI Exaggeration?


----------



## McSteel (Jul 2, 2016)

Jism said:


> This is such a storm in a glass of water....
> 
> If both the 6 pins and motherboard provide a 75W for a total of 150 watts, and the card exceeds at 166 watts, this means that 16 watts split by two (8 watts) is being pulled more then it should.
> 
> I think any motherboard is capable of doing more then 25 watts on top, otherwise that system or motherboard would already be at it's limits.



The problem is most budget motherboards aren't as capable as you'd like... During the Bitcoin GPU mining craze, I've seen dozens of fried slots and 24-pin ATX connectors...

The connector itself will carry up to 14A (7A per wire pair, two +12V wires plus two GND/COMs on the connector), which is 168W. And that's at 20°C; less at higher temps.
It wouldn't be a problem if the card only made brief excursions past 75W, but it consistently draws that much from the slot. Add another one in a CF setup, without additional +12V power connectors on the board, and you can be pretty sure you're running at the very limit of the ATX connector's capabilities. As for the individual PCI-E slots, I imagine that depends on the quality of the mobo itself...
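The connector figure above works out as follows (a rough sketch using the post's own assumptions: two +12V wire pairs rated at 7A each, at 20°C):

```python
# Rough capacity check for the 24-pin ATX connector's +12V pins,
# using the assumptions in the post above: two +12V wire pairs,
# 7 A per pair, rated at 20 °C (derate at higher temperatures).
PAIRS_12V = 2
AMPS_PER_PAIR = 7.0
VOLTS = 12.0

max_current_a = PAIRS_12V * AMPS_PER_PAIR   # 14 A total on +12V
max_power_w = max_current_a * VOLTS         # 168 W at 20 °C

print(max_current_a, max_power_w)
```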

Even so, most people will probably be just fine. _Some_ problems, which could've been avoided, are to be expected though.


----------



## Jism (Jul 2, 2016)

Wasn't it the case that Bitcoin cards could be tweaked for maximum efficiency? I.e., clock down the memory to get the best power usage and the highest possible MH/s?

If you stack multiple cards in one motherboard and expect a single 24-pin ATX connector to supply each card with sufficient power, then I think you should consider a better motherboard with an external PCI-Express power source, or the use of PCI-Express boosters (nothing but an add-in card that provides extra current for the PCI-Express bus).

Again, I don't think 16 watts should cause huge problems. Shared, you're still talking about 8W maximum per source.


----------



## Divenity (Jul 2, 2016)

the54thvoid said:


> Good they're addressing it but they can't blame the memory speed. The GTX 1070 runs at the 'unprecedented' 8Gbps.
> 
> Then again, not a huge issue as only really affected much older mobos?



The GTX 1070 also has an 8-pin power connector, not a 6-pin, so it doesn't need to overdraw the PCI-E slot.


----------



## RejZoR (Jul 2, 2016)

So much drama about something AMD already confirmed as fixable via something as simple as DRIVERS, which even the most clueless noobs can install. Jesus Christ, everyone stop losing your shit already.

Let's just wait for this driver and see if things are resolved. Then whine about it if it doesn't actually get fixed.


----------



## rtwjunkie (Jul 2, 2016)

So, a driver will limit power draw to 150W. Now... what will this do to the AIB boards that are adding more power connectors to increase the power available for higher clocks?

If the driver affects all RX 480s, then it seems AMD will be dooming the AIB makers to lackluster performance and sales.


----------



## Divenity (Jul 2, 2016)

If it comes to that, rtw, I'm sure they will fix it.


----------



## rtwjunkie (Jul 2, 2016)

Divenity said:


> If it comes to that, rtw, I'm sure they will fix it.



I hope you're right.  Not sure how they would do that without unnecessarily confusing things by adding specific model recognition into those drivers. It really needs a hardware fix: send out rev. 02 cards. That way the AIBs can get on with making these cards better.


----------



## RejZoR (Jul 2, 2016)

rtwjunkie said:


> So, a driver will limit power draw to 150W. Now... what will this do to the AIB boards that are adding more power connectors to increase the power available for higher clocks?
> 
> If the driver affects all RX 480s, then it seems AMD will be dooming the AIB makers to lackluster performance and sales.



How do you think NVIDIA is separating the Founders Edition (reference) GTX 1080 cards with messed-up fan profiles from the custom models?

EDIT:
There is no need for a "hardware" fix. Have you ever fiddled with Maxwell II Tweaker? It does exactly what AMD will fix via drivers. It's what I'm doing with my GTX 980, and what thousands of Maxwell 2 users are doing. It's not rocket science once you figure it out, and considering AMD knows where everything is in their BIOS, it's a walk in the park. They can tap in with drivers easily; their new WattMan is most likely what they'll use anyway.


----------



## rtwjunkie (Jul 2, 2016)

RejZoR said:


> How do you think NVIDIA is separating founders edition (reference models) of GTX 1080 with messed up fan profiles from the custom models?



I thought NVIDIA said BIOS update, not driver?  AMD is saying driver update here, which is a more sweeping application.


----------



## GhostRyder (Jul 2, 2016)

Still foolish not to just make an 8-pin the default...  If they wanted to do this with the 4GB version and limit the board to the 150W spec with a 6-pin, that's fine, but they should at least have given the 8GB version an 8-pin reference design.

This was just a foolish design choice.


----------



## RejZoR (Jul 2, 2016)

rtwjunkie said:


> I thought NVIDIA said BIOS update, not driver?  AMD are saying here driver update, which is a more sweeping application.



NVIDIA fixed the fan issues with a driver. They can tap into anything; they are the makers of the hardware and the BIOS. The same goes for AMD in this case. If the RX 480 has similar power delivery logic to the GTX 9xx and GTX 10xx series (which I suspect it does), they can do exactly the same thing via drivers. They don't need to issue complicated and risky BIOS updates; they can do it via a driver update that simply taps into specific parameters between driver, BIOS, and hardware. It's how they fixed certain issues with the R9 290X cards, if you remember; I think that was about thermal throttling and fan profiles as well. It's the easiest way to fix things. BIOS updates are just too complicated and risky for average users, whereas installing drivers is a matter of a few clicks that basically any noob can do risk-free.



GhostRyder said:


> Still foolish not to just put an 8 pin as the default...  If they wanted to do this to the 4gb and limit the board spec to 150 with a 6 pin then it's fine, but they should have at least with the 8gb given an 8 pin reference.
> 
> This was just a foolish design choice.



Costs, my friend, costs. They wanted to make a really affordable product, and placing an 8-pin instead of a 6-pin could result in a higher price. They always say this when a bulk component would add 5p to the final product, but it then somehow becomes a $20 addition...


----------



## ZoneDymo (Jul 2, 2016)

GhostRyder said:


> Still foolish not to just put an 8 pin as the default...  If they wanted to do this to the 4gb and limit the board spec to 150 with a 6 pin then it's fine, but they should have at least with the 8gb given an 8 pin reference.
> 
> This was just a foolish design choice.



Honestly, I think NVIDIA pushed AMD with their new GPUs.
The fact that you can barely OC the RX 480 makes me believe AMD shipped as high a clock out of the box as possible to make the cards look good in the performance charts, but that they were originally not meant to run this high.
I guess it's best to get a custom-designed RX 480 from a partner that has an 8-pin connector and more cooling capacity.


----------



## BiggieShady (Jul 2, 2016)

GhostRyder said:


> This was just a *cheaper* design choice.


ftfy

Just like the saying "I'm not rich enough to buy cheap things", AMD should say "we're not rich enough to do cheap designs".


----------



## okidna (Jul 2, 2016)

ZoneDymo said:


> I guess its best to get a custom designed RX480 from a partner that indeed has an 8pin connector and more cooling capability.



And they are priced decently as well  : https://www.overclockers.co.uk/detail/index/sArticle/61887


----------



## OneCool (Jul 2, 2016)

W1zzard said:


> bta seems asleep, I saw the email after crawling out of bed with my gf, so I thought "let's get this out to the people"



It wasn't worth it. Should have stayed in bed with your gf.


----------



## Batou1986 (Jul 2, 2016)

I guess AMD is just going to keep quiet about the fact that this is somewhat of an issue on PCIe 3.0 boards but a huge problem for PCIe 2.0 boards, like every single AM3+ board on the market.


----------



## sith'ari (Jul 2, 2016)

okidna said:


> And they are priced decently as well  : https://www.overclockers.co.uk/detail/index/sArticle/61887



*£250 (edit: I just noticed they are priced in £, not $, so even more expensive than I thought).
Exactly the price I had predicted in the past for the RX480 8GB version, but when I said that, a lot of people disagreed.* ( https://hardforum.com/threads/radeon-rx-480-competition-poll.1903083/page-3#post-1042373817 )



> *RejZoR said:*
> Costs my friend, costs. They wanted to make really affordable product. Placing 8pin instead of 6pin could result in higher price. They always say this when bulk components could cost 5p to the final product, but it then somehow becomes a $20 addition...


Yeah, that's the whole point: when a company decides to go dirt-cheap and transfer the cost from themselves to the customer, then I'd say we have a problem.


----------



## R-T-B (Jul 2, 2016)

$ReaPeR$ said:


> is it really that serious of a problem though? because people seem to be panicking about this..



If you have a junky motherboard, maybe, but I doubt any mainstream brands don't build in A LITTLE reserve.


----------



## G33k2Fr34k (Jul 2, 2016)

sith'ari said:


> Yeah, very cheap, especially if i have to give another +-100$ to replace my likely damaged/fried motherboard!!
> No matter what people have said in this thread, that's the 1st time i read a review (*2 reviews actually, TPU & Tom's) that recognises the GPU as a possible threat for the rest of the hardware !!



The design of the PCB is certainly not cheap, according to PC Perspective. The 480 has a 6+1 power phase design with a beefier VRM setup. It seems this issue only affects older motherboards; newer ones don't have that problem.


----------



## R-T-B (Jul 2, 2016)

Jism said:


> Wasn't it the case that Bitcoin cards could be tweaked for maximum efficiency? I.e., clock down the memory to get the best power usage and the highest possible MH/s?



Going over the 12V spec on the PCIe rail was normal in Bitcoin mining.  If you had a cheap board, it could and would burn up with 5 cards in it all drawing over spec.  Not unheard of; I even experienced it once.  It doesn't smell good.


----------



## rtwjunkie (Jul 2, 2016)

RejZoR said:


> NVIDIA fixed fan issues with a driver. They can tap into anything, they are the makers of hardware and BIOS. Same goes for AMD in this case. If RX480 has similar power delivery logic as GTX 9xx and GTX 10xx series (which I suspect it does), they can do exactly the same thing via drivers. They don't need to issue complicated and risky BIOS updates, they can do it via driver update that simply taps into specific parameters between driver, BIOS and hardware. It's how they fixed certain issues with R9 290X cards if you remember. I think it was about thermal throttling and fan profiles as well. It's how you fix things the easiest. BIOS is just too complicated and risky for average users. Where installation of drivers is a thing of few clicks every noob can do basically risk free.



Ok, thanks for the update and correct info!


----------



## GC_PaNzerFIN (Jul 2, 2016)

Bansaku said:


> Why is this an issue? Both the GTX 750 Ti and the GTX 950 both drew significantly more than 75W from the PCI bus! In short bursts, there is absolutely no problem with brief power spikes. Shitty deal if you have an older/cheap mobo and this is an issue; Seriously, quality motherboards are dirt cheap!



None of those breaks the average spec of 75W (actually, for 12V it's even less), while the RX 480 breaks the average spec by a significant amount. It's not just short bursts. Huge difference.


----------



## HD64G (Jul 2, 2016)

ZoneDymo said:


> Honestly I think Nvidia pushed AMD with their new gpu's.
> The fact that you can barely OC the RX480 makes me believe AMD quickly issued as high a clock out of the box as possible to make the cards look good in the performance section, but that originally they were not meant to run this high.
> I guess its best to get a custom designed RX480 from a partner that indeed has an 8pin connector and more cooling capability.



The 6-pin was used just because AMD targets OEMs, and they don't usually sell PCs with high-end PSUs. So they could put a cheap PSU into a PC with Polaris.

The custom cards don't need to take this into consideration, so they will clock very high (rumors say around 1400+ MHz, which would reach or even surpass a stock 980).



GC_PaNzerFIN said:


> None of those breaks the average spec of 75W (actually, for 12V its even less), while RX 480 breaks average spec by significant amount. Its not short bursts only. Huge difference.



It's driver-fixable for those who don't know how to do it themselves; for those who want to do it now, set -10% in the power settings of the current driver. And no decrease in performance either. 

As for the custom 480s, they will have alternative BIOSes as usual and won't depend on default driver settings.


----------



## GC_PaNzerFIN (Jul 2, 2016)

HD64G said:


> Driver fixable thing for those who don't know how to do it themselves, -10% in power settings for the current driver for those who want to do that now. And not any decrease in performance also.


You think they are wasting power for no reason? -10% will change performance, or AMD would have done it already. 

There is a more technical solution which doesn't change power draw or performance, but that is still under investigation. Its feasibility depends on whether the PCI-E bus and the 6-pin are connected in parallel as inputs to all VREGs, or whether they supply different parts of the VRM.


----------



## sith'ari (Jul 2, 2016)

HD64G said:


> Driver fixable thing for those who don't know how to do it themselves, -10% in power settings for the current driver for those who want to do that now. *And not any* decrease in performance also.



Well, logic suggests that what you say cannot be accurate. If there is no decrease when you reduce the power, then why not -20%, or even -50%?


----------



## qubit (Jul 2, 2016)

Reading AMD's statement cynically, one could say they deliberately released the cards like this so they'd look better in the benchmarks, because once the power use is brought down, the performance is gonna be significantly lower, hurting their value-for-money proposition. It'll be interesting to see exactly how big the performance hit will be.

@W1zzard are we gonna see a quick retest review with a handful of benchmarks with the revised driver to check this out?


----------



## Tsukiyomi91 (Jul 2, 2016)

If AIB vendors are fixing the out-of-spec PCIe power draw with an 8-pin, or possibly 8+6-pin, to reduce excessive draw from the PCIe slot, then AMD shouldn't even need to come out with a statement about a driver fix. Limiting the card's power even a little affects a lot of things. @HD64G, why not bench your card before limiting it, then apply the 10% reduction via tuning software and run the same test again? We want to see whether limiting power really doesn't reduce the card's performance, as you claimed.


----------



## RejZoR (Jul 2, 2016)

qubit said:


> Reading AMD's statement cynically, one could say that they deliberately released the cards like this so they'd look good/better in the benchmarks, because once the power use is brought down the performance is gonna be significantly lower and make their value for money much lower. Be interesting to see exactly how much the performance hit will be.
> 
> @W1zzard are we gonna see a quick retest review with a handful of benchmarks with the revised driver to check this out?



Except people actually report higher performance when restricting its power...


----------



## GC_PaNzerFIN (Jul 2, 2016)

RejZoR said:


> Except people actually report higher performance when restricting its power...


Reducing the operating voltage MAY increase performance in thermally limited situations. Decreasing the total power limit without changing anything else is going to have the opposite effect. 
Undervolting MAY work on some GPUs, but due to the obvious negative effect on stability (there is a reason they set the VID where it is), it is too risky to do across all cards.


----------



## cadaveca (Jul 2, 2016)

I'm wondering why no one thought to consider that the BIOS might be the problem, and that AMD simply gave all cards a BIOS that allowed maximum power draw, over-stepping the driver-based tools to increase that. I found it interesting that power draw was high, and no OC was possible using the driver-based tools to give the GPU more power, and when you put those two together, and then consider the ASUS and MSI GPUs recently reviewed, you get a potential BIOS problem.

Perhaps AMD gave the card a BIOS that allowed it to exceed PCIe spec because it wanted it to be reviewed in the best light, and knew reviewers sometimes do not investigate OC? Given how their clocks are "managed" compared to NVidia's Turbo, this actually seems like the most reasonable explanation for what happened. Not every site has the capability to accurately measure power consumption for PCIe devices, so many sites wouldn't even be able to test such an issue.


----------



## Alex Rubio (Jul 2, 2016)

chinmi said:


> Hahaha.... Dead on arrival... People have more and more reason to wait and buy the 1060 now...


Get outta here, you fanboy!
As if the GTX 970 or 960 is clean. This only happens when you overclock. Try to do some research before commenting; you sound like an idiot.
This is just some Reddit fanboy making a big deal out of nothing. I didn't hear any crying when NVIDIA came out with the 960 or 970.


----------



## LiabilityMan (Jul 2, 2016)

Recus said:


> RIP AMD
> 
> http://seekingalpha.com/article/3985508-amds-polaris-revealed-overhyped-disaster



Mark Hibben is a joke. Every single one of his articles is clickbait BS made purely to drive down AMD stock and pump up NVIDIA, which he personally holds long positions in.


----------



## Alex Rubio (Jul 2, 2016)

GC_PaNzerFIN said:


> None of those breaks the average spec of 75W (actually, for 12V its even less), while RX 480 breaks average spec by significant amount. Its not short bursts only. Huge difference.


Gotta do a bit more research. 
Here's a video I found to be accurate.


----------



## Maddox (Jul 2, 2016)

Alex Rubio said:


> Get outta here you fanboy!
> As if the gtx 970 or 960 is clean. this is only when you over clock. Try to do some research before commenting you sound like a idiot.
> This is just some redit fanboy making a big deal of nothing. I didn't hear a cry when Nvidia came out with the 960 or 970.



Well, for whatever it's worth, I have a habit of checking out user benchmark whenever new hardware is launched, just to check out the performance and popularity of CPU's and GPU's.

Around midnight of launch day, going into June 30th, it ranked 74th in market share.  Today it is tied with the R9 290 at 17th and obviously climbing.  I'm betting it'll, in terms of sales, trade blows with the GTX 1070 from here on out and probably slightly surpass it at some point once the 4GB versions hit the market.

Not exactly DOA by my reckoning.

I just think NVIDIA fanboys (who are up there with Apple and Nintendo fans, imo) were spring-loaded to fire at any sudden movement after the 970 fiasco.


----------



## qubit (Jul 2, 2016)

RejZoR said:


> Except people actually report higher performance when restricting its power...


That makes no sense. Perhaps there's something else going on here? Without details one can't say what the true situation is, but simply lowering power consumption isn't gonna increase performance.



GC_PaNzerFIN said:


> *Reducing operating voltage MAY increase performance in thermally limited situations.* Decreasing total power limit without changing anything is going to have opposite effect.
> Reducing operating voltage MAY work on some GPUs, but due to obvious negative effect on stability (there is a reason why they put the VID it has now) it is too risky to do on all cards.


This sounds more plausible.


----------



## RejZoR (Jul 2, 2016)

If it's thermal throttling, it can throttle more than it would if it were power limited... this would probably depend on the quality of case cooling...


----------



## Daiwa Zou (Jul 2, 2016)

Recus said:


> RIP AMD
> 
> http://seekingalpha.com/article/3985508-amds-polaris-revealed-overhyped-disaster
> 
> ...



Lol. "RIP AMD", and here you are referencing one of the most biased sources on seekingalpha.com to prove your point. Not sure if you know Hibben is known for his anti-AMD mentality.

This says a lot about how biased you are as well.


----------



## KainXS (Jul 2, 2016)

I've heard of cases where the cards are power throttling at the factory overclock, and some are doing it at stock too, so increasing the power limit increases performance because the card is drawing too much power. Some lower-binned ones run hotter and require an undervolt, or a lowered power limit, to reduce temps; that increases performance since the boost clock won't thermal throttle down. It's pretty confusing, really.


----------



## xorbe (Jul 2, 2016)

R-T-B said:


> If you have a junky motherboard, maybe, but I doubt any mainstream brands don't build in A LITTLE reserve.



I think it has to do more with the connector limitation of the little pins in the slot.


----------



## silentbogo (Jul 2, 2016)

Bansaku said:


> Why is this an issue? Both the GTX 750 Ti and the GTX 950 both drew significantly more than 75W from the PCI bus! In short bursts, there is absolutely no problem with brief power spikes. Shitty deal if you have an older/cheap mobo and this is an issue; Seriously, quality motherboards are dirt cheap!


When did that happen? My MSI GTX 750 Ti, overclocked to 1300/1500, never exceeded the 65W mark during stress testing (according to sensor readouts). Under normal conditions it stays below 58W.
The GTX 950 is a 90W card, so under normal circumstances the only theoretical way it can overdraw the PCI-E slot is if it draws no power from the 6-pin connector at all (or if you own one of those newer bus-powered cards from ASUS or EVGA).
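The GTX 950 reasoning above amounts to a one-line subtraction (a sketch, taking the 90W board power figure at face value):

```python
# Sketch of the reasoning above: a 90 W card can only overdraw the
# 75 W PCIe slot budget if the 6-pin supplies less than the remainder.
TOTAL_BOARD_POWER_W = 90
SLOT_LIMIT_W = 75

# Minimum the 6-pin must carry for the slot to stay within spec:
min_six_pin_w = TOTAL_BOARD_POWER_W - SLOT_LIMIT_W   # 15 W

print(min_six_pin_w)
```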

When it comes to motherboards, you'll be surprised how many shitty products hit the market nowadays. Just because it's high-end does not mean that it won't break.


----------



## newtekie1 (Jul 2, 2016)

McSteel said:


> Even the aforementioned Diablotek could handle powering one of these, paired with a latest-gen Skylake CPU, a couple of sticks of RAM and some storage. It would all easily fit into a 250W envelope (absolute peak power draw, realistically less than that), which even the worst of the worst PSUs can manage, at least for a while.
> 
> That being said, everyone should have the common sense not to skimp on the PSU. No need to go crazy, a nice $30-or-so PSU from a reputable manufacturer should do fine, as @newtekie1 pointed out.



Sure, but like you pointed out, why skimp on the PSU?  And if you have a latest-gen Skylake, or even a last-gen Haswell or Ivy Bridge, and your PSU doesn't have an 8-pin connector, seriously, go buy a new one before upgrading your graphics card!


----------



## R-T-B (Jul 2, 2016)

xorbe said:


> I think it has to do more with the connector limitation of the little pins in the slot.



No, that can't be it because otherwise connectors to boost the amperes to the slot (as my board has) would make little sense.


----------



## EarthDog (Jul 2, 2016)

Fluffmeister said:


> People here go into meltdown when a fan spins up when it doesn't need to, so this is borderline biblical disaster.


I think I just found my first signature quote for TPU... PRICELESS!!!!!


----------



## GhostRyder (Jul 2, 2016)

newtekie1 said:


> Sure, but like you pointed out, why skimp on the PSU.  And if you have a latest-gen skylake, or even a last gen Haswell, or even an Ivy-Bridge, and your PSU doesn't have an 8-pin connector, seriously go buy a new one before upgrading your graphics card!


Yeah, I mean even a $50 entry-level PSU (EVGA) comes with both an 8-pin and a 6-pin, so I don't see the logic in opting for just the 6-pin.



silentbogo said:


> When did that happen? My MSI GTX750Ti overclocked to 1300 / 1500 never exceeded 65W mark during stress testing (according to sensor readouts). Under normal conditions it stays below 58W.
> GTX950 is a 90W card, so under normal circumstances, the only theoretical way it can overdraw power from PCI-E only if it draws no power from 6-pin connector at all (or if you own one of those newer bus-powered cards from ASUS or EVGA).
> 
> When it comes to motherboards, you'll be surprised how many shitty products hit the market nowadays. Just because it's high-end does not mean that it won't break.


Never checked my 950, but it's the zero-power-connector model from ASUS.  I'd be curious how much it draws under load.


----------



## brunosp (Jul 2, 2016)

nvidia fanboy trolling 








http://wccftech.com/article/radeon-rx-480-reducing-voltage-increasing-efficiency/


----------



## newtekie1 (Jul 2, 2016)

GhostRyder said:


> Yea, I mean even the most entry level PSU is $50 bucks comes with both an 8 and 6 pin (EVGA) so I don't see the logic in not having it and instead opting for the 6 pin.
> 
> Never checked my 950, but its the 0 power connector from Asus.  Would be curious how much it draws under load.




Even the $30 430W EVGA has an 8-pin.

And the ASUS GTX 950 with no power connector pulls a maximum of 76W from the slot, so they managed to keep it right at the limit.


----------



## HTC (Jul 2, 2016)

I found this video to be quite informative:


----------



## GhostRyder (Jul 2, 2016)

newtekie1 said:


> Even the $30 430w eVGA has an 8-pin.
> 
> And the ASUS gtx950 with no power connector pulls a maximum of 76w from the slot.  So they managed to keep it right at the limit.


Yeah, even the Corsair CX430 has an 8-pin.  In short, putting only a 6-pin on it was a foolish decision.  The sad thing is that this seems to be a detriment for no logical reason; we're already hearing of (supposedly) other vendors hitting 1500MHz+ on cards with different power systems.  If they wanted to make one low-end, then limit the 4GB version, and with the 8GB give us the power.  I am so lost by the reasoning behind this...

I'm just curious whether you can go a little past it with overclocking.  Haven't tried yet... excuse me while I see what I can do with it, lol.


----------



## $ReaPeR$ (Jul 2, 2016)

sith'ari said:


> -Buddy, my system remains ancient for a certain reason. If I wanted, I could replace it in a heartbeat! It's not a question of money!
> -P.S. I won't hide my feelings about AMD. I clearly remember the period before the Fury X's release. They had been brainwashing us for months about the tremendous capabilities of HBM memory, making us believe they would release some kind of rocket that would destroy all competition!! And when they finally released this "rocket", it struggled to surpass a reference 980 Ti!! EDIT: That was the *LAST* time I took them seriously!!



It's not AMD's fault if people cannot think critically and buy into the hype. If you cannot grasp the simple fact that the VRAM is secondary to the GPU concerning total performance, then I think you are in the wrong forum, buddy.



R-T-B said:


> If you have a junky motherboard, maybe, but I doubt any mainstream brands don't build in A LITTLE reserve.



i know mate.. but people .. well people are people..


----------



## R-T-B (Jul 2, 2016)

$ReaPeR$ said:


> i know mate.. but people .. well people are people..



I hear that.  I mean somehow, China sells shit like this to this very day:

https://www.techpowerup.com/forums/threads/x58-unknown-motherboard.223785/


----------



## HTC (Jul 2, 2016)

GhostRyder said:


> Yea, even the Corsair CX 430 has an 8 pin.  In short putting a 6 pin only was a foolish decision.  The whole sad issue is that this seems to only be a detriment for no logical reason.  I mean were already hearing (Supposedly) other vendors hitting 1500mhz+ on cards with different power system.  If they wanted to make one low end, then limit it with the 4gb, and with the 8gb give use the power.  I am so lost by the reasoning behind this...
> 
> I am just curious if you are able to go a little past it with overclocking.  Have not tried yet...Excuse me, while I see what I can do with it lol.



From the video I posted just before this reply, I've learned that the problem isn't the card using over 150W: the problem is the PCI-e slot *consistently* supplying over 75W. Had the over-150W usage come from the 6-pin, it wouldn't be such an issue, but when some of it comes from the PCI-e slot, that *can be* dangerous.


----------



## GhostRyder (Jul 2, 2016)

HTC said:


> From the video I posted just before this reply, I've learned that the problem isn't the card using over 150W: the problem is the PCI slot *consistently* supplying over 75W. Had the over-150W usage come from the 6-pin, it wouldn't be such an issue, but when some of it comes from the PCI slot, that *can be* dangerous.


Yeah, that part is also a problem, but what I'm saying is it makes no logical sense not to just put an 8-pin on it and call it a day, or hard-limit it (since they have two versions, do one of each).


----------



## HTC (Jul 2, 2016)

IMHO, AMD has *once again* shot itself in the foot.

They could have made this card a bit slower so that it used around 130W of total power. Its performance would be worse, obviously, but it *shouldn't be* too much of a hit to reach that wattage. Of course, for this to work, *the wattage coming from the PCI-e slot should not exceed 70W*, even when the card is overclocked.

Had they taken this approach, the card would still be a good performer and none of this power-consumption fiasco would have occurred ... but noooo ... they just *HAD* to shoot themselves *AGAIN* ...


----------



## GC_PaNzerFIN (Jul 2, 2016)

Putting an 8-pin power connector on it would have allowed them to feed the GPU VRM solely from the connector, leaving the PCI-E slot for less power-hungry items like memory, meeting all PCI-SIG specs. Really, there is no technical or even cost reason why they couldn't have done this once they realized what the TDP was going to be. 
The only reason that comes to my mind is that the marketing/PR department had long before said there would be a 6-pin, and the engineering team facepalmed while doing the only thing they could: using the PCI-E bus to deliver power to the GPU as well.


----------



## $ReaPeR$ (Jul 2, 2016)

R-T-B said:


> I hear that.  I mean somehow, China sells shit like this to this very day:
> 
> https://www.techpowerup.com/forums/threads/x58-unknown-motherboard.223785/



Holy shit!! But I've seen worse... much worse. Lack of solid-state caps for the CPU worse. What can you do, people are ignorant and there are so many assholes who will exploit that ignorance.
I was wondering something, though: could AMD use the drivers to divert power draw from the PCIe slot to the 6-pin? Is that even possible?


----------



## TheoneandonlyMrK (Jul 2, 2016)

Mine's clocked to a meager 1300, stable, at the minute. It's not quite as good as my 390 yet, but I think driver updates will help given a few months, and two of these make more sense than 2x 390. Folding power is up a bit too. Having already sold my 390, it's definitely a sidegrade all in all, though a water block will help, I'm sure. As for power over PCIe, I have a feeling it will be fine.


----------



## arbiter (Jul 2, 2016)

HTC said:


> By watching the video i posted just before this reply, i've learned that the problem isn't the card using over 150W: the problem is the PCI-e *consistently* using over 75W. Had the over-150W-usage come from the 6 pin wouldn't be such an issue but when some of it comes from the PCI-e slot, that *can be* dangerous.


The 6-pin/8-pin can do well over what the spec says of them; case in point, the reference 295X2: AMD had that card pulling anywhere from 240 to almost 300 watts per plug. That card had a TDP of 500 watts but was noted to draw as much as 600. Traces on a motherboard are much smaller than the wire of a PCI-e power cable, so cheap and even mid-range boards could be affected. The reason I say mid-range boards is that they will be built a little cheaper, so the extra power draw over, say, 6 months or a year could eventually make them fail.


----------



## jabbadap (Jul 2, 2016)

HTC said:


> IMHO, AMD has *once again* shot itself in the foot.
> 
> They could have made this card a bit worse so that it used around 130W or so of total power. Its performance would be worse, obviously, but it *shouldn't be* too much of a hit in order to reach that wattage. Ofc, for this to work, *the wattage coming from the PCI-e slot should not exceed 70W*, even when the card is overclocked.
> 
> Had they taken this approach, the card would still be a good performer and none of this power-consumption fiasco would have occurred ... but noooo ... they just *HAD* to shoot themselves *AGAIN* ...



Sad part is, they shouldn't even have to make it worse. They could have restricted PCIe slot power to below the spec and taken the extra power from the 6-pin connector (best practice; this way you don't overdraw the PCIe slot even while overclocking), or slapped that damn 8-pin connector on it.


----------



## $ReaPeR$ (Jul 2, 2016)

GC_PaNzerFIN said:


> Putting an 8-pin power connector on would have allowed them to feed the GPU VREG solely from it, leaving the PCI-e slot for less power-hungry items like memory, meeting all PCI-SIG specs. Really, there is no technical or even cost reason why they couldn't have done this once they realized what the TDP was going to be.
> The only reason that comes to my mind is that the marketing/PR department had long before said there would be a 6-pin, and the engineering team facepalmed while doing the only thing they could: using the PCI-E bus to deliver power to the GPU as well.


that is probably the reason.. marketing people shouldnt even exist, the only product they make is bullshit, and we already have cows for that..


----------



## $ReaPeR$ (Jul 2, 2016)

jabbadap said:


> Sad part is, they shouldn't even have to make it worse. They could have restricted PCIe slot power to below the spec and taken the extra power from the 6-pin connector (best practice; this way you don't overdraw the PCIe slot even while overclocking), or slapped that damn 8-pin connector on it.


could they do that from the driver?


----------



## HTC (Jul 2, 2016)

arbiter said:


> The 6-pin/8-pin can do well over what the spec says of them; case in point, the reference 295X2: AMD had that card pulling anywhere from 240 to almost 300 watts per plug. That card had a TDP of 500 watts but was noted to draw as much as 600. Traces on a motherboard are much smaller than the wire of a PCI-e power cable, so cheap and even mid-range boards could be affected. The reason I say mid-range boards is that they will be built a little cheaper, so the extra power draw over, say, 6 months or a year could eventually make them fail.



Drawing high amounts of power isn't necessarily bad, UNLESS that extra power comes from the PCI-e slot on a *consistent* basis: if it has high spikes but keeps a "within tolerance" average, it *should* be OK, depending on how high those spikes actually are.


----------



## john_ (Jul 2, 2016)

cadaveca said:


> Perhaps AMD gave the card a BIOS that allowed it to exceed PCIe spec because it wanted it to be reviewed in the best light, and knew reviewers sometimes do not investigate OC? Given how their clocks are "managed" compared to NVidia's Turbo, this actually seems like the most reasonable explanation for what happened. Not every site has the capability to accurately measure power consumption for PCIe devices, so many sites wouldn't even be able to test such an issue.


AMD should know by now that even if one site finds something that looks like a problem, all hell will break loose. If they were thinking like that, then maybe AMD's engineers live under a rock and have no contact with the internet and the real world. Like Nvidia's engineers, who were also living under a rock and never noticed that one of their company's products was selling for over 6 months with wrong specs. Or someone at AMD is a moron. Probably he also insists on giving exclusive interviews to unfriendly sites.


----------



## arbiter (Jul 2, 2016)

john_ said:


> AMD should know by now that even if one site finds something that looks like a problem, all hell will break loose. If they were thinking like that, then maybe AMD's engineers live under a rock and have no contact with the internet and the real world. Like Nvidia's engineers, who were also living under a rock and never noticed that one of their company's products was selling for over 6 months with wrong specs. Or someone at AMD is a moron. Probably he also insists on giving exclusive interviews to unfriendly sites.


A theory that was floated is that the 480 was clocked at a lower MHz, but the launch of the Pascal 1070/1080 changed what clocks the 480 was set to run at, which changed the power draw. It's a plausible idea, and AMD didn't have time to test it, which they should have.

Technically, the 970 wasn't selling with the wrong specs. It HAS 4GB of memory no matter how much people say it's only 3.5: there are 4GB there, so the specs were correct.


----------



## john_ (Jul 2, 2016)

newtekie1 said:


> And the ASUS gtx950 with no power connector pulls a maximum of 76w from the slot. So they managed to keep it right at the limit.


Overclock it and you probably go to 85-90W.

Custom RX 480 cards will come and this matter will be forgotten fast. It will be only one of the things that fanboys will be remembering in fanboy wars. "RX 480 was a fire hazard". "The same can be said for GTX 570(I think)" etc.

What I think we should learn here is that when overclocking a card that is at its TDP limit from the factory (GTX 950 with no power connector, R9 270X with only one 6-pin), we are not just stressing the card, we are probably stressing the motherboard. I was in total darkness until now. Was I the only one? I wonder how many people out there get a GTX 950 without a power connector and overclock it because
"It doesn't need a power connector, so it must have some really top quality GPU in there that probably overclocks better than those used in cards that need an extra power connector".




arbiter said:


> A theory that was floated is that the 480 was clocked at a lower MHz, but the launch of the Pascal 1070/1080 changed what clocks the 480 was set to run at, which changed the power draw. It's a plausible idea, and AMD didn't have time to test it, which they should have.
> 
> Technically, the 970 wasn't selling with the wrong specs. It HAS 4GB of memory no matter how much people say it's only 3.5: there are 4GB there, so the specs were correct.


The real competition for AMD, the bar they had to pass, was the GTX 970, not the 1070 or 1080. They needed a card faster than the GTX 970 that was at the same time within the 150W TDP limit. Lowering the clocks was probably enough to lose some benchmarks. So they decided to go over the TDP limits, probably figuring that if there were any incidents, they would be few. Wrong thinking.

Technically if my EVO 840 was 100GB SSD and 20GB HDD, I wouldn't say "That's OK, I always have 20GBs free", or "That's OK, 100GBs+20GBs it's 120GBs".

I don't give any excuses to AMD, and no one should. The world will be a better place, and products of better quality, if people also stop giving excuses to Nvidia for its mistakes/lies.

PS: Also fewer ROPs, less cache, less bandwidth. You keep forgetting those.


----------



## arbiter (Jul 3, 2016)

HTC said:


> I found this video to be quite informative:


If the people who claimed the "960 uses 200 watts" or that the 750 Ti does this would watch the first 10 minutes of this video, which goes through and explains in plain and easy terms what you see in tomshardware's graph, they would understand things a bit more.


----------



## newtekie1 (Jul 3, 2016)

john_ said:


> Overclock it and you probably go to 85-90W.
> 
> Custom RX 480 cards will come and this matter will be forgotten fast. It will be only one of the things that fanboys will be remembering in fanboy wars. "RX 480 was a fire hazard". "The same can be said for GTX 570(I think)" etc.
> 
> ...



Except that isn't how it works on modern cards anymore.  They have power limits in place to make sure they don't go over their power target.  The power limit on the GTX950 with no power connector was 75w.  You can up the clocks all you want, but GPU Boost will make them drop back down to keep within the 75w power limit.  That is why tests like furmark don't give stupid high power numbers anymore.  So overclocking without raising the power limit would still give ~75w.
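The limiter behavior described above can be sketched as a toy loop. The numbers and the linear power model here are invented for illustration; the real GPU Boost algorithm is far more involved (temperature, voltage bins, per-workload telemetry) and this is in no way NVIDIA's actual code.

```python
# Toy sketch of a GPU-Boost-style governor that keeps a card at its
# power limit regardless of the requested clock. All figures invented.

POWER_LIMIT_W = 75.0

def power_at(clock_mhz):
    # assume power scales roughly linearly with clock in this toy model
    return clock_mhz * 0.06  # 1250 MHz -> 75 W

def governed_clock(requested_mhz):
    """Drop the clock until the modeled power fits under the limit."""
    clock = requested_mhz
    while power_at(clock) > POWER_LIMIT_W:
        clock -= 13  # step down one (hypothetical) boost bin
    return clock

print(governed_clock(1200))  # already under the limit -> stays at 1200
print(governed_clock(1400))  # clamped back down toward ~1250
```

The point of the post stands in the sketch: asking for 1400 MHz doesn't get you 1400 MHz worth of power draw, because the governor walks the clock back under the cap.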



john_ said:


> Technically if my EVO 840 was 100GB SSD and 20GB HDD, I wouldn't say "That's OK, I always have 20GBs free", or "That's OK, 100GBs+20GBs it's 120GBs".



Interesting analogy, but somewhat fitting.  In fact, there are MLC SSDs that perform crazy good, almost at SLC levels until you have them filled up a certain amount, then the performance starts to drop off.  The reason being that they run all the MLC flash in SLC mode until the extra space is needed, then it switches to MLC mode.  But the benchmarks in the review sure look good.


----------



## GC_PaNzerFIN (Jul 3, 2016)

https://bitcointalk.org/index.php?topic=1433925.msg15438988#msg15438988






Hmm, has anyone run any GPGPU compute power measurements on an RX 480? See above: it's claimed that 3x underclocked RX 480s plus coin mining killed the board.


----------



## INSTG8R (Jul 3, 2016)

GC_PaNzerFIN said:


> https://bitcointalk.org/index.php?topic=1433925.msg15438988#msg15438988
> 
> 
> 
> ...



MoBo still has IDE Port...Seems Legit. Should be in spec.


----------



## qubit (Jul 3, 2016)

HTC said:


> I found this video to be quite informative:


Just seen the whole video and now I'm even happier that I haven't bought an AMD card since 2008. Significant driver and performance glitches are one thing (and bad enough) but potentially killing the mobo with excess current is a new low. There's no way they couldn't have known about this at the design and testing phase. No, they tried to palm off a substandard product and hoped they wouldn't get caught out. It amounts to a kind of fraud FFS. 

IMO these cards should be pulled from the market until the fix has been applied and tested to be effective.

The way this company is going I'm unlikely to ever buy one of their graphics cards again. No wonder NVIDIA can charge what they like for their cards. At least they work beautifully most of the time.


----------



## GC_PaNzerFIN (Jul 3, 2016)

INSTG8R said:


> MoBo still has IDE Port...Seems Legit. Should be in spec.


It is actually not THAT old a motherboard: the generation previous to Sandy Bridge. I spent a while investigating this, and I have a much more valid question: how could you manage to connect 3 cards to that board, even with ribbon cables?


----------



## ppn (Jul 3, 2016)

The ATX24 is the weak point. It is the equivalent of 2/3 of a 6-pin. It can't compensate for the lack of 3x 6-pin.

Actually you can connect as many cards as this motherboard has PCIe slots: 4 (x16, x4, x1, x1).


----------



## GC_PaNzerFIN (Jul 3, 2016)

ppn said:


> The ATX24 is the weak point. It is the equivalent of 2/3 of a 6-pin. It can't compensate for the lack of 3x 6-pin.
> 
> Actually you can connect as many cards as this motherboard has PCIe slots: 4 (x16, x4, x1, x1).



Based on the connector scheme on AsRock H81 BTC boards, it does seem like people usually use x1 ribbon cables for mining rigs. Indeed possible, then.


----------



## Secoya (Jul 3, 2016)

$ReaPeR$ said:


> yeah mate.. whatever you say..
> 
> on topic: this doesn't seem to be such a major problem, and i love how people have blown it way out of proportion. what do you think happens when you oc a card, geniuses?



Well, what normally happens is that if you have plenty of power source overhead to draw from, you never exceed the safe limit of it.

A 180W TDP GTX 1080 has a 225W power supply.

A 150W TDP GTX 1070 has a 225W power supply.

A 170W TDP RX 480 has a 150W power supply....

See the difference? The RX 480 was exceeding the limit without OCing too, which makes this "oversight" epic in terms of being bad for the consumer.

I'm certain those people who have smoked their motherboards really think this whole thing has been blown out of proportion.

AMD RX 480 NERF incoming btw.
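The comparison above, as quick arithmetic. The TDP and power-delivery figures are the ones quoted in the post (slot budget plus aux connector); the headroom percentages are just derived from them.

```python
# Headroom math behind the TDP-vs-delivery comparison. Figures are the
# ones quoted in the post above (watts).

cards = {
    "GTX 1080": {"tdp": 180, "delivery": 225},  # slot 75W + 8-pin 150W
    "GTX 1070": {"tdp": 150, "delivery": 225},  # slot 75W + 8-pin 150W
    "RX 480":   {"tdp": 170, "delivery": 150},  # slot 75W + 6-pin 75W
}

for name, c in cards.items():
    headroom = c["delivery"] - c["tdp"]
    pct = 100 * headroom / c["delivery"]
    print(f"{name}: {headroom:+d} W headroom ({pct:+.0f}% of delivery)")
```

Only the RX 480 comes out with negative headroom, which is exactly the "exceeding the limit without OCing" point being made.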


----------



## cadaveca (Jul 3, 2016)

john_ said:


> AMD should know by now that even if one site finds something that looks like a problem, all hell will break loose. If they were thinking like that, then maybe AMD's engineers live under a rock and have no contact with the internet and the real world. Like Nvidia's engineers, who were also living under a rock and never noticed that one of their company's products was selling for over 6 months with wrong specs. Or someone at AMD is a moron. Probably he also insists on giving exclusive interviews to unfriendly sites.



It's just simply possible the limit was "unlocked" to be controlled by the driver, and the driver did not work right. Like, why offer power-limit controls in the driver in the first place? It only makes sense to me that the driver would control the power limit and prevent the PCIe spec from being overrun without manual changes and some sort of pop-up disclaimer. I can't fathom any way that a card would exceed PCIe specs under any conditions, so I'm making up reasons why it might be acceptable.


----------



## TheinsanegamerN (Jul 3, 2016)

GC_PaNzerFIN said:


> Reducing operating voltage MAY increase performance in thermally limited situations. Decreasing total power limit without changing anything is going to have opposite effect.
> Reducing operating voltage MAY work on some GPUs, but due to obvious negative effect on stability (there is a reason why they put the VID it has now) it is too risky to do on all cards.


AMD has also been known in the past to put way too much voltage through their parts to increase yields, especially for mid-range, high-demand products.

Take the Llano APUs. The mobile variants pulled 1.3V at their non-turbo clock, and 1.415V for the full boost clock. Seeing those things hit above the base clock was like seeing a unicorn.

Lo and behold, AMD used super relaxed settings to try and boost yields. Most, although not all, APUs could be undervolted. And I don't mean the -50mV that you get out of a mobile i7. You could typically take -350mV off the core while running at a much higher speed. For instance, my A6-3400M could do 2.1GHz at 1.0375V, compared to 1.4GHz at 1.3V at stock. I could hit the 2.3GHz boost clock with 1.1V, compared with the 1.415V that the stock boost needed. And this was common: a huge number of Llano chips acted this way, with only the rare chip actually needing that much voltage to stay stable.

So it wouldn't surprise me if AMD could undervolt most 480s without difficulty. How they would do that in a driver is beyond me, but the headroom may be there.
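The Llano numbers above can be sanity-checked with the usual CMOS rule of thumb: dynamic power scales roughly with frequency times voltage squared. This ignores leakage entirely, so treat the result as a rough estimate, not a measurement.

```python
# Back-of-envelope dynamic power scaling for the undervolting argument.
# Model: P ~ f * V^2 (ignores leakage and other static power).

def relative_power(f1, v1, f2, v2):
    """Power of operating point (f2, v2) relative to (f1, v1)."""
    return (f2 / f1) * (v2 / v1) ** 2

# A6-3400M stock: 1.4 GHz @ 1.30 V; undervolted: 2.1 GHz @ 1.0375 V
r = relative_power(1.4, 1.30, 2.1, 1.0375)
print(f"{r:.2f}x power for 1.5x the clock")  # ~0.96x: faster AND cooler
```

That is, by this crude model the undervolted chip runs 50% faster for slightly *less* dynamic power, which is why the yield-driven overvolting claim is plausible.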


----------



## Dippyskoodlez (Jul 3, 2016)

If this is a driver fixable issue, what about non official driver users?

That is NOT a good solution.


----------



## RejZoR (Jul 3, 2016)

What "non official" driver users? There is just one driver. The official one.

Like cadaveca said, I think it was a driver cockup as well from the beginning, and that's also one of the reasons why the cards actually perform kinda badly: they go over the limit when they shouldn't be doing that. They were tested and tuned for stock operation, including the fan profile. The card heats up more than it should for the factory fan profile, making it thermal throttle as well as hit the power limit.

Let's just wait for the damn promised fix and then evaluate it. Damn, AMD makes a statement, and instead of people acknowledging it and waiting for the fix, they keep on dramatizing about it. Why do we never see that for NVIDIA? Everyone bunch of fanboys and NVIDIA stock owners? Apparently...


----------



## McSteel (Jul 3, 2016)

HTC said:


> I found this video to be quite informative:



As I have been saying... If AMD had been smart and made the card overdraw from the 6-pin PCI-E and NOT the MB slot, no problem at all, let it sip current.

The 5 +12V pins on the card and in the slot are simply not meant to carry that much.

This can only be fixed by having the VRM input completely split, so that PCI-E slot power and PCI-E external power are separated. Meaning, if there are 6 power phases, feed one, maybe two of them from the slot, and the rest from the external connector. This is *the only way* to limit slot consumption.
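The proposed split, as arithmetic. Six phases and the ~165W board power are figures that come up in this thread; the 2-vs-4 assignment below is one hypothetical layout for illustration, not the RX 480's actual PCB.

```python
# Phase-split arithmetic for the fix described above. All figures are
# illustrative assumptions, not the real board layout.

PHASES = 6
TOTAL_DRAW = 165  # ballpark measured board power, watts
per_phase = TOTAL_DRAW / PHASES  # 27.5 W each if evenly loaded

# Feed 2 phases from the slot, 4 from the external connector:
slot_w = 2 * per_phase       # 55 W  -> safely under the 75 W slot limit
connector_w = 4 * per_phase  # 110 W -> overdraws the 6-pin, but cables cope
print(slot_w, connector_w)
```

With that split the slot never sees more than two phases' worth of load, which is the whole argument: push the overdraw onto the cable, which tolerates it, instead of the slot contacts, which don't.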


----------



## RejZoR (Jul 3, 2016)

Well, or limiting it to actual 150W as they specified which is what they'll most likely do.


----------



## BiggieShady (Jul 3, 2016)

RejZoR said:


> Everyone bunch of fanboys and NVIDIA stock owners?


By my calculation you'd have to invest 6.11 million dollars into NVIDIA to get yearly dividends of 70k USD, so you could quit your job and spend all your time on forums deviously trying to raise the NVIDIA stock price.


----------



## RejZoR (Jul 3, 2016)

Trust me, people do this sort of stuff for a lot less...


----------



## john_ (Jul 3, 2016)

newtekie1 said:


> Except that isn't how it works on modern cards anymore.  They have power limits in place to make sure they don't go over their power target.  The power limit on the GTX950 with no power connector was 75w.  You can up the clocks all you want, but GPU Boost will make them drop back down to keep within the 75w power limit.  That is why tests like furmark don't give stupid high power numbers anymore.  So overclocking without raising the power limit would still give ~75w.


In that review the average power was 74W. After overclocking the GPU to over 1400MHz and the memory to over 2000MHz, the results were close to 20% extra performance. I think I am going to doubt that someone gets 20% extra performance and stays under the 75W limit. Probably the card goes to 90W (20% extra power for 20% extra performance), if not more, considering that power consumption usually climbs faster than performance. If they were getting 1-3% extra performance, I would have agreed with you.



> Interesting analogy, but somewhat fitting.  In fact, there are MLC SSDs that perform crazy good, almost at SLC levels until you have them filled up a certain amount, then the performance starts to drop off.  The reason being that they run all the MLC flash in SLC mode until the extra space is needed, then it switches to MLC mode.  But the benchmarks in the review sure look good.



Are those SSDs advertised as SLC SSDs? I believe not. Well, if they were Nvidia products, they probably would be. And people would be happy to convince themselves that being MLC while performing like SLC made them equal to SLCs. And anyone saying otherwise would be a stupid fanboy who hates Nvidia and doesn't acknowledge Nvidia's superior engineering. "It is a good design".

Also SLC vs MLC is not just a performance difference. If I am not mistaken, SLCs are considered to have better longevity. The same applies to the 970. It is not just those slow 500MBs. Also less cache, fewer ROPs, less memory bandwidth. The specs were completely wrong, and we shouldn't be giving any excuses to companies.



qubit said:


> Just seen the whole video and now I'm even happier that I haven't bought an AMD card since 2008. Significant driver and performance glitches are one thing (and bad enough) but potentially killing the mobo with excess current is a new low. There's no way they couldn't have known about this at the design and testing phase. No, they tried to palm off a substandard product and hoped they wouldn't get caught out. It amounts to a kind of fraud FFS.
> 
> IMO these cards should be pulled from the market until the fix has been applied and tested to be effective.
> 
> The way this company is going I'm unlikely to ever buy one of their graphics cards again. No wonder NVIDIA can charge what they like for their cards. At least they work beautifully most of the time.


 Google Bumpgate + Nvidia. That's a low that AMD probably will never reach. Also, I bet you haven't downloaded a single Nvidia driver in the last 12 months, considering that you still talk about drivers. Well, I think McCoy's words on this argument would have been "It's dead, Jim".



McSteel said:


> As I have been saying... If AMD had been smart and made the card overdraw from the 6-pin PCI-E and NOT the MB slot, no problem at all, let it sip current.


 I guess they thought that there are just too many "600W PSUs" costing $20 out there.

What AMD should have done was lock the GPU at a specific frequency and say "Sorry guys, you will have to buy a custom card if you want overclocking". Or they could have offered only a 4GB reference version at $199 and let the AIBs make the 8GB cards. 4GB less GDDR5 on board could also help lower power consumption.


----------



## BiggieShady (Jul 3, 2016)

So why does this happen ... stumbled upon the twitch video from @buildzoid (kudos dude); at *53:40* starts the "gotcha" moment where the insane board power delivery design is explained ... an out-of-spec 6-pin connector used as an 8-pin, and yet half of the vcore VRMs are connected to the PCIe power pins.

The video: https://www.twitch.tv/buildzoid/v/75850933?t=53m40s


----------



## $ReaPeR$ (Jul 3, 2016)

Secoya said:


> Well, what normally happens is that if you have plenty of power source overhead to draw from, you never exceed the safe limit of it.
> 
> A 180W TDP GTX 1080 has a 225W power supply.
> 
> ...



how many are affected? find me the number. we are talking about 16 watts over the spec, 16 ffs.


----------



## RejZoR (Jul 3, 2016)

Sure. Let's go by the book on the specs. So, where do the Titan Z and R9 295X2 get power from then, if an 8-pin is specified up to 150W and each has 2 of them? That's 300W + 75W from the PCIe slot. Where's the remaining 110W+ coming from then? Thin air?

Apparently if the RX 480 drew those extra 16W from the 6-pin all would be pink and fluffy. But the PCIe slot, oh noes, everyone running around losing their shit. Ever thought the fix might involve just that? Limiting power to an actual 150W, or drawing more power from the 6-pin? But no, let's generate even more unnecessary drama. When NVIDIA fucks up, everyone gets defensive to stupendous levels. AMD fucks something up, everyone loses their shit and creates so much drama around it even Venezuelan soap operas look shy in comparison... Por favor!


----------



## zAAm (Jul 3, 2016)

RejZoR said:


> Sure. Let's go by the book on the specs. So, where do the Titan Z and R9 295X2 get power from then, if an 8-pin is specified up to 150W and each has 2 of them? That's 300W + 75W from the PCIe slot. Where's the remaining 110W+ coming from then? Thin air?
> 
> Apparently if the RX 480 drew those extra 16W from the 6-pin all would be pink and fluffy. But the PCIe slot, oh noes, everyone running around losing their shit. Ever thought the fix might involve just that? Limiting power to an actual 150W, or drawing more power from the 6-pin? But no, let's generate even more unnecessary drama. When NVIDIA fucks up, everyone gets defensive to stupendous levels. AMD fucks something up, everyone loses their shit and creates so much drama around it even Venezuelan soap operas look shy in comparison... Por favor!



Actually, drawing extra power from the 6-pin or 8-pin PCI-E connector should be relatively safe. Drawing extra power from the motherboard however is not. If you limit your power to 75W for the motherboard you can draw a lot of "out of spec" power from the PCI-E power connector if your PSU is beefy enough. The connectors and accompanying 18AWG wires should be fine for more than the PCI-E rated power (probably close to double if you look at the actual connector datasheets). However, motherboard power tracks and the small contacts of the PCI-E connector aren't designed with that sort of overhead in mind. So AMD could get away with cheating the spec as long as they don't exceed it on the motherboard side. If they can put out a software fix that only limits the motherboard connected phases to 75W then all should be well without actually reducing the TDP.
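The "cables cope, slot contacts don't" point comes down to per-conductor current. The ratings below are ballpark figures only (heavy-gauge connector pins are commonly rated for several amps each, while the spec's 12V slot budget works out to roughly 1.1A per edge pin); check the actual connector and PCIe datasheets before trusting any of it.

```python
# Rough per-conductor current math behind the cable-vs-slot argument.
# All ratings here are illustrative ballpark assumptions.

def amps(watts, volts=12.0):
    return watts / volts

# 6-pin connector: 75 W over (nominally) two or three 12 V wires
print(amps(75) / 2)    # ~3.1 A per wire: far below an 18AWG/pin rating

# Slot: the 12 V portion of the 75 W budget spread over five edge pins
print(amps(66) / 5)    # ~1.1 A per pin: already at its budget

# The same 15 W of overdraw in each place:
print(amps(75 + 15) / 2)  # ~3.75 A per wire: still comfortable
print(amps(66 + 15) / 5)  # ~1.35 A per pin: well past its budget
```

Same wattage of overdraw, very different margins: that's why the post treats connector overdraw as "relatively safe" and slot overdraw as not.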


----------



## BiggieShady (Jul 3, 2016)

zAAm said:


> If they can put out a software fix that only limits the motherboard connected phases to 75W then all should be well without actually reducing the TDP.


Well, it seems some of the VRM phases are connected directly to the PCIe power pins ... if that's true, a software fix is unlikely to happen short of an undervolt and underclock ... see my post above.


----------



## john_ (Jul 3, 2016)

zAAm said:


> Actually, drawing extra power from the 6-pin or 8-pin PCI-E connector should be relatively safe.



ACE 600W Black ATX Gaming PC PSU Power Supply 120mm Red | eBay

600W, £17.24  inc. VAT

I think there are more dangerous PSUs out there (in sheer quantity) than dangerous motherboards. And those PSUs will not go to the afterlife alone: they will take other parts of the hardware with them, probably the motherboard too. This is the only excuse I can think of for AMD choosing the PCIe bus over the 6-pin power connector. Of course, I still believe they have no excuse for the whole mess.


----------



## RejZoR (Jul 3, 2016)

But NVIDIA had one (excuse) when they fucked it up on several occasions in the past? Oh boy the double standards...


----------



## buggalugs (Jul 3, 2016)

This is a beat-up. Lots of hardware runs out of spec. Lots of graphics cards run out of spec. There is enough headroom built into the spec to handle this.

It's kind of strange how people are going crazy about this, but those same people overclock the crap out of their parts and run out of spec. How do you think we can overclock our shit without our computers blowing up?? Because the hardware is designed to handle more than the spec allows.

This is a non-issue for 99.9% of people unless you have a crappy cheap motherboard from 2005. Any graphics card upgrade can stress an old motherboard, same with other parts like the PSU. A motherboard is most at risk of dying after a major hardware upgrade when it is old.

AMD are releasing a driver update anyway, just to shut people up. If I had a 480 I wouldn't want the driver update; I'd prefer they leave it alone.


----------



## HTC (Jul 3, 2016)

jabbadap said:


> Sad part is, they shouldn't even have to make it worse. They could have restricted PCIe slot power to below the spec and taken the extra power from the 6-pin connector (best practice; this way you don't overdraw the PCIe slot even while overclocking), or slapped that damn 8-pin connector on it.





$ReaPeR$ said:


> how many are affected? find me the number. we are talking about 16 watts over the spec, 16 ffs.



How many of these exceed the PCI-e slot's portion of the TDP? I'm not talking about the whole wattage: only the PCI-e slot's part of it.

Ordinarily, you'd look @ the PSU for higher wattage needs but, with this card, you can have a ... say ... 1000W PSU and STILL have problems, simply because the PCI-e slot is drawing more power than it should. This could end up with the motherboard's PCI-e slot contacts burned because, unlike the 6/8-pin connectors, they are NOT made with overdraw headroom.


----------



## RejZoR (Jul 3, 2016)

I don't remember, but is the actual PCIe slot drawing this much? Sure, it's 166W total, but is it actually from the PCIe slot or is it over the 6-pin? Everyone seems to just assume the 6-pin is absolutely strictly 75W, so it has to be the PCIe slot then. But is it? Who has actually measured it at the PCIe slot? Can't remember any testers who would do this at the moment...


----------



## laszlo (Jul 3, 2016)

the 6-pin connector has pin #2 also delivering 12V, so this 6-pin connector can deliver more than 75W from the PSU, up to 150W as i see it; i don't know if the rx480 PCB has pin 2 physically connected to draw current from it....  

i'm expecting the release of AIB cards with an 8-pin, to be sure that i'll buy a correct card.


----------



## HTC (Jul 3, 2016)

RejZoR said:


> I don't remember, but is the actual PCIe slot drawing this much? Sure, it's 166W total, but is it actually from the PCIe slot or is it over the 6-pin? Everyone seems to just assume the 6-pin is absolutely strictly 75W, so it has to be the PCIe slot then. But is it? Who has actually measured it at the PCIe slot? Can't remember any testers who would do this at the moment...



https://www.techpowerup.com/forums/...-overcurrent-issue.223833/page-5#post-3482710

It's BECAUSE it's drawing more than it should that this is a problem to begin with.

Let me give an example. Look @ 2 cards that draw 180W:

- card A: 70W from PCI-e + 110W from 6-pin connector
- card B: 85W from PCI-e + 95W from 6-pin connector

Both cards are out of spec BUT one can end up burning your motherboard (worst case scenario) while the other should not: can you tell which one?
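The two-card example above can be written as a quick check. The 75W limits are the PCIe spec numbers; labeling connector overdraw "tolerable" reflects the thread's argument about cables versus slot contacts, not an endorsement of running out of spec.

```python
# The card A / card B example as a quick spec check.
SLOT_LIMIT = 75
AUX_LIMIT = 75  # single 6-pin

def diagnose(name, slot_w, aux_w):
    slot_over = slot_w > SLOT_LIMIT
    aux_over = aux_w > AUX_LIMIT
    risk = "slot overdraw: risky" if slot_over else (
        "connector overdraw only: tolerable" if aux_over else "in spec")
    print(f"card {name}: {slot_w}W slot + {aux_w}W 6-pin -> {risk}")
    return slot_over

diagnose("A", 70, 110)  # out of spec overall, but the slot side is fine
diagnose("B", 85, 95)   # the dangerous one: slot over its limit
```

Both cards draw 180W total, but only card B pushes the excess through the slot, which is the answer to the question being posed.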


----------



## TRWOV (Jul 3, 2016)

qubit said:


> Just seen the whole video and now I'm even happier that I haven't bought an AMD card since 2008. Significant driver and performance glitches are one thing (and bad enough) but potentially killing the mobo with excess current is a new low. There's no way they couldn't have known about this at the design and testing phase. No, they tried to palm off a substandard product and hoped they wouldn't get caught out. It amounts to a kind of fraud FFS.
> 
> IMO these cards should be pulled from the market until the fix has been applied and tested to be effective.
> 
> The way this company is going I'm unlikely to ever buy one of their graphics cards again. No wonder NVIDIA can charge what they like for their cards. At least they work beautifully most of the time.



nVidia cards also do that, but somehow AMD always gets the drama for some reason... well, to be fair, the reason in this case is that AMD knew the card would pull >150W at times and should have shipped it with an 8-pin connector, but chose to go with a 6-pin instead for marketing reasons (the worst kind of reasons).


Thankfully the fix is somewhat easy, but it's going to be a two-part fix as far as I can tell from the quoted videos.

1) The fix AMD is going to push via drivers. This will likely set the power limit to 150W and call it a day. No different from setting it yourself in Wattman.

2) A BIOS update to curb the PCIe slot power delivery to 75W and let the rest of the power come from the 6-pin connector. The VRM phases are capable of outputting 40W each, so there should be no issue doing this. 

The second one is trickier, but AMD could potentially release an automatic BIOS flash tool and label it as a beta driver or hotfix or something.


Why did this happen in the first place? I would like to know if all the cards have the same BIOS version on them. AMD says that the card shouldn't be behaving this way, so I have a feeling there are some 480s out there with a 50/50 power-split BIOS.


----------



## RejZoR (Jul 3, 2016)

Because NVIDIA fanboys are more fanboyish than the rest...


----------



## okidna (Jul 3, 2016)

RejZoR said:


> I don't remember, but is the actual PCIe slot drawing this much? Sure, it's 166W total, but is it actually from the PCIe slot or is it over the 6-pin? Everyone seems to just assume the 6-pin is absolutely strictly 75W, so it has to be the PCIe slot then. But is it? Who has actually measured it at the PCIe slot? Can't remember any testers who would do this at the moment...



PCPer. They did a stock analysis, an increased-power-limit analysis, and even debunked an analysis of the GTX 960 STRIX: http://www.pcper.com/reviews/Graphics-Cards/Power-Consumption-Concerns-Radeon-RX-480


----------



## RejZoR (Jul 3, 2016)

Behold, it's "AMD fanboys" analysis when someone has to defend NVIDIA. But when it's the other way around, it was nothing, you know, NVIDIA is working on a driver fix, no need to make drama. But throw AMD in the same scenario and whole internet is losing their shit. Sometimes I'm ashamed for owning a NVIDIA card...


----------



## okidna (Jul 3, 2016)

RejZoR said:


> Behold, it's "AMD fanboys" analysis when someone has to defend NVIDIA. But when it's the other way around, it was nothing, you know, NVIDIA is working on a driver fix, no need to make drama. But throw AMD in the same scenario and whole internet is losing their shit. Sometimes I'm ashamed for owning a NVIDIA card...



Oh sorry, that's my personal opinion. Needed to do a quick edit before the Nekker army comes back to haunt my sleep.


----------



## D007 (Jul 3, 2016)

They both screw up. AMD and Nvidia.. Why does it always have to turn into a competition? lol..
I just think it's shitty, regardless of who did it.. Sounds like the power spec is way out.
Kind of a big deal, seeing as they kept screaming how little power it used..


----------



## cdawall (Jul 3, 2016)

I'm finding my lack of care and concern to be growing with this. It's a simple fix that is already being implemented by AMD.


----------



## HD64G (Jul 3, 2016)

Tsukiyomi91 said:


> If AIB vendors are fixing the "out of spec PCIe limits" with an 8-pin or possibly 8 + 6-pin to reduce excessive power draw from the PCIe slot, then AMD shouldn't even come up with a statement that it will release a driver fix. Limiting the card's power even by a little affects a lot of aspect. @HD64G why not u bench your VGA before limiting it & then u throw a 10% reduction to it via tuning software & then run the same test again? We wanna see if limiting power does not reduce the card's performance, as per what you claimed.



The answer is in this article mate: http://semiaccurate.com/2016/07/01/investigating-thermal-throttling-undervolting-amds-rx-480/


----------



## NDown (Jul 3, 2016)

RejZoR said:


> Behold, it's "AMD fanboys" analysis when someone has to defend NVIDIA. But when it's the other way around, it was nothing, you know, NVIDIA is working on a driver fix, no need to make drama. But throw AMD in the same scenario and whole internet is losing their shit. Sometimes I'm ashamed for owning a NVIDIA card...



Well what do you expect when most of their fanbase are mostly manchild/literal kid

you cant have a good gaming experience if you dont have the GeForce GTX® logo/sticker in your PC afterall :^)

most probably doesnt care about efficiency either, or they are simply too new to remember the HD5xxx vs GTX 4xx series


----------



## cdawall (Jul 3, 2016)

Oh and one more thing: when this driver pops up allowing them to limit PCIe to an actual 75W, everyone does understand the cheap low-quality boards or old worn-out boards are still going to pop, right? This issue isn't going away. Junk was still never made to survive with actual full spec being pulled over PCIe. 

This doesn't even account for the insane number of people who will never update past the driver that came on the DVD


----------



## sith'ari (Jul 3, 2016)

RejZoR said:


> Behold, it's "AMD fanboys" analysis when someone has to defend NVIDIA. But when it's the other way around, it was nothing, you know, NVIDIA is working on a driver fix, no need to make drama. But throw AMD in the same scenario and whole internet is losing their shit. Sometimes I'm ashamed for owning a NVIDIA card...



What are you talking about? What drama? What driver fix? Have you read the link posted by @okidna? ( http://www.pcper.com/reviews/Graphi...s-Radeon-RX-480/Evaluating-ASUS-GTX-960-Strix ).
The GTX 960 was tested, and even while overclocked it was working *excellently* regarding its power delivery (unlike the RX 480).


----------



## okidna (Jul 3, 2016)

cdawall said:


> This doesn't even account the insane number of people who will never update past the driver that came on the DVD



This is an issue I run into very frequently: friends or colleagues complain about poor FPS when playing a new game, not knowing that both AMD and NVIDIA now use a different approach when it comes to new game support.



sith'ari said:


> what are you talking about? what drama? what driver fix?  have you read the link posted by @okidna? ( http://www.pcper.com/reviews/Graphi...s-Radeon-RX-480/Evaluating-ASUS-GTX-960-Strix ).
> The GTX 960, was tested, and even while overclocked it was working *excellent* regarding its power delivery.



No no no, that's my fault, not RejZoR's fault. I wrote "fanboy analysis" in my original post before removing it because it's my personal opinion; I shouldn't have written it in the first place.


----------



## sith'ari (Jul 3, 2016)

okidna said:


> No no no, that's my fault, not RejZor fault. I wrote "fanboy analysis" on my original post before removing it because it's my personal opinion, shouldn't wrote it in the first time.



Oh, apologies to RejZor then !


----------



## alucasa (Jul 3, 2016)

Anyone commit suicide yet? Some should have, judging by the wild reactions here.


----------



## newtekie1 (Jul 3, 2016)

john_ said:


> In that review average power was 74W. After overclocking the GPU at over 1400Mhz and memory at over 2000MHz, the results where close to 20% extra performance. I think I am going to doubt that someone gets a 20% extra performance and stays under the 75W limit. Probably the card goes to 90W(20% extra power for 20% extra performance), if not more considering that usually power consumption goes faster up, compared to performance. If they where getting 1-3% extra performance, I would have agreed with you.



Again, not how it works. GPU Boost is designed to adjust the clock speed to keep the card below the power limit.  So even if you do overclock the GTX950, GPU boost guarantees that the card will stay right around 75w.  The only way to go beyond the 75w would be to adjust the power limit in your overclocking software or by BIOS.  Either way, at that point the user would be aware they are overloading the PCI-E slot.
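newtekie1's point about GPU Boost can be illustrated with a toy controller loop (illustrative only: the step size, clocks, and the linear power model are made-up numbers for the sketch, not NVIDIA's actual algorithm):

```python
# Toy sketch of the GPU Boost behaviour described above: the boost logic
# lowers the clock until estimated power fits under the board power limit.
# All numbers here are invented for illustration.

def boost_clock(base_mhz, max_mhz, power_limit_w, watts_per_mhz):
    """Pick the highest clock (stepping down in ~13 MHz boost bins) whose
    estimated power stays at or below the board power limit."""
    clock = max_mhz
    while clock > base_mhz and clock * watts_per_mhz > power_limit_w:
        clock -= 13  # step down one boost bin
    return clock

# Even with a raised max clock (an "overclock"), a 75 W limit still caps
# the sustained clock unless the power limit itself is raised:
print(boost_clock(1024, 1500, 75.0, 0.055))  # 1357
```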



john_ said:


> Are those SSDs advertised as SLC SSDs? I believe not. Well, if they where Nvidia products, probably they would. And people would be happy to convince themselves that while being MLC, performing as SLC would made them equal to SLCs. And anyone saying the opposite, would have been a stupid fanboy that hates Nvidia and doesn't acknowledge Nvidia's superior engineering. "It is a good design".
> 
> Also SLC vs MLC is not just performance difference. If I am not mistaken SLCs are considered as having better longevity. The same applies to the 970. It is not just those slow 500 MBs. Also less cache, less ROPs, less memory bandwidth. Specs where completely wrong and we shouldn't be giving any excuses to companies.



Most of them do put blurbs in their advertising about this feature, yes.

Also, the advertising for the GTX970 was correct.  They didn't advertise ROP count, they didn't advertise cache size.  So you don't really have a point there.


----------



## ensabrenoir (Jul 3, 2016)

NDown said:


> Well what do you expect when most of their fanbase are mostly manchild/literal kid
> 
> you cant have a good gaming experience if you dont have the GeForce GTX® logo/sticker in your PC afterall :^)
> 
> most probably doesnt care about efficiency either, or they are simply too new to remember the HD5xxx vs GTX 4xx series




.......you dare mock us? *At least get your facts right!!!!!  *It takes a *GeForce GTX *logo and *Racing Stripes*, you barbarian!


----------



## sith'ari (Jul 3, 2016)

ensabrenoir said:


> .......you dare mock us? *At least get your facts right!!!!!  *It take a *GeForce  GTX *logo, and *Racing Stripes *you barbarian!



Probably trolling. If you look at his system he owns a GTX 970, so.......... !

P.S. Anyway, just like so many guys already said, I would like to draw attention to this video: 







Whoever wants to take a quick look: go to 20:50 and watch the estimation of the possible threat the RX 480 poses to our systems!!


----------



## RejZoR (Jul 3, 2016)

Jesus, people think this will just fry the system after 3 gaming sessions. Sure it's not healthy if you use it for 3 years like this, but c'mon, the card was released what, 3 days ago? And everyone is still going absolutely batshit insane over it despite a fix being promised (which will most likely limit the card to an actual 150W, or shift the power delivery so the 6-pin accepts more and the PCIe slot stays within limits). In either case you wouldn't actually be "losing" performance, because 150W was advertised from the beginning. But oh well. It's page 7 already...


----------



## sith'ari (Jul 3, 2016)

RejZoR said:


> Jesus, people think this will just fry the system after 3 gaming sessions. Sure it's not healthy if you use it for 3 years like this, but c'mon, the card was released what, 3 days ago? And everyone still going absolutely batshit insane over it despite fix being promised (which will most likely limit the card to actual 150W or revert the power delivery for 6pin to accept more and PCIe to be within the limits. In either cases you wouldn't actually be "losing" performance because 150W was advertised from the beginning. But oh well. It's page 7 already...



Mate, I haven't paid *200€* for a high-end PSU, and *400€* for a high-end UPS, only to let an RX 480 endanger my system!!
But, hey, with your money you can do what you want.


----------



## ensabrenoir (Jul 3, 2016)

sith'ari said:


> Probably trolling. If you look at his system he owns a GTX 970, so.......... !
> 
> P.S. Anyway, just like so many guys already said, i would like to emphasize on this video:
> 
> ...



...so am I


----------



## RejZoR (Jul 3, 2016)

sith'ari said:


> Mate, i haven't paid *200€* on a high-end PSU, and *400€* for also a high-end UPS, only to let an RX 480  to endanger my system!!
> But , hey, with your money you can do what you want.



[Drama Intensifies]

You don't even have RX480 and you're making it like it has already fried your PCIe circuitry...


----------



## john_ (Jul 3, 2016)

newtekie1 said:


> Again, not how it works. GPU Boost is designed to adjust the clock speed to keep the card below the power limit.  So even if you do overclock the GTX950, GPU boost guarantees that the card will stay right around 75w.  The only way to go beyond the 75w would be to adjust the power limit in your overclocking software or by BIOS.  Either way, at that point the user would be aware they are overloading the PCI-E slot.


 You are avoiding answering the question here. How can you have 100% performance at 74W and then get 120% performance and remain at 74W? Simple answer: you can't. 



> Most of them do put blurbs in their advertising about this feature, yes.
> 
> Also, the advertising for the GTX970 was correct.  They didn't advertise ROP count, they didn't advertise cache size.  So you don't really have a point there.


 None of them advertise it as SLC. If you have any example of an MLC SSD that says "it uses SLC", you are free to show it.


In the end I feel like the stupid little fool, trying to have an honest conversation with people who would commit suicide before posting anything questionable about Nvidia.


----------



## xorbe (Jul 3, 2016)

RejZoR said:


> [Drama Intensifies]
> 
> You don't even have RX480 and you're making it like it has already fried your PCIe circuitry...



A couple websites, one dusty motherboard, and everyone's PCs are on fire!   What happened to the poster claiming the RX480 was sustaining 254 watts?  Maybe that was at [H].

The one long-time poster I've seen with RX480 at [H] has 2 cards and stress tested them for hours without issue in one PC.


----------



## sith'ari (Jul 3, 2016)

> RejZoR said:
> [Drama Intensifies]
> You don't even have RX480 and you're making it like it has already fried your PCIe circuitry...



Since I have such expensive protection equipment, that means I hate taking risks........AT ALL !
( P.S. Of course you are correct. I don't own an RX 480, and for the last few years I've been nowhere near interested in buying AMD's GPUs. Already explained myself at post *#43* of this thread. If you like, take a look at it )


----------



## zAAm (Jul 3, 2016)

RejZoR said:


> Jesus, people think this will just fry the system after 3 gaming sessions. Sure it's not healthy if you use it for 3 years like this, but c'mon, the card was released what, 3 days ago? And everyone still going absolutely batshit insane over it despite fix being promised (which will most likely limit the card to actual 150W or revert the power delivery for 6pin to accept more and PCIe to be within the limits. In either cases you wouldn't actually be "losing" performance because 150W was advertised from the beginning. But oh well. It's page 7 already...



Sure it won't fry your system after 3 gaming sessions, but then again, VW's diesel engines won't destroy the planet in 3 months either  It isn't a big issue as long as they actually fix it, but we'll need to see if they can actually limit single phases via software control and if they cannot, how a global TDP limitation will affect the value proposition of the card in terms of potential reduced performance...


----------



## RejZoR (Jul 3, 2016)

VW outright lied and cheated intentionally. There is nothing to fix because what they've done was intentional. AMD simply cocked it up and they are already working on a fix for it. That's a big difference.


----------



## Secoya (Jul 3, 2016)

$ReaPeR$ said:


> how many are affected? find me the number. we are talking about 16watt over the specs, 16 ffs.



If even one person was affected by the reckless over-spec wattage draw of the RX 480, simply because AMD didn't want to put a proper power connector on the card for PR reasons, then it is one too many. Science Studio has a video review posted on Youtube showing that the RX 480 isn't even playable with certain motherboards simply because it is WAY over spec. Gamers Nexus showed the card pulling 192W during testing. Don't link a chart showing calculated TDP and stand back and point saying THERE! This is the most ludicrous move in GPU history and it serves AMD right to have it blow up in their faces for doing it. Some cards do in fact go over spec, particularly when OC'd, the difference of course being that they pull that extra wattage through the 6- or 8-pin connector and not the PCIe slot. If you have a good power supply, it doesn't affect you. BUT with the RX 480 this is not the case, and it's a problem that exists at the hardware level because of the way the phases are laid out. It's actually pulling more from the PCIe slot than it is from the 6-pin connector.

Now, let's go back to the Gamers Nexus observed wattage pull when OC'd: 192W against a 150W power budget, with MORE THAN HALF of that coming through the PCIe slot. That's 128% of spec. That's like trying to pull 32 amps from a 25-amp wall socket. You could burn your house down if there were no fail-safes. Even a non-OC'd card will pull 86W from the PCIe slot, or 15% more than the max. These aren't guidelines, they are absolute limits.

The RX 480 is the ONLY GPU in history to average more than 75W from the PCIe slot at stock clocks. The only way to prevent it is to limit wattage to 150W (really less, because it pulls more from the PCIe slot than the 6-pin connector, so let's say 145W). That would require a 14.4% under-clock of the card. A card that is already 15% less powerful than a GTX 1060, which will be able to OC AT LEAST another 20%. So now we're talking about a GTX 1060 that will be around 50% more powerful than a reference RX 480 and 25% more powerful than an OC'd AIB 480, for about $250, being released at the same time as the AIB cards, which will most likely cost $300.

This was a MUST WIN for AMD, but instead it became worst case scenario.
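The overdraw arithmetic in the post above can be sanity-checked with a quick sketch (the 192W and 86W figures are the review numbers quoted in this thread, not my own measurements):

```python
# Sanity-check of the overdraw figures quoted above, using the review
# numbers cited in this thread.

SLOT_SPEC_W = 75.0    # PCIe slot limit
BOARD_SPEC_W = 150.0  # slot + 6-pin combined budget

def percent_of_spec(draw_w, spec_w):
    """Return draw as a percentage of the specified limit."""
    return 100.0 * draw_w / spec_w

# 192 W observed when overclocked vs. the 150 W board budget:
print(percent_of_spec(192, BOARD_SPEC_W))  # 128.0 -> 28% over spec
# 86 W average slot draw at stock vs. the 75 W slot limit:
print(round(percent_of_spec(86, SLOT_SPEC_W), 1))  # 114.7 -> ~15% over
```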


----------



## zAAm (Jul 3, 2016)

RejZoR said:


> VW outright lied and cheated intentionally. There is nothing to fix because what they've done was intentional. AMD simply cocked it up and they are already working on a fix for it. That's a big difference.



It was meant as an analogy to the effect, not the intention of the company. No need to get all up in arms


----------



## BiggieShady (Jul 3, 2016)

RejZoR said:


> Jesus, people think this will just fry the system after 3 gaming sessions.


No, just a pcie slot ... you can move 480 to a different slot and have another 3 gaming sessions on the same motherboard  https://community.amd.com/thread/202410


----------



## GLD (Jul 3, 2016)

I want to see AMD and/or the card manufacturers release a BIOS update to fix the power draw problem. A fix in a driver release won't be sufficient. I want an RX 480, when they get this problem sorted.


----------



## medi01 (Jul 3, 2016)

chinmi said:


> Hahaha.... Dead on arrival... People have more and more reason to wait and buy the 1060 now...


Nah, +15% of perf for +25%+ of price, no thanks, f*ck nVidia tax.

It's a good reason to wait for 480 AIBs though.



Tatty_One said:


> I would think it will possibly be only a matter of hours before something is done assuming that a driver update would alleviate the issue, AMD know better than most that anything negative associated with a new product launch if not dealt with promptly could have long term implications.



They said it should be out by 7th, did they change their stance?



GhostRyder said:


> Still foolish not to just put an 8 pin as the default...  If they wanted to do this to the 4gb and limit the board spec to 150 with a 6 pin then it's fine, but they should have at least with the 8gb given an 8 pin reference.
> 
> This was just a foolish design choice.


I'm pretty sure they boosted 480 at the last minute.
Raja expected Pascal to come at least several months later.

If not 1070/1080, RX 480 would look pretty impressive even at 980Mhz (what Sony PS4k is allegedly using)


----------



## cdawall (Jul 3, 2016)

sith'ari said:


> Mate, i haven't paid *200€* on a high-end PSU, and *400€* for also a high-end UPS, only to let an RX 480  to endanger my system!!
> But , hey, with your money you can do what you want.


Unless you paid *20€* for the motherboard, you are blowing this shit way out of proportion.


----------



## TheoneandonlyMrK (Jul 3, 2016)

GLD said:


> I want to see AMD and/or the card manufactures release a bios update to fix the power draw problem. A fix in a driver release wont be sufficient. I want a RX 480, when they get this problem sorted.


I've got my second one on the way from Amazon for 220 notes and, I noted, it's their best seller right now, despite this immense issue. My mobo has an onboard Molex just for extra power to the PCIe slots for quadfire etc. (clearly ASUS expected odd loading via PCIe). My board's a decent but old one, and I'll have 2 480s running flat out 24/7 folding; I'll let you all know if my PC explodes or anything.
You need to realise hardware and software are intertwined on many platforms, Intel and NVIDIA inclusive, so a driver will be fine, ty very much.


----------



## RejZoR (Jul 3, 2016)

GLD said:


> I want to see AMD and/or the card manufactures release a bios update to fix the power draw problem. A fix in a driver release wont be sufficient. I want a RX 480, when they get this problem sorted.



And how is a BIOS any different from a driver? Just because it's burned into the hardware, that doesn't make it any better. And if you've seen Wattman, it's basically controlling hardware directly. So, driver or BIOS, it doesn't really matter at this point. When the RX 480 boots to desktop, it's not out of spec because it's not under load. But once it's in Windows, does it even matter, since drivers are loaded already? And seeing how Windows 10 insists on installing the latest Radeon drivers no matter what, users will basically be forced to have the latest version of the driver.


----------



## cdawall (Jul 3, 2016)

and everyone does realize we are talking about 7w over spec on the mainboard right?


----------



## sith'ari (Jul 3, 2016)

cdawall said:


> and everyone does realize we are talking about 7w over spec on the mainboard right?



I suppose you haven't seen this video yet: 







If you don't want to watch it all, go to 20:50 for a quick look.  


*EDIT:* Or this one, which reports frequent power shutdowns with older mobos:


----------



## GhostRyder (Jul 3, 2016)

medi01 said:


> Nah, +15% of perf for +25%+ of price, no thanks, f*ck nVidia tax.
> 
> It's a good reason to wait for 480 AIBs though.
> 
> ...


Even if that is the case, its still foolish not to do it.



cdawall said:


> and everyone does realize we are talking about 7w over spec on the mainboard right?


Yea, even if you buy the bottom-of-the-barrel motherboard, it should still be OK. I think some are overreacting to this, though it's a problem that should not exist anyway. I have loaded up a motherboard to ridiculous levels before and not managed to harm it, so I doubt most people (unless you own like an AM1 motherboard maybe?) are going to be able to harm theirs, especially with an RX 480.


----------



## Ungari (Jul 3, 2016)

Where is the outrage for Nvidia products that used PCIE lanes as the sole power source and routinely spiked up in excess of 200 watts?
No boos and hisses for the GTX 950 SE, or 750Ti (*let's not mention the 960 Strix with its "only a 6-pin connector"*), since those cards have much higher power spikes than the RX 480?
Why did no one inquire if those cards were PCI-SIG certified?
These cards are typically mounted in lower cost basic mainboards as low tier card users aren't likely to spend on high performance mobos; why are there no danger warnings, or reports of mainboards getting blown open from these cards?


----------



## Ravenas (Jul 3, 2016)

Hmm.. So they lower the voltage through a software update.

This card is targeted at a mainstream market that couldn't care less and is ignorant of the fact. We're talking about a 1080p $199/249 performance-per-dollar beast... Not a 290X or 390X or Fury.

Mainstream consumer sees 8.9 review score, for $199/249 it's a pretty obvious choice.


----------



## Dippyskoodlez (Jul 3, 2016)

RejZoR said:


> What "non official" driver users? There is just one driver. The official one.



To include Linux users, as well as people like me where an official, super up to date driver may not be in use:


----------



## sith'ari (Jul 3, 2016)

Ungari said:


> Where is the outrage for Nvidia products that used PCIE lanes as the sole power source and routinely spiked up in excess of 200 watts?
> No boos and hisses for the GTX 950 SE, or 750Ti (let's not mention 960 Strix) since those cards have much higher power spikes than that of the RX 480?
> Why did no one inquire if those cards were PCI-SIG certified?
> These cards are typically mounted in lower cost basic mainboards as low tier card users aren't likely to spend on high performance mobos; why are there no danger warnings, or reports of mainboards getting blown open from these cards?



-Apparently you have skipped a lot of posts from this thread. I would urge you to read *#151*. There is nothing wrong with the GTX 960's power output. 
-Plus, on the video i put at *#184, *go at 27:10 and watch their comments for the GTX 960 power output as well.


----------



## GLD (Jul 3, 2016)

RejZoR said:


> And how is BIOS any different than driver? Just because it's burned into a hardware, that doesn't make it any better. And if you've seen Wattman, it's basically controlling hardware directly. So, driver or BIOS, it doesn't really matter at this point. When RX480 boots to desktop, it's not out of specs because it's not under load. But once it's in Windows, does it even matter at this point since drivers are loaded already? And seeing how Windows 10 insists on installing latest Radeon drivers no matter what, users will basically be forced to have the latest version of the driver.



The way my epeen sees it, a driver can "manage" a problem and a bios update can "fix" a problem.


----------



## cdawall (Jul 3, 2016)

sith'ari said:


> I suppose you haven't seen this video yet:
> 
> 
> 
> ...



They literally just say it may be an issue. Buy something cheap, slap a high-wattage card in, and what, expect everything to be perfect? This isn't an issue that will cause problems for most people; it causes an issue for someone who purchased a Dell in 2011 and expected to toss this card in and have it work perfectly...

I imagine, like has been said, AMD bumped the card voltage to increase yields for a known popular card and it exceeded spec because of it.




sith'ari said:


> *EDIT:* Or this one that says about frequent power shutdowns with older mobos:



Do you have an older motherboard?


----------



## sith'ari (Jul 3, 2016)

cdawall said:


> Do you have an older motherboard?



See my signature and tell me yourself!!


----------



## cdawall (Jul 3, 2016)

sith'ari said:


> See my signature and tell me yourself!!



You probably shouldn't buy this card then. Holy hell your entire issue was just avoided and just FYI even when this is fixed I wouldn't recommend you buy this card because you will still probably blow the board.


----------



## Dippyskoodlez (Jul 3, 2016)

RejZoR said:


> And how is BIOS any different than driver? Just because it's burned into a hardware, that doesn't make it any better. And if you've seen Wattman, it's basically controlling hardware directly. So, driver or BIOS, it doesn't really matter at this point.



A BIOS is a hard cap; a driver requires separate installation and has to be functional, and there are many situations where a driver may not be in control of the ship, whereas a BIOS would forcefully override any issues.

Similar to the SMC on a Macbook, or the fan control on the eVGA ACX where it turns off at low temperatures. Having a card able to manage itself is key to ensuring reliability. eVGA GPUs are preferred for my configuration currently because they primarily use the pcie plugs instead of the slot for power.


----------



## zAAm (Jul 3, 2016)

cdawall said:


> You probably shouldn't buy this card then. Holy hell your entire issue was just avoided and just FYI even when this is fixed I wouldn't recommend you buy this card because you will still probably blow the board.



You do realise people can have an opinion of something that doesn't directly affect them right?  

In general though (not directed at cdawall), just because people care about this issue does not automatically mean we're all nvidia fanboys who want the RX480 to fail and AMD's offices to burn down etc etc. It's a new precedent because the issue was detected with THIS card. It's not like everyone was reporting it for all the cards and now suddenly we care. Since the issue has gained some publicity, however, it should push manufacturers to keep future cards within spec, which won't be a bad thing for consumers. I for one hope the fix is easy and sales aren't affected to any great extent. Competition is always good for the consumer


----------



## Ungari (Jul 3, 2016)

sith'ari said:


> -There is nothing wrong with the GTX 960's power output.



See the 3:20 mark of this video concerning the GTX 960 Strix PCIE Power Consumption:


----------



## Dippyskoodlez (Jul 3, 2016)

zAAm said:


> You do realise people can have an opinion of something that doesn't directly affect them right?



'don't buy this' is also a bad answer: Many people will cram this into a board which it 'shouldn't be in' if that was the fix too.



Ungari said:


> See the 3:20 mark of this video concerning the GTX 960 Strix PCIE Power Consumption:



http://www.pcper.com/reviews/Graphi...s-Radeon-RX-480/Evaluating-ASUS-GTX-960-Strix

960 does not present the same issues the rx480 did.


----------



## RejZoR (Jul 3, 2016)

Dippyskoodlez said:


> Bios is a hard cap, a driver requires separate installation and requires it being functional and there are many situations where a driver may not be in control of the ship, but a bios would forcefully override any issues.
> 
> Similar to the SMC on a Macbook, or the fan control on the eVGA ACX where it turns off at low temperatures. Having a card able to manage itself is key to ensuring reliability. eVGA GPUs are preferred for my configuration currently because they primarily use the pcie plugs instead of the slot for power.



Eh, rubbish.

- not in Windows = No 3D acceleration = no load on GPU = FINE
- Basic Display Driver = No 3D acceleration = basic load on GPU, well within specs = FINE
- Full fixed driver = 3D Acceleration = full load with 150W actual limit or redistributed load to 6pin = FINE

Please do tell me what scenario is not covered. And if you think the BIOS has a mind of its own, this is AMD we're talking about. They made the GPUs and they made the BIOS. Don't you think they know what overrides what?


----------



## Dippyskoodlez (Jul 3, 2016)

RejZoR said:


> Please do tell me what scenario is not covered. And if you think BIOS has it's mind of its own, this is AMD we're talking about. They made the GPU's and they made the BIOS. Don't you think they know what overrides what?




Did you not see my screenshot? My GTX 970 runs "346.03".

There are also Linux users on open drivers, rather than the proprietary blob, that provide only basic acceleration. If a system fires up any compute, you could run into problems, especially considering it's likely to beat any game's power draw.

I don't have any evga 'gaming' software' to run, I don't have GPU-z to tweak things, I have strictly the on card BIOS to control cooling of the ACX2.0 fans. If it were driver controlled thinking it was a blower, the 'off' feature at low load/temperatures wouldn't function.

An RX 480 trying to pull >75W through my PCIe slot would start causing stability problems even if it doesn't cause any physical mishaps from the overcurrent, as is very well documented in eGPU setups already.

Edit: On a side note, this is why the 'bullshit' reviewer bios fiasco is also very important to me. I can't necessarily take advantage of an overclock if it's not preset.


----------



## Ungari (Jul 3, 2016)

Dippyskoodlez said:


> 'don't buy this' is also a bad answer: Many people will cram this into a board which it 'shouldn't be in' if that was the fix too.
> 
> 
> 
> ...



Correct, the 960 Strix is much worse in terms of Power Consumption on the PCIE lane.
If the 960 Strix does not concern you, then the RX 480 is just fine and dandy.


----------



## bpgt64 (Jul 3, 2016)

I can't even get mine to POST atm. Now, I am on a 420W (Seasonic) PSU on a mini-ITX board, so I didn't think it would be an issue, but I'll likely have to buy a new PSU or return the card...


----------



## McSteel (Jul 3, 2016)

Ungari said:


> Correct, the 960 Strix is much worse in terms of Power Consumption on the PCIE lane.
> If the 960 Strix does not concern you, than the RX 480 is just fine and dandy.









The problem is slot power draw, not total power draw. The 960 Strix was proven to "overdraw" from the 6-pin external connector, the slot power draw stays below 50W even when overclocked...


----------



## arbiter (Jul 3, 2016)

cdawall said:


> Unless you paid *20€* the motherboard you are blowing this shit way out of proportion.





cdawall said:


> and everyone does realize we are talking about 7w over spec on the mainboard right?



I was debating whether or not to say this, but why not. The belief that 75 watts is what you can draw from the PCIe slot, that is true. However, that 75 watts is the COMBINED draw. Some people just reading up on this should already have an idea about this. Officially, the spec says you can pull 66 watts from the +12V pins, so sorry to burst the bubble a bit there, cdawall, but it's 16 watts over spec. The other 9.9 watts comes from the +3.3V pins. In the PCPer video linked they do say this, but it seems a lot of people missed it. Below is a table showing the draw, and if you know how to do the conversions you will see it is the truth.
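The per-rail split described above follows from the PCIe CEM spec's current limits, and can be checked with a small sketch (the ~82W 12V slot draw is implied by the "16 watts over" figure in the post, not a number I measured myself):

```python
# Per-rail limits of a PCIe x16 slot (PCIe CEM spec current limits):
# the familiar "75 W" slot budget is a combined figure across two rails.

RAILS = {
    "+12V": {"volts": 12.0, "max_amps": 5.5},   # 66 W
    "+3.3V": {"volts": 3.3, "max_amps": 3.0},   # 9.9 W
}

def rail_limit_w(rail):
    """Wattage limit of one slot rail: volts * max amps."""
    r = RAILS[rail]
    return r["volts"] * r["max_amps"]

total = sum(rail_limit_w(r) for r in RAILS)
print(rail_limit_w("+12V"))   # 66.0
print(round(total, 1))        # 75.9, i.e. the "75 W" slot budget

# An ~82 W draw on the +12V rail alone (as implied above) is therefore:
print(82 - rail_limit_w("+12V"))  # 16.0 W over the +12V spec, not 7 W over 75 W
```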


----------



## Ungari (Jul 3, 2016)

McSteel said:


> The problem is slot power draw, not total power draw. The 960 Strix was proven to "overdraw" from the 6-pin external connector, the slot power draw stays below 50W even when overclocked...



Incorrect. The GTX 960 Strix was peaking regularly at over 225 Watts on the PCIE Slot.
The RX 480 has much lower peaks around 160 Watts, and therefore should be of no concern to those who accept Nvidia's previous offerings.


----------



## the54thvoid (Jul 3, 2016)

Ungari said:


> Incorrect. The GTX 960 Strix was peaking regularly at over 225 Watts on the PCIE Slot..



You have a confirmed source for that? A 225 W draw for a Maxwell card that far down the hierarchy seems implausible.

EDIT: lol - I didn't actually use Tom's Hardware as a source for power draw (I don't rate them too highly), then I watched the video... As for that guy on AdoredTV: as a Glaswegian, his accent was a sham. Scottish people don't talk that way; when they do, they get a punch in the mouth.


----------



## McSteel (Jul 3, 2016)

Ungari said:


> Incorrect. The GTX 960 Strix was peaking regularly at over 225 Watts on the PCIE Slot.
> The RX 480 has much lower peaks around 160 Watts, and therefore should be of no concern to those who accept Nvidia's previous offerings.



Read the entire thread please, no need for anything to be repeated 100 times for everyone.
Also, this.


----------



## Ungari (Jul 3, 2016)

the54thvoid said:


> You have a confirmed source for that?  225watt draw for a maxwell card that far down the hierarchy seems implausible.
> 
> EDIT: lol - I didn't actually use Toms hardware as a source for power draw (don't rate too highly) then watched the video..  As for that guy on AdoredTV - as a Glaswegian, his accent was a sham.  Scottish people don't talk that way - when they do, they get a punch in the mouth.



Tom's Hardware was one of four websites that promoted this faux furor.
You are using a Total Power Consumption chart, when the real issue here is the amount of power coming through the mainboard slot, so it isn't relevant to the subject.
Thank you for dismissing the video based on the accent of the author. This methodology of establishing the accuracy of technical information on the basis of a speaker's accent should be Standard Operating Procedure.


----------



## R-T-B (Jul 3, 2016)

Ungari said:


> Thank you for dismissing the video based on the accent of the author. This methodology of establishing the accuracy of technical information on the basis of a speaker's accent should be Standard Operating Procedure.



I think the point was more along the lines of "if he's faking an accent, how honest is he?"



> Incorrect. The GTX 960 Strix was peaking regularly at over 225 Watts on the PCIE Slot.



Bullshit. The PCIe slot's standard watt limit is 50 W. I mean, it can do more, especially with boosters, but 225 W? No, it'd melt and we'd have lawsuits. The card pulls the majority of that by overdrawing the 6-pin, which is far less of an issue.


----------



## cdawall (Jul 3, 2016)

arbiter said:


> I was deciding on if to say this or but but why not. The belief that 75watts is what you can draw from the PCI-e slot, That is True. However that 75 watts is COMBINED draw. Some people just reading up to this should already have an idea about this. Offically the spec says you can Pull 66 watts from the +12 volt pins in the board so the Sry to bust the bubble a bit there cdawall but its 16 watts over spec. The other 9 watts is +3.3volt pins. In the Pcper video linked they do say this but seems like a lot of people missed it. Below is table showing the draw and if you know how to do the conversions you will see it is the truth.



+/- 8%, which puts you at 72 W
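For what it's worth, that allowance can be checked in one line, assuming the 8% tolerance applies to the 5.5 A current limit on the +12 V rail (an assumption for illustration, not a spec quotation):

```python
# Effective +12 V slot ceiling if an 8% current allowance is granted
# on the nominal 5.5 A limit (illustrative, not a spec quotation).
base_amps = 5.5
tolerance = 0.08
ceiling_w = base_amps * (1 + tolerance) * 12.0  # about 71.3 W
```

That lands within a watt of the 72 W figure cited above.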


----------



## ensabrenoir (Jul 4, 2016)

Almost to page 10 guys!!!! Keep up the good work!!! Seriously though..... we gotta hold these companies, Nvidia (memory-gate), AMD (PCI-gate), Volkswagen (emissions-gate: gotta have a car analogy), to the highest standards or we all lose. Ain't right to unfairly trash 'em...... but no free passes on anything that could potentially lead to the fleecing of our pockets or to our detriment.


----------



## arbiter (Jul 4, 2016)

Ungari said:


> Incorrect. The GTX 960 Strix was peaking regularly at over 225 Watts on the PCIE Slot.
> The RX 480 has much lower peaks around 160 Watts, and therefore should be of no concern to those who accept Nvidia's previous offerings.


The "225 watts" I bet is the info you got from Tom's Hardware. If you watch PCPer they explain it, but I guess I will have to. DC-to-DC switches operate by turning on and off very fast; they have no mid state, simply because it's more efficient to go fully on and off. That 225 watts you see is a power spike that lasts only a few milliseconds at a time. If you look at ALL video cards you will see the same thing at some point. The problem with Tom's Hardware's power graph is that even though it's technically correct, when you show it to people with no knowledge of basic electronics, it's easy to see that spike and cry wolf that the card uses 225 watts. The slot can take spike loads for short periods without issues, because the heat created in such a short time does no damage. In easy terms it's like a dragster: you can run the engine in those cars for a short time without burning up the motor, but if you ran it for 10 minutes it would be damaged. Tom's Hardware does show an average in their graph, but what they show overall has people up in arms over what are normal spike loads that happen all the time. The real problem is when the average load is higher than what the spec allows. If that happens for a short time it probably won't hurt the machine, but do it for hours on end, day after day, and damage will happen.
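The spike-versus-average distinction can be made concrete with a toy waveform. The numbers below are invented for illustration: a steady 70 W baseline with three milliseconds of 225 W spikes per second of samples.

```python
# A scope capture shows the peak; the connector only feels the average.
baseline_w = 70.0
spike_w = 225.0
samples = [baseline_w] * 997 + [spike_w] * 3  # 1000 samples = 1 s at 1 kHz

peak = max(samples)                    # what a worst-case graph reports
average = sum(samples) / len(samples)  # what actually heats the slot
# The average barely moves (~70.5 W) despite a peak more than 3x higher.
```

This is why a 225 W spike on a power plot and a ~70 W average draw can both be true at once.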


----------



## Ungari (Jul 4, 2016)

R-T-B said:


> I think the point was more along the lines of "if he's faking an accent, how honest is he?"



Correct. 
Perhaps if I posted a Soundcloud of my voice he would take my word for it?


----------



## xenocide (Jul 4, 2016)

There is no way the 960 Strix was pulling 225W at the Slot.  It would melt the board.


----------



## Ungari (Jul 4, 2016)

arbiter said:


> The "225watts" i bet is the info you got from toms hardware. If you watch pcper they would explain it but i guess i will have it. Dc to DC switch's operate in a way they turn on and off very fast they don't have mid state just cause its more efficient to go on and off.  That 225watts you see is a spike power which happens only for a few milliseconds at a time. If you look at ALL video cards you will get that same thing at some point. Problem with Tomshardware power graph is even though its technically correct but when you show that to people that have no knowledge of basic electronics and how they work its easy to see that spike and cry wolf that it uses 225watts.  The slot can do spike loads for short periods and not have issues cause the heat created is such a short amount of time and no damage will happen. Easy terms are like a dragster, you can run the engine in those cars for shot time and not burn them off without burning up the motor but if you ran the motor for 10 min it would damage it. Tomshardware does show an avg usage in their graph but what they show overall has people jumping in arms over what is normal spike loads that happen all the time. The thing that is the problem is when you avg a load higher then what the spec allows. If it happens for short time probably won't hurt the machine but do it for hours on end day after the day damage will happen.



This average-load vs. peak-spike argument is a diversion that favors Nvidia's lower average due to its more extreme oscillations.
If you agree that overclocking cards like the 750s and 950 SE exceeds the PCIe power specs, then where are all the broken mainboards?


----------



## xenocide (Jul 4, 2016)

Ungari said:


> If you would agree that overclocking cards like the 750's and 950 SE exceeds the power specs for PCIE, then where are all the broken mainboards?



Doesn't that support the idea that your assertion is in fact false? If Nvidia had cards out that were really breaking the limits as badly as you claim, we'd have bricked computers left and right, as well as thousands of people suing Nvidia. But that doesn't exist, so obviously something is wrong with your claim that Nvidia cards (like the Strix 960) are somehow pulling 225 W from the PCI-e slot--despite that being physically impossible.


----------



## cdawall (Jul 4, 2016)

There really aren't bricked computers left and right from the 480's...


----------



## $ReaPeR$ (Jul 4, 2016)

cdawall said:


> There really aren't bricked computers left and right from the 480's...


Yes, and I'm really getting bored of this drama.. we are having the same conversation over and over and over and over and over... for 16 watts over spec that will be fixed with a driver update. I mean, the horror.. -_- And the only person here actually owning a 480 isn't complaining, ffs..


----------



## arbiter (Jul 4, 2016)

Ungari said:


> This idea of average load vs. power peak spikes argument is a diversion that favors Nvidia's lower average due to it's more extreme oscillations.
> If you would agree that overclocking cards like the 750's and 950 SE exceeds the power specs for PCIE, then where are all the broken mainboards?


If the card did exceed the power draw, there would have been a story about it happening, but it hasn't happened, probably due to the power limits enforced on the card to prevent it.



cdawall said:


> There really aren't bricked computers left and right from the 480's...


If the power circuits on a board are working properly, the machine would just shut itself off to protect itself; if they aren't, then some postings of PCI-e slots not working will show up. Either one of those two is bad, no matter how you spell it out.



$ReaPeR$ said:


> yes, and im really getting bored of this drama.. we are having the same conversation over and over and over and over and over... for 16 watts over spec that will be fixed with a driver update.. i mean, the horror.. -_- and the only person actually owning a 480 isnt complaining ffs..


New card, limited stock, so it's kinda hard for everyone to have one already. With just a handful of people owning it, the reality is that for AMD this was the best case. This was found early on and not months later. Yes, it hurt PR-wise, but it could hurt even more if it was found out 6 months down the road that hundreds of thousands of people's machines were damaged by this; it would cost AMD a ton of cash they don't have.


----------



## cdawall (Jul 4, 2016)

arbiter said:


> If power circuits on a board are working properly then machine would just shut it self off to protect itself, but if they don't well then some postings of PCI-e slots not working will happen. Either one those 2 are bad no either way to spell it out.



I did that coming up on 10 years ago with an AM2 board and 3-way CrossFire. The cards were 3850s; it was a common issue with the board. Guess what? No one died and people kept buying the product.


----------



## $ReaPeR$ (Jul 4, 2016)

arbiter said:


> If the card did exceed the power draw then would been a story of it happening but it hasn't happened probably due to power limits enforced on the card to prevent it.
> 
> 
> If power circuits on a board are working properly then machine would just shut it self off to protect itself, but if they don't well then some postings of PCI-e slots not working will happen. Either one those 2 are bad no either way to spell it out.
> ...



And that was my point. The drama is too much compared to the actual damage, and this is something that can actually be fixed with a driver update. So it is becoming more and more pointless whining from people who probably will never own a 480.


----------



## Ungari (Jul 4, 2016)

xenocide said:


> Doesn't that support the idea that your assertion is in fact false?  If Nvidia had cards out that were really breaking the limits as bad as you claim, we'd have bricked computers left and right, as well as thousands of people sueing Nvidia.  But that doesn't exist, so obviously something is wrong with your claim that Nvidia cards (like the Strix 960) are somehow pulling 225W from the PCI-e slot--despite that being physically impossible.



That's my point.
Since Nvidia cards did not brick boards with their higher power spikes, then the furor over the RX 480 is needless unless there is a bias.

AMD will likely make unnecessary changes just to mollify the uproar.



arbiter said:


> If the card did exceed the power draw then would been a story of it happening but it hasn't happened probably due to power limits enforced on the card to prevent it.



Those cards routinely exceed the 75-watt limit with their power spikes, just like the RX 480. Yet PCI-SIG certifies all these cards---why?
Because it isn't an issue!


----------



## arbiter (Jul 4, 2016)

Ungari said:


> That's my point.
> Since Nvidia cards did not brick boards with their higher power spikes, then the furor over the RX 480 is needless unless there is a bias.
> 
> AMD will likely make unnecessary changes just to mollify the uproar.


High power spikes are normal when a DC-to-DC switch turns on, like a light bulb that draws a lot of power to turn on quickly and then drops down. The problem that could come from all this: people want to build a super-cheap $550 gaming machine. It's not gonna be a good thing if the machine keeps shutting itself down in the middle of gameplay. Most people probably wouldn't have the troubleshooting skills to figure out that the GPU is drawing too much power from the board and causing it.


$ReaPeR$ said:


> and that was my point. the drama is too much compared to the actual damage. and this is something that can actually be fixed with a driver update. so.. it is becoming more and more pointless whining from people that probably will never own a 480.


Well, it is a two-way street; the same people whined and complained about the GTX 970 issue, and most of them were never likely to buy one.



Ungari said:


> Those cards routinely exceed the 75 Watt limit with it's power spikes, just like the RX 480. Yet PCI-SIG certifies all these cards---why?
> Because it isn't an issue!



The problem with what you say there is spikes: if you look at all GPUs, they spike to 100+ watts all the time; it's just the nature of DC-to-DC switches. It's the overall average draw over time that becomes the problem. Drawing 225 watts for a matter of milliseconds will do no damage, but pulling 100 watts constant for, say, 2-3 minutes can, as the heat is able to build up and melt something. If you watch the video PCPer did on the issue, that is one of the things they cover.
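The same point, stated in terms of energy: heat damage tracks joules delivered over time, not instantaneous watts. The durations below are made up for illustration:

```python
# Energy dumped into the connector: a millisecond-scale spike vs. a
# sustained overdraw (illustrative durations, not measurements).
spike_joules = 225.0 * 0.005      # 225 W for 5 ms  -> about 1.1 J
sustained_joules = 100.0 * 120.0  # 100 W for 2 min -> 12,000 J
# The sustained case delivers roughly 10,000x the energy of the spike.
```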


----------



## cdawall (Jul 4, 2016)

arbiter said:


> Problem with what you say there is spikes, if you look at all gpu's they spike to 100+ watts all the time its just nature of Dc to DC switch's. Its the over all avg draw over time that is where it gets to be the problem. drawing 225watts for matter of ms will do no damage but pulling 100watts constant for say 2-3 min can cause it as the heat is able to build up and melt something. If you go watch the video Pcper did on the issue that is one the things they cover.



If you watch the video PCPer did you'll notice their board was fine.


----------



## arbiter (Jul 4, 2016)

cdawall said:


> If you watch the video PCPer did you'll notice their board was fine.


Did you also notice they were using a high-end X99 board? More expensive, higher-end boards can handle it, as they are designed with headroom; it's the cheap sub-$100 boards and older ones that people are worried about.


----------



## cdawall (Jul 4, 2016)

arbiter said:


> did you also notice they were using an x99 high end board as well? more expensive higher end boards can handle it as they are overly designed that way its the cheap sub 100$ boards and older ones that are what people are worried about.



The *only* board I have seen with any report of an actual failure was a throwaway ASRock 970 board that looked like it had been drenched in Coke at some point and coated in fur. The board failed after 7 hours straight of TW3 using an 8350 @ 4.5 and a 500 W Corsair CX PSU. I feel like we will see multiple reports like that. Cheap power supplies with heavy vdroop on the 12 V rail mean more current will be pulled across an already stressed motherboard 12 V line. Outside of that instance I have yet to see anything other than shutdowns.

As I have already said multiple times, dropping the consumption down a few watts isn't going to fix this; people with cheap motherboards are still going to have failures after long gaming sessions with these cards and any other high-wattage draw across PCI-e. The 750 Tis already did this in OEM units with VGA upgrades.


----------



## cdawall (Jul 4, 2016)

Oh and as for pcper as a whole



			
pcper said:

> In this shot, we are using the same data but zooming on a section towards the beginning. It is easier to see our power consumption results, with the highest spike on total power nearly reaching the 170-watt mark. Keep in mind this is NOT with any kind of overclocking applied – everything is running at stock here. The blue line hits 85 watts and the white line (motherboard power) hits nearly 80 watts. PCI Express specifications state that the +12V power output through a motherboard connection shouldn’t exceed 66 watts (actually it is based on current, more on that later). *Clearly, the RX 480 is beyond the edge of these limits but not to a degree where we would be concerned*.



here is another from their written review, now keep in mind the 95w power draw was overclocked.



			
pcper said:

> I asked around our friends in the motherboard business for some feedback on this issue - is it something that users should be concerned about or are modern day motherboards built to handle this type of variance? One vendor told me directly that while spikes as high as 95 watts of power draw through the PCIE connection are tolerated without issue, sustained power draw at that kind of level would likely cause damage. The pins and connectors are the most likely failure points - he didn’t seem concerned about the traces on the board as they had enough copper in the power plane to withstand the current.



they also seem to rather like the card


----------



## ensabrenoir (Jul 4, 2016)

...took an AM2 and a socket 775 board to produce a crash in this video...










For the most part a fix is coming, and as long as the fix doesn't kill performance... and they take care of those whose boards got damaged........ nothing else going on. Next story please! 
.....polka dots? Not promoting this guy's taste in T-shirts.


----------



## newtekie1 (Jul 4, 2016)

Ungari said:


> This idea of average load vs. power peak spikes argument is a diversion that favors Nvidia's lower average due to it's more extreme oscillations.
> If you would agree that overclocking cards like the 750's and 950 SE exceeds the power specs for PCIE, then where are all the broken mainboards?



No, the average vs. spike argument is because an average load over time heats things up and can kill hardware. A spike doesn't heat up the connector and cause damage.

The 960 might spike to 225 W, but a lot of cards spike pretty damn high; it is a byproduct of DC-to-DC conversion. The difference is the GTX 960 averages significantly under spec, only pulling ~30 W from the PCI-E slot over time. So the connector doesn't heat up, and there isn't a risk of damage. The RX 480, on the other hand, averages way over the spec, which will heat things up and can cause damage.

The funny thing is, I've seen a 24-pin connector melt on a PSU with two 480s connected and folding... they just weren't AMD's 480s, they were nVidia's. It wasn't exactly an ancient board either; it was an ASUS 990X-based motherboard. And the PSU wasn't a slouch either; it was a Corsair HX850.


----------



## cadaveca (Jul 4, 2016)

newtekie1 said:


> The funny thing is, I've seen a 24-pin connector melt on a PSU with two 480s connected and folding...they just weren't AMD's 480, they were nVidia's.  It wasn't exactly an ancient board either, it was a ASUS 990X based motherboard.  And the PSU wasn't a slouch either, it was a Corsair HX850.



Well, this is where the concerns should be placed... on those running multiple cards in a system. 16W on a single card becomes 48W on three cards. And since AMD doesn't have a 2-card limit to the number of cards in Crossfire configs, this sort of thing is something people who are considering such rigs need to consider.

This isn't a huge issue at all, however, it is something that needs to be considered depending on how many of these cards you might plan to use.


----------



## newtekie1 (Jul 4, 2016)

cadaveca said:


> Well, this is where the concerns should be placed... on those running multiple cards in a system. 16W on a single card becomes 48W on three cards. And since AMD doesn't have a 2-card limit to the number of cards in Crossfire configs, this sort of thing is something people who are considering such rigs need to consider.
> 
> This isn't a huge issue at all, however, it is something that needs to be considered depending on how many of these cards you might plan to use.



I agree, this isn't a huge issue, but it's definitely an issue that needs to be addressed. The people downplaying this and saying it isn't really an issue are just plain wrong. It is causing problems. They say don't worry because newer motherboards have protection to stop damage. Ok, fine. But my machine randomly shutting down is an annoying f'n issue that I'd want fixed. It definitely isn't "no real issue".

Plus, as with my experience with the GTX 480s, the problem worsened over time until the connector completely melted. I didn't even realize what had happened until it was too late. The machine ran fine for almost a year. Then it started randomly shutting down about once a month. Then it started happening about once a week. I did visual inspections; all looked fine. I even wiped and re-installed Windows once it started happening weekly. Then one week it shut down every day for about 4 days straight. Then finally it shut down and wouldn't power back on. It wasn't until I pulled the computer completely apart that I found the burnt/melted 24-pin connectors on the PSU/motherboard.

And that is the thing that concerns me, even when only running one card. Over time, if you are constantly over-driving the connector, it can deteriorate. And the problem with a power connector is, when it starts to deteriorate, it takes more current to overcome the resistance of the poor connector. More current and more resistance means more heat at the connector. It is just a snowballing effect until failure.

I'm not saying this is something that is going to happen all the time. But even something like dirty contacts can cause more resistance and a higher potential for failure. So it is definitely a possibility. And there really isn't any reason to pull that much power from the PCI-E slot; the 6-pin/8-pin connectors are way overbuilt. The extra power should be pulled from those connectors.

And obviously custom-designed cards aren't likely to have this problem anyway.
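The snowball effect described above can be sketched numerically. Assuming a card that keeps demanding 66 W through the slot and a single lumped contact resistance (both values invented for illustration), the contact heating grows faster than the resistance itself, because the current must also rise to push the same power through a larger voltage drop:

```python
# Contact heating vs. contact resistance for a fixed power demand.
SUPPLY_V = 12.0
LOAD_W = 66.0  # power the card keeps drawing through the slot

def contact_heat_watts(r_contact: float) -> float:
    """Heat dissipated in the contact while the load still receives LOAD_W.

    Solves LOAD_W = I * (SUPPLY_V - I * r_contact) for the current I,
    i.e. r*I^2 - V*I + P = 0, taking the smaller (stable) root.
    """
    disc = SUPPLY_V**2 - 4.0 * r_contact * LOAD_W
    current = (SUPPLY_V - disc**0.5) / (2.0 * r_contact)
    return current * current * r_contact

fresh = contact_heat_watts(0.005)     # clean, new contact (~5 milliohm)
oxidized = contact_heat_watts(0.050)  # degraded contact, 10x resistance
# 'oxidized' comes out more than 10x 'fresh': resistance rose 10x and
# the current needed to deliver 66 W rose with it.
```

A real slot has many pins in parallel plus thermal feedback (hotter contacts oxidize faster) on top of this static picture, so the runaway is worse in practice.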


----------



## cdawall (Jul 4, 2016)

cadaveca said:


> Well, this is where the concerns should be placed... on those running multiple cards in a system. 16W on a single card becomes 48W on three cards. And since AMD doesn't have a 2-card limit to the number of cards in Crossfire configs, this sort of thing is something people who are considering such rigs need to consider.
> 
> This isn't a huge issue at all, however, it is something that needs to be considered depending on how many of these cards you might plan to use.



To be fair, most of the boards that would run three RX 480s have an add-on connector on the board for extra PCI-e power.


----------



## Prima.Vera (Jul 4, 2016)

ensabrenoir said:


> ...took an Am2  and a socket 775 to produce a crash in this video...
> 
> 
> 
> ...



Really dude. I stopped on the second nr. 2 immediately after I saw the guy dressed like a woman...


----------



## Ungari (Jul 4, 2016)

It's funny that so many think AMD should have put an 8-pin instead of a 6-pin on the card, as if this would have changed the power draw from the motherboard slot.


----------



## Ungari (Jul 4, 2016)

ensabrenoir said:


> and they take care of those who's board got damaged



I haven't heard of any credible article where someone lost a mainboard due to the RX 480, have you?


----------



## cdawall (Jul 4, 2016)

Ungari said:


> I haven't heard of any credible article where someone lost a mainboard due to the RX 480, have you?



Only one I have seen was a user report of a dead asrock board, but it was well abused.


----------



## AsRock (Jul 4, 2016)

cdawall said:


> Only one I have seen was a user report of a dead asrock board, but it was well abused.



As much as the 480 is in that last video some one posted, like OMG he only just got that lol.



Prima.Vera said:


> Really dude. I stopped on the second nr. 2 immediately after I saw the guy dressed like a woman...



His problem, not ours; it still doesn't mean the info he found was good or bad.



ensabrenoir said:


> ...took an Am2  and a socket 775 to produce a crash in this video...
> 
> 
> 
> ...



As I heard it, he tested a 775 setup, whereas he said someone else tested the AM2.

But again, OMG, a few days later and the card looked like this. Sheesh.


----------



## ensabrenoir (Jul 4, 2016)

AsRock said:


> As much as the 480 is in that last video some one posted, like OMG he only just got that lol.
> 
> 
> 
> ...



The card still had the plastic protection on it..... he's a responsible YouTuber.


----------



## AsRock (Jul 4, 2016)

ensabrenoir said:


> The card still had the plastic protection on it.....hes a responsible youtuber.



Youtubers the place you keep going lol.


EDIT, yes my bad .


----------



## Ungari (Jul 4, 2016)

ensabrenoir said:


> The card still had the plastic protection on it.....hes a responsible youtuber.



He's definitely not riding dirty!


----------



## ensabrenoir (Jul 4, 2016)

.....wonder if we can stretch this out till Tuesday or whenever they release their update. Then we can start a performance-impact debate.... well, it's late, one troublemaker signing off.


----------



## rtwjunkie (Jul 4, 2016)

NDown said:


> Well what do you expect when most of their fanbase are mostly manchild/literal kid
> 
> you cant have a good gaming experience if you dont have the GeForce GTX® logo/sticker in your PC afterall :^)
> 
> most probably doesnt care about efficiency either, or they are simply too new to remember the HD5xxx vs GTX 4xx series



And exactly what are you doing? Both sides' fanboys are nasty and cruel, arguing about shit that just doesn't matter while they throw insults like you just did.

And for the record, most of TPU's membership is well over 30, with a lot of us in our 40s and 50s.


----------



## R-T-B (Jul 4, 2016)

rtwjunkie said:


> And for the record, most of TPU's membership is well over 30, with alot of us in our 40's and 50's.



I wouldn't say most. There are a lot more one-post wonders that are probably little kids. But in post count, logic, and participation, us 30-ish people hold our weight quite well.


----------



## RejZoR (Jul 4, 2016)

Dippyskoodlez said:


> Did you not see my screenshot? My GTX 970 runs "346.03".
> 
> There are also Linux users using non-proprietary blobs that provide basic acceleration. If a system fires up any compute, you could run into problems, especially considering it's likely to beat any game power draw.
> 
> ...



If you run Linux and "compute", you're not an average user, and you've heard about the RX 480 "issue". And you don't seem to understand the BIOS/driver relationship at all either. Drivers don't "assume" you have a blower cooler on your ACX EVGA graphics card; they KNOW you don't have it. That's why AIBs have different hardware IDs to identify specific hardware, so that the driver doesn't "assume" things like this, but "knows" things like this.


----------



## basco (Jul 4, 2016)

PCPer just gave us a second view of things on the RX 480, and I like the explanation and data.
Maybe Tom's Hardware and the others should not have gotten it out so quickly, and talked to other sites before making it big.
I think this is blown out of proportion.

rbuass is a well known and respected overclocker:
with subtitles


----------



## jigar2speed (Jul 4, 2016)

Thought to share it with you guys...


----------



## Frick (Jul 4, 2016)

basco said:


> maybe tomshardware+others should have not get it out so quickly and talked to other sites before making it big.
> i think this is blown out of proportion.



The revenue must flow. - Guild Navigator


----------



## john_ (Jul 4, 2016)

I wonder if the GTX 960 Strix was brought up as an example by those defending Nvidia. It's a card with an extra power connector and a TDP much lower than 150 W; whatever spikes it produces, the average consumption from the PCIe bus will always look under the limit.
But mention an overclocked GTX 950 without a 6-pin connector and everyone ignores you, changes the subject, or wants you to believe that you can have 20% extra performance without consuming a single extra watt. Free performance.


Instead of people asking whether there are other graphics cards out there depending too much on the PCIe bus, or sites starting tests to see if specific overclocked cards could push the PCIe bus over its specs (the GTX 950 I mentioned; the R9 270X with only one PCIe connector could be another example; the GTX Titan Z; the R9 295X2), this ends up again becoming the favorite subject for many people: how to attack a new product from AMD.

One more opportunity to attack AMD, and a lost chance to investigate something important and probably learn something we seem to have ignored until today and will probably keep ignoring, starting after tomorrow.



jigar2speed said:


> Thought to share it with you guys...



Yeap. A fact that many try to hide behind their little finger.


----------



## ArdWar (Jul 4, 2016)

I can't even understand why people are trying, or experimenting to see, whether their motherboard would crash when used with a reference RX 480.

A power overload wouldn't crash a computer unless it's so grossly overloaded that it trips the PSU's OPP/OCP, or drops the PSU rail significantly under ATX spec. The PCI-E power delivery is connected straight into the power and ground planes on the PCB; almost no component other than the connectors themselves is _directly_ affected. There's no power limiter, power-sharing controller or whatever on the motherboard itself. Just a direct connection!

If anything, the problem wouldn't show itself so quickly. A newly inserted power connector and PCI card is literally the best-case scenario for power delivery. The contacts are still shiny and new, with no oxidation and no dirt, and the scratching during card insertion helps remove any existing oxide layer. Wait some months or years until the contacts oxidize, increasing the contact resistance. By Ohm's law, the heat generated by the current flowing through the contacts will increase, and depending on the material, it may melt the plastic around the contacts.


A bad motherboard that skimps on proper power and ground planes, using skinny traces instead, is another story. It could literally burn.


----------



## ixi (Jul 4, 2016)

AsRock said:


> As much as the 480 is in that last video some one posted, like OMG he only just got that lol.
> 
> 
> 
> ...



That's racist!


----------



## AsRock (Jul 4, 2016)

ixi said:


> That's racist!



No, that's just you taking shit out of context.


----------



## ixi (Jul 4, 2016)

AsRock said:


> No, that's just you taking shit out of context.



Nice humour you got there mate.


----------



## AsRock (Jul 4, 2016)

ixi said:


> Nice humour you got there mate.



Mate? Isn't that like friend? So you're saying I was being racist when I wasn't, but I'm a mate. HAHA.

Anyway, enough; this isn't what the forum is for. Please be more constructive, or just PM me, where I can ignore you for talking out of your butt without affecting anyone else.


----------



## Assimilator (Jul 4, 2016)

jigar2speed said:


> Thought to share it with you guys...



"Turns out pretty much every card pulls more than 75W from the slot" Well, that's a blatant lie, as numerous sources with actual power-measurement equipment have already determined. But I guess people would rather believe an arbitrary YouTuber's tweet over multi-page articles of hard facts and numbers.



john_ said:


> I wonder if GTX 960 Strix was brought as a paradigm from those defending Nvidia. It's a card with an extra power connector and TDP much lower than 150W. Whatever spikes it produces the average consumption from the pcie bus will always look under the limit.
> But tell someone about an overclocked GTX 950 without a 6 pin connector and ignores you



Because it's irrelevant. Let me explain this to you simply:

GTX 950 running at stock, i.e. how 100% of users will use it: adheres to the PCIe spec
GTX 950 overclocked, i.e. how a small % of users will use it: may violate the PCIe spec

RX 480 running at stock, i.e. how 100% of users will use it: violates the PCIe spec
RX 480 overclocked, i.e. how a small % of users will use it: violates the PCIe spec, probably even more

Simple numbers say that since far fewer people overclock GTX 950 than run RX 480 at stock, far fewer people will encounter issues with PCIe slot draw. Not to mention that overclocking voids your warranty anyway, so only you are responsible if your PC catches fire while you're overclocking a GTX 950. But if you're running an RX 480 at stock and it causes your PC to catch on fire, the only one to blame is the manufacturer... i.e. AMD.

Oh, and we already know that R9 295 X2 plays fast and loose with the PCIe power spec - I called AMD out on that too, BTW - but that's far less of a problem because there are so few 295s and the majority of people running them will have overspecced systems anyway.

What it boils down to is simply that if AMD hadn't been cheap f**ks and tried to shave 2 cents off the BOM by using a 6-pin connector instead of an 8-pin, they wouldn't be having this problem. That's an absolutely indefensible case of cutting corners. And personally that's why I'm so upset, because AMD has, once again, ruined what could've been a great product launch with their own incompetence. Like I said in the review thread, they never learn.



ArdWar said:


> A bad motherboard that skimp on using proper power plane and ground plane, using skinny traces instead, is another story. It could literally burn.



That's where most of the concern comes from, because the low cost of the RX 480 means it's often likely to be paired with a cheap motherboard. Think of internet gaming cafes in Asia that are going to be buying these cards by the truckload - how high quality and well ventilated do you think those systems will be?


----------



## $ReaPeR$ (Jul 4, 2016)

arbiter said:


> High power spikes that is normal when a Dc to DC switch turns on, like a light bulb that turns on draw's a lot of power to turn on quick then drops down. Problem that could be from all this, people want to build super cheap 550$ gaming machine. Not gonna be a good thing if machine keeps shutting it self down in middle of game play. Most people probably wouldn't haven't the trouble shooting to figure out the gpu is drawing to much power from the board and causing it.
> 
> Well it is a 2 way street, the same people whined and complained about the gtx970 issue most them were not likely to ever buy one.
> 
> ...


That's not the same, but anyway... you can't fix the 970's slow 512 MB with a driver.


----------



## john_ (Jul 4, 2016)

Assimilator said:


> Because it's irrelevant. Let me explain this to you simply:
> 
> GTX 950 running at stock, i.e. how 100% of users will use it: adheres to the PCIe spec
> GTX 950 overclocked, i.e. how a small % of users will use it: may violate the PCIe spec


This is funny. You start about people believing an arbitrary tweet, then you do an arbitrary speculation because it suits you. Double standards?



> RX 480 running at stock, i.e. how 100% of users will use it: violates the PCIe spec
> RX 480 overclocked, i.e. how a small % of users will use it: violates the PCIe spec (probably even more)


In contrary to you, who will wait hell to freeze first before posting anything negative for Nvidia, I don't have a problem saying that AMD messed up here.



> Simple numbers say that since far fewer people overclock GTX 950 than run RX 480 at stock, far fewer people will encounter issues with PCIe slot draw. Not to mention that overclocking voids your warranty anyway, so only you are responsible if your PC catches fire while you're overclocking a GTX 950. But if you're running an RX 480 at stock and it causes your PC to catch on fire, the only one to blame is the manufacturer... i.e. AMD.


You missed my point here. If the GTX 950 goes as high as 85-90W of power usage through the PCIe bus, shouldn't we have heard about people frying their motherboards? You can say that it doesn't happen and that's why we haven't heard anything. To be fair, the tech press likes to concentrate its fire on AMD, so we will probably never learn how much power a GTX 950 with no 6-pin pulls through the PCIe bus under overclocking. Neither Tom's Hardware, nor PCPerspective (nor TPU?) will come up with an article, especially if the card pulls more than it should.



> Oh, and we already know that R9 295X2 plays fast and loose with the PCIe power spec - I called them out on that too, BTW - but that's far less of a problem because there are so few 295s and the majority of people running them will have overspecced systems anyway.


 I bet you did. It's an AMD card.



> What it boils down to is simply that if AMD hadn't been cheap f**ks and tried to shave 2 cents off the BOM by using a 6-pin connector instead of an 8-pin, they wouldn't be having this problem. That's an absolutely indefensible case of cutting corners. And personally that's why I'm so upset, because AMD has, once again, ruined what could've been a great product launch with their own incompetence. Like I said in the review thread, they never learn.


 Thankfully Nvidia wasn't cheap f***ks, so they put all the features on the overpriced Founders Editions cards. Fans going bananas, power consumption at idle/multi monitor going bananas. Bananas. Bananas everywhere.

Funny how you get upset for products you will never buy from a company that you hate because it represents the competition to the company you love. In fact Nvidia fanboys are more upset than RX 480 owners themselves.


----------



## jigar2speed (Jul 4, 2016)

Assimilator said:


> " arbitrary YouTuber



This arbitrary YouTuber is also known as a tech reviewer, and unlike you he has 654,848 subscribers on his YouTube channel. You know, I'm just putting it forward the way it's appearing on the internet; you can believe whatever you want. My 4-year-old thinks my HD 7970 is an aeroplane and I don't correct her. It's not the right time anyway.


----------



## silentbogo (Jul 4, 2016)

john_ said:


> You missed my point here. If the GTX 950 goes as high as 85-90W of power usage through the PCIe bus, shouldn't we have heard about people frying their motherboards? You can say that it doesn't happen and that's why we haven't heard anything. To be fair, the tech press likes to concentrate its fire on AMD, so we will probably never learn how much power a GTX 950 with no 6-pin pulls through the PCIe bus under overclocking. Neither Tom's Hardware, nor PCPerspective (nor TPU?) will come up with an article, especially if the card pulls more than it should.


Seems like you've missed the point: a GTX 950 *with* a 6-pin connector has a 90W rated max TDP. A GTX 950 *without* a 6-pin PCIe power connector is limited to 75W, at least if we take the specs from EVGA, ASUS and Palit as true.
Additionally, here's a quote from low-power GTX950 review by @W1zzard :


> During gaming, we see power consumption hover almost exactly around the 75 W mark, which is the maximum power draw from a PCI-Express slot. Since the card has no additional power connectors, this is the ideal result - close to 75 W but not significantly more.


https://www.techpowerup.com/reviews/ASUS/GTX_950/21.html

Maximum consumption is 76W with a peak of 79W, which in the worst case puts it 5% over spec in short peaks and 1.3% over spec overall (well within the error margin for such measurements).
Overclocking may increase the peak values, but the overall maximum won't change, because the card will throttle to stay within the 75W limit.
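The percentages above follow directly from the 75 W slot budget; a quick sketch reproducing them from the quoted review figures:

```python
# How far the measured GTX 950 numbers sit over the 75 W slot limit.

SLOT_LIMIT_W = 75.0

def percent_over_spec(measured_w: float) -> float:
    """Overshoot relative to the PCIe slot's 75 W budget, in percent."""
    return (measured_w - SLOT_LIMIT_W) / SLOT_LIMIT_W * 100.0

print(round(percent_over_spec(79.0), 1))  # short peaks: ~5.3 %
print(round(percent_over_spec(76.0), 1))  # sustained average: ~1.3 %
```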

It's not about who's defending who, it's about speculation versus facts and numbers. Even if Raja Koduri himself says that "RX480 is fine, trust me", I won't believe it because there are at least several equally reputable and less reputable people who clearly displayed the opposite by running an experiment and sharing their results with public.


----------



## sith'ari (Jul 4, 2016)

john_ said:


> ...........................
> Thankfully Nvidia wasn't cheap f***ks, so they put all the features on the overpriced Founders Editions cards. Fans going bananas, power consumption at idle/multi monitor going bananas. Bananas. Bananas everywhere.
> Funny how you get upset for products you will never buy from a company that you hate because it represents the competition to the company you love. In fact Nvidia fanboys are more upset than RX 480 owners themselves.



You have to keep something in mind:
AMD's attempt to save money by using a 6-pin power connector instead of an 8-pin possibly *endangers my system*.
What you say about Nvidia (although I totally disagree with you, because they made a fantastic GPU with +50% the performance of a Fury X/980 Ti, and you are still complaining!!) affects only the GPU itself and doesn't place my system in jeopardy.
You might enjoy taking risks with your motherboard's endurance and longevity, but personally, as I said before, I didn't pay nearly *600€* (for a top-notch PSU / UPS / surge protectors, etc.) only to let AMD's GPU destroy my system from the inside!!


----------



## rtwjunkie (Jul 4, 2016)

jigar2speed said:


> This Arbitrary youtuber is also known as Tech Reviewer and unlike you he has 654,848 subscribers to his youtube channel.



Not arguing, disagreeing, or agreeing, but it needs to be pointed out that well over 90% of the people are Sheeple and will follow anything, including a guy who is entertaining on youtube.  It doesn't actually say anything about his tech abilities.


----------



## HD64G (Jul 4, 2016)

sith'ari said:


> You have to keep something in mind:
> AMD's attempt to save money by using a 6-pin power connector instead of an 8-pin possibly *endangers my system*.
> What you say about Nvidia (although I totally disagree with you, because they made a fantastic GPU with +50% the performance of a Fury X/980 Ti, and you are still complaining!!) affects only the GPU itself and doesn't place my system in jeopardy.
> You might enjoy taking risks with your motherboard's endurance and longevity, but personally, as I said before, I didn't pay nearly *600€* (for a top-notch PSU / UPS / surge protectors, etc.) only to let AMD's GPU destroy my system from the inside!!



If you spend such amounts of money on a PSU/UPS, etc. and you don't have a high-quality MB, you are simply ignorant of PC and gaming tech. But since I am sure your MB is a good-quality one, there is NOT A CHANCE an RX 480 could damage it. Especially since AMD will fix this in 2-3 days with a new driver, or anyone could fix it NOW by lowering the voltage a bit through Wattman.

http://semiaccurate.com/2016/07/01/investigating-thermal-throttling-undervolting-amds-rx-480/


----------



## newtekie1 (Jul 4, 2016)

jigar2speed said:


> Thought to share it with you guys...



Again, the difference is that most cards don't pull more from the slot for long periods of time.  They spike above 75w, but that is completely normal and acceptable.  The problem with the RX 480 is that it pulls a lot more than 75w for a sustained period of time.



john_ said:


> I wonder if GTX 960 Strix was brought as a paradigm from those defending Nvidia. It's a card with an extra power connector and TDP much lower than 150W. Whatever spikes it produces the average consumption from the pcie bus will always look under the limit.



Actually, the GTX 960 Strix was brought up because Tom's data made it look like it was doing the same thing as the RX 480.  The GTX 960 Strix does spike above 75w.  However, in reality it doesn't do the same thing as the RX 480, because, as has already been pointed out, the spikes don't matter.  With DC-to-DC conversion there will always be those high spikes.  What matters is the overall average.  The GTX 960 Strix averages down around 30w from the PCI-E slot, and pulls everything else it needs from the external connector.  The RX 480 averages over 75w from the PCI-E slot.
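The spikes-versus-average distinction can be sketched with a toy trace (the sample values below are invented for illustration, not measurements of either card):

```python
# Toy power traces, sampled in watts: brief spikes vs. sustained draw.
# Values are invented for illustration.

def average(samples):
    """Mean of a list of power samples."""
    return sum(samples) / len(samples)

# Brief spikes above 75 W, but a low sustained average (the benign case):
spiky_but_fine = [30, 85, 28, 90, 31, 29, 88, 30, 27, 32]

# Sustained draw above 75 W (the problematic case):
sustained_over = [80, 82, 79, 83, 81, 80, 82, 79, 81, 83]

print(max(spiky_but_fine), average(spiky_but_fine))  # peak high, average low
print(max(sustained_over), average(sustained_over))  # average itself over 75
```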



john_ said:


> But tell someone about an overclocked GTX 950 without a 6 pin connector and ignores you, changes the subject, or wants you to believe that you can have 20% extra performance without consuming a single extra watt. Free performance.



What about an overclocked GTX 950 without a 6-pin?  I directly addressed that issue. I didn't change the subject; I explained to you exactly how it works several pages back.  I'll explain it again. The power draw from the PCI-E slot still stays at 75w thanks to the power limiting built into the card.  If the power limit is set to 75w, then thanks to nVidia's GPU Boost the card is going to consume right around 75w.  It doesn't matter what you overclock the card to; GPU Boost will keep the card at 75w, and it is very effective at doing so.  Manually raising the power limit has been a part of overclocking with nVidia for a couple of generations now.
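As an illustration of the power-capping behavior described above, here is a toy throttle loop. This is NOT Nvidia's actual GPU Boost algorithm; the linear watts-per-MHz model and all numbers are invented for the sketch:

```python
# Toy sketch of a power-capped boost loop. Not Nvidia's real algorithm;
# the linear power model and all figures below are invented.

POWER_LIMIT_W = 75.0

def settle_clock(requested_mhz: float, watts_per_mhz: float,
                 step_mhz: float = 13.0) -> float:
    """Back the clock off in small steps until modeled power fits the cap."""
    clock = requested_mhz
    while clock * watts_per_mhz > POWER_LIMIT_W and clock > 0:
        clock -= step_mhz
    return clock

# A chip modeled at 0.055 W/MHz fits ~1363 MHz inside 75 W, so a
# 1500 MHz overclock request gets throttled back under the cap,
# while a 1200 MHz stock request passes through untouched.
print(settle_clock(1200, 0.055))
print(settle_clock(1500, 0.055))
```

The point of the sketch is only that with a hard power cap, raising the requested clock cannot raise the sustained power draw; the loop trades clock speed away instead.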


----------



## john_ (Jul 4, 2016)

sith'ari said:


> You have to keep something in mind:
> AMD's attempt to save money by using a 6-pin power connector instead of an 8-pin possibly *endangers my system*.
> What you say about Nvidia (although I totally disagree with you, because they made a fantastic GPU with +50% the performance of a Fury X/980 Ti, and you are still complaining!!) affects only the GPU itself and doesn't place my system in jeopardy.
> You might enjoy taking risks with your motherboard's endurance and longevity, but personally, as I said before, I didn't pay nearly *600€* (for a top-notch PSU / UPS / surge protectors, etc.) only to let AMD's GPU destroy my system from the inside!!



No, it doesn't endanger your system with the 750 Ti in it. 

50% performance over GTX 980Ti? I think GTX 980Ti owners will want to say something here. The same can be said for Fury X owners in DirectX 12 games.

And don't worry. The UPS probably will survive.


----------



## john_ (Jul 4, 2016)

silentbogo said:


> Seems like you've missed the point: a GTX 950 *with * a 6-pin connector has a 90W rated max TDP. A GTX 950 *without *6-pin PCIE power connector is limited to 75W, at least if we take specs from EVGA, ASUS, Palit and consider them true.
> Additionally, here's a quote from low-power GTX950 review by @W1zzard :
> 
> https://www.techpowerup.com/reviews/ASUS/GTX_950/21.html
> ...


I didn't miss the point. You missed all the other posts I made about the subject, and I am not going to repeat everything in detail. Just ask yourself this:
100% performance at 75W. In that review W1zzard overclocks the card and gets 20% extra performance. Not just higher clocks: 20% extra PERFORMANCE.
Now tell me: how much extra power consumption do you need for that extra 20%? 0 watts? 5 watts? 10 watts? 20 watts? And don't tell me about throttling. Throttling doesn't increase performance by 20%.

You see, there are many things that the press will not tell you. You just learned about PCIe bus power draw because the RX 480 is an AMD card. If it was an Nvidia card, you wouldn't have known about it.


----------



## sith'ari (Jul 4, 2016)

HD64G said:


> If you spend such amounts of money on a PSU/UPS, etc. *1. and you don't have a high-quality MB, you are simply ignorant of PC and gaming tech.* But since I am sure your MB is a good-quality one, *2. there is NOT A CHANCE an RX480 could damage it*. Especially since *3. AMD will fix that in 2-3 days* with a new driver, or anyone could fix it NOW by lowering the voltage a bit through Wattman.
> http://semiaccurate.com/2016/07/01/investigating-thermal-throttling-undervolting-amds-rx-480/



1. Or maybe, just maybe, there aren't any more Socket 939 mobos on the market and I had to buy whatever I could find, regardless of quality. Ever thought of this possibility? 
2. When something goes out of spec, I don't need AMD's or yours or anybody else's reassurance. Out of spec means, by default, *possible danger* for my system!!
3. I'm allergic to the word "will". First they must fix it, and then we will evaluate the results.


----------



## cdawall (Jul 4, 2016)

sith'ari said:


> 1. Or maybe, just maybe, there aren't any more Socket 939 mobos on the market and I had to buy whatever I could find, regardless of quality. Ever thought of this possibility?
> 2. When something goes out of spec, I don't need AMD's or yours or anybody else's reassurance. Out of spec means, by default, *possible danger* for my system!!
> 3. I'm allergic to the word "will". First they must fix it, and then we will evaluate the results.



No one cares that you spent 600 euros on a power supply and ups for a s939 system that is essentially throw away now.


----------



## sith'ari (Jul 4, 2016)

cdawall said:


> No one cares that you spent 600 euros on a power supply and ups for a s939 system that is essentially throw away now.



I was replying to a comment so apparently somebody cared.


----------



## cdawall (Jul 4, 2016)

sith'ari said:


> I was replying to a comment so apparently somebody cared.



Every comment has been the same. Literally no one here understands why you are complaining. The cool thing about capitalism is that if you don't like a product, you just don't buy it. 

This particular card is a horrid idea for you, because even after the software limits the PCIe draw it will still smoke your 10+ year old motherboard, which wasn't even high-end 10+ years ago.


----------



## newtekie1 (Jul 4, 2016)

john_ said:


> I didn't missed the point. You missed all the other posts that I did about the subject and I am not going to repeat everything in detail. Just ask yourself this.
> 100% performance at 75W. At that review W1zzard overclocks the card and gets 20% extra performance. Not just higher clocks. 20% extra PERFORMANCE.
> Now tell me. How much extra power consumption do you need for that extra 20%? 0 Watts? 5 Watts? 10 Watts? 20 Watts? And don't tell me about throttling. Throttling doesn't increase performance by 20%.



Thanks to GPU boost, basically 0w.



john_ said:


> You see, there are many things that the press will not tell you. You just learned about the PCIe bus power draw because the RX480 is an AMD card. If it was an Nvidia card, you wouldn't have known about it.



Or maybe it is just because this is the first card we've seen do this since we've had the ability to test slot power draw separately.


----------



## silentbogo (Jul 4, 2016)

john_ said:


> You see, there are many things that the press will not tell you. You just learned about the PCIe bus power draw because the RX480 is an AMD card. If it was an Nvidia card, you wouldn't have known about it.



I learned about PCI-E bus because of this:
 
...and this:
 

...and about 28 more reasons in my office - I fix this stuff occasionally, if you know what I mean. 

Now, since it's come to defensive insults: what makes you a specialist in this area?

_P.S. Boards are not for sale! Can trade a Z77 for cheap air conditioning _


----------



## R-T-B (Jul 4, 2016)

john_ said:


> I didn't missed the point. You missed all the other posts that I did about the subject and I am not going to repeat everything in detail. Just ask yourself this.
> 100% performance at 75W. At that review W1zzard overclocks the card and gets 20% extra performance. Not just higher clocks. 20% extra PERFORMANCE.
> Now tell me. How much extra power consumption do you need for that extra 20%? 0 Watts? 5 Watts? 10 Watts? 20 Watts? And don't tell me about throttling. Throttling doesn't increase performance by 20%.
> 
> You see, there are many things that the press will not tell you. You just learned about the PCIe bus power draw because the RX480 is an AMD card. If it was an Nvidia card, you wouldn't have known about it.


You still are really not getting how nvidia boost works...


----------



## Assimilator (Jul 4, 2016)

john_ said:


> You see, there are many things that the press will not tell you. You just learned about the PCIe bus power draw because the RX480 is an AMD card. If it was an Nvidia card, you wouldn't have known about it.



Son, you just went full retard. I think we're about done here.


----------



## revin (Jul 4, 2016)

newtekie1 said:


> The problem with the RX 480 is that it pulls a lot more than 75w for a *sustained period of time*.





newtekie1 said:


> Or maybe it is just because this is the first card we've seen do this since we've had the ability to test slot power draw separately


This, bingo!! That IS the issue: the current draw is continuously higher than the [revised] PCI-SIG spec allows through the PCIe slot connector.


----------



## alucasa (Jul 4, 2016)

This whole thread is ....

LOL

We need to make a webdrama outta this. It would be pure gold.


----------



## zAAm (Jul 4, 2016)

Assimilator said:


> Son, you just went full retard. I think we're about done here.



Agreed


----------



## Ungari (Jul 4, 2016)

Assimilator said:


> What it boils down to is simply that if AMD hadn't been cheap f**ks and tried to shave 2 cents off the BOM by using a 6-pin connector instead of an 8-pin, they wouldn't be having this problem. That's an absolutely indefensible case of cutting corners.



I thought AIBs were adding 8-pins for higher total power limits for overclocking?
I have not seen anything concrete that shows that an 8-pin would decrease the draw on the PCIE slot, as my understanding is that is regulated by the GPU itself.


----------



## GhostRyder (Jul 4, 2016)

Ok, so this thread seems to be spiraling out into a war.  Should we lock and load?

In all seriousness, here is what it boils down to:

1:  AMD decided to put a 6-pin instead of an 8-pin on the reference card to look lower-power, instead of being smart and giving us the clocking headroom and fewer problems.
2:  AMD needs to release a driver fix to stop the card from overdrawing from the PCIe slot, either by shifting the load to the 6-pin or by limiting it.
3:  Even if you buy this card, you're not going to kill your motherboard with it unless you have the most basic/cheap motherboard possible, and even then I would be skeptical.

Fact is, this should not be a problem, but it is.  Is it a big problem that is going to result in dead motherboards?  No, because motherboards, especially in this day and age, are pretty tough even on the cheap side.  I have overloaded a motherboard's PCIe slots before; it takes a lot to actually do some damage.  But the fact is AMD was beyond foolish not only to skip the 8-pin, but to let this pass through like this instead of letting the 6-pin take the brunt.  PSUs in this day and age have an 8-pin at minimum, even on the cheapest entry-level unit you would want to buy for a gaming rig (speaking ~500 W).  Either way, this does not ruin the card or the value of what you're getting, but it definitely makes aftermarket variants look a lot more appealing.


----------



## Ungari (Jul 4, 2016)

GhostRyder said:


> 1: AMD decided to put a 6 pin instead of an 8 pin reference to look lower power instead of being smart and letting us have the clocking and less problems.



Ed from Sapphire gave a cryptic answer while under NDA when asked what connector the RX 480 NITRO would have; he said it has an 8-pin but that you really don't need it. He seemed to suggest that plugging in the additional two pins was optional.


----------



## Dippyskoodlez (Jul 4, 2016)

RejZoR said:


> That's why AIB's have different hardware ID's to identify specific hardware, so that driver doesn't "assume" things like this, but it "knows" things like this.



Oh man you're fucking delusional. Source please.


----------



## jabbadap (Jul 4, 2016)

Ungari said:


> Ed from Sapphire had a cryptic answer while under NDA when what connector the RX 480 NITRO would have; he said it has an 8-pin but that you really don't need it. He seemed to suggest that plugging in the additional 2-pins was optional.



Nothing cryptic about that; those two extra pins are just ground. Again, it's quite safe to draw more than 75W from a 6-pin connector if you have a high-end PSU.



Ungari said:


> I thought AIBs were adding 8-pins for higher total power limits for overclocking?
> I have not seen anything concrete that shows that an 8-pin would decrease the draw on the PCIE slot, as my understanding is that is regulated by the GPU itself.



That is correct; the reference RX 480 has a solid VRM. It's just routed for a 50/50 power distribution between the PCIe slot and the PCIe connector.
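The arithmetic of that 50/50 routing is what makes the slot draw a problem once total board power exceeds 150 W; a quick sketch (the board-power figures are illustrative):

```python
# With an even split, half of the total board power lands on the slot.
# Board-power figures here are illustrative round numbers.

SLOT_LIMIT_W = 75.0

def slot_draw_5050(total_board_power_w: float) -> float:
    """Slot-side draw under a 50/50 split between slot and 6-pin."""
    return total_board_power_w / 2.0

for total in (150.0, 160.0, 168.0):
    slot = slot_draw_5050(total)
    print(total, slot, "over spec" if slot > SLOT_LIMIT_W else "in spec")
```

At exactly 150 W total, the split sits right at the 75 W slot budget; every watt above that pushes the slot side over spec.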


----------



## john_ (Jul 4, 2016)

Assimilator said:


> Son, you just went full retard. I think we're about done here.


Nope. I would have to try really, really hard to drop to your level.



R-T-B said:


> You still are really not getting how nvidia boost works...





newtekie1 said:


> Thanks to GPU boost, basically 0w.



Maybe I am missing something, but I don't see anyone explaining how you can get 20% extra *performance* without consuming any more power. Can someone explain that magic to me? The card consumes 74W on average at default clocks, it gets overclocked, it scores 20% higher in *performance*, and I have to assume that average power consumption remained under 75W because of Nvidia Boost? Oh, please explain.



silentbogo said:


> I learned about PCI-E bus because of this:
> View attachment 76280
> ...and this:
> View attachment 76281
> ...


When you don't have any arguments, you just throw degrees and hardware in the other person's face. That will make you look more credible, I guess. You're not the first person on the internet to start a post with "You should listen to me, I am an engineer" and then write unbelievable BS. I am not saying that you are talking BS; I just say that taking pictures of your hardware doesn't make you an expert. You think I bought my PC yesterday? And no, I hadn't thought about PCIe power draw, and I bet 99% of those posting here hadn't either. The last time I worried about a graphics card and a bus was when running a GeForce 2 MX64 with the AGP at 83MHz.



zAAm said:


> Agreed


 Reinforcements...


----------



## jabbadap (Jul 4, 2016)

john_ said:


> Nope. I would have to try really, really hard to drop to your level.
> 
> 
> 
> ...



I don't think you would get that 20% performance out of it unless you have a truly amazing chip. Nvidia sets power restrictions in the BIOS; if you don't request more power while overclocking (i.e., don't touch the TDP percentage), it will throttle the clocks to stay within the power limit set by the BIOS.


----------






## sith'ari (Jul 4, 2016)

john_ said:


> Nope. I would have to try really, really hard to drop to your level.
> .............................................................



Sorry mate, but the following comment of yours, which @Assimilator quoted, wasn't among your best:


> john_ said:
> You see, there are many things that the press will not tell you. You just learned about the PCIe bus power draw because the RX480 is an AMD card. If it was an Nvidia card, you wouldn't have known about it.



Seriously?!! 
If it was an Nvidia card, we would never have heard about it?
*AMD managed to confuse the entire gaming community* with their propaganda about the GTX 970's memory size, and made people believe that the card had less memory than advertised, and now you expect me to believe that if NV's cards had similar power issues (*which is something far greater than memory size, since it's a safety matter*), no one would know?!!


----------



## BiggieShady (Jul 4, 2016)

john_ said:


> Oh, please explain.


Ok, I'll be the one explaining this time.
The low-power 950 is at all times limited to a 75W power target ... at all times. The sample that @W1zzard reviewed probably had considerably better ASIC quality than average, meaning it was able to reach higher clocks at lower voltages than an average sample. The rest is Boost 2.0: the power target stays at 75W, the clocks are offset by 200 MHz, and the boost tightens the voltages to stay inside 75W, and voila, a stable overclock. Every review has a dynamic OC clock vs. voltage table ... as you can see, there are multiple clock samples for each voltage state.


----------



## RejZoR (Jul 4, 2016)

Dippyskoodlez said:


> Oh man you're fucking delusional. Source please.



OMG FUCKIN' DELUSIONAL OMFG MAH GOD YOU IDIOT NOOB FOOOOOK:
http://support.amd.com/en-us/kb-articles/Pages/HowtoidentifythemodelofanATIgraphicscard.aspx#DID

Driver can identify it even beyond just basic HW ID and can differentiate between reference or AIB models.


----------



## john_ (Jul 4, 2016)

BiggieShady said:


> Ok, I'll be the one explaining this time.
> The low power 950 is at all times limited to 75W power target ... at all times. Sample that @W1zzard reviewed probably had considerably better ASIC quality than average, meaning it was able to reach higher clocks on lower voltages than average sample. The rest is boost 2.0, power target is same 75W, clocks are offset by 200 Hz and the boost tightens the voltages to stay inside 75W and voila stable overclock. Every review has dynamic OC: clock vs. voltage table ... as you can see there are multiple clock samples for each voltage state.



The thing is, BiggieShady, that I am not talking about frequencies here. The card could boost to 2GHz and stay under 75W under certain conditions. But I am not talking about frequencies, am I? I am talking about performance. If the card wasn't gaining 20% performance but 1-3%, I wouldn't be making any fuss about it.


----------



## BiggieShady (Jul 4, 2016)

john_ said:


> But I am not talking about frequencies, do I? I am talking about performance.


If you are talking about performance then you are talking about frequency ... you are not gaining performance by dynamically adding compute units


----------



## Filip Georgievski (Jul 4, 2016)

All this talk for no reason.
Everything is back to square one.
Nothing solved, nothing learned.
Just fanboys fighting all over TPU.

Truth be told, nobody should compare AMD to Nvidia, because they have different approaches to the GPU market.

I have owned 4 AMD cards, none of which blew up my system (on a side note, my old PSU almost did; people know what I mean).

I guess this one won't either.
JayzTwoCents said clearly in his review of this card: "It makes systems unstable if they have low- or mid-class mobos."

It does not blow up hardware, and it never will. It is just an excuse to make AMD look bad, because of a small problem their product has. 

So what? No company makes a perfect product of any type, and nobody bitches over most of those brands and names.


----------



## john_ (Jul 4, 2016)

BiggieShady said:


> If you are talking about performance then you are talking about frequency ... you are not gaining performance by dynamically adding compute units


Does anyone understand basic things here?

By increasing the frequency you don't necessarily gain performance. If the card is limited in how much power it will take from the PCIe bus, remaining at or under 75W, it will throttle. But if the result of overclocking the card is 20% extra performance, then the card doesn't stop at 75W; it asks for *and gets* more power from the PCIe bus. Remember: at standard speeds, based on the review, the card is already at 74W average. Not counting the peak at 79W; let's ignore that. If at 100% performance you have an average power consumption of 74W, then even if you keep the voltages stable, by increasing the frequency of the GPU AND the frequency of the GDDR5 you are going higher in power consumption. And power consumption probably increases by more than the 20% performance gain. For Nvidia's Boost to do some magic and keep the card at 75W, it would have to drop voltages automatically at higher frequencies while the card remains stable.
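john_'s arithmetic can be sanity-checked with the linear-in-frequency part of the usual dynamic power model. A rough sketch (the clock values are illustrative, not the card's real speeds, and this ignores static leakage and memory power):

```python
def scaled_power(base_watts, base_mhz, new_mhz):
    """Dynamic power at fixed voltage scales roughly linearly with clock."""
    return base_watts * (new_mhz / base_mhz)

base = 74.0                          # average slot draw at stock, per the review
oc = scaled_power(base, 1000, 1200)  # hypothetical 20% clock bump
print(round(oc, 1))                  # 88.8 -- well past the 75W slot budget
```

The point being made: a part already averaging 74W has no headroom to absorb a 20% clock increase for free.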


----------



## McSteel (Jul 4, 2016)

GhostRyder said:


> Ok, so this thread seems to be spiraling out into a war.  Should we lock and load?
> 
> In all seriousness, here is what it boils down to:
> 
> ...



Good intentions, not quite the most accurate info, though...

1: AMD decided to split the power supply 50/50 between the external power connector (happens to be 6-pin in this case) and the PCI-E slot. To illustrate:





This is a problem because while the official spec for the 6-pin connector is 75W it can realistically provide upwards of 200W *continuously* without any ill effects.
The PCI-E slot and the card's x16 connector have 5 (five) flimsy pins at their disposal for power transfer. Those cannot physically supply more than a bit above 1A each. The better ones can sometimes handle 1.2A before significantly accelerating oxidation (both due to heating and passing current) and thus increasing resistance, necessitating more amps to pass to supply enough power further increasing oxidation rate... It's a feedback loop eventually leading to failure.

2: AMD *cannot* fix this via drivers, as there are trace breaks with missing resistors and wires that would bridge the PCI-E slot supply to the 6-pin power connector. This would make the connector naturally preferable to the current flow as its path has a lower resistance and that's the path current prefers to take. It can only be permanently _*fixed*_ by *physical modification*. No other methods. AMD *can* lower the total power draw and thus by extension relieve the stress on the PCI-E slot, but it will probably cost some of the GPU's performance. We'll see.

3: Buying and using this card won't kill your motherboard... straight away. Long-term consequences are unpredictable but cannot be positive. Would driving your car in first gear only, bumping into RPM limiter all the time kill your car? Well, not right away, but... Yeah. It's the same here, you're constantly at the realistic limit of an electromechanical system, constant stress is not going to make it work longer nor better, that's for sure.

The AIB partners would do well to design their PCBs such that the PCI-E slot only supplies power once more than 150W is being drawn from the auxiliary power connector, or something like that. Perhaps give one of the six phases to the slot and the remaining five to the connector... Or better yet, power the memory from the slot and the GPU from the power connector exclusively. Breaking the PCI-E spec that way is much less damaging, due to the actual capabilities of the Molex Mini-Fit Jr. 2x3-pin connector that we like to call the 6-pin PCI-E power.
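McSteel's per-pin argument is easy to put in numbers. A back-of-the-envelope sketch, with the simplifying assumption that the whole slot draw crosses the ~5 power pins on the 12V rail evenly (in reality part of the 75W budget sits on the 3.3V rail, so this slightly overstates it):

```python
PINS = 5      # approximate count of 12V power pins in the x16 edge connector
VOLTS = 12.0

def amps_per_pin(slot_watts):
    """Current per slot power pin, assuming an even split across the pins."""
    return slot_watts / VOLTS / PINS

print(round(amps_per_pin(75), 2))   # 1.25 A/pin at the 75W spec limit
print(round(amps_per_pin(85), 2))   # 1.42 A/pin, past the ~1.1-1.2 A comfort zone
```

This is why a sustained 80-90W slot draw worries people even though the connector does not fail immediately.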


----------



## newtekie1 (Jul 4, 2016)

john_ said:


> Maybe I am missing something, but I don't see anyone explaining to me how you can get 20% extra *performance* and don't consume any more power. Can someone explain me that magic? The card consumes 74W at average and default clocks, it gets overclocked, it scores 20% higher in *performance* and I have to assume that power consumption on average remained at under 75W because of Nvidia boost? Oh, please explain.



Because performance is not directly related to power draw.  Raising clock speeds does very little to power draw, it is raising the voltage that increases power draw.  On the GTX950 with the 6-pin, the GPU runs at 1.3v.  On the GTX950 without the 6-pin the GPU runs at 1.0v.  That is a massive difference, and the reason the card stays at 75w.  It is also the reason that the 6-pinless GTX950 barely overclocks to match the stock speeds the 6-pin GTX950 runs.  The GTX950 Strix with no overclock boosts to 1408MHz(@1.3v), the 6-pinless GTX950 with overclock only boosts to 1440Mhz(@1.0v).  That 1.0v is why it stays under 75w, and GPU Boost will lower that voltage and the clock speeds if it needs to to stay under 75w.


----------



## john_ (Jul 4, 2016)

newtekie1 said:


> Because performance is not directly related to power draw.  Raising clock speeds does very little to power draw, it is raising the voltage that increases power draw.  On the GTX950 with the 6-pin, the GPU runs at 1.3v.  On the GTX950 without the 6-pin the GPU runs at 1.0v.  That is a massive difference, and the reason the card stays at 75w.  It is also the reason that the 6-pinless GTX950 barely overclocks to match the stock speeds the 6-pin GTX950 runs.  The GTX950 Strix with no overclock boosts to 1408MHz(@1.3v), the 6-pinless GTX950 with overclock only boosts to 1440Mhz(@1.0v).  That 1.0v is why it stays under 75w, and GPU Boost will lower that voltage and the clock speeds if it needs to to stay under 75w.



Power draw goes up with frequency, not as much as by increasing the voltage, but it does go up. And not by very little, you are wrong here, especially when you overclock both memory and GPU. 

Please try NOT to ignore the fact that the average power draw in the review at defaults is 74W. Even if the GTX 950 runs at 1.0V instead of 1.3V, in the end it consumes 74W on average. So even if the difference in voltage is massive, as you say, the card still uses 74W on average. So the starting line is there, at 74W. The card overclocks really well in W1zzard's review and gets 20% extra performance (I keep writing this; everyone conveniently ignores it). You don't get 20% extra performance with lower clocks and voltage. So, if the card is at 74W at defaults, for that 20% extra performance it probably jumps to 90W through the PCIe bus. If it were staying at 75W, there wouldn't have been any serious performance gains, and W1zzard's conclusion would have been that the card is power limited.

Am I right @W1zzard ?


----------



## McSteel (Jul 4, 2016)

For CPUs and GPUs, the power dissipation increases linearly with frequency, and proportional to the *square* of the voltage. In simple terms, P = C*V²*F, where C = internal capacitance (specific to the individual specimen), V = voltage and F = frequency. This is an oversimplification but provides a nice model that's fairly accurate until you get to LN2 stuff...
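The P = C·V²·F model above can be illustrated with a toy calculation; C is an arbitrary constant here (not a real Polaris value) and cancels out when comparing two operating points of the same chip:

```python
def dynamic_power(c, volts, mhz):
    """P = C * V^2 * F, the classic CMOS dynamic-power approximation."""
    return c * volts ** 2 * mhz

# C cancels in the ratio, so its value doesn't matter for a comparison.
base = dynamic_power(1.0, 1.15, 1266)        # illustrative stock point
oc = dynamic_power(1.0, 1.15, 1266 * 1.10)   # +10% clock, same voltage
print(round(oc / base, 2))                   # 1.1 -- power rises in step with clock
```

So at fixed voltage a 10% clock bump costs roughly 10% more dynamic power, which is exactly the disagreement being argued out above.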


----------



## RejZoR (Jul 4, 2016)

@McSteel
Are you sure the phases are physically tied to one or the other power input? I wanted to ask just that, if anyone can trace the wiring on the PCB...

Either way, if AMD limits power draw to an actual 150W, that technically wouldn't really be cheating; they'd just be bringing it to what they've been advertising the whole time. Assuming they did it on purpose to boost framerates in reviews, hoping no one would notice, is just foolish, seeing what kind of shitstorm everyone made out of this. And especially since all reviewers also tackle power consumption, that would also be a straight giveaway, like it was now.

So, calling it intentional, I'm not buying it. No one is this stupid.


----------



## newtekie1 (Jul 4, 2016)

john_ said:


> Power draw goes up with frequency, not as much as by increasing the voltage, but it does go up. And not by very little, you are wrong here, especially when you overclock both memory and GPU.
> 
> Please try to NOT ignore the fact that the average power draw in the review at defaults is at 74W. Even if the GTX 950 runs at 1.0V instead of 1.3V, in the end it consumes 74W on average. So even if the difference in voltage is massive, as you say, the card still uses 74W on average. So the starting line is there at 74W. The card overclocks really well in W1zzard's review and it gets 20% extra performance(I am keep writing this, everyone conveniently ignores it). You don;t get 20% extra performance with lower clocks and voltage. So, if the card is at 74W at defaults, for that 20% extra performance it probably jumps at 90W through the pcie bus. If it was staying at 75W, then there wouldn't have been any serious performance gains and W1zzard's conclusion would have been that the card is power limited.
> 
> Am I right @W1zzard ?



No one is ignoring it.  We just keep telling you it is happening with no extra power draw.  You are ignoring what we keep telling you.  Clock speeds do not affect power draw a noticeable amount, maybe 1w.  Voltage affects power draw.  GPU Boost guarantees the card stays within its power limit.  NVidia learned from their mistakes already, they went through this growing phase with Fermi, and have developed a very good tech to guarantee cards don't go over their power limit.
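The GPU Boost behavior newtekie1 describes, staying inside a power cap by stepping clocks and voltage down together, can be sketched as a table walk. The DVFS table, the power-model constant, and the numbers are all invented for illustration; this is not Nvidia's actual algorithm:

```python
# Clock/voltage states, fastest first (invented numbers).
DVFS = [(1440, 1.00), (1392, 0.98), (1344, 0.95), (1290, 0.92)]  # (MHz, V)

def estimated_power(mhz, volts, c=5.3e-5):
    """Toy P = C*V^2*F model; c chosen so the top state lands near 76W."""
    return c * volts ** 2 * mhz * 1000

def pick_state(power_cap_watts):
    """Return the fastest state whose estimated power fits under the cap."""
    for mhz, volts in DVFS:
        if estimated_power(mhz, volts) <= power_cap_watts:
            return mhz, volts
    return DVFS[-1]   # floor state if nothing fits

print(pick_state(75))   # (1392, 0.98) -- top state would exceed 75W, so it throttles
print(pick_state(80))   # (1440, 1.0)  -- a raised power limit unlocks full boost
```

The design point: the governor trades clocks for power headroom automatically, which is why a hard 75W cap produces throttling rather than overdraw.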


----------



## McSteel (Jul 4, 2016)

RejZoR said:


> @McSteel
> Are you sure phases are physically tied to one and another power input? I wanted to ask just that if anyone can trace the wiring on the PCB...
> 
> Either way, if AMD limits power draw to actual 150W, that technically wouldn't really be cheating, they'd just be bringing it to what they've been advertising the whole time. Assuming they did it on purpose to boost framerate in reviews, hoping no one would notice it is just foolish seeing what kind of shitstorm everyone made out of this. And especially since all reviewers also tackle power consumption and that would also be a straight giveaway, like it was now.
> ...



Yeah, you can see that in this video. OK, the guy in it may not hold a master's in electronics, but it's clear the power phases are completely separated, and the GPU simply draws power 50/50 from them.
A bit more current is drawn from the slot than from the aux connector, simply due to the higher resistance of the slot power pins...

I'm sure @W1zzard could confirm if he could find a bit of free time to do it


----------



## sith'ari (Jul 4, 2016)

RejZoR said:


> ...............................................................
> So, calling it intentional, I'm not buying it. No one is this stupid.



I would say it had better be intentional (intended to gain marketing hype, perhaps); otherwise this shows that AMD's engineering team are simply amateurs.


----------



## john_ (Jul 4, 2016)

newtekie1 said:


> No one is ignoring it.  We just keep telling you it is happening with no extra power draw.  You are ignoring what we keep telling you.  *Clock speeds do not affect power draw a noticeable amount, maybe 1w.*  Voltage affects power draw.  GPU Boost guarantees the card stays within its power limit.  NVidia learned from their mistakes already, they went through this growing phase with Fermi, and have developed a very good tech to guarantee cards don't go over their power limit.


In your dreams, that thing you wrote and I put in bold. In fact, it would also have been a dream of mine to just increase the frequencies on my hardware and expect only 1W more power consumption after getting 20% extra performance. Not to mention that in that case the RX 480 would have been close to 166W at any frequency, yet it goes to 187W if I remember correctly. Doesn't it? Yes, yes, I know. GPU Boost is a magical feature offering free performance for 1 extra watt.

No need to quote me again. Just see McSteel's post and stop there. Save both ourselves some time.


----------



## EarthDog (Jul 4, 2016)

LOL.. some things never change.


----------



## newtekie1 (Jul 4, 2016)

john_ said:


> In your dreams that thing you wrote and I putted in bold. In fact it would have been a dream of mine also to just increase frequencies in my hardware and expect only 1W more power consumption after getting 20% extra performance. Not to mention that in that case RX480 would have been close to 166W at any frequency, still it goes at 187W if I remember correctly. Doesn't it? Yes, yes I know. GPU Boost is a magical feature offering free performance with 1 extra watt.
> 
> No need to quote me again. Just see McSteel's post and stop there. Save both ourselves some time.



With normal operation, when the clocks go up the voltage goes up with it.  That is why W1z includes voltage/clock tables in his reviews.  AMD had to increase the voltage on the RX 480 to keep it stable at the clock speeds they wanted (this is also probably why it overclocks so poorly at stock voltage).  _However_, when W1z does his overclocking he does not increase the voltage, he leaves it at stock.  So while he increases the clock speeds, the voltage stays the same, so the current going through the GPU stays the same.  So you get no real power consumption increase.

In fact, one of the tricks for overclocking Nvidia cards is to actually lower the voltage to get higher clock speeds.  If your card is stable but hitting the power limit, you can lower the voltage and raise the clocks to get better performance.  It is a commonly used trick, and one I had to use on my GTX 970s.


----------



## sith'ari (Jul 4, 2016)

john_ said:


> ...........................
> No need to quote me again. Just see McSteel's post and stop there. Save both ourselves some time.



- @*McSteel* gave this formula: P = C*V²*F.
- Also, @*newtekie1* said:





> The GTX950 Strix with no overclock boosts to 1408MHz(@1.3v), the 6-pinless GTX950 with overclock only boosts to 1440Mhz(@1.0v).



So, according to McSteel's formula, the power in the first example [1408MHz @ 1.3V] will have to be higher than in the second example [1440MHz @ 1.0V], right?
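Plugging the two quoted operating points into P = C·V²·F answers this directly; the same (unknown) C is assumed for both cards, so only the ratio is meaningful. A sketch using the thread's own numbers:

```python
# Relative power of the two quoted GTX 950 operating points under P ~ V^2 * F.
p_strix = 1.3 ** 2 * 1408    # 6-pin Strix: 1408MHz at 1.3V (as quoted)
p_no6pin = 1.0 ** 2 * 1440   # 6-pin-less card: 1440MHz at 1.0V (as quoted)
print(round(p_strix / p_no6pin, 2))   # 1.65 -- the 1.3V card draws far more
```

So yes: by this model the higher-voltage card draws roughly 65% more dynamic power despite the slightly lower clock, because voltage enters squared.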


----------



## john_ (Jul 4, 2016)

newtekie1 said:


> With normal operation, when the clocks go up the voltage goes up with it.  That is why W1z includes voltage/clock tables in his reviews.  AMD had to increase the voltage on the RX 480 to keep it stable at the clock speeds they wanted(this is also probably why it overclocks so poorly at stock voltage).  _However_, when W1z does his overclocking he does not increase voltage, he leaves it at the stock.  *So while he increases the clock speeds, the voltage stays the same, so the current going through the GPU stays the same.  So you get no real power consumption increase.*
> 
> In fact, one of the trick of overclocking nVidia cards is to actually lower the voltage to get higher clock speeds.  If your card is stable, but hitting the power limit, you can lower the voltage and raise the clocks to get better performance.  It is a commonly used trick, and one I had to use on my GTX970s.


OH MY GOD, he is trolling me. I have to believe he is trolling me. 23,000 posts, 11 years on TPU, he can't be so clueless...

@sith'ari You start from 74W, don't forget it. The GTX 950 with the 6 pin power connector will hover higher than that at defaults, yes. GTX 950 with no power connector will have lower power consumption yes. But don't forget that you start from 74W. The higher you push your frequencies, even with stable voltage, the higher power consumption you will have. So the card will start moving over 74W.

If the card is power limited, then you will see only minor changes in benchmarks. If it is not power limited, you will see an almost linear increase in benchmark scores the higher you push the frequencies.

Manufacturers will choose not to power-limit the card. They will let the user push the card even if that means pulling over 75W from the PCIe bus. Why? Because it is bad publicity for them to limit the card, and they will also lose the customer if the customer sees that the card is power limited and doesn't overclock, or doesn't perform better after overclocking because of throttling.

That's how AMD thought here. Users already push the power limits with overclocking, so why not push the power limits with the reference RX 480, beat the GTX 970, and at the same time use only a 6-pin power connector. Well, that was a stupid way of thinking, a stupid decision, and AMD is paying the price now with all this negativity.


----------



## Ungari (Jul 4, 2016)

sith'ari said:


> *AMD managed to confuse the entire gaming community* with their propaganda Vs the GTX 970 memory size, & made the people believe that the card had less than advertised memory,



I thought it was Scott Wasson prior to his employment at AMD that researched and discovered the 3.5 + .5 VRAM issue due to anomalies in benchmarks?


----------



## sith'ari (Jul 4, 2016)

Ungari said:


> I thought it was Scott Wasson prior to his employment at AMD that researched and discovered the 3.5 + .5 VRAM issue due to anomalies in benchmarks?



I don't care who started it, the point is that just like in this case the entire world was informed about NV's "deception", if NV had a similar deception at the power sector, AMD  would have gladly informed the world again, rest assured!!


----------



## petedread (Jul 4, 2016)

Overclocks, peaks, averages.


----------



## xorbe (Jul 4, 2016)

newtekie1, power scales linearly with F in the ideal scenario.  Power scales by square of voltage. Clock gating has an effect on total activity.


----------



## GhostRyder (Jul 4, 2016)

McSteel said:


> Good intentions, not quite the most accurate info, though...
> 
> 1: AMD decided to split the power supply 50/50 between the external power connector (happens to be 6-pin in this case) and the PCI-E slot. To illustrate:
> 
> ...


Well I disagree, but I have not looked deep into the PCB to determine if this is impossible which is why I said one or the other.

The other part is the killing-the-motherboard-slowly-over-time part.  It's the same principle as overclocking slowly killing the chip over time.  At these levels you're not going to kill a motherboard quickly.  You might if you run 3-4 of these on a cheap motherboard that supports it and does not have an extra power input, but in the majority of cases that won't happen.  The most likely scenario would be two of these on a cheap motherboard that supports two-way, but even then most boards in this day and age are pretty tough for just this amount of extra power.

Either way, we just have to wait and see what the fix is.


----------



## RejZoR (Jul 4, 2016)

McSteel said:


> Yeah, you can see that in this video. Ok, the guy in it may not hold a masters in electronics, but it's clear the power phases are completely separated, and the GPU simply draws in power 50/50 from them.
> A bit more current is drawn from the slot than from the aux connector simply due to higher resistance of the slot power pins...
> 
> I'm sure @W1zzard could confirm if he could find a bit of free time to do it



Still, I think the GPU has some control over how much current it draws from each phase. Meaning, they can limit the PCIe-fed phases to 75W total and simply push the ones on the 6-pin a bit more. Wasn't there info floating around about the power phases being beefier than the ones on the GTX 1080? Assuming they have such logic on board, such a thing could be a feasible option. Again, just brainstorming; the only one who really knows is AMD (RTG). Tomorrow is the day they'll release more info. Can't wait to see what they come up with and whether it'll really be an effective solution, assuming they deliver the driver hotfix they mentioned they are working on.


----------



## Dippyskoodlez (Jul 5, 2016)

RejZoR said:


> OMG FUCKIN' DELUSIONAL OMFG MAH GOD YOU IDIOT NOOB FOOOOOK:
> http://support.amd.com/en-us/kb-articles/Pages/HowtoidentifythemodelofanATIgraphicscard.aspx#DID
> 
> Driver can identify it even beyond just basic HW ID and can differentiate between reference or AIB models.



Where does it say it's aware of the exotic cooling systems these cards have? And why does the silent fan operation not require a driver at all? Why do the fans not run full speed 24/7 when powered on?

Please tell me how DOS is capable of handling my GTX 970's cooling successfully without the Nvidia driver.


----------



## cdawall (Jul 5, 2016)

RejZoR said:


> Still, I think GPU has some control over how much current draws from phases. Meaning, they can limit those for PCIe to 75W total and simply push those on 6pin a bit more. Wasn't there info floating around about power phases being more beefy than the ones on GTX 1080 ? Assuming they have such logic on board, such thing could be a feasible option. Again, just brainstorming, only one who really knows this is AMD (RTG). Tomorrow is the day they'll release more info, can't wait to see what they'll come up with and if it'll really be an effective solution, assuming they'll already deliver driver hotfix for it as mentioned they are working on it.



Each phase can be independently controlled. They could without a doubt make 3 phases pull more than the others, in exactly the same way I can literally turn off half the phases on my several-year-old Crosshair V. This isn't new tech, nor is it a new practice. Does anyone on here really think the phases are split differently on other cards?


----------



## R-T-B (Jul 5, 2016)

john_ said:


> Does anyone understand basic things here?
> 
> By increasing the frequency you don't necessarily gain performance. If the card is limited in how much power it will take from the pcie bus, remaining under or at 75W, in will throttle. But if the results of overclocking the card are 20% extra performance, then the card doesn't stop at 75W, it asks *and it gets* more power from the pcie bus.



No, it doesn't.  The BIOS limits are hard.  You'll just get throttled to hell unless you manually raise the power limit.  An aggressive overclock with no raised power limit may even hurt your performance.

Until the power limit is manually raised by the user, the card will NEVER exceed 75W.


----------



## McSteel (Jul 5, 2016)

RejZoR said:


> Still, I think GPU has some control over how much current draws from phases. Meaning, they can limit those for PCIe to 75W total and simply push those on 6pin a bit more. Wasn't there info floating around about power phases being more beefy than the ones on GTX 1080 ? Assuming they have such logic on board, such thing could be a feasible option. Again, just brainstorming, only one who really knows this is AMD (RTG). Tomorrow is the day they'll release more info, can't wait to see what they'll come up with and if it'll really be an effective solution, assuming they'll already deliver driver hotfix for it as mentioned they are working on it.





cdawall said:


> Each phase can be independently controlled. They could without a doubt make 3 phases pull more than the others, in the exact same way I can literally turn off half the phases on my several year old crosshair v. This isn't new tech nor is it a new practice. Does anyone on here really think that the phases are split differently on other cards?



Well perhaps, if the GPU input power is all added together into a unified power plane, AMD could potentially disable some or all of the slot-driven phases, and the GPU will naturally compensate by pulling more from the aux connector. But if the power planes are separate for different zones in the chip (which admittedly would be odd), then they don't have the option to do so. I'm not really sure as I'm not privy to engineering blueprints of the Polaris10 GPU. But the traces and contacts on the PCB tell a very unambiguous story - the slot and the aux power are galvanically separated all the way up to the GPU. As such, if and only if they meet up within the GPU AND there is a control bus running between the GPU and the power delivery controller (the IR3567B), will AMD be able to restructure the power distribution without physical modifications to the card. Otherwise the only recourse is to lower consumption by lowering the voltage and then appropriately scaling down boost or even the base clock, depending on the transistor leakage current ("ASIC quality").


----------



## cdawall (Jul 5, 2016)

McSteel said:


> Well perhaps, if the GPU input power is all added together into a unified power plane, AMD could potentially disable some or all of the slot-driven phases, and the GPU will naturally compensate by pulling more from the aux connector. But if the power planes are separate for different zones in the chip (which admittedly would be odd), then they don't have the option to do so. I'm not really sure as I'm not privy to engineering blueprints of the Polaris10 GPU. But the traces and contacts on the PCB tell a very unambiguous story - the slot and the aux power are galvanically separated all the way up to the GPU. As such, if and only if they meet up within the GPU AND there is a control bus running between the GPU and the power delivery controller (the IR3567B), will AMD be able to restructure the power distribution without physical modifications to the card. Otherwise the only recourse is to lower consumption by lowering the voltage and then appropriately scaling down boost or even the base clock, depending on the transistor leakage current ("ASIC quality").



The spec sheet says it can go as far as disabling all but one power phase on the card.

http://www.infineon.com/dgdl/pb-ir3567b.pdf?fileId=5546d462533600a4015356803a7228ef

They are also completely configurable, which should in theory mean the controller could be set up to draw more from whichever phases it chooses.
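What "completely configurable" phase balancing could mean is easy to sketch as weighted load sharing. The weights, phase counts, and currents below are illustrative only, not the RX 480's real configuration; the IR3567B's actual interface is whatever its datasheet documents:

```python
def split_load(total_amps, weights):
    """Split a total current across phases in proportion to per-phase weights."""
    total_weight = sum(weights)
    return [total_amps * w / total_weight for w in weights]

# 6 phases: 3 fed from the slot (weight 0.8), 3 from the 6-pin (weight 1.2),
# biasing the load toward the aux connector to relieve the slot.
phase_amps = split_load(12.0, [0.8, 0.8, 0.8, 1.2, 1.2, 1.2])
print([round(a, 2) for a in phase_amps])   # [1.6, 1.6, 1.6, 2.4, 2.4, 2.4]
```

This only helps, of course, if the slot-fed and aux-fed phases actually feed a common power plane, which is exactly the open question in the posts above.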


----------



## RejZoR (Jul 5, 2016)

I mean, usually they separate power phases between the GPU and memory. And that's it. Then it's entirely down to how clever and flexible the power delivery system is. Which seems to be quite advanced on Maxwell 2 and Polaris products and beyond.


----------



## john_ (Jul 5, 2016)

R-T-B said:


> No, it doesn't.  The bios limits are hard.  You'll just get throttled to hell unless you manually raise the power limit.  An agressive overclock with no raised power limit may even hurt your performance.
> 
> Until the power limit is manually raised by the user, it will NEVER exceed 75W


Look, I am not trying to make the GTX 950 example look like the RX 480 example. If people stopped trying to defend Nvidia, they would realize that I am not defending AMD. AMD screwed up, because this is the reference design at default clocks. The end.

What I am saying is that there are plenty of cards out there sucking 85-90W from the PCIe bus, with no drama and no 13 pages of discussion about it, all these years, and no motherboards exploding and killing their owners. In W1zzard's review he gets 20% more performance, so either he raises the power limit manually, with the card giving him that capability, or the card is already set to use extra power if necessary.

So I believe it wouldn't be a bad idea, because of the RX 480, for sites to investigate this and try to educate users. If we stop at just pointing a finger at AMD, it will be forgotten by tomorrow. Users will go back to overclocking cards and sucking 85-90W from the PCIe bus, thinking this was something limited to the reference RX 480. "Temps are normal, 3DMark finishes, so no problem here." Isn't that the typical routine when overclocking? Was anyone thinking about power draw until today? Does anyone think about PCIe power draw even today?


----------



## R-T-B (Jul 5, 2016)

john_ said:


> Look. I am not trying to make the GTX 950 example look like the RX480 example. If people stop trying defending NVidia, they would have realized that I am not defending AMD. They screw up, because this is the reference design at the default stocks. The end.
> 
> What I am saying is that there are plenty of cards out there sucking 85-90W from the PCIe bus, with no drama and 13 pages of discussions about that, all those years, no motherboards exploding killing their owners. In W1zzard's review he gets 20% performance, so he either increases the power limit, with the card giving him that capability doing it manually, or the card is already set to use extra power if necessary.
> 
> So, I believe it wouldn't have been a bad idea, because of RX480, sites to investigate it and try to educate users. If we stay at just pointing a finger to AMD, from tomorrow it will be forgotten. Users will go back overclocking cards and sucking 85-90W from the pcie bus thinking this was something limited to the reference RX480. "Temps are normal, 3DMark finishes, so no problem here". Isn't it the typical routine when overclocking? Was anyone thinking about power draw until today? Does anyone think about pcie power draw even today?



I don't know about plenty of cards, but there have certainly been a few.  It's only recently that NVIDIA has basically bios locked the cards wattage at stock, as well.  (Post-fermi, I think)

I have seen the damage drawing too much from the slot can do (from bitcoin mining).  It's not pretty.  But I was running 4 heavily overclocked GPUs.  The specs do indeed have some wiggle room, I will grant you.  But I do believe at least at stock, they should be adhered to.

I will grant you I think the main point you are getting at:  This is way blown out of proportion.


----------



## john_ (Jul 5, 2016)

I had motherboards in the past with a molex connector next to the first PCIe slot. I never really searched to find out why it was there. Better stability, I was reading. More stable voltages, better overclocking. But maybe it wasn't just for that. Maybe it was also providing extra power if necessary. I don't know.


R-T-B said:


> I will grant you I think the main point you are getting at: This is way blown out of proportion.



Not exactly my point. My point is that it is blown out of proportion, but only in one direction. That of RX 480. It should be, they messed up. But it shouldn't JUST be shown as an RX 480 problem that is(?) going to be addressed today(?) with a driver, a BIOS, dark magic, or something, so we can forget about it tomorrow. Sites should take a few cards that are in their power limits, overclock them and see what happens. Is it just RX 480 that can overload the bus, or the 6pin, or are there other cards that we would never suspect?

We overclock stuff as far as it remains stable, and we only look at temps and whether the benchmark finishes without errors. We usually, if not always, ignore power load. The only time in my life that I really took into consideration how much power the overclocked part of my system was consuming was when overclocking my 1055T on the MSI 790FX-GD70. A really great motherboard, but in that period MSI's boards for the AMD platform were dying one after the other, if I am not mistaken, because their MOSFETs or the designs of their AMD motherboards weren't exactly top quality. So on that board I did a combination of overclocking and undervolting, trying to stay below 140W.

The RX 480 is the best excuse tech sites will ever have to investigate how much power graphics cards pull from the PCIe bus or the 6-pin after we overclock them. That could end up as a very interesting and eye-opening article. And that's what I have been trying to say all these days. AMD is not going to be found innocent if other graphics cards overload the PCIe bus under overclocking, because they did it with a reference design at default speeds. But people who overclock their cards could be interested in the results, if they care about their motherboard more than they care about 100 extra MHz, or if the 600W PSU they are using cost them $20.


----------



## EarthDog (Jul 5, 2016)

The extra molex/PCIe power leads on the motherboard were intended for MULTI GPU setups. It had nothing to do with single GPU setups.


----------



## newtekie1 (Jul 5, 2016)

R-T-B said:


> No, it doesn't. The BIOS limits are hard. You'll just get throttled to hell unless you manually raise the power limit. An aggressive overclock with no raised power limit may even hurt your performance.
> 
> Until the power limit is manually raised by the user, it will NEVER exceed 75W



Yep, that is exactly why every piece of GPU overclocking software had to add a power limit slider.  And even then, the max you can set that slider to is hard-locked by the BIOS to make sure the card doesn't exceed what the manufacturer wants.

In fact, I just took a look at the GTX 950's BIOS, and sure enough the power limit is set to 75w.  The user has the option to up the power limit to 90w, but that is the *user's* choice, not something set by the manufacturer.  If the user wants to risk their board, they can.  The manufacturer of the graphics card should not be the one making the decision to risk my motherboard and power supply.
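As a rough illustration of how a BIOS default/max power-limit pair maps onto the percent-based slider that overclocking tools expose: the 75W/90W figures are the GTX 950 BIOS values mentioned above, but the linear mapping itself is an assumption for illustration, not NVIDIA's actual firmware logic.

```python
# Sketch: how a BIOS default/max power limit pair maps onto the
# percent slider that GPU overclocking tools expose. The 75 W / 90 W
# figures come from the GTX 950 BIOS discussed above; the linear
# mapping is an illustrative assumption.

DEFAULT_LIMIT_W = 75.0   # default board power limit from the BIOS
MAX_LIMIT_W = 90.0       # hard ceiling the BIOS lets the user set

def slider_max_percent(default_w: float, max_w: float) -> float:
    """Highest value the power-limit slider can reach, in percent."""
    return 100.0 * max_w / default_w

def watts_at(percent: float, default_w: float = DEFAULT_LIMIT_W) -> float:
    """Board power target for a given slider position."""
    return default_w * percent / 100.0

print(slider_max_percent(DEFAULT_LIMIT_W, MAX_LIMIT_W))  # 120.0
print(watts_at(110))  # 82.5
```

So a slider capped at 120% corresponds exactly to the 90W BIOS ceiling: the user can opt in, but the firmware still bounds the choice.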



john_ said:


> What I am saying is that there are plenty of cards out there sucking 85-90W from the PCIe bus



Show me one other card that consistently pulls over 75w from the PCI-E bus.


----------



## john_ (Jul 5, 2016)

newtekie1 said:


> Show me one other card that consistently pulls over 75w from the PCI-E bus.


You first go and learn the alphabet. Then come and ask me to show you anything. I lost enough time with your fanboyism and your ignorance.

I am really thinking of putting this 


> So while he increases the clock speeds, the voltage stays the same, so the current going through the GPU stays the same. So you get no real power consumption increase.


 in my signature with your name on it.
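For context on why that quoted claim is wrong: to first order, dynamic power in CMOS logic scales as P ≈ C·V²·f, so raising the clock raises power even at constant voltage. A minimal sketch, where the capacitance value is made up purely for illustration and the clocks are hypothetical:

```python
# Sketch: first-order CMOS dynamic power, P = C * V^2 * f.
# Shows that raising frequency at constant voltage still raises power,
# contrary to the claim quoted above. C is a made-up effective
# switching capacitance chosen only for illustration.

def dynamic_power(c_farads: float, volts: float, freq_hz: float) -> float:
    """First-order dynamic power in watts."""
    return c_farads * volts**2 * freq_hz

C = 50e-9   # 50 nF effective switching capacitance (illustrative)
V = 1.15    # GPU core voltage, held constant

stock = dynamic_power(C, V, 1.266e9)  # 1266 MHz
oc = dynamic_power(C, V, 1.400e9)     # 1400 MHz, same voltage

print(f"stock: {stock:.1f} W, OC: {oc:.1f} W, +{100 * (oc / stock - 1):.1f}%")
```

With voltage fixed, power grows linearly with clock, so a ~10% overclock is a ~10% power increase even before any voltage bump.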


----------



## cdawall (Jul 5, 2016)

newtekie1 said:


> Show me one other card that consistently pulls over 75w from the PCI-E bus.



Hand me the equipment; I have a hunch I have one or two on my shelf


----------



## newtekie1 (Jul 5, 2016)

john_ said:


> You first go and learn the alphabet. Then come and ask me to show you anything. I lost enough time with your fanboyism and your ignorance.
> 
> I am really thinking putting this
> in my signature with your name on it.



So judging by your insult ridden useless response, I'm going to assume you actually don't have any examples to back up your claim and can't actually show me a single other card that consistently pulls more than 75w from the PCI-E bus.  Got it.  You can move along if you don't have anything useful to add to the thread.



cdawall said:


> Hand me the equipment I have a hunch I have one or two on my shelf



I'd guess they were from the Fermi era... and even then, I believe they only did it when overclocked, at stock they didn't.


----------



## cdawall (Jul 5, 2016)

newtekie1 said:


> I'd guess they were from the Fermi era... and even then, I believe they only did it when overclocked, at stock they didn't.



A couple of different gens; Fermi is one, but my 470s are water-cooled and consume less power because of it. For reference, however, I had a pair of 480s in SLI pulling nearly 900W at the wall by themselves at stock clocks.


----------



## cadaveca (Jul 5, 2016)

john_ said:


> Users will go back overclocking cards and sucking 85-90W from the pcie bus thinking this was something limited to the reference RX480. Does anyone think about pcie power draw even today?


W1zz has been testing PCIe power draw for a LONG time. I have personally been testing motherboards over the 8-pin connector only. Reviewers do look at these things with a critical eye that the normal user does not. So yeah, some people do.

AMD's 2900XT was popping motherboards at the 24-pin.
NVidia's GTX570 did as well.

If you pay attention, sure, there are a few cards that cause motherboard damage fairly consistently. For the most part, that's the whole reason why motherboard makers NOW include additional power for the PCIe slots, but not all boards do. There are MANY 3-x16 slot boards that support Crossfire that do not.



john_ said:


> "Temps are normal, 3DMark finishes, so no problem here". Isn't it the typical routine when overclocking? Was anyone thinking about power draw until today?



People that overclock should be aware of these sorts of issues in the first place, but the general "overclocker" isn't. There is much more that they aren't aware of. That's why I dropped OC, posting on HWBot, and put little focus on OC in my reviews. To me, OC is deep hardware analysis and testing, not a point-based skill competition like it has become. I don't call chasing numbers without a care for what dies OC'ing... and so I focused on GAMING as the main selling point. The idea that "stuff dies when you OC" isn't true... stuff dies when you BLINDLY OC.

To me overclocking is an art. In order to make great art, you need to understand the medium you use, whether it be paint, pencil, music, or hardware. However, mass marketing has hidden all of that as people have used OC as a selling feature.


Do a Google search on "burnt 24-pins". It's a hoot. Nearly every thread will blame the PSU. The real cause? Likely a VGA or a USB controller stuffed it. Not a single mention of that. Well, that's not entirely true. There are a couple, but still... when the blind lead the blind...


----------



## sith'ari (Jul 5, 2016)

Tom's repeated the measurements : http://www.tomshardware.com/reviews/amd-radeon-rx-480-power-measurements,4622.html 
Results were the same.


----------



## HD64G (Jul 5, 2016)

sith'ari said:


> I was replying to a comment so apparently somebody cared.


Wasn't sure about what MB you had. Now I just don't care at all, so have a nice day.


----------



## newtekie1 (Jul 5, 2016)

cdawall said:


> Couple of different gens Fermi is one, but my 470's are water-cooled and consume less power because of it. I had a pair of 480's pulling nearly 900w at the wall by themselves at stock clocks in SLI for reference however.



Yeah, but just because the cards are pulling a lot of power, doesn't mean they are pulling it through the PCI-E bus.  Like PCPer showed, with some of the cards they tested, when they overvolted the cards to increase power consumption the extra consumption came from the external 8/6-Pin and the power draw from the PCI-E bus stayed the same.  The external connectors are over-built, they can handle the extra power draw, so doing it this way isn't a problem.


----------



## cdawall (Jul 5, 2016)

newtekie1 said:


> Yeah, but just because the cards are pulling a lot of power, doesn't mean they are pulling it through the PCI-E bus.  Like PCPer showed, with some of the cards they tested, when they overvolted the cards to increase power consumption the extra consumption came from the external 8/6-Pin and the power draw from the PCI-E bus stayed the same.  The external connectors are over-built, they can handle the extra power draw, so doing it this way isn't a problem.



I'm personally just curious at this point.


----------



## sith'ari (Jul 5, 2016)

> HD64G said:
> But since I am sure your MB is a good quality one.........................





> HD64G said:
> Wasn't sure about what MB you had...............



*Either you were sure or you weren't ! You have to decide eventually !*
hint: there is something in the user control panel which says "system specs" : perhaps you should check it another time, contains useful info such as ...... the system specs !


----------



## newtekie1 (Jul 5, 2016)

cdawall said:


> I'm personally just curious at this point.



Yeah, me too actually.

As Cadaveca pointed out, Fermi and the HD 2900 XT had issues.  Though I'd like to know if they were right on the edge and multiple cards pushed it over, or how much they actually were pulling from the PCI-E slot.  I know I melted my 24-pin with a pair of Fermi cards.  But nVidia obviously learned several lessons with Fermi, and their last 3 generations haven't pulled a lot of power through the PCI-E bus.


----------



## cdawall (Jul 5, 2016)

sith'ari said:


> *Either you were sure or you weren't ! You have to decide eventually !*
> hint: there is something in the user control panel which says "system specs" : perhaps you should check it another time, contains useful info such as ...... the system specs !



Cocky now aren't you ...



newtekie1 said:


> Yeah, me too actually.
> 
> As Cadaveca pointed out, Fermi and the HD2900XT had issues.  Though I'd like to know if they were right on the edge, and multiple cards pushed it over, or how much they actually were pulling from the PCI-E slot.  I know I melted my 24-pin with a pair of Fermi cards.  But nVidia obviously learned several lessons with Fermi and their last 3 generations haven't pulled a lot of power through the PCI-E bus.



I am curious what the age-old beasts pull: 3870X2/4870X2/GTX 295, etc.


----------



## cadaveca (Jul 5, 2016)

newtekie1 said:


> Yeah, me too actually.
> 
> As Cadaveca pointed out, Fermi and the HD2900XT had issues.  Though I'd like to know if they were right on the edge, and multiple cards pushed it over, or how much they actually were pulling from the PCI-E slot.  I know I melted my 24-pin with a pair of Fermi cards.  But nVidia obviously learned several lessons with Fermi and their last 3 generations haven't pulled a lot of power through the PCI-E bus.


It's interesting to see which boards carry 12V PCIe power-adders, and which have MOLEX plugs. There is a good reason for those MOLEX plugs instead of a PCIe connector. Also, some boards use a PCIe power plug to add PCIe slot power, but then the board has voltage regulation to switch that 12V down to the needed voltages, and some do not.


----------



## sith'ari (Jul 5, 2016)

cdawall said:


> Cocky now aren't you ...



I tend to respond in the same manner that other people reply to me! (He was sarcastic with me, so I did the same.) 
P.S. I guess it's forbidden for someone to own an old motherboard, because this ruins the "defensive line" for the AMD fanboys.


----------



## cdawall (Jul 5, 2016)

sith'ari said:


> I tend to respond at the same manner that other people are replying to me! (he was sarcastic against me so i did the same)
> P.S. i guess it's forbidden for someone to own an old motherboard because this ruins the "defensive line" for the AMD fanboys.



AMD fanboys? I couldn't care less what CPU/GPU you use. Honestly, the sheer amount this has been blown out of proportion is astounding. I mean, hell, the 9370/9590s have been blowing the MOSFETs up on $250+ motherboards since release day, yet no one bats an eye. AMD releases a decent bang-for-the-buck GPU that has issues on crappy ancient boards and everyone is losing their minds.


----------



## sith'ari (Jul 5, 2016)

cdawall said:


> AMD fanboys? I could care less what CPU/GPU you use. ...........



What are you talking about, mate? Where did I say that you are an AMD fanboy? My comment was a general one. 

*EDIT:*


> cdawall said:
> .....AMD releases a decent bang for the buck GPU that has issues on crap ancient boards and everyone is loosing their mind.



I have to lose my mind, since it's my system. Of course I would care!!


----------



## cdawall (Jul 5, 2016)

sith'ari said:


> What are you talking about mate? where did i say that you are an AMD fanboy? my comment was a general one.



No one in this thread has really posted any fanboy comments. You are literally one of, what, two people freaking out about something that isn't new and isn't abnormal. The "sheeple", if you will.


----------



## sith'ari (Jul 5, 2016)

cdawall said:


> No one in this thread has posted really any fanboy comments. Literally you are one of what two people freaking out about something that isn't new and isn't abnormal. The "sheeple" if you will.



1. check my edit at my previous post.
2. Also, check my post *#43* . I've been clear about my feelings for AMD from my early posts.


----------



## cdawall (Jul 5, 2016)

sith'ari said:


> i have to loose my mind since it's my system. Of course i would care!!



Are you planning on buying this GPU? Or, as the next quote mentions, are you just here to complain?



sith'ari said:


> 1. check my edit at my previous post.
> 2. Also, check my post *#43* . I've been clear about my feelings for AMD from my early posts.



Yet you still post...

Out of curiosity, did you lose a board to an AMD/NV GPU? Have you met anyone who has?


----------



## RejZoR (Jul 5, 2016)

I wonder how far within PCIe spec the first PCIe graphics cards that were powered entirely from the slot were. I'd die laughing if people realized those old graphics cards were totally out of spec and no one made any big deal about it, but today everyone is freaking out like mad... Would be fun to know.


----------



## sith'ari (Jul 5, 2016)

cdawall said:


> Are you planning on buying this GPU? Or as the next quote mentions are you just hear to complain?
> Yet you still post...
> Out of curiosity did you loose a board to an AMD/NV GPU? Have you met anyone who has?



I must have said it 100 times by now! I haven't paid nearly 600€ for top-notch protection hardware (PSU, UPS, surge protectors) only to take even the slightest risk of this GPU causing damage to my system.
Check your post #193. You were the one who told me NOT to buy this GPU because it might destroy my mobo!!! 

P.S. No, I wouldn't buy anything from AMD after the Fury X period (but someone else with a similar system could). I simply don't like their policy.


----------



## cdawall (Jul 5, 2016)

RejZoR said:


> I wonder how within PCIe specs were the first PCIe graphic cards that were powered entirely from PCIe. I'd die of laughing if people realized those old graphic cards were totally out of spec and no one made any big deal about it, but today, everyone is freaking out like mad... Would be fun to know.



The 6800 Ultra drew around 80W according to the age-old benchmarks, and even that had a 6-pin... I imagine the really old cards didn't exceed much of anything; they didn't draw enough power for it to be an issue.



sith'ari said:


> I must have said it 100 times by now!, i haven't paid near 600€, for top-notch protection hardware (PSU, UPS, surge protectors ), only to take even the slightest risk this gpu to cause damage to my system.
> check your post #193. You were the one that told me NOT to buy this GPU because it might destroy my mobo!!!
> 
> P.S. No i wouldn't buy anything from AMD after the FuryX period (*but someone else with a similar system could). I simply don't like their policy.



I believe I also mentioned that you are being ridiculous. 600€ for a UPS/PSU, yet a board that doesn't even fully support the RX 480 (its PCIe x16 slot would be a jokingly bad limit).


----------



## newtekie1 (Jul 5, 2016)

cdawall said:


> 6800 Ultra drew around 80W according to the age old benchmarks and even that had a 6 pin...I imagine the old old cards didn't exceed much of anything they didn't draw enough power for it to be an issue.



Of course, back then the limit on the PCI-E slot was 25w...  The slot itself hasn't physically changed any; PCI-SIG just upped the limit to 75w because that is what the slot was actually capable of, and the high-power card manufacturers asked for more.  The 25w limit was just a very conservative limit, kind of like how 75w is a very conservative limit on the PCI-E 6-pin connector.
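For reference, the 75W figure for an x16 graphics slot is itself the sum of two per-rail current limits in the PCIe CEM spec (5.5A on +12V and 3A on +3.3V), which is part of why "75W" is a budget rather than a single measured number. A quick check:

```python
# The ~75 W x16 graphics slot budget from the PCIe CEM spec is the sum
# of two per-rail current limits, not a single measured figure.

RAILS = {
    "+12V": (12.0, 5.5),   # volts, max amps
    "+3.3V": (3.3, 3.0),
}

total_w = sum(v * a for v, a in RAILS.values())
for name, (v, a) in RAILS.items():
    print(f"{name}: {v * a:.1f} W")
print(f"total: {total_w:.1f} W")  # 75.9 W, quoted as "75 W" in practice
```

This is also why per-rail measurements (as Tom's and PCPer did) matter more than the headline total: a card can be near the overall budget yet well over the 12V rail's 5.5A limit.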


----------



## KainXS (Jul 5, 2016)

Is what The Stilt at Overclock.net said true, that he fixed the slot draw problem with Afterburner and that W1zzard is testing the fix?

http://www.overclock.net/t/1604979/...r-the-reference-rx-480-cards/10#post_25320606


----------



## john_ (Jul 5, 2016)

cadaveca said:


> W1zz has been testing PCIe power draw for a LONG time. I have personally been testing motherboards over the 8-pin connector only. Reviewers do look at these things with a critical eye that the normal users does not. So yeah, some people do.
> 
> AMD's 2900XT was popping motherboards at the 24-pin.
> NVidia's GTX570 did as well.
> ...



For many, overclocking can help them get important extra performance. People who can't, or aren't willing to, pay more money will go for the cheaper of two models, thinking that with overclocking they can get to the performance level of the faster model and save money in the process. That was always the idea of overclocking: saving money. It's another matter that today many just do it for the benchmarks, spending in fact more money. And yes, all these people, me included, will, as you say, blindly OC. I am not going to push as much voltage as I can into a CPU or a GPU, but others will.

But there are cases where people don't even imagine they are pushing their hardware, and that's why I keep repeating the example of the GTX 950 with no power connector. Putting a two-slot beast with 8-pin connectors and full voltage controls into a PCIe slot can make someone much more nervous than putting in a tiny (compared to the beast), innocent GTX 950 that doesn't even have an extra PCIe connector, probably doesn't give you voltage controls to play with, and is advertised as a power-efficient model. How could something like that be a possible danger to your PCIe slot?

When I saw W1zz's review of the card, I realized that, if I haven't misunderstood something, that card could be having - after overclocking - the same power draw problems as the RX 480, because it can turn only to the PCIe bus for power. And that 20% extra performance that W1zz gets after OC can't come out of thin air. And the card is already at 74W at defaults. W1zz's power draw testing of that card didn't include the overclocking scenario, because his job is to test the card at its defaults, with the OC page being just the icing on the review's cake. But with all this mess with the RX 480, thanks to AMD's stupidity, I believe it's a nice opportunity for professionals to show all those blind overclockers that things aren't as simple here as "AMD messed up with the RX 480". AMD shot its own foot, but probably there are others out there with their gun pointing at their feet, not knowing it.


----------



## ikeke (Jul 5, 2016)

https://www.techpowerup.com/forums/threads/amd-radeon-rx-480-8-gb.223586/page-14#post-3484259


----------



## cadaveca (Jul 6, 2016)

AMD's statement via Facebook, an hour or so ago:



> We promised an update today (July 5, 2016) following concerns around the Radeon_™_ RX 480 drawing excess current from the PCIe bus. Although we are confident that the levels of reported power draws by the Radeon RX 480 do not pose a risk of damage to motherboards or other PC components based on expected usage, we are serious about addressing this topic and allaying outstanding concerns. Towards that end, we assembled a worldwide team this past weekend to investigate and develop... a driver update to improve the power draw. We’re pleased to report that this driver—Radeon Software 16.7.1—is now undergoing final testing and will be released to the public in the next 48 hours.
> 
> In this driver we’ve implemented a change to address power distribution on the Radeon RX 480 – this change will lower current drawn from the PCIe bus.
> Separately, we’ve also included an option to reduce total power with minimal performance impact. Users will find this as the “compatibility” UI toggle in the Global Settings menu of Radeon Settings. This toggle is “off” by default.
> ...


----------



## RejZoR (Jul 6, 2016)

Can't wait to see this driver tested and how it behaves. It'll show how committed AMD is regarding this. I just wonder what they mean with this "Compatibility" toggle, which will be OFF by default. Does this mean they are confident enough about the PCIe power draw not damaging anything that they decided not to enable the fix by default? Hm. Anyway, looking forward to a test of this fix...


----------



## john_ (Jul 6, 2016)

RejZoR said:


> Can't wait to see this driver tested and how it behaves. It'll show how committed AMD is regarding this. I just wonder what they mean with this "Compatibility" toggle which will be OF by default. Does this mean they are confident enough about the PCIe power draw not damaging anything they decided not to enable the fix by default? Hm. Anyway, looking forward for a test of this fix...



They are giving two options.

1) The first option is to throw extra load to the 6pin PCIe connector, if you trust your PSU.

2) The second is to lower power consumption if you want to stay in specs.

The first option will be used as an argument that their fix didn't cause any loss in the card's performance. That's why it will be the default. Hardware sites test cards with top equipment, so I believe AMD will ask tech sites to use the first option if they want to retest the card. If a user uses a substandard PSU that they don't trust, that's not AMD's fault anyway. I guess that's going to be the logic behind option one, which is going to be presented as a fix that doesn't affect the card's performance and also doesn't make AMD look like they acknowledge there is a problem. With this option they are not fixing any problem, because there isn't one. They are just calming users who are nervous about the whole story with the PCIe bus power draw.

The second option is what users who are really concerned about the power draw will use in the end. To be fair, this is the real fix that will bring the card into spec. Probably we will see lower frequency from the GPU, maybe 1200MHz instead of 1266MHz, and 1.10V instead of 1.15V GPU voltage. Some sites will choose, and advise users to use, this option instead of the first one, and will retest the cards. AMD hopes that whatever optimizations and performance increases they manage to achieve this week in their drivers, that 3%, will be enough to make the card look like it hasn't lost any performance at all.

Then the custom cards will come and everything will go back to normal.
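The 1266→1200 MHz and 1.15→1.10 V figures above are the poster's speculation, not AMD's published numbers, but they let us ballpark the saving using the same first-order dynamic-power scaling P ∝ f·V²:

```python
# Rough estimate of the power saving from the speculated clock/voltage
# drop above (1266 -> 1200 MHz, 1.15 -> 1.10 V), using first-order
# dynamic-power scaling P ∝ f * V^2. The input figures are the
# poster's guesses, not AMD's published numbers.

def relative_power(f_new: float, f_old: float,
                   v_new: float, v_old: float) -> float:
    """New dynamic power as a fraction of the old."""
    return (f_new / f_old) * (v_new / v_old) ** 2

ratio = relative_power(1200, 1266, 1.10, 1.15)
print(f"estimated power: {100 * ratio:.1f}% of stock "
      f"({100 * (1 - ratio):.1f}% saved)")
```

Under those guessed figures the card would draw roughly 87% of its stock dynamic power, i.e. a low-teens percent reduction, which is in the right ballpark for shaving ~10W+ off a ~160W card.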


----------



## RejZoR (Jul 6, 2016)

"If you trust your PSU". If a PSU is so shit it can't handle more than 75W on a PCIe power connector, then you'd better not use it at all, because it's so shit it'll most likely blow up by itself.


----------



## McSteel (Jul 6, 2016)

Thought it might be useful to cross-link to the review commentary thread, as The Stilt has managed to find a way to instruct the power controller to redistribute power draw via software (see original thread on OCN here).

The effect is not huge, but it is significant enough to alleviate the problem, especially when combined with undervolting and/or underclocking the card.

I think we can let the issue rest now, knowing it's fully manageable. But we should definitely continue to investigate every aspect of performance - including detailed insight into power draw - of all future VGAs under review.


----------



## cadaveca (Jul 6, 2016)

McSteel said:


> Thought it might be useful to cross-link to the review commentary thread, as The Stilt has managed to find a way to instruct the power controller to redistribute power draw via software (see original thread on OCN here).
> 
> The effect is not huge, but it is significant enough to alleviate the problem, especially when combined with undervolting and/or underclocking the card.
> 
> I think we can let the issue rest now, knowing it's fully manageable. But we should definitely continue to investigate every aspect of performance - including detailed insight into power draw - of all future VGAs under review.


For me it wasn't a question of whether it was possible. I was pretty sure it was. I just wondered why it wasn't done so that there could NOT be any questions.

But in the end, kudos to AMD for listening to the grumbling and doing something about it. This shows that they are committed to addressing user concerns and truly care what people think about their products. Doing this driver change costs them money. Still, I want AMD, and indeed all companies, to adhere to the specifications of supporting parts 100%. Spec says 75W? You don't overstep it one bit. And now they give the end user the options!


----------



## john_ (Jul 6, 2016)

RejZoR said:


> "If you trust your PSU". If PSU is so shit it can't handle more than 75W on a PCIe power connector, then you better not use it entirely because it's so shit it'll most likely blow up by itself.


There are probably more sh!tty PSUs out there than sh!tty new motherboards. I have shown a link in one of my posts to a 600W PSU that costs about $20-$25. PSUs based on ancient designs, with many amps on the 3.3V and 5V lines; perfect for systems that still run Athlon XPs. Believe me, plenty of people will buy the best CPU, motherboard, RAM, and graphics card, and when they reach the final part of their system, the PSU, they will start feeling they have already spent too much money on their new setup and will go for the cheapest PSU they see available.

I believe that's why AMD chose to pull more power through the PCIe bus than the 6-pin. Also, as I said before, the same people making motherboards also make RX 480 graphics cards. I don't believe ASUS, Gigabyte, and MSI would be willing to start selling graphics cards that can kill their own motherboards.



cadaveca said:


> This shows that they are committed to addressing user concerns, and truly care what people think about their products. Doing this driver change costs them money.



And they are paying for it. Every time they come out and say "We hear you, we are fixing it", people add one more example of AMD messing up to their list. On the other hand, Nvidia keeps its mouth SHUT, reacts like nothing is happening, and only talks about a problem AFTER releasing a fix. If it is something they can't fix, they just don't talk much about it. That way the problems look like normal bugs, nothing to talk about, keeping Nvidia's reputation for excellent driver support mostly intact.

AMD is like the honest little person panicking when someone is telling him he made a mistake.
Nvidia is like the politician, never admitting there is a problem, or downplaying the significance of that problem.


----------



## ikeke (Jul 6, 2016)

john_ said:


> [..]
> 
> AMD is like the honest little person panicking when someone is telling him he made a mistake.
> Nvidia is like the politician, never admitting there is a problem, or downplaying the significance of that problem.



That is the ugly truth, here.


----------



## Tatty_One (Jul 6, 2016)

RejZoR said:


> "If you trust your PSU". If PSU is so shit it can't handle more than 75W on a PCIe power connector, then you better not use it entirely because it's so shit it'll most likely blow up by itself.


See, the thing is, you are an enthusiast. Most people out there go buy a pre-built system; they probably don't even know what PSU they have in it (OK, maybe the wattage, but not the quality or even the amperage), and the only upgrades they are ever likely to make to that system are either throwing some extra RAM in a slot or upgrading the graphics card. In some respects the 480 is exactly for them: an affordable, good-performing solution.


----------



## RejZoR (Jul 6, 2016)

What's the chance of such people going for an RX 480 in the first place? People like this buy garbage like an RX 410 (I made this up), not a 200-dollar mid-range card...


----------



## Tatty_One (Jul 6, 2016)

RejZoR said:


> What's the chance of such people going for RX480 in the first place? People like this buy garbage like RX410 (I made this up), not a 200 dollar mid end card...


Well, when they want to play modern games on a 7-year-old system with a dual-core CPU, some will think that is the answer. That is all my point is: there will be a market, in part due to ignorance.


----------



## Ungari (Jul 9, 2016)

Steve Burke tries to burn a cheap motherboard PCIE lane using the RX 480 8GB with original driver:


----------



## Tsukiyomi91 (Jul 10, 2016)

A brand new cheap board should not have the burning issue; older ones, like those first-gen Core series boards, are more likely to get burned, IMO...


----------



## EarthDog (Jul 29, 2016)

Ungari said:


> Steve Burke tries to burn a cheap motherboard PCIE lane using the RX 480 8GB with original driver:


Jesus.. Cliff's notes please? That shit is 15m long.............

JFC, what a waste.. he doesn't even test it in the video, did you watch it yourself before you linked it????????????????????


----------

