# perfcap reason pwr



## Franckette (Sep 14, 2017)

Hi everybody,
For about a month I've had a problem with my GPU: low FPS in games.
And when the FPS drops, GPU-Z shows "Pwr" as the PerfCap Reason.
By now most games are unplayable.
Do you know why the power of my GPU is limited?


----------



## P4-630 (Sep 14, 2017)

First of all, please fill your system specs:
https://www.techpowerup.com/forums/account/specs


----------



## Kursah (Sep 14, 2017)

Welcome to TPU! Hopefully we can help you resolve your issues, but first we need more information from you.


Can you provide your system specs? (You can go into your profile and add them, that way you don't have to post them repeatedly)
What changed one month ago that you're aware of?
Is the rest of your system stable or does it crash?
Have you updated your operating system and graphics drivers?
What games are you testing with?


----------



## newtekie1 (Sep 14, 2017)

Give us a screen shot of GPU-Z's sensor tab too.

The PWR perfcap reason means the GPU is hitting the Power Consumption limit.


----------



## Franckette (Sep 14, 2017)

Here they are:
*System Name:* Alienware M17x R4 laptop, Win 7
*Processor:* Intel(R) Core(TM) i7-3610QM CPU @ 2.30GHz
*Memory:* 12.0 GB
*Video Card(s):* GTX 780M
*Hard Disk(s):* SAMSUNG SSD PM830 mSATA 64GB

Before this month I had some crashes and freezes (only in games like Fallout 4).
And I get low FPS when I run FurMark, but not when I run a benchmark.
In GPU-Z, under PerfCap Reason it shows "Pwr" and the bar turns green.

I have the latest driver.

Here is a screenshot.


----------



## Mindweaver (Sep 14, 2017)

@Franckette welcome to TPU. Also, in the future don't double post. If you need to add something, just edit your last post. Thanks!

*EDIT: You can go over our Forum Guidelines here.*


----------



## Vellinious (Sep 14, 2017)

Franckette said:


> I've
> *System Name:* Laptop Alienware m 17x R4 Win 7
> *Processor:* Intel(R) Core(TM) i7-3610QM CPU @ 2.30GHz
> *Memory:* 12.0 Go
> ...


You're hitting the power-limit perf cap there.  As temps increase, so does the voltage, and with it the heat... as voltage goes up, so does the required power.  When you hit the power limit prescribed in the BIOS for the GPU, it will throttle the core clock and voltage to bring it back down where it should be.

A couple of things might help, assuming you can use Afterburner or PrecisionX on a mobile GPU.

1.  Create a custom fan curve to keep the GPU cooler
2.  Increase the power limit slider in MSI AB / EVGA PCX
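As a rough illustration of the throttling loop described above, here is a toy model (invented constants and step sizes, not actual driver behavior) of how a power cap forces the clock and voltage down until consumption fits under the limit:

```python
# Toy model of power-limit ("Pwr" PerfCap) throttling. The constant k,
# the step sizes, and the starting clock/voltage are all made up for
# illustration; they are not real GTX 780M firmware values.

def throttle_step(clock_mhz, voltage_v, power_limit_w, k=2.4e-7):
    """Estimate dynamic power as P ~ k * f * V^2 and, if it exceeds
    the limit, drop the clock/voltage by one notch."""
    power_w = k * (clock_mhz * 1e6) * voltage_v ** 2
    if power_w > power_limit_w:
        # GPU-Z would report PerfCap Reason "Pwr" here
        return clock_mhz - 13, voltage_v - 0.0125, "Pwr"
    return clock_mhz, voltage_v, "None"

clock, volt = 823.0, 0.887  # hypothetical boost state
for _ in range(10):
    clock, volt, reason = throttle_step(clock, volt, power_limit_w=100.0)
print(clock, round(volt, 4), reason)  # clock and voltage stepped down
```

The point of the sketch: nothing is "broken" when this happens, the firmware is simply walking the clock/voltage down until the estimated power fits under the cap, which is why raising the power limit slider (where the vendor allows it) can restore clocks.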


----------



## Deleted member 67555 (Sep 14, 2017)

I'm thinking it may be time for a PC shop visit....
Unless you are comfortable doing some work yourself...
I'm going to guess you have a dead or dying fan, or general dust issues.


----------



## Vayra86 (Sep 14, 2017)

One more point. Do NOT run Furmark any more. This may have already contributed to the lacking performance or accelerated wear on the cooling solution. If you do increase the power limit slider of this card, this will also increase temperatures further, in turn causing more throttling. I wouldn't recommend this.

In the end, this is why I hate gaming laptops.


----------



## eidairaman1 (Sep 14, 2017)

jmcslob said:


> I'm thinking it may be time for a PC shop visit....
> Unless you are comfortable doing some work yourself...
> I'm going to guess you have a dead or dying fan, or general dust issues.



Needs a good cleanout, and possibly a fresh thermal compound job on the CPU, GPU, and motherboard chipset.

Other things: perhaps a bigger power brick, and lowering the resolution and detail levels for smoother gameplay. It uses a mobility GPU, which is tailored for power sipping, not running full out.

This being a laptop, there really isn't a whole lot you can do to OC that CPU.


----------



## newtekie1 (Sep 14, 2017)

This isn't a heat issue; I'm not sure why people are thinking it is. There is a completely different PerfCap reason for heat: it would be coming up "Thrm" if the GPU was hitting the thermal limit.

Yes, don't run FurMark, it isn't an accurate test for power throttling. It basically causes the card to power throttle immediately.

Run something like Unigine Heaven in a window with GPU-Z running and take a screenshot of the sensors tab.


----------



## Vellinious (Sep 15, 2017)

newtekie1 said:


> This isn't a heat issue; I'm not sure why people are thinking it is. There is a completely different PerfCap reason for heat: it would be coming up "Thrm" if the GPU was hitting the thermal limit.
> 
> Yes, don't run FurMark, it isn't an accurate test for power throttling. It basically causes the card to power throttle immediately.
> 
> Run something like Unigine Heaven in a window with GPU-Z running and take a screenshot of the sensors tab.



Not in that it's an overheating issue, but as temps rise, so must the voltage to keep the core stable.  More core voltage = more power draw.  Thus, along with increasing the power limit in AB and creating a custom fan curve, he may be able to get around it.


----------



## newtekie1 (Sep 15, 2017)

Vellinious said:


> Not in that it's an overheating issue, but as temps rise, so must the voltage to keep the core stable.  More core voltage = more power draw.  Thus, along with increasing the power limit in AB and creating a custom fan curve, he may be able to get around it.



Temps have nothing to do with this issue.  The GPU is running well within the temps it should be, it is not getting anywhere near hot enough to cause a noticeable loss in efficiency.  And the GPU would be thermal throttling and would probably shut down before the temperature would be high enough to cause an increase in power draw.

What is more likely is that something is lowering the Power Limit(MSI Afterburner for instance lets you drop the power limit to 50%), or the power sense circuit on the GPU is starting to go wonky.


----------



## Vellinious (Sep 15, 2017)

newtekie1 said:


> Temps have nothing to do with this issue.  The GPU is running well within the temps it should be, it is not getting anywhere near hot enough to cause a noticeable loss in efficiency.  And the GPU would be thermal throttling and would probably shut down before the temperature would be high enough to cause an increase in power draw.
> 
> What is more likely is that something is lowering the Power Limit(MSI Afterburner for instance lets you drop the power limit to 50%), or the power sense circuit on the GPU is starting to go wonky.



I think if you look at the sensors tab screenshot there, you'll see exactly what I'm talking about.  It's idling at around 50°C; load temps are much higher.  As temps increase under load, you'll also notice the voltage increasing until it hits the power limit perf cap, at which time it throttles... temps, clock and voltage drop, return to normal, and then it throttles again.

As I stated, increasing the power limit in AB may alleviate the issue, but creating a custom fan curve to keep the GPU cooler under load certainly can't hurt either.

I will also concede that something could be "wonky".  But before jumping to the conclusion that the GPU is just broken, I'd try the alternatives I listed.


----------



## Franckette (Sep 16, 2017)

The thermal pads and thermal paste were changed recently.
Here is a screenshot from when I play Fallout 4.
When I run Unigine Heaven, everything is fine:
no low FPS or green bar.


----------



## P4-630 (Sep 16, 2017)

Franckette said:


> No low fps or green bar.



I do see some green in the GPU-Z screenshot though...

But as long as you can run everything again without huge fps drops you're fine then.

Happy Gaming!!


----------



## Franckette (Sep 16, 2017)

The green bar and low FPS appear only when I play games, not when I run Unigine Heaven.
When I say low FPS, it's maybe 10 or 15 FPS,
so I can't play any games.


----------



## newtekie1 (Sep 17, 2017)

Vellinious said:


> I think if you look at the sensors tab screenshot there, you'll see exactly what I'm talking about.



I looked at the screenshot before I posted.  I do not see what you are talking about.



Vellinious said:


> It's idling at around 50c, load temps are much higher.



Idle temperature means nothing.  This is a laptop; idle temps will be high because the fan is likely turned off at idle.  And the load temps are not going up that high.  Judging by the graph, they look like they are getting up into the 70°C range.  I can guarantee the temp is staying below 82°C, because that is the thermal throttle temperature, and the GPU isn't thermal throttling.



Vellinious said:


> As temps increase during load, you'll also notice voltage increasing until it hits the power limit perf cap, at which time it throttles....temps, clock and voltage drops, and then returns to normal, and then throttles again.



That isn't how it works, at least not noticeably. The voltage on the GPU doesn't go up because of the temperature.  The voltage goes up because the driver is telling the voltage regulator to increase the voltage to keep the GPU stable at the higher clock speeds.
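A minimal sketch of the clock-to-voltage mapping described here, where the driver requests the voltage bin needed for the target clock (the real table lives in the vBIOS; every number below is invented):

```python
# Hypothetical DVFS table: each boost bin maps a clock to the minimum
# voltage the driver requests to keep that clock stable. All values
# are invented for illustration.
DVFS_TABLE = [
    (549, 0.850),   # (clock in MHz, voltage in volts)
    (705, 0.887),
    (771, 0.925),
    (823, 0.975),
]

def voltage_for_clock(target_mhz):
    """Return the voltage of the lowest bin that reaches target_mhz."""
    for clock, volts in DVFS_TABLE:
        if clock >= target_mhz:
            return volts
    return DVFS_TABLE[-1][1]  # clamp to the top bin

print(voltage_for_clock(750))  # picks the 771 MHz bin
```

In this view, voltage follows the requested clock, not the temperature, which is the behavior being argued for in the post above.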

Yes, if temperatures get really high, then efficiency will go down and power draw will increase.  However, silicon has to get really, really hot for this efficiency loss to matter to the point that the card would throttle due to the power limit.  In fact, the temperature would have to be so high that the GPU would already be throttling due to temperature.  We aren't seeing thermal throttling in the sensors tab, so temperature is not an issue.



Vellinious said:


> As I stated....increasing the power limit in AB may alleviate the issue, but creating a custom fan curve to keep the GPU cooler under load certainly can't hurt that either.
> 
> I will also concede, that something could be "wonky". But before resorting to the conclusion that the GPU is just broke, I'd try the alternatives that I listed.



Going on a wild goose chase trying to solve a temperature issue that doesn't exist just wastes time that we could be using trying to solve the actual issue.

Yes, increasing the power limit in MSI Afterburner is the first thing I would try too.  Heck, it or a program like it might be what is causing the issue.

I'd also do a clean driver install.


----------



## Vellinious (Sep 17, 2017)

newtekie1 said:


> I looked at the screenshot before I posted.  I do not see what you are talking about.
> 
> 
> 
> ...



1.  Then you're blind

2.  Idle temps are the baseline, and it's clear that as the graph goes up in temps, so does the voltage.  Correlation, in this case, does equal causation.

3.  Wrong

4.  You're wrong

5.  The one thing we agree on, as well as the driver install

I'll agree to disagree.  I would rather try setting a custom fan curve to help with the power limit, and setting a higher power limit, than just automatically assume the GPU is jacked up.  Better to at least research all options before jumping to that conclusion, but according to what you're saying, just wing it and go right to "uh, it's broken".  I'm sincerely sorry, but that's just stupid beyond reason.


----------



## eidairaman1 (Sep 17, 2017)

Put the ego down.


----------



## Aquinus (Sep 17, 2017)

newtekie1 said:


> That isn't how it works, at least not noticeably. The voltage on the GPU doesn't go up because of the temperature. The voltage goes up because the driver is telling the voltage regulator to increase the voltage to keep the GPU stable at the higher clock speeds.
> 
> Yes, if temperatures get really high, then the efficiency will go down and the power draw will increase. However, silicon has to get really really hot for this efficiency to matter to the point that the card would throttle due to the power limit. In fact, the temperature would have to be so high, the GPU would already be throttling due to the temperature. We aren't seeing thermal throttling in the sensor tab, so temperature is not an issue.


I'm going to refute this one. Temperature itself does not increase power draw; however, heat does impact how quickly a transistor can switch states, which is the real thing that needs to be considered. It's not unrealistic that nVidia GPUs factor temperature in when determining what voltage to use. There are only two ways to make a transistor switch faster: a higher driving voltage or a lower operating temperature. Also, Ohm's law states that when resistance is constant, V ∝ I, so any increase in voltage brings a corresponding increase in current, and the heat produced is a function of the square of the current.

So, temperatures might be reasonable, but increased driving voltage will increase current draw. If temperatures are reasonable, that could mean a power limit is hit before a temperature limit, but that isn't to say temperature doesn't factor into the equation, even if it's not the limiting factor.
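To put a number on the point about voltage and power: at constant resistance, I = V/R, so P = V·I = V²/R, and power scales with the square of voltage. A quick worked example (the voltages are purely illustrative):

```python
# P = V**2 / R at fixed resistance, so the power ratio between two
# voltage states is (V_new / V_old)**2. Voltages here are illustrative.
def power_ratio(v_old, v_new):
    """Relative power increase from a voltage bump, resistance held fixed."""
    return (v_new / v_old) ** 2

# A 5% voltage increase costs roughly 10% more power:
print(round(power_ratio(1.00, 1.05), 3))
```

This is why even a small voltage bump from boost can push a card into its power cap well before it reaches any thermal limit.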


----------



## Franckette (Sep 17, 2017)

I use ThrottleStop now because MSI AB doesn't work on my laptop.
With it, everything seems to work.
Do you know if ThrottleStop is dangerous for the GPU?


----------



## newtekie1 (Sep 17, 2017)

Aquinus said:


> I'm going to refute this one. Temperature itself does not increase power draw however, heat does impact how quickly a transistor can switch states which is the real thing that needs to be considered. It's not unrealistic that nVidia GPUs factor temperature in when determining what voltage to use. There are only two ways to make a transistor switch faster which is a higher driving voltage or lower operating temperature. Also, ohm's law states that when resistance is constant that V ∝ I so any increase in voltage will have a corresponding increase in current and heat produced is a function of the square of the current.
> 
> So, temperatures might be reasonable but, increased driving voltage will increase current draw and if temperatures are reasonable, could mean that a power limit would be hit before a temperature limit but, that isn't to say that temperature doesn't factor into the equation even if it's not the limiting factor.



Nothing to refute.  I'm not arguing that heat doesn't affect power draw.  As heat goes up, efficiency goes down, I never said it didn't.  Read the very first sentence again, particularly the "not noticeably" part.

At the temperatures the GPU is operating at, the inefficiencies caused by heat are not enough to make the GPU suddenly start hitting the power limit.  The GPU would have to hit the thermal throttle limit and go beyond it for this to have a noticeable effect on power draw.



Franckette said:


> I use ThrottleStop now because MSI AB doesn't work on my laptop.
> With it, everything seems to work.
> Do you know if ThrottleStop is dangerous?



ThrottleStop, AFAIK, is for the CPU, not the GPU.  If ThrottleStop fixed your performance problem, then the problem was with the CPU, not the GPU, and the GPU hitting the power limit was never the problem.


----------



## Aquinus (Sep 17, 2017)

newtekie1 said:


> Nothing to refute. I'm not arguing that heat doesn't affect power draw. As heat goes up, efficiency goes down, I never said it didn't. Read the very first sentence again, particularly the "not noticeably" part.


Let me clarify since you didn't seem to get exactly what I was trying to point out. I was talking about this comment:


newtekie1 said:


> The voltage on the GPU doesn't go up because of the temperature. The voltage goes up because the driver is telling the voltage regulator to increase the voltage to keep the GPU stable at the higher clock speeds.


The driver is telling it to run at higher voltages against temperature as well, not just clock speed. nVidia's boost takes more factors into account and is more dynamic than, say, AMD's, which really is just what you suggest: clocks mapped to voltages. Simply put, to maintain clocks at higher temperatures, more voltage is usually required to overcome the efficiency losses you mentioned. I'm pointing out that nVidia's boost does actually vary voltage based on temperature, because a cold circuit doesn't require as much voltage to be driven as a hot one. If you're not close to thermal throttling, boost is going to aim for higher clocks and might apply additional voltage if the card is running warmer but not being limited by thermal limits.

Simply put, by itself it wouldn't make a difference, but because of how boost works, a higher temperature would prompt boost to use more voltage to maintain the same clocks, which would increase power consumption.


----------



## newtekie1 (Sep 17, 2017)

Aquinus said:


> Let me clarify since you didn't seem to get exactly what I was trying to point out. I was talking about this comment:
> 
> The driver is telling it to run at higher voltages against temperature as well, not just clock speed. nVidia's boost takes more factors into account and is more dynamic than say, AMD's which is really just as you suggest, clocks mapped to voltages. Simply put, to maintain clocks at higher temperatures, more voltage is usually required to overcome the efficiency losses you mentioned from higher temperatures. I'm more pointing out that nVidia's boost does actually vary voltage based on temperature because a cold circuit doesn't require the same amount of voltage to be driven than a hot one. If you're not close to thermal throttling, boost is going to aim for higher clocks and might apply additional voltage if it's running warmer but, not being limited by thermal limits.
> 
> Simply put, by itself, it wouldn't make a difference but because of how boost works, higher temperature would prompt boost to use more voltage to maintain the same clocks which would increase power consumption.



No, that is not how it works at all.

I decided to prove it instead of just going on and on about it.

This is GPU-Z's sensors tab on my nVidia card after tweaking the fan speed manually to get the temperature to stabilize as close to 60°C as possible:





Note the voltage of 1.043v and the average power consumption of 79.7%.

This is the screen shot after I raised the fan speed to 100% and got the card down to 48°C.





Notice how the voltage is now higher, and the average power consumption is also higher?

Heck, I can turn the fans all the way down and let the card get over 70°C, and the voltage still doesn't go up.


----------



## Aquinus (Sep 17, 2017)

newtekie1 said:


> Notice how the voltage is now higher, and the average power consumption is also higher?


Notice how the core clock is also higher? It's not that black and white.


----------



## newtekie1 (Sep 17, 2017)

Aquinus said:


> Notice how the core clock is also higher? It's not that black and white.



Yes, but if what you said were true, the higher temperature should have produced a higher voltage and higher power consumption.

But obviously boost does not show the behavior you claim.

The fact is, higher temperatures do not lead to a noticeable increase in power consumption. Period.


----------

