# Should I be concerned by vRel and Pwr PerfCaps?



## jfjohnny5 (Sep 3, 2018)

System isn't overclocked at all. Brand new 1080 Ti just installed last Thursday. Was running fine all weekend, then this morning I got some hard lockups in Vermintide II. Installed GPU-Z to have a look. During gameplay, seeing a Power Consumption of ~93-103% TDP and PerfCap Reasons of 16, 1, and 4. 16 is present during Idle. Not concerned there. I looked up the other two and found:

_vRel_ = Reliability. Indicating performance is limited by voltage reliability.
_Pwr_ = Power. Indicating performance is limited by total power limit. 

My power supply isn't new, but it's not some cheap off-brand either: a Corsair HX Professional Series 850W. Gameplay itself was smooth, 60+ fps no problem on max settings @ 1440p. I'll also say that after uninstalling and reinstalling the video drivers, as well as a reinstall of Vermintide II, I ran through a couple of missions with no crashes (knock on wood). 

Question: should I be concerned by the PerfCap Reasons I'm seeing? Is there other testing anyone might suggest?


----------



## John Naylor (Sep 3, 2018)

Corsair HX850 is one of the best units Corsair ever made.... but would still recommend monitoring voltage on each rail w/ HWiNFO

Which 1080 Ti ?


----------



## jfjohnny5 (Sep 3, 2018)

John Naylor said:


> Corsair HX850 is one of the best units Corsair ever made.... but would still recommend monitoring voltage on each rail w/ HWiNFO


Ok, never used that before. Just grabbed it. Looking at the Sensor stats for the motherboard, I see readings for the +5V, +3.3V and +12V rails. Assuming I just want to make sure those stay pretty stable, right?



John Naylor said:


> Which 1080 Ti ?


Gigabyte AORUS 1080 Ti 11G _(not the Extreme edition - couldn't justify the extra cost)_

So here are the results after an hour of GTA V and Vermintide II:

- +3.3V rail: 3.376 V at idle; 3.328 V under load
- +5V rail: 5.040 V at idle; 5.016 V under load
- +12V rail: 12.096 V at idle; 11.904 V under load

Those numbers were steady too. Very little fluctuation. 
The whole time, GPU-Z was reporting a nearly constant _vRel_ - but again, games ran smoothly. No crashes.

Seem at all like there's an issue?
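
For reference, the rail readings above can be checked against the ATX spec's ±5% tolerance on each rail. A quick sanity-check sketch (the 5% tolerance figure comes from the ATX spec, not from anything in this thread):

```python
# ATX spec allows +/-5% deviation on the +3.3V, +5V and +12V rails
RAILS = {"+3.3V": 3.3, "+5V": 5.0, "+12V": 12.0}
TOLERANCE = 0.05

def in_spec(rail, measured):
    """True if a measured voltage is within the ATX +/-5% band for its rail."""
    nominal = RAILS[rail]
    return abs(measured - nominal) <= nominal * TOLERANCE

# The under-load readings reported above
readings = {"+3.3V": 3.328, "+5V": 5.016, "+12V": 11.904}
for rail, volts in readings.items():
    print(rail, volts, "OK" if in_spec(rail, volts) else "OUT OF SPEC")
```

All three load readings land comfortably inside the band (the +12V rail, for example, may sag to 11.4 V before it's out of spec), which matches the "very little fluctuation" observation.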


----------



## newtekie1 (Sep 4, 2018)

Both Pwr and vRel are normal to see on an nVidia GPU.  They have nothing to do with your power supply.

These are reasons why the card is not boosting to a higher clock speed.  Pwr means you are hitting the TDP limit of your card; vRel just means it is at the maximum clock speed it considers stable at the current GPU voltage.
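
For anyone who wants to read these limiter flags programmatically, NVIDIA's NVML library exposes the same information as a bitmask via `nvmlDeviceGetCurrentClocksThrottleReasons`. Note that GPU-Z displays the limiters under its own names and bit numbering (Pwr, vRel, Util, ...), so the constants below are NVML's documented values, not GPU-Z's. A minimal decoder sketch:

```python
# NVML clocks-throttle-reason bitmask constants (from the NVML API docs).
# A subset is shown; GPU-Z reports the same limiters under its own labels.
THROTTLE_REASONS = {
    0x01: "GpuIdle",                   # clocks dropped because the GPU is idle
    0x02: "ApplicationsClocksSetting", # user-set application clocks
    0x04: "SwPowerCap",                # software power limit (GPU-Z's 'Pwr')
    0x08: "HwSlowdown",                # hardware slowdown (thermal/power brake)
    0x20: "SwThermalSlowdown",         # software thermal slowdown
}

def decode(mask):
    """Return the names of all throttle reasons set in an NVML bitmask."""
    return [name for bit, name in THROTTLE_REASONS.items() if mask & bit]

print(decode(0x05))  # ['GpuIdle', 'SwPowerCap']
```

On a live system you would fetch the mask with the `pynvml` bindings and feed it to `decode`; seeing `SwPowerCap` set under load is the NVML equivalent of GPU-Z's Pwr cap and is entirely normal.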


----------



## R-T-B (Sep 4, 2018)

jfjohnny5 said:


> Seem at all like there's an issue?



Nope, not really.


----------



## hat (Sep 4, 2018)

PWR is normal to see. My cards are capped out by PWR all the time. It just means that the card isn't allowed to draw any more power by design, so that's capping performance. You can alleviate this somewhat by installing MSI Afterburner and increasing the power target (some cards allow more than others). Mine will do 112%, but, as a miner, I purposely set it to 80% for a number of reasons.
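
To put those percentages in watts: the power target is just a scaling factor on the card's rated board power. A quick sketch assuming the reference 1080 Ti's 250 W TDP (an assumption; the thread doesn't state the card's rating):

```python
TDP_W = 250  # reference GTX 1080 Ti board power (assumed, not from the thread)

def power_limit_watts(target_pct):
    """Board power ceiling in watts for a given power-target percentage."""
    return TDP_W * target_pct / 100

print(power_limit_watts(112))  # 280.0 W at the 112% maximum mentioned above
print(power_limit_watts(80))   # 200.0 W at the 80% mining setting
```

So moving the slider from 100% to 112% only buys about 30 W of extra headroom on a reference-rated card, which is why Pwr caps still show up so often under load.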

I thought vREL was short for voltage regulation... in other words, you're not hitting power limits (or temp limits) but the card will only supply so much voltage.

All thanks to GPU Boost. It takes a number of factors into consideration and automatically overclocks the card as long as everything checks out. It's a great way for nVidia to squeeze more performance out of their cards for 95% of people... but for those of us who like to overclock our stuff, it's a bit of a hindrance. The same factors GPU Boost checks to decide whether it can push higher are the same ones we run into when we try to tweak things ourselves. 

Hmm... that said, that gives me an idea. Board partners could maybe put out cards with a switch (like a dual BIOS switch) that disables GPU Boost while removing all limits. Maybe with a little nuke icon on it.


----------



## newtekie1 (Sep 4, 2018)

hat said:


> Hmm... that said, that gives me an idea. Board partners could maybe put out cards with a switch (like a dual BIOS switch) that disables GPU Boost while removing all limits. Maybe with a little nuke icon on it.



I'm pretty sure nVidia learned their lesson back in the Fermi days when they left everything unlocked. People figured since the option for the voltage to go that high was there, it must be fine, and then bitched when their cards died...  And nVidia caught a lot of flak for cards popping VRMs and such.  After that, nVidia pretty much said no more to totally unlocked power on cards.


----------



## hat (Sep 4, 2018)

newtekie1 said:


> I'm pretty sure nVidia learned their lesson back in the Fermi days when they left everything unlocked. People figured since the option for the voltage to go that high was there, it must be fine, and then bitched when their cards died...  And nVidia caught a lot of flak for cards popping VRMs and such.  After that, nVidia pretty much said no more to totally unlocked power on cards.



Yeah, I understand that... and TBH, as much shit as I've slung at nVidia recently, I can't really blame them. I've seen too many people blow something up and it be completely their fault and then just go and return it for a new one... but that still doesn't mean I don't want to be able to smash those barriers at my own risk.

Hm... I also don't like wasteful things, so maybe it's better this way. There would be a lot more unnecessarily dead cards if GPU Boost didn't exist.


----------



## newtekie1 (Sep 4, 2018)

hat said:


> Yeah, I understand that... and TBH, as much shit as I've slung at nVidia recently, I can't really blame them. I've seen too many people blow something up and it be completely their fault and then just go and return it for a new one... but that still doesn't mean I don't want to be able to smash those barriers at my own risk.
> 
> Hm... I also don't like wasteful things, so maybe it's better this way. There would be a lot more unnecessarily dead cards if GPU Boost didn't exist.



I'd just like to see nVidia give the actual card manufacturers more room to decide how hard they want to let their cards be pushed.  There's no reason that a 1080 Ti Strix, with its significantly beefed-up VRM and cooler, should be limited to only a 10% increase in board power over the stock 1080 Ti.  ASUS should have been able to allow at least a 50% increase, but I'm sure nVidia wouldn't let them do that.


----------



## R-T-B (Sep 4, 2018)

Honestly, it's easy.  NVIDIA should just use the very BIOS-tweaking program they designed, implemented, and promptly abandoned.  It was set up so you basically had to submit your serial and void your warranty to mod the BIOS and remove GPU Boost.  They had everything in place but just gave up.


----------



## hat (Sep 4, 2018)

R-T-B said:


> Honestly, it's easy.  NVIDIA should just use the very BIOS-tweaking program they designed, implemented, and promptly abandoned.  It was set up so you basically had to submit your serial and void your warranty to mod the BIOS and remove GPU Boost.  They had everything in place but just gave up.


I would actually do that. I used to flash my vBIOS all the time, just to modify the fan curves and instill whatever clock speeds I wanted after finding them with Afterburner or something. Then I could uninstall that program and my card would run how I wanted it to forever, even if I moved it to another machine.


----------

