# Disable GeForce GTX 580 Power Throttling using GPU-Z



## btarunr (Nov 13, 2010)

NVIDIA shook the high-end PC hardware industry earlier this month with the surprise launch of its GeForce GTX 580 graphics card, which extended the lead NVIDIA has been holding in single-GPU performance. It also managed some great performance-per-watt improvements over the previous generation. The reference-design board, however, uses clock-speed throttling logic that reduces clock speeds when an extremely demanding 3D application such as Furmark or OCCT is run. While this is a novel way to protect components, saving consumers from potentially permanent hardware damage, it is a gripe for expert users, enthusiasts, and overclockers who know what they're doing.

GPU-Z developer and our boss W1zzard has devised a way to make disabling this protection accessible to everyone (who knows what he's dealing with), and came up with a nifty new feature for GPU-Z, our popular GPU diagnostics and monitoring utility, that can disable the speed-throttling mechanism. It is a new command-line argument for GPU-Z: "/GTX580OCP". Start the GPU-Z executable (within Windows, using Command Prompt or a shortcut) with that argument, for example "X:\gpuz.exe /GTX580OCP", and it will disable the clock-speed throttling mechanism. Throttling stays disabled for the remainder of the session, even after you close GPU-Z, and is re-enabled on the next boot.
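Since the override resets on every reboot, one convenient setup is a small batch file that re-applies it each session. This is just a sketch: the install path below is an assumption, so point it at wherever your copy of the GPU-Z test build actually lives.

```shell
@echo off
rem Hypothetical path -- adjust to your actual GPU-Z test-build location.
rem Re-applies the OCP override for this session; GPU-Z itself can be
rem closed afterwards and the override stays active until the next boot.
"C:\Tools\gpuz.exe" /GTX580OCP
```

Save it as something like `gtx580ocp.bat` and drop it (or a shortcut to it) into the Start Menu's Startup folder to re-apply the override automatically after each boot.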



 




As an obligatory caution: be sure you know what you're doing. TechPowerUp is not responsible for any damage caused to your hardware by disabling this mechanism. Running the graphics card outside its power specifications may result in damage to the card or motherboard. We have a test build of GPU-Z (which otherwise carries the exact same feature set as GPU-Z 0.4.8). We also ran a power consumption test on our GeForce GTX 580 card demonstrating how disabling this logic affects power consumption.

*DOWNLOAD:* TechPowerUp GPU-Z GTX 580 OCP Test Build

*View at TechPowerUp Main Site*


----------



## 1c3d0g (Nov 13, 2010)

Very, very interesting. This is kind of like overriding the Governor chip/rev limiter on a car to push it past factory-tested speeds. Great work, all involved!


----------



## qubit (Nov 13, 2010)

This is a great feature, and yet more kudos to W1zzard for putting it in.

However, as the throttling is designed to prevent hardware damage to card and mobo, how is this going to be prevented when the card is run past its limit?


----------



## crow1001 (Nov 13, 2010)

Maybe it was just to disguise the fact that the 580 has a much higher TDP than the 480... 350 watts.


----------



## erocker (Nov 13, 2010)

qubit said:


> This is a great feature, and yet more kudos to W1zzard for putting it in.
> 
> However, as the throttling is designed to prevent hardware damage to card and mobo, how is this going to be prevented when the card is run past its limit?



I would assume it's up to the "expert users, enthusiasts and overclockers" to monitor these things if they are going to do this. If you disable throttling you should know the risks. Temperature throttling is most likely still in place. Correct me if I'm wrong though.


----------



## Kursah (Nov 13, 2010)

erocker said:


> I would assume it's up to the user to monitor these things if they are going to do this. If you disable throttling you should know the risks. Temperature throttling is most likely still in place. Correct me if I'm wrong though.



+1

All hand-holding is disabled once you cross that line; choose this option and take the big-boy route, and expect to take the consequences head-on. Don't expect to point fingers when you smoke your card using this to remove throttling... taking ownership of mistakes along with successes is required of the owner of said product.

I am curious to see more about these cards de-throttled, how hot they get, how long they last, etc.


----------



## qubit (Nov 13, 2010)

erocker said:


> I would assume it's up to the "expert users, enthusiasts and overclockers" to monitor these things if they are going to do this. If you disable throttling you should know the risks. Temperature throttling is most likely still in place. Correct me if I'm wrong though.



I really dunno the answer to this one, erocker. By the sound of it, it's not just temperature damage that's the problem. If excess current is running through the card and/or mobo, then damage can result if it can't take it, regardless of how well temperature is kept down.

I can see lots of card RMAs going back, all with suspiciously similar faults to the power circuitry, or whatever and the user denying all knowledge of disabling the failsafe. 

I reckon a mini review by W1zzard on how to do this properly would really help us enthusiasts to minimize the risk.


----------



## erocker (Nov 13, 2010)

qubit said:


> I really dunno the answer to this one, erocker. By the sound of it, it's not just temperature damage that's the problem. If excess current is running through the card and/or mobo, then damage can result if it can't take it, regardless of how well temperature is kept down.
> 
> I can see lots of card RMAs going back, all with suspiciously similar faults to the power circuitry, or whatever and the user denying all knowledge of disabling the failsafe.
> 
> I reckon a mini review by W1zzard on how to do this properly would really help us enthusiasts to minimize the risk.



Really though, it just comes down to the same thing as overclocking anything else in your system. Watch temps, voltage, etc. Nothing new.


----------



## RejZoR (Nov 13, 2010)

Expect more burned GTX 580 cards now...


----------



## wahdangun (Nov 13, 2010)

qubit said:


> This is a great feature, and yet more kudos to W1zzard for putting it in.
> 
> However, as the throttling is designed to prevent hardware damage to card and mobo, how is this going to be prevented when the card is run past its limit?



Then every GTX 480 that W1zz reviewed would have been broken, if that were the case, because the GTX 480 uses more wattage and current than the GTX 580.


----------



## qubit (Nov 13, 2010)

wahdangun said:


> Then every GTX 480 that W1zz reviewed would have been broken, if that were the case, because the GTX 480 uses more wattage and current than the GTX 580.



I don't see how that can be the case.


----------



## W1zzard (Nov 13, 2010)

this should not affect temperature protection, which will remain at 97°C



qubit said:


> However, as the throttling is designed to prevent hardware damage to card and mobo, how is this going to be prevented when the card is run past its limit?



it won't. just as any card other than the gtx 580 doesn't have this kind of protection either


----------



## wahdangun (Nov 13, 2010)

qubit said:


> I don't see how that can be the case.



Because the GTX 480 doesn't have this throttling mechanism; that's why it can reach 300 watts in Furmark, and NVIDIA doesn't like that fact.


----------



## evillman (Nov 13, 2010)

With this thing disabled, can we expect higher overclocks?


----------



## hat (Nov 13, 2010)

Is there a way to BIOS flash it out?


----------



## meran (Nov 13, 2010)

wow i bet the power circuit will die in 3 months after that


----------



## W1zzard (Nov 13, 2010)

meran said:


> wow i bet the power circuit will die in 3 months after that



if you run 3 months of furmark .. probably yes. typical gaming will never run into the power limit so "no power limit" will not make any difference


----------



## qubit (Nov 13, 2010)

W1zzard said:


> this should not affect temperature protection, which will remain at 97°C
> 
> 
> 
> *it won't. just as much as any card other than gtx 580 does not have this kind of protection either*



True, but no other previous cards have used as much power as the 480 & 580, which makes a burnout more likely.

And it looks like it could take out other parts of the PC with it, which was very unlikely previously.

You know what this all looks like to me? We're hitting another performance bottleneck due to excessive power use. This happened a few years ago with CPUs, preventing clock speed from reaching ever higher and this is looking like the same thing.

This new card is what, 15-30% faster than the old one? I'll bet the new ATI card will be faster than its predecessor by a similar amount, all due to this unfortunate limit. The fact that these cards have to fit within a particular physical form factor and power usage envelope won't help either.

I reckon the days of next gen cards doubling in power over their predecessors are over.


----------



## meran (Nov 13, 2010)

W1zzard said:


> if you run 3 months of furmark .. probably yes. typical gaming will never run into the power limit so "no power limit" will not make any difference



Sure, but 350 watts? My whole PC consumes 330.


----------



## W1zzard (Nov 13, 2010)

qubit said:


> True, but no other previous cards have used as much power as the 480 & 580, which makes a burnout more likely.



when properly designed, a high power consumption design will work just as well as a low power design. and you can bet nvidia and amd have the best people in the world to figure out this kind of stuff


----------



## Taskforce (Nov 13, 2010)

Seems like today's standard is high temps and high wattage; if so, I want nothing to do with it. Full load on a GTX 460 at 65-70°C leaves me a little disappointed; when I see high-end cards doing under 40°C at full load, call me impressed.


----------



## Zubasa (Nov 13, 2010)

qubit said:


> True, but no other previous cards have used as much power as the 480 & 580, which makes a burnout more likely.


The 4870X2 uses more power under furmark than the GTX580.


----------



## wahdangun (Nov 13, 2010)

qubit said:


> True, but no other previous cards have used as much power as the 480 & 580, which makes a burnout more likely.
> 
> And it looks like it could take out other parts of the PC with it, which was very unlikely previously.
> 
> ...



That's why right now ATI focuses on multi-GPU scaling with their dual-GPU cards to achieve higher performance, which can make the entire lineup much faster, and I hope with 28nm they can enable that SidePort thing.


----------



## qubit (Nov 13, 2010)

wahdangun said:


> That's why right now ATI focuses on multi-GPU scaling with their dual-GPU cards to achieve higher performance, which can make the entire lineup much faster, and I hope with 28nm they can enable that SidePort thing.



Hmmm yeah, +1 on that.


----------



## avatar_raq (Nov 13, 2010)

3 words: W1zzard u rock!


----------



## claylomax (Nov 13, 2010)

meran said:


> Sure, but 350 watts? My whole PC consumes 330.



My whole PC consumes 360W when gaming, and that's at the wall; the PC itself would consume around 317W assuming 88% PSU efficiency.
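That back-of-the-envelope conversion is just wall draw times PSU efficiency to get the DC-side draw (the 88% efficiency figure is claylomax's own assumption about his PSU):

```shell
# 360 W measured at the wall with an 88%-efficient PSU:
# DC-side draw = wall draw x efficiency = 360 x 0.88
awk 'BEGIN { printf "%.0f W\n", 360 * 0.88 }'   # prints "317 W"
```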


----------



## T3RM1N4L D0GM4 (Nov 13, 2010)

I heard W1z is magic.... isn't it?


----------



## claylomax (Nov 13, 2010)

T3RM1N4L D0GM4 said:


> I heard W1z is magic.... isn't it?



W1z is short for W1zzard, the administrator of the TechPowerUp website. Wizzard were a 1970s UK rock band; one of their hits is popular this time of year here in the UK: http://www.youtube.com/watch?v=ZoxQ4Ul_DME


----------



## W1zzard (Nov 13, 2010)

T3RM1N4L D0GM4 said:


> I heard W1z is magic.... isn't it?



but don't tell anyone or they might tax it


----------



## HTC (Nov 13, 2010)

I just have a couple of questions:

- Are there any performance gains when not limiting?

- Is it worth it to remove the limiter?


----------



## LAN_deRf_HA (Nov 13, 2010)

This isn't going to burn anything out. The best use of this would be 15-20 minute stress testing sessions to ensure your overclock stability. Even doing it a dozen times isn't going to hurt anything, and you're unlikely to need to do it any more often than that.

Funny though, 350W makes it seem like the card isn't any more power efficient at all.


----------



## T3RM1N4L D0GM4 (Nov 13, 2010)

W1zzard said:


> but don't tell anyone or they might tax it



Sure, it will be our little secret 

Back on topic: nice performance-per-watt ratio for this 580, really, compared to the 'old' GTX 480... the 480 has no power-throttling cheat, right?


----------



## a_ump (Nov 13, 2010)

So basically, NVIDIA implemented a 2nd throttle at the software level to make power consumption levels of the GTX 580 look lower? That's what I'm getting out of this. Of course, we need to wait and see what results other GTX 580 users get.


----------



## segalaw19800 (Nov 13, 2010)

Wonder if you RMA a burned card, will they know that power throttling was disabled???


----------



## Hayder_Master (Nov 13, 2010)

Great work, W1zzard, nice job. I don't like crappy cards without overclocking.


----------



## Trigger911 (Nov 13, 2010)

This is some awesome news... is this the card they previewed in the link below? If so, this is awesome...

http://www.youtube.com/watch?v=eYJh5YVgDZE


----------



## Steevo (Nov 13, 2010)

Nvidia: they didn't like wood screws, so we found another use for them.



We put a wood block under the throttle.


----------



## wiak (Nov 13, 2010)

be prepared to destroy your PSU


----------



## the54thvoid (Nov 13, 2010)

a_ump said:


> So basically, NVIDIA implemented a 2nd throttle at the software level to make power consumption levels of the GTX 580 look lower? That's what I'm getting out of this. Of course, we need to wait and see what results other GTX 580 users get.



I read somewhere that it's a software implementation for now, only for Furmark and OCCT, so it shouldn't be active with anything else. Did I read this right?


----------



## Bundy (Nov 13, 2010)

W1zzard said:


> this should not affect temperature protection, which will remain at 97°C
> 
> 
> 
> it won't. just as much as any card other than gtx 580 does not have this kind of protection either



Is that temperature protection based on multiple sensors (GPU, VRM) or GPU only? If the power limiting is intended to protect the VRM rather than the GPU, this mod may push users' cards closer to failure than they expect. If the protection is aimed at the GPU, then the temp limit will work OK.

As was advised, user beware is the relevant issue here.


----------



## HillBeast (Nov 13, 2010)

qubit said:


> True, but no other previous cards have used as much power as the 480 & 580, which makes a burnout more likely.



What everyone fails to remember is that supercomputers, especially ones based on the more modern POWER chips (POWER6 and POWER7), have very powerful and very hot CPUs in them. They draw well over 300W each, and those puppies are at full load for months and months on end. Yes, they are designed to handle it, but if a card is designed correctly, it can easily handle being at full load.

This throttling system wasn't NVIDIA saying the cards can't go higher than what they rated them for; it's NVIDIA trying to make the card look like it's not as power-hungry a card as it is.


----------



## Imsochobo (Nov 14, 2010)

HillBeast said:


> What everyone fails to remember is that supercomputers (especially ones based on the more modern POWER chips (POWER6 and POWER7) have very powerful and very hot CPUs in them. They draw well over 300W each and those puppies are at full load for months and months on end. Yes they are designed to handle it, but if a card is designed correctly, it can easily handle being at full load.
> 
> This throttling system wasn't NVIDIA saying the cards can't go higher than what they rated them for, it's NVIDIA just trying to make the card look like it's not as high of a power hungry card.



My server board runs 130W CPUs and has two phases each; no big cooling on the PWM either. It's an 8-CPU board.

So your 16 phases... do you need them? I've taken world records on 4! The average Joe doesn't need so many phases and all that crap. Motherboards can be cheap; stock performance rarely differs. Overclocking, on the other hand, may require some expensive kit, plus CF and SLI.

Back on topic, it's funny though: 350W. ATI manages to push a dual-GPU card, with the lowered perf/watt due to scaling and a double set of memory with one half doing nothing, and yet it has better perf/watt. I hope ATI's engineers are getting a little bonus!


----------



## qubit (Nov 14, 2010)

HillBeast said:


> What everyone fails to remember is that supercomputers (especially ones based on the more modern POWER chips (POWER6 and POWER7) have very powerful and very hot CPUs in them. They draw well over 300W each and those puppies are at full load for months and months on end. Yes they are designed to handle it, but if a card is designed correctly, it can easily handle being at full load.
> 
> *This throttling system wasn't NVIDIA saying the cards can't go higher than what they rated them for, it's NVIDIA just trying to make the card look like it's not as high of a power hungry card.*



That would be nice if it were true, but I don't think nvidia would spend money implementing and building in a performance-limiting feature (frame rate drops when it kicks in) just to put a certain idea in people's minds.

As W1zzard said to me earlier and you did just now, the card can consume any amount of power and run just fine with it, as long as the power circuitry and the rest is designed for it.

And that's the rub.

Everything is built to a price. While those POWER computers are _priced_ to run flat out 24/7 (and believe me, they really charge for this capability), a power-hungry consumer-grade item, including the expensive GTX 580, is not. So, the card will only gobble huge amounts of power for any length of time when an enthusiast overclocks it and runs something like FurMark on it. Now, how many of us do you think there are to do this? Only a tiny handful. Heck, even out of the group of enthusiasts, only some will ever bother to do this. The rest of us (likely me included) are happy to read the articles about it and avoid unnecessarily stressing out our expensive graphics cards. I've never overclocked my current GTX 285, for example. I did overclock my HD 2900 XT though.

The rest of the time, the card will be either sitting at the desktop (hardly taxing) or running a regular game at something like 1920x1080, which won't stress it anywhere near this amount. So nvidia are gonna build it to withstand this average stress reliably. Much more and reliability drops significantly.

The upshot is that they're gonna save money on the _quality_ of the board used for the card, its power components and all the other things that would take the strain when it's taxed at high power. This means that Mr Enthusiast over here at TPU is gonna kill his card rather more quickly than nvidia would like and generate lots of unprofitable RMAs. Hence, they just limit the performance and be done with it. Heck, it also helps to guard against the clueless wannabe enthusiast who doesn't know what he's doing and maxes the card out in a hot, unventilated case.

Of course, now that there's a workaround, some enthusiasts are gonna use it...

And dammit, all this talk of GTX 580s is really making me want one!!


----------



## lism (Nov 14, 2010)

So basically the card is capped at a certain level of power usage, but will it increase performance in Furmark as soon as this trigger is set off?

Or is it just there to protect the VRMs from abnormal power usage burning them out? A few GTXes were also reported with fried VRMs from using Furmark.


----------



## qubit (Nov 14, 2010)

lism said:


> So basicly the card is capped at a certain level of power usage, but will it increase performance in furmark as soon as this trigger is being set off?
> 
> Or is just just abnormal power usage by the VRM's to protect them from burning out ? A few GTX's where also reported with fried VRM's using Furmark.



Performance will increase noticeably as soon as the cap is removed.

EDIT: You might also want to read my post, the one before yours that explains why this kind of limiter is being put in.


----------



## Wile E (Nov 14, 2010)

qubit said:


> That would be nice if it's true, but I don't think nvidia would spend money implementing and building in a performance limiting feature (frame rate drops when it kicks in) just to put a certain idea in people's minds.
> 
> As W1zzard said to me earlier and you did just now, the card can consume any amount of power and run just fine with it, as long as the power circuitry and the rest is designed for it.
> 
> ...


I don't think the limit is there to protect the card. I think it is there to make power consumption numbers look better, and to allow them to continue to claim compliance with PCI-SIG specs for PCIe power delivery.


----------



## HillBeast (Nov 14, 2010)

Imsochobo said:


> My server board runs 130W CPUs and has two phases each; no big cooling on the PWM either. It's an 8-CPU board.
> 
> So your 16 phases... do you need them? I've taken world records on 4! The average Joe doesn't need so many phases and all that crap. Motherboards can be cheap; stock performance rarely differs. Overclocking, on the other hand, may require some expensive kit, plus CF and SLI.



What on earth are you on about with 16 power phases? I was talking about POWER7: the IBM PowerPC CPU used in supercomputers. Those are VERY power-hungry chips. Your 130W server chip wouldn't compare to the performance these things provide and the power these things need. Why did you quote me when you weren't even remotely talking about the same topic as me?

And I don't have 16 power phases on my motherboard, if that is what you were referring to. Power phases mean nothing; it's how they are implemented. My Gigabyte X58A-UD3R with 8 analog power phases can overclock HIGHER than my friend's EVGA X58 Classified with 10 digital power phases.



Wile E said:


> I don't think the limit is there to protect the card. I think it is there to make power consumption numbers look better, and to allow them to continue to claim compliance with PCI-SIG specs for PCIe power delivery.



Exactly.


----------



## qubit (Nov 14, 2010)

Wile E said:


> I don't think the limit is there to protect the card. I think it is there to make power consumption numbers look better, and to allow them to continue to claim compliance with PCI-SIG specs for PCIe power delivery.



Hmmm... the compliance angle sounds quite plausible. Does anyone have inside info on why nvidia implemented this throttle?

The built to a price argument still stands though.


----------



## Wile E (Nov 14, 2010)

qubit said:


> Hmmm... the compliance angle sounds quite plausible. Does anyone have inside info on why nvidia implemented this throttle?
> 
> *The built to a price argument still stands though.*



Not really, because the boards and power phases are more than enough to support the power draw of the unlocked cards. We already know what the components are capable of, and they are more than enough for 350W of draw.

If anything, adding throttling has added to the price of the needed components.


----------



## newtekie1 (Nov 14, 2010)

What amazes me is how many people think this is some major limiter that will hinder performance or kick in when the card goes over a certain current level.

It is software based; it detects OCCT and Furmark and that is it. It will not affect any other program at all. Anyone remember ATi doing this with their drivers so that Furmark wouldn't burn up their cards?




qubit said:


> True, but no other previous cards have used as much power as the 480 & 580, which makes a burnout more likely.



Ummmm...there most certainly has been.



Wile E said:


> I don't think the limit is there to protect the card. I think it is there to make power consumption numbers look better, and to allow them to continue to claim compliance with PCI-SIG specs for PCIe power delivery.



I really have a hard time believing that they did it to make power consumption look better. Any reviewer should pick up right away on the fact that under normal gaming load the card is consuming ~225W while under Furmark it is only consuming ~150W. Right there it should throw up a red flag, because Furmark consumption should never be drastically lower, or lower at all, than normal gaming numbers. Plus, the performance difference in Furmark would be pretty evident to a reviewer who sees Furmark performance numbers daily. Finally, with the limiter turned off the power consumption is still lower, and for a Fermi card that is within 5% of the HD 5970 to have pretty much the same power consumption is an impressive feat that doesn't need to be artificially enhanced.


----------



## KainXS (Nov 14, 2010)

I have been wondering: when non-reference versions of the GTX 580 start coming out, what's the chance that some of those non-reference cards don't even use the throttling chips and just run at full blast?


----------



## newtekie1 (Nov 14, 2010)

KainXS said:


> I have been wondering, when non reference versions of the GTX580 start coming out whats the chance that some of those non reference card don't even use the chips to throttle and run at full blast.



I doubt we'll see non-reference GTX580s any time soon.  I mean how many non-reference GTX480s are there?  I can really only think of 2, and I'm sure that is because nVidia just lifted the restrictions on the PCB design.


----------



## qubit (Nov 14, 2010)

newtekie1 said:


> Ummmm...there most certainly has been.



Yeah, Zubasa reminded me here: 



Zubasa said:


> The 4870X2 uses more power under furmark than the GTX580.



And that card came out quite some time ago, too. Guess that's why I'd forgotten about it.


----------



## newtekie1 (Nov 14, 2010)

qubit said:


> Yeah, Zubasa reminded me here:
> 
> 
> 
> And that card came out quite some ago, too. Guess that's why I'd forgotten about it.



Yep, and the GTX480 outperformed it while using less power, yet everyone wanted to harp on the GTX480 for being so power hungry, but I doubt any of them even bats an eye at the HD4870x2's power consumption...


----------



## qubit (Nov 14, 2010)

newtekie1 said:


> I doubt we'll see non-reference GTX580s any time soon.  I mean how many non-reference GTX480s are there?



There's the MSI Lightning:







Product page.


----------



## KainXS (Nov 14, 2010)

I guess there will never be a dual 580 then . . . . . .


----------



## newtekie1 (Nov 14, 2010)

qubit said:


> There's the MSI Lightning:
> 
> http://img.techpowerup.org/101113/N480gtx_1ightning_1.jpg
> 
> Product page.



Yes, that is one of the 2 that I said I can think of, the other one being from Gigabyte.


----------



## LAN_deRf_HA (Nov 14, 2010)

Palit/gainward often have non-reference pcbs out pretty quickly.


----------



## Over_Lord (Nov 14, 2010)

KainXS said:


> I guess there will never be a dual 580 then . . . . . .



Everybody needs heaters in winter, so don't worry, all hope ain't lost!


----------



## Mussels (Nov 14, 2010)

qubit said:


> This is a great feature and yet more  to W1zzard for putting it in.
> 
> However, as the throttling is designed to prevent hardware damage to card and mobo, how is this going to be prevented when the card is run past its limit?



It's probably best not to do this if you're on stock cooling. With a full-coverage waterblock, for example, it ought to be safe to mess around with this.


----------



## OneMoar (Nov 14, 2010)

And the point of this is?

It won't help overclocking any, and it puts EVEN greater stress on already overtaxed components, and on your wallet.
edit:
And you WILL blow the MOSFETs off the card if you push those little things any harder. They're not meant to handle that high a sustained load, and that's why NVIDIA put the limiter in place.


----------



## ty_ger (Nov 14, 2010)

newtekie1 said:


> What amazes me is how many people think this is some major limiter that will hinder performance or kick in when the card goes over a certain current level.
> 
> *It is software based; it detects OCCT and Furmark and that is it.* It will not affect any other program at all. Anyone remember ATi doing this with their drivers so that Furmark wouldn't burn up their cards?



No, OCCT and Furmark are only examples of the types of programs which trigger the OCP.  They *never* said that *only* OCCT and Furmark triggered the OCP.  It appears that NVIDIA has been pretty thorough in adding "artificial load" programs to the list of programs which trigger the OCP.

So far, OCCT, Furmark, EVGA OC Scanner, and Kombustor are confirmed to trigger the OCP cap.  I am sure there are more that I am not aware of.  GPU Tool?  ATItool?



OneMoar said:


> And you WILL blow the MOSFETs off the card if you push those little things any harder.
> They're not meant to handle that high a sustained load, and that's why NVIDIA put the limiter in place.


 
You know this for a fact?  Do you have a link to the MOSFET datasheet?


----------



## HillBeast (Nov 14, 2010)

Why are people going on about how this is so dangerous? You're all acting as if we've never used FurMark before.

"Oh noez! This program will put a graphics card with a lower TDP than its predecessor to its limits! MOSFETs will blow up!"

How many GF100s blew up from Furmark? I am not saying none, but the number will be VERY low. Stop your whining. The GF110 has a much lower TDP and will put less stress on the MOSFETs than the GF100 did, and as far as I am aware it uses roughly the same, if not better, power circuitry than the GTX 480.

SHUDDAP! IF YOU DON'T LIKE IT THEN STOP YOUR TROLLING!


----------



## bakalu (Nov 14, 2010)

btarunr said:


> GPU-Z developer and our boss W1zzard has devised a way to make disabling this protection accessible to everyone (who knows what he's dealing with), and came up with a nifty new feature for GPU-Z, our popular GPU diagnostics and monitoring utility, that can disable the speed throttling mechanism. It is a new command-line argument for GPU-Z, that's "/GTX580OCP". Start the GPU-Z executable (within Windows, using Command Prompt or shortcut), using that argument, and it will disable the clock speed throttling mechanism. For example, "X:\gpuz.exe /GTX580OCP" It will stay disabled for the remainder of the session, you can close GPU-Z. It will be enabled again on the next boot.
> 
> *DOWNLOAD:* TechPowerUp GPU-Z GTX 580 OCP Test Build


Sorry, my English is not good.

I have read your article and followed it, and here are the test results of my ASUS GTX 580.

*ASUS GTX 580 @ 810/1013, vCORE=1.075V, FAN SET 85%, ROOM TEMP 30°C*

Maximum Temp with Furmark - *70°C*





Maximum Temp with Crysis Warhead (I played the Train and Airfield maps of Crysis Warhead for 30 minutes), FAN SET 75% - *81°C*





Maximum Power Consumption of Core i7 965 @ 3.6GHz + ASUS GTX 480 when running Furmark





Maximum Power Consumption of Core i7 965 @ 3.6GHz + ASUS GTX 580 when running Furmark





Maximum Power Consumption of Core i7 965 @ 3.6GHz + ASUS GTX 580 when playing Crysis Warhead


----------



## Bjorn_Of_Iceland (Nov 14, 2010)

Anyone know how to make the fan run @ 100%? It seems capped at 85% in Precision...


----------



## HTC (Nov 14, 2010)

@ bakalu: Any chance you could rename the Furmark EXE to whatever you like and run it again with your 580? If at any time you see the temp rising too much, please interrupt the program, but do post a screenie after.


----------



## knopflerbruce (Nov 14, 2010)

I assume that folding@home works properly, even when working on the toughest WUs?


----------



## newtekie1 (Nov 14, 2010)

ty_ger said:


> No, OCCT and Furmark are only examples of the types of programs which trigger the OCP. They never said that only OCCT and Furmark triggered the OCP. It appears that NVIDIA has been pretty thorough in adding "artificial load" programs to the list of programs which trigger the OCP.
> 
> So far, OCCT, Furmark, EVGA OC Scanner, and Kombustor are confirmed to trigger the OCP cap. I am sure there are more that I am not aware of. GPU Tool? ATItool?



Yes, they did say that only OCCT and Furmark trigger the OCP.  From the master mouth, the same person that made the tool to disable it:



W1zzard said:


> At this time the limiter is *only engaged when the driver detects* Furmark / OCCT, it is not enabled during normal gaming.


----------



## ty_ger (Nov 14, 2010)

newtekie1 said:


> Quote:
> 
> 
> 
> ...


 
Well, I am sorry, but W1zzard is not an employee of NVIDIA.  What I was stating was that _NVIDIA_ never stated that _only_ OCCT and Furmark triggered the OCP protection cap.  I am sorry to say that it appears that W1zzard was wrong when he made that statement.  OCCT and Furmark are only _examples_ of the types of programs which the drivers detect as 'artificial loads'.


----------



## W1zzard (Nov 14, 2010)

ty_ger said:


> What I was stating was that NVIDIA never stated that only OCCT and Furmark triggered the OCP protection cap



thats exactly what nvidia told me


----------



## newtekie1 (Nov 14, 2010)

ty_ger said:


> Well, I am sorry, but W1zzard is not an employee of NVIDIA.  What I was stating was that _NVIDIA_ never stated that _only_ OCCT and Furmark triggered the OCP protection cap.  I am sorry to say that it appears that W1zzard was wrong when he made that statement.  OCCT and Furmark are only _examples_ of the types of programs which the drivers detect as 'artificial loads'.





W1zzard said:


> thats exactly what nvidia told me



Yeah this^


----------



## ty_ger (Nov 14, 2010)

W1zzard said:


> thats exactly what nvidia told me



Don't know what to say to that.  There is evidence all over the net that EVGA OC Scanner and MSI Kombustor also trigger the OCP cap.  NVIDIA lies?


----------



## slyfox2151 (Nov 14, 2010)

ty_ger said:


> Don't know what to say to that.  There is evidence all over the net that EVGA OC Scanner and MSI Kombustor also trigger the OCP cap.  NVIDIA lies?



Kombustor and such are exactly the same program as Furmark, so of course they're going to trigger it as well.. they're the same thing when it comes down to it.


----------



## qubit (Nov 14, 2010)

Mussels said:


> its probably best not to do this if you're on stock cooling. for example, a full coverage waterblock oughta be safe to mess around with this.



Yeah, watercooling definitely sounds like a good idea for this.

Ya know, I think I read somewhere (was it on TPU?) that the throttle is there to also protect the mobo, as well as the card. However, I don't quite understand why motherboard damage could happen: the PCI-E slot is rated for 75W, so the card will simply pull a max of 75W from there, in order to stay PCI-E compliant and the rest through its power connectors, therefore the risk to the mobo shouldn't be there.

Anyone have the definitive answer to this one?


----------



## slyfox2151 (Nov 14, 2010)

qubit said:


> Yeah, watercooling definitely sounds like a good idea for this.
> 
> Ya know, I think I read somewhere (was it on TPU?) that the throttle is there to also protect the mobo, as well as the card. However, I don't quite understand why motherboard damage could happen: the PCI-E slot is rated for 75W, so the card will simply pull a max of 75W from there, in order to stay PCI-E compliant and the rest through its power connectors, therefore the risk to the mobo shouldn't be there.
> 
> Anyone have the definitive answer to this one?



+1 

WTF.


how would the GTX 5xx damage the motherboard?


----------



## HTC (Nov 14, 2010)

W1zzard said:


> thats exactly what nvidia told me



That settles it then: seems Newtekie1 was right.


----------



## MikeX (Nov 14, 2010)

geez, you guys are killing the planet. 120 fps aint enough?


----------



## MrHydes (Nov 14, 2010)

The GTX 580 is not quite what they announced. I was amazed when reviews pointed to lower power consumption and about 10%~20% more in some cases (against the GTX 480)... well, now we all know that's not true!


----------



## newtekie1 (Nov 14, 2010)

qubit said:


> Yeah, watercooling definitely sounds like a good idea for this.
> 
> Ya know, I think I read somewhere (was it on TPU?) that the throttle is there to also protect the mobo, as well as the card. However, I don't quite understand why motherboard damage could happen: the PCI-E slot is rated for 75W, so the card will simply pull a max of 75W from there, in order to stay PCI-E compliant and the rest through its power connectors, therefore the risk to the mobo shouldn't be there.
> 
> Anyone have the definitive answer to this one?





slyfox2151 said:


> +1
> 
> WTF.
> 
> ...



 24 Pin P1 Connector Wires getting extremely hot

That is what can happen if you overload the PCI-e slots.  Now that was an extreme case of course, but once you start pulling more than 75w through the PCI-E connector things can get hairy pretty quickly.


----------



## slyfox2151 (Nov 14, 2010)

Yes, that's true.. but he was running more than one card.


Is the slot/card not designed to stop it from sending more than 75 watts through it?


----------



## qubit (Nov 14, 2010)

newtekie1 said:


> 24 Pin P1 Connector Wires getting extremely hot
> 
> That is what can happen if you overload the PCI-e slots.  Now that was an extreme case of course, but once you start pulling more than 75w through the PCI-E connector things can get hairy pretty quickly.



Thanks NT - that's quite a nasty burn on that connector there.

But my point is: wouldn't the card limit its power draw to stay within that limit and pull the rest from its power connectors? That would prevent any damage to the mobo and stay PCI-E standards compliant. I don't know if it would, which is why I'm throwing the question out to the community.


----------



## newtekie1 (Nov 14, 2010)

slyfox2151 said:


> yes thats true.. but he was running more then 1 card.
> 
> 
> is the slot/card not designed to stop it from sending more then 75watts through it?



Not really, it will attempt to send as much as is demanded of it.



qubit said:


> Thanks NT - that's quite a nasty burn on that connector there.
> 
> But my point is that wouldn't the card limit its power draw to stay within that limit and pull the rest from it's power connectors? That would prevent any damage to the mobo and stay PCI-E standards compliant. I don't know if it would, which is why I'm throwing the question out to the community.



That is pretty much the idea behind this limit.  The PCI-E slot provides 75w, a 6-pin PCI-E power connector provides 75w, and an 8-pin PCI-E power connector provides 150w.  That is 300w.  So once you go over that, it doesn't matter if the power is coming from the PCI-E power connectors or the motherboard's PCI-E slot, you are overloading something somewhere, and you aren't PCI-E standards compliant.
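The budget described above can be checked in a few lines. This is just a sketch of the arithmetic using the spec figures quoted in the thread, not NVIDIA's actual limiter logic:

```python
# PCI-E power sources for a card with one 6-pin and one 8-pin connector,
# using the per-connector figures quoted in the post.
SLOT_W = 75        # PCI-E x16 slot
SIX_PIN_W = 75     # 6-pin PCI-E power connector
EIGHT_PIN_W = 150  # 8-pin PCI-E power connector

def in_spec_budget():
    """Total board power before something, somewhere, is out of spec."""
    return SLOT_W + SIX_PIN_W + EIGHT_PIN_W

print(in_spec_budget())  # 300 (watts)
```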


----------



## qubit (Nov 14, 2010)

Sure something would go pop, but it still doesn't answer the question if the card would pull more than 75W from the mobo under such a condition. Properly designed, it should limit the current. I just don't know if it does or not and I don't think anyone else does either.


----------



## newtekie1 (Nov 14, 2010)

qubit said:


> Sure something would go pop, but it still doesn't answer the question if the card would pull more than 75W from the mobo under such a condition. Properly designed, it should limit the current. I just don't know if it does or not and I don't think anyone else does either.



W1z might know if he has power consumption numbers from just the PCI-E slot.

However, if you assume a pretty even load across all the connectors, 1/4 from the PCI-E slot, 1/4 from the PCI-E 6-pin, and 1/2 from the PCI-E 8-pin, once the power consumption goes over 300w, the extra will be divided between all the connectors supplying power.  I don't believe the power circuits on video cards are smart enough to know that once the power consumption goes over a certain level they should load certain connectors more than others.
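The even-split assumption above can be sketched as follows. The 1/4, 1/4, 1/2 shares are the poster's guess, not measured data, and the 350 W figure is just an illustrative total:

```python
# Hypothetical even-load model: slot carries 1/4, 6-pin 1/4, 8-pin 1/2
# of the card's total draw, regardless of per-connector ratings.
def connector_loads(total_w):
    shares = {"slot": 0.25, "6pin": 0.25, "8pin": 0.50}
    return {name: share * total_w for name, share in shares.items()}

loads = connector_loads(350)
print(loads)  # slot: 87.5 W -- already past its 75 W rating
```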


----------



## qubit (Nov 14, 2010)

Thanks NT, that sounds quite likely. And because of this limitation, I'll bet that's why the current limiter operates the way it does.

W1zz, you wanna give us the definitive answer on this one?


----------



## MikeMurphy (Nov 15, 2010)

qubit said:


> Yeah, watercooling definitely sounds like a good idea for this.
> 
> Ya know, I think I read somewhere (was it on TPU?) that the throttle is there to also protect the mobo, as well as the card. However, I don't quite understand why motherboard damage could happen: the PCI-E slot is rated for 75W, so the card will simply pull a max of 75W from there, in order to stay PCI-E compliant and the rest through its power connectors, therefore the risk to the mobo shouldn't be there.
> 
> Anyone have the definitive answer to this one?



People privy to inside information, and who are smarter than you and I, decided it was necessary.

I suspect the tech specs published to manufacturers didn't account for the unusual power consumption under furmark etc.  This wouldn't have been an accident, but rather a procedure to keep costs down re power circuits and cooling.


----------



## Steevo (Nov 15, 2010)

I don't believe the logic exists for that either. I believe some cards pull the memory and other power through the PCIe slot and the core power through the connectors. I hope that is how they have the 580 set up.


----------



## bakalu (Nov 15, 2010)

HTC said:


> @ bakalu: Any chance you could rename the EXE Furmark to whatever you like and run it again with your 580? If @ anytime you see the temp rising too much, please interrupt the program but do post a screenie after.


Can you answer my question?

*You buy the GTX 580 to play games or run Furmark ?*


----------



## HTC (Nov 15, 2010)

bakalu said:


> Can you answer my question?
> 
> *You buy the GTX 580 to play games or run Furmark ?*



Neither: i don't buy it.

Took you a long time to reply but no matter. Since i asked, W1zzard has stated that the card really does react to Furmark and OCCT and, as such, what i asked is now irrelevant.


----------



## bakalu (Nov 15, 2010)

HTC said:


> Neither: i don't buy it.
> 
> Took you a long time to reply but no matter. Since i asked, W1zzard has stated that the card really does react to Furmark and OCCT and, as such, what i asked is now irrelevant.


I bought the GTX 580 to play games, so I don't care about the temperature of the GTX 580 when running Furmark.

*The temperature of the GTX 580 when playing is very cool, and that is what interests me.*


----------



## MikeMurphy (Nov 15, 2010)

bakalu said:


> I bought the GTX 580 to play games so I dont' care the temperature of the GTX 580 when running Furmark
> 
> *The temperature of the GTX 580 when playing is very cool and that is what interests me.*



OK, if you don't care about the topic of this thread then please don't post in this thread.  This isn't a sneer remark or anything but just a way to get this thing back on topic.

Thanks,


----------



## Mussels (Nov 15, 2010)

qubit said:


> Yeah, watercooling definitely sounds like a good idea for this.
> 
> Ya know, I think I read somewhere (was it on TPU?) that the throttle is there to also protect the mobo, as well as the card. However, I don't quite understand why motherboard damage could happen: the PCI-E slot is rated for 75W, so the card will simply pull a max of 75W from there, in order to stay PCI-E compliant and the rest through its power connectors, therefore the risk to the mobo shouldn't be there.
> 
> Anyone have the definitive answer to this one?



the same way sticking a wire from your 12V rail onto the metal of your case makes shit melt.  excess power use will simply cause shit to fry.


----------



## HTC (Nov 15, 2010)

bakalu said:


> *I bought the GTX 580 to play games* so I dont' care the temperature of the GTX 580 when running Furmark



Really? Funny because the first thing you posted on this thread was ...



bakalu said:


> Maximum Temp *with Furmark* - *70oC*
> http://forum.amtech.com.vn/attachme...eforce-gtx-580-da-co-mat-o-amtech-temp-70.jpg
> 
> Maximum Power Consumption of Core i7 965 @ 3.6GHz + ASUS GTX 480 *when running Furmark*
> ...





bakalu said:


> *The temperature of the GTX 580 when playing is very cool and that is what interests me.*



If you say so ...


----------



## BorgOvermind (Nov 15, 2010)

So the card actually reaches above 360W. Just as I anticipated. If they had left it unleashed it would have exceeded the 300W PCI-E spec limit. Good thing it can be unlocked easily, though.


----------



## Imsochobo (Nov 15, 2010)

newtekie1 said:


> Not really, it will attempt to send as much as is demanded of it.
> 
> 
> 
> That is pretty much the idea behind this limit.  The PCI-E slot provides 75w, a 6-pin PCI-E power connector provies 75w, and an 8-pin PCI-E power connector provides 150w.  That is 300w.  So once you go over that, it doesn't matter if the power is coming from the PCI-E power connectors or the motherboard's PCI-E slot, you are overloading something somewhere, and you aren't PCI-E standards compliant.



PCI-E can give more than 75W.
PCI-E 1.1 can only give 75W; 2.0 can give more than 75, 150W if I recall right...


----------



## bogie (Nov 15, 2010)

I could do with a stop-throttling tool for my HD5870, as when I watch films on my secondary display it throttles down and causes stuttering playback.

Will it work on the HD5870 as well?


----------



## W1zzard (Nov 15, 2010)

bogie said:


> Will it work on the HD5870 as well?



no. this is only for the gtx 580 power throttling which is a unique mechanism at this time that no other card before ever used


----------



## BorgOvermind (Nov 15, 2010)

Imsochobo said:


> PCI-E can give more than 75 W
> PCi-e 1.1 can only give 75W 2.0 can give more than 75. 150 if i recall right...


I'm not 100% sure, but I don't think so. A card expecting 150W from the slot would not run on a 1.0 slot, but the compatibility is 100%. The additional watts come from the external connectors.
75W + 2x75W from two 6-pins makes 225W. 8-pins are used if the power drain is larger than 225W.
A PCI-E card with 150W from the slot could get to 350+ with the extra 8-pins, which is not the case.


----------



## Mussels (Nov 15, 2010)

BorgOvermind said:


> I'm not 100% sure, but I don't think so. A card expecting 150W from the slot would not run on a 1.0 slot, but the compatibility is 100%. The additional W come from the external connectors.
> 75W+2x75W from 2x6Pins makes 225. 8-pins are used if power drain is larger then 225W.
> A PCI-E with 150W from slot could get to 350+ with the extra 8-pins, which is not the case.



^ you're correct on the wattages.


putting what i said in simpler terms:

drawing more than 75W from the slot won't magically turn the slot off, or anything else like that... if the card has no internal mechanism to deal with the power draw, the wiring feeding the slot will just start to overheat, and bad things can happen.


----------



## newtekie1 (Nov 15, 2010)

Imsochobo said:


> PCI-E can give more than 75 W
> PCi-e 1.1 can only give 75W 2.0 can give more than 75. 150 if i recall right...



I thought so as well, until I was kindly pointed to the PCIsig document explaining PCI-E 2.0 specs:

http://www.pcisig.com/developers/ma...c_id=b590ba08170074a537626a7a601aa04b52bc3fec

Page 38 is the important one, establishing how much power can be drawn from where; the slot is still limited to 75w.


----------



## qubit (Nov 15, 2010)

Mussels said:


> ^ you're correct on the wattages.
> 
> 
> putting what i said in simpler terms:
> ...



You know, it's really beginning to look like we need an uprated power spec for ATX & PCI-E power delivery. I'm sure we're hitting the same brick wall that CPUs did a few years ago, which is why these latest gen cards are getting held back in the performance they deliver and are not really massively faster than the ones that they replace. I'll bet the new Cayman GPU will have a lot of the same power and heat issues as nvidia's Fermi GPU.

I'm sure that if a 600W power budget (with appropriate cooling) was available, some excellent performance gains could be achieved, like double or more performance and extra rendering features.


----------



## W1zzard (Nov 15, 2010)

on most decent motherboards you can draw well over 100 w from the slot alone without bad things happening - because motherboard designers specifically design for that. 

if you buy a $30 motherboard then the boss of those guys told them to save 5 cents to meet their price target -> possible damage when drawing too much current for a long time


----------



## Wile E (Nov 15, 2010)

W1zzard said:


> on most decent motherboards you can draw well over 100 w from the slot alone without bad things happening - because motherboard designers specifically design for that.
> 
> if you buy a $30 motherboard then the boss of those guys told them to save 5 cents to meet their price target -> possible damage when drawing too much current for a long time



My old crappy ECS KA3-MVP allowed me to manually set the wattage limit to over 100w, let alone a quality board.

So you are saying the limiter is basically to prevent crappy boards from burning out, not for the benefit of the card itself? (Which is pretty much what I suspected from the beginning.)


----------



## GC_PaNzerFIN (Nov 15, 2010)

Nothing new with cards detecting furmark and throttling. ATi has done it ever since 2008. I can't believe how big a fuss out of nothing this has become.


----------



## W1zzard (Nov 15, 2010)

Wile E said:


> the limiter is basically to prevent crappy boards from burning out, not for the benefit of the card itself?



it's for both



GC_PaNzerFIN said:


> ATi has done it ever since 2008



any idea where the system is described? i dont see any evidence of it when doing my power consumption testing. maybe you mean vrm overheat protection which reduces clocks on overheat?


----------



## GC_PaNzerFIN (Nov 15, 2010)

W1zzard said:


> any idea where the system is described? i dont see any evidence of it when doing my power consumption testing. maybe you mean vrm overheat protection which reduces clocks on overheat?


Yes, they changed the method: the HD 4000 series detected the exe, and the HD 5000 series has hardware protections. The end result is the same.

http://www.anandtech.com/show/2841/11


----------



## OneMoar (Nov 15, 2010)

!firehazard  :shadedshu
350 watts @ 12V ≈ 29 amps, minus the 75-100W that the PCI-E bus can supply, so you are right on the edge of what the wires from your PSU can handle.
Most wires are only rated for about 8-10 amps; beyond that they melt.
I put forth this question: when does performance outweigh the extra power usage and risk?
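The current figure in the post is plain power-to-current arithmetic on the 12 V rail; a quick sketch (the 350 W draw is the thread's figure, nothing here is GTX 580-specific):

```python
def rail_amps(watts, volts=12.0):
    """Current drawn on a rail for a given power draw (I = P / V)."""
    return watts / volts

print(round(rail_amps(350), 1))  # 29.2 A on the 12 V rail
```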


----------



## Wile E (Nov 15, 2010)

OneMoar said:


> !firehazard  :shadedshu
> 300 watts @ 12v = 30 amps
> wires are only rated for about 20 > then.melt
> I put forth this question when does performance out weight the extra power usage and ris



Not all wires melt at 20A. Thicker (lower-gauge) wire = more amps, and not only that, but that load is spread across multiple wires.

A quality board and quality psu has no problems handling these kinds of loads.


----------



## OneMoar (Nov 15, 2010)

Wile E said:


> Not all wires melt at 20A. Higher gauge/thicker wire = more amps, and not only that, but that load is spread across multiple wires.
> 
> A quality board and quality psu has no problems handling these kinds of loads.


I know that, but consider what happens when you start adding the rest of the power-hungry components:
bad things will happen
unless you can find a PSU with 15-gauge wiring; most are 20-18.


----------



## Wile E (Nov 15, 2010)

OneMoar said:


> I know that but consider when you start adding the rest of the power hungry components
> bad things will happen
> unless you can find a psu with 15g wiring most are 20-18



It's still not a problem with a high-quality PSU. If you can afford a 580, you can afford the high-quality PSU to go with it. I'll put 30A from a video card through my PSU all day and never have to worry about it.


----------



## bakalu (Nov 16, 2010)

MikeMurphy said:


> OK, if you don't care about the topic of this thread then please don't post in this thread.  This isn't a sneer remark or anything but just a way to get this thing back on topic.
> 
> Thanks,





HTC said:


> @ bakalu: *Any chance you could rename the EXE Furmark* to whatever you like and *run it again with your 580? If @ anytime you see the temp rising too much, please interrupt the program but do post a screenie after.*



My *ASUS GeForce GTX 580 @ 880MHz/1050*. *This overclock was 100% stable on games and 3DMark Vantage*

*vCore=1.138V, stock fan, FAN set 85%, Room Temp=25°C. I renamed Furmark to nemesis* and ran it for over 40 minutes. Here is my result:






Maximum Temp *70°C*

What more can you say?


----------



## W1zzard (Nov 16, 2010)

renaming furmark will not make any difference for nvidia's detection system


----------



## Splave (Nov 16, 2010)

A lot of crying in this thread? If you are a gamer/normal user this doesn't even apply to you. To a bencher, an innovation like this is invaluable. Thanks W1zzard.


----------



## INTHESUN (Dec 15, 2010)

What is this meant to do? I just tried it and I don't know if I did it correctly.

Is this correct?









----------



## edison (Dec 23, 2010)

Why is the max value 317 watts for the GTX 580, not 350 watts?

http://www.techpowerup.com/reviews/HIS/Radeon_HD_6970/27.html


----------



## trt740 (Dec 23, 2010)

thx wizzard great addition.


----------



## encor3 (Aug 6, 2014)

How do I use the command lines in GPU-Z? I haven't really used the program for more than just monitoring the GPU, so I would really appreciate an explanation of how to disable the throttling on my GTX 580...
Thanks in advance!

//encor3


----------

