# GPU-Z no longer disables GTX500 OCP on 266 drivers



## Star (Jan 19, 2011)

I've been ripping my hair out for days trying to figure out why my GTX 580 is unstable, until I realized OCP isn't being disabled properly anymore on nvidia 266+ drivers.

Is there a fix planned soon? Or has nvidia really made it difficult this time?


----------



## douglatins (Jan 19, 2011)

Mah, W1zz just needs to redo it, should be easy... I dunno


----------



## W1zzard (Jan 19, 2011)

Star said:


> Is there a fix planned soon? Or has nvidia really made it difficult this time?



yes and yes


----------



## PedroPortnoy (Mar 4, 2011)

Has this been solved already?

It's really useful for testing stability on this card...


----------



## W1zzard (Mar 4, 2011)

should be included in the latest build and work fine


----------



## hat (Mar 4, 2011)

Man, I really wish there was a _permanent_ solution to disable it. Wake up, Nvidia, take the hint... we don't want your thermal throttling!

IMO if they need to throttle the card so that it doesn't melt itself under stress, they need to do something about their design. GJ making a shitty card that needs to have its hand held to prevent it from self-destructing.


----------



## HalfAHertz (Mar 4, 2011)

hat said:


> Man, I really wish there was a _permanent_ solution to disable it. Wake up, Nvidia, take the hint... we don't want your thermal throttling!
> 
> IMO if they need to throttle the card so that it doesn't melt itself under stress, they need to do something about their design. GJ making a shitty card that needs to have its hand held to prevent it from self-destructing.



Your CPU has built-in throttling, why should your GPU be any different?


----------



## hat (Mar 4, 2011)

There's a difference between the CPU and GPU throttling we have. A few points:

-CPUs don't often overheat if properly cared for (the stock cooler is fine, but clean the dust out every once in a while and don't leave a blanket of wires all over the place in some cooped-up mini tower with zero airflow), even when running OCCT.

-CPUs only throttle when they hit a certain temperature. Nvidia's throttling technology just detects two stressful programs and makes the card wimp out when they're run.

-GPUs already have thermal throttling similar to CPUs, but nobody cared about that because it was acceptable. Singling out specific programs because they make your shitty card draw more power than you designed it to handle isn't.
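For illustration, the difference between the two throttling approaches described in the last two points could be sketched like this (the temperature threshold and process names here are hypothetical, not the actual driver behavior):

```python
# Hypothetical sketch of the two throttling policies described above.
# The threshold and process names are made up for illustration; this is
# not Nvidia's actual driver logic.

def thermal_throttle(gpu_temp_c, limit_c=97):
    """CPU-style throttling: act only when a temperature limit is hit."""
    return gpu_temp_c >= limit_c

# Blacklist-style detection: a small set of known stress-test executables.
STRESS_APPS = {"furmark.exe", "occt.exe"}

def app_detect_throttle(process_name):
    """Throttle whenever a blacklisted app runs, regardless of the
    card's actual temperature or power draw."""
    return process_name.lower() in STRESS_APPS
```

The first policy only kicks in under genuinely dangerous conditions; the second caps performance for specific programs even on a cool, stock-clocked card, which is what the complaint is about.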


----------



## newtekie1 (Mar 4, 2011)

hat said:


> Man, I really wish there was a _permanent_ solution to disable it. Wake up, Nvidia, take the hint... we don't want your thermal throttling!
> 
> IMO if they need to throttle the card so that it doesn't melt itself under stress, they need to do something about their design. GJ making a shitty card that needs to have its hand held to prevent it from self-destructing.



You realize ATi is doing the same thing now, right?



hat said:


> There's a difference between the CPU and GPU throttling we have. A few points:
> 
> -CPUs don't often overheat if properly cared for (the stock cooler is fine, but clean the dust out every once in a while and don't leave a blanket of wires all over the place in some cooped-up mini tower with zero airflow), even when running OCCT.
> 
> ...



CPUs do often pop vregs though, due to pulling too much current.  I wish they had a power throttling system as well; it would probably have saved a bunch of those shitty pre-built boards from death.

And it is a trade-off we have to make.  The cards would likely never have an issue at stock voltages and clock speeds; however, with nVidia allowing us to up the voltage, it opens up the possibility for idiots to max out the voltage and then leave Furmark running for 24 hours.  ATi had issues with that before too: remember when HD 4850s were dying from Furmark, so ATi released drivers that purposely retarded Furmark performance to stop the cards from being killed by the same idiots who think running Furmark for 24+ hours straight is a good idea?

ATi and nVidia, as well as the actual graphics card manufacturers themselves, are getting tired of replacing cards that have died under these circumstances, so the overcurrent protection is necessary.  I just wish there was a flag that could be set on the card itself, somewhere completely unreachable by the user, that detects when the overcurrent protection was disabled, so the warranty could be instantly voided when the user does so.


----------



## HalfAHertz (Mar 4, 2011)

Ati's version is in fact much stricter in that matter, because instead of just two programs it can cap any of them if one goes over its limit. So on that score, Ati is much worse.
I dunno, I think it's better that Nvidia chose this option and decided to be honest, instead of secretly implementing a solution at the driver level that would never have been found unless someone was specifically looking for it.


----------



## hat (Mar 6, 2011)

newtekie1 said:


> You realize ATi is doing the same thing now, right?



It doesn't matter what company does it, my hatred for purposely retarding hardware to prevent it from going out of spec and breaking is shared equally between all manufacturers.





newtekie1 said:


> CPUs do often pop vregs though, due to pulling too much current.  I wish they had a power throttling system as well; it would probably have saved a bunch of those shitty pre-built boards from death.



Then I suppose people should stop putting high power CPUs in boards not designed to handle them. I learned my lesson on that the hard way a few years back with a Phenom 9500, a 95w processor in a cheap board like the one I have now.



newtekie1 said:


> And it is a trade-off we have to make.  The cards would likely never have an issue at stock voltages and clock speeds; however, with nVidia allowing us to up the voltage, it opens up the possibility for idiots to max out the voltage and then leave Furmark running for 24 hours.  ATi had issues with that before too: remember when HD 4850s were dying from Furmark, so ATi released drivers that purposely retarded Furmark performance to stop the cards from being killed by the same idiots who think running Furmark for 24+ hours straight is a good idea?
> 
> ATi and nVidia, as well as the actual graphics card manufacturers themselves, are getting tired of replacing cards that have died under these circumstances, so the overcurrent protection is necessary.  I just wish there was a flag that could be set on the card itself, somewhere completely unreachable by the user, that detects when the overcurrent protection was disabled, so the warranty could be instantly voided when the user does so.



I thought warranties were already voided due to overclocking and overvolting? If they're getting tired of replacing cards damaged by people who ran them out of spec, perhaps they should stop accepting RMA requests from people who ramp up the clocks and volts on their cards. I expect my video card to be able to run Furmark for 24 hours straight. If it can't handle that kind of load, then it's a shitty card.

GPU engineer 1: Hey, let's design this really high power graphics card... but then make poor power circuitry to save money!

GPU engineer 2: Yeah, that cheap power circuitry can save the company a lot of money! But what if the cheap parts blow up under stress?

GPU engineer 3: Hey, why not have the card power itself down within tolerances of the cheap parts so it doesn't fry under stress at the cost of performance?

GPU engineers 1 & 2: That's a great idea!

All GPU engineers: YEAH! LET'S DO IT!


----------



## newtekie1 (Mar 6, 2011)

hat said:


> It doesn't matter what company does it, my hatred for purposely retarding hardware to prevent it from going out of spec and breaking is shared equally between all manufacturers.



Considering your rant was all about bashing nVidia and didn't mention anything about AMD at all, I doubt that.  Something tells me it just pisses you off that nVidia does it; you don't really care if AMD is doing it too.



hat said:


> Then I suppose people should stop putting high power CPUs in boards not designed to handle them. I learned my lesson on that the hard way a few years back with a Phenom 9500, a 95w processor in a cheap board like the one I have now.



That is easier said than done, especially with motherboard manufacturers not giving accurate ideas of what a board is designed to handle.  I have an eVGA board that has burnt vregs from a 95w CPU, and according to eVGA the board is supposed to be able to handle up to a 130w QX6800. But given enough time, the vregs would pop with a 95w CPU that the board is _supposed_ to be designed to handle.



hat said:


> I thought warranties were already voided due to overclocking and overvolting? If they're getting tired of replacing cards damaged by people who ran them out of spec, perhaps they should stop accepting RMA requests from people who ramp up the clocks and volts on their cards. I expect my video card to be able to run Furmark for 24 hours straight. If it can't handle that kind of load, then it's a shitty card.



Most manufacturers do not void the warranty for overvolting and overclocking anymore, especially not overclocking.  The problem with your idea is that it is currently impossible to tell if a card just had a bad component that popped, or if the user jacked up the voltage/clocks and ran Furmark until it popped.  The only logical solution is to either put in a system that detects when the card has been overvolted and voids the warranty, or just put in a protection that prevents the card from dying.  Personally, I'd prefer the system that prevents the card from dying but keeps my warranty intact.

And the fact of the matter is that Furmark, and the programs based off it, are not conditions a graphics card will realistically ever encounter during normal use.  So it is inane to think it is acceptable to subject a graphics card to that type of torture just because you can.  I can put my car in neutral and floor it, and it will overheat and kill the motor, but that doesn't mean I should.  That isn't a situation the car should ever be subjected to, except by idiots who think that just because they can, they should. Of course modern cars have rev limiters to prevent this, but I'm sure that pisses you off too, because if you want to kill the motor by flooring it in neutral until something fails, you should be able to, and the car manufacturer should cover it under warranty because it is their fault for using such shitty parts.



hat said:


> GPU engineer 1: Hey, let's design this really high power graphics card... but then make poor power circuitry to save money!
> 
> GPU engineer 2: Yeah, that cheap power circuitry can save the company a lot of money! But what if the cheap parts blow up under stress?
> 
> ...



The cards aren't popping under stock settings, so you wasted your time typing all that out.  It isn't designed to protect the cards as the engineers designed them; it is designed to protect the cards from idiotic users who think that just because you can set the voltage as high as possible, you should.


----------



## hat (Mar 6, 2011)

newtekie1 said:


> Considering your rant was all about bashing nVidia and didn't mention anything about AMD at all, I doubt that.  Something tells me it just pisses you off that nVidia does it; you don't really care if AMD is doing it too.



No, really, I don't care who does it; I hate this throttling nonsense. It's not that I'm hating on Nvidia because I'm an ATi/AMD fanboy (I haven't even used any ATi/AMD cards since an X1800XL I had ages ago... so I actually prefer Nvidia cards to ATi/AMD). My rant was about bashing Nvidia because Nvidia throttling their 5xx series is the current subject.





newtekie1 said:


> That is easier said than done, especially with motherboard manufacturers not giving accurate ideas of what a board is designed to handle.  I have an eVGA board that has burnt vregs from a 95w CPU, and according to eVGA the board is supposed to be able to handle up to a 130w QX6800. But given enough time, the vregs would pop with a 95w CPU that the board is _supposed_ to be designed to handle.



Nobody said it was easy to design hardware. I give them credit for what they do, but I hold it against them that they cut corners. As for your motherboard, it's possible you just had a defective unit, or the quality on that specific model wasn't exactly up to par. In any case, I still hold my opinion, be it graphics cards, motherboards, or anything else.





newtekie1 said:


> Most manufacturers do not void the warranty for overvolting and overclocking anymore, especially not overclocking.  The problem with your idea is that it is currently impossible to tell if a card just had a bad component that popped, or if the user jacked up the voltage/clocks and ran Furmark until it popped.  The only logical solution is to either put in a system that detects when the card has been overvolted and voids the warranty, or just put in a protection that prevents the card from dying.  Personally, I'd prefer the system that prevents the card from dying but keeps my warranty intact.



Well, perhaps they should start voiding warranties for the few people that run their cards out of spec instead of throttling it for everyone. The GTX580 suffers in Furmark even at stock clocks.



newtekie1 said:


> And the fact of the matter is that Furmark, and the programs based off it, are not conditions a graphics card will realistically ever encounter during normal use.  So it is inane to think it is acceptable to subject a graphics card to that type of torture just because you can.  I can put my car in neutral and floor it, and it will overheat and kill the motor, but that doesn't mean I should.  That isn't a situation the car should ever be subjected to, except by idiots who think that just because they can, they should. Of course modern cars have rev limiters to prevent this, but I'm sure that pisses you off too, because if you want to kill the motor by flooring it in neutral until something fails, you should be able to, and the car manufacturer should cover it under warranty because it is their fault for using such shitty parts.



My point is that Furmark _shouldn't_ kill cards. Designing a high-power video card with the thought of "oh well, anyone who buys this will probably only run Solitaire on it anyway, so I don't need to design good power circuitry" is pretty inane as well. As for your car analogy, yes, if I want to blow up a car by doing stupid shit to it, I should be able to; however, I don't agree that running Furmark on a graphics card is akin to flooring a car in neutral. Furmark is a good benchmarking tool and a good tool to test for stability (OCCT is great for this as well). I'm not very keen on cars, but I don't think people "benchmark" their cars or check for reliability by flooring them in neutral.





newtekie1 said:


> The cards aren't popping under stock settings, so you wasted your time typing all that out.  It isn't designed to protect the cards as the engineers designed them; it is designed to protect the cards from idiotic users who think that just because you can set the voltage as high as possible, you should.



If they're not popping under stock settings, then why even design this feature in the first place? Tweakers such as ourselves ought to know there's a risk of blowing stuff up when increasing the voltage. As I said before, instead of retarding the cards to protect against them being run out of spec, maybe they should just stop accepting RMA requests from people who run their cards that way.


----------



## newtekie1 (Mar 6, 2011)

hat said:


> Nobody said it was easy to design hardware. I give them credit for what they do, but I hold it against them that they cut corners. As for your motherboard, it's possible you just had a defective unit, or the quality on that specific model wasn't exactly up to par. In any case, I still hold my opinion, be it graphics cards, motherboards, or anything else.



No, it wasn't a defective model, it was just a low-end board that used a 3-phase PWM that was easily overloaded.  However, the 3-phase PWM design is common, especially on pre-builts, and it just doesn't hold up.  If the caps don't pop (which isn't likely on my board, since they are solid caps), then the mosfets burn up and pop.




hat said:


> Well, perhaps they should start voiding warranties for the few people that run their cards out of spec instead of throttling it for everyone. The GTX580 suffers in Furmark even at stock clocks.



That doesn't make a whole lot of sense when you think about it, though.  The only people affected by the throttle are those running Furmark. It isn't throttling for everyone, because the only people running Furmark are the ones overclocking and overvolting.  I mean, how many normal users who never overclock do you know that run Furmark just for the fuck of it?




hat said:


> My point is that Furmark _shouldn't_ kill cards. Designing a high-power video card with the thought of "oh well, anyone who buys this will probably only run Solitaire on it anyway, so I don't need to design good power circuitry" is pretty inane as well. As for your car analogy, yes, if I want to blow up a car by doing stupid shit to it, I should be able to; however, I don't agree that running Furmark on a graphics card is akin to flooring a car in neutral. Furmark is a good benchmarking tool and a good tool to test for stability (OCCT is great for this as well). I'm not very keen on cars, but I don't think people "benchmark" their cars or check for reliability by flooring them in neutral.



And my point is that it doesn't kill cards when they are run at stock speeds, what they were designed for, even if you disable the throttle.  Hell, the limit is 300w; that is what the limiter is designed to keep the card under, and with the limiter disabled a stock GTX580 only hits a maximum of 305w in Furmark.  That 5w ain't going to kill the card. As I said, the limiter isn't there to save stock cards, it is there to save overvolted/overclocked cards.  You can run the card all day long with the limiter disabled at stock settings and the card ain't going to die.

It came down to nVidia and the manufacturers not wanting to keep replacing cards that overclocking novices killed, so it was either set up something that limited the current so idiots couldn't kill the card, or void the warranties of anyone who overclocked.  Personally, I'll take the limiter and keep my warranty.

And while Furmark might be a decent benchmark (I won't say good), there are plenty of better benchmarks out there, and I don't think the 60-second benchmark run is what nVidia/AMD was worried about.  That would be like a proper dyno run with a car.

Leaving Furmark running for 24 hours, however, _is_ what nVidia/AMD was worried about.  And there are idiots who will stand on the gas in a car until the motor blows, just like there are idiots who will let Furmark sit for 24 hours.  Hell, if you don't know what you are doing, you can blow an engine on a dyno run if you let it rev too high and don't shut it down; I've seen it happen.

I would also disagree that Furmark is a good stability tester; there are far better ones out there, particularly ones that actually look for errors and don't just overload the card and hope it crashes (which it often doesn't, despite the card being unstable).




hat said:


> If they're not popping under stock settings, then why even design this feature in the first place? Tweakers such as ourselves ought to know there's a risk of blowing stuff up when increasing the voltage. As I said before, instead of retarding the cards to protect against them being run out of spec, maybe they should just stop accepting RMA requests from people who run their cards that way.



I've already explained this.  Tweakers like us also know how to disable the protection.  The protection is there for people who don't know what they are doing beyond installing Precision/Afterburner, who see that little voltage slider and jack it all the way up because they can.

And there is no way for the manufacturers to stop accepting RMAs from people who overclock/overvolt, because there is no real way to know.  They could design a system that sits on the card and monitors the clock speeds, voiding the warranty if the clock speeds are changed, but that is probably more complicated and expensive than a current limiter.  As I've said, the only other option was to implement the limiter, which allows the customer to overclock and still keep the warranty.

I don't see why you would favor no overclocking, or no warranty, over a limiter that only affects one program (and programs based off it) while still allowing us to overclock and keep our warranties.  I don't see how that makes any sense at all.


----------



## entropy13 (Mar 6, 2011)

Star said:


> Is there a fix planned soon? *Or has nvidia really made it difficult this time?*





W1zzard said:


> yes and *yes*



Shame on Nvidia, making W1zzard work. :shadedshu


----------



## HalfAHertz (Mar 6, 2011)

I agree with newtekie on this one; Furmark is a shitty test tool. It's designed to bypass the standard rendering pipeline and stress all parts of the GPU at the same time: it literally generates junk data inside the GPU and just flushes it constantly.


----------



## crazyeyesreaper (Mar 6, 2011)

I would say AMD has the better power limiter. There's a set limit, yes, but I can raise it by 20%. To be honest, even maxing overclocks within CCC's limits, I was unable to make the cards clock down at the default 0% power setting, and at 20% I'm able to go well into the 1000MHz range with no issue. Either way, this power throttle tech was bound to be used at some point on GPUs; speed and performance take power, it's unavoidable, and sometimes you have to limit it to an extent or have things go boom.

Having seen Nvidia cards melt the PCIe slots on some EVGA boards, I'd have to say sometimes power management is a good thing lol


----------



## newtekie1 (Mar 6, 2011)

crazyeyesreaper said:


> I would say AMD has the better power limiter. There's a set limit, yes, but I can raise it by 20%. To be honest, even maxing overclocks within CCC's limits, I was unable to make the cards clock down at the default 0% power setting, and at 20% I'm able to go well into the 1000MHz range with no issue. Either way, this power throttle tech was bound to be used at some point on GPUs; speed and performance take power, it's unavoidable, and sometimes you have to limit it to an extent or have things go boom.
> 
> Having seen Nvidia cards melt the PCIe slots on some EVGA boards, I'd have to say sometimes power management is a good thing lol



Yes, and despite what Hat believes, this isn't just a protection for the cards; it is just as much a protection for the motherboards.  In fact, now that I think about it, I've probably seen more motherboards with burnt 24-pin or PCIe connectors from cards drawing too much power than I've seen cards die.

I like nVidia's system a little better because it is actually hardware based, and not just software making guesses.  And it is also never active at all during any type of normal use of the card.


----------



## crazyeyesreaper (Mar 6, 2011)

True enough, newtekie, but I prefer an all-encompassing protection I control. Nvidia's may be hardware, but I have no real control over it.

Think of it this way: on an AMD card the throttling might be more obvious in certain situations, but 20% on a 200-watt TDP is a huge jump in allowed power consumption; that's 40 extra watts. If they're gonna add stuff like this, I prefer a situation where I can change the TDP at my discretion but still have that upper limit to keep things from getting out of control. Nvidia's method is a bit more reliable long term, I'm willing to bet, as it's hardware based, not software. I'd truly like to see a mix of both hardware and software.

Aka a tiny switch on the GPU: the GPU has a TDP limit of, say, 225w, but the switch can be set to -20, -10, 0 (default), +10, or +20%, so you have control over the TDP at a hardware level, or something to that extent. But yeah, I don't see the TDP limit as much of a card protection mechanism; it's more for the motherboard. I haven't seen ATi cards melt sockets, but I've seen Nvidia cards do it, and I'm pretty sure if an ATi card did, it would be the 4870X2, as 400w from two PCIe power connectors and the slot is a lot; if that extra 80w was pulled straight from the PCIe slot, it would melt. The 6990 and 590, I feel, would fry more than a few motherboards without power throttling.


----------



## newtekie1 (Mar 6, 2011)

But the control is an illusion when the figures it is using are very inaccurate guesses.

I prefer just a set 300w limit, which is the PCIe spec by the way, and the ability to simply turn it off if I wish, over being able to change the limit with no real idea what the numbers really are.


----------



## crazyeyesreaper (Mar 6, 2011)

That's why I said I'd prefer a MIX of hardware with AMD's method: a physical hardware switch to set those settings, not guesswork.

I like the idea of being able to force power consumption down or allow room for it to increase; I'd just prefer a much more foolproof hardware system with the %-based settings, if you get what I mean. Nvidia has the more foolproof system in terms of it won't fuck up due to guesswork; AMD has the control I want. Mix both and I'd be happy.
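The hybrid scheme described in these posts, a fixed hardware base limit plus a coarse user-selectable percentage switch, could be sketched roughly like this (the 225w base and the -20/+20% steps come from the earlier post; the function names and everything else are made up for illustration):

```python
# Rough sketch of the hybrid limiter proposed above: a fixed base limit
# enforced in hardware, adjusted by a coarse user-selectable step, like a
# physical switch with five positions. The 225w base and the step values
# come from the post; everything else is hypothetical.

ALLOWED_STEPS = (-20, -10, 0, 10, 20)  # percent offsets, like DIP-switch positions

def effective_limit_w(base_tdp_w=225, step_pct=0):
    """Return the power cap after applying the user-selected offset."""
    if step_pct not in ALLOWED_STEPS:
        raise ValueError("switch only has fixed positions")
    return base_tdp_w * (1 + step_pct / 100)

def should_throttle(measured_power_w, base_tdp_w=225, step_pct=0):
    """Throttle clocks whenever the measured draw exceeds the effective cap."""
    return measured_power_w > effective_limit_w(base_tdp_w, step_pct)
```

For example, at +20% a 225w card would be allowed 270w, and a 200w card 240w, the 40 extra watts mentioned above; anything over the cap triggers throttling regardless of which program caused the draw.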


----------



## catnipkiller (Mar 7, 2011)

Fanboys gonna rage on both sides. They limit power so you don't rage when you try to overclock your GPU on a stock cooler. I think you should be able to disable it with the warning "YOUR CARD WILL MELT, ENTER AT YOUR OWN RISK", and if it melts, don't cry.


----------



## HalfAHertz (Mar 7, 2011)

Fact is, until we see some fixed-function hardware built directly into the die for this, we won't have a perfect solution. Think of it as the turbo mode in Nehalem and Lynnfield processors. Remember what a huge boost it was to single-threaded performance? It would basically be able to do what Ati is trying to do in software, but instantly and on the spot, with much more accuracy and efficiency. And the end result would be both faster and cooler cards.

So chop chop engineers, go do it NAU!


----------

