# FurMark + IntelBurnTest (simultaneously) fails (FurMark always crashes)



## BenchAndGames (Feb 27, 2019)

Running Intel Burn Test + FurMark (both latest versions) simultaneously crashes FurMark, while Intel Burn Test keeps running. Tested with the BIOS fully at stock; my graphics card is also a stock version, so it doesn't have any OC. It fails after about 15-20 minutes. I've tested several times and it always happens, with the same order of events in the logs.

1) ID 1: NVIDIA OpenGL driver
2) ID 1000: application error
"Faulting application name: FurMark.exe, version: 1.20.4.0, timestamp: 0x5c3f0381
Faulting module name: nvoglv32.dll, version: 25.21.14.1891, timestamp: 0x5c5b2fcf"
3) ID 4101: display driver stopped working and recovered

So does this mean something is wrong with the graphics card? Could it be a hardware failure? Should I RMA it?

i7 4770k @ stock
rtx 2060 @ stock
16GB TridentX 2400, but running at stock
Gigabyte G1 Sniper 5
Corsair HX850W


----------



## eidairaman1 (Feb 27, 2019)

Stop running both at the same time; better yet, stop running FurMark completely.


----------



## FireFox (Feb 27, 2019)

BenchAndGames said:


> Runnig Intel Burn Test + Furmark



I don't mean to be rude, but that's a noob thing.


----------



## OneMoar (Feb 27, 2019)

good way to melt some vrms


----------



## Kissamies (Feb 27, 2019)

I have to ask, why FurMark? It's not good even as a synthetic test; it just heats your card to unrealistic temps.



OneMoar said:


> good way to melt some vrms



Exactly. It's just so useless.


----------



## jaggerwild (Feb 27, 2019)

Keep running them, and also run MEMTEST2011 lolz!


----------



## kastriot (Feb 27, 2019)

First use IntelBurn on High for 30 cycles, then use Unigine Heaven on Extreme for 1-2 hours for the GPU, and that's it.


----------



## BenchAndGames (Feb 27, 2019)

OK, so the fact that FurMark fails when running simultaneously with IBT does not mean the video card is defective? I just want to be sure. I mean, could it fail because when the GPU needs the CPU to process something, it just fails since the CPU is already at 100% load from IntelBurn?



Knoxx29 said:


> I don't mean to be rude, but that's a noob thing.



I was trying to simulate the real game experience, where today's games completely use all of your cores and GPU at the same time. I just wanted to test the video card.


----------



## the54thvoid (Feb 27, 2019)

What you're doing does not simulate real game performance. You'll not find many (any) games that run your CPU at 100% and your GPU maxed out. 
Apart from FurMark being useless, the CPU at 100% will probably mean any other software will be unstable. If your CPU is completely maxed out on one application, how is it meant to work on another?
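That starvation is easy to see for yourself: with every core pinned by busy loops, even a short timer in another piece of code wakes up late. A minimal Python sketch (all names and numbers here are illustrative, not taken from IBT or FurMark), showing a rough illustration of the effect rather than a diagnosis of the crash:

```python
# Sketch: measure how late a short sleep wakes up while all CPU cores
# are saturated by busy-loop worker processes.
import multiprocessing as mp
import time

def busy_loop(stop_time):
    # Spin until the deadline, keeping one core pinned at ~100%.
    while time.time() < stop_time:
        pass

def sleep_overshoot(interval=0.01, samples=20):
    # Average lateness (seconds) of a short sleep's wake-up.
    total_late = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        time.sleep(interval)
        total_late += (time.perf_counter() - start) - interval
    return total_late / samples

if __name__ == "__main__":
    baseline = sleep_overshoot()
    deadline = time.time() + 2.0
    workers = [mp.Process(target=busy_loop, args=(deadline,))
               for _ in range(mp.cpu_count())]
    for w in workers:
        w.start()
    loaded = sleep_overshoot()
    for w in workers:
        w.join()
    print(f"idle overshoot:   {baseline * 1000:.2f} ms")
    print(f"loaded overshoot: {loaded * 1000:.2f} ms")
```

On a fully loaded system the second number is typically larger; a driver or watchdog thread that can't get scheduled in time may be consistent with the TDR events in the OP, though it doesn't prove that's what happened here.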


----------



## Kissamies (Feb 27, 2019)

Run 3DMark and/or internal benchmarks of games instead of that "power virus" Furmark.


----------



## cucker tarlson (Feb 27, 2019)

BenchAndGames said:


> I was trying to simulate the real game experience, where today's games completely use all of your cores and GPU at the same time. I just wanted to test the video card.


Well, then play those games, for heaven's sake.


----------



## Final_Fighter (Feb 27, 2019)

Likely the software is having a conflict.

If you want to test for stability, run Prime95 for 30 minutes; this is usually enough to see if something is wrong with your CPU. Then run Memtest for around an hour; this checks whether the memory has any issues. Next, run a couple of benchmarks like Metro's built-in one, or just play games to see if you have any problems. Running multiple benchmarks won't crash the PC if the software isn't conflicting with one another, but in this case I'm sure it is.


good luck.


----------



## Shambles1980 (Feb 27, 2019)

IBT + FurMark?
What are you trying to do, cook yourself?


----------



## Athlonite (Feb 27, 2019)

BenchAndGames said:


> I was trying to simulate the real game experience, where today's games completely use all of your cores and GPU at the same time. I just wanted to test the video card.



There's no game that uses 100% of both CPU & GPU, and all you're going to do is end up cooking something completely. Like others have said here, just run 3DMark and/or IBT separately, or just use in-game benchmarks to simulate gaming.


----------



## BenchAndGames (Feb 27, 2019)

All right, so I have one last question:

Can "Timeout Detection and Recovery" happen because of a bad/unstable CPU overclock?

Two days ago Chrome froze for a few seconds, then my screen went black and came back with a notification that the driver stopped working and recovered:
"Event ID 14 - The description for Event ID 14 from source nvlddmkm cannot be found
Event ID 4101 - nvlddmkm stopped working and has recovered"

Actually I was trying to get some overclock on my CPU, and well... I just had this issue.
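As an aside, past TDR events like these can be pulled from the Windows System log by filtering on the nvlddmkm provider with the built-in wevtutil tool. A small Python sketch (the helper names are mine, and running the query naturally requires Windows):

```python
# Sketch: build and run a wevtutil query for recent events from the
# nvlddmkm provider (the source of the Event ID 14 / 4101 entries above).
import subprocess

def nvlddmkm_query(count=5):
    # Filter the System log for entries whose provider is nvlddmkm.
    return ["wevtutil", "qe", "System",
            "/q:*[System[Provider[@Name='nvlddmkm']]]",
            "/f:text", f"/c:{count}"]

def recent_tdr_events(count=5):
    # Windows-only: shells out to wevtutil and returns its text output.
    result = subprocess.run(nvlddmkm_query(count),
                            capture_output=True, text=True)
    return result.stdout
```

Usage on a Windows box would be simply `print(recent_tdr_events())` from an elevated prompt.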


----------



## rtwjunkie (Feb 27, 2019)

BenchAndGames said:


> I was trying to simulate the real game experience, where today's games completely use all of your cores and GPU at the same time. I just wanted to test the video card.


I will offer my opinion.  If you want to test out your computer’s ability to handle a tough game experience, then test it by playing games.   Doing what you want it to without error is a pretty good judge of its ability.


----------



## EarthDog (Feb 27, 2019)

As we told you at OCF... stop running FurMark.

Just an FYI, he cross-posted on multiple forums. So you guys don't spin your tires here, I've linked his threads from OCF where we went over these things and gave the same advice.
https://www.overclockers.com/forums/showthread.php/793790-CPU-overclock-can-make-graphics-card-fail

https://www.overclockers.com/forums/showthread.php/793775-Half-FPS-in-all-games-with-default-bios



BenchAndGames said:


> I was trying to simulate the real game experience, where today's games completely use all of your cores and GPU at the same time. I just wanted to test the video card.


Run Unigine Heaven or 3DMark Fire Strike Extreme on a loop... running IBT and FurMark at the same time isn't remotely a realistic load. Very few games use all cores and threads, and when they do, they aren't as stressful as stress tests...


----------



## Zyll Goliat (Feb 27, 2019)

BenchAndGames said:


> All right so I have the last question:
> 
> "Timeout Detection and Recovery" it can happen because of bad/unstable CPU overclock ?
> 
> ...


Try reinstalling your GPU drivers: first wipe them completely with DDU, then install fresh... If the problem persists, there is a possibility something is wrong with your GPU...
P.S. STOP USING FURMARK!!!


----------



## BenchAndGames (Feb 27, 2019)

EarthDog said:


> As we told you at ocf....stop running furmark.
> 
> Just an fyi, he cross forum posted. So you guys dont spin your tires here... I linked his threads from ocf where we went over these things and shared the same advice.
> https://www.overclockers.com/forums/showthread.php/793790-CPU-overclock-can-make-graphics-card-fail
> ...



I appreciate that you are active on my issues and that you help, but you don't need to manipulate me or the other people. I'm within my rights to come and ask for help on other forums. You should edit your post and remove the part telling people here not to focus on my issues. Education is everything.


----------



## EarthDog (Feb 27, 2019)

BenchAndGames said:


> I appreciate that you are active on my issues and that you help, but you don't need to manipulate me or the other people. I'm within my rights to come and ask for help on other forums. You should edit your post and remove the part telling people here not to focus on my issues. Education is everything.


Nobody is manipulating anything... nor did I say, or allude to, you not having a right to do this. I shared that you have gone over a lot of this at a different site and posted the links so that those who run across this thread won't waste their time (or yours) going over information that has already been provided. 

Education IS everything (I'm not changing my post - I didn't ask anyone not to help, WTH??), but let's not waste the time of people who are helping you for free by going over things that were covered previously. It is common forum etiquette that if you cross-post, you at least link the threads so people can see what has already been done.

From another site........(reasons)


> Please don't cross-post questions on multiple forums at the same time (whether on multiple threads here, or on other forums). If you post on another forum and don't get the answer you need, please feel free to post it here after a reasonable length of time, or better still, post here first and then try elsewhere if we can't help.
> 
> If we recognize a thread from another forum, we'll post a link to the other thread so that those reading are aware of the other thread. Why? 2 reasons:
> 
> ...



I hope you are now able to understand why I posted those links.


----------



## FireFox (Feb 28, 2019)

ppn said:


> You should try running IBT at lower CPU priority with Task Manager so that it leaves some CPU cycles for FurMark and the NVIDIA driver,



Really?
Your post is #20 - did you read the other 19? I guess you didn't.



BenchAndGames said:


> I was trying to simulate the real game experience, where today's games completely use all of your cores and GPU at the same time. I just wanted to test the video card.



Sad to say, but you were doing it completely wrong; more than testing your GPU, you were cooking it. I don't know where you got that information from, but IBT and FurMark don't have anything to do with simulating a real game experience; use Fire Strike or the Heaven benchmark instead.

P.S. Take @EarthDog's advice into consideration; I am sure he didn't mean it in a bad way.


----------



## BenchAndGames (Feb 28, 2019)

OK, thanks all for the help.


----------



## EarthDog (Feb 28, 2019)

The thing is, Bench, I can only think of ONE game that uses 'all' threads. Games typically use only a few cores and threads at most. Also, the difference between gaming loads and stress-test loads is huge. There isn't anything wrong with testing both at the same time, but FurMark is a terrible test for the GPU.


----------



## londiste (Feb 28, 2019)

Does Furmark fail when not running simultaneously with IBT?


----------



## ppn (Feb 28, 2019)

Knoxx29 said:


> Really?
> Your post is #20 - did you read the other 19? I guess you didn't



I missed that part; I thought you were total denialists of this test combination, so I was less inclined to read. But I see now.


----------



## BenchAndGames (Feb 28, 2019)

@EarthDog
I understand, but I will return it because I don't want to start from the beginning with troubles. (I mean that first freeze/TDR at idle.) For only 20€ more I can actually buy the Ventus OC edition and hope that one runs well without any problems.
It may sound like I'm exaggerating a little, but I remember a few years ago when I got a GTX 970 I had to replace it with a new unit because I had similar issues. It started the same as this one, but with that card it was almost every day: in-game freezes, a totally blank or green screen, buggy sound. So I replaced it, and the new unit they sent me was totally fine, 0 problems.


@londiste
No, I was able to run it for about 2 hours on its own, and then I just stopped it manually because I didn't see any problems.


----------



## Shambles1980 (Feb 28, 2019)

The only time I use IBT is when water cooling (custom loop) and overclocking, to see what my max fan speeds need to be to keep the system in check during prolonged periods of heavy usage.
I never use IBT when I have an air cooler.
As for IBT + FurMark, I just wouldn't do it. The OCCT PSU test is enough stress for a system to check stability across all components.
It's a pretty similar setup too, but it's automated and runs the tests in an alternating pattern to achieve the results needed for a stability test.

For real-world gaming tests, well, play the game, or failing that I like the Fire Strike combined test.


----------



## EarthDog (Feb 28, 2019)

BenchAndGames said:


> @EarthDog
> I understand, but I will return it because I don't want to start from the beginning with troubles. (I mean that first freeze/TDR
> 
> 
> ...


----------



## Regeneration (Feb 28, 2019)

FurMark + Linpack is a good way to stress test the northbridge, PSU, and the entire bus (FSB/QPI/UPI/HT).

The error is probably related to instability in one of those.


----------



## EarthDog (Feb 28, 2019)

Regeneration said:


> Furmark + Linpack is a good way to stress test Northbridge, PSU, and entire bus (FSB/QPI/UPI/HT).
> 
> The error is probably related to instability of one of those things.


FurMark is a power virus and should not be run... NVIDIA and AMD have said this!! It doesn't even test your running clocks and voltage, and it immediately throttles any relatively modern GPU!

Stop saying it is OK...


----------



## londiste (Feb 28, 2019)

Both Nvidia and AMD have a bone to pick with Furmark because it used to show the maximum power consumption of their cards before they could get their power management in order.

It is perfectly OK to run.
Running clocks are irrelevant. Furmark is to test power consumption and temperatures.


----------



## Regeneration (Feb 28, 2019)

NVIDIA and AMD said it 10 years ago, but FurMark has improved since then. Mining and GPGPU can be as stressful as FurMark.

There is nothing wrong with rendering some furry object and textures. If the card can't handle it, something is wrong with its cooler.


----------



## EarthDog (Feb 28, 2019)

londiste said:


> Both Nvidia and AMD have a bone to pick with Furmark because it used to show the maximum power consumption of their cards before they could get their power management in order.
> 
> It is perfectly OK to run.
> Running clocks are irrelevant. Furmark is to test power consumption and temperatures.


No.

It isn't because they have a bone to pick. They both say it can damage the card and not to run it.



Regeneration said:


> Nvidia and AMD said it 10 years ago,


It's in NVIDIA's reviewer's guide for RTX and generations before as well...


----------



## londiste (Feb 28, 2019)

EarthDog said:


> No.
> It isn't because they have a bone to pick. They both say it can damage the card and not to run it.
> It's in NVIDIA's reviewer's guide for RTX and generations before as well...


Please explain why Furmark is not OK?


----------



## Regeneration (Feb 28, 2019)

Lousy cooling can damage the card. Not rendering some graphical objects.


----------



## EarthDog (Feb 28, 2019)

londiste said:


> Please explain why Furmark is not OK?


I have already...

But let me get to a desktop (45 mins or so) and I will quote nvidia from the reviewers guide and an FAQ.


----------



## Regeneration (Feb 28, 2019)

If Nvidia tells you to jump off the roof, will you do it?




Yes, please explain to me how some rotating MSI graphical object can damage the card.

My GTX 970 runs it at TDP + 81 percent. Temps are normal.


----------



## EarthDog (Feb 28, 2019)

Regeneration said:


> My GTX 970 runs it at TDP + 81 percent. Temps are normal.


So... you don't think your clearly modified TDP on the card has anything to do with that??? 

Also, FTR, I haven't mentioned anything about temperatures.



Sorry, it wasn't an FAQ, but from an NVIDIA forums and CSR...



> Furmark is an application designed to stress the GPU by maximizing power draw well beyond any real world application or game. In some cases, this could lead to slowdown of the graphics card due to hitting over-temperature or over-current protection mechanisms. These protection mechanisms are designed to ensure the safe operation of the graphics card. Using Furmark or other applications to disable these protection mechanisms can result in permanent damage to the graphics card and void the manufacturer's warranty.


https://forums.geforce.com/default/...r-stress-tests-with-geforce-graphics-cards/1/


Note, I did not add the bold, it is like this in the guide...


> For GeForce RTX 2080, we also recommend reviewers test GPU power consumption with actual
> games and applications, rather than tools like Furmark.
> 
> Applications like Furmark and OCCT—which are often called “power viruses”—will stress the
> ...



So again... this is not the first guide that suggests not to run it because it could permanently damage the card. Yes, there are limits in place on the card; however, how is it a realistic scenario to have your card throttle clocks and voltage back, in my recent experience, hundreds of MHz below the running clock?

FurMark should not be used, period. It can damage cards and is a wholly unrealistic load... even when the card throttles itself back so low to fit in a power envelope. NOT a good test of anything.

Tell me again why I should run it when we are told not to? Do either of you have more anecdotes to add, or is straight from the horse's mouth now sufficient?


----------



## Regeneration (Feb 28, 2019)

This post was from 2011. Some apps and games, like The Witcher 3 and Fire Strike, can push the GPU to FurMark's level.

https://www.tomshardware.com/reviews/how-to-stress-test-graphics-cards,5449-8.html

Vendors recommend avoiding overclocking, and we still do it.


----------



## EarthDog (Feb 28, 2019)

Being from 2011 just shows how long they have been saying it... I mean, I just quoted it from the RTX guide...



Regeneration said:


> Vendors recommend avoiding overclocking, and we still do it.


That doesn't mean it's right. 

Fire Strike does NOT reach FM levels, last I ran it... At least it holds its clocks. 

I am running FM now on my 2080, and instantly my clocks are below the minimum rated boost while still banging off the power limit. They are coming close to base clocks.


----------



## Regeneration (Feb 28, 2019)

Power:
FurMark: 102.2 W
Fire Strike: 98.8 W

Temps:
FurMark: 64 °C
Fire Strike: 64 °C

3.4 W is not a big deal.


----------



## EarthDog (Feb 28, 2019)

You see the difference there, right? 

3DM FS runs at the clocks and voltages it would normally run at in game. During FurMark, the clocks and voltage are beaten down to WELL below running speeds. How can you call that 'like'???


----------



## bug (Feb 28, 2019)

cucker tarlson said:


> Well, then play those games, for heaven's sake.


Stupid advice. What's next, "get a _real_ girlfriend"?


----------



## rtwjunkie (Feb 28, 2019)

bug said:


> Stupid advice. What's next, "get a _real_ girlfriend"?


It’s actually great advice. I gave it earlier too.  The best test of your PC’s ability to safely play games is to play those games.


----------



## londiste (Feb 28, 2019)

EarthDog said:


> So again... this is not the first guide that suggests not to run it because it could permanently damage the card. Yes, there are limits in place on the card; however, how is it a realistic scenario to have your card throttle clocks and voltage back, in my recent experience, hundreds of MHz below the running clock?
> 
> FurMark should not be used, period. It can damage cards and is a wholly unrealistic load... even when the card throttles itself back so low to fit in a power envelope. NOT a good test of anything.
> 
> Tell me again why I should run it when we are told not to? Do either of you have more anecdotes to add, or is straight from the horse's mouth now sufficient?


FurMark, by your own description, is an EXCELLENT test of power usage, power management, and temperatures/cooling.


----------



## EarthDog (Feb 28, 2019)

rtwjunkie said:


> It’s actually great advice. I gave it earlier too.  The best test of your PC’s ability to safely play games is to play those games.


This is why I don't put too much stock in stress-testing CPUs... I give it a good few hours in AIDA64 as well as a couple of hours of RealBench, and for MY uses (obviously YMMV) that works out fine for me.


londiste said:


> FurMark, by your own description, is an EXCELLENT test of power usage, power management, and temperatures/cooling.


How so, when it isn't testing the clocks and voltage you run at, but is artificially limited by banging HARD off the power limit? Running FS will hit the limit, but so will any game, and the card 'settles' on a boost clock; it isn't downclocked BELOW base boost and approaching base clocks (which the card NEVER runs at).


----------



## londiste (Feb 28, 2019)

EarthDog said:


> How so, when it isn't testing the clocks and voltage you run at, but is artificially limited by banging HARD off the power limit? Running FS will hit the limit, but so will any game, and the card 'settles' on a boost clock; it isn't downclocked BELOW base boost and approaching base clocks (which the card NEVER runs at).


It is a *stress* test, pretty much for testing the worst-case scenario.
How do you know the clocks, voltage, and load you will be running? With games I see this being very, VERY variable.

Besides, what makes one test worse than the other? Tom's Hardware found FurMark to fall close enough to The Witcher 3, which does not sound like an unreasonable thing:
https://www.tomshardware.com/reviews/how-to-stress-test-graphics-cards,5449.html

Most of the games I play have been banging on the power limit for years, sometimes the voltage limit as well.


----------



## EarthDog (Feb 28, 2019)

Again I ask... if you are not testing the same clocks and voltages you run at (in general), then how is it a worthwhile test? Tell me how testing at 1650 MHz / 0.8xx V versus 1950 MHz / 1.05 V is a worthwhile stress test.

All cards tickle the power limit in games, but the clocks don't plummet out of the gate to below base boost and approach base clocks.


----------



## londiste (Feb 28, 2019)

It is not for testing clocks. It is a stress test to test power consumption and temperatures.


----------



## qubit (Feb 28, 2019)

BenchAndGames said:


> Running Intel Burn Test + FurMark (both latest versions) simultaneously


That's interesting; it should technically work, but it puts a lot of stress on the system, so the system may not be robust enough to take it.

As the others are saying, it could damage your hardware and is a bit pointless, so I don't recommend continuing with this.


----------



## londiste (Feb 28, 2019)

Is stress testing something that has gone out of vogue or something? A stress test, by definition, is a test designed to assess how well a system functions when subjected to greater-than-normal amounts of stress or pressure. If the system passes a stress test, you can be quite sure it will take any normal stress and be fine.


----------



## EarthDog (Feb 28, 2019)

londiste said:


> It is not for testing clocks. It is a stress test to test power consumption and temperatures.


But it is, for many (most), a stress test for STABILITY (see the OP). Annnnnnnnnnd NVIDIA says not to test power with this in the first place.

EDIT: And if we are getting the same power as testing with 3DM FS, then why would we want something more stressful that just pounds on the limiter for the same result? That doesn't remotely make sense to me.

So, is there an answer to my question for those who use this, like the OP, to find stability in a system when it's HUNDREDS of MHz off from running clocks? Can we admit it isn't good for that, at minimum?


----------



## londiste (Feb 28, 2019)

What does frequency have to do with stability on stock clocks? Limits do, as well as behaviour around them.
By your argument, FurMark should be too easy a test, as the frequency is off to the low side.



EarthDog said:


> EDIT: And if we are getting the same power as testing with 3DM FS, then why would we want something more stressful that just pounds on the limiter for the same result? That doesn't remotely make sense to me.


Why not? And why would it be more stressful when you just said it draws the same power as 3DM FS?
If you look at the Tom's Hardware test results, FurMark is a bit heavier on memory compared to 3DM FS, which explains the small difference in power consumption.


----------



## qubit (Feb 28, 2019)

londiste said:


> Is stress testing something that has gone out of vogue or something? A stress test, by definition, is a test designed to assess how well a system functions when subjected to greater-than-normal amounts of stress or pressure. If the system passes a stress test, you can be quite sure it will take any normal stress and be fine.


In principle you're correct. However, consumer kit is built down to a price and hence robustness isn't always what it should be, especially with budget components and if the components are old. Hence, it's possible that these stress tests can damage them. Therefore, something that doesn't stress a system to its limits is a more prudent option.


----------



## EarthDog (Feb 28, 2019)

londiste said:


> What does frequency have to do with stability on stock clocks?


I shouldn't have to answer this... you know why... and I have said why, quite concisely and clearly, multiple times already. 



londiste said:


> By your argument, Furmark should be too easy a test as frequency is off to a low side.


Que? How is a test at clocks that are HUNDREDS of MHz lower, and at notably lower voltage, 'easy'? 

Brother, when testing a CPU overclock, do you test at the clock speed you run at, or towards the base clock? Why would a GPU be any different? How is testing at 1680 MHz / 0.8xx V the same as testing running clocks of 1950 MHz / 1.05 V?

I simply don't know what else I can say here... the logic just B-slaps me in the face on this one, LOL!


----------



## londiste (Feb 28, 2019)

EarthDog said:


> Que? How is a test at clocks that are HUNDREDS of MHz lower, and at notably lower voltage, 'easy'?


Are you really trying to tell me lower clocks and lower voltage are a more stressful test than higher clocks and higher voltage?



EarthDog said:


> Brother, when testing a CPU overclock, do you test at the clock speed you run at, or towards the base clock? Why would a GPU be any different? How is testing at 1680 MHz / 0.8xx V the same as testing running clocks of 1950 MHz / 1.05 V?


FurMark is not for testing (gaming) clock speeds. It is not meant for it, and it is not good for it. I think I have written this several times already.


----------



## EarthDog (Feb 28, 2019)

londiste said:


> Are you really trying to tell me lower clocks and lower voltage is a more stressful test?


No. How did you extrapolate that from what I have been saying?

I am asking a couple of things (which you seem to refuse to answer...):

1. How is it the SAME test for STABILITY when it is using (in my case) 300 MHz lower clocks and notably less voltage, at the same power as running at ACTUAL boost clocks? How is testing at 1650 MHz / 0.8xx V the same as testing running clocks of 1950 MHz / 1.0x V?
2. Why do we test CPUs at the clocks they run at as a norm/standard, but it's OK to run a GPU hundreds of MHz lower and call that fine?

EDIT: YOU say it's for power consumption... yet we see the vast majority of users, like the OP, run it as a stability/stress test. Isn't one of the points of running a stress test to see whether you are stable where you are, be it stock or overclocked??? Come on now... how many times have you seen me say, in this forum alone, not to run this to test stability???


----------



## londiste (Feb 28, 2019)

EarthDog said:


> I am asking a couple of things (which you seem to refuse to answer...):
> 
> 1. How is it the SAME test for STABILITY when it is using (in my case) 300 MHz lower clocks and notably less voltage, at the same power as running at ACTUAL boost clocks? How is testing at 1650 MHz / 0.8xx V the same as testing running clocks of 1950 MHz / 1.0x V?
> 2. Why do we test CPUs at the clocks they run at as a norm/standard, but it's OK to run a GPU hundreds of MHz lower and call that fine?


This started with stress testing at stock.
1. How do you want to define stability? Clocks - again, especially if we are talking about running things at stock - are only one part of the problem. And the frequency range is guaranteed in spec. Limits are a much bigger problem, especially with GPUs.
2. Without OC - and going beyond the spec - you are likely to test a contemporary CPU at hundreds of MHz less than maximum boost. It will run into power limit.



EarthDog said:


> EDIT: YOU say it's for power consumption... yet we see the vast majority of users, like the OP, run it as a stability/stress test. Isn't one of the points of running a stress test to see whether you are stable where you are, be it stock or overclocked??? Come on now... how many times have you seen me say, in this forum alone, not to run this to test stability???


There is no such thing as a guaranteed clock speed for a GPU. There hasn't been for years.
When I play games, my GPU frequency is anywhere between about 1820 and 2040 MHz. Clocks depend on how much power the GPU/card pulls (which FurMark does test) and what temperatures it reaches (again, which FurMark does test). It does not test for the maximum possible frequency, or stability at that frequency in every situation. No single test does.


----------



## EarthDog (Feb 28, 2019)

londiste said:


> This started with stress testing at stock.
> 1. How do you want to define stability? Clocks - again, especially if we are talking about running things at stock - are only one part of the problem. And the frequency range is guaranteed in spec. Limits are a much bigger problem, especially with GPUs.
> 2. Without OC - and going beyond the spec - you are likely to test a contemporary CPU at hundreds of MHz less than maximum boost. It will run into power limit.
> 
> There is no such thing as guaranteed clock speed for a GPU. has not been for years.


1. Stability is defined as being able to run at your given clock speed... stock or overclocked.
2. My CPUs boost to their all-core/thread clocks and single-thread clocks without issue in P95 with AVX or AIDA64 with AVX/AVX-512.

RE: no such thing as a guaranteed clock - you are absolutely right. You'll notice earlier I specifically said "GENERALLY" when talking about clocks in my post. The point is that when gaming (or testing with 3DM FS), clocks are _generally_ A LOT higher than when testing in FurMark. They tend to stabilize around a _general_ clock speed once temperatures stabilize, and that clock value is WELL over the RATED boost one sees in GPU-Z. When running FurMark, the clocks instantly drop BELOW the base boost clock and stock voltage... approaching the BASE CLOCK (because the card is doing what it can to fit within TDP). As I said, in my example, the 2080 I have runs around 1950 MHz in gaming or 3DM FS... but when I bench FurMark, it runs in the mid-1600s with a lot less voltage.

I just don't see the point in testing something, for power or stability, when FurMark is literally POUNDING on the limit and CONSTANTLY trying to stay below TDP. It is quite simply apples and oranges.
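The clock drop described here follows from the standard dynamic-power relation P ≈ α·C·V²·f: at a fixed board power limit, a higher-activity ("power virus") load has to drop frequency, and with it voltage. A toy Python sketch with made-up numbers, just to show the scaling, not measurements of any real card:

```python
# Sketch: at a fixed power limit, the sustainable frequency falls as the
# workload's switching-activity factor rises. Assumes voltage scales
# roughly linearly with frequency (a common DVFS approximation), so
# P ~ activity * k * f^3  =>  f ~ (P / (activity * k)) ** (1/3).

def max_freq(power_limit, activity, k=1.0):
    # Highest frequency that fits inside the power limit for this load.
    return (power_limit / (activity * k)) ** (1 / 3)

if __name__ == "__main__":
    game_f = max_freq(power_limit=225, activity=0.5)   # game-like load
    virus_f = max_freq(power_limit=225, activity=1.0)  # FurMark-style load

    # Same power budget, but the heavier load sustains a lower clock.
    print(f"game-like load:  {game_f:.2f} (arbitrary units)")
    print(f"virus-like load: {virus_f:.2f} (arbitrary units)")
    print(f"clock reduction: {(1 - virus_f / game_f) * 100:.0f}%")
```

Under these assumptions, doubling the activity factor costs about 21% of frequency (a factor of the cube root of 2), which is why a card can draw the same power in FurMark and in a game while running hundreds of MHz slower in the former.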


----------



## londiste (Feb 28, 2019)

EarthDog said:


> 1. Stability is defined as being able to run at your given clockspeed...stock or overclocked.


By that definition no current CPU or GPU is stable at stock 



EarthDog said:


> I just don't see the point in testing something, for power or stability, when FurMark is literally POUNDING on the limit and CONSTANTLY trying to stay below TDP. It is quite simply apples and oranges.


That is exactly what every other load on the same GPU does. They will simply manage to stay at a higher clock rate thanks to lower ALU occupancy.


----------



## EarthDog (Feb 28, 2019)

londiste said:


> By that definition no current CPU or GPU is stable at stock


Sorry? What? What are you saying here????? That isn't remotely true...?!!!


----------



## londiste (Feb 28, 2019)

EarthDog said:


> Sorry? What? What are you saying here????? That isn't remotely true...?!!!


What exactly did you mean by given clock speeds?


EarthDog said:


> 1. Stability is defined as being able to run at your given clockspeed...stock or overclocked.


----------



## EarthDog (Feb 28, 2019)

londiste said:


> That is exactly what every other load on the same GPU does. They will simply manage to stay at a higher clock rate thanks to lower ALU occupancy.


You missed the part where I said it "generally" settles to a 'stable' clock it seems... my clocks on the GPU when gaming hover around 1950 MHz.. could be a bin or two higher, could be a bin or two lower... that is how these cards work, spot on. What Furmark does though, is hit so hard against the limit that it has to lower clocks HUNDREDS OF MHz as well as lowering voltage in order to fit within that same power envelope. Clearly the card is working MUCH harder to maintain TDP/Temps with Furmark than it is with gaming. Why bother using it if the power is the same but one just bangs off the rev limiter harder and with greater losses?


londiste said:


> What exactly did you mean by given clock speeds?


For a CPU, it is whatever you set it at, stock or overclocked. For a GPU, see above... due to how these work, it is difficult to say EXACTLY 1950 MHz (where my card runs)... but for stability and stress testing, you can't seriously think that running hundreds of MHz lower is actually testing your clocks? All that tells me is Furmark puts so much stress on the GPU that it can't come close to running its 'given'/actual clockspeeds or voltages.

Also, see my screenshot from above. Furmark is on the left, Unigine Heaven on the right. Fan set at 55% manually, power limit at stock. That is the 4K UHD bench (it does the same at 2560x1440 as well).


----------



## Shambles1980 (Feb 28, 2019)

"given" is a word to generalize something to its specific state...
for example if i were to say..
These second gen i5 cpus can all run fine at their given clock speeds..
you could then check what each cpu's individual speed was. "if you wanted to know the specific speed"
No one expects people to state the stock speed of every variation of an i5 2nd gen in a sentance of that manner.


----------



## londiste (Feb 28, 2019)

EarthDog said:


> Also, see my screenshot from above. Furmark is on the Left, Unigine Heaven on the right. Fan set at 55% manually, power limit at stock. That is the 4K UHD bench (does the same at 2560x1440 as well).


Heaven runs completely into voltage limit?


----------



## trog100 (Feb 28, 2019)

i currently have my 2080 Ti set to a max power limit of 75%.. furmark hits its power limit (boost) at about 900 MHz.. with the max power limit set to 125% the boost goes up to around 1750..

furmark won't do the slightest harm.. it's all controlled in the BIOS now, just like the max power usage is.. both a modern cpu and gpu are pretty much idiot proof out of the box..

you are living in the past earthdog my friend.. time you dropped this vendetta against poor old furmark.. he he

trog


----------



## EarthDog (Feb 28, 2019)

trog100 said:


> you are living in the past earthdog my friend.. time you dropped this vendetta against poor old furmark.. he he


It came from the RTX guide... that isn't so far in the past that it's not relevant.


----------



## Shambles1980 (Feb 28, 2019)

I still say just run the OCCT PSU stress test rather than IBT + Furmark. It's pretty similar but has more benefits.


----------



## Vayra86 (Feb 28, 2019)

londiste said:


> *This started with stress testing at stock*.
> 1. How do you want to define stability? Clocks - again, especially if we are talking about running things at stock - are only one part of the problem. And the frequency range is guaranteed in spec. Limits are a much bigger problem, especially with GPUs.
> 2. Without OC - and going beyond the spec - you are likely to test a contemporary CPU at hundreds of MHz less than maximum boost. It will run into power limit.
> 
> ...



GPU Boost 3.0 is _stock for Nvidia cards._ They get a base clock and GPU Boost does the rest - in both directions: it will also throttle if your kit gets too hot.

Every decent stress test _does_ test for the maximum possible frequency, in fact, within the limitations of GPU Boost 3.0. A good stress test simulates a real-world load, and in those, the GPU will find an equilibrium between temps, the highest possible clock rate, and voltage. Furmark however is not a real-world load, and the driver/BIOS contains flags to make sure GPU Boost 3.0 specifically does NOT do what it is supposed to do. How? Simple: you get a hard lock on _voltage_, one of the key variables for GPU Boost to work properly. You simply cannot use the whole clock/voltage curve that is _set at stock_ for these cards, regardless of temperature and regardless of actual power usage at the wall.

So while there is no guaranteed clockspeed for an Nvidia GPU, there is a guaranteed GPU Boost 3.0 behaviour, and Furmark presents a situation where that behaviour is... adjusted. It is also the _only_ stress test that manages to do this; it's the odd one out. So we can split hairs about frequency and varying clocks, but that is not the underlying cause of what you see in Furmark.


----------



## EarthDog (Feb 28, 2019)

londiste said:


> Heaven runs completely into voltage limit?


Yes, but not typically that hard... lol. I was running something custom. Here is a better SS. First is a full Heaven Extreme run (using the HWBot wrapper), then 4K UHD, then 2560x1440.

Obviously, both are hitting the power limit, but CLEARLY one is slamming into it with reckless abandon causing the clocks to drop precipitously down hundreds of MHz just to fit within TDP while the other is pretty damn stable throughout...a bin or two like I said above.


----------



## londiste (Feb 28, 2019)

Vayra86 said:


> Furmark however is not a real-world load, and the driver/BIOS contains flags to make sure GPU Boost 3.0 specifically does NOT do what it is supposed to do. How? Simple: you get a hard lock on _voltage_, one of the key variables for GPU Boost to work properly. You simply cannot use the whole clock/voltage curve that is _set at stock_ for these cards, regardless of temperature and regardless of actual power usage at the wall.


Err... anything to back up that claim?
Furmark will simply run into power limit. Perfectly normal with GPU Boost 3.0. Most stress tests will run into power limit in the exact same way.
AMD has a similar power management system; I think it is currently called PowerTune.



EarthDog said:


> Obviously, both are hitting the power limit, but CLEARLY one is slamming into it with reckless abandon causing the clocks to drop precipitously down hundreds of MHz just to fit within TDP while the other is pretty damn stable throughout...a bin or two like I said above.


Try Firestrike? Shadow/Rise of Tomb Raider? Witcher 3? They'll all slam the power limit in the exact same way. With somewhat varying frequencies. And no, they probably won't go down that low.


----------



## EarthDog (Feb 28, 2019)

londiste said:


> Err... anything to back up that claim?


What claim... that it drops precipitously due to slamming off the power limit? The screenshot is the proof. Look at what Heaven does and look at Furmark...



londiste said:


> Furmark will simply run into power limit. Perfectly normal with GPU Boost 3.0. Most stress tests will run into power limit in the exact same way.


Indeed. Except FM hits the card so hard that it lowers clocks by HUNDREDS of MHz (and voltage by tenths) just to fit within the power envelope. I'm not sure I can post any more proof than what we see right here. What does that graph tell you, if not that FM is hitting the power limit a lot harder?



londiste said:


> Try Firestrike? Shadow/Rise of Tomb Raider? Witcher 3? They'll all slam the power limit in the exact same way. With somewhat varying frequencies. And no, they probably won't go down that low.


They won't come close to that low... they'll be close to those running clocks! I've tested this before... but I'll be happy to provide another screenshot, though the burden of proof doesn't lie on these shoulders... look here in a couple of minutes...

I can also run SOTR and show the same thing if you like.......


----------



## londiste (Feb 28, 2019)

EarthDog said:


> What claim... that it drops precipitously due to slamming off the power limit? The screenshot is the proof. Look at what Heaven does and look at Furmark...


The claim was that there are flags in driver/BIOS specifically for Furmark, locking voltage.


EarthDog said:


> Indeed. Except FM hits the card so hard that it lowers clocks by HUNDREDS of MHz (and voltage by tenths) just to fit within the power envelope. I'm not sure I can post any more proof than what we see right here. What does that graph tell you, if not that FM is hitting the power limit a lot harder?


Again, Furmark is not a test for clock speeds.
Firestrike will flat out be power-limited all the way.

Stop with the clocks already. I get it. Furmark runs at a lot lower clock speeds. I have never argued that. We knew that from the get-go. Everyone knows.
It runs into the power limit. So does everything else we mentioned here.
Well, not Heaven, apparently. Which makes it a good test for high clocks but not for much else, and those clocks won't be any more representative of a game.

edit:
Interesting. Did you run Furmark at 2160p and 8xAA? When I try this I get about 90% of the power limit and 2040 MHz (dropping to 2012 by 76C) :/
Furmark should be most evil at 720p with no AA.


----------



## EarthDog (Feb 28, 2019)

Stop with the clocks? Sure thing.. one last image (run after your suggestion, note) to show that 3DM FS runs high clocks while also tickling the power limit.. SOTR does the same thing, but again, clocks are hundreds of MHz higher...

Flat out power limited... it tickles it... yet it reaches typical gaming clocks and isn't pounding on the power limit, requiring the clocks and voltage to drop in order to maintain it.

Yeah, great stress/stability/power test...







EDIT: In the end, everyone will do what they want. But after seeing what I have been seeing, and reading what NVIDIA and AMD call it and how they suggest not to use it, I don't see this application as terribly useful compared to others out there which show the same things and actually test at proper clocks. I get the 30,000-foot view that clocks do not matter, but people use this as a stability/stress test, man... and it fails miserably at that because it cannot test proper clocks. There is stress testing your GPU and there is this... you/users don't have to agree, but every time I run across people saying it's OK, I'm throwing this information out to let users make the decision for themselves. I'm not a lemming, but I will follow this.


----------



## MrGenius (Feb 28, 2019)

Meanwhile...@der8auer is running Furmark and Prime95 simultaneously, for hours on end, to test stability for his signature pre-overclocked systems being sold @caseking.de.


----------



## BenchAndGames (Feb 28, 2019)

I just want to make something clear: in my case, Furmark is running at 1920 MHz, actually higher than Unigine Heaven Extreme (1860 MHz) or FireStrike Ultra (1905 MHz), all in fullscreen at maximum settings.


----------



## cucker tarlson (Feb 28, 2019)

MrGenius said:


> Meanwhile...@der8auer is running Furmark and Prime95 simultaneously, for hours on end, to test stability for his signature pre-overclocked systems being sold @caseking.de.


which he absolutely doesn't have to.


----------



## EarthDog (Feb 28, 2019)

BenchAndGames said:


> I just want to make something clear: in my case, Furmark is running at 1920 MHz, actually higher than Unigine Heaven Extreme (1860 MHz) or FireStrike Ultra (1905 MHz), all in fullscreen at maximum settings.


That is interesting... regardless, though, I would still use something else, considering AMD and NVIDIA state not to run it because it could cause damage. Loop 3DMark or something with it instead.  

I will leave it at that in this thread... Thanks, Londiste, for a conversation that did NOT devolve into barbs and toxicity. Not sure we achieved any clarity, or helped the OP much, but um.. yeah. 

Cheers.


----------



## MrGenius (Feb 28, 2019)

cucker tarlson said:


> which he absolutely doesn't have to.


Possibly not. But aside from eat, drink, sleep, shit, piss and breathe... I don't presume to know what anyone "has" to do. I do have a pretty good idea why he does it, though. And I can't think of a much better way to fully load a system for an extended period of time.


----------



## Regeneration (Feb 28, 2019)

MrGenius said:


> Meanwhile...@der8auer is running Furmark and Prime95 simultaneously, for hours on end, to test stability for his signature pre-overclocked systems being sold @caseking.de.



It is the only way to stress test the northbridge and entire bus.


----------



## qubit (Feb 28, 2019)

Vayra86 said:


> Furmark however is not a real-world load, and the driver/BIOS contains flags to make sure GPU Boost 3.0 specifically does NOT do what it is supposed to do. How? Simple: you get a hard lock on _voltage_, one of the key variables for GPU Boost to work properly. You simply cannot use the whole clock/voltage curve that is _set at stock_ for these cards, regardless of temperature and regardless of actual power usage at the wall.


My inner nerd wants to grab an old card that I don't care about, test it at stock with Furmark, noting its framerate performance, then defeat the safeties and run Furmark again. It will be interesting to see how much performance it gains and how long it lasts before it dies. Heck, it may be really tough and not die if we're lucky.


----------



## trog100 (Feb 28, 2019)

the current nvidia cards boost until they hit a power limit.. the lighter the load the higher the frequency reached.. the harder the load the lower the frequency reached..

oddly enough a light load's higher frequency can cause a card to crash.. the point being that boosting until a power limit is reached is normal behavior.. the clock speed will vary with the load.. furmark is a heavy load, hence the lower frequency reached.. the load on the gpu will always be maxed.. the frequency will differ, that is all..

cheaper cards with crappy coolers may reach a temp limit but those with decent coolers will always hit the power limits first..

trog


----------



## John Naylor (Feb 28, 2019)

Synthetic utilities have their uses but stability testing is not one of them.  For stability testing ...

a)  You want realistic loads, synthetic tests are not realistic.

b)  You want multitasking loads, which stress the CPU in a variety of ways

c)  You want to make sure that the loads include modern instruction sets.

d)  Synthetic tests present unrealistic loads, artificially lowering the maximum sustainable OCs otherwise attainable under real-world conditions.

Synthetic testing rarely addresses all four of these, and often addresses none of them.

Practical uses of synthetic utilities.

Prime95 (older non-AVX versions) - We use P95 to thermally cycle the TIM 4 or 5 times (more with certain TIMs), bringing temps up to 85C or so and then letting it cool to room temperature.  With some TIMs (e.g. AS5), this greatly accelerates the curing process, which otherwise can take 7 weeks or so (200 hours of normal usage, according to the AS5 manufacturer), but even with TIMs that state "no curing required", we often do see minor improvements.  

Furmark - used similarly to the above to cure the TIM on a card. With pump and rads at minimum rpm, we run Furmark up to the card's throttling point, adjusting rpm as necessary until it reaches a steady-state condition, then cycle back down as above. We then bring fan rpms up to the point where noise can be detected. Restart Furmark and, starting at 100% of pump speed, record the max temp once the system reaches steady-state conditions. "Rinse and repeat" for 90%, 80%, etc., down to 30%. From this data it can quickly be determined at which point the system receives no significant benefit from additional flow (typically @ 1.25 gpm). Then, with the pump at a fixed speed, the tests are repeated, varying the fan rpms from 100% of full speed down to 25%. This data is used to set up the fan curves. Furmark is critical here because it maintains a constant load; other "real world" utilities present varying loads, which render any such testing useless. On our test rig, for example, at 100% fan rpm we see 39C on the GPUs, and while I wouldn't call the 1,230-ish rpm noisy, it is audible. At an inaudible 850 rpm, the GPU temps under Furmark are 42C. You don't get any bonus points for being at 39C instead of 42C, so the max speed is set at 850... outside Furmark, we never see 40C.
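That fan-curve procedure (hold a constant Furmark load, record the steady-state temp at each fan speed, then back off until temps start to suffer) can be sketched as code. The readings below are made-up example numbers, not measurements:

```python
# fan % -> steady-state GPU temp (C) under a constant load (example data)
readings = {
    100: 39, 90: 39, 80: 40, 70: 41, 60: 42, 50: 45, 40: 51,
}

def pick_fan_speed(readings, margin_c=3):
    """Slowest fan speed whose steady-state temp is within margin_c of
    the best achievable temp - the 'no bonus points for 39C vs 42C' rule."""
    best = min(readings.values())
    ok = [pct for pct, t in readings.items() if t <= best + margin_c]
    return min(ok)

print(pick_fan_speed(readings))  # -> 60 (42C vs a best of 39C)
```

A tighter margin simply pushes the chosen speed back up toward full fan rpm.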

For stress testing we use ROG RealBench, a suite of 3 real-world programs which are all run at the same time, along with playing a movie, for the "Heavy Multitasking" test... it takes about 8 minutes and you will get very close to a working OC with just the 8-minute test.  For final dial-ins, to ensure stability, I use the 4-hour test, but many feel that 2 hours is adequate.  Temps will usually be about 10C lower than P95, and it's a given that your system will never see a load anything close to what RB provides in real-world usage.  Oh... one thing worth mentioning... I have had 24-hour P95-stable OCs fail in RB.

For GPUs, it's a bit more cumbersome; using the 3DMark and Unigine benchmarks, you can dial things in pretty close.  BE AWARE that you will almost never get the best results with the highest core or memory OCs.  In TPU's testing with the 2080 Tis (Micron memory)...

They got the Zotac Amp to a core of 2,145 which netted 221.5 fps in the OC test (2000 memory)
They got the Asus Strix to a memory OC of 2,065 which netted 225.0 fps in the OC test (115 core)

However, they got the MSI Gaming X Trio to 226.6 fps in the OC test with a 2085 core and 2005 memory.  My approach is to determine the max stable core with memory at default, and then determine the max memory with core at default.  Let's say that gives us

Default Core = 1650 / Max Core 2150
Default Memory = 1750 / Max Memory 2050

You might make a spreadsheet with Memory as the column headings and Core as the row headings.  So with 1750 in the 1st data column, the 1st data row would be 1650, the 2nd 1700, and so on till you crash; the 2nd column could be 1800, again testing at each +50 jump in core speed... recording the fps achieved.  Usually I wind up with something like the 5th row, 4th column giving the best fps results.
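The spreadsheet approach above can be sketched as a small script that generates the (memory, core) grid to be benchmarked one cell at a time, reusing the hypothetical default/max figures from the example:

```python
def oc_grid(core_min, core_max, mem_min, mem_max, step=50):
    """All (memory, core) combinations between default and max, in +step
    jumps - the cells of the spreadsheet, columns = memory, rows = core."""
    return [(mem, core)
            for mem in range(mem_min, mem_max + 1, step)
            for core in range(core_min, core_max + 1, step)]

grid = oc_grid(1650, 2150, 1750, 2050)
print(len(grid))  # 7 memory columns x 11 core rows = 77 cells to test
```

In practice you would stop filling a column as soon as a cell crashes, so far fewer than 77 runs are actually needed.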

As for nvidia saying "don't do it"... I called nvidia about my custom-built lappie when I was using OCCT to OC it (it would not even run), and they told me: "We restricted the use of OCCT because folks were running the PSU test, which stressed the GPU and CPU at the same time, but since our newer cards are unaffected by the original issue, we are going to remove the limitation soon. In the meantime, just use Furmark".


----------



## Shambles1980 (Feb 28, 2019)

Saying it once more lol, then I'm out.
Just use the OCCT PSU test to load the system.
It provides graphs of all the monitored sensors ("temps, voltages, frequencies...") and you can also set a safety net, so if it reaches a temp you think is unsafe it will auto-stop. You then have the graphs to see what the temps, frequencies and voltages were all doing when it stopped and prior.
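That safety-net idea can be sketched as a simple watchdog loop. `read_temp` here is fed by a simulated sensor; a real implementation would poll NVML, a vendor API, or lm-sensors instead (all names and numbers below are illustrative):

```python
def watchdog(read_temp, limit_c, max_samples=1000):
    """Poll a temperature sensor, logging readings for later graphing,
    and stop as soon as the limit is crossed."""
    log = []
    for _ in range(max_samples):
        t = read_temp()
        log.append(t)
        if t >= limit_c:
            return True, log   # tripped the safety net
    return False, log          # ran the full test without tripping

# Simulated sensor ramping from 40C in 0.5C steps (stand-in for a real read):
raw = iter(range(80, 241))
tripped, log = watchdog(lambda: next(raw) / 2, limit_c=90)
print(tripped, log[-1])  # -> True 90.0
```

The log is exactly what OCCT-style graphs are built from: every reading up to and including the one that tripped the stop.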

It's just a better way to do it..

Right then im Out.


----------



## londiste (Mar 1, 2019)

EarthDog said:


> 1. Stability is defined as being able to run at your given clockspeed...stock or overclocked.


This is where I beg to differ. At least for me, stability also includes power and cooling, both of which Furmark allows you to test easily and consistently. Maximum power draw - which today is not the GPU maximum but the power limit - as well as worst-case temperatures. This will bring out any insufficiencies in the card or system, not necessarily directly related to the GPU: the power supply, VRM, perhaps even the motherboard, their capacity to provide power and the stability of it; the cooling of the card and of the case.


----------



## Vayra86 (Mar 1, 2019)

londiste said:


> This is where I beg to differ. At least for me, stability also includes power and cooling, both of which Furmark allows you to test easily and consistently. Maximum power draw - which today is not the GPU maximum but the power limit - as well as worst-case temperatures. This will bring out any insufficiencies in the card or system, not necessarily directly related to the GPU: the power supply, VRM, perhaps even the motherboard, their capacity to provide power and the stability of it; the cooling of the card and of the case.



Power and cooling are _requirements_ for stability. Clocks however are the intended performance level one attempts to be stable at. Different things, I'd say. I say this because power and cooling are directly related and many components self-manage this to remain stable. GPU Boost is a perfect example. It creates its own stability through power adjustments. Similarly, high temperatures are almost never a problem for stability, because the GPU will just use lower clocks instead.

Furmark loads all shaders/resources with a constant load and this heavily taxes the VRM. This in turn creates heat that is much greater than what you see in regular use - _in places you'd normally not have it_. Depending on the GPU, this can push it beyond safe ranges, even with all the measures in place. It causes voltage throttling, up to the point (again: depending on GPU headroom in cooling/power delivery) of putting it in a different power state altogether. The devil is in the details: since Furmark produces a constant, full load on all resources, the VRM has no opportunities to shed some heat where it normally would be able to (regular usage, that includes a 100% load for 24 hours in-game or in another stress test like 3DMark!). AIBs and Nvidia design and scale their cooling solutions based on this regular usage - and _not_ on a constant load as Furmark presents it.
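The "no opportunities to shed heat" point can be illustrated with a crude first-order thermal model (entirely my own sketch; the wattages and thermal constants are invented): a constant load and a duty-cycled load with the same peak power settle at different VRM temperatures, because the dips let heat drain away.

```python
def settle_temp(power_trace, ambient=30.0, r_th=0.5, c_th=20.0, dt=0.1):
    """Integrate dT/dt = (P*R_th - (T - ambient)) / (R_th*C_th)
    over the power trace and return the final temperature."""
    t = ambient
    for p in power_trace:
        t += dt * (p * r_th - (t - ambient)) / (r_th * c_th)
    return t

steps = 20000                               # 2000 s of simulated time
constant = [120.0] * steps                  # Furmark-style: flat 120 W
bursty = [120.0 if i % 10 < 6 else 40.0     # game-style: same 120 W peak,
          for i in range(steps)]            # but with regular dips

print(settle_temp(constant) > settle_temp(bursty))  # -> True
```

The constant trace settles at ambient + P·R_th (90C here), while the duty-cycled trace hovers well below it despite hitting the same peak wattage.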

Previous Nvidia gens, at least Fermi, had built-in measures to limit power draw from Furmark, and there are many documented instances of it killing cards. Today, GPU Boost does the job for you, and the BIOS is hard-locked to such a degree that you simply can't get Furmark to push the normal voltages you'd see in regular use. Regardless, the implementation is different, but the end result is the same: you get voltage-locked.

Heat is still a problem, though. Due to the lower voltage, your GPU die won't get as hot as it would in regular gaming, but at the same time the VRM might be cooking; after all, that power is going somewhere. Historically, we know that high VRM temps are the number one weak spot for GPU longevity; we also know recent generations of cards have had several hot spots (AIB models, and even as recent as the 2080 Ti FE!), and we often see memory exceed recommended temps when the VRM nearby gets crispy.

Now, we can search the interwebs all day for a source (and casually ignoring the official Nvidia and AMD statements on the matter), or we can simply use common sense. The fact it is used for a short test is still not preferable, because there are _many_ GPUs that are not sufficiently cooled to guarantee no damage is done to VRM or surrounding parts. That doesn't make it an immediate no-go for everything and it also explains why high-end components are much better equipped to deal with Furmark's excessive heat than for example, a cheap blower.


----------



## londiste (Mar 1, 2019)

Vayra86 said:


> Power and cooling are _requirements_ for stability. Clocks however are the intended performance level one attempts to be stable at. Different things, I'd say. I say this because power and cooling are directly related and many components self-manage this to remain stable. GPU Boost is a perfect example. It creates its own stability through power adjustments. Similarly, high temperatures are almost never a problem for stability, because the GPU will just use lower clocks instead.


Different phrasing, same point. I can rephrase - Furmark works well for verifying the requirements for stability.


Vayra86 said:


> Furmark loads all shaders/resources with a constant load and this heavily taxes the VRM. This in turn creates heat that is much greater than what you see in regular use - _in places you'd normally not have it_. Depending on the GPU, this can push it beyond safe ranges, even with all the measures in place. It causes voltage throttling, up to the point (again: depending on GPU headroom in cooling/power delivery) of putting it in a different power state altogether. The devil is in the details: since Furmark produces a constant, full load on all resources, the VRM has no opportunities to shed some heat where it normally would be able to (regular usage, that includes a 100% load for 24 hours in-game or in another stress test like 3DMark!). AIBs and Nvidia design and scale their cooling solutions based on this regular usage - and _not_ on a constant load as Furmark presents it.


Tom's Hardware testing linked above shows that Fire Strike and Witcher 3 (and Kombustor and OCCT and Sky Diver) cause even higher VRM temperatures. Timespy, Valley and Doom are not far behind. What makes this load more unrealistic?


Vayra86 said:


> Previous Nvidia gens, at least on Fermi, had built in measures to limit power draw from Furmark and there are many documented instances of it killing cards. Today, GPU Boost does the job for you and the BIOS is hard locked to such a degree that you simply can't get Furmark to push the normal voltages you'd see in regular use. Regardless, the implementation is different but the end result is the same: you get voltage locked.


Furmark didn't kill Fermis. They had limits in place that prevented actually killing hardware. It was all about maintaining image. Power consumption and clocks on Fermi were both awful with Furmark, so it was throttled by driver detection (to 550 MHz, if memory serves right). AMD did pretty much the same for the exact same reasons.


----------



## Vayra86 (Mar 1, 2019)

londiste said:


> What makes this load more unrealistic?
> 
> Power consumption and clocks on Fermi were both awful with Furmark, so it was throttled by driver detection (to 550 MHz, if memory serves right). AMD did pretty much the same for the exact same reasons.



Already explained, but it seems you don't want to read it. The _constant_ nature of the load - as in, full-on continuous strain, as opposed to the constantly changing type of load you see with every other bench. Hence the 'power virus' commentary. The VRM may momentarily hit peak temperatures in other tests, but it also gets dips in between. With Furmark, it gets that peak all the time. At the same time, core temps may not represent the same temperature scenario, and fan speed is determined by core temp, resulting in inadequate cooling.

So now you do admit Furmark was throttled by driver detection - why did I bother to explain this, then? With Fermi it was a flag; today GPU Boost does pretty much the same job. We see it in every Furmark bench result, and yet we're still denying it...?!



londiste said:


> Different phrasing, same point. I can rephrase - Furmark works well for verifying the requirements for stability.



No it does not, because you're not seeing the clocks you would see in-game. So you may not hit a power or temperature wall, but you can still be completely unstable in-game. These metrics are related; removing one from the equation eliminates the purpose of testing it.

You know what's so silly? When it comes to CPU overclocks lately, people 'don't run Prime' because it makes their CPU too hot and 'we don't use AVX anyway' (even though, ironically, even games _do_ use it)... the real reason is that many casual overclocks will just not survive that load. But when it comes to GPUs, somehow I'm reading topics where people want the exact opposite: to persist in testing a completely useless scenario that literally shows you nothing useful.


----------



## Bones (Mar 1, 2019)

All I can say about Furmark to anyone would be "Run at your own risk" and don't come bitching to me about it when your card dies.


----------



## ppn (Mar 1, 2019)

How do you test the memory controller above 50%, then?


----------



## londiste (Mar 1, 2019)

VRM temperatures are determined by power usage.
If a card running at the power limit breaks due to VRM overheating, that is a warranty case, period.


Vayra86 said:


> So now you do admit Furmark was throttled by driver detection - why did I bother to explain this, then? With Fermi it was a flag; today GPU Boost does pretty much the same job. We see it in every Furmark bench result, and yet we're still denying it...?!


It is not. It used to be. For years now, the only thing limiting Furmark has been the power limit.


Vayra86 said:


> No it does not, because you're not seeing the clocks you would see in-game. So you may not hit a power or temperature wall, but you can still be completely unstable in-game. These metrics are related, removing one from the equation eliminates the purpose of testing it.


How many times do I need to say that Furmark is not good for testing high clocks? It is good for testing power and temperature. I have not said otherwise.


Vayra86 said:


> You know what's so silly. When it comes to CPU overclocks lately, people 'don't run Prime' because it makes their CPU too hot and 'we don't use AVX anyway' (even though, ironically, even games _do use it)_... the real reason is that in fact many casual overclocks will just not last under that load. But when it comes to GPU, somehow I'm reading topics where people want the exact opposite: persist in testing a completely useless scenario that literally shows you nothing useful.


Using Prime95 with AVX for the CPU is the exact same thing as using Furmark for the GPU. And I run Prime95 to test what the CPU does in terms of both power and temperature.


----------



## Vayra86 (Mar 1, 2019)

londiste said:


> VRM temperatures are determined by power usage.
> If a card running at the power limit breaks due to VRM overheating, that is a warranty case, period.








If Nvidia can show that you or the application (Furmark) used measures to circumvent 'protection mechanisms', you can kiss your warranty goodbye.

Note; this includes running Furmark with this checkbox ticked:





Bottom line: this is slippery-slope material, and an area where the vast majority of people making topics on TPU have no real clue about what is 'covered in warranty' and what is not. So, do whatever you like, ey  It's not my warranty...


----------



## londiste (Mar 1, 2019)

Furmark disables overtemperature and overcurrent mechanisms? How?
Nvidia (who I assume that document is from) is full of shit.

And yes, disabling limits on cards can lead to failure without adequate care and cooling. Never denied that. Why is this relevant?

Edit: By the way, for an Nvidia card that checkbox will still not let you past Nvidia's hard voltage limit. Last time I checked this is around 1.09V.


----------



## Vayra86 (Mar 1, 2019)

londiste said:


> Furmark disables overtemperature and overcurrent mechanisms? How?
> Nvidia (who I assume that document is from) is full of shit.



Do you not read?


----------



## londiste (Mar 1, 2019)

Vayra86 said:


> Do you not read


I did, and I quote:


> Using Furmark or other applications to disable these protection mechanisms can result in permanent damage to the graphics card and void the manufacturer's warranty.


Furmark does not disable protection mechanisms as far as I am aware. Do you want to say otherwise?
While Nvidia seems to say that, I would say it is incorrect. There is nothing to back up that statement.


----------



## Vayra86 (Mar 1, 2019)

londiste said:


> I did, and I quote:
> Furmark does not disable protection mechanisms as far as I am aware of. Do you want to say otherwise?
> While Nvidia seems to claim that it does, I would say the claim is incorrect. *There is nothing to back up that statement*.





londiste said:


> Edit: By the way, for an Nvidia card that checkbox will still not let you past Nvidia's hard voltage limit. Last time I checked this is around 1.09V.



Irrelevant. *You get a disclaimer/warning before you activate this checkbox* which is enough to deny a warranty claim. I'm not sure how many more warning signs you want to ignore.

There may not be _data_ to back up that statement, but there are disclaimers and warnings in place. You may be correct all day long, but that still doesn't replace a GPU you broke, and Nvidia has ample grounds to deny your claim. The bottom line is: you're squarely in 'do at own risk' territory, and your claim that you can just have it replaced is incorrect.



londiste said:


> Using Prime95 with AVX for the CPU is the exact equivalent of using Furmark for the GPU. And I do run Prime95 to test what the CPU does in terms of both power and temperature.



And yet, Prime still runs at the clocks you dial in, does not cause throttling, and does not put a different type of load on the CPU than any other AVX task. Therefore Prime is a valid test for precisely this type of load and represents a worst-case scenario.

Furmark, however, does not: you're not seeing the actual clocks or the actual voltage you'd see under a normal load, because the load it presents is one no other application can or will produce.


----------



## londiste (Mar 1, 2019)

Dude, what are you talking about?


Vayra86 said:


> Irrelevant. *You get a disclaimer/warning before you activate this checkbox* which is enough to deny a warranty claim. I'm not sure how many more warning signs you want to ignore.


That checkbox is not required to run Furmark.


Vayra86 said:


> And yet, Prime still runs at the clocks you dial in, does not cause throttling, and does not put a different type of load on the CPU than any other AVX task. Therefore Prime is a valid test for precisely this type of load and represents a worst-case scenario.


Your system specs say you have 8700K as CPU. Have you run AVX2-enabled Prime95 on that CPU with Power Limit in place? It'll throttle.

Edit:
I have said this a bunch of times already: if you remove limits from your hardware, it's on you.
With limits in place, and especially at stock, Furmark, Prime95 and other stress tests do not kill your hardware.


----------



## Vayra86 (Mar 1, 2019)

londiste said:


> Dude, what are you talking about?
> That checkbox is not required to run Furmark.



It is not, yet it is checked by many users because of 'balls to the wall' overclocking. Then users stress test their OC (with Furmark). If the card then fails, your warranty claim will not be quite as straightforward as you might think.


----------



## ppn (Mar 1, 2019)

It will carry the same straightforwardness as if the card burned under normal loads, or at 160 watts for the 2060, since it will always be power-limited, and they don't know unless you mention Furmark. Actually, they can always use Furmark as plausible deniability to deny warranty.


----------



## Vayra86 (Mar 1, 2019)

londiste said:


> Dude, what are you talking about?
> That checkbox is not required to run Furmark.
> Your system specs say you have 8700K as CPU. Have you run AVX2-enabled Prime95 on that CPU with Power Limit in place? It'll throttle.
> 
> ...



Ah right, so now we're only stress testing our cards at stock and with all limits in place. Cool story. Let's leave it at that.


----------



## londiste (Mar 1, 2019)

Vayra86 said:


> Ah right, so now we're only stress testing our cards at stock and with all limits in place. Cool story. Let's leave it at that.


Limits and protections are not easy to remove. The power limit is a hard limit for GPUs, and Nvidia has an additional hard voltage limit. Cards will not get past these unless you seriously tamper with them, and by that I mean deliberate registry/BIOS switching/hacking or hardware mods. Protections, especially against overtemperature, remain and cannot be disabled. This has been the case for a decade or more. You have to try really hard to kill your hardware; running Furmark will not do it.


----------



## Vayra86 (Mar 1, 2019)

londiste said:


> Limits and protections are not easy to remove. The power limit is a hard limit for GPUs, and Nvidia has an additional hard voltage limit. Cards will not get past these unless you seriously tamper with them, and by that I mean deliberate registry/BIOS switching/hacking or hardware mods. Protections, especially against overtemperature, remain and cannot be disabled. This has been the case for a decade or more. You have to try really hard to kill your hardware; running Furmark will not do it.



The very moment you click past a warning box that contains a disclaimer is the moment you potentially void your warranty, and that is all there is to it. You can be all stubborn about it not being so, but that really doesn't matter. We know how things work when money is involved: manufacturers will grasp at every last straw to deny a claim, and this is quite an easy one.

Determining the cause of hardware failure is the hard part... was it ESD because you didn't wear a bracelet when installing the card, was it Furmark, was it the cat? Most failures happen over longer periods of time, and Furmark may very well make that time shorter. Do cards insta-burn away when you run it? Of course not, and nobody is telling anybody that. Does it have the potential to shorten the lifespan of your card? Absolutely. And the question is: for what? Because you're not getting useful info out of running it, compared to other testing.


----------



## londiste (Mar 1, 2019)

Why are you so incredibly focused on that checkbox?
A lot of restrictions on warranties are smoke and mirrors. They cannot determine why it died; that is why the hard limits are getting stricter and stricter.
There are always points they can try to hide behind, but that is the nature of business.

I mean, continuing your train of thought, gaming will probably shorten GPU lifetime as well, because it puts a heavy load on the card. Maybe we should not game on our GPUs?



Vayra86 said:


> And the question is: for what, because you're not getting useful info out of running it, compared to other testing.


What do you mean, not getting useful info out of running it? You can see if your system, and especially the PSU, can feed the card properly. You can see if the cooler on the card can cool it (or components of it) down. You can see if the entire computer (usually in a case) can cool itself down.

What would you say is useful information to gather from running a stress test?


----------



## Vayra86 (Mar 1, 2019)

londiste said:


> What would you say is useful information to gather from running a stress test?



If you want to go in circles all day, be my guest, but please, just stop asking the same questions and read back instead. We're at 1.5 pages of rhetorical questions and obvious answers now. So I'm just going to quote relevant posts instead.



londiste said:


> Different phrasing, same point. I can rephrase - Furmark works well for verifying the requirements for stability.



No, it does not, because you're not seeing the clocks you would see in-game. So you may not hit a power or temperature wall, but you can still be completely unstable in-game. These metrics are related; removing one from the equation defeats the purpose of testing it.

___
It's all very simple to me.
- You don't gain more/better/more accurate information from the use of Furmark, compared to any other test application.
- There is a risk involved that other applications do not present.
- It can create confusion as to what the actual behaviour of your GPU is/is supposed to be.

/thread


----------



## londiste (Mar 1, 2019)

Furmark will not kill your hardware unless you have done something very stupid in terms of limits (which is not easy to do).


----------



## infrared (Mar 1, 2019)

This conversation looks like it will go round in circles forever.

Look, furmark is great for testing thermals, but as has been explained clearly by many members, many times, with examples and screenshots given... it is not good as a stability test due to running at lower clocks and voltages than you'd see in real-world situations. It's entirely possible to be unstable under light load at high clocks and crash in games, while furmark might lead you to believe your gpu is 100% stable (possibly leading you to question the stability of the rest of the system). 
Double check cpu/memory with linpack extreme/prime95/memtest86+/memtest64, and play games with the gpu at stock for a while to confirm stability of the rest of the system before pushing the GPU OC. Adjust GPU clocks/volts in the games/benchmarks you like until it doesn't crash; that's about as real-world as it gets.
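To see what the card actually does while you test, a small monitoring sketch can log the real clocks, temperature and power during a run. This assumes `nvidia-smi` is on the PATH (it normally is with the NVIDIA driver installed); the query fields used are documented `nvidia-smi` options:

```python
# Log real GPU clocks/temperature/power, so you can compare what the
# card holds in Furmark vs. in an actual game.
# Assumes nvidia-smi (shipped with the NVIDIA driver) is on the PATH.
import subprocess

QUERY = "clocks.sm,temperature.gpu,power.draw"

def parse_sample(line: str) -> dict:
    """Parse one CSV line from nvidia-smi, e.g. '1680 MHz, 65, 120.50 W'."""
    sm, temp, power = (field.strip() for field in line.split(","))
    return {
        "sm_clock_mhz": int(sm.split()[0]),   # '1680 MHz' -> 1680
        "temp_c": int(temp),                  # bare integer, Celsius
        "power_w": float(power.split()[0]),   # '120.50 W' -> 120.5
    }

def sample_gpu() -> dict:
    """Query the first GPU once via nvidia-smi."""
    out = subprocess.check_output(
        ["nvidia-smi", f"--query-gpu={QUERY}", "--format=csv,noheader"],
        text=True,
    )
    return parse_sample(out.strip().splitlines()[0])
```

Calling `sample_gpu()` once a second while a game or stress test runs makes it obvious when the power limit is pulling clocks down.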


----------



## londiste (Mar 1, 2019)

infrared said:


> Look, furmark is great for testing thermals, but as has been explained clearly by many members, many times, with examples and screenshots given... it is not good as a stability test due to running at lower clocks and voltages than you'd see in real-world situations. It's entirely possible to be unstable under light load at high clocks and crash in games, while furmark might lead you to believe your gpu is 100% stable (possibly leading you to question the stability of the rest of the system).


Isn't the problem at the beginning of this thread exactly the opposite, making Furmark the appropriate stress test here? The Nvidia display driver crashes when Furmark is running together with IBT. Clocks and voltages are (supposedly) low, as they should be with Furmark. Also, he stated Furmark by itself worked fine for a couple of hours.

This seems to indicate the problem is not with the GPU (or CPU) but either a (possibly compatibility) issue somewhere around the motherboard/controllers or, more likely, power/temperatures, as loading both the GPU and CPU in full swing will inevitably be very stressful to the PSU and the rest of the system.


----------



## infrared (Mar 1, 2019)

londiste said:


> Isn't the problem at the beginning of this thread exactly the opposite, making Furmark the appropriate stress test here? The Nvidia display driver crashes when Furmark is running together with IBT. Clocks and voltages are (supposedly) low, as they should be with Furmark. Also, he stated Furmark by itself worked fine for a couple of hours.
> 
> This seems to indicate the problem is not with the GPU (or CPU) but either a (possibly compatibility) issue somewhere around the motherboard/controllers or, more likely, power/temperatures, as loading both the GPU and CPU in full swing will inevitably be very stressful to the PSU and the rest of the system.



It's either a CPU/RAM/memory-controller issue (unless he can also run IBT indefinitely on its own), or it's PSU related, which is probably unlikely given he has an HX850. It might be a good idea for BenchAndGames to save his OC profile and go back to stock settings for a moment to see if it stops crashing the driver under combined load.
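As a rough sanity check on the PSU angle, adding up published figures for the parts in the opening post suggests the HX850 has ample headroom even under combined stress. The 84 W and 160 W numbers are the published TDP/TGP; the 60 W platform figure is only my own estimate, and actual stress draw can exceed the CPU's TDP:

```python
# Back-of-the-envelope power budget for IBT + Furmark at the same time.
# 84 W (i7-4770K TDP) and 160 W (RTX 2060 power limit) are published
# figures; the 60 W for board/RAM/drives/fans is only an estimate.
draw_watts = {
    "i7-4770K under IBT (~TDP)": 84,
    "RTX 2060 at its power limit": 160,
    "board/RAM/drives/fans (estimate)": 60,
}
total = sum(draw_watts.values())
headroom = 850 - total  # Corsair HX850 rated capacity
print(f"total draw ~{total} W, PSU headroom ~{headroom} W")
```

Even allowing generous margins above these numbers, a healthy HX850 should not run out of capacity here, which matches the read in the post above.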


----------



## EarthDog (Mar 1, 2019)

Something we mentioned already at ocf.


----------



## BenchAndGames (Mar 1, 2019)

I already said, about 100 posts back, that I was going to return the graphics card and buy a new one. But I did not want to get in the way of the conversation, because people put a lot of effort into it.

@infrared It fails at stock too; it just takes about 20 minutes to fail. With some OC on the CPU, and RAM at 2133 or 2400, it fails in 5 minutes.


----------



## londiste (Mar 1, 2019)

@BenchAndGames did you try running only IBT for a long while?


----------



## BenchAndGames (Mar 1, 2019)

Yes, running IBT or Furmark solo, no problems. I had crashes when running them together, but anyway, it's not only that. All of this started from an Nvidia driver crash at idle, plus losing the signal to my monitor; all of this happened at idle, in Chrome exactly, so from there I started testing to try to find out why I lose the signal from my video card, just 3-4 days after installing the new RTX 2060. And I remember a few years ago, when I installed a new video card (a GTX 970), I had exactly the same problem: the monitor goes blank or green, the PC freezes, the sound freezes, at idle or under load. So I replaced that 970 with a new card, and with the new one, zero problems. 
So I want to avoid those problems, and that's why I returned this new card and will buy a new one.


----------



## eidairaman1 (Mar 1, 2019)

infrared said:


> This conversation looks like it will go round in circles forever.
> 
> Look, furmark is great for testing thermals, but as has been explained clearly by many members, many times, with examples and screenshots given... it is not good as a stability test due to running at lower clocks and voltages than you'd see in real-world situations. It's entirely possible to be unstable under light load at high clocks and crash in games, while furmark might lead you to believe your gpu is 100% stable (possibly leading you to question the stability of the rest of the system).
> Double check cpu/memory with linpack extreme/prime95/memtest86+/memtest64, and play games with the gpu at stock for a while to confirm stability of the rest of the system before pushing the GPU OC. Adjust GPU clocks/volts in the games/benchmarks you like until it doesn't crash; that's about as real-world as it gets.



I say lock this thread down @infrared

Award goes to @BenchAndGames


----------



## trog100 (Mar 2, 2019)

eidairaman1 said:


> I say lock this thread down @infrared
> 
> Award goes to @BenchAndGames



kind of similar to quite a few threads lately..

a small number of people just argue back and forth, seemingly unable to stop..

trog


----------



## Regeneration (Mar 2, 2019)

BenchAndGames said:


> Yes, running IBT or Furmark solo, no problems. I had crashes when running them together, but anyway, it's not only that. All of this started from an Nvidia driver crash at idle, plus losing the signal to my monitor; all of this happened at idle, in Chrome exactly, so from there I started testing to try to find out why I lose the signal from my video card, just 3-4 days after installing the new RTX 2060. And I remember a few years ago, when I installed a new video card (a GTX 970), I had exactly the same problem: the monitor goes blank or green, the PC freezes, the sound freezes, at idle or under load. So I replaced that 970 with a new card, and with the new one, zero problems.
> So I want to avoid those problems, and that's why I returned this new card and will buy a new one.



Running Linpack + FurMark simultaneously requires an increased TdrDelay value to prevent a display driver timeout.


----------



## BenchAndGames (Mar 2, 2019)

I had already set TdrDelay to 8 when I was running those tests.


----------



## Regeneration (Mar 2, 2019)

Try those values:

TdrLevel 0
TdrDelay 60 (decimal)
TdrDdiDelay 60 (decimal)

Also make sure to use the latest Nvidia driver and close any GPU monitoring apps like MSI Afterburner.

The RTX 2060 was launched only a little more than a month ago; this could be a bug in the driver or the VGA BIOS.
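For reference, the three values above live under the GraphicsDrivers key. A `.reg` sketch of this change (these TDR keys are documented by Microsoft; note that TdrLevel 0 disables timeout detection entirely, and a reboot is needed for the values to take effect):

```reg
Windows Registry Editor Version 5.00

; TDR (Timeout Detection and Recovery) keys, documented by Microsoft.
; TdrLevel 0 turns timeout detection off entirely -- use with care.
; 0x3c is 60 decimal.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\GraphicsDrivers]
"TdrLevel"=dword:00000000
"TdrDelay"=dword:0000003c
"TdrDdiDelay"=dword:0000003c
```

Remember to revert these after testing; with detection off, a genuinely hung GPU will freeze the desktop instead of recovering.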


----------



## MrGenius (Mar 2, 2019)

Vayra86 said:


> It is not, yet is checked by many users because 'balls to the wall' overclocking. Then users stress test their OC (with Furmark). If the card then fails, your warranty claim will not be quite as straightforward as you might think.


Having it checked means you can adjust voltages manually with Afterburner. It means 100% abso-posi-lutely nothing else, including having anything to do with Furmark. And no, Furmark has no way of circumventing power limits or affecting operating voltages... period. You're extremely naive to think so. And beyond all that: Nvidia, or anybody else for that matter, would have no way of knowing whether Furmark had been run or not... even if it did all that BS you think it does.


Vayra86 said:


> The very moment you click past a warning box that contains a disclaimer is the moment you potentially void your warranty, and that is all there is to that.


Except that it isn't all there is to it. Just as there's no way to know whether or not you ran Furmark, there's no way to prove you checked that box either. Your warranty is unaffected, unless they are spying on you (which would be illegal in most instances), or you are stupid enough to admit it when asked. Even then, you would still have a strong case that you did nothing wrong. If they didn't want you to be able to adjust the voltage, they could implement a voltage lock in the BIOS that prevents it from happening (even with the box checked), which is what they have done with many cards. That sets a precedent for it being what they should have done if they didn't want you to be able to do it, and gives them no real right to deny your warranty for having done so.


----------



## BenchAndGames (Mar 2, 2019)

Regeneration said:


> Try those values:
> 
> TdrLevel 0
> TdrDelay 60 (decimal)
> ...



The thing is, I will not pay €370 for something that requires me to change and tweak stuff on my PC to fix it. When you buy a video card, it has to work correctly: install the latest driver and voilà. But that wasn't my case.


----------



## Shambles1980 (Mar 2, 2019)

If I buy a car and drive it as fast as I can in 1st gear whilst towing a massive weight, do you think I should return it because it overheats?


----------



## Caring1 (Mar 2, 2019)

Shambles1980 said:


> If I buy a car and drive it as fast as I can in 1st gear whilst towing a massive weight, do you think I should return it because it overheats?


Depends. If you are driving up an incline that requires a low gear, and the ambient temperature is not in your favour, then you can't avoid overheating unless you take regular rest breaks.


----------



## BenchAndGames (Mar 2, 2019)

Shambles1980 said:


> If I buy a car and drive it as fast as I can in 1st gear whilst towing a massive weight, do you think I should return it because it overheats?



We are talking about video cards, not cars. I don't know about your country, but here, if you buy something you have 30 days to return it without giving any reason.
In addition, in my case I had failures with the video card while browsing the internet or playing whatever (which is the purpose of a video card), but of course you did not read my posts. I didn't return it because it crashes in Furmark, and even if that were the reason, as I told you, I can return it without giving one.


----------



## EarthDog (Mar 2, 2019)

Good to know where some people draw the line with their morals (not you, BenchAndGames). Nothing like clicking a checkbox that warns of damage, using the software anyway, damaging the card, then returning the product like the button was never clicked and NVIDIA's warnings were never shown... especially if you're asked and you LIE so it can be returned... some shady souls around this joint.


----------



## 95Viper (Mar 2, 2019)

BenchAndGames said:


> So I want to avoid those problems, and thats why I returt this new card, and I will buy a new one.



OP states he returned the card and will buy a new one.
Looks like the thread has run its course and is drifting toward the rocky coast of Off Topic and Useless Bickering Island.


----------

