
Furmark+IntelBurnTest (simultaneously) fail (always furmark crash)

I only want to make something clear: in my case, Furmark is running at 1920 MHz, which is actually higher than Unigine Heaven Extreme (1860 MHz) or FireStrike Ultra (1905 MHz), all in fullscreen at maximum settings.
 
I only want to make something clear: in my case, Furmark is running at 1920 MHz, which is actually higher than Unigine Heaven Extreme (1860 MHz) or FireStrike Ultra (1905 MHz), all in fullscreen at maximum settings.
That is interesting... regardless though, I would still find something else considering AMD and NVIDIA state not to run it because it could cause damage. Loop 3DMark or something with it instead.

I will leave it at that in this thread... Thanks, Londiste, for a conversation that did NOT devolve into barbs and toxicity. Not sure we achieved any clarity, or helped the OP much, but um.. yeah. :)

Cheers.
 
which he absolutely doesn't have to.
Possibly not. But aside from eat, drink, sleep, shit, piss and breathe... I don't presume to know what anyone "has" to do. I do have a pretty good idea why he does it, though. And I can't think of a much better way to fully load a system for an extended period of time.
 
Meanwhile...@der8auer is running Furmark and Prime95 simultaneously, for hours on end, to test stability for his signature pre-overclocked systems being sold @caseking.de.

It is the only way to stress test the northbridge and entire bus.
 
Furmark however is not a real-world load, and the driver/BIOS contains flags to make sure GPU Boost 3.0 specifically does NOT do what it is supposed to do. How? Simple: you get a hard lock on voltage, one of the key variables for GPU Boost to work properly. You simply cannot use the whole clock/voltage curve that is set at stock for these cards, regardless of temperature and regardless of actual power usage at the wall.
My inner nerd wants to grab an old card that I don't care about, test it at stock with Furmark, noting its framerate performance, then defeat the safeties and run Furmark again. It will be interesting to see how much performance it gains and how long it lasts before it dies. Heck, it may be really tough and not die, if we're lucky.
 
the current nvidia cards boost until they hit a power limit.. the lighter the load the higher the frequency reached .. the harder the load the lower the frequency reached..

oddly enough a light load higher frequency can cause a card to crash.. the point being boosting until a power limit is reached is normal behavior.. the clock speed will vary with the load.. furmark is a heavy load hence the lower frequency reached.. the load on the gpu will always be maxed.. the frequency will differ that is all..

cheaper cards with crappy coolers may reach a temp limit but those with decent coolers will always hit the power limits first..

trog
 
Synthetic utilities have their uses but stability testing is not one of them. For stability testing ...

a) You want realistic loads; synthetic tests are not realistic.

b) You want multitasking loads, which stress the CPU in a variety of ways.

c) You want to make sure that the loads include modern instruction sets.

d) Synthetic tests present unrealistic loads, artificially lowering the maximum sustainable OCs otherwise attainable under real-world conditions.

Synthetic testing rarely addresses all four of the above, and often addresses none of them.

Practical uses of synthetic utilities.

Prime 95 (older non-AVX versions) - We use P95 to thermally cycle the TIM 4 or 5 times (more with certain TIMs), bringing temps up to 85C or so and then letting it cool to room temperature. With some TIMs (i.e. AS), this greatly accelerates the curing process, which otherwise can take 7 weeks or so (200 hours of normal usage according to the AS5 manufacturer), but even with TIMs that state "no curing required", we often do see minor improvements.

Furmark - used similarly to the above to cure the TIM on a card. With the pump and rads at minimum rpm, we will run Furmark up to the card's throttling point, adjusting rpm as necessary until it reaches a steady-state condition, and cycle back down as above. We then bring fan rpms up to the point where noise can be detected. Restart Furmark and, starting at 100% of pump speed, record the max temp once the system reaches steady-state conditions. "Rinse and repeat" for 90%, 80%, etc. From this data, it can quickly be determined at which point the system receives no significant benefit from additional flow (typically @ 1.25 gpm). Then, with the pump at a fixed speed, the tests are repeated varying the fan rpms from 100% of full speed down to 25%. This data is used to set up the fan curves. Furmark is critical here as it maintains a constant load; other "real world" utilities present varying loads, which render any such testing useless. On our test rig, for example, at 100% fan rpm we see 39C on the GPUs, and while I wouldn't call the 1,230-ish rpm noisy, it is audible. At an inaudible 850 rpm, the GPU temps under Furmark are 42C. You don't get any bonus points for being 39C instead of 42C, so the max speed is set at 850... outside Furmark, we never see 40C.
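
If you log those steady-state numbers, a few lines of Python will find the point of diminishing returns for you. The sketch below is only an illustration: the temps and the 0.5C "significant" threshold are placeholder values, not our measurements.

Code:
# Hypothetical steady-state GPU temps (C) under Furmark at each pump speed (%).
# Placeholder numbers, not actual measurements.
pump_temps = {
    100: 39.0, 90: 39.2, 80: 39.5, 70: 40.1,
    60: 41.0, 50: 42.3, 40: 44.0, 30: 46.5,
}

THRESHOLD_C = 0.5  # example: a step that costs more than this counts as "significant"

# Walk from full pump speed downward; the knee is the slowest speed reached
# before a single step costs more than the threshold.
speeds = sorted(pump_temps, reverse=True)
knee = speeds[0]
for faster, slower in zip(speeds, speeds[1:]):
    if pump_temps[slower] - pump_temps[faster] > THRESHOLD_C:
        break
    knee = slower

print(f"No significant benefit above ~{knee}% pump speed "
      f"({pump_temps[knee]:.1f} C vs {pump_temps[100]:.1f} C at 100%)")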

For stress testing we use RoG RealBench, a suite of 3 real-world programs which are all run at the same time, along with playing a movie, for the "Heavy Multitasking" test... it takes about 8 minutes and you will get very close to a working OC with just the 8-minute test. For final dial-ins, to ensure stability, I use a 4-hour test, but many feel that 2 hours is adequate. Temps will usually be about 10C lower than P95, and it's a given that your system will never see a load anything close to what RB provides in real-world usage. Oh... one thing worth mentioning... I have had 24-hour P95-stable OCs fail in RB.

For the GPU, it's a bit more cumbersome; using the 3DMark and Unigine benchmarks, you can dial things in pretty close. BE AWARE that you will almost never get the best results with the highest core or memory OCs. In TPU's testing with the 2080 Tis (Micron memory)...

They got the Zotac Amp to a core of 2,145 which netted 221.5 fps in the OC test (2000 memory)
They got the Asus Strix to a memory OC of 2,065 which netted 225.0 fps in the OC test (115 core)

However, they got the MSI Gaming X Trio to 226.6 fps in the OC test with a 2085 core and 2005 memory. My approach is to determine the max stable core with memory at default, and then determine the max memory with core at default. Let's say that gives us

Default Core = 1650 / Max Core 2150
Default Memory = 1750 / Max Memory 2050

You might make a spreadsheet with memory as the column headings and core as the row headings. So with 1750 in the 1st data column, the 1st data row would be 1650, the 2nd 1700, and so on till ya crash; the 2nd column could be 1800, and again test at each +50 jump in core speed... recording the fps achieved. Usually I wind up with something like the 5th row, 4th column giving the best fps results.
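
For the spreadsheet-averse, the same grid can be sketched in a few lines of Python. This is only an illustration: run_benchmark() is a hypothetical stand-in for "set the clocks, run your benchmark, note the fps (or the crash)", and it returns fake numbers here so the sketch runs on its own.

Code:
# Rough sketch of the core/memory OC grid described above.
DEFAULT_CORE, MAX_CORE = 1650, 2150
DEFAULT_MEM, MAX_MEM = 1750, 2050
STEP = 50

def run_benchmark(core_mhz, mem_mhz):
    """Hypothetical stand-in: set clocks, run your benchmark, return fps
    (or None on a crash). Faked here so the sketch runs standalone."""
    if core_mhz > 2100 and mem_mhz > 1950:   # pretend the top corner is unstable
        return None
    return 180 + 0.02 * (core_mhz - 1650) + 0.015 * (mem_mhz - 1750)

results = {}  # (core, mem) -> fps, or None if that combo crashed
for mem in range(DEFAULT_MEM, MAX_MEM + 1, STEP):         # columns
    for core in range(DEFAULT_CORE, MAX_CORE + 1, STEP):  # rows
        fps = run_benchmark(core, mem)
        results[(core, mem)] = fps
        if fps is None:  # crashed -- no point pushing core higher at this memory clock
            break

best = max((k for k, v in results.items() if v is not None),
           key=lambda k: results[k])
print(f"Best: {results[best]:.1f} fps at {best[0]} MHz core / {best[1]} MHz memory")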

As for nvidia saying "don't do it".... I called nvidia about my custom-built lappie when I was using OCCT (it would not even run) to OC my lappie, and they told me, "We restricted the use of OCCT because folks were running the PSU test, which stressed the GPU and CPU at the same time, but since our newer cards are unaffected by the original issue, we are going to remove the limitation soon. In the meantime, just use Furmark".
 
saying it 1ce more lol then im out.
just use occt psu test to load the system.
It provides graphs of all the monitored sensors: temps, voltages, frequencies, etc. You can also set a safety net so that if it reaches a temp you think is not safe, it will auto-stop, and you then have the graphs to see what the temps, frequencies and voltages were all doing at the time it stopped and prior.
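
If you want a similar safety net outside OCCT, something along these lines can be scripted yourself. A minimal sketch, assuming an Nvidia card with nvidia-smi on the PATH; the 90C cutoff is purely an example, pick your own.

Code:
import csv, subprocess, sys, time

TEMP_LIMIT_C = 90     # example cutoff -- pick whatever you consider not safe
POLL_SECONDS = 2
LOGFILE = "gpu_temp_log.csv"

def gpu_temp_c():
    """Read the current GPU core temperature via nvidia-smi."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=temperature.gpu",
         "--format=csv,noheader,nounits"], text=True)
    return int(out.strip().splitlines()[0])

with open(LOGFILE, "w", newline="") as f:
    log = csv.writer(f)
    log.writerow(["time_s", "temp_c"])
    start = time.time()
    while True:
        t = gpu_temp_c()
        log.writerow([round(time.time() - start, 1), t])
        f.flush()
        if t >= TEMP_LIMIT_C:
            # Safety net: bail out so you can stop the stress test; the CSV
            # keeps the history of what temps were doing up to that point.
            print(f"{t} C hit the {TEMP_LIMIT_C} C limit -- stopping.")
            sys.exit(1)
        time.sleep(POLL_SECONDS)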

its just a better way to do it..

Right then im Out.
:)
 
1. Stability is defined as being able to run at your given clockspeed...stock or overclocked.
This is where I beg to differ. At least for me, stability also includes power and cooling, both of which Furmark allows you to test easily and consistently: maximum power draw - which today is not the GPU's maximum but the power limit - as well as worst-case temperatures. This will bring out any insufficiencies in the card or system, not necessarily directly related to the GPU: the power supply, the VRM, perhaps even the motherboard - their capacity to provide power, and the stability of that power - and the cooling of the card and of the case.
 
This is where I beg to differ. At least for me, stability also includes power and cooling, both of which Furmark allows you to test easily and consistently: maximum power draw - which today is not the GPU's maximum but the power limit - as well as worst-case temperatures. This will bring out any insufficiencies in the card or system, not necessarily directly related to the GPU: the power supply, the VRM, perhaps even the motherboard - their capacity to provide power, and the stability of that power - and the cooling of the card and of the case.

Power and cooling are requirements for stability. Clocks however are the intended performance level one attempts to be stable at. Different things, I'd say. I say this because power and cooling are directly related and many components self-manage this to remain stable. GPU Boost is a perfect example. It creates its own stability through power adjustments. Similarly, high temperatures are almost never a problem for stability, because the GPU will just use lower clocks instead.

Furmark loads all shaders/resources with a constant load and this heavily taxes the VRM. This in turn creates heat that is much greater than what you see in regular use - in places you'd normally not have it. Depending on the GPU, this can push it beyond safe ranges, even with all the measures in place. It causes voltage throttling, up to the point (again: depending on GPU headroom in cooling/power delivery) of putting it in a different power state altogether. The devil is in the details: since Furmark produces a constant, full load on all resources, the VRM has no opportunities to shed some heat where it normally would be able to (regular usage, that includes a 100% load for 24 hours in-game or in another stress test like 3DMark!). AIBs and Nvidia design and scale their cooling solutions based on this regular usage - and not on a constant load as Furmark presents it.

Previous Nvidia gens, at least on Fermi, had built in measures to limit power draw from Furmark and there are many documented instances of it killing cards. Today, GPU Boost does the job for you and the BIOS is hard locked to such a degree that you simply can't get Furmark to push the normal voltages you'd see in regular use. Regardless, the implementation is different but the end result is the same: you get voltage locked.

Heat is still a problem though. Due to the lower voltage, your GPU die won't get as hot as it would in regular gaming, but at the same time the VRM might be cooking; after all, that power is going somewhere. Historically we know that high VRM temps are the number one weak spot for GPU longevity; we also know recent generations of cards have had several hot spots (AIB cards, and even as recent as the 2080 Ti FE!), and we often see memory exceed recommended temps when the VRM nearby gets crispy.

Now, we can search the interwebs all day for a source (while casually ignoring the official Nvidia and AMD statements on the matter), or we can simply use common sense. Even using it for a short test is still not preferable, because there are many GPUs that are not sufficiently cooled to guarantee no damage is done to the VRM or surrounding parts. That doesn't make it an immediate no-go for everything, and it also explains why high-end components are much better equipped to deal with Furmark's excessive heat than, for example, a cheap blower.
 
Power and cooling are requirements for stability. Clocks however are the intended performance level one attempts to be stable at. Different things, I'd say. I say this because power and cooling are directly related and many components self-manage this to remain stable. GPU Boost is a perfect example. It creates its own stability through power adjustments. Similarly, high temperatures are almost never a problem for stability, because the GPU will just use lower clocks instead.
Different phrasing, same point. I can rephrase - Furmark works well for verifying the requirements for stability.
Furmark loads all shaders/resources with a constant load and this heavily taxes the VRM. This in turn creates heat that is much greater than what you see in regular use - in places you'd normally not have it. Depending on the GPU, this can push it beyond safe ranges, even with all the measures in place. It causes voltage throttling, up to the point (again: depending on GPU headroom in cooling/power delivery) of putting it in a different power state altogether. The devil is in the details: since Furmark produces a constant, full load on all resources, the VRM has no opportunities to shed some heat where it normally would be able to (regular usage, that includes a 100% load for 24 hours in-game or in another stress test like 3DMark!). AIBs and Nvidia design and scale their cooling solutions based on this regular usage - and not on a constant load as Furmark presents it.
Tom's Hardware testing linked above shows that Fire Strike and Witcher 3 (and Kombustor and OCCT and Sky Diver) cause even higher VRM temperatures. Timespy, Valley and Doom are not far behind. What makes this load more unrealistic?
Previous Nvidia gens, at least on Fermi, had built in measures to limit power draw from Furmark and there are many documented instances of it killing cards. Today, GPU Boost does the job for you and the BIOS is hard locked to such a degree that you simply can't get Furmark to push the normal voltages you'd see in regular use. Regardless, the implementation is different but the end result is the same: you get voltage locked.
Furmark didn't kill Fermis. They had limits in place that prevented actually killing hardware. It was all about maintaining image. Power consumption and clocks on Fermi were both awful with Furmark, so it was throttled by driver detection (to 550 MHz, if memory serves right). AMD did pretty much the same, for the exact same reasons.
 
What makes this load more unrealistic?

Power consumption and clocks on Fermi were both awful with Furmark, so it was throttled by driver detection (to 550 MHz, if memory serves right). AMD did pretty much the same, for the exact same reasons.

Already explained, but it seems you don't want to read it. The constant nature of the load - as in, full-on continuous strain - as opposed to the constantly changing type of load you see with every other bench. Hence the 'power virus' commentary. The VRM may momentarily get peak temperatures from other tests, but it also gets dips in between. With Furmark, it gets that peak all the time. At the same time, core temps may not represent the same temperature scenario, and fan speed is determined through core temp, resulting in inadequate cooling.

So, now you do admit Furmark is throttled by driver detection, so why did I bother to explain this? With Fermi it was a flag, today GPU Boost does pretty much the same job. We see it in every Furmark bench result, and yet we're still denying it...?!

Different phrasing, same point. I can rephrase - Furmark works well for verifying the requirements for stability.

No it does not, because you're not seeing the clocks you would see in-game. So you may not hit a power or temperature wall, but you can still be completely unstable in-game. These metrics are related, removing one from the equation eliminates the purpose of testing it.

You know what's so silly. When it comes to CPU overclocks lately, people 'don't run Prime' because it makes their CPU too hot and 'we don't use AVX anyway' (even though, ironically, even games do use it)... the real reason is that in fact many casual overclocks will just not last under that load. But when it comes to GPU, somehow I'm reading topics where people want the exact opposite: persist in testing a completely useless scenario that literally shows you nothing useful. o_O
 
All I can say about Furmark to anyone would be "Run at your own risk" and don't come bitching to me about it when your card dies.
 
VRM temperatures are determined by power usage.
If a card running at its power limit breaks due to VRM overheating, this is a warranty case, period.
So, now you do admit Furmark is throttled by driver detection, so why did I bother to explain this? With Fermi it was a flag, today GPU Boost does pretty much the same job. We see it in every Furmark bench result, and yet we're still denying it...?!
It is not. It used to be. For years now the only thing limiting Furmark has been power limit.
No it does not, because you're not seeing the clocks you would see in-game. So you may not hit a power or temperature wall, but you can still be completely unstable in-game. These metrics are related, removing one from the equation eliminates the purpose of testing it.
How many times do I need to say Furmark is not good for testing high clocks? It is good for testing power and temperature. I have not said otherwise.
You know what's so silly. When it comes to CPU overclocks lately, people 'don't run Prime' because it makes their CPU too hot and 'we don't use AVX anyway' (even though, ironically, even games do use it)... the real reason is that in fact many casual overclocks will just not last under that load. But when it comes to GPU, somehow I'm reading topics where people want the exact opposite: persist in testing a completely useless scenario that literally shows you nothing useful. o_O
Using Prime95 with AVX for the CPU is the exact same thing as using Furmark for the GPU. And I am running Prime95 to test what the CPU does in terms of both power and temperature.
 
VRM temperatures are determined by power usage.
If a card running at its power limit breaks due to VRM overheating, this is a warranty case, period.

[attached screenshot: Nvidia's statement on protection mechanisms, Furmark and warranty]


If Nvidia can show that you or the application Furmark used measures to circumvent 'protection mechanisms', you can kiss your warranty goodbye.

Note; this includes running Furmark with this checkbox ticked:

[attached screenshot: the checkbox in question]


Bottom line: slippery slope material, and an area where the vast majority of people making topics on TPU have no real clue about what is 'covered in warranty' and what is not. So, do whatever you like, ey ;) It's not my warranty...
 
Furmark disables overtemperature and overcurrent mechanisms? How?
Nvidia (who I assume that document is from) is full of shit.

And yes, disabling limits on cards can lead to failure without adequate care and cooling. Never denied that. Why is this relevant?

Edit: By the way, for an Nvidia card that checkbox will still not let you past Nvidia's hard voltage limit. Last time I checked this is around 1.09V.
 
Furmark disables overtemperature and overcurrent mechanisms? How?
Nvidia (who I assume that document is from) is full of shit.

Do you not read?
 
Do you not read?
I did, and I quote:
Using Furmark or other applications to disable these protection mechanisms can result in permanent damage to the graphics card and void the manufacturer's warranty.
Furmark does not disable protection mechanisms as far as I am aware of. Do you want to say otherwise?
While Nvidia seems to say that, I would say this is incorrect. There is nothing to back up that statement.
 
I did, and I quote:
Furmark does not disable protection mechanisms as far as I am aware of. Do you want to say otherwise?
While Nvidia does, I would say this is incorrect. There is nothing to back up that statement.

Edit: By the way, for an Nvidia card that checkbox will still not let you past Nvidia's hard voltage limit. Last time I checked this is around 1.09V.

Irrelevant. You get a disclaimer/warning before you activate this checkbox which is enough to deny a warranty claim. I'm not sure how many more warning signs you want to ignore.

There may not be data to back up that statement, but there are disclaimers and warnings in place. You may be correct all day long, but that still doesn't replace a GPU you broke, and Nvidia has perfect grounds to deny your claim. The bottom line is: you're squarely in 'do at your own risk' territory, and your claim that you can just have it replaced is incorrect.

Using Prime95 with AVX for the CPU is the exact same thing as using Furmark for the GPU. And I am running Prime95 to test what the CPU does in terms of both power and temperature.

And yet, Prime still runs at the clocks you dial in, does not cause throttling, and does not cause a different type of load on the CPU than any other AVX task. Therefore Prime is a valid test for precisely this type of load and presents a worst-case scenario.

Furmark however does not, because you're not seeing the actual clocks you'd see under load, you're not seeing the actual voltage you'd see under load, because the load presented is one no other application can or will produce.
 
Dude, what are you talking about?
Irrelevant. You get a disclaimer/warning before you activate this checkbox which is enough to deny a warranty claim. I'm not sure how many more warning signs you want to ignore.
That checkbox is not required to run Furmark.
And yet, Prime still runs at the clocks you dial in, does not cause throttling, and does not cause a different type of load on the CPU than any other AVX task. Therefore Prime is a valid test for precisely this type of load and presents a worst-case scenario.
Your system specs say you have 8700K as CPU. Have you run AVX2-enabled Prime95 on that CPU with Power Limit in place? It'll throttle.

Edit:
I have said this a bunch of times already. If you remove limits from your hardware, it's on you.
With limits in place and especially at stock, Furmark, Prime95 or other stress tests do not kill your hardware.
 
Dude, what are you talking about?
That checkbox is not required to run Furmark.

It is not, yet it is checked by many users because of 'balls to the wall' overclocking. Then users stress test their OC (with Furmark). If the card then fails, your warranty claim will not be quite as straightforward as you might think.
 
It will carry the same straightforwardness as if the card burned under normal loads, or 160 watts for the 2060, since it will always be power limited, and they don't know unless you mention Furmark. Actually, they can always use Furmark as plausible deniability to deny warranty.
 
Dude, what are you talking about?
That checkbox is not required to run Furmark.
Your system specs say you have 8700K as CPU. Have you run AVX2-enabled Prime95 on that CPU with Power Limit in place? It'll throttle.

Edit:
I have said this a bunch of times already. If you remove limits from your hardware, it's on you.
With limits in place and especially at stock, Furmark, Prime95 or other stress tests do not kill your hardware.

Ah right, so now we're only stress testing our cards at stock and with all limits in place. Cool story. Let's leave it at that.
 