This is where I beg to differ. At least for me, stability also includes power and cooling, both of which Furmark lets you test easily and consistently: maximum power draw - which today means the power limit rather than the GPU's true maximum - as well as worst-case temperatures. This will bring out any insufficiency in the card or system, not necessarily directly related to the GPU: the power supply, the VRM, perhaps even the motherboard, and their capacity to deliver power cleanly and stably; the cooling of the card and of the case.
Power and cooling are requirements for stability. Clocks, however, are the intended performance level one attempts to be stable at. Different things, I'd say. I say this because power and cooling are directly related, and many components self-manage them to remain stable. GPU Boost is a perfect example: it creates its own stability through power adjustments. Similarly, high temperatures are almost never a stability problem, because the GPU will simply drop to lower clocks instead.
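To make that self-management concrete, here's a toy model of a GPU-Boost-style control loop. This is purely illustrative - the real algorithm is proprietary, and every limit and step size below is a made-up assumption:

```python
# Toy model of a GPU-Boost-style feedback loop (illustrative only; the real
# algorithm is proprietary). All limits and step sizes are hypothetical.

POWER_LIMIT_W = 250     # assumed board power limit
TEMP_LIMIT_C = 83       # assumed throttle temperature
CLOCK_STEP_MHZ = 15     # assumed size of one boost bin

def next_clock(clock_mhz: int, power_w: float, temp_c: float) -> int:
    """Boost while there is headroom; back off one bin when a limit is hit.

    Stability comes from trading clock speed (and with it, voltage) for
    power and thermal headroom, not from holding a fixed clock until failure.
    """
    if power_w >= POWER_LIMIT_W or temp_c >= TEMP_LIMIT_C:
        return clock_mhz - CLOCK_STEP_MHZ   # limit hit: shed a boost bin
    return clock_mhz + CLOCK_STEP_MHZ       # headroom left: clock higher

# Under a power virus like Furmark, power_w pins at the limit every sample,
# so the loop settles at a lower clock/voltage point instead of crashing.
```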
Furmark puts a constant load on all shaders/resources, and this heavily taxes the VRM. That in turn creates far more heat than you see in regular use - and in places you'd normally not have it. Depending on the GPU, this can push it beyond safe ranges even with all the protective measures in place. It causes voltage throttling, up to the point (again, depending on the GPU's headroom in cooling and power delivery) of dropping the card into a different power state altogether. The devil is in the details: because Furmark produces a constant, full load on all resources, the VRM gets no opportunity to shed heat the way it normally would - and "regular usage" here includes a 100% load for 24 hours in-game or in another stress test like 3DMark! AIBs and Nvidia design and scale their cooling solutions around that regular usage, not around the constant load Furmark presents.
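Some back-of-the-envelope arithmetic shows why a constant load is harder on the VRM than "regular" 100% usage. Every figure here is an assumption for illustration, not a measurement:

```python
# Rough comparison of sustained VRM self-heating: a power virus pinned at the
# power limit vs. a game that also reports "100% usage" but draws power in
# bursts between frames. All numbers below are assumptions, not measurements.

VRM_EFFICIENCY = 0.90                      # assumed conversion efficiency

def vrm_loss_w(board_power_w: float) -> float:
    """Heat dissipated inside the VRM itself at a given board power."""
    return board_power_w * (1 - VRM_EFFICIENCY)

furmark_w = 250.0                          # pinned at the limit, continuously
game_w = 0.6 * 250.0 + 0.4 * 140.0         # assumed 60/40 heavy/light duty cycle

print(f"Furmark: {vrm_loss_w(furmark_w):.0f} W of VRM heat, with no gaps to cool")
print(f"Gaming:  {vrm_loss_w(game_w):.0f} W average, with idle gaps between frames")
```

The averages aren't far apart, which is exactly the trap: it's the total absence of low-power gaps, not the headline wattage, that lets VRM temperatures keep climbing under Furmark.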
Previous Nvidia generations, at least on Fermi, had built-in measures to limit power draw under Furmark, and there are many documented instances of it killing cards. Today, GPU Boost does the job for you, and the BIOS is locked down to such a degree that you simply can't get Furmark to push the voltages you'd normally see in regular use. The implementation is different, but the end result is the same: you get voltage-locked.
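If you want to watch that lock-down happen, you can log clocks, power, and temperature while Furmark runs. This sketch assumes an NVIDIA GPU and the official NVML Python bindings (pip install nvidia-ml-py); NVML doesn't expose core voltage directly, so the throttling shows up as clocks sitting low while power is pinned at the limit:

```python
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)      # first GPU in the system

try:
    for _ in range(30):                            # sample for roughly 30 seconds
        clock = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_GRAPHICS)
        power = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0   # mW -> W
        temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
        print(f"{clock:4d} MHz  {power:6.1f} W  {temp:3d} C")
        time.sleep(1)
finally:
    pynvml.nvmlShutdown()
```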
Heat is still a problem, though. Due to the lower voltage, your GPU die won't get as hot as it would in regular gaming, but at the same time the VRM might be cooking - after all, that power is going somewhere. Historically, high VRM temperatures have been the number one weak spot for GPU longevity. We've also seen hot spots on several recent generations of cards (AIB models, and even as recently as the 2080 Ti FE!), and memory often exceeds its recommended temperatures when the VRM next to it gets crispy.
Now, we can search the interwebs all day for a source (while casually ignoring the official Nvidia and AMD statements on the matter), or we can simply use common sense. Even as a short test, Furmark is still not preferable, because many GPUs are not cooled well enough to guarantee no damage is done to the VRM or surrounding parts. That doesn't make it an immediate no-go for everything, and it also explains why high-end components are much better equipped to deal with Furmark's excessive heat than, for example, a cheap blower.