That's overblown (pun intended: the main victims are GPU fans, not the GPUs themselves). I bought ~40 mining GPUs about 4 years ago and none of them has failed so far (I had to replace fans in 7 of them, no other issues).
May I ask what you did with them? Are you talking about buying and flipping them, or building your own lab for some heavy compute task (or even mining yourself)?
My concern isn't primarily DoA, but stability over time. And what people regard as stable is highly subjective. May I also ask what is your standard for a GPU passing validation? My standard is that a GPU (or any hardware) should handle your use case for at least 3 months without a single crash due to hardware issues. (It doesn't mean I stress test for 3 months though…)
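To make that standard concrete, here's a minimal sketch of the rule (the function and log format are my own invention, nothing standard): a card passes validation if its most recent hardware-attributed crash, or its in-service date if it never crashed, is at least 90 days behind the current date.

```python
from datetime import datetime, timedelta

# My "passing validation" rule, sketched out: a card passes if it has
# run the actual workload for at least 3 months without a single crash
# attributed to hardware. (This doesn't mean 3 months of stress testing;
# normal use under the real workload counts.)

VALIDATION_WINDOW = timedelta(days=90)

def passes_validation(in_service_since: datetime,
                      hw_crashes: list[datetime],
                      now: datetime) -> bool:
    """True if the most recent hardware-attributed crash (or the
    in-service date, if there were none) is >= 90 days behind `now`."""
    last_bad = max(hw_crashes, default=in_service_since)
    return now - last_bad >= VALIDATION_WINDOW

# A card in service since January with no crashes passes by June;
# one that crashed in May does not.
print(passes_validation(datetime(2024, 1, 1), [], datetime(2024, 6, 1)))
print(passes_validation(datetime(2024, 1, 1), [datetime(2024, 5, 1)],
                        datetime(2024, 6, 1)))
```

The key judgment call the code can't make for you is which crashes count as "due to hardware"; driver bugs and software crashes shouldn't reset the clock.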
That's too extreme, especially if we're talking SSDs manufactured less than 2 years ago. You can add a spare SSD as a safety net, but throwing every single storage unit in the rubbish bin is just a waste. Drives, unless actively mistreated, can serve you for decades.
Security-wise, sure: after a proper DBAN wipe it should be safe to use (though for SSDs an ATA Secure Erase is a better tool than repeated overwrite passes).
And I could agree if you find something less than 2 years old, but that's rather rare. Getting SMART errors on devices more than 3 years old is fairly common, and that makes them useless to me. SSDs especially seem to wear out after 2-3 years of daily use; I've seen a lot of them fail.
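For illustration, this is roughly the triage logic I apply to SMART data (e.g. as reported by `smartctl -A`). The attribute names mirror common smartctl output, but the thresholds are my own hypothetical cutoffs; real limits vary by vendor and model.

```python
# Rough triage of a used drive from a few SMART attributes.
# Thresholds are illustrative guesses, not vendor guidance.

def triage_drive(attrs: dict) -> str:
    """attrs maps SMART attribute names to raw values,
    e.g. parsed out of `smartctl -A /dev/sdX` output."""
    if attrs.get("Reallocated_Sector_Ct", 0) > 0:
        return "reject"        # any remapped sectors: not worth the risk
    if attrs.get("Current_Pending_Sector", 0) > 0:
        return "reject"        # sectors waiting to be remapped
    # NVMe-style wear indicator: 100 means rated endurance exhausted
    if attrs.get("Percentage_Used", 0) >= 80:
        return "reject"
    if attrs.get("Power_On_Hours", 0) > 3 * 365 * 24:
        return "spare-only"    # >3 years powered on: secondary duty at best
    return "ok"

print(triage_drive({"Reallocated_Sector_Ct": 0, "Power_On_Hours": 8000}))  # ok
print(triage_drive({"Reallocated_Sector_Ct": 3}))                          # reject
```

The point isn't these exact numbers; it's that a used drive with any remapped or pending sectors is already telling you how the story ends.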
I didn't suggest buying Pentium II era computers. Anything semi-recent (Turing onwards; RDNA onwards) will get software updates for longer than you'll actually care. You won't care that a game isn't supported by your decade-old GPU if reviews show that more recent GPUs of double the speed are still too slow for that title. Drivers for non-GPU parts are a PITA exclusively if you are some corporate user with extremely specific software/hardware. Windows 11 runs perfectly fine (driver-wise) on the earliest Core 2 Duo systems. That's 18 years old.
Anything older than Skylake (2015) can be hit or miss with compatibility for later OSes, but workstation/server-grade hardware generally has very long driver support. Linux will generally work fine.
Part of my reasoning is that something very old might work fine right now, but it will not enjoy support for as long as a brand-new product, so this should be factored into the value proposition of any used hardware, along with the lack of warranty (e.g. a Zen 5 will offer more support going forward).
I've many times considered buying "old" computer parts, and if you know what to look for you can find quite good deals. Local sources are usually the best, but you can even find gold on eBay, even though shipping and VAT would challenge the value proposition for some of us. Workstation parts especially can be great deals, better than the usual i7s. Just ~1.5 years ago I was looking at getting 2-3 sets of workstation boards, CPUs, and RAM at a great price, I believe it was Cascade Lake or something similar, so fairly recent and feature-rich. I didn't buy it because I was too busy. I also almost pulled the trigger on a box of 3x Intel X550 (10G NIC) NIB for ~$80 a piece, which would be amazing for my "home lab", but I'm just too busy with my day job.
What's perhaps more interesting is what kind of hardware would offer decent value compared to a brand-new Zen 4 or Raptor Lake. For all-round gaming and desktop use, you will get pretty far with anything from the Skylake family boosting above 4 GHz; even paired with an RTX 4070 or RX 7800 XT, you'll not be losing too many FPS in most titles, and if the old parts free up money for a higher-tier GPU it might be worth it. I don't expect this to change a lot when Zen 5 arrives: for most realistic use cases there isn't a tremendous gain beyond "Skylake" performance in gaming (except for edge cases). But the bigger question remains: risk and time.
Those generally have overspecced, highly rated PSUs
Last time I had experience with such systems, I found these PSUs so foul it's a crime to even call them PSUs. Even the KCAS line-up wasn't as bad. These pre-builts are not worth the effort if you have at least half an idea of what you're doing. Their cases and motherboards are designed such that you need a hammer, a chainsaw, and an angle grinder to make anything semi-decent fit in there.
For clarity, I was only talking about workstation computers, not baseline office computers or home computers from retail stores; both of those generally have underpowered, low-quality PSUs and cooling.
Take for instance the Dell OptiPlex, especially the slightly smaller ones: horrible designs. Even if you put a graphics card in there, there is no PSU headroom. There is (usually) no case cooling, only the stock CPU cooler recycling case air. Even when specced with the highest 65 W i7, those are utterly useless for development work, or any kind of sustained load. I've seen them throttle like crazy or shut down from a ~5 minute compile job, and those were fairly new at the time.
I sincerely don't know how badly you'd need to damage your brain and limbs to make PC-building mistakes possible. Every single connector, slot, and interface is designed such that you'd need to intentionally forfeit all sanity to do it wrong.<snip>
Please be serious. Let's be civil and have a constructive and interesting discussion.
If you read carefully, I'm talking about the big PC manufacturers who build systems by the hundreds of thousands. For them, avoiding an add-in card (or two), extra cabling, or extra cooling means fewer work hours and fewer potential points of failure. Every single mistake that has to be manually corrected costs them "a lot". It's completely different for us enthusiasts building computers, or even ordering a custom-built machine; those are not mass-produced on an assembly line.