I recall The Stilt saying that chips can be killed by too high a voltage alone, regardless of thermals and current. For instance, he reckoned the FX's safe voltage limit is around 1.475 V. That's 32nm SOI.
His comments, which I don't have in front of me (I'm relying on memory of something I read years ago), suggested to me that a chip could be degraded or killed very quickly even without being subjected to a heavy load like Prime; neither high temperature nor heavy load would be needed. However, I have also read things suggesting that higher voltages won't kill chips if they're kept cold enough (e.g. under liquid nitrogen), so I'm a little confused.
He also said other things that fly in the face of conventional wisdom, like the widespread belief that a chip that needs less voltage is a better-quality one. I see that idea everywhere, from forum posts to professional reviews. He said it's basically the opposite: higher leakage means less voltage is required, but, except under liquid nitrogen, a high-leakage part that needs less voltage is worse than a low-leakage part that needs more. He said, for example, that the 9000-series FX chips were such poor quality that they would have ended up in the crusher if AMD hadn't created the 220 W spec for AM3+. So although a 9590 could hit, say, 5 GHz with less voltage than an 8370E, it would draw more power and would die sooner at a given high voltage. Not only do higher-leakage parts waste more power, they are also more sensitive to electromigration.
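To make that trade-off concrete, here's a minimal back-of-the-envelope sketch using the textbook dynamic (C·V²·f) plus static (V·I_leak) power approximation. All of the numbers are invented for illustration; they are not The Stilt's figures or real FX values:

```python
# Rough illustration of why a high-leakage chip can need less voltage
# for a given clock yet still burn more power. All values are made up.

def total_power(c_eff, v, f, i_leak):
    """Approximate CPU power: dynamic (C*V^2*f) plus static (V*I_leak)."""
    dynamic = c_eff * v**2 * f
    static = v * i_leak
    return dynamic + static

# Hypothetical "leaky" part: reaches 5 GHz at 1.45 V but leaks heavily.
leaky = total_power(c_eff=6e-9, v=1.45, f=5.0e9, i_leak=40.0)

# Hypothetical "tight" part: needs 1.525 V for the same clock, leaks little.
tight = total_power(c_eff=6e-9, v=1.525, f=5.0e9, i_leak=10.0)

print(f"leaky part: {leaky:.0f} W")   # ~121 W despite the lower voltage
print(f"tight part: {tight:.0f} W")   # ~85 W at the higher voltage
```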
So when we're asking what voltages are safe, it seems to depend a lot on the leakage of the individual part. Maybe 1.5 V is actually safe if the leakage is low enough? It seems really high to me, but I don't know the technical details of the TSMC 7nm process AMD is using. I thought safe voltage maximums are supposed to shrink as nodes shrink, although things like FinFET probably change that quite a bit.
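For what it's worth, the usual textbook model for electromigration lifetime is Black's equation, and it at least shows why the same voltage could be fine on one chip and risky on another: a leakier part pushes more current at a given voltage, and lifetime falls steeply with current density and temperature. The constants below are placeholders, not values for any real process:

```python
import math

def black_mttf(j, temp_k, a=1.0, n=2.0, ea_ev=0.7):
    """Black's equation: MTTF = A * J^-n * exp(Ea / (k*T)).
    a, n and ea_ev are process-dependent; these are placeholder values."""
    k_ev = 8.617e-5  # Boltzmann constant in eV/K
    return a * j**(-n) * math.exp(ea_ev / (k_ev * temp_k))

# Relative lifetimes (arbitrary units) against a 350 K baseline.
base = black_mttf(j=1.0, temp_k=350)
print(black_mttf(j=2.0, temp_k=350) / base)  # ~0.25x from doubled current density
print(black_mttf(j=1.0, temp_k=370) / base)  # ~0.29x from an extra 20 K
```

That temperature term is also the only way I can square the "cold enough and it survives" claims with the hard voltage limits.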
I remember Fermi (GF100) being described (in an Anandtech review, I think) as an example of Nvidia's strategic intelligence. The idea was that Fermi intentionally used extra high-leakage transistors to increase performance. Given The Stilt's comments, or my possibly flawed understanding of them, I'm not sure how high-leakage transistors can be a boon. Was it something about them being able to switch on and off more quickly? Does that apply to CPUs?
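My (possibly wrong) understanding of the "leaky but fast" idea is the standard low-Vth trade-off: lowering the threshold voltage makes a transistor switch faster but raises its subthreshold leakage exponentially. Here's a toy sketch using the common alpha-power-law delay and exponential subthreshold-leakage approximations; every constant is a placeholder, not a figure for GF100 or any real process:

```python
import math

def gate_delay(vdd, vth, alpha=1.3, k=1.0):
    """Alpha-power-law approximation: delay ~ Vdd / (Vdd - Vth)^alpha."""
    return k * vdd / (vdd - vth) ** alpha

def subthreshold_leak(vth, s_mv=80.0, i0=1.0):
    """Subthreshold leakage grows exponentially as Vth drops (~80 mV/decade)."""
    return i0 * 10 ** (-vth * 1000.0 / s_mv)

# Lower Vth -> shorter delay (faster switching) but orders of magnitude more leakage.
for vth in (0.25, 0.35, 0.45):
    print(f"Vth={vth:.2f} V  relative delay={gate_delay(1.0, vth):.2f}  "
          f"relative leak={subthreshold_leak(vth):.2e}")
```

That would explain why a leaky design can clock higher at a given voltage while still being worse for power and, presumably, longevity.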
Also, some have been concerned that AMD may end up with a higher RMA rate because of this change. If the RMA rate does increase, one option for future production is to realign the binning, which also fits with the node improvements that typically come as a process matures.