I'm not saying 15% is "a big deal". Most of the time it isn't. But if it's achievable without much fuss or sacrifice, then why not?
BTW: by ignoring those 15% you're basically undermining the point of overclocking, which isn't exactly in line with this community. But don't worry - I'm with you on this one! :-D Overclocking is the least cost-effective way of improving performance and, hence, fairly pointless - apart from the top CPUs available, obviously.
I'm doing a lot of stuff that can't be split between cores. That mostly stems from the way I work and the tasks I'm given. That's why I care.
But you're doing it as well, sometimes without realising it. Browsing the web is a basic example. 10 years ago it was limited by our internet connections. Today you're not waiting for the data, but for the rendering engine.
What are "production tasks"?
Editing? Depends on what and how you're editing.
Quite a few popular photo algorithms are serial (sequential) and utilize just 1 thread. This is why programs like Photoshop struggle to utilize more than 3-4 threads during non-batch operations.
Principal component analysis (e.g. as used for face recognition) is iterative as well - people make very decent money and build careers by finding ways to make it faster on multi-core machines.
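To make "iterative" concrete: the dominant principal component is classically found by power iteration, where every step needs the result of the previous one. Here's a minimal stdlib-only sketch (toy 2x2 matrix, nothing from a real PCA library):

```python
import math

def power_iteration(matvec, v, steps=100):
    """Find the dominant eigenvector direction of a matrix.

    The loop is inherently sequential: each iterate is computed
    from the previous one, so only the matvec inside a single
    step could be parallelised, not the iterations themselves.
    """
    for _ in range(steps):
        v = matvec(v)                                  # needs the *previous* v
        norm = math.sqrt(sum(x * x for x in v))
        v = [x / norm for x in v]                      # re-normalise
    return v

# Toy symmetric matrix diag(4, 1): its dominant eigenvector is [1, 0]
A = [[4.0, 0.0], [0.0, 1.0]]
matvec = lambda vec: [sum(a * x for a, x in zip(row, vec)) for row in A]
v = power_iteration(matvec, [1.0, 1.0])
```

Speeding this up is exactly the kind of work I mean: you can parallelise the linear algebra inside one step, but the step-to-step dependency stays.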
Nope. A parallel algorithm is one that can be run on many elements independently with no impact on the result. Summing vectors, for example, is perfectly parallel. Monte Carlo simulations are great too, since all runs are independent by definition.
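The Monte Carlo case in one small sketch (estimating pi; the function names and sample counts are my own, picked for illustration): every run has its own RNG and shares nothing, so the runs could be handed to as many cores as you like and the result wouldn't change.

```python
import random

def mc_pi(n_samples, seed):
    """One independent Monte Carlo run: estimate pi by sampling
    points in the unit square and counting hits inside the circle."""
    rng = random.Random(seed)                  # private RNG: no shared state
    hits = sum(
        1
        for _ in range(n_samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * hits / n_samples

# The runs are fully independent, so this list comprehension could be
# replaced with e.g. multiprocessing.Pool.map with identical results.
estimates = [mc_pi(50_000, seed) for seed in range(8)]
pi_estimate = sum(estimates) / len(estimates)
```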
But many problems aren't that easy. We modify them, make compromises and sacrifice a bit of precision to make them work on HPC systems.
Example: training neural networks (easily one of the most important problems of our times) is sequential by definition - each gradient update depends on the weights produced by the previous one.
So you can't simply spread it across many cores.
Instead we partition the data, run training on each partition independently and then average the results. That isn't equivalent to running the training properly.
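Here's a toy sketch of that compromise (one-shot parameter averaging on a trivial one-weight model; everything here is made up for illustration, not from any real framework). The sequential run and the shard-and-average run both land near the true answer, but they follow different trajectories and don't produce the same weights:

```python
import random

def sgd_fit(data, lr=0.05, epochs=50):
    """Fit y ~ w * x with plain SGD. Inherently sequential:
    each update reads the w written by the previous update."""
    w = 0.0
    for _ in range(epochs):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x      # depends on the current w
    return w

rng = random.Random(0)
xs = [rng.uniform(-1, 1) for _ in range(400)]
data = [(x, 3.0 * x + rng.gauss(0, 0.1)) for x in xs]   # true weight: 3.0

# The "proper" sequential run over all the data
w_seq = sgd_fit(data)

# Forced parallelisation: shard the data, train each shard
# independently (these 4 calls could run on 4 cores), then average
shards = [data[i::4] for i in range(4)]
w_avg = sum(sgd_fit(s) for s in shards) / 4
```

Both `w_seq` and `w_avg` come out close to 3.0 on this convex toy problem; on a real non-convex network the gap between the two schemes is much less forgiving, which is exactly why this is a compromise.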
And forced parallelisation doesn't just affect the results. It's very problematic both theoretically and practically. What I mean is: for some algorithms, the parallelisation requires more advanced math than the algorithm itself...