I haven't seen the specs of how Windows intends to schedule these new hybrid designs, but pretty much any user application (especially games) would have to run on the high-performance cores all the time. Games are heavily synchronized, and even a delay on a low-load thread can cause serious stutter, or in some cases even cause the game to glitch.
At the same time, there's some benefit if the OS ends up interrupting the high-priority game threads less, delegating the lower-priority ones to the little cores instead. It might also help with cache pressure on the big cores, since there's no chance of some background thread running on them and evicting important data from the higher-level caches.
Probably not a big improvement, but nevertheless, it's not as bad as you claim it to be.
And not everything in games is synchronized.
Network threads, for example, are very likely not, and a little core is enough for most network tasks.
Nor are a lot of graphics-related tasks. Ever seen assets popping in?
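As a rough sketch of what delegating such a thread could look like on Linux today (the core numbering here is entirely hypothetical — I'm assuming an 8-core part where 4-7 are the little cores; a real program would read the topology from sysfs):

```python
import os
import threading

# Hypothetical topology: cores 0-3 are big, 4-7 are little.
LITTLE_CORES = {4, 5, 6, 7}

def usable_little_cores(wanted=LITTLE_CORES):
    """Intersect the assumed little-core set with the CPUs this
    process may actually run on, so pinning can't fail on machines
    with fewer cores."""
    allowed = os.sched_getaffinity(0)
    return wanted & allowed

def network_worker():
    cores = usable_little_cores()
    if cores:
        # pid 0 means "the calling thread" on Linux, so this pins
        # only the network thread, not the whole process.
        os.sched_setaffinity(0, cores)
    # ... poll sockets, pump packets, etc. ...

t = threading.Thread(target=network_worker, daemon=True)
t.start()
t.join()
```

The big game threads stay free to run anywhere while the low-load worker is confined to the efficient cores.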
Why?
HEDT/high-end workstation and server CPUs will not have a hybrid design, so do you think Microsoft wants to sabotage the performance of power users and servers, then?
If AMD feels "forced" to do it, it's because of marketing.
He said it was for mobile, which is certainly true, as high-efficiency cores will mean that mobile enjoys a much better experience with longer battery life.
Windows scheduling for big/little cores would only kick in when you have those cores, obviously, so I have absolutely no clue what you're talking about when you say it would 'sabotage the performance'.
Where is the spec for this?
How would the OS detect the "type" of an application? (And how would a programmer set a flag(?) to make it run differently?)
There's some hardware in the chips to help with that. How it works, though, hasn't been revealed and might never be.
However, this problem is already known and already has some solutions. Linux has supported big.LITTLE in its scheduler for a long time, and the information the OS already has about its tasks is enough to build some heuristics from.
Very likely the hardware Intel added is just that: more task statistics to guide the OS into making the right choice.
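A toy version of such a heuristic (entirely my own sketch — the thresholds and inputs are made up for illustration, not Intel's or Microsoft's actual logic) might classify tasks from simple runtime statistics:

```python
def pick_core_type(avg_runtime_us, wakeups_per_sec, big_threshold_us=500):
    """Crude heuristic: a task running long bursts belongs on a big
    core; one that wakes often but runs briefly (timers, housekeeping,
    background services) fits a little core. Thresholds are invented
    purely for illustration."""
    if avg_runtime_us >= big_threshold_us:
        return "big"
    if wakeups_per_sec > 100 and avg_runtime_us < 100:
        return "little"
    # Ambiguous case: default to big to avoid hurting latency.
    return "big"

print(pick_core_type(2000, 10))   # long compute burst -> "big"
print(pick_core_type(50, 500))    # chatty background timer -> "little"
```

Extra hardware counters would just make inputs like these more accurate than what the OS can infer on its own.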
The small cores will be very slow, and if the details we've seen are right, they will share the L2 cache, which means high latency. They will be totally unsuitable for even an "older" game.
They might actually be quite decent. Tremont, the previous generation of little core, turbos to 3.3 GHz, and the rumours are that Gracemont will have IPC similar to Skylake.
Quite possibly they will turbo higher than Tremont, being an enhanced version with a higher TDP budget on desktop, etc. Though 3.3 GHz is already on par with the original Skylake: the i5-6500, for instance, had an all-core turbo of 3.3 GHz.
The L2 is shared among a cluster of 4 small cores, not all of them. Plus, it's 2 MB of L2, so basically 512 KB per core. Very likely it's multi-ported.
Obviously they aren't going to be high-power cores, but if Intel manages to get them to turbo to 3.5+ GHz or so, they would be more than good enough, pretty similar to an i7 from 2015-2016 (original Skylake). If they manage to clock even better, well, that's a bonus.
The minimum would likely be around an i5-6500, considering that Tremont is already pretty close to it and the new Atom architecture might bring it to IPC parity, or close enough.
That is, assuming the rumours are sort of true. If they hold no truth whatsoever, then we have no clue.
I sure hope AMD is smarter and skips it altogether. I would rather have more cache.
The little cores would be a pretty small part of the die anyway, and the L3 cache already takes up most of the die, so area isn't an issue.