First and foremost: why do people on this forum still think Intel's 10nm isn't working properly at this point?
Is there anything Intel could do to convince you?
Absolutely! They could make a 10nm product line that isn't outperformed in the same segment by their own 14nm line (according to their own numbers, no less). They could also phase out 14nm entirely for one or more segments (which would make sense to alleviate the ongoing shortage), as that would demonstrate that 10nm yields are ramping properly - it's not like 10nm and 14nm are made at the same fabs, after all. Though at this point they are even admitting in their own financial disclosures that 10nm is likely never going to be profitable, and that 7nm will likely outperform it out of the gate when it launches in late 2021 or early 2022 (barring any delays, obviously). If 10nm was "working properly" it would at least match 14nm on performance when it also has an 18% IPC advantage - or at the very least beat it on perf/W, given that 14nm's many revisions have focused on increasing clocks, not efficiency.
Bad? That's the only logical option.
No. The logical option for a part meant for diverse use cases (like any CPU) is to give it as much performance as possible in all types of workloads, and then let the system balance which part gets how much of the power budget under which workload. That would give the best possible performance under any workload, with the only downside being a minor increase in die size (as long as the chip has fine-grained power and clock gating to keep idle components from drawing too much power).
Some chips are built around faster IGP, which makes them more usable for IGP gaming - a use case where stronger CPU wouldn't be utilized anyway.
Other (most) chips are CPU-oriented, which makes them better for everything outside of IGP gaming - and a much better companion for more powerful discrete GPUs.
If the top-of-the-range SoC offered both the best CPU and the best IGP in the lineup, it would mean all the other chips are not optimal in their use of package size or power consumption. It would make no sense.
Uh ... so Intel's own laptop lineup up until the 10th gen has made no sense? 'Cause up until then they have essentially had what you describe. Sure, there have been Iris-equipped SKUs, but those have been rare to the point of nonexistence (outside of Apple's 13" laptops). The thing is, what you're describing here is a scenario with essentially a single workload for each machine, which is unrealistic in general, but particularly on low-power laptops. For most people a device like this is their only device, and as such it is likely to be used for anything from photo and video editing (mainly CPU bound, but benefiting greatly from GPU acceleration) to CAD to light gaming to pretty much anything else. This is why forcing users to choose makes no sense for this segment. For higher-power segments it makes a lot
more sense to have a singular focus on CPU performance, as the chance of those machines being paired with a discrete GPU is much higher.
Also, what you're saying seems to take for granted that there have to be several pieces of silicon within a market segment, which ... no, there don't. A single top-end SKU with everything enabled, plus lower-end SKUs with parts of the iGPU disabled, some cores disabled, or simply lower clocks, makes perfect sense as long as that top-end SKU isn't an overly large die. That's how chips are binned to reduce waste, after all. If all you need is CPU performance, that top-end SKU will not in any way be handicapped by also having a powerful iGPU - it will simply sit idle. The same goes the other way around. But users get the option of better performance in other workloads - which you are arguing against for some reason.
True, and two more important facts:
Despite all of Intel's "failures" on 10nm, they are probably close to matching AMD's 7nm in terms of units shipped, while having only a tiny fraction of their total volume on 10nm. By the end of this year Intel will very likely make more wafers as well, but even then it will not be enough for the majority of their products.
AMD have been lucky to have TSMC's high power 7nm capacity mostly for themselves, but that is changing.
So the reality is more nuanced than most people think.
That is true, but then Intel is the market leader and has recently had a 9:1 or higher sales advantage across all segments - so it would be really weird if AMD somehow had them matched on volume at this point. Even with AMD buying wafers from others, that tells us Intel has had the capacity to produce enough chips for roughly 90% of the market. Intel also has a massive fab operation of its own, while AMD contracts chip production out to TSMC, meaning AMD has to share fab space with TSMC's other customers - including behemoth Apple, which until recently likely occupied the majority of TSMC's 7nm fab capacity, or at least a very large portion of it. And there's no such thing as "TSMC's high power 7nm capacity" - there are different design libraries and different revisions of each node, but all TSMC 7nm is still made across the same selection of fabs and on the same equipment (though they obviously keep anything that can be made at a single fab to one locale).

So despite Intel shipping Ice Lake in "significant numbers" (though currently limited to a relatively small selection of thin-and-lights, with Comet Lake being far more prevalent), there is currently zero reason to believe Intel's 10nm node is anywhere near the kind of yields you would expect from a high-volume node, nor that its performance is where it should be. If it were, they would be transitioning 14nm fabs to 10nm (to increase chip output per fab on the denser node), which would likely mean moving all mobile chips to 10nm while freeing up 14nm capacity for the lucrative server and enterprise markets, and readying 10nm designs for all market segments. There is nothing indicating that any of this is happening. So no, Intel's 10nm is by no means working as it should. All signs (including Intel's own public plans) point towards Intel keeping 10nm limping along, with 14nm as the backbone of the company's production, until 7nm arrives to get things going as they should.
People forget that even AMD, with just a fraction of Intel's CPU sales, probably has way over half of its manufactured "total die area" on 14nm.
Every Zen2 chip has a massive 14nm I/O die. Mobile SoCs are a generation behind - still 14nm. And of course AMD still makes and sells Zen and Zen+ desktop/server parts as well.
In fact, I'd love to see their ratio of 14nm vs 7nm wafers. I bet it's 2:1 at best at the moment.
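For what it's worth, a quick back-of-envelope calculation supports the direction of that claim. Using the commonly cited approximate die sizes (≈74 mm² for a 7nm Zen 2 chiplet, ≈125 mm² for the desktop I/O die, which is strictly speaking GloFo 12nm-class rather than 14nm - both figures are rough public estimates, not official numbers):

```python
# Rough sketch: 7nm vs 12/14nm-class die area per Zen 2 desktop CPU.
# Die sizes are approximate public figures, not official specs.
ccd_mm2 = 74    # one 7nm Zen 2 chiplet (CCD), approx.
ciod_mm2 = 125  # client I/O die (12/14nm-class), approx.

for name, ccds in [("1-CCD parts (e.g. 3600/3700X)", 1),
                   ("2-CCD parts (e.g. 3900X/3950X)", 2)]:
    area_7nm = ccds * ccd_mm2
    old_node_share = ciod_mm2 / (area_7nm + ciod_mm2)
    print(f"{name}: {area_7nm} mm2 on 7nm vs {ciod_mm2} mm2 old-node "
          f"(old-node share: {old_node_share:.0%})")
```

So every CPU below 12 cores actually carries more old-node silicon than 7nm silicon, before you even count mobile SoCs or Zen/Zen+ parts - making a roughly 2:1 wafer split entirely plausible.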
Not forgetting anything. There is however a significant difference between continuing production of a previous-generation product (which Intel also does) to fulfill long-term contracts and maximise ROI on already taped-out and mature silicon, and making brand-new designs on an old node. AMD absolutely makes a lot of 14nm stuff still, but they aren't announcing new products on the node. Bringing the I/O die into this argument is also a bit odd - it's not a part that would benefit whatsoever from a smaller node (physical interconnects generally scale poorly with node shrinks), so why use an expensive smaller node for that when it's better utilized for parts that would benefit? If Intel did an MCM part I would absolutely not criticize them whatsoever for using 14nm or even 22nm for parts of it - that would be the smart thing to do.