I think they deserve more credit and attention than that. It's not just first-party applications where this chip looks good: DaVinci Resolve, Adobe Premiere (and others) are also running natively on Apple silicon, and they're getting awfully close to the performance of the Xeon-powered Mac Pro. Granted, the Xeon they're using isn't the best example of x86 silicon, but compare those results to the same applications running on Windows on Intel 12th gen and Zen 3, and the M1 hardware still looks pretty impressive no matter how you look at it.
The package of the APU is unconventional, but if that level of integration is key to the performance they're getting, then either AMD and Intel will have to do something similar or someone else will eventually do it for them. I mean, if you use any of those demanding applications (video editing, 3D rendering) and Apple starts walking away from the competition, you will switch, so in that sense they are absolutely a threat. Users of professional software packages are not loyal to platforms or hardware; they're not fanbois/gurls.
It's not the integration here, it's the die and package size that produces the nice numbers. AMD and Intel don't have to do anything; they can produce any sort of x86 machine for a fraction of the cost with a faster chip.
The way forward for Apple:
- bigger dies / more chiplets = further cost increase per chip
- even further integration, which implies even more tailor-made solutions, which kills expandability unless they can automate it somehow. That could be their higher purpose; they already built pretty strong binary translation software, if I recall, to bridge x86 > ARM.
Both things are finite. You can only scale chips as far as the package allows, and we're already looking at a massive package compared to the competition. You can also only scale chips as far as is economically feasible, and that's a moving target, but still finite in some way.
The way forward for x86:
- the focus is still on small dies; the monolithic die is yesterday's news. So we're looking not just at die size increases but at more chiplets or better-arranged core complexes, still with a focus on reduced yield risk, ergo small dies (see the sketch after this list). Nodes keep shrinking, so the gain here is massive, simply because it's achievable: the same die size on a smaller node already means a larger floor plan.
- the focus is on more specialized cores that, again, take up fewer square millimetres per core on the die.
- software efficiency is a case-by-case scenario. Some software will optimize for newer hardware, other stuff will lag behind, but eventually economics dictate that you need to optimize to keep up. These are costs Intel and AMD are not bearing, while Apple forced itself into that software garden; with control comes a large responsibility there.
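To put a rough number on that yield argument, here's a minimal sketch using the textbook Poisson yield model, Y = exp(-A * D0). The defect density and die areas below are illustrative assumptions, not any foundry's actual figures.

```python
# Why small dies reduce yield risk: the simple Poisson yield model,
# Y = exp(-A * D0), where A is die area and D0 is defect density.
# D0 here is an illustrative assumption, not a real foundry number.
import math

def poisson_yield(die_area_mm2: float, defects_per_cm2: float) -> float:
    """Expected fraction of defect-free dies for a given die area."""
    area_cm2 = die_area_mm2 / 100.0
    return math.exp(-area_cm2 * defects_per_cm2)

D0 = 0.1  # assumed defects per cm^2 on a mature node

# Small chiplet vs. big monolithic die vs. huge Apple-style die.
for area_mm2 in (100, 400, 800):
    print(f"{area_mm2:4d} mm^2 -> ~{poisson_yield(area_mm2, D0):.0%} defect-free")
# 100 mm^2 -> ~90%, 400 mm^2 -> ~67%, 800 mm^2 -> ~45%
```

The exact numbers shift with node maturity, and real yield models (plus binning of partly defective dies) are more forgiving, but the shape is the point: defect-free yield falls off exponentially with die area, which is exactly the risk chiplets hedge against.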
As Apple's dies grow larger, I predict they'll face an ever more difficult economic balance with a large package. Part of that can be justified by its performance, perf/watt, and the performance of the applications that run on it. But the hard cap on its capabilities will be lower than what x86 can be stretched towards, or Apple will have to sacrifice power efficiency for clocks. Basically... something's gonna give, one way or another.
Huge chips and huge packages are not risk-free; the real core question is whether Apple timed their move to a larger package right.

Historically, every company that could hold on to a smaller chip at a competitive level longer than the rest has won that specific round of the silicon wars. Intel during their quad-core / single-thread-focused days (which is why they're not abandoning that race even with the newest core designs; the reason E-cores exist is so they can push P-cores harder within similar TDPs), and Nvidia ever since Kepler. Even now, with a feature advantage for Nvidia, AMD manages to strike back with a smaller chip in the strange market of today. Yes, AMD won a few rounds in the GCN years, especially with the HD 7970, but let's not speak of their margins; meanwhile, Nvidia kept up with smaller, more efficient designs and started swimming in gold year over year, all while keeping the performance crown and increasing their lead as AMD's GCN stopped scaling properly past Hawaii XT. It is that money that enabled them to fortify their lead. We have yet to see how things develop post-Ampere as Nvidia does a proper shrink at last, and not this crappy Samsung business; the only reason Nvidia had margins to speak of on Samsung is likely that Samsung loved having that high-profile business and offered something cheap. Even so... the ride was bumpy, we know yields weren't fantastic, and we know of several price hikes between Pascal and Ampere, while nearly all of Turing was too expensive even at base MSRP... because the die got huge. One gen past Turing and Nvidia lost a convincing 3-5 year lead to RDNA2, which now has a far stronger road ahead of it in terms of die size/scalability.
Also... this is Apple, which kind of lives in a vacuum in tech land and is really happy there. The reason they are what they are is that they have nice things that aren't for everyone's wallet. It remains to be seen how hard they even want to push and fire on all cylinders. They can easily get by on marketing and minimal progress, as we've seen, again, historically.