Friday, March 27th 2020
Apple ARM Based MacBooks and iMacs to come in 2021
Apple has been working on replacing Intel CPUs across its product lineup for a while now, and the first batch of products to feature the new Arm-based CPUs should be coming soon. Having a completely custom CPU inside its MacBook and iMac devices will allow Apple to take control of the performance and security of those devices, just as it did with its iPhone models. Apple has proved that its custom-built CPUs based on the Arm Instruction Set Architecture (ISA) can be very powerful and match Intel's best offerings, all while being much more efficient, with a TDP of only a few Watts.
According to analyst Ming-Chi Kuo, Apple has started an "aggressive processor replacement strategy", which should yield its first results around Q4 2020 or Q1 2021. Kuo says the in-house design approach will bring not only tighter control of the system but also a financial benefit, as the custom processors will be 40% to 60% cheaper than current Intel CPU prices.
Source:
AppleInsider
98 Comments on Apple ARM Based MacBooks and iMacs to come in 2021
Compiling most software for ARM using the base feature set isn't hard, but the performance would be terrible. If the application does anything requiring real performance, you'll have to rely on application-specific acceleration, which means you may have to write specific code paths for each feature set you want to target, requiring an immense amount of low-level code to get decent performance. The main strength of x86 is that it's inherently faster in generic workloads, and the ISA is very stable across microarchitectures (in contrast to custom ARM designs, which vary widely), which makes it easy to write performant code across multiple generations of CPUs from both Intel and AMD (and VIA :D).
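To illustrate what "specific code paths for each feature set" looks like in practice, here is a minimal sketch (not from the article; the function name is made up) of a float sum with separate NEON and SSE paths plus a scalar fallback. Real libraries carry many such duplicated paths, one per target feature set:

```c
#include <stddef.h>

/* Illustrative sketch: one function, three code paths.
 * Which path compiles in depends on the target's feature set. */
#if defined(__ARM_NEON)
#include <arm_neon.h>
#elif defined(__SSE__)
#include <xmmintrin.h>
#endif

float sum_f32(const float *v, size_t n)
{
    float total = 0.0f;
    size_t i = 0;
#if defined(__ARM_NEON)
    /* NEON path: accumulate 4 floats per iteration */
    float32x4_t acc = vdupq_n_f32(0.0f);
    for (; i + 4 <= n; i += 4)
        acc = vaddq_f32(acc, vld1q_f32(v + i));
    total += vgetq_lane_f32(acc, 0) + vgetq_lane_f32(acc, 1)
           + vgetq_lane_f32(acc, 2) + vgetq_lane_f32(acc, 3);
#elif defined(__SSE__)
    /* SSE path: accumulate 4 floats per iteration */
    __m128 acc = _mm_setzero_ps();
    for (; i + 4 <= n; i += 4)
        acc = _mm_add_ps(acc, _mm_loadu_ps(v + i));
    float lanes[4];
    _mm_storeu_ps(lanes, acc);
    total += lanes[0] + lanes[1] + lanes[2] + lanes[3];
#endif
    /* Scalar fallback, also handles the tail elements */
    for (; i < n; i++)
        total += v[i];
    return total;
}
```

And this only covers two feature sets; targeting AVX, SVE, or a vendor's custom extensions means yet more branches of the same function.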
Geekbench 5
Snapdragon 865 (2.5 W - 3 W): 3464 Multi-core, 934 Single-core, 3171 Compute www.pcworld.com/article/3490156/qualcomm-snapdragon-865-benchmark-performance.html
Apple A13 Bionic (2.5 W - 3 W): 3338 Multi-core, 1288 Single-core, 6273 Compute www.pcworld.com/article/3490156/qualcomm-snapdragon-865-benchmark-performance.html
Intel Atom x5-Z8350 (2 W): 523 Multi-core, 168 Single-core browser.geekbench.com/processors/1969
Intel Core i7-1060G7 (9 W): 2151 Multi-core, 1268 Single-core browser.geekbench.com/v5/cpu/1372971

Correct. Google Play Store has millions of apps, so the argument about software is invalid.
ARM, IBM Power, or any other architecture can be faster in some scenario. We can look at a benchmark and proclaim, "Oh, so much faster than x86!" It doesn't matter at all. The world runs on x86 OSes and software, with a slim few exceptions.
Anyone who says it's easy to port or recompile software has never worked in development, or is grossly understating the effort required for the sake of their argument. You can easily port or rewrite some basic tool with 1,000 lines and no dependencies, but imagine business software with 10k or 100k lines of code (or more). What about the 20-30 years of technical debt baked into a lot of big enterprise software packages? Hell, most of the accounting software I've worked with is reliant on Windows and would need to be recreated from the ground up to even run on Linux.
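A hypothetical sketch of what "reliant on Windows" means at the code level (the program, registry key, and function names here are invented for illustration): platform-specific calls like this, scattered through tens of thousands of lines of business logic, are why a port is a rewrite rather than a recompile.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical: a config lookup hard-wired to the Windows registry.
 * Repeated across a large codebase, calls like this block a "simple
 * recompile" on any other OS. */
#ifdef _WIN32
#include <windows.h>
static int read_tax_rate(char *out, size_t len)
{
    DWORD n = (DWORD)len;
    /* Win32-only API: there is no registry on Linux or macOS. */
    return RegGetValueA(HKEY_LOCAL_MACHINE, "SOFTWARE\\AcmeAccounting",
                        "TaxRate", RRF_RT_REG_SZ, NULL, out, &n)
           == ERROR_SUCCESS;
}
#else
static int read_tax_rate(char *out, size_t len)
{
    /* Porting means inventing a replacement config store entirely,
     * then migrating every installation's data to it. Hard-coded
     * default used here just to keep the sketch self-contained. */
    snprintf(out, len, "%s", "23");
    return 1;
}
#endif
```

Multiply this by every registry read, COM call, and GDI drawing routine in a 20-year-old package and the scale of the problem becomes clear.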
ARM *could* become a dominant desktop or server architecture. But it’s going to take many years for that to be a possibility.
ARM is well suited for some server tasks as well.
It's really a question of having software designed specifically for the architecture.
We live in a x86 world. That's why ARM seems like a nightmare to use. But if we went this route 30 years ago, we would have a complete and robust ARM ecosystem.
If we really focused on migrating to ARM, it may become a fully featured universe in maybe a decade.
But we could also focus on improving and optimizing x86 - getting to ARM's efficiency without any big sacrifices.
x86 and ARM are different, but both can do what CPUs are expected to do. It's not a question of what moving to ARM can give us. It's a question of: why bother at all? Because most of our software today is written for x86. Because moving to ARM would be - as @effikan noticed - a nightmare.
ARM is more efficient in some scenarios but x86 is better in other.
x86 is built as a universal architecture. ARM is often custom made for a particular system.
ARM has big.LITTLE, which is responsible for a lot of its efficiency, but Intel's hybrid core architecture will close the gap. And Apple's chip is designed to work well with the other hardware Apple uses and is optimized for iOS.
It's a comfort most of the computing world doesn't have. x86 is universal and flexible - the cost is efficiency. There's no way around it.
You've mentioned Graviton, which is a great example of that optimization. It's a chip designed to be very efficient in AWS infrastructure and - probably - with Amazon's own Linux.
Phoronix tested the first generation chips:
www.phoronix.com/scan.php?page=article&item=graviton-linux-os&num=1
Amazon Linux was the best all-round. RHEL was the worst, despite being the most popular server distribution.
So if you have the comfort of using a distribution, Graviton could be an interesting choice. If you don't (your software runs only on Red Hat family), x86 will remain more efficient.
And of course we don't have a fully featured ARM Windows yet.
Another good example would be IBM's POWER and Z architectures.
And there is more healthy competition in the ARM market - Qualcomm, Apple, Samsung, MediaTek.
While in the x86 world, there are only AMD, Intel, and maybe VIA left...
Too arrogant for a simple supplier to IBM.
There are some fundamental facts that many fail to realize;
Firstly, RISC is inherently slower, it needs more operations to do the same work. In a world where we are scaling towards the "clock wall" and the "memory wall", such inherent disadvantages will only increase. In order for ARM to compete, it needs to become more "CISC", as x86 keeps increasing the work done per instruction.
Secondly, the way most low-power ARM or MIPS chips achieve performance today is through ASICs which do all the heavy lifting, not through their core instruction set. This is how your Blu-ray player manages to play movies with a tiny, slow CPU, or your phone manages to play YouTube videos. Such features vary a lot between ARM designs, and some vendors, especially Apple, have a lot of their own. Applications then need to be coded specifically to utilize them, which is a nightmare to develop for, considering all the various ARM designs. This custom chip approach is a conscious decision, and is a strength for ARM in many ways, but it's not for the general computing market in PCs. Yes, it's a very important question.
The fact is that from an ISA standpoint, moving to ARM will not yield any substantial benefits; it will only yield a lot of performance loss. All modern x86 microarchitectures have already solved this "problem": if you want higher performance or better energy efficiency, you design the microarchitecture accordingly, but the ISA remains the same. Having very high compatibility across platforms is one of the key strengths of the PC. Imagine for a moment if the PC market were dominated by semi-custom ISAs; if Dell, HP, Lenovo, etc. all had their own slightly different CPUs, software compatibility would be a pain, and the PC market would probably be much weaker as a whole. We sort of had this in the beginning, before IBM "standardized" the PC.
The only thing to gain is easier licensing and the ability to customize, but the latter is only an advantage for embedded or specialized platforms. Sure, it's a part of it.
Other major parts are CPU front-end complexity, cache efficiency/bandwidth, ALU/FPU efficiency, etc.
As you see, most factors which make these designs more energy efficient have little to do with ISA; it's a design choice.
That means Intel violates the EU standards for energy efficiency.
The bottom line is that if you're dependent on software that you can't control or don't have a service agreement for, then it's a ticking time bomb. It will eventually explode if you have no support. This is something Apple has done before, several times, and none of those times resulted in their demise. It's a nice way of telling businesses, "Hey, we're about to change everything, and we'll support you to some extent for a while, but you're expected to get your shit together and keep up with the times."
Honestly, if Apple wants to move forward and not be constrained by the past, this is the best way to do it, but if we always stuck with the "well, we've always done it this way," mentality, there would never be truly substantial progress.
a) software wasn't as "globalized" as it has been since the late 90s, i.e. you were fine with a text editor, mail client, calculator, compiler, a snake-like game - mostly stuff provided with the OS (phones looked like that until the mid 2000s),
b) PowerPC had potential to become a mainstream architecture.
But the IT world changed. We all started using pretty much the same OS, the same office suite, the same photo editor. And, obviously, the same games - bigger, made by 3rd party studios.
And it was all made for x86.
So when Apple decided to go x86 in ~2005, it wasn't just another choice of architecture. They were fixing a mistake made 15 years earlier. And it changed everything, because suddenly Macs were usable for so many use scenarios. They became popular with coders, scientists, analysts. In business as well (especially when MS Office arrived).
The way I see it: if they really cracked x86-64 emulation (including GPU support) with little or no performance penalty, they can use whatever architecture they want - no one will care.
But if they launch a MacBook that can't run MS Office, Adobe Photoshop, Matlab, Visual Studio and so on... they'll end up with just a posh Linux laptop with a subpar repository. And the number of Linux laptops offered today tells us exactly how big that market is. But that's the perspective of a PC enthusiast or coder. Yeah, you can compile for ARM, you can make nice Raspberry Pi projects. You can do many things.
As of today you can't migrate most of the normal, everyday tasks we do: both professionally and casually (e.g. gaming).
As long as we use locally installed software, the mainstream program availability dictates which platform sells and which doesn't. People won't suddenly replace MS Office with LibreOffice because an ARM laptop will have double the battery life.
Maybe Apple is betting on a cloud strategy, where these MacBooks will merely be terminals. Fine. But that would mean only the weaker models (MacBook Air) get an ARM chip - essentially becoming an iPad Pro with a keyboard. The MacBook Pro and other Macs would keep x86 to run software locally, or slowly vanish from the lineup. Which is what I've mentioned earlier: the reason ARM chips shine in some use cases is that they were made specifically for those scenarios/clients. x86 focused on being as universal as possible.
But there is no reason why big x86 clients wouldn't work with Intel and AMD on customized chips. Especially when the world moves towards mobile and cloud, so less chip variants are needed.
Ampere Altra is the first 80-core ARM-based server processor
venturebeat.com/2020/03/03/ampere-altra-is-the-first-80-core-arm-based-server-processor/
80 cores at a 210 W TDP, and faster than AMD's 64-core and Intel's 56-core server CPUs. They can do chiplets. Imagine 20 ARM chiplets at 60 watts.