Friday, January 17th 2025
Apple Silicon Macs Gain x86 Emulation Capability, Run x86 Windows Apps on macOS
Parallels has announced the introduction of x86 emulation support in Parallels Desktop 20.2.0 for Apple Silicon Macs. This new feature enables users to run x86-based virtual machines on their M-series Mac computers, addressing a longstanding limitation since Apple's transition to its custom Arm-based processors. The early technology preview allows users to run Windows 10, Windows 11 (with some restrictions), Windows Server 2019/2022, and various Linux distributions through a proprietary emulation engine. This development particularly benefits developers and users who need to run 32-bit Windows applications or prefer x86-64 Linux virtual machines as an alternative to Apple's Rosetta-based solutions.
However, Parallels is transparent about the current limitations of this preview release. Performance is notably slow, with Windows boot times ranging from 2 to 7 minutes, and overall system responsiveness remains low. The emulation only supports 64-bit operating systems, though those systems can run 32-bit applications. Additionally, USB device support is not available, and users must rely on Apple's hypervisor, as the Parallels hypervisor isn't compatible. Despite these constraints, the release is a crucial step toward bridging the compatibility gap for Apple Silicon Mac users, allowing legacy software to remain usable. To manage expectations, the option to start emulated virtual machines is hidden in the user interface by default, as the feature is still imperfect.
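A quick way to confirm that such a guest really is presenting an x86 environment is to check the architecture the guest kernel reports: inside an emulated x86 VM it will report an x86-64 machine type even though the host CPU is Arm. A minimal, standard-library-only Python sketch (the `classify_arch` helper is hypothetical, not part of any Parallels tooling):

```python
import platform

def classify_arch(machine: str) -> str:
    """Map a platform.machine() string to a coarse ISA family."""
    m = machine.lower()
    if m in ("x86_64", "amd64", "i386", "i686"):
        return "x86"
    if m in ("arm64", "aarch64"):
        return "arm"
    return "other"

if __name__ == "__main__":
    # Run inside the guest: an emulated x86 VM on an Apple Silicon
    # host will still report "x86", since emulation is transparent
    # to the guest OS.
    print(classify_arch(platform.machine()))
```

Running the same script on the macOS host would print "arm", which is exactly the gap the emulation engine bridges.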
Source:
Parallels
19 Comments on Apple Silicon Macs Gain x86 Emulation Capability, Run x86 Windows Apps on macOS
ARM Win11 has been running on Parallels & VMware Fusion for close to a year now! So what's changed suddenly? :wtf:
Running x86 apps in macOS directly would be like running apps on Linux using WINE - this is the equivalent of using QEMU.
Also, technically, 'legacy' software (from the macOS ecosystem) is already supported with Apple's own binary translation engine (as well as special ISA bits in Apple's ARM implementation) - software for other Intel-based OSes isn't technically 'legacy' software.
As for ARM vs x86, x86 was never designed for small cores at low power - even Intel Atom wasn't ARM-small.
ARM, meanwhile, was getting design wins in basic cell phones and in equipment that needed more than a basic microcontroller in a small footprint. ARM's biggest threat once upon a time was probably MIPS - indeed, in the home router space many devices still use MIPS-architecture SoCs.
If x86 had been targeted at such markets, you'd probably have a) seen a push toward it long ago, and b) seen different technologies added to the x86 ISA at different times. For example, Intel adding QuickSync was 'cute' compared to the fact that ARM SoCs had been leveraging IP/logic processing blocks for media, comms, etc. for some time, whereas x86 always just brute-force CPU-processed things, because on the desktop (and even, to a certain extent, on laptops, with comparatively massive batteries next to a handheld device) one could afford to do that.
For example, when MP4/H.264 video was becoming normal, most PCs at the time had no issue playing it back (sure, some worked harder than others), whereas trying to watch MP4 video on an ARM device that a) didn't have hardware decode support for it and b) didn't have NEON/SIMD support was tough going.
I suspect that once a small and efficient instruction decoder was in place, continual process-node improvements would have left little overhead of one arch vs the other.
Even Jim Keller has basically said a good CPU isn't solely dictated by the ISA it uses.
It’s an ARM world and x86 is just living in it.
Maybe one day we'll finally get Proton on macOS... We also have x86 devices like Lunar Lake that can put out ARM MacBook levels of runtime.
While it's true there are ARM supercomputers out there, the GPUs in those systems are doing the heavy lifting. In any config where the CPU is expected to do significant work, notice they're still x86? It's actually not an ARM world. ARM is powerful, but so is x86, and the lowest-end controllers have been shifting from ARM to RISC-V.
Everything has its place.
Once upon a time you'd have expansion cards with Zilog or MIPS based microcontrollers doing the work...
e.g. yeah that's a Z80 CPU at the top middle hanging out on an SCSI controller which would have been sitting in an Intel Server/Workstation:
The original Ageia PhysX cards were just MIPS CPUs. They could just as easily have been ARM-based or even Power-based... guessing there was some technical reason they didn't go that route.
Now every SSD is pretty much using some ARM CPU core to handle the drive functions.
Just for kicks, you get to pay for that privilege, too. Parallels is not free. You already own it and want to use it on a new OS version? You're in luck, you get to pay for an upgrade, too.
Apple not only did a really good microarchitecture, on a really good (and expensive) node, but also did a great package around it when it comes to memory subsystem, co-processors and whatnot. They likely could have achieved something really similar with x86 or even RISC-V (assuming they also managed to impose good software support for the latter).
AMD CPUs have a really great microarchitecture, an okay node by today's standards (they're always lagging 1 or 2 nodes behind Apple), and a meh package. Strix Point is a great monolithic design, but on the usual boring 128-bit memory subsystem. Strix Halo finally went for an improved memory subsystem, but opted for a more desktop-like approach with its chiplets, re-using the desktop/EPYC CCDs with an impressive new IOD.
Intel's Lunar Lake was a really impressive step in the right direction, approaching the base M-chip lineup, but too bad that design was a one-off for Intel. Let's see if Panther Lake can bring something similarly impressive that also manages to scale.
Other ARM designs (like the ones from Qcom) are kinda meh. They are good, but not impressive by any means. Some server offerings are also good, but not as brilliant as what Apple has done. MIPS is almost dead: no new designs, and most SoCs are legacy designs. ARM managed to take quite a chunk of the new networking space, but now we even have x86 there, where Intel managed a really nice design with a low footprint at an absurdly low cost.
RISC-V is also getting some traction at the lower end of embedded systems, but that'd be closer to the MCU space. *cries in atrix 4g*
That Tegra CPU was really nice, but the lack of NEON made it a pain not long after I got it :( Lunar Lake's problem (IMO) is that it doesn't scale :(
There isn't, nor will there be, a beefier variant of it, which is a bummer since its design was really cool. More on the middle ground, ARM and x86 servers can live happily together as options. Where I work, we have both ARM and x86 nodes in our K8s clusters for various tasks, like running simple backends, databases, and even some inference stuff (some of which we run without GPUs).
I don't get why people say one thing needs to die for the other to prevail; both ISAs can just co-exist and we can choose whichever we want at any given time, especially nowadays when cross-compiling has become somewhat easy with modern tooling and you can add a new ISA to your build farms without much headache. Eh, I'd say it was either more of a pricing issue, or just something the folks working on it were used to. I did many projects with AVR back then because I had done a similar thing with an Arduino and scaling it up from there was easier than bolting on a new toolchain. Same excuse I use for my personal preference for STMicro stuff. Or RISC-V :p
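The "add a new ISA to your build farm" point really is mostly a one-line change with modern toolchains. A minimal Python sketch, assuming a Go project (Go cross-compiles with just the GOOS/GOARCH environment variables); the target lists, output names, and `build_commands` helper are all hypothetical:

```python
from itertools import product

# Hypothetical build matrix: adding a new ISA is one entry in ARCHS.
OSES = ("linux", "darwin")
ARCHS = ("amd64", "arm64", "riscv64")

def build_commands(oses=OSES, archs=ARCHS):
    """Expand the OS x arch matrix into cross-compile invocations."""
    return [
        f"GOOS={goos} GOARCH={goarch} go build -o app-{goos}-{goarch} ./..."
        for goos, goarch in product(oses, archs)
    ]

if __name__ == "__main__":
    for cmd in build_commands():
        print(cmd)
```

Dropping riscv64 into ARCHS adds the new targets to every OS without touching anything else, which is the low-friction co-existence the comment is describing.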
Yeah, RISC-V might displace ARM. And in many ways it's interesting they went straight in at the ground floor with RISC-V - yeah, they've demonstrated computer-class CPU implementations, but a lot of the actual usage out there has been at the tiny-device level... it's almost as if low-cost CPU cores for small commodity uses, without some additional/excessive licensing cost applied, are an attractive thing... go figure.
That said, the problem is that for the moment, a lot of the controllers in the marketplace use ARM and will probably continue to until the vendors a) have time to design / implement a RISC-V design which will meet their requirements, and b) get through all the validation.
Moving to another ISA is additional cost and, at the moment, in the SSD space it's a very cost sensitive market. I can't imagine a vendor not having some firmware issues in the field, especially as validation seems a bit hit and miss with some vendors.
It would be more telling if, say, an Apple device had a microcontroller that uses RISC-V instead of ARM - be it a speaker, part of a HomeKit product, or something.
www.techpowerup.com/298936/report-apple-to-move-a-part-of-its-embedded-cores-to-risc-v-stepping-away-from-arm-isa
By the time the system boots, hunter-gatherers would have found their food.