Friday, January 17th 2025

Apple Silicon Macs Gain x86 Emulation Capability, Run x86 Windows Apps on macOS

Parallels has announced the introduction of x86 emulation support in Parallels Desktop 20.2.0 for Apple Silicon Macs. This new feature enables users to run x86-based virtual machines on their M-series Mac computers, addressing a longstanding limitation since Apple's transition to its custom Arm-based processors. The early technology preview allows users to run Windows 10, Windows 11 (with some restrictions), Windows Server 2019/2022, and various Linux distributions through a proprietary emulation engine. This development particularly benefits developers and users who need to run 32-bit Windows applications or prefer x86-64 Linux virtual machines as an alternative to Apple Rosetta-based solutions.

However, Parallels is transparent about the current limitations of this preview release. Performance is notably slow, with Windows boot times ranging from 2 to 7 minutes, and overall system responsiveness remains low. The emulation only supports 64-bit operating systems, though those systems can run 32-bit applications. Additionally, USB device support is not available, and users must rely on Apple's hypervisor, as the Parallels hypervisor isn't compatible. Despite these constraints, the release is a crucial step toward bridging the compatibility gap for Apple Silicon Mac users, allowing legacy software to remain usable. To manage expectations while the feature is still imperfect, the option to start emulated virtual machines is hidden in the user interface.
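Parallels hasn't published details of its proprietary engine, but the cost model of software emulation is easy to illustrate: every guest x86 instruction must be fetched, decoded, and dispatched by host code, so one guest instruction costs many host instructions. The toy interpreter below is purely a hypothetical sketch of that per-instruction overhead (the instruction set and register names are invented for illustration), not Parallels' implementation; production engines typically reduce the cost with dynamic binary translation, caching translated blocks of host code.

```python
# Hypothetical sketch: a tiny interpreter loop for a toy "x86-like"
# instruction set, illustrating why pure software emulation is slow --
# each guest instruction goes through fetch, decode, dispatch, and
# execute steps in host software.

def emulate(program):
    """Interpret a list of (mnemonic, operands...) tuples; return registers."""
    regs = {"eax": 0, "ebx": 0, "ecx": 0}
    pc = 0  # guest "instruction pointer"
    while pc < len(program):
        op, *args = program[pc]          # fetch
        if op == "mov":                  # decode + dispatch: one branch per opcode
            regs[args[0]] = args[1]      # (toy: mov takes an immediate value)
        elif op == "add":
            regs[args[0]] += regs[args[1]]
        elif op == "dec":
            regs[args[0]] -= 1
        elif op == "jnz":                # jump to args[1] if register is non-zero
            if regs[args[0]] != 0:
                pc = args[1]
                continue
        elif op == "hlt":
            break
        pc += 1
    return regs

# Compute 3 + 4, then count ecx down from 2 to 0.
program = [
    ("mov", "eax", 3),
    ("mov", "ebx", 4),
    ("add", "eax", "ebx"),   # eax = 7
    ("mov", "ecx", 2),
    ("dec", "ecx"),
    ("jnz", "ecx", 4),       # loop back to the dec until ecx == 0
    ("hlt",),
]
print(emulate(program)["eax"])  # -> 7
```

Every one of those toy guest instructions costs dozens of host operations here, which is the same overhead, in miniature, that makes a fully emulated Windows boot take minutes rather than seconds.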
Source: Parallels

19 Comments on Apple Silicon Macs Gain x86 Emulation Capability, Run x86 Windows Apps on macOS

#1
Acuity
Personally, I'm still not very convinced by ARM processors. AMD's new x86 chips consume very little power and match the performance of top-of-the-range ARM parts despite being built on older production processes, so I think they would also perform better on the same production process.
What do you think?
Posted on Reply
#2
R0H1T
AleksandarK: Parallels has announced the introduction of x86 emulation support in Parallels Desktop 20.2.0 for Apple Silicon Macs.
Is this really new? I've been running x86 ARM Win11 on Parallels & VMware Fusion for close to a year now! So what's changed suddenly :wtf:

Wrong version, just ignore :ohwell:
Posted on Reply
#3
Vincero
Technically, macOS isn't running the apps - they are running in a VM.
Running in macOS would be like running apps on Linux using WINE support - this is the equivalent of using QEMU.

Also, technically, 'legacy' software (from the macOS ecosystem) is already supported via Apple's own binary translation engine (as well as special ISA bits in Apple's ARM implementation) - software for other Intel-based OSes isn't technically 'legacy' software.
Posted on Reply
#4
Daven
Acuity: Personally, I'm still not very convinced by ARM processors. AMD's new x86 chips consume very little power and match the performance of top-of-the-range ARM parts despite being built on older production processes, so I think they would also perform better on the same production process.
What do you think?
Your smartphone begs to differ. It’s all about power scalability.
Posted on Reply
#5
Vincero
Daven: Your smartphone begs to differ.
Heh... maybe in some alternate reality Siri, Bixby, and Gemini begging for attention would be almost funny if not eventually annoying.

As for ARM vs x86, x86 was never designed to be small cores at low power - even Intel Atom wasn't ARM small.
Whereas ARM was getting design wins in basic cell phones and equipment that needed more than a basic microcontroller in small footprints. ARM's biggest threat once upon a time was probably MIPS - indeed, in the home router space many devices still use MIPS-architecture SoCs.

If x86 had been targeted in such a way, you'd a) have probably seen a push towards it a long time ago, and b) seen different technologies added to the x86 ISA at different times. For example, Intel adding QuickSync was 'cute' compared to the fact that ARM SoCs had been leveraging IP/logic processing blocks for media, comms, etc., for some time, whereas x86 always just brute-force CPU-processed things, because on the desktop (and even to a certain extent on laptops, with comparatively massive batteries compared to a handheld device) one could afford to do that.
For example, when MP4/H.264 video was becoming normal, most PCs at the time had no issues playing back the media (sure, some would work harder than others), whereas trying to watch MP4 video on an ARM device that a) didn't have hardware decode support for it and b) didn't have NEON/SIMD support was tough going.
I suspect that once a small and efficient instruction decoder was in place, continual process-node improvements meant there was little overhead to one arch vs another.

Even Jim Keller basically said a good CPU isn't solely dictated by the ISA arch it uses.
Posted on Reply
#6
Daven
Vincero: Heh... maybe in some alternate reality Siri, Bixby, and Gemini begging for attention would be almost funny if not eventually annoying.

As for ARM vs x86, x86 was never designed to be small cores at low power - even Intel Atom wasn't ARM small.
Whereas ARM was getting design wins in basic cell phones and equipment that needed more than a basic microcontroller in small footprints. ARM's biggest threat once upon a time was probably MIPS - indeed, in the home router space many devices still use MIPS-architecture SoCs.

If x86 had been targeted in such a way, you'd a) have probably seen a push towards it a long time ago, and b) seen different technologies added to the x86 ISA at different times. For example, Intel adding QuickSync was 'cute' compared to the fact that ARM SoCs had been leveraging IP/logic processing blocks for media, comms, etc., for some time, whereas x86 always just brute-force CPU-processed things, because on the desktop (and even to a certain extent on laptops, with comparatively massive batteries compared to a handheld device) one could afford to do that.
For example, when MP4/H.264 video was becoming normal, most PCs at the time had no issues playing back the media (sure, some would work harder than others), whereas trying to watch MP4 video on an ARM device that a) didn't have hardware decode support for it and b) didn't have NEON/SIMD support was tough going.
I suspect that once a small and efficient instruction decoder was in place, continual process-node improvements meant there was little overhead to one arch vs another.

Even Jim Keller basically said a good CPU isn't solely dictated by the ISA arch it uses.
But now we have ARM in the tiniest devices, like watches consuming 100s of mW, up to the most powerful supercomputers in the world consuming 100s of W.

It’s an ARM world and x86 is just living in it.
Posted on Reply
#8
TheinsanegamerN
Much like the Game Dev Toolkit, this sounds really neat, and could create a situation where one could conceivably move all their Windows software and games to macOS. Sadly, I'd imagine that much like the toolkit this won't result in any such change.

Maybe one day we'll finally get Proton on macOS....
Daven: But now we have ARM in the tiniest devices, like watches consuming 100s of mW, up to the most powerful supercomputers in the world consuming 100s of W.
We also have x86 devices like Lunar Lake that can put out ARM MacBook levels of runtime.

While true, there are ARM supercomputers out there, the GPUs in those systems are doing the heavy lifting. In any config where the CPU is expected to do significant work, notice they are still x86?
Daven: It’s an ARM world and x86 is just living in it.
It's actually not. ARM is powerful, but so is x86, and the lowest-end controllers have been shifting from ARM to RISC-V.

Everything has its place.
Posted on Reply
#9
Vincero
Daven: But now we have ARM in the tiniest devices, like watches consuming 100s of mW, up to the most powerful supercomputers in the world consuming 100s of W.

It’s an ARM world and x86 is just living in it.
Again.... make an arch that scales down to the smallest microcontroller level and you can do that. Displacing MIPS and things like simple PIC units at tiny computing levels is how you make that happen.
Once upon a time you'd have expansion cards with Zilog or MIPS-based microcontrollers doing the work - e.g. a Z80 CPU hanging out on a SCSI controller card that would have been sitting in an Intel server/workstation.

The original Ageia PhysX cards were just MIPS CPUs. Could have just as easily been ARM based or even Power based... guessing there was some technical reason they didn't go that route.

Now every SSD is pretty much using some ARM CPU core to handle the drive functions.
Posted on Reply
#10
Daven
TheinsanegamerN: While true, there are ARM supercomputers out there, the GPUs in those systems are doing the heavy lifting. In any config where the CPU is expected to do significant work, notice they are still x86?
The 6th fastest supercomputer in the world is 100% ARM (no GPUs).
Posted on Reply
#11
bug
Everybody and their grandmother can run x86 in a VM. That you couldn't do it on a Mac is a shortcoming; no credit is due now that they're starting to move off their butts.

Just for kicks, you get to pay for that privilege, too. Parallels is not free. You already own it and want to use it on a new OS version? You're in luck, you get to pay for an upgrade, too.
Posted on Reply
#12
Daven
bug: Everybody and their grandmother can run x86 in a VM. That you couldn't do it on a Mac is a shortcoming; no credit is due now that they're starting to move off their butts.

Just for kicks, you get to pay for that privilege, too. Parallels is not free. You already own it and want to use it on a new OS version? You're in luck, you get to pay for an upgrade, too.
You could already run Mac OS x86 apps on Apple Silicon from the get go. Apple does not care too much about ensuring that x86 apps in a Windows virtual environment run under Mac OS any more than Microsoft cares about running ARM apps in a Mac OS virtual environment under Windows. That's what third-party developers like Parallels are for. They are a much smaller company so it takes some time to code this kind of emulation.
Posted on Reply
#13
igormp
Acuity: Personally, I'm still not very convinced by ARM processors. AMD's new x86 chips consume very little power and match the performance of top-of-the-range ARM parts despite being built on older production processes, so I think they would also perform better on the same production process.
What do you think?
ARM is just an ISA, just like x86. What really matters is how good a microarchitecture you have, the node it's fabbed on, and the overall package it's built into.
Apple not only did a really good microarchitecture on a really good (and expensive) node, but also built a great package around it when it comes to the memory subsystem, co-processors, and whatnot. They likely could have achieved something really similar with x86 or even RISC-V (assuming they also managed to impose good software support for the latter).

AMD CPUs have a really great microarchitecture, an okay node by today's standards (they're always lagging 1 or 2 nodes behind Apple), and a meh package. Strix Point is a great monolithic design, but on the usual boring 128-bit memory subsystem. Strix Halo finally went for an improved memory subsystem, but opted for a more desktop-like approach with its chiplets, re-using the desktop/EPYC CCDs with an impressive new IOD.

Intel's Lunar Lake was a really impressive step in the right direction, approaching the base M-chip lineup, but too bad the design was a one-off thing from Intel. Let's see if Panther Lake can bring something similarly impressive that also manages to scale.

Other ARM designs (like the ones from Qcom) are kinda meh. They are good, but not impressive by any means. Some server offerings are also good, but not as brilliant as what Apple has done.
Vincero: Whereas ARM was getting design wins in basic cell phones and equipment that needed more than a basic microcontroller in small footprints. ARM's biggest threat once upon a time was probably MIPS - indeed, in the home router space many devices still use MIPS-architecture SoCs.
MIPS is almost dead - no new designs, and most SoCs are legacy designs. ARM managed to take quite a chunk of the new networking space, but now even x86 is in there, with Intel managing a really nice low-footprint design at an absurdly low cost.
RISC-V is also getting some traction at the lower end of embedded systems, but that'd be closer to the MCU space.
Vincero: b) didn't have NEON/SIMD support was tough going.
*cries in atrix 4g*
That Tegra CPU was really nice, but the lack of NEON made it a pain not long after I got it :(
TheinsanegamerN: We also have x86 devices like Lunar Lake that can put out ARM MacBook levels of runtime.
Lunar Lake's problem (IMO) is that it doesn't scale :(
There isn't, nor will there be, a beefier variant of it, which is a bummer since its design was really cool.
TheinsanegamerN: While true, there are ARM supercomputers out there, the GPUs in those systems are doing the heavy lifting. In any config where the CPU is expected to do significant work, notice they are still x86?
More of a middle ground: ARM and x86 servers can live happily together as options. Where I work, we have both ARM and x86 nodes in our K8s clusters for various tasks, like running simple backends, databases, and even some inference stuff (some of which we run without GPUs).
I don't get why people say one thing needs to die for the other to prevail; both ISAs can just co-exist and we can choose which one we want at any given time, especially nowadays when cross-compiling has become somewhat easy with modern tooling and you can add a new ISA to your build farms without much headache.
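As a small illustration of the co-existing-ISAs point, portable build/deploy scripts usually just normalize the host's reported machine type before choosing an artifact. The alias table and helper below are an illustrative sketch (the normalized names follow the style Go and container registries use), not any particular project's API:

```python
import platform

# Common aliases different OSes report for the same ISA; the normalized
# names ("amd64"/"arm64") follow the Go/container-registry convention.
# This mapping is illustrative, not exhaustive.
_ALIASES = {
    "x86_64": "amd64", "amd64": "amd64",
    "aarch64": "arm64", "arm64": "arm64",
}

def normalized_arch(machine=None):
    """Map a raw platform.machine() string to a registry-style arch name."""
    machine = (machine or platform.machine()).lower()
    return _ALIASES.get(machine, machine)  # pass unknown arches through

# A deploy script can then fetch the matching build artifact, e.g.
# "myapp-linux-" + normalized_arch() + ".tar.gz"
print(normalized_arch("x86_64"))   # -> amd64
print(normalized_arch("aarch64"))  # -> arm64
```

With a shim like this, the same script runs unchanged on x86 and ARM nodes in the same cluster, which is roughly what modern multi-arch build tooling does under the hood.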
Vincero: guessing there was some technical reason they didn't go that route.
Eh, I'd say it was either more of a pricing issue, or just something the folks working on it were used to. I did many projects with AVR back then because I had done a similar thing with an Arduino, and scaling up from there was easier than bolting on a new toolchain. It's a similar excuse to the one I use for my personal preference for STMicro stuff.
Vincero: Now every SSD is pretty much using some ARM CPU core to handle the drive functions.
Or RISC-V :p
Posted on Reply
#14
Vincero
igormp: Eh, I'd say it was either more of a pricing issue, or just something the folks working on it were used to. I did many projects with AVR back then because I had done a similar thing with an Arduino, and scaling up from there was easier than bolting on a new toolchain. It's a similar excuse to the one I use for my personal preference for STMicro stuff.
Eh - I'd like to assume that on balance there was some technical reason to choose it (even if that was because licensing costs or the purchase cost of the SoC IP were dirt cheap - at the end of the day, engineering to ensure a low BoM is a technical thing), but I acknowledge the 'go with what you know' mantra (or that developers are lazy) could be equally applicable.

Yeah, RISC-V might displace ARM. And in many ways it's interesting that RISC-V went straight in at the ground floor - yeah, there are demonstrated computer CPU implementations, but a lot of the actual usage out there has been at the tiny-device level... it's almost as if low-cost CPU cores for small commodity uses, without some additional/excessive licensing cost applied, are an attractive thing... go figure.
That said, the problem is that, for the moment, a lot of the controllers in the marketplace use ARM, and will probably continue to until the vendors a) have time to design/implement a RISC-V design that meets their requirements, and b) get through all the validation.
Moving to another ISA is an additional cost and, at the moment, the SSD space is a very cost-sensitive market. I can't imagine a vendor not having some firmware issues in the field, especially as validation seems a bit hit-and-miss with some vendors.

It would be more telling if, say, an Apple device had a microcontroller that uses RISC-V instead of ARM... be it a speaker, part of a HomeKit product, or something.
Posted on Reply
#15
igormp
Vincero: Eh - I'd like to assume that on balance there was some technical reason to choose it (even if that was because licensing costs or the purchase cost of the SoC IP were dirt cheap - at the end of the day, engineering to ensure a low BoM is a technical thing), but I acknowledge the 'go with what you know' mantra (or that developers are lazy) could be equally applicable.
Yeah, from this PoV it's still a technical thing anyway. And developer/engineering time is still a cost (and a high one at that), so going with what you know does lower this cost by quite a lot.
Vincero: Moving to another ISA is additional cost and, at the moment, in the SSD space it's a very cost sensitive market. I can't imagine a vendor not having some firmware issues in the field, especially as validation seems a bit hit and miss with some vendors.
Wasn't WD/Sandisk using RISC-V for some of their stuff?
Vincero: It would be more telling if say an Apple device has a micro-controller that uses RISC-V instead of ARM... be it a speaker or part of a home kit product or something.
There were rumors about Apple planning to do so some years ago, not sure if it managed to reach production or was cancelled:
www.techpowerup.com/298936/report-apple-to-move-a-part-of-its-embedded-cores-to-risc-v-stepping-away-from-arm-isa
Posted on Reply
#16
Vincero
igormp: Wasn't WD/Sandisk using RISC-V for some of their stuff?
No idea, but if they are using a Phison, SMI, or other controller that utilises ARM cores, then the choice has been made for them by purchasing it - ultimately most of the firmware comes down to that as well.
igormp: There were rumors about Apple planning to do so some years ago, not sure if it managed to reach production or was cancelled:
www.techpowerup.com/298936/report-apple-to-move-a-part-of-its-embedded-cores-to-risc-v-stepping-away-from-arm-isa
Due to a lack of fanfare about such a thing, who knows... ultimately the closed eco-system and restricted access to docs will mean it will take a while to ever find out.
Posted on Reply
#17
Redwoodz
Daven: The 6th fastest supercomputer in the world is 100% ARM (no GPUs).
6th. Not first. AMD CPUs power the most powerful supercomputer in the world. So come again and explain to me why ARM is better?
Posted on Reply
#18
Tek-Check
2 to 7 minutes? Lovely...
By the time the system boots, hunter-gatherers would have found their food.
Posted on Reply
#19
igormp
Vincero: No idea, but if they are using a Phison, SMI, or other controller that utilises ARM cores, then the choice has been made for them by purchasing it - ultimately most of the firmware comes down to that as well.
They do have some products that use their in-house controllers.
Posted on Reply