Wednesday, April 8th 2020

x86 Lacks Innovation, Arm is Catching up. Enough to Replace the Giant?

Intel's x86 processor architecture has been the dominant CPU instruction set for decades, ever since IBM chose the Intel 8088, a variant of the 8086, for its first Personal Computer. Later, in 2006, Apple replaced the PowerPC-based processors in its Macintosh computers with Intel chips, too. From that point on, x86 was effectively the only platform on which the mass market used and developed its software. While mobile phones and embedded devices are mostly Arm today, x86 remains the dominant ISA (Instruction Set Architecture) for desktop computers, with both Intel and AMD producing processors for it. Those processors go into millions of PCs that are used every day. Today I would like to share my thoughts on how the x86 platform might decline and eventually give way to the RISC-based Arm architecture.

Both AMD and Intel as producers, and millions of companies as consumers, have invested heavily in the x86 architecture, so why would x86 ever go extinct if "it just works"? The answer is that it doesn't just work.
Comparing x86 to Arm
The x86 architecture is massive, with more than a thousand instructions, some of them very complex. This approach is called Complex Instruction Set Computing (CISC). Internally, these instructions are split into micro-ops, which further complicates processor design. Arm's RISC (Reduced Instruction Set Computing) philosophy is much simpler, and intentionally so. The design goal is to build simple designs that are easy to manage, with a focus on power efficiency, too. If you want to learn more, I would recommend reading this. It is a simple explanation of the differences and of the design goals each approach targets. However, today this comparison is becoming pointless as both design approaches borrow from each other and adopt the best parts of the other. Neither architecture is static; both are constantly evolving. For example, Intel invented the original x86, but AMD later added support for 64-bit computing. Various extensions like MMX, SSE, AVX and virtualization have addressed specific requirements to keep the architecture modern and performant. On the Arm side, things have progressed, too: 64-bit support and floating-point math were added, as were SIMD multimedia instructions and crypto acceleration.
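The load-store split is the easiest way to see the philosophical difference in practice. Below is a minimal, illustrative sketch in C; the instruction sequences in the comments are simplified by hand to show the idea, not exact compiler output, and the register choices are only an assumption about how a typical compiler would lower this statement.

```c
#include <stdint.h>

/* The same C statement, lowered two ways (hand-written, simplified):
 *
 * x86-64 (CISC-style): a single instruction may read, modify and write memory:
 *     add qword ptr [rdi], rsi
 * Internally the core still cracks this into load / add / store micro-ops.
 *
 * AArch64 (RISC, load-store): memory is touched only by explicit loads and stores:
 *     ldr x2, [x0]
 *     add x2, x2, x1
 *     str x2, [x0]
 */
void add_to_counter(int64_t *counter, int64_t delta)
{
    *counter += delta;
}
```

Either way the work ends up as roughly the same micro-operations; the difference is whether the ISA or the hardware front-end does the splitting.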

Licensing
Having been developed by Intel, the x86 ISA is the property of Intel Corporation. Companies such as AMD and VIA have to sign a licensing agreement with Intel, for an upfront fee, to use the ISA at all. Because Intel controls who can use its technology, it decides who gets to build an x86 processor, and it obviously wants as little competition as possible. However, another company comes into play here. Around 1999, AMD developed an extension to x86 called x86-64, which enables the 64-bit computing capabilities we all use in our computers today. A few years later the first 64-bit x86 processors were released and took the market by storm, with both Intel and AMD using the exact same x86-64 extensions for compatibility. This means that Intel has to license the 64-bit extension from AMD, while Intel licenses the base x86 spec to AMD. This is the famous "cross-licensing agreement" in which AMD and Intel give each other access to their technology, because it would not be possible to build a modern x86 CPU without both.

Arm's licensing model, on the other hand, is completely different. Arm will allow anyone to use its ISA, as long as that company pays a (very modest) licensing cost. There is an upfront fee the licensee pays to gain a ton of documentation and the rights to design a processor based on the Arm ISA. Once the final product ships to customers, Arm charges a small royalty on every chip sold. The licensing agreement is very flexible, as companies can either design their cores from scratch or use predefined IP blocks available from Arm.

Software Support
The x86 architecture is today's de facto standard for high-performance applications—every developer creates software for it, and they have to if they want to sell it. In the open-source world things are similar, but thanks to the openness of that ecosystem, many developers embrace alternative architectures, too. Popular Linux distributions have added native support for Arm, which means that if you want to run that platform you won't have to compile every piece of software yourself; you are free to install ready-to-use binary packages, just like on x86. Microsoft only recently started supporting Arm seriously with its Windows-on-Arm effort, which aims to bring Arm-based devices to the hands of millions of consumers. Microsoft had already experimented with Windows RT, an edition of Windows 8 for Arm CPUs, and Windows 10 on ARM is its successor.
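As a small, hedged illustration of what "supporting another architecture" typically means for portable code: most C source recompiles unchanged, and only ISA-specific paths (hand-written SIMD or assembly) need guards like the ones below. The preprocessor macros are the standard ones defined by GCC/Clang and MSVC; the program itself is just a sketch.

```c
#include <stdio.h>

int main(void)
{
    /* Compile-time architecture checks: the portable C stays the same,
     * only ISA-specific code paths (e.g. SSE vs. NEON intrinsics) differ. */
#if defined(__x86_64__) || defined(_M_X64)
    puts("Built for x86-64");
#elif defined(__aarch64__) || defined(_M_ARM64)
    puts("Built for 64-bit Arm (AArch64)");
#else
    puts("Built for some other architecture");
#endif
    return 0;
}
```

On an x86 Linux host the same file can also be cross-compiled for Arm with a cross toolchain such as aarch64-linux-gnu-gcc, which is one way ready-made Arm binary packages get built.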

Performance
The Arm architecture is most popular in low-power embedded and portable devices, where its energy-efficient design wins. That is why high performance was a problem until recently. For example, Cavium (now part of Marvell Technology Group) started out with its first-generation ThunderX Arm server designs in 2014. Those weren't nearly as powerful as the x86 alternatives; however, they gave buyers of server CPUs a sign: Arm processors are here. Today Marvell is shipping ThunderX2 processors that are very powerful and offer performance comparable to x86 alternatives (Broadwell and Skylake level), depending on the workload of course. Next-generation ThunderX3 processors are on their way this year. Another company doing processor design is Ampere Computing, which just introduced its Altra CPUs; these should be very powerful as well.
What is their secret sauce? The base of every core is Arm's Neoverse N1 server core, designed to give the best possible performance. The folks over at AnandTech have tested Amazon's Graviton2 design, which uses these Neoverse N1 cores, and came to an amazing conclusion: the chip is incredibly fast and competes directly with Intel. Something unimaginable a few years ago. Today we already have the performance needed to compete with Intel and AMD offerings, but you might wonder why it matters so much when there are already options in the form of Xeon and EPYC CPUs. It does matter: it creates competition, and competition is good for everyone. Cloud providers are looking into deploying these processors as they promise much better performance per dollar and higher power efficiency—power cost is one of the largest expenses for these companies.
Arm Neoverse
Arm isn't sitting idle; they are doing a lot of R&D on the Neoverse ecosystem, with next-generation cores almost ready. Intel's innovation has been stagnant and, while AMD has caught up and started to outrun them, that alone is not enough to keep x86 safe from the joint effort of Arm and startup companies that are gathering incredible talent. Just take a look at Nuvia Inc., which is bringing some of the best CPU architects in the world together: Gerard Williams III, Manu Gulati and John Bruno are all well-known names in the industry, and they are leading a company that promises to beat everything with its CPUs' performance. You can call these "just claims", but take a look at products like Apple's A13 SoC. Its performance in some benchmarks is comparable to AMD's Zen 2 cores and Intel's Skylake, showing how far the Arm ecosystem has come and that it has the potential to beat x86 at its own game.

The performance-per-Watt disparity between Arm and x86 defines the fiefdoms of the two. Arm chips offer high performance per Watt in smartphone and tablet form factors, where Intel failed to make a dent with its x86-based "Medfield" SoCs. Intel, on the other hand, consumes a lot more power to get a lot more work done at larger form factors. It's like comparing a high-speed railway locomotive to a Tesla Model X: both do 200 km/h, but the former pulls in a lot more power and transports a lot more people. Recent attempts at scaling Arm to an enterprise platform have met with limited success. A test server based on a 64-core Cavium ThunderX2 pulls 800 Watts off the wall, which isn't much different from high core-count Xeons. At the very least, it doesn't justify the cost for enterprise customers to re-tool their infrastructure around Arm. Enterprise Linux distributions like SUSE or RHEL haven't invested much in Arm-based servers (beyond microservers), and Microsoft has no Windows Server for Arm.

Apple & Microsoft
If Apple's plan to replace Intel x86 CPUs in its products materializes, then x86 will have lost one of its bigger customers. Apple's design teams have proven over the years that they can design some really good cores; the Ax lineup of processors (A11, A12 and most recently A13) is testament to that. The question remains, however, how well they can scale such a design and how quickly they can adapt the ecosystem for it. With Apple having a tight grip on its Mac App Store, it wouldn't be too difficult for them to force developers to ship an Arm-compatible binary, too, if they want to keep their product on the store.

On the Microsoft Windows side, things are different. There is no centralized Store—Microsoft has tried, and failed. Plenty of legacy software exists that is developed for x86 only, and even major developers of Windows software are currently not providing Arm binaries. For example, Adobe's Creative Suite, the backbone of the creative industry, is x86 only. Game developers are busy enough learning DirectX 12 or Vulkan; they sure don't want to start developing titles with Arm support, too—in addition to Xbox and PlayStation. An exception is the Microsoft Office suite, which is available for Windows RT and is fully functional on that platform. A huge percentage of Windows users are tied to their software stack for either work or entertainment, so the whole software development industry would need to pay more attention to Arm and offer its software on that platform as well. However, that seems impossible for now. Besides Microsoft's own Edge, there isn't even a third-party web browser available: Firefox is in beta, and Google's Chrome has seen some development but no public release. That's probably why Microsoft went with the emulation route, unlike Apple. According to Microsoft, applications compiled for the Windows platform can run "unmodified, with good performance and a seamless user experience". This emulation does not support 64-bit applications at this time. Microsoft's Universal Windows Platform (UWP) "Store" apps can easily be ported to run on Arm, because the API was designed for that from the ground up.
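To make the emulation point concrete, here is a minimal sketch (my own example, not Microsoft's) of how an x86 Windows binary can ask whether it is actually running emulated on an Arm64 machine. It relies on the IsWow64Process2 API available since Windows 10 version 1709; treat it as an illustration under that assumption rather than production code.

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    USHORT processMachine = 0, nativeMachine = 0;

    /* Reports the machine type the process was built for and the
     * machine type of the underlying hardware. */
    if (IsWow64Process2(GetCurrentProcess(), &processMachine, &nativeMachine)) {
        if (nativeMachine == IMAGE_FILE_MACHINE_ARM64 &&
            processMachine != IMAGE_FILE_MACHINE_UNKNOWN) {
            puts("x86 binary running emulated on Windows on Arm");
        } else {
            puts("Running natively for this machine");
        }
    } else {
        puts("IsWow64Process2 query failed");
    }
    return 0;
}
```

A native Arm64 build of the same program would simply report that it runs natively, since the process is then not under WOW64 emulation at all.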

Server & Enterprise
The server market is important for x86—it has the best margins, high volume, and is growing fast thanks to cloud computing. Historically, Intel has held more than 95% of server shipments with its Xeon lineup of CPUs while AMD occupied the rest; Arm really played no role here. Recently AMD started production of EPYC processors that deliver good performance, run power-efficiently and are priced well, making a big comeback and gnawing away at Intel's market share. Most of the codebases in that sector should be able to run on Arm, and even supercomputers can use the Arm ISA; the biggest example is the Fugaku pre-exascale supercomputer. If vendors keep doing custom Arm CPU designs like these, they could eventually make x86 a thing of the past.

Conclusion
Arm-based processors are lower-cost than Intel and AMD based solutions while offering comparable performance and consuming less energy. At least, that's the promise. I think servers are the first front where x86 will slowly fade away, with consumer products second, given that Apple is pursuing custom chips and Microsoft is already offering Arm-based laptops.

On the other hand, eulogies of x86 tend to be cyclical. Just when it appears that Arm has achieved enough performance per Watt to challenge Intel in the ultra-compact client-computing segments, Intel pushes back. Lakefield is an ambitious effort by Intel to take on Arm by combining high-efficiency and high-performance cores onto a single chip, along with packaging innovations relevant to ultra-portables. When it comes out, Lakefield could halt Arm in its tracks as it seeks out high-volume client-computing segments such as Apple's MacBooks. Lakefield has the potential to make Apple second-guess itself. It's very likely that Apple's forward-looking decisions were the main reason Intel sat down to design it.

So far, the Arm ISA is dominant in the mobile space. Phones manufactured by Samsung, Apple, Huawei and many more feature an Arm-based CPU in their processors. Intel tried to get into the mobile space with its x86 CPUs but failed due to their inefficiency; the adoption rate was low, and some manufacturers like Apple preferred to do custom designs. However, SoftBank didn't pay $31 billion to acquire ARM just so it could eke out revenues from licensing the IP to smartphone makers. The architecture is designed for processors of all shapes and sizes. Right now it takes companies with complete control over their product stack, such as Amazon and Apple, to get Arm to the point where it is a viable choice in the desktop and server space. By switching to Arm, vendors could see financial benefits as well. It is reported that Apple could see a reduction in processor prices of anywhere from 40% to 60% by going custom Arm, and Amazon offers Graviton2-based instances that are lower-priced than comparable Xeon or EPYC based solutions. Of course, complete control of both hardware and software comes with its own benefits, as a vendor can implement any feature its users might need without having to hope that a third party will implement it. A custom design does carry some added upfront development costs; however, the vendor is later rewarded with a lower cost per processor.

217 Comments on x86 Lacks Innovation, Arm is Catching up. Enough to Replace the Giant?

#1
yeeeeman
What could they say.
Arm always wanted to get into High Performance computing, whereas x86 manufacturers always wanted to get into ultra low power devices. They never quite made it, because they develop optimal tools for completely different scenarios.
#2
Vayra86
Low hanging fruit and a clean slate. That is why ARM is so promising.

We will see how long that lasts. What I do know is that it will last long enough for everyone in the market to adjust. After all, that is how this game really works. If it's not economically feasible it's not happening, and on top of that, there is a massive software chicken-and-egg problem to solve as well. This will be a very slow movement, and it has already started. Decades, easily.
#3
phanbuey
But isn't the 'lack of innovation' the whole reason it hasn't been replaced?

As is stated in the article - software is really the key driver of this, specifically compatibility with mission critical applications. Once there is a Windows Server ARM with a Microsoft SQL Enterprise ARM then we can talk about it being replaced... until then the closest it will get to PC will be in tablets pretending to be laptops.
#4
londiste
Vayra86: Low hanging fruit and a clean slate. That is why ARM is so promising.
This. Once ARM follows much or all of what x86 CPUs have been doing, they are more than likely to end up in a very similar place in terms of all parameters.
#5
Frick
Fishfaced Nincompoop
ARM will never be as universal as x86 (not anytime soon anyway) one reason being the giant pile of old but extremely necessary software lying around. Control systems for many things, medical stuff, industrial stuff, and so on and so forth.
#6
Vya Domus
Innovation ?

Come on, let's be real, ARM simply took existing features and techniques meant to increase performance and adapted them to low power designs; their ideas didn't come entirely out of thin air. Thing is though, most things that you used to find in a typical x86 CPU are now present in ARM cores as well, and they are running out of things to pluck out of x86 designs. Whether they've really got innovation of their own remains to be seen.
#7
ARF
You had to explain about IBM's PowerPC and Sony's PlayStation 3 Cell.

Cell is a 2005-2006 technology and yet its raw performance is around Intel's current processors.
#8
lexluthermiester
ARF: Cell is a 2005-2006 technology and yet its raw performance is around Intel's current processors.
If you really believe that, I have a bridge in Brooklyn I'd like to sell you, for cheap.
#9
Vya Domus
ARF: Cell is a 2005-2006 technology and yet its raw performance is around Intel's current processors.
Cell could do about 200 Gflops; today's high core-count CPUs easily reach 1.5+ Tflops. Also, Cell achieved that at the expense of simplified floating-point units that didn't have as many features, not to mention that the processor itself was basically abysmal at everything that wasn't SIMD FP32.
#10
Flanker
ARF: You had to explain about IBM's PowerPC and Sony's PlayStation 3 Cell.

Cell is a 2005-2006 technology and yet its raw performance is around Intel's current processors.
It is people like you that help me get over my depressive episodes.
#11
FordGT90Concept
"I go fast!1!11!1!"
Execution units are RISC. Skylake has 8 execution units (three dedicated to memory operations, one dedicated to calculating addresses):


The fundamental difference between x86 and ARM is that ARM is load-store and x86 isn't. x86 does the implied memory operations as part of the instruction. x86 is far more versatile because of that: it can take high-level concepts (instructions) and cook them into rapid results by maximizing load across the execution units. ARM has taken a similar approach but...there's that memory micromanagement required of the compiler/developer...

Everything is RISC with a CISC wrapper these days.
#14
ARF
Vya Domus: What can I say, clueless lad. As is everyone that believes this and who doesn't know anything about these things.
This is not true. You have the majority of PC computers these days still ultra slow dual and quad cores with HDD. Crap as hell.
#15
Vya Domus
ARF: This is not true. You have the majority of PC computers these days still ultra slow dual and quad cores with HDD. Crap as hell.
And what's that supposed to mean ?
#16
notb
ARF: :laugh:

Guerrilla dev: PS3's Cell CPU is by far stronger than new Intel CPUs
Please, don't kidnap this thread with a flood of links and graphs like you usually do. It's mildly interesting. Go ruin some gaming discussion.
#17
londiste
ARF: Guerrilla dev: PS3's Cell CPU is by far stronger than new Intel CPUs
www.tweaktown.com/news/69167/guerrilla-dev-ps3s-cell-cpu-far-stronger-new-intel-cpus/index.html
Cell is ~200 SP GFLOPS, largely in theory. SPEs are notorious for not being the easiest thing to code for. SPEs are more coprocessors than cores - very execution focused, no branch prediction hardware.
200 SP GFLOPS (in something like SGEMM which is also pretty much a theoretical number) is roughly what 4-core CPUs today are able to show.

So, either the guy was wrong or more likely was misquoted.
Cell was either more powerful or on par in theoretical performance with best mainstream x86 CPUs at the time. These mainstream x86 CPUs were also 4-core at that time, by the way.
#18
ARF
Vya Domus: And what's that supposed to mean ?
It means that x86 is a power hog unsuitable for modern computing. You have to move on to much leaner, meaner and energy efficient technology.

Cell works only when it's needed, while x86 works and sometimes goes idle when unused but still terrible efficiency by any means.
#19
Vya Domus
ARF: It means that x86 is a power hog unsuitable for modern computing. You have to move on to much leaner, meaner and energy efficient technology.

Cell works only when it's needed, while x86 works and sometimes goes idle when unused but still terrible efficiency by any means.
8 FMA per clock cycle * 4 cores * an average clock speed of 3.5 GHz = 224 Gflops (counting each FMA as two floating-point operations), within something like 45 W, which is already faster and more power efficient on a modern node than Cell could have ever been. And it can do a lot more in general, efficiently.
#20
ARF
Vya Domus: 8 FMA per clock cycle * 4 cores * an average clock speed of 3.5 GHz = 224 Gflops within something like 45 W, which is already faster and more power efficient on a modern node than Cell could have ever been. And can do a lot more in general, efficiently.
:confused:

Cell can be moved to 7nm, add more units and it could become something extremely powerful.

Remember that ARM's SoCs at 2-watt are 7 to 8 times faster than Intel's Atoms at 2-watt.
#21
FordGT90Concept
"I go fast!1!11!1!"
Cell was a semi-custom product created by IBM at the behest of Sony. POWER9 is the latest iteration with POWER10 coming soon.
#22
Vya Domus
ARF: :confused:

Cell can be moved to 7nm, add more units and it could become something extremely powerful
Cell can be moved to anything you'd like and it would still be slower across the board no matter how many units you add. Its SIMD instructions are only 128-bit wide, there's no branch predictor, no out-of-order execution. Cell is a design from 2006, for Christ's sake. There is a reason they were able to fit so much compute back then: they basically did it at the expense of everything else.

You just never want to give up on these blatantly wrong beliefs do you ?
#23
FordGT90Concept
"I go fast!1!11!1!"
Cell was a swing...and a miss. Most PS3 games did not venture beyond the two threads exposed by the PowerPC cores in the Cell processor. You can lead a horse (developer) to water (extra compute resources) but you can't make him drink (it's extra work).

Think of it this way: just because you play a game on a 128-thread processor doesn't necessarily mean the game will use even 1/32nd of those resources. Applications (especially games) aren't coded to scale like that because:
a) very few people are playing games on 128-threaded processors.
b) hell to debug.
c) rendering is almost always the chief bottleneck.

Cell suffers the exact same problems...but worse...because SPEs (extra FPUs) are completely foreign to the PPEs (traditional processors)

XBO, PS4, XBSX, and PS5 have 8-core processors. Simpler, easier, and better than Cell...also cheap...and x86 so broad compiler compatibility.

So off topic though...


On topic: as a developer, my problem with ARM is that it isn't part of the familiar Windows ecosystem. :roll:
#24
notb
phanbuey: But isn't the 'lack of innovation' the whole reason it hasn't been replaced?
The reason is: we really don't need it.
x86 scales like no other architecture, is well-known and supported by software.

We need ARM as well - for cheap, ULV, embedded solutions. They're absolutely fine working side-by-side.
But of course both sides will keep trying.

Intel wants to grow and the lucrative mobile, IoT and networking markets are the most obvious directions.
AMD, at least for now, doesn't need that.

As for ARM group: because PCs are so built around x86 software, they'll go for servers first. It's started already.
We'll see how it goes.
#25
efikkan
AleksandarK: The x86 architecture is massive, with more than a thousand instructions, some of them very complex. This approach is called Complex Instruction Set Computing (CISC). Internally, these instructions are split into micro-ops, which further complicates processor design.
Actually, you got it all wrong.
Back in the 80s there were two main arguments in the CISC vs. RISC discussions:
- A smaller instruction set allows higher flexibility in implementation
- Having to support legacy instructions in hardware
Moving to micro-operations solved both of these, thereby eliminating the only real advantages of RISC over CISC. In all x86 CPUs since the early 90s, the CPU front-end decodes the x86 ISA into the microarchitecture's specific ISA, which gives the designer full control over which instructions are prioritized in the hardware implementation, and which can be "simulated" using a combination of other instructions. This way the instructions can be optimized for however many execution units and various resources are present in the CPU, and the whole pipeline doesn't need to support every legacy feature. Every modern x86 microarchitecture is sort of a hybrid of CISC and RISC, with RISC-like micro-operations. Even current ARM microarchitectures have gotten some CISC-like features, including some SIMD and loads of ASIC features, but they still differ from modern "CISC" designs by using a load-store architecture (like FordGT90Concept mentioned), which means that ARM will always require more instructions to do the same work.

x86 is still advancing, and if anything x86 is more held back by software than ARM is. Very little software is compiled to use any ISA features beyond AMD64/SSE2, even though as recently as Sunny Cove Intel added an instruction to speed up memory copying. AVX-512 is also shaping up to become very flexible compared to previous iterations; hopefully AMD will support it soon. Further down the line, Intel is researching "threadlets", which have the potential for massive performance gains, but yet again are more in the "CISC" direction than "RISC".
Vayra86: Low hanging fruit and a clean slate. That is why ARM is so promising.
While ARM has changed a lot over the years, it dates back to ~1985, so I wouldn't call it quite a "clean slate".