Sunday, May 21st 2023

Intel Exploring x86S Architecture, Envisions an Unadulterated 64-bit Future

Intel has published an extensive whitepaper on streamlining its CPU architecture, most notably by focusing on a purely 64-bit specification and consequently dropping legacy 32-bit operating modes (as well as 16-bit ones!). Team Blue's key proposal states: "This whitepaper details the architectural enhancements and modifications that Intel is currently investigating for a 64-bit mode-only architecture referred to as x86S (for simplification). Intel is publishing this paper to solicit feedback from the ecosystem while exploring the benefits of extending the ISA transition to a 64-bit mode-only solution."

The paper provides a bit of background context: "Since its introduction over 20 years ago, the Intel 64 architecture became the dominant operating mode. As an example of this evolution, Microsoft stopped shipping the 32-bit version of their Windows 11 operating system. Intel firmware no longer supports non UEFI64 operating systems natively. 64-bit operating systems are the de facto standard today. They retain the ability to run 32-bit applications but have stopped supporting 16-bit applications natively. With this evolution, Intel believes there are opportunities for simplification in our hardware and software ecosystem."

The paper introduces a small flow diagram: "Certain legacy modes have little utility in modern operating systems besides bootstrapping the CPU into the 64-bit mode. It is worth asking the question, "Could these seldom used elements of the architecture be removed to simplify a 64-bit mode-only architecture?" The architecture proposed in this whitepaper completes the transition to a 64-bit architecture, removing some legacy modes."
Envisioning a Simplified Intel Architecture

How Would a 64-Bit Mode-Only Architecture Work?
Intel 64 architecture designs come out of reset in the same state as the original 8086 and require a series of code transitions to enter 64-bit mode. Once running, these modes are not used in modern applications or operating systems.

An exclusively 64-bit mode architecture will require 64-bit equivalents of technologies that currently run in either real mode or protected mode. For example:
  • Booting CPUs (SIPI) starts in real-address mode today and needs a 64-bit replacement. A direct 64-bit reset state eliminates the several stages of trampoline code to enter 64-bit operation.
  • Today, using 5-level pages requires disabling paging, which requires going back to unpaged legacy mode. In the proposed architecture, it is possible to switch to 5-level paging without leaving a paged mode (a capability probe is sketched below).
These modifications can be implemented with straightforward enhancements to the system architecture, affecting only the operating system.
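On the 5-level paging point, a minimal user-space probe is sketched below, assuming a GCC or Clang toolchain on x86; it asks CPUID for the LA57 flag (57-bit linear addressing), the capability behind 5-level paging. Detecting the capability is the easy part; enabling it is what currently forces a trip back through unpaged legacy mode, the step the x86S proposal would remove.

#include <stdio.h>
#include <cpuid.h>   /* GCC/Clang helper header for the CPUID instruction */

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* CPUID leaf 7, subleaf 0: structured extended feature flags.
       ECX bit 16 reports LA57, the 57-bit linear addressing used by
       5-level paging. */
    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx)) {
        puts("CPUID leaf 7 not supported");
        return 1;
    }

    puts(ecx & (1u << 16) ? "LA57 (5-level paging) supported"
                          : "LA57 not supported");
    return 0;
}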

What Would Be the Benefits of a 64-bit Mode-Only Architecture?
A 64-bit mode-only architecture removes some older appendages of the architecture, reducing the overall complexity of the software and hardware architecture. By exploring a 64-bit mode-only architecture, other changes that are aligned with modern software deployment could be made. These changes include:
  • Using the simplified segmentation model of 64-bit for segmentation support for 32-bit applications, matching what modern operating systems already use.
  • Removing ring 1 and 2 (which are unused by modern software) and obsolete segmentation features like gates.
  • Removing 16-bit addressing support.
  • Eliminating support for ring 3 I/O port accesses (a sketch follows this list).
  • Eliminating string port I/O, which supported an obsolete CPU-driven I/O model.
  • Limiting local interrupt controller (APIC) use to X2APIC and removing legacy 8259 support.
  • Removing some unused operating system mode bits.
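To make the ring 3 port I/O item concrete, here is a minimal sketch of the kind of access that would disappear, assuming x86 Linux with glibc's sys/io.h and root privileges (port 0x80, the traditional POST/diagnostic port, is used purely for illustration):

#include <stdio.h>
#include <sys/io.h>   /* ioperm() and outb() on x86 Linux with glibc */

int main(void)
{
    /* Grant this ring 3 (user-mode) process access to one I/O port,
       0x80; requires root or CAP_SYS_RAWIO. */
    if (ioperm(0x80, 1, 1) != 0) {
        perror("ioperm");
        return 1;
    }

    /* A direct user-mode port write: exactly the kind of ring 3 I/O
       access the x86S proposal would eliminate. */
    outb(0x42, 0x80);

    ioperm(0x80, 1, 0);  /* revoke the access again */
    return 0;
}

Under x86S, port I/O would remain available only to the kernel, which is how mainstream drivers already operate.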
Legacy Operating Systems on 64-Bit Mode-Only Architecture
While running a legacy 64-bit operating system on top of a 64-bit mode-only CPU is not an explicit goal of this effort, the Intel architecture software ecosystem has matured sufficiently, with virtualization products, that a virtualization-based software solution could use virtualization hardware (VMX) to emulate the features required to boot legacy operating systems.
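Checking whether a machine offers that VMX hardware is straightforward from user space; a minimal sketch, again assuming GCC or Clang on x86 (note that firmware can still disable the feature even when the silicon supports it):

#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* CPUID leaf 1: basic feature information. ECX bit 5 reports VMX
       (Intel VT-x), the virtualization hardware a legacy-OS emulation
       layer would rely on. */
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        puts("CPUID leaf 1 not supported");
        return 1;
    }

    puts(ecx & (1u << 5) ? "VMX (VT-x) reported by CPUID"
                         : "VMX not reported");
    return 0;
}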

Detailed Proposal for a 64-Bit Mode-Only Architecture
A proposal for a 64-bit mode-only architecture is available. It embodies the ideas outlined in this white paper. Intel is publishing this specification for the ecosystem to evaluate potential impacts to software.
The webpage introduction only serves as a simple primer on the topic of x86S; more technically-minded folks can take a look at the full whitepaper document (PDF) here.
Sources: Intel Articles, Phoronix

41 Comments on Intel Exploring x86S Architecture, Envisions an Unadulterated 64-bit Future

#26
TheLostSwede
News Editor
trsttte: The PSU gets to be more of what it already is, a simple, dumb and reliable brick.
Except ATX 3.0 PSUs with 12VHPWR made sure they're less dumb, at least with regard to power delivery to the graphics card, if the extra four pins that enable on-demand power delivery are present.
#27
R-T-B
lexluthermiester: That is nonsense. Legacy instructions work fine.
No one is saying they don't work; they take up silicon space that could be used for other things, though.
#28
Dr. Dro
R-T-B: No one is saying they don't work; they take up silicon space that could be used for other things, though.
This, and from my understanding any features removed are either unused or completely trivial to replace on currently existing 64-bit operating systems, with no impact on their existing software catalog's compatibility or performance. Seems like a done deal to me. I'm especially interested in a counter-proposal or agreement on AMD's part.
#29
WhateverAnotherFreakingID
chodaboy19: That is funny because Intel only recently stopped production of their IA-64 in 2021.

en.wikipedia.org/wiki/IA-64#End_of_life:_2021

Also:

en.wikipedia.org/wiki/X86-64#Licensing
Yeah, humans not possessing any form of intellectual collective memory (as opposed to a genetic collective memory carried by the surviving genome) makes our history a disheartening sequence of petty revisions punctuating a monstrous blob of plain ineptitude at mutual comprehension.
#30
Wirko
Denver: I found it around:

"So, it was estimated that the Pentium used 30% of its transistors to support the x86 ISA. Since the x86 ISA and support hardware remained relatively constant, by the Pentium 4 era, x86 support was estimated to account for 10% of the transistor count.

Ars Technica's Jon Stokes touches on this x86-RISC decoding cost in The Pentium: An Architectural History of the World's Most Famous Desktop Processor."
TL;DR but I'll read that one, I promise. However, here's my take: the Northwood Pentium 4 had 30 M logic transistors (55 M - 25 M of L2 cache). It was the last purely 32-bit desktop CPU from Intel. A modern CPU can't possibly need more than 30 M per core for 32-bit compatibility, and that's a naïve upper bound, even with new features that came later, such as VT-* and rings.

Now Intel can remove some 32-bit features, but they can do something else too: optimise the remaining parts for transistor count, regardless of performance drop. No one would blame them; when was the last time that top 32-bit performance mattered? Right, that was when CPUs were two or three times slower.

So in total, they can save maybe... 10 M transistors per core?
#31
LabRat 891
Wirko: My guess is that Intel found security vulnerabilities in 32-bit protected mode and related stuff, or may be expecting to find them in the future, so they will remove functionality that's no longer really needed, just in case.



That makes immense sense.
Since the 20-teens, it seems like there's a new 'vulnerability discovered' every couple of years, and some have reached back to the first few generations of x86 chips (and are "unfixable").
#32
nageme
Denver"So, it was estimated that the Pentium used 30% of its transistors to support the x86 ISA. Since the x86 ISA and support hardware remained relatively constant, by the Pentium 4 era, x86 support was estimated to account for 10% of the transistor count.
Haven't read the article, and I'm not sure what they call "x86 support", but let's use that vague thing for a number. The quoted sentence implies 10% on early-era Pentium 4 (Willamette, 180nm). And on page 2 there it says "Today, x86 support accounts for well under 10% of the transistors on the Pentium 4". The article's from 2004, so let's assume 2004-era Pentium 4 (Prescott, 90nm).

Wikipedia says P4 180nm used 42M transistors, P4 90nm 125M transistors.
42M * 10% = 4M, but let's round up and assume "well under" 10% of 125M is 6M.

I can't find figures for modern Intels, but Zen 4 is 6.5B transistors per CCD (up to two of them) plus 3.4B for the I/O die. In total about 16 billion transistors in the max configuration.

So with a lot of random assumptions:
6M * 16 cores = 96M. 96M / 16B = 0.6%
That's for a current-gen CPU, assuming no multicore/other optimizations, and ignoring cache/logic/whatever density differences.
Dr. Dro: But it's cleaning house, so to speak; this is one of ARM's biggest advantages right now.
Someone should quantify/concretize what that actually means. Cleanup just for the sake of aesthetics isn't useful.
If anything, it seems like modern software development practices are wasting much more potential performance and power than some minimal legacy support hardware. But who knows.
#33
chrcoluk
So if it's just 32-bit OS support, that makes a lot of sense; I did wonder why 32-bit OSes have persisted for so long in distributions.

32-bit apps are a different beast, but after the further explanation it seems that's not affected for now.
#34
Unregistered
Xajel: I like it; 32-bit has become rare to the point that only limited hardware is being used for such systems (embedded CPUs and OSes).

But they'll have to work with at least AMD, IBM, Microsoft and the Linux community for this to work properly; this is not a simple x86 extension anymore like SSE or AVX. To be "simple" as they call it, Intel and AMD must at least agree on basic paths, especially 16-bit and 32-bit virtualization; it won't be simple anymore if it requires separate code paths to address both AMD and Intel systems.

x86 is old and has so many legacy things that it is slow and inefficient compared to modern alternatives (Arm, RISC-V).
People actually building CPUs disagree with you.
#35
SRB151
WhateverAnotherFreakingID: Yeah, humans not possessing any form of intellectual collective memory (as opposed to a genetic collective memory carried by the surviving genome) makes our history a disheartening sequence of petty revisions punctuating a monstrous blob of plain ineptitude at mutual comprehension.
LOL. Yeah, the Itanic was doing great until 2021. Being made doesn't mean it was useful or profitable. HP had their fingers in co-designing this disaster, and moved a lot of big iron based on it when it first came out. Intel wanted to torpedo this long ago, but HP had too much invested and too many suckers locked into it. Intel nearly succeeded until HP won a lawsuit against Oracle to keep the software maintained. Itanic has been a dead chip walking for nearly 10 years. Thanks, HP!
#36
lexluthermiester
R-T-B: they take up silicon space that could be used for other things, though.
Not really. The legacy instruction sets in question take up less than 3% of the average Intel CPU P-core and less than 5% of an E-core (in either case those instructions represent less than 1% of the total die, regardless of model). Keeping them is effortless and doesn't change much. What Intel is pushing for here is to close up "loose ends" in its ISA, because these are all things that can now be done (emulated/simulated) very easily in software. They also fear unknown/undiscovered security vulnerabilities lurking.

So for Xajel to state it's because it's "old" or "inefficient" is just plain wrong and a clear indication of someone who either doesn't understand what's really going on or is just flexing against Intel. Regardless, it's pure moose muffins, putting it bluntly.

For you to say it "takes up silicon space" is less wrong because it is technically correct. However, it's just not anywhere close to the real reason.

I personally have mixed thoughts on this one. On the one hand, who really uses those decades-old legacy instructions these days? But at the same time, it's not hurting much for them to be present, and on the off chance they become useful again (it's happened), engineering them back in might be a challenge.
#37
R-T-B
lexluthermiester: Not really. The legacy instruction sets in question take up less than 3% of the average Intel CPU P-core and less than 5% of an E-core (in either case those instructions represent less than 1% of the total die, regardless of model).
To be quite frank, I'm more confident that Intel has more meaningful data on this than whatever numbers we can guess at.
#38
lexluthermiester
R-T-B: To be quite frank, I'm more confident that Intel has more meaningful data on this than whatever numbers we can guess at.
While true, deducing percentages is not difficult. The instruction sets being removed are from the 386/486/Pentium/Pentium II/Pentium III era. Those dies were made on lithography scales vastly larger than what today's fabs produce. The transistor counts for those CPUs were numbered in the low millions. Today's CPUs are measured in the tens of billions and recently crossed 100 billion. So using simple math, Intel could easily fit a full 386, a full 486, a full Pentium, a full Pentium II AND a full Pentium III and still come in under 30 million transistors, and that includes the original onboard caches. However, Intel is not going to cram a whole set of old cores into modern dies, just the parts that are needed to maintain a compatibility layer. They could throw everything in, but they don't. It gets VERY complicated from there.

So, I was actually being very generous with the percentage estimates above. As I said, Intel is cleaning up its ISA package and eliminating potential hardware-based security risks. Nothing more.
#39
R-T-B
That actually makes a lot of sense, thanks; never thought about it that way.
#40
Wirko
lexluthermiester: While true, deducing percentages is not difficult. The instruction sets being removed are from the 386/486/Pentium/Pentium II/Pentium III era. Those dies were made on lithography scales vastly larger than what today's fabs produce. The transistor counts for those CPUs were numbered in the low millions. Today's CPUs are measured in the tens of billions and recently crossed 100 billion. So using simple math, Intel could easily fit a full 386, a full 486, a full Pentium, a full Pentium II AND a full Pentium III and still come in under 30 million transistors, and that includes the original onboard caches. However, Intel is not going to cram a whole set of old cores into modern dies, just the parts that are needed to maintain a compatibility layer. They could throw everything in, but they don't. It gets VERY complicated from there.
So let's say the 32-bit parts take up 1% or 2% of a P core, or whatever. Even less will be removed because 32-bit code execution ability will remain.

But with the silicon area gained, Intel could cram maybe 0.25 MB more L2 into each core, or enlarge some of the many buffers and small caches that sit in there, with a tangible benefit. Designing CPUs is an iterative process with a set of limitations and tradeoffs: you can increase X by a bit, but then you can't increase Y, and exactly which workloads will benefit from that? Etc. So even a few tenths of a percent is not negligible, neither in surface area nor in performance.
lexluthermiester: As I said, Intel is cleaning up its ISA package and eliminating potential hardware-based security risks. Nothing more.
That's a lot even if there's "nothing more". The 32-bit blocks won't develop and maintain themselves; I'd say 1% of Intel engineers' work is associated with them. Can we agree on this rough estimate? Intel has some very competent bean counters; they surely do wake up when the corporation's net income goes to (brackets).

Engineers are needed to search for security holes, adapt the blocks to new microarchitectures and new nodes and higher clocks*, simulate, optimise for (at least) area, test, and possibly commit errata that eventually make it to the production silicon. And we're talking about tricky features here, such as protection rings, protected mode or virtualisation support, not just "simple" code execution.

* Unless Intel is cheating, which they should be. As I've said before, no one will miss great 32-bit performance in 2023, so the decoders can run at half speed. Slower transistors are often smaller and run cooler, too.
#41
lexluthermiester
Wirko: So let's say the 32-bit parts take up 1% or 2% of a P core, or whatever. Even less will be removed because 32-bit code execution ability will remain.
I think that is a misunderstanding of how CPUs compute. 32-bit code can easily be handled by a 64-bit instruction pipeline as long as it's written/compiled properly. For older code, an OS can be configured to interpret it in a way that is compatible with newer instruction sets. Removing the 16-bit/32-bit pipelines is not trivial, but at the same time, when CPU makers removed 8-bit instruction sets and some 16-bit instructions, 32-bit took over seamlessly because the engineering was done properly. 64-bit will do the same.

So any 32-bit code will continue to run flawlessly.
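A quick way to see that compatibility mode in action on a 64-bit Linux box today, assuming gcc with 32-bit multilib support installed (build the same file twice, with gcc -m32 and gcc -m64):

#include <stdio.h>

int main(void)
{
    /* Prints 4 when built with gcc -m32 and 8 with gcc -m64; both
       binaries run side by side on the same 64-bit kernel, which is
       the compatibility mode that x86S keeps for 32-bit applications. */
    printf("pointer size: %zu bytes\n", sizeof(void *));
    return 0;
}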
Wirko: So even a few tenths of a percent is not negligible
I never said it was. Just stated what I think they're really doing. This is supported by the facts at hand.
Wirko: As I've said before, no one will miss great 32-bit performance in 2023
Nonsense! There are TONS of things that still run 32-bit and NEED to perform well.