Friday, February 2nd 2024

Intel Arrow Lake-S 24 Thread CPU Leaked - Lacks Hyper-Threading & AVX-512 Support

An interesting Intel document leaked out last month. It contained detailed pre-release information covering the company's upcoming 15th Gen Core Arrow Lake-S desktop CPU platform, including a possible best-case 8+16+1 core configuration. Close analysis of the spec sheet turned up a surprise: the next-generation Core processor family could "lack Hyper-Threading (HT) support." The rumor mill had produced similar claims in the past, but the internal technical memo pointed to Arrow Lake shipping its expected eight performance cores without any additional threads enabled via SMT. These specifications could still change, but tipster InstLatX64 has since unearthed an Arrow Lake-S engineering sample: "I spotted (CPUID C0660, 24 threads, 3 GHz, without AVX 512) among the Intel test machines."

The leaker uncovered several pre-launch Meteor Lake SKUs last year; with 14th Gen laptop processors now hitting the market, InstLatX64 has turned his attention to seeking out next-generation parts. Yesterday's Arrow Lake-S find has chins wagging about its 24-thread count, two more than the fanciest Meteor Lake Core Ultra 9 processor. Given the evident lack of Hyper-Threading on the leaked engineering sample, this could be a genuine 24-core configuration. Tom's Hardware reckons that the AVX-512 instruction set could merely be disabled via firmware or motherboard UEFI, but if InstLatX64's claim of "without AVX 512" support does ring true, PC users who demand such workloads are best advised to turn to Ryzen 7040 and 8040 series processors, or (less likely) Team Blue's own 5th Gen Xeon "Emerald Rapids" server CPUs.
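For readers who want to verify such claims on their own hardware: the "without AVX 512" observation boils down to one documented CPUID flag, leaf 7, sub-leaf 0, EBX bit 16 (AVX512F). Below is a minimal C sketch of that probe, in the spirit of what tools like InstLatX64 automate; the file name and output format are invented for the example.

// avx512_check.c - minimal sketch (GCC/Clang, x86-64); file name and
// output format are made up for illustration.
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    // CPUID leaf 7, sub-leaf 0: structured extended feature flags.
    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx)) {
        puts("CPUID leaf 7 not supported");
        return 1;
    }

    // EBX bit 16 is AVX512F (the foundation subset); every other
    // AVX-512 extension implies it, so this one bit is the gatekeeper.
    puts(ebx & (1u << 16) ? "AVX-512F: yes" : "AVX-512F: no");
    return 0;
}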
Sources: InstLatX64, Tom's Hardware, VideoCardz

42 Comments on Intel Arrow Lake-S 24 Thread CPU Leaked - Lacks Hyper-Threading & AVX-512 Support

#26
kondamin
DavenArrow Lake will probably closely align itself with Meteor Lake. Meteor Lake p cores on its Intel 4 node have lower IPC than Raptor Lake p cores on its Intel 7 node. The reason for this specific drop is that processor design is all about transistor real estate. CPUs can have higher IPC on smaller nodes because there is room to add more functional units, cache, etc. Intel is choosing to allocate the extra real estate of smaller nodes to the NPU, better e cores and the iGPU instead.
Wasn't the NPU on its own "tile"?
Posted on Reply
#27
Daven
kondaminWasn't the NPU on its own "tile"?
The whole package must be considered when setting clock speeds and adding features to each 'tile'. Even though the CPU and NPU are on separate tiles, you can't add a bunch of transistors to each independently without blowing the power budget and the cost.
Posted on Reply
#28
Why_Me
trparkyAnd I can't help but think that Intel is really holding back the rest of the industry with these kinds of shenanigans. We could have universal AVX-512 support but we can't because... Intel.
Why is it Intel's fault? E-mail AMD and ask them what the deal is.
Posted on Reply
#29
atomsymbol
ncrsSo... it was Z80's creators that provided support after all. Other operating systems obviously used this and documentation to implement support, but it is the CPU vendor's job to provide the initial support, development environments and documentation.
ChatGPT claims the following about ZX Spectrum ROM development:

"Development was done using cross-development tools, as the ZX Spectrum hardware was not powerful enough for ROM development. This means that the software was written and compiled on a more powerful computer, then transferred to the Spectrum for testing."

"For the development of early 8-bit computers like the ZX Spectrum, it was typical to use a larger, more capable minicomputer or a mainframe."

Thus, from the end-user's perspective of the ZX Spectrum home computer, the Z80 CPU vendor's contributions to the ZX Spectrum are invisible; they are simply not there. This is what I meant in my previous post.

I am only claiming that year 2024 is quite different from year 1980, nothing more.

I disagree with the idea that Intel should be responsible for turning an OS, such as the Windows 11 kernel or the Linux kernel, into an OS that supports Intel's heterogeneous CPUs (ignoring for the moment the fact that Intel doesn't even have access to the Windows 11 kernel source code). I don't understand how you can believe that Intel should be fully responsible for the development of such a software feature. Providing initial enablement for hetero-CPUs, or a prototype OS that proves it can work, is one thing; implementing a fully hetero-CPU-aware OS is a very different thing. Can Intel's software engineers contribute hetero-CPU support to the Linux kernel so that Intel CPUs sell better in the market? Of course they can, but this doesn't imply that Intel is to be blamed for mainstream operating systems not being prepared at all for hetero-CPUs.

The main mistake I think you are making is believing that a physical sample of a hetero-CPU is needed to develop a hetero-CPU-enabled OS. It isn't required. This mistake then leads you to the invalid conclusion that, because only Intel has access to prototypes of its CPUs before others do, Intel is responsible for bringing hetero-CPU support to operating systems. The thing you seem not to understand is that any software developer with an average year-2024 Linux machine can start working on bringing hetero-CPU support to Linux today, without any kind of extra equipment, by pretending that the homo-CPU in the developer's machine is a hetero-CPU. In light of this fact, the claim that Intel is responsible for the debacle of their hetero-CPU (Alder Lake) is absurd.
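To make the "pretending" approach concrete, here is a toy C sketch of how a developer on an ordinary homogeneous Linux box could emulate a hetero-CPU for scheduler experiments: declare an arbitrary subset of cores to be the "AVX-512-capable" ones and confine AVX-512-using threads to that subset with sched_setaffinity(2). The cores-0-to-3 split is invented for the example and reflects no real Intel topology.

// hetero_sim.c - toy sketch: pretend cores 0..3 of a homogeneous Linux
// machine are "AVX-512-capable P-cores" and confine the calling thread
// to them. The split is an assumption made up for this example.
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

static int pretend_has_avx512(int cpu) { return cpu < 4; } // assumption

int main(void)
{
    int ncpus = (int)sysconf(_SC_NPROCESSORS_ONLN);
    cpu_set_t allowed;
    CPU_ZERO(&allowed);

    // A prototype hetero-aware runtime would restrict any thread that
    // executes AVX-512 code to the capable subset, exactly like this:
    for (int cpu = 0; cpu < ncpus; cpu++)
        if (pretend_has_avx512(cpu))
            CPU_SET(cpu, &allowed);

    if (sched_setaffinity(0, sizeof(allowed), &allowed) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("confined to %d pretend AVX-512 cores\n", CPU_COUNT(&allowed));
    return 0;
}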
Posted on Reply
#30
trparky
Why_MeWhy is it Intel's fault? E-mail AMD and ask them what the deal is.
Because AMD has had AVX-512 support since Ryzen 7000. It's Intel that pulled support for it starting with 12th gen. As far as I can see, Intel's the one to blame here.
Posted on Reply
#31
atomsymbol
DavenArrow Lake will probably closely align itself with Meteor Lake. Meteor Lake p cores on its Intel 4 node have lower IPC than Raptor Lake p cores on its Intel 7 node. The reason for this specific drop is that processor design is all about transistor real estate. CPUs can have higher IPC on smaller nodes because there is room to add more functional units, cache, etc.
Just a note: In most situations, the overall utilization of ALUs in a modern CPU is fairly low (often less than 50%). The typical average IPC of most x86 applications is still in the range of 1.0 to 2.0. This means that, in theory, 1 or 2 ALUs of a given type are sufficient, and having 3 or more ALUs of a given type is a waste of silicon. Applications with an IPC of 4.0 are very rare. The main drivers of CPU core performance in recent years have been OoO (out-of-order) logic improvements, larger internal CPU buffers and queues, and branch prediction improvements. ALUs are relatively cheap in terms of silicon area and relatively easy to replicate on a chip; OoO logic isn't cheap and is a much harder problem to crack than ALUs.
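For anyone who wants to check the 1.0-2.0 figure on their own workload: on Linux, "perf stat" reports instructions per cycle directly, and the same counters can be read programmatically. Below is a minimal, hypothetical C sketch using perf_event_open(2); it may require perf_event_paranoid to be lowered, and the loop is just a placeholder workload.

// ipc_probe.c - hypothetical sketch: measure achieved IPC of a loop via
// perf_event_open(2). Linux-only.
#include <linux/perf_event.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <string.h>
#include <stdint.h>
#include <stdio.h>

static int open_counter(uint64_t config)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof attr);
    attr.type = PERF_TYPE_HARDWARE;
    attr.size = sizeof attr;
    attr.config = config;
    attr.disabled = 1;
    attr.exclude_kernel = 1;
    attr.exclude_hv = 1;
    return (int)syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
}

int main(void)
{
    int insn = open_counter(PERF_COUNT_HW_INSTRUCTIONS);
    int cyc  = open_counter(PERF_COUNT_HW_CPU_CYCLES);
    if (insn < 0 || cyc < 0) { perror("perf_event_open"); return 1; }

    ioctl(insn, PERF_EVENT_IOC_ENABLE, 0);
    ioctl(cyc,  PERF_EVENT_IOC_ENABLE, 0);

    volatile uint64_t x = 1;                  // placeholder workload
    for (uint64_t i = 0; i < 100000000ULL; i++)
        x = x * 3 + 1;                        // serial dependency chain

    ioctl(insn, PERF_EVENT_IOC_DISABLE, 0);
    ioctl(cyc,  PERF_EVENT_IOC_DISABLE, 0);

    uint64_t n_insn = 0, n_cyc = 0;
    read(insn, &n_insn, sizeof n_insn);
    read(cyc,  &n_cyc,  sizeof n_cyc);
    printf("IPC = %.2f\n", n_cyc ? (double)n_insn / (double)n_cyc : 0.0);
    return 0;
}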
Posted on Reply
#32
Wirko
atomsymbolJust a note: In most situations, the overall utilization of ALUs in a modern CPU is fairly low (often less than 50%). The typical average IPC of most x86 applications is still in the range of 1.0 to 2.0. This means that, in theory, 1 or 2 ALUs of a given type are sufficient, and having 3 or more ALUs of a given type is a waste of silicon. Applications with an IPC of 4.0 are very rare. The main drivers of CPU core performance in recent years have been OoO (out-of-order) logic improvements, larger internal CPU buffers and queues, and branch prediction improvements. ALUs are relatively cheap in terms of silicon area and relatively easy to replicate on a chip; OoO logic isn't cheap and is a much harder problem to crack than ALUs.
But why is there a large number of execution units in x86 processors? To improve SMT performance?
Posted on Reply
#33
Noyand
DavenArrow Lake will probably closely align itself with Meteor Lake. Meteor Lake p cores on its Intel 4 node have lower IPC than Raptor Lake p cores on its Intel 7 node. The reason for this specific drop is that processor design is all about transistor real estate. CPUs can have higher IPC on smaller nodes because there is room to add more functional units, cache, etc. Intel is choosing to allocate the extra real estate of smaller nodes to the NPU, better e cores and the iGPU instead.
MTL and ARL are not monolithic, though. The CPU unit got its own tile. Raptor Lake is also an arch that wasn't supposed to exist; MTL was two years late. Intel's slides about MTL also mentioned how flexible the tile design allows them to be: the SoC/GPU/CPU tiles can be bigger or smaller depending on which market the SKU is supposed to address. If Arrow Lake is designed to be high performance, they can absolutely make the CPU tile bigger while reducing the others. ARL being a plain die shrink of MTL, while being meant to be a performance SKU, would make this a launch even worse than the first-gen P4. In 2023 Intel wasn't dumb enough to make MTL the replacement for the HX chips on laptops; high-performance laptops are still using RPL. Also worth keeping in mind that Lunar Lake is going to be the spiritual successor of what MTL ultimately became: a laptop-only chip.
www.servethehome.com/intel-disaggregates-client-chips-with-meteor-lake-hc34/


Posted on Reply
#34
efikkan
atomsymbolJust a note: In most situations, the overall utilization of ALUs in a modern CPU is fairly low (often less than 50%). The typical average IPC of most x86 applications is still in the range of 1.0 to 2.0. This means that, in theory, 1 or 2 ALUs of a given type are sufficient, and having 3 or more ALUs of a given type is a waste of silicon. Applications with an IPC of 4.0 are very rare. The main drivers of CPU core performance in recent years have been OoO (out-of-order) logic improvements, larger internal CPU buffers and queues, and branch prediction improvements. ALUs are relatively cheap in terms of silicon area and relatively easy to replicate on a chip; OoO logic isn't cheap and is a much harder problem to crack than ALUs.
Going "deeper"(OoO) has certainly been one of the main performance drivers since the Core 2 family, arguably even long before that. But we must not forget that going "wider"(more execution ports) goes along with it, and along with balancing the right execution units (on the execution ports), power gating and so on, they achieve good utilization of execution ports and all the resources to feed these, even though the individual execution units probably have fairly low utilization rates.
Back with Sandy Bridge, Intel had 3 execution ports for integer or vector operations.
In Haswell they added a fourth with an ALU.
In Sunny Cove (Ice Lake/Rocket Lake) they added more execution units to the fourth port.
In Alder Lake (Golden Cove) they added a fifth execution port for int/vec, this time with an ALU and LEA unit (similar to Haswell), while only 3 execution ports still contain vector units.

Still, there are many minor changes that add up to significant performance gains. In Sunny Cove, for instance, Intel brought significantly faster integer multiplication and division. More such improvements will be possible as they move to more advanced nodes. The ALUs themselves probably can't get much faster (they are down to a few clock cycles anyway), and as you said they are very cheap, but the other units probably can be improved.

So will we see Intel going even wider? Probably, but I don't see them going straight to 8 ALUs, as the scheduling needed to manage them wouldn't be worth it before the rest of the front end can feed them. As you know, at some point you hit diminishing returns (the CPU front ends are already huge), unless something changes on the software side. And I don't just mean the quality of software, but also ISA changes and compiler improvements. There could be a lot of efficiency gains if the cost of mispredictions were reduced (like a partial flush). And I'm sure both companies have a lot coming that I'm not aware of.

BTW, lots of interesting discussions here. :)
WirkoBut why is there a large number of execution units in x86 processors? To improve SMT performance?
No, at least not the way current x86 microarchitectures implement it (currently they only switch between two threads).
Multiple execution ports (each of which can hold multiple execution units) allow what we call instruction-level parallelism (worth reading): whenever the CPU finds multiple calculations that are independent of each other, it might as well execute them in parallel, and there are huge savings whenever prefetching or branching needs a result before continuing (see the sketch at the end of this post).

We actually got this capability very early on. With the 80486 we got pipelining, and already with the following Pentium we got two execution ports. The Pentium Pro/II added out-of-order execution. Even though those implementations were very simple compared to current designs, these concepts have evolved over decades and have been a core part of the performance gains throughout those years.

Current designs from Intel(Golden Cove) have 5 ports for int/vec operations + 7 ports for memory operations.
Zen has a different configuration, but keeps its integer and vector engines separate. If I read the schematics correctly, that's 8 ports for integer and memory operations combined (4 of which have ALUs) and 6 ports for vector operations (4 for calculations, which can be fused together for FMA, and 2 for load/store). So in theory Zen 4 is in a way "wider" than Golden Cove, but this doesn't tell us all the finer details that make up the complete picture; Zen 4 can seemingly only issue 6 operations/clock across those 14 ports.
But if you can take away one lesson today, it's that the utilization of these ports makes up a large part of the performance characteristics of an architecture, and it goes a long way toward explaining why an AMD CPU can win massively in one workload while Intel wins in another. ;)
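To illustrate the instruction-level parallelism point with something runnable, here is a small hypothetical C experiment: the first loop is one long dependency chain, so extra ALUs and ports cannot help; the second does four times the work in four independent chains, which the out-of-order core can overlap across its ports, so wall time typically grows far less than 4x. Exact numbers vary by CPU and compiler.

// ilp_demo.c - hypothetical sketch: one dependency chain vs. four
// independent ones. Compile with -O2; absolute times vary per CPU.
#include <stdio.h>
#include <time.h>

#define N 200000000UL

static double secs(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

int main(void)
{
    struct timespec t0, t1;
    volatile unsigned long sink;

    // One chain: every multiply-add waits on the previous result,
    // so issue width and extra ALUs cannot help (IPC stays low).
    unsigned long a = 1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (unsigned long i = 0; i < N; i++)
        a = a * 3 + 1;
    clock_gettime(CLOCK_MONOTONIC, &t1);
    sink = a;
    printf("1 chain : %.2f s\n", secs(t0, t1));

    // Four independent chains: 4x the instructions, but the OoO core
    // overlaps them across execution ports.
    unsigned long b0 = 1, b1 = 2, b2 = 3, b3 = 4;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (unsigned long i = 0; i < N; i++) {
        b0 = b0 * 3 + 1;
        b1 = b1 * 3 + 1;
        b2 = b2 * 3 + 1;
        b3 = b3 * 3 + 1;
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    sink = b0 ^ b1 ^ b2 ^ b3;
    (void)sink;
    printf("4 chains: %.2f s (4x the work)\n", secs(t0, t1));
    return 0;
}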
Posted on Reply
#35
Jism
Remarkable that HT is stripped. Did they do that to save on power?
Posted on Reply
#36
ncrs
atomsymbolChatGPT claims the following about ZX Spectrum ROM development:
Sorry, that's not a source.
atomsymbolI disagree with the idea that Intel should be responsible for turning an OS, such as the Windows 11 kernel or the Linux kernel, into an OS that supports Intel's heterogeneous CPUs (ignoring for the moment the fact that Intel doesn't even have access to the Windows 11 kernel source code). I don't understand how you can believe that Intel should be fully responsible for the development of such a software feature.
Intel for sure has access and has contributed to the Windows kernel. The term "WinTel" wasn't coined from nothing. That's not belief on my part. Both Intel and Microsoft said multiple times that Intel did the enablement:
The Intel Thread Director team worked closely with Microsoft to enable a seamless experience in Windows 11.
atomsymbolProviding initial enablement for hetero-CPUs, or a prototype OS that proves it can work, is one thing; implementing a fully hetero-CPU-aware OS is a very different thing. Can Intel's software engineers contribute hetero-CPU support to the Linux kernel so that Intel CPUs sell better in the market? Of course they can, but this doesn't imply that Intel is to be blamed for mainstream operating systems not being prepared at all for hetero-CPUs.
They did provide support for every previous Intel-unique CPU feature in Linux, as it is in their best interest to do it.
atomsymbolThe main mistake I think you are making is believing that a physical sample of a hetero-CPU is needed to develop a hetero-CPU-enabled OS. It isn't required. This mistake then leads you to the invalid conclusion that, because only Intel has access to prototypes of its CPUs before others do, Intel is responsible for bringing hetero-CPU support to operating systems. The thing you seem not to understand is that any software developer with an average year-2024 Linux machine can start working on bringing hetero-CPU support to Linux today, without any kind of extra equipment, by pretending that the homo-CPU in the developer's machine is a hetero-CPU. In light of this fact, the claim that Intel is responsible for the debacle of their hetero-CPU (Alder Lake) is absurd.
Simulating hardware is not equivalent to running the real hardware. We can't even fully (as in 100% compatibility) simulate a PC from the '90s. There's no way you're going to exercise all the edge cases in such a setup for completely new hardware, not to mention the inevitable hardware bugs. But this is all theoretical anyway, since judging by the history of Intel's contributions to the Linux kernel, it's not being done that way.
GCC has already been wired up by Intel for Arrow, Lunar and Panther Lake. Why? Because it's their job to do it and it's in their best interest.
Arrow Lake iGPU support in LLVM was done in November, by Intel. Arrow Lake kernel sound support was done in December, again by Intel. I can keep linking early enablement of unreleased Intel products by Intel themselves over and over again.
Alder Lake support in Linux has not been completed yet. It is Intel's fault and they are fixing it - just yesterday Intel posted patches improving Intel Thread Director support for virtualization in Linux.
Posted on Reply
#37
atomsymbol
ncrsThey did provide support for every previous Intel-unique CPU feature in Linux, as it is in their best interest to do it.
Apparently, you don't know what the word "every" means. You are posting half-truths that support your "little world". Facts that do not fit your worldview are being ignored in your posts. It is a fact that Intel did not post any support for their hetero-CPU Alder Lake (which fully qualifies as an Intel-unique feature) to any operating system: not to Windows (they don't own the source code), not to Linux, not to macOS, not to FreeBSD. Microsoft let Intel contribute Thread Director to Windows because it is a relatively small amount of source code. Microsoft won't let Intel's engineers touch (directly or indirectly) any large part of their operating system, and adding x86 hetero-CPU support to Windows is a lot of source code.
ncrsSorry, that's [ChatGPT] not a source.
Of course ChatGPT is a valid source. Its accuracy will get better over time.

ChatGPT query: "Comprehensive list of Intel CPU or GPU features for which Intel didn't provide any software support for."

ChatGPT response:

"Despite these challenges, there have been instances where specific Intel features were noted for lacking software support at their launch or for an extended period afterward. Some notable examples include:
  • Intel Management Engine (ME): While not a directly user-facing feature, the ME has had aspects that were underutilized or lacked clear software utilization paths for end-users.
  • Quick Sync Video: Initially, software support for Intel's integrated GPU video encoding/decoding was sparse, though this has improved significantly over time.
  • Thunderbolt 3: In its early days, Thunderbolt 3 support on Windows PCs was inconsistent, and the software ecosystem around managing Thunderbolt devices was limited.
  • WiDi (Wireless Display): Intel's WiDi technology for wireless screen casting had compatibility and software support issues before being overshadowed by technologies like Miracast.
  • Certain AVX-512 Instructions: Some AVX-512 instruction sets in specific Intel CPUs had limited software optimization or use cases outside of specialized applications."
----

Additionally, when ChatGPT updates its large language model next year to refresh its knowledge, this very post might allow ChatGPT to include a new bullet in the above list about Intel's heterogeneous Alder Lake CPUs, for which (and most people would agree) Intel didn't provide any kind of software support.
Posted on Reply
#38
ncrs
atomsymbolApparently, you don't know what the word "every" means. You are posting half-truths that support your "little world". Facts that do not fit your worldview are being ignored in your posts. It is a fact that Intel did not post any support for their hetero-CPU Alder Lake (which fully qualifies as an Intel-unique feature) to any operating system: not to Windows (they don't own the source code), not to Linux, not to macOS, not to FreeBSD. Microsoft let Intel contribute Thread Director to Windows because it is a relatively small amount of source code. Microsoft won't let Intel's engineers touch (directly or indirectly) any large part of their operating system, and adding x86 hetero-CPU support to Windows is a lot of source code.
You provide no sources for your arguments. None, while I provide links to repositories that directly support mine.
You write that Intel did not provide any support for their "hetero-CPU AlderLake" as a reply to my post that contains a direct link to Intel Thread Director support for virtualization in Linux, posted by Intel employees. There is no logic here, are you just trolling?
atomsymbolOf course ChatGPT is a valid source.
No.
atomsymbolIntel Management Engine (ME): While not a directly user-facing feature, the ME has had aspects that were underutilized or lacked clear software utilization paths for end-users.
Intel provided direct support for Intel ME in Linux kernel. It even has linux-mei@linux.intel.com as the contact e-mail on that page.
Intel also provided a comprehensive suite of tools to interface with ME and AMT which is based on it, all open source. They have scaled it back recently but the fact remains.
atomsymbolQuick Sync Video: Initially, software support for Intel's integrated GPU video encoding/decoding was sparse, though this has improved significantly over time.
Intel provides first party support for Quick Sync Video not only in the Linux kernel and in Mesa, but also in multimedia libraries like FFmpeg; here's one of the latest additions, introducing hardware AV1 support.
atomsymbolThunderbolt 3: In its early days, Thunderbolt 3 support on Windows PCs was inconsistent, and the software ecosystem around managing Thunderbolt devices was limited.
Intel provides first party support for Thunderbolt, just recently they added support for upcoming Lunar Lake in the Linux kernel. They have been doing this since Thunderbolt's beginning.
Intel also provides full userspace support for managing Thunderbolt and USB4 (which is based on Thunderbolt 3).
atomsymbolWiDi (Wireless Display): Intel's WiDi technology for wireless screen casting had compatibility and software support issues before being overshadowed by technologies like Miracast.
Intel provides first party support for Wireless Display in both the Linux kernel (as part of their first party WiFi drivers) and their iwd project, including userspace.
atomsymbolCertain AVX-512 Instructions: Some AVX-512 instruction sets in specific Intel CPUs had limited software optimization or use cases outside of specialized applications."
I have no idea why you included this here. AVX-512 support has been done by Intel on every layer from the kernel through compilers to libraries. Each major Intel software project like OpenVINO contains direct support for AVX-512 and AMX.
atomsymbolAdditionally, when ChatGPT updates its large language model next year to refresh its knowledge, this very post might allow ChatGPT to include a new bullet in the above list about Intel's heterogeneous Alder Lake CPUs, for which (and most people would agree) Intel didn't provide any kind of software support.
Too bad every single point of what it spewed can be defeated easily by simple searches.

To sum up: Intel has been providing first party support for their technologies for years, decades even. They don't always do it fully, as is the case with P- and E-core based CPUs.
Posted on Reply
#39
atomsymbol
ncrsYou provide no sources for your arguments. None, while I provide links to repositories that directly support mine.
That is simply because what doesn't exist is (by definition) very hard to find on the Internet. Negations are in many cases hard to prove.
ncrsYou write that Intel did not provide any support for their "hetero-CPU AlderLake" as a reply to my post that contains a direct link to Intel Thread Director support for virtualization in Linux, posted by Intel employees. There is no logic here, are you just trolling?
Intel Thread Director has little to do with CPUs containing hetero-ISA cores, and more to do with power consumption and the disparity in performance between E-cores and P-cores. Why do you keep posting Intel Thread Director as an example of hetero-CPU (N*AVX-512 + M*AVX-256 cores) enablement? I thought I had made it clear in previous posts that Intel Thread Director doesn't count as hetero-CPU enablement. I should have stated it more clearly; sorry about that.
ncrsTo sum up: Intel has been providing first party support for their technologies for years, decades even. They don't always do it fully, as is the case with P- and E-core based CPUs.
From Wikipedia about Itanium: "Several groups ported operating systems for the [Itanium] architecture, including Microsoft Windows, OpenVMS, Linux, HP-UX, Solaris,[61][62][63] Tru64 UNIX,[60] and Monterey/64.[64] The latter three were canceled before reaching the market. By 1997, it was apparent that the IA-64 architecture and the compiler were much more difficult to implement than originally thought, and the delivery timeframe of Merced began slipping."

Interpretation: Intel was unable to provide compiler support for their Itanium CPUs.
ncrsToo bad every single point of what [ChatGPT] spewed can be defeated easily by simple searches.
Yes. It was too easy in a sense. Let's hope that ChatGPT's reasoning/logic capabilities improve over time. Additionally, the amount of information in an answer depends on the amount of information in the question, and ChatGPT queries are usually quite short and simple (such as: "Write a sad poem about a cancer patient").
Posted on Reply
#40
ncrs
atomsymbolThat is simply because what doesn't exist is (by definition) very hard to find on the Internet. Negations are in many cases hard to prove.
You specifically made the point that Intel does not and should not support their own technology [1], and you mistakenly make such an assumption again, just below [2].

[1]:
atomsymbolI disagree with the idea that Intel should be responsible for turning an OS, such as the Windows 11 kernel or the Linux kernel, into an OS that supports Intel's heterogeneous CPUs (ignoring for the moment the fact that Intel doesn't even have access to the Windows 11 kernel source code). I don't understand how you can believe that Intel should be fully responsible for the development of such a software feature.
atomsymbolIntel Thread Director has little to do with CPUs containing hetero-ISA cores, and more to do with power consumption and the disparity in performance between E-cores and P-cores. Why do you keep posting Intel Thread Director as an example of hetero-CPU (N*AVX-512 + M*AVX-256 cores) enablement? I thought I had made it clear in previous posts that Intel Thread Director doesn't count as hetero-CPU enablement. I should have stated it more clearly; sorry about that.
OK, so you have a very narrow definition of "heterogeneous". If such a processor were released by Intel (and there hasn't been one yet), then it would still be the job of Intel, or of any other manufacturer of such a processor, to provide operating system support for it. Just like they have been providing support, months or even years before release, for all their products and technologies. Why would any OS vendor bend over backwards to implement support for something that doesn't even exist? In the past Intel has worked very closely with OS vendors to enable support; a prime example, Itanium, was provided by yourself, and we'll get to that.

I gave ITD as the main example in the context of our discussion because the dissimilarity of P- and E-cores makes it essentially a heterogeneous CPU, just as ARM SoCs are considered to be. This mechanism was developed by Intel (for Linux; not yet fully merged despite years of effort) or by Intel with/for Microsoft (for Windows 11).

[2]
atomsymbolFrom Wikipedia about Itanium: "Several groups ported operating systems for the [Itanium] architecture, including Microsoft Windows, OpenVMS, Linux, HP-UX, Solaris,[61][62][63] Tru64 UNIX,[60] and Monterey/64.[64] The latter three were canceled before reaching the market. By 1997, it was apparent that the IA-64 architecture and the compiler were much more difficult to implement than originally thought, and the delivery timeframe of Merced began slipping."

Interpretation: Intel was unable to provide compiler support for their Itanium CPUs.
Intel has been providing their own compilers for Itanium since the very beginning. This was back in the days when each major vendor (Intel, HP, SGI, Microsoft) had its own proprietary compiler; all were developed with private documentation and most likely with help from Intel themselves.
You can read a bit about their interactions with the open GCC implementation here.
Intel has been providing first party enablement for Itanium in Linux as well (this is the oldest commit I found, since most Linux repositories begin with 2.6.12-rc2; the whole history is available at archive.org).
The problem with Itanium compilers was not that they didn't exist; it was their performance and inability to fully exploit the CPU's execution capabilities. Not to mention the manufacturing difficulties, but that's another issue entirely.
atomsymbolYes. It was too easy in a sense. Let's hope that ChatGPT's reasoning/logic capabilities improve over time. Additionally, the amount of information in an answer depends on the amount of information in the question, and ChatGPT queries are usually quite short and simple (such as: "Write a sad poem about a cancer patient").
LLMs are not "intelligence", they produce a kind of "word salad" that sounds great at first, but when scrutinized it often falls apart easily. ChatGPT can be trivially wrong even when asked simple math questions. It is a tool, and like any tool it has to be used with caution.
When you understand how LLMs work you'll know that they can't be implicitly trusted because the training material isn't (currently) fully curated. Even our cordial discussion could lead it to arrive at either side's conclusions.
It's a fascinating piece of technology, but we're not at Skynet-level yet.
Posted on Reply
#41
atomsymbol
ncrsOK, so you have a very narrow definition of "heterogeneous".
Narrow??? Such a claim is absurd/untrue. My definition of the term "hetero-CPU" is obviously more general than yours. I don't understand how you can be so irrational.
ncrsIf such a processor were released by Intel (and there hasn't been one yet),
The dispute here is about the meaning of the term "released by Intel".

It (= the hetero-ISA capabilities of Alder Lake) could have been disabled by BIOS or by microcode, while the physical Alder Lake hardware could have been able to run AVX-512 on P-cores alongside AVX-256 on E-cores just fine.

Now, given the previous sentence as context, please answer the following question: Did Intel release a hetero-ISA CPU, or didn't Intel release a hetero-ISA CPU?

Disabling the hetero-ISA capabilities of Alder Lake in the BIOS or by microcode would mean that (1) Intel didn't provide support for their hetero-ISA CPU and (2) consciously prevented all other parties from running hetero-ISA software on their hetero-ISA CPU.
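For what it's worth, had such a hetero-ISA Alder Lake shipped, per-core capability discovery would be straightforward from user space: pin the thread to each core in turn and execute CPUID locally. A hypothetical C sketch follows; on shipping parts every core reports the same flags, so it prints a uniform list.

// core_scan.c - hypothetical sketch of per-core ISA discovery on a
// hetero-ISA x86: pin to each CPU in turn, then read its own CPUID.
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>
#include <cpuid.h>

int main(void)
{
    int ncpus = (int)sysconf(_SC_NPROCESSORS_ONLN);

    for (int cpu = 0; cpu < ncpus; cpu++) {
        cpu_set_t one;
        CPU_ZERO(&one);
        CPU_SET(cpu, &one);
        if (sched_setaffinity(0, sizeof one, &one) != 0)
            continue;                       // core offline / not allowed

        unsigned int eax, ebx, ecx, edx;
        int avx512f = __get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx)
                      && (ebx & (1u << 16)); // AVX512F feature bit
        printf("cpu %2d: AVX-512F %s\n", cpu, avx512f ? "yes" : "no");
    }
    return 0;
}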
ncrsthen it would still be the job of Intel, or of any other manufacturer of such a processor, to provide operating system support for it.
This is in contradiction with the fact that almost any developer or researcher can work on a hetero-CPU-aware operating system without Intel providing any actual hetero-ISA CPU.

A main problem in this discussion is that you don't know the implementation method/approach, or you are ignoring/suppressing that knowledge.
Posted on Reply
#42
ncrs
atomsymbolNarrow??? Such a claim is absurd/untrue. My definition of the term "hetero-CPU" is obviously more general than yours. I don't understand how you can be so irrational.
It's the opposite, actually: by my definition I consider the same-ISA P-/E-core design heterogeneous due to the differences in performance, and you don't. Hence mine is more general, and yours more specific.
atomsymbolThe dispute here is about the meaning of the term "released by Intel".

It (= the hetero-ISA capabilities of Alder Lake) could have been disabled by BIOS or by microcode, while the physical Alder Lake hardware could have been able to run AVX-512 on P-cores alongside AVX-256 on E-cores just fine.
That did not happen. Alder Lake was never capable of running AVX-512 on P-cores with E-cores simultaneously enabled; the only way was to disable the E-cores completely, turning it into a homogeneous AVX-512 CPU.
atomsymbolNow, given the previous sentence as context, please answer the following question: Did Intel release a hetero-ISA CPU, or didn't Intel release a hetero-ISA CPU?

Disabling the hetero-ISA capabilities of Alder Lake in the BIOS or by microcode would mean that (1) Intel didn't provide support for their hetero-ISA CPU and (2) consciously prevented all other parties from running hetero-ISA software on their hetero-ISA CPU.
They did not release a hetero-ISA CPU. In every mode, including the AVX-512 mode that Intel removed from microcode, Alder Lake was an ISA-homogeneous CPU.
Maybe there is an internal microcode version that enables what you are describing, and maybe Intel has an OS that would work with such a CPU. To be honest, I would be shocked if they didn't, given their R&D capabilities.
In the end they decided that ISA-heterogeneous processors are not yet feasible.
atomsymbolThis is in contradiction with the fact that almost any developer or researcher can work on a hetero-CPU-aware operating system without Intel providing any actual hetero-ISA CPU.
How is that a contradiction? Of course independent developers can work on whatever they please, regardless of Intel or any other company.
What they can't do is implement Intel-specific support for a non-existent hetero-ISA Intel CPU. If such a processor is ever released by Intel, it will be Intel implementing support before the hardware release, just as they have done for decades. Coincidentally, 3 days ago Intel started adding support for APX and AVX10 to their Clear Linux distribution. There are no CPUs currently publicly available that can use those ISA extensions.
atomsymbolA main problem in this discussion is that you don't know the implementation method/approach, or you are ignoring/suppressing that knowledge.
I have given you countless examples of Intel developing first party support for Intel technologies before anyone else.
At this point I'm not going to continue discussing this with you.
Posted on Reply