
Intel Arrow Lake-S 24 Thread CPU Leaked - Lacks Hyper-Threading & AVX-512 Support

Is Intel planning on releasing new generation HEDT CPUs?
No

Edit: Unless you count the Sapphire Rapids Xeon W series, which is already out.

Where do the lower IPC and clocks come from? Isn't it on the new Intel 20A node?
Newer process nodes don't automatically guarantee higher clocks at first. Case in point: TSMC's various node variants, each optimized for different targets such as power, clocks or density.

Arrow Lake will probably closely align with Meteor Lake. Meteor Lake P-cores on the Intel 4 node have lower IPC than Raptor Lake P-cores on the Intel 7 node. The reason for this specific drop is that processor design is all about transistor real estate. CPUs can have higher IPC on smaller nodes because there is room to add more functional units, cache, etc. Intel is choosing to allocate the extra real estate of smaller nodes to the NPU, better E-cores and the iGPU instead.

I believe the Arrow Lake P-cores are on the Intel 4 node just like Meteor Lake's P-cores. However, they could go with Intel 3, which is due at the end of this year, but it seems that Intel is saving its most cutting-edge capacity for third-party chip designers in order to boost IFS. The Intel 20A node is expected by the end of 2025 unless there are delays.
 
Arrow Lake will probably closely align with Meteor Lake. Meteor Lake P-cores on the Intel 4 node have lower IPC than Raptor Lake P-cores on the Intel 7 node. The reason for this specific drop is that processor design is all about transistor real estate. CPUs can have higher IPC on smaller nodes because there is room to add more functional units, cache, etc. Intel is choosing to allocate the extra real estate of smaller nodes to the NPU, better E-cores and the iGPU instead.
Wasn't the NPU on its own "tile"?
 
Wasn't the NPU on its own "tile"?
The whole package must be considered when setting clock speeds and adding features to each 'tile'. Just because the CPU and NPU are on separate tiles doesn't mean you can add a bunch of transistors to both independently; do that and you break the power budget and the cost target.
 
And I can't help but think that Intel is really holding back the rest of the industry with these kinds of shenanigans. We could have universal AVX-512 support but we can't because... Intel.
Why is it Intel's fault? E-mail AMD and ask them what the deal is.
 
So... it was the Z80's creators who provided the support after all. Other operating systems obviously built on that support and documentation to implement their own, but it is the CPU vendor's job to provide the initial support, development environments and documentation.

ChatGPT claims the following about ZX Spectrum ROM development:

"Development was done using cross-development tools, as the ZX Spectrum hardware was not powerful enough for ROM development. This means that the software was written and compiled on a more powerful computer, then transferred to the Spectrum for testing."

"For the development of early 8-bit computers like the ZX Spectrum, it was typical to use a larger, more capable minicomputer or a mainframe."

Thus, from the end-user perspective of the ZX Spectrum home computer, the Z80 CPU vendor's contributions to the ZX Spectrum are invisible; they are not there. This is what I meant in my previous post.

I am only claiming that year 2024 is quite different from year 1980, nothing more.

I disagree with the idea that Intel should be responsible for turning an OS, such as the Windows 11 kernel or the Linux kernel, into an OS that supports Intel's heterogeneous CPUs (ignoring for the moment the fact that Intel doesn't even have access to the Windows 11 kernel source code). I don't understand how you can believe that Intel should be fully responsible for development of such a software feature. Providing initial enablement for hetero-CPUs or a prototype OS that proves it can work is one thing; implementing a fully hetero-CPU-aware OS is a very different thing. Can Intel's software engineers contribute hetero-CPU support to the Linux kernel so that Intel CPUs sell better in the market? Of course they can, but this doesn't imply that Intel is to be blamed for mainstream operating systems not being prepared at all for hetero-CPUs.

The main mistake I think you are making is believing that a physical sample of a hetero-CPU is needed to develop a hetero-CPU-enabled OS. It isn't required. This mistake then leads you to the invalid conclusion that, because only Intel has access to prototypes of their CPUs before anyone else, Intel is responsible for bringing hetero-CPU support to operating systems. What you seem not to understand is that any software developer with an average year-2024 Linux machine can start working on bringing hetero-CPU support to Linux today without any extra equipment, by pretending that the homo-CPU in the developer's machine is a hetero-CPU. In light of this fact, the claim that Intel is responsible for the debacle of their hetero-CPU (Alder Lake) is absurd.
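To make the "pretend your homo-CPU is a hetero-CPU" approach concrete, here is a minimal sketch (my own illustration; assumes Linux and GCC, and the core numbers are invented) of carving out a set of "pretend P-cores" and pinning the AVX-512-capable code paths to them:

```c
/* Hypothetical sketch: treat cores 0-3 of an ordinary homogeneous CPU as
 * "pretend AVX-512-capable P-cores" and everything else as "AVX2-only
 * E-cores". The core numbers and the capability split are invented for
 * illustration; a real hetero-ISA-aware scheduler would read this from
 * CPUID or firmware tables instead of hard-coding it. Linux-only. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

static int run_on_pretend_p_cores(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    for (int cpu = 0; cpu < 4; cpu++)   /* cores 0-3 play the role of P-cores */
        CPU_SET(cpu, &set);
    /* Pin the calling thread; only these cores would run AVX-512 paths. */
    return sched_setaffinity(0, sizeof(set), &set);
}

int main(void)
{
    if (run_on_pretend_p_cores() != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    puts("Thread confined to the cores we pretend support AVX-512.");
    /* ...dispatch the wide-vector kernel here... */
    return 0;
}
```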
 
Why is it Intel's fault? E-mail AMD and ask them what the deal is.
Because AMD has had AVX-512 support since Ryzen 7000. It's Intel that pulled support for it starting with 12th gen. As far as I can see, Intel's the one to blame here.
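For what it's worth, this split is exactly why software ends up probing for AVX-512 at runtime instead of assuming it. A minimal sketch, assuming GCC or Clang (the builtin used here is their extension, my own example):

```c
/* Minimal runtime check; relevant precisely because AVX-512 is present on
 * Zen 4 / Ryzen 7000 but not on Intel's 12th-gen-and-later desktop parts.
 * __builtin_cpu_supports() is a GCC/Clang extension; MSVC would need a
 * CPUID-based check instead. */
#include <stdio.h>

int main(void)
{
    if (__builtin_cpu_supports("avx512f"))
        puts("AVX-512 Foundation available: take the 512-bit code path.");
    else
        puts("No AVX-512: fall back to the AVX2/SSE path.");
    return 0;
}
```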
 
Arrow Lake will probably closely align with Meteor Lake. Meteor Lake P-cores on the Intel 4 node have lower IPC than Raptor Lake P-cores on the Intel 7 node. The reason for this specific drop is that processor design is all about transistor real estate. CPUs can have higher IPC on smaller nodes because there is room to add more functional units, cache, etc.
Just a note: In most situations, the overall utilization of ALUs in a modern CPU is fairly low (such as: less than 50%). The typical average IPC of most x86 applications is still in the range 1.0 - 2.0. This means that, in theory, 1 or 2 ALUs of a given type are sufficient and having 3 or more ALUs of a given type is a waste of silicon. Applications with an IPC of 4.0 are very rare. The main drivers of CPU core performance in recent years have been: OoO (out-of-order) logic improvements, larger internal CPU buffers and queues, branch prediction improvements. ALUs are relatively cheap in terms of silicon area and are relatively easy to replicate on a chip - OoO logic isn't cheap and is a much harder problem to crack than ALUs.
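As a toy illustration of why average IPC stays low (my own example, not a benchmark): a serial dependency chain can only retire about one arithmetic step per cycle no matter how many ALUs the core has.

```c
/* Toy kernel (illustrative only): every iteration needs the previous
 * result, so the arithmetic forms one long dependency chain and extra
 * ALUs sit idle. The resulting IPC can be observed on Linux with
 * something like `perf stat -e instructions,cycles ./a.out`. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t x = 1;
    for (uint64_t i = 0; i < 100000000ULL; i++)
        x = x * 6364136223846793005ULL + 1442695040888963407ULL; /* LCG step */
    printf("%llu\n", (unsigned long long)x); /* keep the result live */
    return 0;
}
```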
 
Just a note: In most situations, the overall utilization of ALUs in a modern CPU is fairly low (such as: less than 50%). The typical average IPC of most x86 applications is still in the range 1.0 - 2.0. This means that, in theory, 1 or 2 ALUs of a given type are sufficient and having 3 or more ALUs of a given type is a waste of silicon. Applications with an IPC of 4.0 are very rare. The main drivers of CPU core performance in recent years have been: OoO (out-of-order) logic improvements, larger internal CPU buffers and queues, branch prediction improvements. ALUs are relatively cheap in terms of silicon area and are relatively easy to replicate on a chip - OoO logic isn't cheap and is a much harder problem to crack than ALUs.
But why is there a large number of execution units in x86 processors? To improve SMT performance?
 
Arrow Lake will probably closely align with Meteor Lake. Meteor Lake P-cores on the Intel 4 node have lower IPC than Raptor Lake P-cores on the Intel 7 node. The reason for this specific drop is that processor design is all about transistor real estate. CPUs can have higher IPC on smaller nodes because there is room to add more functional units, cache, etc. Intel is choosing to allocate the extra real estate of smaller nodes to the NPU, better E-cores and the iGPU instead.
MTL and ARL are not monolithic, though. The CPU gets its own tile. Raptor Lake is also an architecture that wasn't supposed to exist; MTL was two years late. Intel's slides about MTL also mentioned how much flexibility the tile design allows them. The SoC/GPU/CPU tiles can be bigger or smaller depending on which market the SKU is supposed to address. If Arrow Lake is designed to be high performance, they can absolutely make the CPU tile bigger while shrinking the others. ARL being a die shrink of MTL while being meant as a performance SKU would make this a launch even worse than the first-gen P4. In 2023 Intel wasn't dumb enough to make MTL the replacement for the HX chips in laptops; high-performance laptops are still using RPL. Also worth keeping in mind that Lunar Lake is going to be the spiritual successor of what MTL ultimately became: a laptop-only chip.

Just a note: In most situations, the overall utilization of ALUs in a modern CPU is fairly low (such as: less than 50%). The typical average IPC of most x86 applications is still in the range 1.0 - 2.0. This means that, in theory, 1 or 2 ALUs of a given type are sufficient and having 3 or more ALUs of a given type is a waste of silicon. Applications with an IPC of 4.0 are very rare. The main drivers of CPU core performance in recent years have been: OoO (out-of-order) logic improvements, larger internal CPU buffers and queues, branch prediction improvements. ALUs are relatively cheap in terms of silicon area and are relatively easy to replicate on a chip - OoO logic isn't cheap and is a much harder problem to crack than ALUs.
Going "deeper" (OoO) has certainly been one of the main performance drivers since the Core 2 family, arguably even long before that. But we must not forget that going "wider" (more execution ports) goes along with it. Together with balancing the right execution units on those ports, power gating and so on, they achieve good utilization of the execution ports and of all the resources needed to feed them, even though the individual execution units probably have fairly low utilization rates.
Back with Sandy Bridge, Intel had 3 execution ports for integer or vector operations.
In Haswell they added a fourth with an ALU.
In Sunny Cove (Ice Lake/Rocket Lake) they added more execution units on the fourth port.
In Alder Lake (Golden Cove) they added a fifth execution port for int/vec, this time with an ALU and an LEA unit (similar to Haswell), while only 3 execution ports still contain vector units.

But there are still more minor changes which add up to significant performance gains. For example, in Sunny Cove Intel brought significantly faster integer multiplication and division. More such improvements will be possible as they move to more advanced nodes. I don't think the ALUs themselves can get much faster (they are down to a few clock cycles anyway), and as you said they are very cheap, but the other units probably can.

So will we see Intel going even wider? Probably, but I don't see them going straight to 8 ALUs, as the scheduling needed to manage it wouldn't be worth it before the rest of the front end can feed it. But as you know, at some point there are diminishing returns (the CPU front ends are already huge), unless something changes on the software side. And I don't just mean the quality of software, but also ISA changes and compiler improvements. There could be a lot of efficiency gains if the cost of mispredictions is reduced (like a partial flush). And I'm sure both companies have a lot coming that I'm not aware of.

BTW, lots of interesting discussions here. :)

But why is there a large number of execution units in x86 processors? To improve SMT performance?
No, at least not the way current x86 microarchitectures implement it. (Currently they only switch between two threads per core.)
Multiple execution ports (each of which can hold multiple execution units) allow what we call instruction-level parallelism (worth reading up on), which basically means that whenever the CPU finds multiple calculations that are independent of each other, it might as well execute them in parallel, and there are huge savings whenever prefetching or branching needs a result before continuing.
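A rough sketch of what that looks like from the software side (illustrative only; the function names and the choice of four accumulators are mine): summing with a single accumulator forms one long dependency chain, while splitting the sum into independent accumulators gives the out-of-order engine work it can spread across several integer ports.

```c
/* Illustrative only: two ways to sum an array. The single-accumulator loop
 * is one long dependency chain; the four-accumulator loop exposes four
 * independent chains the out-of-order engine can execute in parallel
 * (compilers often do this unrolling for you). */
#include <stddef.h>
#include <stdint.h>

uint64_t sum_one_chain(const uint64_t *a, size_t n)
{
    uint64_t s = 0;
    for (size_t i = 0; i < n; i++)
        s += a[i];                  /* every add waits for the previous one */
    return s;
}

uint64_t sum_four_chains(const uint64_t *a, size_t n)
{
    uint64_t s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {    /* four independent adds per iteration */
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    for (; i < n; i++)              /* scalar tail */
        s0 += a[i];
    return s0 + s1 + s2 + s3;
}
```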

We actually got this feature very early on. Back with the 80486 we got pipelining, and already with the following Pentium we got two execution ports. The Pentium Pro/II added out-of-order execution. Even though these implementations were very simple compared to current designs, these concepts have evolved over decades and have been a core part of the performance gains over those years.

Current designs from Intel (Golden Cove) have 5 ports for int/vec operations plus 7 ports for memory operations.
Zen has a different configuration, but keeps its integer and vector engines separate. If I read the schematics correctly: 8 ports for integer and memory operations combined (4 of which have ALUs), and 6 ports for the vector operations (4 for calculations, which can be fused together for FMA, and 2 for load/store). So in theory, Zen 4 is in a way "wider" than Golden Cove, but this doesn't tell us all the finer details that make up the complete picture. And then Zen 4 can seemingly only issue 6 operations per clock across those 14 ports.
But if you take away one lesson for today, it's that the utilization of these ports makes up a large part of the performance characteristics of an architecture, and is a good part of the explanation of why an AMD CPU can win massively in one workload while Intel wins in another. ;)
 
Remarkable that HT is stripped. Did they do that to save on power?
 
ChatGPT claims the following about ZX Spectrum ROM development:
Sorry, that's not a source.

I disagree with the idea that Intel should be responsible for turning an OS, such as the Windows 11 kernel or the Linux kernel, into an OS that supports Intel's heterogeneous CPUs (ignoring for the moment the fact that Intel doesn't even have access to the Windows 11 kernel source code). I don't understand how you can believe that Intel should be fully responsible for development of such a software feature.
Intel for sure has access and has contributed to the Windows kernel. The term "WinTel" wasn't coined from nothing. That's not belief on my part. Both Intel and Microsoft said multiple times that Intel did the enablement:
The Intel Thread Director team worked closely with Microsoft to enable a seamless experience in Windows 11.

Providing initial enablement for hetero-CPUs or a prototype OS that proves it can work is one thing; implementing a fully hetero-CPU-aware OS is a very different thing. Can Intel's software engineers contribute hetero-CPU support to the Linux kernel so that Intel CPUs sell better in the market? Of course they can, but this doesn't imply that Intel is to be blamed for mainstream operating systems not being prepared at all for hetero-CPUs.
They did provide support for every previous Intel-unique CPU feature in Linux, as it is in their best interest to do it.

The main mistake I think you are making is believing that a physical sample of a hetero-CPU is needed to develop a hetero-CPU-enabled OS. It isn't required. This mistake then leads you to the invalid conclusion that, because only Intel has access to prototypes of their CPUs before anyone else, Intel is responsible for bringing hetero-CPU support to operating systems. What you seem not to understand is that any software developer with an average year-2024 Linux machine can start working on bringing hetero-CPU support to Linux today without any extra equipment, by pretending that the homo-CPU in the developer's machine is a hetero-CPU. In light of this fact, the claim that Intel is responsible for the debacle of their hetero-CPU (Alder Lake) is absurd.
Simulating hardware is not equivalent to running the real hardware. We can't even fully (as in 100% compatibility) simulate a PC from the '90s. There's no way you're going to exercise all the edge cases in such a setup for completely new hardware, not to mention the inevitable hardware bugs. But this is all theoretical anyway, since judging by the history of Intel's contributions to the Linux kernel, it's not being done that way.
GCC has already been wired up by Intel for Arrow, Lunar and Panther Lake. Why? Because it's their job to do it and it's in their best interest.
Arrow Lake iGPU support in LLVM was done in November, by Intel. Arrow Lake kernel sound support was done in December, again by Intel. I can keep linking early enablement of unreleased Intel products by Intel themselves over and over again.
Alder Lake support in Linux has not been completed yet. It is Intel's fault and they are fixing it - just yesterday Intel posted patches improving Intel Thread Director support for virtualization in Linux.
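As a side note on how that compiler enablement actually gets consumed by application code, here is a hedged sketch using GCC/Clang function multiversioning (the function and the chosen clone list are my own illustration, not taken from any Intel repository): the compiler emits per-ISA clones plus a resolver that picks one at load time based on the CPU it finds.

```c
/* Sketch of GCC/Clang function multiversioning (illustrative example):
 * the compiler builds an AVX-512 clone, an AVX2 clone and a baseline
 * clone of this function, and its generated resolver selects one at load
 * time according to what the running CPU reports. */
#include <stddef.h>

__attribute__((target_clones("avx512f", "avx2", "default")))
void scale(float *v, size_t n, float k)
{
    for (size_t i = 0; i < n; i++)
        v[i] *= k;                  /* auto-vectorized differently per clone */
}
```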
 
They did provide support for every previous Intel-unique CPU feature in Linux, as it is in their best interest to do it.

Apparently, you don't know what the word "every" means. You are posting half-truths that support your "little world". Facts that do not fit your worldview are being ignored in your posts: it is a fact that Intel did not post any support for their hetero-CPU Alder Lake (which fully qualifies as an Intel-unique feature) to any operating system. Not to Windows (they don't own the source code), not to Linux, not to macOS, not to FreeBSD. Microsoft let Intel contribute/collaborate on Thread Director support in Windows because it is a relatively small amount of source code. Microsoft won't let Intel's engineers touch (directly or indirectly) any large part of their operating system, and adding x86 hetero-CPU support to Windows is a lot of source code.

Sorry, that's [ChatGPT] not a source.

Of course ChatGPT is a valid source. Its accuracy will get better over time.

ChatGPT query: "Comprehensive list of Intel CPU or GPU features for which Intel didn't provide any software support for."

ChatGPT response:

"Despite these challenges, there have been instances where specific Intel features were noted for lacking software support at their launch or for an extended period afterward. Some notable examples include:

  • Intel Management Engine (ME): While not a directly user-facing feature, the ME has had aspects that were underutilized or lacked clear software utilization paths for end-users.
  • Quick Sync Video: Initially, software support for Intel's integrated GPU video encoding/decoding was sparse, though this has improved significantly over time.
  • Thunderbolt 3: In its early days, Thunderbolt 3 support on Windows PCs was inconsistent, and the software ecosystem around managing Thunderbolt devices was limited.
  • WiDi (Wireless Display): Intel's WiDi technology for wireless screen casting had compatibility and software support issues before being overshadowed by technologies like Miracast.
  • Certain AVX-512 Instructions: Some AVX-512 instruction sets in specific Intel CPUs had limited software optimization or use cases outside of specialized applications."
----

Additionally, when ChatGPT updates its large language model next year to refresh its knowledge, this very post might lead it to add a new bullet to the above list about Intel's heterogeneous Alder Lake CPUs, for which (and most people would agree) Intel didn't provide any kind of software support.
 
Apparently, you don't know what the word "every" means. You are posting half-truths that support your "little world". Facts that do not fit your worldview are being ignored in your posts: it is a fact that Intel did not post any support for their hetero-CPU Alder Lake (which fully qualifies as an Intel-unique feature) to any operating system. Not to Windows (they don't own the source code), not to Linux, not to macOS, not to FreeBSD. Microsoft let Intel contribute/collaborate on Thread Director support in Windows because it is a relatively small amount of source code. Microsoft won't let Intel's engineers touch (directly or indirectly) any large part of their operating system, and adding x86 hetero-CPU support to Windows is a lot of source code.
Additionally, when ChatGPT updates its large language model next year to refresh its knowledge, this very post might lead it to add a new bullet to the above list about Intel's heterogeneous Alder Lake CPUs, for which (and most people would agree) Intel didn't provide any kind of software support.
You provide no sources in your arguments. None, while I provide links to repositories that directly support mine.
You write that Intel did not provide any support for their "hetero-CPU Alder Lake" as a reply to my post that contains a direct link to Intel Thread Director support for virtualization in Linux, posted by Intel employees. There is no logic here; are you just trolling?

Of course that ChatGPT is a valid source.
No.

Intel Management Engine (ME): While not a directly user-facing feature, the ME has had aspects that were underutilized or lacked clear software utilization paths for end-users.
Intel provided direct support for Intel ME in the Linux kernel. It even has linux-mei@linux.intel.com as the contact e-mail on that page.
Intel also provided a comprehensive suite of tools to interface with ME and with AMT, which is built on it, all open source. They have scaled it back recently, but the fact remains.

Quick Sync Video: Initially, software support for Intel's integrated GPU video encoding/decoding was sparse, though this has improved significantly over time.
Intel provides first party support for Quick Sync Video not only in the Linux kernel and in Mesa, but also in multimedia libraries like FFmpeg - here's one of the latest additions introducing hardware AV1 support.

Thunderbolt 3: In its early days, Thunderbolt 3 support on Windows PCs was inconsistent, and the software ecosystem around managing Thunderbolt devices was limited.
Intel provides first party support for Thunderbolt; just recently they added support for the upcoming Lunar Lake in the Linux kernel. They have been doing this since Thunderbolt's beginning.
Intel also provides full userspace support for managing Thunderbolt and USB4 (which is based on Thunderbolt 3).

WiDi (Wireless Display): Intel's WiDi technology for wireless screen casting had compatibility and software support issues before being overshadowed by technologies like Miracast.
Intel provides first party support for Wireless Display in both the Linux kernel (as part of their first party WiFi drivers) and their iwd project, including userspace.

Certain AVX-512 Instructions: Some AVX-512 instruction sets in specific Intel CPUs had limited software optimization or use cases outside of specialized applications."
I have no idea why you included this here. AVX-512 support has been done by Intel on every layer from the kernel through compilers to libraries. Each major Intel software project like OpenVINO contains direct support for AVX-512 and AMX.
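For reference, this is the kind of code path all that enablement makes usable: a minimal AVX-512F intrinsics sketch (my own example; it assumes an AVX-512-capable CPU and a compiler flag such as -mavx512f, and the buffer names are illustrative).

```c
/* Minimal AVX-512F intrinsics sketch: adds two float arrays 16 elements
 * at a time. Requires an AVX-512F-capable CPU and, e.g., -mavx512f. */
#include <immintrin.h>
#include <stddef.h>

void add_arrays(float *dst, const float *a, const float *b, size_t n)
{
    size_t i = 0;
    for (; i + 16 <= n; i += 16) {                  /* 512 bits = 16 floats */
        __m512 va = _mm512_loadu_ps(a + i);
        __m512 vb = _mm512_loadu_ps(b + i);
        _mm512_storeu_ps(dst + i, _mm512_add_ps(va, vb));
    }
    for (; i < n; i++)                              /* scalar tail */
        dst[i] = a[i] + b[i];
}
```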

Additionally, when ChatGPT updates its large language model next year to refresh its knowledge, this very post might lead it to add a new bullet to the above list about Intel's heterogeneous Alder Lake CPUs, for which (and most people would agree) Intel didn't provide any kind of software support.
Too bad every single point of what it spewed can be defeated easily by simple searches.

To sum up: Intel has been providing first party support for their technologies for years, decades even. They don't always do it fully, as is the case with P- and E-core based CPUs.
 
You provide no sources in your arguments. None, while I provide links to repositories that directly correlate with mine.

That is simply because what doesn't exist is (by definition) very hard to find on the Internet. Negations are in many cases hard to prove.

You write that Intel did not provide any support for their "hetero-CPU AlderLake" as a reply to my post that contains a direct link to Intel Thread Director support for virtualization in Linux, posted by Intel employees. There is no logic here, are you just trolling?

Intel Thread Director has little to do with CPUs containing hetero-ISA cores; it has more to do with power consumption and the disparity in performance between E-cores and P-cores. Why do you keep posting Intel Thread Director as an example of hetero-CPU (N*AVX512 + M*AVX256 cores) enablement? I thought I had made it clear in previous posts that Intel Thread Director doesn't count as hetero-CPU enablement. I should have stated it more clearly - sorry about that.

To sum up: Intel has been providing first party support for their technologies for years, decades even. They don't always do it fully, as is the case with P- and E-core based CPUs.

From Wikipedia about Itanium: "Several groups ported operating systems for the [Itanium] architecture, including Microsoft Windows, OpenVMS, Linux, HP-UX, Solaris,[61][62][63] Tru64 UNIX,[60] and Monterey/64.[64] The latter three were canceled before reaching the market. By 1997, it was apparent that the IA-64 architecture and the compiler were much more difficult to implement than originally thought, and the delivery timeframe of Merced began slipping."

Interpretation: Intel was unable to provide compiler support for their Itanium CPUs.

Too bad every single point of what [ChatGPT] spewed can be defeated easily by simple searches.

Yes. It was too easy in a sense. Let's hope that ChatGPT's reasoning/logic capabilities improve over time. Additionally, the amount of information in an answer depends on the amount of information in the question, and ChatGPT queries are usually quite short and simple (such as: "Write a sad poem about a cancer patient").
 
That is simply because what doesn't exist is (by definition) very hard to find on the Internet. Negations are in many cases hard to prove.
You specifically made the point that Intel does not and should not support their own technology [1], and you mistakenly make such an assumption again, just below [2].

[1]:
I disagree with the idea that Intel should be responsible for turning an OS, such as Window 11 kernel or the Linux kernel, into an OS that supports Intel's heterogenous CPUs (ignoring for the moment the fact that Intel doesn't even have access to Windows 11 kernel source code). I don't understand how you can believe that Intel should be fully responsible for development of such a software feature.

Intel Thread Director has little to do with CPUs containing hetero-ISA cores; it has more to do with power consumption and the disparity in performance between E-cores and P-cores. Why do you keep posting Intel Thread Director as an example of hetero-CPU (N*AVX512 + M*AVX256 cores) enablement? I thought I had made it clear in previous posts that Intel Thread Director doesn't count as hetero-CPU enablement. I should have stated it more clearly - sorry about that.
OK, so you have a very narrow definition of "heterogeneous". If such a processor were released by Intel (and there hasn't been one yet), it would still be the job of Intel, or of any other manufacturer of such a processor, to provide support for it in operating systems. Just like they have been providing support, months or even years before release, for all their products and technologies. Why would any OS vendor bend over backwards to implement support for something that doesn't even exist? In the past Intel has worked very closely with OS vendors to enable support; a prime example of that was provided by yourself: Itanium, and we'll get to that.

I gave ITD as the main example in the context of our discussion because the dissimilarity of P- and E-cores makes it essentially a heterogeneous CPU, just like ARM SoCs are considered to be. This mechanism was developed by Intel (for Linux - not yet fully merged despite years of effort) or by Intel with/for Microsoft (for Windows 11).

[2]
From Wikipedia about Itanium: "Several groups ported operating systems for the [Itanium] architecture, including Microsoft Windows, OpenVMS, Linux, HP-UX, Solaris,[61][62][63] Tru64 UNIX,[60] and Monterey/64.[64] The latter three were canceled before reaching the market. By 1997, it was apparent that the IA-64 architecture and the compiler were much more difficult to implement than originally thought, and the delivery timeframe of Merced began slipping."

Interpretation: Intel was unable to provide compiler support for their Itanium CPUs.
Intel has been providing their own compilers for Itanium since the very beginning. This was back in the days of each major vendor (Intel, HP, SGI, Microsoft) having their own proprietary compilers - all were developed with private documentation and most likely help from Intel themselves.
You can read a bit about their interactions with the open GCC implementation here.
Intel has been providing first party enablement for Itanium in Linux as well (this is the oldest commit I found, since most Linux repositories begin with 2.6.12-rc2; the whole history is available at archive.org).
The problem with Itanium compilers was not that they didn't exist. It was their performance and inability to exploit the CPU's execution capabilities fully. Not to mention the manufacturing difficulties, but that's another issue entirely.

Yes. It was too easy in a sense. Let's hope that ChatGPT's reasoning/logic capabilities improve over time. Additionally, the amount of information in an answer depends on the amount of information in the question, and ChatGPT queries are usually quite short and simple (such as: "Write a sad poem about a cancer patient").
LLMs are not "intelligence"; they produce a kind of "word salad" that sounds great at first but often falls apart easily when scrutinized. ChatGPT can be trivially wrong even when asked simple math questions. It is a tool, and like any tool it has to be used with caution.
When you understand how LLMs work you'll know that they can't be implicitly trusted because the training material isn't (currently) fully curated. Even our cordial discussion could lead it to arrive at either side's conclusions.
It's a fascinating piece of technology, but we're not at Skynet-level yet.
 
OK, so you have a very narrow definition of "heterogeneous".

Narrow??? Such a claim is absurd/untrue. My definition of the term "hetero-CPU" is obviously more general than your definition of the term. I don't understand how you can be so irrational.

If such processor was released by Intel (and there hasn't been one yet),

The dispute here is about the meaning of the term "released by Intel".

It (= the hetero-ISA capabilities of Alder Lake) could have been disabled by BIOS or by microcode, while the physical Alder Lake hardware could have been able to run AVX-512 on P-cores alongside AVX-256 on E-cores just fine.

Now, given the previous sentence as context, please answer the following question: Did Intel release a hetero-ISA CPU, or didn't Intel release a hetero-ISA CPU?

Disabling the hetero-ISA capabilities of Alder Lake in BIOS or by microcode would mean that (1) Intel didn't provide support for their hetero-ISA CPU and (2) consciously prevented all other parties from running hetero-ISA software on their hetero-ISA CPU.

then it would still be Intel's, or any other manufacturer's of such processor, job to provide support in operating systems for them.

This is in contradiction with the fact that almost any developer or researcher can work on a hetero-CPU-aware operating system without Intel providing any actual hetero-ISA-CPU.

A main problem in this discussion is that you don't know the implementation method/approach - or you are ignoring/suppressing that knowledge.
 
Narrow??? Such a claim is absurd/untrue. My definition of the term "hetero-CPU" is obviously more general than your definition of the term. I don't understand how you can be so irrational.
It's the opposite, actually, since by my definition I consider the same-ISA P-/E-core design heterogeneous due to the differences in performance, and you don't. Hence mine is more general, and yours more specific.

The dispute here is about the meaning of the term "released by Intel".

It (= the hetero-ISA capabilities of Alder Lake) could have been disabled by BIOS or by microcode, while the physical Alder Lake hardware could have been able to run AVX-512 on P-cores alongside AVX-256 on E-cores just fine.
That did not happen - Alder Lake was never capable of running AVX-512 on the P-cores with the E-cores simultaneously enabled. The only way was to disable the E-cores completely, turning it into a homogeneous AVX-512 CPU.

Now, given the previous sentence as context, please answer the following question: Did Intel release a hetero-ISA CPU, or didn't Intel release a hetero-ISA CPU?

Disabling the hetero-ISA capabilities of Alder Lake in BIOS or by microcode would mean that (1) Intel didn't provide support for their hetero-ISA CPU and (2) consciously prevented all other parties from running hetero-ISA software on their hetero-ISA CPU.
They did not release a hetero-ISA CPU. In every mode, including the AVX-512 mode that Intel removed from microcode, Alder Lake was an ISA-homogeneous CPU.
Maybe there is an internal microcode version that enables what you are describing, and maybe Intel has an OS that would work with such a CPU. To be honest, I would be shocked if they didn't, given their R&D capabilities.
In the end they decided that ISA-heterogeneous processors are not yet feasible.

This is in contradiction with the fact that almost any developer or researcher can work on a hetero-CPU-aware operating system without Intel providing any actual hetero-ISA-CPU.
How is that a contradiction? Of course independent developers can work on whatever they please regardless of Intel or any other company.
What they can't do is implement Intel-specific support for a non-existent hetero-ISA Intel CPU. If such a processor is ever released by Intel, it will be Intel implementing support before the hardware release, just like they have done for decades. Coincidentally, three days ago Intel started adding support for APX and AVX10 to their Clear Linux distribution. There are no CPUs currently publicly available that can use those ISA extensions.

A main problem in this discussion is that you don't know the implementation method/approach - or you are ignoring/suppressing that knowledge.
I have given you countless examples of Intel developing first party support for Intel technologies before anyone else.
At this point I'm not going to continue discussing this with you.
 