Friday, September 8th 2023
Intel Demos 6th Gen Xeon Scalable CPUs, Core Counts Leaked
Intel demonstrated its advanced packaging prowess this week; attendees were able to get an early-ish look at Team Blue's sixth-generation Xeon Scalable "Granite Rapids" processors. This multi-tile, datacenter-oriented CPU family is projected to hit the market within the first half of 2024, but reports suggest that key enterprise clients have recently received evaluation samples. Coincidentally, renowned hardware leaker Yuuki_AnS has managed to source more information from industry insiders. This follows their complete blowout of more mainstream Raptor Lake Refresh desktop SKUs.
The leaked slide presents a bunch of evaluation sample "Granite Rapids-SP" XCC and "Sierra Forest" HCC SKUs. Intel has not officially published core counts for these upcoming "Avenue City" platform product lines. According to their official marketing blurb: "Intel Xeon processors with P-cores (Granite Rapids) are optimized to deliver the lowest total cost of ownership (TCO) for high-core performance-sensitive workloads and general-purpose compute workloads. Today, Xeon enables better AI performance than any other CPU, and Granite Rapids will further enhance AI performance. Built-in accelerators give an additional boost to targeted workloads for even greater performance and efficiency." The more frugal family is described as: "Intel Xeon processors with E-cores (Sierra Forest) are enhanced to deliver density-optimized compute in the most power-efficient manner. Xeon processors with E-cores provide best-in-class power-performance density, offering distinct advantages for cloud-native and hyperscale workloads."
The leaked information suggests that the listed "Granite Rapids-SP" ES1 units max out at 56 cores and 288 MB of cache on an eight-channel memory subsystem, spread across two compute chiplets. It is possible that each tile carries either 28 or 30 physical cores, with two cores per chiplet disabled for redundancy purposes in the latter case. Final production processors could up the ante to around 84-90 cores. A Tom's Hardware analysis of Yuuki_AnS's slide proposes that: "the compute chiplets are made on Intel 3 (3 nm-class) process technology, whereas HSIO chiplets are fabbed on a 7 nm-class production node, which is a proven technology and is considered to be optimal for modern I/O chiplets in terms of performance and costs."
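For a quick sanity check of those numbers, here is a back-of-the-envelope sketch; the per-tile layouts and the three-tile production package are assumptions used only to illustrate the arithmetic, not confirmed specifications:

```python
# Core-count arithmetic for the leaked "Granite Rapids-SP" ES1 figures.
# 56 cores over 2 chiplets and the 84-90 core production estimate come from
# the leak; the per-tile layouts below are illustrative assumptions.

ES1_CORES = 56
ES1_CHIPLETS = 2

active_per_tile = ES1_CORES // ES1_CHIPLETS  # 28 active cores per compute tile
print(f"Active cores per ES1 tile: {active_per_tile}")

# If each tile physically carries 30 cores, two per tile would be fused off
# for yield/redundancy; with 28 per tile, nothing is disabled.
for physical_per_tile in (28, 30):
    disabled = physical_per_tile - active_per_tile
    print(f"{physical_per_tile} cores per tile -> {disabled} disabled per tile")

# A hypothetical three-tile production package lands in the quoted 84-90 core window.
for physical_per_tile in (28, 30):
    print(f"3 tiles x {physical_per_tile} cores = {3 * physical_per_tile} cores")
```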
Source:
Tom's Hardware
12 Comments on Intel Demos 6th Gen Xeon Scalable CPUs, Core Counts Leaked
Single-threaded workloads would have low performance.
Same core counts
Higher TDP
Such innovation!
(I don't, except for the new zstd compression)
I would be very surprised to see those clocks in the final product, especially since AMD can do way better.

This is a next-next-gen product manufactured on a completely new node, so it's expected that early samples have lower clocks.

This is only for the E-core based Xeons, while the P-core ones will have SMT. I don't think it's related to having chiplets, since the current Sapphire Rapids Xeons are also chiplet-based and feature SMT. The SMT-less E-core Xeons are targeted at specific segments, mostly cloud computing, for which it is not a desirable feature. AMD also has the EPYC 9754S with factory-disabled SMT, which I find unusual given that you can already disable SMT in the BIOS. Not that it matters much, since cloud vendors get specific off-market SKUs anyway.
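As a side note on the SMT point above, here is a minimal sketch of how SMT can be checked and switched off at runtime on Linux through the standard /sys/devices/system/cpu/smt interface (assuming a kernel recent enough to expose it), without needing a factory-fused SKU or a trip to the BIOS:

```python
# Minimal sketch, Linux-only: read and toggle SMT via sysfs.
# Assumes a kernel that exposes /sys/devices/system/cpu/smt (4.19+).
from pathlib import Path

SMT_CONTROL = Path("/sys/devices/system/cpu/smt/control")
SMT_ACTIVE = Path("/sys/devices/system/cpu/smt/active")

def smt_status() -> str:
    """Return the current SMT control state: on, off, forceoff or notsupported."""
    return SMT_CONTROL.read_text().strip()

def disable_smt() -> None:
    """Disable SMT for the running system (requires root); lasts until reboot."""
    SMT_CONTROL.write_text("off")

if __name__ == "__main__":
    print("SMT control:", smt_status())
    print("SMT active: ", SMT_ACTIVE.read_text().strip())
```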
Both Intel 4th gen Xeons and Zen 4 EPYCs have higher clocks than what this ES1 presents.
My point was that this is just an engineering sample, so the clocks shouldn't be taken as final. I probably shouldn't have compared it to AMD but to Intel's current gen; however, that was the only solid source of raw clocks I remembered at the time. It's not something that gets tested often.
On the other hand, Zen 4c is the same core as Zen 4, with the same capabilities, just with less cache, lower clocks, and a slightly different structure due to having two CCXs on the CCD.
Your metric of perf per mm² gained can be easily calculated for Zen 4c, but for E-cores it's significantly harder due to their differences. It might work for workloads not utilizing anything above AVX2, but even then, the cache structure complicates MT measurements.
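For illustration, here is a small sketch of the perf-per-mm² metric being discussed; the scores and core areas below are placeholder values, not measurements, and only show why the comparison is straightforward between Zen 4 and Zen 4c (same core, same capabilities) but murkier against E-cores with different ISA support and cache structure:

```python
# Sketch of a perf-per-mm2 comparison. All figures are hypothetical placeholders
# chosen for illustration; real numbers depend on workload and measured die area.

def perf_per_mm2(score: float, core_area_mm2: float) -> float:
    """Throughput score normalized by the silicon area of one core (plus its private cache)."""
    return score / core_area_mm2

# Hypothetical inputs: same core design, Zen 4c trades cache/clocks for density.
examples = {
    "Zen 4  (hypothetical)": (100.0, 3.8),
    "Zen 4c (hypothetical)": (90.0, 2.5),
}

for name, (score, area) in examples.items():
    print(f"{name}: {perf_per_mm2(score, area):.1f} points/mm^2")
```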