T0@st
News Editor
Intel's advanced packaging prowess demonstration took place this week; attendees were able to get an early-ish look at Team Blue's sixth-generation Xeon Scalable "Granite Rapids" processors. This multi-tile, datacenter-oriented CPU family is projected to hit the market within the first half of 2024, but reports suggest that key enterprise clients have recently received evaluation samples. Coincidentally, renowned hardware leaker Yuuki_AnS has managed to source more information from industry insiders, following their complete blowout of more mainstream Raptor Lake Refresh desktop SKUs.
The leaked slide presents a bunch of evaluation sample "Granite Rapids-SP" XCC and "Sierra Forest" HCC SKUs. Intel has not officially published core counts for these upcoming "Avenue City" platform product lines. According to Intel's official marketing blurb: "Intel Xeon processors with P-cores (Granite Rapids) are optimized to deliver the lowest total cost of ownership (TCO) for high-core performance-sensitive workloads and general-purpose compute workloads. Today, Xeon enables better AI performance than any other CPU, and Granite Rapids will further enhance AI performance. Built-in accelerators give an additional boost to targeted workloads for even greater performance and efficiency."
The more frugal family is described as: "Intel Xeon processors with E-cores (Sierra Forest) are enhanced to deliver density-optimized compute in the most power-efficient manner. Xeon processors with E-cores provide best-in-class power-performance density, offering distinct advantages for cloud-native and hyperscale workloads."
The leaked information suggests that the listed "Granite Rapids-SP" ES1 units max out at 56 cores and 288 MB of cache, spread across two compute chiplets on an eight-channel memory subsystem. It is possible that each tile physically carries 28 or 30 cores, with two cores per chiplet disabled for redundancy purposes. Final production processors could up the ante to around 84 to 90 cores. A Tom's Hardware analysis of Yuuki_AnS's slide proposes that: "the compute chiplets are made on Intel 3 (3 nm-class) process technology, whereas HSIO chiplets are fabbed on a 7 nm-class production node, which is a proven technology and is considered to be optimal for modern I/O chiplets in terms of performance and costs."
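As a rough back-of-envelope check on those figures, the arithmetic below shows how 56 cores on two tiles and 84 to 90 cores in production could line up. The per-tile counts and the three-tile production layout are assumptions drawn from the 28-to-30-core reading above, not anything confirmed by the leaked slide:

```python
# Back-of-envelope core-count math for the leaked "Granite Rapids-SP" figures.
# The per-tile counts and the three-tile production configuration below are
# assumptions for illustration, not values confirmed by the slide.

def enabled_cores(tiles: int, cores_per_tile: int, disabled_per_tile: int = 0) -> int:
    """Total usable cores across all compute tiles."""
    return tiles * (cores_per_tile - disabled_per_tile)

# ES1 sample: two tiles of 30 cores, with two fused off per tile for redundancy.
print(enabled_cores(tiles=2, cores_per_tile=30, disabled_per_tile=2))  # 56

# Hypothetical production XCC part: three fully enabled tiles of 28 to 30 cores.
print(enabled_cores(tiles=3, cores_per_tile=28))  # 84
print(enabled_cores(tiles=3, cores_per_tile=30))  # 90
```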
View at TechPowerUp Main Site | Source