
Intel to Detail "Alder Lake" and "Sapphire Rapids" Microarchitectures at Hot Chips 33, This August

btarunr

Editor & Senior Moderator
Staff member
Joined
Oct 9, 2007
Messages
47,294 (7.53/day)
Location
Hyderabad, India
System Name RBMK-1000
Processor AMD Ryzen 7 5700G
Motherboard ASUS ROG Strix B450-E Gaming
Cooling DeepCool Gammax L240 V2
Memory 2x 8GB G.Skill Sniper X
Video Card(s) Palit GeForce RTX 2080 SUPER GameRock
Storage Western Digital Black NVMe 512GB
Display(s) BenQ 1440p 60 Hz 27-inch
Case Corsair Carbide 100R
Audio Device(s) ASUS SupremeFX S1220A
Power Supply Cooler Master MWE Gold 650W
Mouse ASUS ROG Strix Impact
Keyboard Gamdias Hermes E2
Software Windows 11 Pro
Intel will detail its 12th Gen Core "Alder Lake" client and "Sapphire Rapids" server CPU microarchitectures at the Hot Chips 33 conference this August. In fact, Intel's presentation leads the CPU sessions on the opening day of August 23. "Alder Lake" will be the session opener, followed by AMD's presentation of the already-launched "Zen 3," and IBM's 5 GHz Z processor powering its next-gen mainframes. A talk on Xeon "Sapphire Rapids" follows. Hot Chips is predominantly an engineering conclave, where highly technical sessions are presented by engineers from major semiconductor firms, so the sessions on "Alder Lake" and "Sapphire Rapids" are expected to be very juicy.

"Alder Lake" is Intel's attempt at changing the PC ecosystem by introducing hybrid CPU cores, a concept introduced to the x86 machine architecture with "Lakefield." The processor will also support next-generation I/O, such as DDR5 memory. The "Sapphire Rapids" server CPU microarchitecture will see an increase in CPU core counts, next-gen I/O such as PCI-Express 5.0, CXL 1.1, DDR5 memory, and more.



 
Joined
Oct 25, 2019
Messages
203 (0.11/day)
Honestly stoked for this, innovation coming from Intel's side as usual. Superior implementation to Zen 4's I/O-chip-addled, high-latency architecture.
 
Joined
Oct 1, 2006
Messages
4,934 (0.74/day)
Location
Hong Kong
Processor Core i7-12700k
Motherboard Z690 Aero G D4
Cooling Custom loop water, 3x 420 Rad
Video Card(s) RX 7900 XTX Phantom Gaming
Storage Plextor M10P 2TB
Display(s) InnoCN 27M2V
Case Thermaltake Level 20 XT
Audio Device(s) Soundblaster AE-5 Plus
Power Supply FSP Aurum PT 1200W
Software Windows 11 Pro 64-bit
Honestly stoked for this, innovation coming from Intel's side as usual. Superior implementation to Zen 4's I/O-chip-addled, high-latency architecture.
LoL.
You should look up the latency of Alder Lake on DDR5. This is why they are considering supporting DDR4 as well.
 
Joined
Apr 24, 2021
Messages
281 (0.21/day)
LoL.
You should look up the latency of Alder Lake on DDR5. This is why they are considering supporting DDR4 as well.
I thought the reason for continued DDR4 support was because of DDR5's price at launch, not its latency.
 
D

Deleted member 197223

Guest
Let's say you have 16 threads, and 8 of those are used for rendering (or anything else, e.g. compiling) while the others are used for gaming. How are the smaller cores going to help in this situation? Let alone if you start doing other things like running YouTube or whatever in the background.

With the above split currently, each application will just use as many resources as it needs and you'll really have no issues whatsoever unless you start getting bottlenecked by the actual CPU. But wouldn't big.LITTLE start stealing threads from other applications which might actually need them, because it thinks it knows what's best for the user?

I don't see how an "intelligent" hardware scheduler is going to handle this decently anytime soon.
 
Joined
Oct 1, 2006
Messages
4,934 (0.74/day)
Location
Hong Kong
Processor Core i7-12700k
Motherboard Z690 Aero G D4
Cooling Custom loop water, 3x 420 Rad
Video Card(s) RX 7900 XTX Phantom Gaming
Storage Plextor M10P 2TB
Display(s) InnoCN 27M2V
Case Thermaltake Level 20 XT
Audio Device(s) Soundblaster AE-5 Plus
Power Supply FSP Aurum PT 1200W
Software Windows 11 Pro 64-bit
I thought the reason for continued DDR4 support was because of DDR5's price at launch, not its latency.
TBH it is likely both; I doubt it will sell well with the first wave of DDR5 being expensive and not very fast.
 
Joined
Oct 20, 2017
Messages
135 (0.05/day)
I don't have much trust in Alder Lake. A HW scheduler "baked" into silicon - how do you update that?
 
Joined
Nov 6, 2016
Messages
1,773 (0.60/day)
Location
NH, USA
System Name Lightbringer
Processor Ryzen 7 2700X
Motherboard Asus ROG Strix X470-F Gaming
Cooling Enermax Liqmax III 360mm AIO
Memory G.Skill Trident Z RGB 32GB (8GBx4) 3200Mhz CL 14
Video Card(s) Sapphire RX 5700XT Nitro+
Storage Hp EX950 2TB NVMe M.2, HP EX950 1TB NVMe M.2, Samsung 860 EVO 2TB
Display(s) LG 34BK95U-W 34" 5120 x 2160
Case Lian Li PC-O11 Dynamic (White)
Power Supply BeQuiet Straight Power 11 850w Gold Rated PSU
Mouse Glorious Model O (Matte White)
Keyboard Royal Kludge RK71
Software Windows 10
Socket 1700 + DDR5, come on already... took 5 years just to get this far.
Motherboard manufacturers will have a choice between DDR5 and DDR4 with Alder Lake; a motherboard won't be able to do both (well, technically it can, but from a profitability standpoint I wouldn't expect to see this; ASRock had a model that did this with two slots for DDR3 and two slots for DDR4). Seeing as DDR5 price and availability will likely not be good at the launch of Alder Lake, I suspect that motherboard manufacturers will go with DDR4 for the vast majority of motherboards, with DDR5 capability reserved for the $500+ models. So, in terms of seeing the "democratization" of DDR5, I think this won't be reality until Zen 4.
 
Joined
Feb 11, 2009
Messages
5,570 (0.96/day)
System Name Cyberline
Processor Intel Core i7 2600k -> 12600k
Motherboard Asus P8P67 LE Rev 3.0 -> Gigabyte Z690 Aorus Elite DDR4
Cooling Tuniq Tower 120 -> Custom Watercoolingloop
Memory Corsair (4x2) 8gb 1600mhz -> Crucial (8x2) 16gb 3600mhz
Video Card(s) AMD RX480 -> RX7800XT
Storage Samsung 750 Evo 250gb SSD + WD 1tb x 2 + WD 2tb -> 2tb NVMe SSD
Display(s) Philips 32inch LPF5605H (television) -> Dell S3220DGF
Case Antec 600 -> Thermaltake Tenor HTPC case
Audio Device(s) Focusrite 2i4 (USB)
Power Supply Seasonic 620watt 80+ Platinum
Mouse Elecom EX-G
Keyboard Rapoo V700
Software Windows 10 Pro 64bit
Honestly stoked for this, innovation coming from Intel's side as usual. Superior implementation to Zen 4's I/O-chip-addled, high-latency architecture.

Must be tough being an Intel shill atm, guessing that is why I have not seen you around all that much.

Let's say you have 16 threads, and 8 of those are used for rendering (or anything else, e.g. compiling) while the others are used for gaming. How are the smaller cores going to help in this situation? Let alone if you start doing other things like running YouTube or whatever in the background.

With the above split currently, each application will just use as many resources as it needs and you'll really have no issues whatsoever unless you start getting bottlenecked by the actual CPU. But wouldn't big.LITTLE start stealing threads from other applications which might actually need them, because it thinks it knows what's best for the user?

I don't see how an "intelligent" hardware scheduler is going to handle this decently anytime soon.

I think it's just all those background tasks you know Windows is performing (just look at your task manager), plus lightweight tasks I imagine (maybe in the future) like Word or Paint or, heck, browser stuff; all of that will be done by the small cores.

Then the big cores are used for, indeed, your gaming or rendering etc.

Actually, it's probably the case that by default the little cores don't do anything, and that programs are added via updates which then allow themselves to be run on the little cores. With speculation that those little cores are actually pretty decent, that could end up being quite a few programs.
 
D

Deleted member 197223

Guest
Must be tough being an Intel shill atm, guessing that is why I have not seen you around all that much.



I think it's just all those background tasks you know Windows is performing (just look at your task manager), plus lightweight tasks I imagine (maybe in the future) like Word or Paint or, heck, browser stuff; all of that will be done by the small cores.

Then the big cores are used for, indeed, your gaming or rendering etc.

Actually, it's probably the case that by default the little cores don't do anything, and that programs are added via updates which then allow themselves to be run on the little cores. With speculation that those little cores are actually pretty decent, that could end up being quite a few programs.
Most "everyday" apps and services (i.e drivers) are still single threaded though and live on as 32bit even in 2021 let alone 2025 and beyond. I have purged most UWP apps myself so most services only need 1 thread they all can share that they will never ever be fully utilized. So why would 7 more threads be more logical let alone more power efficient? Heck people only went 64 bit for a bigger memory allocation and nothing has really changed on the code side of things, it's all being bruteforced if anything. Sure it's slowly changing but you aren't really gonna see a big change anytime soon as can bee seen with the web where you need hundreds upon hundreds of megabytes in order to render a page that is bloated with JS. Why fix things when you can just throw more RAM at the problem. Time is also another factor obviously.
 
Joined
Dec 30, 2010
Messages
2,200 (0.43/day)
Let's say you have 16 threads, and 8 of those are used for rendering (or anything else, e.g. compiling) while the others are used for gaming. How are the smaller cores going to help in this situation? Let alone if you start doing other things like running YouTube or whatever in the background.

With the above split currently, each application will just use as many resources as it needs and you'll really have no issues whatsoever unless you start getting bottlenecked by the actual CPU. But wouldn't big.LITTLE start stealing threads from other applications which might actually need them, because it thinks it knows what's best for the user?

I don't see how an "intelligent" hardware scheduler is going to handle this decently anytime soon.

Uhm, this isn't something new really. Mobile phones have used this tech for a long time. Why would you throw in a big core when the phone is idle or you're simply making a call? You use the smaller cores to handle any of the lightweight processing, thus saving power.

The moment the CPU detects heavy loads coming in, it will switch or assign the threads, in this case, to the bigger cores. Nothing new really.
 
Joined
Oct 1, 2006
Messages
4,934 (0.74/day)
Location
Hong Kong
Processor Core i7-12700k
Motherboard Z690 Aero G D4
Cooling Custom loop water, 3x 420 Rad
Video Card(s) RX 7900 XTX Phantom Gaming
Storage Plextor M10P 2TB
Display(s) InnoCN 27M2V
Case Thermaltake Level 20 XT
Audio Device(s) Soundblaster AE-5 Plus
Power Supply FSP Aurum PT 1200W
Software Windows 11 Pro 64-bit
Uhm, this isn't something new really. Mobile phones have used this tech for a long time. Why would you throw in a big core when the phone is idle or you're simply making a call? You use the smaller cores to handle any of the lightweight processing, thus saving power.

The moment the CPU detects heavy loads coming in, it will switch or assign the threads, in this case, to the bigger cores. Nothing new really.
Android and iOS are *nix-based OSes.
Windows is infamous for messing up NUMA, which is essentially "Big.Big", so I am not keeping my hopes up for M$.
On the few Lakefield systems I got my hands on, the default behavior when I ran Cinebench was to ignore the Ice Lake core and run it on the 4 Atom cores.
If I changed the CPU affinity to force it to run on all cores, it sometimes did the opposite, and just ran it on the single Ice Lake core.
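
For anyone who wants to repeat that kind of experiment in code, below is a minimal sketch of restricting a process to chosen logical CPUs on Windows with SetProcessAffinityMask. The mask values are just examples for a hypothetical 1+4 layout, not the actual Lakefield core numbering, so adjust them for your own topology.

```c
/* affinity.c - minimal sketch: restrict the current process to chosen logical CPUs.
 * Windows only; the masks below are illustrative, not Lakefield-specific. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Bit i of the mask corresponds to logical processor i.
     * 0x01 = CPU 0 only (say, the single big core),
     * 0x1E = CPUs 1-4   (say, the four small cores). */
    DWORD_PTR mask = 0x1E;

    if (!SetProcessAffinityMask(GetCurrentProcess(), mask)) {
        fprintf(stderr, "SetProcessAffinityMask failed: %lu\n", GetLastError());
        return 1;
    }
    printf("Process restricted to affinity mask 0x%llx\n", (unsigned long long)mask);

    /* ...run the benchmark workload here and compare scores across masks... */
    return 0;
}
```

The same effect is available without any code via Task Manager > Details > Set affinity; the point is only that on Lakefield the resulting behavior often wasn't what the chosen mask suggested.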
 
Joined
Jul 3, 2019
Messages
322 (0.16/day)
Location
Bulgaria
Processor 6700K
Motherboard M8G
Cooling D15S
Memory 16GB 3k15
Video Card(s) 2070S
Storage 850 Pro
Display(s) U2410
Case Core X2
Audio Device(s) ALC1150
Power Supply Seasonic
Mouse Razer
Keyboard Logitech
Software 22H2
Honestly stoked for this, innovation coming from Intel's side as usual. Superior implementation to Zen 4's I/O-chip-addled, high-latency architecture.
So you know what Zen 4 would be like. Share it with us, pretty please with a cherry on top. :laugh::roll::laugh::roll::laugh:
 
Joined
Jun 10, 2014
Messages
2,992 (0.78/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
Let's say you have 16 threads, and 8 of those are used for rendering (or anything else, e.g. compiling) while the others are used for gaming. How are the smaller cores going to help in this situation? Let alone if you start doing other things like running YouTube or whatever in the background.
<snip>
But wouldn't big.LITTLE start stealing threads from other applications which might actually need them, because it thinks it knows what's best for the user?
There are normally thousands of background threads, and the OS scheduler does have statistics about the load of each thread.

One concern is latency-sensitive situations; e.g. a game may have one of its threads with a low load but still synchronized, so other threads may have to wait for it, causing stutter etc.

Another problem is the increased load on the OS scheduler in general, having to shuffle threads around more, especially if they suddenly increase their load. This will probably cause stutter. I know the whole OS can come to a crawl if you overload the OS scheduler with many thousands of light threads, with a little load on each.
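
To make the synchronization concern concrete, here is a toy sketch (plain pthreads; the thread count and per-frame work sizes are made up and have nothing to do with any real game engine). All workers meet at a barrier once per "frame", so the frame time is dictated by whichever thread finishes last, e.g. one that got parked on a slower core.

```c
/* stall.c - toy illustration: frame-synchronized worker threads.
 * Each "frame", every worker must reach the barrier before any may continue,
 * so one slow thread (standing in for a thread stuck on a little core)
 * sets the pace for all of them. Sketch only; numbers are arbitrary. */
#include <pthread.h>
#include <stdio.h>

#define WORKERS 4
#define FRAMES  100

static pthread_barrier_t frame_barrier;

static void *worker(void *arg)
{
    long id = (long)arg;
    for (int frame = 0; frame < FRAMES; frame++) {
        /* Simulated per-frame work: worker 0 does 10x the iterations. */
        volatile unsigned long sum = 0;
        unsigned long iters = (id == 0) ? 10000000UL : 1000000UL;
        for (unsigned long i = 0; i < iters; i++)
            sum += i;

        /* Everyone else waits here until the slow worker arrives. */
        pthread_barrier_wait(&frame_barrier);
    }
    return NULL;
}

int main(void)
{
    pthread_t threads[WORKERS];
    pthread_barrier_init(&frame_barrier, NULL, WORKERS);

    for (long i = 0; i < WORKERS; i++)
        pthread_create(&threads[i], NULL, worker, (void *)i);
    for (int i = 0; i < WORKERS; i++)
        pthread_join(threads[i], NULL);

    pthread_barrier_destroy(&frame_barrier);
    puts("done");
    return 0;
}
```

Time it once with equal work on all workers and once with the lopsided split above; the total runtime tracks the slowest thread, which is exactly the stutter risk if a scheduler demotes one synchronized game thread to a little core.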

Most "everyday" apps and services (i.e drivers) are still single threaded though and live on as 32bit even in 2021 let alone 2025 and beyond. …Heck people only went 64 bit for a bigger memory allocation and nothing has really changed on the code side of things, it's all being bruteforced if anything.
This is completely untrue.
64-bit software has nothing to do with maximum memory size, and since people in 2021 still keep repeating this nonsense, the fact police have to step in once again.
When talking about 64-bit CPUs, we are referring to the register width/data width of the CPU, as opposed to the address width, which determines how much memory is directly addressable.
Let's take some historical examples:
Intel 8086: 16-bit CPU with 20-bit address width (1 MB)
Intel 80286: 16-bit CPU with 24-bit address width (16 MB)
MOS 6502 (used in the Commodore 64, Apple II, NES etc.): 8-bit CPU with 16-bit address width (64 kB)
It's just a coincidence that consumer systems, at the time when most people switched to 64-bit CPUs and software, were just supporting 4 GB of RAM, which has led people to believe there is a relation between the two. At the time, 32-bit Xeons supported up to 64 GB in both Windows and Linux.

Regarding usage of 64-bit software, you will find very little 32-bit software running on a modern system.

While 64-bit software certainly makes addressing >4 GB easier, the move to 64-bit was primarily motivated by computationally intensive applications, which got much greater performance. Recompiling anything but assembly code to 64-bit is trivial, and the only significant "cost" is a very marginal increase in memory usage (pointers are twice as large), but this disadvantage is small compared to the benefits of faster computation and easier access beyond 4 GB.
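
If anyone wants to see the register-width vs. address-width distinction for themselves, here is a tiny sketch: build it once as 32-bit and once as 64-bit (e.g. gcc -m32 / -m64) and only the pointer-related sizes change, while 64-bit integer arithmetic works in both builds (the 32-bit one just emulates it more slowly).

```c
/* widths.c - "64-bit" refers to register/data width, not to some magic
 * memory limit; pointers doubling in size is the main memory-usage cost. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    printf("pointer size : %zu bytes\n", sizeof(void *));   /* 4 on -m32, 8 on -m64 */
    printf("size_t       : %zu bytes\n", sizeof(size_t));
    printf("uint64_t     : %zu bytes\n", sizeof(uint64_t));  /* 8 in both builds */

    /* A 32-bit build can still do 64-bit arithmetic, just more slowly,
     * because the compiler splits it across pairs of 32-bit operations. */
    uint64_t big = 0xFFFFFFFFULL * 3ULL;
    printf("64-bit math  : %llu\n", (unsigned long long)big);
    return 0;
}
```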

And FYI, most applications and services running in both Windows and Linux today run 2 or more threads.

Sure, it's slowly changing, but you aren't really going to see a big change anytime soon, as can be seen with the web, where you need hundreds upon hundreds of megabytes in order to render a page that is bloated with JS. Why fix things when you can just throw more RAM at the problem? Time is also another factor obviously.
The problems you describe with JS are real, but they have nothing to do with hardware. JS is a bloated piece of crap, both due to the horrible design of the ECMAScript standards and due to "everyone" writing extremely bloated things in JS.
As someone who has programmed for decades, I look in horror at how even a basic web page can consume a CPU using any of the common JS frameworks. :fear:
 