
Intel to Detail "Alder Lake" and "Sapphire Rapids" Microarchitectures at Hot Chips 33, This August

btarunr

Editor & Senior Moderator
Intel will detail its 12th Gen Core "Alder Lake" client and "Sapphire Rapids" server CPU microarchitectures at the Hot Chips 33 conclave this August. In fact, Intel's presentations lead the CPU sessions on the opening day, August 23. "Alder Lake" will be the session opener, followed by AMD's presentation of the already-launched "Zen 3" and IBM's 5 GHz Z processor powering its next-gen mainframes. A talk on Xeon "Sapphire Rapids" follows. Hot Chips is predominantly an engineering conference, where engineers from major semiconductor firms present highly technical sessions, so the sessions on "Alder Lake" and "Sapphire Rapids" are expected to be very juicy.

"Alder Lake" is Intel's attempt at changing the PC ecosystem by introducing hybrid CPU cores, a concept introduced to the x86 machine architecture with "Lakefield." The processor will also support next-generation I/O, such as DDR5 memory. The "Sapphire Rapids" server CPU microarchitecture will see an increase in CPU core counts, next-gen I/O such as PCI-Express 5.0, CXL 1.1, DDR5 memory, and more.



 
Honestly stoked for this; innovation coming from Intel's side as usual. A superior implementation to Zen 4's I/O-chip-addled, high-latency architecture.
 
Honestly stoked for this; innovation coming from Intel's side as usual. A superior implementation to Zen 4's I/O-chip-addled, high-latency architecture.
LoL.
You should look up the latency of Alder Lake on DDR5. That's why they're considering supporting DDR4 as well.
 
LoL.
You should look up the latency of Alder Lake on DDR5. That's why they're considering supporting DDR4 as well.
I thought the reason for continued DDR4 support was DDR5's price at launch, not its latency.
 
Let's say you have 16 threads, and 8 of those are used for rendering (or anything else, e.g. compiling) while the others are used for gaming. How are the smaller cores going to help in this situation? Let alone if you start doing other things like running YouTube or whatever in the background.

With the above split currently, each application will just use however many resources it needs, and you'll really have no issues whatsoever unless you start getting bottlenecked by the actual CPU. But wouldn't big.LITTLE start stealing threads from other applications which might actually need them, because it thinks it knows what's best for the user?

I don't see how an "intelligent" hardware scheduler is going to handle this decently anytime soon.
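For what it's worth, an application that cares can already opt out of the scheduler's guesses by pinning itself to specific cores. A minimal sketch using Python's standard library (a Linux-only call; pinning to core 0 is just an arbitrary stand-in for "keep the game on the big cores"):

```python
import os

def pin_current_process(cores):
    """Pin the calling process to the given CPU set (Linux-only API)."""
    os.sched_setaffinity(0, cores)   # 0 means the calling process
    return os.sched_getaffinity(0)   # read back the effective mask

if __name__ == "__main__":
    # Restrict this process to core 0 only; a real game could pin its
    # render workers to whichever cores it considers "big".
    print(pin_current_process({0}))
```

Games and renderers already do exactly this kind of pinning today, so a hybrid scheduler is a default, not a straitjacket.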
 
I thought the reason for continued DDR4 support was DDR5's price at launch, not its latency.
TBH, it is likely both; I doubt it will sell well with the first DDR5 modules being expensive and not very fast.
 
I don't have much trust in Alder Lake. A HW scheduler "baked" into silicon - how do you update that?
 
Socket 1700 + DDR5, come on already... it took 5 years just to get this far.
Motherboard manufacturers will have a choice between DDR5 and DDR4 with Alder Lake; a motherboard won't be able to do both (well, technically it can, but from a profitability standpoint I wouldn't expect to see this - ASRock had a model that did this with two slots for DDR3 and two slots for DDR4). Seeing as DDR5 price and availability will likely not be good at the launch of Alder Lake, I suspect that motherboard manufacturers will go with DDR4 for the vast majority of motherboards, with DDR5 capabilities being reserved for the $500+ models. So, in terms of seeing the "democratization" of DDR5, I think this won't be a reality until Zen 4.
 
Honestly stoked for this; innovation coming from Intel's side as usual. A superior implementation to Zen 4's I/O-chip-addled, high-latency architecture.

Must be tough being an Intel shill atm; guessing that's why I haven't seen you around all that much.

Let's say you have 16 threads, and 8 of those are used for rendering (or anything else, e.g. compiling) while the others are used for gaming. How are the smaller cores going to help in this situation? Let alone if you start doing other things like running YouTube or whatever in the background.

With the above split currently, each application will just use however many resources it needs, and you'll really have no issues whatsoever unless you start getting bottlenecked by the actual CPU. But wouldn't big.LITTLE start stealing threads from other applications which might actually need them, because it thinks it knows what's best for the user?

I don't see how an "intelligent" hardware scheduler is going to handle this decently anytime soon.

I think it's just: all those background tasks you know Windows is performing (just look at your Task Manager), plus lightweight tasks I imagine (maybe in the future) like Word or Paint or, heck, browser stuff - all of that will be done by the small cores.

Then the big cores are used for, indeed, your gaming or rendering, etc.

Actually, it's probably so that the little cores by default don't do anything, and programs etc. are added via updates that then allow themselves to be run on the little cores; and with speculation that those little cores are actually pretty decent, that could end up being quite a few programs.
 
Must be tough being an Intel shill atm; guessing that's why I haven't seen you around all that much.



I think it's just: all those background tasks you know Windows is performing (just look at your Task Manager), plus lightweight tasks I imagine (maybe in the future) like Word or Paint or, heck, browser stuff - all of that will be done by the small cores.

Then the big cores are used for, indeed, your gaming or rendering, etc.

Actually, it's probably so that the little cores by default don't do anything, and programs etc. are added via updates that then allow themselves to be run on the little cores; and with speculation that those little cores are actually pretty decent, that could end up being quite a few programs.
Most "everyday" apps and services (i.e drivers) are still single threaded though and live on as 32bit even in 2021 let alone 2025 and beyond. I have purged most UWP apps myself so most services only need 1 thread they all can share that they will never ever be fully utilized. So why would 7 more threads be more logical let alone more power efficient? Heck people only went 64 bit for a bigger memory allocation and nothing has really changed on the code side of things, it's all being bruteforced if anything. Sure it's slowly changing but you aren't really gonna see a big change anytime soon as can bee seen with the web where you need hundreds upon hundreds of megabytes in order to render a page that is bloated with JS. Why fix things when you can just throw more RAM at the problem. Time is also another factor obviously.
 
Let's say you have 16 threads, and 8 of those are used for rendering (or anything else, e.g. compiling) while the others are used for gaming. How are the smaller cores going to help in this situation? Let alone if you start doing other things like running YouTube or whatever in the background.

With the above split currently, each application will just use however many resources it needs, and you'll really have no issues whatsoever unless you start getting bottlenecked by the actual CPU. But wouldn't big.LITTLE start stealing threads from other applications which might actually need them, because it thinks it knows what's best for the user?

I don't see how an "intelligent" hardware scheduler is going to handle this decently anytime soon.

Uhm, this isn't something new, really. Mobile phones have used this tech for a long time. Why would you fire up a big core when the phone is idle or you're simply making a call? You use the smaller cores for that; they handle any of the lightweight processing, thus saving power.

The moment the CPU detects heavy loads coming in, it will switch or assign the threads to the bigger cores. Nothing new, really.
 
Uhm, this isn't something new, really. Mobile phones have used this tech for a long time. Why would you fire up a big core when the phone is idle or you're simply making a call? You use the smaller cores for that; they handle any of the lightweight processing, thus saving power.

The moment the CPU detects heavy loads coming in, it will switch or assign the threads to the bigger cores. Nothing new, really.
Android and iOS are *nix-based OSes.
Windows is infamous for messing up NUMA, which is essentially "big.BIG", so I am not keeping my hopes up for M$.
On the few Lakefield systems I got my hands on, the default behavior when I ran Cinebench was to ignore the Ice Lake core and run it on the 4 Atom cores.
If I change the CPU affinity to force it to run on all cores, it sometimes does the opposite and just runs it on the single Ice Lake core.
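For reference, that affinity override can also be scripted per thread; on Linux the same affinity call accepts a native thread id, so a single worker can be forced onto a chosen core (a sketch; core 0 is an arbitrary choice, not a statement about which core is the big one):

```python
import os
import threading

def worker(result):
    """Pin only this worker thread to core 0 and record the resulting mask."""
    tid = threading.get_native_id()           # Linux thread id (Python 3.8+)
    os.sched_setaffinity(tid, {0})            # affects just this thread
    result.append(os.sched_getaffinity(tid))  # read back the effective mask

if __name__ == "__main__":
    result = []
    t = threading.Thread(target=worker, args=(result,))
    t.start()
    t.join()
    print(result[0])  # {0}
```

Tools like Process Lasso or Task Manager's affinity dialog do the process-wide version of the same thing, which is presumably what was used on the Lakefield boxes above.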
 
Honestly stoked for this; innovation coming from Intel's side as usual. A superior implementation to Zen 4's I/O-chip-addled, high-latency architecture.
So you know what Zen 4 will be like? Share it with us, pretty please with a cherry on top. :laugh::roll::laugh::roll::laugh:
 
Let's say you have 16 threads, and 8 of those are used for rendering (or anything else, e.g. compiling) while the others are used for gaming. How are the smaller cores going to help in this situation? Let alone if you start doing other things like running YouTube or whatever in the background.
<snip>
But wouldn't big.LITTLE start stealing threads from other applications which might actually need them, because it thinks it knows what's best for the user?
There are normally thousands of background threads, and the OS scheduler does have statistics about the load of each thread.

One concern is latency-sensitive situations; e.g. a game may have one of its threads with low load but still synchronized, so other threads may have to wait for it, causing stutter etc.

Another problem is the increased load on the OS scheduler in general, having to shuffle threads around more, especially if they suddenly increase their load. This will probably cause stutter. I know the whole OS can come to a crawl if you overload the OS scheduler with many thousands of light threads, with a little load on each.
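The synchronization point is easy to demonstrate without any hybrid hardware: if one worker in a frame's barrier is slow (as it would be if demoted to a small core), the whole frame is gated by it, no matter how light the total work is. A toy sketch, with a sleep standing in for the demoted core:

```python
import threading
import time

def run_frame(worker_delays):
    """Simulate one game frame: every worker must reach the barrier
    before the frame completes. Returns the wall-clock frame time."""
    barrier = threading.Barrier(len(worker_delays) + 1)  # workers + main

    def worker(delay):
        time.sleep(delay)   # stand-in for this worker's per-frame load
        barrier.wait()      # frame can't end until everyone arrives

    threads = [threading.Thread(target=worker, args=(d,))
               for d in worker_delays]
    start = time.perf_counter()
    for t in threads:
        t.start()
    barrier.wait()          # main thread waits out the whole frame
    for t in threads:
        t.join()
    return time.perf_counter() - start

if __name__ == "__main__":
    # Three fast workers and one slow one: the frame takes at least as
    # long as the slowest worker, i.e. the "demoted" thread sets the pace.
    print(run_frame([0.01, 0.01, 0.01, 0.2]) >= 0.2)
```

Which is exactly why a scheduler that guesses wrong about one synchronized game thread shows up as stutter rather than as a small average slowdown.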

Most "everyday" apps and services (i.e drivers) are still single threaded though and live on as 32bit even in 2021 let alone 2025 and beyond. …Heck people only went 64 bit for a bigger memory allocation and nothing has really changed on the code side of things, it's all being bruteforced if anything.
This is completely untrue.
64-bit software has nothing to do with maximum memory size, and since people in 2021 still keep repeating this nonsense, the fact police have to step in once again.
When talking about 64-bit CPUs we are referring to the register width/data width of the CPU, contrary to the address width which determines directly addressable memory.
Let's take some historical examples;
Intel 8086: 16-bit CPU with 20-bit address width (1 MB)
Intel 80286: 16-bit CPU with 24-bit address width (16 MB)
MOS 6502 (used in the Commodore 64, Apple II, NES etc.): 8-bit CPU with 16-bit address width (64 kB)
It's just a coincidence that consumer systems, at the time when most people switched to 64-bit CPUs and software, supported just 4 GB of RAM, which has led people to believe there is a relation between the two. At the time, 32-bit Xeons supported up to 64 GB in both Windows and Linux.

Regarding usage of 64-bit software, you will find very little 32-bit software running on a modern system.

While 64-bit software certainly makes addressing >4 GB easier, the move to 64-bit was primarily motivated by computationally intensive applications, which got much greater performance. Recompiling anything but assembly code to 64-bit is trivial, and the only significant "cost" is a very marginal increase in memory usage (pointers are twice as large), but this disadvantage is small compared to the benefits of faster computation and easier access beyond 4 GB.
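The register-width vs. address-width distinction, and the "pointers are twice as large" cost, can be checked directly from Python's standard library (the 8-byte pointer result assumes a typical 64-bit build):

```python
import ctypes
import struct

def word_sizes():
    """Return (pointer_bytes, c_int_bytes) for the running interpreter."""
    return ctypes.sizeof(ctypes.c_void_p), ctypes.sizeof(ctypes.c_int)

if __name__ == "__main__":
    ptr_bytes, int_bytes = word_sizes()
    # On a typical 64-bit build, pointers are 8 bytes while a plain C int
    # stays 4 bytes: address width and data width are independent knobs.
    print(ptr_bytes, int_bytes)
    # struct reports the same pointer size without ctypes.
    print(struct.calcsize("P") == ptr_bytes)
```

The fact that `int` stays 4 bytes on common 64-bit platforms is the same decoupling as the 8086's 16-bit registers with a 20-bit address bus, just with bigger numbers.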

And FYI, most applications and services running in both Windows and Linux today run 2 or more threads.

Sure, it's slowly changing, but you aren't really gonna see a big change anytime soon, as can be seen with the web, where you need hundreds upon hundreds of megabytes in order to render a page that is bloated with JS. Why fix things when you can just throw more RAM at the problem? Time is also another factor, obviously.
The problems you describe with JS are real, but they have nothing to do with hardware. JS is a bloated piece of crap, both due to the horrible design of the ECMAScript standards and due to "everyone" writing extremely bloated things in JS.
As someone who has programmed for decades, I look in horror at how even a basic web page can consume a CPU using any of the common JS frameworks. :fear:
 