
Intel "Alder Lake-S" Comes in a 6+0 Core Die Variant

Joined
Mar 28, 2020
Messages
1,763 (1.01/day)
It's obvious that the bigger die goes to the higher-end i5-12600 and above SKUs; the budget SKUs will get the crippled chip. I suspect it is not just the E-cores that got chopped, but also things like cache size and likely the graphics solution as well. Typical Intel strategy: they will always try to create some differences to milk their customers. Anyway, from a gaming perspective, I think the efficient cores get disabled anyway, so it's no big deal losing them. The only question is how much cache is left, as that may have some impact on gaming performance.
 
Joined
Mar 16, 2017
Messages
2,170 (0.76/day)
Location
Tanagra
System Name Budget Box
Processor Xeon E5-2667v2
Motherboard ASUS P9X79 Pro
Cooling Some cheap tower cooler, I dunno
Memory 32GB 1866-DDR3 ECC
Video Card(s) XFX RX 5600XT
Storage WD NVME 1GB
Display(s) ASUS Pro Art 27"
Case Antec P7 Neo
Shouldn't the conclusion there be that the issue is with the Adrenalin driver, not Windows 11? Unless there is some proof that Windows 10 with/without display drivers has notably lower latency. I'm also unsure how increased L3 cache and DRAM latency would lead to a 10-20% performance loss in games, which is traditionally not an activity bound by memory speed.
Curious if there is an interplay between Windows 11 and the driver, where it's looking for a specific CPUID to fix the latency issue. Once you swap the CPU, the software no longer sees the right CPU, and the performance regression emerges because the driver/Windows forced a setting that is CPU-specific. Also, not all Alder Lake SKUs are hybrids, so Windows probably can't just do an architectural or generational test, but instead has to look for more specific CPUIDs. Could the Windows 11 scheduler be looking for certain SKUs specifically, and swapping CPUs makes a mess of things?
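For what it's worth, software wouldn't have to match specific SKUs just to tell a hybrid part apart: CPUID itself advertises hybrid support and the type of the core the code is currently running on. A minimal sketch of what such a check could look like (GCC/Clang on x86; the leaf and bit numbers are from Intel's public docs, and this is only an illustration, not what Windows or Adrenalin actually do):

/* Hypothetical check: detect a hybrid CPU and the type of the current core. */
#include <stdio.h>
#include <cpuid.h>   /* GCC/Clang helper for the CPUID instruction */

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* Leaf 0x07, sub-leaf 0: EDX bit 15 is the "Hybrid" flag. */
    __cpuid_count(0x07, 0, eax, ebx, ecx, edx);
    int hybrid = (edx >> 15) & 1;
    printf("Hybrid part: %s\n", hybrid ? "yes" : "no");

    if (hybrid) {
        /* Leaf 0x1A: EAX bits 31:24 give the type of the core this thread is
           running on (0x20 = Atom/E-core, 0x40 = Core/P-core). */
        __cpuid_count(0x1A, 0, eax, ebx, ecx, edx);
        unsigned int core_type = eax >> 24;
        printf("This core: %s\n", core_type == 0x40 ? "P-core" :
                                  core_type == 0x20 ? "E-core" : "unknown");
    }
    return 0;
}

If the scheduler keyed off something like that rather than a SKU list, swapping between hybrid and non-hybrid Alder Lake parts shouldn't confuse it, which makes the CPU-swap regression smell more like stale, CPU-specific cached state than a detection failure.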
 
Joined
May 31, 2016
Messages
4,446 (1.42/day)
Location
Currently Norway
System Name Bro2
Processor Ryzen 5800X
Motherboard Gigabyte X570 Aorus Elite
Cooling Corsair h115i pro rgb
Memory 32GB G.Skill Flare X 3200 CL14 @3800MHz CL16
Video Card(s) Powercolor 6900 XT Red Devil 1.1V@2400MHz
Storage M.2 Samsung 970 Evo Plus 500GB / Samsung 860 Evo 1TB
Display(s) LG 27UD69 UHD / LG 27GN950
Case Fractal Design G
Audio Device(s) Realtek 5.1
Power Supply Seasonic 750W GOLD
Mouse Logitech G402
Keyboard Logitech slim
Software Windows 10 64 bit
Curious if there is an interplay between Windows 11 and the driver, where it's looking for a specific CPUID to fix the latency issue. Once you swap the CPU, the software no longer sees the right CPU, and the performance regression emerges because the driver/Windows forced a setting that is CPU-specific. Also, not all Alder Lake SKUs are hybrids, so Windows probably can't just do an architectural or generational test, but instead has to look for more specific CPUIDs. Could the Windows 11 scheduler be looking for certain SKUs specifically, and swapping CPUs makes a mess of things?
Sure, but why some people associate this issue with the Adrenalin driver is mind-boggling. This is a CPU issue, not a graphics one.
 
Joined
Sep 4, 2020
Messages
93 (0.06/day)
IDK why people still think Windows 10 has no use for E-cores.
Nov 4th can't come soon enough to put these assumptions to rest.
Because Win10 is rumoured not to run Alder Lake properly and only recognise the smaller E-cores.
So the 6P+0E parts will not have that problem!
 
Joined
Oct 8, 2015
Messages
774 (0.23/day)
Location
Earth's Troposphere
System Name 3 "rigs"-gaming/spare pc/cruncher
Processor R7-5800X3D/i7-7700K/R9-7950X
Motherboard Asus ROG Crosshair VI Extreme/Asus Ranger Z170/Asus ROG Crosshair X670E-GENE
Cooling Bitspower monoblock, custom open loop, both passive and active/air tower cooler/air tower cooler
Memory 32GB DDR4/32GB DDR4/64GB DDR5
Video Card(s) Gigabyte RX6900XT Alphacooled/AMD RX5700XT 50th Aniv./SOC(onboard)
Storage mix of sata ssds/m.2 ssds/mix of sata ssds+an m.2 ssd
Display(s) Dell UltraSharp U2410, HP 24x
Case mb box/Silverstone Raven RV-05/CoolerMaster Q300L
Audio Device(s) onboard/onboard/onboard
Power Supply 3 Seasonics, a Delta Electronics, a Fractal Design
Mouse various/various/various
Keyboard various wired and wireless
VR HMD -
Software W10.something or another, all 3
Will these be of the RBMK-1000 v1.0 type?
I'm sorry, it's just some Intel ads popping up on YouTube hinting at believing, and that's it.
 
Joined
Aug 13, 2010
Messages
5,485 (1.04/day)
Because Win10 is rumoured not to run Alder Lake properly and only recognise the smaller E-cores.
So the 6P+0E parts will not have that problem!
IDK why those rumors keep circulating despite people who literally have those processors telling everyone they're wrong.
Less than 24 hours left to put those silly rumors to rest.

Thank goodness
 
Joined
Apr 16, 2019
Messages
632 (0.30/day)
IDK why those rumors keep circulating despite people who literally have those processors telling everyone they're wrong.
Less than 24 hours left to put those silly rumors to rest.

Thank goodness
That also means less than 24 hours until a lot of butt-hurt AMD fanboys start scrambling; I can hardly wait to see them in action, making up excuses for why the 5800X gets wrecked by the 12600K :p
 
Joined
Jan 17, 2021
Messages
1 (0.00/day)
I do hate the inclusion of E-cores. It's tech that nobody asked for in a desktop, high-performance environment. It might make sense on mobile platforms, though.

Intel has been erratic and misleading lately; such a shame (on them).
 
Joined
Jan 29, 2021
Messages
1,881 (1.31/day)
Location
Alaska USA
I do hate the inclusion of E-cores. It's tech that nobody asked for in a desktop, high-performance environment. It might make sense on mobile platforms, though.

Intel has been erratic and misleading lately; such a shame (on them).
Nobody is forced to purchase an Alder Lake CPU with E-cores, or didn't you know that?
 
Joined
Mar 21, 2016
Messages
2,508 (0.78/day)
Not to mention the E-cores are efficient and run a lot cooler than the P-cores. I saw a screenshot of the max temperatures across each core: the lowest max among the P-cores was around 63°C, while the highest max among the E-cores was 47°C. That's a 16°C difference, which is pretty significant to me. I know the P-cores are clocked higher, but it's still a huge difference in temperature out of the box, before you even start tinkering with the multipliers and voltages on the P-cores and E-cores.

There isn't much overclocking headroom on the P-cores; we all basically knew that before launch. What I'm not sure about is whether you can manually drop the multiplier by 1 across all P-cores, then 1 more for each additional P-core, in a staircase pattern, to scale back the P-cores' power draw and temperatures and use that extra leeway to get more out of the E-cores' max frequency. I'd really like to know whether overall temperature and/or efficiency ends up better by underclocking the P-cores slightly and overclocking the E-cores a bit with some of that extra thermal and power headroom.

In fact, it might work out a bit better overall with the Windows 11 scheduler for programs that inadvertently get assigned to the E-cores, and you might get better sustained boost on the first few P-cores as well. I'd like to know more about just how flexibly the different clock frequencies and multipliers can be adjusted. I feel like underclocking the P-cores and overclocking the E-cores has the potential to strike a better balance between power draw and temperatures on Alder Lake, and that's especially true of the 12900K, because it has two clusters of four E-cores, each cluster with its own multiplier that can be adjusted dynamically, so half of the E-cores on the 12900K can drop their multiplier by 1. I wish each individual E-core's multiplier could be raised or lowered like the P-cores' can, but it's still nice to know it works per cluster of four. I hope Intel doubles down on E-core multiplier adjustability in its next go at big.LITTLE, along with packing in even more of these energy- and temperature-efficient cores.

Having looked at how the Intel overclocking software looks in some screenshots, I feel like an unlocked 2C+8C desktop part might not be as interesting at present as I originally hoped, though a 2C+16C would be, and a 1C+16C might be as well. That would at least allow more adjustment of the E-core clusters. I feel that's an area of overclocking that deserves a strong look, even going as far as intentionally dropping the P-core multipliers to see what you can eke out of the E-cores. I feel like the P-cores might be limiting E-core frequency scaling, perhaps due to temperatures, since they run vastly hotter. I haven't stumbled upon anyone deliberately trying to overclock the E-cores, though.
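One crude way to poke at this without touching multipliers at all: on Linux you can cap the P-cores' maximum frequency through cpufreq and watch whether the E-cores gain any headroom. A rough sketch, assuming intel_pstate is driving cpufreq and that logical CPUs 0-15 are the hyperthreaded P-cores (typical for a 12900K, but check lscpu -e, the numbering isn't guaranteed), run as root:

/* Hypothetical experiment: cap the P-cores' max frequency via cpufreq. */
#include <stdio.h>

int main(void)
{
    const unsigned int p_core_cap_khz = 4700000;   /* hypothetical cap: 4.7 GHz, in kHz */

    for (int cpu = 0; cpu <= 15; cpu++) {          /* assumed P-core logical CPU range */
        char path[128];
        snprintf(path, sizeof path,
                 "/sys/devices/system/cpu/cpu%d/cpufreq/scaling_max_freq", cpu);
        FILE *f = fopen(path, "w");
        if (!f) { perror(path); continue; }        /* needs root; skip CPUs we can't touch */
        fprintf(f, "%u\n", p_core_cap_khz);
        fclose(f);
    }
    return 0;
}

That only caps clocks; it isn't a real multiplier or voltage change like the BIOS or XTU would do, but it's enough to see whether the E-cores sustain higher boost once the P-cores stop eating the thermal budget.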
 
Joined
Mar 24, 2011
Messages
2,356 (0.47/day)
Location
VT
Processor Intel i7-10700k
Motherboard Gigabyte Z490 Aorus Ultra
Cooling Corsair H100i RGB
Memory 32GB (4x8GB) Corsair Vengeance DDR4-3200MHz
Video Card(s) MSI Gaming Trio X 3070 LHR
Display(s) ASUS MG278Q / AOC G2590FX
Case Corsair X4000 iCue
Audio Device(s) Onboard
Power Supply Corsair RM650x 650W Fully Modular
Software Windows 10
Sure, but why some people associate this issue with the Adrenalin driver is mind-boggling. This is a CPU issue, not a graphics one.
Why does the latency go up only when the Adrenalin driver is installed, then? Unless the images provided are missing some other context, which is why I asked about it. If you're going to post a side-by-side of A (W11 with no driver) and B (W11 with driver), and the latter has increased latency, the implication is that the only thing that changed was the Adrenalin driver, implying it has something to do with the problem.
 
Joined
Feb 1, 2019
Messages
3,669 (1.70/day)
Location
UK, Midlands
System Name Main PC
Processor 13700k
Motherboard Asrock Z690 Steel Legend D4 - Bios 13.02
Cooling Noctua NH-D15S
Memory 32 Gig 3200CL14
Video Card(s) 4080 RTX SUPER FE 16G
Storage 1TB 980 PRO, 2TB SN850X, 2TB DC P4600, 1TB 860 EVO, 2x 3TB WD Red, 2x 4TB WD Red
Display(s) LG 27GL850
Case Fractal Define R4
Audio Device(s) Soundblaster AE-9
Power Supply Antec HCG 750 Gold
Software Windows 10 21H2 LTSC
Lemme guess... 12400[F] + B660 board is going to be the most popular config this gen, because enthusiasts seem to collectively hate E-cores? Intel is probably gonna gimp the boost clocks hard compared to the 12600K, though.

That's a great way to reap the benefits of Golden Cove without being forced to bet everything on Microsoft and Thread Director doing their jobs properly. I can't help but wonder if you can just disable E-cores altogether on all of these chips though, especially for benching and whatnot.
Clocks will almost certainly be significantly lower.

There is the crowd who will feel they don't need 20 threads but still want that single-core performance, and Intel won't want people to be able to get that on the cheaper SKUs.

Bear in mind, though, that the chips used are also lower quality, so it's not like you can take an i5 and clock it to i9 speeds without some very good luck and a tolerance for high voltages. I used to think you could, many years ago when I still bought i5s, but after bad experiences trying to clock up my i5s without insane voltage levels, and especially after seeing a published graph showing consistently across 100+ samples that i7s needed less vcore to hit the same clocks as an i5, it's clear there is a binning process going on. So with the higher-priced chips you're also paying for better-binned silicon. Ultimately, an i9 that only needs 3.6 GHz for a specific workload will do it with less power/vcore than an i5 at 3.6 GHz.

My 9900K is doing 4.8 GHz at 1.25 V, and I don't think it's even a good sample; I could never get any of my i5s anywhere near this voltage.
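To put rough numbers on the binning point: dynamic power scales roughly with V² × f, so if (purely hypothetical figures) an i9 holds 3.6 GHz at 1.00 V while an i5 needs 1.10 V for the same clock, the i5 is burning roughly (1.10 / 1.00)² ≈ 1.21 times the dynamic power, call it 20% more, for exactly the same frequency and workload.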
 
Last edited:
Joined
May 31, 2016
Messages
4,446 (1.42/day)
Location
Currently Norway
System Name Bro2
Processor Ryzen 5800X
Motherboard Gigabyte X570 Aorus Elite
Cooling Corsair h115i pro rgb
Memory 32GB G.Skill Flare X 3200 CL14 @3800MHz CL16
Video Card(s) Powercolor 6900 XT Red Devil 1.1V@2400MHz
Storage M.2 Samsung 970 Evo Plus 500GB / Samsung 860 Evo 1TB
Display(s) LG 27UD69 UHD / LG 27GN950
Case Fractal Design G
Audio Device(s) Realtek 5.1
Power Supply Seasonic 750W GOLD
Mouse Logitech G402
Keyboard Logitech slim
Software Windows 10 64 bit
Why does the latency go up only when the Adrenalin driver is installed, then? Unless the images provided are missing some other context, which is why I asked about it. If you're going to post a side-by-side of A (W11 with no driver) and B (W11 with driver), and the latter has increased latency, the implication is that the only thing that changed was the Adrenalin driver, implying it has something to do with the problem.
From what I understand, if you change the CPU, performance drops. Then, by reinstalling Adrenalin without a restart, it works as intended; after a restart, it tanks again.
HW said that if you install the system from scratch when changing a CPU, it works as intended despite the Adrenalin driver being installed (that is how I understand it).
Just because Adrenalin correlates with the problem doesn't mean the driver is the problem, since a fresh install of Windows does not exhibit the same issue. It may be a Windows 11 problem, not Adrenalin.
 
Joined
Feb 1, 2019
Messages
3,669 (1.70/day)
Location
UK, Midlands
System Name Main PC
Processor 13700k
Motherboard Asrock Z690 Steel Legend D4 - Bios 13.02
Cooling Noctua NH-D15S
Memory 32 Gig 3200CL14
Video Card(s) 4080 RTX SUPER FE 16G
Storage 1TB 980 PRO, 2TB SN850X, 2TB DC P4600, 1TB 860 EVO, 2x 3TB WD Red, 2x 4TB WD Red
Display(s) LG 27GL850
Case Fractal Define R4
Audio Device(s) Soundblaster AE-9
Power Supply Antec HCG 750 Gold
Software Windows 10 21H2 LTSC
It just seems really weird that these SKUs aren't all some configuration of P+E cores. It waters down the entire product line, IMO. I figured they'd do something like 2P+4E for i3, 4P+4E for i5, with i7 and i9 having 6P and 8P respectively. I mean, if you're marketing this, you have to play up the value of the E-cores. Because the E-cores are less performant than the P-cores, you'd emphasize the energy savings. Leaving them out of the lesser lines makes it look like saving energy is exclusive to the premium buyer. I guess if they did what I mention, we'd end up with performance regressions versus the 11th gen? It just seems to devalue the purpose and merits of the E-cores, IMO.
My guess is that they don't need the E-cores on the lower SKUs to hit their performance and power targets; that's a problem they mainly seemed to have at the high end, competing with AMD.
 