
Atlas Fallen Optimization Fail: Gain 50% Additional Performance by Turning off the E-cores

Joined
Jun 21, 2019
Messages
44 (0.02/day)
big.LITTLE architecture in desktop CPUs is a completely misguided idea by Intel. It came as a response to ARM's efficiency, which is (and will remain) out of reach for Intel, or for x86 in general. It makes some sense in laptops, to dedicate E-cores to background tasks and save battery, but it is not something that can close the efficiency gap with ARM (that is impossible due to the architectural limitations of CISC, and if you respond that "Intel is RISC internally", it doesn't matter: the problem is variable-length instructions, which make wide, parallel decoding miserable). The funny part is that AMD, without P/E cores, is far more efficient than Intel (but still 5-7 times less efficient than ARM, especially Apple's implementation of that ISA)
 
Joined
Aug 12, 2022
Messages
248 (0.30/day)
It definitely has a different kind of workload. But it still doesn't make sense to reduce the overall computing power available and see the performance go up.
It does if each thread only does a minuscule amount of work before having to communicate with the other threads. If I tell eight threads to add 1+1, then send the results to one thread, it'll take longer than just having one thread add 1+1 eight times. And if I tell an E-core to calculate pi to a million digits and send the last digit to a P-core so that it can add that digit to 1, then it'll take way, way longer than just having the P-core do all the work. (And normally the programmer can't say which core will run a thread; Windows and Thread Director decide that.)

So for parallel programming, you only spawn new threads when you have large-ish chunks of work for each thread, otherwise you'll risk taking longer than just using one thread.
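
To make that concrete, here's a toy sketch (my own illustration, nothing from the game): eight trivial additions done serially versus spawned onto eight threads. The threaded version is dominated by thread creation and join overhead.

[CODE]
// Toy example: eight trivial additions done serially vs. one thread each.
// The "parallel" version loses because thread creation/join costs far more
// than the work itself.
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

int main()
{
    using clock = std::chrono::steady_clock;
    volatile int results[8] = {};

    auto t0 = clock::now();
    for (int i = 0; i < 8; ++i)
        results[i] = 1 + 1;                                     // serial: eight additions
    auto t1 = clock::now();

    std::vector<std::thread> workers;
    for (int i = 0; i < 8; ++i)
        workers.emplace_back([&results, i] { results[i] = 1 + 1; }); // one thread per addition
    for (auto& w : workers)
        w.join();
    auto t2 = clock::now();

    std::printf("serial:   %lld ns\n",
        (long long)std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0).count());
    std::printf("threaded: %lld ns\n",
        (long long)std::chrono::duration_cast<std::chrono::nanoseconds>(t2 - t1).count());
}
[/CODE]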
 
Joined
Nov 26, 2021
Messages
1,648 (1.50/day)
Location
Mississauga, Canada
Processor Ryzen 7 5700X
Motherboard ASUS TUF Gaming X570-PRO (WiFi 6)
Cooling Noctua NH-C14S (two fans)
Memory 2x16GB DDR4 3200
Video Card(s) Reference Vega 64
Storage Intel 665p 1TB, WD Black SN850X 2TB, Crucial MX300 1TB SATA, Samsung 830 256 GB SATA
Display(s) Nixeus NX-EDG27, and Samsung S23A700
Case Fractal Design R5
Power Supply Seasonic PRIME TITANIUM 850W
Mouse Logitech
VR HMD Oculus Rift
Software Windows 11 Pro, and Ubuntu 20.04
big.LITTLE architecture in desktop CPUs is a completely misguided idea by Intel. It came as a response to ARM's efficiency, which is (and will remain) out of reach for Intel, or for x86 in general. It makes some sense in laptops, to dedicate E-cores to background tasks and save battery, but it is not something that can close the efficiency gap with ARM (that is impossible due to the architectural limitations of CISC, and if you respond that "Intel is RISC internally", it doesn't matter: the problem is variable-length instructions, which make wide, parallel decoding miserable). The funny part is that AMD, without P/E cores, is far more efficient than Intel (but still 5-7 times less efficient than ARM, especially Apple's implementation of that ISA)
Those architectural penalties only apply to simpler, in-order designs like the first Atom. With large out-of-order processors, the x86 penalty is irrelevant as the costs of implementing a large out-of-order core far outweigh the complexity of the x86 decoders.
 
Joined
Jun 16, 2021
Messages
53 (0.04/day)
System Name 2rd-hand Hand-me-down V2.0, Mk. 3
Processor Ryzen R5-5500
Motherboard ASRock X370
Cooling Wraith Spire
Memory 2 x 16GB G.Skill @ 3200MHz
Video Card(s) Power Color RX 5700 XT
Storage 500GB Crucial MX500, 2TB WD SA510
Display(s) Acer 24.0" CB2 1080p
Case (early) DeepCool
Audio Device(s) Ubiquitous Realtek
Power Supply 650W FSP
Mouse Logitech
Keyboard Logitech
VR HMD What?
Software Yes
Benchmark Scores [REDACTED]
The question I have regarding this story is, did no one involved in the development, playtesting or quality assurance phase of this game use an Intel CPU with P&E cores? Really?
 
Joined
Aug 12, 2022
Messages
248 (0.30/day)
For the record, Intel's E-cores are not power-efficient, they're area-efficient (cheap). I suspect this is at least partially true of most ARM E-cores as well. It just sounds better if you tell the customer that some cores are "efficient" instead of "cheap". But the reality is that cheap cores mean more cores, which is also why AMD uses chiplets. (AMD's "cloud" cores are power-efficient, so certainly I think some E-cores are power-efficient. But I think the first goal was area-efficiency.)

Apple's ARM processors are efficient, but probably more because of TSMC's N5 node than because of ARM or E-cores.
 
Joined
Jun 21, 2019
Messages
44 (0.02/day)
Those architectural penalties only apply to simpler, in-order designs like the first Atom. With large out-of-order processors, the x86 penalty is irrelevant as the costs of implementing a large out-of-order core far outweigh the complexity of the x86 decoders.

There is a huge penalty. Apple Silicon is a large out-of-order processor, and this is where the ARM architecture shines: it has a 630+ entry ROB (AMD's is 256), which allows incredibly high instruction-level parallelism that will never be in reach for x86. High ILP also means those instructions have to be executed in parallel, and there the back-end execution engines are also extremely wide. Intel was horrified after the launch of the M1 and had to come up with something, so they forced themselves into big.LITTLE, which is pointless; maybe it prolongs the death of x86 a little in the laptop market, but it is ridiculous in desktop CPUs.

For the record, Intel's E-cores are not power-efficient, they're area-efficient (cheap).

Yes, I'd even say they are power-throttled cores whose aim is to take on background tasks and save battery life on laptops. They are not more "efficient" in any way, they are just less performant.


Apple's ARM processors are efficient, but probably more because of TSMC's N5 node than because of ARM or E-cores.

Not really; if you try to estimate how many process nodes it would take to match the efficiency of Apple Silicon, it is at least a 5-node gap.
 

Frick

Fishfaced Nincompoop
Joined
Feb 27, 2006
Messages
19,571 (2.86/day)
Location
Piteå
System Name White DJ in Detroit
Processor Ryzen 5 5600
Motherboard Asrock B450M-HDV
Cooling Be Quiet! Pure Rock 2
Memory 2 x 16GB Kingston Fury 3400MHz
Video Card(s) XFX 6950XT Speedster MERC 319
Storage Kingston A400 240GB | WD Black SN750 2TB |WD Blue 1TB x 2 | Toshiba P300 2TB | Seagate Expansion 8TB
Display(s) Samsung U32J590U 4K + BenQ GL2450HT 1080p
Case Fractal Design Define R4
Audio Device(s) Plantronics 5220, Nektar SE61 keyboard
Power Supply Corsair RM850x v3
Mouse Logitech G602
Keyboard Cherry MX Board 1.0 TKL Brown
Software Windows 10 Pro
Benchmark Scores Rimworld 4K ready!
Joined
Jan 18, 2020
Messages
818 (0.46/day)
QA Testing, that's for the users in the first few months after launch?

Pretty sure Microsoft started doing that and it's pretty much standard now.
 
Joined
Apr 5, 2023
Messages
70 (0.12/day)
Even so, the behavior is still strange. I mean, Cinebench also puts load on all cores, but still runs faster when also employing the E-cores. There's something fishy in that code, beyond the sloppy scheduling.
Notice how Cinebench divides its workload into hundreds of chunks, then parcels those out to each thread as needed. A P-core will finish several of these chunks in the time an E-core can only finish one, but they do all contribute as much as possible to getting the whole task done.

Now imagine what would happen if there were far fewer chunks, specifically exactly as many chunks as there are threads. Each P-core would rapidly finish its single chunk then sit waiting while the E-cores finish theirs.

This is very likely the problem with this game: the developers naively assumed each thread was equally capable and so divided the work exactly equally between them. Thus, while they gain some performance from the parallelization, they lose much more performance by not utilizing the (far more performant) P-cores to their fullest.
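
Roughly, the difference looks like this (my own sketch, not Atlas Fallen's actual code): with a shared counter handing out small chunks, faster cores simply claim more chunks, whereas a static equal split leaves the P-cores idle while the slowest E-core finishes its oversized share.

[CODE]
// Dynamic chunking: threads pull small chunks from a shared atomic counter,
// so faster cores automatically end up doing more of the work.
#include <algorithm>
#include <atomic>
#include <thread>
#include <vector>

constexpr int kTotalItems = 100000;
constexpr int kChunk      = 256;   // small-ish chunk, big enough to amortize overhead

void do_item(int /*index*/) { /* render a bucket of pixels, simulate an entity, etc. */ }

void worker(std::atomic<int>& next)
{
    for (;;) {
        int begin = next.fetch_add(kChunk);            // claim the next chunk
        if (begin >= kTotalItems) return;              // nothing left to do
        int end = std::min(begin + kChunk, kTotalItems);
        for (int i = begin; i < end; ++i)
            do_item(i);
    }
}

int main()
{
    std::atomic<int> next{0};
    unsigned n = std::thread::hardware_concurrency();
    if (n == 0) n = 4;                                 // fallback if the count is unknown
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < n; ++t)
        pool.emplace_back(worker, std::ref(next));
    for (auto& th : pool)
        th.join();
    // A static split would instead hand each thread kTotalItems / n items up front,
    // so the whole job ends only when the slowest (E-)core finishes its fixed share.
}
[/CODE]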
 
Joined
Nov 26, 2021
Messages
1,648 (1.50/day)
There is a huge penalty. Apple Silicon is a large out-of-order processor, and this is where the ARM architecture shines: it has a 630+ entry ROB (AMD's is 256), which allows incredibly high instruction-level parallelism that will never be in reach for x86. High ILP also means those instructions have to be executed in parallel, and there the back-end execution engines are also extremely wide. Intel was horrified after the launch of the M1 and had to come up with something, so they forced themselves into big.LITTLE, which is pointless; maybe it prolongs the death of x86 a little in the laptop market, but it is ridiculous in desktop CPUs.

Yes, I'd even say they are power-throttled cores whose aim is to take on background tasks and save battery life on laptops. They are not more "efficient" in any way, they are just less performant.

Not really; if you try to estimate how many process nodes it would take to match the efficiency of Apple Silicon, it is at least a 5-node gap.
You're comparing a CPU that clocks at 3.5 GHz to one that clocks close to 6 GHz. Obviously, the lower clocked CPU would be able to afford bigger structures due to relaxed timings. Zen 4c proves that there's nothing magical about ARM. With some changes in physical design, Zen 4c achieves the same IPC as Zen 4 while being half the size. Apple's designs are very impressive, but that is a testament to Apple's CPU design teams. Note that no other ARM designs come close. You're also mistaken about the sizes of the various out-of-order structures in recent x86 processors.



                 Zen 4   Zen 3   Golden Cove
Reorder Buffer    320     256        512        (Each entry on Zen 4 can hold 4 NOPs; actual capacity confirmed using a mix of instructions.)

This table is from part 1 of the Chips and Cheese overview of Zen 4. Notice that Golden Cove, despite the handicaps of higher clock speed and an inferior process, has a ROB size that is much closer to Apple's M2 than Zen 4.
 
Joined
May 20, 2020
Messages
29 (0.02/day)
AMD right about now!


OS scheduling is independent of Thread Director; I have yet to see what TD actually does & how efficient/better it is compared to the similar (but much better) software solution I posted in the other thread!
More accurate to say AMD for now.

 
Joined
May 21, 2009
Messages
269 (0.05/day)
Processor AMD Ryzen 5 4600G @4300MHz
Motherboard MSI B550-Pro VC
Cooling Scythe Mugen 5 Black Edition
Memory 16GB DDR4 4133MHz Dual Channel
Video Card(s) IGP AMD Vega 7 Renoir @2300MHz (8GB Shared memory)
Storage 256GB NVMe PCI-E 3.0 - 6TB HDD - 4TB HDD
Display(s) Samsung SyncMaster T22B350
Software Xubuntu 24.04 LTS x64 + Windows 10 x64
E-Trash cores strike again

:)
 
Joined
Nov 26, 2021
Messages
1,648 (1.50/day)
For the record, Intel's E-cores are not power-efficient, they're area-efficient (cheap). I suspect this is at least partially true of most ARM E-cores as well. It just sounds better if you tell the customer that some cores are "efficient" instead of "cheap". But the reality is that cheap cores mean more cores, which is also why AMD uses chiplets. (AMD's "cloud" cores are power-efficient, so certainly I think some E-cores are power-efficient. But I think the first goal was area-efficiency.)

Apple's ARM processors are efficient, but probably more because of TSMC's N5 node than because of ARM or E-cores.
Apple's E cores are much better than ARM's little cores.
[Chart edited from the Anandtech article linked above: SPEC gcc subtest performance and power for the Apple A14/A15 E-cores vs. the Cortex-A55]

The image is edited from the one in the Anandtech article linked above. Notice that the A15's Blizzard E-cores are 5 times faster than the A55 in the gcc subtest but consume only 58% more power, making them 3.25x more efficient in terms of performance per Watt. Even the A14's E-cores, which consume almost the same power as the A55 in this subtest, are 3.75 times faster.
 
Joined
Jun 21, 2019
Messages
44 (0.02/day)
You're comparing a CPU that clocks at 3.5 GHz to one that clocks close to 6 GHz. Obviously, the lower clocked CPU would be able to afford bigger structures due to relaxed timings.

The higher clocks of Intel CPUs only show that they are hitting a performance wall; they can't scale through architecture anymore, only through clocks and node shrinks. In the last 7 years Intel's single-core performance improved 30%, while Apple's improved 200%. x86 is not scaling anymore, and clock speed alone doesn't matter. We now have a situation where, for the same computing power, Intel draws 3x more energy than ARM (https://arstechnica.com/gadgets/2022/03/mac-studio-review-a-nearly-perfect-workhorse-mac/3/). That is 3 times more; to put things in perspective, one node shrink gives about a 15-20% power reduction, so even if we are optimistic, it would take around 6 node shrinks for Intel to catch up with M1 efficiency.
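
A quick back-of-the-envelope check of that (my own arithmetic, assuming each shrink compounds multiplicatively on power at constant performance):

$$0.85^{\,n} \le \tfrac{1}{3} \;\Rightarrow\; n \ge \frac{\ln 3}{\ln(1/0.85)} \approx 6.8, \qquad 0.80^{\,n} \le \tfrac{1}{3} \;\Rightarrow\; n \approx 4.9$$

so somewhere between five and seven shrinks, call it roughly 6.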
 
Joined
Nov 26, 2021
Messages
1,648 (1.50/day)
The higher clocks of Intel CPUs only show that they are hitting a performance wall; they can't scale through architecture anymore, only through clocks and node shrinks. In the last 7 years Intel's single-core performance improved 30%, while Apple's improved 200%. x86 is not scaling anymore, and clock speed alone doesn't matter. We now have a situation where, for the same computing power, Intel draws 3x more energy than ARM (https://arstechnica.com/gadgets/2022/03/mac-studio-review-a-nearly-perfect-workhorse-mac/3/). That is 3 times more; to put things in perspective, one node shrink gives about a 15-20% power reduction, so even if we are optimistic, it would take around 6 node shrinks for Intel to catch up with M1 efficiency.
One node shrink doesn't give such a small gain. It's typically a 30 to 40% power reduction at the same performance. Intel's designs are pushed to stupid clocks for single-threaded bragging rights and would be much more efficient if clocked at more reasonable levels. We also have the example of AMD's laptop silicon, and that's much closer to the M2 or M1 than you might realize.
 
Joined
Jun 21, 2021
Messages
3,121 (2.49/day)
System Name daily driver Mac mini M2 Pro
Processor Apple proprietary M2 Pro (6 p-cores, 4 e-cores)
Motherboard Apple proprietary
Cooling Apple proprietary
Memory Apple proprietary 16GB LPDDR5 unified memory
Video Card(s) Apple proprietary M2 Pro (16-core GPU)
Storage Apple proprietary onboard 512GB SSD + various external HDDs
Display(s) LG UltraFine 27UL850W (4K@60Hz IPS)
Case Apple proprietary
Audio Device(s) Apple proprietary
Power Supply Apple proprietary
Mouse Apple Magic Trackpad 2
Keyboard Keychron K1 tenkeyless (Gateron Reds)
VR HMD Oculus Rift S (hosted on a different PC)
Software macOS Sonoma 14.7
Benchmark Scores (My Windows daily driver is a Beelink Mini S12 Pro. I'm not interested in benchmarking.)
Another PC gaming title that didn't go through QA.

Pity.
 
Joined
Apr 12, 2013
Messages
7,529 (1.77/day)
So Strix or not to Strix that is the question o_O

[Chart edited from the Anandtech article linked above: SPEC gcc subtest performance and power for the Apple A14/A15 E-cores vs. the Cortex-A55]

The image is edited from the one in the Anandtech article linked above. Notice that the A15's Blizzard E-cores are 5 times faster than the A55 in the gcc subtest but consume only 58% more power, making them 3.25x more efficient in terms of performance per Watt. Even the A14's E-cores, which consume almost the same power as the A55 in this subtest, are 3.75 times faster.
The results are not validated, so it could be even better.
 
Joined
Nov 26, 2021
Messages
1,648 (1.50/day)
So Strix or not to Strix that is the question o_O


The results are not validated, so it could be even better.
I think Anandtech's testing makes sense. Rather than spending time tweaking the compiler to get the highest score for each CPU, they choose reasonable common options, and see how the CPUs fare.
 
Joined
Apr 12, 2013
Messages
7,529 (1.77/day)
Generally speaking, even if AMD/Intel match or blow past Apple's efficiency in a few years, Apple will still have a massive advantage from controlling the entire ecosystem, software and hardware, and to a smaller extent from the major upside of using such a wide LPDDR5 bus. Which is to say that, taking the whole picture into account, I don't believe x86 can win on the consumer front, at least in the short to medium term.
 
Joined
Nov 26, 2021
Messages
1,648 (1.50/day)
Generally speaking, even if AMD/Intel match or blow past Apple's efficiency in a few years, Apple will still have a massive advantage from controlling the entire ecosystem, software and hardware, and to a smaller extent from the major upside of using such a wide LPDDR5 bus. Which is to say that, taking the whole picture into account, I don't believe x86 can win on the consumer front, at least in the short to medium term.
Yes, Apple gets to design the entire system, and they are focused on efficiency which is the right metric for mobile use cases. I expect AMD or Intel to come close, but Apple will continue to lead. It also helps that they move to new nodes before AMD, and Intel is still well behind TSMC.
 

W1zzard

Administrator
Staff member
Joined
May 14, 2004
Messages
27,839 (3.71/day)
Processor Ryzen 7 5700X
Memory 48 GB
Video Card(s) RTX 4080
Storage 2x HDD RAID 1, 3x M.2 NVMe
Display(s) 30" 2560x1600 + 19" 1280x1024
Software Windows 10 64-bit
Cinebench
Cinebench calculates something that's extremely easy to parallelize, because the pixels rendered don't depend on each other. So you can just run a piece of code on one CPU and it's guaranteed that you never have to wait for a result from another core. Cinebench also has a tiny working set that fits into the cache of all modern CPUs, so you're even guaranteed that you don't have to wait on data from DRAM.

That's exactly why certain companies like to use it to show how awesome their product is, because it basically scales infinitely and doesn't rely on the memory subsystem or anything else.

Gaming is pretty much the opposite. To calculate a single frame you need geometry, rendering, physics, sound, AI, world properties and many more, and they are all synchronized (usually), and have to wait on each other, for every single frame. Put the slowest of these workloads on the E-Cores, everything else has to wait. This will not show up in Task Manager on the waiting cores, because the game is doing a busy wait, to reduce latency, at the cost of not freeing up the CPU core to do something else.
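
To illustrate (a minimal sketch of the pattern, not the game's actual code): the render thread below spins on an atomic flag until the physics thread publishes its result. The spinning core reads as 100% busy in Task Manager even though it's doing no useful work.

[CODE]
// Busy-wait ("spin") on a result produced by another thread. The spinning core
// stays at 100% utilization while it waits, which keeps latency low but hides
// the stall from tools like Task Manager.
#include <atomic>
#include <thread>

std::atomic<bool> physics_done{false};
int physics_result = 0;

void physics_thread()            // imagine this one got scheduled onto an E-core
{
    physics_result = 42;         // ... expensive simulation here ...
    physics_done.store(true, std::memory_order_release);
}

int main()
{
    std::thread physics(physics_thread);
    while (!physics_done.load(std::memory_order_acquire)) {
        // spin: burn cycles instead of sleeping, so we can react immediately
    }
    int frame_input = physics_result;   // render thread continues with the result
    physics.join();
    return frame_input ? 0 : 1;
}
[/CODE]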
 
Joined
Apr 17, 2021
Messages
564 (0.43/day)
System Name Jedi Survivor Gaming PC
Processor AMD Ryzen 7800X3D
Motherboard Asus TUF B650M Plus Wifi
Cooling ThermalRight CPU Cooler
Memory G.Skill 32GB DDR5-5600 CL28
Video Card(s) MSI RTX 3080 10GB
Storage 2TB Samsung 990 Pro SSD
Display(s) MSI 32" 4K OLED 240hz Monitor
Case Asus Prime AP201
Power Supply FSP 1000W Platinum PSU
Mouse Logitech G403
Keyboard Asus Mechanical Keyboard
I can count on one hand how many times I've heard of or experienced anything like this since Alder Lake. As W1z wrote above, the developers needed to do nothing... they did something silly and we got this.

Actually Alder Lake was a disaster for months after launch. I had lots of problems. It was all fixed up, but E cores are still causing some issues.

I switched to the Ryzen 7800X3D and couldn't be happier.
 
Joined
Jun 1, 2021
Messages
306 (0.24/day)
Do games themselves have to be optimised/aware of P- versus E-cores? I was under the impression that Intel Thread Director + the Win11 scheduling was sufficient for this, but I guess if there's a bug in either of those components it would also manifest in this regard.
The article doesn't say whether the E-cores were disabled or whether they used something like process affinity to limit the process to the P-cores only. If it's the former, then it's very possibly a ring bus issue: when E-cores are active, the ring bus is forced to clock considerably lower, which lowers the performance of the P-cores.
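
For reference, the affinity route would look roughly like this on Windows (a sketch under the assumption that logical processors 0-15 are the P-core threads on a hypothetical 8P+8E part; the real topology should be queried, e.g. with GetLogicalProcessorInformationEx, rather than hard-coded):

[CODE]
// Sketch: pin the current process to a subset of logical CPUs (e.g. the P-cores).
#include <windows.h>
#include <cstdio>

int main()
{
    // Assumption: logical processors 0-15 are the P-cores' SMT threads on an
    // 8P+8E part. Verify the actual layout before hard-coding a mask like this.
    DWORD_PTR pCoreMask = 0xFFFF;
    if (!SetProcessAffinityMask(GetCurrentProcess(), pCoreMask))
        std::printf("SetProcessAffinityMask failed: %lu\n", GetLastError());
    // ... threads created after this point can only run on the masked cores.
    return 0;
}
[/CODE]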
 
Joined
Nov 13, 2007
Messages
10,762 (1.73/day)
Location
Austin Texas
System Name stress-less
Processor 9800X3D @ 5.42GHZ
Motherboard MSI PRO B650M-A Wifi
Cooling Thermalright Phantom Spirit EVO
Memory 64GB DDR5 6000 CL30-36-36-76
Video Card(s) RTX 4090 FE
Storage 2TB WD SN850, 4TB WD SN850X
Display(s) Alienware 32" 4k 240hz OLED
Case Jonsbo Z20
Audio Device(s) Yes
Power Supply Corsair SF750
Mouse DeathadderV2 X Hyperspeed
Keyboard 65% HE Keyboard
Software Windows 11
Benchmark Scores They're pretty good, nothing crazy.
Actually Alder Lake was a disaster for months after launch. I had lots of problems. It was all fixed up, but E cores are still causing some issues.

I switched to the Ryzen 7800X3D and couldn't be happier.
I got the 12600K as soon as it was available -- what problems did you have?

I heard of some Denuvo issues but didn't experience them myself. I was honestly expecting problems, but didn't have anything.

I was thinking of going 7950X3D (I need the cores for VM data schlepping) but it honestly didn't feel worth it, and the 7800X3D is awesome but would be a downgrade for work and only a small upgrade for gaming at 4K :/

I do have the itch to build another personal AMD rig at some point.

Generally speaking, even if AMD/Intel match or blow past Apple's efficiency in a few years, Apple will still have a massive advantage from controlling the entire ecosystem, software and hardware, and to a smaller extent from the major upside of using such a wide LPDDR5 bus. Which is to say that, taking the whole picture into account, I don't believe x86 can win on the consumer front, at least in the short to medium term.

Apple's problem is the software. It's too expensive for cloud stuff, not enterprise-friendly enough for corporate stuff, and doesn't really do any gaming. Even if they have amazing hardware (which they've had for years), there are only so many hipsters at Starbucks who code frontend / do marketing and gfx work, and I think they all already use Apple.

If they opened up to run games and were better for enterprise (better AD/MDM/MDS integration, and better compatibility with Microsoft apps) they would crush it on the consumer side.
 
Joined
Feb 1, 2019
Messages
3,592 (1.69/day)
Location
UK, Midlands
System Name Main PC
Processor 13700k
Motherboard Asrock Z690 Steel Legend D4 - Bios 13.02
Cooling Noctua NH-D15S
Memory 32 Gig 3200CL14
Video Card(s) 4080 RTX SUPER FE 16G
Storage 1TB 980 PRO, 2TB SN850X, 2TB DC P4600, 1TB 860 EVO, 2x 3TB WD Red, 2x 4TB WD Red
Display(s) LG 27GL850
Case Fractal Define R4
Audio Device(s) Soundblaster AE-9
Power Supply Antec HCG 750 Gold
Software Windows 10 21H2 LTSC
Games don't have to be optimized, Thread Director will do the right thing automagically. The developers specifically put load on the E-Cores, which is a mechanism explicitly allowed by Thread Director / Windows 11. It seems Atlas Fallen developers either forgot that E-Cores exist (and simply designed the game to load all cores, no matter their capability), or thought they'd be smarter than Intel.
That was my thought too, thanks for confirming. Since I adjusted my Windows 10 scheduler to prefer P-cores I haven't seen a single game use my E-cores, so I was thinking what you have confirmed: that these devs did something different.
 