Monday, April 4th 2022

Hexa-Core CPUs Are the New Steam Gaming Mainstay

Gamers across the world seem to have settled on the price-performance ratio of hexa-core CPUs as their weapon of choice to process virtual worlds. According to the latest Steam Hardware Survey, 34.22% of machines running Steam feature a CPU design with six physical cores, surpassing the 33.74% of users that employ a quad-core processor.

The first mainstream quad-core CPUs launched circa 2009, meaning they have already had thirteen years in the market, while mainstream, true hexa-core designs saw the light of day just a year later, with AMD's Phenom II X6 CPUs. CPU designs featuring more than six physical cores have been increasing in numbers consistently throughout the years, while most under-six-core designs have been slowly bleeding users as gamers upgrade their systems. Our own reviews have shown that the best price-performance ratios for gaming are found in the hexa-core arena, but the latest architectures should help accelerate the growth in core counts available to mainstream users - whether for gaming purposes or not.
Steam Hardware Survey data for CPU cores
Sources: Steam, via Tom's Hardware

39 Comments on Hexa-Core CPUs Are the New Steam Gaming Mainstay

#26
Warigator
The Von Matrices: The Intel Core 2 Quad Q6600 would beg to differ, with an MSRP of $266 in mid-2007.
Or the $200 Q9300, released in Q1 2008.
trsttte: You missed the point completely; no one is arguing for less optimization. What everyone is saying is that it's about damn time games start making better use of parallelization instead of just relying on that sweet, higher single-core boost.
Of course no one wants less optimization. We want more optimization, so that, for example, a large crowd of NPCs with animations is calculated using 16 threads instead of 8. Doing everything on 4c/8t is bad optimization in a world where six desktop cores have been available since 2010, and cheap ones since 2017 (5 years!). So that games offer more, not less. More features, more dynamic stuff. Just like going from 1 core to 2 cores allowed a CPU to do more things. There would be no games like Assassin's Creed, for example, if we had stayed on 1 core at 3.2 GHz. Frequency doesn't go up anymore, so developers need to find other ways: using more cores, threads and cache. We can still get more cores, threads and cache, even if we can't get more frequency. I hope the E cores that Intel is adding with its new architectures get utilized. Mainstream CPUs with 64 E cores are not very distant; they could happen in 2026. Do you people want games that use perhaps 6 out of 8 P cores and 0 out of 64 E cores? And I don't mean bad coding that uses too many resources, just next-gen games with next-gen features and changes (for the better, not for the worse). The 8th generation brought an "awesome" change: microtransactions. Do we want things to continue like that?

For example, I read on the web that the AI in Far Cry 6 is very, very bad (and the game only uses 4 cores), even though Zen 3 8-core/16-thread CPUs existed when it came out. Why isn't AI in games progressing like graphics? Performance is increasing, so why? Because devs choose so, that's why! Game AI should be improving exponentially, like it is in some areas outside of video games (like AI playing video games, or games like Go, poker or bridge). Listen to Jensen Huang when he talks about AI. Why isn't in-game AI getting better? I don't need 300 fps, I need better AI.
#27
Punkenjoy
Tadasuke: Of course no one wants less optimization. We want more optimization, so that, for example, a large crowd of NPCs with animations is calculated using 16 threads instead of 8. Doing everything on 4c/8t is bad optimization in a world where six desktop cores have been available since 2010, and cheap ones since 2017 (5 years!). So that games offer more, not less. More features, more dynamic stuff. Just like going from 1 core to 2 cores allowed a CPU to do more things. There would be no games like Assassin's Creed, for example, if we had stayed on 1 core at 3.2 GHz. Frequency doesn't go up anymore, so developers need to find other ways: using more cores, threads and cache. We can still get more cores, threads and cache, even if we can't get more frequency. I hope the E cores that Intel is adding with its new architectures get utilized. Mainstream CPUs with 64 E cores are not very distant; they could happen in 2026. Do you people want games that use perhaps 6 out of 8 P cores and 0 out of 64 E cores? And I don't mean bad coding that uses too many resources, just next-gen games with next-gen features and changes (for the better, not for the worse). The 8th generation brought an "awesome" change: microtransactions. Do we want things to continue like that?

For example, I read on the web that the AI in Far Cry 6 is very, very bad (and the game only uses 4 cores), even though Zen 3 8-core/16-thread CPUs existed when it came out. Why isn't AI in games progressing like graphics? Performance is increasing, so why? Because devs choose so, that's why! Game AI should be improving exponentially, like it is in some areas outside of video games (like AI playing video games, or games like Go, poker or bridge). Listen to Jensen Huang when he talks about AI. Why isn't in-game AI getting better? I don't need 300 fps, I need better AI.
If you're only talking about rendering, that crowd could already be scaled across all those cores, but that doesn't mean you'd gain performance by doing it. Each draw call requires data that may already be in the cache of another core, or in the L3 of another CCX/CCD, and that core could process it faster than it would take another core to fetch and process the data.

So even for things that are easy to multithread, the gain is not that easy to get. If a task completes quickly, it's quite possible that an already-loaded core would run those tasks faster sequentially than they would run spread across other cores, which would each have to pull the data into their own caches. That's true for quick tasks; if a task becomes more complex and takes longer to execute, you can then split it and execute it across multiple cores effectively.
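To make that trade-off concrete, here is a minimal sketch in C++; the workload, the threshold value and all the names are illustrative, not taken from any real engine:

```cpp
// Illustrative only: split work across threads only when the task is big
// enough that the thread-spawn and cache-transfer overhead pays for itself.
#include <numeric>
#include <thread>
#include <vector>

// Stand-in "work": summing a slice of data that may already be hot in
// one core's cache.
long process(const std::vector<int>& data, size_t begin, size_t end) {
    return std::accumulate(data.begin() + begin, data.begin() + end, 0L);
}

long process_maybe_parallel(const std::vector<int>& data) {
    // Below this size, spawning a thread and pulling the data into a
    // second core's cache costs more than just running sequentially on
    // the already-loaded core. The number is made up for illustration.
    constexpr size_t kSplitThreshold = 1'000'000;

    if (data.size() < kSplitThreshold)
        return process(data, 0, data.size());

    // Big enough to be worth splitting across two workers.
    long left = 0;
    std::thread worker([&] { left = process(data, 0, data.size() / 2); });
    long right = process(data, data.size() / 2, data.size());
    worker.join();
    return left + right;
}
```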

Making things work across multiple cores is not optimization. Making things run faster when multiple cores are available is optimization. The key here is "run faster". The number of cores a game uses doesn't matter as long as it does what it needs to do.

As for the crowd AI, if the agents are all single entities acting by themselves, with no interaction at all (not even collision) with the other people, then yes, you can scale it as much as you want. But if each action a person in the crowd takes influences, or needs to take into consideration, the actions of others, it becomes very hard to scale across multiple cores, since you would need to fetch back the action of every other person in the scene to decide the next move.

The actions themselves are pretty simple to calculate; what takes time is synchronizing all that data. If you do it on a single core, you can probably do it very quickly using the cache; if you spread it across multiple cores, it's very possible you'll end up just spawning threads that sit waiting on memory access.
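One common way around that synchronization cost is double buffering: every worker reads last frame's read-only snapshot and writes only its own agents, so threads never wait on each other mid-frame. A minimal C++ sketch - the Agent struct and every name here are hypothetical, not from any actual game:

```cpp
// Illustrative only: crowd update where each agent reads the previous
// frame's snapshot and writes exclusively to its own slot, so no locks
// are needed during the frame.
#include <algorithm>
#include <thread>
#include <vector>

struct Agent { float x = 0, y = 0, vx = 0, vy = 0; };

// Next state is computed purely from the read-only snapshot `prev`.
Agent step(const Agent& self, const std::vector<Agent>& prev) {
    Agent next = self;
    // ... react to neighbours found in `prev` here (reads only) ...
    next.x += self.vx;
    next.y += self.vy;
    return next;
}

void update_crowd(std::vector<Agent>& agents, unsigned workers) {
    const std::vector<Agent> prev = agents;  // last frame's snapshot
    const size_t chunk = (agents.size() + workers - 1) / workers;

    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w) {
        pool.emplace_back([&, w] {
            const size_t begin = w * chunk;
            const size_t end = std::min(begin + chunk, agents.size());
            for (size_t i = begin; i < end; ++i)
                agents[i] = step(prev[i], prev);  // disjoint writes per thread
        });
    }
    for (auto& t : pool) t.join();
}
```

The cost you pay is copying the snapshot every frame, which is exactly the kind of memory traffic described above - so even this only wins when the per-agent work is heavy enough.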

Again, using more cores/threads is not optimization. Making your game run faster or do more work is optimization.

Multithreading requires you to design your program in a very specific, multithread-friendly way.
#28
Warigator
Punkenjoy: As for the crowd AI, if the agents are all single entities acting by themselves, with no interaction at all (not even collision) with the other people, then yes, you can scale it as much as you want. But if each action a person in the crowd takes influences, or needs to take into consideration, the actions of others, it becomes very hard to scale across multiple cores, since you would need to fetch back the action of every other person in the scene to decide the next move.
If that is really the case, then hardware companies should push for higher frequencies. The hard physical limit is 1,000 terahertz, which is about 200,000x faster than today's CPUs (source: www.nanowerk.com/nanotechnology-news2/newsid=60186.php). Because we are nowhere near that limit, AMD/Intel/IBM/Fujitsu/TSMC/Samsung and the others should put R&D into higher-frequency CPUs - and of course I don't mean April Fools' jokes like the 12900KS.
#29
DeathtoGnomes
BSim500: I was making the point that games do not scale like synthetic benchmarks due to the nature of game code; they will often only use what they need, and if they don't need more than 4, 8, or 12 threads, that may just be the nature of the game.
There's no such thing as "the nature of game code". (Unless you're using Unity... j/k)
No, games don't scale automatically; games use the cores they need up to the predetermined ceiling (core/thread count) that has been coded in. Beyond the usual 4 cores/threads, additional, specific code is needed, mostly because of what was missing from the DX11 API.
#30
eidairaman1
The Exiled Airman
DeathtoGnomes: CPU core utilization was limited to 4 cores with DX11; anything above 4 cores meant developers needed to update their code to use more. DX12's API allows the use of as many cores/threads as there are on the gamer's system.

It's not surprising to see this kind of shift. There are more laptops with 6-core CPUs and 3060s than gaming PCs with 8+ cores, so this statistic will likely grow stronger and not change for some time.
Yup, an FX 8350 was tested in 2020 in modern games against the hyper-threaded cores of its time, and the experience was smoother on the FX 8350, in spite of it only having half the resources per core.

Software finally changed thanks to Ryzen; it took Intel six generations to catch up (Core i 6000 to Core i 12000).
#31
R-T-B
DeathtoGnomes: CPU core utilization was limited to 4 cores with DX11; anything above 4 cores meant developers needed to update their code to use more.
Actually, DX11 rendering is single-threaded by default but can be set to a multithreaded render path. I don't think that has a cap, either. Where are you getting this info?

DX11 had horrible overhead on multithreaded rendering, yes, but it was not limited the way you claim.
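For reference, the multithreaded render path I mean is DX11's deferred contexts: worker threads record commands into deferred contexts, and only the immediate context submits to the GPU. A rough sketch - error handling and the actual draw calls are omitted, and none of this comes from any particular engine:

```cpp
// Illustrative only: DX11 deferred contexts / command lists.
#include <d3d11.h>

// Runs on a worker thread: record commands into a deferred context and
// bake them into a command list.
void record_on_worker(ID3D11Device* device, ID3D11CommandList** out_list) {
    ID3D11DeviceContext* deferred = nullptr;
    device->CreateDeferredContext(0, &deferred);

    // ... set state and issue draw calls on `deferred` here ...

    deferred->FinishCommandList(FALSE, out_list);
    deferred->Release();
}

// Runs on the render thread: only the immediate context talks to the GPU.
void submit_on_main(ID3D11DeviceContext* immediate, ID3D11CommandList* list) {
    immediate->ExecuteCommandList(list, FALSE);
    list->Release();
}
```

The catch - the "horrible overhead" part - is that most DX11 drivers serialized much of this internally, which is part of what DX12's explicit command lists were designed to fix.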
#32
Prima.Vera
GerKNG: Quad cores are dead... even if some reviews say otherwise with their FPS charts.
They even struggle with loading and running games in general.
Battlefield, Forza Horizon 5... even Rainbow Six Siege stutters and is still loading textures minutes in, while the CPU is pegged at 100% non-stop.

This is how my 12100F (with 32 GB of RAM (3600 CL16, Gear 1, 1T) and only NVMe SSDs) handles Forza Horizon 5 with a 6900 XT:
constant "low streaming bandwidth" warnings, and the CPU has zero resources left.
On a 6-core this is not a problem, and the CPU is not at 100%.
I have a quad-core 3770K and 16 GB of DDR3, and I've faced NONE of the issues you mentioned; the game is butter smooth. Are you sure it's not something else?
#33
DeathtoGnomes
R-T-B: Actually, DX11 rendering is single-threaded by default but can be set to a multithreaded render path. I don't think that has a cap, either. Where are you getting this info?

DX11 had horrible overhead on multithreaded rendering, yes, but it was not limited the way you claim.
That depends on the type of rendering you are talking about; there are different APIs. I'm referring to gaming only. Are you sure you're not mixing DX9 and DX11?

This is not where I learned it (it was prior to Unity, so old info), but I think it explains things sufficiently (or you can decipher the wiki more easily :D ).

worldanalysis.net/what-is-direct-x-11/
#34
R-T-B
DeathtoGnomes: Are you sure you're not mixing DX9 and DX11?
Certain. And I am talking entirely about gaming. Granted, I haven't done game programming outside of Unity or GameMaker in years, but... I'm pretty sure I'm correct on this.
#35
DeathtoGnomes
R-T-B: Certain. And I am talking entirely about gaming. Granted, I haven't done game programming outside of Unity or GameMaker in years, but... I'm pretty sure I'm correct on this.
DX10 didn't do very well at multithreading, which is part of the reason it was developed further in DX11. Really, it's about how well a developer can use the APIs. I'm sure you can deduce why some developers 'defaulted' to single-core usage.
#36
R-T-B
DeathtoGnomes: I'm sure you can deduce why some developers 'defaulted' to single-core usage.
Oh of course. No disagreement there.
#37
thelawnet
GerKNG: Quad cores are dead... even if some reviews say otherwise with their FPS charts.
They even struggle with loading and running games in general.
Battlefield, Forza Horizon 5... even Rainbow Six Siege stutters and is still loading textures minutes in, while the CPU is pegged at 100% non-stop.

This is how my 12100F (with 32 GB of RAM (3600 CL16, Gear 1, 1T) and only NVMe SSDs) handles Forza Horizon 5 with a 6900 XT:
constant "low streaming bandwidth" warnings, and the CPU has zero resources left.
On a 6-core this is not a problem, and the CPU is not at 100%.
Who pairs a 6900 XT with a 12100F?!
#38
noel_fs
I'd rather die with a 4-core than have a 6-core; can't wait for the Ryzen 7700X.
thelawnet: Who pairs a 6900 XT with a 12100F?!
It's really not that big of a deal as long as you strictly use the computer for gaming.
#39
Warigator
Imo, AMD should increase the core counts of every processor tier, at Zen 2 prices:
Athlon 4/8 and 6/12
Ryzen 5 8/16 ($249)
Ryzen 7 12/24 (and highest frequency - gaming CPU)
Ryzen 9 16/32 and 24/48
Threadripper 32/64, 48/96, 64/128 and 96/192

If they don't do that, then don't treat them as benevolent, because they are not. Zen 3, instead of lowering prices, upped them by quite a bit. Navi and Navi 2 upped prices as well. Mid-range used to be $150-250; now it's $330-500.