
Hexa-Core CPUs Are the New Steam Gaming Mainstay

Raevenlord

News Editor
Gamers across the world seem to have settled on the price-performance ratio of hexa-core CPUs as their weapon of choice to process virtual worlds. According to the latest Steam Hardware Survey, 34.22% of machines running Steam feature a CPU with six physical cores, surpassing the 33.74% of users on quad-core processors.

The first mainstream quad-core CPUs were launched circa 2009, having had thirteen years in the market already, while mainstream, true hexa-core designs saw the light of day just a year later, with AMD's Phenom II X6 CPUs. CPU designs featuring more than six physical cores have been increasing in numbers consistently throughout the years, while most under-six-core designs have been slowly bleeding users as gamers upgrade their systems. Our own reviews have shown that the best price-performance ratios for gaming are found in the hexa-core arena, but the latest architectures should help accelerate the number of available cores for mainstream users - whether for gaming purposes or not.



View at TechPowerUp Main Site | Source
 
I doubt the latest architectures are going to have any kind of notable impact on gaming at all. They're just not needed for it.

Or on 'core count required' for gaming. Just look at the consoles, and it's 'nuff said. They follow x86 now. As long as we don't have a big.LITTLE console, the whole concept is dead in the water for anything that is called gaming. We see Intel still releasing Alder Lake stacks with P-cores only, not surprisingly around the i5 midrange. AMD creates 'gamur' Ryzens that focus on fitting as much as possible on a single chiplet rather than using a bunch of half-activated ones. Every time, latency and performance are the no. 1 concern.

Also, we have almost no tangible results of E-cores doing anything for games; more often than not, they're in the way. Real-time applications are going to want the fastest core/CPU they can get; core count was, is, and always will come after that, i.e. 'when you have enough', which is always whatever the mainstream core count is.
 
I hope that game developers will target 100% utilization of hexa- and octa-core CPUs, so that they don't get underutilized. Raptor Lake i9 will have 24 cores and games still use 4.
 
Quad-cores are dead... even if some reviews tell a different story with their FPS charts.
They even struggle with loading and running games in general.
Battlefield, Forza Horizon 5... even Rainbow Six Siege stutters and is still loading textures minutes in while the CPU is pegged at 100% non-stop.

This is how my 12100F (with 32 GB RAM (3600 CL16 Gear 1 1T) and only NVMe SSDs) handles Forza Horizon 5 with a 6900 XT:
constant "low streaming bandwidth" reports, and the CPU has zero resources left.
On a 6-core this is not a problem and the CPU is not at 100%.
Forza H5 on a Quadcore (12100).jpg
 
I hope that game developers will target 100% utilization of hexa- and octa-core CPUs, so that they don't get underutilized. Raptor Lake i9 will have 24 cores and games still use 4.
That's not how game code works, though. Real optimisation is doing the most with the least, not arbitrarily maxing out cores via unnecessary coding inefficiency for the sake of purchase justification for someone who just bought a 16C/32T CPU and stares at Task Manager all day. E.g., graphics draw calls and physics will always be much hungrier than the audio subsystem, rendering subtitles, translating input controls to movement, the code that renders the UI / minimap, etc. So there's always going to be a mix of "heavy" and "light" threads, and they'll probably never scale up like a synthetic Pi-calculation benchmark.

Likewise, you can make games "more real" and use more CPU by having, e.g., every person in a crowd in Assassin's Creed have completely individual animations and interactable dialogue, but the bottleneck there was never CPU usage in the first place; it's "we do not have the budget to pay for 1,000x more mo-cap actors, hire 1,000x more unique voice actors, book 100x the recording-studio time, etc., and you probably don't want to start paying $180 per game or have 8-year development times either..." As with any well-written software, many games will only use what they need, and if they don't need more than 12, 16, 24, 32 threads, it may just be the nature of the game. Now I'm off to finish that replay of BioShock, which runs at 200+ fps on a 2C/4T CPU yet is still 10x more fun than half the unoptimised payment-platform 'games' churned out today...
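To put a rough number on that "mix of heavy and light threads" point, here's a toy Python sketch. The task names and millisecond costs are entirely made up for illustration (not a real engine profile); it just schedules one frame's tasks onto N workers and shows that frame time stops improving once the heaviest single task dominates:

```python
# Hypothetical per-frame task costs in milliseconds (illustrative numbers only).
FRAME_TASKS_MS = {
    "draw_calls": 9.0, "physics": 6.0, "ai": 2.5,
    "audio": 0.8, "ui": 0.5, "input": 0.2,
}

def frame_time_ms(num_cores: int) -> float:
    """Greedy longest-task-first schedule onto num_cores workers;
    the frame finishes when the busiest worker does."""
    workers = [0.0] * num_cores
    for cost in sorted(FRAME_TASKS_MS.values(), reverse=True):
        # hand each task to the currently least-loaded worker
        workers[workers.index(min(workers))] += cost
    return max(workers)

for cores in (1, 2, 4, 6, 8):
    print(cores, "cores ->", frame_time_ms(cores), "ms")
```

Past 4 cores the frame time flattens at 9 ms here, the cost of the single heaviest task, which is the whole "games will never scale like Cinebench" argument in one number.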
 
The first mainstream quad-core CPUs were launched circa 2009, having had thirteen years in the market already
The Intel Core 2 Quad Q6600 would beg to differ, with an MSRP of $266 in mid-2007.
 
of BioShock, which runs at 200+ fps on a 2C/4T CPU yet is still 10x more fun than half the unoptimised payment-platform 'games' churned out today...
Yes, you can't beat a bit of BioShock, bud.
 
Quad-cores are dead... even if some reviews tell a different story with their FPS charts.
They even struggle with loading and running games in general.
Battlefield, Forza Horizon 5... even Rainbow Six Siege stutters and is still loading textures minutes in while the CPU is pegged at 100% non-stop.

This is how my 12100F (with 32 GB RAM (3600 CL16 Gear 1 1T) and only NVMe SSDs) handles Forza Horizon 5 with a 6900 XT:
constant "low streaming bandwidth" reports, and the CPU has zero resources left.
On a 6-core this is not a problem and the CPU is not at 100%.
View attachment 242424
RGHD also tested this CPU and didn't have such issues.


Sounds more like an AMD driver issue to me.
 
I am sure the GPU comparisons still look atrocious, and that is after a whole generation, just before the launch of the next. Guess it was all those gamers buying up all those GPUs....
 
I think 6C is reasonable for a CPU to have, but if you get a decent 4C CPU you can still play games no problem, I guess. CPU utilization will be higher, though.
 
Wrong, all we need is a single-core CPU with HT/SMT off, 'cause o_O

Hmm, maybe I'm wrong, 'cause now that I think of it, we need at least 16 cores/32 threads to have any chance of running Pac-Man.

More seriously: 6 cores for now and a few years, 8 cores for future-proofing a bit. That's how I tell people to buy a PC for gaming, depending on their budget as well.
 
RGHD also tested this CPU and didn't have such issues.


Sounds more like an AMD driver issue to me.

I also have a 12100F and have yet to see any issue like that, or to run into a 100% pegged CPU in any game I've played or benched.

I actually switched from a 1600X/B350 to Alder Lake and this CPU, and it was a significant upgrade in every game I play, with no issues whatsoever. 'Lost Ark was rather stuttery with the 1600X, and now it's all smooth even with the craziest shet going on with lots of players'

Edit:

Forgot that I had Forza 5 on my HDD to 'demo' it before deciding on buying it:
A mix of High-Ultra settings, synced to my monitor's refresh rate.
Forza512100.jpg
Clickable Thumb.

Not a single issue, even running it from an HDD and on an earlier game version.
 
What seems more important is that low-cost, older GPUs still rule the ratings.

The Radeon RX 6000 series is basically nowhere to be seen, despite claims that AMD has ~20% of the discrete GPU market.

I find no explanation at all as to why the RTX 3060 mobile is so freaking popular.

gpus.png
 
We did it, gang, we finally changed a meaningless statistic!

(Actually it means that many game devs will make hexa-cores their minimum-spec PCs, and games will on average have better multithreading)
 
What seems more important is that low-cost, older GPUs still rule the ratings.

The Radeon RX 6000 series is basically nowhere to be seen, despite claims that AMD has ~20% of the discrete GPU market.

I find no explanation at all as to why the RTX 3060 mobile is so freaking popular.

View attachment 242426
Most sub-$1600 laptops have an RTX 3060 in them, so it's no surprise to find it on that list.
 
Most sub-$1600 laptops have an RTX 3060 in them, so it's no surprise to find it on that list.

And really, that's all people have been able to buy at a somewhat decent cost for the last 18 months or so.
 
And really, that's all people have been able to buy at a somewhat decent cost for the last 18 months or so.
Reckon it's more along the lines of "why spend 2 grand on a desktop when you can buy a laptop with comparable specs".
 
The main things a CPU needs are IPC and frequency (since frequency is nothing without IPC, and vice versa).

Right now, some of the gains on higher-core-count CPUs are due to higher frequency or larger cache. Not all of the gains, but a significant portion. It's a good thing that consoles now use multi-core CPUs. Even better, the PS4 and Xbox One used a weak 8-core CPU, which forced game devs to do the best multithreading they could.

But IPC and frequency will still be king.
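The "frequency is nothing without IPC" point is literally just multiplication. A minimal sketch; the IPC figures below are placeholders I made up, not benchmark data:

```python
# Toy model: relative single-thread performance ~ IPC x clock speed.
def relative_perf(ipc: float, ghz: float) -> float:
    return ipc * ghz

old = relative_perf(ipc=1.0, ghz=4.0)  # baseline core
new = relative_perf(ipc=1.3, ghz=3.7)  # +30% IPC at a slightly lower clock
print(new / old)  # the newer core wins despite the clock deficit (~1.2x)
```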

Also, the code that runs on the CPU is really interdependent. That doesn't play well with chiplets that have separate caches (Ryzen 1xxx to 3xxx with 2 CCXs per CCD, and 59xx with 2 CCDs). The same goes for the cache layout of Alder Lake's E-cores, which are slow to communicate with the P-cores.

Also, you don't want to be bottlenecked by your CPU; you want to be bottlenecked by your GPU if possible. A GPU bottleneck is way smoother than a CPU bottleneck, where frame times are all over the place.
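A quick way to see why a CPU-side stutter feels worse than a steady GPU cap, even at a similar average FPS, is to compare 1% lows. The two frame-time traces below are synthetic, made up to illustrate the point, not captured data:

```python
def one_percent_low_fps(frame_times_ms):
    """Average the worst 1% of frame times and convert to FPS."""
    worst = sorted(frame_times_ms)[-max(1, len(frame_times_ms) // 100):]
    return 1000.0 / (sum(worst) / len(worst))

steady = [10.0] * 99 + [12.0]      # GPU-bound: ~100 fps, consistent pacing
spiky  = [8.0] * 95 + [50.0] * 5   # CPU-bound: similar average, big hitches
print(one_percent_low_fps(steady))  # roughly 83 fps
print(one_percent_low_fps(spiky))   # 20 fps
```

Both traces average close to 100 fps, but the spiky one's 1% low collapses to 20 fps, which is exactly that "frame times all over the place" feel.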
 
CPU core utilization was effectively limited to 4 cores with DX11; using more meant developers needed to update their code to take advantage of them. DX12's API allows the use of as many cores/threads as the gamer's system has.
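The DX12-style "record on many threads, submit in order" idea can be sketched in shape only; these are stand-in Python functions, not real Direct3D calls:

```python
from concurrent.futures import ThreadPoolExecutor

def record_command_list(chunk):
    # Stand-in for per-thread command-list recording.
    return [f"draw({obj})" for obj in chunk]

scene = [f"mesh{i}" for i in range(8)]
chunks = [scene[i::4] for i in range(4)]  # split the scene across 4 recorders

# Each worker records its own list in parallel; DX11-era APIs effectively
# funneled this work through a single rendering thread.
with ThreadPoolExecutor(max_workers=4) as pool:
    command_lists = list(pool.map(record_command_list, chunks))

# The main thread still submits in a well-defined order.
submitted = [cmd for cl in command_lists for cmd in cl]
print(len(submitted), "draw calls recorded across 4 threads")
```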

It's not surprising to see this kind of shift. There are more laptops with 6-core CPUs/3060s than gaming PCs with 8+ cores, so this statistic will likely keep growing and then not change for some time.
 
Just look at the consoles, and it's 'nuff said. They follow x86 now. As long as we don't have a big.LITTLE console, the whole concept is dead in the water for anything that is called gaming. We see Intel still releasing Alder Lake stacks with P-cores only, not surprisingly around the i5 midrange. AMD creates 'gamur' Ryzens that focus on fitting as much as possible on a single chiplet rather than using a bunch of half-activated ones. Every time, latency and performance are the no. 1 concern.

Yeah, it will depend on how good the incentives from Intel and/or Microsoft are, and on Microsoft's own ability to do something interesting with it that's not just pushing more ads. There are a lot of interesting things that can be done with E-cores; the problem is that all of them can be done more easily, and just as well, with regular cores, and when you can pack as many cores as you want with chiplets or MCM or whatever other technology, the advantages disappear rather quickly.

I think there are some cool opportunities for consoles specifically: running the OS, party chat, and other lower-level background functions. Looking at the PS5, for example, it already packs an extra 512 MB of DDR4 for the OS and background stuff; I could totally see a couple of E-cores serving the same purpose, like party chat, broadcast, etc. But then again, if they can have more regular cores instead of a heterogeneous big.LITTLE architecture, why bother (not to mention they already have experience with how bad it is to maintain an exotic architecture).

That's not how game code works, though. Real optimisation is doing the most with the least, not arbitrarily maxing out cores via unnecessary coding inefficiency for the sake of purchase justification for someone who just bought a 16C/32T CPU and stares at Task Manager all day. E.g., graphics draw calls and physics will always be much hungrier than the audio subsystem, rendering subtitles, translating input controls to movement, the code that renders the UI / minimap, etc. So there's always going to be a mix of "heavy" and "light" threads, and they'll probably never scale up like a synthetic Pi-calculation benchmark.

Likewise, you can make games "more real" and use more CPU by having, e.g., every person in a crowd in Assassin's Creed have completely individual animations and interactable dialogue, but the bottleneck there was never CPU usage in the first place; it's "we do not have the budget to pay for 1,000x more mo-cap actors, hire 1,000x more unique voice actors, book 100x the recording-studio time, etc., and you probably don't want to start paying $180 per game or have 8-year development times either..." As with any well-written software, many games will only use what they need, and if they don't need more than 12, 16, 24, 32 threads, it may just be the nature of the game. Now I'm off to finish that replay of BioShock, which runs at 200+ fps on a 2C/4T CPU yet is still 10x more fun than half the unoptimised payment-platform 'games' churned out today...

You missed the point completely; no one is arguing for less optimization. What everyone is saying is that it's about damn time games started making better use of parallelization instead of just relying on that sweet single-core boost.

Also, talking about more animation: it's a matter of good management and planning, and having a good vision for what they want the end result to be. Just look at Left 4 Dead, for example; there's a very good comparison video between it and Back 4 Blood that shows how garbage and how much of a cash grab Back 4 Blood was (not to mention how shameless the marketing was, name-dropping Left 4 Dead when there were only a couple of devs in common).

 
Quad-cores are dead... even if some reviews tell a different story with their FPS charts.
They even struggle with loading and running games in general.
Battlefield, Forza Horizon 5... even Rainbow Six Siege stutters and is still loading textures minutes in while the CPU is pegged at 100% non-stop.

This is how my 12100F (with 32 GB RAM (3600 CL16 Gear 1 1T) and only NVMe SSDs) handles Forza Horizon 5 with a 6900 XT:
constant "low streaming bandwidth" reports, and the CPU has zero resources left.
On a 6-core this is not a problem and the CPU is not at 100%.
View attachment 242424

Hmmmm, that's very strange. Before I got the 5900X, I had the i7-6700 (non-K) and paired it with my 6900 XT, and I was able to run games OK. It wasn't the greatest, but Forza Horizon 5 looked nowhere near like this when I was playing.
 
You missed the point completely; no one is arguing for less optimization. What everyone is saying is that it's about damn time games started making better use of parallelization instead of just relying on that sweet single-core boost.
I didn't "miss the point". I was making the point that games do not scale like synthetic benchmarks due to the nature of game code; they will often only use what they need, and if they don't need more than 4, 8, 12 threads, it may just be the nature of the game. I'm sure threading can be improved, but it simply doesn't scale the way you are demanding ("100% utilisation on all cores").

Reality check: go back to 2009 and see how i3 vs i5 scaling for Dragon Age: Origins was 25-30%. Now fast-forward a whole decade and compare 4C vs 6C vs 8C benchmarks for the 3300X vs 3600X vs 3700X to see how the percentage gains for a lot of games have gone down, not up. Of course some games scale better than others, but there is no "magic beans" game code that's going to get perfect 100% usage on every core outside of a handful of niche cases (e.g., simulation games) just because it's 2022 or because 6-16-core CPUs have become popular. They'll definitely improve, but if you get 150 fps on a 5300X, are you going to get 300 fps on a 5800X and then 600 fps on a 5950X in most games? Of course not. Because game code is not CPU-Z / Cinebench...
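The diminishing returns being described here are basically Amdahl's law. A one-liner sketch, with the parallel fraction p = 0.6 picked out of thin air purely for illustration:

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Speedup at n cores when a fraction p of the work parallelises."""
    return 1.0 / ((1.0 - p) + p / n)

for n in (4, 6, 8, 16):
    print(n, "cores ->", round(amdahl_speedup(0.6, n), 2), "x")
# No core count escapes the ceiling of 1 / (1 - p) = 2.5x for this p.
```

Doubling from 8 to 16 cores buys almost nothing once the serial part of the frame dominates, which matches the i3-vs-i5 and 3300X/3600X/3700X comparisons above.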
 
There were a lot fewer streamers/hackers in the 4-core era. The cheap extra cores just had to find something to do...
 