Friday, April 4th 2025

Eight-Core CPUs Become the Most Popular Choice of PC Users, CPU-Z Stats Show

CPU-Z's Q1 2025 validation data points to a shift in CPU core-count preferences among PC users. Eight-core processors now account for 24.7% of all validations, a 32.6% relative increase over the previous period. In contrast, six-core processors have slipped to 22.5% of validations, down 6.9%. The move toward eight cores aligns with growing demand for multithreaded performance across computing environments, from professional workstations to high-end gaming systems. The market-share figures also reveal shifting consumer preferences between CPU manufacturers: Intel retains the majority at 56.3%, but AMD's share has risen notably to 43.7%, a 16.6% increase from the previous year.
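For readers parsing these figures, the growth numbers read as relative changes rather than percentage-point deltas. A minimal sketch of that arithmetic (assuming the changes are indeed relative; the implied prior-period shares below are back-calculated for illustration, not CPU-Z's published numbers):

# Back-calculate implied prior-period shares from the reported Q1 2025 figures,
# assuming each growth number is a relative change, not a percentage-point delta.
figures = {
    "8-core validations": (24.7, +32.6),   # (current share %, relative change %)
    "6-core validations": (22.5, -6.9),
    "AMD market share":   (43.7, +16.6),
}

for name, (current, rel_change) in figures.items():
    implied_prior = current / (1 + rel_change / 100)
    print(f"{name}: {current:.1f}% now, implied ~{implied_prior:.1f}% before")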

The shift in market share suggests that users are increasingly drawn to AMD's offerings, which include competitive eight-core processors. A key contributor to the eight-core trend is the rising popularity of specific models: the Ryzen 7 9800X3D, for example, has heavily influenced this statistic, becoming the most popular CPU in CPU-Z validations. This indicates that users want eight-core parts with 3D V-Cache technology for higher gaming performance. The decline of six-core configurations, which now make up a smaller portion of the validation data, suggests eight cores have become the sweet spot for many gamers, a fitting choice for combined multitasking and gaming.
Sources: Valid x86, via VideoCardz

37 Comments on Eight-Core CPUs Become the Most Popular Choice of PC Users, CPU-Z Stats Show

#26
Rover4444
_roman_: Core count is not everything.
Core count is a lot of things, though. There are a lot of applications that could make better use of your 5800X versus your 7600X.
#27
Vayra86
dirtyferret: I guess, although with E cores, P cores, cores with X3D and without, cores with multi-threading and without, it seems like a "core" is anything but normalized
Yeah, was it ever... the CPU core was always a wild mix of elements, instruction sets, and overall capability. That's not really new, either.
#28
Bomby569
Shihab: Two different issues (and not even comparable).
Some people use computers for things other than video games.
Sure, and those most likely need even more VRAM than gamers.
#29
Shihab
Bomby569: Sure, and those most likely need even more VRAM than gamers.
That's not necessarily the case either. Memory-intensive applications are only one class of programs, of which VRAM-specific ones are a subset. The majority of applications, even in non-entertainment domains (or perhaps especially in those domains; gotta love those libs/techniques/algorithms still around from the Fortran days, lol!), don't even use GPUs beyond drawing UIs and perhaps some viewport rendering.
#30
Rover4444
Shihab: That's not necessarily the case either. Memory-intensive applications are only one class of programs, of which VRAM-specific ones are a subset. The majority of applications, even in non-entertainment domains (or perhaps especially in those domains; gotta love those libs/techniques/algorithms still around from the Fortran days, lol!), don't even use GPUs beyond drawing UIs and perhaps some viewport rendering.
Okay, but anybody doing AI is screaming for VRAM, and the production people have been screaming for even longer. If you don't need VRAM, you're already well served by the market and have been for a long time; and if you don't use a GPU beyond "drawing UIs" and "viewport rendering", then what are we even talking about? You don't need a GPU. The point is that VRAM hasn't kept up compared to every other component in a PC.
#31
Shihab
Rover4444: Okay, but anybody doing AI is screaming for VRAM, and the production people have been screaming for even longer. If you don't need VRAM, you're already well served by the market and have been for a long time; and if you don't use a GPU beyond "drawing UIs" and "viewport rendering", then what are we even talking about? You don't need a GPU. The point is that VRAM hasn't kept up compared to every other component in a PC.
What I was talking about is that demand for better processors with higher core counts far outweighed (and still does, albeit not by as much) the need for more VRAM, with the subtext that the limitations themselves aren't similar.

Some/many AI models may fall under the memory-intensive umbrella, but we are talking about a niche that is highly cloud-centric. People "doing AI" as in who? The rando trying to run some plagiarism-engine genAI or SAM locally? I'm with you. The LLM end user? Those run on their own accelerators (or connect to the cloud), AFAIK. The researcher/developer? They're mostly on Colab. The people who do training? Yeah, I think they're looking at things at a different scale...

Meanwhile, look at any random, locally running, number-crunching production/engineering/scientific/whatever application, and you'll very likely find it recommending as many cores as you can throw at it, as has been the case for decades.
#32
Rover4444
Shihab: Some/many AI models may fall under the memory-intensive umbrella, but we are talking about a niche that is highly cloud-centric. People "doing AI" as in who? The rando trying to run some plagiarism-engine genAI or SAM locally? I'm with you. The LLM end user? Those run on their own accelerators (or connect to the cloud), AFAIK. The researcher/developer? They're mostly on Colab. The people who do training? Yeah, I think they're looking at things at a different scale...
More people than you'd think. When I say "doing AI" I generally mean users buying consumer GPUs. The freedom and privacy of a local setup can far outweigh the benefits of any cloud solution for any normal user; otherwise you would have seen small creators in the production space move over to render farms ages ago. Training SD LoRAs and fine-tuning LLMs is absolutely possible on consumer hardware. As for researchers, I thought they were training tiny models, like 1.5Bs or so; I personally don't really get how Google Colab works or why I'd ever use it.
Shihab: Meanwhile, look at any random, locally running, number-crunching production/engineering/scientific/whatever application, and you'll very likely find it recommending as many cores as you can throw at it, as has been the case for decades.
FreeCAD says hello.
#33
Legacy-ZA
Okay, cool.

Now let's make 16-core CPUs at the same price points so we keep progressing, or will we be stuck for 10 years like with Intel? I don't see a third player shaking up the market.
#34
dirtyferret
Rover4444: Core count is a lot of things, though. There are a lot of applications that could make better use of your 5800X versus your 7600X.
Tech people talk about performance; amateur fanboys talk about cores.
#35
Shihab
Rover4444: More people than you'd think. When I say "doing AI" I generally mean users buying consumer GPUs. The freedom and privacy of a local setup can far outweigh the benefits of any cloud solution for any normal user,
I highly doubt something with both a high operational cost and a relatively high barrier to entry (i.e. technical knowledge) would be that widespread, especially when it has little beyond entertainment value (as far as LLMs and genAI in general go, at least).

And cloud rendering services are a dime a dozen these days. But rendering, unlike pip installing some genAI crap, often has economic value, and no "free" cloud service exists for it.
Rover4444: FreeCAD says hello.
FreeCAD needs an entire separate computer to handle all the googling of how to fix its issues and do even basic CAD ops on it. :roll:
Legacy-ZA: Now let's make 16-core CPUs at the same price points so we keep progressing, or will we be stuck for 10 years like with Intel? I don't see a third player shaking up the market.
That should have been Qualcomm (and Apple, if you were to look past their exclusivity), but they themselves are serving as an example that we are not heading in the direction you (and I) are hoping for. Marketing and development are shifting from actual processing performance to those "NPU" gimmicks.
#36
_roman_
Rover4444: Core count is a lot of things, though. There are a lot of applications that could make better use of your 5800X versus your 7600X.
I highly doubt it. I always compare my previous 2x 5800X against my current 7600X. I think I know most of the benchmarks by now.
I'm too lazy. I could even get a Ryzen 7700 tray and sell my 7600X for a price difference of 40€, but I do not need any more performance as of now. 2x 32 GiB DRAM and a Radeon 7800X limit my gaming experience. For Gentoo Linux, the two months in 2023 showed me that a Ryzen 3 3100 is enough with 1x 8 GiB of 3200 MT/s DDR4 on B550. I highly doubt I will find any games or any cheaper graphics cards in the next 2-3 years which motivate me to upgrade (KCD 2, maybe).

Gentoo compiles source code. I have seen the differences from hardware changes, upgrades, or sidegrades since 2006.
5800X = sidegrade = nearly equal = 7600X
7500F / 7600 / 7600X = buy the cheapest and be happy (when you do not care about the CPU graphics)

I always read about Microsoft Flight Simulator and maybe those big city-builder games, which benefit from high core counts and large 3D V-Cache.
#37
phints
As someone who has built many gaming-focused PCs for myself or with friends, 8 cores is the obvious choice. Consoles have had 8 cores for generations now too, and fewer cores tend to introduce micro-stutters and such in demanding games, especially multiplayer. It's also better to slightly over-spec a CPU, especially 8c over 6c, since it's far easier to upgrade and sell your used GPU (a 2-minute swap) than to do a CPU upgrade with repasting or a new mobo/RAM.

For example, a 9700X is $280 at MC; hard to argue with that, and other than Zen 5's weirdly high idle power consumption, it's an excellent option.