Friday, May 10th 2024

AMD Hits Highest-Ever x86 CPU Market Share in Q1 2024 Across Desktop and Server

AMD has reached a significant milestone, capturing a record-high share of the x86 CPU market in the first quarter of 2024, according to the latest report from Mercury Research. The achievement marks another step forward for the chipmaker in its long battle against rival Intel's dominance in the crucial computer processor space. The surge was fueled by strong demand for AMD's Ryzen and EPYC processors across consumer and enterprise markets. The Ryzen lineup's compelling price-to-performance ratio has struck a chord with gamers, content creators, and businesses seeking cost-effective computing power without sacrificing capabilities. That demand lifted AMD's desktop CPU share to 23.9%, up from 19.8% in Q4 2023.

The company has also made major inroads on the data center front with its EPYC server CPUs. AMD's ability to supply capable yet affordable processors has enabled cloud providers and enterprises to scale operations on AMD's platform. Several leading tech giants have embraced EPYC, contributing to AMD's surging server footprint, which now stands at 23.6%, up from just above 10% in 2020. AMD lost some share to Intel in mobile PCs as the Meteor Lake ramp took hold, but it still gained slightly in overall client PC share. As AMD rides this momentum into the second half of 2024, all eyes will be on whether the chipmaker can sustain the trajectory and claim an even larger slice of the x86 CPU pie from Intel in the coming quarters.
Below, you can see additional graphs of mobile PC and client PC market share.

Source: AnandTech

140 Comments on AMD Hits Highest-Ever x86 CPU Market Share in Q1 2024 Across Desktop and Server

#126
FoulOnWhite
Where would AMD be without 3D cache, I wonder. Would they have had the best-selling CPUs for the last 14 months? No chance. They needed TSMC's 3D cache to save themselves, which, to be fair, it did. Let's see what Zen 5 brings; it's getting time for me to upgrade my 12700K, so maybe this time AMD will have something for me.

And don't start childishly pointing the fanboy finger at me; when I built this, there were no AMD 3D cache CPUs, or I would be using one of them instead.


Edited, meant zen5 not AM5
Posted on Reply
#127
Panther_Seraphin
FoulOnWhite: Where would AMD be without 3D cache i wonder, would they have the best selling CPUs for the last 14mths, no chance. They needed 3D cache from TSMC to save themselves, which to be fair it did just that. Lets see what AM5 brings, it's getting time for me to upgrade my 12700k now so maybe this time AMD will have something for me.

And don't start childishly pointing the fanboy finger at me, when i built this, there was no AMD 3D cache CPUs or i would be using one of them instead.
Would they have the best selling CPU? I suspect it would be close between something like the 5700x and the 12400. Again we are enthusiasts so we look at what is best overall with price being a very distant 2nd point of contention.

The UK market has Intel performing absolutely terribly at the moment.
The 12400F is the best-selling Intel CPU, and it's only 7th.
The 14700K is 9th.
The 7800X3D is top,
but 2nd through 6th are all non-X3D parts, and only 6th place is another Zen 4 part.

Zen 4 and Intel's 13th/14th gen have been a little flat in terms of sales, I would guess. If you had any Zen 3 setup, you were better off getting a 5700X3D/5800X3D, and, as you did, if you had 12th gen you could skip 13th and 14th gen.
Posted on Reply
#128
Tek-Check
FoulOnWhite: Where would AMD be without 3D cache i wonder, would they have the best selling CPUs for the last 14mths, no chance. They needed 3D cache from TSMC to save themselves, which to be fair it did just that. Lets see what AM5 brings, it's getting time for me to upgrade my 12700k now so maybe this time AMD will have something for me.
- Sales of gaming X3D CPUs, although very popular and the best in the world for their intended use, are only a fraction of AMD's revenue. Read their financial reports to find out more about the revenue heavy-lifters, which are server chips.
- So, AMD is not "saving themselves" with X3D gaming CPUs. They are simply gradually increasing their desktop market share with them, mostly in developed markets. Most of the global desktop DIY market still buys cheaper CPUs. Premium gaming CPUs are a privilege.
FoulOnWhite: And don't start childishly pointing the fanboy finger at me, when i built this, there was no AMD 3D cache CPUs or i would be using one of them instead.
- I don't really care which vendor anyone buys their PCs from. I have several machines from both Intel and AMD.
- I have had four or five i7 chips from Intel. The best was the 2700K. There is still a 9700K in one of the machines in the other room. The 12700K is fine.
- AM5 already brings the fastest gaming experience in the world with the 7800X3D, if that is what you are looking for.
- Zen 5 should be a good upgrade. The next X3D mainstream gaming flagship should be the '9800X3D', coming out towards Xmas or at CES, unless AMD surprises us with an earlier release.
- The 7800X3D is ~20% faster than the 5800X3D in 1080p gaming (HUB review), so the '9800X3D' will hopefully bring a similar uplift, if not a tad more with more mature V-Cache technology. X3D chips have been a huge success for gamers.
Posted on Reply
#129
gffermari
Every company pushes the envelope with the tech they have. Intel went the 6+ GHz, consumption-be-damned route. AMD could have done the same. But no. They went the 3D V-Cache route.

When Intel jumps to next-gen tech, AMD will have to do something, if 3D V-Cache is not enough.

The thing is that AMD's success is not the 3D Ryzens. The EPYC CPUs are the ones …to blame. And the Threadrippers too.
My last company bought workstations with Threadrippers instead of Xeons.

Only the laptop market is still dominated by Intel, no matter what CPUs AMD has released for it.
Posted on Reply
#130
FoulOnWhite
Tek-Check: - selling gaming X3D CPUs, although very popular and best in the world for intended use, is only a fraction of their revenues. Read their financial reports to find out more about revenue heavy-lifters, which are server chips.
- so, AMD is not "saving themselves" with X3D gaming CPU. They are simply gradually increasing their desktop market share with it, mostly in developed markets. Most of global desktop DIY world still buys cheaper CPUs. Premium gaming CPUs are a privilege.

- I don't really care which vendor anyone buys their PCs from. I have several machines both from Intel and AMD.
- I have had four or five i7 chips from Intel. The best was 2700K. There is still 9700K in one of machines in the other room. 12700K is fine.
- AM5 already brings the fastest gaming experience in the world with 7800X3D, if that is what you are looking for
- Zen5 shoould be a good upgrade. The next X3D mainstream gaming flagship should be '9800X3D' coming out towards Xmas or at CES, unless AMD surprises us with earlier release.
- 7800X3D is ~20% faster than 5800X3D in 1080p gaming (HUB review), so '9800X3D' is hoped to bring similar uplift, if not a tad more with more mature V-cache technology. X3D chips have been a huge success for gamers.
How much have they increased it since 2016? Intel still has 76%. Maybe by 2030 they might be closer, but it's not gonna happen in 2 or 3 years without a miracle.
Posted on Reply
#131
Tek-Check
FoulOnWhite: How much have they increased it since 2016? Intel still has 76%. Maybe in 2030 they might be closer but not gonna happen in 2 or 3 years without a miracle.
- They had only 10% in desktop in 2016, and almost 0% in server. Now it's a completely different story, especially in server.
- AMD is predicted to reach roughly a 50/50 revenue split with Intel in server within the next 15-18 months.
- Things move slowly in the PC market; the average lifespan of a PC is 5 years.
- But the direction of change is clear, consistent and relentless.
- By 2030, Intel might drop towards 50% of x86, from 95% in 2016.
Posted on Reply
#132
mkppo
stimpy88: 3.) Read to understand why it's there, and then read why Intel doesn't need it.

4.) Incorrect, see answer 3
I think you're thinking of the V-Cache from a consumer workload/gaming perspective when you ask why it's there. But think of it from a technological standpoint, and also from the standpoint of AMD's benefit here:

1) There are certain server workloads that benefit from cache, and a lot of it. Now, when you have a finite amount of space in one EPYC socket, the most efficient use of that space is cramming in a ton of small cores and then stacking a ton of cache on top of them.
2) They can now use these same small cores on the desktop side and stack a ton of cache on top.

What you're saying is that the core design should just have a larger L2 cache + a better memory controller to eliminate the need for V-Cache. That's true, and it would likely be even faster than stacking V-Cache. But that would mean they would have to design a core just for the desktop gaming use case (with some benefits outside it, but not as many). Going by how they are doing on the server side, and their constant erosion of Intel's market share on that front, it's unsurprising that the key decisions made for every generation of Zen are primarily governed by server-side performance.

Regarding your last point, having more memory channels isn't actually going to solve the gaming problem. The issue is mostly latency, not bandwidth.
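To illustrate that, here's a rough pointer-chase sketch (my own toy example, not anything AMD or Intel publish; the buffer size, hop count and POSIX timer are all guesses): every load depends on the previous one, so the loop runs at memory latency no matter how much bandwidth the controller has sitting idle.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define BYTES (64u << 20)              /* 64 MiB working set, well past any x86 L3 */
#define HOPS  (20 * 1000 * 1000L)

int main(void)
{
    size_t n = BYTES / sizeof(size_t);
    size_t *next = malloc(n * sizeof *next);
    if (!next) return 1;

    /* Sattolo shuffle: builds one big random cycle, so prefetchers can't guess it. */
    for (size_t i = 0; i < n; i++) next[i] = i;
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = ((size_t)rand() << 15 | (size_t)rand()) % i;
        size_t t = next[i]; next[i] = next[j]; next[j] = t;
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    size_t p = 0;
    for (long i = 0; i < HOPS; i++)
        p = next[p];                   /* each load waits on the previous one: latency-bound */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("avg load-to-load latency: %.1f ns (sink %zu)\n", ns / HOPS, p);
    free(next);
    return 0;
}

That ns-per-hop number, not the peak GB/s figure, is what a big L3 is there to hide.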
Posted on Reply
#133
stimpy88
mkppo: I think you're thinking of the V-Cache from a consumer workload/gaming perspective when you ask why it's there. But think of it from a technological standpoint, and also AMD's benefit here

1) There are certain server workloads that benefit from cache, and a lot of it. Now when you have a finite amount of space in one Epyc socket, the most efficient use of space is cramming a ton of small cores and then stack a ton of cache on top of it.
2) They can now use these same small cores on the desktop side and stack a ton of cache on top.

What you're saying is that the core design should just have a larger L2 cache + better memory controller to eliminate the need to have the V-cache. That's true, and they'll likely be even faster than stacking V-Cache. But that would mean they would have to design a core just for desktop gaming use case (and some benefits outside, but not as much). Going by how they are doing on the server side, and their constant erosion of intel's market share on that front, it's unsurprising that the key decisions made for all generations of Zen are primarily governed by their server side performance.

Regarding your last point, having more memory channels isn't actually going to solve the gaming problem. The issue is mostly latency, not bandwidth.
I'm not taking the server market into account. 3D cache is a more-is-better option for that environment, where clock speeds matter less than overall throughput.

The poor man's DDR5 memory controller AMD uses is just nowhere near as good as Intel's, so it forces AMD to rely on the huge L3 cache to smooth over its poor performance and high latency. Intel CPUs can and do get well over 120 GB/s of bandwidth at much lower latency, while AMD struggles to get more than 80 GB/s, at much higher latency, which places more demands on the quantity and speed of the L3 cache.

AMD also knows that 1 MB of L2 is not enough: they publicly stated that 2 MB was the "sweet spot" and that 3 MB of L2 was overkill for the 3-5% of extra performance it gave them. So how many megabytes of L2 cache does Zen 5 have? Yep, still 1 MB, which they know costs over 10% of performance, as that is what AMD said doubling it gained - I assume Zen 6 will grab that low-hanging fruit. So Zen 5 is still L2-cache starved, and AMD has taken additional steps to help speed it up. We also know that, with so many of the gains coming from the large L3 cache, the amount of L3 on a vanilla Zen 4 CPU is fine for old or basic applications and obviously Windows itself - it will do. But for anything more complex it's half of what it should be, which is why 3D cache is a big thing in the consumer gaming space.
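If anyone wants to see that capacity effect for themselves, here's a rough sketch (my own toy sizes and POSIX timing, it won't reproduce anyone's 10% or 20% figures): re-read a buffer of a given footprint over and over, and watch the cost per element climb once it stops fitting in L2 and then in L3.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Average ns per element when streaming repeatedly over a buffer of 'bytes'.
 * Hardware prefetch softens the DRAM step, but the trend is still visible. */
static double ns_per_elem(size_t bytes)
{
    size_t n = bytes / sizeof(long);
    long *buf = calloc(n, sizeof *buf);
    if (!buf) return -1.0;

    long passes = (long)((256u << 20) / bytes);  /* touch ~256 MiB in total per run */
    if (passes < 1) passes = 1;

    struct timespec t0, t1;
    long sum = 0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long p = 0; p < passes; p++)
        for (size_t i = 0; i < n; i++)
            sum += buf[i];
    clock_gettime(CLOCK_MONOTONIC, &t1);

    volatile long sink = sum;   /* stop the compiler from deleting the loop */
    (void)sink;
    free(buf);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    return ns / ((double)passes * (double)n);
}

int main(void)
{
    /* 0.5 MB fits a 1 MB L2, 16 MB fits a 32 MB L3, 256 MB spills to DRAM. */
    size_t kib[] = { 512, 4096, 16384, 65536, 262144 };
    for (size_t i = 0; i < sizeof kib / sizeof kib[0]; i++)
        printf("%8zu KiB : %.2f ns/element\n", kib[i], ns_per_elem(kib[i] << 10));
    return 0;
}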

Also, remember the performance losses the 3D cache creates. It increases power consumption and thermally constrains the chip, reducing clock speed by a fair amount, a few hundred MHz. If AMD made a test version of a Zen 4 eight-core chip with 2 MB of L2 cache and double the native L3 cache, you would be talking at least a 20% performance improvement over the current 3D cache version, and you would not need the additional manufacturing steps, the losses, and the increased manufacturing costs that come with them.
Posted on Reply
#134
kapone32
stimpy88: I'm not taking the server market into account. 3D cache is the-more-the-better option for that environment, where clock speeds are not so important, but performance is.

The poor-man's DDR5 memory controller AMD use is just nowhere near as good as what Intel uses, so it forces AMD to rely on the huge L3 cache to smooth the poor performance and high latency it has. Intel CPU's can and do get well over a 120GBs of bandwidth and provide lower latencies, AMD struggles to get more than 80GBs, at much higher latency, which places more demands on the quantity and speed of their L3 cache.

AMD also knows that 1M of L2 is not enough, and publicly stated that 2MB was the "sweet spot", and they felt 3MB of L2 was overkill for the 3-5% of extra perf it gave them. So how many megabytes of L2 cache does Zen 5 have? Yep, still 1MB, which they know reduces performance by over 10% as that is what AMD said it gained by doubling it - I assume Zen 6 will grab that low-hanging fruit. So Zen 5 is still L2 cache starved, and AMD have taken additional steps to help speed it up. We also know for a fact that with so many gains large coming from the large L3 cache, we can see for anything not including running old and or basic applications and obviously Windows itself, the amount of L3 cache on a vanilla Zen 4 CPU is fine - it will do. But for anything more complex it's half of what it should be, which is why 3D cache is a big thing in the consumer gaming space.

Also, remember the performance losses the 3D cache creates. It thermally constrains the chip, reducing clock speed by a fair amount, a few hundred MHz. If AMD made a test version of a Zen4 8 core chip with 2MB L2 cache, and double the native L3 cache, you would be talking at least a 20% performance improvement over the 3D cache version. And you would not need the additional manufacturing step, and losses that also come from it.
So even after all that you said, Intel still needs way more power to keep up. AMD is not forcing anyone to buy their chips. This sounds like people that tell me my 7900X3D is slow because it is "hampered" with 2 CCDs.
Posted on Reply
#135
stimpy88
kapone32: So even after all that you said, Intel still needs way more power to keep up. AMD is not forcing anyone to buy their chips. This sounds like people that tell me my 7900X3D is slow because it is "hampered" with 2 CCDs.
Who is talking about power and Intel chips in those terms? Who is talking about AMD being a bad choice? We are talking about design implementation between the two companies. Intel uses so much power because of their outdated core design and manufacturing process, and the fact that they rely on overclocking those parts to keep up with AMD, which increases power consumption dramatically.

And your 7900X3D is actually hampered by 4 cores not having 3D cache and Windows still utilising those cores during gaming, causing cache thrashing, as well as by relying on the bus to move data between the two CCDs, all of which decreases performance and increases latency. This is Microsoft's fault, not AMD's, in case that's not obvious, before you start crying that I said something negative.

Don't be that kind of fanboi, be better.
Posted on Reply
#136
kapone32
stimpy88: Who is talking about power and Intel chips in those terms? We are talking about design implementation between the two companies. Intel uses so much power because of their outdated core design and manufacturing process, and the fact they rely on overclocking those parts to stay up with AMD, increasing power consumption dramatically.

And your 7900X3D is actually hampered by 4 cores not having 3D cache and Windows utilising those cores during gaming, causing cache thrashing.

Don't be that kind of fanboi, be better.
All I will say is LMAO. I am not going to discuss with you how weak my CPU is supposed to be. If I had only 4 cores on the 2nd CCD, they would still run at 5.6 GHz; I guess clock speed does not matter anymore, and it is actually 6 cores anyway. I would also argue that Windows has been working with dual-CCD chips since 2018.
Posted on Reply
#137
mkppo
stimpy88: I'm not taking the server market into account. 3D cache is the-more-the-better option for that environment, where clock speeds are not so important, but performance is.

The poor-man's DDR5 memory controller AMD use is just nowhere near as good as what Intel uses, so it forces AMD to rely on the huge L3 cache to smooth the poor performance and high latency it has. Intel CPU's can and do get well over a 120GBs of bandwidth and provide much lower latencies, AMD struggles to get more than 80GBs, at much higher latency, which places more demands on the quantity and speed of their L3 cache.

AMD also knows that 1M of L2 is not enough, and publicly stated that 2MB was the "sweet spot", and they felt 3MB of L2 was overkill for the 3-5% of extra perf it gave them. So how many megabytes of L2 cache does Zen 5 have? Yep, still 1MB, which they know reduces performance by over 10% as that is what AMD said it gained by doubling it - I assume Zen 6 will grab that low-hanging fruit. So Zen 5 is still L2 cache starved, and AMD have taken additional steps to help speed it up. We also know for a fact that with so many gains large coming from the large L3 cache, we can see for anything not including running old and or basic applications and obviously Windows itself, the amount of L3 cache on a vanilla Zen 4 CPU is fine - it will do. But for anything more complex it's half of what it should be, which is why 3D cache is a big thing in the consumer gaming space.

Also, remember the performance losses the 3D cache creates. It increases power consumption, and thermally constrains the chip, reducing clock speed by a fair amount, a few hundred MHz. If AMD made a test version of a Zen4 8 core chip with 2MB L2 cache, and double the native L3 cache, you would be talking at least a 20% performance improvement over the current 3D cache version, and you would not need the additional manufacturing steps, and the losses and the increased manufacturing costs that also come from it.
If you're not taking the server market into account, then what's the point? Every Zen release is designed with server workloads in mind; they just happen to perform excellently in consumer workloads, except gaming (in a relative sense). With 2 MB of L2 cache, they would just have larger dies and maybe wouldn't be able to cram as many cores into their EPYC sockets. Also, many consumer workloads don't really scale with additional L2 cache; it's mostly games.

Your first post, which I quoted, made it sound like they sell 3D V-Cache CPUs as a cash grab and intentionally don't put larger L2 caches in their CPUs. As I described, that's not really the case. It was, again, a server-first design which just happened to perform great in games, so they sell it as such. What you're asking for is basically a whole different core designed just for games. That's asking for way too much.
stimpy88: And your 7900X3D is actually hampered by 4 cores not having 3D cache and Windows still utilising those cores during gaming, causing cache thrashing, as well as relying on the bus to move the data between the two CCDs, all decreasing performance and increasing latency. This is Microsoft's fault, not AMD's, if that's not obvious and you start crying that I said something negative.
I have a 7950X3D in one of my machines and recently built PCs using both the 7900X3D and the 7800X3D, and honestly, this cache thrashing issue was blown way out of proportion. Even funnier were all these people claiming they should just have both CCDs with V-Cache. For the last time, that would solve no issue whatsoever.

When using the game bar, pretty much every new game runs and performs just fine. Some of the older games have issues, but the FPS is already so high that it doesn't matter at that point. Sure, the scheduler could be better, but it's not as terrible as many make it out to be. HUB's extensive testing back when these chips were released paints a similar picture.

Also, the 7900X3D has 6 cores without V-Cache, not 4. And yet it performs just fine; the games seem to be pinned to the six V-Cache cores in every new title I tested it with.
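For anyone who wants to check or force that behaviour by hand, here's a rough Windows sketch (my own example; the "first 12 logical CPUs = V-Cache CCD" mapping is an assumption and can differ per board/BIOS, and the Game Bar plus AMD's chipset driver normally handle this automatically): it pins the current process, and anything it then launches, to CCD0.

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Logical processors 0..11 (6 cores + SMT on the assumed V-Cache CCD). */
    DWORD_PTR mask = (1ull << 12) - 1;

    if (!SetProcessAffinityMask(GetCurrentProcess(), mask)) {
        fprintf(stderr, "SetProcessAffinityMask failed: %lu\n", GetLastError());
        return 1;
    }
    /* Child processes inherit this affinity, so a game started from here stays on CCD0.
     * The same thing can be done ad hoc via Task Manager or: start /affinity 0xFFF game.exe */
    printf("Pinned to the first 12 logical CPUs (mask 0xFFF).\n");
    return 0;
}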
Posted on Reply
#138
Avro Arrow
Minus Infinity: Alas the world is full of sheeple. Why do you think Toyota sells so many batshit boring unremarkable cars!
Well, that's not quite the same thing. Toyota sells so many batshit boring, unremarkable cars because the things just don't break down. Having the most reliable cars in the world is a huge thing because, sure, you can buy something cheaper, but it will break down more than a Toyota. That costs you more money on the back end and brings headaches with it.

THAT is why Toyota sells so well. They simply make the most reliable and durable passenger vehicles in the world. Never forget Top Gear's "Can we kill a Toyota Hilux?" segment. They failed to kill it (and they had some pretty creative and hardcore ways of trying to kill it).
Random_User: This is still happening to this very day, and is part of the reason the gap is still huge.
It's also because it takes a LOOONG time for the server side to change. They're a bunch of old men and some of them are pretty set in their ways.
Random_User: But I can't say the Athlon, for example, was unknown outside enthusiasts and the OC crowd. From personal experience, Barton had the next big OC potential after the Celeron 633. And both were kind of cheaper solutions compared to the Pentium's extortion-level prices. And even for non-OC people, the Athlon was a much more interesting product, as it was much more affordable and had almost free bonus MHz.
And then there were the Athlon and Athlon X2, which undercut the cooler-melting P4, and especially the Pentium D, at every corner, while performing better at the same time.
At the time, I barely knew what Athlon was and I've been building PCs since 1988.
Random_User: But with a serious catch: being forced to include even more backdoors, which leads to horrible vulnerabilities that won't ever be fixed or patched, for obvious reasons.
Intel has to include them as well, otherwise they wouldn't even be considered. That con is true across the board so it's not really a con, relatively speaking.
Posted on Reply
#139
mkppo
Avro Arrow: Well, that's not quite the same thing. Toyota sells so many batshit, boring unremarkable cars because the things just don't break down. Having the most reliable cars in the world is a huge thing because, sure you can buy something cheaper, but it will break down more than a Toyota. That costs you more money on the back-end and also includes the headaches involved.

THAT is why Toyota sells so well. They simply make the most reliable and durable passenger vehicles in the world. Never forget Top Gear's "Can we kill a Toyota Hilux?" segment. They failed to kill it (and they had some pretty creative and hardcore ways of trying to kill it).
Damn you reminded me of the golden Top Gear era. It was by far the best auto show ever for me.
Posted on Reply