
Intel's Global CPU Market Share is on the Rise, AMD Starts the Downfall

Joined
May 2, 2017
Messages
7,762 (2.78/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
I agree with most of that, but for some the iGPU wasn't niche, given the cost of GPUs.
Well, sure, but that's still a niche use case. While a Vega 8 can work as a holdover until you get a GPU, it's still a holdover - and one that might deliver benefits for a few months but then deliver worse performance afterwards. Of course, an Intel F SKU can't run at all without a dGPU, so that clearly won't work if you can't get a GPU - though stocks do seem good (at least here in Sweden and Norway) currently, and prices are creeping down (though still stupidly high).
And I wasn't arguing that price isn't an issue, just asking why they would cut prices if they're selling at a higher price anyway.
Because the better-performing, cheaper competition is going to eat away at sales? All available data seems to suggest that Intel is indeed taking back market share from AMD currently.
Gaming is a niche use of computation.
That is complete and utter nonsense. It is by far the most common hobbyist use of computation; the most common enthusiast use of computation; the main driver for production and sales of high performance computer parts to regular end users. Sure, ho-hum OEM business PCs and thin-and-light general purpose notebooks vastly outnumber gaming PCs. But calling gaming a "niche" use of computation is just silly.
And gaming alone is where Alder Lake stretches its lead; that still leaves plenty of consumers who are fine on Ryzen, IMHO.
Again: nobody is saying that they aren't. And nobody here (from what I've seen) is arguing for people to ditch their perfectly fine Ryzen setups. But it makes zero sense to buy a lower-performance product at a higher price if you have a choice. "They still work fine" is not an argument for paying a premium. I would still recommend Ryzen for anyone looking for a non-performance sensitive setup (say if my parents needed a new PC) - I'm still quite a ways away from forgiving Intel for their bribery and monopolism. But for anyone performance sensitive today? AMD needs to step up their game, first through cutting the prices of current SKUs to compete, and then through bringing Zen4 to market ASAP (and having it be good, obviously).
 
Joined
Oct 6, 2009
Messages
2,827 (0.51/day)
Location
Midwest USA
System Name My Gaming System
Processor AMD Ryzen 7 7800X3D
Motherboard Gigabyte b650 Aorus Elite AX
Cooling Phanteks Glacier One 360D30
Memory G.Skill 6400 MHz 32 GB
Video Card(s) ASRock Phantom 7900XT OC
Storage 4 TB NVMe Total
Case Hyte y40
Power Supply Corsair 850 Modular PSU
Software Windows 11 Home Premium
I find some of these comments so funny. Every time AMD is up, the AMD fans rush in and talk down Intel. Every time Intel is up, the same thing happens in reverse. In some cases, the people spouting praise for Intel or AMD are the same ones who talk them down months later when the performance picture changes.

Let's face it: AMD and Intel both only care about the markets that make them money. My strategy: buy whoever is better at the time I build.
 
Joined
Aug 9, 2019
Messages
1,717 (0.87/day)
Processor 7800X3D 2x16GB CO
Motherboard Asrock B650m HDV
Cooling Peerless Assassin SE
Memory 2x16GB DR A-die@6000c30 tuned
Video Card(s) Asus 4070 dual OC 2610@915mv
Storage WD blue 1TB nvme
Display(s) Lenovo G24-10 144Hz
Case Corsair D4000 Airflow
Power Supply EVGA GQ 650W
Software Windows 10 home 64
Benchmark Scores Superposition 8k 5267 Aida64 58.5ns
Seems AMD finally did the right thing:

5600X becomes a better deal than 12400F with these prices since B660 MBs are more expensive than B450/B550.
5600X(230usd)+B550(80usd)=310usd
12400F(180usd)+B660(120usd)=300usd
5600X is slightly faster stock and 5-10% faster with max tuning.

5800X is a bit more attractive now.
5800X(300usd)+B550(80usd)=380usd
12600KF(270usd)+Z690(200usd)=470usd
12600KF is about 10% faster stock, up to 15% tuned, maybe worth 90usd more.
The 5900X and 5950X are only worth it for productivity, even at these prices.
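If anyone wants to sanity check the totals, here's a quick throwaway script with the ballpark prices and perf deltas from this post plugged in (the perf figures are just my rough estimates, not benchmark data):

```python
# Platform totals and price gaps for the comparison above.
# Prices (USD) and performance deltas are the rough figures quoted in
# this post, not measured data.

pairs = [
    # (CPU, CPU $, board $), (CPU, CPU $, board $), perf of the 2nd vs the 1st
    (("5600X", 230, 80), ("12400F", 180, 120), "roughly on par stock, 5600X ahead when tuned"),
    (("5800X", 300, 80), ("12600KF", 270, 200), "+10% stock, up to +15% tuned"),
]

for (a_name, a_cpu, a_mb), (b_name, b_cpu, b_mb), delta in pairs:
    a_total, b_total = a_cpu + a_mb, b_cpu + b_mb
    print(f"{a_name} platform ${a_total} vs {b_name} platform ${b_total} "
          f"({b_total - a_total:+d} USD, perf: {delta})")
```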
 
Joined
Jan 29, 2021
Messages
1,876 (1.32/day)
Location
Alaska USA
Seems AMD finally did the right thing:
5600X becomes a better deal than 12400F with these prices since B660 MBs are more expensive than B450/B550.
5600X(230usd)+B550(80usd)=310usd
12400F(180usd)+B660(120usd)=300usd
5600X is slightly faster stock and 5-10% faster with max tuning.
5800X is a bit more attractive now.
5800X(300usd)+B550(80usd)=380usd
12600KF(270usd)+Z690(200usd)=470usd
12600KF is about 10% faster stock, up to 15% tuned, maybe worth 90usd more.
The 5900X and 5950X are only worth it for productivity, even at these prices.
Those are Microcenter store prices. There are less than a dozen US states that have those stores.
 
Joined
Aug 9, 2019
Messages
1,717 (0.87/day)
Processor 7800X3D 2x16GB CO
Motherboard Asrock B650m HDV
Cooling Peerless Assassin SE
Memory 2x16GB DR A-die@6000c30 tuned
Video Card(s) Asus 4070 dual OC 2610@915mv
Storage WD blue 1TB nvme
Display(s) Lenovo G24-10 144Hz
Case Corsair D4000 Airflow
Power Supply EVGA GQ 650W
Software Windows 10 home 64
Benchmark Scores Superposition 8k 5267 Aida64 58.5ns
Those are Microcenter store prices. There are less than a dozen US states that have those stores.
Best Buy and Amazon also have the 5600X at that price, but the 5800X is more expensive there.
 
Joined
Jan 28, 2021
Messages
854 (0.60/day)
Sadly your argument is undermined by the fact that AMD's Computing and Graphics division has higher revenue than its Enterprise, Embedded and Semi-Custom division (which includes tens of millions of chips for consoles in the last year, representing a significant portion of its revenue) (~$2.6B in Q4 '21 vs. ~$2.2B), and for Intel, the Client Computing Group similarly has much higher revenue (and margins!) than their Data Center Group ($10.1B/$3.5B gross margin (~30.1%) vs. $7.3B and $1.7B margin (~23.3%)). The advantage of getting into the datacenter market lies in a larger total addressable market than the consumer market alone, larger purchases, and long-term contracts, not necessarily in overall profitability. Server chips have massive list price margins, but are sold at heavy rebates in any kind of volume, and those service contracts come with a lot of labor costs - they still earn them a lot of money, but they aren't cheap markets to compete in.
The last few years have been pretty far out of the norm as to what you would normally expect from endpoint computing. AMD's strong performance is due to all kinds of things, from Intel fumbling hard, to work trends shifting, to just having a really solid product. They did really well on the client side of things, but you have to wonder how it would have looked if Intel hadn't been so far behind and market demand hadn't been as strong as it was. You can certainly make a lot of money in the client space, but it's done at scale, and that's not something AMD can do like Intel can.

No matter how you look at it, margins are better on datacenter and HPC parts. Not only are the margins better, but once you win a datacenter contract, that's good for at least five years as platforms are expanded, upgraded, and replicated across sites, so that one initial win is pretty much guaranteed to provide future sales for years down the road.
Also, if AMD needed their consumer fans for PR for their enterprise parts, do you think it's reasonable to abandon that strategy after one generation in the lead (two if we're a bit generous)? That still seems incredibly short-sighted.
IDK, if you had limited supply, who would you rather say "no" to: Dell EMC and HPE, or a bunch of fickle gamers?
As for limited supply, that is absolutely a factor. But the priorities AMD has set with that supply are precisely why we have grounds to criticize them. There are plenty of things they could have done to improve low-end and mid-range chip supplies, including making a tiny, 4-core, half L3 chiplet for low-end parts, for which they would have gotten massive numbers off of each wafer (and of course they could also have used those for low core count, PCIe/RAM bandwidth oriented EPYC chips). They've chosen to aim narrowly at the premium market with no visible concern for anything else. And with that, they are shooting themselves in the foot.
It's fair to criticize them for the priority decisions they made, but I would say they set the right business priorities. Ideally I'm sure they'd go after all of the markets, including the entry-level to mid-range segments that would be served by a 4-core Zen3, but I don't think they have the resources to pursue all of that at the same time. Zen3 chiplets are already pretty small (compared to Alder Lake anyway) and yields are really good, so while you'd get more dies from a smaller chiplet and slightly better yields, there is less money in each chip, so there isn't much incentive to do it when your 8-core chiplet is yielding great and selling out. Also, nobody really buys server CPUs in less-than-10-core-per-socket configurations, so a 4-core chiplet would be pretty much useless for Epyc.
 
Joined
May 2, 2017
Messages
7,762 (2.78/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
The last few years have been pretty far out of the norm as to what you would normally expect from endpoint computing. AMD's strong performance is due to all kinds of things, from Intel fumbling hard, to work trends shifting, to just having a really solid product. They did really well on the client side of things, but you have to wonder how it would have looked if Intel hadn't been so far behind and market demand hadn't been as strong as it was. You can certainly make a lot of money in the client space, but it's done at scale, and that's not something AMD can do like Intel can.

No matter how you look at it, margins are better on datacenter and HPC parts. Not only are the margins better, but once you win a datacenter contract, that's good for at least five years as platforms are expanded, upgraded, and replicated across sites, so that one initial win is pretty much guaranteed to provide future sales for years down the road.
You're not entirely wrong, but remember: even back in 2019, at the peak of Intel's 14nm supply crunch and before AMD was making much headway into the datacenter space, their consumer branch was still outselling their datacenter branch by nearly 39% (US$ 10bn v. 7.2bn). People tend to dramatically underestimate the scale at which consumer computing operates, and overvalue datacenter sales. Yes, datacenters are a huge and high margin market, but it's also a high cost of entry and high maintenance market, which eats away at those margins. It's still very profitable, but so is the client computing market, at least for CPU makers.
IDK, if you had limited supply, who would you rather say "no" to: Dell EMC and HPE, or a bunch of fickle gamers?
It's not an either-or situation. The question is where one finds the balance between the two. And that's what's being criticized here.
It's fair to criticize them for the priority decisions they made, but I would say they set the right business priorities. Ideally I'm sure they'd go after all of the markets, including the entry-level to mid-range segments that would be served by a 4-core Zen3, but I don't think they have the resources to pursue all of that at the same time. Zen3 chiplets are already pretty small (compared to Alder Lake anyway) and yields are really good, so while you'd get more dies from a smaller chiplet and slightly better yields, there is less money in each chip, so there isn't much incentive to do it when your 8-core chiplet is yielding great and selling out. Also, nobody really buys server CPUs in less-than-10-core-per-socket configurations, so a 4-core chiplet would be pretty much useless for Epyc.
And it's entirely reasonable to criticize AMD for how they've gone about this either way: I'm not beholden to their profit-only corporate logic, nor do I see any reason to accept deeply ideological arguments about their "main purpose being to make money" or anything of that sort. The only reason companies exist in the first place is to produce somehow useful products - though this can be manipulated greatly, profits are always secondary to this in the end. This is also the basis of my criticism: I disagree completely with their choice to entirely ignore the low-end CPU and APU markets for the past few years.

You're also missing the point when talking about yields - the point of a smaller die wouldn't be better yields (those are already great), but simply increasing the die per wafer count. Heck, if anything, the good yields from TSMC 7nm is a disincentive towards producing low-end chips from an 8c CCX, as there simply aren't enough defective chips for this. That's why I brought up a smaller die to begin with. A thought experiment (which is obviously very simplistic, but still useful): the Zen3 CCD is reportedly 11.27x7.43mm. Putting that into a die per wafer calculator alongside TSMC's (old, likely superseded) 0.09/cm² defect density, gives us 692 complete dice per wafer, with ~50 defects. So, ~642 fully functioning dice, with 50 that might be salvaged with a lower core or CU count or cache size (though they may also be unsalvageable). AMD has essentially no reason to sell lower core count chips unless they just can't meet power or clock targets. So, let's assume a 4c CCX would just have the bottom four cores lopped off (it would likely entail more of a reconfiguration of the layout, but the area would still be similar). That would make it ~11.27x4.2. Plugging that back into the calculator gives us 1228 total dice with 51 defects or 1177 fully enabled dice per wafer. That's not quite a doubling, but it's still a massive increase in output per wafer. And these dice would of course be equally usable across their entire product range, from consumer desktop to OEM desktop to low core count/high RAM bandwidth/high PCIe lane count servers. And, of course, you could still make a 32c EPYC with these if you wanted to, though it would have slightly higher latencies than 4x8 core equivalents. Designing the new die would be a significant cost, but it would give them a product portfolio much better suited to addressing a diverse market, while drastically increasing the actual numbers of chips produced. And allocating even 10% of the wafers used for producing 8c CCDs would have given them a pretty significant amount of 4c CCDs.
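For anyone who wants to reproduce that thought experiment, here's a minimal sketch using the textbook die-per-wafer approximation plus a simple Poisson yield model. It ignores scribe lines and edge exclusion, so the absolute counts come out somewhat higher than the online calculator I used above, but the roughly 1.8x gain in dice per wafer from halving the die is the point:

```python
import math

def dies_per_wafer(die_w_mm, die_h_mm, wafer_d_mm=300, defect_density_cm2=0.09):
    """Textbook die-per-wafer approximation plus a Poisson yield model.
    No scribe lines or edge exclusion, so counts run a bit high."""
    area_mm2 = die_w_mm * die_h_mm
    gross = (math.pi * (wafer_d_mm / 2) ** 2 / area_mm2
             - math.pi * wafer_d_mm / math.sqrt(2 * area_mm2))
    yield_frac = math.exp(-defect_density_cm2 * area_mm2 / 100)  # mm^2 -> cm^2
    return int(gross), int(gross * yield_frac)

# Zen3 CCD (reportedly ~11.27 x 7.43 mm) vs. the hypothetical ~half-height 4c die
for name, w, h in [("8c CCD", 11.27, 7.43), ("hypothetical 4c die", 11.27, 4.2)]:
    gross, good = dies_per_wafer(w, h)
    print(f"{name}: ~{gross} candidates per 300 mm wafer, ~{good} defect-free")
```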

Of course, there are more factors than just CCDs in this, and there have been reported shortages of CPU packaging and substrate materials, hold-ups with downstream processing plants, and it's not a given that they have sufficient IOD supplies for this either. They also likely banked on their single Zen3 die to hold them over till Zen4, alongside the new APUs, but this has clearly failed in light of the shortages.

But most of that is unknown and unknowable, and doesn't change the fact that I simply think AMD made the wrong judgements here, and reacted poorly to the shortages. They tried to ride things out and keep prices high while investing minimally, which is backfiring significantly against a resurgent Intel that competes across a much broader range of products. They've left the low end essentially barren for two product generations and multiple years, which is undermining the public support that allowed them to gain their current position in the first place. This might be a passable plan if you're in an entrenched position of power, but AMD isn't. They're still highly vulnerable to the much larger and wealthier Intel. And especially in light of that, but also in light of their responsibility to produce actually useful and somewhat attainable products for their customers, I think they've missed the mark.
 
Joined
Jan 28, 2021
Messages
854 (0.60/day)
You're not entirely wrong, but remember: even back in 2019, at the peak of Intel's 14nm supply crunch and before AMD was making much headway into the datacenter space, their consumer branch was still outselling their datacenter branch by nearly 39% (US$ 10bn v. 7.2bn). People tend to dramatically underestimate the scale at which consumer computing operates, and overvalue datacenter sales. Yes, datacenters are a huge and high margin market, but it's also a high cost of entry and high maintenance market, which eats away at those margins. It's still very profitable, but so is the client computing market, at least for CPU makers.
It's not an either-or situation. The question is where one finds the balance between the two. And that's what's being criticized here.
It doesn't have to be either-or in the long run, but given where AMD was just 5 years ago in terms of engineering and financial resources, and how long product development takes, I think AMD was kinda stuck on the path they were on, which was to build Zen primarily to beat Intel in the datacenter. You can certainly make money in both the datacenter and client spaces, but I would argue the datacenter is worth more in the long run in terms of what AMD is able to offer in that market. The datacenter is also where Intel had the weakest product line, and they (AMD) were expecting Intel to be way more competitive in their notebook lineup than they actually were. Epyc is absolutely destroying Xeon in the datacenter in a way that they could never manage to the same extent in the consumer space, so in that sense I think they made the right choices to set the company up to make money and stay competitive in the long run.
You're also missing the point when talking about yields - the point of a smaller die wouldn't be better yields (those are already great), but simply increasing the die per wafer count. Heck, if anything, the good yields from TSMC 7nm is a disincentive towards producing low-end chips from an 8c CCX, as there simply aren't enough defective chips for this. That's why I brought up a smaller die to begin with. A thought experiment (which is obviously very simplistic, but still useful): the Zen3 CCD is reportedly 11.27x7.43mm. Putting that into a die per wafer calculator alongside TSMC's (old, likely superseded) 0.09/cm² defect density, gives us 692 complete dice per wafer, with ~50 defects. So, ~642 fully functioning dice, with 50 that might be salvaged with a lower core or CU count or cache size (though they may also be unsalvageable). AMD has essentially no reason to sell lower core count chips unless they just can't meet power or clock targets. So, let's assume a 4c CCX would just have the bottom four cores lopped off (it would likely entail more of a reconfiguration of the layout, but the area would still be similar). That would make it ~11.27x4.2. Plugging that back into the calculator gives us 1228 total dice with 51 defects or 1177 fully enabled dice per wafer. That's not quite a doubling, but it's still a massive increase in output per wafer. And these dice would of course be equally usable across their entire product range, from consumer desktop to OEM desktop to low core count/high RAM bandwidth/high PCIe lane count servers. And, of course, you could still make a 32c EPYC with these if you wanted to, though it would have slightly higher latencies than 4x8 core equivalents. Designing the new die would be a significant cost, but it would give them a product portfolio much better suited to addressing a diverse market, while drastically increasing the actual numbers of chips produced. And allocating even 10% of the wafers used for producing 8c CCDs would have given them a pretty significant amount of 4c CCDs.
Yeah, I totally agree a 4-core CCX is what they need, and the lack of something that size is a big problem; I probably would have bought a couple and recommended them to others. Sure, maybe I'm overestimating the difficulty of getting a new CPU to production (even one based on an existing layout), but I have to think AMD knows this is a weak spot in their lineup and just doesn't have the resources to develop the chip. They also can't afford to artificially sell Ryzen 5 CPUs based on working 8-core CCXs when Ryzen 7s are worth so much more, let alone what the dies are worth in Epyc CPUs. Maybe they could have handled the pricing better, but I think the product lineup is what they are stuck with for now.

They just don't have the resources to fill every market segment, and what they do have is addressing the most valuable one. So while I'd personally really like a 4-core Ryzen for my HTPC, and to be able to recommend a Ryzen 3 or 5 as an entry-level CPU to someone, I find it really hard to fault their product roadmaps.
 
Joined
Oct 23, 2020
Messages
671 (0.44/day)
Location
Austria
System Name nope
Processor I3 10100F
Motherboard ATM Gigabyte h410
Cooling Arctic 12 passive
Memory ATM Gskill 1x 8GB NT Series (No Heatspreader bling bling garbage, just Black DIMMS)
Video Card(s) Sapphire HD7770 and EVGA GTX 470 and Zotac GTX 960
Storage 120GB OS SSD, 240GB M2 Sata, 240GB M2 NVME, 300GB HDD, 500GB HDD
Display(s) Nec EA 241 WM
Case Coolermaster whatever
Audio Device(s) Onkyo on TV and Mi Bluetooth on Screen
Power Supply Super Flower Leadx 550W
Mouse Steelseries Rival Fnatic
Keyboard Logitech K270 Wireless
Software Deepin, BSD and 10 LTSC
5600X becomes a better deal than 12400F with these prices since B660 MBs are more expensive than B450/B550.
5600X(230usd)+B550(80usd)=310usd
12400F(180usd)+B660(120usd)=300usd
5600X is slightly faster stock and 5-10% faster with max tuning.
Hmm, you'd take an older system for the same price?

Yeah, B660 is pricier, but good entry-level boards have 2x M.2 slots with PCIe 4.0;
B550 offers only 1x PCIe 4.0.

B550 sits between H610 and B660.
 
Joined
Aug 9, 2019
Messages
1,717 (0.87/day)
Processor 7800X3D 2x16GB CO
Motherboard Asrock B650m HDV
Cooling Peerless Assassin SE
Memory 2x16GB DR A-die@6000c30 tuned
Video Card(s) Asus 4070 dual OC 2610@915mv
Storage WD blue 1TB nvme
Display(s) Lenovo G24-10 144Hz
Case Corsair D4000 Airflow
Power Supply EVGA GQ 650W
Software Windows 10 home 64
Benchmark Scores Superposition 8k 5267 Aida64 58.5ns
Hmm, you'd take an older system for the same price?

Yeah, B660 is pricier, but good entry-level boards have 2x M.2 slots with PCIe 4.0;
B550 offers only 1x PCIe 4.0.

B550 sits between H610 and B660.
If older performs better, I'll take older any day :) If you need 2x PCIe 4.0 M.2, then B660 is the way to go.

RAM OC is shit in Gear 1 on B660 with locked CPUs; the best you can expect is 3400-3600 vs 3800+ on B550. Also, you can overclock the CPU on B550.

If you plan on getting Raptor Lake, B660 can be viable; also, the Asus B660 Strix F and G D5 can overclock locked CPUs.
 
Joined
May 2, 2017
Messages
7,762 (2.78/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
This calculation is based on 12-inch (300 mm) wafers, but in the future silicon fabs plan to use 18-inch (450 mm) wafers, which means more dies per wafer and therefore a lower price per die, and a lower price to end customers. This calculator shows the number of dies per wafer for various wafer sizes: https://anysilicon.com/die-per-wafer-formula-free-calculators/
450mm wafers have been "the next big thing" for decades, and are still nowhere near hitting production. The infrastructure and equipment to process them still does not exist. At the same time, the silicon lithography industry has been investing massively in new fabs for several years, with these mostly set to open between this year and 2025-2027. All of these fabs are 300mm. It would be fantastic if this came to pass, but it seems extremely unlikely to happen at this point. Maybe in ten years and at limited scale?
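For what it's worth, the dies-per-wafer gain would be real if 450mm ever happened: by the same textbook approximation as in my earlier post (no scribe lines or edge exclusion), a 450 mm wafer fits slightly more than the raw 2.25x area ratio would suggest, since edge losses shrink relative to the total area. A rough sketch:

```python
import math

# How the simple die-per-wafer approximation scales with wafer diameter,
# using the Zen3 CCD footprint (~11.27 x 7.43 mm) as the example die.
# No scribe lines or edge exclusion, so treat the counts as rough.

DIE_AREA_MM2 = 11.27 * 7.43

def gross_dies(wafer_d_mm, area_mm2=DIE_AREA_MM2):
    return int(math.pi * (wafer_d_mm / 2) ** 2 / area_mm2
               - math.pi * wafer_d_mm / math.sqrt(2 * area_mm2))

d300, d450 = gross_dies(300), gross_dies(450)
print(f"300 mm: ~{d300} dies, 450 mm: ~{d450} dies "
      f"({d450 / d300:.2f}x, vs. a 2.25x raw area ratio)")
```

But none of that matters until the tooling actually exists.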
 