
AMD Cuts Down Ryzen 7000 "Zen 4" Production As Demand Drops Like a Rock

AMD's B350 chipset, an old AM4 chipset which was supposed to support all AM4 CPUs... Do you know when it got Vermeer Ryzen 5000 series support?
One year and six months after that launch: BIOS 7.20 for the ASRock AB350M Pro4, as late as May this year. Ryzen 5000 launched back in November 2020.

Better late than never!

:)

And I am willing to wager that AMD never said on launch day that B350 would get compatibility with all future AM4 CPUs. AMD made good on their word.

:p:D
 
Yeah buying a dead end LGA1700 is apparently a better buy than a platform you'll be able to upgrade for years to come. Have fun with that 300 watt bloated Raptor Lake junk.

It uses 74 watts while gaming... and I am getting a budget B660 board.
 
Part of the price hike is an early-adopter premium. Prices will have to come down in a few months, if not earlier, as otherwise nothing new will sell in any significant volume.

I wonder how much labour costs have gone up in Taiwan and other parts of Asia where boards are produced, together with inflation? If you found that core components are essentially similar in market value, where else could we look for cost increases, apart from the early-adopter premium, to explain the new prices?

For example, Gigabyte has one factory in Taiwan providing 25% of its production capacity. As motherboard production still requires a lot of manual labour, that factory will need to pay workers more than one in, say, Malaysia, due to the different standards of living in the two countries. Is it possible that Gigabyte would need to offset those differences in retail prices, and if yes, by how much?

Taiwan did not go on a money printing binge like most western democracies when Covid broke out. Taiwan is one of just a few countries that restrained itself a bit.

Hence, they are 'suffering' with a little under 3% inflation. That's still high for them; they usually have near-zero inflation.
 
What a good video. Incredible efficiency after undervolting, losing 10% gaming performance with a 50% reduction in power draw.
The value of the 13900K for just gaming is terrible, though, just as it always is for the i9s. This is a productivity CPU.

But the i5s and i7s will offer amazing value for gaming. I think AMD will have to launch the X3D versions at the price of the regular ones.
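To put that undervolting result in numbers, here's the perf-per-watt arithmetic; the 250 W stock figure is a made-up illustration, only the two percentages come from the post above.

```python
# Rough perf-per-watt comparison for an undervolt that costs ~10% gaming
# performance while halving power draw. Stock figures are illustrative.

stock_perf = 100.0      # normalized gaming performance
stock_power = 250.0     # watts, illustrative stock draw

uv_perf = stock_perf * 0.90    # ~10% performance loss
uv_power = stock_power * 0.50  # ~50% power reduction

stock_eff = stock_perf / stock_power   # 0.40 perf/W
uv_eff = uv_perf / uv_power            # 0.72 perf/W

print(f"Efficiency gain: {uv_eff / stock_eff:.2f}x")  # 1.80x
```

Whatever the absolute wattage, a 10% loss against a 50% cut always works out to a 1.8x efficiency gain.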

Yeah, der8auer does great content in general, and Buildzoid as well. They cover the finer details that others rarely get into, rather than the routine "now I'll show you how I nearly break the hardware for a 0.1% better benchmark result." der8auer's suggestions on min/max power limits were very close to what I had suggested for the 13600K at 85W/170W. They are probably pretty spot on for the 13900K too; I can see why he settled around those figures for that hardware and the reasonable general expectations around it.

I think I'd probably play with it a good bit if I ended up getting a 13900K. 102W/153W might work really well, or 68W/187W; either could work better or worse depending on the individual user's usage and expectations of the chip, which der8auer explains in the video and I fully agree with. He's really quite level-headed, more than you might expect from a high-level enthusiast, and usually spot on with his analysis, which is refreshing.

There's a degree of subjectivity in the ideal sweet spot for min/max power limits on a given chip, and der8auer really tried to convey that to the audience. The figures I mentioned for the 13900K apply to the 13600K as well, though with slightly different numbers to aim for, and they would yield somewhat different results in key areas. It's a case of whether you want to prioritize a bit more base frequency or more boost frequency, with thermal limitations in mind: 102W/153W might work better with some cooling setups, and 68W/187W in other situations.

In der8auer's case, with the cooling he used, 68W/187W might help slightly over 90W/180W: it adds up to a bit less average wattage, which is good since the chip is throttling anyway, while still allowing a higher boost wattage within thermal limits and a lower idle draw. On the other hand, 102W/153W might work better in that scenario if 68W/187W just exacerbates the throttling and causes too much frame time variance.
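To make the min/max power limit idea concrete, here's a toy Python model of how a sustained (PL1) and short-term (PL2) limit interact. The moving-average enforcement, the tau value, and the 250 W demand trace are all illustrative assumptions, not Intel's actual RAPL implementation.

```python
# Toy model of Intel-style PL1/PL2 power limits: PL2 caps instantaneous
# package power, while PL1 (the sustained limit) kicks in once a moving
# average of recent draw has used up the turbo budget.

def simulate(pl1, pl2, demand, tau=56.0, dt=1.0):
    """Return the power delivered each step for a per-step demand trace."""
    avg = 0.0                         # exponentially weighted average draw
    delivered = []
    for want in demand:
        # boost up to PL2 while the average is under PL1, else clamp to PL1
        power = min(want, pl2 if avg < pl1 else pl1)
        alpha = dt / tau
        avg = (1 - alpha) * avg + alpha * power
        delivered.append(power)
    return delivered

# 120 s of an all-core load that would draw 250 W unconstrained,
# with the 102 W / 153 W limits discussed above
trace = simulate(pl1=102, pl2=153, demand=[250] * 120)
print(f"first second: {trace[0]} W, last second: {trace[-1]} W")
```

With these numbers the chip boosts at 153 W for roughly the first minute, then settles at the 102 W sustained limit, which is the min/max tradeoff the post is describing.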

Looking at the frame time variances in Steve's Far Cry 6 results, for example, they are pretty stretched, and I think that's a result of the low-base/high-boost frequency arrangement of the Raptor Lake design, which in some instances produces more pronounced erratic behavior like that, probably temperature-related in part. In Gamer Gandalf's case I'd be curious what raising and lowering the LLC settings would do in comparison. Lowering the LLC could smooth out the erratic behavior a bit, because it slightly undervolts in load scenarios like gaming; that helps with temperatures and is easier on the motherboard VRMs, so overall you get a bit of an efficiency gain and lower thermals. You have to be more careful about instability if the voltage drops too much, but a lower LLC also gives you less voltage overshoot, which otherwise causes really bad voltage spikes and lower efficiency: basically worse "frame time variance" in the voltage delivery from the VRMs.
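The LLC/vdroop tradeoff can be sketched with a simple loadline model; the resistance values and load current below are made-up illustrations, not any board's real settings.

```python
# Toy loadline model: the VRM output droops below the set voltage in
# proportion to load current, and a stronger LLC setting effectively
# lowers that loadline resistance. All values here are illustrative.

def vout(vset, i_load_amps, loadline_mohm):
    """Output voltage under load for a given effective loadline (mOhm)."""
    return vset - i_load_amps * loadline_mohm / 1000.0

vset = 1.30          # volts requested by the CPU
i_gaming = 120.0     # amps under a gaming load, illustrative

for llc in (1.7, 1.1, 0.5):   # weaker -> stronger LLC
    print(f"{llc} mOhm loadline -> {vout(vset, i_gaming, llc):.3f} V under load")
```

A stronger LLC (flatter loadline) holds voltage up under load, but a weaker one gives the mild under-load undervolt described above, plus more headroom to absorb transient overshoot when the load suddenly releases.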

Someone on TPU posted Alder Lake power limits and efficiency at different wattage figures scaled across the same Blender workload, and at the time 65W looked like the sweet spot for the minimum power limit on the older architecture. Frequencies have gone up since with Raptor Lake, so around 80W to 95W as the ideal minimum seems about right. Overall performance per watt is much better relative to the previous generation, though, proving once again how much the E-core changes, the IPC improvements to both core types, and the better cache design help.
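A rough way to see why a moderate minimum power limit ends up being an efficiency sweet spot is the classic dynamic-power relation P ≈ C·V²·f plus a fixed idle floor. Every constant in this sketch is invented purely for illustration; only the shape of the curve is the point.

```python
# Why there's a sweet spot for the minimum power limit: dynamic power
# scales roughly as C * V^2 * f and voltage must rise with frequency,
# while idle/uncore power is roughly fixed, so perf-per-watt peaks at
# a moderate limit rather than at the lowest possible one.

def package_power(freq_ghz, p_idle=25.0, c=30.0, v0=0.7, k=0.12):
    volts = v0 + k * freq_ghz            # crude linear V-f curve
    return p_idle + c * volts**2 * freq_ghz

freqs = [f / 10 for f in range(10, 51)]              # 1.0 .. 5.0 GHz
best = max(freqs, key=lambda f: f / package_power(f))  # perf ~ f
print(f"Peak perf/W at ~{best:.1f} GHz ({package_power(best):.0f} W)")
```

With these made-up constants the peak lands in the interior of the range rather than at either end; the specific gigahertz and wattage mean nothing, but the interior maximum is exactly the "sweet spot" behavior the Blender sweep showed.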

Intel actually changed the cache structure pretty close to how I thought they might, balancing considerations around cache misses between the P-core and E-core clusters with regard to the L1 and L2 caches of each. Combined with the process scheduling and a shared L3 cache, you can do a kind of foreground/background role assignment between the two to optimize the base and boost frequencies of each core type: short-duration high-boost ST for the P-cores and long-duration low-boost MT for the E-cores being the general premise, and it appears Intel has tried to do just that. You could also reverse that role structure, much like you can reverse background and foreground process scheduling in Windows with time slices.
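That foreground/background role split can be sketched as a toy assignment scheme. The core counts, task names, and round-robin rule are all made up; real OS schedulers (and Intel's Thread Director) are far more involved.

```python
# Toy sketch of foreground/background role assignment between core types:
# latency-sensitive (foreground) work goes to the P-core pool, throughput
# (background) work to the E-core pool, and the roles can be swapped,
# mirroring how Windows can flip foreground/background time slices.

P_CORES = [f"P{i}" for i in range(8)]    # short-duration, high-boost ST
E_CORES = [f"E{i}" for i in range(16)]   # long-duration, low-boost MT

def assign(tasks, reverse_roles=False):
    """Map each (name, role) task to a core, round-robin within its pool."""
    fg, bg = (E_CORES, P_CORES) if reverse_roles else (P_CORES, E_CORES)
    counts = {"foreground": 0, "background": 0}
    placement = {}
    for name, role in tasks:
        pool = fg if role == "foreground" else bg
        placement[name] = pool[counts[role] % len(pool)]
        counts[role] += 1
    return placement

jobs = [("game", "foreground"), ("render", "background"), ("chat", "background")]
print(assign(jobs))   # {'game': 'P0', 'render': 'E0', 'chat': 'E1'}
```

Passing `reverse_roles=True` swaps the pools, which is the "reverse that role structure" idea from the post.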

I am pretty interested in the 13600K in particular, though the DDR4 board options aren't as appealing on features, unfortunately. I really like fuller-featured workstation boards like the Z690 Aero D and the new Z790 ProArt. I'm not a big fan of cut-down micro-ATX and ITX boards. I don't really have a big issue with micro-ATX, but I feel that in the modern era the designs have gone downhill a bit in PCIe slot functionality in order to incorporate onboard M.2 slots. Perhaps if they go back to a reasonable number of full-length slots, and combine that with more USB4/TB4 ports on the rear I/O in place of a truckload of onboard M.2 slots, I'll take a look at them again. I think micro-ATX and ITX could stand to have more rear USB4/TB4 ports to make them a bit more versatile. With early micro-ATX boards I didn't feel like I was sacrificing anything crucial, but I can't quite say the same about modern ones.
 
Yeah buying a dead end LGA1700 is apparently a better buy than a platform you'll be able to upgrade for years to come. Have fun with that 300 watt bloated Raptor Lake junk.
So you will buy a bad option now in the hope it will be worth it in the future?
The sensible thing is to wait or, if you need to upgrade, to go with the good option (RL).
Another option is that you are on AM4 and happy with your mobo, so drop in the best Zen 3 CPU you can get.
You can actually buy a new Zen 3 system altogether and be very much OK if Intel is out of the question for some reason.

Zen 4 is sadly in none of those options, hence the low demand for it.
 
Hi AMD, you are not competing with lame Intel 10th/11th gen parts anymore, so... surprise.
 
Hi AMD, you are not competing with lame Intel 10th/11th gen parts anymore, so... surprise.
Zen 5 is going to be P+E. AMD's biggest mistake was not doing P+E on Zen 4, as the entry-level boards are disposable junk; even if you take upgradeability into account, Zen 4 is simply not worth it compared to 13th gen.

Whoever is reading this: get a B660M or B660-F if it's similarly priced and slap a 13600K on it. If you feel the need in a few years, slap in a 13900K, as by then it will be madly cheap; you get yourself a +30% multi-thread boost and you save a bunch of money (even single-thread is 8-10% better on the 13900K).
 
Zen 5 is going to be P+E. AMD's biggest mistake was not doing P+E on Zen 4, as the entry-level boards are disposable junk; even if you take upgradeability into account, Zen 4 is simply not worth it compared to 13th gen.

Whoever is reading this: get a B660M or B660-F if it's similarly priced and slap a 13600K on it. If you feel the need in a few years, slap in a 13900K, as by then it will be madly cheap; you get yourself a +30% multi-thread boost and you save a bunch of money (even single-thread is 8-10% better on the 13900K).
Nonsense. Big-little was not ready for Zen 4, as they needed to develop the 16-core chiplets for the Bergamo 'C' SKU to see how those smaller 'cloud' cores behave. Those are MT cores too. Once the small cores are ready, they will be rolled out on client platforms. They tread carefully, step by step, just like with the 3D Cache models.

The 13600K should be a good deal for many users. Most people do not need a huge MT boost; a CPU's potential in this regard never gets used by most owners, who game.

Zen 4's higher SKUs, the 7900X and 7950X, are productivity powerhouses that will gradually become more popular once prices settle down a bit. Neither the 13900K nor the 13700K can beat them in efficiency, power management, or platform longevity.
 
Nonsense. Big-little was not ready for Zen 4, as they needed to develop the 16-core chiplets for the Bergamo 'C' SKU to see how those smaller 'cloud' cores behave. Those are MT cores too. Once the small cores are ready, they will be rolled out on client platforms. They tread carefully, step by step, just like with the 3D Cache models.

The 13600K should be a good deal for many users. Most people do not need a huge MT boost; a CPU's potential in this regard never gets used by most owners, who game.

Zen 4's higher SKUs, the 7900X and 7950X, are productivity powerhouses that will gradually become more popular once prices settle down a bit. Neither the 13900K nor the 13700K can beat them in efficiency, power management, or platform longevity.
$150 motherboard (B660 Mortar or B660-F) + $409 (13700K) + $210 for 64GB RAM, vs. $200 motherboard + $549 (7900X) + $300 for 32GB DDR5 (a 5-10% performance gain isn't worth it compared to double the RAM), for worse ST and MT performance.

On AMD you're getting support until 2025 (which is fairly soon).

On Intel you're getting room for a 13900K only (maybe the KS).

And Intel is still a better buy. If you get a 13600K or 13700K, you won't need to upgrade until AM5 reaches EOL anyway.
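For what it's worth, totaling the two example builds with the prices quoted above:

```python
# Totals for the two example builds, using only the prices quoted above.
intel = {"B660 board": 150, "13700K": 409, "64GB DDR4": 210}
amd   = {"AM5 board": 200, "7900X": 549, "32GB DDR5": 300}

print(f"Intel build: ${sum(intel.values())}")   # $769
print(f"AMD build:   ${sum(amd.values())}")     # $1049
print(f"Difference:  ${sum(amd.values()) - sum(intel.values())}")  # $280
```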
 
$150 motherboard (B660 Mortar or B660-F) + $409 (13700K) + $210 for 64GB RAM, vs. $200 motherboard + $549 (7900X) + $300 for 32GB DDR5 (a 5-10% performance gain isn't worth it compared to double the RAM), for worse ST and MT performance.

On AMD you're getting support until 2025 (which is fairly soon).

On Intel you're getting room for a 13900K only (maybe the KS).

And Intel is still a better buy. If you get a 13600K or 13700K, you won't need to upgrade until AM5 reaches EOL anyway.
Smart buyers will pair a B760/B660 with an i5-13600F or i7-13700F. Those CPUs are due for release this January.
 
$150 motherboard (B660 Mortar or B660-F) + $409 (13700K) + $210 for 64GB RAM, vs. $200 motherboard + $549 (7900X) + $300 for 32GB DDR5 (a 5-10% performance gain isn't worth it compared to double the RAM), for worse ST and MT performance.

On AMD you're getting support until 2025 (which is fairly soon).
On Intel you're getting room for a 13900K only (maybe the KS).

And Intel is still a better buy. If you get a 13600K or 13700K, you won't need to upgrade until AM5 reaches EOL anyway.
Productivity-powerhouse CPUs for professionals are not paired with a $200 board, but with more premium solutions. The initially somewhat higher cost of AM5 doesn't deter them. They are happy to put a new system together and pay more now in order to save in the long run, without needing an entirely new system again in two years or so. In 2024, they will certainly not be looking to buy an old 13900K. That's absurd to suggest.

AMD has the better offer on the high end for the simple reason that those engineers, architects and creatives will be able to slot a single Zen 5 or Zen 6 CPU into such a system. Plus, Zen CPUs will save them time and money in the long run due to supreme power efficiency in specific workflows and faster completion of jobs. See GN's review of those CPUs for more details.

Smart buyers will pair a B760/B660 with an i5-13600F or i7-13700F. Those CPUs are due for release this January.
'Smart' is a vague and relative concept.
 
I can't be the only one laughing?
 
If AMD is smart, they will use their TSMC factory time on their GPUs and flood the market with those, because we all know they will sell well.

Then, when the market for those saturates, use that TSMC factory time to make a buttload of 7800X3D AM5 chips, and I bet those will sell out day one.

If AMD is smart, I think this should be the plan, personally.
 
And I was hoping for some cheap components to throw into a new HTPC… But those motherboard prices? Eesh.
Well I'm not in a hurry.

If AMD is smart, they will use the TSMC factory time on their gpu's, flood the market with those, because we all know they will sell well.
Even if they do that, it will take many months for changes in allocation to take effect.
If any of the higher consumer chips are good enough for EPYC validation, that may be an option, but I'm not sure they are; they are probably using the relevant bins for EPYC already.

Then, when the market for those saturate, use that TSMC factory time to make a buttload of 7800X3D AM5 chips, and I bet those will sell out day one.

If AMD is smart I think this should be the plan personally.
That will probably not happen.
The 5800X3D has fairly limited supply, as its cache chiplets (correct naming?) are probably leftovers that didn't pass the requirements for EPYC. If they were to mass-produce extra wafers of cache just for consumer products, it would be way too costly. We are talking about a lot of extra die space just to get a tiny amount of extra performance. The 5800X3D was mostly a gimmick, a successful PR stunt, but in reality its gains are far less than most forum users think: a few percent of extra gaming performance and gains in a handful of applications, while the rest performs worse due to lower clocks. The story for the 7800X3D will be pretty much the same, and it will probably not live up to the hype, but I still expect them to launch it eventually, once they have enough cache chiplets to make at least some.
 
Price is not so much a hindrance for a brand-new system. Seriously, the AVERAGE computer buyer simply tends to turn away at the NEED for water-cooling: the maintenance and the possible problems. If the CPUs were less power hungry and only needed an air cooler to achieve this kind of performance, people would be much more ready to get them. Hence... the belated products that start at 65W. The same goes for Intel.
 
And I was hoping for some cheap components to throw into a new HTPC… But those motherboard prices? Eesh.
Well I'm not in a hurry.


Even if they do that, it will take many months for changes in allocation to take effect.
If any of the higher consumer chips are good enough for EPYC validation, that may be an option, but I'm not sure they are. They probably are using the relevant bins for EPYC already.


That will probably not happen.
The 5800X3D has fairly limited supply, as its cache chiplets (correct naming?) are probably leftovers that didn't pass the requirements for EPYC. If they were to mass-produce extra wafers of cache just for consumer products, it would be way too costly. We are talking about a lot of extra die space just to get a tiny amount of extra performance. The 5800X3D was mostly a gimmick, a successful PR stunt, but in reality its gains are far less than most forum users think: a few percent of extra gaming performance and gains in a handful of applications, while the rest performs worse due to lower clocks. The story for the 7800X3D will be pretty much the same, and it will probably not live up to the hype, but I still expect them to launch it eventually, once they have enough cache chiplets to make at least some.
The 5800X3D was like 15% faster in gaming, and in some games much more, even. That said, I'm sticking with my 7950X because I don't just use it for gaming. It's a hell of a CPU all around and significantly better than Intel's 13th gen. You could run it with high-end air cooling, but given that 360mm AIOs are not that expensive, they're the better option.
 
Price is not so much a hindrance for a brand-new system. Seriously, the AVERAGE computer buyer simply tends to turn away at the NEED for water-cooling: the maintenance and the possible problems. If the CPUs were less power hungry and only needed an air cooler to achieve this kind of performance, people would be much more ready to get them. Hence... the belated products that start at 65W. The same goes for Intel.
Unsurprisingly, very few people actually know what (or who) a watt is, let alone what the difference is between 35W and 95W, for example. People usually still believe that more is better.
And to make things worse, Intel markets its TDP (another crazy term for non-PC guys :)) at nominal clocks. Use any boost and thermals skyrocket, yet most buyers believe they will get the max out of their CPU with the rated cooler.

And as you said, add 'water' to the PC and people go bananas.
 