Monday, October 17th 2022
AMD Cuts Down Ryzen 7000 "Zen 4" Production As Demand Drops Like a Rock
AMD has reportedly scaled down production of its Ryzen 7000 series desktop processors in response to bleak demand across the PC hardware industry. Wccftech claims to have read an internal company document calling for reduced supply to the channel, as market response to the Ryzen 7000 series has been weak. This comes hot on the heels of AMD revising its Q3 2022 forecast, trimming roughly $1 billion from its revenue guidance and citing weak demand in the PC market. However, we are seeing no deviation from launch pricing for Ryzen 7000-series SKUs or compatible Socket AM5 motherboards. The platform went on sale in late September, on the same day that Intel announced its competing 13th Gen Core "Raptor Lake" processors. The new Intel chips are expected to go on sale a little later this month.
Unlike 13th Gen Core processors, Ryzen 7000 series processors appear to be victims of their platform. On top of the high pricing of the processors themselves, which start at $299 for the 6-core 7600X, buyers lack access to affordable motherboards and have to contend with expensive DDR5 memory. Cheaper LGA1700 motherboards based on the entry-level H610 and B660 chipsets, with cost-effective DDR4 memory support, have added depth to consumer choice, and Intel's 12th Gen range starts from under $150.
Source:
Wccftech
Hence, they are 'suffering' with a little under 3% inflation. That's still high for them; they usually have very near zero inflation.
I think I'd probably play with it a good bit if I ended up getting a 13900K. Limits of 102 W and 153 W might work really well, or 68 W and 187 W; either might work better or worse depending on the individual user's usage and expectations of the chip, which der8auer explains in the video. I fully agree with him; he's really quite level-headed, more than you might expect from a high-level enthusiast, and he's usually pretty well spot on with his analysis, which is refreshing.
I think there's a degree of subjectivity about the ideal sweet spots for min/max power limits for a given chip, and der8auer really tried to convey that for the audience to digest. As I mentioned above, 102 W / 153 W might work better on a 13900K, and the same applies to the 13600K, though with slightly different figures to aim for; I think they would yield somewhat different results in key areas. It's a case of whether you want to prioritize a bit more base frequency or a bit more boost frequency, with thermal limitations in mind. That's part of why I think 102 W / 153 W might work better in some situations depending on a person's cooling setup, while 68 W / 187 W would in other instances.
I think in der8auer's case, with his cooling, 68 W / 187 W might help slightly over 90 W / 180 W: overall it adds up to a bit less average wattage, which is good since the chip is throttling anyway, and it also allows a higher thermal boost wattage and lower idle power. However, 102 W / 153 W might work better in that scenario if 68 W / 187 W just exacerbates the throttling and causes too much frame time variance.
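A rough way to reason about those pairs is the time-weighted average. The sketch below is my own simplification of the PL1/PL2/tau idea (not Intel's actual power controller), and the 56-second boost window and 5-minute load are just assumed numbers, but it shows why 68 W / 187 W can average out lower than 90 W / 180 W over a sustained load:

```python
# Simplified model (an assumption, not Intel's real algorithm): the CPU runs
# at PL2 for a boost window of tau seconds, then settles to PL1 for the rest.
def avg_power(pl1_w: float, pl2_w: float, tau_s: float, window_s: float) -> float:
    """Time-weighted average package power over window_s seconds."""
    boost = min(tau_s, window_s)
    return (pl2_w * boost + pl1_w * (window_s - boost)) / window_s

# The pairs discussed above, over a 5-minute load with a 56 s boost window:
for pl1, pl2 in [(90, 180), (68, 187), (102, 153)]:
    print(f"PL1={pl1} W, PL2={pl2} W -> avg {avg_power(pl1, pl2, 56, 300):.1f} W")
```

Under those assumed numbers, 68/187 averages lowest despite the higher peak, because the low PL1 dominates the sustained portion, which is the intuition behind preferring it on a cooler that's throttling anyway.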
Looking at the frame time variances in Steve's Far Cry 6 results, for example, they are pretty stretched, and I think that's a result of the low-base, high-boost frequency arrangement of the Raptor Lake design, which in some instances will produce more pronounced erratic behavior like that, probably temperature-related in part. In Gamer Gandalf's case I'd be curious what raising and lowering the LLC settings would do in comparison. Lowering the LLC could smooth out the erratic behavior a bit, because it slightly undervolts in load scenarios like gaming; that helps with temperatures and is easier on the motherboard VRMs, so overall you get a bit of an efficiency gain and lower thermals. You do have to be more careful about instability if the voltage drops too much, but a lower LLC also means less voltage overshoot, which can otherwise cause really bad voltage spikes and lower efficiency: basically, worse variance in the voltage delivery from the VRMs.
Someone on TPU had a post for Alder Lake on power limits and efficiency at different wattage figures, scaled across the same Blender workload, and at the time around 65 W looked like the sweet spot for the minimum power limit on the older architecture. A few things have changed and frequencies have gone up with Raptor Lake, so around 80 W to 95 W as the minimum seems about right. Overall performance per watt is much better relative to the previous generation, though, proving once again how much the changes to the E-cores, the IPC improvements to both core types, and the better cache design help.
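Finding that sweet spot from a scaling post like that is just a performance-per-watt maximization. The numbers below are made up to mirror the shape described (diminishing returns as the limit rises), not real Blender scores:

```python
# Hypothetical (power limit in W -> Blender score) pairs; illustrative only.
samples = {45: 520, 65: 760, 95: 950, 125: 1050, 180: 1150, 253: 1210}

def sweet_spot(data: dict) -> int:
    """Return the power limit with the best score-per-watt."""
    return max(data, key=lambda w: data[w] / w)

print(sweet_spot(samples))  # with these made-up numbers, 65 W wins
```

With real Raptor Lake data the curve would shift right, which is consistent with the 80-95 W estimate above.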
Intel actually changed the cache structure pretty close to how I thought they might, to balance considerations around cache misses between the P-core and E-core types with regard to the L1 and L2 caches of each. Combined with processor scheduling and a shared L3 cache, you can do a kind of foreground/background role assignment between them to optimize the base and boost frequencies of each core type: short-duration high-boost ST on the P-cores and long-duration low-boost MT on the E-cores is the general premise of the design, and it appears Intel has tried to do just that. You could also reverse that role structure, much like you can reverse background and foreground process scheduling in Windows with time slices.
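You can do a crude version of that role assignment yourself by restricting a background task's CPU affinity. This sketch uses Linux's `os.sched_setaffinity`; which logical CPU indices map to E-cores varies by chip and OS, so the "second half of the CPUs" split here is purely an assumption you'd verify against your own topology first:

```python
import os

def pin_to_cpus(pid: int, cpus: set) -> set:
    """Restrict process `pid` to the given logical CPUs; return the new mask."""
    os.sched_setaffinity(pid, cpus)       # Linux-only call
    return os.sched_getaffinity(pid)

all_cpus = sorted(os.sched_getaffinity(0))
# Assumption for illustration: treat the second half of the logical CPUs
# as the "background" (E-core-like) set. Check /proc/cpuinfo in reality.
background_set = set(all_cpus[len(all_cpus) // 2:])
print(pin_to_cpus(0, background_set))
```

On Windows the equivalent knobs are process priority and affinity masks; the OS scheduler (plus Thread Director on hybrid Intel chips) normally handles this for you, so manual pinning is only worth it for long-running batch work.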
I am pretty interested in the 13600K in particular, though the DDR4 board options aren't as appealing on features, unfortunately. I really like fuller-featured workstation boards like the Z690 Aero D and the new Z790 ProArt. I'm not a big fan of cut-down micro-ATX and ITX boards. I don't really have a big issue with micro-ATX as such, though I feel like in the modern era the designs have gone downhill a bit in terms of PCIe slot functionality in order to incorporate onboard M.2 slots. Perhaps if they go back to a reasonable number of full-length slots, and combine that with more USB4/TB4 ports on the rear I/O in place of a truckload of onboard M.2 slots, I'll take a look at them again. I think micro-ATX and ITX could stand to have more rear USB4/TB4 ports to make them a bit more versatile. With early micro-ATX boards I didn't feel like I was sacrificing anything crucial, but I can't quite say the same about modern ones.
The sensible thing is to wait, or, if you need to upgrade, to go with the good option (RL).
Another option is that you are on AM4 and happy with your mobo, so drop in the best Zen 3 CPU you can get.
You can even buy a new Zen 3 system altogether and be very much OK if Intel is out of the question for some reason.
Zen 4 is sadly in none of those options, hence the low demand for it.
Whoever is reading this: get a B660M or B660-F if it's similarly priced and slap a 13600K on it. If you feel the need in a few years, slap a 13900K in instead; by then it will be madly cheap, you'll get yourself a +30% multi-thread boost, and you'll have saved a bunch of money (even single-thread is 8-10% better on the 13900K).
The 13600K should be a good deal for many users. Most people do not need a huge MT boost; that side of a CPU's potential never gets used by most owners who game.
Zen 4's higher SKUs, the 7900X and 7950X, are productivity powerhouses that will gradually become more popular once prices settle down a bit. Neither the 13900K nor the 13700K can beat them in efficiency, power management, or platform longevity.
With AMD you're getting support until 2025 (which is fairly soon).
With Intel you're getting room for the 13900K only (maybe a KS).
And Intel is still a better buy. If you get a 13600K or 13700K, you won't need to upgrade until AM5 reaches EOL anyway.
AMD has the better offer at the high end due to the simple fact that those engineers, architects, and creatives will be able to slot a single Zen 5 or Zen 6 CPU into such a system. Plus, Zen CPUs will save them time and money in the long run due to superior power efficiency in specific workflows and faster job completion. See GN's review of those CPUs for more details. 'Smart' is a vague and relative concept.
Then, when the market for those saturates, use that TSMC factory time to make a buttload of 7800X3D AM5 chips, and I bet those will sell out on day one.
If AMD is smart I think this should be the plan personally.
Well I'm not in a hurry. Even if they do that, it will take many months for changes in allocation to take effect.
If any of the higher consumer chips are good enough for EPYC validation, that may be an option, but I'm not sure they are. They are probably using the relevant bins for EPYC already, so that will probably not happen.
The 5800X3D has fairly limited supply, as these cache chiplets (correct naming?) are probably leftovers that didn't pass the requirements for EPYC. If they were to mass-produce extra wafers of cache just for consumer products, it would be way too costly. We are talking about a lot of extra die space just to get a tiny amount of extra performance. The 5800X3D was mostly a gimmick, a successful PR stunt, but in reality its gains are far smaller than most forum users think: a few percent extra gaming performance and gains in a handful of applications, while the rest performs worse due to lower clocks. The story for a 7800X3D will be pretty much the same, and it will probably not live up to the hype, but I still expect them to launch it eventually, once they have enough cache chiplets to make at least some.
And to make things worse, Intel markets its TDP (another crazy term for non-PC folks :)) at nominal clocks. Use any boost and thermals skyrocket, yet most buyers believe they will get the most out of their CPU with the rated cooler.
And, as you said, add 'water' to the PC and people go bananas.