
Why does everyone say Zen 5 is bad?

More importantly, there is a very good reason why new bleeding-edge processes usually start out with Apple and their low-power SoCs - initially these processes are just not a good fit for power-hungry desktop parts and the yields would be abysmal. Apple is essentially both a test run and a stabilizer. Even if NV or AMD COULD buy some 2nm allocation, there is no way they would WANT to. I think ARF believes that new nodes are magic that by themselves make any chip design better and work OOB. They're not.
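To put rough numbers behind the yield point: a minimal sketch using the textbook Poisson defect-density yield model, with purely illustrative defect densities and die areas (not TSMC figures).

```python
import math

def poisson_yield(defect_density_per_cm2: float, die_area_cm2: float) -> float:
    """Textbook Poisson yield model: Y = exp(-D0 * A)."""
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

# Illustrative numbers only - not real foundry data.
mature_node_d0 = 0.07   # defects/cm^2 on a healthy, mature node
young_node_d0  = 0.50   # defects/cm^2 on a node early in its ramp

dies = {"phone SoC (~1 cm^2)": 1.0, "big desktop GPU (~6 cm^2)": 6.0}
for name, area in dies.items():
    print(f"{name}: mature node {poisson_yield(mature_node_d0, area):.0%}, "
          f"young node {poisson_yield(young_node_d0, area):.0%}")
# phone SoC (~1 cm^2): mature node 93%, young node 61%
# big desktop GPU (~6 cm^2): mature node 66%, young node 5%
```

The same early-ramp defect density that still leaves a small mobile die mostly sellable makes a large desktop die almost unmanufacturable, which is why the Apple-first pattern exists.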

Some time ago, it was AMD who led Apple.
AMD released the first 40nm GPU back in Q2 2009 (RV740 aka Radeon HD 4770), while Apple released a 45nm (at Samsung) A4 in 2010.

You probably forgot that it was AMD who always used the state-of-the-art manufacturing node first.
 
Performance increases are significantly below what consumers expected and below the average uplift of past generations. There's no improvement to efficiency either: when both architectures are within their power sweet spot, they end up about the same. The 9700X only looks good next to the 7700X power-wise because the 7700X is tuned out of its sweet spot by default; simply limiting its power would bring it in line, as the 7700 shows.

The 9700X's performance increase is so low it's really not worth considering at its current price. If AMD were to cancel its launch, I doubt anyone would care.

Maybe the X3D parts with 2nd-gen X3D cache will fare better, but that's going to depend on how the new architecture works with the X3D cache and how much of an improvement the 2nd-gen cache is. This release does not have me holding my breath.
 
I mean idk about anyone else but before I even judge performance I generally just want my PC to start.
somebody here has very high standards
To be fair it says "up to" right there.
It is in very small lettering at the bottom, but I'm willing to bet most AMD fanboys either skipped that or ignored it.
 
Some time ago, it was AMD who led Apple.
AMD released the first 40nm GPU back in Q2 2009 (RV740 aka Radeon HD 4770), while Apple released a 45nm (at Samsung) A4 in 2010.

You probably forgot that it was AMD who always used the state-of-the-art manufacturing node first.
You mean the times when node shrinks worked rather differently than now (there wasn't really a market of low-power mobile SoCs driving the industry) and Apple had just started designing their own SoCs and had a partnership with Samsung, hence their choice of 45nm? By the way, 40nm TSMC was equivalent to modern-day half-step optimizations, like what N4 is to N5. It was still fundamentally a 45nm-tier node. I love comparing apples to oranges. As it stands now, TSMC 2nm is not viable for mass production of desktop CPUs or GPUs. End of. The idea that AMD, NV or Intel deliberately sabotage themselves and have no idea what they are doing is asinine.
Again, as @Dr. Dro mentioned, a 2nm Zen 5 CPU would be insanely expensive and the yields would also mean that the supply would be rather small. You up for paying 2000 bucks for those?

somebody here has very high standards
I mean, that IS a good starting point. Should be in Pros in the review - "The PC with this CPU starts and boots to desktop OK, very impressive".
 
You mean the times when node shrinks worked rather differently than now (there wasn't really a market of low-power mobile SoCs driving the industry) and Apple had just started designing their own SoCs and had a partnership with Samsung, hence their choice of 45nm? By the way, 40nm TSMC was equivalent to modern-day half-step optimizations, like what N4 is to N5. It was still fundamentally a 45nm-tier node. I love comparing apples to oranges. As it stands now, TSMC 2nm is not viable for mass production of desktop CPUs or GPUs. End of. The idea that AMD, NV or Intel deliberately sabotage themselves and have no idea what they are doing is asinine.
Again, as @Dr. Dro mentioned, a 2nm Zen 5 CPU would be insanely expensive and the yields would also mean that the supply would be rather small. You up for paying 2000 bucks for those?

What does this mean? When will AMD be ready for 2 nm? In 2030?

What I want is Ryzen 7 9700X to be a 12-core (if not, then at least 10-core) with 50% more cache than now, made on the 3nm process node, being a monolithic chip.
This is my expectation.
 
What does this mean? When will AMD be ready for 2 nm? In 2030?
When yields for high-power chips become viable. So, when TSMC says they are. I am not sure why you fail to understand that this isn't on AMD only.

What I want is Ryzen 7 9700X to be a 12-core (if not, then at least 10-core) with 50% more cache than now, made on the 3nm process node, being a monolithic chip.
This is my expectation.
I too want a Ford F150 to be a 911 GT3 RS beater on the Nürburgring. For some reason, Ford keeps making it a workman's pickup truck. No idea why.
 
What does this mean? When will AMD be ready for 2 nm? In 2030?

What I want is Ryzen 7 9700X to be a 12-core (if not, then at least 10-core) with 50% more cache than now, made on the 3nm process node, being a monolithic chip.
This is my expectation.
N3 has abysmal SRAM scaling so I wouldn't expect increases in cache size when AMD moves to that family of process nodes.
 
What does this mean? When will AMD be ready for 2 nm? In 2030?

What I want is Ryzen 7 9700X to be a 12-core (if not, then at least 10-core) with 50% more cache than now, made on the 3nm process node, being a monolithic chip.
This is my expectation.
@Onasi already stated why that's not feasible
 
When yields for high-power chips become viable. So, when TSMC says they are. I am not sure why you fail to understand that this isn't on AMD only.

Sorry, but the yields won't magically improve if no one makes the said chips in risk production.
 
Sorry, but the yields won't magically improve if no one makes the said chips in risk production.
What the hell do you think Apple orders are? Look at 3nm - they started out with the mobile A17, then went to N3E with the more powerful M4. That's how it works. It's an iterative marathon, not a sprint where TSMC beats their head against the wall trying to instantly produce AD102-sized chips with yields of maybe a couple per wafer.
And this is not risk production; that comes before even offering a node to clients on a regular basis.
Just stop, you know nothing of what you are talking about. Hell, I don't know what I am talking about beyond the basics, since modern lithography is THAT complex of a business.
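For a sense of scale on the "AD102-sized chips" point, here is a rough sketch of good dies per 300mm wafer, using the common dies-per-wafer approximation and the same illustrative Poisson yield model as above; the ~609 mm² figure is AD102's approximate size, and the defect density is a made-up early-ramp value.

```python
import math

def gross_dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300) -> float:
    """Common approximation: pi*(d/2)^2 / A - pi*d / sqrt(2*A)."""
    r = wafer_diameter_mm / 2
    return (math.pi * r**2 / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

die_area_mm2 = 609                                    # roughly AD102-sized
d0 = 0.5                                              # defects/cm^2, illustrative early-ramp value
yield_fraction = math.exp(-d0 * die_area_mm2 / 100)   # Poisson model, area converted to cm^2

gross = gross_dies_per_wafer(die_area_mm2)
print(f"gross dies: {gross:.0f}, fully good dies: {gross * yield_fraction:.0f}")
# gross dies: 89, fully good dies: 4 (before any harvesting/binning)
```

Under those assumptions a 600 mm²-class die on a freshly ramping node yields only a handful of fully good chips per wafer, which is why nobody leads a node ramp with one.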
 
What the hell do you think Apple orders are? Look at 3nm - they started out with the mobile A17, then went to N3E with the more powerful M4. That's how it works. It's an iterative marathon, not a sprint where TSMC beats their head against the wall trying to instantly produce AD102-sized chips with yields of maybe a couple per wafer.
And this is not risk production; that comes before even offering a node to clients on a regular basis.
Just stop, you know nothing of what you are talking about. Hell, I don't know what I am talking about beyond the basics, since modern lithography is THAT complex of a business.

Don't worry, there will be millions of unhappy clients who will vote with their wallets - don't buy that crap. The business model will fail, and TSMC and AMD will be forced to change ;) :D
 
Don't worry, there will be millions of unhappy clients who will vote with their wallets - don't buy that crap. The business model will fail, and TSMC and AMD will be forced to change ;) :D
I know that maybe for some in the modern world this seems like a shock, but MUH CAPITALISM cannot and will not change the laws of physics and applied materials science.
 
The business model will fail, and TSMC and AMD will be forced to change
TSMC just beat Q1 earnings expectations (already high) and are booked solid for several quarters thanks to AI. I don't believe the word "fail" means what you think it means.
 
There are two node shrinks after it, in case you don't know about them: TSMC 3nm and TSMC 2nm.

AMD just showed us the same old story - snatching defeat from the jaws of victory.
Sigh......

  • You do realise that the 4nm tech being used in AMD's parts is TSMC N4X, which only entered serial production at the end of '23/early '24.
  • The equivalent 3nm stuff isn't being done till late '24/early '25.
  • 2nm is 2025/26 at the earliest.
There isn't just one "node" anymore, as TSMC and all the rest of the foundries tend to do multiple types of fabrication at a given node, and these have differing strengths/weaknesses on things like transistor density, leakage, and cost.

AMD who always used the state-of-the-art manufacturing node first.
Eh...Not really?

Intel's foundries tended to be leading at the time. Their 65/45/32nm production was always quicker than the competition's, sometimes by quite a bit.

Core 2 Duos @65nm were released in July '06 and went up against Athlon X2s, which were on 90nm at the time; the 65nm refresh was released in December '06.
Core 2 Quads @45nm were released in Jan '08 and went up against Phenom I, which was on 65nm; it wasn't until Phenom II was released in Feb '09 that AMD had a 45nm part.
Core i series @32nm were released in March '10, and they wouldn't have "competition" from the FX series until March 2011.

If anything, this just shows how badly Intel's foundries have got it wrong in the last 5 or so years.

Sorry, but the yields won't magically improve if no one makes the said chips in risk production.
They are called the "Apple faithful", who are willing to spend a minimum of £1500 for a minimum acceptable spec in 2024 (16GB MacBook Air)... don't even get me started on the MBP pricing.

Also, Apple are happy to pay the early premium tax AND absorb the slightly higher defect rates from the early production runs of a 1st-gen node.



Would everyone be looking forward to £500 9600X and £1300+ 9950X parts with that mentality?
 
IDK, I see a lot of difference in reviews.

Those two guys from AU, and that long-haired guy that looks like he's related to John Belushi, think it's boring.

AT, that guy from CA whose name isn't Linux, and the Linux guy think it's really good (below).
When taking the geometric mean of those nearly 400 raw benchmark results, it sums up the greatness of Zen 5 with the Ryzen 5 9600X and Ryzen 7 9700X processors. The Ryzen 7 9700X delivered 1.195x the performance of the Core i5 14600K competition or 1.15x the performance of the prior generation Ryzen 7 7700X. The Ryzen 5 9600X came in at 1.35x the performance of the Core i5 14500 and 1.25x the performance of the Ryzen 5 7600X. Or if still on Zen 3 for comparison, the Ryzen 5 9600X was 1.82x the performance of the Ryzen 5 5600X.

The raw performance of these Ryzen 9000 series processors was extremely impressive.
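The Phoronix summary above hinges on geometric means of per-benchmark ratios; here is a minimal sketch of that calculation, using made-up per-test ratios purely to show how a single headline figure like "1.15x the 7700X" falls out of many individual results.

```python
from math import prod

def geomean(ratios: list[float]) -> float:
    """Geometric mean of per-benchmark performance ratios (new / old)."""
    return prod(ratios) ** (1 / len(ratios))

# Hypothetical per-benchmark 9700X vs 7700X ratios, not Phoronix's raw data.
ratios = [1.32, 1.08, 1.02, 1.21, 1.14, 1.16]
print(f"{geomean(ratios):.2f}x")   # -> 1.15x
```

The geometric mean is used instead of a plain average because it keeps a single outlier benchmark from dominating the summary figure.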
 
N3 has abysmal SRAM scaling so I wouldn't expect increases in cache size when AMD moves to that family of process nodes.
That is the case, but remember that N4P also had low SRAM scaling, yet the 32MB L3 in the Zen 5 CCD is much smaller. If they can shrink the logic a good amount, that frees up more die area for caches again.
 
It's the typical pipeline of expectations and disappointment.
Expectation: AMD said these were gaming leadership parts.
Reality: They are not.

Why are people disappointed?
1. AMD released the low TDP parts first.
2. AMD set the price a bit high.
3. AMD still hasn't increased core count since 2019.
4. AMD hasn't been able to launch X3D parts alongside the initial parts - if this keeps happening, the initial launches will probably always be disappointing and hurt their products' first impressions.
5. Not AMD's fault but there were rumors of even more insane results. While these rumors should have been discarded as false, some people still believed them.
Agreed.

It almost seems like if the chips reviewed today (9600X/9700X) had come out in 2022 it would have been great, but those same chips now in 2024/2025? Very meh.
 
That is the case, but remember that N4P also had low SRAM scaling, yet the 32MB L3 in the Zen 5 CCD is much smaller. If they can shrink the logic a good amount, that frees up more die area for caches again.
SRAM doesn't benefit from smaller node sizes the same way logic does. I mean, most DRAM nodes are still in the 10/12nm range due to this fact. So I think we are going to see the rise of stacked caches for the short/medium term until there is a breakthrough in either materials or layout design.
 
SRAM doesn't benefit from smaller node sizes the same way logic does. I mean, most DRAM nodes are still in the 10/12nm range due to this fact. So I think we are going to see the rise of stacked caches for the short/medium term until there is a breakthrough in either materials or layout design.
We all know that, but it seems you are not following what I said. AMD already set up their core for future scaling by making it mainly logic. The N4P Zen 5 L3 cache (all SRAM) is 0.6x the area of the N5 Zen 4 L3 cache. Zen 5 heavily increased the proportion of die area used for logic.

When they make Zen 6 on N3E, the logic WILL scale, which means they will have even more area available for cache despite the cache itself not scaling.

So I don't think they'll go to stacked cache by default even with Zen 6. Too costly.
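A back-of-envelope sketch of that area argument; every number here (CCD size, L3 fraction, N3E logic scaling) is a loose assumption for illustration, not a measured die figure.

```python
# "Logic scales, SRAM doesn't" - what that frees up on a future CCD.
ccd_area      = 70.0   # mm^2, ballpark Zen 5 CCD size (assumption)
sram_fraction = 0.25   # fraction of the CCD that is L3 SRAM (assumption)
logic_shrink  = 0.70   # assumed N4P -> N3E logic area scaling; SRAM stays ~1.0x

sram_area  = ccd_area * sram_fraction
logic_area = ccd_area - sram_area

new_logic_area = logic_area * logic_shrink
freed_area     = logic_area - new_logic_area   # area that could become extra cache

print(f"logic {logic_area:.1f} -> {new_logic_area:.1f} mm^2, "
      f"freeing {freed_area:.1f} mm^2 next to an unchanged {sram_area:.1f} mm^2 of L3")
# With these assumptions: ~15.8 mm^2 freed vs 17.5 mm^2 of existing L3,
# i.e. room for nearly double the L3 at the same die size, without stacking.
```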
 
IDK, I see a lot of difference in reviews.

Those two guys from AU, and that long-haired guy that looks like he's related to John Belushi, think it's boring.

AT, that guy from CA whose name isn't Linux, and the Linux guy think it's really good (below).
Yeap. Tom's seems to like it too. Some of these others look like the hate storm from clicks is too hard to pass up.

The $280 Ryzen 5 9600X sets a new bar in the value segment, earning 4.5 stars in our ranking. Its class-leading gaming and single-threaded performance, coupled with exceptional power efficiency and a reasonable price point, earns it a spot on our list of the Best CPUs for gaming. Intel's competing Core i5-14600K still holds an advantage in heavily threaded workloads, but AMD has a commanding lead in gaming that helps offset that advantage. Additionally, the Ryzen 5 9600X's stellar power consumption metrics will ultimately yield a quieter system that has more forgiving cooling and power requirements.

The Ryzen 7 9700X also beats the competing Intel processors in gaming and single-threaded work, and while its $360 price tag is $40 lower than the launch price of its predecessor, pricing pressure from AMD's $375 Ryzen 7800X3D becomes more of an acute concern. We've assigned this chip a 3.5-star ranking (we'll have a full separate review in the coming days).
 
Bad is a very subjective term. Will these CPUs boot, game, and perform at a decent enough level? Sure, but we've waited 2 years for this, and the slides AMD showed off were at best misleading and at worst straight out of fantasy land.

For those who want power efficiency and don't care about any other metric, these CPUs are pretty decent.

For those who care about generational uplifts, gaming performance, or MT performance, this is a major letdown...

I'm not surprised; right after AMD's presentation I got the vibe that these wouldn't even be that impressive over Ryzen 7000.

Maybe we've been spoiled, and while I didn't expect this to beat the 7800X3D in gaming or the 14700K in MT, I did expect better than this.
 
Yeap. Tom's seems to like it too. Some of these others look like the hate storm from clicks is too hard to pass up.
I think we are seeing who Intel gives the most sponsorship money to. It is very crazy how some of them are so negative.
 
Because reviewers started using Reddit or other terrible communities to get their idea of what people want, instead of using proven methods such as the scientific method to do a review. As such, they have horrible bias in their opinions and criteria for what is considered "good".

Just my thoughts on the matter
 