
Zen 5 is only 16 cores.

TechSpot actually ran this test:

What We Learned


Cache matters, often more so than cores, when it comes to PC gaming performance. So, there you have it. As we found with Intel's 10th-gen series two years ago (and probably more than we realized at the time), the arrival of AMD's 3D V-Cache processors has proven this beyond a shadow of a doubt, posing a significant challenge to Intel's gaming performance, something the company deeply cares about.

This data also supports recommendations we made years ago. For example, we appreciated the value delivered by the Ryzen 5 5600 series and recommended it despite it having only 6 cores, which many at the time believed would be insufficient for future gaming.


Today, the 6- and 8-core Zen 3 parts are still delivering comparable performance, and of course, parts like the Ryzen 5 5600X are still very usable.

So, until games fully saturate the 5600X, you won't see an improvement with the 5800X, and by that time, we expect both CPUs will be struggling. Of course, there are instances where 8-core models of the same architecture are faster than their 6-core counterparts, but in those examples, the 6-core processors still deliver highly playable performance, making the core count argument moot, especially considering the cost difference.


As a more modern example, the Ryzen 5 7600 costs $210 and delivers comparable gaming performance to the 5800X3D. The Ryzen 7 7700 costs $310 – almost 50% more – and it would be challenging to find a game where the 7700 is even 20% faster than the 7600; in fact, such a scenario might not even exist. Looking over our most recent data from the 5700X3D review, the 7700X is just 4% faster than the 7600X on average, so in terms of value, the 6-core model is significantly better for gaming.
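Those value figures are easy to sanity-check; a quick sketch using the prices and the ~4% average gap quoted above:

```python
# Sanity check of the value comparison quoted above.
r5_price, r7_price = 210, 310   # Ryzen 5 7600 vs. Ryzen 7 7700 prices ($)
perf_ratio = 1.04               # 7700X ~4% faster than 7600X on average

premium = r7_price / r5_price - 1
perf_per_dollar = perf_ratio / (r7_price / r5_price)

print(f"price premium: {premium:.0%}")             # ~48% more money
print(f"relative perf/$: {perf_per_dollar:.2f}x")  # ~0.70x, worse value
```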
I don't care about his opinion, I care about his data. The data you posted shows big differences between 6 and 8 cores in, e.g., Star Wars (a 21% difference in 1% lows between the 5800X and 5600X), 13% in Cyberpunk 2077, and 12.5% in AC Mirage. The averages are hugely misleading; you are not playing averages. Couple that with the fact that we don't know where he is testing (it might be light areas), and it's not as conclusive as you might think. I've already posted live footage comparing core counts, so... why are we even arguing about it?
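Since 1% lows keep coming up: a minimal sketch of the common percentile approach to deriving them from a frame-time log (reviewers' exact methodologies vary), which also shows why averages hide stutter:

```python
# Compute average FPS and the "1% low" from a list of frame times (ms).
# This uses the common approach of averaging the slowest 1% of frames.

def fps_stats(frame_times_ms: list[float]) -> tuple[float, float]:
    frames = sorted(frame_times_ms, reverse=True)    # slowest frames first
    avg_fps = 1000 * len(frames) / sum(frames)
    worst_1pct = frames[: max(1, len(frames) // 100)]
    low_1pct_fps = 1000 * len(worst_1pct) / sum(worst_1pct)
    return avg_fps, low_1pct_fps

# Example: mostly 10 ms frames (100 FPS) with a few 30 ms stutters
sample = [10.0] * 990 + [30.0] * 10
avg, low = fps_stats(sample)
print(f"avg: {avg:.0f} FPS, 1% low: {low:.0f} FPS")  # ~98 FPS avg, ~33 FPS low
```

A handful of stutters barely moves the average but tanks the 1% low, which is exactly why the averages alone are misleading.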
 
Seems Zen 5 might be only 16 cores. It's all you need for gaming, though, so I guess it's good enough. Intel Arrow Lake is definitely going to be more than that. So is it going to be AMD for gaming and Intel for gaming/productivity, I wonder.
Nobody can tell for sure how the next-gen CPUs will perform. But you can be sure about two things. First, Zen 5 will be strong at gaming and productivity; there is absolutely no reason to think you have to go the Intel route if you want both. Especially with the 13th/14th-gen disaster and unstable SKUs, I would be careful about going the Intel route. And second, for the best gaming experience there will be no choice other than Zen 5 3D.

I wouldn't expect much difference in gaming between non-3D Zen 5 and Arrow Lake. The Ryzen 9 9950X seems to be 10-15% faster than the 14900K there. So the interesting question is actually how it will look in productivity apps. According to AMD's numbers, for example in Cinebench R24, Zen 5 should be 20-30% faster. So, what about Arrow Lake? It's still 24 cores. The P-core is only somewhat faster per clock than Raptor Lake's, apparently around 10% (according to Intel, +14% over the Meteor Lake P-core, which itself was a slight regression from the Raptor Lake P-core). The new E-core seems to be about 40% faster on average for most workloads. So, theoretically, Arrow Lake could be 30% faster than Raptor Lake overall.

BUT, Arrow Lake will not have SMT enabled for the P-cores, so the advertised +10% might be more like -10-15% in multithreaded workloads. That could result in just a 20-25% overall performance uplift, just like Zen 5. And we don't know whether Arrow Lake will be able to run clock speeds as high as Raptor Lake's. Intel uses an improved fabrication node, but the cores are also beefier, with increased power requirements, and probably less optimized for high clock speeds due to their more process-node-agnostic design philosophy. So we might even see somewhat lower clock speeds overall.
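To make the arithmetic behind that estimate explicit, here is a minimal throughput sketch. Every weight in it is an assumption for illustration (the 8P+16E layout, a 25% MT gain from SMT, an E-core worth half a P-core), and the result moves a lot if you change them:

```python
# Rough MT throughput model: each core contributes relative "units" of
# performance. All weights are illustrative assumptions, not measurements.

P, E = 8, 16          # assumed 8 P-cores + 16 E-cores (24-core layout)
E_WEIGHT = 0.5        # assumed E-core throughput relative to a P-core
SMT_GAIN = 0.25       # assumed MT gain from SMT on a Raptor Lake P-core

raptor = P * (1 + SMT_GAIN) + E * E_WEIGHT                   # RPL baseline
arrow_smt = P * 1.10 * (1 + SMT_GAIN) + E * E_WEIGHT * 1.40  # if SMT were kept
arrow_no_smt = P * 1.10 + E * E_WEIGHT * 1.40                # SMT disabled

print(f"with SMT kept: {arrow_smt / raptor - 1:+.1%}")    # roughly +23%
print(f"without SMT:   {arrow_no_smt / raptor - 1:+.1%}")  # roughly +11%
```

The point is less the exact output than how sensitive the estimate is to the SMT and E-core weights.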

Anyway, except for the Zen 5 3D SKUs, it might be a tight competition between AMD and Intel. At least performance-wise. Power efficiency might be a different story; Intel still has a lot of catching up to do there.
 
Seems Zen 5 might be only 16 cores. It's all you need for gaming, though, so I guess it's good enough. Intel Arrow Lake is definitely going to be more than that. So is it going to be AMD for gaming and Intel for gaming/productivity, I wonder.
I'm still going to wait for reviews on both before I decide where to go next.
https://www.pcgamer.com/hardware/pr...on-we-couldnt-do-more-than-16-cores-says-amd/

I also think that the longer AMD stays at only 16 cores, the more competitiveness it will lose against Intel's offerings.
Remember that when Zen launched back in 2017, what made it competitive against Intel was that AMD's 8-core Ryzen 7 1800X fought against the quad-core i7-7700K and, later, the hexa-core i7-8700K.
AMD sticking to 16 cores very much resembles the never-ending Intel craze of quad cores being "good enough". That lasted for what, 5 or 6 generations of CPUs?

And the problem is not so much whether the games require more or fewer cores, but that Intel will give you more, that it's already 2024 (in case you've forgotten), and that these CPUs are going to be used well beyond 2030, certainly during the PlayStation 6 product lifecycle. A new console will definitely help more games use more cores more often.
 
I don't care about his opinion
He gets paid for his opinion, making him a professional.
I've already posted live footage comparing core counts, so... why are we even arguing about it?
I don't care about your live footage or amateur opinion; you have yet to post proof that you have a clue what you are talking about.
 
And the problem is not so much whether the games require more or fewer cores, but that Intel will give you more, that it's already 2024 (in case you've forgotten), and that these CPUs are going to be used well beyond 2030, certainly during the PlayStation 6 product lifecycle. A new console will definitely help more games use more cores more often.
Just… no. I would not expect the next consoles to go beyond 8 cores either. Add an NPU, maybe. But more cores? Unlikely. Hell, I would not be surprised if consoles go for a hybrid approach, if the rumors of AMD experimenting with Zen and ZenC cores are true. An 8+4 config with 4 "weaker" cores perma-reserved for the system sounds reasonable. Games aren't professional workloads that can scale well up to a ridiculous core count. Making games increasingly multi-threaded is incredibly hard, and often the performance benefits don't outweigh the difficulties. There is a reason why games are still comparatively low-thread workloads, and it's not because "consoles bad" or "developers lazy". And all the new technical gimmicks being introduced are HEAVILY GPU-weighted.

tl;dr AMD will be fine and 2030 or whatever is a long way off.
 
Just… no. I would not expect the next consoles to go beyond 8 cores either.

First, nowhere have I said that the "next console" will have more cores. I said that bad PC ports will require more cores in order to run acceptably on a PC.

tl;dr AMD will be fine and 2030 or whatever is a long way off.

Second, I also have not said anything about AMD's "fineness" through 2030. Read the post again.

AMD has made a strategic misstep by staying with crazy 170-watt, 16-core power hogs, instead of making the smarter move to a wide CPU with more execution units, clocked comparatively conservatively and in a considerably lower power envelope, while of course offering much higher performance because of its width.
 
AMD has made a strategic misstep by staying with crazy 170-watt, 16-core power hogs, instead of making the smarter move to a wide CPU with more execution units, clocked comparatively conservatively and in a considerably lower power envelope, while of course offering much higher performance because of its width.
That I actually do agree with; we see with both the new Intel E-cores and Apple's M-series chips that this is seemingly the most viable way forward.

I guess I spent way too long this evening in this thread, my eyes just glazed over while reading your post, and I rushed to some incorrect conclusions. Forgive my misunderstanding.
 
This isn't adding anything to the conversation; it's merely a character assassination attempt on anyone you disagree with. One could use the same logic to imply that you are just trying to validate your purchase as an Intel owner. Neither statement has any basis in fact; both just draw conclusions from assumptions.
Sure it is... it's called humor ;)

Try not to take yourself so seriously there, killer.
 
AMD's first 16-core in the AMZN top 50 comes in 14th place; it's not like they're that popular, even though it's ahead of any Intel 12/13/14900K/S. Yet no one wants more cores on Intel? :D

Also, lots of people who want more cores for AM5 don't need them themselves; their Ryzen 5 2500U laptop works great for their needs.

Meanwhile, they keep ignoring that AMD needs to sell Threadripper as well.

The OP would never buy AMD anyway; I don't really know the intent of this thread.
AMD will be fine and 2030 or whatever is a long way off.
2030 is actually closer now than Zen 1's launch date.

Scary.
 
There are so many arguments for AMD limiting itself to 16c on MSDT. Muh gaming certainly isn't one of them; it's simply a non-issue at that point.

It's good, in my view, that they're limiting themselves to it. They'll need to diversify their stack in other ways, and performance between different SKUs needs to matter enough. Not a bad thing for us.
 
There are so many arguments for AMD limiting itself to 16c on MSDT. Muh gaming certainly isn't one of them; it's simply a non-issue at that point.

It's good, in my view, that they're limiting themselves to it. They'll need to diversify their stack in other ways, and performance between different SKUs needs to matter enough. Not a bad thing for us.

Let's be real, they want to save 24 cores for HEDT (or at least what's left of it) and Epyc, and charge $$$$ for it. If Intel had a 16P+16E configuration, I bet it wouldn't take AMD long to figure it out.

At the same time, it's pretty useless for gaming, and I'd honestly prefer more PCIe lanes, but that's just me... I do find it funny that MT performance mattered a lot less when AMD destroyed Intel at it, to the point that Intel HEDT died lol. Now we need it for games, apparently, or the CPU is trash.....

Different views are a good thing: me with a 4090 and someone else with a 7800 XT are going to have much different views on how much CPU is necessary if we only account for our own use cases. I've said it multiple times: the R5 7600 and the equivalent Intel CPU are awesome at gaming, to the point that I almost purchased a second one over a 7800X3D just to sit with it till the 9800X3D comes out lol.... I doubt I'd notice the difference even with a 4090.
 
That I actually do agree with; we see with both the new Intel E-cores and Apple's M-series chips that this is seemingly the most viable way forward.

I guess I spent way too long this evening in this thread, my eyes just glazed over while reading your post, and I rushed to some incorrect conclusions. Forgive my misunderstanding.

Considering Apple is nonexistent in the desktop/server space, this is a poor take when talking about AMD's new desktop consumer SKUs (entirely unrelated and dissimilar to new mobile designs). Also, Intel will be MIA for desktop chips until Oct/Nov, and judging by their mobile Core Ultra chips, the ditching of HT/SMT, and the frequency cuts, the only thing they may compete on is power.

I doubt anyone here wants AMD to stagnate, but unlike Intel, who withheld innovation/improvements and got caught with their pants down, the scalability that's the entire basis of Zen allows them to easily compete any time they see fit if Intel is going to play the moar-cores game.
 
AMD Zen 5 stuck as a Skylake Refresh.... :D
 
AMD Zen 5 stuck as a Skylake Refresh.... :D

Nah, we are at least getting double-digit IPC improvements.

The i7-2600K through the i7-6700K, a span of half a decade, was not much of an improvement without the clock increases, and to get 6 cores you needed HEDT, which was always slightly behind desktop in IPC....

Intel could only get away with that shite due to Faildozer and Pilesucker..... Glad Intel, even with all their issues, is very competent; if only their stuff could just ship on time consistently, we'd be golden.
 
Intel 12/13/14900K/S, yet no one wants more cores on Intel?

How many cores does the Core i9-14900K have?

Considering apple is nonexistent in the desktop/server space

Sure... :laugh:

Microsoft's Windows was the dominant desktop operating system (OS) worldwide as of February 2024, with a market share of around 72 percent. Apple’s Mac operating system has gained market share over the years, growing to command around a fifth of the market. Linux and Google's Chrome OS have retained small but stable market shares in recent years.

Apple’s Mac operating system

With an equally long history, Apple's Mac operating system (macOS, previously Mac OS X and OS X) has also evolved through numerous releases. The most recent version, macOS Ventura, is the nineteenth release of macOS. An older version, Catalina, is currently the most popular macOS, running on 87.4 percent of Apple computers as of January 2023. macOS runs on Apple's Mac computers, including the MacBook (Apple's laptop line, covering the MacBook Pro and MacBook Air) and the iMac, Apple's desktop computer.

 
How many cores does the Core i9-14900K have?

Sure... :laugh:

Desktop and server is an important distinction you seem to have missed; I wasn't aware we were including users who scroll FB and IG on $2,500 laptops.
 
He gets paid for his opinion, making him a professional.

I don't care about your live footage or amateur opinion; you have yet to post proof that you have a clue what you are talking about.
You are being very aggressive for no reason, and I don't even understand what you are asking. What proof? I've posted live footage from my PC showing how it behaves in games with only 6 cores enabled. What's left to discuss? The results are clear as day.
 
That Ryzen 9 9950X might be the last Ryzen I buy. After that, I might change to the Threadripper HEDT platform because of the additional PCIe lanes.

To begin with, I will measure how much benefit my workload gets from switching SMT on/off. I'm not a big fan of it, as with every switch the whole context has to be rebuilt, and that takes time. Oh, and my workload doesn't include even one game. I don't think I need a new board; I have an Asus ProArt X670E. The only change I have seen is USB4, and since I mainly need USB 2 sticks for my 3D printers, I don't know what I'd need those USB4 ports for. But I will wait a bit for others to find and solve the first problems. I'm not an early adopter who needs the newest hardware just to have it.
 
No, this is more you cherry-picking the SKU and time frame you want to represent X3D. First off, what relevance does temporary launch scarcity pricing have to the pricing of a product overall? It'd be one thing if that pricing stuck for the majority of the product's lifetime; at least then you could claim that was effectively the price. That isn't the case here though: for the majority of its life, the 7950X3D has been at its MSRP. In essence, you are trying to take a small period of a product's aftermarket pricing history and implying that it somehow represents all X3D CPUs. That's nonsense. Second, why out of all the X3D CPUs would you choose the 7950X3D? You are trying to represent a class of products with the less common SKU, so I do believe you have explaining to do as to why that wasn't just a blatant attempt to cherry-pick.


You mean like intentionally ignoring other X3Ds, or excluding a large period of a product's life as you did above?

Valve was never part of the original conversation. You are jumping past so many other potential reasons to come to this conclusion. Again, simply because something isn't mentioned doesn't mean it's being ignored. By saying something is being ignored, you are implying intentional exclusion, and in this case you are assuming my intent, and I've already explicitly told you that assumption was wrong. Just because you do something, don't automatically assume someone else does.


Conclusions are subjective, the degree to which depends on the exact conclusion. Performance data is not. You should not be seeing large performance differences if you replicate the exact test setup and settings used in reviews. The use of said data could be considered somewhat subjective, though, as in how applicable it is at large or how useful said data is (its comparability to real-life scenarios, for example).


"drawbacks"?

Thus far you've only pointed out one thing you think is a drawback. There seems to be a mythos here where higher-than-normal prices during a full moon on the summer solstice turn into a three-headed hydra before our very eyes.
The price difference I mentioned was comparing the average street price of the 7950X to the X3D's MSRP for the first few months after the X3D launch. It isn't "temporary launch scarcity pricing"; it's the price it kept for most of the time until the first major price drop, which happened over 4 months after release.
The discussion was mainly about the 7950X3D, but the point about X3D CPUs being more expensive doesn't change anyway. The only case where an X3D CPU was cheaper than its standard equivalent was the 7900X3D being on sale because no one wanted it. All 7000X3D SKUs had their MSRP at least $100 higher than their closest equivalent.

The main drawbacks of X3D CPUs are that they run hotter while having a lower maximum temperature, the price, more limited overclocking, and, in the case of the dual-CCD SKUs, more potential scheduling issues.
Naturally, people who care about tasks that benefit from the cache would often be okay with the drawbacks; for someone who doesn't, it's just adding drawbacks while gaining nothing important to them.
 
I would love a 12-core, single-CCD Zen 5 X3D at 95-105 W. I think it would be the sweet spot for gaming and general work for a long time :)

If the xx50 version of that CPU is a dual-CCD 24-core, you'd still only have half the maximum performance available for parallelizable productivity.

I guess what I'm asking is whether 8 cores is currently not giving you the performance you need/anticipate, such that a 50% increase to 12 would make a big difference. Because if I were doing Blender, etc. for many hours a week (I'm not), I'd still have the same feeling that I would get more done with that dual-CCD CPU if my current one were only the single-CCD model, no matter whether that's made of 6, 8, 12, or 32 cores per CCD.
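The "half the maximum performance" point only holds for workloads that scale almost perfectly; a quick Amdahl's law sketch (the parallel fractions are illustrative assumptions) shows how quickly the gap shrinks otherwise:

```python
# Amdahl's law: speedup is limited by the serial fraction of the workload.
# Shows when doubling cores (one CCD vs. two) actually doubles throughput.

def speedup(cores: int, parallel_fraction: float) -> float:
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

for p in (0.95, 0.99, 1.0):  # fraction of the work that parallelizes
    s12, s24 = speedup(12, p), speedup(24, p)
    print(f"p={p:.2f}: 12c {s12:.1f}x, 24c {s24:.1f}x, 24c/12c = {s24 / s12:.2f}")
```

Only at p = 1.0 does the second CCD buy the full 2x; at 95% parallel it's closer to 1.4x.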
 
How many cores does the Core i9-14900K have?
Try altavista or something.

My point is that it has more cores, yet sells worse.

The word is that 16 cores isn't enough on AM5.

Still, no one thinks the 14900K should have even more cores in order to sell better than the one with fewer cores (9950X).

Hypocrisy.
 
If the xx50 version of that CPU is a dual-CCD 24-core, you'd still only have half the maximum performance available for parallelizable productivity.

I guess what I'm asking is whether 8 cores is currently not giving you the performance you need/anticipate, such that a 50% increase to 12 would make a big difference. Because if I were doing Blender, etc. for many hours a week (I'm not), I'd still have the same feeling that I would get more done with that dual-CCD CPU if my current one were only the single-CCD model, no matter whether that's made of 6, 8, 12, or 32 cores per CCD.
I got your point, and you are right in general, but I would love a 12-core, single-CCD Zen 5 X3D. I think it would be perfect for me when I play a game while simultaneously running VMs, Photoshop, Visual Studio, Android Studio, etc., on the second monitor (maybe even streaming too). For example, lately I was playing some RP Red Dead 2 on RedM, and it is very poorly optimized, using all 16 threads of my CPU at 60%+ utilization. It works fine, but I think 12 cores would be the sweet spot for scenarios like this and for future-proofing.

I'm not sure how well dual-CCD CPUs handle assigning programs and games to the correct CCD these days, and I don't really need 16 cores or the headache of managing program assignment to the right CCD all the time. Additionally, I want the X3D version because sometimes I'm playing WoW, and it provides a nice FPS boost. Unfortunately, this CPU will probably never happen; at least, I don't see it coming anytime soon.
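For what it's worth, when the scheduler gets it wrong, you can pin a game to one CCD by hand. A minimal sketch using psutil, where the process name and the assumption that logical CPUs 0-15 belong to the first CCD are mine (verify your actual topology first):

```python
# Pin a running game to the first CCD so the scheduler can't bounce it
# between CCDs. Assumes logical CPUs 0-15 map to CCD0 on a
# 16-core/32-thread part; check your topology before relying on this.
import psutil

GAME_EXE = "wow.exe"          # hypothetical process name, adjust to your game
CCD0_CPUS = list(range(16))   # assumed CCD0 logical CPUs

for proc in psutil.process_iter(["name"]):
    if (proc.info["name"] or "").lower() == GAME_EXE:
        proc.cpu_affinity(CCD0_CPUS)  # restrict this process to CCD0
        print(f"pinned PID {proc.pid} to CPUs 0-15")
```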
 
Tested at native 1440p as well with 6 cores / 12 threads. The GPU was anywhere between 76 and 88%. 0.2% lows were below 60.

While that CPU is not shown, it does illustrate that the CPU is very nearly maxed while the GPU has headroom, which is an indication of a CPU bottleneck. Throw in an 8-core CPU and that dynamic would change. This very clearly shows that the game in question will utilize lots of cores, and this is increasingly common in games from the last 6 or so years.
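That "CPU pegged, GPU with headroom" read can be turned into a crude check over logged utilization samples; a minimal sketch, with thresholds that are illustrative rather than any standard:

```python
# Crude CPU-bottleneck heuristic from logged utilization samples:
# CPU pegged while the GPU still has headroom suggests the CPU is the
# limit. Thresholds are illustrative, not a standard.

def looks_cpu_bound(cpu_pct: list[float], gpu_pct: list[float]) -> bool:
    avg_cpu = sum(cpu_pct) / len(cpu_pct)
    avg_gpu = sum(gpu_pct) / len(gpu_pct)
    return avg_cpu > 90 and avg_gpu < 95

# Example: numbers in the ballpark of the post above (GPU at 76-88%)
cpu_log = [95, 97, 93, 96, 98]
gpu_log = [76, 82, 88, 80, 79]
print(looks_cpu_bound(cpu_log, gpu_log))  # True -> likely CPU-limited
```

Note that aggregate CPU utilization can understate a bottleneck on one busy thread, so per-core data is even more telling.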
 