
Why does everyone say Zen 5 is bad?

Joined
Oct 30, 2020
Messages
250 (0.17/day)
Yes, good point, which means Zen 5 still sucks, as it is 5% better at best in an apples-to-apples comparison. It sounds like Win 11 gimped the performance of Ryzen 5000, 7000 and 9000 equally, so the idea that a patch will help doesn't hold, since it also helps 5000 and 7000 just as much. Meaning the point still stands that Zen 5 is hardly, if at all, better than Zen 4 for gaming and most other consumer and even professional workloads that are not AVX512. For AVX512, on the other hand, Zen 5 is a big, big uplift. But few use AVX512 for anything.

It's a bit unfair to say Zen 5 is not good for professional workloads that are not AVX512. Most of those see a 15-30% uplift without using AVX512; the AVX512 ones are even larger.

This 5% figure you keep repeating is for gaming, sure, but not for most consumer MT workloads, let alone professional ones. Have a look at the Phoronix review, and no, it's not because of Linux; it's the benchmarks themselves.
 
Joined
Aug 10, 2024
Messages
22 (0.20/day)
I'm kind of frustrated with this.

I get that, for example, the 9700X is on par with the 7700X in performance, but is it REALLY enough to claim Zen 5 is "DOA" or "bad"?
My point is, it's doing that at a lower clock (all-core 9700X = 4480 MHz vs. 7700X = 5190 MHz) AND with a lot lower power usage (9700X = 88 W vs. 7700X = 148 W).
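Just to put rough numbers on that, here's a back-of-envelope sketch (the scores are assumed roughly equal purely for illustration; only the power figures come from the numbers above):

```python
# Rough perf-per-watt comparison using the power figures quoted above.
# The scores are hypothetical and assumed equal, just to show the arithmetic.
score_9700x, watts_9700x = 100.0, 88.0
score_7700x, watts_7700x = 100.0, 148.0

eff_9700x = score_9700x / watts_9700x   # points per watt
eff_7700x = score_7700x / watts_7700x

print(f"9700X: {eff_9700x:.2f} pts/W, 7700X: {eff_7700x:.2f} pts/W")
print(f"Perf-per-watt gain: {(eff_9700x / eff_7700x - 1) * 100:.0f}%")  # ~68% with these inputs
```

Shuffle the scores a few percent either way and the efficiency gap stays large.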
Relevant tables from the GN video are attached.

What is wrong with everyone dumping on a CPU that is clearly better than the previous one?
This is a similar situation to the Core 2 Duo E6300 vs. the Pentium Extreme Edition 965 (just not as extreme, since we have very different pricing, but that's just the future for you).
Everyone clearly knows which of the two is better, even if both have similar performance:

So, WTF, reviewers?
From what I see, the only bad part about these new AMD CPUs is the price, but that will be adjusted later (as always).
Also, I would love to see overclocking performance and the power increase associated with it; however (I guess?) the early BIOSes/AGESA aren't stable enough for it?

Lastly, I'm really afraid of everyone always expecting performance jumps of 20-25% between generations, when frequency scaling is TOUGH at the higher end of the scale. Intel has clearly shown where the limits of that scaling lie (both now and in the Pentium 4 days), and what the consequences of blindly pushing for frequency increases are. Seeing a frequency regression is actually a good thing when it comes paired with similar performance against everything else.
We really don't need more frequency wars (on CPUs or GPUs), and there is more to a good CPU than just performance vs. the previous gen.

For much less, Hardware Unboxed did the review of the 14900K 10 months ago, which showed 555 watts of system consumption and a ridiculously low performance gain in games, like a 1% boost.
He didn't say it was a flop. He didn't make 4-5 videos saying BULLSHIT about Intel. I am really upset with Hardware Unboxed. Also, Zen 5 is amazing for Linux and server/HPC workloads. People are just crazy for clicks these days.

If you want mature reviews, go to 1leveltech.com etc.

I am with you, it's very unfair.

Check those two releases... FROM HARDWARE UNBOXED. They found a 4-5% uplift in games. Also, this processor consumes 88 watts max (65 W)...


Now when Intel does a release, this guy doesn't make 4 videos saying bad things, even when the 13th-to-14th generation uplift in games was a mere 1.9%...
Power consumption is also RIDICULOUS... THAT CPU consumes 50% more watts than AMD's (7950X).
IT WAS A FLOP.



It's a bit unfair to say Zen 5 is not good for professional workloads that are not AVX512. Most of those see a 15-30% uplift without using AVX512; the AVX512 ones are even larger.

This 5% figure you keep repeating is for gaming, sure, but not for most consumer MT workloads, let alone professional ones. Have a look at the Phoronix review, and no, it's not because of Linux; it's the benchmarks themselves.
 
Joined
May 10, 2023
Messages
256 (0.45/day)
Location
Brazil
Processor 5950x
Motherboard B550 ProArt
Cooling Fuma 2
Memory 4x32GB 3200MHz Corsair LPX
Video Card(s) 2x RTX 3090
Display(s) LG 42" C2 4k OLED
Power Supply XPG Core Reactor 850W
Software I use Arch btw
Thrashing the cache means poor performance, correct?
Depends on what you're running. Since L3 is just a victim cache, there are some applications that don't really care much about it, and you can see that in the applications where the X3D chips perform equal to or even worse than their non-X3D counterparts. Cinebench seems to be one of those tasks.
Also, stepping back to the original statement, it was talking about overprovisioned cores. In my test I'm clearly not overprovisioning the cores, so that is likely another flaw in my test: not supplying enough contention on the cache to produce a meaningful result.
Even if you spin up 10 VMs using 10 cores each on your machine (so way more cores than you actually have), your bottleneck is going to be the rest of the CPU, since Cinebench doesn't care much about the extra cache.
In the mixed CCD setup I made it slightly more complex by only using 1.5 cores on each CCD.
I don't think Cinebench has issues with cross-CCD latency, so it's going to be faster since it has more actual resources to make use of, hence why you got better results.
But I guess those tests of yours didn't mean much, since you wanted to see cache stuff but ended up not doing so haha
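If anyone wants to see the "doesn't care about extra L3" effect first-hand, here's a minimal working-set sweep (a rough sketch with made-up sizes, nothing Cinebench-specific): effective bandwidth stays high while the array fits in cache and drops once it spills to RAM, and a workload living on the small, flat part of that curve gains little from more L3.

```python
import time
import numpy as np

# Sweep the working-set size: a reduction over an array that fits in cache runs
# at roughly the same speed no matter how much L3 you have; only once the data
# spills past L3 does extra cache (e.g. the X3D parts) start to matter.
for mib in (1, 4, 16, 64, 256):
    a = np.random.rand(mib * 1024 * 1024 // 8)  # float64 array of ~mib MiB
    a.sum()                                     # warm-up pass
    t0 = time.perf_counter()
    for _ in range(10):
        a.sum()
    dt = time.perf_counter() - t0
    print(f"{mib:4d} MiB working set: {10 * a.nbytes / dt / 1e9:6.1f} GB/s effective")
```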
 
Joined
Jul 5, 2013
Messages
27,797 (6.68/day)
I get that, for example, the 9700X is on par with the 7700X in performance, but is it REALLY enough to claim Zen 5 is "DOA" or "bad"?
My point is, it's doing that at a lower clock (all-core 9700X = 4480 MHz vs. 7700X = 5190 MHz) AND with a lot lower power usage (9700X = 88 W vs. 7700X = 148 W).
You have perfectly valid points. Personally, I think every reviewer that has failed to see the very points you just pointed out really needs to take a step back, STFU and see the forest for the trees. Better performance + lower power usage + lower heat = big win in my book. It's the exact same thing as the RTX 4060/4060ti models. They had better performance, MUCH lower power draw and ran cooler. Seriously, do people have their heads up their butts or what?
 
Joined
Apr 30, 2020
Messages
985 (0.59/day)
System Name S.L.I + RTX research rig
Processor Ryzen 7 5800X 3D.
Motherboard MSI MEG ACE X570
Cooling Corsair H150i Cappellx
Memory Corsair Vengeance pro RGB 3200mhz 32Gbs
Video Card(s) 2x Dell RTX 2080 Ti in S.L.I
Storage Western digital Sata 6.0 SDD 500gb + fanxiang S660 4TB PCIe 4.0 NVMe M.2
Display(s) HP X24i
Case Corsair 7000D Airflow
Power Supply EVGA G+1600watts
Mouse Corsair Scimitar
Keyboard Cosair K55 Pro RGB
You have perfectly valid points. Personally, I think every reviewer that has failed to see the very points you just pointed out really needs to take a step back, STFU and see the forest for the trees. Better performance + lower power usage + lower heat = big win in my book. It's the exact same thing as the RTX 4060/4060ti models. They had better performance, MUCH lower power draw and ran cooler. Seriously, do people have their heads up their butts or what?

It's the way reviewing has changed over the years: moving away from in-game benchmarks that repeat a scene as tools, to using actual gameplay instead and picking out spots that the reviewer believes tax the system. Also, they have shifted to using games with higher current player populations over older titles with low player counts.


I disagree with it, because it brings too much variability into testing, with no baseline or control point.

(Note: I'm typing on my phone.)
 
Joined
Oct 23, 2020
Messages
56 (0.04/day)
You have perfectly valid points. Personally, I think every reviewer that has failed to see the very points you just pointed out really needs to take a step back, STFU and see the forest for the trees. Better performance + lower power usage + lower heat = big win in my book. It's the exact same thing as the RTX 4060/4060ti models. They had better performance, MUCH lower power draw and ran cooler. Seriously, do people have their head up their butts or what?
Other posts have already refuted that point. The 7700 pulls the same power as the 9700X and is less than 10% slower on average in common workloads. Also, the clocks mentioned don't seem to reflect the real world, at least according to the TPU review, which shows the 9700X's all-core clocks within ±100 MHz of the 7700's, depending on the workload.
The 4060 and 4060 Ti are also a different thing. The power consumption difference is a bit exaggerated if you can tweak, as Ampere could easily have used less power without a significant performance impact, though at stock it's a fair point. But the biggest issue I remember people having with those was the combination of a small performance improvement at the same price with a clear drop in hardware: the 4060 Ti has the hardware and drawbacks you would expect from an xx50 Ti card or lower, at double the price.
It's a bit unfair to say Zen 5 is not good for professional workloads that are not AVX512. Most of those see a 15-30% uplift without using AVX512; the AVX512 ones are even larger.

This 5% figure you keep repeating is for gaming, sure, but not for most consumer MT workloads, let alone professional ones. Have a look at the Phoronix review, and no, it's not because of Linux; it's the benchmarks themselves.
Saying that most of those have a 15-30% uplift without AVX512 is quite the exaggeration. It seems like most are between 8-18%, with some going higher. There's a reason the mean in the Phoronix review was 18% even with all the tests that have huge improvements.
 
Joined
Jan 20, 2019
Messages
1,559 (0.73/day)
Location
London, UK
System Name ❶ Oooh (2024) ❷ Aaaah (2021) ❸ Ahemm (2017)
Processor ❶ 5800X3D ❷ i7-9700K ❸ i7-7700K
Motherboard ❶ X570-F ❷ Z390-E ❸ Z270-E
Cooling ❶ ALFIII 360 ❷ X62 + X72 (GPU mod) ❸ X62
Memory ❶ 32-3600/16 ❷ 32-3200/16 ❸ 16-3200/16
Video Card(s) ❶ 3080 X Trio ❷ 2080TI (AIOmod) ❸ 1080TI
Storage ❶ NVME/SSD/HDD ❷ <SAME ❸ SSD/HDD
Display(s) ❶ 1440/165/IPS ❷ 1440/144/IPS ❸ 1080/144/IPS
Case ❶ BQ Silent 601 ❷ Cors 465X ❸ Frac Mesh C
Audio Device(s) ❶ HyperX C2 ❷ HyperX C2 ❸ Logi G432
Power Supply ❶ HX1200 Plat ❷ RM750X ❸ EVGA 650W G2
Mouse ❶ Logi G Pro ❷ Razer Bas V3 ❸ Logi G502
Keyboard ❶ Logi G915 TKL ❷ Anne P2 ❸ Logi G610
Software ❶ Win 11 ❷ 10 ❸ 10
Benchmark Scores I have wrestled bandwidths, Tussled with voltages, Handcuffed Overclocks, Thrown Gigahertz in Jail
It's the way reviewing has changed over the years: moving away from in-game benchmarks that repeat a scene as tools, to using actual gameplay instead and picking out spots that the reviewer believes tax the system.

I disagree with it, because it brings too much variability into testing, with no baseline or control point.

"actual gameplay" benchmarking is no stranger to variable results either, or increased variability. You can end up with some pretty fluctuated game conditions which are hard to replicate (esp. multiplayer titles). Then theres the issue of different reviewers targeting different scenes for a mix bag of results so cross-referencing would be a bummer. On top, can't blame the reviewers either as real-time gameplay setup/play/recordings/assessments are time consuming.

For actual benchmarks of a specific title with specific hardware (CPU/GPU), you can look to YouTube videos, where something usually comes up. I take feedback from wherever it may come, and usually from numerous sources, when it's time to pull the upgrade trigger.
 
Joined
Oct 30, 2020
Messages
250 (0.17/day)
Saying that most of those have a 15-30% uplift without AVX512 is quite the exaggeration. It seems like most are between 8-18%, with some going higher. There's a reason the mean in the Phoronix review was 18% even with all the tests that have huge improvements.

Honestly, I forgot to mention the type of workloads. In their review, have a look at the ML and creator workload tests. You'll easily find 15-30% gains there.

My point is, it's a very good CPU for professional workloads and saying there's 5% gain in anything aside from AVX512 is a gross exaggeration.
 
Joined
Oct 23, 2020
Messages
56 (0.04/day)
Honestly, I forgot to mention the type of workloads. In their review, have a look at the ML and creator workload tests. You'll easily find 15-30% gains there.

My point is, it's a very good CPU for professional workloads and saying there's 5% gain in anything aside from AVX512 is a gross exaggeration.
When talking about more common workloads and the 9700X and 9600X, a 5% average isn't that far from accurate. Even for the 9950X, I would say an average of ~9% over the 7950X is about what should be expected in common professional workloads.
The big issue with Zen 5 is that it's good with some niche software and workloads, while being relatively bad at most common software and workloads.
 
Joined
Aug 29, 2005
Messages
7,262 (1.03/day)
Location
Stuck somewhere in the 80's Jpop era....
System Name Lynni PS \ Lenowo TwinkPad L14 G2
Processor AMD Ryzen 7 7700 Raphael (Waiting on 9800X3D) \ i5-1135G7 Tiger Lake-U
Motherboard ASRock B650M PG Riptide Bios v. 3.10 AMD AGESA 1.2.0.2a \ Lenowo BDPLANAR Bios 1.68
Cooling Noctua NH-D15 Chromax.Black (Only middle fan) \ Lenowo C-267C-2
Memory G.Skill Flare X5 2x16GB DDR5 6000MHZ CL36-36-36-96 AMD EXPO \ Willk Elektronik 2x16GB 2666MHZ CL17
Video Card(s) Asus GeForce RTX™ 4070 Dual OC (Waiting on RX 8800 XT) | Intel® Iris® Xe Graphics
Storage Gigabyte M30 1TB|Sabrent Rocket 2TB| HDD: 10TB|1TB \ WD RED SN700 1TB
Display(s) KTC M27T20S 1440p@165Hz | LG 48CX OLED 4K HDR | Innolux 14" 1080p
Case Asus Prime AP201 White Mesh | Lenowo L14 G2 chassis
Audio Device(s) Steelseries Arctis Pro Wireless
Power Supply Be Quiet! Pure Power 12 M 750W Goldie | 65W
Mouse Logitech G305 Lightspeedy Wireless | Lenowo TouchPad & Logitech G305
Keyboard Ducky One 3 Daybreak Fullsize | L14 G2 UK Lumi
Software Win11 IoT Enterprise 24H2 UK | Win11 IoT Enterprise LTSC 24H2 UK / Arch (Fan)
Benchmark Scores 3DMARK: https://www.3dmark.com/3dm/89434432? GPU-Z: https://www.techpowerup.com/gpuz/details/v3zbr
The hype has made Zen 5, especially the 9700X, disappointing. We were all told how efficient Zen 5 would be compared to Zen 4 on TSMC's 4 nm node, but at the last minute rumours started to pop up that AMD would re-spec the 9700X from 65 W to 95 W so it could beat the 7700X and compare to the 7800X3D. This shouldn't have delayed the 9700X, because it's just a BIOS update, but they wanted the packaging to match.

I am still using a 7700 locked at 65 W and I do not feel the need to upgrade, even after looking at the last 7800X3D build I did for my cousin (an awesome computer, same specs as mine, just a different motherboard, RAM and SSD). I am still wondering what uplift the 9800X3D will bring, and maybe also an X3D model of the Ryzen 5 for AM5.
 
Joined
Jul 30, 2019
Messages
3,277 (1.68/day)
System Name Still not a thread ripper but pretty good.
Processor Ryzen 9 7950x, Thermal Grizzly AM5 Offset Mounting Kit, Thermal Grizzly Extreme Paste
Motherboard ASRock B650 LiveMixer (BIOS/UEFI version P3.08, AGESA 1.2.0.2)
Cooling EK-Quantum Velocity, EK-Quantum Reflection PC-O11, D5 PWM, EK-CoolStream PE 360, XSPC TX360
Memory Micron DDR5-5600 ECC Unbuffered Memory (2 sticks, 64GB, MTC20C2085S1EC56BD1) + JONSBO NF-1
Video Card(s) XFX Radeon RX 5700 & EK-Quantum Vector Radeon RX 5700 +XT & Backplate
Storage Samsung 4TB 980 PRO, 2 x Optane 905p 1.5TB (striped), AMD Radeon RAMDisk
Display(s) 2 x 4K LG 27UL600-W (and HUANUO Dual Monitor Mount)
Case Lian Li PC-O11 Dynamic Black (original model)
Audio Device(s) Corsair Commander Pro for Fans, RGB, & Temp Sensors (x4)
Power Supply Corsair RM750x
Mouse Logitech M575
Keyboard Corsair Strafe RGB MK.2
Software Windows 10 Professional (64bit)
Benchmark Scores RIP Ryzen 9 5950x, ASRock X570 Taichi (v1.06), 128GB Micron DDR4-3200 ECC UDIMM (18ASF4G72AZ-3G2F1)
Depends on what you're running. Since L3 is just a victim cache, there are some applications that don't really care much about it, and you can see that in the applications where the X3D chips perform equal to or even worse than their non-X3D counterparts. Cinebench seems to be one of those tasks.

Even if you spin up 10 VMs using 10 cores each on your machine (so way more cores than you actually have), your bottleneck is going to be the rest of the CPU, since Cinebench doesn't care much about the extra cache.

I don't think Cinebench has issues with cross-CCD latency, so it's going to be faster since it has more actual resources to make use of, hence why you got better results.
But I guess those tests of yours didn't mean much, since you wanted to see cache stuff but ended up not doing so haha
I think you have been the only one in recent memory to bring up the idea of cache thrashing being a problem with dual (or multi) CCD chips, and it piqued my interest. When I saw the review of the 9950X, and the new need for the X3D core-parking treatment on this non-X3D-cache CPU, combined with what look like performance regressions in some things, I'm kind of taken aback. This can't all be related to the increase in core-to-core latency, can it? I've kind of latched onto your idea about cache thrashing for a moment. How common is this problem today? How do you detect it? How do you measure its relevance as an issue? Do the indicated tests with apparent 9950X performance regressions suffer from this issue?

Looking back, my whole effort was a bit flawed and convoluted, as I needed to first step back and ask myself how to tell the difference between a CCD latency issue and a CCD cache-thrashing issue. I have no method of detecting that, so I basically just wasted my time on bad assumptions, but at least it gave me an interesting observation on how to potentially squeeze some performance out of a dual-CCD chip that I did not see before. So here is my rub with your strong statement about L3 just being a victim cache: how do you go about proving your statement? Dual- and multi-CCD chips have been around for a while now, and if this was such a serious problem, wouldn't Threadripper and Epyc have been massive failures by now?

I try to avoid the hype train for any brand's products, but perhaps I've fallen victim to it nonetheless with unreasonable expectations, and it's got me asking more questions before I leap and potentially throw money at something that might actually be just fine. Naturally, when they start changing the architecture, unintended consequences can happen. Code that was optimal for its time becomes suboptimal after new CPU architecture changes. Is that what happened with Zen 5, in combination with increased core latencies that are detrimental for gaming and for users now having to manage core-management issues like with the 7950X3D? Coming around and tying into the OP's topic, I don't think Zen 5 is bad, but the question becomes where it is good, and whether it still fits the use case I had hoped it would by this time.

If you're attempting to go from a 7950X to a 9950X, the prospects don't look as rosy as the marketing hype train had tooted for a few months now. Put aside for the moment that upgrading from a 7950X to a 9950X, while possible, is somewhat unrealistic except for those who have casual money to throw at it. I've been through that before with the 3950X to the 5950X, although one could argue that the uplift from Zen 2 to Zen 3 justified such an upgrade. Being more conservative in my spending since then, I skipped the 7950X to be a bit wiser with my dollar and wait for a much larger delta in price/performance upgrade value. In the area that has affected my work daily for several years now, the recent results on virtualization indicate practically no improvement from the 5950X to the 9950X, which on face value is highly disappointing, and I find myself questioning it. Does this mean, for example, that the significant gains in Java will go unrealized when contained in a virtual environment where benchmarks indicated no improvement in virtualization? I don't have a clear answer or idea on this yet, but I have some time to try and find some answers while I wait for the first or second round of 9950X discounts.

 
Joined
Aug 10, 2024
Messages
22 (0.20/day)
You have perfectly valid points. Personally, I think every reviewer that has failed to see the very points you just pointed out really needs to take a step back, STFU and see the forest for the trees. Better performance + lower power usage + lower heat = big win in my book. It's the exact same thing as the RTX 4060/4060ti models. They had better performance, MUCH lower power draw and ran cooler. Seriously, do people have their heads up their butts or what?
I am with you on this conclusion also. Man, this CPU is crazy good: playing games and stressing all cores, the CPU sits at 65°C. That is a major achievement. Who else plays games with very good performance and gets no more than 65°C from the CPU??? The 9700X has very good performance... let's also say that a 5% difference from the 14700K in gaming is NOTHING, as nowadays games are GPU-bound. He failed to give a shoutout and congratulations to AMD for this amazing achievement. I agree that AMD's PR marketing screwed up, though...

There was too much overhype from leakers before the launch, and maybe this created a lot of expectation. But I believe the product will mature and improve with OS patches, microcode updates, chipset drivers and so forth...
 
Joined
Aug 10, 2024
Messages
22 (0.20/day)
Have you checked the phoronix.com reviews? This thing is amazing under Linux (server, workstation, HPC workloads)...
I think you have been the only one in recent memory to bring up the idea of cache thrashing being a problem with dual (or multi) CCD chips, and it piqued my interest. When I saw the review of the 9950X, and the new need for the X3D core-parking treatment on this non-X3D-cache CPU, combined with what look like performance regressions in some things, I'm kind of taken aback. This can't all be related to the increase in core-to-core latency, can it? I've kind of latched onto your idea about cache thrashing for a moment. How common is this problem today? How do you detect it? How do you measure its relevance as an issue? Do the indicated tests with apparent 9950X performance regressions suffer from this issue?

Looking back, my whole effort was a bit flawed and convoluted, as I needed to first step back and ask myself how to tell the difference between a CCD latency issue and a CCD cache-thrashing issue. I have no method of detecting that, so I basically just wasted my time on bad assumptions, but at least it gave me an interesting observation on how to potentially squeeze some performance out of a dual-CCD chip that I did not see before. So here is my rub with your strong statement about L3 just being a victim cache: how do you go about proving your statement? Dual- and multi-CCD chips have been around for a while now, and if this was such a serious problem, wouldn't Threadripper and Epyc have been massive failures by now?

I try to avoid the hype train for any brand's products, but perhaps I've fallen victim to it nonetheless with unreasonable expectations, and it's got me asking more questions before I leap and potentially throw money at something that might actually be just fine. Naturally, when they start changing the architecture, unintended consequences can happen. Code that was optimal for its time becomes suboptimal after new CPU architecture changes. Is that what happened with Zen 5, in combination with increased core latencies that are detrimental for gaming and for users now having to manage core-management issues like with the 7950X3D? Coming around and tying into the OP's topic, I don't think Zen 5 is bad, but the question becomes where it is good, and whether it still fits the use case I had hoped it would by this time.

If you're attempting to go from a 7950X to a 9950X, the prospects don't look as rosy as the marketing hype train had tooted for a few months now. Put aside for the moment that upgrading from a 7950X to a 9950X, while possible, is somewhat unrealistic except for those who have casual money to throw at it. I've been through that before with the 3950X to the 5950X, although one could argue that the uplift from Zen 2 to Zen 3 justified such an upgrade. Being more conservative in my spending since then, I skipped the 7950X to be a bit wiser with my dollar and wait for a much larger delta in price/performance upgrade value. In the area that has affected my work daily for several years now, the recent results on virtualization indicate practically no improvement from the 5950X to the 9950X, which on face value is highly disappointing, and I find myself questioning it. Does this mean, for example, that the significant gains in Java will go unrealized when contained in a virtual environment where benchmarks indicated no improvement in virtualization? I don't have a clear answer or idea on this yet, but I have some time to try and find some answers while I wait for the first or second round of 9950X discounts.

 
Joined
Aug 10, 2024
Messages
22 (0.20/day)
The hype has made Zen 5, especially the 9700X, disappointing. We were all told how efficient Zen 5 would be compared to Zen 4 on TSMC's 4 nm node, but at the last minute rumours started to pop up that AMD would re-spec the 9700X from 65 W to 95 W so it could beat the 7700X and compare to the 7800X3D. This shouldn't have delayed the 9700X, because it's just a BIOS update, but they wanted the packaging to match.

I am still using a 7700 locked at 65 W and I do not feel the need to upgrade, even after looking at the last 7800X3D build I did for my cousin (an awesome computer, same specs as mine, just a different motherboard, RAM and SSD). I am still wondering what uplift the 9800X3D will bring, and maybe also an X3D model of the Ryzen 5 for AM5.
I agree the hype was not good...
 
Joined
Jul 11, 2022
Messages
356 (0.41/day)
Other posts have already refuted that point. The 7700 pulls the same power as the 9700X and is less than 10% slower on average in common workloads. Also, the clocks mentioned don't seem to reflect the real world, at least according to the TPU review, which shows the 9700X's all-core clocks within ±100 MHz of the 7700's, depending on the workload.
The 4060 and 4060 Ti are also a different thing. The power consumption difference is a bit exaggerated if you can tweak, as Ampere could easily have used less power without a significant performance impact, though at stock it's a fair point. But the biggest issue I remember people having with those was the combination of a small performance improvement at the same price with a clear drop in hardware: the 4060 Ti has the hardware and drawbacks you would expect from an xx50 Ti card or lower, at double the price.

Saying that most of those have a 15-30% uplift without AVX512 is quite the exaggeration. It seems like most are between 8-18%, with some going higher. There's a reason the mean in the Phoronix review was 18% even with all the tests that have huge improvements.

Yes, if it's truly within ±100 MHz, it is an embarrassing uplift. If Zen 5 truly clocks 200 MHz lower for 5% better performance, then maybe it's not as bad, though I do not think so. But AMD CPUs are so clock-dynamic that it is hard to tell.

It would be interesting to do a clock-normalized IPC test at a fixed static clock across games and other non-AVX512 apps to see the full story.

Though if Zen 5 struggles to clock as high, then even if it could be better per clock, it will not matter.

But I do not think that is the case. The clock speeds are within 100 MHz of each other, and Zen 5's clock-normalized IPC uplift is just a flop at 5% for gaming and most other consumer workloads, and 8-18% for most professional workloads that are not AVX512.
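For what it's worth, the clock-normalized comparison is simple arithmetic once you trust the inputs. A quick sketch with placeholder values (not measured data); it's only a proxy for IPC, since boost behaviour, memory speed and power limits all move at the same time:

```python
# Clock-normalized ("per-GHz") uplift sketch. Plug in real review scores and
# measured all-core clocks; the numbers below are placeholders, not data.
score_zen4, clock_zen4 = 100.0, 4.85   # hypothetical score, GHz under load
score_zen5, clock_zen5 = 105.0, 4.75   # hypothetical: +5% score at a slightly lower clock

per_clock_zen4 = score_zen4 / clock_zen4
per_clock_zen5 = score_zen5 / clock_zen5

print(f"Raw uplift:       {(score_zen5 / score_zen4 - 1) * 100:.1f}%")
print(f"Per-clock uplift: {(per_clock_zen5 / per_clock_zen4 - 1) * 100:.1f}%")  # ~7.2% with these inputs
```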
 
Joined
Aug 8, 2024
Messages
22 (0.20/day)
Yes, if it's truly within ±100 MHz, it is an embarrassing uplift. If Zen 5 truly clocks 200 MHz lower for 5% better performance, then maybe it's not as bad, though I do not think so. But AMD CPUs are so clock-dynamic that it is hard to tell.

It would be interesting to do a clock-normalized IPC test at a fixed static clock across games and other non-AVX512 apps to see the full story.

Though if Zen 5 struggles to clock as high, then even if it could be better per clock, it will not matter.

But I do not think that is the case. The clock speeds are within 100 MHz of each other, and Zen 5's clock-normalized IPC uplift is just a flop at 5% for gaming and most other consumer workloads, and 8-18% for most professional workloads that are not AVX512.
I don't think the clocks really matter as long as the new CPU is an improvement on previous models (better performance, efficiency, pricing, etc.).

AMD should have called the 9700X just the 9700 (65 W) and priced it below $300 so it would get compared to the 7700 instead.

Hopefully the 9800X3D will have a higher power limit such as 105W or 120W so the performance doesn't get handicapped.
 
Joined
Nov 26, 2021
Messages
1,648 (1.50/day)
Location
Mississauga, Canada
Processor Ryzen 7 5700X
Motherboard ASUS TUF Gaming X570-PRO (WiFi 6)
Cooling Noctua NH-C14S (two fans)
Memory 2x16GB DDR4 3200
Video Card(s) Reference Vega 64
Storage Intel 665p 1TB, WD Black SN850X 2TB, Crucial MX300 1TB SATA, Samsung 830 256 GB SATA
Display(s) Nixeus NX-EDG27, and Samsung S23A700
Case Fractal Design R5
Power Supply Seasonic PRIME TITANIUM 850W
Mouse Logitech
VR HMD Oculus Rift
Software Windows 11 Pro, and Ubuntu 20.04
Yes, if it's truly within ±100 MHz, it is an embarrassing uplift. If Zen 5 truly clocks 200 MHz lower for 5% better performance, then maybe it's not as bad, though I do not think so. But AMD CPUs are so clock-dynamic that it is hard to tell.

It would be interesting to do a clock-normalized IPC test at a fixed static clock across games and other non-AVX512 apps to see the full story.

Though if Zen 5 struggles to clock as high, then even if it could be better per clock, it will not matter.

But I do not think that is the case. The clock speeds are within 100 MHz of each other, and Zen 5's clock-normalized IPC uplift is just a flop at 5% for gaming and most other consumer workloads, and 8-18% for most professional workloads that are not AVX512.
In multithreaded workloads that stress the cores, the 9700X clocks significantly lower than the 7700. The quote below is translated from German:

Depending on the application, the clock rates of the Ryzen 7 9700X are only 4.3 to 4.5 GHz per core at full CPU load. The comparison with an AMD Ryzen 7 7700X only helps to a limited extent: that part is allowed up to 142 watts and thus reaches up to 5.3 GHz. The Ryzen 7 7700 with an 88-watt PPT sits at around 4.8 to 4.9 GHz – still significantly higher than the Ryzen 7 9700X.

 
Joined
May 10, 2023
Messages
256 (0.45/day)
Location
Brazil
Processor 5950x
Motherboard B550 ProArt
Cooling Fuma 2
Memory 4x32GB 3200MHz Corsair LPX
Video Card(s) 2x RTX 3090
Display(s) LG 42" C2 4k OLED
Power Supply XPG Core Reactor 850W
Software I use Arch btw
I think you have been the only one in recent memory to bring up the idea of cache thrashing being a problem with dual (or multi) CCD chips, and it piqued my interest.
Just to make it clear: my initial point was that the dense Epycs are mostly meant for hyperscalers, which use those CPUs as VM hosts and more often than not overprovision them (which I believe we can agree ends up thrashing the cache due to all the context switches).
Another user argued that 3D-Cache would be beneficial to such Epycs, and I argued against it because for a VM-host scenario it has no benefit while costing more.

I don't think I mentioned anything directly relating cache thrashing to dual-CCD chips.
This can't all be related to the increase in core-to-core latency, can it?
I do believe the core parking stuff is solely related to the increased latency, but that's only relevant for games, I guess.
I've kind of latched onto your idea about cache thrashing for a moment. How common is this problem today? How do you detect it? How do you measure its relevance as an issue?
Depends on the application, but it's still relevant. You can measure it with profiling tools; as an example:
(profiler screenshot attached)
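For anyone wanting to poke at this themselves: on Linux, `perf stat` with the last-level-cache events (exact event names vary by CPU) is the usual quick check, and a toy access-pattern script makes the difference obvious. A minimal sketch, not tied to any particular review:

```python
import sys
import time
import numpy as np

# Two passes over the same ~512 MiB buffer: sequential order is cache-friendly,
# a shuffled index order thrashes the caches (and TLB). Run each variant under a
# profiler, e.g. `perf stat -e LLC-loads,LLC-load-misses python this_script.py random`,
# and compare the miss rates and wall time.
N = 64 * 1024 * 1024                 # 64M float64 elements, roughly 512 MiB
data = np.random.rand(N)
idx = np.arange(N)
mode = sys.argv[1] if len(sys.argv) > 1 else "sequential"
if mode == "random":
    np.random.shuffle(idx)           # random gather order

t0 = time.perf_counter()
total = data[idx].sum()              # gather, then reduce
print(f"{mode}: {time.perf_counter() - t0:.2f} s (sum={total:.1f})")
```

The random pass typically shows a far higher LLC miss rate and several times the wall time, which is the signature you'd look for in the profiler output.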


Chips and cheese usually has a nice comparison w.r.t. that kind of stuff, you can see it for the 9950x in their post:

Do the indicated tests with apparent 9950X performance regressions suffer from this issue?
Not really, it seems to be mostly a latency issue.
How do you go about proving your statement?
About it being a victim cache? It has been that way for all modern CPUs for quite a while lol
But take it from AMD themselves:
(screenshot attached)


Also some extra sources in case you want to get more into it:

Dual- and multi-CCD chips have been around for a while now, and if this was such a serious problem, wouldn't Threadripper and Epyc have been massive failures by now?
I think you're conflating ideas at this point. Having L3 as a victim cache is not a "problem" haha
Does this mean, for example, that the significant gains in Java will go unrealized when contained in a virtual environment where benchmarks indicated no improvement in virtualization?
If you're running Java stuff inside a VM, you'll still see the improvements related to Zen 5. The VM won't nullify that.
It's not that often that you see virtualization improvements, because there's not really much to improve upon at all (mostly stuff related to context switches and whatnot), but this has no relation to the kind of stuff running inside said VM.

FWIW, I'm really eyeing a 9950X to upgrade from my 5950X; the gains with Python, NumPy, databases and compile stuff are amazing, and those are things I work with on a daily basis. The 7950X was kinda okayish in those regards, but the 9950X makes it a really worthwhile jump for me.
 
Joined
Jul 30, 2019
Messages
3,277 (1.68/day)
System Name Still not a thread ripper but pretty good.
Processor Ryzen 9 7950x, Thermal Grizzly AM5 Offset Mounting Kit, Thermal Grizzly Extreme Paste
Motherboard ASRock B650 LiveMixer (BIOS/UEFI version P3.08, AGESA 1.2.0.2)
Cooling EK-Quantum Velocity, EK-Quantum Reflection PC-O11, D5 PWM, EK-CoolStream PE 360, XSPC TX360
Memory Micron DDR5-5600 ECC Unbuffered Memory (2 sticks, 64GB, MTC20C2085S1EC56BD1) + JONSBO NF-1
Video Card(s) XFX Radeon RX 5700 & EK-Quantum Vector Radeon RX 5700 +XT & Backplate
Storage Samsung 4TB 980 PRO, 2 x Optane 905p 1.5TB (striped), AMD Radeon RAMDisk
Display(s) 2 x 4K LG 27UL600-W (and HUANUO Dual Monitor Mount)
Case Lian Li PC-O11 Dynamic Black (original model)
Audio Device(s) Corsair Commander Pro for Fans, RGB, & Temp Sensors (x4)
Power Supply Corsair RM750x
Mouse Logitech M575
Keyboard Corsair Strafe RGB MK.2
Software Windows 10 Professional (64bit)
Benchmark Scores RIP Ryzen 9 5950x, ASRock X570 Taichi (v1.06), 128GB Micron DDR4-3200 ECC UDIMM (18ASF4G72AZ-3G2F1)
Just to make it clear: my initial point was that the dense Epycs are mostly meant for hyperscalers, which use those CPUs as VM hosts and more often than not overprovision them (which I believe we can agree ends up thrashing the cache due to all the context switches).
Another user argued that 3D-Cache would be beneficial to such Epycs, and I argued against it because for a VM-host scenario it has no benefit while costing more.

I don't think I mentioned anything directly relating cache thrashing to dual-CCD chips.

I do believe the core parking stuff is solely related to the increased latency, but that's only relevant for games, I guess.

Depends on the application, but it's still relevant. You can measure it with profiling tools; as an example:
(profiler screenshot attached)

Chips and cheese usually has a nice comparison w.r.t. that kind of stuff, you can see it for the 9950x in their post:

Not really, it seems to be mostly a latency issue.

About it being a victim cache? It has been that way for all modern CPUs for quite a while lol
But take it from AMD themselves:
(screenshot attached)

Also some extra sources in case you want to get more into it:

I think you're conflating ideas at this point. Having L3 as a victim cache is not a "problem" haha

If you're running Java stuff inside a VM, you'll still see the improvements related to Zen 5. The VM won't nullify that.
It's not that often that you see virtualization improvements, because there's not really much to improve upon at all (mostly stuff related to context switches and whatnot), but this has no relation to the kind of stuff running inside said VM.

FWIW, I'm really eyeing a 9950X to upgrade from my 5950X; the gains with Python, NumPy, databases and compile stuff are amazing, and those are things I work with on a daily basis. The 7950X was kinda okayish in those regards, but the 9950X makes it a really worthwhile jump for me.
Thanks for all your replies!
 
Joined
Jun 1, 2011
Messages
4,601 (0.93/day)
Location
in a van down by the river
Processor faster at instructions than yours
Motherboard more nurturing than yours
Cooling frostier than yours
Memory superior scheduling & haphazardly entry than yours
Video Card(s) better rasterization than yours
Storage more ample than yours
Display(s) increased pixels than yours
Case fancier than yours
Audio Device(s) further audible than yours
Power Supply additional amps x volts than yours
Mouse without as much gnawing as yours
Keyboard less clicky than yours
VR HMD not as odd looking as yours
Software extra mushier than yours
Benchmark Scores up yours
He didn't make 4-5 videos saying BULLSHIT about Intel.
Actually, Steve has been putting Intel through the wringer on TechSpot and was a major AMD proponent for the 3xxx, 5xxx and 7xxx series. In fact, many Intel fans consider him an AMD fanboy there.

Not sure why so many people are throwing hissy fits over Zen 5. You don't need to buy a new CPU every year. AMD doesn't owe you a 15%+ gaming increase series after series. You can buy Intel, you can buy AMD, or you can wait and sit out a series or two.
 
Joined
Oct 30, 2020
Messages
250 (0.17/day)
FWIW, I'm really eyeing a 9950X to upgrade from my 5950X; the gains with Python, NumPy, databases and compile stuff are amazing, and those are things I work with on a daily basis. The 7950X was kinda okayish in those regards, but the 9950X makes it a really worthwhile jump for me.

Yep, it's usually the case that many of those benchmarks aren't part of most reviews. I was going through the reviews to figure out what my % gains are for the apps I actually run, and most of the DB, code compilation and scientific stuff falls in the 18-35% range, with NumPy on the higher end there.

These might not be as popular as CB and the like, but throwing around statements like 'Zen 5 is bad for everyone, it's only a 5% gain in everything' is grossly incorrect.

There's also a bunch of stuff happening with Windows (and BIOS) as well, and the newer Windows update should increase the gap from Zen 4 to 5 in games. Wendell did an early investigation with some interesting results here
 

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.65/day)
Location
Ex-usa | slava the trolls
Actually, Steve... In fact, many Intel fans consider him an AMD fanboy there.

He probably isn't, but some here definitely play the devil's advocate role.

You don't need to buy a new CPU every year.

The thing is that AMD no longer releases new CPUs every year. In fact, their best CPU is still the one released 4 years ago, the Ryzen 5000 series.
 
Joined
Feb 26, 2024
Messages
91 (0.33/day)
This can only be true if low yields force AMD to do so; otherwise they have no reason to. Of course they'll do it to some degree, but I'm not sure to what extent.

Since we don't know shit about yields, there's no point in speculating.
This is about parametrics, not defect-related yields.
 
Joined
Dec 29, 2010
Messages
3,809 (0.75/day)
Processor AMD 5900x
Motherboard Asus x570 Strix-E
Cooling Hardware Labs
Memory G.Skill 4000c17 2x16gb
Video Card(s) RTX 3090
Storage Sabrent
Display(s) Samsung G9
Case Phanteks 719
Audio Device(s) Fiio K5 Pro
Power Supply EVGA 1000 P2
Mouse Logitech G600
Keyboard Corsair K95
This has happened in the past, MSFT screwing AMD. Their OS is too slow, or it's BS, etc. etc., it needs more scheduler work lol. I'm not in a hurry, so waiting for the chips to land where they're supposed to with the incoming patches is OK with me.
 