Others have given some good points on other things, but I want to raise an issue with using CPU-Z as the measuring stick for performance.
It's... not a good reflection of real-world performance, and in fact you're going to run into this issue with most synthetics. By boiling performance down to one number, you trade away accuracy, and finding a synthetic whose single number accurately represents that average is easier said than done.
Even the "good" ones like Passmark have this problem. Go look up the single core scores of the 5800X and 5800X3D and you'll see exactly what I mean; it rates the 5800X slightly higher, whereas the 5800X3D will only ever score lower if the cache isn't helping at all, which makes it obvious the cache isn't being factored at all in whatever method their benchmark uses, which means... you guessed it, the synthetic suddenly doesn't accurately represent real world performance. Maybe it represents "desktop/productivity" performance fair enough, but even some of that stuff may (or may not) see increases from cache.
Passmark is very aware of this shortcoming, which is why they came up with a "gaming ranking"... yet this has a very big problem of its own! Look at what CPU is currently topping the chart. Not the 7800X3D. Not the 7950X3D. It's the 7900X3D. Huh? Why's that? Let's look deeper. Wait... the 5600X3D is ranked above the 5800X3D too!

Now it's a bit more apparent what's going on. The Ryzen 5 chips and the lower half of the Ryzen 9 tier use CCDs with 6 cores instead of 8, but all the X3D chips get the same 64 MB of extra cache. What they appear to be doing is averaging "cache per core", which makes the 6-core CCD models score higher. In reality, it doesn't work that way at all! L3 cache is shared, at least on single-CCD CPUs. I think it still is on multi-CCD CPUs, but crossing CCDs adds latency, which negates the effect. This is why, say, a 5800X has 32 MB of cache, a 5900X has 64 MB, and a 5800X3D has 96 MB, yet only the last one sees the uplift. The middle one has the same +32 MB on paper, but it's spread across two CCDs.
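Just to make the arithmetic concrete, here's a tiny sketch of how a naive "cache per core" average would rank two X3D chips. The L3 sizes are the real ones, but the scoring formula is purely my guess at what a per-core metric would look like, not Passmark's actual method:

```python
# Rough illustration of a suspected "cache per core" scoring flaw.
# The formula here is my assumption, not Passmark's published method.

chips = {
    "5800X3D": {"cores": 8, "l3_mb": 96},  # one 8-core CCD, 96 MB shared L3
    "5600X3D": {"cores": 6, "l3_mb": 96},  # one 6-core CCD, same 96 MB shared L3
}

for name, c in chips.items():
    per_core = c["l3_mb"] / c["cores"]
    print(f"{name}: {per_core:.1f} MB 'per core'")

# Output:
# 5800X3D: 12.0 MB 'per core'
# 5600X3D: 16.0 MB 'per core'
#
# A per-core average ranks the 6-core part higher, even though every core
# on either chip can actually reach the full shared 96 MB of L3.
```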
In short, even the "good" synthetics have some serious flaws, and CPU-Z's benchmark is not regarded as a "good" one. Certain CPUs got architectural adjustments that brought real-world performance uplifts, yet CPU-Z's benchmark would show little to no difference.
For reasons like this, I'm not a fan of using synthetics as an accurate average of performance. I understand why it's done; you need to reduce variables, and using the same measuring method does that... but what happens when that measuring method is consistent... yet consistently wrong? This happens.