It's not new; the difference has been there since Ryzen 3000, and the L3 results are pretty much mirrored (though usually slightly faster). I'm 90% sure the "cache differences" in AIDA are horseshit. AIDA is pretty meaningless on both the cache and the memory front (especially DRAM latency, which is wildly unpredictable compared to membench's latency counter), and it's not hard to dupe AIDA with memory settings that are flat-out unstable or that tank performance in other benchmarks (LinpackXtreme, DRAM Calc).
As for L3 in AIDA, the infamous example was the L3 cache read "bug" with Renoir APUs. And no, before you ask, it wasn't an issue of boost or C-states; all-core loading and C-states off made zero difference on the 4650G. AMD "fixed" it with an AGESA patch that did literally nothing to performance in any other test in existence. My suspicion is that AMD simply tweaked Precision Boost to stop cores from parking during AIDA, to make users feel better about themselves. Then Cezanne came around and now we're back to a crappy 300-400 GB/s L3 in AIDA... see the pattern here? If AIDA were authoritative, we'd be claiming that Zen 3 has demonstrably slower L3 than Zen 2 Renoir, of all things. AIDA is the single greatest pat-oneself-on-the-back machine; it's popular because it's easy, but that doesn't mean it indicates anything at all. When different people's stock 5900Xs show hundreds of GB/s of difference in their L3 AIDA readings, the number isn't telling you much.
Again, don't get me wrong, I'm not trying to discredit you or cast doubt on your choice of settings for the benchmark. But if it's supposed to be a CPU-heavy game, it should perform the part, and nothing I can see so far shows that. Please provide more HWInfo data if you can, though; more is better.
View attachment 213606
Okay, make up your mind?
First you say that my settings are bad for 3800 14-15-15 (feel free to offer actionable feedback), then you say that my 54.8 ns / 101 s membench result is better than average for a 2-CCD chip and that yours is significantly better than expected for some reason (are you implying board firmware, or PEBCAK?). I never claimed to be running the tightest 3800 CL14 setup in the world, but neither are most of the other results in here. So which one is it?
I'm well aware of polling rate. It doesn't meaningfully change the test behaviour. Upping the polling rate may let a little more of the "high" boost clock show up in effective clock, but your own HWInfo screenshot indicates that usage and load are still nowhere near what's expected from even a mildly CPU-bound game (I have a LOT of those). Look at the disparity between your "clocks" and your effective clocks; that's the classic symptom of mostly-idle cores and has little to do with polling rate. If anything, needing to increase the polling rate to portray higher CPU usage just confirms how low average usage is...
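To put rough numbers on that relationship (these figures are made up for illustration, not pulled from your screenshot, and this isn't claiming to be HWInfo's internal math):

```python
# Toy illustration: the plain "core clock" reading reports frequency while the
# core is awake, whereas effective clock averages over the whole polling
# interval, including the time the core spends parked/sleeping.

def effective_clock(awake_clock_mhz: float, awake_residency: float) -> float:
    """Approximate effective clock as awake frequency weighted by awake time."""
    return awake_clock_mhz * awake_residency

# A core that briefly boosts to 4850 MHz but is only busy 20% of the interval:
print(effective_clock(4850, 0.20))   # ~970 MHz effective
# A genuinely loaded core shows barely any gap between the two readings:
print(effective_clock(4650, 0.95))   # ~4418 MHz effective
```

No amount of polling-rate tweaking closes that gap if the core simply isn't busy.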
Plus, while per-core clocks and power vary a lot and a loose polling rate may miss occasional peaks, polling rate can't fool temps. I've done a shitload of logging in a few other games on the 5900X trying to figure out the 10-15°C temp spikes that Zen 3 chiplets seem to experience sometimes, particularly in MW19, where clocks, per-core power and temps jump around like a roller coaster. Insurgency: Sandstorm is an example of a game that works the CPU moderately but whose load doesn't show up in effective clock, only in per-core power and temps. You can raise or lower the polling rate all you like; if a game is actually CPU-intensive, it makes no difference and the data will naturally show it.
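For what it's worth, this is roughly how I slice those HWInfo CSV logs. It's a minimal sketch assuming HWInfo's sensor-logging CSV output; the column names below are placeholders, since the real headers depend on your sensor layout, so adjust them to whatever your log actually contains:

```python
import pandas as pd

log = pd.read_csv("hwinfo_log.csv", encoding="latin-1")  # HWInfo CSVs aren't UTF-8

# Placeholder column matches -- substitute your own header names.
eff_clk = log.filter(like="Effective Clock").apply(pd.to_numeric, errors="coerce")
power   = log.filter(like="Core Power").apply(pd.to_numeric, errors="coerce")
tdie    = pd.to_numeric(log.filter(like="Tdie").iloc[:, 0], errors="coerce")

print("Average effective clock per core (MHz):")
print(eff_clk.mean().round(0))
print("Average per-core power (W):")
print(power.mean().round(2))
print(f"Tdie: mean {tdie.mean():.1f} °C, 95th percentile {tdie.quantile(0.95):.1f} °C")
```

Averages like these are what expose a "moderately CPU-heavy" game even when effective clock alone looks unimpressive.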
And one thing that polling rate *certainly* won't fool is the fact that the GPU is running full tilt during this benchmark for more than just part of it. It takes a long time sustained at 100% load to reach a 72.5°C edge temp, and 180W is literally the maximum load possible. So from what I can tell, it's quite a bit more GPU-bound than the vague "29%" number seems to imply. Are you insinuating that "bad settings" are solely to blame for most of the test running GPU-bound?
Or are you implying that the GPU is bad (it certainly is no 3080, and I never made any claims about GPU performance)? That in itself would be an admission that the bench isn't nearly as CPU-bound as it should be.
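Same log-parsing idea applied to the GPU side, if anyone wants to check how much of a run the GPU actually spends pegged (thresholds and column names are again placeholders, not anyone's official spec):

```python
import pandas as pd

log = pd.read_csv("hwinfo_log.csv", encoding="latin-1")
gpu_load  = pd.to_numeric(log.filter(like="GPU Core Load").iloc[:, 0], errors="coerce")
gpu_power = pd.to_numeric(log.filter(like="GPU Power").iloc[:, 0], errors="coerce")

# Count a sample as "pegged" if load or board power sits near its ceiling.
pegged = ((gpu_load >= 97) | (gpu_power >= 175)).mean()
print(f"GPU at or near full tilt for {pegged:.0%} of the logged run")
```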
That's been rumored for a long time, but it's never made any sense. Ryzen has always functioned on symmetric CCDs. CCD1 and CCD2 cores are clearly demarcated by differences in per-core power during all-core loads, for example, and nothing there paints a picture of 8+4 or anything that isn't 6+6.
Some games like MW19 sometimes run a heavy "all-core" AVX workload... but whereas on a 3700X that runs as a true 8-core load, on the 5900X it automatically limits itself to the 6 cores of CCD1. And the Windows scheduler seems to pick its favoured background-processing core not based on core quality (mine is literally the worst core), but on the fact that it isn't on the same CCD as the two preferred performance cores, which inevitably land on CCD1.
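If anyone wants to sanity-check this on their own chip, a quick way is to sample per-core load while the game runs. psutil does the sampling; the CCD mapping in the comments is my assumption for a stock 5900X (cores 0-5 on CCD1, 6-11 on CCD2, two logical CPUs per core with SMT on), so adjust it for your part:

```python
import psutil

samples = []
for _ in range(30):                        # ~30 s of sampling while the game runs
    samples.append(psutil.cpu_percent(interval=1, percpu=True))

# Average each logical CPU over all samples, then fold SMT siblings together.
per_logical = [sum(col) / len(samples) for col in zip(*samples)]
per_core = [(per_logical[2 * i] + per_logical[2 * i + 1]) / 2
            for i in range(len(per_logical) // 2)]

for core, load in enumerate(per_core):
    ccd = 1 if core < 6 else 2             # assumed 6+6 split on a 5900X
    print(f"Core {core:2d} (CCD{ccd}): {load:5.1f} %")
```

If the "8+4" rumour were real, the split in a capture like this wouldn't fall cleanly along a 6-core boundary.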