> Normalized for either consumption or performance. Great for them that they ran as configured by Intel but that's not my argument at all

I mean, I should just start linking you to previous responses at this point, as everything you bring up was asked and answered four pages ago. "Normalizing" for either of those mainly serves to hide the uneven starting point introduced by said normalization, as each "normalized" operating point represents a different change from the stock behaviour of each chip. In the case of power limits being lowered, this inherently privileges the chip that's allowed the biggest reduction from stock, due to how DVFS curves work.
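To make that DVFS point concrete, here's a toy model - a rough cube-law approximation (power scaling with f·V², voltage scaling with frequency) with made-up stock limits, not measurements of either chip - showing how even identical silicon looks far more "efficient" when it's the one taking the bigger cut from stock:

```python
# Toy DVFS model: dynamic power scales roughly with f * V^2, and V rises
# roughly linearly with f in the upper part of the curve, so power grows
# roughly with the cube of frequency. Illustrative numbers only - this is
# not measured data for any real CPU.

def freq_at_power(power_w, base_freq=3.0, base_power=50.0):
    """Clock achievable within a power budget on the toy cube-law curve."""
    return base_freq * (power_w / base_power) ** (1 / 3)

# Hypothetical stock limits: chip A ships at 230 W, chip B at 105 W.
# Capping both to the same 65 W is a much bigger relative cut for chip A.
for name, stock_w in [("chip A", 230.0), ("chip B", 105.0)]:
    stock_f = freq_at_power(stock_w)
    capped_f = freq_at_power(65.0)
    print(f"{name}: stock {stock_f:.2f} GHz @ {stock_w:.0f} W -> "
          f"capped {capped_f:.2f} GHz @ 65 W "
          f"({capped_f / stock_f:.0%} of stock clocks for "
          f"{65.0 / stock_w:.0%} of stock power)")
```

On this toy curve the chip cut from 230 W keeps about two thirds of its stock clocks for barely over a quarter of its stock power, while the chip cut from 105 W keeps roughly 85% of its clocks for over 60% of its power - despite being literally the same silicon. That's the uneven starting point I'm talking about.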
> You think a comparison normalized for performance is deeply flawed?

Yes, I really do. Outside of purely academic endeavors, who ever tests PC components normalized for performance? Doing so isn't even possible, given how different architectures perform differently in different tasks. If you adjust a 12900K so it perfectly matches an 11900K in Cinebench, it will still be faster in some workloads, and possibly slower in others. Normalizing for performance across components this complex is literally impossible. Unless, that is, you normalize for performance in every single workload and then just record the power data? That sounds incredibly convoluted and time-consuming, though.
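If someone actually wanted to do that, the loop would look roughly like the sketch below; set_power_limit() and run_benchmark() are hypothetical placeholders for real, platform-specific tooling, not actual APIs:

```python
# Sketch of per-workload "normalize for performance, then record power".
# set_power_limit() and run_benchmark() are hypothetical stand-ins for
# whatever tooling a given platform actually exposes.

def set_power_limit(limit_w):
    """Placeholder for real power-limit tooling (BIOS setting, vendor utility)."""

def run_benchmark(workload):
    """Placeholder for an actual benchmark run returning a score."""
    return 0.0

def power_to_match(workload, reference_score, limits_to_try):
    """Lowest tested power limit at which this chip still reaches the
    reference chip's score - valid for this one workload only."""
    for limit_w in sorted(limits_to_try):
        set_power_limit(limit_w)
        if run_benchmark(workload) >= reference_score:
            return limit_w
    return None  # cannot match the reference even at the highest tested limit

# The catch: this whole loop has to be repeated for every single workload,
# because a limit that matches performance in Cinebench will not match it
# in a game or a compile job.
```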
> I mean come on, you cannot possible believe that. I don't believe you believe that.

Well, too bad. I have explained the issues with this to you at length, multiple times now. If you're unable to grasp that these problems are significant, that's your problem, not mine.
> I said it before, normalized for consumption, 8 gc cores are around 20-25% more efficient, normalized for performance the difference is over 100%. So yeah, the 5800x at 65 can get up to 13-14k.

And, once again: at what points? "Normalized for consumption" - at what wattage? The only such comparison that would make sense would be a range, as any single test is nothing more than an unrepresentative snapshot. And any single workload, even across a range of wattages, is still only representative of itself. For such a comparison to have any hope whatsoever of being representative, you'd need to test a range of wattages across a range of workloads, and then graph out those differences. Anything less than that is worthless. Comparing the 12900K at 65W vs. the 5800X at 65W in CB23 tells us only that exact thing: how each performs at that specific power level in that specific workload. You cannot reliably extrapolate anything from that data - it's just not sufficient for that.
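For the sake of argument, a sweep like that would be structured roughly as below; the workload list and curve shapes are synthetic placeholders so the sketch runs end to end, not data from either chip - real testing would replace synthetic_score() with actual benchmark runs:

```python
# Sketch of the kind of sweep needed for a representative efficiency
# comparison: a grid of power limits x workloads, recorded as score-per-watt.
# The scoring model is synthetic (made-up curve shapes, not measurements).

POWER_LIMITS_W = [45, 65, 88, 105, 125, 142, 170, 200, 241]

# Hypothetical per-workload curve shapes: (scale, power exponent).
WORKLOADS = {
    "cinebench_mt": (900, 0.75),   # keeps scaling with power
    "code_compile": (1400, 0.55),  # flattens out earlier
    "game_avg_fps": (40, 0.25),    # barely scales with power
}

def synthetic_score(workload, limit_w):
    scale, exponent = WORKLOADS[workload]
    return scale * limit_w ** exponent

def efficiency_grid():
    """Score-per-watt for every (workload, power limit) combination."""
    return {
        (workload, limit_w): synthetic_score(workload, limit_w) / limit_w
        for workload in WORKLOADS
        for limit_w in POWER_LIMITS_W
    }

for (workload, limit_w), perf_per_watt in sorted(efficiency_grid().items()):
    print(f"{workload:>13} @ {limit_w:>3} W: {perf_per_watt:8.1f} points/W")
```

Only with grids like that for both chips can you say anything general about which one is more efficient - and even then, only within the tested range of workloads and wattages.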
As for your "normalizing for performance": once again, you're using neutral, quasi-scientific wording to hide the fact that you really want to use a benchmark that's relatively friendly to ADL as the be-all, end-all representation of which of these CPUs is better, rather than actually wanting to gain knowledge about this.
> Again, performance normalized the difference will still be huge. You can put the 5800x at 50w for all I care, 8 gc cores will probably match the performance at 30w. I mean, 2 days left, im back and I can test it

I'm starting to sound like a broken record here, but: ADL has an advantage at lower power limits in less instruction-dense workloads because of its lower uncore power draw.
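As a back-of-the-envelope illustration (the uncore wattages below are assumed round numbers, not measurements of either chip), a fixed uncore draw eats a much bigger share of a small package budget:

```python
# Back-of-the-envelope: how much of a package power limit is left for the
# cores once a fixed uncore/fabric draw is subtracted. The uncore figures
# are assumptions for illustration, not measured values for either chip.

UNCORE_W = {"low-uncore chip": 8.0, "high-uncore chip": 18.0}

for package_limit_w in (35, 65, 125, 230):
    shares = {
        name: (package_limit_w - uncore_w) / package_limit_w
        for name, uncore_w in UNCORE_W.items()
    }
    print(f"{package_limit_w:>3} W package limit: "
          + ", ".join(f"{name} keeps {share:.0%} for cores"
                      for name, share in shares.items()))
```

With those assumed figures, a 10 W uncore difference is nearly a third of a 35 W budget, but barely 4% of a 230 W one - which is why the gap shows up at low power limits and mostly disappears at stock.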
> Outside of that one application the zen 3 is even more comedically bad. Ive tested gaming performance (granted, only one game), 8GC cores at 25w (yes, power limited to 25) match a 5800x in performance hitting 90+ watts in Farcry 6. They both scored around 110 fps if I remember correctly at 720p ultra + RT

And once again, you're pulling numbers out of nowhere as if this is even remotely believable. Also, 720p? Wtf? And how oddly, unexpectedly convenient that the one game you tested is once again a game that's uncharacteristically performant on ADL in general. Hmmmmmm. Almost as if there might be a pattern here?
> Ive no idea what you are talking about. Im comparing core and power normalized, so it doesn't matter which Zen SKU the comparisons are done with. The 5950x with one CCD will perform pretty similarly to the 5800x at the same wattages, no? So your criticism is completely unwarranted.

... no. Did you even look at the AT testing? The 5950X, running with 8 cores active on the same CCX (they control for that in their testing), in the same workload and at the same power limit as the 5800X, clocks higher while consuming less power per core.
It would be really, really helpful if you at least tried to understand what is being said to you. The boost behaviours, binning, and DVFS characteristics of these chips are not the same. This is what I was saying about your "arguments" about the binning of the 12400: you're infinitely generous in giving Intel the benefit of the doubt, but you consistently pick worst-case scenarios for AMD and show zero such generosity in that direction.
> And yes, ive tested a 12900k with only 6 GC cores active at 65w, it scored way more than the 12400 does, so its pretty apparent the 12400 is a horrible bin. I think i got 14k score, but again, dont remember off the top of my head

And yet more unsourced numbers pulled out of thin air. This is starting to get tiring, you know.
> But im not using igorslab for efficiency comparisons.

Uhhhhh... what? This is what you said, in literally your previous post:

> You can check igorslab review which tests only CPU power, the 12400 is way more efficient than the 5600x.

Could you at least stop flat-out lying? That would be nice, thanks.
> Im using them to show you that a 12900k at 125w matches / outperforms a 5900x even at heavy MT workloads. Which is the exact opposite of what TPU said, where a 12900k at 125w is matched by the 12600k and loses to a 65w 12700. If you still cant admit that TPU results are absolutely completely hillariously flawed i don't know what else to tell you man....

I don't know that TPU's testing is flawed - but I have explicitly said that this might indeed be the case. Given the number of possible reasons for it, and my complete lack of access to any data surrounding their testing beyond what's published, I really can't know. It's absolutely possible that there's something wrong there.
However, you seem to fail to recognize that the Igor's Lab testing seems to be similarly flawed, only in the other direction. As I explained above, it's entirely possible to harm performance on AMD CPUs by giving them too much power, which drives up thermals, drives down clocks, increases leakage, and results in lower overall performance. Given that Igor's testing was done with an auto-OC applied and the recorded power levels are astronomical, this is very likely the case here. So, if I agree not to lean on TPU's results, will you agree not to lean on Igor's Lab's results? Because for this discussion, both seem to be equally invalid. (And no, you can't take Igor's Lab's Intel results and compare them to Zen 3 results from elsewhere, as that introduces massive error potential into the data - there's no way to control for variables across the tests.)
Oh, and a bit of a side note: you keep switching back and forth between talking about "running the 12900K at X watts" and "8 GC cores at X watts". Are your tests all willy-nilly like this, or are you consistently running with or without the E-cores enabled? That represents a pretty significant difference, after all.