| System Name | Mean machine |
|---|---|
| Processor | 12900k |
| Motherboard | MSI Unify X |
| Cooling | Noctua U12A |
| Memory | 7600c34 |
| Video Card(s) | 4090 Gamerock OC |
| Storage | 980 Pro 2TB |
| Display(s) | Samsung CRG90 |
| Case | Fractal Torrent |
| Audio Device(s) | Hifiman Arya / A30 - D30 Pro stack |
| Power Supply | be quiet! Dark Power Pro 1200 |
| Mouse | Viper Ultimate |
| Keyboard | BlackWidow 65% |
Do you understand what a best case scenario is and what it's used for? If Zen 3 loses in the best case scenario, then no further testing needs to be done. For example, CBR23 is a best case scenario for Golden Cove, so if they lose in CBR23 they will lose in everything else.

You didn't say it was better, but you did say "You don't need SPEC, there are hundreds of other benchmarks" - in other words, that those benchmarks are a reasonable replacement for SPEC. This is what I have argued against: none of the benchmarks you mentioned are, and no single benchmark can ever be. Did I make a silly analogy about it? Yes, because IMO what you said was silly and deserved a silly response. A single benchmark will never be representative of anything beyond itself - at best it can show a rough estimate of something more general, but with a ton of caveats. As for using a collection of various single benchmarks: sure, that's possible - but I sure do not have the time to research and put together a representative suite of freely available and unbiased benchmark applications that can come even remotely close to emulating what SPEC delivers. Do you?
The point being: I'm leaning on SPEC because it's a trustworthy, somewhat representative (outside of gaming) CPU test suite, and the closest we get to an industry standard. And, crucially, because we have a suite of high quality reviews using it. I don't rely on things like CB because, well, the results are pretty much useless. Which chip is the fastest and/or most efficient in Cinebench shows us ... well, which chip is the fastest and most efficient in Cinebench. Not generally. And the point here was something somewhat generalizable, no? Heck, even GeekBench is superior to CB in that regard - at least it runs a variety of workloads.
... I have explained that, at length? If you didn't grasp that, here's a brief summary: because we have absolutely zero hope of approaching the level of control, normalization and test reliability that good professional reviewers operate at.
And, once again, you take a result from a single benchmark and present it as if it is a general truth. I mean, come on: you even link to the source showing how that is for a single, specific workload - and a relatively low intensity, low threaded one at that. Which I have acknowledged, at quite some length, is a strength of ADL.
Have you been paying attention at all? Whatsoever? I'm not interested in best case scenarios. I'm interested in actually representative results, that can tell us something resembling truth about these CPUs. I mean, the fact that you're framing it this way in the first place says quite a bit about your approach to benchmarks: you're looking to choose sides, rather than looking for knowledge. That's really, really not how you want to approach this.
And, again, unless it wasn't clear: there is no single workload that gives a representative benchmark score for a CPU. None. Even something relatively diverse with many workloads like SPEC (or GeekBench) is an approximation at best. But a single benchmark only demonstrates how the CPU performs in that specific benchmark, and might give a hint as to how it would perform in very similar workloads (i.e. 7zip gives an indication of compression performance, CB gives an indication of tiled renderer performance, etc.) - but dependent on the quirks of that particular software.
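To make that concrete, here's a rough sketch (with made-up numbers, not real benchmark results) of why an aggregate score differs from any single benchmark. SPEC-style suites summarize per-workload ratios with a geometric mean, so one outlier workload can't dominate the overall score the way cherry-picking a single benchmark does:

```python
import math

# Hypothetical per-workload speedups of CPU A over CPU B - illustrative only.
# Quoting only the "render" result (1.30x) overstates the overall gap.
ratios = {
    "render":   1.30,  # tiled renderer - best case for CPU A, like CB
    "compress": 1.05,  # e.g. a 7zip-style workload
    "compile":  0.95,  # a workload where CPU A actually loses
    "simulate": 1.10,
}

def geomean(values):
    """Geometric mean, the aggregation SPEC-style suites use for ratios."""
    values = list(values)
    return math.exp(sum(math.log(v) for v in values) / len(values))

overall = geomean(ratios.values())
print(f"single 'render' score: {ratios['render']:.2f}x")
print(f"geomean across suite:  {overall:.2f}x")  # ~1.09x, not 1.30x
```

The point of the sketch: the suite-wide number (~1.09x) tells a very different story than the best-case single result (1.30x), which is exactly why one benchmark can't stand in for a representative suite.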
This is why I'm not interested in jumping on this testing bandwagon: because testing in any real, meaningful way would require time, software and equipment that likely none of us have. You seem to have either a woefully lacking understanding of the requirements for actually reliable testing, or your standards for what you accept as trustworthy are just far too low. Either way: this needs fixing.
Sapphire Rapids has been delayed ... what is it, four times now? Due to hardware errors, security errors, etc.? Yeah, that's not exactly a good place to start for a high performance comparison. When it comes out, it won't be competing against Zen3, it'll be competing against Zen4 - EPYC Genoa.
As for your fabulations about what a 16c SR CPU will perform like at 130W or whatever - have fun with that. I'll trust actual benchmarks when the actual product reaches the market. From what leaks I've seen so far - which, again, aren't trustworthy, but they're all we have to go by - SR is a perfectly okay server CPU, but nothing special, and nowhere near the efficiency of Milan, let alone Genoa.
And, crucially, SR will be a mesh fabric rather than a ring bus, and will have larger caches all around, so it'll behave quite differently from MSDT ADL. Unlike AMD, Intel doesn't use identical core designs across their server and consumer lineups - and the differences often lead to quite interesting differences in performance scaling, efficiency, and performance in various specific workloads.
Regarding SR, you are missing the point. It doesn't matter at all what it will be competing against; the argument I made was that 16 GC cores would wipe the 5950X off the face of the Earth in terms of efficiency, the same way 8 GC cores wipe the 5800X. So when SR will be released, and what it will be facing when it does, is completely irrelevant to the point I'm making.