Margin of error, especially since games are hard to run 100% the same; it's almost impossible for some, like AC.
Different power limits, different turbo profiles, cache sizes, apps not benefiting from more cores.
I also remember one of our games runs better with fewer cores, I think it was Metro; there's a discussion in the comments of a previous CML review.
Any idea why people are asking on Twitter, which is obviously the wrong platform, considering they can just ask here?
I was replying to the thoughts he put on Twitter. He quoted your article, and I initially replied to his tweet with my thoughts.
The heat added to your room isn't based on the temperature the CPU is running at. If the CPU is using 200 W, it is adding almost 200 W of heat to your room, regardless of whether the temperature is 40 or 80 degrees. That's a very common misconception.
Not entirely true. At higher temperatures, the processor leaks more power, so yes, the temperature of the processor has a small (but noticeable) impact on power consumption. And I'm not just referring to CPUs; this also applies to GPUs, VRMs, really anything with logic.
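Here's a back-of-the-envelope sketch of both points in Python. The power-to-heat conversion is just conservation of energy; the leakage model and its roughly-2%-per-degree growth rate are illustrative assumptions, not measured figures.

```python
import math

def heat_into_room_watts(package_power_w: float) -> float:
    """Essentially all electrical power a CPU draws ends up as heat in the room,
    regardless of the die temperature it happens to run at."""
    return package_power_w

def leakage_power_w(leakage_at_ref_w: float, temp_c: float,
                    ref_temp_c: float = 60.0, growth_per_deg: float = 0.02) -> float:
    """Static (leakage) power rises with die temperature; modelled here as an
    exponential with ~2% growth per degree C purely for illustration."""
    return leakage_at_ref_w * math.exp(growth_per_deg * (temp_c - ref_temp_c))

print(heat_into_room_watts(200.0))   # ~200 W of heat, whether the die sits at 40 C or 80 C
print(leakage_power_w(15.0, 80.0))   # a hotter die leaks a bit more...
print(leakage_power_w(15.0, 40.0))   # ...a cooler die a bit less
```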
Yeah, I appreciate your effort in not flooding the chat. Here are my 2 cents:
- Margin of error. In this case, the score is almost the same. Google Octane 2.0 is not the best application to test CPU core scaling...
- Same as Google Octane 2.0. The difference here is in the tens of milliseconds... Almost identical.
- In TensorFlow, I saw in the chart that the 10500 is 2.2% faster than the 8700K. I'm not sure what test @w1zard is using; there are so many parameters that may affect the result.
- Margin of error. The result should be read as identical.
- The DigiCortex compute plugin uses the SSE, AVX/AVX2, and AVX-512 instruction sets. Depending on cooling, boost frequency, and RAM setup (the x86 compute plugin is NUMA-aware), you will see the 10th Gen i5 with the better memory setup going ever so slightly faster than the older 8700K.
- Games: not all of them can eat up all the cores. Very few fully utilize all cores of the 10900K, hence the results. Games are NOT representative of CPU multicore performance (in my own point of view); they are there for your entertainment. Rendering videos or 3D scenes, or simulating physics / neural networks with a proper core config (intra_/inter_op_parallelism_threads, launching simultaneous processes on multiple NUMA nodes, binding OpenMP threads to physical processing units, setting the maximum number of threads for OpenMP parallel regions, ...) is a far better showcase; see the sketch after this list for what that tuning looks like.
- Margin of error. AC: Odyssey results will vary a lot. Take them with a grain of salt. They are relative results, not absolute.
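As a concrete example of the core/thread configuration mentioned above, here is a minimal sketch, assuming a TensorFlow 2.x build with Intel MKL/OpenMP; the thread counts are placeholders you would match to your own physical core count, and NUMA pinning itself would be handled by the launcher (one process per node).

```python
import os

# OpenMP knobs must be set before TensorFlow is imported.
os.environ["OMP_NUM_THREADS"] = "8"                          # max threads for OpenMP parallel regions
os.environ["KMP_AFFINITY"] = "granularity=fine,compact,1,0"  # bind OpenMP threads to physical cores

import tensorflow as tf

# Parallelism within a single op and across independent ops.
tf.config.threading.set_intra_op_parallelism_threads(8)
tf.config.threading.set_inter_op_parallelism_threads(2)

# On a multi-socket machine you would also launch one such process per NUMA
# node (for example via numactl) so each process only touches local memory.
```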
When you look at a benchmark, think of it this way: does CPU X run this application better than CPU Y? A benchmark gives you a representation of how CPUs perform within the confines of the application. The rules are set by the developers, and the CPUs play by those rules, which sometimes favor the in-theory-slower CPU. I'm running Photoshop and it just loves high-frequency cores, so a server Xeon CPU that costs thousands of dollars will bite the dust vs. a cheap 9900KS.
Metro Exodus loves 6-8 high-speed cores and scales poorly with 12-16 cores...
And someone is trying to promote their Twitter using this forum, I guess.
In that case, I think their testing setup is flawed. I highly doubt they locked the GPU to a static frequency, because GPU Boost can alter results and is not always consistent between runs. That's number 1. Number 2, they probably did not lock the fan speeds and instead had them running on auto. The more variables you introduce, the more variance there can be in the results. That's why you need to lock the frequency of the graphics card so that it doesn't fluctuate, and locking the fan speeds means there is one less variable that can be introduced. I understand it is very difficult to completely control the temperature of CPUs (keeping temperatures static would be extremely difficult), but keeping as many things static as possible instead of adaptive helps.
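If it helps, this is the kind of thing I mean by locking the GPU. A minimal sketch, assuming an NVIDIA card on a driver recent enough to support nvidia-smi's --lock-gpu-clocks flag (needs admin rights); the 1800 MHz value is just an example, and fan-speed locking would still be done in vendor software (Afterburner, nvidia-settings, etc.) since nvidia-smi doesn't expose it on consumer cards.

```python
import subprocess

def lock_gpu_clocks(min_mhz: int, max_mhz: int) -> None:
    """Pin the GPU core clock range so GPU Boost can't drift between runs."""
    subprocess.run(["nvidia-smi", f"--lock-gpu-clocks={min_mhz},{max_mhz}"], check=True)

def reset_gpu_clocks() -> None:
    """Restore default boost behaviour after the test session."""
    subprocess.run(["nvidia-smi", "--reset-gpu-clocks"], check=True)

if __name__ == "__main__":
    lock_gpu_clocks(1800, 1800)   # example clock; pick one your card holds reliably
    try:
        pass                      # run the benchmark passes here
    finally:
        reset_gpu_clocks()
```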
In regards to AVX-512, that isn't available on Comet Lake, only on HEDT, server, and Ice Lake platforms at the moment. Skylake through Comet Lake is literally the same architecture, hence the zero IPC improvement. If you keep the RAM speed the same and ignore the security fixes, then a 6700K clocked at 4.2 GHz all-core will perform exactly the same as an i3-10300 at 4.2 GHz all-core.
If games don't always eat up all the cores, then a 10900K SHOULDN'T be fully loaded up, which means it should run a tiny bit faster, because it runs higher clocks when it's not fully utilized.
Turbo ratios of the processors I mentioned
---------------------------------------------
i7-8700K - 47/46/45/44/44/43/-/-/-/-
i9-9900K - 50/50/50/48/48/47/47/47/-/-
i9-9900KS - 50/50/50/50/50/50/50/50/-/-
i5-10400F - 43/?/?/?/?/40/-/-/-/-
i5-10500 - 45/?/?/?/?/42/-/-/-/-
i5-10600K - 48/48/48/47/45/45/-/-/-/-
i7-10700 - 48/?/?/?/?/?/?/46/-/- (with Turbo Boost Max 3.0)
i7-10700K - 51/51/51/48/48/47/47/47/-/- (with Turbo Boost Max 3.0)
i9-10900 - 52/?/?/?/?/?/?/?/?/46 (with Turbo Boost Max 3.0 and Thermal Velocity Boost)
i9-10900K - 53/53/51/?/50/?/?/?/?/49 (with Turbo Boost Max 3.0 and Thermal Velocity Boost)
With those turbo ratios, you can now see why I was skeptical of the results: the older i7s and i9s have higher turbo ratios than even today's 10th Gen i5s (aside from the 10600/K), which means even the older 8700K should still beat the i5-10500. I left out the i7-9700K because it doesn't have HT, and that alone can change things, so I didn't talk about it.