Friday, February 14th 2020
Intel Core i9-10900 10-core CPU Pictured
Intel's desktop Comet Lake-S lineup is close to release, and more leaks about the CPU models within it keep appearing. Perhaps the most interesting point of the Comet Lake-S series is that it brings a boost in both frequency and core count, with the highest-end Core i9 processors going up to 10 cores. Thanks to XFastest, a Hong Kong-based media outlet, we have the first pictures of what appears to be an engineering sample of the upcoming Core i9-10900 processor.
Being a non-K version, this CPU is not capable of overclocking and has a fixed TDP rating of 65 W. Compared to the 125 W of K models like the upcoming Core i9-10900K, this CPU will output roughly half the heat, thus requiring a less capable cooling solution. The CPU is installed in the LGA1200 socket, which is the new home for Comet Lake-S CPUs and provides backward compatibility with coolers made for LGA1151. On the sample processor pictured below, we can see a marking on the CPU that implies a 2.5 GHz base clock. Previous rumors suggested that this model would have a 2.8 GHz base clock; however, this could be an early engineering sample, given that no official markings are found on the CPU heat spreader.
Source:
VideoCardz
106 Comments on Intel Core i9-10900 10-core CPU Pictured
think... 8t vs 24t same fps..... on a threaded game...
I mean, if you want to sit around and play Cinebench on your gaming rig, then yes, threads are great.
And that is with Hitman not being limited in any other way, i.e. by the GPU, the engine/game code itself, or RAM. The game is also offline, so no latency hit, no tick-rate limitations or impact, no network load, etc.
The gist of it all is that AMD is now 'close enough', making Intel's overpriced top end completely ridiculous. And if you did buy this for 'high thread counts', then yes, Ryzen all the way, because it simply has more of them. The wiggle room left for an Intel rig has decreased substantially, and the potential gain over a lower-priced AMD alternative with often double the thread count is negligible for gaming. Yes, even for high refresh, these days.
Only if you feel like spending >2500 on your gaming PC, which is a total waste of time, is Intel really going to offer any sort of noticeable impact. Below that, you stand to gain more with a good GPU choice and a CPU that will last longer than any Intel 6- or 8-core/thread CPU. Those 9700s... obsolete faster than any HT/SMT CPU even if they clock higher. There is just no question about it. So yay, you gain 4 FPS today... :p
The 3570K proved that non-HT CPUs just run into trouble faster when the thread count is saturated, while the HT quads had at least a few more years in them. Why repeat this for such a meagre gain? So it's really a double-edged blade here. Short-term gain is at odds with future proofing, really. You're forgetting this is a top-end i9 part we are speaking of.
The average Joe does not consider this, making your entire argument irrelevant. You are definitely looking at performance-oriented use cases here with much lower idle times. As you say, most people will not ever see the extra performance - nor will they run Cinebench, or buy a 400+ dollar CPU, fast RAM and a pricey board. Those that do are often using the extra performance; they do have sustained loads.
You will likely see higher peak usage on one of the 3900X's 24 threads than you'll see on any of the 9700K's eight (a rough way to measure this yourself is sketched after the numbers below).
gamegpu.com/action-/-fps-/-tps/metro-exodus-sam-s-story-test-gpu-cpu
peak usage
75% on 9700k
80% on 3900x
81% on 3700x
83% on 9900k
Same story in most other modern games:
70% on 9700k vs 83% on 3900x
gamegpu.com/action-/-fps-/-tps/wolfenstein-youngblood-test-rtx
44% on 9700k vs 64% on 3900x
gamegpu.com/action-/-fps-/-tps/red-dead-redemption-2-test-gpu-cpu-2019
84% on 9700k vs 92% on 3900x
gamegpu.com/action-/-fps-/-tps/star-wars-jedi-fallen-order-test-gpu-cpu-2019
[Attached per-core load screenshots: RDR2, Wolfenstein Youngblood, Metro Exodus Sam's Story]
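Not taken from the linked tests, but if you want to reproduce this kind of per-core peak-usage figure yourself, here is a minimal sketch using Python and the third-party psutil package (assumed installed); it simply samples every logical core while the game runs and keeps the highest value seen on any single core:

```python
# Minimal sketch: log peak per-core utilization while a game runs in the background.
# Assumes the third-party "psutil" package is installed (pip install psutil).
import time
import psutil

def track_peak_core_usage(duration_s=60, interval_s=0.5):
    """Sample per-core load and return the peak seen on any single logical core."""
    peak = 0.0
    end = time.time() + duration_s
    while time.time() < end:
        # percpu=True returns one utilization figure per logical core/thread
        per_core = psutil.cpu_percent(interval=interval_s, percpu=True)
        peak = max(peak, max(per_core))
    return peak

if __name__ == "__main__":
    print(f"Peak single-core usage: {track_peak_core_usage(duration_s=30):.0f}%")
```

The numbers you get will of course depend on the sampling interval and what the game is doing at that moment, which is why the gamegpu charts above are only a snapshot.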
As far as hitching in gaming goes... some games improve when disabling HT (assuming the CPU has enough real cores to handle it). In some games it doesn't matter. So to me, those situations are game-dependent and a game/engine issue, not the CPU's.
How does it bottleneck a core when it is getting more work done? Your analogy there doesn't make sense to me. If HT goes away, it is still slower... it's bottlenecked compared to two real cores, but we aren't dealing with that as a comparison.
Wolfenstein Youngblood:
3900X: 83% load on core 2 and 19% load on core 5, lol
Core 2 operates nearly at full load whereas core 5 idles. The Intel distribution is way better here.
Note: this answer applies regardless of brand.
You're inflating your own opinion. I game more on Intel than AMD, and I see no in-game difference between the two besides minor differences in max FPS - nothing worth caring about.
If everything is normal, the BIOS and chipset driver should inform the OS, so that the OS assigns the load mostly to the best cores.
The load can be intensive or extensive. Extensive is when you measure how much time the thread is loaded, and intensive is when you measure the real load factor at a single point in time (a rough sketch of the difference follows this post).
And we also don't know whether, if the OS rapidly switches between the available threads, there will be conflicts between the apps competing for those threads.
It's all relative and unless you have 100% of the information and the whole picture, you can't claim anything.
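To make that intensive/extensive distinction concrete, here is a rough sketch of my own (using Python's psutil; the 50% "busy" threshold and the watched core index are arbitrary assumptions, not anything the OS defines): intensive load is a single instantaneous sample, while extensive load is the fraction of a time window in which the core was loaded at all.

```python
# Rough illustration of the "intensive vs extensive" load idea from the post above.
# Assumes psutil is installed; the threshold and core choice are my own assumptions.
import time
import psutil

CORE = 0               # logical core to watch (arbitrary choice for the example)
BUSY_THRESHOLD = 50.0  # treat the core as "loaded" above this utilization (%)

def intensive_load():
    """Intensive: the real load factor at a single point in time."""
    return psutil.cpu_percent(interval=0.1, percpu=True)[CORE]

def extensive_load(window_s=10, interval_s=0.5):
    """Extensive: what fraction of the window the core spent loaded at all."""
    busy_samples, total_samples = 0, 0
    end = time.time() + window_s
    while time.time() < end:
        sample = psutil.cpu_percent(interval=interval_s, percpu=True)[CORE]
        busy_samples += sample > BUSY_THRESHOLD
        total_samples += 1
    return busy_samples / total_samples

if __name__ == "__main__":
    print(f"Intensive (instantaneous): {intensive_load():.0f}%")
    print(f"Extensive (time loaded):   {extensive_load():.0%}")
```

Two cores can show the same extensive figure while having very different intensive peaks, which is one reason single-screenshot load charts can mislead.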
Things have improved greatly over the years, with Intel, MS and game developers (or some combination of them all) writing better software, so issues like HT causing problems don't seem to come up anymore - at least not that I'm aware of.
The 9900K with HT has a far more even load distribution in comparison.
But AMD doesn't think so. They have decided that load should be rapidly switched on and off across the threads in order to distribute the heat, and thus the clocks, optimally.
Is that how Ryzen boost works?
Seems kinda pointless having 12 cores and pushing one to 90% while five sit at 5%.
Not really the point of buying a high-core-count CPU.
So that's an issue you can't have? Three heavily loaded threads with superfluous activity distributed across the rest?
What's curious to me is people's theories of how processors work these days and why Intel is better.
The boost algorithm, which both use and which is designed per SKU, also has some say.
Some of the most vocal against multi-core and Ryzen in general are hell-bent on keeping their four-core hyperthreaded chips relevant by downplaying multi-core Ryzens.
Let's see where that gets them.
It's the Aliens, simple.
In Ryzen there are CCXs; these are quad-core complexes which share cache. When you rapidly switch load on and off across them, you add quite a bit of latency, and thus your overall performance decreases.
It's in your best interest to use the cores in the same CCX and not switch to the other CCX for offloading/onloading threads.
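As a rough user-space sketch of what "keeping work on one CCX" could look like on Linux (the core IDs below are assumptions for illustration only; real CCX size and core numbering vary by SKU and should be checked against the actual topology, e.g. with lstopo):

```python
# Sketch: pin the current process (and its threads) to the cores of one CCX on Linux.
# The core IDs are a hypothetical mapping for illustration; on a real Ryzen part,
# check the topology (lstopo or /sys/devices/system/cpu/) before pinning anything.
import os

CCX0_CORES = {0, 1, 2, 3}  # hypothetical first CCX; actual CCX size/numbering varies by SKU

def pin_to_ccx(cores=CCX0_CORES):
    """Restrict this process to the given set of logical cores."""
    os.sched_setaffinity(0, cores)  # pid 0 = the calling process
    print("Now allowed on cores:", sorted(os.sched_getaffinity(0)))

if __name__ == "__main__":
    pin_to_ccx()
```

In practice the scheduler and AMD's chipset driver are supposed to handle this for you; manual pinning like the above is more of a troubleshooting or benchmarking tool.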
I mean - yeah, it's better to use only as many threads as your app needs, and leave the rest alone, for other apps.
Of course, when your CPU is at 100% full load, then this rapid switching becomes 100% pointless.
The same could be said for video cards. There are different driver versions which might load the shaders up to 95% or 99%; of course, with 99% load you will get a few more frames per second.
All in all, it seems to benefit Ryzen's performance - this and the cache size.
100% load is never a good thing for games though.