Thursday, February 27th 2020

Intel 10th Generation Core "Comet Lake-S" Desktop Processor Boxed Retail SKUs Listed

Ahead of their rumored April 2020 availability, product codes of Intel's upcoming 10th-generation Core "Comet Lake-S" desktop processors have leaked to the web, courtesy of momomo_us. The lineup includes 22 individual SKUs, although it's unknown whether all of them will be available in April. There are four 10-core/20-thread SKUs: the i9-10900K, i9-10900KF, i9-10900, and i9-10900F. The "K" suffix denotes an unlocked multiplier, while the "F" suffix indicates a lack of integrated graphics; "KF" indicates a SKU that's both unlocked and lacking an iGPU. Similarly, there are four 8-core/16-thread Core i7 SKUs: the i7-10700K, i7-10700KF, i7-10700, and i7-10700F.
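The suffix scheme described above is mechanical enough to express in a few lines. This is purely an illustrative sketch — the `decode_sku` helper is hypothetical, not anything Intel publishes:

```python
import re

def decode_sku(sku: str) -> dict:
    """Infer feature flags from the trailing letters of a Comet Lake-S SKU."""
    m = re.search(r"\d+([A-Z]*)$", sku)
    suffix = m.group(1) if m else ""
    return {
        "unlocked": "K" in suffix,   # "K" = unlocked multiplier
        "no_igpu": "F" in suffix,    # "F" = no integrated graphics
    }

print(decode_sku("i9-10900KF"))  # {'unlocked': True, 'no_igpu': True}
print(decode_sku("i5-10400"))    # {'unlocked': False, 'no_igpu': False}
```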

The 6-core/12-thread Core i5 family has several SKUs besides the range-topping i5-10600K and its siblings, the i5-10600KF and i5-10600. These include the i5-10500, i5-10400, and i5-10400F. The quad-core Core i3 lineup includes the i3-10320, i3-10300, and i3-10100; the former two have 8 MB of L3 cache, while the i3-10100 has 6 MB. Among the entry-level Pentium SKUs are the G6600, G6500, G6400, G5920, and G5900.
Source: momomo_us (Twitter)

86 Comments on Intel 10th Generation Core "Comet Lake-S" Desktop Processor Boxed Retail SKUs Listed

#76
medi01
GlacierNine: behind which nobody is standing except you.
Oh, it's me against the entire world, which, let me guess, agrees with you. How refreshing.
Thanks for spending time to respond to me, enlightened one!
I won't wash my hands that typed response to your blessed message for a day.
candle_86: I look at it based on pricing; over 5,000 machines you're an enterprise, because at that point it's cheaper to buy an enterprise license than a per-seat license.
Procurement processes are very VERY different for juggernauts.
It's frustrating that people think it makes sense to argue about it.
#77
Berfs1
ARF: I don't know why you keep mentioning warranties, when these warranties always expire and they never matter because service is always needed after they expire! :slap:

Meanwhile, Intel is in a desperate situation: a 13% sales share at MF.de, with only around 2,500 CPU sales in February 2020.

For comparison, AMD's peak was almost 30,000 CPU sales in December 2019.

What a domination! :cool:


We keep mentioning warranties because, #1, a warranty acts as a sign of confidence in the product, and #2, it means that if anything goes wrong, it can be covered. Reliability is #1 in the enterprise space, and good warranties are the closest proxy for it. Say Computer A has a certain level of performance and a 5-year warranty, while Computer B has 5% more performance and a 1-year warranty. Most of the time, a company will choose Computer A, since they would rather take the ~5% performance hit in favor of a computer that is guaranteed* to be covered for 5x as long as Computer B.

*Again, a warranty does not mean the product is guaranteed to last as long as the warranty period; it means that, should anything go wrong, they will cover the repairs, or whatever the terms specify.
ARF: Tell them to stop it. It is very anti-green. Just upgrade the CPU if you need to, but never buy things you don't need, and thus put pressure on the environment by having them manufactured.
Yeah, okay, go ahead and spend the time to upgrade one thousand five hundred processors and make sure they run reliably without error afterwards. As if you will have the time for that.
ARF: Neither of the things you write is even partially correct. PC components are designed to work for decades. CPUs can work for decades, PSUs come with 10-12 year warranties, and RAM essentially comes with a lifetime warranty.

:laugh:

Please think about the environment.
Dude, if you really think processors under heavy load all the time will last longer than 5 years (without underclocking/undervolting), you live in a dreamworld. Just because a PSU comes with a 10-12 year warranty does not mean you just use it until it dies. If a computer is working on important data and all of a sudden the power goes out or the SSD fails, is "because the warranty isn't over yet" really the reason you are going to give when the company may have just lost a bunch of money because of it? I agree with making computers more efficient and reducing waste, but you just don't do that kind of thing in mission-critical scenarios.
AddSub: Just dumped my Ryzen rig, which I used primarily for modern gaming (I have two other WinXP-based machines for older titles: an Athlon XP + 7800 GTX SLI rig for really old stuff, and a Phenom + GTX 580 SLI machine for Crysis-era stuff). That said, going from a 6-core/12-thread Ryzen @ 4.05 GHz to a 6-core/6-thread 9600K @ 5.3 GHz was amazing: between 40% and 100% improvement in older titles (2010-2017) and around 20-30% in titles that came out in the last 12 months.

Don't trust all these YouTube reviewers; it's all "sponsorships" and straight-out obfuscation (lies) at worst, and at best it's the use of very specific benchmarks (or even subsets of very specific settings within benchmarks) to get the "desired" results. I knew I was in trouble when my shiny new Ryzen was spitting out real-world numbers, as in stuff outside of AIDA/Everest, that were very close to a 10-year-old X58/i7-920 platform (the one in my profile).

Anyway, TPU is one of the few remaining places to get unbiased reviews. FFS, Wiz still uses SuperPi to bench his CPUs; in fact, it's the very first test he runs in his reviews.

As for these new CPUs, Intel will stay the gaming/ST king for a long while. I don't see AMD doing much to change that, not until AM5 at least. There is no way AMD can close the 1 GHz gap with IPC improvements alone, and many titles and middleware engines still use x87 here and there, something Intel is still a master of. Never mind that most apps are very single-threaded, even in 2020.

Yes, Intel CPUs are better than AMD CPUs for gaming. However, what you also failed to mention is what you did about your graphics card. FYI, for price/performance, performance/watt, price, multithreaded performance, IPC, upgradeability, and power consumption, AMD > Intel. Intel is ONLY better than AMD in gaming because game code is usually not optimized for many threads. Intel also leads in single-threaded work only because its clock-speed advantage is larger than the IPC difference. Other than that, it is like buying a 2009 Mercedes over a 2009 RAV4 V6; yes, the Mercedes is faster, but it has terrible MPG, is more expensive, requires premium fuel, etc. You get the point.
HenrySomeone: Couldn't agree more; in regard to gaming, Intel is still MILES ahead, especially when comparing CPUs at thread parity, and just like you say (and I have said several times before), Ryzens won't change that anytime soon...
I wouldn't say miles ahead, since the performance gaps are not that big when comparing maximum performance on both sides.
medi01: I worked for a 350-ish company, a 5k-ish company, and an 80k+-ish company.
The 5k-ish one was much closer to the 350 one than to the 80k+ one, process-wise.
In the latter, "a single dude sits there and decides what crap to pick up" is not even remotely imaginable.

But let's argue about semantics, shall we? Who gets to call what an "enterprise" should be very entertaining to talk about.

lol ok buddy
candle_86: Nope, non-Z on a Z board using turbo OC to 4 GHz with DDR3-1866 OCed to 2000 MHz, but nice try. The Ryzen was still faster in single-threaded titles. I play older titles mostly: C&C 3, CoH 1, Civilization IV. All were faster, and games like Cities: Skylines absolutely loved it. In Cities I went from unplayable at 150k pop to playable at 500k pop, though Cities can use up to 6 cores.

His point is 40-100%, which isn't possible. 1st-gen Ryzen has IPC at Haswell level, and Haswell to Skylake is 10-12% at best in IPC, usually 5%. His 9th-gen has the same IPC as Skylake, but let's be generous and say it gained an extra 5%.

Now, 3rd-gen Ryzen ties Intel clock-for-clock in IPC, except in gaming, because of latency to the I/O die. But that loss is shown to be under 5% on average with an RTX 2080 Ti in games here at TPU. His i5 is at 5 GHz, so he gains 25% over a stock 3rd-gen part without turbo. At best his average is 30%, but because of AMD's turbo it's going to be closer to 10%, and that's if he's using an RTX 2080 Ti and if performance scales linearly, which it doesn't.

Now you've also got another issue: at 5 GHz the 2600K is still considered slow and is best paired with a 1070 or below these days; its time has already ended. Sandy Bridge was declared dead at the high end way back in 2018. It's still a fine budget chip, but it's missing things like AES and its AVX decoding is considered slow.
Yeah, um, so, here's the thing: Ivy Bridge has lower IPC than Zen+. Zen+, from what I remember, had zero IPC improvement over Zen, so we can assume Zen+ = Zen for IPC. I can prove it too; I have done the tests.
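The back-of-envelope model being argued over here — single-thread performance roughly proportional to IPC times clock — can be sketched as below. It is purely illustrative and ignores turbo behavior, memory latency, and the non-linear scaling candle_86 mentions:

```python
def relative_gain(ipc_a: float, clk_a: float, ipc_b: float, clk_b: float) -> float:
    """Naive estimate of A's single-thread lead over B, assuming perf ~ IPC * clock."""
    return (ipc_a * clk_a) / (ipc_b * clk_b) - 1.0

# Equal IPC, a 5.0 GHz part vs a stock 4.0 GHz part -> the 25% figure quoted above.
print(f"{relative_gain(1.0, 5.0, 1.0, 4.0):.0%}")  # 25%
```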
Geez, I see how long my reply has been: about a whole page. My apologies; just catching up on a very old thread.
#78
medi01
Berfs1: lol ok buddy
I thought "procurement processes are drastically different" was easy to understand.
Oh well.
I'll write it off as "mental capacity problems".
#79
AddSub
I mentioned this in another thread, but my Ryzen, even at a max OC of about 4225 MHz, still bottlenecks a decade-old GTX 580... a 580. So far I've tried an RX 480 (both single and CrossFire), RX 580 (both single and CrossFire), GTX 550 Ti (both single and SLI), GTX 580 (both single and SLI), and GTX 1070 (both single and SLI). And yes, I have a lot of GPUs; benchmarking is a hobby. I literally have bins labeled 1.5v and 3.3v to separate my AGP cards by voltage.


Anyway, here are some results of the Ryzen vs. the i5 at their max OCs. Win10, GeIL EVO-X Ryzen-branded 16 GB RAM kit (the same kit was used in both systems), 3466 MHz @ 18-20-20-40 on both, although the IMC on the i5 can go up to 4000 (so far). Motherboards are a Gigabyte AX370 Gaming 5 for the Ryzen and a Gigabyte Z390 Aorus Master for the i5. The i5 has no HT, so 6 cores; the Ryzen is in a 6+6 config. ASUS Strix 1070 8GB in SLI (SLI disabled). Will try to update as days go on.


I'm working on more modern benches, such as the more recent 3DMarks. Just finished Superposition, and the difference is only single digits, percent-wise. I'm parsing through and averaging down massive amounts of 32-bit/64-bit CPU-Z runs. That said, outside of the AIDA64 memory tests and a few CPU benches in there like Queen or SinJulia, I'm not seeing Ryzen win a whole lot of these. Anything legacy or near-legacy 3D, and Intel just destroys; anything requiring more threads, and Ryzen gains back ground. Even then, though...




----------------------------
SuperPi Mod1.5 XS
i5: 6.857sec
Ryzen: 10.384sec
----------------------------

----------------------------
Cinebench R10 ST (32bit):
i5: not run yet
Ryzen: not run yet

Cinebench R10 MT (32bit):
i5: not run yet
Ryzen: not run yet
----------------------------

----------------------------
Cinebench R10 ST (64bit):
i5: not run yet
Ryzen: not run yet

Cinebench R10 MT (64bit):
i5: not run yet
Ryzen: not run yet
----------------------------

----------------------------
Cinebench R15 ST:
i5: 224
Ryzen: 167

Cinebench R15 MT:
i5: 1278
Ryzen: 1311
----------------------------

----------------------------
Cinebench R20 ST:
i5: not run yet
Ryzen: not run yet

Cinebench R20 MT:
i5: 3042
Ryzen: 2764
----------------------------

----------------------------
WinRAR 5.60 ST:
i5: 2224
Ryzen: 1580

WinRAR 5.60 MT:
i5: 12007
Ryzen: 9329
----------------------------

----------------------------
Win10 bootup (stock soft settings, auto-logon, XPG SX8200 480GB NVMe @ Gen3x4, time taken at sub-2% CPU activity)
i5: 46sec
Ryzen: 68sec
----------------------------
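For what it's worth, the percentage gaps implied by the completed rows above work out as follows. This is a quick sketch using only the numbers in this post; SuperPi is a time, so lower is better:

```python
# (i5 score, Ryzen score, which direction is better)
results = {
    "SuperPi (s)":      (6.857, 10.384, "lower"),
    "Cinebench R15 ST": (224,   167,    "higher"),
    "Cinebench R15 MT": (1278,  1311,   "higher"),
    "Cinebench R20 MT": (3042,  2764,   "higher"),
    "WinRAR 5.60 ST":   (2224,  1580,   "higher"),
    "WinRAR 5.60 MT":   (12007, 9329,   "higher"),
}

def i5_lead(i5: float, ryzen: float, better: str) -> float:
    """Fractional lead of the i5 over the Ryzen; negative means the Ryzen wins."""
    return (ryzen / i5 - 1) if better == "lower" else (i5 / ryzen - 1)

for name, (i5, ryzen, better) in results.items():
    print(f"{name}: i5 ahead by {i5_lead(i5, ryzen, better):+.1%}")
```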

----------------------------
3DMark 2006: [screenshots removed]

3DMark 2003: [screenshots removed]

3DMark 2001 (total and detail): [screenshots removed]
----------------------------
Edited for formatting, spelling, and addition of more benchmark fields.
#80
Master Tom
I have the i9-9900K. I would have upgraded to a 10-core Comet Lake CPU, but that requires a new mainboard, and that is too much work: disassembling the computer and putting everything back together again. I only do that when building a completely new system.
#81
Berfs1
AddSub: I mentioned this in another thread, but my Ryzen even at max OC of about 4225MHz still bottlenecks a decade old GTX580... (full post with benchmark results quoted in #79 above; snipped here)
Specifically with PiFast, I have submissions on HWBOT where Ryzen processors are slower than LGA1155 processors. No, that is not an exaggeration. Also, I would not use 3DMark to compare LGA1151 vs AM4, since 3DMark is known to have issues with 10+ thread parts. However, I did expect the Ryzen processor to win in WinRAR, at least for multithreading, since it not only has IF but also SMT...
#82
AddSub
Berfs1: Specifically with PiFast, I have submissions on HWBOT where Ryzen processors are slower than LGA1155 processors. No, that is not an exaggeration. Also, I would not use 3DMark to compare LGA1151 vs AM4, since 3DMark is known to have issues with 10+ thread parts. However, I did expect the Ryzen processor to win in WinRAR, at least for multithreading, since it not only has IF but also SMT...
I got Vantage results, SLI & single GTX 1070. Ryzen actually pulls ahead in the individual GFX scene tests (not really in the feature tests, where it's back and forth), although I think it's the drivers (for this run I used 442.xx on the i5 and 388.xx on the Ryzen, so not a fair or official comparison). I will add the retests with equalized drivers, whatever the results, to the original posts above. Here are the "bests" for now, mismatched drivers and all. (The total score is still in favor of the i5, since the CPU score is just insane; again, mismatched drivers, I'm sure.)





Single GTX 1070: [screenshots removed]

2 x GTX 1070 (SLI): [screenshots removed]
#83
Dragonsmonk
AddSub: Anything legacy or near-legacy 3D, and Intel just destroys; anything requiring more threads, and Ryzen gains back ground. Even then, though...
Well, that is not really surprising, is it? 2001, 2003, 2006: not much has changed since then in Intel's world, so the old optimizations still kick in. Ryzen, however, was nowhere near existing yet, thus no optimization. So why are you wasting your own time running those benches?

On top of that, you're comparing an end-2018 Intel to an early-2017 Ryzen. Unfortunately, the generational difference for Ryzen is immense. For a more equal test it would have to be a 2xxx or 3xxx series Ryzen, since the launch of the mentioned 9600K falls basically in the middle of both: one is 6 months earlier, the other 8 months later.

There is a reason why I was raised with "never trust statistics you have not forged yourself...."
#84
GlacierNine
Dragonsmonk: Well that is not really surprising, is it? (full reply quoted in #83 above; snipped here)
Not to mention, let's be honest: what matters isn't how well you can run a 19-year-old benchmark. Everything available today can perform the tasks we considered difficult in 2001 with no difficulty whatsoever.

I will gladly buy a CPU that performs 10% worse in a 2001 benchmark if it performs 10% better in a current one, because in *both* cases I'm still going to be many times faster than a chip **from 2001** would be.
#85
Berfs1
AddSub: I got Vantage results, SLI & single GTX1070... (full post quoted in #82 above; snipped here)
In those scenarios, for the Ryzen system (both, really), can you confirm both PCIe slots were running at either PCIe x8 or x16? Sometimes the 2nd (and usually the 3rd) x16 mechanical slot operates in x4 electrical mode when the 1st x16 slot is occupied. Also, as others have mentioned, I recommend using a more modern benchmark, as these benchmarks are (for the most part) meaningless now that more and more of the applications people use are multithreaded.
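The x4-vs-x16 concern is easy to quantify from the PCIe 3.0 spec numbers (8 GT/s per lane with 128b/130b encoding); a rough sketch:

```python
def pcie3_bandwidth_gbs(lanes: int) -> float:
    """Theoretical PCIe 3.0 bandwidth in GB/s: 8 GT/s per lane, 128b/130b encoding."""
    per_lane = 8e9 * (128 / 130) / 8 / 1e9   # ~0.985 GB/s per lane
    return lanes * per_lane

for lanes in (4, 8, 16):
    print(f"x{lanes}: {pcie3_bandwidth_gbs(lanes):.2f} GB/s")
```

An x4 link tops out near 4 GB/s, a quarter of a full x16 slot, which is why a second card dropping to x4 electrical can matter.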
#86
GlacierNine
Berfs1: In those scenarios, for the Ryzen system (both, really), can you confirm both PCIe slots were running at either PCIe x8 or x16? Sometimes the 2nd (and usually the 3rd) x16 mechanical slot operates in x4 electrical mode when the 1st x16 slot is occupied. Also, as others have mentioned, I recommend using a more modern benchmark, as these benchmarks are (for the most part) meaningless now that more and more of the applications people use are multithreaded.
I wouldn't even bother addressing multithreading when there are so many things that didn't exist then and so many fundamental architectural differences.

Entire instruction sets we rely on today didn't exist in 2001. SSE2 had only just been introduced. By 2006 we had SSSE3, and these days we have AVX and AVX2. There was no 64-bit addressing; hell, there weren't even mainstream 64-bit operating systems. Motherboards still had northbridges and communicated with memory over the Front Side Bus. Programs weren't Large Address Aware, etc., etc.

These benchmarks are **irrelevant** today, for reasons so much more important than threads or clocks.