Sunday, May 26th 2019

Intel Pushes the Panic Button with Core i9-9900KS

With the 7 nm AMD Ryzen 3000 processor family expected to make landfall in early July, and "Ice Lake" nowhere in sight, a panicked Intel announced the development of the Core i9-9900KS 8-core/16-thread LGA1151 processor. Based on the 14 nm "Coffee Lake Refresh" silicon, this processor has a base frequency of 4.00 GHz, up from the original's 3.60 GHz, and an all-core Turbo Boost frequency of 5.00 GHz, matching the single-core maximum turbo of the original i9-9900K, which sustains only 4.70 GHz across all cores. A revamped Turbo Boost algorithm is expected to yield significant gains in multi-core performance. The company didn't reveal TDP, pricing, or availability.
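For a quick sense of the on-paper uplift, here is a minimal sketch comparing the published clocks; the 9900K's 4.70 GHz all-core turbo is taken from Intel's spec, and since the KS's TDP is unannounced, power is left out:

```python
# Published clocks (GHz) for the i9-9900K vs. the announced i9-9900KS.
clocks = {
    "i9-9900K":  {"base": 3.6, "all-core turbo": 4.7, "max turbo": 5.0},
    "i9-9900KS": {"base": 4.0, "all-core turbo": 5.0, "max turbo": 5.0},
}

for field in ("base", "all-core turbo", "max turbo"):
    old, new = clocks["i9-9900K"][field], clocks["i9-9900KS"][field]
    print(f"{field:>14}: {old:.1f} -> {new:.1f} GHz ({(new / old - 1) * 100:+.1f}%)")
# base +11.1%, all-core turbo +6.4%, max turbo +0.0%
```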

170 Comments on Intel Pushes the Panic Button with Core i9-9900KS

#151
Valantar
ToxicTaZAgain, you're talking about the IPC of the 3.8 GHz 3900X vs. the 3.6 GHz 9900K, just to point that out! 200 MHz on the base clock is a huge performance increase, which is why the 9900KS starts at a 4 GHz base, giving it an extra 400 MHz all-core base-clock advantage over the 9900K. With a 3.8 GHz base and four extra cores, the 3900X had better be faster than the 3.6 GHz 9900K with only 8 cores.

If you're talking about factory boost clocks, the 3900X hits 4.6 GHz vs. the 9900K's 4.7 GHz, giving Intel a 100 MHz boost advantage on 8 cores, while the 9900KS runs 5 GHz across all 8 cores for an extra 300 MHz per core.

The 9900K has all-core AVX2 OC headroom of 5.1 GHz, vs. 4.4 GHz all-core OC at best on the 3900X.

Most articles are misleading, and people take them as fact.
Seriously? Jesus, it seems there's no end to explaining that IPC means "instructions per clock". In other words, IPC does not vary with clock speed; performance varies with clock speed (and can be said to be the product of IPC and clock speed combined).

The graph shown in my post that you quoted is for frequency-normalized performance - i.e. with averaged scores from the SPEC2017 test suite divided by the clock speed of each respective CPU. Here is the chart of raw performance for each CPU:

See how it differs from the IPC/frequency normalized graph below (clearly marked as "Performance per GHz", i.e. "performance per clock")?


So: in SPEC2017, the 9900K ekes out a small victory overall compared to the 3900X. It scores 7.71 vs. 7.60 overall, with subscores of 9.59 vs. 9.56 in SPECfp and 5.98 vs. 5.77 in SPECint. However, the 9900K is clocked higher than the 3900X, meaning that when adjusted for clock speed - i.e. when looking at IPC - the Zen 2 architecture is faster. Actually it's quite noticeably faster, with an overall SPEC score of 1.65 vs. 1.54, or 2.07 vs. 1.92 in SPECfp and 1.25 vs. 1.19 in SPECint.
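As a sanity check: performance is (roughly) IPC × clock, so dividing the scores above by each chip's clock should reproduce the per-GHz figures. A minimal sketch, assuming the normalization used the 9900K's 5.0 GHz and the 3900X's 4.6 GHz boost clocks:

```python
# SPEC2017 scores quoted above: (overall, SPECfp, SPECint).
scores = {"9900K": (7.71, 9.59, 5.98), "3900X": (7.60, 9.56, 5.77)}
# Assumed normalization clocks (GHz): each chip's peak boost.
boost = {"9900K": 5.0, "3900X": 4.6}

for cpu, vals in scores.items():
    per_ghz = ", ".join(f"{v / boost[cpu]:.2f}" for v in vals)
    print(f"{cpu}: {per_ghz} per GHz")
# 9900K: 1.54, 1.92, 1.20 per GHz
# 3900X: 1.65, 2.08, 1.25 per GHz  (matches the per-GHz chart within rounding)
```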


As to your final statement: that's nonsense, at least if you look at serious sites like AnandTech. If you find what they're saying to be misleading, you aren't understanding what they are saying.
#152
ToxicTaZ
Valantar*snip*
So you're saying that the 9900KS is faster than the 3900X because the 9900KS has better IPC, and it has nothing to do with GHz. Got you.

I'm very excited about the 9900KS, can't wait to get it! Going to be fantastic.
#153
Valantar
ToxicTaZ*snip*
So you're not able to read graphs. Got it. You see, the test data quite clearly shows that AMD has the better IPC, while Intel stays slightly ahead in performance due to higher clocks. Given that the KS should sustain marginally higher clocks than the K, it will probably perform a few percent better. And be a power hog, of course.
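To put a rough number on "a few percent", a minimal sketch, assuming performance scales linearly with the all-core clock (a best case; real workloads scale worse):

```python
# All-core turbo: the 9900K sustains 4.7 GHz; the binned 9900KS promises 5.0 GHz.
k_all_core, ks_all_core = 4.7, 5.0
uplift = ks_all_core / k_all_core - 1

# Linear-with-clock scaling is a ceiling; memory- or thermally-limited work
# sees less, and single-core max turbo is unchanged at 5.0 GHz anyway.
print(f"best-case all-core uplift: {uplift * 100:.1f}%")  # -> 6.4%
```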
#154
ToxicTaZ
Valantar*snip*
But most likely the 9900KS will use way less power than the 3900X, as per the graph, since you're a graph person! I'm a PC gamer; basically it's all about "gaming performance" for me.
#155
Kapone33
ToxicTaZ*snip*
A difference of 30 watts is nothing.
#156
Valantar
kapone32A difference of 30 watts is nothing.
At those power levels that is mostly true, though with the KS being a binned and factory OC'd model, it'll more than likely match the 3900X. TPU's manually OC'd 9900K hit 378W in the same gaming load - though it's interesting to see that the 9900K's stock power consumption seems to have gone up by about 20W from their initial review also:

Given that a factory OC needs more voltage headroom than a manual OC tuned by any semi-competent person, we can expect even a well-binned 9900KS to match or exceed those numbers at stock clocks. It'll still outperform the 3900X in games, even if the actual difference will be too small to notice, but on the other hand it shows just how high Intel needs to push their clocks to compensate for AMD's better IPC when 8 Intel cores consume as much power as 12 AMD cores. Nobody is saying the 9900KS (or the K) is bad, it's just that they are no longer unequivocally the best, and the competition has managed to make their weak spots look particularly bad.
ToxicTaZ*snip*
I responded to the contents of your post above, but I have to comment on the fact that your attempts at moving the goal posts when you are proven wrong are blatantly obvious.

*Data showing how AMD has better IPC is posted* "Intel has better IPC!" "No, Intel has slightly better performance, AMD beats them on IPC" "But Intel is better for gaming!"

That's not how a civil debate is conducted, just FYI.
#157
zlobby
ToxicTaZ3800X is faster than 3700X.
9900KS is faster than the 3800X.
What a wall of text, but no definition of "faster"? You totally won me over!
#158
Kapone33
Anyone who says that Intel is the fastest, period, is a little (or very) delusional; every review from Ryzen to Threadripper has them beating the Intel price competitor in almost everything except gaming at 1080p (and not even all games). Don't get me wrong, I am not saying that Intel is bad or that it can't keep up with AMD CPUs. In fact I would say it's a wash overall, and I am never one to compare.

My thought process when it comes to CPUs is "is it faster than what I had before?" So yeah, I had an FX 8320 (a better CPU than the opinions of those who never owned one for a good amount of time would suggest), the R7 1700 was faster than that chip, the 2600 was faster in clock speed than the 1700, and the 1900X has way more to offer than those chips. I am also confident that the next TR4 chip I put in my system will be faster again than the 1900X. I would even go so far as to say that people who regularly review hardware would be hard pressed to notice the difference between an AMD and an Intel system if they were unaware of the hardware and used no FPS counter when gaming. Raise your hand if you can notice the difference between 160 and 150 FPS in a game on a 120 Hz monitor.
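For what it's worth, the frame-time arithmetic bears that out; a quick sketch:

```python
# Frame times for the two frame rates vs. a 120 Hz refresh interval.
for fps in (150, 160):
    print(f"{fps} FPS -> {1000 / fps:.2f} ms per frame")
print(f"120 Hz -> {1000 / 120:.2f} ms per displayed frame")
# 6.67 ms and 6.25 ms both beat the 8.33 ms refresh interval, so the
# monitor shows exactly 120 frames per second in either case.
```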
#159
EarthDog
kapone32Raise your hand if you can notice the difference between 160 and 150 FPS in a game on a 120 Hz monitor.
I agreed with you until this, lol.

How the heck are you supposed to notice that FPS difference when the monitor can't even show it??? That said, I can notice a 10-20 FPS difference from 120+ to 144 pretty easily. It's also not all about averages but minimums... 10 FPS can be a lot: the difference between the next setting up graphics-wise... or reaching your monitor's refresh rate. Some people just don't like glass ceilings and others think "good enough" is fine. ;)
#160
Kapone33
EarthDog*snip*
I agree with you; I should have expanded on that comment to say average. But I did say 150 vs. 160 on a 120 Hz monitor because they would both be higher than the monitor's refresh rate. I also agree that minimums are very important to how the game feels, but I did not want people to call me an AMD fanboy for pointing out that it looks like AMD is better at the 1% minimums. You are absolutely right about the glass ceilings too; there are plenty of people who get 1080 Tis for 1080p 60 Hz monitors without realizing that a 1660 or RX 570 would be plenty for those specs (because it's not good enough). :D
#161
EarthDog
You're missing the point. Your example was impossible to notice in the first place. :)
#162
Valantar
EarthDogYou're missing the point. Your example was impossible to notice in the first place. :)
I thought the point was exactly that it's impossible to notice, yet a rather shocking number of people keep harping on "Intel is better for gaming" when that is only the case if you are one of the very few with a high-end GPU and a monitor to match. Which is why this whole argument is rather silly.
#163
EarthDog
Valantar*snip*
But it isn't impossible to notice. More FPS is more FPS regardless. As I said, it could be the difference between hitting your refresh rate or raising IQ settings. It matters and can be noticeable. But using a scenario where there is literally zero chance to see the difference isn't appropriate when it can be noticed in most situations. Anyone can make up some BS example to make something impossible... like this was. So yes, that is impossible as he described, but it's entirely possible in most other situations. That example paints a lopsided picture that doesn't jibe with most people's reality.
#164
Kapone33
EarthDogYou're missing the point. Your example was impossible to notice in the first place. :)
Hahaha I hear you :)
#165
Valantar
EarthDog*snip*
Nothing you're saying here goes against what I said. If you have a 1080p >=144 Hz monitor and something like an RTX 2080 or 2080 Ti, it might indeed be noticeable. Maybe even with a 2070, but in fewer games. If not, or if you have a 1440p or 4K monitor, it wouldn't be. The vast majority of people gaming even on relatively high-end monitors have <=120 Hz panels, and even more have lower-end GPUs that only hit those kinds of frame rates in esports titles. In other words, for all of these people - the vast majority of gamers - it is indeed impossible to notice. Yet plenty of people in the latter group keep harping on the "Intel is best for gaming" nonsense when there's no way they'd notice the difference between the two on their own setup. That's the point here. We're not talking about 1%'ers with $5,000 gaming rigs, but gaming in general. And for gaming in general, the field is now even enough that it doesn't really matter which you pick - you're going to have a good experience no matter what.
#167
John Naylor
Intel Pushes the Panic Button with Core i9-9900KS
With a thread title like that, it would be nice to have seen some reasoning or justification ... as written, it just comes off as bias. With the 3900X having been announced with no IGP, why wouldn't Intel respond with a non-IGP version of the 9900K that they could sell $50 cheaper? AMD did pretty well ... but there is a big "but" ... kinda like losing the title game to your rival ... ya can feel good about losing by a smaller margin than last time, but it's still not a win. The AMD CPUs do very well in certain tasks ... the relevant part, though, is how many folks are actually doing those things? If one is building a box where the breadth of usage includes gaming, office apps, CAD, and photo and video editing, why would one even consider the 3900X? It costs more and doesn't finish on top.

The discussion is like "what's the best tool for a particular job ... a hammer, a screwdriver, or a wrench?" Can ya say wrench because it's best at tightening bolts? Or do you make a choice based upon the tasks you expect to perform? If I'm planning a tool box as a gift for my wife to keep at home and ask folks what's the best type of hammer to put in, would you say an air nailer? Then why say a 12-or-more-core CPU when the user's apps include nothing that takes advantage of those cores? The biggest task my wife would have is banging a tack into the wall to hang a picture, or tapping down a floor nail that popped up and that she stepped on.

Here are TPU's test results on the 3900X / 9900K:

Ryzen is king for getting your name on web site benchmark leader boards
Ryzen is king for rendering
Ryzen is king for software development
No clear / significant winner in web browsers
Ryzen is king for the science lab
No clear / significant winner in office productivity
Intel takes the Photoshop crown
Intel takes the Premiere crown
Intel takes the photogrammetry crown
Ryzen is king in text recognition
Ryzen is king in VMware
No clear / significant winner in file compression (app dependent)
Intel takes the encryption crown
Ryzen is king in graphics / mixed media encoding
Intel takes the music encoding crown
Intel takes the gaming crown
Intel takes the CAD crown


The items in bold I have done ... the items underlined, I do pretty much every day. To quote Frank Zappa, the "crux of the biscuit" is how the tools you employ do at the things you actually do ... how they do at things you don't do is irrelevant. If asked to build a box for a science lab, rendering, software development, VMware, media encoding, etc., I would definitely recommend a 3900X, but in 25 years of PC building, we have done 2 rendering boxes and 0 boxes in the other categories.

So if I am asked to build a box that will primarily be used for gaming, video / photo editing, office apps, browsing, and other "everyman" uses ... and which doesn't include science lab work, rendering, software development, VMware, media encoding, and the like, what CPU do I pick ...

a) 9900K ($479)
b) 9900KF ($449)
c) 3900X ($499)

How can I justify an AMD-based build that costs more and performs worse in the tasks at hand? What's best to put on ya feet to go down a hill? Roller blades or skis? Kinda depends on whether we're talking snow or pavement. Use the one appropriate for the surface.

Pushing the panic button ... no. The sports equivalent would be: after one manager brings in a new pitcher to face a hitter, the other manager responds by using a pinch hitter. In other words ... a perfectly normal and appropriate response to market conditions. Seems the KF is also bringing a bit better performance on average:

www.cpubenchmark.net/compare/Intel-Core-i9-9900K-vs-Intel-Core-i9-9900KF/3334vs3435

The suggestion that the KF is a ploy by Intel and is just a binned CPU was hysterical ... is it generally a sound business strategy to bin high-performing chips and sell them $30 cheaper than the run-of-the-mill stuff? The first thing that popped into my head when this was announced was the NVIDIA GTX 560 Ti 448, where they took failed 570s, disabled the broken shader units, and sold them as the 560 Ti 448. Take a 9900K and remove the IGP, or disable a failed one, and you have the 9900KF.