
Intel Core i9-9900K

Well, their fault for being lazy bastards all these years.
I'm not so sure anymore. The 9900K, while the best desktop chip they can produce at the moment, is a dud, because it crams too many cores at too high clocks onto too large a process, even though they know 14nm very well at this point. Sure, they stagnated on core count, but how much more could they have done? This can probably be blamed on the 10nm issues, and no manufacturer would hold on to an older process that is more expensive and less efficient, no matter how far in the lead they may be. I mean, even if AMD were still messing around with Bulldozer, Intel could still be cranking out more quad-core chips per wafer on 10nm than on 14nm. There's literally no reason not to do that.
 


Add more money for a cooler, and it's the best gaming processor ever... nice.

Here we have an ambient temp of 21-22C. Just imagine the temps in summer, when most people don't have air conditioning; what would the results and consequences be?
I live in Asia. I have aircon and the ambient temp is set to 25-26C, but so many people here don't have aircon, so the usual ambient temp is around 30-33C, which is an extra 10 degrees or so to factor in. So this CPU is not made to be used in countries where temps are high... This CPU runs way hotter than my 6950X OC'd to 4.4GHz. I really believe there is a problem with this 9900K, and we may hear a lot about it from the first buyers as soon as the weather gets warmer.
 
This is the Pentium D all over again.
 
Same here, bro. I live in Indonesia, really close to the equator.

I've tried an 8600K + Raijintek Triton, and the temps were bad indeed, especially in summer. Maybe 75-100C is still safe for the CPU, but I really don't feel safe or comfortable with it; my limit ever since the Athlon XP 2000 has been 65C, and above that... nope.

I'm sticking with my 3770K right now; the next upgrade is almost definitely a 2600X, or I'll wait for Ryzen 3000.

The big issue, I think, is that almost all tech reviews list these Lake CPUs as recommended, or great, blah blah blah... Maybe that's true purely in terms of performance, but how do they look overall? The cost you may add for a decent cooler, price, temps, performance and price per core, etc.
Here we have an ambient temp of 21-22C. Just imagine the temps in summer, when most people don't have air conditioning; what would the results and consequences be?
I live in Asia. I have aircon and the ambient temp is set to 25-26C, but so many people here don't have aircon, so the usual ambient temp is around 30-33C, which is an extra 10 degrees or so to factor in. So this CPU is not made to be used in countries where temps are high... This CPU runs way hotter than my 6950X OC'd to 4.4GHz. I really believe there is a problem with this 9900K, and we may hear a lot about it from the first buyers as soon as the weather gets warmer.
 
Which is exactly my point when you say "Intel will have to fight back." Intel has nothing to fight back with... until 2022, when they have a brand new arch on 7nm.
They're going to have to take a page out of AMD's CCX playbook if they hope to go any further. There's too many issues with the monolithic production process.
 
They're going to have to take a page out of AMD's CCX playbook if they hope to go any further. There's too many issues with the monolithic production process.
That monolithic design process is probably the only thing giving Intel an advantage right now. The CCX design is great in a lot of areas... it's cheap and gives good yields, and it's easily scaleable, but the design is a bit slower than a monolithic design due to higher latency. So, while the monolithic design is better for performance, it's also expensive and doesn't scale easily.
 
Nor should you mince words. Now tell us how you really feel! :D

Wizzard is wise to just put the numbers out there, comment moderately, and not piss all over Intel or anyone else he does reviews/business with/for... he can leave it to us to say this processor SUCKS.
 
Wizzard is wise to just put the numbers out there, comment moderately, and not piss all over Intel or anyone else he does reviews/business with/for... he can leave it to us to say this processor SUCKS.
There's a way to do things professionally and then there's the unprofessional way. w1zzard is a pro.
 
Wizzard is the best, thanks to him and this site I started overclocking on my own.
 
@1d10t With a 240mm rad in push-pull config, fans set to a mild profile, and a really good thermal paste, I think the load temps for the i9 part may hover around the mid-70s C,
depending heavily on ambient room temps.

You didn't get the point, did you? I think most of the time the CPU itself is going to be fine, but will the supporting parts, aka the motherboard, last as long? How will motherboards deal with such a high TDP, which pushes the VRM to or above its rated operating conditions while it soaks up all that heat on the PCB? Not to mention we rely on a "software sensor" or "embedded solution" in the CPU itself to read temperatures; gone are the days when thermistors were soldered onto the motherboard. That, and the VRMs on new motherboards are almost a joke: they slap on 10+ chokes but skimp on the high-side and low-side FETs. Did I mention the PWM controller?
In general, all the motherboard makers make me sick; they give you plastic shrouds and flashy lighting while cheaping out on the power delivery system.

To both of you, let's put this one to rest once and for all.
Multithreading for games doesn't work that way at all. We will not get games that fully utilize 6+ cores for rendering. The direction in game development is less CPU overhead and more of the heavy lifting on the GPU.
Most people misunderstand the features of Direct3D 12. While it is technically possible to have multiple CPU threads build a single queue, the added synchronization and overhead in the driver would be enormous. For this reason, we're not going to see more than one thread per workload that can be parallelized, which means separate rendering passes, particle simulation, etc. So games having 6+ threads for rendering is unlikely, and even for games with 2-3, the main rendering will still be done by the main rendering thread.
Neither Intel nor AMD is to blame here, nor are the developers; just forum posters and tech journalists driving up expectations without any technical expertise.
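To illustrate what "one thread per parallelizable workload" looks like, here is a rough, self-contained C++ sketch (not real Direct3D 12 code; the pass and command-list names are made up for illustration): independent passes are recorded on their own threads, and only the main rendering thread ever touches the single submission queue.

Code:
// Sketch: one thread per parallelizable workload; only the main thread submits.
// "CommandList" and the record_* functions are placeholders, not a real API.
#include <future>
#include <iostream>
#include <string>
#include <vector>

using CommandList = std::vector<std::string>;   // stand-in for a real command list

CommandList record_shadow_pass()   { return {"set shadow RT", "draw casters"}; }
CommandList record_particle_pass() { return {"simulate particles", "draw particles"}; }
CommandList record_main_pass()     { return {"set backbuffer", "draw scene", "draw UI"}; }

int main()
{
    // Independent passes are recorded concurrently -- no shared state, no locks.
    auto shadow    = std::async(std::launch::async, record_shadow_pass);
    auto particles = std::async(std::launch::async, record_particle_pass);

    // The main rendering thread still does the bulk of the work itself...
    CommandList main_pass = record_main_pass();

    // ...and is the only thread that "submits" to the queue, in a fixed order,
    // so the driver never has to synchronize multiple producers.
    std::vector<CommandList> queue;
    queue.push_back(shadow.get());
    queue.push_back(particles.get());
    queue.push_back(std::move(main_pass));

    for (const auto& cl : queue)
        for (const auto& cmd : cl)
            std::cout << cmd << '\n';
}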

Those are actually very good questions.
Firstly, it's important to understand that utilization in Windows Task Manager is not actual CPU load, but rather how much of the scheduling interval each thread has been allocated. Games usually have multiple threads waiting for events or queues; these typically run in a loop constantly checking for work, so to the OS they appear to have 100% core utilization. There are several reasons to code this way: firstly to reduce latency and increase precision, and secondly, Windows is not a realtime OS, so the best way to ensure a thread gets priority is to make sure it never sleeps. Thirdly, any thread waiting for IO (HDD, SSD, etc.) will usually show 100% utilization while waiting. It's important to understand that the "100% utilization" of these threads is not a sign of a CPU bottleneck.
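Here is a quick toy example of such a busy-wait thread (a plain C++ sketch I made up for illustration, not code from any actual engine): the worker never sleeps, so the scheduler reports the core as ~100% busy even though it does almost no real work.

Code:
// Why Task Manager's "100%" isn't real load: a worker spin-waiting on a flag
// occupies a core completely while doing essentially nothing useful.
#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

std::atomic<bool> work_ready{false};
std::atomic<bool> quit{false};

void worker()
{
    // Busy-wait loop: it never sleeps, so the OS sees the core as 100% busy
    // even though the "true" load is near zero until work actually arrives.
    while (!quit.load(std::memory_order_relaxed))
    {
        if (work_ready.exchange(false))
            std::cout << "doing a tiny bit of real work\n";
        // no sleep here on purpose: lower latency, but full core "utilization"
    }
}

int main()
{
    std::thread t(worker);
    for (int i = 0; i < 3; ++i)
    {
        work_ready = true;                              // hand the worker something to do
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
    quit = true;
    t.join();   // Task Manager showed ~100% on that core the whole time
}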
Secondly, game engines do a lot of things that are strictly not rendering, or that don't impact rendering performance unless they "disturb" the rendering thread(s).
This is a rough illustration I made in 5 min: (I apologize for my poor drawing)
Some of these tasks may be executed by the same thread, and some advanced game engines scale this dynamically. Even if a game uses 8 threads on one machine and 5 on another, that doesn't mean it will have an impact on performance. Don't forget the driver itself can add up to ~four threads on top of this.

Most decent games these days have at least a dedicated rendering thread, and many also have dedicated threads for the game loop and event loop. These usually show 100% utilization, even though the true load of the event loop is usually ~1%. Modern games may spawn a number of "worker threads" for asset loading; that doesn't mean you should have a dedicated core for each, since these are usually just waiting on IO. I could go on, but you get the point.
There are exceptions, like "cheaply made" games such as Euro Truck Simulator 2, which does rendering, game loop, event loop and asset loading all in the same thread, which of course gives terrible stutter during gameplay.

So you might think it's advantageous to have as many threads as possible? Well, it depends. Adding more threads that must synchronize will cause latency, so a thread should only be given a workload it can do independently and then sync back up, or better yet, feed an async queue. At 60 FPS we're talking about a frame window of 16.67 ms, and in compute terms that's not a lot if most of it is spent on synchronization.
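If you want to put a number on that, here is a rough C++ sketch (my own toy benchmark, nothing engine-specific) that times handing a trivial job to another thread and syncing back, then compares it against the 16.67 ms frame budget:

Code:
// Measure the round-trip cost of handing a tiny job to another thread and
// syncing back, and compare it to a 60 FPS frame budget of 16.67 ms.
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

int main()
{
    std::mutex m;
    std::condition_variable cv;
    bool job_ready = false, job_done = false, quit = false;

    std::thread worker([&] {
        std::unique_lock<std::mutex> lock(m);
        while (!quit)
        {
            cv.wait(lock, [&] { return job_ready || quit; });
            if (!job_ready) continue;
            job_ready = false;
            job_done  = true;               // the "work" itself is trivial
            cv.notify_all();
        }
    });

    using clock = std::chrono::steady_clock;
    constexpr double frame_budget_ms = 1000.0 / 60.0;   // 16.67 ms per frame
    constexpr int handoffs = 1000;

    auto t0 = clock::now();
    for (int i = 0; i < handoffs; ++i)
    {
        std::unique_lock<std::mutex> lock(m);
        job_ready = true;
        cv.notify_all();
        cv.wait(lock, [&] { return job_done; });         // sync back up
        job_done = false;
    }
    auto t1 = clock::now();

    double per_handoff_us =
        std::chrono::duration<double, std::micro>(t1 - t0).count() / handoffs;
    std::cout << "avg handoff+sync: " << per_handoff_us << " us, "
              << "frame budget: " << frame_budget_ms * 1000.0 << " us\n";

    { std::lock_guard<std::mutex> lock(m); quit = true; }
    cv.notify_all();
    worker.join();
}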

We're kind of in the same boat here. The D3D12 features, such as reduced overhead and thus more draw calls, give developers access to the hardware at the driver level, eliminating the need for a HAL. But there's still a catch in the arithmetic code, the OS and the architecture itself.

Many developers are still reluctant to use small integer types, and don't forget that a CPU is by nature a serial processor, so "parallelizing" threads within a single execution stream can pose quite a challenge in Windows itself. If we spawn many threads, the CPU won't mark them and the OS will decide that the first query executes first and moves on to the next cycle, leaving them uncached unless the instructions say they should be cached. Instructions get lengthy, and Microsoft seems to cap their desktop OS at a mere 128KB read-ahead, so we have to add wait states between intervals, otherwise we'll face cache incoherency.

Man, I'm mumbling too much; what do I know, I'm just a hardware guy :D
My point is that neither the CPU makers, the developers nor the OS are to blame; they just make sure their products are widely adopted.
 
There's a way to do things professionally and then there's the unprofessional way. w1zzard is a pro.

Exactly what I was saying, but more succinct and professional. However, I'm not a professional; I'm a hack who tortures hardware and shitposts on boards like this, saying what I mean with no filter... pure bliss.
 
That monolithic design process is probably the only thing giving Intel an advantage right now. The CCX design is great in a lot of areas... it's cheap and gives good yields, and it's easily scaleable, but the design is a bit slower than a monolithic design due to higher latency. So, while the monolithic design is better for performance, it's also expensive and doesn't scale easily.

But we've got some interesting market shake-up happening when 7nm Ryzen launches in 1H next year, since I just do not know what Intel is going to counter it with. They'll lose the performance crown, and lose on efficiency and price too. It's a triple whammy.

Like I said before, the only realistic counter to 7nm Ryzen is another 14nm refresh. This will be seriously humiliating, as it will lose on all metrics. And as the hot and power-hungry 9900K is at the limit already, they're backed into a corner. They can't push clocks any further. Anything past 4.7GHz produces serious heat and power draw if we're talking 8 cores.

Their 10nm, when it does arrive in late 2019 best case, won't be in the form of a 10nm 8-core to counter the 3700X. A 10nm 8-core from Intel is very likely to arrive in 2020! By that time AMD will be about to release 7nm+ or the 4700X.
 
The integrated graphics takes up space that could be populated by 4 more cores. And yet at 178mm² the 9900K is not big at all; it's smaller than the 192mm² 8-core Zen die, and the Sandy Bridge 2600K was 216mm².

Zen 2 can offer at best 8 cores in around 100mm², with no integrated GPU. The final optimisation of 7nm+ is 4x density, but that's something else; this is about 2x the density of GloFo/Samsung 12nm and TSMC, which are not as dense as Intel's 14nm++. (Roughly 192mm² / 2 ≈ 96mm², which is where the ~100mm² figure comes from.)
 
That monolithic design process is probably the only thing giving Intel an advantage right now. The CCX design is great in a lot of areas... it's cheap and gives good yields, and it's easily scaleable, but the design is a bit slower than a monolithic design due to higher latency. So, while the monolithic design is better for performance, it's also expensive and doesn't scale easily.
Intel is going the MCM route after the lakes dry out; that's why they hired Keller and bought this company recently. The ring bus will die a slow death, unless Intel decides that MSDT and the ring bus will coexist with their MCM solutions. I think their dGPU and next major uarch change may also coincide around the same time. It could be a perfect storm, or quite possibly the opposite, one that catapults AMD into the lead if things don't pan out the way Intel is planning.
 
That monolithic design process is probably the only thing giving Intel an advantage right now. The CCX design is great in a lot of areas... it's cheap and gives good yields, and it's easily scaleable, but the design is a bit slower than a monolithic design due to higher latency. So, while the monolithic design is better for performance, it's also expensive and doesn't scale easily.
Which is what Intel is running into. Why do you think these chips are so expensive? I'm betting that they're having yield issues, in which (due to how complex these chips are) many chips don't come out quite good enough to be a Core i9. One way or another they will have to adopt a CCX-like design at some point; the more cores you try to pack onto a single die/chip, the more complex things get. This is simply the nature of the beast. Not everything can be perfect.
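As a back-of-the-envelope illustration, here is a tiny C++ sketch using the textbook Poisson yield model, yield ≈ exp(-D*A). The defect density and die areas are made-up numbers purely for illustration, not actual Intel or AMD data:

Code:
// Toy yield comparison with the simple Poisson defect model: yield ~= exp(-D * A).
// Defect density and die areas below are illustrative assumptions only.
#include <cmath>
#include <cstdio>

int main()
{
    const double defects_per_mm2 = 0.002;   // assumed defect density (illustrative only)
    const double big_die_mm2     = 178.0;   // one large monolithic die
    const double small_die_mm2   = 89.0;    // the same silicon split into two smaller dies

    const double yield_big   = std::exp(-defects_per_mm2 * big_die_mm2);
    const double yield_small = std::exp(-defects_per_mm2 * small_die_mm2);

    std::printf("large die yield : %.1f%%\n", yield_big   * 100.0);
    std::printf("small die yield : %.1f%%\n", yield_small * 100.0);

    // Under this naive model, needing two perfect small dies gets you right back to
    // the big-die yield -- the real win is that a defective small die can be binned
    // or salvaged (fewer cores, lower SKU), while one defect can trash an entire
    // big, expensive die.
    std::printf("both small dies perfect: %.1f%%\n", yield_small * yield_small * 100.0);
}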
 
Which is what Intel is running into. Why do you think these chips are so expensive? I'm betting that they're having yield issues, in which (due to how complex these chips are) many chips don't come out quite good enough to be a Core i9. One way or another they will have to adopt a CCX-like design at some point; the more cores you try to pack onto a single die/chip, the more complex things get. This is simply the nature of the beast. Not everything can be perfect.
And we will remember this time as the "highest IPC, no matter the cost".
 
If you're disappointed then I guess you had hopes, and if you had hopes then I don't understand how you got them in the first place. Intel is stuck at 14 nm and we all knew that; how much magic could Intel put into this chip without changing the process? AND IT IS STILL CALLED COFFEE LAKE.

The 9900K is amazing for what it is given the 4+ year old (but improved) process that's being used, and it runs hot for the same reason. It's overpriced, but that's nothing new for the fastest of its kind.
If you thought it would cost $100 less, just be happy that you didn't buy it, as it barely makes sense today, and will make even less sense whenever the successor shows up.
If you don't mind the actual price then I guess you upgrade every year or so, so yeah, you can afford it.

I won't buy it, still happy with my 2600K, and next time I'll probably go back to AMD.
 
If you're disappointed then I guess you had hopes, and if you had hopes then I don't understand how you got them in the first place. Intel is stuck at 14 nm and we all knew that; how much magic could Intel put into this chip without changing the process? AND IT IS STILL CALLED COFFEE LAKE.

The 9900K is amazing for what it is given the 4+ year old (but improved) process that's being used, and it runs hot for the same reason. It's overpriced, but that's nothing new for the fastest of its kind.
If you thought it would cost $100 less, just be happy that you didn't buy it, as it barely makes sense today, and will make even less sense whenever the successor shows up.
If you don't mind the actual price then I guess you upgrade every year or so, so yeah, you can afford it.

I won't buy it, still happy with my 2600K, and next time I'll probably go back to AMD.

This!... Yes, the 2600K is still pretty darn good for most things, I'd guess. I've got an E5-1680 v2 8-core Ivy Bridge and I absolutely love it! It's 22nm but has the same core count as this, plus quad-channel memory. It doesn't OC as high without extreme cooling, but 4.4-4.6GHz is doable on moderate water cooling, it performs better particularly on memory bandwidth, and surprisingly it uses about the same or fewer watts to do it! So to me this processor wouldn't be so disappointing in 2018, except I have a processor from 2013 which I bought for a couple hundred bucks less than this thing, and it outperforms or equals it on every metric... With half a decade of Intel advancement, that is pretty sad. Boo, Intel.
 
There have been "wild" variations in power usage across reviews for this chip.

Upon "investigation", Hardware Unboxed found the issue: depending on the board used, its VRM capabilities, and whether or not the CPU has "been heated" prior to whatever bench is being run, performance drops significantly versus testing with a board that can actually deliver everything the CPU can "throw at it", so long as the cooler manages to keep up without forcing throttling.
 
Point taken. I would never buy the 9900K over the 2700X, it just makes no sense, but I would think about the 9700K for a proper upgrade.
 
Meh... oh well, thanks for the review. It confirms my path to a 2600/2700 (or the X variant, though sometimes the non-X performs better) as my next upgrade, especially given the numbers at 1440p.

Point taken. I would never buy the 9900K over the 2700X, it just makes no sense, but I would think about the 9700K for a proper upgrade.
Well... I have a 6600K, but I don't even consider a 9600K/9700K a proper upgrade :D nor would I consider either of the two over a 2700/X (or even over a 2600/X).
 
There have been "wild" variations in power usage across reviews for this chip.

Upon "investigation", Hardware Unboxed found the issue: depending on the board used, its VRM capabilities, and whether or not the CPU has "been heated" prior to whatever bench is being run, performance drops significantly versus testing with a board that can actually deliver everything the CPU can "throw at it", so long as the cooler manages to keep up without forcing throttling.

We have a chip with a claimed boost of 5GHz, but only if two cores or fewer are in use; with more than two it only boosts to 4.7GHz. It would also seem that after 30 minutes of continuous use it all drops down to 4.4GHz.

None of this is made clear to the average user. Having said that, the average user probably doesn't care anyway. I still reckon this thing comes close to being sold under false pretenses; it ain't what it claims to be.

trog
 
We have a chip with a claimed boost of 5GHz, but only if two cores or fewer are in use; with more than two it only boosts to 4.7GHz. It would also seem that after 30 minutes of continuous use it all drops down to 4.4GHz.

None of this is made clear to the average user. Having said that, the average user probably doesn't care anyway. I still reckon this thing comes close to being sold under false pretenses; it ain't what it claims to be.

trog
Well, still better than a K chip that's locked to default frequencies and BSODs once you apply a single hertz of OC... like my 6600K.
 
9600K review? Any ETA on that?
 