# Intel Core i9-10900



## W1zzard (Jul 13, 2020)

In our Core i9-10900 review we're taking a close look at what can be gained from unlocking the power limit of this 65 W processor. The results are impressive: up to 40% faster application performance that rivals the Core i9-10900K at much lower pricing, but heat output is increased, too.

*Show full review*


----------



## dirtyferret (Jul 13, 2020)

Will the 10c/20t be able to handle WoW classic or should I go for 10900k?


----------



## Space Lynx (Jul 13, 2020)

dirtyferret said:


> Will the 10c/20t be able to handle WoW classic or should I go for 10900k?



hey, for the price of $435 with free shipping at Central Computers, it's still cheaper than the 3800XT at the moment (unless you live in California). so 2 extra cores, and it beats the 3800XT or ties it in 99% of games. and the 65 W is fine, not enough gain from the extra heat/wattage to be worth it. honestly a good value imo. most people only need 6 cores though... maybe 8. so i'd say the 3700X at $260 with the free Valhalla game at the moment is still the best value


----------



## BarbaricSoul (Jul 13, 2020)

@W1zzard is there an air cooling solution that can keep the 10900 cool (<75°C) running WCG 24/7? With the power limit lifted?


----------



## W1zzard (Jul 13, 2020)

BarbaricSoul said:


> @W1zzard is there an air cooling solution that can keep the 10900 cool (<75°C) running WCG 24/7? With the power limit lifted?


NH-U12 running Blender = 75°C, so yes. Why do you care about 75°C specifically? That's 25°C below the limit, i.e., the distance between your room temp and freezing


----------



## BarbaricSoul (Jul 13, 2020)

the computer is in my bedroom, with no thermostat in the room. I don't need a heater this time of year


----------



## KarymidoN (Jul 13, 2020)

dirtyferret said:


> Will the 10c/20t be able to handle WoW classic or should I go for 10900k?



Zen 3 is on the way; if you buy this, it'll be outmatched in price/performance in 3 months.
If you can't wait 3 months, get an X570 or B550 AMD board, get a cheap processor with it (3600 non-XT), and upgrade later to Zen 3 (the 3600 still sells really well on the used market).
Since it's AMD, you won't have to buy a new mobo + RAM to upgrade your PC, just update your BIOS and plug in your new Ryzen 4000 series when it launches (wait for reviews tho).

OR

Wait for the Zen 3 launch and buy Intel, because MAYBE they will drop prices on the current SKUs.
Either way, wait a little before purchasing GPUs and CPUs for now.


----------



## _Flare (Jul 13, 2020)

i7 and i9 official spec is DDR4-2933, below i7 only 2666
besides that, GREAT TEST as always


----------



## Searing (Jul 13, 2020)

BarbaricSoul said:


> the computer is in my bedroom, with no thermostat in the room. I don't need a heater this time of year



The heat added to your room isn't determined by the temperature the CPU runs at. If the CPU is drawing 200 W, it is adding almost 200 W of heat to your room, regardless of whether the die is at 40 or 80 degrees. That's a very common misconception.
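The wattage-in, heat-out point can be sanity-checked with basic physics, since the die temperature never enters the calculation. A rough sketch (the room size and perfect-insulation assumption are illustrative, not from the thread):

```python
# How fast a PC warms a sealed room, given only the power it draws.
# Assumptions (illustrative): 200 W dissipated into a 4 m x 4 m x 2.5 m room
# with no heat escaping; only the air is warmed.

AIR_DENSITY = 1.2           # kg/m^3 at room temperature
AIR_HEAT_CAPACITY = 1005.0  # J/(kg*K), specific heat of air

def room_temp_rise(watts: float, room_m3: float, hours: float) -> float:
    """Temperature rise (degrees C) of the air in a perfectly insulated room."""
    joules = watts * hours * 3600          # energy dumped into the room
    air_mass_kg = room_m3 * AIR_DENSITY
    return joules / (air_mass_kg * AIR_HEAT_CAPACITY)

rise = room_temp_rise(watts=200, room_m3=4 * 4 * 2.5, hours=1)  # ~15 °C/hour
```

Note that the CPU's own temperature appears nowhere: a 200 W chip held at 40 °C under a giant heatsink warms the room exactly as much as the same chip at 80 °C. Real rooms leak heat through walls and doors, so the actual rise levels off well below this.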


----------



## W1zzard (Jul 13, 2020)

_Flare said:


> i7 and i9 offi spec is DDR4-2933, below i7 only 2666
> beside that, GREAT TEST as always


Ugh, fail, I’ll retest tomorrow morning


----------



## Rowsol (Jul 13, 2020)

Damn, look at those stock temps.


----------



## HD64G (Jul 13, 2020)

So, for anyone who spends wisely on a CPU, a 3700X is much better than this CPU, being only 2-3% slower in CPU and gaming tests on average (paired with GPUs costing less than $800) and costing much less. For OC, only the K models are fit. Pointless and expensive product for what it offers imho.


----------



## W1zzard (Jul 13, 2020)

HD64G said:


> So, for anyone who spends wisely on a CPU, a 3700X is much better than this CPU, being 2-3% slower in CPU and gaming tests on average (paired with GPUs costing less than $800) and costing much less. For OC, only the K models are fit. Pointless and expensive product for what offers imho.


You need to look at the blue bar


----------



## Dave65 (Jul 13, 2020)

I've always been about leading edge tech, I just can't get excited over a 14nm part even tho Intel should be commended for squeezing all they can out of it and at temps that are not HELL like.. But just can't do it!


----------



## Searing (Jul 14, 2020)

Dave65 said:


> I've always been about leading edge tech, I just can't get excited over a 14nm part even tho Intel should be commended for squeezing all they can out of it and at temps that are not HELL like.. But just can't do it!



That has been a problem, from a consumer enjoyment point of view. My new 10th gen CPU performs great, it just feels old right after I bought it. Emotionally, it's not exciting


----------



## trom89 (Jul 14, 2020)

How cool would it be if the power limit unlock was called "Beast Mode"
Nice review


----------



## Totally (Jul 14, 2020)

W1zzard said:


> You need to look at the blue bar



They're nice and all, right up until you hit the power and thermal numbers. It's like meeting someone who's clearly out of your league, only to find out they have a triple-digit body count and a history of STIs.


----------



## mechtech (Jul 14, 2020)

"MP3 encoding is a single-threaded process "

What software are you using/version, and is there a program that will do it multi-threaded?

It would be interesting if the PS5 could be run through a few games and measure fps vs power consumption then compared to a PC.


----------



## ppn (Jul 14, 2020)

Can the ring bus uncore be overclocked?


----------



## blu3dragon (Jul 14, 2020)

Is the clock speed graph done with power limits enabled or disabled? I'd be more interested to see disabled results to compare with a possible OC on the K-series chips.

Also, any idea why the temps are so high compared to the 10900K when power limits are disabled? (It doesn't seem to consume that much more.) Possibly the 10900K's temps are too low.


----------



## Searing (Jul 14, 2020)

I do have a question for the reviewer. I can't find BCLK in my Asus motherboard. Can H470 do this? Can only Z490 adjust the BCLK?


----------



## biffzinker (Jul 14, 2020)

Searing said:


> I do have a question for the reviewer. I can't find BCLK in my Asus motherboard. Can H470 do this? Can only Z490 adjust the BCLK?


Base clock adjustment is restricted unless it’s a Z490 motherboard.


----------



## Flanker (Jul 14, 2020)

Could this be a good choice with a cheap, say H410,  board?


----------



## W1zzard (Jul 14, 2020)

mechtech said:


> What software are you using/version, and is there a program that will do it multi-threaded?


Using the LAME MP3 encoder, like probably everyone. You could encode multiple MP3s in parallel, but that's not multi-threaded encoding in my opinion
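To still saturate all cores with a single-threaded encoder, the usual approach is exactly what W1zzard describes: one LAME process per file. A minimal sketch, assuming a `lame` binary on the PATH; the `-V2` quality flag and the file paths are only examples:

```python
# Encode a batch of WAV files to MP3 in parallel: the encoder itself stays
# single-threaded, but N encoder processes run side by side.
import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def lame_cmd(wav: Path) -> list[str]:
    """Command line to encode one file with LAME's VBR quality preset -V2."""
    return ["lame", "-V2", str(wav), str(wav.with_suffix(".mp3"))]

def encode_all(wavs: list[Path], workers: int = 10) -> None:
    # Threads suffice: the heavy lifting happens in the child lame processes,
    # not in Python, so the GIL is no bottleneck. list() forces evaluation so
    # any encode failure surfaces as an exception.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(subprocess.check_call, map(lame_cmd, wavs)))
```

As the post says, this parallelizes across files, not within one file; a single long WAV still encodes at single-core speed.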



Flanker said:


> Could this be a good choice with a cheap, say H410,  board?


Sure, as long as it can handle the power draw/temps when unlocked


----------



## blu3dragon (Jul 14, 2020)

Flanker said:


> Could this be a good choice with a cheap, say H410,  board?


Good question. You'll lose the ability to overclock the memory, which helps a little in games, and you may lose the ability to increase power limits, which will have an impact in certain applications. I guess overall it'll depend on your use case and total budget for CPU + mobo. Assuming you want it for games, I'd be tempted to at least pair it with a lower-end Z490 board (granted, that means $160-ish right now) and some reasonably fast memory (B-die). A 10700K would probably be a better choice in that case, though.
For multi-threaded applications, if you can find a lower-end board that still lets you remove the power limits (without overheating the VRMs), then it might make sense; however, if you are looking for value with lots of cores, a Ryzen 9 probably makes more sense.


----------



## W1zzard (Jul 14, 2020)

blu3dragon said:


> and you may lose the ability to increase power limits


You won't. Intel doesn't segment the power limit controls. Maybe the BIOS won't have the options, but you can always use ThrottleStop.
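For background on why this isn't board-segmented: on these chips the package power limits live in one model-specific register, MSR_PKG_POWER_LIMIT (0x610), which tools like ThrottleStop simply rewrite at runtime. A sketch of the register encoding per Intel's SDM, assuming the common 0.125 W power unit (in reality the unit must be read from MSR_RAPL_POWER_UNIT, and the time-window fields are left at zero here for brevity):

```python
# Build the 64-bit value for MSR_PKG_POWER_LIMIT (0x610).
# Layout (Intel SDM): bits 14:0 = PL1 power, bit 15 = PL1 enable,
# bits 46:32 = PL2 power, bit 47 = PL2 enable; power is in "power units".
POWER_UNIT_W = 0.125  # assumed; the real unit comes from MSR_RAPL_POWER_UNIT

def encode_pkg_power_limit(pl1_w: float, pl2_w: float) -> int:
    pl1 = int(pl1_w / POWER_UNIT_W) & 0x7FFF   # PL1 field, bits 14:0
    pl2 = int(pl2_w / POWER_UNIT_W) & 0x7FFF   # PL2 field, bits 46:32
    value = pl1 | (1 << 15)                    # bit 15: enable PL1
    value |= (pl2 << 32) | (1 << 47)           # bit 47: enable PL2
    return value

# The 10900's stock 65 W PL1 / 224 W PL2 from the review:
reg = encode_pkg_power_limit(65, 224)
```

Writing the register (e.g. with `wrmsr` from msr-tools on Linux) needs root but works regardless of chipset, which is why even boards without BIOS options can have their limits lifted.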


----------



## sumolDeLaranja (Jul 14, 2020)

It would be interesting to know what's the minimum budget required to achieve comparable results to this test's pseudo-unlocked 10900.
I mean, the difference between an ASUS Z490 Maximus XII Extreme and an ASRock X570 Taichi is around 600€ per my local price aggregator.


----------



## blu3dragon (Jul 14, 2020)

sumolDeLaranja said:


> It would be interesting to know what's the minimum budget required to achieve comparable results to this test's pseudo-unlocked 10900.
> I mean, the difference between an ASUS Z490 Maximus XII Extreme and an ASRock X570 Taichi is around 600€ per my local price aggregator.


You don't need something that extreme. Look up the price of an Asus Prime Z490-P.


----------



## W1zzard (Jul 14, 2020)

sumolDeLaranja said:


> It would be interesting to know what's the minimum budget required to achieve comparable results to this test's pseudo-unlocked 10900.
> I mean, the difference between an ASUS Z490 Maximus XII Extreme and an ASRock X570 Taichi is around 600€ per my local price aggregator.


Yeah, that pricing is crazy. Other than features, don't expect any significant performance difference between the cheapest and the most expensive. VRMs might run hotter, but will be fine. Plenty of decent Z490 boards out there below $200.


----------



## yeeeeman (Jul 14, 2020)

The most impressive thing in my opinion is that at stock it consumes similar power to the 3600X, which is a 6-core baked on 7 nm, while performing similarly to a 3800X. Not bad.


----------



## cucker tarlson (Jul 14, 2020)

_Flare said:


> i7 and i9 offi spec is DDR4-2933, below i7 only 2666


running 4133 cl16-16-16-31 on i5 10500 no problem.



yeeeeman said:


> The most impressive thing in my opinion is that at stock it consumes similar power to 3600x which is a 6 core baked on 7nm, while performing similar to a 3800x. Not bad.


yes, it also throttles like crazy.
max turbo is what matters here, with PLs removed.
I don't think anyone buying an i9 or R9 should really consider power efficiency on 10/12 cores. what matters here is performance, and it's really good. The 10900 beats the 3900XT in CPU tests, not to mention gaming.



sumolDeLaranja said:


> It would be interesting to know what's the minimum budget required to achieve comparable results to this test's pseudo-unlocked 10900.


a basic Z490 mobo, except for a couple of the ASRock ones.
same price as X570 probably.


----------



## W1zzard (Jul 14, 2020)

cucker tarlson said:


> running 4133 cl16-16-16-31 on i5 10500 no problem.


Z490 is required for "memory overclocking" 
@_Flare listed the default spec, everything above that is considered mem oc by intel


----------



## HD64G (Jul 14, 2020)

W1zzard said:


> You need to look at the blue bar


So, you suggest anyone get a relatively expensive non-K Intel CPU and pair it with a very expensive motherboard to withstand the 250 W power consumption when unlocked? Not a very sensible choice, methinks. An R9 3900X is much better all-round. For gamers with 1080p high-refresh-rate screens and a 2080 Ti, even a 9700K is a better choice. This CPU has as little reason to exist as the R7 3800X(T) from AMD.


----------



## W1zzard (Jul 14, 2020)

HD64G said:


> So, you suggest anyone get a relatively expensive non-K Intel CPU and pair it with a very expensive motherboard to withstand the 250 W power consumption when unlocked?


That ASUS motherboard is stupid, but it's what I have, and have been using for all Comet Lake reviews. So not switching until next rebench, to ensure fair comparison. I'm actually suggesting you buy the cheapest LGA1200 board with the features you want. Cheapest Z490 is fine, cheap H410, too.



HD64G said:


> A R9 3900X is much better all-round


Not according to my data. 3900X is slightly slower on average in CPU tests, 7% slower at 1080p, $10 more expensive, only 20% better power and 2°C cooler. I can give you "similar if not fully focused on gaming"


----------



## HD64G (Jul 14, 2020)

W1zzard said:


> That ASUS motherboard is stupid, but it's what I have, and have been using for all Comet Lake reviews. So not switching until next rebench, to ensure fair comparison. I'm actually suggesting you buy the cheapest LGA1200 board with the features you want. Cheapest Z490 is fine, cheap H410, too.


I beg to differ and will repeat that any mediocre board won't withstand, for long, a CPU that uses 250 W when pushed. So it is by no means economical to get that 10900 and unlock it over a stock 3900X that can work properly on a $100 board. My opinion ofc.


----------



## W1zzard (Jul 14, 2020)

HD64G said:


> that any mediocre board wont withstand for long a CPU that uses 250W when pushed.


Not sure about that. The 3900X is 200 W full system, 10900 is 240 W, so your argument becomes "$100 boards can handle 200 W, but not 240 W" ? Could be a possibility, not convinced, but I don't have any good data.

Edit: Did a quick test just for you, 10900K @ 5 GHz manual, Prime95 = 225 W. 10900K because I was too lazy to uninstall the 10900 from the ASUS board
MSI Z490 Gaming Plus (which I bought for € 155 in retail), VRMs just warm, plenty of headroom, cooling the CPU is almost impossible though, with that Intel TS cooler


----------



## navjack27 (Jul 14, 2020)

I'm using this CPU with the new stock cooler on an ASRock B460M Pro4. Doing a review of my own. It handles this CPU completely with a raised power limit of around 110 W, though I've tested PL1 overrides from 65 to 125 W. Also tested using my memory kit's 3466 timings at the 2933 max. You'd be surprised what you can get away with, @HD64G

Edit: one fun thing I'll drop in here is that undervolting exists. The max this mobo can do is -0.100 V, and that's what I've got it set at for my 24/7: 110 W PL1 with that undervolt. I tested every voltage interval in 0.005 V steps with 6 runs of RealBench sans the OpenCL test, with HWiNFO logs. It's a measurable difference. I also ran my test suite at "stock" 65 W PL1 and JEDEC memory timings, then at 110 W and 125 W. Then I ran them again with my memory kit's 3466 timings at 2933. The review will be up on https://thechipcollective.com/ "soon" but the results are over here



https://twitter.com/i/web/status/1282369498935418880


----------



## HD64G (Jul 14, 2020)

W1zzard said:


> Not sure about that. The 3900X is 200 W full system, 10900 is 240 W, so your argument becomes "$100 boards can handle 200 W, but not 240 W" ? Could be a possibility, not convinced, but I don't have any good data.
> 
> Edit: Did a quick test just for you, 10900K @ 5 GHz manual, Prime95 = 225 W. 10900K because I was too lazy to uninstall the 10900 from the ASUS board
> MSI Z490 Gaming Plus (which I bought for € 155 in retail), VRMs just warm, plenty of headroom, cooling the CPU is almost impossible though, with that Intel TS cooler


Thanks for the effort to provide more info @W1z . You wrote in the review that the PL2 limit is 224 W. So how the heck will a normal $150 board stand the test of time combined with this CPU at that power consumption? The 3900X uses 140 W at stock. And we should leave CPU temps aside, as most can afford a $30-40 cooler when building a system of over $1000. But I would buy a very good and expensive board to use with a 225 W CPU. No point owning this vs a 3900X for productivity or a 10700K if gaming is the target.


----------



## W1zzard (Jul 14, 2020)

_Flare said:


> i7 and i9 offi spec is DDR4-2933, below i7 only 2666


finished retesting with 2933, new charts are up



HD64G said:


> PL2 limit is 224W


It's just the limit. I don't think it can reach that in normal circumstances, so consider it "infinite, no limit". In theory you could increase the voltage, but it makes no sense because the multiplier is locked


----------



## navjack27 (Jul 14, 2020)

W1zzard said:


> It's just the limit. I don't think it can reach that in normal circumstances, so consider it "infinite, no limit". In theory you could increase the voltage, but makes no sense because multiplier locked


Yeah that PL2 is very generous


----------



## blu3dragon (Jul 14, 2020)

HD64G said:


> Thanks for the effort to provide more info @W1z . You wrote in the review that the PL2 limit is 224W. So, how the heck a normal board of $150 will stand the test of time if you combine that with this CPU if used with that power consumption? The 3900X uses 140W at stock. And we should leave temps of the CPUs aside as most can afford a $30-40 cooler when building a system of over $1000. But I would buy a very good and expensive board to use with a 225W CPU. No point to own that vs a 3900X for productivity or a 10700K if gaming is the target.


In general, Asus, Gigabyte and MSI all did a good job speccing up the VRMs on their Z490 boards to cope with the power requirements of Comet Lake. Look up some more reviews if you like. The Asus Prime Z490-P at $160 can handle the power just fine, and W1zzard just showed that the MSI Z490 Gaming Plus can too. It seems you need to be more careful with ASRock (or rather, just avoid ASRock for Comet Lake).

The more expensive boards basically give you additional I/O, more power capability for overclocking (not needed for this chip, and generally requiring pretty extreme cooling), and slightly better memory OC capability (at least for Asus; I have not researched the others).


----------



## Poul-erik (Jul 14, 2020)

First this.




And so this, to me, it makes no sense.


----------



## W1zzard (Jul 14, 2020)

Poul-erik said:


> And so this, to me, it makes no sense.


Have you read the review? At least the conclusion text?


----------



## HD64G (Jul 14, 2020)

A messed-up product that has no purpose in the market other than to gain traction for Intel's CPU product line. It is exactly the same as the XT line of Ryzen 3000. No need to make it a bigger thing than it is. Over and out of this discussion. Everyone can have an opinion. Arguments always matter though.


----------



## navjack27 (Jul 14, 2020)

The place in the market for this is the mass of people who have bought K-series CPUs in the past and DO NOT overclock or use ANY of the features that K offers. Turbo is getting good. Power limits are finally at ranges where they won't be hit, at least for the PL2. There are people who will never be in the market for AMD. These are enthusiasts too, not just some newbies. That's the market segment, but the public has almost been brainwashed into thinking that if you are getting an Intel CPU you NEED TO get a K. It's wrong, and the public needs to be educated on how good these things are.


----------



## Bee9 (Jul 14, 2020)

The stock cooler is a joke with 200W. 
Really tempting to get one of these and build a gaming rig for summer. It's getting up to 36C now in my place.


----------



## Searing (Jul 14, 2020)

Ok, I've now tested it with two mobos. There is a Gigabyte B460M DS3H (don't confuse it with the other B460 model; the D3H is better) that is total garbage. The VRM hit 110 degrees and it throttled at stock settings (Gigabyte used 85 W at stock, not 65 W, not Intel's guidance), and I had to lower the limit all the way to 75 W for it to work properly with that mobo. Returned!

I also used an Asus H410M and it is great (except the speckled white, yuck, I prefer all-dark colours on my mobos)! The stock cooler can only cool 125 W at maximum, so I set the motherboard max power draw to 110 W and it hit 98 degrees CPU temperature, which is fine. Drawing 110 W on the Asus H410M, the VRM is fine (note: unlike Gigabyte, Asus hides the VRM temp, but I can see no throttling), the cooler is fine, and you get 10 cores at about 4 GHz at 100 percent load. Goes back up to 5 GHz below 100 percent load, of course. Neat!

Asus H410M, i9-10900, and stock cooler at 110 W (set it in the BIOS): it works!

I'd really like Intel to release another all-black cooler (the new one looks great; too bad it only comes with the i7 or i9, it would be perfect for the 6-core CPUs) that is twice as thick and can cool 150 W instead of 110 W. I hate aftermarket coolers, they are overpriced or junk. If AMD put better bearings in their stock coolers so they'd be quiet at low RPM, I'd never buy another aftermarket cooler either. I can't stand anything cheaper than the whopping $95 CAD Noctua U12S all black; that's expensive, but nice. The stock Intel black cooler is pretty neat, just not strong enough. I'll use the stock cooler though, I like the $100 cost savings and don't need more than 4 GHz on 10 cores at 100 percent load.


----------



## navjack27 (Jul 15, 2020)

got some questions from twitter about this.


https://twitter.com/i/web/status/1283185165670461443

https://twitter.com/i/web/status/1283184685724643328


----------



## Berfs1 (Jul 15, 2020)

I'm gonna be honest, something is up with some of the results. I don't really understand how a 10900K at 5.2 GHz could lose to an i9-10900K at 2.5 GHz. I made those numbers up, but you probably see the point: a CPU running much faster SHOULD perform faster, right? Take a look at your 10900K numbers. I don't really believe that a 10900K, which clocks higher, loses to a stock 10900. Oh, by the way, how does an i5-10500 beat a 10900K performance-wise? That really doesn't make any sense. You could argue that maybe there are extra core-to-core latencies involved, but then you need to explain why a 10900 beats a 10500 while a 10900K LOSES to the 10500. That makes zero sense. Check the clocks. I guarantee you something isn't right about that.


----------



## Bee9 (Jul 15, 2020)

Berfs1 said:


> I'm gonna be honest, something is up with some of the results. I don't really understand how a 10900K at 5.2 GHz loses to a i9-10900K at 2.5 GHz. I made these weird numbers up because you could probably understand when running a CPU much faster, it SHOULD perform faster right? Take a look at your 10900K numbers. I don't really believe that a 10900K which clocks higher, loses to a stock 10900. Oh by the way, how does an i5-10500 beat a 10900K performance wise? That really doesn't make any sense. You could argue that maybe there are extra core to core latencies involved, but then you need to explain why a 10900 beats a 10500, but a 10900K LOSES to the 10500. That makes zero sense. Check the clocks. I guarantee you something isn't right about that.



What application are you referring to? Some applications just don't use all the cores.
In many cases, the unleashed, overclocked-to-the-max 10900 (the red bar) has a higher clock speed than the K variants chilling at default speed. With Intel's sketchy boosty thingy, you will get a lot of clock speed variability between runs, and that may cause variation in the results.
I don't see anything wrong with the chart.


----------



## Berfs1 (Jul 15, 2020)

Bee9 said:


> What application are you referring to? Some applications just don't use all cores.
> In many cases, the unleashed , overclocked to the max 10900 (in the red bar) has higher clock speed than the K variants chilling at default speed. With Intel sketchy boosty thingy, you will get a lot of variable clock speed between runs and that may cause variation in the result.
> I don't see anything wrong with the chart.




Image 1 - Not sure how the 9900K loses to an i5-10500, but okay.
Image 2 - Not really sure how a stock 10900 beats a 10900 with faster RAM or max turbo, but okay.
Image 3 - Not really sure how an i5-10500 beats an i7-8700K, but okay.
Image 4 - Not sure how an i5-10600K loses to an i5-10500 but is faster than an i5-10400F, but okay.
Image 5 - Not sure how an i5-10400F beats an i7-8700K, but okay.
Image 6 - Not sure how an i7-10700 beats an i9-10900K, but okay.
Image 7 - Please explain how an i7-10700/K beats an i9-9900KS.

There are many more inconsistencies; I don't really want to flood the chat.


----------



## BarbaricSoul (Jul 15, 2020)

Searing said:


> The heat added to your room isn't based on the temperature it is running at. If the CPU is using 200W, it is adding almost 200W heat to your room, regardless of the temp being 40 or 80 degrees. That's a very common misconception.



Tell that to my 3930K that heats my room to what I consider uncomfortable temperatures whenever it runs at temperatures over 75°C


----------



## W1zzard (Jul 15, 2020)

Berfs1 said:


> Please explain


margin of error, especially the games are hard to run 100% the same, almost impossible for some, like AC
different power limits, different turbo profiles, cache sizes, apps not benefiting from more cores
I also remember one of our games runs better with fewer cores, I think it was Metro; there's a discussion in the comments of a previous CML review



navjack27 said:


> twitter


any idea why people are asking on twitter which is obviously the wrong platform considering they can just ask here?


----------



## Bee9 (Jul 15, 2020)

Berfs1 said:


> Image 1 - Not sure how the 9900K loses to an i5-10500 but okay.
> Image 2 - not really sure how a stock 10900 beats a 10900 with faster RAM or max turbo, but okay.
> Image 3- Not really sure how an i5-10500 beats an i7-8700K but okay.
> ...



Yeah I appreciate your effort not flooding the chat. Here are my 2 cents:

1. Margin of error. In this case, the score is almost the same. Google Octane 2.0 is not the best application to test CPU core scaling.
2. Same as Google Octane 2.0; the difference here is in the tens of milliseconds. Almost identical.
3. In TensorFlow, I saw in the chart that the 10500 is 2.2% faster than the 8700K. I'm not sure what test @W1zzard is using; there are so many parameters that may affect the result.
4. Margin of error. The result should be read as identical.
5. The DigiCortex compute plugin uses the SSE, AVX/AVX2 and AVX-512 instruction sets. Depending on cooling, boost frequency, and RAM setup (the x86 compute plugin is NUMA-aware), you will see a 10th-gen i5 with a better memory setup going ever so slightly faster than the older 8700K.
6. Games: not all can eat up all the cores. Very few fully utilize all cores of the 10900K, hence the results. Games are NOT representative of CPU multicore performance (in my own point of view); they are there for your entertainment. Rendering videos and 3D scenes, or simulating physics / neural networks, scales with a proper core config (intra_/inter_op_parallelism_threads, launching simultaneous processes on multiple NUMA nodes, binding OpenMP threads to physical processing units, setting the maximum number of threads for OpenMP parallel regions, ...).
7. Margin of error. AC: Odyssey results will vary a lot. Take them with a grain of salt. They are relative results, not absolute.

When you look at a benchmark, think this way: does CPU X run this application better than CPU Y? A benchmark gives you a representation of how CPUs perform within the enclave of the application. Rules are set by the developers, and CPUs play by those rules, which sometimes favor the in-theory-slower CPU. I'm running Photoshop and it just loves high-frequency cores. So a server Xeon CPU that costs thousands of dollars will bite the dust vs a cheap 9900KS.



W1zzard said:


> margin of error, especially the games are hard to run 100% the same, almost impossible for some, like AC
> different power limits, different turbo profiles, cache sizes, apps not benefiting from more cores
> i also remember one of our games runs better with fewer cores, i think it was metro, there's a discussion in the comments of a previous CML review
> 
> any idea why people are asking on twitter which is obviously the wrong platform considering they can just ask here?


Metro Exodus loves 6-8 high-speed cores and scales poorly with 12-16 cores...
And someone is trying to promote their Twitter using this forum, I guess.


----------



## Berfs1 (Jul 15, 2020)

W1zzard said:


> margin of error, especially the games are hard to run 100% the same, almost impossible for some, like AC
> different power limits, different turbo profiles, cache sizes, apps not benefiting from more cores
> i also remember one of our games runs better with fewer cores, i think it was metro, there's a discussion in the comments of a previous CML review
> 
> ...


I was replying to the thoughts he put on Twitter; he quoted your article, and I initially replied with my thoughts on his tweet.



Searing said:


> The heat added to your room isn't based on the temperature it is running at. If the CPU is using 200W, it is adding almost 200W heat to your room, regardless of the temp being 40 or 80 degrees. That's a very common misconception.


Not entirely true. At higher temperatures, the processor leaks more power, so the temperature of the processor does have a small (but measurable) impact on power consumption. And I'm not just referring to CPUs; this also applies to GPUs, VRMs, really anything with logic.
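The usual rule of thumb behind this: leakage (static) power grows roughly exponentially with die temperature, on the order of doubling every ~10 °C. A toy model; the baseline wattage and temperatures are made up purely for illustration:

```python
# Toy leakage model: static power doubles every `doubling_c` degrees.
def leakage_w(temp_c: float, base_w: float = 10.0,
              base_temp_c: float = 50.0, doubling_c: float = 10.0) -> float:
    """Estimated leakage power (W) at a given die temperature."""
    return base_w * 2 ** ((temp_c - base_temp_c) / doubling_c)

# In this model, 10 W of leakage at 50 °C becomes ~80 W at 80 °C, which is
# why the same workload draws measurably more power on a hotter chip.
```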



Bee9 said:


> Yeah I appreciate your effort not flooding the chat. Here are my 2 cents:
> 
> Margin of error. In this case, the score is almost the same. Google Octane 2.0 is not the best application to test CPU cores scaling...
> Same as Google Octane 2.0. the difference here is in the 10s miliseconds... Almost identical.
> ...


In that case, I think their testing setup is flawed. I highly doubt they locked the GPU to a static frequency, because GPU Boost can alter results and is not always consistent between runs. That's number 1. Number 2, they probably did not lock the fan speeds and had them running on auto settings. The more variables you introduce, the more variance there can be in the results. That's why you need to lock the frequency of the graphics card so that it doesn't fluctuate, and locking the fan speeds removes one more variable. I understand it is very difficult to completely control the temperature of CPUs (keeping the temperatures static would be extremely difficult), but keeping as many things static instead of adaptive helps.

As for AVX-512, that isn't available on Comet Lake, only on HEDT, server, and Ice Lake platforms at the moment. Skylake through Comet Lake is literally the same architecture, hence the 0 IPC improvement. If you keep RAM speed the same and ignore the security fixes, a 6700K clocked at 4.2 GHz all-core will perform exactly the same as an i3-10300 at 4.2 GHz all-core.

If games don't always eat up all the cores, then that means a 10900K SHOULDN'T be fully loaded up. Which means it should run a tiny bit faster because it runs higher clocks when it's not utilized all the way.

Turbo ratios of the processors I mentioned:

| CPU | Turbo ratios (1-10 active cores) | Notes |
| --- | --- | --- |
| i7-8700K | 47/46/45/44/44/43/-/-/-/- | |
| i9-9900K | 50/50/50/48/48/47/47/47/-/- | |
| i9-9900KS | 50/50/50/50/50/50/50/50/-/- | |
| i5-10400F | 43/?/?/?/?/40/-/-/-/- | |
| i5-10500 | 45/?/?/?/?/42/-/-/-/- | |
| i5-10600K | 48/48/48/47/45/45/-/-/-/- | |
| i7-10700 | 48/?/?/?/?/?/?/46/-/- | with Turbo Boost Max 3.0 |
| i7-10700K | 51/51/51/48/48/47/47/47/-/- | with Turbo Boost Max 3.0 |
| i9-10900 | 52/?/?/?/?/?/?/?/?/46 | with Turbo Boost Max 3.0 and Thermal Velocity Boost |
| i9-10900K | 53/53/51/?/50/?/?/?/?/49 | with Turbo Boost Max 3.0 and Thermal Velocity Boost |

With those turbo ratios, you can see why I was skeptical of the results: the older i7s and i9s have higher turbo ratios than even today's 10th-gen i5s (aside from the 10600/K). Which means even the older 8700K should still beat the i5-10500. I left out the i7-9700K because it doesn't have HT, and that alone changes things, so I didn't talk about it.
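The core of the skepticism is simple arithmetic on the turbo ratios listed above, using the only two chips whose all-core (six-core) entries are both known:

```python
# All-core turbo multipliers (x 100 MHz) taken from the ratios listed above.
ALL_CORE_TURBO = {
    "i7-8700K": 43,   # 4.3 GHz with all six cores loaded
    "i5-10500": 42,   # 4.2 GHz with all six cores loaded
}

def all_core_ghz(cpu: str) -> float:
    return ALL_CORE_TURBO[cpu] * 0.1  # multipliers are in 100 MHz steps

# On paper, the older 8700K holds a ~100 MHz all-core advantage:
advantage = all_core_ghz("i7-8700K") - all_core_ghz("i5-10500")
```

Same architecture, higher multiplier: that is the whole argument for why the 8700K "should" win any fully threaded test against the 10500, absent other variables.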


----------



## Bee9 (Jul 15, 2020)

Berfs1 said:


> If you keep RAM speed all the same and ignore the security fixes, then a 6700K clocked at 4.2 GHz all core will perform exactly the same as an i3-10300 at 4.2 GHz all core.


There is a big IF. If we ignore the patches and the RAM speed / timings are the same, then...
The statement is true on its own, but once you introduce it to the real world, it becomes irrelevant.



Berfs1 said:


> If games don't always eat up all the cores, then that means a 10900K SHOULDN'T be fully loaded up. Which means it should run a tiny bit faster because it runs higher clocks when it's not utilized all the way.


Not that simple. What you mentioned is perfectly fine in theory. In real life, when the conditions for boosting aren't met, the CPU simply won't boost as high. The ambient, CPU, and case temperatures are not disclosed or recorded during the test, so we have little information as to whether the CPUs always boost to their max.

The 8700K will beat the crap out of the i5-10500 in some applications, not all of them. I had an 8700K myself and the results are consistent with what W1zzard posted here.
Keep in mind that the results you saw from older CPU benchmarks came from an older set of applications + drivers + Windows updates. Therefore, it's hard to get 100% comparable results.

I think @W1zzard has done the best he could with the benchmark considering the time / money / effort. I like the variables, because they represent what consumers will get when they purchase the xyz components and slap them all together. Do you fix your GPU core clock in real life? I guess not. It's good to isolate the CPU from all the variables and test its capability, but that renders the test irrelevant to real-life situations. That's what manufacturers do in their marketing: in a very well controlled situation, the CPU can boost up to xx GHz, but in real life you rarely see the advertised number. That's what happens when you isolate too many variables.


----------



## Searing (Jul 15, 2020)

BarbaricSoul said:


> Tell that to my 3930k that heats my room to what I consider uncomfortable temperatures whenever it runs at temperatures over 75'c



Sigh. Your CPU doesn't draw the same amount of power at all times. My 10900 usually draws around 35W doing light work. When it is running hotter, that is usually when it is drawing closer to 200 watts. The temperature has almost zero effect on how much heating is happening in your room. I could attach a giant heatsink to it, and when it is running at 40 degrees, if it is drawing 200 watts, that is how much it is heating the room.

Almost ALL CPU power is converted to heat, there are no moving parts like in a car engine. Sure it changes with temperature, but the effect is almost zero, so the other guy is also just being pedantic without being right.


----------



## navjack27 (Jul 15, 2020)

Bee9 said:


> So, we have little information to whether the CPU always boost to their max.



So much this. Yeah. With the 10900 you have a 28 sec window for boosting. If the CPU calls whatever the idle was before you started testing actually not in the moving average of the boost window then you'll get X amount of boosting. Your test can be less than 28 seconds long and on one core but that load jumps from core to core. Penalty for cache invalidation and moving it along with skewing the boosting yet again. If you attempt to control for everything you basically won't have a review published. It is impossible.

I went down that rabbit hole during my testing and it didn't really get me anywhere sane. Tested at a 125w PL1. You can end up with a review that is 50 pages of just asinine iteration.


----------



## W1zzard (Jul 15, 2020)

Bee9 said:


> fix your GPU core clock?


You can't. Well, unless you are willing to live with base clock, which is A LOT lower on Turing.

For GPUs the biggest negative effect to repeatability is "cold card" at the start of benchmarks (very convenient for GPU makers, too, if noob reviewers are involved). Within 30 seconds the card will heat up and clocks/perf will drop significantly on many models. That's why all my game tests include some time to heat up the card before measuring FPS


----------



## Dave65 (Jul 16, 2020)

Searing said:


> That has been a problem, from a consumer enjoyment point of view. My new 10th gen CPU performs great, it just feels old right after I bought it  Emotionally not exciting


I know if I had forced myself back to Intel for that extra frames in games, I just know I would not be happy, I do it with cars, trucks and anything tech. And it has cost me dearly in the passed for not having the Kahunna's to say NO and be happy with what you got. But with Ryzen, I don't have that problem... Yet


----------



## Berfs1 (Jul 16, 2020)

Bee9 said:


> There is a big IF. If we ignore the patches and the RAM speed / timing is the same then....
> This statement is true on its own but when you introduce it to the real world, it becomes irrelevant.
> 
> 
> ...


Yes in real life I fix my GPU clock, as well as use Boost Lock in Precision X1. Locks the frequency to boost clocks, allows for easier and simpler overclocking.

As for drivers, I can understand that, but at the same time, if the 8700K was running older drivers and Windows versions, shouldn't that actually perform better than current drivers? The reason I think that is because the security updates are supposed to fix the shortcuts, which in turn loses performance, sooo the older chips SHOULD perform slightly faster but idk. Given how it's literally the same architecture under the hood, 8700K should beat the 10500.









						Intel product specifications
					

Intel® product specifications, features and compatibility quick reference guide and code name decoder. Compare products including processors, desktop boards, server products and networking products.




					ark.intel.com
				




Check the link above to see the differences; not many aside from clock speeds and TDP.



W1zzard said:


> You can't. Well, unless you are willing to live with base clock, which is A LOT lower on Turing.
> 
> For GPUs the biggest negative effect to repeatability is "cold card" at the start of benchmarks (very convenient for GPU makers, too, if noob reviewers are involved). Within 30 seconds the card will heat up and clocks/perf will drop significantly on many models. That's why all my game tests include some time to heat up the card before measuring FPS


Have you used EVGA Precision X1? On all Turing cards I have helped people with, they all have locked to boost frequencies, in excess of 1800 MHz on the core.


----------



## W1zzard (Jul 16, 2020)

Berfs1 said:


> As for drivers and Windows versions


Same drivers and Windows version in all tests. Also same application versions



Berfs1 said:


> Precision X1


Ah yea, I remember now. I even looked into how they are doing it, using NVDIA's "test clocks for stability" API. 
Great idea actually. Maybe for future CPU reviews I could lock the GPU freq? Nobody will miss a few % in performance, but better repeatability will help with quality of results?


----------



## Berfs1 (Jul 16, 2020)

Searing said:


> Sure it changes with temperature, but the effect is almost zero, so the other guy is also just being pedantic without being right.


Define "almost zero".

Just tested with my GPU fans at 0% and ran 5x GPU Z "PCIe tests" to load up the GPU completely (around 94%). GPU power on HWINFO refers to Board Power Draw on GPU-Z, and GPU Core (NVVDD) Input Power (sum) [the first one] refers to GPU Chip Power Draw on GPU-Z, as the numbers were exactly the same (within rounding of course). At around 80°C, the BPD was ~110W average while CPD was ~73W average. I then ran the GPU fans at 100% as well as paused all 5 tests to quickly cool down to 40°C, then set fans back down to 0%, resumed all 5 tests, and then let it heat up to 50°C to read again. At around 50°C, the BPD was ~100W average while CPD was ~62W average. Yes, leakage is present. And it isn't zero. The % change from 50-80°C is ~10% increase in BPD and ~17.7% increase in CPD. That is nowhere near zero. While it may have a *small* impact (as I previously mentioned), temperature does impact the actual numbers, and it is noticeable. I could then proceed to prove how this correlates with the importance of cooling quality but that is not the point of this test; the point was to show how leakage affects the power consumption, which in turn can and will affect the actual wattage of heat dissipated.

The graphics card I just tested is a GTX 980 SC ACX 2.0 by EVGA, and it is in an eGPU box (connected to my laptop via Thunderbolt 3, at PCIe 3.0 x4). I believe I repasted this GPU with IC Diamond a year ago, and I used Boost Lock for this. This is actually [nearly] ideal for testing leakage, as there are not as many variables compared to a desktop, such as a radiator in the front that is cooling the CPU, so there are minimal variable factors involved. I already know the stable points for the overclock, 1425 MHz (from 1367) on core and 8000 MHz (from 7000) memory. While under load, the GPU maintained 1425 MHz core and 8 GHz memory throughout, up until around 75-77°C where the core clock dropped to 1412 MHz and stayed that way until temperatures were brought back down. There are 2x 120mm fans connected, one is the standard internal one, and the 2nd one is a fan that I added to boost cooling when I had a Tesla before, but I leave it in anyways since it does help lower the temperatures by about 3-7°C. Both 120mm fans inside the eGPU box are static speed, no variance. The internal PSU has the fan facing the graphics card, so it does help exhaust some heat, but not a lot. That is the only unconstrained variable that I can think of, however the PSU fan never ramped up during the test, so I consider that to be an independent variable for this test.

EDIT - I moved the sentences in the correct order; I was typing a lot of different things, and they went out of order, my apologies.


----------



## W1zzard (Jul 16, 2020)

Berfs1 said:


> GPU Z "PCIe tests" to load up the GPU completely


GPU-Z PCIe test is not a good choice for that, its load profile is relatively low. Better use any random game with no FPS cap, and in windowed mode. I like to use Unigine Heaven for that because it loads reasonably fast nowadays, and I can pause the movement, so load is fixed. Furmark is a bad choice because GPUs/drivers will detect it and clocks down way too much



Berfs1 said:


> dropped


NVIDIA drops 13 MHz at fixed temperature intervals, depending on the architecture


----------



## EarthDog (Jul 16, 2020)

@W1zzard  - what about the pcie test in 3dmark suite?


----------



## W1zzard (Jul 16, 2020)

EarthDog said:


> @W1zzard  - what about the pcie test in 3dmark suite?


Not sure how much GPU load it produces, never really used it, load time is longer than other methods


----------



## Bee9 (Jul 16, 2020)

Berfs1 said:


> As for drivers, I can understand that, but at the same time, if the 8700K was running older drivers and Windows versions, shouldn't that actually perform better than current drivers? The reason I think that is because the security updates are supposed to fix the shortcuts, which in turn loses performance, sooo the older chips SHOULD perform slightly faster but idk. Given how it's literally the same architecture under the hood, 8700K should beat the 10500.


New GPU drivers / windows updates may affect the performance and gain some advantages for the 10th gen when compared to the 8700K in gaming. The net gain from GPU drivers and other factors may contribute to higher performance in 10th gen. (I'm just pointing out many possibilities.) 



Berfs1 said:


> EVGA Precision X1


Nice. I will give it a try. I think @W1zzard should lock the frequency to a stable boost clock.


----------



## blu3dragon (Jul 16, 2020)

Berfs1 said:


> View attachment 162263View attachment 162264View attachment 162265View attachment 162266View attachment 162267View attachment 162268View attachment 162270
> Image 1 - Not sure how the 9900K loses to an i5-10500 but okay.
> Image 2 - not really sure how a stock 10900 beats a 10900 with faster RAM or max turbo, but okay.
> Image 3- Not really sure how an i5-10500 beats an i7-8700K but okay.
> ...



Most of these look like margin of error to me.  The results are essentially the same, implying that the CPU is not the bottleneck.

One or two look to have a slightly bigger difference.  It's possible that those tests are simply less repeatable, or maybe helped by newer boosting algorithms?


----------



## Berfs1 (Jul 16, 2020)

W1zzard said:


> GPU-Z PCIe test is not a good choice for that, its load profile is relatively low. Better use any random game with no FPS cap, and in windowed mode. I like to use Unigine Heaven for that because it loads reasonably fast nowadays, and I can pause the movement, so load is fixed. Furmark is a bad choice because GPUs/drivers will detect it and clocks down way too much


For stability it is not a good choice, I just wanted to show how leakage works at a certain temperature, which honestly, any test that provides a consistent load will be fine. If anything it would be more noticeable at higher wattages.



blu3dragon said:


> Most of these look like margin of error to me.  The results are essentially the same, implying that the CPU is not the bottleneck.
> 
> One or two look to have a slightly bigger difference.  It's possible that those tests are simply less repeatable, or maybe helped by newer boosting algorithms?


i7-8700K vs i5-10500 for example, it's literally the same Turbo Boost 2.0, but the 8700K has higher turbo ratios. Sure TDP might have something to do with that, but if that's the case why not test the 10900K with a 65W TDP setting?

The reason I am going after this is because it is basically implying that the i9-10900 is FASTER than the i9-10900K, which clearly would not be true, by logic. I can see those numbers, and I can see the percentage differences, however for someone that just wants to see if x CPU is better than y CPU, they don't really care to see how much faster x CPU is, they only care if it's faster. Yes there are people like that, and one way to prevent that scenario from happening is keeping everything else static (which the GPU clocks weren't static). The reason that is important is, if you run your GPU at a fixed frequency, the variations are much smaller. That's why I push for static frequencies on GPUs *WHEN COMPARING CPUs* because that's how you should test CPUs in games with everything else exactly the same. If you keep all other things are equal, then allow the CPUs to boost on its own, that's fine. I have done some testing myself (not with the same titles), and I can guarantee you those small variations that is giving a certain CPU an edge, is because the GPU is boosting ever so slightly faster on one CPU rather than the other. Yes, you can say margin of error. But like, you can't really say it's margin of error when the GPU frequency fluctuates. As W1zzard mentioned, if you have less variables, the tests are more repeatable.

TLDR, just have less variables that's all.

Also, please don't misunderstand, I greatly appreciate the time these reviewers take as I know this stuff takes weeks to do, and I am not trying to discredit them at all, I am just trying to help them out, that's all.


----------



## BarbaricSoul (Jul 16, 2020)

Searing said:


> Sigh. Your CPU doesn't draw the same amount of power at all times. My 10900 usually draws around 35W doing light work. When it is running hotter, that is usually when it is drawing closer to 200 watts. The temperature has almost zero effect on how much heating is happening in your room. I could attach a giant heatsink to it, and when it is running at 40 degrees, if it is drawing 200 watts, that is how much it is heating the room.
> 
> Almost ALL CPU power is converted to heat, there are no moving parts like in a car engine. Sure it changes with temperature, but the effect is almost zero, so the other guy is also just being pedantic without being right.



Actually, considering it runs at 100% full load running WCG, it pretty much does use the same amount of power at all times when it's booted up. Which is exactly why I asked about a cooling solution *that can keep the 10900 cool (<75'c) running WCG 24/7. * Whether it's because of the temperature it's running at, or how much power it is using, I know that if my 3930k is running at over 75'c, it noticeably increases the temperature in my room. Higher temperatures (>75'c), in my case, have two causes. One is when I increase my OC, which is to be expected. And two is when I need to clean the air filters in my case to maintain air flow in the case. Both cause increased temperatures in my room, but only one involves increased power being drawn.


----------



## blu3dragon (Jul 16, 2020)

Berfs1 said:


> For stability it is not a good choice, I just wanted to show how leakage works at a certain temperature, which honestly, any test that provides a consistent load will be fine. If anything it would be more noticeable at higher wattages.
> 
> 
> i7-8700K vs i5-10500 for example, it's literally the same Turbo Boost 2.0, but the 8700K has higher turbo ratios. Sure TDP might have something to do with that, but if that's the case why not test the 10900K with a 65W TDP setting?
> ...



OK, obviously anything that can be done to reduce the error and improve the repeatability of results will be helpful.  At some point though people should realize that a difference of 1 or 2% simply won't be noticeable in the real world and for practical purposes you can consider any two results that are within that margin as being equal.  (either because the cpus really are that close in performance, or because the bottleneck is elsewhere in the system).



BarbaricSoul said:


> Actually, considering it runs at 100% full load running WCG, it pretty much does use the same amount of power at all times when it's booted up. Which is exactly why I asked about a cooling solution *that can keep the 10900 cool (<75'c) running WCG 24/7. * Whether it's because of the temperature it's running at, or how much power it is using, I know that if my 3930k is running at over 75'c, it noticeably increases the temperature in my room. Higher temperatures (>75'c), in my case, have two causes. One is when I increase my OC, which is to be expected. And two is when I need to clean the air filters in my case to maintain air flow in the case. Both cause increased temperatures in my room, but only one involves increased power being drawn.



The point here is that if you want to know how much a 10900 will heat up your room compared to your 3930k, you need to compare how much power each cpu uses.
Let's say your 3930k has a water cooler, that keeps it at 75 degrees while drawing 300w, and a 10900 system has an air cooler that keeps it at 80 degrees while drawing 65w.
In both cases, the power drawn will leave the back of the case as heat into the room.  So, in this example the 3930k will heat up the room much faster despite being at a slightly lower temperature.

Now, if you have the same system and cooler, then the cpu temperature will be proportional to the amount of power it is using.  So, when your 3930k is at 75 degrees it might be using 300w, but when it is at 50 degrees it will be using much less than that (lets say 75w).  This is why your room heats up more when your cpu is at 75 degrees vs 50 degrees.


----------



## yeeeeman (Jul 18, 2020)

cucker tarlson said:


> running 4133 cl16-16-16-31 on i5 10500 no problem.
> 
> 
> yes it also throttles like crazy.
> ...


if you want performance, you get 10900k, not 10900. So 10900 is a very good mix of performance and efficiency.
I still stand upon what I said. 10900 shows that without those massive OCs that the K parts need in order to match Zen 2 parts with their higher IPC, 14nm process is very efficient at frequencies similar to AMD parts (10900 runs at ~4Ghz on average on all cores)


----------



## cucker tarlson (Jul 18, 2020)

yeeeeman said:


> if you want performance, you get 10900k, not 10900. So 10900 is a very good mix of performance and efficiency.
> I still stand upon what I said. 10900 shows that without those massive OCs that the K parts need in order to match Zen 2 parts with their higher IPC, 14nm process is very efficient at frequencies similar to AMD parts (10900 runs at ~4Ghz on average on all cores)


no.just no.
10900 is PL restricted
once you remove them it's right up there with 10900k.it also beats not only 3900x,but 3900xt too.


Spoiler: cpu tests











and please tell me you're joking about "massive overclocks",10900k can achieve 200mhz OC over stock 4.9


----------



## tecky (Aug 27, 2020)

That is awesome work - thanks therefor! You are the only one so far more or less testing the non k version! Keep up the good work!

I need a recommendation if you would go for the 3900x or the 10900 non k version.
At the moment I have an Hackintosh i5 10400 and a 3700x in my Windows rig. With both rigs I am doing video editing, like premiere and AE and Final Cut.
With the Win Rig I do gaming as well - AAA Titles, on a 2060S.
Maybe I am considering to go up on a Ryzen 3900x and keep the Hackintosh with i5 next to my side or sell actually the majority of my current hardware
and build just one rig with the i9 10900. From a price point after selling and buying its equal to what I currently have. Of course when I would buy the 3900x instead+keep the Hackintosh it would be approx 120Dollars more.

So what would be your recommendation?

New Build would be:
Asus Z490-A with OC planned TDP unlock to 200W-250W.
32GB Crucial Ballistix 2*16
i9 10900 (non k)
Dark Power Pro 11 650W Platin
1xNVME SanDisk 500GB MacOS
1XNVME EVO PLUS 250GB WIN
Be Quiet Dark Rock Pro 4
NXZT 510 Elite
1. Win GPU 2060s @ x8 PciE3 for Win
2. Hackintosh GPU RX580 @ x8Pcie3 for MacOS

Win current build
3700X Stock, maybe 3900x = maybe future option if not the i9
B450 Tomahawk Max
250GB Evo Plus
2TB Crucial MX500
32GB Crucial Ballistix 2*16 @3733mhz
RTX 2060S
Straight Power 11 Gold 550

Hacki Current @OpenCore
i5 10400
NVME SanDiskExtremePro 500GB
RX580
MSI Gaming Plus Z490
32GB Crucial Ballistix 2*16 @3200mhz
BeQuiet 600W Gold

Actually I want to save some bucks and not spend anymore. So I guess with the new build I would be good to go for at least 2022, until DDR5 release/establishment.
The Hacki is more a fun project at not yet too relevant for production purposes. But runs 100% stable with all support.
My thougts were just that the i9 is the best i can get for gaming and also decent for editing, whereas in comparison the 3900x isnt a bad choice either.
And i dont know how worst the efficency will get with the i9?! But to let it stock it wouldnt be a great choice i guess?!
I cant decide ..... please help


----------



## blued (Mar 31, 2021)

10900f is around $349 @ Amazon. I presume this is identical to the 10900 w/o the IGP in perf, power, general behavior? Thanks.


----------



## W1zzard (Mar 31, 2021)

blued said:


> 10900f is around $349 @ Amazon. I presume this is identical to the 10900 w/o the IGP in perf, power, general behavior? Thanks.


Correct


----------



## blu3dragon (Mar 31, 2021)

blued said:


> 10900f is around $349 @ Amazon. I presume this is identical to the 10900 w/o the IGP in perf, power, general behavior? Thanks.


Yes, it is.

(Longer answer: I've seen some tests on 10900k vs kf which basically concluded that they were the same and if you have an external gpu only thing you lose is quicksync.  Not sure if anyone has done real statistical analysis with 10900 vs 10900f or the lower end parts, but they should also be the same and since you can't oc these parts anyway the only possible difference is a small diff in power consumption).


----------



## blued (Apr 1, 2021)

blu3dragon said:


> Yes, it is.
> 
> (Longer answer: I've seen some tests on 10900k vs kf which basically concluded that they were the same and if you have an external gpu only thing you lose is quicksync.  Not sure if anyone has done real statistical analysis with 10900 vs 10900f or the lower end parts, but they should also be the same and since you can't oc these parts anyway the only possible difference is a small diff in power consumption).


Well, raising the PL from 1 to 2 is essentially an effective OC. Its just not as fine tuneable as with an unlocked multiplier, but results in the review are compelling enough. Just that its a sloppy OC that draws significantly more power than the k model. Still I'm interested in it and may get one to play around with.


----------



## Deleted member 202104 (Apr 1, 2021)

blued said:


> Well, raising the PL from 1 to 2 is essentially an effective OC. Its just not as fine tuneable as with an unlocked multiplier, but results in the review are compelling enough. Just that its a sloppy OC that draws significantly more power than the k model. Still I'm interested in it and may get one to play around with.



It's tunable in the sense that you can set whatever PL1 you want.  65w, 95w, 125w, 132w, or no limit.  An example is if you set PL1 to 95w, the 10900 will run all core at 4.0-4.1GHz.  Remove the limit and it will run at 4.6 all core.  Set it at 125w and it's pretty much a stock 10850k


----------



## W1zzard (Apr 1, 2021)

blued said:


> Well, raising the PL from 1 to 2 is essentially an effective OC. Its just not as fine tuneable as with an unlocked multiplier


I actually find it much better tunable, because like @weekendgeek said, you can set it to any limit you want, to match your cooling/power/heat/noise budget


----------



## VeqIR (Apr 17, 2021)

Can you similarly unlock power limit on an i9 10900KF?  Since 10900KF is basically the same price as 10900, I’m curious about any differences in the silicone in terms of heat and power usage and comparing the two processors in benchmarks.  From what I’ve read, 10900K/KF are the higher quality silicone, and the 10850K is the silicone that did not quite make the cut For the 10900K series.  I don’t know where 10900 non-K falls in that regard; my guess is that it’s also not quite the top bin of silicone.


----------



## blu3dragon (Apr 17, 2021)

VeqIR said:


> Can you similarly unlock power limit on an i9 10900KF?  Since 10900KF is basically the same price as 10900, I’m curious about any differences in the silicone in terms of heat and power usage and comparing the two processors in benchmarks.  From what I’ve read, 10900K/KF are the higher quality silicone, and the 10850K is the silicone that did not quite make the cut For the 10900K series.  I don’t know where 10900 non-K falls in that regard; my guess is that it’s also not quite the top bin of silicone.


10900 and 10900F are basically the same aside from the integrated graphics being disabled in the F.  Same quality silicon as far as known.

10900K or KF are the top binned dies.
10900 / 10900F and 10850K or KF are below that.


----------



## W1zzard (Apr 18, 2021)

VeqIR said:


> Can you similarly unlock power limit on an i9 10900KF?


Adjusting the power limit works on any recent CPU from Intel


----------



## VeqIR (Apr 18, 2021)

Back to my original wish then: to compare “stock” 10900/F with a “stock” 10900KF, both with power limit removed/set to maximum.  Could be interesting to compare the power draw and temperatures.  Though the turbo frequencies and behavior would be a bit different between the two, but still.


----------



## blued (Apr 20, 2021)

Bought a 10900f on the strength of this review. I liked the fact that the reviewer had no problems with temps on a NH-U14S cooler since I have a NH-D15S. Did a 10 min test run with P95 and temps got up to 90c and CPU power package draw was 230w. Of course P95 is overkill, anything else cant get past 75c (gaming max 50-60c).


----------



## arni-gx (Jul 25, 2021)

weekendgeek said:


> It's tunable in the sense that you can set whatever PL1 you want.  65w, 95w, 125w, 132w, or no limit.  An example is if you set PL1 to 95w, the 10900 will run all core at 4.0-4.1GHz.  Remove the limit and it will run at 4.6 all core.  Set it at 125w and it's pretty much a stock 10850k



sorry, BUMP.......







so, my new cpu, all 10c 20t = 4,6ghz is in unlock power limit mode with my motherboard, all along ?? or, still within power limit at TDP 65 watt ??


----------



## The red spirit (Jul 26, 2021)

arni-gx said:


> sorry, BUMP.......
> 
> 
> 
> ...


It's still at TDP, but not at PL1. Either your default PL1 is high or you just captured short PL2 burst here.


----------



## arni-gx (Jul 26, 2021)

The red spirit said:


> It's still at TDP, but not at PL1. Either your default PL1 is high or you just captured short PL2 burst here.



that screenshot is from ass creed syndicate....





this is from borderlands 3.....  so, my PL1 is not tdp 65w..... ?? also, what is my PL2 ?? in this review, i see no explanation about PL2 from i9 10900 non k......


----------



## The red spirit (Jul 26, 2021)

arni-gx said:


> that screenshot is from ass creed syndicate....
> 
> 
> 
> ...


PL1 and PL1 are settings found on any Intel CPU. They are power limits, PL1 is sustained power limit (i9 10900 default is 65 watts), PL2 is short term power limit (224 watts for i9 10900, if I remember right). Your screenshot shows 62.1 watt power usage, so everything is fine, but motherboard manufacturers tend to mess a lot with them. As power limits are adjustable and technically aren't spec, but suggestion. These two videos are must watch to learn a bit about power limits (even if second one is a bit silly, due to not accepting that stock is fine):


----------

