
Intel Core i9-11900K

Only came here for the H.264/H.265/rendering parts, and that says it all, to me at least.
 
You forgot to mention clicks. Social media and ad revenue models have essentially made these sites beholden to the mob. This perverts their analysis and reviews; it's not a hobby for them, they're there to make a living.

Just look at how many people threatened TPU with essentially de-funding them by blocking/removing them after their Zen 3 tests, then badmouthed them in other forums. This type of threat happens all the time on YouTube; look at the comments.

If these sites say something that causes viewership to decline 20%, that's like taking a 20% pay cut. No sane person is going to do that, hence they're going to have a huge bias toward creating content their viewers want to see: telling people what they want to hear. Numbers are numbers, so in one sense there's no deception, but there is perception manipulation, as platforms can be configured to produce the desired results. As the saying goes, figures can lie, and liars can figure. Buyer beware.

I'll be on the lookout for The Walkin' Dude's Hardware Review Page.

Although, based on your comments in this thread, I'm not sure I'll be expecting any unbiased reporting.

Coming from someone who's used Intel predominantly over the last decade, the 11900k is an embarrassment. It's overpriced, lost two cores, uses more power than most GPUs, and is the same price as the competition's 12 core part.
 
Coming from someone who's used Intel predominantly over the last decade, the 11900k is an embarrassment. It's overpriced, lost two cores, uses more power than most GPUs, and is the same price as the competition's 12 core part.

Personally, I think the 11900K is the only one of the lot worth buying, and it's far from an embarrassment, but it's also impossible for its market...

Single-threaded performance is amazing, and the latency is low.

It's capable of higher-speed memory, but the only thing that will take advantage of that is the iGPU, which means more power/heat. Who the hell is going to put this into an HTPC, or not use a dGPU for gaming in a tower?

I'm very curious what it's capable of with a large cooling loop and that new boost mode.

Re: the AVX comments, AVX workloads are how I find my OC limits.
 
I've come to the conclusion that this is basically performance-competitive with the 5800X while using a lot more power. The main problem is that it costs at least as much as a 5900X and is marketed as a replacement for the 10900K, which has two more cores. As a 10700K, or maybe a hypothetical 10750K, it would make a lot more sense.
 
I'll be on the lookout for The Walkin' Dude's Hardware Review Page.

Although, based on your comments in this thread, I'm not sure I'll be expecting any unbiased reporting.

Coming from someone who's used Intel predominantly over the last decade, the 11900k is an embarrassment. It's overpriced, lost two cores, uses more power than most GPUs, and is the same price as the competition's 12 core part.

I've actually never stated my opinion on the 11900K vs the others overall, only that many of the reviews are inherently biased. Like someone else stated, I think it is competitive with, maybe even a little better than, a 5800X depending on the use case; but as such it's way overpriced, given that it is also far less efficient.

At least for now, until I learn more, that is my opinion. It may change, as I'm following a more enthusiast-centered thread on overclockers.net. So what does an 11900K with DDR4-4400 or DDR4-5200 look like? Places like that are where you'll find out what this new chip can really do.

Now the 11700K at $399 and the 11600K at $269, that's a different story. They are beating the 5600X and 5800X in price and availability. There appears to be some value there, although that is still murky until we see what happens with different memory and BIOS settings.

Of course, if you need lots of threads, 5900X and 5950X are still the right choices.

I don't do that


Yet our readership keeps growing and growing. Not many tech sites are bigger than TPU nowadays.

You don't get to say that after you changed your test platform to the AMD-recommended DDR4-3733/3800.
 
You don't get to say that after you changed your test platform to the AMD-recommended DDR4-3733/3800.
I didn't even know this was AMD-recommended; are you sure? Given today's memory prices, I felt like using faster memory was reasonable, especially given our enthusiast audience.

Also, I made the memory decision before having RKL hardware here; who could have expected that the memory controller would be such a failure... I probably would have picked 3600 CL14 or CL16 just to spare myself those FML moments last week.
 
I didn't even know this was AMD-recommended; are you sure? Given today's memory prices, I felt like using faster memory was reasonable, especially given our enthusiast audience.

Also, I made the memory decision before having RKL hardware here; who could have expected that the memory controller would be such a failure... I probably would have picked 3600 CL14 or CL16 just to spare myself those FML moments last week.

AMD has been recommending that since Zen 2, as the IF will usually go to 1900 on a good motherboard, so it can maintain 1:1. I'm not an AMD guy, but AMD published this on one of their slides for enthusiasts. My understanding from forum posts is that 2000 FCLK (DDR4-4000) is beyond most AMD rigs, but I see DDR4-3800 widely cited in forums as the preferred choice for both Zen 2 and Zen 3.

Thing is, with Comet Lake you can pretty easily run memory up to DDR4-4400 or even higher. So in that comparison, 3800 is an AMD stopping point, not an Intel one. There are people with 10900Ks getting an extra 15% or more in FPS testing using even higher-speed RAM.

Naturally, anything above DDR4-3200 involves an element of the chip lottery, on both platforms.
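For anyone who wants the arithmetic behind that 1:1 talk, here's a minimal sketch (standard DDR4 double-data-rate math, nothing platform-specific measured):

```python
# 1:1 ("coupled") mode arithmetic: DDR4 transfers twice per clock, so the
# real memory clock (MCLK) is half the DDR rating, and 1:1 means the
# Infinity Fabric clock (FCLK) has to match it.

def fclk_for_1to1(ddr_rating: int) -> int:
    """Infinity Fabric clock (MHz) required to stay 1:1 at a DDR4 rating."""
    return ddr_rating // 2  # coupled mode: FCLK = UCLK = MCLK

for ddr in (3200, 3600, 3800, 4000):
    print(f"DDR4-{ddr}: needs FCLK at {fclk_for_1to1(ddr)} MHz to stay 1:1")

# DDR4-3800 -> FCLK 1900, about the ceiling of a good board/chip;
# DDR4-4000 -> FCLK 2000, beyond most rigs, forcing 2:1 and a latency hit.
```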

[attached image: AMD memory recommendation slide]


It's looking like with gear 2, a lot of people are getting DDR4-4800 and higher. This nullifies the extra latency [caused by gear 2].
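The bandwidth side of that claim is easy to put numbers on. A rough sketch, assuming the usual dual-channel DDR4 math and that gear 2 runs the memory controller at half the memory clock:

```python
# Peak theoretical DDR4 bandwidth: MT/s x 8 bytes per 64-bit channel x
# channels. Gear 2 halves the memory controller (IMC) clock, which is
# where the extra latency comes from.

def peak_bw_gbs(ddr_rating: int, channels: int = 2) -> float:
    """Peak theoretical bandwidth in GB/s for a DDR4 rating."""
    return ddr_rating * 8 * channels / 1000

for ddr, gear in ((3600, 1), (4800, 2), (5600, 2)):
    imc_mhz = ddr / 2 / gear
    print(f"DDR4-{ddr} gear {gear}: {peak_bw_gbs(ddr):.1f} GB/s peak, "
          f"IMC at {imc_mhz:.0f} MHz")

# DDR4-4800 in gear 2 has ~33% more peak bandwidth than 3600 in gear 1,
# which is the headroom being claimed to offset the gear-2 latency penalty.
```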

[attached screenshot: gear 2 high-frequency memory result]
 
Memory speed is a tricky one. In my view there are a few different ways to test things:
  1. Stick to stock speeds/clocks with good latency (CL16 - commonly available, or CL14 - less available) memory sticks.
  2. Tune each platform for best performance on a given set of memory sticks (you could pick some reasonable mid-range or some high-end sticks for this, and you would need to use motherboards in similar market segments for each platform as well).
  3. Get some memory with higher-than-stock speed and enable XMP/DOCP. The problem with doing this is you are then at the mercy of the BIOS and how conservative or aggressive the speed is for the platform. It seems like DDR4-3600 CL16 is still the sweet spot for this (see the quick latency arithmetic after this list). Any higher than that and you start playing the chip/motherboard lottery too much (particularly on the AMD side, but now 11th-gen Intel struggles over 3200).
I doubt most people (even most enthusiasts) spend much time tuning memory voltages and timings to the limit, so this means (2) is an overclocking-focused article/topic.
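To put rough numbers on the sweet-spot question from option 3, first-word latency (CL cycles at the real memory clock, i.e. half the DDR rating) is a quick sanity check. A sketch using kits mentioned in this thread:

```python
# First-word latency in ns: CL cycles at the real memory clock, which is
# half the DDR rating (DDR4 transfers twice per clock).

def first_word_ns(ddr_rating: int, cl: int) -> float:
    """Approximate CAS latency in nanoseconds for a DDR4 kit."""
    return cl / (ddr_rating / 2) * 1000

for ddr, cl in ((3200, 14), (3200, 16), (3600, 16), (3600, 14), (3800, 14)):
    print(f"DDR4-{ddr} CL{cl}: {first_word_ns(ddr, cl):.2f} ns")

# 3200 C14 (8.75 ns) and 3600 C16 (8.89 ns) land almost on top of each
# other; 3800 C14 (7.37 ns) is meaningfully tighter but needs binned B-die
# and a platform that can actually run it 1:1.
```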

Just to give some background data, I have a 5800X and some B-die that can do at least DDR4-4000. However, I run it at 3600 CL14. If I go higher than that I need to do two things:
  1. Increase SoC voltages. This increases power/heat in the CPU. Since my CPU is already power/heat limited, this will likely lower boost clocks a little.
    • I say likely since I have not measured this effect, but I can see a rather significant increase in SoC power consumption in HWInfo when going from 1600 to 1800 FCLK.
  2. Increase latencies. This somewhat negates the higher speeds.
Taken together I get basically no additional performance at DDR4-3800 instead of DDR4-3600. I also ran into an issue with a few BIOS revisions where my board would not POST at 1900 FCLK for 1:1 with 3800. When pushing higher than 1900 FCLK with my chip I run into WHEA errors which I haven't managed to stabilize. When I thought I had it reasonably close, it wouldn't POST maybe once in 5 attempts. It's possible a more recent BIOS and some more tuning could get me up around 1933 or 1966, but the gains are almost immeasurable and I start needing to push more voltage through everything to get it stable.
 
Intel makes the 5600X look even better.

5600X: 134 W
11900K: 433 W

Performance difference at 1440p: 0.7%
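Taking those figures at face value (a later reply disputes the configuration behind them), the efficiency arithmetic is blunt:

```python
# Efficiency arithmetic on the numbers quoted above, taken at face value.

power_5600x, power_11900k = 134, 433   # watts, as quoted
perf_delta = 0.007                     # 0.7% at 1440p, as quoted

ratio = power_11900k / power_5600x
print(f"{ratio:.2f}x the power for {perf_delta:.1%} more performance")
# -> ~3.23x the power for under 1% more performance, i.e. roughly a
#    3x perf-per-watt deficit in this particular comparison.
```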
 
ASRock provided a Z590 Taichi, but I can't use it because it has no option to set the power limit back to default. You can only type in numbers, and for that you have to know the default PL values first.
Just a thought based on ThrottleStop's workings: type in 0 and it allows max PL; type in a random high figure such as all 8s and it gets ignored and falls back to the default PL. Maybe this will work on the Taichi too. :confused:
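On the "you have to know the default PL values first" problem: on Linux you can read the active limits straight out of the RAPL MSRs instead of guessing. A minimal sketch, assuming root and a loaded msr module (register layout per Intel's SDM):

```python
# Read package power limits from MSR_PKG_POWER_LIMIT (0x610), scaled by
# the power unit in MSR_RAPL_POWER_UNIT (0x606). Linux-only: needs root
# and `modprobe msr` so /dev/cpu/*/msr exists.
import struct

def rdmsr(reg: int, cpu: int = 0) -> int:
    """Read a 64-bit model-specific register on the given CPU."""
    with open(f"/dev/cpu/{cpu}/msr", "rb") as f:
        f.seek(reg)
        return struct.unpack("<Q", f.read(8))[0]

unit_w = 1.0 / (1 << (rdmsr(0x606) & 0xF))   # power unit, typically 0.125 W
pkg_limit = rdmsr(0x610)
pl1 = (pkg_limit & 0x7FFF) * unit_w          # bits 14:0  -> PL1 (watts)
pl2 = ((pkg_limit >> 32) & 0x7FFF) * unit_w  # bits 46:32 -> PL2 (watts)
print(f"PL1 = {pl1:.0f} W, PL2 = {pl2:.0f} W")
```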
 
Intel makes the 5600X look even better.

5600X: 134 W
11900K: 433 W

Performance difference at 1440p: 0.7%

Not too good at reading charts, are ya?

You have to run the 11900K with async memory at 1:2 vs the 5600X at 1:1 *and* adaptive boost off to get those numbers. In other words, you can almost, but not quite, cripple the 11900K into dropping to the level of a 5600X with such a funky config.

Alternatively, you can save $100 on that DDR4-3800, get DDR4-3200 C14 instead, and land 1.7% higher than the 5800X running DDR4-3800. This is from TPU's own chart.
 
Not too good at reading charts, are ya?

You have to run the 11900K with async memory at 1:2 vs the 5600X at 1:1 *and* adaptive boost off to get those numbers. In other words, you can almost, but not quite, cripple the 11900K into dropping to the level of a 5600X with such a funky config.

Alternatively, you can save $100 on that DDR4-3800, get DDR4-3200 C14 instead, and land 1.7% higher than the 5800X running DDR4-3800. This is from TPU's own chart.
I didn't feel like quoting every single number; the review does that nicely.

These chips are incredibly power hungry for no benefit
 
These chips are incredibly power hungry for no benefit

Yes they are.

The 11900K trails the 5800X in Cinebench R23 multi by 2190 points while using 85 additional watts, and it costs an additional $100.

It also trails the 10900k by 779 points while needing 14 watts more to do so.

The 11900k did what nobody else could - made the 5800x look like a bargain at $450.

Yay Intel!
 
It's FX all over again.
Wanna bet the (possible) 11980XE will thermal throttle under load and we'll have to LN2 cool it?
 
Yes they are.

The 11900K trails the 5800X in Cinebench R23 multi by 2190 points while using 85 additional watts, and it costs an additional $100.

It also trails the 10900k by 779 points while needing 14 watts more to do so.

The 11900k did what nobody else could - made the 5800x look like a bargain at $450.

Yay Intel!

Yeah, it's really $170 more at least at retail right now vs the 5800X, and that's really the issue: it's more expensive than both the 10900K and the 5800X while being worse in a lot of ways. That's the real problem for the $400+ Rocket Lake chips; there are just too many better options, both from AMD and from Intel themselves. Maybe they should have renamed this SKU the Intel fanboi edition, or the milk-our-diehard-fans edition. I owned a 9900K and loved it, and even the 10700K is pretty nice to work with, but this seems like a shitshow that doesn't deserve the i9 branding.
 
Not too good at reading charts, are ya?

You have to run the 11900K with async memory at 1:2 vs the 5600X at 1:1 *and* adaptive boost off to get those numbers. In other words, you can almost, but not quite, cripple the 11900K into dropping to the level of a 5600X with such a funky config.

Alternatively, you can save $100 on that DDR4-3800, get DDR4-3200 C14 instead, and land 1.7% higher than the 5800X running DDR4-3800. This is from TPU's own chart.
Even more importantly, that certainly isn't gaming power consumption! This site was one of the rare ones that always had those figures, and now they're gone (I can only imagine why). I actually can't find them anywhere else either (well, I'm sure they're out there somewhere if I looked long enough, but that's not the point).
 
I don't do that

No matter what you do, you will never make everyone happy... Intel fanboys will say that unless you're running 4400+ at CL17 or lower your benchmarks are irrelevant, and AMD fanboys will say that unless you're running 3800 CL14 with uber-tight timings you're catering to Intel fanboys.




It's looking like with gear 2, a lot of people are getting DDR4-4800 and higher. This nullifies the extra latency [caused by gear 2].


Still pretty terrible latency compared to previous Intel architectures, especially considering it's running 1000 MHz higher than my kit. Guessing that's a 2x8 kit vs this 4x8 kit; others can get to the low 30 ns range on Intel's previous-gen parts, so 48 ns isn't very impressive for 5000 MHz memory. My guess is the biggest culprit for the latency is Intel having to backport this to 14 nm. I guess we'll see when Alder Lake comes out later this year on 10 nm.
[attached screenshot: cache & memory benchmark of the poster's own kit]
 
Just a thought based on ThrottleStop's workings: type in 0 and it allows max PL; type in a random high figure such as all 8s and it gets ignored and falls back to the default PL. Maybe this will work on the Taichi too. :confused:
Tried that; typing higher than the maximum goes to the maximum, and typing 0 isn't allowed...

Even more importantly, that certainly isn't gaming power consumption! This site was one of the rare ones that always had those figures, and now they're gone (I can only imagine why). I actually can't find them anywhere else either (well, I'm sure they're out there somewhere if I looked long enough, but that's not the point).
Yeah... so for the new test bench I'm using a 3080; previously I used a 2080 Ti. This means retesting all gaming power consumption, so I set up Cyberpunk instead of Witcher, more modern and everything... but fail: I forgot to measure gaming power draw while retesting the 40-or-so CPUs in the test group, and then didn't have the time to go back and retest all of them before launch. So I just dropped the gaming power measurement for now; it will definitely be back.
 
No matter what you do, you will never make everyone happy... Intel fanboys will say that unless you're running 4400+ at CL17 or lower your benchmarks are irrelevant, and AMD fanboys will say that unless you're running 3800 CL14 with uber-tight timings you're catering to Intel fanboys.

Yeah, nobody said that. Results on either platform using 3800+ are irrelevant to most users, though many do not realize that. If you run DDR4-3200 C16 like 80% of DIY types use, you'll get entirely different results.

IMO anything over 3600 C18 is getting into hardcore enthusiast space; you're starting to talk about RAM that is twice as expensive (and more) as the more common modules. In fact, if you use the more common 3200 C14/C16, Comet Lake is quite frequently the winner.

The thing is, if you're going to step up to the more expensive RAM, why stop at 3800? Like I said before, that is an AMD-optimal speed, not an Intel one. Only people pushing the limits to get the last 5% or so out of their rig are going to buy RAM like that in the first place, and if they do their research, Intel owners won't be buying that particular speed.


Still pretty terrible latency compared to previous Intel architectures, especially considering it's running 1000 MHz higher than my kit. Guessing that's a 2x8 kit vs this 4x8 kit; others can get to the low 30 ns range on Intel's previous-gen parts, so 48 ns isn't very impressive for 5000 MHz memory. My guess is the biggest culprit for the latency is Intel having to backport this to 14 nm. I guess we'll see when Alder Lake comes out later this year on 10 nm.

The optimal memory settings for RKL aren't known yet.

I'm seeing people get crazy high frequencies with gear 2. This is a DDR4-4600 kit reaching DDR4-5600 speeds, and we aren't talking LN2 types either. That will, ah, probably drop the 1:2 latency enough to be competitive with the latency on something like your Coffee Lake setup.

[attached screenshot: DDR4-4600 kit running at DDR4-5600 in gear 2]
 
Tried that; typing higher than the maximum goes to the maximum, and typing 0 isn't allowed...


Yeah... so for the new test bench I'm using a 3080; previously I used a 2080 Ti. This means retesting all gaming power consumption, so I set up Cyberpunk instead of Witcher, more modern and everything... but fail: I forgot to measure gaming power draw while retesting the 40-or-so CPUs in the test group, and then didn't have the time to go back and retest all of them before launch. So I just dropped the gaming power measurement for now; it will definitely be back.

Gotta say Wizz, I appreciate and am impressed by your endless patience with the endless questioning of your methods and results, as well as your willingness and ability to address those questions directly and in a matter-of-fact fashion. I'd have flown off the handle long ago were I in your position.

Edit: This is not meant to call out any post or member specifically, nor to imply that Wizz or anyone at TPU (or anywhere) should be exempt from being questioned. :)
 
Slapping a few Atom cores on a desktop CPU wasn't really interesting to begin with
Like I said, it was interesting because it was supposed to be significantly faster in single thread than RKL, which was already supposed to be significantly faster than CML, meaning we would finally get a big increase in gaming performance after many long years. It was not interesting because of some Atom cores; they will most likely be the first thing to disable.
 
Yeah, nobody said that. Results on either platform using 3800+ are irrelevant to most users, though many do not realize that. If you run DDR4-3200 C16 like 80% of DIY types use, you'll get entirely different results.

IMO anything over 3600 C18 is getting into hardcore enthusiast space; you're starting to talk about RAM that is twice as expensive (and more) as the more common modules. In fact, if you use the more common 3200 C14/C16, Comet Lake is quite frequently the winner.

In my direct experience with Ryzen 5000/Comet Lake/Coffee Lake, as long as you're at 3200 CL14 4x8, Ryzen is generally faster, but just moving to 1440p negates any perceivable difference all the way down to a properly configured R5 3600.

I run all my setups at 3600 4x8 CL16-16-16 or better, though. But I'm not on here trying to tell Wiz how to do his job; he knows much better than I do how to test hardware.

Like I said, it was interesting because it was supposed to be significantly faster in single thread than RKL, which was already supposed to be significantly faster than CML, meaning we would finally get a big increase in gaming performance after many long years. It was not interesting because of some Atom cores; they will most likely be the first thing to disable.

It seems to be due to the increased latency negating the ST performance gains, so it ends up mostly just balancing out.
 
In my direct experience with Ryzen 5000/Comet Lake/Coffee Lake, as long as you're at 3200 CL14 4x8, Ryzen is generally faster, but just moving to 1440p negates any perceivable difference all the way down to a properly configured R5 3600.

I run all my setups at 3600 4x8 CL16-16-16 or better, though. But I'm not on here trying to tell Wiz how to do his job; he knows much better than I do how to test hardware.

Ya well, I'll just leave this here. 10900K. This is JUST from changing up memory. Note the effect of ring. Again, even 5% is enough to completely rearrange the winners and losers among the top CPUs.

[attached screenshot: 10900K memory scaling results]
 