
Intel Core i5-10600K

Alder Lake in 18 months is the way to go. What little improvement Rocket Lake brings, like an extra M.2 slot and whatever IPC gain there is (if any), isn't worth it for trivial short-term fun and will be dwarfed by whatever Intel can make on 7 nm and AMD on 5 nm.
DDR4 is clearly bottlenecking these CPUs, and I'm not very enthusiastic about running overvolted, stretched-to-the-limit DDR4 at 1.35 V when the norm is 1.2 V.
When it can run at 5000 MHz at 1.2 V, I'll take it. Both CPU and DDR at 1.2 V, yeah.

The 2080 Ti is 50% faster than the 2060 Super. Next-gen RTX 30 is expected to be 50% faster again, actually 60% more performance at 60% the size of the current chips, so the 2080 Ti is in a way the new 3070 and back down to earth. Pretty average stuff starting at the end of this year.
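A quick back-of-the-envelope on those figures (all inputs are the rumored percentages from this post, not confirmed specs):

```python
# Rough scaling math using the rumored figures above; every number here
# is an assumption from the post, not a confirmed spec.

rtx_2060s  = 1.00   # baseline: RTX 2060 Super
rtx_2080ti = 1.50   # "2080 Ti is 50% faster than the 2060 Super"
gen_gain   = 1.60   # "60% more performance..."
gen_area   = 0.60   # "...at 60% the size of the current chips"

# Performance per unit die area works out to roughly 2.67x Turing:
print(f"perf/area vs Turing: {gen_gain / gen_area:.2f}x")

# If the x070 tier gains ~60% over its 2060 Super predecessor,
# it lands right on top of today's 2080 Ti:
rtx_3070_est = rtx_2060s * gen_gain
print(f"estimated 3070 vs 2080 Ti: {rtx_3070_est / rtx_2080ti:.2f}x")  # ~1.07x
```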

HT brings a 20-25% improvement, and the stuttering is gone. But it just can't keep up with 5 nm and 3 nm GPUs; it is not future-proof enough.
 
I don't really pay attention to any rumors as far as how chips are expected to perform. I wait until they actually launch to judge performance, and any sort of release schedule for future products is always subject to change. At this point I really can't take any Intel roadmaps seriously, as they were supposed to have 10 nm out the door ages ago. I do hope they get their shit together though, because my preference is to have one system from each chipmaker to show the people I do builds for the difference between them in real-world use vs. looking at graphs, etc.
 
All you need for gaming is a Core i9 9900K.

What you WANT for gaming is a Core i9 10900K.

Thank you TechPowerUp for completely gaslighting the AMD fanboys LOL.
 
Disappointing... isn't it? But we always want faster products that OC well. In the past, I suppose, both Intel and AMD left a lot of headroom on the table to make future products more powerful if they had to, and to keep chips nice and efficient. Now that they're competing fiercely, efficiency and headroom go out the window in the name of raw speed... and unfortunately, that means guys like you and me don't get to do much to make it go faster when we get one. These chips are basically overclocked out of the box... just look at the cooling requirements today compared to ten years ago. Even the notorious Pentium 4 could be cooled effectively by a tiny stock cooler... don't try that on one of these new chips!

You don't get to overclock the CPU much if at all, true.

Overclock the memory instead.

Gamers Nexus used OC'd RAM on a 10600K and were able to beat a 10900K in multiple games:

GN video: "Intel i5-10600K Cache Ratio & RAM Overclock Beats 10900K: How Much Memory Matters"
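For context on why memory speed moves gaming numbers at all, here's a rough sketch of theoretical peak DDR4 bandwidth by speed grade (a simplified model; it ignores timings and latency, which matter at least as much in games):

```python
# Theoretical peak bandwidth for dual-channel DDR4 (simplified model;
# real-world efficiency and timings/latency are ignored).

def ddr4_bandwidth_gbs(transfer_rate_mt_s: int, channels: int = 2) -> float:
    """Peak GB/s: transfers per second x 8 bytes per 64-bit channel."""
    return transfer_rate_mt_s * 8 * channels / 1000

for speed in (2666, 3200, 3600, 4133):
    print(f"DDR4-{speed}: {ddr4_bandwidth_gbs(speed):.1f} GB/s")
# DDR4-2666: 42.7 GB/s ... DDR4-4133: 66.1 GB/s
```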


Here's German computerbase.de using the fastest memory they could (with an AIO cooler) in a 9900K vs 3900X comparison. I circled the results where the CPU was not overclocked, except the top one with the max memory speed (they got to 4133 on the 9900K). The 3600 speed is easily achievable on Intel Z390/Z490.


[attached: computerbase.de 9900K vs 3900X memory-scaling chart]
 
I don't doubt that some people will pair these with a 2080 Ti or a faster next-gen GPU to get their 5-10% extra performance at 1080p, but my guess is most people buying these are more in the $300-400 GPU range, if not lower, and for them it's sort of silly vs. what the competition offers at a lower price.

I imagine most folks with 2080 Ti money are likely to pair it with a 10900K. There will be some, but it doesn't mesh with "bragging rights".
 


I agree; you more often see people with a $350+ CPU and a sub-$300 GPU than a sub-$300 CPU and a $1000+ GPU. Just pointing out that this chip is still pretty niche given why you would spend a decent chunk more on the platform and cooling vs. a 3600.


Either way, Intel will likely sell out of 10th-gen chips as fast as they can make them, which, given their limited 14 nm fab capacity, isn't likely to be very many. The best-selling 10th-gen chip on Amazon US couldn't even crack the top 20, likely due to very limited quantities.
 
Nice to see the 8700K still relevant.
 
TBH I don't really care about those security flaws as they don't affect me. There is something really cool called not updating your BIOS :) That's how my 9750H can stay undervolted. They always say, if it isn't broken, don't fix it, and I have not had anything broken with my system, so yeah, I'll keep my undervolting privileges :)

That's an option for older stuff; I would assume newer motherboards come with an updated BIOS. Also, I thought there were Windows 10 updates to address them?

Was just curious if they had fixed them at the source with this new gen?
 
For gaming these are still the best CPUs by far, as much as I hate Intel. I'll definitely wait for the 10 nm CPUs with PCIe 4.0.
I'm not even considering upgrading to those overly power-hungry crap CPUs, even if they are the fastest.
AMD isn't an option either, since my overclocked 3770K can match them in games without any issues...
 
Well done, Blue team: putting pressure on Red team for the next few months. The narrative doesn't change, though: Red team wins in parallel tasks, it's neck and neck in gaming, and Red team's platform will be slightly cheaper at each price point.
 
Well, much better for gaming at 1080p with a big beefy top-end GPU, as expected.
At 1440p, where the GPU becomes more relevant, the 10600K offers a mere 0-5 fps gain for $65 more (vs. an R5 3600, same cores, same threads). That's a lot of green already on the CPU alone.

And it would be interesting to see how things shape up with a more affordable GPU. I think Guru3D kind of provided the clearer scenario for the regular enthusiast Joe out there:

As far as gaming goes, the pure raw wins are mostly for Intel, but everything is relative when it comes to gaming as 98% of the time your actual limitation is the GPU, and not CPU. For gaming, GPUs matter more than CPUs. You can measure the effect of CPU performance with games, but only when you steer away from GPU limitation, that's why we use a 1000+ USD graphics card as in the lower resolutions you'll see differences. But with a Radeon RX 5700 or GeForce RTX 2070 these differences would be much closer towards equal for one another. Hey, everything is relative.

The vast majority of you guys have a far more GPU limited graphics card. [...] the reality absolutely and unequivocally is that you can game pretty darn well with a lower-cost processor as well. However, if you need 144 FPS at Full HD, the symbiosis of a faster processor and GPU can make a difference (at great cost). Currently, I find 8-core processors a sweet spot in relation to gaming, 6-cores for value.

So, to get the performance advantage at 1080p shown by most reviews, you need to spend more on both the CPU and the GPU. In other words: with a more humanly affordable GPU, the potential advantage slims down, undermining the justification for the CPU's price premium even further. Basically, the money saved on the CPU can be much better invested in a better GPU for more FPS (especially if planning to upgrade to a 1440p monitor later, since the GPU grunt will already be there and matters much more than the CPU).
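The GPU-limited argument boils down to a simple bottleneck model: delivered fps is capped by whichever of the CPU and GPU is slower. A toy sketch (all fps values are invented for illustration, not benchmark results):

```python
# Toy bottleneck model: delivered fps is capped by the slower of the
# CPU's frame-preparation rate and the GPU's render rate.
# Every fps figure here is invented purely for illustration.

def delivered_fps(cpu_fps: float, gpu_fps: float) -> float:
    return min(cpu_fps, gpu_fps)

fast_cpu, value_cpu = 180, 160   # hypothetical 10600K-class vs 3600-class

# With a $1,200 flagship GPU at 1080p, the GPU outruns both CPUs,
# so the CPU gap shows up in full:
print(delivered_fps(fast_cpu, gpu_fps=220))   # 180
print(delivered_fps(value_cpu, gpu_fps=220))  # 160

# With a midrange GPU (or at 1440p), the GPU becomes the cap
# and both CPUs deliver identical fps:
print(delivered_fps(fast_cpu, gpu_fps=110))   # 110
print(delivered_fps(value_cpu, gpu_fps=110))  # 110
```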
 
Is this a sponsored review? Because having "All You Need for Gaming" here and "World's Fastest Gaming Processor" on the 10900K review seems very inappropriate and biased.
I specifically checked the AMD CPU reviews and none of them had anything like that in the title (while being much more innovative and better value). Seems fishy.
The subtitle capability was added a few weeks ago; it's used in various reviews, including recent AMD CPUs.
 
The 10600K has already proven itself more than capable; then along comes Intel with their dodgy-as-hell marketing trying to upsell.
[attached: Intel 10th gen marketing slide]
 
Decent performance, but at this point you just can't afford to lose to the 3600 in both perf/W and perf/$.
 
AMD should switch to a soldered-on copper heatspreader for Ryzen. Even if Intel turns around and does the same, AMD is ahead in the overall core-count race anyway. It wouldn't cost much extra to make that change, and it would maybe eke out another 25-50 MHz of overclock. The more important part of the equation is how that impacts a chip like the 3950X: 25 MHz x 16 cores, or 50 MHz x 16 cores, equals 400 MHz to 800 MHz of combined clock speed gained. Not so bad, eh? On Threadripper that would be 800 MHz to 1600 MHz combined on the 32-core behemoth. AMD would be almost crazy not to do it, release it as a new SKU above its current top-end lineup, and then carry it onward with Zen 3 chips. It should not only let the chips eke out a bit more high-end clocking headroom, but, just as importantly, gain more efficiency per clock, because with less heat comes less excess voltage required for stability. Besides that, they should be doing this on the mobile chips in particular anyway, to keep them running cooler and possibly improve and extend battery life.
 
AMD should switch to a soldered-on copper heatspreader for Ryzen.
AMD is already using solder between the die and a nickel-plated copper heatspreader for Ryzen; only the Ryzen APUs use paste. The lower clock speeds have always been tied to the fabrication process for the dies. There's also variation in die quality depending on where it comes from on the silicon wafer.
 
Basically, the money saved on the CPU can be much better invested in a better GPU for more FPS (especially if planning to upgrade to a 1440p monitor later, since the GPU grunt will already be there and matters much more than the CPU).
Not just when upgrading to 1440p; you can downsample from 1440p (or close to it) on a 1080p display and still see a huge benefit on the GPU side, negating the need to worry about the CPU bottleneck. The CPU only matters at ultra-high refresh rates that are easily sustainable. Mouse/keyboard polling rates are part of the equation too: raising the polling rate decreases lag, but not linearly, just as raising the refresh rate doesn't reduce lag linearly either. You want the best compromise between GPU and CPU, followed by refresh rate and peripheral polling rates (see the sketch below).
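The non-linear part is just the reciprocal: the interval between polls or refreshes is 1/rate, so doubling an already-high rate saves very little. A quick illustration:

```python
# The interval between polls/refreshes is 1/rate, so the latency saved
# by raising the rate shrinks rapidly: diminishing returns, not linear.

def interval_ms(rate_hz: float) -> float:
    return 1000.0 / rate_hz

for lo, hi in [(125, 500), (500, 1000)]:   # mouse polling rates
    print(f"{lo} -> {hi} Hz polling saves "
          f"{interval_ms(lo) - interval_ms(hi):.1f} ms per sample")

for lo, hi in [(60, 144), (144, 240)]:     # display refresh rates
    print(f"{lo} -> {hi} Hz refresh saves "
          f"{interval_ms(lo) - interval_ms(hi):.1f} ms per frame")

# 125 -> 500 Hz saves 6.0 ms, but 500 -> 1000 Hz saves only 1.0 ms;
# 60 -> 144 Hz saves 9.7 ms, but 144 -> 240 Hz saves only 2.8 ms.
```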

AMD is already using solder between the die and a nickel-plated copper heatspreader for Ryzen; only the Ryzen APUs use paste.
I hadn't even realized it was nickel-plated copper, but that makes sense. The fact that it escaped me bothers me now.
 
I ask again...
Please add the Civilization VI AI benchmark (average turn time, in seconds) to the next CPU test.
 
HAHAHA, why test using DDR4-3200? Intel says you cannot use more than DDR4-2666.
 
HAHAHA, why test using DDR4-3200? Intel says you cannot use more than DDR4-2666.
You can use it in overclocked mode, the same way you use DDR4-3600 on Zen 2.
 
HAHAHA, why test using DDR4-3200? Intel says you cannot use more than DDR4-2666.
The base spec is without XMP enabled.
 
These are actually good CPUs from the blue team.
However, as I see it, many i5 users pair their CPU with H- or B-series motherboards, which obviously lack the Z-board features. Couple that with the extra cost of a good cooler, and I kind of agree this is another disappointment in budget-to-performance ratio.
 
Intel recommends not running the memory above the rated frequency, as it can damage the memory controller and void the warranty. So you cannot use a technology created by Intel itself without risking your warranty; however, you do have one option, which is to PAY for "insurance" against OC damage.
 
Intel recommends not running the memory above the rated frequency, as it can damage the memory controller and void the warranty. So you cannot use a technology created by Intel itself without risking your warranty; however, you do have one option, which is to PAY for "insurance" against OC damage.
You happen to have a source for that? Because if you don't, you're two posts into TPU and already trolling.
 