Friday, August 17th 2018
Intel Confirms Soldered IHS for 9th Gen Core Series
A soldered integrated heatspreader (IHS) has been a longstanding demand of PC enthusiasts for Intel's premium "K" mainstream-desktop processors. With AMD implementing it across all its "Summit Ridge" and "Pinnacle Ridge" Ryzen AM4 processors, just enough pressure built up on Intel. In a leaked slide, the company confirmed the feature-set of its upcoming 9th generation "K" Core processors, which highlights "STIM" (soldered thermal interface material) for these chips. It also suggests that STIM could be exclusive to the "K" series SKUs, namely the i9-9900K, i7-9700K, and i5-9600K.
The slides also list the clock speeds and cache sizes of the first three 9th generation desktop SKUs, confirming that the Core i7-9700K will indeed be the first desktop Core i7 SKU ever to lack HyperThreading. The TDPs of the 8-core chips don't appear to breach the 95 W barrier Intel seems to have set for its MSDT processors. The slides also seem to confirm that the upcoming Z390 Express chipset doesn't bring any new features, besides having stronger CPU VRM specifications than the Z370. Intel seems to recommend the Z390 to make the most of its 8-core chips.
Source:
VideoCardz
93 Comments on Intel Confirms Soldered IHS for 9th Gen Core Series
All unlocked? They've spent years shaving down OC capability by making it impossible to raise BCLK in an amount any more significant than farting on a brick wall, and limiting, and later eliminating, what you can do with non-K chips. If it were possible to actually move BCLK like you could the old FSB, I'd be happy with a regular i3... and I wouldn't care much about unlocked multipliers. They were always there in the extreme edition CPUs, but I always got by on non extreme CPUs. I've even had a few of those unlocked Black Edition AMD chips, and the unlocked multiplier didn't matter to me much then, either.
As with most Intel chips, the halo products are usually the best, not in terms of VFM though.
In that scenario, with core counts increasing from both companies, developers should be taking note and making their software (including games) more parallel/core/thread aware.
So in e.g. 2 years an 8/16 or 16/32 CPU may perform better than it does today (including games) and better than an 8/8.
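The "core/thread aware" point above can be sketched in a few lines (Python here purely for brevity; the names `busy_sum` and `parallel_total` are illustrative, not from any real engine): size the worker pool from whatever CPU count the host reports, so the same code automatically picks up extra speed on an 8/16 or 16/32 part without changes.

```python
# Minimal sketch: scale worker count with the CPUs the OS reports,
# so wider CPUs are used automatically. Illustrative names only.
import os
from concurrent.futures import ProcessPoolExecutor

def busy_sum(n):
    """Stand-in for one chunk of parallel work (e.g. one tile of a frame)."""
    return sum(i * i for i in range(n))

def parallel_total(chunks):
    # os.cpu_count() reports logical CPUs, so an 8/16 chip gets 16 workers.
    workers = os.cpu_count() or 1
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(busy_sum, chunks))

if __name__ == "__main__":
    print(parallel_total([100_000] * 8))
```

The design point is simply that nothing in the code hard-codes a core count, which is what lets the same binary perform better on tomorrow's wider chips.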
If you can afford new, top of the line hardware and the possible hassle of reinstalling your OS and software (or can pay someone to do it) every 6 to 12 months, then it doesn't really matter.
As usual, those with $$$$/€€€€ have a much easier time.
EDIT: He mentions in the comments how Cinebench benefits from SMT (it's a benchmark program though, so not really interesting), but in the article he mentions how Zemax OpticStudio scaled fine with HT on. They have their place at the low end, and they absolutely kill it in any price/performance metric you care to mention. The G4560 is one of the best chips ever made tbh, and one could only dream of what it would be if it were unlocked...
Some people mean something different with "binning", like the sub-binning some of the AiB vendors do for GPUs, where they re-test the GPUs and determine which ones are better.
On average, I believe any overclock x will be a little harder to achieve on the i7-9700K vs. the i9-9900K; there might be a little difference if you're trying to set a record. But sometimes you are bottlenecked by other factors, leading to a similar maximum overclock. Still, you have to remember that overclocking is becoming more and more just symbolic with such high clocks out of the box. Now you need to bump voltage and have extreme cooling just to get a few hundred MHz extra; it's not like old Sandy Bridge any more, where you could easily get a good overclock before even touching the voltage. If you're overclocking for the experience of overclocking, then go ahead, that's the only reason to do it at this point. The Linux kernel is surely much better at scheduling, and even allows a lot of finetuning for various workloads, but that applies to specific server workloads.
SMT scales poorly with many applications at once; part of it is the OS kernel's fault, and part of it is that any synchronous task will suffer from the added latency.
SMT is really challenging for the OS kernel scheduler. The scheduler measures the wait time of each thread to load balance the cores, matching more and less intensive threads together so the total throughput is as high as possible. This means threads are shuffled around a lot, creating a lot of latency for single threads, but it may give slightly higher total throughput. SMT only scales well when you have one large workload of similar threads, but it scales terribly when you have a mix of high, medium and low loads, especially if they keep changing constantly.
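One practical consequence of sibling contention is that latency-sensitive software sometimes pins itself to one logical CPU per physical core. A rough, Linux-only sketch of how that set can be derived from sysfs topology files (the paths are standard Linux sysfs; `one_thread_per_core` is a made-up helper name, and the final `sched_setaffinity` call is shown only as a comment):

```python
# Sketch (Linux-only): choose one logical CPU per physical core, so a
# latency-sensitive process avoids sharing a core with its SMT sibling.
import glob
import os

def one_thread_per_core():
    """Return a set holding one logical CPU id per (package, core) pair."""
    chosen, seen_cores = set(), set()
    for path in sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*")):
        cpu = int(path.rsplit("cpu", 1)[1])
        try:
            with open(os.path.join(path, "topology", "core_id")) as f:
                core = int(f.read())
            with open(os.path.join(path, "topology",
                                   "physical_package_id")) as f:
                pkg = int(f.read())
        except OSError:
            continue  # offline CPU or missing topology info
        if (pkg, core) not in seen_cores:
            seen_cores.add((pkg, core))
            chosen.add(cpu)
    return chosen

cpus = one_thread_per_core()
print("one logical CPU per physical core:", sorted(cpus))
# To pin the current process to those CPUs (Linux-only call):
# os.sched_setaffinity(0, cpus)
```

Whether this actually helps depends on the workload, for the reasons above: it trades away SMT's extra throughput in exchange for not sharing core resources.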
As I have said before, if we don't have these new techs, no one will use them. If we have them, there is more chance of them being used.
It's a bit chicken and egg, but I would rather have them and not need them than need them and not have them; assuming I can get them within my budget. ;)
Do some performance analysis of e.g. copying a load of files in the background while gaming/watching video.
Running some video encoding while gaming/web browsing etc. etc.
i.e. real world multi-tasking scenarios. Given the media content explosion, I would think these kinds of activities are becoming more common.
Likewise including streaming + gaming.
For professional creators too, things like gaming + blender rendering, video encoding etc. and indeed several of these at once.
Being able to do everything on one rig versus needing several could be a huge cost saving. Maybe even things like having a VM running in the background using a dedicated video card for some work task while you game on another.
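The kind of multitasking test described above can be approximated with a small script (a sketch only; the workload sizes and the `slowdown_ratio` helper are made up for illustration): time a foreground compute loop alone, then again while a background thread hammers the disk with file copies, and report the ratio.

```python
# Sketch: measure how much background file copying slows a foreground task.
# Sizes and iteration counts are illustrative; scale up for a real test.
import os
import shutil
import tempfile
import threading
import time

def foreground_work(rounds=50):
    """Stand-in for the foreground task (game frame, video decode, ...)."""
    t0 = time.perf_counter()
    for _ in range(rounds):
        sum(i * i for i in range(50_000))
    return time.perf_counter() - t0

def background_copies(src, dst, stop):
    """Copy a file in a loop until told to stop, to generate I/O load."""
    while not stop.is_set():
        shutil.copyfile(src, dst)

def slowdown_ratio():
    with tempfile.TemporaryDirectory() as d:
        src, dst = os.path.join(d, "a.bin"), os.path.join(d, "b.bin")
        with open(src, "wb") as f:
            f.write(os.urandom(4 * 1024 * 1024))  # 4 MiB test file
        baseline = foreground_work()              # no background load
        stop = threading.Event()
        t = threading.Thread(target=background_copies, args=(src, dst, stop))
        t.start()
        loaded = foreground_work()                # with background copies
        stop.set()
        t.join()
        return loaded / baseline

if __name__ == "__main__":
    print(f"slowdown under background I/O: {slowdown_ratio():.2f}x")
```

On a CPU with plenty of spare threads the ratio should stay near 1.0; on a heavily loaded narrow chip it climbs, which is exactly the difference these multitasking scenarios would expose.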
/rambling thoughts :)