Tuesday, March 12th 2019
Intel to Refresh its LGA2066 HEDT Platform This Summer?
Intel is rumored to refresh its high-end desktop (HEDT) platforms this Summer with new products based on the "Cascade Lake" microarchitecture. Intel currently has two HEDT platforms, LGA2066 and LGA3647. The new "Cascade Lake-X" silicon will target the LGA2066 platform, and could see the light of day by June, on the sidelines of Computex 2019. A higher core-count model with 6-channel memory will be launched for the LGA3647 socket as early as April. So if you've very recently fronted $3,000 for a Xeon W-3175X, here's a bucket of remorse. Both chips will be built on the existing 14 nm process, and will bring innovations such as Optane Persistent Memory support, Intel Deep Learning Boost (DLBOOST) extensions with the VNNI instruction set, and hardware mitigation against more variants of "Meltdown" and "Spectre."
Elsewhere in the industry, and sticking with Intel, we've known since November 2018 of the existence of "Comet Lake," a 10-core silicon for the LGA1151 platform that is yet another "Skylake" derivative built on the existing 14 nm process. This chip is real, and will be Intel's last line of defense against AMD's first 7 nm "Zen 2" socket AM4 processors, with core counts of 12-16.
Sources:
momomo_us (Twitter), ChipHell
44 Comments on Intel to Refresh its LGA2066 HEDT Platform This Summer?
This only makes sense. Intel knows AMD is working on the next iteration of Ryzen, and they know it's going to be a marked improvement over the current gen.
Intel has a new architecture ready; it's been ready for nearly two years, just waiting for the node to be ready. What they should have done instead of Coffee Lake rev 1 & 2 and Comet Lake(?) is backport Ice Lake to 14 nm.
The refresh is different because consumers are still, for the most part, clueless. Think back to the clockspeed days of P4 and AMD Newcastle chips... AMD had to name their chips so that their naming convention reflected Intel performance (Athlon 64 X2 3800+ for example). Now, instead of clocks and IPC, which are what matter to most, we see meager IPC bumps and more cores/threads when software can't use them... I hope we see software gain momentum on this, but since hex- and octo-cores have been out for nearly a decade now, I don't see much point in going over a 6c/12t CPU for 95% of people... even the 'enthusiasts' here. ;)
Microsoft Corp. chairman Bill Gates once said 640K of memory was more than anyone needed.
It's going to be another few years before more than 6c/12t are going to be useful for the average Joe.;)
Back on topic.
Most non-server tasks, including multithreaded tasks, scale better on fewer faster cores than many slower cores. More cores are of course welcome, as long as we don't have to sacrifice too much core speed.
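To put rough numbers on that (purely illustrative clock speeds and core counts, not benchmarks of any real CPU), here's a quick Amdahl's-law sketch of when fewer faster cores beat more slower ones:

```python
# Amdahl's-law sketch: hypothetical 8-core CPU at 4.5 GHz vs.
# hypothetical 16-core CPU at 3.5 GHz, for workloads that are
# only partially parallel. Illustrative numbers only.

def speedup(parallel_fraction: float, cores: int) -> float:
    """Amdahl's law: upper bound on speedup over a single core."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

for p in (0.50, 0.80, 0.95):
    fast_few = 4.5 * speedup(p, 8)     # fewer, faster cores
    slow_many = 3.5 * speedup(p, 16)   # more, slower cores
    winner = "8c @ 4.5 GHz" if fast_few > slow_many else "16c @ 3.5 GHz"
    print(f"parallel fraction {p:.0%}: {winner} comes out ahead")
```

With these assumed clocks, the higher-clocked 8-core wins until the workload is roughly 95% parallel, which is rare outside rendering and encoding.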
I guess that's because it's basically the same CPU on a smaller node with small 2-3% IPC improvements per gen.
And a few added instructions....
If you're lucky and find an EVGA Classified SR-2 for cheap, that platform is still good for a lot of stuff....
I'll shut my pie hole now...
I've stated my findings and personal experience .....
Right now the 7820X is excellent (especially since when I bought it, my options were the 7700K, 1800X or 7820X), and with overclocking it beat the pants off everything else out there at the time. But it definitely could use a better cache hierarchy and a bit of an IPC boost to keep up with a 9900K/Z390 setup.
If they release something semi-reasonable and Cascade Lake-X has lower cache latency and faster memory, this will be a perfect upgrade for the Skylake-X crowd.
As it stands, the Skylake-X platform doesn't seem like it can hold onto the HEDT crown against Zen 2, so I'm hoping for something special from Intel.
At that time, 640KB was enough for the average home user.
Those security vulnerabilities are really smegging me off, though. More than a year after the original announcement and we're still finding new crap... just as I thought it would be. It's a never-ending battle. :(
The only way to truly mitigate them would be to change the way speculative execution takes place.
In reality though, it seems more like a theory at this point. The proof-of-concept attacks are not super reliable, they leak data at B/s or kB/s rates, and I'm not sure you can even leak arbitrary targeted parts of memory this way. So it would take a good while to dump a larger chunk of memory like this, and in practice memory contents will be moved around much quicker than you are able to dump them.
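As a back-of-the-envelope check (the leak rates and 16 GB memory size below are assumptions for illustration, not figures from any published exploit), even the optimistic end of those rates makes a full memory dump take a very long time:

```python
# Rough estimate: how long would it take to read a full memory image
# at Spectre-style proof-of-concept leak rates? Rates and memory size
# are illustrative assumptions, not measurements of a real attack.

MEMORY_BYTES = 16 * 1024**3  # assume a 16 GB system

for label, rate_bps in [("10 B/s", 10), ("1 kB/s", 1024), ("100 kB/s", 100 * 1024)]:
    seconds = MEMORY_BYTES / rate_bps
    days = seconds / 86_400
    print(f"At {label:>8}: about {days:,.1f} days to read 16 GB once")
```

At an assumed 1 kB/s that's on the order of 190 days for a single pass, which is why targeted leaks of small secrets are the realistic concern, not wholesale memory dumps.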
The actual thing you should worry about is the performance penalty of the workarounds, which luckily has become smaller with more refined approaches. These bugs have led to scheduling changes in Windows, Linux and BSD, so perhaps some good has come of it after all…