Tuesday, October 1st 2019

Intel 10th Gen Core X "Cascade Lake" HEDT Processors Launch on October 7

October 7 promises to be an action-packed day, with not just AMD's launch of its Radeon RX 5500 series graphics cards, but also Intel's 10th generation Core X "Cascade Lake" HEDT processors in the LGA2066 package. With AMD having achieved near-parity with Intel on IPC, the focus with the 10th generation Core X will be on price-performance, delivering roughly twice the cores per dollar of the previous generation. Intel will nearly halve the "dollars per core" metric of these processors, down to roughly $57 per core compared to $103 per core for the 9th generation Core X. This means the 10-core/20-thread model that the series starts with will be priced under $600.
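The per-core math above can be sanity-checked against the leaked price points. This is a rough sketch: the model names and prices are the rumored figures, not Intel-confirmed, and the 10-core price is assumed to sit at its reported sub-$600 ceiling.

```python
# Rumored Cascade Lake-X lineup: (cores, leaked USD price).
# The 10-core price is an assumption at the reported sub-$600 ceiling.
lineup = {
    "i9-10900XE": (10, 600),
    "i9-10920XE": (12, 700),
    "i9-10940XE": (14, 800),
    "i9-10960XE": (18, 999),
}
for name, (cores, price) in lineup.items():
    print(f"{name}: {cores} cores at ${price} -> ${price / cores:.0f}/core")
# Every model lands in the $55-$60/core band, versus roughly $103/core
# cited for the 9th generation Core X lineup.
```

All four rumored SKUs cluster tightly around the same per-core price, which is consistent with the "nearly halved" claim.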

The first wave of these processors will include the 10-core/20-thread Core i9-10900XE, followed by the 12-core/24-thread i9-10920XE around the $700-mark, the 14-core/28-thread i9-10940XE around the $800-mark, and the range-topping 18-core/36-thread i9-10960XE at $999, nearly half the price of the previous-generation i9-9980XE. There is a curious lack of a 16-core model. These chips feature a 44-lane PCI-Express gen 3.0 root complex, a quad-channel DDR4 memory interface supporting up to 256 GB of DDR4-2933 memory (native speed), and compatibility with existing socket LGA2066 motherboards with a BIOS update. The chips also feature an updated AVX-512 ISA, including the new DLBoost (VNNI) instructions with fixed-function hardware that Intel claims accelerates deep-learning inference by up to 5 times, and an updated Turbo Boost Max algorithm. Intel will extensively market these chips to creators and PC enthusiasts. October 7 will see a paper launch, with market availability following in November.
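For context on what DLBoost actually adds: its core primitive in Cascade Lake is the AVX-512 VNNI instruction VPDPBUSD, which multiplies unsigned 8-bit values by signed 8-bit values and accumulates each group of four products into a 32-bit lane. A rough NumPy emulation of the arithmetic one such fused instruction performs (an illustrative sketch, not Intel code):

```python
import numpy as np

# Rough emulation of VPDPBUSD, the AVX-512 VNNI instruction behind DLBoost:
# multiply 64 unsigned 8-bit values by 64 signed 8-bit values, then sum each
# group of four products into one of sixteen 32-bit accumulator lanes.
def vpdpbusd(acc, a_u8, b_s8):
    prod = a_u8.astype(np.int32) * b_s8.astype(np.int32)  # 64 widened products
    return acc + prod.reshape(16, 4).sum(axis=1)          # 4-wide reduction per lane

acts = np.arange(64, dtype=np.uint8)   # e.g. quantized activations
wts = np.full(64, 2, dtype=np.int8)    # e.g. quantized weights
acc = vpdpbusd(np.zeros(16, dtype=np.int32), acts, wts)
print(acc[0])  # first lane: 2*(0+1+2+3) = 12
```

A real inference kernel would issue this against 512-bit registers in a tight loop; the sketch mirrors only the arithmetic, not the throughput.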
Source: VideoCardz

35 Comments on Intel 10th Gen Core X "Cascade Lake" HEDT Processors Launch on October 7

#26
kapone32
xkm1948I am no CPU engineer. From what I heard, ring bus is good for low latency; mesh is good for scaling up core counts.
Got it. I was just looking them up online; I will have to do some deep reading when I get home today.
Posted on Reply
#27
btarunr
Editor & Senior Moderator
Vya DomusWhy would anyone use it though when every single ML framework has a GPU-accelerated back-end? This is an advantage in a non-existent fight.
It helps when you're using Radeon Instinct instead of Tesla. NVIDIA has fixed-function matrix multipliers called tensor cores, which AMD GPUs lack. Maybe you work for a company that prefers AMD's open software stack to NVIDIA's. That's when DLBoost will help.
Posted on Reply
#28
Unregistered
Imsochoboplease watch epox vox's review and you'll understand why :)
You mean the guy who can't cool his 7980XE properly and doesn't understand how to use it, yeah that's some REALLY smart source you've got there.
#29
xkm1948
btarunrIt helps when you're using Radeon Instinct instead of Tesla. NVIDIA has fixed-function matrix multipliers called tensor cores, which AMD GPUs lack. Maybe you work for a company that prefers AMD's open software stack to NVIDIA's. That's when DLBoost will help.
Radeon Instinct's TensorFlow support is heavily based upon community support and self-development. Researchers will have to put in a crap ton of money and manpower to get it off the ground for any serious work. Some machine learning labs at my institution have looked into this. They were trying to develop their own TensorFlow pipeline for analyzing tumor biopsies to look for signs of metastasis. They spent a good year on it going nowhere, and ended up going back to the tried-and-true Nvidia solution. At least they only bought one MI25, so not a whole lot of capital was lost on that.


Point is, Nvidia has absolute dominance in the ML/DL/AI hardware market. They have a very mature software ecosystem as well, and a super good customer service team dedicated to researchers across the globe. It will take some serious effort from Intel to bite off a piece of this ever-increasing pie.
Posted on Reply
#30
Steevo
Considering that drop-in AVX accelerators are already an option, the need for on-die AVX isn't really there. Or they are full of it with Phi and the other FPGAs they have built.

Also, the pricing really lets you know how much they were making with their near-monopoly built on sketchy, security-riddled products.
Posted on Reply
#31
Vya Domus
btarunrIt helps when you're using Radeon Instinct instead of Tesla. NVIDIA has fixed-function matrix multipliers called tensor cores, which AMD GPUs lack. Maybe you work for a company that prefers AMD's open software stack to NVIDIA's. That's when DLBoost will help.
Even without Tensor cores, most AMD GPUs will vastly outperform any CPU with AVX-512. I struggle to see why a company would rather spend thousands of dollars on multiple CPU nodes to get the same throughput that could have been obtained with one or two GPUs (AMD or Nvidia) at a fraction of the cost.

Let me be frank about the example you provided: if someone buys a whole bunch of those eye-wateringly expensive AMD Instinct cards but ends up using DLBoost on CPUs to accelerate their ML workloads, that means they are severely out of touch with whatever they were supposed to accomplish.

I can't find a single instance where these CPUs would make sense over any GPU solution as far as ML is concerned; there just isn't any. It's a feature stuck in no man's land. Intel has GPUs coming, so why they insist on these solutions that are clearly not up to the task is beyond me.
Posted on Reply
#32
efikkan
It's good to see the competition is finally working.
gmn 17Any new motherboards for this release?
Yes, ASUS, MSI, Gigabyte etc. will launch new motherboards, but existing ones will be fully compatible with a BIOS update.
kapone32Interesting, I thought the "Cascade Lake" CPUs were also ring bus based? I will look it up but what are the main architectural differences between the 2?
Perhaps you're mixing it up with the upcoming Comet Lake-S?
Cascade Lake-SP/X have always had a mesh interconnect.
Posted on Reply
#33
phanbuey
xkm1948Radeon Instinct's TensorFlow support is heavily based upon community support and self-development. Researchers will have to put in a crap ton of money and manpower to get it off the ground for any serious work. Some machine learning labs at my institution have looked into this. They were trying to develop their own TensorFlow pipeline for analyzing tumor biopsies to look for signs of metastasis. They spent a good year on it going nowhere, and ended up going back to the tried-and-true Nvidia solution. At least they only bought one MI25, so not a whole lot of capital was lost on that.

Point is, Nvidia has absolute dominance in the ML/DL/AI hardware market. They have a very mature software ecosystem as well, and a super good customer service team dedicated to researchers across the globe. It will take some serious effort from Intel to bite off a piece of this ever-increasing pie.
That, and they have free classes and samples on setting it up, plus a community and tech support behind it.
Posted on Reply
#34
dicktracy
Now it's AMD's turn to stop increasing prices and move that 64-core at $2,000.
Posted on Reply
#35
king of swag187
Assuming the prices are right, AMD's going to have to up the ante.
Posted on Reply