Monday, November 5th 2018

Intel Announces Cascade Lake Advanced Performance and Xeon E-2100

Intel today announced two new members of its Intel Xeon processor portfolio: Cascade Lake advanced performance (expected to be released in the first half of 2019) and the Intel Xeon E-2100 processor for entry-level servers (generally available today). These two new product families build upon Intel's foundation of 20 years of Intel Xeon platform leadership and give customers even more flexibility to pick the right solution for their needs.

"We remain highly focused on delivering a wide range of workload-optimized solutions that best meet our customers' system requirements. The addition of Cascade Lake advanced performance CPUs and Xeon E-2100 processors to our Intel Xeon processor lineup once again demonstrates our commitment to delivering performance-optimized solutions to a wide range of customers," said Lisa Spelman, Intel vice president and general manager of Intel Xeon products and data center marketing.
Cascade Lake advanced performance represents a new class of Intel Xeon Scalable processors designed for the most demanding high-performance computing (HPC), artificial intelligence (AI) and infrastructure-as-a-service (IaaS) workloads. The processor incorporates a performance-optimized multi-chip package to deliver up to 48 cores per CPU and 12 DDR4 memory channels per socket. Intel shared initial details of the processor in advance of the Supercomputing 2018 conference to provide further insight into the company's workload-optimized innovations.

Cascade Lake advanced performance processors are expected to continue Intel's focus on offering workload-optimized performance leadership by delivering both core CPU performance gains and leadership in memory-bandwidth-constrained workloads. Performance estimations include:
  • Linpack up to 1.21x versus Intel Xeon Scalable 8180 processor and 3.4x versus AMD EPYC 7601
  • Stream Triad up to 1.83x versus Intel Xeon Scalable 8180 processor and 1.3x versus AMD EPYC 7601 (the Triad kernel is sketched below)
  • AI/Deep Learning Inference up to 17x images per second versus Intel Xeon Platinum processor at launch.
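For context on the Stream Triad figure above: Triad is the memory-bandwidth-bound kernel a[i] = b[i] + scalar*c[i] from the STREAM benchmark, which is why the extra memory channels on this package matter for it. Below is a minimal single-threaded C sketch of that kernel; the array size and scalar are illustrative choices rather than STREAM's official defaults, and the real benchmark runs the loop across all cores (typically with OpenMP) and reports sustained bandwidth.

```c
#include <stdio.h>
#include <stdlib.h>

/* Minimal sketch of the STREAM Triad kernel: a[i] = b[i] + scalar * c[i].
 * Three large arrays are streamed per iteration, so throughput is limited
 * almost entirely by memory bandwidth rather than by compute. */
#define N (1 << 25)  /* ~33M doubles per array (~256 MB), sized to exceed caches */

int main(void) {
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);
    if (!a || !b || !c) return 1;

    for (size_t i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    const double scalar = 3.0;
    for (size_t i = 0; i < N; i++)   /* the Triad kernel itself */
        a[i] = b[i] + scalar * c[i];

    printf("a[0] = %.1f\n", a[0]);   /* keep the result observable */
    free(a); free(b); free(c);
    return 0;
}
```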
Intel SGX on the Intel Xeon E-2100 processor family delivers hardware-based security and manageability features to further secure customer data and applications. This feature is currently unique to the Intel Xeon E processor family and allows new entry-level servers featuring an Intel Xeon E-2100 processor to provide an additional layer of hardware-enhanced security measures when used with properly enabled cloud applications.
The Xeon E-2100 processor is targeted at small- and medium-size businesses and cloud service providers. The processor supports workloads suitable for entry-level servers, but also has applicability across all computing segments requiring enhanced data protections for the most sensitive workloads.
Small businesses deploying Intel Xeon E-2100 processor-based servers will benefit from the processor's enhanced performance and data security. These servers allow businesses to operate smoothly by supporting the latest file-sharing, storage and backup, virtualization, and employee productivity solutions.
Intel Xeon E-2100 processors are available today through Intel and leading distributors.

36 Comments on Intel Announces Cascade Lake Advanced Performance and Xeon E-2100

#1
Nephilim666
Oh wow their 48 core is faster than the AMD epyc 32 core they compared it against :respect::rolleyes:
#2
randomUser
Nephilim666: Oh wow their 48 core is faster than the AMD epyc 32 core they compared it against :respect::rolleyes:
I guess this is fair. AMD did compare their 32 cores with Intel's 28 or 24 cores, can't remember exactly.
#3
noel_fs
randomUser: I guess this is fair. AMD did compare their 32 cores with Intel's 28 or 24 cores, can't remember exactly.
Yeah, but at the same or lower price. These 48c are gonna be 3x the price of AMD's 32c. And if I recall correctly they didn't have 32c?
#4
geon2k2
Did they test the new CPUs with HT on, while the others with HT/SMT off?
#5
DeathtoGnomes
geon2k2: Did they test the new CPUs with HT on, while the others with HT/SMT off?
They hid the chiller behind the desk too. :D:eek:

Anyone praising the "better than Epyc" numbers now should wait for actual review testing; otherwise you look very foolish. The claims may be true, but it's really too early to believe anything Intel says. Intel has a habit of fudging numbers for PR stunts.
#6
MDDB
geon2k2: Did they test the new CPUs with HT on, while the others with HT/SMT off?
As the last slide indicates, when comparing in Linpack against EPYC, the AMD CPUs had SMT disabled. It doesn't specify whether the Intel CPUs had HT on or not, though. Bets?
#7
kastriot
Price, price always wins..
#8
First Strike
geon2k2: Did they test the new CPUs with HT on, while the others with HT/SMT off?
Doesn't SMT worsen Linpack performance? I've seen a lot of suggestions to turn SMT off when doing HPC.
#9
Basard
So, they seriously tested a dual-socket AMD EPYC system with SMT off (64 threads) versus a dual-socket Xeon system with 96 total cores (no mention of HTT on or off)? It's no wonder it was up 3.4x, they enabled 3x the threads!
#10
PerfectWave
randomUser: I guess this is fair. AMD did compare their 32 cores with Intel's 28 or 24 cores, can't remember exactly.
Cos Intel don't have 32 cores LUL
#11
R0H1T
noel_fs: Yeah, but at the same or lower price. These 48c are gonna be 3x the price of AMD's 32c. And if I recall correctly they didn't have 32c?
Same-core-count Intel chips are way pricier atm, like 1.5x-2.5x in most cases, if not more.
#12
GreiverBlade
randomUser: I guess this is fair. AMD did compare their 32 cores with Intel's 28 or 24 cores, can't remember exactly.
fair indeed ... 32 - 28 = 4 more cores, 48 - 32 = 16 more cores ... yep, a comparison with 4x the core-count gap of the previous one is fair :laugh:
#13
First Strike
Basard: So, they seriously tested a dual-socket AMD EPYC system with SMT off (64 threads) versus a dual-socket Xeon system with 96 total cores (no mention of HTT on or off)? It's no wonder it was up 3.4x, they enabled 3x the threads!
Hey, counting threads in an HPC workload is not an advisable move. SMT can hinder HPC performance according to some.
And what's wrong with this 3.4x anyway? Skylake-SP has two AVX-512 execution units per core, while Zen 1 has two 128-bit ADD and two 128-bit MUL units instead. No surprise there's a crushing advantage in LINPACK. It has always been the case in HPC.
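As a rough sanity check on that execution-width argument, here is a back-of-the-envelope peak FP64 throughput comparison in C. The FLOPs-per-cycle figures follow from the unit widths described above; the clock speeds are assumptions for illustration (all-core AVX-512 clocks for Cascade Lake-AP had not been disclosed at the time of this announcement), and measured LINPACK efficiency sits well below peak on both platforms, so treat the printed ratio as an upper bound rather than a prediction of the 3.4x figure.

```c
#include <stdio.h>

/* Back-of-the-envelope peak FP64 throughput for a 2-socket system.
 * FLOPs/cycle/core follow from the execution-unit widths discussed above;
 * the clock speeds are illustrative assumptions, not published figures. */
int main(void) {
    /* Skylake-SP/Cascade Lake: 2x AVX-512 FMA units per core
     *   = 8 FP64 lanes * 2 ops (FMA) * 2 units = 32 FLOPs/cycle */
    const double xeon_flops_per_cycle = 32.0;
    /* Zen 1: 2x 128-bit FMA-capable pipes per core
     *   = 2 FP64 lanes * 2 ops (FMA) * 2 pipes = 8 FLOPs/cycle */
    const double epyc_flops_per_cycle = 8.0;

    const double xeon_peak = 2 * 48 * xeon_flops_per_cycle * 2.0e9; /* assumed ~2.0 GHz AVX-512 clock */
    const double epyc_peak = 2 * 32 * epyc_flops_per_cycle * 2.2e9; /* EPYC 7601 base clock, 2.2 GHz */

    printf("2S Cascade Lake-AP peak: %.1f TFLOP/s\n", xeon_peak / 1e12);
    printf("2S EPYC 7601 peak:       %.1f TFLOP/s\n", epyc_peak / 1e12);
    printf("Peak ratio:              %.1fx\n", xeon_peak / epyc_peak);
    return 0;
}
```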
#14
mat9v
They have tested 2x 48-core Intel vs 2x EPYC 7601, with the EPYC having SMT disabled, and their comparative old Platinum system had HT disabled too. No word on Cascade Lake having HT disabled, though.
What's more, they are testing without security patches (read the small print below their results) on both Windows and Linux (the Linux tests were done in 2017 on a 3.10.0 kernel).
It is a shame what Intel is stooping to.
#15
Basard
@First Strike
I suppose.... But who's gonna turn HTT off when they throw this chip into a machine? Why would they even put it on the chip in the first place if performance is so great without it? I'm guessing that they just cherry-pick a few programs that benefit from disabling it, run some tests, and say "see 3.4X!"
I see they also disabled it for their DL inference.
@mat9v
They write "2 AMD EPYC" in the config details slide.... Also, whatever the "stream triad" test is, they're reusing results from a June 2017 test of EPYC...
I dunno, they may as well hand the chips over to Principled Technologies, lol.
#16
Zubasa
First Strike: Hey, counting threads in an HPC workload is not an advisable move. SMT can hinder HPC performance according to some.
And what's wrong with this 3.4x anyway? Skylake-SP has two AVX-512 execution units per core, while Zen 1 has two 128-bit ADD and two 128-bit MUL units instead. No surprise there's a crushing advantage in LINPACK. It has always been the case in HPC.
I wonder what the TDP on AVX-512 workloads would be for these Xeons; either the CPUs run at a lower clock to compensate, or the power consumption and heat go through the roof.
#17
mat9v
BTW, has anybody seen that small print on the last slide?
They are not comparing against an actual working system based on the 2x 48-core MCP but against PROJECTIONS of its performance!!!!
#18
First Strike
mat9v: They have tested 2x 48-core Intel vs 2x EPYC 7601, with the EPYC having SMT disabled, and their comparative old Platinum system had HT disabled too. No word on Cascade Lake having HT disabled, though.
What's more, they are testing without security patches (read the small print below their results) on both Windows and Linux (the Linux tests were done in 2017 on a 3.10.0 kernel).
It is a shame what Intel is stooping to.
Such HPC benchmark runs can be way dirtier, like the choice of block size in LINPACK. I can say with certainty that the chosen block size is optimal or near-optimal for CSL-AP with MKL, and EPYC with BLIS would probably have a different performance-to-block-size curve.
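For readers who have not run LINPACK themselves, the block size being referred to is the NB parameter in HPL's input file. Below is a minimal illustrative excerpt of a standard HPL.dat; the values are placeholders for the sake of the example, not taken from Intel's test configuration.

```
HPLinpack benchmark input file
Innovative Computing Laboratory, University of Tennessee
HPL.out      output file name (if any)
6            device out (6=stdout,7=stderr,file)
1            # of problems sizes (N)
90000        Ns
3            # of NBs
192 232 384  NBs
```

Benchmarkers typically sweep NB (and the problem size N) and report the best run, so a value tuned for one CPU's BLAS library is not necessarily optimal for another's.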
#19
iO
mat9v: BTW, has anybody seen that small print on the last slide?
They are not comparing against an actual working system based on the 2x 48-core MCP but against PROJECTIONS of its performance!!!!
Using performance projections is questionable, but they shouldn't be too far off from reality, as Cascade Lake-AP is basically just two of their current Skylake dies "glued" onto a substrate.
#20
B-Real
Haha. So Intel criticized AMD for gluing and now they do the same. I just love how this company has been ruining its prestige day by day for months.
randomUser: I guess this is fair. AMD did compare their 32 cores with Intel's 28 or 24 cores, can't remember exactly.
LOL. And how much do the Intel CPUs cost?
#21
mat9v
iO: Using performance projections is questionable, but they shouldn't be too far off from reality, as Cascade Lake-AP is basically just two of their current Skylake dies "glued" onto a substrate.
While you are right that they are just glued-together CPUs, there is one "problem" with that: cooling. Will they be able to keep the current 8180 clocks, or will they be forced to lower them to keep air cooling a viable solution? Granted, they are using two separate dies so heat density will be lower, but will that be enough?
Anyway, this monster will not be cheap by any standards, hopefully not more than $20k, but who knows; it also will not be a drop-in replacement, so it will show up in new servers only. By the time it hits the market, the 64-core EPYC 2 will (probably) be selling, and from what info is available, it will probably be even cheaper than EPYC 1 to manufacture.
If the 8x 8-core chiplets + I/O die configuration is real, Intel will be so deep in trouble it is not even funny.
#22
Vya Domus
So ... glued together huh ?

My prediction that they would eventually have no choice but to "glue together" dies was correct.
Zubasa: I wonder what the TDP on AVX-512 workloads would be for these Xeons
Horrendous, no doubt. Intel is getting dangerously close to having a CPU that requires extreme cooling, and when you have an entire floor of these, that's going to be very problematic.

That being said, it looks like Rome will be in a league of its own when it comes down to just about everything.
#23
m4dn355


G̶l̶u̶e̶l̶e̶s̶s̶ ̶D̶e̶s̶i̶g̶n̶
#24
Unregistered
Damage-control PR and marketing in the professional server space is much less effective than in the less-informed consumer market.

Although buying corporate customers still works, AMD EPYC is flying past Intel.
#25
GC_PaNzerFIN
Multi Chip Package LOL! :D

Your own engineering marketing team thinks it's crap!
