Tuesday, August 6th 2019

Next-generation Intel Xeon Scalable Processors to Deliver Breakthrough Platform Performance with up to 56 Processor Cores

Intel today announced its future Intel Xeon Scalable processor family (codename Cooper Lake) will offer customers up to 56 processor cores per socket and built-in AI training acceleration in a standard, socketed CPU as part of its mainline Intel Xeon Scalable platforms, with availability in the first half of 2020. The breakthrough platform performance delivered within the high-core-count Cooper Lake processors will leverage the capabilities built into the Intel Xeon Platinum 9200 series, which today is gaining momentum among the world's most demanding HPC customers, including HLRN, Advania, 4Paradigm, and others.

"The Intel Xeon Platinum 9200 series that we introduced as part of our 2nd Generation Intel Xeon Scalable processor family generated a lot of excitement among our customers who are deploying the technology to run their high-performance computing (HPC), advanced analytics, artificial intelligence and high-density infrastructure. Extended 56-core processor offerings into our mainline Intel Xeon Scalable platforms enables us to serve a much broader range of customers who hunger for more processor performance and memory bandwidth."
-Lisa Spelman, vice president and general manager of Data Center Marketing, Intel Corporation
The future Intel Xeon Scalable processors (codename Cooper Lake) will deliver twice the processor core count (up to 56 cores), higher memory bandwidth, and higher AI inference and training performance compared to the standard Intel Xeon Platinum 8200 processors. The future 56-core Cooper Lake processor is expected to deliver a lower power envelope than the current Intel Xeon Platinum 9200 processors. Cooper Lake will be the first x86 processor to deliver built-in high-performance AI training acceleration capabilities through new bfloat16 support added to Intel Deep Learning Boost (Intel DL Boost). Cooper Lake will have platform compatibility with the upcoming 10nm Ice Lake processor.
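For context on what the bfloat16 support means: bfloat16 is simply the top 16 bits of an IEEE-754 float32 (1 sign bit, 8 exponent bits, 7 mantissa bits), so it keeps float32's full dynamic range while halving memory traffic, which is why it suits training. A minimal C sketch of the format conversion follows; it is illustrative only, not Intel's DL Boost implementation:

#include <stdint.h>
#include <string.h>

/* bfloat16 keeps float32's sign and 8-bit exponent but only the top 7
   mantissa bits, so conversion is a rounded 16-bit truncation.
   (NaN handling omitted for brevity.) */
static uint16_t f32_to_bf16(float f) {
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);       /* reinterpret without UB */
    bits += 0x7FFFu + ((bits >> 16) & 1); /* round to nearest even */
    return (uint16_t)(bits >> 16);
}

static float bf16_to_f32(uint16_t h) {
    uint32_t bits = (uint32_t)h << 16;    /* low mantissa bits become zeros */
    float f;
    memcpy(&f, &bits, sizeof f);
    return f;
}

Round-tripping 3.141593f through these functions yields roughly 3.140625, which is the precision trade-off training workloads tolerate in exchange for range and throughput.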

For more than 20 years, Intel Xeon processors have delivered the platform and performance leadership that gives data center and enterprise customers the flexibility to pick the right solution for their computing needs. Next-generation Intel Xeon Scalable processors (Cooper Lake) build on Intel's uninterrupted server processor track record by delivering leadership performance for customers' real-world workloads and business application needs.

Intel Xeon Platinum 9200 processors are available for purchase today as part of pre-configured systems from select OEMs, including Atos, HPE, Lenovo, Penguin Computing, Megware and authorized Intel resellers. Learn more about the Intel Xeon Platinum 9200 processors.

56 Comments on Next-generation Intel Xeon Scalable Processors to Deliver Breakthrough Platform Performance with up to 56 Processor Cores

#26
Crustybeaver
nemesis.ieRe-read what he said, it sounded like he wants ALL of the vendors to have good products so we the consumer win, which is the sane way to look at it.
Yes, and that's how I interpreted it. In that case, wouldn't it have made more sense to say he's pulling for AMD to offer more competition to Nvidia, thus driving more competitive pricing and better innovation?
Posted on Reply
#27
Mephis
CrustybeaverWhat do you mean by pulling for Nvidia? AMD is the inferior product line :confused:
I'm pulling for AMD and Intel in the CPU space, and Nvidia, AMD and Intel in the GPU space, which should be the way everyone looks at it, but we all know that it isn't.

I would list the MB, RAM, SSD, PSU and case manufacturers, but for some reason people don't have allegiances in those markets (if they do, I haven't seen it yet.)
Posted on Reply
#28
danbert2000
AMD is after Intel's golden goose. Gotta love the drama. It wasn't long ago that Intel barely had to lift a finger in the server space, other than beating their own old processors by 5%. I don't know if AMD has the institutional support to get their EPYC chips into mainstream server parts yet, but there is some toe-dipping happening. As long as AMD makes an unbeatable value proposition, either the enterprise vendors will have to start offering EPYC products, or Intel will have to slash prices to keep customers happy. I hope it's the former.
Posted on Reply
#29
Crustybeaver
MephisI'm pulling for AMD and Intel in the CPU space, and Nvidia, AMD and Intel in the GPU space, which should be the way everyone looks at it, but we all know that it isn't.

I would list the MB, RAM, SSD, PSU and case manufacturers, but for some reason people don't have allegiances in those markets (if they do, I haven't seen it yet.)
I don't pull for one brand over another. I wait with bated breath every time a high-end AMD GPU is announced, only to be disappointed. I don't have an allegiance to one brand over another; I'm open to competition and think it's great for the consumer, but if one consistently outperforms the other, then it's clear which is going to get the nod.
Posted on Reply
#30
cucker tarlson
svan71In this scenario cores are king.
Not if the chip has some sort of case-specific hardware accelerator like Nvidia's tensor/RT cores. I think Intel realized what Nvidia did too:
56 processor cores per socket and built-in AI training acceleration in a standard, socketed CPU
Posted on Reply
#31
Steevo
MephisI don't think anyone can deny that AMD has made huge progress with their recent designs. I guess my biggest point of contention is the notion that Intel is doomed. There is no doubt their backs are against the wall, but they have been there before and we all know what the result was. I personally am pulling for both companies (Nvidia too), because customers win when there is strong competition.



Agreed, that is impressive. Just want to point out that the price isn't really that big of a deal when you compare it to total cost of ownership. Things like RAM, software and service dwarf the cost of the CPUs.
Please don't infer things I didn't say; I never said Intel was doomed. Don't fearmonger or inject your own overblown ideas into the conversation.
Posted on Reply
#32
Patriot
cucker tarlsonNot if the chip has some sort of case-specific hardware accelerator like Nvidia's tensor/RT cores. I think Intel realized what Nvidia did too
VNNI is already out in Cascade Lake; it's helpful for inference more than training, and GPUs are still king... Same with AVX-512: the drawbacks outweigh the gains.
en.wikichip.org/wiki/x86/avx512vnni
Intel is just competing with itself on FPGAs with VNNI... they are going for an accelerated ecosystem where you have to use all things Intel: VNNI CPUs, FPGAs, Xe, Omni-Path, etc. (though they just killed off 200Gbit Omni-Path).
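For anyone wondering what VNNI actually adds: AVX512-VNNI's VPDPBUSD instruction fuses the old three-instruction int8 dot-product sequence (vpmaddubsw, vpmaddwd, vpaddd) into a single multiply-accumulate, which is why it helps quantized inference in particular. A scalar C model of what one 32-bit lane computes, a sketch of the semantics rather than vectorized code:

#include <stdint.h>

/* Scalar model of one 32-bit lane of VPDPBUSD: four unsigned-by-signed
   int8 products summed into an int32 accumulator in a single step. */
static int32_t vnni_dp_lane(int32_t acc, const uint8_t a[4], const int8_t b[4]) {
    for (int i = 0; i < 4; i++)
        acc += (int32_t)a[i] * (int32_t)b[i];
    return acc;
}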

Intel's challenge to CUDA is oneAPI:
www.hpcwire.com/2019/05/08/intel-puts-7nm-gpu-on-roadmap-for-2021-oneapi-coming-this-year/

Intel has a hard 2-3 years ahead of it, but it should rebound; they do have Zen's architect on their payroll after all... looking forward to buying their stock at a discount down the road.
AMD has been maintaining a good cadence with these impressive releases... but they are going to run out of roadmap after Zen 4/5... and then what?
Hoping they can remain competitive and not make another Bulldozer. It's kinda crazy that Keller gets to compete with his own designs...
Posted on Reply
#33
HD64G
The security problems of Intel CPUs, along with the much greater efficiency, scalability and price advantage of Rome from AMD, will turn the tables sooner than expected in the server market as well. Almost all the critical aspects for a server are in favor of AMD atm. My 5 cents.
Posted on Reply
#34
cucker tarlson
HD64GThe security problems of Intel CPUs, along with the much greater efficiency, scalability and price advantage of Rome from AMD, will turn the tables sooner than expected in the server market as well. Almost all the critical aspects for a server are in favor of AMD atm. My 5 cents.
You mean compared to current Intel CPUs or the next gen?
Posted on Reply
#35
Berfs1
windwhirlNot every bit of performance depends on core counts.
It does when you are comparing against possibly two 64-core CPUs that have 128 PCIe lanes each. Cough cough, upcoming EPYC. 112 vs 256 threads, which one would you take? Not to mention they don't take a buttload of energy (400W vs 180W x2). Essentially that's 7.14W vs 2.81W per core respectively. So um, Intel's chip sucks.
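For what it's worth, the per-core figures in this post are just TDP divided by core count; a quick C check of that arithmetic, using the wattages claimed above (which are disputed later in the thread):

#include <stdio.h>

/* Per-core power as claimed in the post above: one 400W 56-core Intel
   part vs. two 180W 64-core EPYCs. These TDPs are the poster's figures,
   not official numbers. */
int main(void) {
    printf("56c @ 400W: %.2f W/core\n", 400.0 / 56);           /* ~7.14 */
    printf("2x 64c @ 180W: %.2f W/core\n", (2 * 180.0) / 128); /* ~2.81 */
    return 0;
}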
Posted on Reply
#36
Unregistered
PatriotIt's kinda crazy that Keller gets to compete with his own designs...
Of all the skilled engineers in the world, yes, it is very crazy that only Jim Keller could out-engineer the design he was previously lead engineer on.

Are there so few x86 engineers out there that even Intel, flaunting their tens of thousands of engineers, can't compete with him?

And can only imagine the NDA nightmares...
#37
nemesis.ie
JK may actually be working on AI (as he did at Tesla) and other things at Intel.

There were quite a few others (who are still at AMD) in the Zen team. A team builds things these days, generally not one person, although one person may have a lot of influence.
Posted on Reply
#38
efikkan
yakkOf all the skilled engineers in the world, yes, it is very crazy that only Jim Keller could out-engineer the design he was previously lead engineer on.

Are there so few x86 engineers out there that even Intel, flaunting their tens of thousands of engineers, can't compete with him?
I seriously doubt that "rock stars" like Keller, Raja etc. are doing any actual engineering these days.
While managers with a technical background are generally much better suited to make good management decisions than non-technical managers, they are still probably limited to "high-level" architectural features, resource prioritization, etc.
The people who do the hard work are the core team of engineers below them, but their ability to do their job is of course dependent on good management.
Posted on Reply
#39
Mephis
SteevoPlease don't infer things I didn't say; I never said Intel was doomed. Don't fearmonger or inject your own overblown ideas into the conversation.
If you thought I was talking about you in particular, I wasn't. It was a comment about the general tone on this forum. No need to get so excited.
Posted on Reply
#40
londiste
Berfs1It does when you are comparing against possibly two 64-core CPUs that have 128 PCIe lanes each. Cough cough, upcoming EPYC. 112 vs 256 threads, which one would you take? Not to mention they don't take a buttload of energy (400W vs 180W x2). Essentially that's 7.14W vs 2.81W per core respectively. So um, Intel's chip sucks.
You know, Rome is doing just fine without exaggerations.
- Power-wise the thread count comparison might be apt, but leaving that aside for the moment, even the current Xeon 9200 does work in 2-socket configurations, resulting in 224 threads.
- According to the leaked list, 64-core Romes are 200W and 225W. Hopefully AMD is not doing the same thing as the Ryzen 3000 series on the desktop, where default settings add up to +35% to that.
- 128 PCIe lanes each means 128 PCIe lanes total in a dual-socket configuration or, as ServeTheHome speculates, perhaps 160 with generational improvements. Intel is not doing better: the usual (up to 28-core) Xeons have 48 lanes per CPU (96 lanes for dual) and the Xeon 9200 has 40 lanes per CPU (80 lanes for dual).

Architecture is not the problem; power and 14nm are Intel's problems today. Whether they are stuck with this until they figure out how 10nm works, or they have another approach, we'll see.
Posted on Reply
#41
Patriot
londisteYou know, Rome is doing just fine without exaggerations.
- Power-wise the thread count comparison might be apt, but leaving that aside for the moment, even the current Xeon 9200 does work in 2-socket configurations, resulting in 224 threads.
- According to the leaked list, 64-core Romes are 200W and 225W. Hopefully AMD is not doing the same thing as the Ryzen 3000 series on the desktop, where default settings add up to +35% to that.
- 128 PCIe lanes each means 128 PCIe lanes total in a dual-socket configuration or, as ServeTheHome speculates, perhaps 160 with generational improvements. Intel is not doing better: the usual (up to 28-core) Xeons have 48 lanes per CPU (96 lanes for dual) and the Xeon 9200 has 40 lanes per CPU (80 lanes for dual).

Architecture is not the problem; power and 14nm are Intel's problems today. Whether they are stuck with this until they figure out how 10nm works, or they have another approach, we'll see.
Couple of notes:
Rome boards are designed with the expectation of 250W per socket, either for Milan or for turbo; reviews will tell.
128 lanes of PCIe 4.0 per CPU; when configured in dual-CPU mode, half the lanes are repurposed as XGMI inter-socket links, which are x16 links running a more efficient protocol with lower latency and higher bandwidth.

Server makers can opt to use 3 instead of 4 XGMI links, freeing an extra possible 32 lanes, but that would sacrifice inter-socket bandwidth while increasing the need for it. I think it's a bad play, as 128 PCIe 4.0 lanes is a shitton of bandwidth...
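To make the lane accounting concrete: each XGMI link takes an x16 away from both sockets, which is exactly where the 128-to-160 usable-lane range for dual-socket Rome comes from. A quick C sketch of that math, using the figures discussed in this thread rather than an official AMD configuration table:

#include <stdio.h>

/* Dual-socket Rome lane math as described above: each CPU has 128
   PCIe 4.0 lanes, and every XGMI inter-socket link consumes an x16
   from each socket. */
int main(void) {
    const int lanes_per_cpu = 128, sockets = 2;
    for (int links = 4; links >= 3; links--) {
        int usable = sockets * lanes_per_cpu - links * 16 * sockets;
        printf("%d XGMI links -> %d usable PCIe lanes\n", links, usable);
    }
    return 0;  /* prints 128 for 4 links, 160 for 3 links */
}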

The Intel 9200 is BGA, and boards and chips have to be bought from Intel; it's a $200k sort of play without RAM... and almost no one is buying first gen. It draws too much power, and there is no differentiation to be had between vendors... it's just not a good thing. Intel has sort of listened and made a gen 2, with Cooper Lake being socketed and upgradable to Ice Lake.

Comparing the 9200 and Rome is not useful, as it's not really in the market. Intel having 96 PCIe 3.0 lanes vs 128-160 PCIe 4.0 lanes is just an insane bandwidth difference. As far as server config is concerned, I expect many single-proc Rome servers, and most dual-proc setups to be configured with 3 XGMI links.

Intel will most likely retain a single-threaded performance advantage in the server realm, but will be dominated in anything that can use the insane number of threads AMD is offering.

As far as what Keller is working on... he is VP of SoC and is working on die stacking and other vertical, highly integrated density gains...
He is claiming 50x density improvements over 10nm, and it is "virtually working already".
Posted on Reply
#43
Steevo
MephisIf you thought I was talking about you in particular, I wasn't. It was a comment about the general tone on this forum. No need to get so excited.
No worries, I was just responding to your statement under where you quoted me.
Posted on Reply
#45
Crackong
DeathtoGnomesI wonder how much glue Intel is using on this.
It is not glue, it is pancakes.

(From wikichip)
Posted on Reply
#46
Vya Domus
windwhirlNot every bit of performance depends on core counts.
Well these exist and they clearly serve a purpose. So yes, here every bit of performance depends on core counts.
Posted on Reply
#47
Berfs1
londisteYou know, Rome is doing just fine without exaggerations.
- Power-wise the thread count comparison might be apt, but leaving that aside for the moment, even the current Xeon 9200 does work in 2-socket configurations, resulting in 224 threads.
- According to the leaked list, 64-core Romes are 200W and 225W. Hopefully AMD is not doing the same thing as the Ryzen 3000 series on the desktop, where default settings add up to +35% to that.
- 128 PCIe lanes each means 128 PCIe lanes total in a dual-socket configuration or, as ServeTheHome speculates, perhaps 160 with generational improvements. Intel is not doing better: the usual (up to 28-core) Xeons have 48 lanes per CPU (96 lanes for dual) and the Xeon 9200 has 40 lanes per CPU (80 lanes for dual).

Architecture is not the problem; power and 14nm are Intel's problems today. Whether they are stuck with this until they figure out how 10nm works, or they have another approach, we'll see.
No, it is 128 PER CPU. AMD confirms it on their website. You have a total of 256 with 2 EPYC CPUs. Though finding a way to USE all 256 will require lots of hardware (but I'm sure some server users will find a way to use that many). Plus, for second gen, it's 128 PCIe 4.0 lanes per CPU. That's yummy. Also, Intel's 56-core CPU is soldered, meaning you have to buy the motherboard; you can't swap the CPU in case something happens. Even Intel is making custom cooling solutions for it, depending on the U size of the server chassis. Whereas EPYC can be used in many more places.
Posted on Reply
#48
londiste
Berfs1No, it is 128 PER CPU. AMD confirms it on their website. You have a total of 256 with 2 EPYC CPUs.
No, you don't. With the same configuration as Naples you get 128 lanes for 2 CPUs. This is the default configuration. With additional configuration allowed for system builders you can have 128-160 lanes for 2 CPUs.
Posted on Reply
#49
Patriot
Berfs1No, it is 128 PER CPU. AMD confirms it on their website. You have a total of 256 with 2 EPYC CPUs. Though finding a way to USE all 256 will require lots of hardware (but I'm sure some server users will find a way to use that many). Plus, for second gen, it's 128 PCIe 4.0 lanes per CPU. That's yummy. Also, Intel's 56-core CPU is soldered, meaning you have to buy the motherboard; you can't swap the CPU in case something happens. Even Intel is making custom cooling solutions for it, depending on the U size of the server chassis. Whereas EPYC can be used in many more places.
No just no.... read before writing and learn.
PatriotCouple of notes:
Rome boards are designed with the expectation of 250W per socket, either for Milan or for turbo; reviews will tell.
128 lanes of PCIe 4.0 per CPU; when configured in dual-CPU mode, half the lanes are repurposed as XGMI inter-socket links, which are x16 links running a more efficient protocol with lower latency and higher bandwidth.

Server makers can opt to use 3 instead of 4 XGMI links, freeing an extra possible 32 lanes, but that would sacrifice inter-socket bandwidth while increasing the need for it. I think it's a bad play, as 128 PCIe 4.0 lanes is a shitton of bandwidth...

The Intel 9200 is BGA, and boards and chips have to be bought from Intel; it's a $200k sort of play without RAM... and almost no one is buying first gen. It draws too much power, and there is no differentiation to be had between vendors... it's just not a good thing. Intel has sort of listened and made a gen 2, with Cooper Lake being socketed and upgradable to Ice Lake.

Comparing the 9200 and Rome is not useful, as it's not really in the market. Intel having 96 PCIe 3.0 lanes vs 128-160 PCIe 4.0 lanes is just an insane bandwidth difference. As far as server config is concerned, I expect many single-proc Rome servers, and most dual-proc setups to be configured with 3 XGMI links.

Intel will most likely retain a single-threaded performance advantage in the server realm, but will be dominated in anything that can use the insane number of threads AMD is offering.

As far as what Keller is working on... he is VP of SoC and is working on die stacking and other vertical, highly integrated density gains...
He is claiming 50x density improvements over 10nm, and it is "virtually working already".
Amendments on power are coming, with more detailed reviews of power usage.
225W is the official top SKU; I see Gigabyte allowing cTDP up to 240W.

What we do know is that dual 64-core uses less power than dual 28-core by a healthy margin, and one 64-core is about all it takes to match or better dual 28-core.

The 2020 "competition" is a socketed version of the 9200, so the bga will no longer be an issue, power probably still will be, or it won't be very competitive.
Currently on an AMD unoptimized path (not using even AVX2 which rome supports) Using AVX512 on Intel, a dual 8280 2x 10k chip will match a 2x 7k Rome setup, give rome AVX2 and that will never happen.
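The "unoptimized path" complaint is about runtime ISA dispatch: math libraries pick kernels at startup based on CPU feature flags (or, in some cases, the vendor string). A hedged sketch of feature-based dispatch using GCC/Clang builtins, illustrative rather than any specific library's actual code:

#include <stdio.h>

/* Feature-based (not vendor-based) kernel selection: take the widest
   ISA the CPU actually reports. Rome reports AVX2 but not AVX-512. */
int main(void) {
    __builtin_cpu_init();
    if (__builtin_cpu_supports("avx512f"))
        puts("dispatching AVX-512 kernels");
    else if (__builtin_cpu_supports("avx2"))
        puts("dispatching AVX2 kernels");
    else
        puts("dispatching scalar/SSE fallback");
    return 0;
}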
techmagnet56-core $10000 ..64-cores $7000 yeah no brainer.
No no no, tech, it's $10k for 28c... these 56c chips are $20-40k each, and you have to have two soldered down on an Intel board...
Intel is going to have to offer 80%+ discounts to sell chips.
Posted on Reply
#50
Berfs1
PatriotNo just no.... read before writing and learn.

Amendments on power are coming, with more detailed reviews of power usage.
225W is the official top SKU; I see Gigabyte allowing cTDP up to 240W.

What we do know is that dual 64-core uses less power than dual 28-core by a healthy margin, and one 64-core is about all it takes to match or better dual 28-core.

The 2020 "competition" is a socketed version of the 9200, so the BGA will no longer be an issue; power probably still will be, or it won't be very competitive.
Currently, on an AMD-unoptimized path (not even using AVX2, which Rome supports), with AVX-512 on Intel, a dual 8280 (2x $10k chips) will match a 2x $7k Rome setup; give Rome AVX2 and that will never happen.

No no no, tech, it's $10k for 28c... these 56c chips are $20-40k each, and you have to have two soldered down on an Intel board...
Intel is going to have to offer 80%+ discounts to sell chips.
Yep, ur right, i need to learn...
londisteNo, you don't. With the same configuration as Naples you get 128 lanes for 2 CPUs. This is the default configuration. With additional configuration allowed for system builders you can have 128-160 lanes for 2 CPUs.
sure.

Y'all acting like you know everything; tell me why this particular CPU says it supports 128 lanes? www.amd.com/en/products/cpu/amd-epyc-7551p

Don't even tell me "dual socket", its a P CPU. Clearly my glasses are working.
Posted on Reply