Friday, December 11th 2020

Alleged Intel Sapphire Rapids Xeon Processor Image Leaks, Dual-Die Madness Showcased

Today, thanks to ServeTheHome forum member "111alan", we have the first pictures of an alleged Intel Sapphire Rapids Xeon processor. Pictured is what appears to be a dual-die design, similar to the 56-core, 112-thread Cascade Lake-AP parts that also used two dies. Sapphire Rapids is a 10 nm SuperFin design that allegedly comes in a dual-die configuration as well. To host this processor, the motherboard needs the new LGA4677 socket with its 4,677 pins. The socket, along with the new 10 nm Sapphire Rapids Xeon processors, is set for delivery in 2021, when Intel is expected to launch its new processors and their respective platforms.

The processor pictured is clearly a dual-die design, meaning that Intel is using its multi-chip package (MCM) technology, with EMIB (Embedded Multi-die Interconnect Bridge) silicon bridges interconnecting the dies. As a reminder, the new 10 nm Sapphire Rapids platform is supposed to bring many new features: a DDR5 memory controller paired with Intel's Data Streaming Accelerator (DSA), the new PCIe 5.0 protocol with a 32 GT/s data transfer rate, and CXL 1.1 support for next-generation accelerators. The exact configuration of this processor is unknown; however, it is an engineering sample with a modest 2.0 GHz clock frequency.
Source: ServeTheHome Forums

83 Comments on Alleged Intel Sapphire Rapids Xeon Processor Image Leaks, Dual-Die Madness Showcased

#26
Chrispy_
Intel has spent 3 years spreading FUD about AMD's glue.

The irony and shame here is fantastic.
#27
ShurikN
HD64GMax 96 cores per die most possibly.
I think you mean 96 per socket. 96 per die would make it a 768 core cpu :D
#28
efikkan
Vya DomusStill not quite competitive enough with Milan for a 10nm product and Zen 3 Epyc is just months away. Ain't looking good, especially since it's rumored that AMD will remain on 64 core configurations. In other words they think these are no threat.
A very bold claim considering we know nothing about Sapphire Rapids' performance characteristics.
#29
HansRapad
efikkanA very bold claim considering we know nothing about Sapphire Rapids' performance characteristics.
Even if the performance is lower, the overall feature set is still attractive. The Xeon ecosystem is already mature, and Intel isn't just selling CPUs, it sells solutions, which includes software as well.

AMD is still very much lacking on the software side.
#30
Chrispy_
HansRapadEven if the performance is lower, the overall feature set is still attractive. The Xeon ecosystem is already mature, and Intel isn't just selling CPUs, it sells solutions, which includes software as well.

AMD is still very much lacking on the software side.
99% of these will go and run Linux or VMware, though. The software advantage you're thinking of doesn't exist in the enterprise server space, and most of the heavy compute nodes don't run Windows.

The chance of seeing these in workstations is pretty slim, perhaps even zero.
#32
DeathtoGnomes
efikkanA very bold claim considering we know nothing about Sapphire Rapids' performance characteristics.
In the market these are going into, performance will still be below EPYC.

I could be mistaken, but I'm guessing you're talking about single-core performance, no? :p
#33
Unregistered
Love the lols at Intel when this is exactly what a Ryzen is, in effect. However big the dies are, there is a gap, so calling it glued is dumb as bricks. But, as is the norm here now, AMD users rule the roost.
#34
DeathtoGnomes
tiggerLove the lols at Intel when this is exactly what a Ryzen is, in effect. However big the dies are, there is a gap, so calling it glued is dumb as bricks. But, as is the norm here now, AMD users rule the roost.
Turnabout is fair play: Intel is guilty of doing exactly what it accused AMD of, gluing stuff together to make it look bigger and more intense. :D
#35
efikkan
DeathtoGnomesIn the market these are going into, performance will still be below EPYC.
I could be mistaken, but I'm guessing you're talking about single-core performance, no? :p
Not just single-core performance, no.
Most of you guys in here don't seem to understand how the server market works: servers are purpose-built, and especially at the high end, the only thing that matters is total throughput in one specific workload, regardless of whether that is achieved with 4 or 400 cores. Average benchmark scores are usually irrelevant here (that's only something we consumers think about). If one CPU model is superior for a specific workload, it doesn't really matter that the competition has more cores. There will probably be numerous scenarios where Xeons and EPYCs each win, sometimes with a significant margin too.
#36
medi01
Haile SelassieDual die madness?
Not to be confused with glued-together mediocrity.
#37
Vya Domus
HansRapadIntel has spent so much time nurturing the Xeon platform; that's their core business. Even if the performance isn't the top, the overall features their customers rely on cannot be replaced by AMD
Right, that's why AMD is winning high-profile contracts left and right. Apparently it's a lot easier for companies to switch than you think.

Intel hasn't updated its Xeon platform with anything noteworthy for around 2 years now; that's an eternity in the server space, and customers just won't settle for an inferior, slower, and more costly platform. These things are custom built with huge support teams dedicated to their maintenance; it's not like you plop in some racks of Intel hardware and they miraculously "just work" while the AMD ones don't. If that fabled Intel support and ecosystem were worth so much, they'd never switch. Except they do, because it isn't.

Oh, and Intel's memory technology business is so good that apparently they're looking to sell it. Hmm.
#38
HD64G
ShurikNI think you mean 96 per socket. 96 per die would make it a 768 core cpu :D
Per socket indeed. :toast:
#39
pumero
The photo shows the cancelled LGA4189(-P4) CPU, which was Cooper Lake for 1- and 2-socket systems, not Sapphire Rapids.
Ice Lake for LGA4189 (LGA4189-P5) is much delayed but should finally see the light of day in the first half of 2021.
#40
TumbleGeorge
AnarchoPrimitivAMD's Zen4 on 5nm, meaning AMD will still have the node (and efficiency) advantage, and probably another 20% IPC
50% more cores + 20% IPC = 80% more performance than EPYC Milan, lol?

96 × 1.2 / 64 = 1.8, i.e. 80% more.

Where is Intel in this picture?
#41
hardcore_gamer
They'll heavily rely on AVX-512 benchmarks to market this.
#42
TumbleGeorge
hardcore_gamerThey'll heavily rely on AVX-512 benchmarks to market this.
For AVX-512 related tasks, yes... in some specific workstations. But how much does that matter for server usage?
#43
hardcore_gamer
TumbleGeorgeFor AVX-512 related tasks, yes... in some specific workstations. But how much does that matter for server usage?
I don't even know if AVX-512 has any advantages over GPU compute in any usage scenarios. Especially now that GPU clocks are reaching AVX-512 clocks. But Intel will still use the AVX-512 benchmarks to "prove" that they're beating the competition.

Quoting Linus Torvalds - "I'd much rather see that transistor budget used on other things that are much more relevant. Even if it's still FP math (in the GPU, rather than AVX-512). Or just give me more cores (with good single-thread performance, but without the garbage like AVX-512) like AMD did."
#44
Vya Domus
hardcore_gamerI don't even know if AVX-512 has any advantages over GPU compute in any usage scenarios.
It doesn't. Everyone likes to say that it does in latency-sensitive applications, but that makes little sense in practice, because the type of application that can be sped up using SIMD is likely to be highly data-independent, and for those sorts of algorithms throughput matters more than latency.

Basically there is a tension between something that needs high levels of parallelization and something that needs low latency; the two properties are largely orthogonal to each other. In other words, applications that "need" both massive parallel processing and low latency don't really exist. Wide SIMD support in CPUs is stupid; it's a development that should never have been taken this far. There are simply better ways to do massively parallel computation.
#45
SaLaDiN666
Vya DomusRight, that's why AMD is winning high-profile contracts left and right. Apparently it's a lot easier for companies to switch than you think.

Intel hasn't updated its Xeon platform with anything noteworthy for around 2 years now; that's an eternity in the server space, and customers just won't settle for an inferior, slower, and more costly platform. These things are custom built with huge support teams dedicated to their maintenance; it's not like you plop in some racks of Intel hardware and they miraculously "just work" while the AMD ones don't. If that fabled Intel support and ecosystem were worth so much, they'd never switch. Except they do, because it isn't.

Oh, and Intel's memory technology business is so good that apparently they're looking to sell it. Hmm.
So you're telling me that it took 3 years of Intel doing absolutely nothing, massive security issues on their side, and AMD offering nearly 2x more cores/performance for the same price, for people to buy AMD?

Yes, that's the Intel infrastructure, support, and SW for you.

And yes, once you build custom infrastructure and SW solutions around the HW with Intel, it's just a matter of changing the HW and adjusting a few things, and you are free to go, since everything is compatible.

When you upgrade to AMD or another HW vendor, you have to build the entire infrastructure again. Not to mention the non-existent AMD proconsumer support.

The funny thing is that Intel's server business is actually growing massively, just over 30% during the last quarter. Demand is higher than their supply, and every quarter sets a new record.
#46
r9
But can it run Cyberpunk ?!
#47
TumbleGeorge
r9But can it run Cyberpunk ?!
Only the soundtrack :D
#48
Vya Domus
SaLaDiN666Not to mention the non-existent AMD proconsumer support.
What the hell does the server segment have to do with "proconsumer" support, whatever that is?
SaLaDiN666So you're telling me that it took 3 years of Intel doing absolutely nothing, massive security issues on their side, and AMD offering nearly 2x more cores/performance for the same price, for people to buy AMD?
It's tough to make a dent in this segment, since you need to prove that your platform is constantly advancing, something AMD is currently doing and Intel isn't, so yes, it actually takes this long. Plus, these things are built to be used for multiple years, so most of the data centers currently in use, or just about to be replaced/upgraded, come from a time when EPYC didn't even exist or was just announced. And speaking of integration: currently you can get exceptional GPU hardware from AMD, which is really important these days. From Intel you're still left waiting in the dust, and you need to get that separately, which means more money, more time, and more support required. Yeah, things are looking really good for the Intel customer.

Explain to me what you do if you bought a ton of Intel-based servers 3 years ago, need to replace them with something much more capable and power efficient, and realize Intel has hardly moved an inch since then. Do you keep your increasingly inferior platform around because of this fabled support, while your competitors fly past you in running costs because they switched to AMD?
SaLaDiN666The funny thing is that Intel's server business is actually growing massively, just over 30% during the last quarter. Demand is higher than their supply, and every quarter sets a new record.
You do realize this isn't a zero-sum game, right? Intel and AMD can both have record growth simultaneously; in fact, they do. The point is that their business would have grown even more had they been more competitive. Oh, and their supply problems exist because they massively screwed up their manufacturing and are forced to ship products based on a 6-year-old node. The demand isn't the problem, they are.
#49
efikkan
hardcore_gamerI don't even know if AVX-512 has any advantages over GPU compute in any usage scenarios. Especially now that GPU clocks are reaching AVX-512 clocks. But Intel will still use the AVX-512 benchmarks to "prove" that they're beating the competition.
There should be no doubt that AVX-512 is much faster than AVX2. Not only does it have twice the width, it also adds a lot more operations and flexibility, which should also allow compilers to autovectorize even more code (in cases where programmers don't use intrinsics directly).
But keep in mind that running AVX2 code through AVX-512 units will have no real benefit.

Even VIA has implemented AVX-512 in its latest design, despite running it through two fused 256-bit vector units. This may seem pointless to some of you, but it still gains benefits such as: 1) the new types of operations in AVX-512, 2) improved instruction cache utilization, and 3) better ISA compatibility with future software. This is analogous to when Sandy Bridge added AVX(1) support despite having only fused 128-bit vector units (or Zen 1).
hardcore_gamerQuoting Linus Torvalds - "I'd much rather see that transistor budget used on other things that are much more relevant. Even if it's still FP math (in the GPU, rather than AVX-512). Or just give me more cores (with good single-thread performance, but without the garbage like AVX-512) like AMD did."
And people who don't know better will use this quote forever, despite it being total BS.
SIMD inside the CPU has essentially no latency and can be mixed in with other operations. Communicating with a GPU is only worth it for huge batches of data, due to the extreme latency.
Vya DomusIt doesn't. Everyone likes to say that it does in latency-sensitive applications, but that makes little sense in practice, because the type of application that can be sped up using SIMD is likely to be highly data-independent, and for those sorts of algorithms throughput matters more than latency.
What?
Vya DomusBasically there is a tension between something that needs high levels of parallelization and something that needs low latency; the two properties are largely orthogonal to each other. In other words, applications that "need" both massive parallel processing and low latency don't really exist. Wide SIMD support in CPUs is stupid; it's a development that should never have been taken this far. There are simply better ways to do massively parallel computation.
There are many types of parallelism. SIMD in the CPU is for small-scale parallelism intermixed with a lot of logic, multithreading is for larger independent chunks of work, and GPUs are for even larger, computationally dense (but logic-light) chunks of work.
#50
voltage
Interesting; I'm looking forward to seeing the review results.