
Alleged Intel Sapphire Rapids Xeon Processor Image Leaks, Dual-Die Madness Showcased

Intel has spent 3 years spreading FUD about AMD's glue.

The irony and shame here is fantastic.
 
Still not quite competitive enough with Milan for a 10 nm product, and Zen 3 Epyc is just months away. Ain't looking good, especially since it's rumored that AMD will remain on 64-core configurations. In other words, they think these are no threat.
A very bold claim considering we know nothing about Sapphire Rapids' performance characteristics.
 
A very bold claim considering we know nothing about Sapphire Rapids' performance characteristics.

Even if the performance is lower, the overall feature set it offers is still attractive. The Xeon ecosystem is already mature, and Intel isn't selling CPUs, they sell solutions, and that includes software as well

which is where AMD is very much lacking
 
Even if the performance is lower, the overall feature set it offers is still attractive. The Xeon ecosystem is already mature, and Intel isn't selling CPUs, they sell solutions, and that includes software as well

which is where AMD is very much lacking
99% of these run Linux or VMware though. The software advantage you're thinking of doesn't exist in the enterprise server space, and most of the heavy compute nodes don't run Windows.

The chance of seeing these in workstations is pretty slim, perhaps even zero.
 
...Or as Intel's marketing department would describe it to their partners: 'glued-together CPUs'


... which is a good thing right?!?!

/s

Why not, AMD did it
 
A very bold claim considering we know nothing about Sapphire Rapids' performance characteristics.
In the market where these are going, performance will still be below Epyc's.

I could be mistaken but I'm guessing you're talking single core performance, no? :p
 
Love the lols at Intel when this is exactly what a Ryzen is, in effect. However big the dies are, there is a gap between them, so calling it glued is dumb as bricks, but as is the norm here now, AMD users rule the roost
 
Love the lols at Intel when this is exactly what a Ryzen is, in effect. However big the dies are, there is a gap between them, so calling it glued is dumb as bricks, but as is the norm here now, AMD users rule the roost
Turnabout is fair play; Intel is guilty of doing exactly what they accused AMD of doing with chip designs: gluing stuff together to make it look bigger and more intense. :D
 
In the market where these are going, performance will still be below Epyc's.
I could be mistaken but I'm guessing you're talking single core performance, no? :p
Not just single core performance, no.
Most of you guys in here don't seem to understand how the server market works; servers are purpose-built, and especially when it comes to high-end servers, the only thing that matters is total throughput in one specific workload, regardless of whether that is achieved with 4 or 400 cores. Average benchmark scores are usually irrelevant here (that's only something we consumers think about). If one CPU model is superior for a specific workload, it doesn't really matter if the competition has more cores. There will probably be numerous scenarios where Xeons and Epycs win respectively, sometimes with a significant margin too.
 
Intel has spent so much time nurturing the Xeon platform, that's their core business. Even if the performance isn't on top, the overall features their customers are using and relying on cannot be replaced by AMD

Right, that's why AMD is winning high profile contracts left and right. Apparently it's a lot easier for companies to switch than you think.

Intel hasn't updated their Xeon platform with anything noteworthy for around 2 years now, that's an eternity in the server space, and customers just won't settle for an inferior, slower and more costly platform. These things are custom built with huge support teams dedicated to their maintenance; it's not like you plop in some racks with Intel hardware and they miraculously "just work" and the AMD ones don't. If that fabled Intel support and ecosystem was worth so much they'd never switch, except they do, because it isn't.

Oh, and Intel's memory technology business side is so good that apparently they're looking to sell it. Hmm.
 
The photo shows the cancelled LGA4189(-P4) CPU that was Cooper Lake for 1- and 2-socket systems, not Sapphire Rapids.
Ice Lake for LGA4189 (LGA4189-P5) is much delayed but should finally see the light of day in the first half of 2021.
 
They'll heavily rely on AVX-512 benchmarks to market this.
 
For AVX-512 related tasks, yes... in some specific workstations, but does this apply to server usage?

I don't even know if AVX-512 has any advantages over GPU compute in any usage scenarios. Especially now that GPU clocks are reaching AVX-512 clocks. But Intel will still use the AVX-512 benchmarks to "prove" that they're beating the competition.

Quoting Linus Torvalds - "I'd much rather see that transistor budget used on other things that are much more relevant. Even if it's still FP math (in the GPU, rather than AVX-512). Or just give me more cores (with good single-thread performance, but without the garbage like AVX-512) like AMD did."
 
I don't even know if AVX-512 has any advantages over GPU compute in any usage scenarios.

It doesn't. Everyone would like to say that it does in latency-sensitive applications, however that makes little sense in reality, because the type of application that can be sped up using SIMD is likely to be highly data-independent, and for those sorts of algorithms throughput is more important than latency.

Basically there is a tension between something that needs high levels of parallelization and something that needs low latency; those two properties are highly orthogonal with respect to each other. In other words, applications that "need" both parallel processing and low latency don't really exist. Wide SIMD support in CPUs is stupid, it's a development that should never have been taken this far; there are simply better ways to do massively parallel computation.
 
Right, that's why AMD is winning high profile contracts left and right. Apparently it's a lot easier for companies to switch than you think.

Intel hasn't updated their Xeon platform with anything noteworthy for around 2 years now, that's an eternity in the server space, and customers just won't settle for an inferior, slower and more costly platform. These things are custom built with huge support teams dedicated to their maintenance; it's not like you plop in some racks with Intel hardware and they miraculously "just work" and the AMD ones don't. If that fabled Intel support and ecosystem was worth so much they'd never switch, except they do, because it isn't.

Oh, and Intel's memory technology business side is so good that apparently they're looking to sell it. Hmm.

So you're telling me that it took 3 years of Intel doing absolutely nothing, massive security issues on their side, and AMD offering nearly 2x more cores/performance for the same price for companies to buy AMD?

Yes, that's the Intel infrastructure, support and sw for you.

And yes, once you build custom infrastructure and software solutions around the hardware, with Intel it's a matter of swapping the hardware and adjusting a few things and you are good to go, since everything is compatible.

When you are upgrading to AMD or to a different hardware vendor, you have to build the entire infrastructure again. Not to mention the non-existing AMD proconsumer support.

The funny thing actually is that Intel's server business is growing massively, just over 30% during the last Q. The demand is higher than their supply, and every Q is setting a new record.
 
But can it run Cyberpunk?!
 
Not to mention the non-existing AMD proconsumer support.

What the hell does the server segment have to do with "proconsumer" support, whatever that is.

So you're telling me that it took 3 years of Intel doing absolutely nothing, massive security issues on their side, and AMD offering nearly 2x more cores/performance for the same price for companies to buy AMD?

It's tough to make a dent in this segment since you need to prove that your platform is constantly advancing, something that AMD is currently doing and Intel isn't, so yes, it actually takes this long. Plus, these things are built to be used for multiple years, so most of the data centers that are currently in use or are just about to be replaced/upgraded come from a time when EPYC didn't even exist or was just announced. And talking about integration, currently you can get exceptional GPU hardware from AMD, which is really important these days. From Intel you're still left waiting in the dust and you need to get that separately, which means more money, more time and more support required. Yeah, things are looking really good for the Intel customer.

Explain to me what you do if you bought a ton of Intel-based servers 3 years ago, you need to replace them with something much more capable and power efficient, and you realize Intel has hardly moved an inch since then. Do you keep your increasingly inferior platform around because of this fabled support while your competitors fly past you in terms of running costs because they switched to AMD?

The funny thing actually is that Intel's server business is growing massively, just over 30% during the last Q. The demand is higher than their supply, and every Q is setting a new record.

You do realize this isn't a zero-sum game, right? Intel and AMD can both have record growth simultaneously, and in fact they do; the point is that Intel's business would have grown even more had they been more competitive. Oh, and their problems with supply are because they massively screwed up their manufacturing and are forced to ship products based on a 6-year-old node. The demand isn't the problem, they are.
 
I don't even know if AVX-512 has any advantages over GPU compute in any usage scenarios. Especially now that GPU clocks are reaching AVX-512 clocks. But Intel will still use the AVX-512 benchmarks to "prove" that they're beating the competition.
There should be no doubt that AVX-512 is much faster than AVX2. Not only does it have twice the width, it also adds a lot more operations and flexibility, which should also allow compilers to autovectorize even more code (in cases where programmers don't use intrinsics directly).
But keep in mind that running AVX2 code through AVX-512 units will have no real benefit.

Even VIA has implemented AVX-512 in their latest design, despite running it through two fused 256-bit vector units. This may seem pointless to some of you, but it will still gain benefits such as: 1) new types of operations in AVX-512, 2) improved instruction cache utilization and 3) better ISA compatibility with future software. This is kind of analogous to when Sandy Bridge added AVX(1) support, despite having only fused 128-bit vector units (or Zen 1).
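
To make the "new types of operations" point a bit more concrete, here's a rough sketch (hypothetical function names, not from any real codebase) of per-lane masking, which AVX-512 handles with a single masked instruction where AVX2 needs a compare plus a blend:

```c
#include <immintrin.h>

// AVX-512: a[i] += b[i] only where a[i] < 0, using a mask register.
// Requires AVX512F (e.g. compile with -mavx512f).
__m512 add_where_negative_avx512(__m512 a, __m512 b) {
    __mmask16 m = _mm512_cmp_ps_mask(a, _mm512_setzero_ps(), _CMP_LT_OQ);
    return _mm512_mask_add_ps(a, m, a, b);   // lanes where m is 0 keep a unchanged
}

// AVX2 equivalent: compare, add everywhere, then blend on the compare result.
__m256 add_where_negative_avx2(__m256 a, __m256 b) {
    __m256 m   = _mm256_cmp_ps(a, _mm256_setzero_ps(), _CMP_LT_OQ);
    __m256 sum = _mm256_add_ps(a, b);
    return _mm256_blendv_ps(a, sum, m);      // pick sum where a[i] < 0
}
```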

Quoting Linus Torvalds - "I'd much rather see that transistor budget used on other things that are much more relevant. Even if it's still FP math (in the GPU, rather than AVX-512). Or just give me more cores (with good single-thread performance, but without the garbage like AVX-512) like AMD did."
And people who don't know better will use this quote forever, despite it being total BS.
SIMD inside the CPU has basically no latency, and can be mixed in with other operations. Communicating with a GPU is only worth it for huge batches of data, due to the extreme latency.
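
Rough illustration of what I mean (a made-up example, not from any real application): an 8-wide dot product dropped straight into ordinary scalar control flow. The SIMD part costs a handful of cycles inline; shipping the same 32 bytes of data to a GPU would mean a kernel launch plus PCIe transfers, i.e. microseconds instead of nanoseconds, so offload only pays off for big batches.

```c
#include <immintrin.h>

// Score one item: an 8-element dot product (AVX) mixed right into scalar logic.
float score_item(const float *weights, const float *features, float bias) {
    __m256 w = _mm256_loadu_ps(weights);    // 8 weights
    __m256 f = _mm256_loadu_ps(features);   // 8 features
    __m256 p = _mm256_mul_ps(w, f);         // elementwise products

    // Horizontal sum of the 8 products.
    __m128 lo = _mm256_castps256_ps128(p);
    __m128 hi = _mm256_extractf128_ps(p, 1);
    __m128 s  = _mm_add_ps(lo, hi);
    s = _mm_hadd_ps(s, s);
    s = _mm_hadd_ps(s, s);
    float dot = _mm_cvtss_f32(s);

    return dot > 0.0f ? dot + bias : bias;  // straight back into scalar branching
}
```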

It doesn't. Everyone would like to say that it does in latency-sensitive applications, however that makes little sense in reality, because the type of application that can be sped up using SIMD is likely to be highly data-independent, and for those sorts of algorithms throughput is more important than latency.
What?

Basically there is a tension between something that needs high levels of parallelization and something that needs low latency; those two properties are highly orthogonal with respect to each other. In other words, applications that "need" both parallel processing and low latency don't really exist. Wide SIMD support in CPUs is stupid, it's a development that should never have been taken this far; there are simply better ways to do massively parallel computation.
There are many types of parallelism. SIMD in the CPU is for parallelism on a smaller scale intermixed with a lot of logic, while multithreading is for larger independent chunks of work, and GPUs are for even larger, computationally dense (but little logic) chunks of work.
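
If it helps, a minimal sketch (hypothetical example) of the first two levels working together: threads take the big independent chunks (rows here), while the inner loop is simple and data-parallel so the compiler can map it onto the SIMD units.

```c
#include <stddef.h>

// Scale each row of a matrix by its own factor.
// Outer loop: multithreading over large independent chunks (OpenMP, -fopenmp).
// Inner loop: small-scale data parallelism the compiler can auto-vectorize to SIMD.
void scale_rows(float *data, size_t rows, size_t cols, const float *factors) {
    #pragma omp parallel for
    for (size_t r = 0; r < rows; r++) {
        float *row = data + r * cols;
        for (size_t c = 0; c < cols; c++)
            row[c] *= factors[r];
    }
}
```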
 