What is the point of such a combo today, when everything is "in the cloud"? Shared GPU-CPU memory? That's too situational.
Workstations.
No. IMO Ryzen 3000 will be out on June 7th; that's why all the motherboard makers are teasing X570.
We'll see how this goes. Maybe they'll show just a few models.
Looking at just a single CPU, moving the <=8C variants to 7nm makes little sense: slightly lower power consumption isn't a big deal if everyone accepts/likes how much Zen+ pulls.
So it's down to the issue of yields and using bad chiplets for low-core SKUs. Does it save them enough money to cover whatever TSMC asks? Only AMD knows this.
Given how expensive 7nm is, my guess is that they would make more money by keeping the Zen+ lineup for everything that doesn't need chiplets. They could add PCIe 4.0 and call it Zen++.
Ryzen above 8C (this year) and APUs above 4C (next year?) would be Zen2.
And as 7nm gets cheaper, AMD will start fresh on a new socket (late 2020 or 2021).
Look at the TH review: EPYC uses 16 memory DIMMs vs 12 on the Xeon 8280.
When you look at large-scale data centers, this is big, even before considering 64C Rome.
When you look at cloud, peak power consumption is not the statistic you want to see.
Xeon CPUs use more power, but are faster. The key information is how much time and energy is needed to complete a task ("time to solve" and "energy to solve").
There's no clear winner in this. In many scenarios EPYC has a ~10% advantage; in some, Intel pulls ahead thanks to lower latency and a more robust instruction set.
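Quick back-of-envelope to show what "energy to solve" means. All the wattages and runtimes below are invented for illustration, not benchmark data: the point is just that a faster CPU can draw more power and still use less total energy.

```python
def energy_to_solve(watts, seconds):
    """Joules consumed to finish one task: power * time."""
    return watts * seconds

# Hypothetical workload (made-up numbers):
# CPU A finishes in 100 s at 205 W, CPU B takes 110 s at 180 W.
cpu_a = energy_to_solve(205, 100)  # 20500 J
cpu_b = energy_to_solve(180, 110)  # 19800 J

# The hungrier but faster chip uses less energy overall here.
print(f"CPU A: {cpu_a} J, CPU B: {cpu_b} J")
```

So peak wattage alone tells you very little; you need both columns of the benchmark.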
There are way too many variables to say that one server CPU is better than another in general.
Also, I think we focus on power draw way too much. Over 3 years a typical server will consume energy worth roughly 10% of the hardware cost.
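Rough math behind that 10% figure. The average draw and electricity price are assumptions (pick your own), but the shape of the calculation holds:

```python
# Back-of-envelope: 3-year energy cost of one server (assumed numbers).
avg_power_w = 300            # assumed average draw under typical load
hours = 3 * 365 * 24         # three years of 24/7 operation
price_per_kwh = 0.10         # assumed $/kWh at data-center rates

energy_kwh = avg_power_w / 1000 * hours   # ~7884 kWh
energy_cost = energy_kwh * price_per_kwh  # ~$788

print(f"{energy_kwh:.0f} kWh -> ${energy_cost:.0f} over 3 years")
```

Against a server that cost several thousand dollars, that lands in the ~10% ballpark, so a 20 W delta between CPUs barely moves the TCO needle.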
Remember that some server-specific software is licensed per core (databases being the most significant). We don't think about it often, but the simple fact is: if you need a 32C EPYC to match a 28C Xeon, the license cost of those 4 extra cores may eat all the savings an EPYC provides.
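To put a number on the licensing point: the per-core price below is purely an assumption (real enterprise database licensing varies a lot), but even a modest per-core fee swamps a typical CPU price difference.

```python
# Per-core licensing math; license price is an assumed placeholder.
license_per_core = 1000      # assumed $/core for some enterprise software
epyc_cores, xeon_cores = 32, 28

extra_cores = epyc_cores - xeon_cores
extra_license_cost = extra_cores * license_per_core

print(f"{extra_cores} extra cores -> ${extra_license_cost} more in licenses")
```

If the EPYC saves you, say, a few hundred dollars on hardware and power, $4000 in extra licenses flips the comparison entirely.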