Intel used a multiplier-unlocked derivative of the Xeon Platinum 8180 "Skylake-SP" processor in this demo. The Xeon Platinum 8180 is a $10,000 processor with a 205W rated TDP at its nominal clock speed of 2.50 GHz and a Turbo Boost frequency of 3.80 GHz. The company achieved a 100% overclock to 5.00 GHz using extreme cooling, and considering that TDP is calculated against a processor's nominal clock (a clock speed all cores are guaranteed to run at simultaneously), the company could easily have crossed 350W to 400W to stabilize the 5.00 GHz overclock. If a 205W TDP figures in the same sentence as a 2.50 GHz nominal clock, it doesn't bode well for the final product: it will either have a very high TDP (higher still taking into account its unlocked multiplier), or clock speeds that aren't much higher than the Xeon Platinum 8180's.
A lot of assumptions here. I'd ask you if you have any idea what the power draw of a 5 GHz Threadripper 1950X is; I can tell you it's the other side of 500W. Power draw, as you know very well, doesn't scale linearly with clock speed, and the relationship between performance and power isn't linear either, as we see with 2nd-gen Ryzen 7 for example (~165W package power @ 4.2 GHz, 1.3 V).
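To put rough numbers on that non-linearity, here is a minimal back-of-the-envelope sketch using the textbook dynamic-power approximation P ≈ C·V²·f. The baseline voltage, the 1.3 V overclock voltage and the resulting figures are assumptions picked purely for illustration, not measurements of this silicon:

```python
# Back-of-the-envelope dynamic power scaling: P ~ C * V^2 * f.
# All voltages and the baseline figure below are illustrative assumptions.

def scaled_power(base_power_w, base_v, base_ghz, new_v, new_ghz):
    """Scale a baseline power figure by the (V^2 * f) ratio."""
    return base_power_w * (new_v / base_v) ** 2 * (new_ghz / base_ghz)

# Hypothetical baseline: 205 W at 2.50 GHz and ~0.9 V all-core.
base = (205, 0.9, 2.5)

# Frequency alone (the linear part): doubling the clock roughly doubles power.
print(scaled_power(*base, new_v=0.9, new_ghz=5.0))  # ~410 W

# But 5 GHz also needs far more voltage, and power rises with V squared.
print(scaled_power(*base, new_v=1.3, new_ghz=5.0))  # ~855 W, before leakage
```

Leakage and VRM losses only push the real figure higher, which is consistent with a 1950X at 5 GHz landing on the other side of 500W.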
As for the clock speeds it'll have, basing them on the Xeon Platinum 8180 is premature, because #1 you're assuming there's no process tweaking between now and when this product hits the market, and #2 there are key differences between the Xeon part and the HEDT part which allow INTEL to switch off parts of the die that aren't needed (like the data links for 2/4-way MP systems), which in turn frees up headroom in both power and clock frequency. You, much like I or anyone else, can't know what these differences are, and as such it's imperative to pose more questions rather than make blanket statements.
AMD's offering could be cheaper and more efficient, besides being fast. An overall superior halo product almost always has PR spillover onto cheaper client-segment products across platforms, and the client GPU industry has demonstrated that for the past two decades.
That may be true, but AMD has yet to produce a desktop Zen-based product that is superior* to INTEL's offerings and could have the aforementioned halo effect.
Moreover, what does superior mean? Does it mean cheaper, lower power consumption perhaps, better performance, or some combination? Exactly what counts as superior? For instance, INTEL still has the performance lead in some application types and Zen+ doesn't change that: in the editorials on this very website, the 8700K is still faster in games than the Ryzen 7 2700X, and AVX is still significantly slower on Summit and Pinnacle Ridge than on CFL. So perhaps it's important to flesh out what we mean when we say superior. Zen/Zen+ could very well be superior, but what does that mean exactly, and where?
AMD is already selling 16 cores at $999, and beating Intel's $999 10-core i9-7900X in a variety of HEDT-relevant tasks. The company has already demonstrated that its 24-core Threadripper II is faster than Intel's $1,999 18-core i9-7980XE. It would surprise us if AMD priced this 24-core part at double that of its 16-core part, so it's more likely to end up cheaper than the i9-7980XE.
Totally! There's just no beating the TR offerings in price vs. performance where productivity applications that scale with core (or rather thread) count are concerned. I can't know AMD's pricing so I won't speculate on it, but based on previous SKUs, as you alluded to, the 24-core TR4 part could very well cost less than the 7980XE, or at the very least the same, especially if, as shown at Computex, it nails the 7980XE in Blender and similar applications.
Intel cannot beat the 32-core Threadripper II on the X299/LGA2066 platform, because it has maxed out the number of cores the platform can pull. The Skylake HCC (high core count) silicon, deployed on 12-core, 14-core, 16-core, and 18-core LGA2066 processors, is already a nightmare for motherboard designers, many of whom have launched special "XE" variants of their top motherboard models that offer acceptable overclocking headroom on these chips, thanks to beefed-up VRMs.
OK, so this is where you may be getting it wrong. The issue with the 18-core 7980XE was not the VRM on any motherboard, but the VRM COOLING, and this is an important distinction. ASRock's XE variants of their X299 boards have the same VRM as the regular versions; what changed is the VRM cooling solution, most of them simply adding more surface area to the heatsink, hence dissipating more heat and letting the VRM temperature fall back to its optimal operating range, maintaining performance. Even the GIGABYTE SOC Champion X299, which admittedly was never a retail board, exemplifies this: to overclock the 7980XE on this board, one needs to actively cool the VRM, and for extreme overclocking you need a container over the VRM and to cool it with LN2. The VRM can tolerate the power draw; it's the cooling solution that can't keep up with it.
The 7980XE was not a motherboard designers' nightmare (by the way, motherboard vendors aren't a single monolith; there are separate teams, a thermal solutions team, a BIOS team, and so on); the VRMs on these boards could always handle the loads. It was the thermal design teams that created cooling solutions which could not adequately dissipate heat from the VRM.
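To put a rough number on why the cooling, rather than the VRM itself, is the limit: a minimal sketch assuming a ~90% efficient VRM stage and a couple of illustrative CPU loads (both the efficiency and the load figures are assumptions):

```python
# Rough estimate of the heat the VRM stage itself must shed.
# The 90% efficiency and the load figures are illustrative assumptions.

def vrm_heat_w(cpu_power_w, vrm_efficiency=0.90):
    """Conversion losses dissipated as heat in the VRM for a given CPU load."""
    input_power = cpu_power_w / vrm_efficiency
    return input_power - cpu_power_w

print(vrm_heat_w(250))  # ~28 W of VRM heat at a stock-ish heavy load
print(vrm_heat_w(500))  # ~56 W of VRM heat under an extreme overclock
```

Even ~50 W is well within what the power stages themselves can deliver, but it is a lot of heat to push through a small, poorly ventilated heatsink, which is exactly why the "XE" boards mostly grew bigger VRM heatsinks rather than different VRMs.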
Coming up with a newer platform, namely revising the Purley 1P enterprise platform for the client segment, with its large LGA3647 socket and 6-channel memory interface, is the only direction in which Intel could have gone to take on the new wave of Threadrippers. AMD, on the other hand, has confirmed that its 24-core and 32-core Threadripper II chips are compatible with current socket TR4 motherboards based on the AMD X399 chipset. It's possible that the next wave of TR4 motherboards could have an 8-channel memory interface, wider than that of Intel's Skylake XCC silicon, and both forwards and backwards compatibility with current-generation Threadripper SKUs (at half the memory bus width) and future Threadripper chips.
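For a sense of scale on those channel counts, here is a quick peak-bandwidth comparison; DDR4-2666 is assumed purely for illustration, and the 8-channel TR4 configuration is the editorial's speculation, not a confirmed product:

```python
# Peak theoretical DRAM bandwidth: channels * transfer rate (MT/s) * 8 bytes per transfer.
# DDR4-2666 is assumed here only to make the channel counts comparable.

def peak_bandwidth_gbs(channels, mt_per_s=2666, bytes_per_transfer=8):
    return channels * mt_per_s * bytes_per_transfer / 1000

print(peak_bandwidth_gbs(4))  # ~85 GB/s  - quad-channel, today's TR4 / LGA2066
print(peak_bandwidth_gbs(6))  # ~128 GB/s - six-channel LGA3647 / Purley-derived
print(peak_bandwidth_gbs(8))  # ~171 GB/s - speculative eight-channel TR4 successor
```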
OK, so we have to use caution here, as assumption or speculation can lead us off the path, as it has here.
HEDT platforms have always been derived from the server parts; that holds true for AMD and INTEL. AMD pushed out Zen on their one and only Zen-supporting server platform a year ahead of the desktop parts. At the time it was a fully realized configuration (32 cores, 128 PCIe lanes and 8 memory channels). They literally used that very platform and socket for ThreadRipper, but gutted it where they deemed it necessary (half the memory channels, half the PCI-Express lanes, etc.). This has everything to do with cost and nothing to do with being nice or thoughtful to the end user: it's simply cheaper for AMD to support fewer sockets, which is just two for Zen.
So when you say, or rather suggest, that this is panic from INTEL (as per the editorial title) and that this is the only direction INTEL could have taken (which, mind you, is true), be careful not to make it seem as if it's ever been any different.
It's always been the case that both INTEL and AMD adapt server parts for their HEDT or high-end platforms. INTEL's decision to use such a socket and configuration has nothing to do with AMD. The reason they didn't use FCLGA2066 is literally that the socket was, and is, confined to 1S configurations and 18 cores; those CPUs don't have any UPI/QPI links or additional memory channels. That they used a different socket is only natural, as it's the only other avenue there is for core scalability. INTEL segmenting its sockets in this manner and adapting them accordingly pre-dates AMD's Zen by years. Look back at LGA1567/1366/1156 from 2010 (three sockets back in 2010, much like today's three sockets: 1151/2066/3647).
AMD's forward compatibility has nothing to do with anything but cost considerations for them, not the user. The SP3r2 socket is massive, with over 4,000 contacts or pins, exactly like SP3 for EPYC, which has the identical pin count of 4,094; it's their LGA3647 equivalent.
AMD has two sockets for cost purposes; INTEL has always had three (at least for the past nine years), just as AMD in 2010 had AM3, C32 & G34, segmented and separated by core count, DRAM channels, interconnects, etc. It would be odd to suggest that INTEL is panicking, on the socket argument, when they are reacting to their one and only competitor in the space by leveraging the same socket segmentation they have had for nearly a decade, maybe more. Developing a CPU takes time, supposedly about five years from paper to product. I'd be interested to know what, outside of leveraging existing technology, INTEL or any other company in this position could have done that would not be viewed as "panic". That is, what does a rational reaction look like to you in a material way (socket choice, platform, etc.) for INTEL, barring the 5 GHz kerfuffle?
I am not defending the 5 GHz snafu, as it wasn't necessary, proved nothing, and created unnecessary controversy. The power of the CPU could easily have been demoed at a more realistic clock speed (it would still likely be faster in CBR15 than anything else we have, or will have, in 2018).