Sunday, June 10th 2018

Intel's 28-core HEDT Processor a Panic Reaction to 32-core Threadripper
At Computex 2018, we witnessed two major HEDT (high-end desktop) processor announcements. Intel unveiled a client-segment implementation of its "Skylake XCC" (extreme core count) silicon, which requires a new motherboard, while AMD announced a doubling in core counts of its Ryzen Threadripper family, with the introduction of new 24-core and 32-core models, which are multi-chip modules of its new 12 nm "Zen+" die, and compatible with existing X399 chipset motherboards. With core counts climbing this frantically, the practicality of these chips for even the most hardcore enthusiast or productivity professional diminishes. The Computex 2018 demos reek of a pissing-contest between the x86 processor giants, with AMD having the upper hand.
The HEDT segment is intended to occupy the space between client desktops and serious scalar workstations. Intel is frantically putting together a new HEDT platform positioned above its current LGA2066 (X299) platform, built around its Purley enterprise platform and a variant of the LGA3647 socket (this chip + your X299 motherboard is no bueno). This socket is needed to wire out the 28-core Skylake XCC (extreme core count) silicon, which has a six-channel DDR4 memory interface. The company put up a live demo at the teaser of this unnamed processor, where it was running at 5.00 GHz, which led many to believe that the processor runs at that speed out of the box, at least at its maximum Turbo Boost state, if not nominal clock. Intel admitted to "Tom's Hardware" that it "forgot" to mention to the crowds that the chip was overclocked.
Overclocking the 28-core chip was no small effort. It took an extreme cooling method, specifically a refrigerated heat-exchanger, coupled with a custom motherboard (we suspect GIGABYTE-sourced), to keep the processor bench-stable at 5.00 GHz. Intel's defense to Tom's Hardware was that "in the excitement of the moment," its on-stage presenter "forgot" to use the word "overclocked." Gregory Bryant, SVP of client-computing at Intel, not only omitted "overclocked" from his presentation, but made sure to stress "5 GHz," as if it were part of the chip's specifications.
"What's amazing is that trade-off, this actually being a 5 GHz in single-threaded performance frequency and not...having to sacrifice that for this kind of multi-threaded performance, so you've got kind of the best of both worlds. So, you guys want to see us productize that thing? Tell you what, we'll bring that product to market in Q4 this year, and you'll be able to get it," he said.
Rival AMD, meanwhile, showed off its 24-core and 32-core Ryzen Threadripper II processors, with its 24-core part beating Intel's i9-7980XE 18-core chip under ordinary air cooling.
Intel used a multiplier-unlocked derivative of the Xeon Platinum 8180 "Skylake-SP" processor in this demo. The Xeon Platinum 8180 is a $10,000 processor with a 205W rated TDP at its nominal clock speed of 2.50 GHz, and a Turbo Boost frequency of 3.80 GHz. The company achieved a 100% overclock to 5.00 GHz using extreme cooling, and considering that TDP is calculated against a processor's nominal clock (a clock speed that all cores are guaranteed to run at simultaneously), the company could have easily crossed 350W to 400W of power draw stabilizing the 5.00 GHz overclock. If a 205W TDP figures in the same sentence as a 2.50 GHz nominal clock, it doesn't bode well for the final product: it will either have a very high TDP (higher still taking into account its unlocked multiplier), or clock speeds that aren't much higher than those of the Xeon Platinum 8180.
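As a sanity check on that estimate, dynamic CMOS power scales roughly with frequency times voltage squared. The sketch below applies that first-order rule to the 8180's published 205W/2.50 GHz figures; the core voltages used are purely illustrative guesses on our part, not Intel specifications:

```python
# First-order CMOS dynamic power estimate: P ~ C * V^2 * f,
# so moving from (f1, V1) to (f2, V2) scales power by (f2/f1) * (V2/V1)^2.
# The voltage figures below are illustrative assumptions, not Intel specs.

def scaled_power(base_power_w, base_ghz, base_v, oc_ghz, oc_v):
    """Scale a baseline power figure to a new frequency/voltage point."""
    return base_power_w * (oc_ghz / base_ghz) * (oc_v / base_v) ** 2

# Xeon Platinum 8180: 205W TDP at its 2.50 GHz nominal clock (published).
# Assume ~0.90 V at the nominal clock and ~1.20 V for 5.00 GHz all-core.
estimate = scaled_power(205, 2.50, 0.90, 5.00, 1.20)
print(f"Estimated draw at 5.00 GHz: {estimate:.0f} W")  # roughly 730 W
```

Even before any voltage bump, doubling the clock alone lands the chip at 410W, so the 350W-400W figure above is a floor rather than a ceiling.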
Consider the AMD EPYC 7601 for a moment, the fastest 32-core 1P EPYC SKU. It ticks at 2.20 GHz, with a boost frequency of 3.20 GHz, but has a lower rated TDP of 180W. Now consider that AMD is building the 32-core Threadripper II out of more advanced 12 nm "Zen+" dies, and it becomes clear that the 24-core and 32-core Threadrippers are the stuff of nightmares for Gregory Bryant; not because AMD will make more money out of them than Intel makes out of its 28-core G-man in a football jersey, but because AMD's offering could be cheaper and more efficient, besides being fast. An overall superior halo product almost always has a PR spillover onto cheaper client-segment products across platforms; the client GPU industry has demonstrated that for the past two decades.
AMD is already selling 16 cores at $999, and beating Intel's $999 10-core i9-7900X in a variety of HEDT-relevant tasks. The company has already demonstrated that its 24-core Threadripper II is faster than Intel's $1,999 18-core i9-7980XE. It would surprise us if AMD priced this 24-core part at double its 16-core part, so it's more likely to end up cheaper than the i9-7980XE.
Intel cannot beat the 32-core Threadripper II on the X299/LGA2066 platform, because it has maxed out the number of cores the platform can pull. The Skylake HCC (high core count) silicon, deployed on 12-core, 14-core, 16-core, and 18-core LGA2066 processors, is already a motherboard designer's nightmare, with many vendors having launched special "XE" variants of their top motherboard models that offer acceptable overclocking headroom on these chips, thanks to beefed-up VRMs.
Coming up with a newer platform, namely revising the Purley 1P enterprise platform for the client segment, with its large LGA3647 socket and 6-channel memory interface, is the only direction in which Intel could have gone to take on the new wave of Threadrippers. AMD, on the other hand, has confirmed that its 24-core and 32-core Threadripper II chips are compatible with current socket TR4 motherboards based on the AMD X399 chipset. It's possible that the next wave of TR4 motherboards could have an 8-channel memory interface, wider than that of Intel's Skylake XCC silicon, with both forward and backward compatibility: current-generation Threadripper SKUs (at half the memory bus width) and future Threadripper chips.
PC enthusiasts nurse an expensive hobby, but the commercial success of the NVIDIA TITAN V graphics card (or lack thereof) shows that there are limits to how many enthusiasts have $3,000 to spend on a single component.
160 Comments on Intel's 28-core HEDT Processor a Panic Reaction to 32-core Threadripper
Now I have replaced both that Xeon machine and my i7 Haswell ES machine (3-3.2GHz) w/ my 1700X, and it's faster than both machines put together! I thought that it was using an 1800W Corsair IX?
Intel had two motherboard vendors design motherboards for their demo, so this was obviously planned long ahead.
We heard similar outrage when Intel launched Coffee Lake, which many fanboys claimed was released to steal attention from Ryzen. These fanboys need to get out of their basement and meet the real world.
-----
Meanwhile, there is actually interesting information we could be discussing instead, like:
Intel's website shows multiple packages and internal code names, including Cascade Lake, Ice Lake, Whiskey Lake, and a new Advanced Package?
The reality is you need a 2000W cooler to get ANY cherry-picked Platinum 28-core to 5GHz, and their "5GHz 6-core i7" will likely launch in incredibly limited numbers. However they will grab headlines and some mindshare...
As for the clock speeds it'll have, basing it on the Xeon Platinum 8180 is premature, as #1 you're assuming there's no process tweaking between now and when this product hits market. #2 There are key differences between the Xeon part and the HEDT part, which allow INTEL to switch off parts of the die that are not needed (like data links for 2/4-way MP systems), which in turn frees up headroom both in terms of power and clock frequencies. You, much like I or anyone else, can't know what these differences are, and as such it is imperative to pose more questions rather than make blanket statements. That may be true, but AMD has yet to produce a superior* product to INTEL's offerings with any Zen-based solution for the desktop which could have the aforementioned halo effect.
Moreover, what does superior mean? Does that mean cheaper, lower power consumption perhaps, better performance, or is it a combination? Exactly what is superior? For instance, INTEL still has the performance lead in some application types and Zen+ doesn't change that. In the editorials on this very website, the 8700K is still faster in games than the Ryzen 7 2700X, for instance. AVX is still significantly slower on Summit and Pinnacle Ridge than on CFL. So perhaps it's important to flesh out what we mean when we say superior. Zen/Zen+ could very well be superior for sure, but what does that mean exactly, and where?
Totally! There's just no beating the TR offerings in price vs. performance where productivity applications that scale with core, or rather thread, count are concerned. I can't know AMD's pricing so I won't speculate on it, but based on previous SKUs as you alluded to, the 24-core TR4 could very well cost less than the 7980XE, or at the very least the same price, especially if, as shown at Computex, it nails the 7980XE in Blender and such applications.
Ok, so this is where you may be getting it wrong. The issue with the 18-core 7980XE was not the VRM on any motherboard, but the VRM COOLING, and this is an important distinction. ASRock's XE variants of their X299 boards have the same VRM as the regular versions, but what's been changed is the VRM cooling solution, most of them simply adding more surface area to the heatsink, hence dissipating more heat and allowing the VRM temperature to fall back to optimal operating temps, maintaining performance. Even the GIGABYTE SOC Champion X299 (yes, it was never a retail board) exemplifies this: to overclock the 7980XE on this board, one needs to actively cool the VRM, and for extreme overclocking you need a container for the VRM to cool it with LN2. The VRM can tolerate the power draw, but the cooling solution cannot.
The 7980XE was not a motherboard designer's nightmare (BTW, there are teams within motherboard vendors; it's not a single monolith. There's a thermal solutions team separate from the BIOS team, etc.); the VRM on these boards could always handle the loads. It is the thermal design teams which created cooling solutions that could not adequately dissipate heat from the VRM. Ok, so we have to use caution here, as assumption or speculation can lead us off the path, as it has here.
HEDT platforms were always derived from server parts; that holds true for AMD and INTEL. AMD pushed out Zen on their one and only Zen-supporting server platform a year ahead of the desktop parts. At the time it was a fully realized configuration (32 cores, 128 PCIe lanes, and 8 channels). They literally used that very platform and socket for ThreadRipper, but gutted it where they deemed it necessary (half the memory channels, half the PCI-Express lanes, etc.). This has everything to do with cost and nothing to do with being nice or thoughtful to the end user. It's literally cheaper for AMD to support fewer sockets, which is just 2 for Zen.
So when you say, or suggest rather, that this is panic from INTEL (as per the editorial title) and the only direction INTEL could have taken (this last part is true, mind you), be careful not to make it seem as if it's ever been any different. It's always been the case that both INTEL and AMD adapt server parts for their HEDT or high-end platforms. The decision to use such a socket and configuration by INTEL has nothing to do with AMD. The reason they didn't use FCLGA2066 was literally because that socket was/is confined to 1S configurations and 18 cores; these CPUs don't have any UPI or QPI or additional memory channel links. That they used a different socket is only natural, as that's the only other avenue there is for core scalability. Intel segmenting their sockets in this manner and adapting them accordingly pre-dates AMD's Zen by years. Look back at LGA1567/1366/1156 from 2010 (three sockets back in 2010, much like it is today with three sockets: 1151/2066/3647).
AMD's forward compatibility has nothing to do with anything but cost considerations for them, not the user. The SP3r2 socket is massive, with over 4,000 contacts or pins, exactly like SP3 for EPYC, which has an identical pin count of 4094; it's their LGA3647 equivalent.
AMD has two sockets for cost purposes; INTEL has always had three (at least for the past 9 years, just like AMD in 2010 had AM3, C32 & G34, segmented and separated by core count, DRAM channels, interconnects, etc.). It would be odd to suggest that INTEL is panicking, using the socket argument, as they react to their one and only competitor in the space by leveraging the same socket segmentation they have had for nearly a decade, maybe more. Developing a CPU takes time, about 5 years supposedly from paper to product. I'd be interested to know, outside of leveraging existing technology, what INTEL or any other company in this position could have done that would not be viewed as "panic". That is, what does a rational reaction look like to you in a material way (socket choice, platform, etc.) for INTEL, barring the 5GHz kerfuffle?
I am not defending the 5GHz snafu, as it wasn't necessary, proved nothing, and created unnecessary controversy. The power of the CPU could have easily been demoed at a more realistic clock speed (it would still likely be faster in CBR15 than anything else we have or will have for 2018).
A CPU and GPU cost $1000 combined if you want something decent... That has to be the upper limit...
A decent PC is almost pure sticker shock these days...
Intel needs to do what they did in the 775 days...keep a friggin socket for more than 1-2 gen.
I'm on my 4th "core I" version and I only switched back from AMD briefly in 2014...then went AMD briefly last year and back to Intel....4 years 4 platforms...2 ddr3, 2ddr4
But then again AMD had AM3+, FM1, FM2/+ at the same time, so technically it's the same... I've had 4 AMD platforms and 4 Intel platforms since 2010ish
I'm surprised that PSU socket and cable is enough for it, too. We must be talking around 15A at 240V here
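The arithmetic behind that estimate is just I = P / V; the ~3.6 kW total implied by the comment is the commenter's guess, not a measurement:

```python
# Back-of-the-envelope mains current draw: I = P / V.
# The 3600 W total is the figure implied by the comment (an assumption),
# not a measured value for the demo system.

def mains_current_amps(power_watts, mains_volts):
    """Current drawn from the wall at a given power and mains voltage."""
    return power_watts / mains_volts

print(mains_current_amps(3600, 240))  # 15.0 A, the ~15A-at-240V estimate
print(mains_current_amps(1800, 240))  # 7.5 A, an 1800W PSU alone
```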
Alas it's true that Intel is the only game in town for many businesses, but it's not an impossible perception to change if AMD consistently outperforms, plus undercuts Intel on price, even by a bit. Over time they will gain sales. Difficult to do though, and Intel won't sit still for it of course. :ohwell:
I can just imagine his electricity bill. :laugh:
Grow up, mate, none of that fanboyism talk here..
Skip to 57 minutes and 22 seconds.
As you can see, the entire damn desk is wheeled onto stage, nothing is visible except the two towers on top of it, and there's no projected image of the system itself.
The only information anyone in that room got about that system's cooling unit, was what they were able to see from their seats, many feet away. They'd have plausibly been able to see the illuminated reservoir, and the RGB RAM, but not a whole lot else about the system. They absolutely would NOT have been able to see the "huge refrigeration unit sitting next to it" because it was underneath the damned desk and nobody saw it until:
1 - Someone got suspicious about that big ol' weird mess of cables coming out the back
2 - Someone tracked the thing down on the computex floor and got pictures of the chiller.
Also note the wording from the transcript:
"Sometimes this benchmark takes minutes to run, but you can see when we have 28 cores running at 5GHz, you'll fly through it, now look at that number, 7334 that is an incredible number. If you look that up tonight you'll see that, it's not the fastest in the world, there are a few that are faster, but they all require 2 or 4 sockets to get what this single socket system can do with 28 cores at 5 GHz."
"Now what's amazing is there's that tradeoff, this actually gives you 5GHz, you can get single-threaded performance frequency, not, you know, not having to sacrifice that for this kind of multithreaded performance, so you got kinda the best of both worlds"
That second sentence, with the "not having to sacrifice" and "you got the best of both worlds"? That's not the kind of thing you say when demonstrating a system you know is massively overclocked. That's the kind of language someone uses when they're trying to justify an actual, achievable, regular use case to a potential customer or consumer.
Intel absolutely, 100% in my opinion, knew exactly what they were doing here and it was NOT "forgetting" to mention an overclock. It was intentionally hiding that fact to try and, as I've said, bank on the press repeating it, and making an excuse later to explain it away to the comparatively small number of more savvy folk that care to dig deeper and find out it's bullshit.
I'm not defending Intel; like I already said, if they are going to demo a product they have just premiered in the same presentation, they should demo it as it will ship. They shouldn't tweak it and overclock it.
However, there are certainly enough clues that there was something fishy going on.
Are you going to:
1 - Get super suspicious and potentially tank your site's traffic for the story by not publishing it until you get more info by tracking down the system on the show floor and inspecting it, thus missing your window for clicks and traffic when the story breaks on every other site online while you wait
or are you
2 - Just going to publish the story and start asking your questions while it rakes in pageviews and ad revenue that allows your job to exist? No, it's just the reality of tech journalism: you need to publish news first if you want to get all the clicks that come with being the only source for a story.