Thursday, May 9th 2019

Intel Switches to a "Data Center First" Strategy with 7nm

Intel has traditionally released new CPU microarchitectures and new silicon fabrication nodes with the client segment first, and graduated them to the enterprise segment once both showed some degree of maturity. With its homebrew 7 nanometer silicon fabrication process, which takes flight in 2021, Intel will flip its roadmap execution strategy by going "Data Center First." Speaking at the 2019 Investors Day summit, Intel SVP and GM of the Data Center Group Navin Shenoy revealed that the first product built on Intel's 7 nm process will be a GPGPU accelerator for the data center, derived from the Xe architecture, followed closely by a new server CPU. Both products fall under Shenoy's group. One is a competitor to the likes of NVIDIA Tesla and AMD Radeon Instinct, while the other is a Xeon processor competing with AMD EPYC.

Shenoy explained why, within his group, the GPGPU product was prioritized over the server CPU. It comes down to the redundancy of GPU silicon, or more specifically, the higher potential to harvest partially defective dies compared to CPUs. A GPU has a larger number of indivisible components that can be disabled if found non-functional during quality assurance, and these harvested dies can be used to carve out variants of the main product. An example of this is NVIDIA carving out the GeForce GTX 1070 (1,920 CUDA cores) from the GP104 silicon, which physically has 2,560 CUDA cores. The first manufacturing runs of the GPGPU will give the foundry valuable insight into how the node is behaving, so it can be refined and matured in time for the server CPU. With 10 nm, however, Intel is sticking to the client-first model, rolling out the "Ice Lake" processor towards the end of 2019. Within the Client Computing Group, Intel has flipped its roadmap execution such that mobile (notebook) CPUs take precedence over desktop ones.
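To make the harvesting argument concrete, below is a minimal back-of-the-envelope sketch in Python. It is not anything Intel presented; it assumes a simple Poisson defect model with independent, equal-area blocks, and uses entirely hypothetical values for defect density, die area, and block counts, purely to illustrate why a die with many redundant units can ship as a cut-down SKU far more often than one whose every core must work.

# Toy yield model -- illustrative only, all parameters are hypothetical.
import math

def p_block_good(defect_density_per_cm2, block_area_cm2):
    """Poisson yield for one block: probability it catches zero defects."""
    return math.exp(-defect_density_per_cm2 * block_area_cm2)

def sellable_yield(total_blocks, min_good, defect_density_per_cm2, die_area_cm2):
    """Probability that at least min_good of total_blocks equal-area blocks survive."""
    p = p_block_good(defect_density_per_cm2, die_area_cm2 / total_blocks)
    return sum(math.comb(total_blocks, k) * p**k * (1 - p)**(total_blocks - k)
               for k in range(min_good, total_blocks + 1))

D, AREA = 0.5, 3.0  # hypothetical: 0.5 defects/cm^2 on a 300 mm^2 die

# "GPU-like" die: 40 shader blocks, and a cut-down SKU needs only 30 of them.
gpu = sellable_yield(total_blocks=40, min_good=30,
                     defect_density_per_cm2=D, die_area_cm2=AREA)
# "CPU-like" die: 8 cores, and the target SKU requires all 8 to work.
cpu = sellable_yield(total_blocks=8, min_good=8,
                     defect_density_per_cm2=D, die_area_cm2=AREA)

print(f"GPU-like die, >=30/40 blocks usable: {gpu:.1%}")  # close to 100%
print(f"CPU-like die, 8/8 cores required:    {cpu:.1%}")  # roughly one in four or five

With these made-up numbers, the GPU-like die is sellable in cut-down form almost every time, while the fully-enabled CPU-like die yields only around a fifth to a quarter of the time, which is the intuition behind breaking in a new node with a harvest-friendly GPGPU before the server CPU.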
Source: Intel

13 Comments on Intel Switches to a "Data Center First" Strategy with 7nm

#1
Vayra86
More slides for the God of Air and Empty Promises!
Posted on Reply
#2
eidairaman1
The Exiled Airman
Gaming is the very last thing on their mind...
Posted on Reply
#3
bug
Interesting twist. For the past few generations, not only has Intel debuted new nodes in the client segment, but they actually started with mobile chips (read: smaller dies), until the process matured enough.
Posted on Reply
#4
sutyi
bugInteresting twist. For the past few generations, not only has Intel debuted new nodes in the client segment, but they actually started with mobile chips (read: smaller dies), until the process matured enough.
Not really a "twist" if you think about it. Intel will be out-gunned for a long time in the HPC segment regarding node availability and perf/watt... and that is where the big money is.
They are willing to "sacrifice" their desktop market to stop AMD from gaining ground where it truly matters.

10nm mobile chips are "here"; don't know when you'll see a desktop part on that node though. Probably not very soon.
Posted on Reply
#5
Vayra86
bugInteresting twist. For the past few generations, not only has Intel debuted new nodes in the client segment, but they actually started with mobile chips (read: smaller dies), until the process matured enough.
I think it's plausible 7nm will be TSMC anyway, and they may largely bypass 10nm and give it only to MSDT and mobile. This would be an 'out' for them, and they would be able to push 7nm at least more quickly than they could have done on their own.

Might also be less costly than continued refinement of 10nm beyond the clock bumps they already showed us on 14nm. I'm seeing that pattern anyway. 10nm is already gaining pluses...
Posted on Reply
#6
kapone32
I think Intel did this to try to mitigate their losses in the Data Center vs AMD's new offerings that are cheaper and are gaining serious traction in adoption.
Posted on Reply
#7
TheoneandonlyMrK
bugInteresting twist. For the past few generations, not only has Intel debuted new nodes in the client segment, but they actually started with mobile chips (read: smaller dies), until the process matured enough.
What new nodes? They've been on 14nm for 7 years, or do you mean the 10nm no-GPU node no one got in a product?

@Vayra86 bang on, 10nm is a dead duck; it's on life support as is and clearly isn't getting much further investment.
They couldn't fight 7-5nm with 10nm in PR terms, with the consumer masses not knowing about the disparity between foundries.

Interesting you mention TSMC; Intel don't, they said "their" 7nm node, and also said it will be used on the Xe GPU first, which might cross off any possibility of using TSMC's or Samsung's foundry for the 7nm GPU. Might not though, since it's not the most elucidating PR outing.
Posted on Reply
#8
Vayra86
theoneandonlymrkInteresting you mention TSMC; Intel don't, they said "their" 7nm node, and also said it will be used on the Xe GPU first
That's just it. 10nm was 'on track' as well in Intel announcements ;) I'm having a hard time believing the whole story, I think Intel is really looking for an out without losing face all too much. With client/consumer only 10nm they can say 'look, we did it' and move on. They can then recoup some of the expenses by dragging it on for a few iterations, simply by applying the power/clock/turbo trick again in three chapters like 7/8/9th gen.

And perhaps their 7nm is easier and 'on track' as well, who knows, but one thing's for certain: their '2021' 7nm is going to have to be a lot better than TSMC's current version of it, which to me reads a bit like 10nm all over again.

I'm not convinced this is a PR angle to be fair, at least not in the sense that 10nm 'because higher number' can't stand up to 7nm. In the end performance matters. In reviews and for end users performance and perf/watt matter. Not the node. The node is just a means. I mean look at Radeon VII. First 7nm GPU and nobody is saying 'must have because 7nm and Nvidia's higher!'.
Posted on Reply
#9
TheoneandonlyMrK
Vayra86That's just it. 10nm was 'on track' as well in Intel announcements ;) I'm having a hard time believing the whole story, I think Intel is really looking for an out without losing face all too much. With client/consumer only 10nm they can say 'look, we did it' and move on. They can then recoup some of the expenses by dragging it on for a few iterations, simply by applying the power/clock/turbo trick again in three chapters like 7/8/9th gen.

And perhaps their 7nm is easier and 'on track' as well, who knows, but one thing's for certain: their '2021' 7nm is going to have to be a lot better than TSMC's current version of it, which to me reads a bit like 10nm all over again.

I'm not convinced this is a PR angle to be fair, at least not in the sense that 10nm 'because higher number' can't stand up to 7nm. In the end performance matters. In reviews and for end users performance and perf/watt matter. Not the node. The node is just a means. I mean look at Radeon VII. First 7nm GPU and nobody is saying 'must have because 7nm and Nvidia's higher!'.
I agree and said similar a while ago in another thread, but as for Intel's PR angle, which I think is as I said and in line with all their prior ones, it will sedate the investors but is light and airy enough for a nudge here or there.
As for Vega 7 vs Nvidia, I would say the latest supercomputer announcement puts that to bed regarding its uses; it was always a pro compute part, and adapted for gaming it was never likely to outperform Turing. However, I wouldn't mind knowing how close a full-CU Vega 20 part would get.
Don't believe the total rubbish that Vega 56 outperforms 64, that's nonsense, though I'd agree the extra CUs are not often that important.
Posted on Reply
#10
Vayra86
theoneandonlymrkDon't believe the total rubbish that Vega 56 outperforms 64, that's nonsense, though I'd agree the extra CUs are not often that important.
Ehehe I don't even pay attention to all those AMD GPU miracle stories man. What matters is how they work out of the box, to me. That is what's guaranteed.
Posted on Reply
#11
TheoneandonlyMrK
Vayra86Ehehe I don't even pay attention to all those AMD GPU miracle stories man. What matters is how they work out of the box, to me. That is what's guaranteed.
Fair enough, many are the same; for me it's always what I can make it do that matters, and no doubt Nvidia are doing much better with Boost 3 and hardware optimization in general.
AMD reference cards always feel like they give you enough to do more but hold back a bit for reliability or the next respin; for example, wtf were those blower coolers about at all, they were not able to achieve much with them.

But on topic, I hope you're right about the Xe GPU, because I have no faith in them making it a winning ASIC on their new 7nm node.
Posted on Reply
#12
remixedcat
kapone32I think Intel did this to try to mitigate their losses in the Data Center vs AMD's new offerings that are cheaper and are gaining serious traction in adoption.
Epyc?? I hear it's doing well.
Posted on Reply
#13
kapone32
remixedcatEpyc?? I hear it's doing well.
That is right, and they will be releasing 64-core CPUs later this year too.
Posted on Reply