Wednesday, July 17th 2019

Intel's CEO Blames 10 nm Delay on being "Too Aggressive"

During Fortune's Brainstorm Tech conference in Aspen, Colorado, Intel CEO Bob Swan took the stage to talk about where the company stands today, where it is headed, and how it plans to evolve. Particular focus was put on Intel's shift from "PC centric" to "data centric," and the struggles it encountered along the way.

When asked about the demise of Moore's Law, however, Swan pointed to how aggressively Intel approached the challenge. Instead of the usual doubling of transistor density every two years, he said, Intel has always targeted even greater density gains in order to stay the leader in the business.
With 10 nm, Intel targeted a density improvement of as much as 2.7x over its 14 nm transistors. Swan attributed the five-year delay in delivering the 10 nm node to "too aggressive innovation," adding that "... at a time it gets harder and harder, we set more aggressive goal..." and that this is the main reason for the late delivery. He also said that this time Intel will stick to a 2x density improvement over two years with the company's 7 nm node, which is already in development and is expected to launch in two years.
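For a rough sense of scale, the 2.7x target can be compared to a plain 2x step with some back-of-envelope arithmetic; the ~37.5 MTr/mm² figure used below for Intel's 14 nm node is an approximate public estimate, not something Swan cited.

```python
# Back-of-envelope comparison of a "regular" 2x density step vs Intel's 2.7x target.
# The 14 nm density figure is an approximate public estimate, used here only for scale.

intel_14nm_density = 37.5                      # ~million transistors per mm^2 (approximate)
regular_step_10nm  = intel_14nm_density * 2.0  # the usual Moore's Law-style doubling
intel_target_10nm  = intel_14nm_density * 2.7  # the more aggressive goal Swan describes

print(f"2.0x step:   ~{regular_step_10nm:.0f} MTr/mm^2")
print(f"2.7x target: ~{intel_target_10nm:.0f} MTr/mm^2")
print(f"Overshoot vs a plain 2x step: {intel_target_10nm / regular_step_10nm - 1:.0%}")
```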

When talking about the future of Intel, Swan noted that the company currently holds about 30% of the "silicon market," and said Intel is trying to diversify its offerings from mainly CPUs and FPGAs to everything that requires big compute performance, in order to capture the rest of that market. He noted that artificial intelligence is currently driving strong demand for such performance, and that autonomous vehicles are expected to be a big source of revenue for Intel in the future. Through acquisitions like Mobileye, Intel plans to serve that market and increase the company's value.

You can listen to the talk here.

111 Comments on Intel's CEO Blames 10 nm Delay on being "Too Aggressive"

#76
Unregistered
Vya DomusWho is mainstream and how do you know what they need ?
Basic consumers.
Vya DomusIf they don't need more than 4 cores, what the hell is Intel doing with CPUs like the 9900K, and god forbid we mention the 16-core 3950X.
Core wars. AMD has always tried to drive up core counts; sure, it's a way of improving multi-thread performance, but does the 3950X even need to exist? At that stage you'd be better off picking up an HEDT platform rather than sticking these kinds of SKUs onto the mainstream socket, where hardly anyone will be able to saturate them, especially gamers.
If Intel doesn't push up core counts to compete against Ryzen, AMD will hold the multi-thread performance lead, meaning that Intel is no longer a "performance at a premium" brand, which is quite literally the point of Intel products.
#77
bug
Vya DomusWho is mainstream and how do you know what they need ?
This is extremely difficult information to come by, but let me give it a shot: store.steampowered.com/hwsurvey/cpus/
Mainstream is $200 or less. In the US or EU people can afford to spend more, but in many other countries even $200 can be a stretch (and those countries don't get US pricing to begin with).
#78
Vya Domus
Xx Tek Tip xXBasic consumers.
Basic consumers as opposed to... advanced consumers? You're avoiding properly defining what's mainstream and what isn't, which is not surprising because there isn't a clear dividing line. If it's out there and it isn't prohibitively expensive or particularly difficult for the average person to use, then it's pretty mainstream. You can play Minecraft on your shiny new 3900X and 9900K pretty easily, can you not? That seems pretty mainstream to me.

"a dual core is enough", bet you've heard that one some years ago, to go from that to "the mainstream isn't going to need more than 4 cores" means things must be continuously changing. And that sure as hell isn't stopping now.

If you can buy a new 8-core for 300 bucks, surprise surprise, that's a mainstream product. What exactly you do with it doesn't even need to be factored in.
#79
Unregistered
Vya Domus"a dual core is enough", bet you've heard that one some years ago, to go from that to "the mainstream isn't going to need more than 4 cores" means things must be continuously changing. And that sure as hell isn't stopping now.
Many years ago it took a very long time for games to even move from running on dual cores to making use of quad cores, so it'll be quite a while before all games start using more cores as standard.
#80
Vayra86
Vya DomusYour point being? Here's a fun fact: in 2011 Intel released 10-core Nehalem-based Xeons; that's an architecture dating back to 2008, may I remind you, and it wasn't even that big, relatively speaking, at just over 500 mm^2.

It's painfully obvious they had the capability to deliver more cores to the mainstream market but chose not to. To me as a consumer, the fact that Intel is better at extracting money means jack shit, all I see is a company that has been sandbagging for almost a decade.
The point being that, quite simply, high core counts are not new and AMD didn't pioneer them quite as much as people like to believe, at least in the MSDT segment. After all, the best they could produce until Zen was, realistically, a 6-core Phenom.

What came next was a hopelessly inefficient module that was supposed to act like two cores but in fact performed worse than a single Intel core. It provides some perspective as to what the market 'demanded' and what was being supplied. Those 6 cores may have been relatively popular, but still people were just fine with quads afterwards for a decade. If they weren't, they would have bought Intel HEDT.

That is the point. More cores has nothing to do with affordability, and everything to do with market demand. It simply wasn't there, despite the wishes of a few enthusiasts.

You then talk about price, and so does @bug, but I really don't. Price is irrelevant if there is software demanding higher performance or core counts that is considered mainstream. It's also a bit of a chicken-and-egg situation, but even so: if the masses had demanded a specific workload to be feasible, you can rest assured there would have been a push towards it, and there never was. There is certainly a specific set of use cases that can be identified as 'mainstream', and it has nothing to do with price. PCs have been priced all along the scale and were capable of the same tasks ten years ago as they are now. The overall theme is that 'we' don't do a whole lot with our computers that warrants a high core count. The majority of service hours for most CPUs is idle time. Even heavier tasks like some encoding or rendering work can be done just fine on quads; people have been doing that for years. The worst case scenario was leaving the PC on overnight, which you did anyway to download from Kazaa... :p

Higher core counts simply weren't viable wrt market demand. And that is why Intel had HEDT and those core counts never trickled down.
Vya DomusIf you can buy a new 8 core with 300 bucks, surprise surprise, that's a mainstream product. What exactly you do with it doesn't even need to be factored in.
Sure, but if you zoom out a bit more, in the long run, will that 8-core CPU be the biggest seller of the product stack, or is it going to be a niche entry within it? That is what really tells the story here. Companies always try to make us 'buy up', so it's not surprising there are parts in the stack that exist to pull more people towards them. It's the same trick Nvidia deploys with GPUs and model numbers. What happens? People complain a bit and then usually resort to buying a lower model number priced similarly to what they feel is acceptable. A small percentage actually buys up.
#81
bug
Vayra86The point being that, quite simply, high core counts are not new and AMD didn't pioneer them quite as much as people like to believe, at least in the MSDT segment. After all, the best they could produce until Zen was, realistically, a 6-core Phenom.

What came next was a hopelessly inefficient module that was supposed to act like two cores but in fact performed worse than a single Intel core. It provides some perspective as to what the market 'demanded' and what was being supplied. Those 6 cores may have been relatively popular, but still people were just fine with quads afterwards for a decade. If they weren't, they would have bought Intel HEDT.

That is the point. More cores has nothing to do with affordability, and everything to do with market demand. It simply wasn't there, despite the wishes of a few enthusiasts.
What he said. If users wanted more cores and Intel didn't deliver, either Intel's sales would have plummeted or we would have seen a surge in HEDT sales.
#82
R0H1T
bugWhat he said. If users wanted more cores and Intel didn't deliver, either Intel's sales would have plummeted or we would have seen a surge in HEDT sales.
You obviously left out the third, most critical option: how many people even know of AMD, or how great an alternative they are to Intel? Intel Inside still sells even if it's horrible VFM :shadedshu:
#83
HTC
Intel's 10nm woes come from two different issues:

1 - their 14 nm CPU clocks have become very high after many successive refinements, and they are unable to reach similar clocks on 10 nm
2 - yield issues that aggravate problem #1

Intel can make 10nm CPUs right now but they clock much lower than their 14nm CPUs, meaning not only would it not be an upgrade, it would actually be a downgrade in everything except power consumption.
Only the yield issues can be attributed to "being too aggressive": scaling down some parts of the CPU brings problems they have yet to sort out.

AMD's approach of not putting the IO die on 7 nm like the CCDs seems to actually be paying off quite handsomely: from what I understand, most of what's in the IO die doesn't scale well when shrunk to a smaller process, so opting to keep it on a larger process than the CCDs makes a lot of sense.
#84
bug
HTCIntel's 10nm woes come from two different issues:

1 - their 14 nm CPU clocks have become very high after many successive refinements, and they are unable to reach similar clocks on 10 nm
2 - yield issues that aggravate problem #1

Intel can make 10nm CPUs right now but they clock much lower than their 14nm CPUs, meaning not only would it not be an upgrade, it would actually be a downgrade in everything except power consumption.
Only the yield issues can be attributed to "being too aggressive": scaling down some parts of the CPU brings problems they have yet to sort out.

AMD's approach of not putting the IO die on 7 nm like the CCDs seems to actually be paying off quite handsomely: from what I understand, most of what's in the IO die doesn't scale well when shrunk to a smaller process, so opting to keep it on a larger process than the CCDs makes a lot of sense.
Those are just symptoms. The root cause is that they went for quadruple patterning instead of triple, thinking "yeah, this should be like 33% harder". It turned out it was way harder than that, leading to the problems you mentioned.
#85
kings
WavetrexHaving to pay more just for the CPU than for an entire quad-core computer was simply insane.
That is no "choice", it's like being mugged at gun-point on the street is a "choice"...

But anyway, I'm glad those days are over.
Finally there is a choice.
Phew !
Depends on the CPU; my 6-core 5820K cost 350€ in 2014...

3 years later, AMD brought the same cores and performance to the 220€~250€ mark with the R5 1600/X... perfectly normal after 3 years!
#86
efikkan
londisteWould Sunny Cove cores be viable on 14nm? I mean literally, in terms of transistors, die size as well as heat.

Sunny Cove's improvements over Skylake pretty much mirror the changes AMD made with Zen 2, so the increase in transistors needed to implement it should be roughly similar as well. Now, while manufacturing the dies does not seem to be a big problem (in terms of yields), heat at 14 nm definitely is a problem; the 9900K is evidence enough. Now imagine a 40% larger die with similar power density...
I don't know the transistor count either, but I'm fairly sure it would be, without "major" sacrifices. If needed, you could drop AVX-512 and the improved iGPU, which account for most of the new transistors. Cache and IO, which are the most sensitive parts, would of course need to be recalibrated/redesigned.

The 8-core Coffee Lake is ~174 mm² including the iGPU, and if you consider how much larger the Skylake-X/SP cores are, I think transistors would be the least of the worries. I would be more concerned about whether the cache improvements would work on an older node, because there, timing and distance are crucial.

If you could do this and achieve a ~15% IPC gain, even at a ~4.5 GHz boost it would outperform anything we have today (per core).
londisteThere seems to be more and more evidence that IO does not scale well and relatively less dense IO die seems to support that notion. When it comes to cores themselves, 7nm is more than twice as dense as 14nm - 2.4x, give or take. Zen2 has about 40% more transistors in cores compared to Zen+ (and this might be a conservative estimate given changes with IO)
I assume this is a conscious decision, since the tolerance for IO is lower; one bad wire and the whole channel is dead. At least IO isn't that bad in terms of energy consumption.
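As a purely illustrative aside, not something either poster computed, the rough figures quoted in this exchange (the ~174 mm² Coffee Lake die, ~40% more transistors for the newer cores, and the ~2.4x density advantage of 7 nm over 14 nm) can be run through a quick back-of-envelope estimate:

```python
# Illustrative arithmetic only, using the rough figures quoted in this thread:
#   ~174 mm^2 for the 8-core Coffee Lake die, ~40% more transistors for a
#   Sunny Cove/Zen 2-class core, and ~2.4x core density for 7 nm vs 14 nm.

coffee_lake_die   = 174.0  # mm^2, 8-core Coffee Lake incl. iGPU (figure quoted above)
transistor_growth = 1.4    # ~40% more transistors for the newer core design
density_gain_7nm  = 2.4    # 7 nm vs 14 nm core density, per the quoted estimate

# Hypothetical Sunny Cove-class die kept on 14 nm: more transistors, same density.
backport_14nm = coffee_lake_die * transistor_growth
# The same transistor budget on a ~2.4x denser node instead shrinks the die.
shrunk_die = coffee_lake_die * transistor_growth / density_gain_7nm

print(f"Hypothetical 14 nm backport: ~{backport_14nm:.0f} mm^2")
print(f"Same design on a 2.4x denser node: ~{shrunk_die:.0f} mm^2")
```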
#87
Juankato1987
KonceptzMore like "we got lazy because we had no competition, and people don't need more than 4 cores"
That's Sad but True....
#88
londiste
efikkanI don't know the transistor count either, but I'm fairly sure it would be, without "major" sacrifices. If needed, you could drop AVX-512 and the improved iGPU, which account for most of the new transistors. Cache and IO, which are the most sensitive parts, would of course need to be recalibrated/redesigned.

The 8-core Coffee Lake is ~174 mm² including the iGPU, and if you consider how much larger the Skylake-X/SP cores are, I think transistors would be the least of the worries. I would be more concerned about whether the cache improvements would work on an older node, because there, timing and distance are crucial.

If you could do this and achieve a ~15% IPC gain, even at a ~4.5 GHz boost it would outperform anything we have today (per core).
I do not think transistors are the main problem - power would be.

Aren't caches big users of transistors at the sizes we are currently looking at? Or would Intel be OK with less L3 cache? Dropping the iGPU (best estimate at this point is ~40 mm²) would help in terms of die real estate but not in power. In the use cases we care about, the iGPU is as good as dead silicon: if it is not turned off completely, it is powered down and power gated. The same area of actively used transistors would contribute to energy consumption. I remain doubtful.

The other side of the decision that we do not know is how 10nm progression looks from inside Intel. It was supposed to be ready in 2016 and mass production in 2017. If during these years the messaging from manufacturing process team was "any day now", this has a huge impact on whether or not to backport an architecture to an older node and resolve the issues that inevitably turn up. That backport takes about a year if rushed. I don't envy the people who had to make these decisions.
efikkanI assume this is a conscious decision, since the tolerance for IO is lower; one bad wire and the whole channel is dead. At least IO isn't that bad in terms of energy consumption.
There were some comments from Intel when it was revealed that 10 nm was a fail. Also, I think the AMD guys did say in an interview that IO does not scale well and might even be problematic on smaller nodes. At the same time, you are right: IO is not that significant in terms of energy consumption. IO and additional controllers consume power when used and are easy to power gate when not in use. IF/UPI are a bit different here, but that is a necessary evil and manageable.
#89
efikkan
londisteI do not think transistors are the main problem - power would be.
Yes and no. Right now Coffee Lake boosts incredibly high to gain a little extra performance; those last few hundred MHz cause a lot of heat, and power density is more of a problem than total power.
Using the IPC gains to offset the clocks would help deal with the power issues.
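To put a rough number on that trade-off (illustrative only: single-thread throughput scales very roughly as IPC × frequency, and the 5.0 GHz baseline below is an assumed Coffee Lake-style boost clock):

```python
# Rough "IPC x clock" sanity check; assumes IPC gains translate directly to
# throughput, which ignores memory behaviour and other real-world effects.

baseline_ipc, baseline_ghz = 1.00, 5.0  # e.g. a Coffee Lake-class core at a 5.0 GHz boost
newcore_ipc,  newcore_ghz  = 1.15, 4.5  # ~15% IPC gain at a lower ~4.5 GHz boost

ratio = (newcore_ipc * newcore_ghz) / (baseline_ipc * baseline_ghz)
print(f"Relative per-core performance: ~{ratio:.2f}x")  # ~1.03x, slightly ahead despite lower clocks
```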
londisteAren't caches big users of transistors at the sizes we are currently looking at? Or would Intel be OK with less L3 cache?
If you compare the Sandy Bridge and Coffee Lake die shots, you'll see that the node shrinks have turned the L3 cache from ~33% of the die size down to ~20% of the die. A 4-core Sandy Bridge used to be ~216 mm², while an 8-core Coffee Lake is ~174 mm². Sunny Cove increases L1D cache by 50% and L2 cache by 100%, but doesn't change L3 cache to my knowledge. L1 and L2 are generally more expensive than L3, but it also depends on how the banks and the associativity are configured.
While L3 is actually located on the "outside" of the die, L2 and especially L1 are tightly integrated into the pipeline. As you probably know, L1 is split in two for a reason: L1I is located close to the front-end of the CPU, while L1D is next to the execution units. These are the most sensitive parts of any design, as the distances affect timing and clocks. Shrinking a design is usually fairly simple, but going the other way may cause challenges if the architecture didn't take that node's constraints into account when it was designed.
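Put as rough numbers (illustrative only; the 8 MB and 16 MB L3 capacities are assumed from the flagship mainstream SKUs of each generation, e.g. the 2600K and 9900K), those area fractions work out to something like this:

```python
# Rough L3 area per MB, from the approximate die sizes and area fractions above.
# L3 capacities (8 MB / 16 MB) are assumed from the flagship SKUs of each generation.

chips = [
    ("4-core Sandy Bridge", 216.0, 0.33, 8),   # die mm^2, L3 fraction of die, L3 MB
    ("8-core Coffee Lake",  174.0, 0.20, 16),
]

for name, die_mm2, l3_fraction, l3_mb in chips:
    l3_area = die_mm2 * l3_fraction
    print(f"{name}: ~{l3_area:.0f} mm^2 of L3, ~{l3_area / l3_mb:.1f} mm^2 per MB")
```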
londisteDropping iGPU (best estimate at this point is ~40mm^2) would help in terms of die real estate but not in power. In use cases we care about iGPU is as good as dead silicon. If it is not turned off completely it is powered down and power gated. Same area for actively used transistors would contribute to energy consumption. I remain doubtful.
Dropping the iGPU completely would be an option, but I was thinking more of keeping the Coffee Lake iGPU rather than upgrading to the much larger Ice Lake iGPU, which would be unrealistic on 14 nm.
But personally I think iGPUs on anything beyond 6 cores are a waste anyway. The only reason Intel does this is that OEMs want it.
londisteThe other side of the decision that we do not know is how 10nm progression looks from inside Intel. It was supposed to be ready in 2016 and mass production in 2017. If during these years the messaging from manufacturing process team was "any day now", this has a huge impact on whether or not to backport an architecture to an older node and resolve the issues that inevitably turn up. That backport takes about a year if rushed. I don't envy the people who had to make these decisions.
I think more of the poor engineers who have to deal with the constantly changing plans.
I assume that their stance on the readiness of 10nm have been that "these problems can't continue much longer now".
Personally I think a backport would take >1 year for the development until tapeout, then another year for tweaking.
#90
lexluthermiester
efikkanDon't forget that it was Intel's repeated "screwups" on 10nm which allowed AMD to catch up.
What "screw ups"? Citation please?
#91
Gungar
R-T-BYou don't have to be. You can just look at the codenames and IPC differences and infer that significant changes went on under the hood between Kentsfield and the Skylake series.

Wikichip corroborates this, for the engineering inclined.

i7-990x Nehalem/Westmere says hello.

IPC, man... they can't be identical chips if it changed. If they weren't innovating on the cores, there'd be more of them, rest assured.
I am not talking about IPC differences, I am talking about the fact that maybe hundreds of engineers have been working on increasing IPC but haven't managed any increase. It doesn't mean they are on a beach in Hawaii waiting for AMD to catch up.
#93
bug
lexluthermiesterAhem..
www.techspot.com/article/1876-4ghz-ryzen-3rd-gen-vs-core-i9/
You were saying?
Thanks for that. HardOCP used to do those, I was actually looking for something like that.

And it's pretty funny when people slam Intel for stagnating in the raw HP race and praise AMD for Zen, while at the same time failing to realize Zen only matches what they say has been stagnant for almost a decade.
Sure, Zen gives us cores galore, but few people actually need them. The good thing is Zen gives us those cores for very little $$$, so if we can get some legroom (even if it's not that useful atm) without breaking the bank, why not?
#94
lexluthermiester
bugThanks for that. HardOCP used to do those, I was actually looking for something like that.

And it's pretty funny when people slam Intel for stagnating in the raw HP race and praise AMD for Zen, while at the same time failing to realize Zen only matches what they say has been stagnant for almost a decade.
Sure, Zen gives us cores galore, but few people actually need them. The good thing is Zen gives us those cores for very little $$$, so if we can get some legroom (even if it's not that useful atm) without breaking the bank, why not?
That article showed that not only is Ryzen 3 IPC king currently, but also that they have made great strides in that area as the Ryzen 3 tested shows significant improvement over Ryzen 2, which itself shows improvement over Ryzen 1. I love articles like that which show definitively how things really are. However, Intel does still clock and OC higher..
#95
bug
lexluthermiesterThat article showed that not only is Ryzen 3 IPC king currently, but also that they have made great strides in that area as the Ryzen 3 tested shows significant improvement over Ryzen 2, which itself shows improvement over Ryzen 1. I love articles like that which show definitively how things really are. However, Intel does still clock and OC higher..
Are we looking at the same benchmarks? The ones you posted show Zen2 ~10% ahead in Cinebench, less than 10% in other rendering benchmarks and losing (not by much) or at best tying Intel in gaming.
#96
lexluthermiester
bugAre we looking at the same benchmarks? The ones you posted show Zen2 ~10% ahead in Cinebench, less than 10% in other rendering benchmarks and losing (not by much) or at best tying Intel in gaming.
Yeah, you need to take a closer look really quick. Those tests showed rather clearly that Ryzen 3 improves greatly over previous-gen Ryzen offerings and either matches or outperforms the 9900K in most of those benchmarks. There are a few where the 9900K edges out the 3900X, but not by much. Based solely on the performance numbers, Ryzen 3 wins the day on IPC.

@W1zzard Would love to see a testing session like the one above to see your handiwork at play and to get a second set of examples in this area. It'll make a great article!
#97
Freebird
Vya DomusWho is mainstream and how do you know what they need ? If they don't need more than 4 cores, what the hell is Intel doing with CPUs like the 9900K, and god forbid we mention the 16-core 3950X.
Yeah, one of my college professors told me, "You'll never need the power of the i386 on the desktop." I basically laughed at him and said, how can you say that after we went from the 4.77 MHz i8086/88 to the 8/10/12 MHz i80286 in about 3-4 years???

I guess I should've stopped with my 4-core Phenom II X4 925 or my 6-core Phenom II X6 1100T in 2011, since I didn't need more than 4 cores... not sure what to do with this 3900X that just showed up on my door step yesterday...
#98
efikkan
FreebirdYeah, one of my college professors told me, "You'll never need the power of the i386 on the desktop." I basically laughed at him and said, how can you say that after we went from the 4.77 MHz i8086/88 to the 8/10/12 MHz i80286 in about 3-4 years???
While the "competence" for professors may vary, prediction is a hard thing, even for those who know their stuff. As we see today, basing future prediction just on the past is not smart either.

But there is one important detail about the progress of the 80s and early 90s that is lost on most people: while progress was very quick back in those days, with the 80286 launching in 1982, the 80386 in 1985 and the 80486 in 1989, adoption was very slow. I read in a magazine that the best-selling year for the 80286 was 1989. Even when IBM launched its "premium" PS/2 lineup in 1987, the base model had an 8086! Back then, 3-4 generations of CPUs were on the market in parallel, quite different from today, where we have a mainstream socket to support "everything", plus of course HEDT and server sockets, and everything is refreshed at least every 1-2 years. Even when Windows 95 launched and the home computer boom was starting, the typical configuration was an 80486.
FreebirdI guess I should've stopped with my 4-core Phenom II X4 925 or my 6-core Phenom II X6 1100T in 2011, since I didn't need more than 4 cores... not sure what to do with this 3900X that just showed up on my door step yesterday...
Needs can evolve ;)
#99
r9
All companies are like that.
I wish all of them were competitive all the time: AMD, Intel and NVIDIA.
I want to see Intel enter the GPU market and NVIDIA somehow enter the CPU market.
But none of that will matter soon; unfortunately, everything is heading to the cloud.
The only customization we're going to be able to do is paint the case of that streaming USB stick. :D
#100
lexluthermiester
r9But none of that will matter soon; unfortunately, everything is heading to the cloud.
Interesting notion. Some of us will never use the cloud because we don't like/accept the concept. Many others literally can't use the cloud because of a lack of internet speed. We are a VERY long way from the cloud being the mainstay/default form of computing..