Wednesday, July 17th 2019
Intel's CEO Blames 10 nm Delay on being "Too Aggressive"
During Fortune's Brainstorm Tech conference in Aspen, Colorado, Intel CEO Bob Swan took the stage and talked about where the company is now, where it is headed, and how it plans to evolve. Particular focus was put on how Intel became "data-centric" rather than "PC-centric," and the struggles it encountered along the way.
When asked about the demise of Moore's Law, Swan detailed the aggressiveness with which the company approached the challenge. Instead of the regular two-fold improvement in transistor density every two years, Swan said that Intel has always targeted even greater densities in order to stay the leader in the business. With 10 nm, Intel targeted a density improvement of as much as 2.7x over the previous generation of 14 nm transistors. He attributed the five-year delay in delivering the 10 nm node to "too aggressive innovation," adding that "... at a time it gets harder and harder, we set more aggressive goal..." and that this is the main reason for the late delivery. Additionally, he said that this time Intel will stick to exactly 2x density improvement over two years with the company's 7 nm node, which is supposed to launch in two years and is already in development.
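To put those scaling targets in perspective, here is a quick back-of-the-envelope sketch (the arithmetic is ours, not Intel's) comparing the classic 2x-every-two-years cadence with the 2.7x step Intel targeted for 10 nm:

```python
import math

# Back-of-the-envelope sketch (our numbers, not Intel's): compare the classic
# Moore's Law cadence (2x transistor density every 2 years) with the 2.7x
# density jump Intel targeted for the 14 nm -> 10 nm transition.

def density_gain(years, factor=2.0, cadence=2.0):
    """Density multiplier after `years`, at `factor`x every `cadence` years."""
    return factor ** (years / cadence)

print(density_gain(2))       # classic node step: 2.0x
# How many years of classic 2x scaling does a single 2.7x step represent?
print(2.0 * math.log2(2.7))  # ~2.9 years squeezed into one node transition
```

In other words, by that rough math a single 2.7x node step packs almost three years of classic Moore's Law scaling into one transition, which illustrates what "too aggressive" meant in practice.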
When talking about the future of Intel, Swan noted that the company currently holds 30% of the "silicon market," saying that Intel is trying to diversify its offerings from mainly CPUs and FPGAs to everything that requires big compute performance, in order to capture the rest of the market. He noted that artificial intelligence is currently driving big demand for such performance, with autonomous vehicles expected to be a big source of revenue for Intel in the future. Through acquisitions like Mobileye, Intel plans to serve that market and increase the company's value.
You can listen to the talk here.
111 Comments on Intel's CEO Blames 10 nm Delay on being "Too Aggressive"
It's not too hard to imagine that AMD and Intel made an agreement to delay the release of 10 nm desktop chips.
What Intel didn't predict was AMD doing a great job at marketing and pushing core counts to home users at great prices.
For me, comparing Intel's 14 nm against AMD's 7 nm and claiming AMD has superior tech, sorry, but it's not a real comparison. AMD gets the lead in this consumer timeframe, but Intel will get the edge again. They are focusing on GPU tech now... that's AMD's recipe redone with Intel's knowledge.
The security problems on Intel CPUs, yes, those need a fix. But even with that, Intel's perf/watt is way better on 14 nm; go figure what it will be at 10 nm or 7 nm, where AMD is now, while Intel is still on a four-year-old node.
(I own an R5 2600)
The first one being the i7-8700, released Oct 2017.
The first-ever i7 CPU, the i7-920, was released Nov 2008...
As I said, Intel has been #1 since Nov 2008; they had no reason to keep pushing the envelope until the Ryzen 3000s lit a fire under them.
Wholeheartedly agree with you. I have seen so many all-MIGHTY companies go down and disappear in my lifetime... and many more are sure to come.
C'est la Vie!
Cheers!
PS: Hopefully now Intel will let the devs use more than 4 cores! *evil laughs*
Intel held back, yes; they got greedy, yes... and AMD finally got the chance to get a grip on sales. But Intel is still the best for a plug-n-play system: no major RAM configuration needed, no over-the-top cooling, no super-duper PSU. I bought an R5 2600 with all the hardware hand-picked, plus the know-how needed to make it stable; with Intel it would have been simpler to buy a cheap mobo, basic DDR4 sticks, and simple air cooling, with only a single BIOS-level setting to worry about. I wouldn't recommend any AMD CPU for the basic casual gamer; it isn't worth it. The i5-9400F was enough for me, because in 3 to 4 years we'll have PCIe 5.0 and DDR5 tech all around... and that's when the tech evolution will leap.
.... And she said... Why?....
I just shook my head and said nothing....
Just sayinnggg!
"AMD is Too Aggressive"
Fixed that for you
I actually went through some of this... Kentsfield, Lynnfield, Broadwell... "Oh f***, ANOTHER quad core!?" Well, I'll buy it, no choice.
R7 3700x incoming tomorrow. Bye Intel.
Only the desktop market has been held back; mobile is already secured, and the server market should be their utmost priority.
But your comment shows a lack of knowledge of the semiconductor industry. Going for high core counts might be beneficial in many cases, but it has many drawbacks, especially in power consumption, price, and other complexity; but again, we are talking about the manufacturing process, not the end product, and there is a distinct separation between the two. Saying the 10 nm delay is due to lack of competition while also clearly saying they benefit financially from going to a smaller node contradicts your own claim. Intel has spent tens of billions so far on 10 nm, both on R&D and on building new fabs. Intel also suffered a lot from chip shortages due to the 10 nm delays, meaning they lost money by not selling more chips. So not moving to the new node after investing so much only damages them financially; how can a lack of competition explain any decision not to move to the new node?
And speaking on a technical level, you would of course not know how the current DUV light source, at 193 nm, requires quadruple patterning to achieve features tens of nanometers in size. And even after that, the features are still far from how they were meant to be, in addition to other limitations you perceive as nonexistent, judging by your rhetoric and claims.
But now imagine if they screw up 7nm with their 'realistic target' ;)
When it comes to manufacturing process, Intel's 10nm was without a doubt very aggressive. They have eased up the specs by now.
There are a lot of things that went wrong with Intel's 10 nm, but a metal pitch under 40 nm (Intel wanted 36 nm) seems to be one of the major causes: it necessitated SAQP because EUV was not ready (it is only starting to become ready for semiconductor mass production about now) and seems to be a major hurdle in a couple of other ways. That 40 nm is a soft limit for pre-EUV lithography was shown theoretically a while ago, and the fact that both TSMC and Samsung opted to stay at a 40 nm metal pitch until EUV is telling.
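For a rough sense of where that 40 nm soft limit comes from, here is a quick Rayleigh-criterion sketch; the NA and k1 values are typical assumptions for ArF immersion tools, not numbers from the post above:

```python
# Rayleigh criterion: minimum printable pitch for a single exposure is
# roughly 2 * k1 * lambda / NA. The values below are common assumptions for
# ArF immersion lithography, not figures quoted in the comment above.

WAVELENGTH_NM = 193.0   # ArF DUV light source
NA = 1.35               # numerical aperture of immersion scanners (assumed)
K1 = 0.28               # practical process factor near the limit (assumed)

single_exposure_pitch = 2 * K1 * WAVELENGTH_NM / NA
print(f"single exposure:  ~{single_exposure_pitch:.0f} nm pitch")      # ~80 nm
print(f"SADP (double):    ~{single_exposure_pitch / 2:.0f} nm pitch")  # ~40 nm
print(f"SAQP (quadruple): ~{single_exposure_pitch / 4:.0f} nm pitch")  # ~20 nm

# A 36 nm metal pitch falls below what SADP can reach (~40 nm), which is why
# Intel needed SAQP for 10 nm as long as EUV wasn't production-ready.
```

Under those assumed values, double patterning bottoms out right around a 40 nm pitch, so Intel's 36 nm target pushed it into quadruple patterning territory while TSMC and Samsung waited for EUV.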
Maybe Chipzilla will unleash another post-Pentium 4 product. Who knows. The duopoly has gotten interesting once again. Good for everybody.
I mean, Ryzen 3000 presents great value, the R5 3600 and R9 3900X specifically. But let's be honest: in a lot of tasks they are still competing with, and trailing behind, chips from 2015 on 14 nm, while AMD is at 7 nm.
I would say the latter.
8 cores is good today for most tasks, and 90% of the reviews out there did point out that while the 12-core is "faster", it's only marginally so, even in professional workloads.
Basically, only two mainstream activities are faster on the 12-core: 3D rendering and 4K video encoding.
For everything else, the extra 4 cores don't bring much (if anything).
On the other hand, the two fewer cores of the 2600 and its slightly lower frequencies do affect professional productivity quite a lot!
And a personal reason... I used a 6-core (6800K) for the last 2.5 years, and going to another 6-core didn't make any sense at all... but doubling to 12 is overkill for all my activities.
So... I picked the "odd man out". Or... more precisely, the jack of all trades, which is perfect for my day-to-day operations for the next 2-3 years.
If ever there's a need for more, a 16-core is incoming in a few months.
So... in reality, the 12-core (3900X) is the real "odd man out"!
...or you could have chosen one of AMD's notably slower hex-cores (Bulldozer), which have also been around for several years and then some.
You had a choice, you just chose not to take it for whatever reason... be it the higher cost or whatever... but you had a choice. Looks like it. Nothing like spouting off about market stagnation in core counts... and then getting the same core count the other team has. Sure, it's cheaper, but that didn't seem like a talking point (until it became useful, like now).