Tuesday, August 18th 2015

Intel "Skylake" Die Layout Detailed

At the heart of the Core i7-6700K and Core i5-6600K quad-core processors, which made their debut at Gamescom earlier this month, is Intel's swanky new "Skylake-D" silicon, built on its new 14 nanometer fab process. Intel released technical documents that give us a peek into the die layout of this chip. To begin with, the Skylake silicon is tiny compared to its 22 nm predecessor, the Haswell-D (i7-4770K, i5-4670K, etc.).

What also sets this chip apart from its predecessors, going all the way back to "Lynnfield" (and perhaps even "Nehalem"), is that it's a "square" die. The CPU component, made up of four cores based on the "Skylake" micro-architecture, is split into rows of two cores each, sitting across the chip's L3 cache. This is a departure from older layouts, in which a single file of four cores lined one side of the L3 cache. The integrated GPU, Intel's Gen9 iGPU core, takes up nearly as much die area as the CPU component. The uncore component (system agent, IMC, I/O, etc.) takes up the rest of the die. The integrated Gen9 iGPU features 24 execution units (EUs), spread across three EU sub-slices of 8 EUs each. This GPU supports DirectX 12 (feature level 12_1). We'll get you finer micro-architecture details very soon.
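As a rough illustration of that last point (not something taken from Intel's documents), a Windows program can probe whether the active GPU's driver actually exposes feature level 12_1 with a plain D3D12CreateDevice call; the sketch below assumes the Windows 10 SDK and a DX12-capable driver.

```cpp
// Minimal sketch: ask D3D12 whether the default adapter supports feature level 12_1,
// the level this article attributes to Skylake's Gen9 iGPU.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

int main() {
    Microsoft::WRL::ComPtr<ID3D12Device> device;
    // nullptr = default adapter; creation succeeds only if the driver exposes FL 12_1.
    HRESULT hr = D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_1,
                                   IID_PPV_ARGS(&device));
    std::printf("Feature level 12_1: %s\n",
                SUCCEEDED(hr) ? "supported" : "not supported");
    return 0;
}
```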

82 Comments on Intel "Skylake" Die Layout Detailed

#51
Uplink10
FordGT90Concept: Why not when you have a dedicated GPU?
Because DX12 enables different GPUs to work together and you can assign applications to different GPUs. Do rendering on the dGPU while browsing on the iGPU.
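For what it's worth, here is a minimal Windows/C++ sketch (DXGI and the Windows SDK assumed) that only enumerates the adapters such an explicit multi-adapter application could choose between; actually splitting work between the iGPU and dGPU is up to the application and is not shown here.

```cpp
// Minimal sketch: list every GPU DXGI can see. Under DX12's explicit multi-adapter
// model an application can create an independent device on each adapter, e.g. the
// dGPU for rendering while the iGPU handles lighter workloads.
#include <dxgi.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory1> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        std::wprintf(L"Adapter %u: %ls (%llu MB dedicated VRAM)\n", i, desc.Description,
                     static_cast<unsigned long long>(desc.DedicatedVideoMemory / (1024 * 1024)));
    }
    return 0;
}
```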
Posted on Reply
#52
MxPhenom 216
ASIC Engineer
Sony Xperia S: You are wrong.
AMD offered this innovation with the idea to accelerate the general performance in all tasks.
Thanks to those scumbags intel, nvidia, microsoft, other "developers" and co, it probably won't happen.
Boy the ignorance never ends with you.
Posted on Reply
#53
Sony Xperia S
MxPhenom 216: Boy the ignorance never ends with you.
And trolling never stops with you. :D

What didn't you understand from what I've said, and what are you arguing against?

This one:

Personal Supercomputing
Much of a computing experience is linked to software and, until now, software developers have been held back by the independent nature in which CPUs and GPUs process information. However, AMD Fusion APUs remove this obstacle and allow developers to take full advantage of the parallel processing power of a GPU - more than 500 GFLOPs for the upcoming A-Series "Llano" APU - thus bringing supercomputer-like performance to every day computing tasks. More applications can run simultaneously and they can do so faster than previous designs in the same class.

www.amd.com/en-us/press-releases/Pages/amd-fusion-apu-era-2011jan04.aspx
Posted on Reply
#54
MxPhenom 216
ASIC Engineer
Sony Xperia S: And trolling never stops with you. :D

What didn't you understand from what I've said, and what are you arguing against?

This one:

Personal Supercomputing
Much of a computing experience is linked to software and, until now, software developers have been held back by the independent nature in which CPUs and GPUs process information. However, AMD Fusion APUs remove this obstacle and allow developers to take full advantage of the parallel processing power of a GPU - more than 500 GFLOPs for the upcoming A-Series "Llano" APU - thus bringing supercomputer-like performance to every day computing tasks. More applications can run simultaneously and they can do so faster than previous designs in the same class.

www.amd.com/en-us/press-releases/Pages/amd-fusion-apu-era-2011jan04.aspx
Wtf does this even have to do with this thread? Get your AMD garbage out of here.
Posted on Reply
#55
tabascosauz
Sony Xperia S: And trolling never stops with you. :D

What didn't you understand from what I've said, and what are you arguing against?

This one:

Personal Supercomputing
Much of a computing experience is linked to software and, until now, software developers have been held back by the independent nature in which CPUs and GPUs process information. However, AMD Fusion APUs remove this obstacle and allow developers to take full advantage of the parallel processing power of a GPU - more than 500 GFLOPs for the upcoming A-Series "Llano" APU - thus bringing supercomputer-like performance to every day computing tasks. More applications can run simultaneously and they can do so faster than previous designs in the same class.

www.amd.com/en-us/press-releases/Pages/amd-fusion-apu-era-2011jan04.aspx
Classic

One fanboy around here is ready to ascend to the godlike status of "utterly blind, enslaved fanboy". Spewing PR today, who knows what tomorrow will bring?

The future is not fusion. The future is Sony Xperia S, and it will be the end of us all. I still need you, sanity. It was invaluable to have you at my side through this fruitless battle and now I must retreat to be close to you, sanity. He doesn't understand the meaning of hypocrisy and can't comprehend the fact that all of this Bullshit with a capital B is a phenomenon called marketing. But I must bar myself from continuing to struggle against this unending insanity.
Posted on Reply
#56
FordGT90Concept
"I go fast!1!11!1!"
Uplink10: Because DX12 enables different GPUs to work together and you can assign applications to different GPUs. Do rendering on the dGPU while browsing on the iGPU.
I'll believe it when I see it.
Posted on Reply
#57
Sempron Guy
It still amazes me how an Intel-only article instantly turns into a "which camp is greener" or "which camp has more sh*t on it" argument.
Posted on Reply
#58
Joss
Fx: You are half retarded
Yes, I'll try to be completely retarded from now on.
Posted on Reply
#59
Sakurai
If you wish Intel to put more effort into replacing the iGPU part of the die with cores or whatever they are, you have to start looking for competition. And that's where AMD comes into play. But the bigger problem is, no one wants to buy an AMD chip. That's why all you hypocrites can just sit here and cry. Because you're actively supporting the monopoly, and there's not a single sh!t you can do about it. Enjoy your crippled CPU!
Posted on Reply
#60
newtekie1
Semi-Retired Folder
MxPhenom 216: 5820K is available. Similar price to 6700K.
Exactly, and that should have been the top of the mainstream market; the HEDT area should be all 8-cores by now except the bottom processor, which is very similar to the top-end mainstream. Just like it always has been.
tabascosauz: If the 6700K was a hex-core with no iGPU, why would the 5820K and 5930K even exist?
Same reason the 920 and 860 existed, or the 3820 and 2600K, or the 4820K and 3770K. If your argument held true, we would have seen all of that with the previous generations.

The bottom of the HEDT has always basically matched the top of the mainstream, in terms of core count. Now we've moved to the point where the bottom of the HEDT is 6 cores, and I believe the top of the mainstream should be 6 cores as well.
Posted on Reply
#61
radrok
FordGT90Concept: Why not when you have a dedicated GPU?


It's about $200 more, $300 more if you include 4 sticks of memory instead of 2.
You can run X99 on dual channel with just 2 sticks, problem solved.
Posted on Reply
#62
FordGT90Concept
"I go fast!1!11!1!"
I know that. You're still talking $200+$400 for X99 versus $100+$350 for Z170, which comes to 33% more expensive for two-year-old tech, more power consumption, and lower clock speeds. The only advantages Haswell-E has over Skylake are two more (albeit slower) cores and double the memory capacity. 64 GiB of memory should be fine, and for gaming, six cores really don't benefit over four higher-throughput cores. If it were Skylake versus Broadwell-E, it would be a harder decision. Skylake versus Skylake-E would be a no-brainer.
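For reference, taking those quoted CPU-plus-motherboard figures at face value, the 33% works out as:

\[
\frac{200 + 400}{100 + 350} = \frac{600}{450} \approx 1.33
\]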
Posted on Reply
#63
tabascosauz
newtekie1: Exactly, and that should have been the top of the mainstream market; the HEDT area should be all 8-cores by now except the bottom processor, which is very similar to the top-end mainstream. Just like it always has been.



Same reason the 920 and 860 existed, or the 3820 and 2600K, or the 4820K and 3770K. If your argument held true, we would have seen all of that with the previous generations.

The bottom of the HEDT has always basically matched the top of the mainstream, in terms of core count. Now we've moved to the point where the bottom of the HEDT is 6 cores, and I believe the top of the mainstream should be 6 cores as well.
But it was a little bit different back then. Neither the 920 nor the 860 had integrated graphics. Now Intel has moved towards its mainstream platform being accessible to all, not just enthusiasts, leaving HEDT as the only platform with no iGPU. With 6 cores, the mainstream top dog would no longer be achieving parity with the bottom HEDT SKU; it would have a new architecture (HEDT is usually one behind), graphics just in case you use Quick Sync or have no dGPU, and six cores. It's easy to bring 4 cores down from HEDT to mainstream, but it's not quite so simple when you have 6 cores and an iGPU to make do with. And no, Intel can't just throw away their iGPU for the overclockable i5 and i7 mainstream SKUs, because 1) money spent on a new design with not a lot of revenue in return, and 2) it would lock them into doing the same for the next generations. Think about the latter. If 14nm is so difficult for Intel, how would it be on 10nm?

Also, Sandy Bridge saw the discontinuation of such a two-pronged strategy. After all, it wasn't the most logical offering: the i5 (Clarkdale, not Lynnfield) and i3 SKUs were all about the first-time on-die Intel graphics, while the mainstream i7s were HEDT bottom-feeders brought down to LGA1156. It could be said that 1st Gen Core wasn't a blueprint for others to follow; it marked a transition from the old Core 2 all-about-the-CPU lineup to the new stack defined by integrated graphics.

Also, six cores, with or without an iGPU, on LGA115x would put an awful lot of heat in a smaller package than LGA2011. It also would have to do away with TIM, thus confirming my argument that Intel would have to put more money into this new i5/i7K design than it is actually worth.
Posted on Reply
#65
radrok
HEDT isn't coming out before these chips because Intel uses the mainstream core to test the lithography and core scaling; it doesn't make sense to experiment with manufacturing on the big dies. Also, when the node is new and less mature, yields are lower, so it makes sense to use small chips to get production ramping up.

Much like Nvidia gets out the mainstream core before the big die.
Posted on Reply
#66
kn00tcn
Sony Xperia S: You are wrong.
AMD offered this innovation with the idea to accelerate the general performance in all tasks.
Thanks to those scumbags intel, nvidia, microsoft, other "developers" and co, it probably won't happen.
you're praising amd for letting applications utilize the gpu in new greatly improved performance ways... yet intel isn't allowed to offer an identical chip!? you really are a scumbag

the future IS fusion... for ALL platforms, x86, arm, mobile, desktop, supercomputer, every calculation anywhere is simple math & different math is best used on different types of processors

opencl runs everywhere, perfect way to utilize the whole 'apu' from whatever company
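To make that concrete, here is a minimal sketch (assuming an OpenCL SDK and runtime are installed) that simply lists the compute devices a system exposes; on an APU-style machine this typically surfaces both the CPU and the integrated GPU, regardless of vendor.

```cpp
// Minimal sketch: enumerate every OpenCL platform and device on the system.
#include <CL/cl.h>
#include <cstdio>
#include <vector>

int main() {
    cl_uint num_platforms = 0;
    clGetPlatformIDs(0, nullptr, &num_platforms);
    std::vector<cl_platform_id> platforms(num_platforms);
    clGetPlatformIDs(num_platforms, platforms.data(), nullptr);

    for (cl_platform_id p : platforms) {
        char pname[256] = {};
        clGetPlatformInfo(p, CL_PLATFORM_NAME, sizeof(pname), pname, nullptr);

        cl_uint num_devices = 0;
        clGetDeviceIDs(p, CL_DEVICE_TYPE_ALL, 0, nullptr, &num_devices);
        std::vector<cl_device_id> devices(num_devices);
        clGetDeviceIDs(p, CL_DEVICE_TYPE_ALL, num_devices, devices.data(), nullptr);

        for (cl_device_id d : devices) {
            char dname[256] = {};
            cl_device_type type = 0;
            clGetDeviceInfo(d, CL_DEVICE_NAME, sizeof(dname), dname, nullptr);
            clGetDeviceInfo(d, CL_DEVICE_TYPE, sizeof(type), &type, nullptr);
            std::printf("%s | %s (%s)\n", pname, dname,
                        (type & CL_DEVICE_TYPE_GPU) ? "GPU" :
                        (type & CL_DEVICE_TYPE_CPU) ? "CPU" : "other");
        }
    }
    return 0;
}
```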
Posted on Reply
#67
xorbe
Core count stuck at 4 forever for mainstream. But admittedly I'd rather have faster 4-core chips. I only compile large code bases infrequently, so I wouldn't really be using 8 cores efficiently most of the time.
Posted on Reply
#68
Aquinus
Resident Wat-man
Yet my 3820 is still perfectly adequate? You people complain about the iGPU being huge, but when it's not used it gets power-gated, which means it consumes no power. Second, if my 3820 (which is still a quad-core on skt2011) is adequate for just about everything I throw at it, why is there incentive to put more on a mainstream platform that will cost more to produce? Sorry, but the market isn't demanding it, only power users are. If you really have your panties in a bunch about the iGPU, go HEDT; if you have a hard-on for cores, go AMD or Xeon. Simple fact is, if you're not happy with mainstream offerings, you're probably not a f**king mainstream user.

People in this thread whine, but if you don't like it, don't buy it! The complaining in this thread is simply astonishing.
tabascosauz: Also, six cores, with or without an iGPU, on LGA115x would put an awful lot of heat in a smaller package than LGA2011. It also would have to do away with TIM, thus confirming my argument that Intel would have to put more money into this new i5/i7K design than it is actually worth.
This. I would roll it into the mainstream platform argument. People who want more cores and no iGPU are power users, not mainstream users. My wife has a Mobility Radeon 265X in her laptop. It has been running on the HD 4000 graphics, but she's never noticed a difference. That's because she's like the majority of users out there who would never use it.

We here at TPU are a minority, not a majority. It's amazing how people don't seem to understand that and how the market is driven by profit, not making power users happy.
Posted on Reply
#69
tabascosauz
Aquinus: We here at TPU are a minority, not a majority. It's amazing how people don't seem to understand that and how the market is driven by profit, not making power users happy.
In the opinion of one particularly notable user whose name is closely modeled after a smartphone model, "Fuck you, dude! I don't care how difficult it is to go beyond 4 cores in a mainstream substrate package and I don't care how the business world works. Intel should work exclusively for ME and take cues from what I think. Future is Fusion, and even though half of my incomprehensible argument states that Intel's GPUs aren't good enough and need improvement, I still think, quite to the contrary, that the iGPU just needs to go." I think that user also doesn't understand the meaning behind the two little words of "fuck off" either, 2 shitstorms of an article about Skylake later.

Seriously, a lot of people need to look up that WCCFTech (I think) article where an i7-5960X engineering sample was first leaked to the public (after, of course, being delidded improperly and having half of its broken die still epoxied to the IHS). That LGA2011-3 package is huge. Not only is it huge, the 8-core die is also HUGE. The size of the die alone is not too far off that of the entire LGA1150 IHS. When we consider that a six-core die would not be too much smaller (since it needs more than 8MB of L3 if it wants to avoid ending up like the X6 1100T), a 6-core mainstream CPU is just not logical on size alone, without even mentioning the iGPU.
Posted on Reply
#70
Sony Xperia S
kn00tcn: you're praising amd for letting applications utilize the gpu in new greatly improved performance ways... yet intel isn't allowed to offer an identical chip!?
It has never been an Intel idea to offer Fusion products. They have stolen and simply copied without actually having a clue. They still market this for graphics acceleration purposes only.

But why don't they keep their inadequate graphics "accelerators" on motherboards, as it used to be, and actually give customers the right to choose what they want?
xorbe: Core count stuck at 4 forever for mainstream.
Nope, AMD will change this next year. They only need a working CPU on 14 nm, and Intel's bad practices will be gone forever.
Posted on Reply
#71
FordGT90Concept
"I go fast!1!11!1!"
I think if Intel feels compelled to add more cores, it will be on HEDT, not mainstream. Most people that buy these Skylake chips will only use them for browsing the internet and occasionally making a movie or picture book. A dual-core from a decade ago can do that perfectly well. I can't see more than four cores on mainstream for a very long time.
Posted on Reply
#72
Sony Xperia S
FordGT90Concept: Most people that buy these Skylake chips will only use them for browsing the internet and occasionally making a movie or picture book.
An i7-6700K for browsing the internet? Really?
Wow, do you realise that in most countries these Skylake processors are the top of the line that can be afforded? They are bought by people who are either enthusiasts, pretend to be such, or just watch their budgets tightly.
FordGT90Concept: A dual-core from a decade ago can do that perfectly well. I can't see more than four cores on mainstream for a very long time.
With a slow CPU there is the unpleasant feeling of waiting, and waiting, while it takes its time to process all the required data... a waste of time on an enormous scale. If you have the willingness and patience to cope with that.

I'm just telling you that I'm 99% sure that in 2017 you will be singing a different tune. ;)
Posted on Reply
#73
64K
Sony Xperia S: Nope, AMD will change this next year. They only need a working CPU on 14 nm, and Intel's bad practices will be gone forever.
So long as Intel stubbornly insists on providing chips that customers actually want, they are doomed to success.

Posted on Reply
#74
Sony Xperia S
64K: So long as Intel stubbornly insists on providing chips that customers actually want, they are doomed to success.
Customers have no choice. They just buy what they know, and the propaganda machine works and tells them: forget AMD, buy Intel. And because Intel has the cash to keep that machine working, it simply still works for them. We will see for how long. :)
Posted on Reply
#75
FordGT90Concept
"I go fast!1!11!1!"
So if you had an 8-core, 16-thread processor, what would you do on a regular basis (that is time-sensitive) that pushes it over 50% CPU load?


The bottleneck for most consumers isn't the CPU but HDD or internet performance. The bottleneck for gamers is the hardware found in the PlayStation 4 and Xbox One, tied to how ridiculous a monitor (or monitors) they buy (e.g. a 4K monitor is going to put a lot more stress on the hardware than a 1080p monitor).
Posted on Reply