# Intel "Skylake" Die Layout Detailed



## btarunr (Aug 18, 2015)

At the heart of the Core i7-6700K and Core i5-6600K quad-core processors, which made their debut at Gamescom earlier this month, is Intel's swanky new "Skylake-D" silicon, built on its new 14 nanometer fab process. Intel released technical documents that give us a peek into the die layout of this chip. To begin with, the Skylake silicon is tiny compared to its 22 nm predecessor, Haswell-D (i7-4770K, i5-4670K, etc.). 

What also sets this chip apart from its predecessors, going all the way back to "Lynnfield" (and perhaps even "Nehalem"), is that it's a "square" die. The CPU component, made up of four cores based on the "Skylake" micro-architecture, is split into two rows of two cores each, sitting on either side of the chip's L3 cache. This is a departure from older layouts, in which a single file of four cores lined one side of the L3 cache. The integrated GPU, Intel's Gen9 iGPU core, takes up nearly as much die area as the CPU component. The uncore component (system agent, IMC, I/O, etc.) takes up the rest of the die. The integrated Gen9 iGPU features 24 execution units (EUs), spread across three EU sub-slices of eight EUs each. This GPU supports DirectX 12 (feature level 12_1). We'll get you finer micro-architecture details very soon.
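To put those 24 EUs in perspective, a back-of-the-envelope peak-throughput estimate works out as below. The 16 FLOPs/clock/EU figure follows from each Gen9 EU packing two 4-wide FMA-capable FP32 ALUs; the 1.15 GHz boost clock is an assumption for illustration, not a figure from Intel's documents:

```python
# Rough peak FP32 throughput for a Gen9 GT2 iGPU (24 EUs).
# Each Gen9 EU: 2 ALUs x 4 FP32 lanes x 2 ops per FMA = 16 FLOPs/clock.
def peak_gflops(eus, clock_ghz, flops_per_eu_clock=16):
    """Theoretical peak in GFLOPS: EUs * FLOPs-per-clock * clock (GHz)."""
    return eus * flops_per_eu_clock * clock_ghz

# 24 EUs at an assumed 1.15 GHz boost clock:
print(round(peak_gflops(24, 1.15), 1))  # -> 441.6
```

Theoretical peak only; real workloads land well below this.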





*View at TechPowerUp Main Site*


----------



## GhostRyder (Aug 18, 2015)

Wow, that's a lot of room dedicated just to the GPU.  Bet those Xeons look great without that space taken up so they can cram cores!


----------



## ensabrenoir (Aug 18, 2015)

.............In before the lame comments.......*ITS ALL ABOUT THE GPU!!!!!!!!!!!!*


----------



## FordGT90Concept (Aug 18, 2015)

I intend to disable the GPU via UEFI.


----------



## ZenZimZaliben (Aug 18, 2015)

Look at the size of that GPU. It takes up more than 1/3 of the chip... I do not understand why a K-class chip even has a freaking IGP. 90% of the people interested in the K series run a dedicated GPU. So basically 35% of the cost of the chip is going towards an IGP that will never be used. I would much rather pay for 35% more cores. With the IGP gone it would be easy to fit in at least 2 more cores without expanding the die.


----------



## Sony Xperia S (Aug 18, 2015)

That's unbelievable - 50% for graphics and 50% of the shared die area for the real stuff.

Of course, this sucks, and it sucks even worse when you know that Intel does nothing to even try to develop a software environment to unleash all those wasted transistors.
40% of the die - never to be used.


----------



## n-ster (Aug 18, 2015)

Is there any downside to having a disabled IGP there in terms of thermal/power usage?


----------



## Hades (Aug 18, 2015)

n-ster said:


> Is there any downside to having a disabled IGP there in terms of thermal/power usage?


You just save power. That's it.


----------



## n-ster (Aug 18, 2015)

Hades said:


> You just save power. That's it



I meant, is there a negative effect to having that dead-weight IGP there versus a chip that would completely cut out the IGP?


----------



## btarunr (Aug 18, 2015)

The IGP is this big not because Intel hopes you'll play Battlefront with it at 1080p. It's because Intel has to cope with the new wave of >1080p displays, such as 4K and 5K (particularly on high-res notebooks). This IGP has just enough juice to play 4K 60 Hz video without stuttering. The API support is up to date so that, in the future, you can do DX12 asymmetric multi-GPU with discrete GPUs, with your display plugged into the IGP.


----------



## Patriot (Aug 18, 2015)

n-ster said:


> I meant is there a negative effect to have that dead-weight IGP there versus a chip that would completely cut out the IGP?


You can use Quick Sync to accelerate certain tasks that may not be coded to use your discrete card.


----------



## Disparia (Aug 18, 2015)

Sony Xperia S said:


> That's unbelievable - 50% for graphics and 50% of the shared die area for the real stuff.
> 
> Of course, this sucks, and it sucks even worse when you know that Intel does nothing to even try to develop a software environment to unleash all those wasted transistors.
> 40% of the die - never to be used.



I know!

I mean, I would know if I didn't know that Intel had provided developer guides since 2011:
https://software.intel.com/en-us/articles/intel-graphics-developers-guides

Or if I had missed when Intel put support behind OpenCL back in 2014:
http://streamcomputing.eu/blog/2014...opencl-as-the-heterogeneous-compute-solution/

Or didn't attend any of the numerous Intel-sponsored development events every year:
https://software.intel.com/en-us/


----------



## peche (Aug 18, 2015)

scumbag intel ....


----------



## ZenZimZaliben (Aug 18, 2015)

Jizzler said:


> I know!
> 
> I mean, I would know if I didn't know that Intel had provided developer guides since 2011:
> https://software.intel.com/en-us/articles/intel-graphics-developers-guides
> ...



I see what you did there.


----------



## ppn (Aug 18, 2015)

Waits for 8-core by intel


----------



## R-T-B (Aug 18, 2015)

It's no secret Intel wants to go full SOC.

The only thing really missing is decent graphics.  This is filling that gap.  No, enthusiasts don't like it, but they also make up like 10% of the market if I'm being generous...


----------



## Slizzo (Aug 18, 2015)

peche said:


> scumbag intel ....



I'm sorry, Scumbag? For what exactly? Improving performance for users while lowering TDP?


----------



## ZenZimZaliben (Aug 18, 2015)

Slizzo said:


> I'm sorry, Scumbag? For what exactly? Improving performance for users while lowering TDP?



I don't want a lower TDP with a 10% performance increase. I want HUGE performance gains, and TDP doesn't matter at all. It's an i7 K chip; it should not be a power sipper, and it also shouldn't have an IGP. However, this is the first release at 14 nm, and hopefully the next release will ditch the IGP and go for more cores.


----------



## ERazer (Aug 18, 2015)

still not worth upgrading, intel made sandy bridge such a badass


----------



## peche (Aug 18, 2015)

ERazer said:


> still not worth upgrading, intel made sandy bridge such a badass


No, Sandys were soldered chips... these crappy new ones still use crappy paste...


----------



## Joss (Aug 18, 2015)

You can always go for AMD... oops, I forgot they haven't released a new chip in 3 years.

Joking apart, that's exactly the problem: the Red team is not putting up a fight and Intel can do as they please.
Problem is, considering what AMD did with Fury, I'm not hoping for much from their next FX series (if there is one).


----------



## Yorgos (Aug 18, 2015)

Hades said:


> You just save power. That's it


Actually, you don't.
Using a dGPU while browsing, fapping, etc. consumes much more power than having the iGPU enabled and turning the dGPU on and off for the heavy stuff.
Also, your dGPU lives longer, unless you use nVidia and nVidia decides when to cripple your h/w.


----------



## deemon (Aug 18, 2015)

Slizzo said:


> I'm sorry, Scumbag? For what exactly? Improving performance for users while lowering TDP?



That's the problem... Intel does not improve performance... not by much. Not anymore.


----------



## tabascosauz (Aug 18, 2015)

I'm beating a dead horse here, but the intent of *HEDT* is to satisfy the exact conditions that most of you seem to expect from a top of the product stack *mainstream* i5/i7.

The complaints won't stop until Intel gets rid of the GPU, and they really won't stop because Intel is not going to take that GPU off. This die is going to power the other desktop i5s and i7s and those are parts intended for powering a 1080P monitor at work without the assistance of a dGPU. Not throwing money into the water for a pointless GPU-less Skylake design that basically gets them no $$$ at all is actually a pretty smart business plan, believe it or not.

Not a lot of praise where it's deserved, I'm afraid. I wouldn't be showering Intel with praise 24/7, but I didn't see any positive comments about the 5820K when it was released, only "only 28 PCIe lanes? What if I need the extra 12 to get maximum performance while getting off every day?" Tried to appease the enthusiasts with a 6-core HEDT part below $400, well, I guess that didn't work out very well, did it? The 5820K wasn't even a forced hand; it could very well have been a carbon copy of the 4820K, just on Haswell, as there are 4-core E5 V3s a-plenty to prove that.

Give people something better, and they'll find something better to complain about.


----------



## peche (Aug 18, 2015)

deemon said:


> Thats the problem... intel does not improve the performance... not by much. Ever anymore.


Thanks, I didn't see the message!
You took the words right out of my mouth...
Point #2: Intel knows perfectly well the problem and the differences between solder and thermal paste on their CPU dies...
but they still use that shitty paste... so?


----------



## Uplink10 (Aug 18, 2015)

FordGT90Concept said:


> I intend to disable the GPU via UEFI.


Why?


ZenZimZaliben said:


> Look at the size of that GPU. It takes up more then 1/3 of the chip....I do not understand why a K class chip even has a freaking IGP. 90% of the people interested in K series run dedicated gpu.


That. K chips are premium because you get the privilege of overclocking, even though in the end performance per dollar is lower than if you bought a non-K chip. I imagine these people have enough money for a dGPU.


btarunr said:


> This IGP has just enough juice to play 4K 60Hz video without stuttering.


The CPU part can already do that without the iGPU's hardware acceleration.


----------



## Fx (Aug 18, 2015)

Joss said:


> Joke apart that's exactly the problem: the Red team is not putting up a fight and Intel can do as they please.
> Problem is, considering what AMD did with Fury I'm not hoping much for their next FX series (if there is one).



You are half retarded to be laying blame on AMD for putting out a respectable new GPU. Cry more please; plenty of tissues for you fellas.


----------



## tabascosauz (Aug 18, 2015)

Fx said:


> You are half retarded to be laying blame on AMD for putting out a respectable new GPU. Cry more please; plenty of tissues for you fellas.



Why does his comment regarding R9 Fury qualify him as a half-retard? It's an AMD product, and the Fury X's failure rests squarely on AMD's shoulders. In any case, AMD took a page out of Nvidia's Titan book with the Fury X, only to see it overshadowed by the much more appropriately priced Fury shortly afterwards. Fury is a competitive product. Fury X is a little questionable; making a card that excels only in the niche market of watercooled SFF is not in AMD's best financial interests.


----------



## radrok (Aug 18, 2015)

ppn said:


> Waits for 8-core by intel



http://ark.intel.com/products/82930...ssor-Extreme-Edition-20M-Cache-up-to-3_50-GHz





----------



## ZenZimZaliben (Aug 18, 2015)

radrok said:


> http://ark.intel.com/products/82930...ssor-Extreme-Edition-20M-Cache-up-to-3_50-GHz



On the new architecture at 14nm. Eventually this will trickle into Xeons... Already here as Xeon D.


----------



## Fx (Aug 19, 2015)

tabascosauz said:


> Why does his comment regarding R9 Fury qualify him as a half-retard? It's an AMD product, and the Fury X's failure rests squarely on AMD's shoulders. In any case, AMD took a page out of Nvidia's Titan book with the Fury X, only to see it overshadowed by the much more appropriately priced Fury shortly afterwards. Fury is a competitive product. Fury X is a little questionable; making a card that excels only in the niche market of watercooled SFF is not in AMD's best financial interests.



The context of the thread is about performance, not price/performance for tiers. In that regard, Nano, Fury and Fury X are all doing just fine.

On the CPU side, yeah, Intel can manipulate the market however they want because they have it like that.


----------



## vega22 (Aug 19, 2015)

ZenZimZaliben said:


> On the new architecture at 14nm. Eventually this will trickle into Xeons... Already here as Xeon D.



I bet they could double up the cores with the 14nm fab if they dropped the iGP and QPI stuff.

They just won't, as it would kill X99 without doubling up the E range first.


----------



## MxPhenom 216 (Aug 19, 2015)

Seriously, all these people complaining about the IGP-to-core ratio: news flash, Intel has a platform with no IGP and more cores, X79/X99. Maybe look into that rather than bitching about the mainstream platform not having enough cores, when software still barely uses more than 2. Nothing has changed.


----------



## R-T-B (Aug 19, 2015)

tabascosauz said:


> I'm beating a dead horse here, but the intent of *HEDT* is to satisfy the exact conditions that most of you seem to expect from a top of the product stack *mainstream* i5/i7.
> 
> The complaints won't stop until Intel gets rid of the GPU, and they really won't stop because Intel is not going to take that GPU off. This die is going to power the other desktop i5s and i7s and those are parts intended for powering a 1080P monitor at work without the assistance of a dGPU. Not throwing money into the water for a pointless GPU-less Skylake design that basically gets them no $$$ at all is actually a pretty smart business plan, believe it or not.
> 
> ...



But HEDT costs too many monies...


----------



## newtekie1 (Aug 19, 2015)

In the space they wasted on shitty, barely capable graphics they could have stuck another 4 cores...



MxPhenom 216 said:


> Seriously, all these people complaining about the IGP to core ratio, news flash, Intel has a platform with no igp and more cores x79/99. Maybe look into that than bitch about mainstream platform not having enough cores. When software is still just barely using more than 2. Nothing has changed.



The IGPU is going to be wasted too. And X99 is stupid expensive; anything with 8 cores is $1,000 just for the processor.


----------



## ppn (Aug 19, 2015)

At what point does Skylake start to look laughable compared to a previous, similarly sized chip priced around $100 - the 2-core on 32 nm at 149 mm²? The 355 mm² 5960X is clearly not mainstream.

Now imagine that, instead of just the 4-core 133 mm² with IGP (rarely used), Intel offered another SKU: an 8-core 133 mm² with no IGP, a true mainstream part, the IGP replaced with something useful at no cost at all, except copy-pasting some cores. Simple as that.


----------



## MxPhenom 216 (Aug 19, 2015)

newtekie1 said:


> In the space they wasted on shitty barely capable graphics they could have stick another 4 cores...
> 
> 
> 
> The IGPU is going to be wasted too. And x99 is stupid expensive, anything with 8-cores is $1,000 just for the processor.



And you think that if they add 4 more cores to the 6700k, that the price wouldn't change? HAHAHAHA


----------



## newtekie1 (Aug 19, 2015)

MxPhenom 216 said:


> And you think that if they add 4 more cores to the 6700k, that the price wouldn't change? HAHAHAHA



It wouldn't be anywhere near $1,000, that is the point.

Though I don't really want 4 more cores, I'd prefer 2 more cores, and 8 more PCI-E lanes.


----------



## kn00tcn (Aug 19, 2015)

Sony Xperia S said:


> That's unbelievable - 50% for graphics and 50% of the shared die area for the real stuff.
> 
> Of course, this sucks, and it sucks even worse when you know that Intel does nothing to even try to develop a software environment to unleash all those wasted transistors.
> 40% of the die - never to be used.


Make up your mind: is it 50% (it obviously isn't, get a ruler), or is it 40%, or what is it?

Unbelievable - we have a GPU that is capable of video encoding & OpenCL.

----------



## semantics (Aug 19, 2015)

newtekie1 said:


> It wouldn't be anywhere near $1,000, that is the point.
> 
> Though I don't really want 4 more cores, I'd prefer 2 more cores, and 8 more PCI-E lanes.


Market segmentation, lack of competition from AMD.


----------



## ensabrenoir (Aug 19, 2015)

..........maybe i shouldn't have used a big font.....it's sad to see so many that just don't get it...... 
there is a HEDT line for a reason........ this is mainstream so........

.*.....ITS ALL ABOUT THE IGPU!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!*


----------



## blunt14468 (Aug 19, 2015)

radrok said:


> http://ark.intel.com/products/82930...ssor-Extreme-Edition-20M-Cache-up-to-3_50-GHz


I am almost certain to go this route.


----------



## tabascosauz (Aug 19, 2015)

newtekie1 said:


> It wouldn't be anywhere near $1,000, that is the point.
> 
> Though I don't really want 4 more cores, I'd prefer 2 more cores, and 8 more PCI-E lanes.



This is the issue. And unlike the gentleman from earlier, I will not resort to calling you a half-retard even though I disagree with your views, because we are all civilized around here and are entitled to our own opinions.

It was clearly a gamble, in some respects, for Intel to make the entry HEDT SKU a hex-core. On one hand, a reasonable consumer might welcome it as a long-anticipated update to the HEDT lineup, at a reasonable price of ~$380, same as the quad-core 4820K. On the other, some will (without having any intention of buying X99) look at the 5820K and complain to Intel and everyone else that the 4790K should have been hex-core too.

Seriously?

If the 6700K was a hex-core with no iGPU, why would the 5820K and 5930K even exist? For the sole purpose of offering more PCIe lanes? Quad-channel DDR4 (oh look, double the bandwidth, must be double the FPS too)? Intel would be shooting itself in the foot.

1. Extra costs into the 6700K. Can't take a Xeon die like the 5820K and 5930K because it's not LGA2011. Need to make a new hex-core die on LGA1151. In the end, no one ends up buying it because the extra R&D costs warrant a higher price tag, and everyone says "DX12 is coming, FX-8350 offers similar performance for about 1/2 the price". Lost lots of money here.

2. 5820K and its successor just die. I mean, what else are these two supposed to do (in addition to the 5930K, also dead)? Next, people boycott LGA2011 and say that "unless the 5960X's successor is a 10-core, I won't buy anything 2011". Jesus. So what is Intel supposed to do now? Lost more money here.

3. Intel slips back into the Pentium days and becomes no better than AMD. These few years have been about forcing the TDP down (don't look at the 6700K, look at the fact that Broadwell desktop was 65W and the other Skylake SKUs are of lower TDP than Haswell). Six-core would mean 140W on LGA1151. There aren't any stock coolers on LGA2011 because none of Intel's stock coolers (except for that one oddity during the Westmere era that was a tower) can handle that kind of heat output. 140W? Better get better VRMs, because those H81M-P33 MOSFETs aren't going to take on a six-core. And "hey look, AMD actually has stock coolers that can handle their CPUs". Lots of confusion and more money lost.

4. What happens to the other LGA1151 SKUs? Did they suddenly cease to exist? 28 PCIe lanes for the 6700K and...what? Would you want to try and explain this disjointed lineup to anyone? "Oh yeah, the top dog in the LGA1151 family is really, really powerful, but although the rest are all 6th Gen Core, they all suck in comparison."

"Intel deserves this dilemma because they cheated by winning over the OEMs anticompetitively" is not a valid argument in this scenario. Those were pre-Netburst eradication days. Prior to Carrizo, there were plenty of opportunities for AMD in the OEM laptop and desktop market, since Trinity/Richland and to a lesser extent, Kaveri APUs offered much more to the average user than a i3/i5.


----------



## MxPhenom 216 (Aug 19, 2015)

newtekie1 said:


> It wouldn't be anywhere near $1,000, that is the point.
> 
> Though I don't really want 4 more cores, I'd prefer 2 more cores, and 8 more PCI-E lanes.



5820k is available. Similar price to 6700k.


----------



## Viruzz (Aug 19, 2015)

ZenZimZaliben said:


> Look at the size of that GPU. It takes up more than 1/3 of the chip... I do not understand why a K-class chip even has a freaking IGP. 90% of the people interested in the K series run a dedicated GPU. So basically 35% of the cost of the chip is going towards an IGP that will never be used. I would much rather pay for 35% more cores. With the IGP gone it would be easy to fit in at least 2 more cores without expanding the die.



I agree with you, I was thinking the same, but here are some counter-points I was thinking to myself:
1. The Intel iGPU supports Intel Quick Sync; if you compress videos it's a godsend. It's faster than the CPU and faster than GPU-accelerated compression like CUDA and AMD's equivalent - basically it's the fastest way to compress videos.
2. It supports DXVA2, so you can offload video decoding when you watch movies, even Blu-ray and 3D.
3. You always have a GPU! Say your GPU died, you're testing something, or you tried to overclock your GPU with a BIOS flash and now it works but gives no signal... etc. (basically it's good to have a spare GPU).
4. In the future (when we might already be dead), DX12 games will support a many-graphics-cards mode (don't confuse it with SLI), so every GPU, no matter its maker, will be able to work together, and every gamer with an iGPU will get some FREE FPS boost.
5. This applies to Broadwell CPUs only: if you look at benchmarks of all three modern CPU generations - Skylake, Broadwell and Haswell - downclocked or overclocked to the same speed, in some games Broadwell gets an extra 20+ FPS just because its iGPU comes with 128 MB of fast eDRAM that also works as an L4 cache (with the iGPU enabled it allocates up to 50% of the eDRAM as L4 cache; if you disable the iGPU, all of it works as L4 cache).
Right now Broadwell is in fact the FASTEST CPU for gaming! Just think: a 3.3 GHz Broadwell vs a 4 GHz 4790K - both get 82 FPS in Metro Redux (FHD, max quality) and 97 FPS in Tomb Raider (FHD, high quality).
Any smart gamer (not me) will buy a Broadwell CPU from a website like (Digital Lottery) that has a guaranteed overclock of 4 GHz and above, and his system is going to stomp both Skylake and Haswell in every game; I'm 90% sure it will beat the two CPUs that come after Skylake too, unless they have the same iGPU technology with eDRAM.


Basically what I'm saying is that you are 10000% right: there is absolutely no need for an iGPU on the i7 series of processors. What we NEED is 128/256 MB of fast L4 cache!!!
Intel CAN DO IT, we've seen it already with Broadwell: just remove the iGPU and keep the cache, or even better increase it to 256 MB, and keep the same price for the CPU.
I don't know how much every part costs, but somehow I'm sure that a GPU is more expensive than some cache!


End Rant
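Point 5 above boils down to a simple capacity model. As a sketch only: the 50% allocation split is the poster's claim, while the 128 MB figure is Broadwell-C's actual "Crystal Well" eDRAM size:

```python
# Toy model of Broadwell-C's 128 MB "Crystal Well" eDRAM acting as L4.
EDRAM_MB = 128

def l4_available_mb(igpu_enabled):
    # Per the post: iGPU on -> up to half the eDRAM serves the CPU as L4;
    # iGPU off -> the whole array backs the CPU.
    return EDRAM_MB // 2 if igpu_enabled else EDRAM_MB

print(l4_available_mb(True), l4_available_mb(False))  # -> 64 128
```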


----------



## FordGT90Concept (Aug 19, 2015)

Uplink10 said:


> Why?


Why not when you have a dedicated GPU?



MxPhenom 216 said:


> 5820k is available. Similar price to 6700k.


It's about $200 more, $300 more if you include 4 sticks of memory instead of 2.




Viruzz said:


> 5. This applies to Broadwell CPUs only: if you look at benchmarks of Skylake, Broadwell and Haswell downclocked or overclocked to the same speed, in some games Broadwell gets an extra 20+ FPS just because its iGPU comes with 128 MB of fast eDRAM that also works as an L4 cache (with the iGPU enabled it allocates up to 50% of the eDRAM as L4 cache; if you disable the iGPU, all of it works as L4 cache).
> Right now Broadwell is in fact the FASTEST CPU for gaming! Just think: a 3.3 GHz Broadwell vs a 4 GHz 4790K - both get 82 FPS in Metro Redux (FHD, max quality) and 97 FPS in Tomb Raider (FHD, high quality).
> Any smart gamer (not me) will buy a Broadwell CPU from a website like (Digital Lottery) that has a guaranteed overclock of 4 GHz and above, and his system is going to stomp both Skylake and Haswell in every game; I'm 90% sure it will beat the two CPUs that come after Skylake too, unless they have the same iGPU technology with eDRAM.


Bear in mind that Broadwell's GPU is substantially larger than Skylake's. Broadwell also has an MCM'd memory chip. It seems pretty silly that Skylake doesn't best Broadwell in that area, because it certainly could at least match it.

I think Intel was reeling from 14nm being so difficult, and they're looking for ways to offset their costs. Keeping costs down with Skylake was their answer. 10nm is going to be very, very difficult (and costly) to reach.


----------



## MxPhenom 216 (Aug 19, 2015)

FordGT90Concept said:


> Why not when you have a dedicated GPU?
> 
> 
> It's about $200 more, $300 more if you include 4 sticks of memory instead of 2.
> ...


I was talking CPU alone. As an enthusiast, spending some extra cash on RAM shouldn't be a problem, seeing how you'll be spending a royal shit ton on the rest of the system anyway.


----------



## Viruzz (Aug 19, 2015)

FordGT90Concept said:


> Why not when you have a dedicated GPU?
> 
> 
> It's about $200 more, $300 more if you include 4 sticks of memory instead of 2.
> ...




I wasn't talking about integrated GPU performance but about the benefits of the L4 cache that we get in Broadwell chips, which have 128 MB of eDRAM for integrated graphics; this eDRAM is also used as additional L4 cache for the CPU.
There is no L4 cache in any other CPU.


----------



## Sony Xperia S (Aug 19, 2015)

kn00tcn said:


> make up your mind, is it 50% (it obviously isnt, get a ruler) or is it 40% or what is it



50-50 CPU-GPU, and 40% of the total area.
You are the one who needs to reread posts and try to understand them better.



kn00tcn said:


> unbelievable, we have a gpu that is capable of video encoding & opencl



Take it back to where you got it from. I didn't ask Intel for the graphics part. I guess no one has ever asked them to integrate it on the die. They force you and screw you big time.

Scumbag Intel.


----------



## tabascosauz (Aug 19, 2015)

Sony Xperia S said:


> 50-50 CPU-GPU and 40 of the total area.
> You are the one who need to reread posts and try to understand them better.
> 
> 
> ...



The hypocrisy is real. Since FM2+ is AMD's modern platform - AM3+ is only hanging on with 3rd-party controllers (and AMD is basically disowning it) - let's take a look at the 6800K's die.






Oh dear, 40% of the die is dedicated to the iGPU. What about the 7850K?






*gasp* Is that 60%? 70%? Before you start going on about how this is an "APU", it really isn't very different from Intel's mainstream "CPUs". Also, since we're comparing apples to apples, try getting video output out of an FX-8350 and a 990FXA-UD3. Hm? Black screen?



Sony Xperia S said:


> 50-50 CPU-GPU, and 40% of the total area.
> I didn't ask intel for the graphics part.



Hey, I *didn't ask AMD for a huge iGPU on die*. But look at what I got anyway.


----------



## Sony Xperia S (Aug 19, 2015)

You are wrong.
AMD offered this innovation with the idea of accelerating general performance in all tasks.
Thanks to those scumbags intel, nvidia, microsoft, other "developers" and co, it probably won't happen.


----------



## Uplink10 (Aug 19, 2015)

FordGT90Concept said:


> Why not when you have a dedicated GPU?


Because DX12 enables different GPUs to work together, and you can assign applications to different GPUs: do rendering on the dGPU and meanwhile browse on the iGPU.


----------



## MxPhenom 216 (Aug 19, 2015)

Sony Xperia S said:


> You are wrong.
> AMD offered this innovation with the idea to accelerate the general performance in all tasks.
> Thanks to those scumbags intel, nvidia, microsoft, other "developers" and co, it probably won't happen.



Boy the ignorance never ends with you.


----------



## Sony Xperia S (Aug 19, 2015)

MxPhenom 216 said:


> Boy the ignorance never ends with you.



And the trolling never stops with you. 

What didn't you understand from what I've said, and what are you arguing with?

This one:

*Personal Supercomputing*
Much of a computing experience is linked to software and, until now, software developers have been held back by the independent nature in which CPUs and GPUs process information. However, AMD Fusion APUs remove this obstacle and allow developers to take full advantage of the parallel processing power of a GPU - more than 500 GFLOPs for the upcoming A-Series "Llano" APU  - thus bringing supercomputer-like performance to every day computing tasks. More applications can run simultaneously and they can do so faster than previous designs in the same class.

http://www.amd.com/en-us/press-releases/Pages/amd-fusion-apu-era-2011jan04.aspx


----------



## MxPhenom 216 (Aug 19, 2015)

Sony Xperia S said:


> And trolling never stops with you.
> 
> What didn't you undertstand from what I've said and what you argue?
> 
> ...


Wtf does this even have to do with this thread? Get your AMD garbage out of here.


----------



## tabascosauz (Aug 19, 2015)

Sony Xperia S said:


> And trolling never stops with you.
> 
> What didn't you undertstand from what I've said and what you argue?
> 
> ...


Classic

One fanboy around here is ready to ascend to the godlike status of "utterly blind, enslaved fanboy". Spewing PR today, who knows what tomorrow will bring?

The future is not fusion. The future is Sony Xperia S, and it will be the end of us all. I still need you, sanity. It was invaluable to have you at my side through this fruitless battle and now I must retreat to be close to you, sanity. He doesn't understand the meaning of hypocrisy and can't comprehend the fact that all of this Bullshit with a capital B is a phenomenon called marketing. But I must bar myself from continuing to struggle against this unending insanity.


----------



## FordGT90Concept (Aug 19, 2015)

Uplink10 said:


> Because DX12 enables different GPUs to work together and you can assign applications to different GPU. Do rendering on dGPU and meanwhile browse on iGPU.


I'll believe it when I see it.


----------



## Sempron Guy (Aug 19, 2015)

Still amazes me how an Intel-only article instantly turns into a "which camp is greener" / "which camp has more sh*t on it" argument.


----------



## Joss (Aug 19, 2015)

Fx said:


> You are half retarded


Yes, I'll try to be completely retarded from now on.


----------



## Sakurai (Aug 19, 2015)

If you wish Intel would put more effort into replacing the iGPU part of the die with cores or whatever, you have to start looking for competition. And that's where AMD comes into play. But the bigger problem is, no one wants to buy an AMD chip. That's why all you hypocrites can just sit here and cry. Because you're actively supporting the monopoly, and there's not a single sh!t you can do about it. Enjoy your crippled CPU!


----------



## newtekie1 (Aug 19, 2015)

MxPhenom 216 said:


> 5820k is available. Similar price to 6700k.



Exactly, and that should have been the top of the mainstream market. The HEDT line should be all 8-cores by now, except the bottom processor, which is very similar to the top-end mainstream part. Just like it always has been.



tabascosauz said:


> If the 6700K was a hex-core with no iGPU, why would the 5820K and 5930K even exist?



Same reason the 920 and 860 existed, or the 3820 and 2600K, or the 4820K and 3770K. If your arguments held true, we would have seen all of that with the previous generations.

The bottom of the HEDT has always basically matched the top of the mainstream in terms of core count. Now we've moved to the point where the bottom of the HEDT is 6 cores, and I believe the top of the mainstream should be 6 cores as well.


----------



## radrok (Aug 19, 2015)

FordGT90Concept said:


> Why not when you have a dedicated GPU?
> 
> 
> It's about $200 more, $300 more if you include 4 sticks of memory instead of 2.



You can run X99 on dual channel with just 2 sticks, problem solved.


----------



## FordGT90Concept (Aug 19, 2015)

I know that.  You're still talking $200+$400 for X99 versus $100+$350 for Z170, which comes to 33% more expensive for two-year-old tech, more power consumption, and lower clock speed.  The only advantages Haswell-E has over Skylake are two more (albeit slower) cores and double the memory capacity.  64 GiB of memory should be fine, and for gaming, six cores really don't benefit over four higher-throughput cores.  If it were Skylake versus Broadwell-E, it would be a harder decision.  Skylake versus Skylake-E would be a no-brainer.
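The 33% figure checks out against the quoted numbers (board + CPU; these are the poster's rough estimates, not market data):

```python
# Platform cost comparison using the dollar figures from the post above
# (motherboard + CPU; rough estimates, not market prices).
x99_cost  = 200 + 400   # X99 board + i7-5820K
z170_cost = 100 + 350   # Z170 board + i7-6700K
premium = (x99_cost - z170_cost) / z170_cost
print(f"X99 premium: {premium:.0%}")  # -> X99 premium: 33%
```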


----------



## tabascosauz (Aug 19, 2015)

newtekie1 said:


> Exactly, and that should have been the top of the mainstream market. The HEDT area should be all 8-cores by now, except the bottom processor, which is very similar to the top-end mainstream. Just like it always has been.
> 
> 
> 
> ...



But it was a little bit different back then. Neither the 920 nor the 860 had integrated graphics. Now Intel has moved towards its mainstream platform being accessible to all, not just enthusiasts, leaving HEDT as the only platform with no iGPU. With 6 cores, the mainstream top dog would no longer be achieving parity with the bottom HEDT SKU; it would have a new architecture (HEDT is usually one generation behind), graphics just in case you use Quick Sync or have no dGPU, and six cores. It's easy to bring 4 cores down from HEDT to mainstream, but it's not quite so simple when you have 6 cores and an iGPU to make do with. And no, Intel can't just throw away their iGPU for the overclockable i5 and i7 mainstream SKUs because 1) it's money spent on a new design with not a lot of revenue in return, and 2) it would lock them into doing the same for the next generations. Think about the latter. If 14nm is so difficult for Intel, how would it be on 10nm?

Also, Sandy Bridge saw the discontinuation of that two-pronged strategy. After all, it wasn't the most logical offering: the i5 (Clarkdale, not Lynnfield) and i3 SKUs were all about the first-time on-die Intel Graphics, while the mainstream i7s were HEDT bottom-feeders brought down to LGA1156. It could be said that 1st Gen Core wasn't a blueprint for others to follow; it marked a transition from the old Core 2 all-about-the-CPU lineup to the new stack defined by integrated graphics.

Also, six cores with or without an iGPU on LGA115x would put an awful lot of heat in a smaller package than LGA2011. It would also have to do away with TIM, confirming my argument that Intel would have to put more money into this new i5/i7K design than it is actually worth.


----------



## FordGT90Concept (Aug 19, 2015)

HEDT used to be first.


----------



## radrok (Aug 19, 2015)

HEDT isn't coming out before these chips because Intel uses the mainstream core to test the lithography and core scaling; it doesn't make sense to experiment with manufacturing on the big dies. Also, when the node is new and less mature, yields are lower, so it makes sense to use small chips while production ramps up.

Much like Nvidia gets out the mainstream core before the big die.


----------



## kn00tcn (Aug 19, 2015)

Sony Xperia S said:


> You are wrong.
> AMD offered this innovation with the idea to accelerate the general performance in all tasks.
> Thanks to those scumbags intel, nvidia, microsoft, other "developers" and co, it probably won't happen.


you're praising amd for letting applications utilize the gpu in new greatly improved performance ways... yet intel isnt allowed to offer an identical chip!? you really are a scumbag

the future IS fusion... for ALL platforms, x86, arm, mobile, desktop, supercomputer, every calculation anywhere is simple math & different math is best used on different types of processors

opencl runs everywhere, perfect way to utilize the whole 'apu' from whatever company


----------



## xorbe (Aug 20, 2015)

Core count stuck at 4 forever for mainstream.  But admittedly I'd rather have faster 4 core chips.  I only compile large code bases infrequently, so I wouldn't really be using 8 cores efficiently most of the time.


----------



## Aquinus (Aug 20, 2015)

Yet my 3820 is still perfectly adequate? You people complain about the iGPU being huge, but when it's not used it gets *power gated*, which means it *consumes no power*. Second, if my 3820 (which is still a quad-core on skt2011) is adequate for just about everything I throw at it, why is there incentive to put more on a mainstream platform that will cost more to produce? Sorry, but the market isn't demanding it, only power users are. If you really have your panties in a bunch about the iGPU, go HEDT; if you have a hard on for cores, go AMD or Xeon. Simple fact is, if you're not happy with mainstream offerings, you're probably not a f**king mainstream user.

People in this thread whine, but if you don't like it, don't buy it! The complaining in this thread is simply astonishing.


tabascosauz said:


> Also, six cores with or without an iGPU on LGA115x would put an awful lot of heat in a smaller package than LGA2011. It would also have to do away with TIM, confirming my argument that Intel would have to put more money into this new i5/i7K design than it is actually worth.


This. I would roll it into the mainstream platform argument. People who want more cores and no iGPU are power users, not mainstream users. My wife has a Mobility Radeon 265X in her laptop, but it has been running on the HD 4000 graphics and she's never noticed a difference. That's because she's like the majority of users out there who would never use it.

We here at TPU are a minority, not a majority. It's amazing how people don't seem to understand that and how the market is driven by profit, not making power users happy.


----------



## tabascosauz (Aug 20, 2015)

Aquinus said:


> We here at TPU are a minority, not a majority. It's amazing how people don't seem to understand that and how the market is driven by profit, not making power users happy.



In the opinion of one particularly notable user whose name is closely modeled after a smartphone model, "Fuck you, dude! I don't care how difficult it is to go beyond 4 cores in a mainstream substrate package and I don't care how the business world works. Intel should work *exclusively *for *ME* and take cues from what *I *think. Future is Fusion, and even though half of my incomprehensible argument states that Intel's GPUs aren't good enough and need improvement, I still think, quite to the contrary, that the iGPU just needs to go." I think that user also doesn't understand the meaning behind the two little words of "fuck off" either, 2 shitstorms of an article about Skylake later.

Seriously, a lot of people need to look up that WCCFTech (I think) article where an i7-5960X engineering sample was first leaked to the public (after, of course, being delidded improperly and having half of its broken die still epoxied to the IHS). That LGA2011-3 package is *huge*. Not only is it huge, the 8-core die is also *HUGE*. The size of the die alone is not too far off that of the entire LGA1150 IHS. When we consider that a six-core die would not be too much smaller (since it needs more than 8MB of L3 if it wants to avoid ending up like the X6 1100T), a 6-core mainstream CPU is just not logical on size alone, without making any mention of the iGPU.


----------



## Sony Xperia S (Aug 20, 2015)

kn00tcn said:


> you're praising amd for letting applications utilize the gpu in new greatly improved performance ways... yet intel isnt allowed to offer an identical chip!?



It has never been an Intel idea to offer Fusion products. They stole the idea and simply copied it without actually having a clue. They still market this for graphics acceleration purposes only.

But why don't they keep their inadequate graphics "accelerators" on motherboards, as it always used to be, and actually give customers the right to choose what they want?



xorbe said:


> Core count stuck at 4 forever for mainstream.



Nope, AMD will change this coming next year. They only need a working CPU on 14 nm and Intel's bad practices will be gone forever.


----------



## FordGT90Concept (Aug 20, 2015)

I think if Intel feels compelled to add more cores, it will be on HEDT, not mainstream.  Most people that buy these Skylake chips will only use them for browsing the internet and occasionally making a movie or picture book.  A dual core from a decade ago can do that perfectly well.  I can't see more than four cores on mainstream for a very long time.


----------



## Sony Xperia S (Aug 20, 2015)

FordGT90Concept said:


> Most people that buy these Skylake chips will only use them for browsing the internet and occasionally making movie or picture book.



i7-6700K for browsing the internet? Really? 
Wow, do you realise that in most countries these Skylake processors are the top of the line that can be afforded? They are bought by people who are either enthusiasts or pretend to be such, or who just watch their budgets tightly.



FordGT90Concept said:


> A dual core from a decade ago can do that perfectly well.  I can't see more than four cores on mainstream for a very long time.



With a slow CPU there is the unpleasant feeling of waiting, and waiting, while it takes its time to process all the required data... a waste of time on an enormous scale, if you have the willingness and patience to cope with that.

I'm just telling you that I'm 99% sure that in 2017 you will start singing another song.


----------



## 64K (Aug 20, 2015)

Sony Xperia S said:


> Nope, AMD will change this coming next year. They only need a working CPU on 14 nm and Intel's bad practices will be gone forever.



So long as Intel stubbornly insists on providing chips that customers actually want they are doomed to success.


----------



## Sony Xperia S (Aug 20, 2015)

> So long as Intel stubbornly insists on providing chips that customers actually want they are doomed to success.



Customers have no choice. They just buy what they know, and the propaganda machine works and tells them: forget AMD, buy Intel. And because Intel has the cash to keep that machine running, it simply still works for them. We will see for how long.


----------



## FordGT90Concept (Aug 20, 2015)

So if you had an 8-core, 16-thread processor, what would you do on a regular basis--that is time sensitive--that pushes it over 50% CPU load?


The bottleneck for most consumers isn't CPU but HDD or internet performance.  The bottleneck for gamers is the hardware found in the PlayStation 4 and Xbox One tied to how ridiculous of a monitor(s) they buy (e.g. a 4K monitor is going to put a lot more stress on the hardware than a 1080p monitor).


----------



## Aquinus (Aug 20, 2015)

Sony Xperia S said:


> Customers have no choice. They just buy what they know, and the propaganda machine works and tells them: forget AMD, buy Intel. And because Intel has the cash to keep that machine running, it simply still works for them. We will see for how long.


That's not the point. There simply are limitations to what more cores can do. Not every application can fully utilize a quad core, not because they're not multi-threaded applications, but because there is too much locking going on to actually realize that much CPU compute. Intel isn't pushing cores ahead because it costs money for very little gain for the average consumer.
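The locking ceiling described here is essentially Amdahl's law: if a fraction p of a program's work is parallelizable and the rest (lock contention, serialized I/O) stays serial, the best speedup n cores can deliver is 1 / ((1 - p) + p/n). A rough sketch, using a hypothetical 80%-parallel desktop workload as the assumption:

```python
# Amdahl's law: theoretical best-case speedup when a fraction p of the
# work is parallelizable and the remainder stays serial (locks, I/O waits).
def amdahl_speedup(p: float, n: int) -> float:
    """Best-case speedup on n cores for a parallel fraction p."""
    return 1.0 / ((1.0 - p) + p / n)

# A hypothetical 80%-parallel app gains little past 4 cores:
# the serial 20% dominates the runtime.
for n in (2, 4, 8, 16):
    print(f"{n:2d} cores -> {amdahl_speedup(0.80, n):.2f}x")
```

Under that assumption, doubling from 4 to 8 cores only moves the speedup from 2.5x to about 3.3x, and no core count can push it past 5x, which is one way to quantify why extra mainstream cores buy so little for typical lock-heavy software.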



FordGT90Concept said:


> So if you had an 8-core, 16-thread processor, what would you do on a regular basis--that is time sensitive--that pushes it over 50% CPU load?
> 
> 
> The bottleneck for most consumers isn't CPU but HDD or internet performance.  The bottleneck for gamers is the hardware found in the PlayStation 4 and Xbox One tied to how ridiculous of a monitor(s) they buy.


Not just that, but you'd actually have to utilize all those cores. Most applications run well more than 4 threads; it's just that, more often than not, threads have to block for input or for another thread to release resources that need to be thread-safe. More cores is great for servers but I see absolutely no justification for it on a typical consumer platform. I agree with you on this point, if it wasn't obvious.


----------



## FordGT90Concept (Aug 20, 2015)

Aquinus said:


> More cores is great for servers...


Only if the server can utilize those extra resources.  That's my point in asking that rhetorical question.

I think the only thing on the horizon that changes from the quad-core-is-enough consumer paradigm is virtual reality.


----------



## Aquinus (Aug 20, 2015)

FordGT90Concept said:


> I think the only thing on the horizon that changes from the quad-core-is-enough consumer paradigm is virtual reality.


How is that not GPU compute? Excluding VR, high resolution puts more strain on GPU compute than CPU compute. I wouldn't call that an argument for more CPU power; rather, I think it just validates our point that there isn't a whole lot of purpose to more cores for your average consumer.


----------



## FordGT90Concept (Aug 20, 2015)

The VR system has to not only compute the virtual environment but also the real environment.  For example, the system Valve is playing with uses LADAR to make sure the user doesn't crash into anything in the physical realm.  As VR improves, it could even incorporate real objects into the fictional world compounding the amount of CPU resources needed.


----------



## Aquinus (Aug 20, 2015)

FordGT90Concept said:


> The VR system has to not only compute the virtual environment but also the real environment.  For example, the system Valve is playing with uses LADAR to make sure the user doesn't crash into anything in the physical realm.  As VR improves, it could even incorporate real objects into the fictional world compounding the amount of CPU resources needed.


My point is: isn't VR computation done on the GPU, much like 3D is before it's written to the screen? The LADAR stuff makes sense since it's essentially a pseudo-realtime thing, but the VR stuff feels like it would fall in the realm of GPU compute because of the nature of what it's doing to what's being rendered. Much like draw calls and AA, it's highly parallel. Either way, I still feel the need for more GPU power will continue to outpace the need for more CPU compute, at least for the time being.


----------



## FordGT90Concept (Aug 20, 2015)

As far as I know, the GPU would only take care of the game elements they take care of now (rendering and physics).  The CPU is much better suited for the workloads that are unique to VR (like LADAR).  These aren't highly parallel because other components have to respond to the new data at every pass.

Oh, sure.  Pretty much all software out there that uses the GPU will "smoke them if you got them."  CPU workloads are always more subdued by design because excessive CPU use is just...wasteful.  GPU demand will always exceed CPU demand simply due to the nature of what they do.

All I'm saying is we haven't seen the end of new workloads for CPUs.  VR is just one example.


----------



## radrok (Aug 20, 2015)

FordGT90Concept said:


> So if you had an 8-core, 16-thread processor, what would you do on a regular basis--that is time sensitive--that pushes it over 50% CPU load?
> 
> 
> The bottleneck for most consumers isn't CPU but HDD or internet performance.  The bottleneck for gamers is the hardware found in the PlayStation 4 and Xbox One tied to how ridiculous of a monitor(s) they buy (e.g. a 4K monitor is going to put a lot more stress on the hardware than a 1080p monitor).



I actually find myself disabling hyperthreading when I'm not working on this PC. Gaming-wise, anything more than 4/6 cores is just pure e-peen, but since I often like to render mid-work, I like to have the option to do that quickly to see what will come out in the end. 

Anything that renders pegs the CPU at 100% constantly. The program I use scales up to 64 threads; I could use more, but 2P platforms are crap for multi-purpose computing that also requires high single-threaded performance.


----------

