Tuesday, May 12th 2015
95W TDP of "Skylake" Chips Explained by Intel's Big Graphics Push
Intel's Core "Skylake" processor lineup, built on the company's swanky new 14 nanometer fab process, drew heads to its rather high 95W TDP for quad-core parts such as the Core i7-6700K and Core i5-6600K, even though their 22 nm predecessors, such as the i7-4770K and the i5-4670K run cooler, at 84W TDP. A new leaked slide explains the higher TDP. Apparently, Intel is going all-out with its integrated graphics implementation on Core "Skylake" chips, including onboard graphics that leverage eDRAM caches. The company is promising as much as 50% higher integrated graphics performance over "Haswell."
Although the chips have high rated TDP, the overall energy efficiency presents a different story. SoCs based on "Skylake" will draw as much as 60% lower power than "Haswell" based ones, translating into 35% longer HD video playback on portable devices running these chips. Intel's graphics performance push is driven by an almost sudden surge in display resolutions, with standards such as 4K (3840 x 2160) entering mainstream, and 5K (5120 x 2880) entering the enthusiast segment. Intel's design goal is to supply the market with a graphics solution that makes the two resolutions functional on desktop and video, if not gaming.
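As a rough illustration of why a 60% cut in SoC power would translate into "only" about 35% more playback time, consider the back-of-the-envelope sketch below. Every figure in it (battery capacity, SoC and platform power) is an illustrative assumption, not a number from the leaked slide; the point is simply that the display and the rest of the platform keep drawing power no matter how efficient the SoC becomes.

# Rough sketch of the battery-life claim. All wattages and the battery
# capacity below are illustrative assumptions, not figures from the slide.
battery_wh = 40.0          # assumed battery capacity (Wh)
soc_haswell_w = 3.0        # assumed SoC power during HD video playback (W)
platform_w = 3.5           # assumed display, memory, storage, radios, etc. (W)

soc_skylake_w = soc_haswell_w * (1 - 0.60)   # the quoted 60% lower SoC power

haswell_hours = battery_wh / (soc_haswell_w + platform_w)
skylake_hours = battery_wh / (soc_skylake_w + platform_w)

print(f"Haswell-class playback: {haswell_hours:.1f} h")
print(f"Skylake-class playback: {skylake_hours:.1f} h")
print(f"Gain: {(skylake_hours / haswell_hours - 1) * 100:.0f}%")   # ~38% with these inputs

With these assumed inputs the gain lands in the same ballpark as the 35% figure Intel quotes.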
Source:
AnandTech Forums
72 Comments on 95W TDP of "Skylake" Chips Explained by Intel's Big Graphics Push
Relax man, he can have his own opinion!
For god's sake. And btw, I agree with him.
I don't give a shit if the i5/i7 K series are for mainstream platforms; I know that the majority of buyers are gamers and performance users WHO DON'T NEED AN INTEGRATED GPU!!
As for Haswell-E, which comes without an integrated GPU: I would choose it if it gave me more than 3% extra gaming performance (4790K vs 5820K). When I don't get that 3%, and getting it means doubling my budget for a 5960X, sorry but "NO THANKS".
PS: sorry for my english :S
You bet I'm an asshole, anyone here at TPU knows that I'm not afraid to speak my opinion. You really registered just to say that? How about not trying to throw the thread off topic. :slap:
Everyone already knows I'm an ass, you don't need to point out the obvious.
In summary: welcome to the mainstream market. It demands iGPUs, so Intel includes one on most of their CPUs and has built the PCH around having that iGPU. I personally think it's dumber to leave out an iGPU when you have all this dedicated crap in your motherboard for it. It's even dumber to use the power argument, because Intel has been power-gating iGPUs since Sandy, so it's not like it even adds to the heat when you have a discrete card. I personally find the argument for leaving the iGPU out amusing, since most people who have one use it. Just because you're a gamer doesn't mean the entire market is full of PC gamers (even if it should be.)
Lastly, I would prefer having an iGPU powerful enough to do everything I want with it instead of a discrete GPU that's overkill. It depends on what you're using it for, and most people use Facebook, check email, watch video, and do everything that isn't 3D acceleration. So if Intel is thinking about how to make more money, I can bet you they're not thinking about catering to gamers.
Side note: I'm on a laptop with an Iris Pro in it now and it works fine for everything that isn't gaming. ;)
It's also nice for when you want to run a super-slim mITX build and need CPU power, but have no desire (or room) for a dedicated GPU. That way you can still have accelerated video playback, fast encoding, etc. in that mini case.
"kinda feels like a waste when most PC enthusiasts will be using a dedicated GPU but then again I can see why too, Intel wanna compete & be above AMD on all levels" For the 3rd time already did you not read my comment? I said I understand why? Get it through your dense head.
Fixed one of your sentences, call me immature but I don't care, this site needs more humour & less ego.
Also, making comments like this only serves to perpetuate an argument outside the realm of the thread's topic, and I will have no part of that.
And for those who don't play a lot of games but need a good CPU, it saves some bucks not having to buy a dedicated GPU as well.
Make no mistake, you are paying Intel about $50 for something you will probably never use. It's a borderline unfair practice IMO, because you're not given a choice.
OTOH, the cost of R&D for big generation-to-generation improvements in this field is probably large, so Intel needs to absorb it by selling integrated GPUs to the masses. That's the best personal justification I can come up with.
Case in point:
i7 4770:
www.newegg.com/Product/Product.aspx?Item=N82E16819116900&cm_re=i7_4770-_-19-116-900-_-Product
Xeon E3:
www.newegg.com/Product/Product.aspx?Item=N82E16819117316&cm_re=xeon_e3-_-19-117-316-_-Product
It's basically the same CPU, but the Xeon is $50 cheaper.
The E3 Xeon 1245v3 is probably a better comparison because the clocks are the same and both have iGPUs. Only the cheapest E3 Xeons don't have iGPUs, IIRC, not the most performant ones.
All in all, it's still a zero sum game unless there are particular needs for your system.
None of this changes the fact, though, that no one would really notice a difference whether the iGPU was there or not, and if you're already spending over $500, $10 won't make a huge difference.
Now, $50 is pocket change for some and a lot of money for others, depending on where you hail from, but one thing is certain: it's better spent on a superior discrete card. Or more RAM... Or a higher capacity SSD, you name it. Especially if you're never gonna use that iGPU. It's better when I decide what my money goes into instead of Intel.
Direct comparison between the two:
ark.intel.com/compare/80910,75122
They upped the TDP to 95 W for that IGP, yet it's still not at the level of the A10-7850K's (100 W) IGP, and the Kaveri IGP doesn't even have eDRAM. Still, it's a bit closer than the HD 4600 was, of course.
Bear with the French language in the pics... numbers are universal :roll:
OK, on the CPU side it's totally not the same story... ;)
Conclusion: if I want a CPU with an IGP and no need for a discrete card, but want to keep the Hybrid CFX option open... I'd go with AMD and Kaveri instead of Skylake, even in 2015/16 (or Godavari or the next APU, since Kaveri is due for a refresh soon).
To the points raised thus far:
1) The inclusion of an iGPU increases the price of a processor.
Technically, yes. Realistically, no. The inclusion of an iGPU is a design choice. They start out with it, and it's due to their target audience. Their main audience isn't gaming enthusiasts; it's business applications, where multiple tabbed spreadsheets and Flash presentations are the most graphically demanding things required. For such usage, an iGPU is a minor increase in expense that pays off hugely by cutting out dedicated video cards. We might not like it as gamers, but we're such a small market segment that it's a moot point.
2) AMD competing with Intel, via the APU, is what spurred Intel's development of an iGPU
Nope. Somehow people forget that hardware development takes years. If Intel were responding to the APU, it would only just be breaking the 1080p video barrier. This is a move from Intel that was precipitated by ARM. Their recent forays into tablet devices, along with the fact that they cite extra video playback time, are a dead giveaway. Intel has already relegated AMD to the scrap bin, in no small part due to the fact that AMD said they were pulling out of the CPU market. The APU is good, but only because they strapped a functional GPU to a CPU.
3) Intel graphics are crap (paraphrased)
Yeah, I can't argue with that. The salient point here is patent law. Nvidia and AMD own a ton of key patents for modern graphics solutions. As neither is looking to license those patents to Intel, Intel has had to reinvent the wheel. In the span of less than a decade Intel has gone from buggy trash to competent for business use. That's a huge leap, considering AMD and Nvidia took much longer to get there. If you're in doubt of this argument, I'd ask you to compare Duke Nukem 3D to Hitman: Blood Money. That's one decade of difference, and you've still got some janky, boxy figures. In comparison, Intel's iGPUs went from Sandy Bridge (2011) to competent 1080p playback and beyond in only a few years.
4) You're a shill for Intel
I wish. If I were paid for this crap I'd be able to enjoy a lot more. As it stands, I'm hoping that Intel sinks too much into iGPU development, Zen is as good as suggested, and Intel gets caught with their pants down. That would precipitate another genuinely great CPU generation, akin to the Sandy Bridge era. Skylake is unlikely to do this, and from the sounds of it will just be another 10-15% performance increase; hopefully this time without forfeiting overclocking ability. Energy efficiency is great, but you can't sell several hundred dollars of silicon on a 60% efficiency increase when the net savings would require the system to run for years before breaking even (a rough break-even sketch follows at the end of this post).
5) Intel including an iGPU is unfair
Simple response: buy something else. I'm unaware of Intel possessing a monopoly. You can buy a CPU from AMD, or perhaps a small fleet of ARM powered devices. Want performance, then buy Intel. It's crap to say, but it's reality. If I want a fast car I pay an insane amount for a Veyron. If I want a pickup truck I buy a Toyota. I can't complain to Toyota that they don't make a budget super car. What you're asking is that Toyota suddenly starts making super cars, when their pickup market represents 90%+ of the global market and prints money. While an automotive analogy isn't perfect, it does highlight the absurdity of catering to a niche market, no?
I'm looking forward to seeing how my words get misconstrued as Intel fanboyism. What I appreciate is performance, and AMD can't deliver it. If you pay the Nvidia tax, you've acquiesced to this point. Most importantly, reality seems to be against the counterargument. Look at the Steam hardware survey: most people use an Intel CPU and an Nvidia GPU. It seems the market has spoken. While wishing for the glory days of the Athlon XP is reasonable, you have to deal with reality. Right now, Intel could deliver a 0% performance increase with Skylake, focusing only on the iGPU, and still make money. Either understand that, or continue to argue that you are somehow special and deserving of a unique CPU. The former is reality; the latter is fantasy bordering on narcissism.
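Regarding the break-even remark under point 4, here's a quick sketch to put rough numbers on it. Every input (chip price, average package power, electricity rate, daily usage) is an assumption chosen for illustration, not a measured figure.

# Back-of-the-envelope break-even time for buying a chip purely for power savings.
# All inputs are illustrative assumptions.
cpu_price_usd = 350.0      # assumed price of the new chip
old_cpu_avg_w = 30.0       # assumed average package power under a mixed load
savings_fraction = 0.60    # the quoted 60% efficiency gain
kwh_price_usd = 0.12       # assumed electricity cost per kWh
hours_per_day = 8.0        # assumed daily usage

watts_saved = old_cpu_avg_w * savings_fraction
kwh_saved_per_year = watts_saved * hours_per_day * 365 / 1000.0
usd_saved_per_year = kwh_saved_per_year * kwh_price_usd

print(f"~{usd_saved_per_year:.2f} USD saved per year")                   # ~6.31 USD
print(f"~{cpu_price_usd / usd_saved_per_year:.0f} years to break even")  # ~55 years

With those assumptions you're looking at decades, not years, which is the point.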
OTOH, my AMD A10-7850K's integrated GPU behaves like a dGPU, which means I have full support for all the goodies that a dGPU has, unlike the Intel iGPU, which supports only some OpenGL features (I am running Linux) and makes many programs crash or misbehave.
On the other APU-based laptop that I have, which was dirt cheap with an underpowered 15 W APU, every program that runs on OpenGL behaves flawlessly.
It doesn't matter how big the next iGPU will be, or how many frames it's going to push; the fact is that Intel does not care enough about their iGPUs to make them full-featured.
I laugh at people owning Macraps that run on Intel without a dGPU; they are stuck with a shitty GPU forever.
Transistor count AMD K5 (1996): 4,300,000 - 251 mm^2
Transistor count AMD K10 (2007): 463,000,000 - 283 mm^2
Transistor count Core i7 (2011): 1,160,000,000 - 216 mm^2 (total count)
Transistor count Core i7 (2014): 1,400,000,000 - 177mm^2 (total count)
Assuming that the AMD CPU transistor count mirrors that of a GPU (it's a stretch, but it makes things easier), a 100-fold increase leads to a respectable increase in graphical fidelity.
Let's assume the share of Intel's transistors dedicated to the iGPU started at 20% and grew by 20% (to 24% of the 2014 chip): 1,400,000,000 x 0.24 - 1,160,000,000 x 0.20 = 336,000,000 - 232,000,000 = 104,000,000, which is roughly a 9% increase relative to the whole 2011 die.
You're telling me that a 10,000% increase in transistor count is comparable to a 9% increase? Seriously? Transistor density is important, but this is just silly. Even if you add in architectural improvements, transistor count isn't some magic stick to wave around and claim it means everything.
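Spelled out as a quick script, that arithmetic looks like this; the 20%/24% iGPU share figures are the assumptions from this post, not die-shot measurements.

# Reproducing the transistor-count comparison above. The iGPU share figures
# (20% of the 2011 die, 24% of the 2014 die) are assumptions, not measured values.
k5_transistors  = 4_300_000        # AMD K5 (1996)
k10_transistors = 463_000_000      # AMD K10 (2007)
amd_growth = k10_transistors / k5_transistors     # ~108x, i.e. a >10,000% jump

i7_2011_total = 1_160_000_000
i7_2014_total = 1_400_000_000
igpu_2011 = i7_2011_total * 0.20   # assumed iGPU share of the 2011 die
igpu_2014 = i7_2014_total * 0.24   # assumed iGPU share of the 2014 die

igpu_delta = igpu_2014 - igpu_2011                      # ~104,000,000
delta_vs_2011_die = igpu_delta / i7_2011_total * 100    # ~9% of the whole 2011 die

print(f"AMD transistor growth: ~{amd_growth:.0f}x")
print(f"Intel iGPU transistor gain: {igpu_delta:,.0f} (~{delta_vs_2011_die:.0f}% of the 2011 die)")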
Sources are always good:
www.notebookcheck.net/Intel-HD-Graphics-Sandy-Bridge.56667.0.html
-
www.anandtech.com/show/7003/the-haswell-review-intel-core-i74770k-i54560k-tested/5
-
www.wagnercg.com/Portals/0/FunStuff/AHistoryofMicroprocessorTransistorCount.pdf