Friday, August 14th 2015

Intel Core "Skylake" Processors Start Selling

Retail availability of the two Core "Skylake" SKUs Intel debuted, the Core i7-6700K and Core i5-6600K, begins today. This is when you can pick up a boxed chip off the shelf or order one online. To help ease socket confusion, online retailers are selling these chips bundled with compatible socket LGA1151 motherboards at a nominal discount; some bundles also include DDR4 memory, depending on the motherboard. On its own, the Core i7-6700K is priced at US $343, while the Core i5-6600K is priced at $250.

The i7-6700K runs at 4.00 GHz out of the box, with a 4.20 GHz Turbo Boost frequency. It also offers 8 MB of L3 cache and HyperThreading, which presents 8 logical CPUs for the OS to address. The Core i5-6600K, on the other hand, runs at 3.50 GHz with a 3.90 GHz Turbo Boost, offers 6 MB of L3 cache, and lacks HyperThreading. Both are quad-core chips with unlocked clock multipliers for overclocking. The retail packages of both chips lack stock cooling solutions, so you need to have an LGA115x-compatible cooler ready. The TDP of both chips is rated at 91 W. Intel will put out some of the finer micro-architecture details on the 16th of August, 2015. More quad-core Core i5 SKUs in the series will be released on the 29th of August, 2015. Dual-core Core i3 SKUs will launch towards the end of September, 2015.

92 Comments on Intel Core "Skylake" Processors Start Selling

#51
Aquinus
Resident Wat-man
64K: I was hoping you would chime in on this. IIRC I've seen you posting about the difficulties of writing multi-threaded code as a programmer. I've only done a small amount of programming in BASIC and assembly language a long, long time ago, but it's easy to see how one subroutine could depend on the values of variables set in another part of a program and would need to be performed sequentially. Performing the subroutine at the same time as the part of the program where those values are being determined in the first place wouldn't work, or wouldn't be any faster. That's just one example.
I figured there would be some interest. The challenge is putting it together: such a write-up couldn't simply be an essay. I would need visual tools and example code to demonstrate it clearly. If I did this, I would probably do the write-up on GitHub or something, using GitHub-flavored Markdown, since it can do syntax highlighting for the language I'd write the examples in. This isn't something I could write up in one day; it would take me some time to build the examples and revise the written parts a few times before letting it loose on the public.

People don't realize it, but multi-threaded applications spend a lot of time waiting for input from somewhere else. Every time a thread has to wait on a lock, it isn't doing useful work. There are techniques for making this "less bad," but in some cases you simply can't do the work in parallel because of the nature of the application.
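To make that concrete, here's a minimal Java sketch (class name, thread count, and iteration count picked arbitrarily): two threads contending on a single lock spend much of their time blocked instead of doing work.

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockContentionDemo {
    private static final ReentrantLock lock = new ReentrantLock();
    private static long sharedCounter = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable worker = () -> {
            for (int i = 0; i < 1_000_000; i++) {
                lock.lock();         // if the other thread holds the lock, we just wait here
                try {
                    sharedCounter++; // the actual "useful work" is tiny compared to the waiting
                } finally {
                    lock.unlock();
                }
            }
        };

        Thread t1 = new Thread(worker);
        Thread t2 = new Thread(worker);
        long start = System.nanoTime();
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.printf("counter = %d, took %.1f ms with two contending threads%n",
                sharedCounter, (System.nanoTime() - start) / 1e6);
    }
}
```

On most machines the two threads won't finish much faster than one would, because the shared lock serializes the real work.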

If there is enough public interest in such a document, I would be willing to start making something.

Edit: If I were to do this, the documentation would simply be part of a (probably) Clojure project that can be run, tested, and played with. I would also try to keep it relatively simple, as I'm not looking to write a dissertation. :p
Posted on Reply
#52
lilhasselhoffer
FordGT90Concept: I regret to inform you of this. You can get better than that right now if you don't mind selling both kidneys.
Sweet baby jebus, 18 cores and 36 threads. That's absolutely insane, but the tray cost of $7174 kinda makes that CPU seem "reasonable" by Intel pricing standards.


Of course, that's about $200 per thread ($7,174 / 36). If that kind of pricing were applied to the "enthusiast" parts, a 5930K would run $2,400 (it currently sits at $580 on Newegg). Alternatively, the "mainstream" 4790K would cost $1,600 (currently sitting at $240 on Newegg).
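For anyone who wants to check the math, a throwaway snippet (prices and thread counts copied from above; the rounding is mine):

```java
public class PricePerThread {
    public static void main(String[] args) {
        double xeonTray = 7174.0;   // 18-core / 36-thread Xeon tray price quoted above
        int xeonThreads = 36;
        double perThread = xeonTray / xeonThreads;  // ~$199; the post rounds this to $200/thread

        System.out.printf("Xeon: about $%.0f per thread%n", perThread);
        System.out.printf("5930K (12 threads) at that rate: $%.0f%n", perThread * 12);
        System.out.printf("4790K (8 threads) at that rate:  $%.0f%n", perThread * 8);
    }
}
```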



Kinda seems like piling on more cores doesn't lead to gaming performance, drives CPU cost up nearly exponentially, and actually forces every core to clock lower (2.5 GHz base for the Xeon, 3.5 GHz for the 5930K, and 4.0 GHz for the 4790K). Really, the last high-end processors aimed specifically at gaming that ran at 2.x GHz frequencies were in the Core 2 series. Yes, IPC improvements mean 2.4 GHz today is better than 2.6 GHz half a decade ago, but that doesn't excuse a huge slide backwards.


Why do we need more cores again?
Posted on Reply
#53
FordGT90Concept
"I go fast!1!11!1!"
Because AMD fanboys said so! :D


Newegg and Amazon still don't have i7-6700K inventory. :(
Posted on Reply
#54
jagd
I don't get people who act like Intel shareholders. Why this hostility towards AMD? Intel is milking users because there is no competition from a high-performance CPU. It delivers 3-5% performance increases with each new tick-tock cycle, and some people even defend this.

Why should we want more cores now? Because the GHz/speed war is over; the only real performance gains will come from core count, plus a bit from IPC increases. That's why. We need more cores if we want more performance.
peche: moar moar cores, cores here, cores there ...
Where are you, AMD?
Posted on Reply
#55
FordGT90Concept
"I go fast!1!11!1!"
Sony Xperia S: Because the guys at AMD are always FIRST with innovation. Remember 64-bit? Same story. I guess if it depended on Intel, we would never have gotten it. :D
IA-64 predates Clawhammer by quite a few years. Intel rightfully wanted to kick x86 to the curb; it has a ton of baggage we're still stuck with.
Sony Xperia S: At this very moment, my Windows 7 Task Manager reports 648 active threads and 61 processes.
Please don't tell me that you need sequential processing on all of them and that they are dependent on each other. :laugh:
You only need one heavy, async, multithreaded process to saturate any processor... even that 36-core Xeon.
Posted on Reply
#56
Sony Xperia S
FordGT90Concept: IA-64 predates Clawhammer by quite a few years. Intel rightfully wanted to kick x86 to the curb; it has a ton of baggage we're still stuck with.
Yeah, but the only thing that's certain is that Intel never brought that technology to mainstream users. They were just going to release it at some undetermined point in the future.
Posted on Reply
#57
FordGT90Concept
"I go fast!1!11!1!"
AMD beat Intel to the punch. This was around the same time Intel was dragging its heels with NetBurst; a lot of bad decisions were made at Intel back then.
Posted on Reply
#58
RejZoR
While x86 has tons of "baggage," it's also the reason we can run apps from 15 years ago without major issues. That's something I absolutely hate about consoles, for example: they have ZERO backward compatibility. And with their short life spans, the content for them is very short-lived, especially with everything so tied to their online services. A PS2 can still be used today after all this time; I'm not so sure you'll be able to do anything with a PS4 when it's as old as the PS2 is now...
Posted on Reply
#59
Frick
Fishfaced Nincompoop
Sony Xperia S: And that is crap compared to what could have been in a better world.
Life is not worth living, dude. It can't be. The grass is always greener in my head, so I'm off committing suicide in WoW.
Posted on Reply
#60
Sony Xperia S
Frick: Life is not worth living, dude. It can't be. The grass is always greener in my head, so I'm off committing suicide in WoW.
Good luck in hell then. :)

Meanwhile, we will keep fighting to achieve our goals, because if we relied on people who have no clue, nothing would ever be in proper order.
Posted on Reply
#61
64K
jagd: I don't get people who act like Intel shareholders. Why this hostility towards AMD? Intel is milking users because there is no competition from a high-performance CPU. It delivers 3-5% performance increases with each new tick-tock cycle, and some people even defend this.

Why should we want more cores now? Because the GHz/speed war is over; the only real performance gains will come from core count, plus a bit from IPC increases. We need more cores if we want more performance.
More cores aren't the solution when the average user/app/game doesn't use the extra cores anyway. It's true that the silicon GHz war is over, but read up on the direction Intel is heading. They are planning to move to new materials at 7 nm and beyond, and there is potential to increase clocks significantly; this tech isn't in the distant future, either. We've got around a year until the 10 nm die shrink of Skylake, called Cannonlake, around a year after that until a new 10 nm architecture, and after that Intel is leaving silicon behind, so maybe three years. Possibly sooner if Intel skips a new architecture for 10 nm.

The future is not more and more cores.

arstechnica.com/gadgets/2015/02/intel-forges-ahead-to-10nm-will-move-away-from-silicon-at-7nm/
Posted on Reply
#63
64K
Sony Xperia S: In your dreams only. Your dreams are my nightmares. And it is AMD who will save us all. :)

www.winbeta.org/news/amd-directx-12-will-finally-unlock-true-potential-your-multi-core-cpu


Yes, this is one of the many things that MS is promising for DX12. They also claim that DX12 will be able to do split-frame rendering, letting an AMD card and an Nvidia card work together to render each frame, and that DX12 will make it easier for programmers to code games. What we actually get could well be a buggy mess that developers won't use for a long time. It's too soon to tell right now, but I hope DX12 will at least make better use of cores. Whether you have 4 cores or 8, you'll benefit from that.
Posted on Reply
#64
Aquinus
Resident Wat-man
Sony Xperia S: In your dreams only. Your dreams are my nightmares. And it is AMD who will save us all. :)

www.winbeta.org/news/amd-directx-12-will-finally-unlock-true-potential-your-multi-core-cpu


How about some sources from somewhere other than AMD or Intel? I suspect there is some bias that comes with internal reviews, as we've seen first-hand. Considering there aren't any true comparisons between DX11 and DX12 yet, these claims can't exactly be verified, and they say nothing about current software. With that said, as a software developer who actively writes multi-threaded code, I can say this is a huge over-generalization, because parallelism depends on the workload. There is no telling how an arbitrary task will behave; there is no "one size fits all" when it comes to multi-threading.
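As a tiny illustration of what "depends on the workload" means (the numbers and operations here are arbitrary), a Java sketch contrasting a task that splits cleanly across cores with one where every step needs the previous result:

```java
import java.util.stream.LongStream;

public class WorkloadShapeDemo {
    public static void main(String[] args) {
        final long n = 50_000_000L;

        // Embarrassingly parallel: every element is independent of the others,
        // so a parallel stream can spread the work across however many cores exist.
        long t0 = System.nanoTime();
        long parallelSum = LongStream.rangeClosed(1, n).parallel().map(i -> i * 3).sum();
        long t1 = System.nanoTime();

        // Inherently sequential: each step feeds the next, so this loop runs on
        // one core no matter how many the CPU has.
        long chained = 1;
        for (long i = 1; i <= n; i++) {
            chained = chained * 6364136223846793005L + i; // depends on the previous value; overflow wraps and is harmless here
        }
        long t2 = System.nanoTime();

        System.out.printf("independent work: sum = %d (%.0f ms)%n", parallelSum, (t1 - t0) / 1e6);
        System.out.printf("dependent chain:  end = %d (%.0f ms)%n", chained, (t2 - t1) / 1e6);
    }
}
```

The first task gets faster as you add cores; the second one doesn't, regardless of what the API underneath promises.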

Either way, I would like to see a game with both DX11 and 12 implementations and see how performance varies. Until then, we really don't have anything reliable to tell us how it will go.
Posted on Reply
#66
64K
Sony Xperia S: There are articles, for instance this one from PCWorld, if that counts as an institution worth mentioning.

www.pcworld.com/article/2900814/tested-directx-12s-potential-performance-leap-is-insane.html
I think pretty much everyone is saying that DX12 has the potential to do good things for gamers. Note the conclusion of the article:


"But back in reality

Before you look at the results from these tests and assume you're going to see a frickin' 10x free performance boost from DX12 games later this year (zing, zam, zow!): you won't. So ease off the hype engine.

What's realistic? I'd expect anywhere from Microsoft's claims of a 50-percent improvement all the way to what we're seeing here in Futuremark's test. This will depend very much on the kind of game and the coding behind it.

The reality is this test is a theoretical test, although one made with advice from Microsoft and the GPU vendors. This test reveals the potential, but translating that potential into an actual game isn't quite as easy.

We won't know for sure until actual DirectX-12-enabled games ship. Microsoft estimates that will be the end of this year."



We're all hoping for the best, Sony Xperia. We're just not counting our chickens before they hatch. Making a CPU-buying decision based on what DX12 may bring wouldn't be a good idea, IMO, because we won't know what it will deliver until the first DX12 games drop.
Posted on Reply
#67
alucasa
I stopped gaming some years ago and am mostly into hobby 3D modelling using Blender, Vue, and Terragen. Blender, which I use the most frequently, cannot use more than 8 threads for rendering, so no, I don't need a CPU with more than 8 threads.
Posted on Reply
#68
Aquinus
Resident Wat-man
64K"But back in reality
Before you look at the results from these tests and assume you're going to see a frickin' 10x free performance boost from DX12 games later this year, zing, zam, zow! You won't. So ease off the hype engine.

What's realistic? I'd expect anywhere from Microsoft's claims of 50-percent improvement all the way to the what we're seeing here in Futuremark's test. This will depend very much on the kind of game and the coding behind it.

The reality is this test is a theoretical test, although one made with advice from Microsoft and the GPU vendors. This test reveals the potential, but translating that potential into an actual game isn't quite as easy.

We won't know for sure until actual DirectX-12-enabled games ship. Microsoft estimates that will be the end of this year."
I was just about to quote that exact bit of the article. We've known this already, but the real question is whether it will help in reality. This benchmark in 3DMark was designed to test draw calls *and only* draw calls, nothing more, nothing less. It's like running a Whetstone or Dhrystone benchmark on your CPU and tying it directly to gains in games.
Posted on Reply
#69
lilhasselhoffer
Sony Xperia S: In your dreams only. Your dreams are my nightmares. And it is AMD who will save us all. :)

www.winbeta.org/news/amd-directx-12-will-finally-unlock-true-potential-your-multi-core-cpu


I will put this simply. You are either ignorant with a short attention span, or a troll.


Let's explore the ignorance and short attention span. First off, what current games run DX12? None. So what are you basing your magical assumptions on? Magical numbers from AMD. The same artificially inflated, cherry-picked numbers that "irrevocably proved" Bulldozer beat the pants off Intel processors? The same architecture that AMD has officially admitted was a mistake and is being ejected with Zen? If you want to make artificial numbers your dogma, then you should get an echo chamber. AMD and Intel both BS us, and until we confirm their claims in reality, absolutely everything they say should be taken as optimistic figures.

As for the short attention span, read my earlier responses. They include the fact that DX12 and OpenGL/CL are supposed to make multi-threaded rendering the new standard. The previous responses also point out that CPU development isn't done overnight, and that any new developments will take years to show up in shipping CPUs. While we're on the subject, they also say that carving 6 cores out of a 4-core CPU is pants-on-head stupid. It seemed like a point you'd make, so I thought it reasonable to cover it.



Assuming you're a troll, go eat a big bag of lemons. Maybe then that sickening smile will be destroyed.




So we're clear about our relatively recent history: NetBurst was moronic, and AMD beat Intel to the 64-bit punch. That was over a decade ago, yet we still run 99% of programs natively in 32-bit. Most games run on some flavor of DX11 right now, but before DX11 rose to prominence, DX9 was the standard. DX10 came bundled with the turd that was Vista, so despite its improvements over DX9 it never saw significant adoption. AMD beat the pants off Intel in the P4 era, but has been trailing ever since the release of Phenom. AMD has consistently offered more cores than Intel on its mainstream parts (remember Thuban?), yet still manages to underperform on the CPU side.

If you somehow still lack a grasp of this conversation, please reread the previous posts. Your points have all been addressed, answered, and re-answered when asked a second time. Either find some new material, or start your own thread about how mainstream offerings need more than 4 cores. In case you missed it, this thread is about Skylake.
Posted on Reply
#70
caleb
Any sense in ditching my 2500K for 1080p gaming?
Posted on Reply
#71
64K
caleb: Any sense in ditching my 2500K for 1080p gaming?
No, that 2500K is still good. If anything, upgrade your GPU if you're still using the 560 Ti in your specs.
Posted on Reply
#72
caleb
64K: No, that 2500K is still good. If anything, upgrade your GPU if you're still using the 560 Ti in your specs.
Yeah, that's what I was thinking too. I hate not working with hardware anymore; I miss building new platforms. That sexy smell of a new motherboard!
Posted on Reply
#73
Sony Xperia S
caleb: Yeah, that's what I was thinking too. I hate not working with hardware anymore; I miss building new platforms. That sexy smell of a new motherboard!
There is the smell of a new video card too.

But... the best move now is to wait a year or two and see what's on offer then.

Because there will be 14/16 nm products from both graphics vendors, and from both CPU suppliers as well.
Posted on Reply
#74
horik
Skylake CPUs have been available here for more than a week already, but I will get one next month; hopefully prices will drop a bit.
Posted on Reply
#75
Vlada011
Dieinafire: Will this be faster than an AMD CPU?
Hehe, hehe... faster than an AMD CPU...
Are you aware that every AMD CPU still works on the Crosshair IV Formula, an AM3 motherboard? That board was unboxed five years ago,
and the best board for them launched, I think, four years ago.
AMD fans are waiting for a new messiah, this Zen. They still haven't learned: after Bulldozer, the Hawaii "TITAN killer," and the Fury X "TITAN X killer," they still haven't learned, and even AMD's Project Quantum shipping with an Intel processor isn't enough of a hint.

Z170 is a good platform for Mini-ITX builds: an i7-6700K, and G.Skill has some nice 2x 8 GB DDR4 dual-channel kits.
A real fit for an EVGA Hadron case, a GTX 980, and an Intel 730 SSD.
But only for small rigs; anything bigger and more expensive is a waste of money compared to X99.
Posted on Reply