Dude, your expectations are completely off the charts. I completely agree with you that if you have a Sandy Bridge processor from 4 years ago, it's still stupid to even upgrade. And the reason for that is that Intel has a clear monopoly in the industry, and they're being huge cocksuckers by feeding us peanut upgrades. Think about it: the i7 2600K came out 4 years ago, and it still performs more or less exactly the same as a modern i7 in gaming, and in other tasks is maybe 10-15% worse. 15% in 4 years. I would generally consider 60% a relevant increase, and at the pace Intel is going, you won't need to change your CPU for at least 16 years. That sounds like a stupidly long time, and we can assume Intel will improve that terrible pace one way or another (because that's how naively optimistic we consumers are). Let's assume the Zen processors are actually as good as AMD promises, and force Intel to make a bigger (even if still small) push to stay ahead. Let's say AMD knocks it out of the park and shortens those 16 years to 8. That's still a lot!
And you might think 8 years is still too much. But think about it: I have the i7 2600K. It has been 4 years, and I'm still able to overclock it as well as your processor and run games like BF4 at more or less the exact same FPS as yours, whereas your CPU is maybe 10% faster (hardly anything) in video editing programs.
This hype about Skylake is stupid, and it's something I see every year from commentators about "the next Intel processors"; they never bring any relevant improvement. Why the fuck do people say they are "waiting for Skylake" when even the performance numbers Intel gives us (which are usually best-case scenarios in certain specific areas) are hardly relevant?
Oh and btw, your processor is already a bottleneck, thanks to Intel not giving a fuck about improving the technology ever since 2010. That's why you got Mantle and DX12, and probably other improvements too in the future. Because when Intel doesn't do their job, someone else will (one way or another). Mantle and DX12 give games a percentage improvement on CPUs (especially the old ones) comparable to what you'd get by going from a 5-year-old Intel processor to a new one. That's what happens when GPUs improve 30-40% every year and CPUs only 5-10%.
It's getting to the point where I get frustrated even hearing or reading about Intel. It's bad enough to watch review after review of "Intel's newest chip" on hardware sites that don't do the processors justice by slaughtering them and giving them the bad scores they deserve -- knowing very well that it might actually have an impact on Intel. But to see generally smart and talented people on forums like these -- people whose integrity I trust more than that of reviewers -- talk about how excited they are about Intel's new chip, or how impressive it is, makes me depressed.
The i7 2600K is my last and final Intel chip, no matter what. I don't even fucking care if AMD's Zen processors end up being only as good as the 2600K (which would be a triumph in itself for AMD) and no upgrade for me at all. There is no fucking way I'm spending 1 cent of my money on Intel anymore, not with the way they have been acting. They have effectively deleted half a decade of potential processor development, the way I see it. Excusing it as market strategy and the reality of our neoliberal world is not a good enough argument in my book.
/endrage
*awaiting the shitstorm*
Below: Why you're wrong on every level:
It's not about Intel being evil or anything; it's simply that adding more cores or more threads does not help the average machine. If you want a nice, value-oriented machine, go get a simple i5-44xx, pair it with a good H97 or Z97 motherboard, drop in a good GPU and you'll be hard pressed to find games that run better on more expensive chips. Intel knows that, because they have very, very extensive validation labs running all kinds of software, as well as a very, very large network of partners asking them for different things in their next CPU. Gamedevs know that too, after all, they build said games, and they react appropriately lazily, because concurrency (the hard part of multithreading anything) is hard, yo. To see what I mean for gaming, just compare a 5930K (6C/12T) to a 4690K (4C/4T): <2fps difference in most tests, and in many of them the 5930K is actually slower than the 4690K, simply because it can't turbo boost as high ( http://www.anandtech.com/bench/product/1261?vs=1316 ). Much the same story with the 5960X (8C/16T) vs the 4690K ( http://www.anandtech.com/bench/product/1261?vs=1317 ).
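The "more cores don't help games" thing is really just Amdahl's law. Here's a quick sketch of the math (the 30% parallel fraction is a number I made up for illustration, not a measurement of any real engine):

```python
# Amdahl's law: if only a fraction p of the work can run in parallel,
# then n cores give you at most 1 / ((1 - p) + p / n) speedup.

def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Upper bound on speedup from spreading work over `cores` cores."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# Hypothetical game loop where only 30% of the frame parallelizes nicely:
for n in (4, 6, 8, 16):
    print(f"{n} cores -> at most {amdahl_speedup(0.3, n):.2f}x")

# Even with infinitely many cores, the ceiling is 1 / 0.7, about 1.43x.
```

That's why a 6C/12T chip barely moves the needle over a 4C/4T one when the game's hot path is mostly serial.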
So, if games and general-purpose computing don't really scale with cores (unlike virtualization and databases, for example), what can we try? Oh I know, let's clock the chips faster! Oh, wait... we tried that one, and it turns out you can't really raise clockspeeds much higher than they already are, due to thermals and error rates on mainstream chips. Intel learnt this the hard way with NetBurst (remember the launch when they promised 10-50GHz?). AMD mistakenly thought they could do better with Bulldozer - they couldn't. The 5GHz chip they push now has a laughable 220W TDP compared to its 4.2GHz brethren at 125W, and ships with a CLC. Sure, it looks fast on paper, but a similarly-priced 4690K (at launch at least) matches the FX-9590 quite easily (scroll down for the gaming benchmarks): http://www.anandtech.com/bench/product/1261?vs=1289 . In the tests AT runs, the 9590 wins only a handful of benchmarks, and those are so niche for most people that they're basically irrelevant. I mean, how often is file archival CPU-bound for you? How often are you rendering and encoding video?
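The FX-9590's 220W TDP isn't random, either: dynamic power scales roughly as C*V^2*f, and squeezing out those last few hundred MHz means jacking up the voltage too. A rough sketch (the voltages here are my own guesses, not AMD's actual V/f curve):

```python
# Dynamic power scales roughly as P ~ C * V^2 * f; higher clocks need
# higher voltage, so power grows much faster than frequency does.

def scaled_power(base_w: float, base_ghz: float, ghz: float,
                 base_v: float, v: float) -> float:
    """Scale a known power figure to a new frequency/voltage point."""
    return base_w * (ghz / base_ghz) * (v / base_v) ** 2

# Start from ~125W at 4.2GHz (assumed ~1.30V), push to 5.0GHz (assumed ~1.55V):
watts = scaled_power(125, 4.2, 5.0, 1.30, 1.55)
print(f"~{watts:.0f}W")  # ~212W - right in the FX-9590's 220W class
```

Same ballpark as the 9590's real TDP, and that's before leakage gets worse at higher voltage. This is why "just clock it faster" stopped being an option.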
So... now that that's over, what HAS Intel been doing since they decommissioned the NetBurst-based space-heater cores (did you know they managed to make dual-core variants of Prescott hit 3.73GHz on air at 65nm?)? Let's have a look:
2006: Core 2. Conroe was introduced. Based on an updated dual-core variant of the Pentium M, it got instruction-set parity with the Pentium 4s and was finally fit for duty as the main platform. Clockspeeds went down from the typical 3.2-3.6GHz to 2.4-3.0GHz, depending on variant and core count (1, 2 or 4).
2007: Penryn launch. A die-shrunk Conroe, among the first to hit 4GHz on air; it added the SSE4.1 instructions, 50% more cache and other minor tweaks. Very similar to the CPU-die changes of SB -> IVB, for example.
2008: Nehalem got pushed out. Here we find Intel implementing the first third of its platform upgrade: the memory controller was moved onto the CPU die, leaving the Northbridge (renamed to an I/O Hub) a shell of its former glory, now nothing more than a glorified PCI Express controller. It also introduced triple-channel RAM. Together with the IMC-CPU integration, this made the memory system of Nehalem much faster, and finally competitive with AMD's similar approach, which they had implemented first with their K8 (S754 and S940, later S939, then Phenom) CPUs many years earlier. The x86 core itself was an updated Penryn/Conroe core, itself an updated version of the P6 core used in the Pentium Pro, II and III. Nehalem also saw the return of HyperThreading, not seen since the Pentium 4. It was a nice performance boost at the time.
On the very high-end server market, Intel introduced the hilariously expensive Dunnington 6-core chip (Xeon 7400 series). Interestingly, Dunnington is based off Penryn. Over the years, we'd see the high-end chips launch later and later relative to the mainstream laptop and desktop chips.
2009: Lynnfield. A smaller, cheaper variant based around Nehalem cores. This implemented the second third of Intel's platform evolution by moving the PCIe controller onto the CPU itself. That left only the southbridge as an external component (now renamed the Platform Controller Hub, or PCH), itself connected via a modified PCIe link Intel called DMI.
2010: Arrandale, Clarkdale, Westmere and Nehalem-EX. Arrandale, Clarkdale and Westmere were 32nm die-shrinks of the Nehalem chips. The Westmere core was an updated Nehalem core, adding a few more instructions, the most interesting being the AES-NI instruction set, which does AES crypto very, very fast compared to software implementations.
Arrandale and Clarkdale were the mobile and desktop dual-core variants. With these chips, Intel finally integrated their iGPU into the CPU package. Fun fact: the GPU die was 45nm while the rest was 32nm on these chips.
Westmere proper was the high-end desktop chip, only two variants of which were ever launched under the i7 moniker: the 980 and 990X. It was most notable for bringing 6-core CPUs to the desktop, but not much else.
Nehalem-EX (aka Beckton), on the other hand, was Intel's new crown jewel: 8 HyperThreaded cores, lots of cache, a quad-channel memory controller, a brand-new socket (LGA1567, to fit the pins needed for the increased I/O) and support for 8 sockets and beyond (on the 7500 series at least). After 2 years, Dunnington was finally put to rest, and really high-end servers finally got the much-improved memory controller and I/O Hub design.
2011: Sandy Bridge (LGA1155), Westmere-EX.
Sandy Bridge was the third and final piece of Intel's platform evolution, introducing a brand-new core built from the ground up - a sharp contrast to what had so far (outside of a brief stint with NetBurst) been a long, long line of repeatedly extended P6 cores. The results of this new architecture showed immediately: available in up to quad-core, hyperthreaded variants, it was a fair bit faster than even the higher-end i7-900 series above it. Sadly for the server people, the immense validation timescales needed for server chips meant that by this stage the desktop platform was basically a generation ahead of the servers, so while desktop users enjoyed the benefits of Sandy Bridge, servers made do with Westmere-EP. Thanks to the new architecture, Intel was also able to push TDPs quite far down - down to 17W in fact - and launched the Ultrabook initiative to push thin, light, stylish laptops in an effort to revitalize the PC market.
Westmere-EX: a comparatively minor update to the MP platform, adding more cores (now up to 10 HyperThreaded cores) and the newer, updated instructions of the Westmere core.
2012: Sandy Bridge-E/EP, Ivy Bridge.
Ivy Bridge was a simple instruction-set update and a heftier-than-expected iGPU upgrade, together with a die shrink from the 32nm used on Westmere and Sandy Bridge down to 22nm. Fairly simple stuff: reduced power consumption, slightly improved performance. Thanks to the extremely low power and much-improved GPU, IVB made the Ultrabook form factor far, far less compromised than the previous SB-based machines. IVB-Y CPUs were also introduced, allowing for 7W chips, in the hopes of seeing use in tablets. Due to die-cracking issues, Intel was forced to use TIM inside the heatspreader rather than solder, and still has to even now on the smaller chips. This reduces thermal conductivity and makes overclocking the K-series chips harder if one does not wish to delid the CPU.
Sandy Bridge-E/EP: introduced the LGA2011 socket, bringing quad-channel RAM and the shiny new SB cores to the mainstream server market, finally replacing the aging i7-900 series of chips. These came in variants ranging from 4 cores all the way up to 8. They did not find their way into the MP platform, leaving MP users to slog onwards with the aging Nehalem-EX and Westmere-EX chips, though 4-socket variants of SB-E were released.
2013: IVB-E/EP, Haswell.
And now we reach the end of the line, with Haswell. Haswell is a CPU entirely focused on cutting power consumption to the minimum possible, though there is still a performance improvement of 10-20% overall, with much bigger improvements possible when using the AVX2 instructions. The TSX instructions were expected to provide a large improvement for multi-threaded tasks by making concurrent systems easier to program. Sadly, these turned out to be bugged, and Intel released a microcode update to disable them.
IVB-E/EP were direct die-shrink upgrades to SB-E/EP, allowing for even more cores (up to 12 now) and higher clockspeeds.
2014: IVB-EX, HSW-E/EP, Broadwell-Y
IVB-EX finally launches (January), bringing with it yet another new socket, LGA2011-1. Thanks to moving part of the memory controller onto an external module, the CPUs support much larger amounts of RAM; together with an increased core count of up to 15 cores per CPU, they offer higher IPC and clockspeeds than Westmere-EX. DBAs and Big Data analysts are overjoyed at being able to put huge amounts of data into much faster RAM.
HSW-E/EP launched later in the year (September), bringing with it more cores (now up to 18 per CPU), memory improvements from moving to DDR4 (requiring a new socket, LGA2011-3, along the way) and new instruction set extensions, most notably AVX2. HSW-E is also notable for including the first consumer 8-core CPU.
BDW-Y (launched as Core M) also launches in September, providing very low-power CPUs for tablet usage. These are pure die-shrinks, so there are almost no performance improvements. Due to low 14nm yields, no other CPUs based on BDW are launched.
2015: BDW-U, BDW-H, BDW-C, BDW-E/EP/EX, Xeon-D, HSW-EX, Skylake
BDW-U, the low-power Ultrabook-optimised CPUs, launch in February. Most laptops are now powered by Ultrabook-class CPUs regardless of size class, so most of the laptop market updates to it.
BDW-H, now primarily targeted at SFF machines (though still usable in high-performance laptops, with a maximum TDP of 47W), and BDW-C (for full-sized desktops) are repeatedly delayed. Some market-placement changes mean the unlocked BDW-C CPUs now get the top-of-the-line iGPU, including a large eDRAM cache. These are expected to be short-lived on the market, as Skylake launches soon after.
BDW-E/EP/EX: much like the rest of the BDW family, these are expected to be pure die-shrinks, ending up as drop-in replacements for HSW-E/EP/EX. Expect the server variants to add even more cores.
Xeon-D SoC is also launched in 2015, without much fanfare. These have 4 or 8 BDW cores with HT, dual-channel DDR4 support, and integrate the PCH onto the same die as the CPU. The CPU is BGA-only, but with an 80W TDP it far outpaces the Xeon E3 platform for CPU-bound tasks, uses less power and simplifies motherboard design. It also integrates 4 Ethernet controllers - 2 gigabit and 2 10-gigabit - plus 24 PCIe 3.0 lanes from the CPU and 8 PCIe 2.0 lanes from the integrated PCH.
HSW-EX: the final server segment Intel hadn't launched HSW for. Basically brings up the core count, adds the new, fixed TSX-NI instructions and brings DDR4 support to the 4+ socket server market. Thanks to the external memory modules (called buffers by Intel) being, well, external, these can also run DDR3 memory when fitted with DDR3 memory risers.
Skylake: another mobile-focused release, with the focus still on reducing power consumption. Aside from the socketed desktop and server platforms, all Skylake CPUs are now SoCs, with the PCH integrated into the CPU package, with the expectation of reduced power consumption and simpler board designs.
In conclusion: desktop performance improvements slowed down when Sandy Bridge launched because all the obvious improvements had been implemented by then. What remained was to improve power efficiency as far as it could go, so that's exactly where Intel went. The fact that they were able to extract 5-20% performance improvements (depending on which generational jump you look at) while also dropping power consumption across the board, AND improving power efficiency to the point where a full Broadwell core can be used in a fanless environment like a tablet, is simply a testament to the sheer excellence of Intel's engineering teams. On the server side, we've seen a steady, relentless increase in core counts every generation. Combined with the improvements in IPC, this means a modern 2699 v3 CPU is easily over 4 times as fast as the Dunnington core that brought Intel into the 8-socket market to compete with IBM POWER (now OpenPOWER) and Sun (now Oracle) SPARC. Funnily enough, Intel has made those machines cheaper as a direct result of providing serious competition (yes, Intel providing competition that lowers prices?! Who woulda thought!).
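And that "over 4 times as fast" holds up on a napkin, too. Core counts and clocks below are spec-sheet numbers; the per-clock (IPC) multiplier is my own rough assumption:

```python
# Crude throughput model: cores * clock * relative IPC. Ignores turbo,
# HyperThreading, memory bandwidth, etc. - napkin math only.

def relative_throughput(cores: int, ghz: float, ipc: float) -> float:
    return cores * ghz * ipc

dunnington = relative_throughput(cores=6, ghz=2.66, ipc=1.0)   # Xeon X7460
haswell_ep = relative_throughput(cores=18, ghz=2.3, ipc=1.7)   # E5-2699 v3; ~1.7x IPC is an assumption
print(f"~{haswell_ep / dunnington:.1f}x")  # ~4.4x
```

Even with a conservative IPC guess, the generational core-count grind alone gets you most of the way there.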
Now sure, I could be a whiny crybaby and whine about how Intel isn't doing anything, resting on their laurels, fucking the desktop market over and not improving performance. But I have to look at the big picture (mostly because I have to in order to choose which server and server CPU I need for my dual-socket server), and in the big picture, Intel has been continuously improving. As much as I'd love to see a socketed, overclockable variant of the Xeon-D (preferably with ECC memory support), I also completely understand why Intel hasn't introduced such a CPU yet: it's simply unneeded for the vast, vast majority of people, from the most basic Facebook poster all the way up to the dual-GPU gamers. For those who want even more, we've had the LGA1366, then LGA2011, then LGA2011-3 platforms, PLX-equipped motherboards (for up to quad-card setups) and dual-GPU graphics cards available to us. Sure, Intel could price everything lower, but I ask you, honestly: if you were in Intel's position, would you not charge more for higher-performing parts?
So sure, I am excited about next-gen, more for the high-end server chips than anything else, but Xeon-D and Skylake integrating the PCH into the CPU is also very exciting to me - for the first time ever we might have a completely passive x86 chip with good graphics! How is that not exciting?! How is it not exciting that we'll have x86 CPUs in our phones that we can drop into a dock and have the PHONE drive full Windows desktop apps? Or dual-boot between multiple OSes?! I am excited, and I'm really happy I am; it's just that my current excitement is over slightly different things than 12-year-old me would have been excited about.
Finally, a little note about Mantle, DX12 and Vulkan: see my comment above about how concurrency is hard. A direct consequence of that is that optimizing the API is easier than writing high-performance concurrent code. And you won't find your average game studio, shipping a new game every 2-3 years, investing anywhere near as much time into making their games scale well across threads as, say, the people working on PostgreSQL, MapReduce or Ceph - especially at the rate gamedevs are paid and the hours they work.
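To make "concurrency is hard" concrete: even incrementing a single shared counter from multiple threads needs explicit locking, or updates can be silently lost - and a game has thousands of mutable objects per frame with far messier access patterns. A minimal Python sketch:

```python
# Four threads hammering one shared counter. The lock is what makes the
# result deterministic; drop it and increments can be silently lost.
import threading

counter = 0
lock = threading.Lock()

def add(n: int) -> None:
    global counter
    for _ in range(n):
        with lock:  # try removing this and re-running a few times
            counter += 1

threads = [threading.Thread(target=add, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000 every time - but only because of the lock
```

And locks are the easy part; the hard part is doing this at scale without every thread queueing up behind one lock, which is exactly the kind of engineering time most game studios don't get to spend.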
EDITS: Corrected S940 to K8, as pointed out by newtekie1, various spelling corrections, clarifications and added HSW-EX (which I completely forgot about)