it was actually free with Nvidia cards, but it's not as bad as other games, I guess. Still, it's better to review new cards with newer, more relevant games, and for overclocking it's probably easier to measure the performance gain with synthetics, which takes less time than testing with a game.
dx12 is already relevant if you're buying a GPU now.
people should be careful about this driver overhead business. dx11 might just plain suck.
- DX12 only becomes relevant when games start using it, especially its higher feature levels (actual DX12 features such as tiled resources), which won't happen before the next GPU upgrade for any enthusiast, i.e. 2017 or later
- Look at the state of native DX11/DX9 adoption in games and you'll see why this is true
- W10 is just about as relevant as DX12 is at this time
- Synthetics can be cheated; games are far less susceptible to that. Synthetics don't reflect real-world gains either: Valley, for example, rewards higher VRAM clocks more than 3DMark does, and 3DMark scores themselves shift as the benchmarks get heavier over time (Fire Strike > Fire Strike Extreme, etc.). Synthetics are just about useless for testing an overclock; they're only useful for comparing different cards performance-wise, and only marginally so.
- Battlefield 3 shows very linear gains from overclocking (core, memory, or both), is never CPU-bound, and is bug-free and quick to run, making it a perfect indicator of overclocking gains, much more so than a synthetic non-real-gaming scenario (see the scaling sketch after this list). Newer games are no advantage for review benchmarks; rather, they're prone to changes from drivers and even game updates, all things BF3 no longer suffers from.
- AMD is not going to magically gain x% of free performance from DX12 or WDDM 2.0. Both offer ways to write a more efficient driver and engineer the hardware accordingly, but it is utter stupidity to expect AMD to gain meaningful performance where Nvidia would not. The bottlenecks in current games are *not* related to either DX12 or WDDM, because developers simply avoided those DX11 bottlenecks altogether; games that hit them would run like shit on any kind of system under DX11 anyway. The current bottlenecks in gaming are CPU-load-related and VRAM-related. The CPU bottleneck is largely about draw calls, but not entirely: CPU bottlenecks also come from a lack of truly multithreaded engines, e.g. Starcraft 2, which runs on one core (the toy model after this list illustrates the draw-call side). The VRAM bottleneck is being tackled well in Maxwell and much less efficiently in AMD's implementation; AMD is stuck with HBM or an extremely wide bus and, so far, cannot gain similar efficiency from an optimized compression technology the way Nvidia does with Maxwell.
- Fury is a handicapped, not very well balanced card: extremely fast memory paired with a maxed-out (die-space-limited) core that is bogged down by so-so drivers, which are in turn capped by CPU overhead. A crapload of CUs with an astounding shortage of ROPs (see the quick numbers at the end). AMD had to compromise because of limited die space, and that compromise plus the brand-new HBM could never have produced a well-rounded, efficient card. It doesn't, and it never will. AMD's gain from HBM is diminished by the lack of ROPs/die space for the core; they'll need 14nm to take advantage of it. Nvidia seems to have timed HBM/Pascal much better, which, once again, is no surprise.
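To make the BF3 scaling point concrete, here's a minimal sketch of the check I mean. All the clocks and FPS figures are made-up placeholders, not measurements; plug in your own runs:

```python
# Hypothetical numbers for illustration only -- substitute your own results.
# In a GPU-bound game like BF3 you'd expect the FPS gain to track the
# core clock gain almost 1:1 ("linear scaling").

stock_clock_mhz = 1000   # assumed stock core clock
oc_clock_mhz = 1100      # assumed overclocked core clock

stock_fps = 92.0         # assumed benchmark run at stock
oc_fps = 100.5           # assumed benchmark run overclocked

clock_gain = oc_clock_mhz / stock_clock_mhz - 1.0
fps_gain = oc_fps / stock_fps - 1.0

print(f"core clock gain: {clock_gain:.1%}")
print(f"fps gain:        {fps_gain:.1%}")
print(f"scaling ratio:   {fps_gain / clock_gain:.2f}  (1.00 = perfectly linear)")
```

If the ratio drops well below 1.00, you're hitting some other limit (CPU, memory, throttling) and the run tells you nothing about the overclock.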
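And here's the draw-call toy model. The per-call CPU costs and the GPU frame time are pure assumptions for illustration, not measured driver numbers; the point is only the shape of the curve:

```python
# Toy model of a draw-call CPU bottleneck. All cost numbers are assumptions;
# real per-call overhead varies by driver, API and engine.

def frame_fps(draw_calls, cpu_us_per_call, gpu_frame_ms):
    """Frame rate is limited by whichever side finishes last:
    the CPU submitting draw calls, or the GPU rendering the frame."""
    cpu_frame_ms = draw_calls * cpu_us_per_call / 1000.0
    return 1000.0 / max(cpu_frame_ms, gpu_frame_ms)

gpu_frame_ms = 10.0  # assume the GPU alone could sustain 100 fps

for calls in (2_000, 5_000, 10_000, 20_000):
    heavy = frame_fps(calls, cpu_us_per_call=2.0, gpu_frame_ms=gpu_frame_ms)
    light = frame_fps(calls, cpu_us_per_call=0.5, gpu_frame_ms=gpu_frame_ms)
    print(f"{calls:>6} draws: heavy driver {heavy:5.1f} fps, light driver {light:5.1f} fps")
```

Below a few thousand draws both cases sit at the GPU limit, which is exactly why current games (which keep draw calls low on purpose) won't see free gains from a lower-overhead API.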
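Finally, the quick Fury numbers. ROP counts, clocks and raw bandwidth are the public specs; the Maxwell compression factor is an assumption on my part, roughly in the ballpark of what Nvidia has claimed for Maxwell's delta color compression:

```python
# Back-of-the-envelope fillrate vs. bandwidth comparison behind the
# "lots of CUs, too few ROPs" point. Compression factor is assumed.

cards = {
    # name: (ROPs, core clock GHz, raw bandwidth GB/s, assumed compression factor)
    "Fury X":     (64, 1.05, 512.0, 1.0),
    "GTX 980 Ti": (96, 1.00, 336.5, 1.25),
}

for name, (rops, clock_ghz, bw, comp) in cards.items():
    fillrate = rops * clock_ghz  # Gpixels/s
    eff_bw = bw * comp           # GB/s after assumed compression
    print(f"{name}: {fillrate:.1f} Gpix/s fill, {eff_bw:.0f} GB/s effective bandwidth")
```

Fury X ends up with far more raw bandwidth than it has fillrate to feed, which is the imbalance I'm talking about.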
My two $