Tuesday, February 9th 2016
Rise of the Tomb Raider to Get DirectX 12 Eye Candy Soon?
Rise of the Tomb Raider could be among the first AAA games to take advantage of DirectX 12, with developer Crystal Dynamics planning a massive update that adds a new renderer and new content (VFX, geometry, textures). The latest version of the game features an ominously named "DX12.dll" library in its folder, and while it doesn't support DirectX 12 at the moment, a renderer selection has appeared in the game's launcher. DirectX 12 is currently only offered on Windows 10, with hardware support on NVIDIA "Kepler" and "Maxwell" GPUs and on AMD Graphics Core Next 1.1 and 1.2 GPUs.
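For the curious, a launcher can cheaply probe whether the OS and GPU can provide a DirectX 12 device before showing such a renderer option. Below is a minimal C++ sketch of that kind of check; the function name is mine, and this is illustrative, not code from the actual launcher:

    // Probe for DirectX 12 support without creating a device.
    // Requires Windows 10 and the Windows 10 SDK (link d3d12.lib).
    #include <d3d12.h>

    bool IsDirectX12Supported()
    {
        // With a null output pointer, the runtime only checks whether a
        // device could be created and returns S_FALSE on success.
        HRESULT hr = D3D12CreateDevice(
            nullptr,                    // default adapter
            D3D_FEATURE_LEVEL_11_0,     // minimum feature level to accept
            __uuidof(ID3D12Device),
            nullptr);                   // don't actually create the device
        return SUCCEEDED(hr);           // S_FALSE still counts as success
    }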
Source: TweakTown
39 Comments on Rise of the Tomb Raider to Get DirectX 12 Eye Candy Soon?
Too much BS going around about what supports what. Yeah, NV has to add more to the driver to get DX12 working right, and it adds overhead, but whatever, it will balance out... hopefully leaning towards the better side. Like when tessellation went wild, it balanced out enough that both camps had a fair chance. Back then it was NVIDIA holding a lead with tessellation performance, and now it's AMD with compute performance.
In a way they have both held each other back at times, for the sake of monopoly and the Windows ecosystem.
Rise of the Tomb Raider uses async compute to render breathtaking volumetric lighting on Xbox One
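In API terms, "async compute" in Direct3D 12 just means feeding work to a dedicated compute queue that the hardware can service alongside the graphics queue. A minimal C++ sketch of creating such a queue (illustrative only, not the game's actual code):

    #include <d3d12.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    // Create a compute-only queue; dispatches submitted here (e.g. a
    // volumetric-lighting pass) can overlap work on the graphics queue
    // on GPUs that schedule the two concurrently.
    ComPtr<ID3D12CommandQueue> CreateAsyncComputeQueue(ID3D12Device* device)
    {
        D3D12_COMMAND_QUEUE_DESC desc = {};
        desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
        desc.Priority = D3D12_COMMAND_QUEUE_PRIORITY_NORMAL;

        ComPtr<ID3D12CommandQueue> queue;
        device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
        return queue;
    }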
DX11 also promised better performance compared to DX9 and DX10, and it ended up being slower just because they crammed 500 trillion metric tons of unnecessary tessellation into games. And I think they'll pull exactly the same nonsense with DX12. All the advertised boost will be gone because they'll stuff in a bunch of useless details you need a magnifying glass to find, instead of just leaving things alone and letting performance improve significantly.
I don't understand their logic. Sure, fancier graphics sell a bit better with enthusiast gamers, but better overall performance sells well with average gamers as well as enthusiasts, because it means the game will probably run super fast even at 4K with max details, which counts for something these days. Not sure why they bet everything on enthusiasts and then cry in the same breath that PC gamers don't buy in big enough numbers. I wonder why. The latest TR games already look good; there's no need to bloat them with quasi-quality and wreck the performance entirely.
If game performance were crippled even at the lowest settings, this might be a valid argument, but complaining about max settings and demanding they be reduced makes no sense at all.
And I have a feeling they'll pull exactly the same nonsense with the DX12 gains. Why am I ranting about it? Because games still look like shite and run like they used to, meaning nothing really improves: not performance, not visuals. Priorities. Something console devs actually understand, but on PC, well, fuck that, gamers can just buy an extra 32GB of RAM and quad-SLI cards because the devs are lazy and can't be bothered to do things properly. That's why.
Also, your logic doesn't hold water. So I should lower my game details, making the game actually look worse, just to claw back performance, because someone decided to spend 300 billion polygons on some absurd rock on the side of the road? I've seen way too many idiotically designed games to just forget about it. We could make leaps in performance while raising the overall level of detail in every game thanks to the DX12 performance boost, but instead they'll just waste it all on pointless nonsense.
Nonetheless, tessellation combined with displacement maps is a very interesting feature which adds a lot of realism to a rendered scene. All I read out of your rant is that you want control over the tessellation iterations, and probably a way to reduce tessellation when the surface being tessellated is farther away (some kind of LoD; see the sketch after this post). I'm pretty sure the latter is already implemented in most games, if not all (although maybe not configurable).
If you don't like tessellation at all, it can be switched off in the settings dialog. If you want tessellation enabled but need to cap how aggressive it is, that can be configured in the driver's per-application profile for said game; AMD's driver exposes this as 'Maximum Tessellation Level'.
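For anyone unfamiliar with the distance-based LoD mentioned above: the idea is simply to fade the tessellation factor down as a surface moves away from the camera. A rough C++ sketch of the math (in a real engine this runs in the hull/patch-constant shader; the names and values are mine):

    #include <algorithm>

    // Map camera distance to a tessellation factor: full detail up close,
    // falling off to no subdivision (factor 1) at maxDistance and beyond.
    float TessellationFactor(float distanceToCamera,
                             float minDistance,  // full detail inside this
                             float maxDistance,  // coarsest beyond this
                             float maxFactor)    // e.g. 16 or 64
    {
        float t = (distanceToCamera - minDistance)
                / (maxDistance - minDistance);
        t = std::clamp(t, 0.0f, 1.0f);
        return std::max(1.0f, maxFactor * (1.0f - t));
    }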
Regarding Tomb Raider performance: on my system, the 2013 Tomb Raider looked worse than the 2016 game (obviously) and also performed worse FPS-wise (I game at 1600p, by the way, which is about twice the 1080p pixel count).
Back then I had an AMD HD 6970; today I own a pair of NVIDIA 980 Tis.
But even today with the SLI setup, both games stay above 60 FPS the whole time with everything turned up and on, so there's no reason to complain about sub-par performance here.
Which games or benchmarks (that aren't Crysis 2, HairWorks-enabled titles, or unoptimized Ubisoft ports) have so much tessellation (as if that's the only new feature devs use) that performance is worse than DX9 or even OpenGL, with no option to turn it off or override it in the driver?
Star Trek Online's DX11 update, with identical visuals, brought so much extra performance that they made PR out of it.
Even back in the DX10 days, when I had a 4870 X2, Crysis Warhead was smoother in DX10 than DX9 even if the FPS counter read a few frames lower. I have never seen a regression with a new DX version unless it was brand-new buggy drivers or a brand-new OS. Plus, NVIDIA has proven multithreaded gains (sketched below).
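For reference, those multithreaded gains come from Direct3D 11 deferred contexts: worker threads record command lists that the main thread replays on the immediate context. A bare-bones C++ sketch (error handling omitted, not from any particular game):

    #include <d3d11.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    void RecordAndSubmit(ID3D11Device* device,
                         ID3D11DeviceContext* immediate)
    {
        // One deferred context per worker thread.
        ComPtr<ID3D11DeviceContext> deferred;
        device->CreateDeferredContext(0, &deferred);

        // ... the worker thread records its draw calls on 'deferred' ...

        // Close the recording into a command list, then replay it on the
        // immediate context from the main thread.
        ComPtr<ID3D11CommandList> commandList;
        deferred->FinishCommandList(FALSE, &commandList);
        immediate->ExecuteCommandList(commandList.Get(), FALSE);
    }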
The setting I see hurt FPS the most is HBAO/HDAO, so I switch to SSAO or turn it off instead, given that my 570M and 660 aren't powerful GPUs. Don't you think I would be among the first to notice problems with DX11? How can I trust your word, coming from high-end specs, without real examples?
By the way, Crysis 2 runs great for how it looks, and you are free to turn off tessellation or tweak the cvars (see the example below) rather than switching to DX9.
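As a pointer for anyone who wants to try that, old CryEngine 3 tweak guides usually reined tessellation in with console variables along these lines in an autoexec.cfg. The names and values below are from memory of those guides, so treat them as illustrative and verify them against your own game version:

    r_TessellationTriangleSize = 16
    e_TessellationMaxDistance = 10

A larger minimum triangle size and a shorter maximum distance both cut the amount of tessellation without disabling the effect outright.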
Also, Breit's logic is fine if you're only turning down newly added details that were already disabled in the previous DX version of the game; those are optional extras. If it's running DX12, it IS DX12; what else can they call it? 'Getting DX12' is different from 'being DX12'.
Your dream should be 'built from the ground up for DX12', i.e. exclusive.
It should be in the release notes too, the same way there was a golden DX11/BF3 driver in 2012.
Edit: Q9550 at 3.6 GHz, 4GB RAM... Are you also saying a single 4870 ran better than CrossFire, or simply that DX9 ran much more acceptably?