Tuesday, February 9th 2016
Rise of the Tomb Raider to Get DirectX 12 Eye Candy Soon?
Rise of the Tomb Raider could be among the first AAA games to take advantage of DirectX 12, with developer Crystal Dynamics planning a massive update that adds a new renderer and new content (VFX, geometry, textures). The latest version of the game features an ominously named "DX12.dll" library in its folder, and while the game doesn't support DirectX 12 at the moment, a renderer selection has appeared in its launcher. DirectX 12 is currently only offered on Windows 10, with hardware support on NVIDIA "Kepler" and "Maxwell" GPUs, and on AMD Graphics Core Next 1.1 and 1.2 GPUs.
Source: TweakTown
To me, this sounds like a recipe for better performance at the same visual fidelity you get now (at whatever level people have their game set).
EDIT: Never mind. I'd been misinterpreting the direction of the calls, and reading "better use of the CPU cores" as more involvement by the entire CPU, not just one or two cores. :)
Heck, this means I have even less incentive than I already had to upgrade my underworked (unless I'm playing Total War games) CPU.
no one does car analogies better than amd haha
"Woohoo, we gained 30 FPS with DX12, let's spend 45 FPS on pointless garbage props in the scene." Because reasons. And so that we can brag about how epic our engine is because it runs like shit.
I'm saying this because tessellation had the SAME purpose, and they fucked it up. Tessellation is meant to be a seamless replacement for multi-model LOD. Remember how, in the past, car and enemy models shifted from a low-poly model to a high-poly model as you approached them? That was the point of tessellation: a performance optimization, just like LOD, that achieves good performance without compromising visuals, because there are no sudden jumps between models; polygons gradually increase or decrease in number.

And what have devs done? "Look, 3 million polygons on this gas mask," while 2 meters away a bunch of railings and other elements are all blocky, because no one bothered to make them high-poly in the first place so LOD could actually take the data away. You can always create high-detail models and let tessellation decrease the polygon count. You can't do it in the reverse order, expecting a poly multiplier to predict how things should look when you increase the polygon count. ATi's TruForm did that, and we all know how things looked when you enabled it in an otherwise unsupported game: everything became like an inflated balloon.

Or the tessellation engine pumps out polygons on a surface that doesn't really yield any visual gain. Whether a flat surface is made of 4 polygons or 3 million, it'll still look flat in both cases, but performance will sink like crazy. After all, polygon count is still very much a factor, even though pixels are the bigger issue these days with all the shader effects. Burdening a graphics card with excessive polygons doesn't help when a game scene is moving through the rendering pipeline.
you are COMPLETELY describing crysis 2 & all its negative press about tessellating flat walls. it doesn't describe tomb raider/saints row/homefront/battlefield/bioshock/cod/civilization/dirt/deus ex/f1/far cry/gta5/grid/max payne/medal of honor/metro/sleeping dogs/witcher3 (with hairworks specifically disabled)/hawx2/the list keeps going...
once again, you can disable tessellation or cap it (extremely easy in amd drivers, for example), and the result is no more performance drop from your tessellation boogeyman!
this premise is insane... every time there is more gpu power, devs add more graphical effects and things get more demanding, completely unrelated to what API is being used, since APIs don't come out every year
want to know why i had a 4870x2 back in 2008? there was no choice! you could not get 1080p 60fps in the latest games on a single gpu from either side... nowadays a single gpu gets you that at lower power while looking great
the pace of performance gains has been greater than the added load of new effects you're free to disable, not to mention that most devs don't go nuts since they still have to scale down to consoles, so huffing & puffing isn't going to change reality
And it does matter what API is used. DX10 and DX11 have things that DX9 doesn't, particularly behind-the-scenes features that affect performance more than visuals. The premise of tessellation is that you feed high-detail models into the game, and the engine then cuts their detail down; that reduction can be based on distance, user settings, or even framerate. Instead, the large majority of games I know (Metro and AvP being two I'm sure of) do the opposite: they buff up the poly count on models that don't actually benefit much from it. In such cases it's better to be more aggressive with polygon culling than to enhance visuals, since low-end users benefit more from it. For example, in Unreal Tournament-style games, things happen so fast you don't have time to admire nicely curved surfaces; having extra performance from a lower poly count where it doesn't matter is a greater benefit for everyone.