Tuesday, February 9th 2016

Rise of the Tomb Raider to Get DirectX 12 Eye Candy Soon?

Rise of the Tomb Raider could be among the first AAA games to take advantage of DirectX 12, with developer Crystal Dynamics planning a massive update that adds a new renderer and new content (VFX, geometry, textures). The latest version of the game features an ominously named "DX12.dll" library in its folder, and while it doesn't support DirectX 12 at the moment, a renderer selection has appeared in the game's launcher. DirectX 12 is currently only offered on Windows 10, with hardware support on NVIDIA "Kepler" and "Maxwell" GPUs, and on AMD Graphics Core Next 1.1 and 1.2 GPUs.
Source: TweakTown
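(For the curious: at the API level, checking whether DirectX 12 is even available on a system comes down to a single capability probe. The snippet below is a generic C++ illustration, not anything taken from the game or its launcher.)

// dx12_probe.cpp -- generic illustration only, not shipped with the game.
// A real launcher would LoadLibrary("d3d12.dll") first so it still starts on
// older Windows versions; linking d3d12.lib directly is fine on Windows 10.
#include <d3d12.h>
#include <cstdio>

int main()
{
    // Passing a null output pointer asks the runtime whether a D3D12 device
    // *could* be created on the default adapter, without creating one.
    HRESULT hr = D3D12CreateDevice(nullptr,                 // default adapter
                                   D3D_FEATURE_LEVEL_11_0,  // minimum level D3D12 accepts
                                   __uuidof(ID3D12Device),
                                   nullptr);                // capability check only
    std::printf(SUCCEEDED(hr) ? "DirectX 12 renderer available.\n"
                              : "DirectX 12 not available, falling back to DX11.\n");
    return 0;
}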
Add your own comment

39 Comments on Rise of the Tomb Raider to Get DirectX 12 Eye Candy Soon?

#1
xfia
thanks..
too much BS around about what supports what. yeah, NV has to add more to the driver to get DX12 working right and it adds overhead, but whatever. it will balance out.. hopefully leaning towards the better side. like when tessellation went wild, it balanced out enough so both camps had a fair chance. back then it was NVIDIA holding a lead with tessellation performance, and now it's AMD with compute performance.
in a way they have both held each other back at times.. for the sake of monopoly and the Windows ecosystem
Posted on Reply
#2
Solaris17
Super Dainty Moderator
I can't wait until this becomes a reality. I am really curious (if this comes to pass) as to what the performance numbers will be after the change. I would also love it if, should we get the change, TPU spearheaded a DX11 vs. DX12 comparison in this title.
Posted on Reply
#3
the54thvoid
Super Intoxicated Moderator
Solaris17: I can't wait until this becomes a reality. I am really curious (if this comes to pass) as to what the performance numbers will be after the change. I would also love it if, should we get the change, TPU spearheaded a DX11 vs. DX12 comparison in this title.
Given how good the game looks, any change in performance between DX11 and DX12 would be an interesting debate.
Posted on Reply
#4
john_
DirectX 12 is currently only offered on Windows 10, with hardware support on NVIDIA "Kepler" and "Maxwell" GPUs, and on AMD Graphics Core Next 1.1 and 1.2 GPUs.
DX12 has also been supported on Fermi for a couple of months now. And GCN 1.0 has had DX12 support since the day Windows 10 was officially launched.
Posted on Reply
#5
Xzibit
This probably just means the console extensions are being re-activated, since it was already DX12 and was downgraded to DX11 for the PC.

Rise of the Tomb Raider uses async compute to render breathtaking volumetric lighting on Xbox One
During its presentation at SIGGRAPH 2015, Eidos Montreal revealed some interesting technical details regarding its forthcoming action adventure game, Rise of the Tomb Raider. The developer gave an overview of the advanced rendering techniques that are being used in Lara Croft’s latest outing.

Of all the rendering techniques used in the game, the most fascinating is its use of asynchronous compute for the generation of advanced volumetric lights. For this purpose, the developer has employed a resolution-agnostic voxel method, which allows volumetric lights to be rendered using asynchronous compute after the rendering of shadows, with correctly handled transparency composition.

The developer has also used its own in-house SSAO technique dubbed ‘Broad Temporal Ambient Obscurance’ (BTAO). The technique is inspired by SAO (Scalable Ambient Obscurance) and is claimed to be vastly superior to the industry-leading ambient occlusion technique, HBAO, in terms of both quality and performance. According to the developer, this new ambient occlusion technique is temporally stable and provides both near and far occlusion.

Like The Order: 1886 on the PS4, Rise of the Tomb Raider will also make use of the Sample Distribution Shadow Maps (SDSM) algorithm in order to allow efficient shadow map rendering. By adjusting itself to scene and camera position, the technique will enhance the overall shadow quality in the game.

Lastly, the game features procedural snow deformation that leverages compute shaders and hardware tessellation for fast and detailed results. Due to the agnostic nature of this approach, the developer believes that this technique will have many other applications in the future.
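On PC Direct3D 12, the asynchronous compute described above maps to a dedicated compute queue running alongside the graphics queue. A rough, purely illustrative sketch follows; this is not Crystal Dynamics' code, and the function and parameter names (RunVolumetricsAsync, shadowsDoneFence, etc.) are made up for the example.

// async_volumetrics_sketch.cpp -- purely illustrative, NOT Crystal Dynamics' code.
// Shows the generic D3D12 pattern of running compute work (e.g. volumetric
// lighting) on a dedicated compute queue, overlapping the graphics queue.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// All parameters are assumed to exist elsewhere in the (hypothetical) engine.
void RunVolumetricsAsync(ID3D12Device* device,
                         ID3D12GraphicsCommandList* volumetricsCmdList, // pre-recorded dispatches, already Close()d
                         ID3D12Fence* shadowsDoneFence,                 // signalled by the graphics queue after shadow maps
                         UINT64 shadowsDoneValue)
{
    // 1) A dedicated compute queue: work submitted here may overlap with
    //    whatever the direct (graphics) queue is doing.
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));

    // 2) GPU-side wait: don't start the volumetric pass until the shadow maps
    //    it samples are finished. The CPU is not blocked here.
    computeQueue->Wait(shadowsDoneFence, shadowsDoneValue);

    // 3) Kick off the volumetric lighting dispatches; they now run
    //    concurrently with the rest of the frame on the graphics queue.
    ID3D12CommandList* lists[] = { volumetricsCmdList };
    computeQueue->ExecuteCommandLists(1, lists);

    // A second fence (omitted) would tell the graphics queue when the
    // volumetrics are ready to be composited with transparency.
}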
Posted on Reply
#6
bug
Or, DX12 may be a feature that got the axe and the DLL is a leftover. We'll see.
Posted on Reply
#7
RejZoR
I wonder if DX12 will bring better performance, or if they'll fuck it up just like with EVERY single bloody "new and faster DX" version of a game.
DX11 also promised better performance compared to DX9 and 10, and it ended up being slower just because they crammed 500 trillion metric tons of unnecessary tessellation into games. And I think they'll do exactly the same nonsense with DX12. All the advertised boost will be gone because they'll stuff in a bunch of useless detail you have to look for with a magnifying glass, instead of just leaving things the way they are and boosting performance significantly.

I don't understand their logic. Sure, fancier graphics sell a bit better with enthusiast gamers, but better overall performance sells well with average gamers as well as enthusiasts, because it means the game will probably run super fast even at 4K with max details, which counts for something these days. Not sure why they bet everything on enthusiasts and then cry in the same breath that PC gamers don't buy in big enough numbers. I wonder why. The latest TR games already look good; there's no need to bloat them with quasi-quality and derp the performance entirely.
Posted on Reply
#8
Breit
RejZoR: I wonder if DX12 will bring better performance, or if they'll fuck it up just like with EVERY single bloody "new and faster DX" version of a game.
DX11 also promised better performance compared to DX9 and 10, and it ended up being slower just because they crammed 500 trillion metric tons of unnecessary tessellation into games. And I think they'll do exactly the same nonsense with DX12. All the advertised boost will be gone because they'll stuff in a bunch of useless detail you have to look for with a magnifying glass, instead of just leaving things the way they are and boosting performance significantly.

I don't understand their logic. Sure, fancier graphics sell a bit better with enthusiast gamers, but better overall performance sells well with average gamers as well as enthusiasts, because it means the game will probably run super fast even at 4K with max details, which counts for something these days. Not sure why they bet everything on enthusiasts and then cry in the same breath that PC gamers don't buy in big enough numbers. I wonder why. The latest TR games already look good; there's no need to bloat them with quasi-quality and derp the performance entirely.
I don't quite agree. If you want your games to run faster, tone down the details! What's the point in saying my game runs super fast with full details while at the same time demanding that developers not pack too much detail into these games so that performance stays up? You know that cranking up details is totally optional, right? :rolleyes:

I mean, if game performance is crippled even at absolutely low settings, then this may be a valid argument, but talking about max settings and demanding they should be reduced makes no sense at all.
Posted on Reply
#9
huguberhart
Is that screenshot from right before the end of the game?
Posted on Reply
#10
RejZoR
Breit: I don't quite agree. If you want your games to run faster, tone down the details! What's the point in saying my game runs super fast with full details while at the same time demanding that developers not pack too much detail into these games so that performance stays up? You know that cranking up details is totally optional, right? :rolleyes:

I mean, if game performance is crippled even at absolutely low settings, then this may be a valid argument, but talking about max settings and demanding they should be reduced makes no sense at all.
You don't understand what they've been doing with tessellation. 300 billion polygons, all wasted on some pointless shit you can't even see, while railings 2 meters away are all blocky and made of 15 polygons. It's stupid; not only does it make the game run like shit even on top-end rigs, it totally defeats the point of making things faster if you then waste all the gains on pointless stupid nonsense.

And I have a feeling they'll do exactly the same nonsense with the DX12 gains. The reason I'm ranting about it? Games still look like shite and run like they used to, meaning you don't really improve anything: not performance and not visuals. Priorities. Something console devs actually understand, but on PC, well, fuck that, gamers can just buy an extra 32GB of RAM and quad-SLI cards because we're lazy and can't be bothered doing shit properly. That's why.

Also, your logic doesn't float. So I'll be decreasing my game details, making the game actually look worse, to achieve performance just because someone decided to spend 300 billion polygons on some retarded rock on the side of the road. I've seen way too many badly designed games to just forget about it. We could make leaps in performance while increasing the overall level of detail of all games thanks to the performance boost of DX12, but instead they'll just waste it all on pointless nonsense.
Posted on Reply
#11
xfia
RejZoR: You don't understand what they've been doing with tessellation. 300 billion polygons, all wasted on some pointless shit you can't even see, while railings 2 meters away are all blocky and made of 15 polygons. It's stupid; not only does it make the game run like shit even on top-end rigs, it totally defeats the point of making things faster if you then waste all the gains on pointless stupid nonsense.

And I have a feeling they'll do exactly the same nonsense with the DX12 gains. The reason I'm ranting about it? Games still look like shite and run like they used to, meaning you don't really improve anything: not performance and not visuals. Priorities. Something console devs actually understand, but on PC, well, fuck that, gamers can just buy an extra 32GB of RAM and quad-SLI cards because we're lazy and can't be bothered doing shit properly. That's why.

Also, your logic doesn't float. So I'll be decreasing my game details, making the game actually look worse, to achieve performance just because someone decided to spend 300 billion polygons on some retarded rock on the side of the road. I've seen way too many badly designed games to just forget about it. We could make leaps in performance while increasing the overall level of detail of all games thanks to the performance boost of DX12, but instead they'll just waste it all on pointless nonsense.
hopefully there will be a nice surprise for us
Posted on Reply
#12
GhostRyder
Sounds awesome, I want to see what it changes!
Posted on Reply
#13
PP Mguire
DX12 comes with a lot of advancements that previous iterations can't compare to. Overhead and draw calls aren't really something a lazy dev can fuck up now.
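Roughly, the draw-call/overhead gains come from DX12 letting the engine record command lists on many threads and validate state up front. A generic sketch (no particular engine; RecordSceneChunk is a made-up placeholder):

// parallel_recording_sketch.cpp -- generic illustration of where the DX12
// draw-call gains come from; not any particular engine's code.
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>
using Microsoft::WRL::ComPtr;

// Hypothetical per-thread work: bind state and issue the draws for one slice
// of the scene. Cheap in D3D12 because validation happened at PSO creation.
void RecordSceneChunk(ID3D12GraphicsCommandList* cl, int /*chunk*/)
{
    // ... SetPipelineState / SetGraphicsRootSignature / DrawIndexedInstanced ...
    cl->Close();
}

void SubmitFrame(ID3D12Device* device, ID3D12CommandQueue* queue, int workerCount)
{
    std::vector<ComPtr<ID3D12CommandAllocator>>    allocs(workerCount);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(workerCount);
    std::vector<std::thread>                       workers;

    for (int i = 0; i < workerCount; ++i)
    {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocs[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocs[i].Get(), nullptr, IID_PPV_ARGS(&lists[i]));
        // Each worker thread records its own command list independently.
        workers.emplace_back(RecordSceneChunk, lists[i].Get(), i);
    }
    for (auto& t : workers) t.join();

    // One cheap submission of everything that was recorded in parallel.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}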
Posted on Reply
#14
Breit
RejZoR: You don't understand what they've been doing with tessellation. 300 billion polygons, all wasted on some pointless shit you can't even see, while railings 2 meters away are all blocky and made of 15 polygons. It's stupid; not only does it make the game run like shit even on top-end rigs, it totally defeats the point of making things faster if you then waste all the gains on pointless stupid nonsense.

And I have a feeling they'll do exactly the same nonsense with the DX12 gains. The reason I'm ranting about it? Games still look like shite and run like they used to, meaning you don't really improve anything: not performance and not visuals. Priorities. Something console devs actually understand, but on PC, well, fuck that, gamers can just buy an extra 32GB of RAM and quad-SLI cards because we're lazy and can't be bothered doing shit properly. That's why.

Also, your logic doesn't float. So I'll be decreasing my game details, making the game actually look worse, to achieve performance just because someone decided to spend 300 billion polygons on some retarded rock on the side of the road. I've seen way too many badly designed games to just forget about it. We could make leaps in performance while increasing the overall level of detail of all games thanks to the performance boost of DX12, but instead they'll just waste it all on pointless nonsense.
300 billion is probably a bit much. ;)
Nonetheless, tessellation in combination with displacement maps is a very interesting feature which adds a lot of realism to a rendered scene. All I read out of your rant is that you want control over the tessellation iterations and probably a way to reduce tessellation when the surface to be tessellated is farther away (some kind of LoD). I'm pretty sure the latter is already implemented in most games, if not all (although maybe not configurable).
If you don't like tessellation at all, it can be switched off in the settings dialog. If you want tessellation enabled but need to adjust the amount, that can be configured (at least with NVIDIA) in the application profile for said game under 'Maximum Tessellation Level'.
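For illustration, the distance-based LoD idea boils down to something like the little helper below; this is a generic sketch, not code from any particular game or driver.

// tess_lod_sketch.cpp -- generic illustration of distance-based tessellation LoD;
// not code from any particular game or driver.
#include <algorithm>

float TessellationFactor(float distanceToCamera,   // metres from camera to the patch
                         float fullDetailDistance, // e.g. 5 m: full tessellation inside this range
                         float maxFactor)          // e.g. 16 or 64, or a driver-imposed cap
{
    // Scale the factor down linearly with distance, never below 1 (no extra
    // subdivision) and never above the configured maximum.
    float f = maxFactor * fullDetailDistance / std::max(distanceToCamera, fullDetailDistance);
    return std::clamp(f, 1.0f, maxFactor);
}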

Regarding the performance of Tomb Raider: on my system, the 2013 Tomb Raider looked worse than the 2016 version (obviously) and also performed worse FPS-wise (I game at 1600p btw, which is about 2x the 1080p pixel count).
Back then I had AMD HD6970 hardware and today I own a pair of NVIDIA 980Ti's.
But even today with the SLI setup, both games stay above 60 FPS all the time with everything turned up and on. So no reason to complain about sub-par performance here.
Posted on Reply
#15
xfia
Breit: 300 billion is probably a bit much. ;)
Nonetheless, tessellation in combination with displacement maps is a very interesting feature which adds a lot of realism to a rendered scene. All I read out of your rant is that you want control over the tessellation iterations and probably a way to reduce tessellation when the surface to be tessellated is farther away (some kind of LoD). I'm pretty sure the latter is already implemented in most games, if not all (although maybe not configurable).
If you don't like tessellation at all, it can be switched off in the settings dialog. If you want tessellation enabled but need to adjust the amount, that can be configured (at least with NVIDIA) in the application profile for said game under 'Maximum Tessellation Level'.

Regarding the performance of Tomb Raider: on my system, the 2013 Tomb Raider looked worse than the 2016 version (obviously) and also performed worse FPS-wise (I game at 1600p btw, which is about 2x the 1080p pixel count).
Back then I had AMD HD6970 hardware and today I own a pair of NVIDIA 980Ti's.
But even today with the SLI setup, both games stay above 60 FPS all the time with everything turned up and on. So no reason to complain about sub-par performance here.
AMD has all the tessellation settings too, and even AMD-optimized titles have wild tessellation that needs to be tamed.
Posted on Reply
#16
Brother Drake
I wish developers would stop saying their game is DX12 when they just paste on one or two minor features. If a game is truly DX12, it has to be designed that way from the ground up. To me the most important feature of DX12 is the ability to use multiple non-identical GPUs. I especially like that DX12 can utilize the iGPU to do post-processing while the graphics card(s) do the rest of the work. It's estimated that this will provide at least a 20% performance boost without costing anything at all. Free? I like it! Also, I can keep using my Maxwell GPU when I upgrade to Pascal. No more SLI/Crossfire bridges.
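For what it's worth, the "mixed GPUs" part starts with the application enumerating every adapter itself. The sketch below is purely illustrative (not from any shipping engine); how the iGPU and the discrete card then split the work is entirely up to the developer.

// multi_adapter_sketch.cpp -- purely illustrative; not from any shipping engine.
// DX12 "explicit multi-adapter" starts with the app enumerating every GPU
// itself (including the iGPU) instead of relying on SLI/CrossFire profiles.
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>
using Microsoft::WRL::ComPtr;

std::vector<ComPtr<ID3D12Device>> CreateDeviceOnEveryGpu()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc = {};
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip the WARP software rasterizer

        // Each physical GPU gets its own device; the engine decides how to
        // split work, e.g. iGPU for post-processing, discrete card for the rest.
        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device);
    }
    return devices;
}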
Posted on Reply
#17
RejZoR
If they offered two controllable settings, like "Quality you have now with an awesome DX12 framerate boost" and "Use all settings in an unnecessarily stupendous way", I'd be fine with it. But my hunch says they'll go the stupendous way only...
Posted on Reply
#18
Ferrum Master
Steam just got an update and it mentions DX12 support.
Posted on Reply
#19
Solaris17
Super Dainty Moderator
Ferrum Master: Steam just got an update and it mentions DX12 support.
Thanks man
Posted on Reply
#20
xfia
Brother Drake: I wish developers would stop saying their game is DX12 when they just paste on one or two minor features. If a game is truly DX12, it has to be designed that way from the ground up.
nah, not really. DX12 is an evolution going back to the DX9 days... there are plenty of DX11 games that can be played in DX9 mode, and DX12 is DX11 with draw calls off the chart, so there is no more wasted GPU power. the feature you say is most important to you will most likely put you in a minority, even though it is really cool, since only a small percentage even use more than one GPU. getting AMD and NV to play ball with each other to make it happen is a slim chance, since it would mostly be slim profit, if any at all.
Posted on Reply
#21
kn00tcn
RejZoR: blah blah blah crysis 2
what a load of CRAP, every single one? i'm not sorry, you're spreading FUD

which games or benchmarks (that aren't crysis 2 or hairworks-enabled... or unoptimized ubisoft) have so much 'tessellation' (as if that's the only new feature devs use) causing worse than dx9 or even ogl performance, with no option to turn off or override in the driver?

star trek mmo's dx11 update with identical visuals brought so much performance that they had to make PR about it

even in the dx10 days, when i had a 4870x2, crysis warhead was smoother in dx10 than dx9 even if the fps counter was a few numbers lower. i have never seen a regression with a new dx version unless it's brand-new buggy drivers or OS... plus, nvidia has proven multithreaded gains

the main detail i see that hurts fps the most is HB/HDAO, so i switch to SSAO or off instead, given that my 570m or 660 aren't powerful gpus... don't you think i would be one of the first to notice problems with dx11? how can i trust your word with your high-end specs without real examples

by the way, crysis 2 runs great for how it looks, you are free to turn off tessellation or tweak the cvars rather than switching to dx9

also breit's logic is fine if you're only turning down the newly added details that are already disabled in the previous dx version of the game
Brother Drake: I wish developers would stop saying their game is DX12 when they just paste on one or two minor features. If a game is truly DX12, it has to be designed that way from the ground up. To me the most important feature of DX12 is the ability to use multiple non-identical GPUs. I especially like that DX12 can utilize the iGPU to do post-processing while the graphics card(s) do the rest of the work. It's estimated that this will provide at least a 20% performance boost without costing anything at all. Free? I like it! Also, I can keep using my Maxwell GPU when I upgrade to Pascal. No more SLI/Crossfire bridges.
but that's an optional feature... if it's running dx12 it IS dx12, what else can they call it?? 'getting dx12' is different from 'is dx12'

your dream should be 'built from the ground up for dx12', aka exclusive
Posted on Reply
#22
PP Mguire
kn00tcn: what a load of CRAP, every single one? i'm not sorry, you're spreading FUD

which games or benchmarks (that aren't crysis 2 or hairworks-enabled... or unoptimized ubisoft) have so much 'tessellation' (as if that's the only new feature devs use) causing worse than dx9 or even ogl performance, with no option to turn off or override in the driver?

star trek mmo's dx11 update with identical visuals brought so much performance that they had to make PR about it

even in the dx10 days, when i had a 4870x2, crysis warhead was smoother in dx10 than dx9 even if the fps counter was a few numbers lower. i have never seen a regression with a new dx version unless it's brand-new buggy drivers or OS... plus, nvidia has proven multithreaded gains

the main detail i see that hurts fps the most is HB/HDAO, so i switch to SSAO or off instead, given that my 570m or 660 aren't powerful gpus... don't you think i would be one of the first to notice problems with dx11? how can i trust your word with your high-end specs without real examples

by the way, crysis 2 runs great for how it looks, you are free to turn off tessellation or tweak the cvars rather than switching to dx9

also breit's logic is fine if you're only turning down the newly added details that are already disabled in the previous dx version of the game


but that's an optional feature... if it's running dx12 it IS dx12, what else can they call it?? 'getting dx12' is different from 'is dx12'

your dream should be 'built from the ground up for dx12', aka exclusive
Crysis and Warhead both ran like poop on my 4870 Crossfire setup in DX10. That was with 8GB of RAM in custom-tweaked Vista with a high-clocked E8400. Not rebutting the rest of your post, just found the 4870x2 comment a tad weird compared to my own experience.
Posted on Reply
#23
kn00tcn
PP Mguire: Crysis and Warhead both ran like poop on my 4870 Crossfire setup in DX10. That was with 8GB of RAM in custom-tweaked Vista with a high-clocked E8400. Not rebutting the rest of your post, just found the 4870x2 comment a tad weird compared to my own experience.
which driver? i didn't mean it was fine around the game's launch so i'm also trying to remember, i have the fraps shots on another hard drive... but there was a definite change in the driver in summer 2009 that greatly reduced input lag during vsync so it's possibly around this time (far cry 2 was also improved, since then the dx10 minimum is equal to dx9 minimum with average+max dx10 being higher than dx9, not to mention less stuttering)

it should be in the release notes too, the same way there was a dx11/bf3 golden driver in 2012

edit: q9550 3.6ghz, 4gb ram... are you additionally saying a single 4870 ran better than CF, or simply that dx9 ran much more acceptably?
Posted on Reply
#24
newtekie1
Semi-Retired Folder
I'm less concerned with DX12 eye candy and more concerned with DX12 performance improvements.
Posted on Reply
#25
PP Mguire
kn00tcn: which driver? i didn't mean it was fine around the game's launch so i'm also trying to remember, i have the fraps shots on another hard drive... but there was a definite change in the driver in summer 2009 that greatly reduced input lag during vsync so it's possibly around this time (far cry 2 was also improved, since then the dx10 minimum is equal to dx9 minimum with average+max dx10 being higher than dx9, not to mention less stuttering)

it should be in the release notes too, the same way there was a dx11/bf3 golden driver in 2012

edit: q9550 3.6ghz, 4gb ram... are you additionally saying a single 4870 ran better than CF, or simply that dx9 ran much more acceptably?
Mine was an '09 driver, I want to say before summer. My DX9 performance was noticeably smoother and better than DX10 with a single or Crossfire 4870. No way I could remember which driver it was.
newtekie1: I'm less concerned with DX12 eye candy and more concerned with DX12 performance improvements.
This.
Posted on Reply
Add your own comment