Tuesday, February 28th 2017

Is DirectX 12 Worth the Trouble?

We are at the 2017 Game Developers Conference, and were invited to one of the many enlightening tech sessions, titled "Is DirectX 12 Worth it," by Jurjen Katsman, CEO of Nixxes, a company credited with several successful PC ports of console games (Rise of the Tomb Raider, Deus Ex: Mankind Divided). Over the past 18 months, DirectX 12 has been a selling point to PC gamers for everything from Windows 10 (free upgrade) to new graphics cards, and even games, with a lack of DirectX 12 support denting the PR of certain AAA launches until the developers patched in support for the new API. Game developers are now asking the community at large to manage its expectations of DirectX 12. The underlying point: the API is not a silver bullet for the technical limitations developers have to cope with, and to reap all its performance rewards, a proportionate amount of effort has to be put in by developers.

The presentation begins with the speaker discussing consumers' disillusionment with DirectX 12: they have yet to see the console-rivaling performance gains the API was purported to bring. Beyond the lack of huge performance gains, consumers eagerly await the promised multi-GPU utopia, in which you can not only mix and match GPUs across models and brands, but also have them pool their video memory - a theoretical possibility with DirectX 12, but one developers argue is easier said than done in the real world. One key way DirectX 12 is designed to improve performance is by distributing rendering overhead evenly across the cores of a multi-core CPU. For high-performance desktop users with reasonably fast CPUs, however, the gains are negligible. The same goes for gaming at higher resolutions, such as 1440p and 4K Ultra HD, where frame rates are lower and performance tends to be GPU-limited.
The other big feature DirectX 12 introduced to the mainstream is asynchronous compute. There have been examples of games that gain more performance on one brand of GPU than another, and this, according to developers, is a problem. Hardware support for async compute is limited to only the latest GPU architectures, and requires further tuning specific to the hardware. Performance gains seen on closed ecosystems such as consoles hence don't necessarily carry over to the PC. The conclusion is that performance gains from async compute are inconsistent, and that developers should spend their time making a better game rather than optimizing for a very specific hardware user-base.
The speakers also cast doubt on the viability of memory stacking on multi-GPU - the idea that the memory of two GPUs simply adds up into one large addressable block. To begin with, the multi-GPU user-base is still far too small for developers to spend more effort than simply hashing out AFR (alternate frame rendering) code for their games. AFR, the most common performance-improving multi-GPU method - in which each GPU renders an alternate frame for the master GPU to output in sequence - requires each GPU to keep a mirrored copy of the other GPUs' video memory. Having to fetch data from the physical memory of a neighboring GPU is a costly, time-consuming operation.
The idea behind new-generation "close-to-the-metal" APIs such as DirectX 12 and Vulkan has been to make graphics drivers as irrelevant to the rendering pipeline as possible. The speakers contend that drivers are still very relevant, and that with the advent of the new APIs their complexity has actually gone up: in memory management, in manual multi-GPU (custom, non-AFR implementations), in the underlying tech required to enable async compute, and in the various performance-enhancement tricks driver vendors implement to make their brand of hardware faster. Those tricks can in turn mean uncertainty for the developer, in cases where the driver overrides certain techniques just to squeeze out a little extra performance.

The developers revisit the question: is DirectX 12 worth it? Well, if you are willing to invest a lot of time into your DirectX 12 implementation, you can succeed, as in the case of "Rise of the Tomb Raider," where users noticed "significant gains" with the new API (gains that were not trivial to achieve). They also argue that it's possible to overcome CPU bottlenecks with or without DirectX 12, if that's relevant to your game or user-base. They concede that async compute is the way forward, and even if it doesn't deliver console-rivaling performance gains, it can certainly benefit the PC. They're also happy to be rid of AFR multi-GPU as governed by the graphics driver, which was unpredictable and offered little control (remember those pesky 400 MB driver updates just to get multi-GPU support?). The new API-governed AFR means more work for the developer, but also far greater control, which the speaker believes will benefit users. Another point he made is that a successful port to DirectX 12 lays a good foundation for porting to Vulkan (mostly for mobile), which uses nearly identical concepts, technologies, and APIs.
Memory management is the hardest part of DirectX 12, but the developer community is ready to embrace the innovation it enables (mega-textures, tiled resources, and so on). The speakers conclude that DirectX 12 is hard: it can be worth the extra effort, but it may not be. Developers and consumers alike need to be realistic about what to expect from DirectX 12, and developers in particular should focus on making good games rather than sacrificing their resources on becoming tech demonstrators at the expense of actual content.

72 Comments on Is DirectX 12 Worth the Trouble?

#26
Caring1
the54thvoid: And if you really want to say Nvidia can't do DX12, what are the fastest 2-3 single gfx cards right now?
Being the fastest or most powerful, does not equate to good Dx12 performance.
It's like building a bigger V8 to compare with high tech turbo 4's they may win a race, but they brute force it, no refinement.
#28
bug
Caring1: Being the fastest or most powerful, does not equate to good Dx12 performance.
It's like building a bigger V8 to compare with high tech turbo 4's they may win a race, but they brute force it, no refinement.
Well, the bigger V8 will most likely outlast that high tech turbo, so there's that :D
#29
Solidstate89
DX12 for Deus-Ex Mankind Revolution was a pretty huge difference in performance and frame stuttering. I could actually crank up the graphics options even higher than the DX11 render path, and not lose any frame rate at all. The framerate itself was also smoother.

I've been impressed with my limited experience with DX12 games so far.
#30
Imsochobo
Solidstate89: DX12 for Deus-Ex Mankind Revolution was a pretty huge difference in performance and frame stuttering. I could actually crank up the graphics options even higher than the DX11 render path, and not lose any frame rate at all. The framerate itself was also smoother.

I've been impressed with my limited experience with DX12 games so far.
Same here, huge benefits in DX12 on my old entertainment system, it was like making a 1055T suddenly live a year longer.

The Xeon system doesn't have a decent graphics card in it, and it really, really hates having one put in it too (bluescreens), so it's a CPU brute with 10 cores / 20 threads.

hence my 1055T gaming system which runs everything I want up to about this point.. somehow... it's really crap.. but still games...
#31
the54thvoid
Intoxicated Moderator
medi01: Fury X actually beats 980Ti AND 1070 in the linked benchmark (scroll down to DirectX12)
Read my post. I clearly say stock 980ti is a limp fish. But most OC to 20% gains. The Gaming X 980ti is faster than Fury X. I use my res of 1440p.
#33
m1dg3t
For the amount of money developers are charging for games they have a lot of balls to complain it's "too much work". I am not content with complacency.

Seems more like the industry is trying to hamstring advancement. In the name of profits. Again. IMHO
#34
ADHDGAMING
Why are they using possibly the worst Graphics card at DX12 to show DX12 performance ...
#35
olymind1
I don't know, shouldn't we wait at least until Shader Model 6.0 is released - isn't it supposed to be part of DirectX 12? - before we or the devs bury DX12?
#37
bug
m1dg3t: For the amount of money developers are charging for games they have a lot of balls to complain it's "too much work". I am not content with complacency.

Seems more like the industry is trying to hamstring advancement. In the name of profits. Again. IMHO
Try to minimize the effort and maximize profits, isn't this what any industry is about?

What's missing from the picture is what is the amount of extra effort needed to get those improvements from DX12. Cause if you're spending 10% extra time to get 10% better performance, that's one thing. If you're spending 100% extra time to get 10% better performance, that's another. I'm a developer and I have an idea, but I won't speculate.
#38
m1dg3t
bug: Try to minimize the effort and maximize profits, isn't this what any industry is about?

What's missing from the picture is what is the amount of extra effort needed to get those improvements from DX12. Cause if you're spending 10% extra time to get 10% better performance, that's one thing. If you're spending 100% extra time to get 10% better performance, that's another. I'm a developer and I have an idea, but I won't speculate.
If that's what satisfies the market, yes. Consumers have the power. NOT the corporations. If a pile of poop is what it takes to make you happy, that is all you're going to get.

TBH I think the fact that the smaller player is pushing for these advancements while the larger player is ignoring/dismissing them is quite telling. Let the hardware guys focus/work on hardware and let the software guys make the absolute best use of it. We all win. :)
#39
W1zzard
bug: What's missing from the picture is what is the amount of extra effort needed to get those improvements from DX12. Cause if you're spending 10% extra time to get 10% better performance, that's one thing. If you're spending 100% extra time to get 10% better performance, that's another.
It's more like you spend 50% additional time (over the time spent on the DX11 engine you already have) just to not be worse - to reach the same performance.

Ubisoft had a (highly technical) presentation on porting the Anvil engine to DX12. In their first internal versions, where they did a simple port from DX11 to DX12 without touching any features, they got 200% worse frametimes.
#40
nguyen
Rise of the Tomb Raider is probably the best-optimized DX12 title out there, so when the developer says DX12 is not worth it, it's probably true. I would rather play a game when it launches than 3 months later for 10% better performance; maybe for gamers on a budget who wait for Steam sales, DX12 is rather intriguing.
#41
DRDNA
W1zzard: It's more like you spend 50% additional time (over the time spent on the DX11 engine you already have) just to not be worse - to reach the same performance.

Ubisoft had a (highly technical) presentation on porting the Anvil engine to DX12. In their first internal versions, where they did a simple port from DX11 to DX12 without touching any features, they got 200% worse frametimes.
:eek::eek::eek:o_Oo_Oo_O:eek::eek::eek: Wow that's not encouraging.
#42
Xzibit
Consumer complains about Ports

Now Porting studio complain about porting from DX11 to DX12.

Stop porting and make DX12 the standard
#43
Prince Valiant
Developers sure like to kick the sheets around when it comes to this stuff. I don't know why they're still pouting about not having super fine control over the hardware when they know they're going to dump a lot on nVidia and AMD to fix in the drivers anyway.
Xzibit: Consumer complains about Ports

Now Porting studio complain about porting from DX11 to DX12.

Stop porting and make DX12 the standard
Add in Vulkan support alongside DX12 and everyone wins.
#44
Steevo
bug: Well, the bigger V8 will most likely outlast that high tech turbo, so there's that :D
No, they usually don't. The high tech turbo will outlast big V8s at the track, besides being lighter, there are fewer moving parts.

It's why heavy industry doesn't use V8 engines and usually prefers inline6 engines with one or two turbos.

Developers are whiny about DX12 as it IS more complicated to code for, and it's a much tougher gate to entry VS system managed but less robust code.

The nice thing is, good and great developers will use it to its fullest.

Lastly, DX11 was so unused that GTA5 didn't become the performance optimization wonder that it is..... just the difference in a developer wanting to do a great job vs slop.
#45
TheGuruStud
bug: But that's exactly the thing developers are complaining about: a lot of work with results that only show if you have a rather new AMD card and in some cases also an older CPU.
Do I really need to point you to steam statistics? 46% are still running dual cores. I imagine MANY of the quads are older, too.

Devs are lazy assholes. They don't care about majority (and likely vast majority) of users. They want to hide their shitty work with new hardware. We see it nearly every game release, now.
#46
bug
TheGuruStud: Do I really need to point you to steam statistics? 46% are still running dual cores. I imagine MANY of the quads are older, too.

Devs are lazy assholes. They don't care about majority (and likely vast majority) of users. They want to hide their shitty work with new hardware. We see it nearly every game release, now.
Actually, the vast majority of users is the only thing devs care about, cause that's where sales come from.

And yes, 46% of users may be running dual cores, but just below half of them have both a DX12 GPU and run Win10. And in Steam's view, a 750 Ti or HD 7900 qualifies as a DX12 card. I'm not disputing that people use older systems, but those are not the people playing AAA games, which is where you'd expect DX12 to make its entrance. Whatever is out there isn't enough to be worth the effort - that's what the article says. And while I know devs are lazy, in this case I don't think they're wrong.
#47
TheGuruStud
bug: Actually, the vast majority of users is the only thing devs care about, cause that's where sales come from.

And yes, 46% of users may be running dual cores, but just below half of them have both a DX12 GPU and run Win10. And in Steam's view, a 750 Ti or HD 7900 qualifies as a DX12 card. I'm not disputing that people use older systems, but those are not the people playing AAA games, which is where you'd expect DX12 to make its entrance. Whatever is out there isn't enough to be worth the effort - that's what the article says. And while I know devs are lazy, in this case I don't think they're wrong.
How can they play AAA games when a lot are mediocre looking, but require high end hardware to run? It's a self-fulfilling prophecy.

And it's not like only AAA titles matter, b/c they're total shit on the whole. I don't play them, either.
#48
dir_d
I can see where Vulkan and GPUOpen can really shine in this situation, with pre-canned scripts to do what you need - modify the code to your needs. If developers used something like that, I feel they could waste less time and make their timelines. The only problem is that, for now, a lot of developers will have to build code from scratch until the repositories start to fill.
#49
Vayra86
TheGuruStud: How can they play AAA games when a lot are mediocre looking, but require high end hardware to run? It's a self-fulfilling prophecy.

And it's not like only AAA titles matter, b/c they're total shit on the whole. I don't play them, either.
Doesn't that just prove his point? There is no market for shit games and if there is one, it evaporates shortly after release (No Man's Sky anyone?). And you can polish a turd, it's still a turd.

What Nixxes is saying (not a person, its a company, but ok) is that the adoption rate of DX12, alongside its limited advantage on CURRENTLY dominant hardware among users, is not worth the effort. And that is just about similar to what most of the sane people on this forum said when AMD's Mantle vanished and all eyes turned to DX12 as the new miracle API.

It's the same as with all economics: if there is no good business case, nobody's doing the business. And for DX12 this is very true for the vast majority of games, because most games, especially triple-A console ports, are hardly CPU-intensive - if they were, the consoles couldn't run them. You don't need most of the theoretical performance advantage you get with DX12, because that advantage mostly relieves CPU load, not GPU load.
#50
EarthDog
This type of concern will not bode well for adoption. It also makes me worry about "RX 4x0 cards being better in DX12 titles down the road" if there is such concern about the ROI...