As I've stated many times before, people will be disappointed by Direct3D 12 because they have the wrong expectations. Most of this is caused by hype of course, but the tabloid press certainly has its share of the blame.
The core problem is that Direct3D 12 is in fact the least exciting major version in ages; it has no major changes, at least when compared to versions 8, 9, 10 and 11.
So let me address the common misconceptions one by one.
0 - "Low level API"
This is touted as the largest feature, and probably the one everyone has heard about, yet most people don't have a clue what it means.
"Low level" is at best a exaggeration, "slightly lower level" or "slightly more explicit control". Any programmer would get confused by calling this a low level API. What we are talking of here is slightly more explicit control over memory management, lower overhead operations and fewer state changes. All of this is good, but it's not what I would call a
low level API, it's not like I have more features in the GPU code, more control over GPU scheduling or other means of direct control over the hardware. So it's not such a big deal really.
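To make the "slightly more explicit" part concrete, here is a minimal sketch (error handling omitted, 'device' assumed to be an existing ID3D12Device*) of creating a buffer in Direct3D 12. The application picks the memory pool and the initial resource state itself, where Direct3D 11 would have handled both behind the scenes:

#include <d3d12.h>

ID3D12Resource* CreateUploadBuffer(ID3D12Device* device, UINT64 size)
{
    // The application explicitly picks the memory pool (an upload heap
    // here, i.e. CPU-writable, GPU-readable memory)...
    D3D12_HEAP_PROPERTIES heapProps = {};
    heapProps.Type = D3D12_HEAP_TYPE_UPLOAD;

    D3D12_RESOURCE_DESC desc = {};
    desc.Dimension        = D3D12_RESOURCE_DIMENSION_BUFFER;
    desc.Width            = size;
    desc.Height           = 1;
    desc.DepthOrArraySize = 1;
    desc.MipLevels        = 1;
    desc.SampleDesc.Count = 1;
    desc.Layout           = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;

    // ...and the initial resource state. The driver no longer tracks,
    // renames or migrates resources for you.
    ID3D12Resource* buffer = nullptr;
    device->CreateCommittedResource(
        &heapProps, D3D12_HEAP_FLAG_NONE, &desc,
        D3D12_RESOURCE_STATE_GENERIC_READ, nullptr,
        IID_PPV_ARGS(&buffer));
    return buffer;
}

That's it: more explicit, slightly lower level, but nothing resembling direct hardware access.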
Lower overhead is always good, but in order for there to be a performance gain, there needs to be some overhead to remove in the first place. API overhead consists of two parts: overhead in the driver (addressed in (1)) and overhead in the game engine interfacing with the driver. Whether CPU overhead becomes a major factor depends on how the game engine works. A very efficient engine using the API efficiently will have low overhead, so upgrading it to Direct3D 12 (or Vulkan) will yield minimal gains. Any game engine which is struggling with CPU overhead is guaranteed to have much more overhead inside the game engine itself than in the API anyway, so the only solution is to rewrite it properly. A clear symptom of this misconception is all the people cheering for being able to make more API calls.
There are two ways to utilize a new API in a game engine:
- Create a completely separate and optimized render path.
- Create a wrapper to translate the new API into the old.
Guess which option "all" games so far are using? (The last one)
The whole point of a low level API is gone when you wrap it in an abstraction to make it behave like the old thing. Game engines have to be built from scratch specifically for the new API to gain anything at all from its "lower level" features. People have already forgotten disasters from the past like Crysis, a Direct3D 9 game with Direct3D 10 slapped on top, and why do you think it performed worse on a better API?
1 - AMD
There is also the misconception that AMD is "better" suited for Direct3D 12, which is caused by a couple of factors. First of all, more games are designed for (AMD based) consoles, which means that a number of games will tilt a few percent extra in favor of AMD.
Do you remember I said that the API overhead improvements consist of two parts? Well, the largest one is in the driver itself. Nvidia chose to bring as much of the improvement as possible to all APIs (1)(2), while AMD reserved it for Direct3D 12 to show a larger relative gain. So Nvidia chose to give all games a small boost, while AMD wanted a PR advantage instead.
When it comes to scalability in unbiased games, they scale more or less the same.
2 - Multi-GPU
Since Direct3D 12 is able to send queues to GPUs of mismatched vendor and performance, many people and journalists assumed this meant multi-GPU scalability across everything. This is very far from the truth, as different GPUs will have to be limited to separate workloads, since transfer of data between GPUs is limited to a few MB per frame to minimize latency and stutter. This means you can offload a separate task like physics/particle simulations to a different GPU and then just transfer a state back, but you can't split a frame in two and render the two halves of a dynamic scene on separate GPUs.
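For the curious, here is a minimal sketch (assuming Windows 10 with DXGI 1.4, error handling omitted) of what the "explicit multi-adapter" part actually gives you: the application enumerates the GPUs itself and creates one device per adapter. Splitting the work between them, and copying the results across, is entirely the application's problem:

#include <d3d12.h>
#include <dxgi1_4.h>
#include <vector>

std::vector<ID3D12Device*> CreateDevicePerAdapter()
{
    std::vector<ID3D12Device*> devices;
    IDXGIFactory4* factory = nullptr;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    IDXGIAdapter1* adapter = nullptr;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND;
         ++i)
    {
        // Skip the software rasterizer ("Microsoft Basic Render Driver").
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
        {
            adapter->Release();
            continue;
        }

        // Mixed vendors are fine: each adapter gets its own device,
        // its own queues and its own memory.
        ID3D12Device* device = nullptr;
        if (SUCCEEDED(D3D12CreateDevice(adapter, D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device);
        adapter->Release();
    }
    factory->Release();
    return devices;
}

Notice there is nothing here (or anywhere else in the API) that magically balances a frame across those devices.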
3 - Async compute
Async compute is also commonly misunderstood. The purpose of the feature is to utilize idle GPU resources for
other workloads to improve efficiency. E.g. while rendering, you can also load some textures and encode some video, since this uses different GPU resources.
If you compare the Fury X vs. the 980 Ti you'll see that Nvidia beats AMD even though AMD has ~52% more theoretical performance. Yet, when applying async compute in some games, AMD gets a much higher relative gain. Fans always tout this as superior performance for AMD, but fail to realize that AMD has a GPU with >1/3 of the cores idling, which creates a greater potential to do other tasks simultaneously. So it's in fact the inefficiency of Fury which gives it the greater potential here, not its brilliance. It's also important to note that in order to achieve such gains each game has to be fine-tuned to the GPUs, and optimizing a game for an inferior GPU architecture is always wrong. As AMD's architectures become better, we will see diminishing returns from this kind of optimization. Remember, the point of async compute is not to do similar tasks on inefficient GPUs, but to do different types of tasks.
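At the API level, async compute is nothing more exotic than a second, compute-only queue next to the graphics queue; a minimal sketch (error handling omitted):

#include <d3d12.h>

void CreateQueues(ID3D12Device* device,
                  ID3D12CommandQueue** graphicsQueue,
                  ID3D12CommandQueue** computeQueue)
{
    // The normal graphics ("direct") queue used for rendering.
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(graphicsQueue));

    // A separate compute-only queue; work submitted here may execute
    // alongside the graphics work above. Synchronization between the
    // two queues is done explicitly with ID3D12Fence objects.
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(computeQueue));
}

Whether the work on the two queues actually overlaps, and whether that helps, depends entirely on the GPU having idle resources to fill, which is exactly the point above.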
Conclusion
So is Direct3D 12 worth the trouble?
If you're building it from scratch and not adding any abstractions or bloat then yes!
Or even better, try Vulkan instead.
-----
The idea behind new-generation "close-to-the-metal" APIs such as DirectX 12 and Vulkan has been to make graphics drivers as irrelevant to the rendering pipeline as possible.
That didn't make any sense at all, please try again. None of the new APIs make the drivers
less relevant.
No. So far Direct3D 12, NOT DirectX 12 (why are people still calling it that?!), has almost no relevance for existing games.
When people don't make that distinction, it's a sign that they don't know the difference, and when that's the case they clearly don't know anything about the subject at hand.
Unless a game is built from the ground up to use D3D 12, no significant performance gains can be seen compared with D3D 11.
If only people understood this. See my point (0) above.
I think the main focus should be on building game engines natively on D3D12 instead of D3D9 or 10/11. Unless you have an Unreal Engine 5, for example, or Frostbite 3 built with that in mind, all the games are going to suck on D3D12.
Yes. If you have a game in the works already, just continue with the old API. If you are starting from scratch, then go with the latest
exclusively.
That's not how things work. For an engine, there is no "native" thing. You're confusing the game engine and the renderer (render path). They are not the same thing, but they do work together to output the image you see in the end. There may be things that explicitly depend on game engine support and are hard to do later, but the majority aren't.
Calling it "native" might be a stretch, but it's about adding what's called a wrapper or abstraction layer. To a large extent you can make two "similar" things behave like the same at a significant overhead cost. Any coder will understand how an abstraction works.
Let's say you're building a game and you want to target Direct3D 9 and 11, or Direct3D 11 and OpenGL, etc. You can then either create completely separate render paths for each API (basically the whole renderer), or you can create an abstraction layer: a "common API" that hides all the specifics of each API, with a single pipeline built on top of the abstraction. Not only does this waste a lot of CPU cycles on pure overhead, you'll also get the "worst of every API", because when you break the rendering down into generic API calls, those calls have to be compatible with the least efficient API, which means that the rendering will not translate into an optimal queue for each API. The result is a "naive" queue of API calls, far from optimal. A sketch of what such a layer looks like follows below.
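Here is a hypothetical sketch of such an abstraction layer (the class and method names are made up for illustration, and the backend is a stub). Every generic call goes through a virtual dispatch and a translation step, and the generic interface can only express what the least capable backend supports:

#include <cstdint>

// The "common API" that hides D3D11/D3D12/OpenGL behind one interface.
struct IRenderBackend {
    virtual ~IRenderBackend() = default;
    virtual void SetTexture(uint32_t slot, uint32_t textureId) = 0;
    virtual void Draw(uint32_t vertexCount) = 0;
};

// A D3D12 backend forced to mimic D3D11-style, one-at-a-time state
// changes. It cannot pre-bake pipeline state objects or pre-record
// command lists, because the generic interface has no concept of them.
struct D3D12Backend : IRenderBackend {
    void SetTexture(uint32_t /*slot*/, uint32_t /*textureId*/) override {
        // Translate the generic binding into descriptor updates on
        // every call: pure overhead compared to a native path that
        // would build its descriptor tables up front.
    }
    void Draw(uint32_t /*vertexCount*/) override {
        // Re-validate and re-translate the accumulated state, then
        // emit the actual draw.
    }
};

The translation work in those stubs is exactly the "naive queue" problem: the backend must reconstruct, call by call, information a native render path would have had for free.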