Wednesday, July 6th 2016
Microsoft Refines DirectX 12 Multi-GPU with Simple Abstraction Layer
Microsoft is sparing no effort in promoting DirectX 12's native multi-GPU as the go-to multi-GPU solution for game developers, obsoleting proprietary technologies like SLI and CrossFire. The company recently announced that it is making it easier for game developers to take advantage of multiple GPUs in their games, with far less coding than is required today. This involves a new hardware abstraction layer that simplifies the process of pooling multiple GPUs in a system, and will let developers bypass the Explicit Multi-Adapter (EMA) mode of graphics cards.
This is the first major step by Microsoft since its announcement that DirectX 12, in theory, supports true Mixed Multi-Adapter configurations. The company stated that it will release the new abstraction layer as part of a comprehensive framework in its GitHub repository, along with two sample projects: one that takes advantage of the new multi-GPU tech, and one that does not. With this code to study, game developers' learning curve should be significantly reduced, and they will have a template for implementing multi-GPU in their DirectX 12 projects with minimal effort. In doing so, Microsoft is supporting game developers in implementing API-native multi-GPU, even as GPU manufacturers have stated that while their GPUs will support EMA, the onus will be on game developers to keep their games optimized.
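For readers curious what sits underneath the new layer: explicit multi-adapter begins with the application enumerating every GPU in the system and creating an independent D3D12 device for each one. The snippet below is a minimal sketch of that first step using standard DXGI/D3D12 calls; it is our own illustration, not code from Microsoft's framework or samples.

```cpp
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

// Enumerate all hardware adapters and create one D3D12 device per GPU.
// Under EMA, each device gets its own command queues, allocators, and
// heaps; the application is responsible for splitting work between them.
std::vector<ComPtr<ID3D12Device>> CreateDevicePerGpu()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND;
         ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip WARP and other software adapters

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(),
                                        D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device);
    }
    return devices;
}
```

Everything from this point on (cross-adapter resource sharing, synchronization, and workload distribution) is the part the abstraction layer is meant to hide from developers.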
Source: GitHub
Also, we can thank Vulkan; otherwise Microsoft wouldn't care that much about bringing advantages to DirectX 12. Working on making multi-GPU on DirectX 12 easier can help convince developers not to ignore this feature.
I had two of them.. 8800s. That's like ten years ago.. lol..
I've always had a problem with SLI, and I've always had an SLI system, since the 8800 series..
Micro stutters, crashes, incompatibility with games, artifacts, weird shadows, not working right with 3D Vision... The list goes on and on.
But finally, a good reason to go SLI again.
I said it's been like 10 years and it's 2016.. Math time! Ten years ago it was 2006..
If it came out in 2004, which it did.. I was off by two years.. Sue me..
It started with the 6 series, not the 8 series..
en.wikipedia.org/wiki/Scalable_Link_Interface
Also...
Sometimes I wish 3dfx hadn't gone under and we'd have epic battles between three big graphics vendors. I mean, just imagine what 3dfx could do with DX12 that they couldn't back in their day. They were over 15 years ahead of their time, and the only thing stopping them was the technology itself, because it just wasn't ready for their radical ideas.
In before UWP only.
I remember seeing Quake 2 at 800x600 through the miniGL API, after playing it for a couple of hours in 320x240 software mode... MIND BLOWN!!
Understand carefully: "it will let developers bypass the Explicit Multi-Adapter (EMA) mode of graphics cards."
This means that once a profile has been made under this, it will only work under DX12.
"So-called b**tard M'soft wants people to rely on their OS forever."
They can never do something just for the sake of development.
On topic: at last! I can't wait for DX12 to take effect..
If they could make this work transparently and smoothly, distribute the load according to each GPU's power, and on top of that keep 99th-percentile frame times low, then that would be awesome... however, I have the feeling it is still too early and too complex as of now.
It's one thing to make it work in a game as a proof of concept; it's another to provide a good experience and scalability across most games. I also had one of those; I was a student back then and bought it second-hand, and OMG, it was flying in Quake 3 :)
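To put the load-distribution idea above in concrete terms: in a split-frame scheme, an engine could resize each GPU's slice of the frame in proportion to its measured throughput. The function below is a purely hypothetical sketch of that balancing math; it is not an API that DirectX 12 or the new abstraction layer exposes.

```cpp
#include <vector>

// Hypothetical rebalancing step for split-frame rendering across unequal
// GPUs: give each GPU a share of the next frame proportional to its
// throughput (the inverse of how long its last slice took), so fast and
// slow cards finish their slices at roughly the same time.
std::vector<float> RebalanceShares(const std::vector<float>& lastSliceMs)
{
    std::vector<float> shares(lastSliceMs.size());
    float total = 0.0f;
    for (size_t i = 0; i < lastSliceMs.size(); ++i)
    {
        shares[i] = 1.0f / lastSliceMs[i]; // throughput estimate
        total += shares[i];
    }
    for (float& s : shares)
        s /= total; // normalize so the shares sum to 1
    return shares;
}
// Example: slice times of {10 ms, 20 ms} yield shares of {2/3, 1/3}.
```

Keeping the 99th-percentile frame times low is the hard part: the shares drift from frame to frame, and any cross-GPU copy adds latency that a simple balancer like this cannot see.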
I just asked the guy before you not to call me "youngster," so I know you're trolling..
I'm a 39 year old building inspector, not a "young one"....
We already clarified the points you're making, by the way.. NVIDIA's version of SLI started with the 6 series in 2004. Bought, stolen, whatever, jitterbug.