Monday, October 26th 2015

DirectX 12 Mixed Multi-GPU: It Works, For Now

One of the biggest features of DirectX 12 is its asymmetric multi-GPU support, which lets you mix and match GPUs from across brands, as long as they support a consistent feature level (Direct3D 12_0, in the case of "Ashes of the Singularity"). It's not enough to have two DirectX 12 GPUs; you need DirectX 12 applications to make use of your contraption. Don't expect your older DirectX 11 games to run faster on a DirectX 12 mixed multi-GPU setup. AnandTech put Microsoft's claims to the test by building a multi-GPU setup using a Radeon R9 Fury X and a GeForce GTX 980 Ti, and drew some interesting conclusions.

To begin with, yes, alternate-frame rendering, the most common multi-GPU method, works. There were genuine >50% performance uplifts, though not quite of the kind you could expect from proprietary multi-GPU configurations such as SLI or CrossFire. Second, which card you use as the primary card (i.e. the one the display is plugged into) impacts performance. AnandTech found a configuration with the R9 Fury X as the primary card and the GTX 980 Ti as secondary to be slightly faster than one with the GTX 980 Ti as primary. Mixing and matching different GPUs from the same vendor (e.g. a GTX 980 Ti and a GTX TITAN X) also works. The best part? AnandTech found no stability issues in mixing an R9 Fury X and a GTX 980 Ti. It remains to be seen how long this industry-standard utopia lasts, and whether GPU vendors find it at odds with their commercial interests. Multi-GPU optimization is something both AMD and NVIDIA spend a lot of resources on, and it's an open question how many of those resources they'll be willing to put into a standardized multi-GPU tech, and away from their own SLI/CrossFire fiefdoms. Read the insightful article from the source link below.
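For the curious, here is roughly what the application-side setup for DirectX 12's "unlinked" explicit multi-adapter path looks like. This is a minimal sketch for illustration only, not code from "Ashes of the Singularity" or the AnandTech article: the game enumerates every D3D12-capable adapter in the system and creates an independent device on each one, regardless of vendor.

```cpp
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

// Enumerate every hardware adapter and create an independent D3D12 device on
// each one. In unlinked explicit multi-adapter, the engine itself decides how
// to split work between these devices and copies data between them.
std::vector<ComPtr<ID3D12Device>> CreateDevicesOnAllAdapters()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip the WARP software rasterizer

        ComPtr<ID3D12Device> device;
        // 11_0 is the minimum feature level D3D12CreateDevice accepts; a game
        // like Ashes would additionally check for the 12_0 feature set.
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device);
    }
    return devices; // e.g. one device for the Fury X, one for the GTX 980 Ti
}
```

From there, the game creates its own command queues on each device and handles frame scheduling and cross-adapter copies explicitly, which is why none of this helps in applications that weren't written for it.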
Source: AnandTech

55 Comments on DirectX 12 Mixed Multi-GPU: It Works, For Now

#26
midnightoil
the54thvoidYes but is a lot of this not to do with the XDMA implementation by AMD? The crossfire situation of older cards (Tahiti and beyond) is not so great. So you mean the XDMA pathway Fury uses (as being from Hawaii and upwards) means it communicates better with the API or better with the Nvidia card? I suppose that makes sense if the Nvidia card isn't normally communicating with another card via pci....
On mature drivers, Tahiti did much better. It did however have appalling frame times compared with NV, which were brought to par and then leapt ahead. Obviously there were quite a few cards between Tahiti and Fiji, too.

I dunno what it is. Probably a mixture of a lot of things. Perhaps part of it is due to the fact that GCN is designed from the ground up to work in parallel, and none of NVIDIA's architectures are.
Posted on Reply
#27
GhostRyder
You know, it's cool, but I would really have to see this in person. Plus, I feel they will not allow this forever... but that is just my opinion.
Posted on Reply
#28
the54thvoid
Intoxicated Moderator
midnightoilThe reason they blocked it years ago was that many people wanted to buy a high end AMD card and then have a slave $50 NV card for Phys-X.
A minor difference in semantics but an important one nonetheless: it would (in Nvidia's mind) make people hesitate to buy an AMD card if they wanted to run PhysX. They didn't block it because 'many people wanted to buy a high end AMD card'.
The irony is, PhysX was overhyped and underutilised: pretty effects that got boring real quick.

With this setup, I don't think a driver-level hack will help NV's cause. I imagine the DX12 API might require certain basic driver features to run the hardware as 'bare' as possible. It might hurt Nvidia's performance if they intentionally hobble certain DX12 features.
Posted on Reply
#29
FordGT90Concept
"I go fast!1!11!1!"
crazyeyesreaperI think the fact Lucid got bought and disappeared is the problem; they dropped off the face of the earth, basically, as far as anyone is concerned. Even then, how did Microsoft get the tech? Did they license it? Did they create their own?
I wouldn't be surprised at all if this is possible because of Mantle. The closer-to-the-metal programming that Mantle/D3D12 allows lets developers simulate SLI/CrossFire in the API.
crazyeyesreaperdid you fail to read? Fury X + Fury: AMD has no problems allowing the same GPU but different tiers to operate in CrossFire; it's been this way for years, for example 7950 + 7970. NVIDIA does not allow that regardless. In DX12, Fury X + 980 Ti is faster than 2x AMD GPUs in proper CrossFire.
They didn't test Fury X + Fury X nor GTX 980 Ti + GTX 980 Ti. The big jump for Fury X might be because of DX12 more so than pairing two cards.
Posted on Reply
#30
btarunr
Editor & Senior Moderator
crazyeyesreaperdid you fail to read? Fury X + Fury: AMD has no problems allowing the same GPU but different tiers to operate in CrossFire; it's been this way for years, for example 7950 + 7970. NVIDIA does not allow that regardless. In DX12, Fury X + 980 Ti is faster than 2x AMD GPUs in proper CrossFire.

Fury X + Fury shows a 66% gain over a Fury X by itself.

Fury X + 980 Ti shows a 75% gain.

This is proprietary tech vs. a DX12 feature (i.e. vendor-agnostic), and the agnostic option is kicking the crap out of said proprietary solution.
We don't know if Fury X + Fury was running CrossFire, or if AnandTech disabled it in CCC to let DX12's native multi-GPU work.
Posted on Reply
#31
crazyeyesreaper
Not a Moderator
Still, the Fury X + 980 Ti shows performance scaling over a stock Fury X in league with the average scaling of CrossFire on its own.

If we look at W1zzard's review, the average scaling is around 65%; granted, this is Nano + Nano at 64% and Nano + Fury X at 66%.

Looking at the agnostic tech, its scaling without CrossFire is about the same as AMD's CrossFire scaling at 4K across the board, which is impressive. Add to that that Fury X + 980 Ti gives a 75% increase over a Fury X, a multi-GPU performance increase that's technically higher than the CrossFire average. Not to say that CrossFire doesn't scale better in some titles, but this points to the possibility of the DX12 multi-GPU option being just as efficient as the proprietary technologies. And it's baked into DX12, meaning that if developers implemented it, multi-GPU users wouldn't have to wait on AMD or NVIDIA for profiles. Profiles would basically only be necessary for legacy apps.

Games tend to scale between 40-90%, with the dominant average being 65-75%. If an agnostic API can offer 65-75% without driver profiles, that's pretty good. Even better if frame time variance improves, because relying on AMD's drivers for that is very hit or miss, lol. On top of that, being able to mix and match offers some leeway for the various game tech that both companies push. It's all a pipe dream really, but a Fury X + 980 Ti seems to offer better performance scaling than 2x AMD cards, with better frame time variance. Without a conclusive SLI and CrossFire comparison it's a bit moot, I suppose.
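For reference, the scaling figures being thrown around here are just (multi-GPU FPS ÷ single-GPU FPS − 1). A quick sketch with made-up frame rates, not numbers from either review:

```cpp
#include <cstdio>

int main()
{
    // Hypothetical frame rates, for illustration only.
    const double fury_x_alone = 40.0;   // single Fury X
    const double mixed_pair   = 70.0;   // Fury X + 980 Ti

    // Scaling ("gain") over a single card.
    const double gain = (mixed_pair / fury_x_alone - 1.0) * 100.0;
    std::printf("Gain over a single card: %.0f%%\n", gain); // prints 75%
    return 0;
}
```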
Posted on Reply
#32
terroralpha
pardon me while i shit my pants in excitement.
Posted on Reply
#33
rooivalk
What happens when you mix between high end GPU and low end GPU?
Posted on Reply
#34
FordGT90Concept
"I go fast!1!11!1!"
I suspect a pretty serious loss in framerate. The faster GPU will have to wait for the slower GPU to finish the alternate frame. If it doesn't result in framerate loss, then they're doing some magical wizardry.
Posted on Reply
#35
64K
FordGT90ConceptI suspect a pretty serious loss in framerate. The faster GPU will have to wait for the slower GPU to finish the alternate frame. If it doesn't result in framerate loss, then they're doing some magical wizardry.
That's what I think too. Even with one of the things MS talked about for DX12, Split Frame Rendering, the faster card would complete its half of the frame and have to wait on the slower card to complete its half, unless DX12 had some way to know how much slower the slower card was and assign it a smaller portion of the frame. Say, 3/4 of the frame for the faster card and 1/4 for the slower card.
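Something along those lines would just be engine-side bookkeeping. A rough sketch of the idea (purely hypothetical, not anything from the DX12 API or the AT article): size each card's share of the frame in inverse proportion to its measured frame time.

```cpp
#include <algorithm>

// Measured per-frame render times for the two cards, in milliseconds.
struct GpuTimings { double fast_ms; double slow_ms; };

// Fraction of the frame (0..1) to hand to the faster card.
double FastCardShare(const GpuTimings& t)
{
    // A card's share is proportional to its throughput (1 / frame time).
    const double fast_rate = 1.0 / t.fast_ms;
    const double slow_rate = 1.0 / t.slow_ms;
    const double share = fast_rate / (fast_rate + slow_rate);
    // Clamp so the slower card always gets something to do.
    return std::clamp(share, 0.5, 0.95);
}

// Example: 10 ms vs 30 ms gives a share of 0.75 -- roughly the
// 3/4 vs 1/4 split described above.
```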
Posted on Reply
#36
FordGT90Concept
"I go fast!1!11!1!"
That would fall under the category of "magical wizardry" though. :laugh:
Posted on Reply
#37
rooivalk
FordGT90ConceptI suspect a pretty serious loss in framerate. The faster GPU will have to wait for the slower GPU to finish the alternate frame. If it doesn't result in framerate loss, then they're doing some magical wizardry.
That's better. I thought it was going to be tearing.
Posted on Reply
#38
Ja.KooLit
Interesting. Now I can put my iGPU to use :) (hopefully)
Posted on Reply
#39
BiggieShady
Funny how the best results are achieved by combining a Fury and a 980 Ti, and it's even funnier how the order seems to matter :laugh:
Posted on Reply
#40
Prima.Vera
This is all nice and good, however this game's graphics still look like they're from the late 90s, seriously. C&C Generals had better graphics, and I could run it on a Voodoo 2 card. I really don't get why it needs all those resources....
Posted on Reply
#41
Solidstate89
midnightoilIf you mean AMD, I very much doubt they'll block it. If you mean NVIDIA, it would seem abundantly obvious that they will, given their long history of anti-competitive practice and the fact that the mixed setups make it even more clear that AMD scales much better in multi-gpu scenarios, whether AMD only or mixed.
They can't do shit to block it unless they decide not to support DirectX 12. The unlinked EMA mode is completely up to the developers as to how it's used, and whether they want to use it in the first place. So unless either AMD or NVIDIA decides they no longer wish to support DX12, they don't have a say in the matter.
Posted on Reply
#42
Solidstate89
btarunrWe don't know if Fury X + Fury was running CrossFire, or if AnandTech disabled it in CCC to let DX12's native multi-GPU work.
"Native multi-GPU", or implicit mode, just does AFR with CrossFire or SLI. It does nothing that isn't already done by NVIDIA or AMD. The point is to allow developers that don't want to get into the nitty-gritty of developing their own EMA multi-GPU environment to simply use what's already available. So yes, it was using CrossFire.

That's my understanding of it from the AT article anyways.
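To illustrate the distinction, here's a minimal sketch (assuming a D3D12 device has already been created; not code from the article): a CrossFire/SLI-linked pair is exposed as a single adapter with multiple "nodes", whereas unlinked adapters each report a single node and the engine has to drive them as separate devices.

```cpp
#include <d3d12.h>

// Returns true if the driver has exposed a linked adapter group
// (CrossFire/SLI-style), i.e. one ID3D12Device spanning multiple
// physical GPUs ("nodes"). Unlinked adapters report a node count of 1.
bool IsLinkedAdapterGroup(ID3D12Device* device)
{
    return device->GetNodeCount() > 1;
}
```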
Posted on Reply
#43
bpgt64
I wonder, could I use an AMD GPU with an NVIDIA GPU, but leverage G-SYNC from the NVIDIA card to connect to the monitor, and still leverage DX12 multi-GPU to do this?
Posted on Reply
#44
FordGT90Concept
"I go fast!1!11!1!"
NVIDIA would have to be primary. I suspect it would work.

Bear in mind that it's pretty doubtful most games will even be coded to use multiple GPUs outside of CrossFire/SLI. I suspect the only reason they did it was all the publicity the game got from the async compute discovery. Playing with Direct3D 12 has been a great way for them to promote the game.
Posted on Reply
#45
GhostRyder
bpgt64I wonder, could I use an AMD GPU with an NVIDIA GPU, but leverage G-SYNC from the NVIDIA card to connect to the monitor, and still leverage DX12 multi-GPU to do this?
That would be some sort of magic that might make all our heads explode.
Posted on Reply
#46
Athlonite
I would rather the OS controlled it fully than have to rely on whether a game will work with it or not.
Posted on Reply
#47
geon2k2
I don't see why someone would do such a mix of cards of approximately the same power for AFR alone; it would surely be better to go with the same brand that's already present in the system.

I'm more interested in seeing asymmetric processing than alternate-frame rendering, and obviously it has to be cross-vendor. That way, if you upgrade for, let's say, 30% more performance, you can still use the old card to process maybe 40% of the objects and the new one to process the rest and compose the final image.

This would be a much better alternative than just selling the old card for pennies.

And they showed this capability of DX12 as well, with an APU processing some objects and an add-in card processing the rest of the image.
Posted on Reply
#48
medi01
bpgt64I wonder, could I use an AMD GPU with an NVIDIA GPU, but leverage G-SYNC from the NVIDIA card to connect to the monitor, and still leverage DX12 multi-GPU to do this?
If nVidia's GPU is the master, I don't see the problem.
Posted on Reply
#49
deemon
Now the question remains:
Can one use NVIDIA GPU-accelerated HairWorks + PhysX and AMD FreeSync in such a setup at the same time?
Posted on Reply
#50
FordGT90Concept
"I go fast!1!11!1!"
I think PhysX hardware acceleration is disabled in the presence of an AMD card. I suspect NVIDIA did the same for HairWorks. FreeSync would require the AMD card to be primary. It might work, because the primary card always has to transmit the completed frames to the monitor in a language the monitor understands.
Posted on Reply