Tuesday, August 1st 2017

AMD Giving Up on CrossFire with RX Vega

AMD is reportedly scaling down its efforts to support and promote multi-GPU technologies such as CrossFire, and not in favor of open standards such as DirectX 12 native multi-GPU, either. Speaking to GamersNexus, an AMD representative confirmed that while the new Radeon RX Vega family of graphics cards supports CrossFire, the company may not allocate as many resources to promoting or supporting it as it did for older GPU launches.

This is in keeping with the industry's trend away from multi-GPU configurations, and aligns with NVIDIA's decision to dial down its investment in SLI. It also more or less confirms that AMD won't build a Radeon RX series consumer graphics product based on two "Vega 10" ASICs. At best, one can expect dual-GPU cards for the professional or GPU-compute markets, under the Radeon Pro or Radeon Instinct brands.
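For context, the "native multi-GPU" in DirectX 12 is the explicit multi-adapter model: the game itself, not the driver, enumerates every GPU in the system, creates a device on each, and splits the rendering work between them. A minimal sketch of that enumeration step is shown below (illustrative only; error handling omitted, and the workload distribution that follows is the hard part left to each developer):

```cpp
// Minimal sketch of DX12 "explicit multi-adapter": the application, not the
// driver, enumerates every GPU and creates an independent device on each one.
// Error handling is omitted for brevity.
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

std::vector<ComPtr<ID3D12Device>> CreateDevicesOnAllAdapters()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip the WARP software rasterizer

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device);
        // From here on, distributing work, synchronizing, and copying results
        // between the devices is entirely the application's responsibility,
        // which is why this path depends on per-game developer effort.
    }
    return devices;
}
```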
Source: GamersNexus.net

30 Comments on AMD Giving Up on CrossFire with RX Vega

#1
FordGT90Concept
"I go fast!1!11!1!"
It was a driver solution (like SLI). AMD wants a hardware solution (through Infinity Fabric).
Posted on Reply
#2
ZoneDymo
Well, wasn't it sort of the plan that DX12 etc. could simply use multiple GPUs inherently?
Posted on Reply
#3
Toothless
Tech, Games, and TPU!
ZoneDymo: Well, wasn't it sort of the plan that DX12 etc. could simply use multiple GPUs inherently?
From both brands and nearly any GPU. Seems like NVIDIA and AMD are letting DX12 handle that and decided to get lazy.
Posted on Reply
#4
RejZoR
ZoneDymo: Well, wasn't it sort of the plan that DX12 etc. could simply use multiple GPUs inherently?
Even DX12 has a bunch of pairing modes, and I think they don't want either. Infinity Fabric is what they are pursuing on GPUs as well. It works really well with CPUs, so naturally they want the same on GPUs. Quite frankly, I couldn't agree more with that. It has to be a hardware solution, because software ones have proved time and time again that they are rubbish.
Posted on Reply
#5
Chaitanya
Toothless: From both brands and nearly any GPU. Seems like NVIDIA and AMD are letting DX12 handle that and decided to get lazy.
Basically putting trust in the hands of game developers, who tend to be lazier than the driver teams at either of those two GPU makers. We have seen far too many badly optimised PC ports of console games anyway due to lazy developers.
Posted on Reply
#6
evernessince
RejZoR: Even DX12 has a bunch of pairing modes, and I think they don't want either. Infinity Fabric is what they are pursuing on GPUs as well. It works really well with CPUs, so naturally they want the same on GPUs. Quite frankly, I couldn't agree more with that. It has to be a hardware solution, because software ones have proved time and time again that they are rubbish.
Multi-GPU has always been a problem because of the latency and the fact that you have a CPU as a middleman.

If AMD is able to bring its Infinity Fabric to GPUs, it will completely change the market. The cost of making high-end GPUs would go way down, and it would be incredibly easy to make SKUs to fit every segment of the market. The cost advantage for AMD is simple, and you need only look at the CPU market right now for a clue. If AMD can make a profit on the 1700 at $320, they most certainly are making a profit on the 1950X at $1,000, which is essentially two Ryzen 1700 CPUs together. Unlike Intel, AMD doesn't have a massive die with poor yields; they have two smaller dies with near-perfect yields. There are so many advantages to a modular CPU approach, and I think AMD may only be scratching the surface of what we will see in the future.
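A rough sketch of that yield argument, using a simple Poisson yield model; the defect density and die sizes below are assumed, illustrative numbers rather than AMD's or Intel's actual figures:

```cpp
// Rough illustration of why two small dies beat one big die on yield, using a
// simple Poisson defect model: yield = exp(-defect_density * die_area).
// The defect density and die areas below are assumptions for illustration only.
#include <cmath>
#include <cstdio>

int main()
{
    const double defects_per_mm2 = 0.001;   // assumed: 0.1 defects per cm^2
    const double small_die_mm2   = 200.0;   // assumed ~Zeppelin-class die
    const double big_die_mm2     = 400.0;   // assumed monolithic equivalent

    double yield_small = std::exp(-defects_per_mm2 * small_die_mm2); // ~0.82
    double yield_big   = std::exp(-defects_per_mm2 * big_die_mm2);   // ~0.67

    // Two small dies packaged together: bad dies are discarded before
    // packaging, so the usable-silicon fraction tracks the per-die yield
    // (~0.82) instead of the monolithic yield (~0.67).
    std::printf("small die yield: %.2f, big die yield: %.2f\n",
                yield_small, yield_big);
    return 0;
}
```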
Posted on Reply
#7
TheGuruStud
They can barely fix engine/game problems with drivers (which is NOT their job). There's no point extending that job to SLI/xfire too, and the devs are only getting worse and worse.
Posted on Reply
#8
The Von Matrices
I think this is a good choice. My experiences with Crossfire have been frustrating at best. I'd rather they stop supporting the feature and reallocate their driver team elsewhere than continue advertising a feature that doesn't live up to its promise because they never invest enough development resources in it.

I now wonder how they are going to promote the ThreadRipper platform if Crossfire is deprecated. Yeah, 60 PCIe lanes is great, but how many do you need when you're limited to one GPU?
Posted on Reply
#9
Bytales
Ahhh crap, does this mean I won't be seeing a dual Vega graphics card? That was kinda what I was looking for!
Posted on Reply
#10
TheGuruStud
The Von Matrices: I think this is a good choice. My experiences with Crossfire have been frustrating at best. I'd rather they stop supporting the feature and reallocate their driver team elsewhere than continue advertising a feature that doesn't live up to its promise because they never invest enough development resources in it.

I now wonder how they are going to promote the ThreadRipper platform if Crossfire is deprecated. Yeah, 60 PCIe lanes is great, but how many do you need when you're limited to one GPU?
Direct the blame to devs. They're the ones not implementing it or riding the Nvidia gravy train. Look at games that are properly supported. Scaling is excellent.

You think AMD/Nvidia can't do dual GPUs just fine? They've been doing it for many years. I never had issues with 8800 GT SLI or 4890 xfire (hell, 6950s were great, too). It really started going to shit when I had 7950s. By the time the 290X rolled around, I'd had enough of worthless devs, so single card I went.
Posted on Reply
#11
bug
Well, after pushing the technology for over 10 years and not reaching any significant market penetration, what would you expect? That didn't stop people from crapping all over Nvidia when they dropped SLI from Pascal cards though.
Posted on Reply
#12
alwayssts
FordGT90Concept: It was a driver solution (like SLI). AMD wants a hardware solution (through Infinity Fabric).
That's kind of been my sentiment on the sitch as well.

I don't know if a single HBM2 stack makes sense for a ~Polaris-level chip on 7 nm (as opposed to simply 128-bit GDDR6), scaling it up that way rather than shrinking Vega (and then doubling it), but I most certainly expect something along those lines. I like the idea of 4x, because one chip (in essence a ~75 W card, give or take) could more or less simply be bolted onto an APU (for example a 65 W Zen 2). On the other hand, that becomes a questionable use of silicon at some point, given that each chip has to carry blocks (like UVD, etc.) that would be redundant (unless those also scaled, which would be kind of weird) and hence less efficient than a somewhat larger chip.

I don't have the time to dig up the Polaris launch event where Raja *pretty much confirmed* this was happening, or Hynix's HBM slides, which have shown two AMD-looking dummy chips with their own stacks of HBM on one package since before Fiji's release, but at the former he did speak of MCM as something that was coming, even though (at the time, now almost 1.5 years ago) they weren't quite there yet. Given that 'scalability' is pretty much the only tease of Navi we've had, it's not difficult to put 1+1 together... as it were. Clearly this has been in the works for a looong time.

IIRC Charlie even spoke of it eons ago... perhaps when he worked at The Inq or had just started S|A? I seem to recall an article from a long, long time ago by one of those OG insider cats that described this as the plan several years down the road, distinctly on the roadmap (I believe it was three generations out at that point... perhaps around the time of Kepler/Southern Islands and Eric Demers' departure, i.e. when AMD stopped talking about interesting aspects of their engineering for a good long while), but I could be wrong.

Either way...Sounds dope AF. :)

For those worried about it, even nVIDIA discussed the prospect openly as recently as 1 month ago:

research.nvidia.com/publication/2017-06_MCM-GPU%3A-Multi-Chip-Module-GPUs

(Note the last sentence of the summary in which nVIDIA confirms the 'old' way kind of sucks.)
Posted on Reply
#13
cowie
I think most should be happy about AMD officially saying they won't support it much; for the last 5 or 6 years it has been crap.

I blame all the "oh, DX12 is going to be so great, take me, W10" hype. Still nothing that made games look better, just run better on older hardware.
So with all that trust placed in BS from MS/game devs, we have lost more in gaming than we gained.
Posted on Reply
#14
alwayssts
cowie: I think most should be happy about AMD officially saying they won't support it much; for the last 5 or 6 years it has been crap.

I blame all the "oh, DX12 is going to be so great, take me, W10" hype. Still nothing that made games look better, just run better on older hardware.
So with all that trust placed in BS from MS/game devs, we have lost more in gaming than we gained.
I think a large part of it is the divide between AMD using general-purpose units (which can often exploit DX12) and NVIDIA sticking to more efficient 'core' shader ratios plus fixed-function SFUs (similar to the 5+1 or 4+1 arrangements in old AMD architectures, or the dual-issue MADD + MUL in early DX10 parts), which DX11 largely satiates, in some cases more efficiently when games are built toward them. That's just my opinion (largely backed up by Polaris' performance vs. GP106), but I imagine it would be both of their arguments anyway. AMD would blame NVIDIA for being slow to innovate and catch up to new techniques (which in the meantime penalizes AMD's designs for building toward them), which isn't new or wrong... while NVIDIA would claim efficiency (given that an SFU is tiny and one of AMD's CUs can only do 16 SFU ops, IIRC) as the reason, which also isn't wrong.

I often wonder if that will be part of NVIDIA's transition to a smaller process or an 'innovative new arch' at some point: instead of undershooting the ideal shader/ROP ratio and making up for it with clock speed, biting the bullet and going over the ideal compute/ROP ratio (somewhere above 224 but below 256 per 4 ROPs), perhaps even sacrificing SFUs for more potential compute. While it may be necessary at some point, I bet they would hate to vindicate what has essentially been AMD's design for the last several years, as well as lose a competitive advantage that they *know* devs will program towards because of NVIDIA's market share.

I'm sure there are other reasons, but that's one that's always been stuck in my craw for whatever reason.
Posted on Reply
#15
Solidstate89
That really seals the deal for me on whether to ever consider AMD GPUs. NVIDIA has scaled back SLI development, but they're still providing a good amount of support for two-card SLI setups (which, really, was the only configuration that could ever remotely be considered worth it). Not a driver update goes by where I don't see updated or new SLI profiles.

It's really sad to see AMD taking this position given all the high-resolution, high-refresh-rate monitors we're getting these days; even a flagship GPU isn't enough to run a modern AAA game at 4K or at high refresh rates without losing frames and dropping below optimal FPS.

They were already focusing only on two-card CrossFire setups, so scaling back even further from that level of support is disconcerting.
Posted on Reply
#16
xkm1948
So what's gonna populate all those excess PCIe slots then? Also, doesn't this make the extra PCIe lanes obsolete? Back to the AGP days?
Posted on Reply
#17
neatfeatguy
I can't comment on Crossfire, but I never really had any issues with SLI going all the way back to the 7xxx series:
7600 GT
8800 GTS 640
8800 GTS 512 (Step-up Program from EVGA, traded in the 640MB models)
GTX 280
GTX 570
GTX 980 Ti (though I've removed one card and put it in another build; one handles all my gaming just fine, even at 5760x1080)

I can see that with current hardware and GPU advances, a high-end current-gen card is very powerful. Most people don't game at 4K with maxed/ultra settings and expect to pull 60 fps; a high-end GPU at 2K is more than enough, at least it would be for me. It makes sense to scale back SLI/CrossFire support, considering it can be handled at the development level rather than the driver level under DX12. But now that AMD and NVIDIA have made it common knowledge that SLI/CrossFire aren't supported as much, devs won't feel they need to support multi-GPU in their games and won't put forth the time/money/effort to code for it.
Posted on Reply
#18
Th3pwn3r
TheGuruStud: They can barely fix engine/game problems with drivers (which is NOT their job). There's no point extending that job to SLI/xfire too, and the devs are only getting worse and worse.
Yep, in my opinion SLI and Crossfire have been way too buggy to run right from the beginning.
Posted on Reply
#19
mandelore
Pretty disappointing for me anyway. I've been waiting on a viable upgrade from my 295X2 and 290X tri-fire setup, hell, even just a replacement for the 295X2. Vega was my hope, and it just looks like a very, very long wait for a disappointment. I had been hoping for a dual-GPU Vega.
Posted on Reply
#20
TheinsanegamerN
So AMD pushes their new HEDT chip with tons of PCIe lanes while simultaneously saying they will not support multiple GPUs going forward?

This from the same company that would not stop going on about how two 480s could beat a 1080?

Between this and Vega's lackluster showing, it just seems to confirm that AMD doesn't care much about GPUs anymore. We'll see how well it works out for them. So far, relying on devs for DX12 optimization and multi-GPU has been a trainwreck. This is also going to put a dent in high-refresh-rate and high-res monitors. Multi-GPU was the easiest way to get a 1440p144 monitor working at full speed; 4K144 will be a pipe dream for years if we have to rely on a single GPU.
Posted on Reply
#21
Vayra86
TheinsanegamerN: So AMD pushes their new HEDT chip with tons of PCIe lanes while simultaneously saying they will not support multiple GPUs going forward?

This from the same company that would not stop going on about how two 480s could beat a 1080?

Between this and Vega's lackluster showing, it just seems to confirm that AMD doesn't care much about GPUs anymore. We'll see how well it works out for them. So far, relying on devs for DX12 optimization and multi-GPU has been a trainwreck. This is also going to put a dent in high-refresh-rate and high-res monitors. Multi-GPU was the easiest way to get a 1440p144 monitor working at full speed; 4K144 will be a pipe dream for years if we have to rely on a single GPU.
- HEDT for gaming was never sensible, so I'm not sure what market AMD is missing there. And I applaud them for confirming that with this move.
- Multi-GPU has always been troublesome and, above all, costly. Why support something that is only a niche for a small part of the market when you have only about 20% market share left?
- DX12 multi-GPU isn't taking off because it's now up to developers; in other words, it's up to shareholders, so it's never going to happen.
- The dual-480 story... well, let's just forget that quickly, and let's also forget about AMD and marketing; they'll never get married.

No, honestly, let them focus on getting Navi up to scratch ASAP so we can actually move ahead, instead of on worthless crap that never really worked as well as a single card and never delivered more than a 10% perf/dollar win in the short term. I'll happily take my multi-GPU as a single board and let them solve scaling and compatibility at the hardware level by just gluing the stuff together.

4K144 is a pipe dream anyway because there isn't a single interface yet with sufficient bandwidth (I believe only DP 1.4 can do this?), and above all, high-refresh gaming with multiple cards is a latency fiesta, so I really don't see the advantage here. You gain FPS but you add latency and frame-time variance issues by the truckload.
Posted on Reply
#22
TheinsanegamerN
Vayra86:
- HEDT for gaming was never sensible, so I'm not sure what market AMD is missing there. And I applaud them for confirming that with this move.
- Multi-GPU has always been troublesome and, above all, costly. Why support something that is only a niche for a small part of the market when you have only about 20% market share left?
- DX12 multi-GPU isn't taking off because it's now up to developers; in other words, it's up to shareholders, so it's never going to happen.
- The dual-480 story... well, let's just forget that quickly, and let's also forget about AMD and marketing; they'll never get married.

No, honestly, let them focus on getting Navi up to scratch ASAP so we can actually move ahead, instead of on worthless crap that never really worked as well as a single card and never delivered more than a 10% perf/dollar win in the short term. I'll happily take my multi-GPU as a single board and let them solve scaling and compatibility at the hardware level by just gluing the stuff together.
Given they just spent three years dressing up Fiji and somehow managed not to improve over Polaris in perf/watt, I somehow doubt Navi will be very good.

When was the last time AMD delivered on the GPU front, the 290X? Sure, they can glue, but if the architecture falls further and further behind, they will only end up delaying the inevitable. AMD needs to get a modern GPU architecture out the gate at some point. The glue in Ryzen works because, per core, Ryzen is much closer to Intel than any Construction core. If Ryzen were glued-together Bulldozer cores, it would still suck.

If Navi is just glued-together Vega cores, it will suck. They need a proper base to build off of.
4K144 is a pipe dream anyway because there isn't a single interface yet with sufficient bandwidth (I believe only DP 1.4 can do this?)
Bit of a contradiction there, isn't there? Is there no single interface, or is there a single interface?
Posted on Reply
#23
Vayra86
TheinsanegamerN: Given they just spent three years dressing up Fiji and somehow managed not to improve over Polaris in perf/watt, I somehow doubt Navi will be very good.

When was the last time AMD delivered on the GPU front, the 290X? Sure, they can glue, but if the architecture falls further and further behind, they will only end up delaying the inevitable. AMD needs to get a modern GPU architecture out the gate at some point. The glue in Ryzen works because, per core, Ryzen is much closer to Intel than any Construction core. If Ryzen were glued-together Bulldozer cores, it would still suck.

If Navi is just glued-together Vega cores, it will suck. They need a proper base to build off of.

Bit of a contradiction there, isn't there? Is there no single interface, or is there a single interface?
Nah, I just wasn't too sure whether DP 1.4 was already implemented on both cards and monitors, because you do need both, re: the HDMI 2.0 issue.
Posted on Reply
#24
Prince Valiant
TheGuruStud: Direct the blame to devs. They're the ones not implementing it or riding the Nvidia gravy train. Look at games that are properly supported. Scaling is excellent.

You think AMD/Nvidia can't do dual GPUs just fine? They've been doing it for many years. I never had issues with 8800 GT SLI or 4890 xfire (hell, 6950s were great, too). It really started going to shit when I had 7950s. By the time the 290X rolled around, I'd had enough of worthless devs, so single card I went.
But, but, but, but, but, DX12 is going to allow metal access and the devs are going to give us magic and, and...!

Oh right, it didn't pan out that way :eek:.
Vayra86: Nah, I just wasn't too sure whether DP 1.4 was already implemented on both cards and monitors, because you do need both, re: the HDMI 2.0 issue.
DP 1.4 can, but it requires compression.
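A back-of-the-envelope check of why, assuming 8-bit RGB and counting active pixels only (blanking overhead would push the requirement a few percent higher):

```cpp
// Rough check of why 3840x2160 @ 144 Hz needs DSC on DP 1.4.
// Numbers below ignore blanking intervals, so the real requirement is higher.
#include <cstdio>

int main()
{
    const double width = 3840, height = 2160, refresh_hz = 144;
    const double bits_per_pixel = 24;            // 8-bit RGB, no HDR

    // Active-video bandwidth only.
    double needed_gbps = width * height * refresh_hz * bits_per_pixel / 1e9;

    // DP 1.4 HBR3: 4 lanes x 8.1 Gbit/s, minus 8b/10b encoding overhead.
    double dp14_payload_gbps = 4 * 8.1 * 8.0 / 10.0;

    std::printf("needed: %.1f Gbit/s, DP 1.4 payload: %.1f Gbit/s\n",
                needed_gbps, dp14_payload_gbps);   // ~28.7 vs ~25.9
    return 0;
}
```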
Posted on Reply
#25
THU31
Multi-GPU tech should have died a long time ago. One of the dumbest ideas in the industry, causes nothing but problems for consumers and developers. Just awful.
Posted on Reply