# Why can't Crossfire or SLI use the sum of VRAM from all GPUs?



## MaxAwesome (Jun 6, 2012)

Is there a technical or technological impediment to this?

Just curious, as i'm currently on the market for a used HD 6870 to crossfire, and I'm a total beginner when it comes to multi-GPU.


----------



## radrok (Jun 6, 2012)

SLI and CrossfireX usually use AFR (alternate frame rendering), which makes your GPUs draw frames in turns: say one draws all the odd frames and the other all the even frames.

To be able to do that, each GPU in question must have a copy of everything readily available in its own VRAM. Letting a GPU access VRAM that does not directly belong to it would be very hard to implement software-wise and ESPECIALLY bandwidth-wise: you would need a very, very fast link to provide textures that only exist in another GPU's VRAM on that system.
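The AFR point above can be sketched in a few lines of illustrative Python (a toy model, not real driver code; the function names are made up):

```python
# Toy sketch of alternate frame rendering (AFR).
# Each GPU must hold its own complete copy of the scene assets,
# so N GPUs with M GB each still expose only M GB of usable VRAM.

def afr_schedule(num_frames, num_gpus):
    """Assign frame i to GPU i % num_gpus (odd/even split for 2 GPUs)."""
    return {i: i % num_gpus for i in range(num_frames)}

def usable_vram(vram_per_gpu_gb, num_gpus):
    """Mirrored assets: usable VRAM equals one card's, not the sum."""
    return vram_per_gpu_gb  # not vram_per_gpu_gb * num_gpus

schedule = afr_schedule(6, 2)
# GPU 0 draws frames 0, 2, 4; GPU 1 draws frames 1, 3, 5.
```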


----------



## MaxAwesome (Jun 6, 2012)

Makes perfect sense.

Thanks!


----------



## Dent1 (Jun 6, 2012)

MaxAwesome said:


> Is there a technical or technological impediment to this?
> 
> Just curious, as I'm currently on the market for a used HD 6870 to crossfire, and I'm a total beginner when it comes to multi-GPU.



Because Crossfire / SLI isn't an efficient way of using two cards simultaneously.

From what I understand, the two cards copy or mirror the VRAM so they can synchronise with one another.

Here's a simplified explanation. If a calculation is being rendered on GPU2, the result needs to be passed back to GPU1 at the same video memory location. If both video cards have independent memory management, there will be situations where data gets accidentally overwritten during the exchange.

A solution could be for GPU2 to store data in main memory, so GPU1 can then fetch it from there. GPU2 would need to know the exact location to extract the data from, but this way of buffering is inefficient and would cause delays in fetching, potentially reducing the frame rate and thus defeating the purpose of CF or SLI.

This is unlike physical dual-GPU cards, i.e. the 4870X2, which have two GPUs on one PCB; those are more efficient in the respect that there would be schedulers and reserve buffer memory to store and synchronise data so the data doesn't corrupt or overwrite one another.


----------



## Aquinus (Jun 6, 2012)

radrok said:


> SLI and CrossfireX usually use AFR (alternate frame rendering), which makes your GPUs draw frames in turns: say one draws all the odd frames and the other all the even frames.
> 
> To be able to do that, each GPU in question must have a copy of everything readily available in its own VRAM. Letting a GPU access VRAM that does not directly belong to it would be very hard to implement software-wise and ESPECIALLY bandwidth-wise: you would need a very, very fast link to provide textures that only exist in another GPU's VRAM on that system.





Dent1 said:


> Because Crossfire / SLI isn't an efficient way of using two cards simultaneously.
> 
> From what I understand, the two cards copy or mirror the vram so they can synchronise with one another.
> 
> ...



Did I hear an echo? 

Also, the X2 chips don't work that way: memory is duplicated for the second GPU, and they do not communicate with each other as you described. In fact, X2 cards tend to use a PCI-E bridge and physically aren't much different from two cards linked with a CFX or SLI bridge. As far as the computer is concerned, they are two different GPUs, each with its own dedicated set of memory, and that's essentially what they are. I call shenanigans.


----------



## Dent1 (Jun 6, 2012)

I didn't know that with the X2s, I always thought they were more efficient in that respect. But we are all in consensus about the CF/SLI issues I guess.


----------



## Black Panther (Jun 6, 2012)

Dent1 said:


> I didn't know that with the X2s, I always thought they were more efficient in that respect. But we are all in consensus about the CF/SLI issues I guess.



From my experience, a single-GPU card is always more efficient than a dual-GPU one (or 2 cards in CF or SLI). I tried it with both AMD and Nvidia to reach this conclusion. Driver support is also always better for single cards.

It's always better to spend a couple of bucks more for a great single-GPU card than to go SLI or CF.


----------



## Aquinus (Jun 7, 2012)

Black Panther said:


> It's always better to spend a couple of bucks more for a great single-GPU card than to go SLI or CF.



I don't know about that, I already had one 6870 and adding a second was quite the boost for the price. Granted I don't mind waiting a little bit for stability issues to get smoothed out and I don't tend to play the newest of games the moment they come out.


----------



## Dent1 (Jun 7, 2012)

Black Panther said:


> From my experience, a single-GPU card is always more efficient than a dual-GPU one (or 2 cards in CF or SLI). I tried it with both AMD and Nvidia to reach this conclusion. Driver support is also always better for single cards.
> 
> It's always better to spend a couple of bucks more for a great single-GPU card than to go SLI or CF.



I meant that dual GPU on a single PCB is more efficient than CF / SLI.

But yes, I agree a powerful single GPU is more efficient than two average ones in CF, mostly due to software limitations and drivers.




Aquinus said:


> I don't know about that, I already had one 6870 and adding a second was quite the boost for the price. Granted I don't mind waiting a little bit for stability issues to get smoothed out and I don't tend to play the newest of games the moment they come out.



I think Black Panther meant to spend more on a premium GPU the first time round, so you don't have to CF later (unless you choose to). When it comes to prolonging the life of an existing GPU, CF/SLI is usually the way forward.


----------



## radrok (Jun 7, 2012)

Black Panther said:


> From my experience, a single-GPU card is always more efficient than a dual-GPU one (or 2 cards in CF or SLI). I tried it with both AMD and Nvidia to reach this conclusion. Driver support is also always better for single cards.
> 
> It's always better to spend a couple of bucks more for a great single-GPU card than to go SLI or CF.



I agree with your statement. Adding more than one GPU to a system is going to hit the overall "fluidity" of the gaming experience: I can tell that games are more responsive when not on SLI/CFX, and with more than one GPU you can perceive stuttering at framerates that shouldn't let you do so.

I'm currently on a multi GPU system and my next purchase will probably be the fastest GPU around so I can call it a day with driver issues etc.

To anyone: Please don't say it is AMD/ATI related, I've had a 480 SLI and that wasn't any better.


----------



## TheoneandonlyMrK (Jun 7, 2012)

AFAIK, AFR does inadvertently utilise the graphics memory separately, in that the frame buffers should be slightly out of sync, meaning fractionally different data in flow on the same part.


----------



## qubit (Jun 7, 2012)

MaxAwesome said:


> Is there a technical or technological impediment to this?
> 
> Just curious, as I'm currently on the market for a used HD 6870 to crossfire, and I'm a total beginner when it comes to multi-GPU.



Yup, it would be cool if the GTX 690 could use its 4GB RAM as a full 4GB card instead of a 2GB one, duplicated. Unfortunately, it's not the case and I'd like to give a slightly alternative explanation for this.

The root problem is that the GPUs aren't designed to gang together directly into one "super GPU". If they were, they'd have a wide, full-bandwidth interface to connect directly to each other and would physically sit next to each other on the circuit board. In such a scenario, they would become one large GPU with literally double the rendering power and be able to use the full amount of RAM, rather than halving it like we see now. Benchmarks would then show a full 2x improvement in rendering speed in all situations (CPU bottlenecks notwithstanding).

I'm sure that AMD and NVIDIA have built prototypes of something like this, but for some reason haven't brought them to market. Not sure why, really, as dual GPUs would then work awesomely well, offering doubled rendering power without any of the inherent drawbacks of current designs.

Of course, GPUs physically sitting in different cards have no chance of doing this and maybe that's why they haven't done this.


----------



## MaxAwesome (Jun 7, 2012)

qubit said:


> Yup, it would be cool if the GTX 690 could use its 4GB RAM as a full 4GB card instead of a 2GB one, duplicated. Unfortunately, it's not the case and I'd like to give a slightly alternative explanation for this.
> 
> The root problem is that the GPUs aren't designed to gang together directly into one "super GPU". If they were, they'd have a wide, full-bandwidth interface to connect directly to each other and would physically sit next to each other on the circuit board. In such a scenario, they would become one large GPU with literally double the rendering power and be able to use the full amount of RAM, rather than halving it like we see now. Benchmarks would then show a full 2x improvement in rendering speed in all situations (CPU bottlenecks notwithstanding).
> 
> ...



From what I've seen, GPUs such as the HD 6000 series and GTX 500 series scale almost 100% with 2 cards.

Is this what you mean?

Should I go forward with the HD 6870 crossfire (for the money, I get AMAZING performance) or just sell and go for a faster GPU? I would only have money for an HD 7870... which is much slower than HD 6870 CFX.


----------



## Deleted member 3 (Jun 7, 2012)

It's a lot simpler, actually. GPU A can't access the VRAM on card B fast enough. It would simply cripple memory performance and thus destroy performance.
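A rough back-of-envelope in Python makes that gap concrete (approximate, era-appropriate figures for an HD 6870's GDDR5 and a PCIe 2.0 x16 link; the numbers are illustrative and not from this thread):

```python
# Back-of-envelope: local GDDR5 bandwidth on an HD 6870 vs. the
# PCIe 2.0 x16 link a GPU would have to cross to reach the other
# card's VRAM. Figures are approximate.

LOCAL_GDDR5_GBPS = 4.2 * 256 / 8   # 4.2 Gbps/pin * 256-bit bus ≈ 134 GB/s
PCIE2_X16_GBPS = 8.0               # ~8 GB/s per direction

slowdown = LOCAL_GDDR5_GBPS / PCIE2_X16_GBPS
# Remote VRAM would be roughly 17x slower to reach than local VRAM,
# before even counting latency -- hence mirroring instead of pooling.
```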


----------



## Mussels (Jun 7, 2012)

DanTheBanjoman said:


> It's a lot simpler, actually. GPU A can't access the VRAM on card B fast enough. It would simply cripple memory performance and thus destroy performance.



^ this.


As it stands, they just release cards with more VRAM than they need, so that there's still leftover room for Crossfire and SLI to use.


Short of a new version of DirectX or OpenGL (and corresponding hardware) designed from the ground up to scale better with multiple GPUs, it won't happen.


----------



## babash*t (Jun 7, 2012)

Why can't you accept that it wont


----------



## radrok (Jun 7, 2012)

babash*t said:


> Why can't you accept that it wont



He asked a legit question, and to be honest your answer is a bit rude and adds nothing to the thread. Would you like it if people answered your questions like that? Please don't take this as a personal attack; I just wanted to let you know it comes across as a bit rude.


----------



## Black Panther (Jun 7, 2012)

babash*t said:


> Why can't you accept that it wont



The OP didn't say he doesn't accept it. If you read his post, he specified that he was "just curious" and hence only wanted to expand his knowledge.


----------



## Dent1 (Jun 7, 2012)

MaxAwesome said:


> Should I go forward with the HD 6870 crossfire (for the money, I get AMAZING performance) or just sell and go for a faster GPU? I would only have money for an HD 7870... which is much slower than HD 6870 CFX.



Yes.

Pick up a second 6850 or 6870 used on eBay. They go for $150 used.

The 7850 would get smoked by 6870 CF on a bad day.

The 7870 would be as fast or almost as fast as 6870 CF, but ultimately it's still slower and it will cost you $350.

So you've got to ask yourself: spend $150, or $350 for the same or slightly slower performance?

Even if you can sell your current 6870 for $150, you'd still have to fork out an additional $200 minimum from your own pocket for the same or slower performance.


PS: I swear we answered your question 2 months ago. It appears you don't respect our recommendations.
http://www.techpowerup.com/forums/showthread.php?t=162491


----------



## Dent1 (Jun 7, 2012)

Black Panther said:


> The OP didn't say he doesn't accept it. If you read his post, he specified that he was "just curious" and hence only wanted to expand his knowledge.



Believe me the OP doesn't accept it because he has created numerous threads asking about upgrading his 6870 to Crossfire or going HD 78x0.

This thread is just a cover story so he can justify not going with the advice he was previously given.

http://www.techpowerup.com/forums/showthread.php?t=162491

http://www.techpowerup.com/forums/showthread.php?t=165208


----------



## Aquinus (Jun 7, 2012)

The reviews I've seen put 6870 CFX closer to the performance of the 7950 than the 7870. Also keep in mind that the 6870 will give you a 10-13% overclock on air without too much of a problem with a slightly altered fan profile; both of mine are perfectly happy running at 1 GHz core. I've run into issues here and there with CFX, but nothing too major. If you're one of those people who likes playing new games the day they come out, CFX might be a little glitchy, but all in all it's worth the price.


----------



## MaxAwesome (Jun 7, 2012)

Dent1 said:


> Believe me the OP doesn't accept it because he has created numerous threads asking about upgrading his 6870 to Crossfire or going HD 78x0.
> 
> This thread is just a cover story so he can justify not going with the advice he was previously given.
> 
> ...



What? 

Why would I need a cover story? It's true that I have been considering going CFX or getting a new single GPU (in this case the HD 7870).

*I still haven't made up my mind*, but NOW I am more inclined towards HD 6870 CFX than a single HD 7870.

And as such *I HAVE QUESTIONS REGARDING CROSSFIRE*.

What is your problem anyway?


----------



## Aquinus (Jun 7, 2012)

MaxAwesome said:


> And as such I HAVE QUESTIONS REGARDING CROSSFIRE.
> 
> What is your problem anyway?



Then use the same thread and stop asking the same question. I think people have told you the same thing three times now. We can't buy the card for you; it's up to you, and we've given you all the information we can. I'm not sure TPU is the place for you to justify the upgrade to yourself, but that's just my opinion. If you're really that hesitant about it, just don't upgrade and wait for the next generation of GPUs.


----------



## cadaveca (Jun 7, 2012)

I'm a sucker for new tech, so I have run Crossfire and/or SLI since both were launched. Be that as it may, I still tend to recommend that users buy the best single GPU they can if running a single monitor. If only I could be happy myself with just one monitor... but I cannot, so I'll struggle with multi-GPU issues anyway.


That said, memory across GPUs can be shared, but the cost of doing so is quite high. This cost can be "covered" by other operations, but with graphics and games, because the workload is so varied, covering up the latency becomes something that just isn't worth doing.

Video memory on current cards is more than adequate unless you're doing some GPGPU computing. Sharing memory would pose no benefit at all. It's technically possible, but the gains offered do not justify the driver work needed.


----------



## Protagonist (Jun 7, 2012)

The information in this thread has more realistic facts than flaws, which is why I prefer single GPUs and why I hope never to be in the situation of having to CF/SLI my GPUs. I'd say save up money for a flagship single GPU from whichever camp, be it Radeon or GeForce. I'm personally on my final touches of getting a GTX 680 or HD 7970; only time will tell. Cash is not the limiting factor for me; rather, my problem is getting them locally here in Kenya. Getting one from outside the country is not easy due to import duty, so I may just have to wait for someone I know who can bring me one, with cash-on-the-spot payment.


----------



## MaxAwesome (Jun 7, 2012)

Well, it seems some of you think I need some sort of validation or justification for my purchases.

I'm sorry it seems that way, but I simply thought a new thread would be a better choice for a technicality about a certain setup.

Apparently, being curious about something is a "cover story" to justify my actions.


----------



## Wrigleyvillain (Jun 7, 2012)

I'm sure the GPU companies, including the AIB folks, don't mind at all that this is the case, as it would surely hurt high-end single-card sales were it otherwise. If MGPU also "pooled" the VRAM, it would become a significantly more attractive option, especially with regard to pairing two or more lower-end GPUs together. Hell, I would have run my 6850s for much longer if they could have provided me 2GB of video memory together.


----------



## Kreij (Jun 7, 2012)

This thread is fine, MaxA. TPU is all about asking questions and getting answers.
No need to justify anything.


----------



## Wrigleyvillain (Jun 7, 2012)

I think it's more than fine. I have never run into better or more enlightening technical explanations of why this is actually the case, and I finally really understand why.


----------



## Aquinus (Jun 7, 2012)

MaxAwesome said:


> Well, it seems some of you think I need some sort of validation or justification for my purchases.
> 
> I'm sorry it seems that way, but I simply thought a new thread would be a better choice for a technicality about a certain setup.
> 
> Apparently, being curious about something is a "cover story" to justify my actions.



Well, it has always been about 6870 CFX vs a single GPU like the 7870, and questions to do with that can all stay in one place. The point is, we've given you advice about this and you keep asking very similar questions. So I apologize for assuming that you're trying to justify the purchase to yourself, but that is how it feels to me. I'm not saying this to be an ass or to single you out; I'm just saying that no one is really adding anything new, and that when push comes to shove, it's really your choice.

So ask yourself one thing.

Would you be willing to sacrifice some image quality (micro-stutter) for extra performance, or do you want a single GPU that will be faster than your 6870 but slower than two 6870s, without the micro-stutter? The dual-card setup also uses more power, but that aside, this is really the only question you should be asking yourself, because when push comes to shove, you're the one who will be using it. I don't mind the micro-stutter, and in a lot of cases it isn't noticeable, but some people go nuts and simply can't stand it.


----------



## Dent1 (Jun 7, 2012)

Edit:

Nevermind. There is plenty of information in this thread already to make a choice. No more can be said.

At the end of the day you have to make a purchase. Technology won't wait: put off the choice another few months and something better will be out, or prices will change, along with our recommendations and ideologies.


----------



## Disparia (Jun 7, 2012)

qubit said:


> ...
> The root problem is that the GPUs aren't designed to gang together directly into one "super GPU". If they did, they'd have a wide, full bandwidth interface to directly connect to each other and would be physically sitting next to each other on the circuit board. In such a scenario, they would become one large GPU with literally double the rendering power and be able to use the full amount of RAM, rather than halving it like we see now. Benchmarks would then show a full 2x improvement in rendering speed under all situations (CPU bottlenecks not withstanding).
> 
> I'm sure that AMD and NVIDIA have built prototypes of something like this, but for some reason haven't made them commercially. Not sure why really, as dual GPUs would then work awesomely well, offering doubled rendering power without any of the inherent drawbacks of current designs.
> ...



My guess is that it's because each core design is already a "super GPU" in a way. Instead of designing to scale up, they design to cut down. Among many examples: the GTX 580 (GF110). Kill a memory controller and an SM unit and you have a GTX 570, and later on I believe they cut it down a little more to make a version of the GTX 560.


"But!" I hear someone in the audience exclaim, "Both nVidia and AMD know they're going to build dual-GPU cards for each series, why not design their top core with some sorta super-link!?"

Just doesn't seem practical to design a chip in this way when the overwhelming majority of sales will be of the single-GPU models.


"What about all those bodies the police dug up behind nVidia headquarters?"

No further questions, this interview is over!


----------



## deanchood (Dec 29, 2014)

My experience of Crossfire is that the heat created by the workload (games, in my case) is spread over twice the surface area. Decent spacing between the PCIe slots on the motherboard is a must, as is a well-ventilated case. This meant no GPU slowdown from overheating, which is a really noticeable problem in Crossfire, as both cards are synchronised and therefore governed by the slowest GPU.

I also hadn't considered the increased traffic over the PCIe lanes: I noticed a big performance increase in games when upgrading from an Intel 975X chipset board (which ran the two PCIe slots at 8 lanes each in Crossfire mode) to an Intel X58 board, which allows both slots to run with 16 lanes each. The new board also had larger spacing between slots for better cooling, as I mentioned.

I Crossfire two HD 7770 cards, each with 1 GB of dedicated (onboard) RAM, and this is effectively seen by the system as 1 GB, not 2 GB, because both cards still have to hold the same data as they are mirrored. The problem I have found is that running games at resolutions over 1024x768 means my system RAM gets used as extended video memory. My X58 chipset with 12 GB of triple-channel DDR3 can provide the bandwidth required for 60 fps at 2560x1440, BUT the video memory cached in system RAM takes up nearly twice as much as with a single card. I'm running COD Advanced Warfare at 2560x1440 @ 60 fps, smooth with only intermittent clunks, only to find that my 12 GB of system RAM is topping out and forcing pagefile (hard drive) access. This could be avoided if I had more dedicated memory on the graphics cards themselves, but those cards get pricier, so I am going to upgrade my system RAM to 16 or 24 GB. I'll re-post here and let you know how it goes.

I gauged the performance of my two HD 7770s in Crossfire at an average game FPS above the HD 7870 and closer to an HD 7950. My two cards' combined power use is less: roughly 80 W each, or 160 W total, whereas the 7950 uses 170 W. Nice.

The bottom line with Crossfire is that you will need a hunky system, good cooling and Crossfire driver support in order to run the cards without bottlenecking, but when you get it right they perform well.
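The arithmetic in the post above can be sanity-checked in a couple of lines of Python (the helper names are illustrative; the wattage and VRAM figures are the poster's approximations, not measured values):

```python
# Quick check of the mirrored-VRAM and power figures quoted above.

def effective_vram_gb(per_card_gb, num_cards):
    """Mirrored AFR: effective VRAM is one card's, not the sum."""
    return per_card_gb  # extra cards don't add usable capacity

def combined_power_w(per_card_w, num_cards):
    """Total board power does scale with the number of cards."""
    return per_card_w * num_cards

# Two 1 GB HD 7770s -> 1 GB effective VRAM, ~160 W combined,
# vs ~170 W for a single HD 7950.
```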


----------



## MT Alex (Dec 29, 2014)

935 days, not too shabby.  Also, you could get a much nicer used GPU, that would serve you much better, for the price of the RAM you want to buy.


----------



## qubit (Dec 29, 2014)

Jizzler said:


> My guess is that it's because each core design is already a "super GPU" in a way. Instead of designing to scale up, they design to cut down. Among many examples: the GTX 580 (GF110). Kill a memory controller and an SM unit and you have a GTX 570, and later on I believe they cut it down a little more to make a version of the GTX 560.
> 
> 
> "But!" I hear someone in the audience exclaim, "Both nVidia and AMD know they're going to build dual-GPU cards for each series, why not design their top core with some sorta super-link!?"
> ...


Yes, that's what I suspect too and I'll bet my bottom dollar they've thought about it and wanted to do it. Shame as the performance benefit would be significant.


----------



## newconroer (Dec 29, 2014)

qubit said:


> Yes, that's what I suspect too and I'll bet my bottom dollar they've thought about it and wanted to do it. Shame as the performance benefit would be significant.



http://devblogs.nvidia.com/parallelforall/nvlink-pascal-stacked-memory-feeding-appetite-big-data/

The solution is all in there.


----------



## Aquinus (Dec 29, 2014)

Thread necro much? Last post was on: Jun 7, 2012 at 1:39 PM


----------



## rtwjunkie (Dec 29, 2014)

This here wins the 2014 thread necro revival award!


----------



## qubit (Dec 30, 2014)

newconroer said:


> http://devblogs.nvidia.com/parallelforall/nvlink-pascal-stacked-memory-feeding-appetite-big-data/
> 
> The solution is all in there.


Nice one, sounds amazing.

They've gone much further than just ganging two GPUs sitting next to each other on one card, though. It looks like it will solve not only the dual-bank memory problem, but also access to system memory, which the GPU can actually reach even faster than the CPU can. I'd love to see what kind of enhancements this brings; it sounds like a big performance leap.

If this is implemented in consoles, then we'll see a significant performance leap there too.


----------

