# Multi-GPU + SMP Folding, SLI, Dummy Plugs



## TIGR (Feb 19, 2010)

I'm running Windows 7 Ultimate x64 on an MSI NF980-G65. Only have two 9800GX2s plugged in at the moment as I'm moving parts around. CPU is an Athlon II X4 620. I'm using nVidia 196.34 drivers.

First off, this forum topic says you can fold with SLI enabled and no dummy plugs. But when I try it, I get EUE errors up the wazoo, cards overclocked or not. I need to have SLI disabled, SLI bridge removed, and dummy plugs in place to fold error-free. My shortcut parameters look like this:

`"C:\FAH\GPU\Folding@home.exe" -forcegpu nvidia_g80 -gpu 4`

What's going on there?
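For context, a multi-GPU setup like this typically runs one client instance per GPU, each launched from its own directory with its own `-gpu` index. The directory names below are illustrative, not from the actual install:

```shell
REM One shortcut target per GPU client instance. Each instance needs
REM its own folder so the clients don't clobber each other's work files.
REM -gpu selects the device index; -forcegpu nvidia_g80 overrides
REM detection for G80-class cards (like the 9800GX2).
"C:\FAH\GPU1\Folding@home.exe" -forcegpu nvidia_g80 -gpu 0
"C:\FAH\GPU2\Folding@home.exe" -forcegpu nvidia_g80 -gpu 1
"C:\FAH\GPU3\Folding@home.exe" -forcegpu nvidia_g80 -gpu 2
"C:\FAH\GPU4\Folding@home.exe" -forcegpu nvidia_g80 -gpu 3
```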

Secondly, when I fire up SMP, my GPU folding goes to hell. In the screenshot below, the point where GPUs 1, 2, and 4 (especially 4) drop in usage is when I turned on the SMP client. First there's a little blip while SMP gets itself sorted, and then once it gets to work, GPU folding takes a hit:

*[screenshot: GPU usage graphs dropping when the SMP client starts]*

Should I have *Do NOT lock specific cores to CPU* checked in the GPU clients or not?

By the way, I'm using the "regular" SMP client, no VMWare or any of that, as I never could get it to function properly.

Sorry for what probably amounts to dumb questions.


----------



## BUCK NASTY (Feb 19, 2010)

Welcome to folding. You need to reconfigure the priorities of the clients. Run each client with the *-configonly* flag and set the *CPU client to idle* and the *GPU clients to low*. This will keep the CPU from overrunning the GPUs.
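As a sketch, the reconfiguration step looks like this. The paths and binary names below are assumptions for a typical layout, not from this thread:

```shell
REM Run each client once with -configonly: it walks through its setup
REM questions (including machine priority) and exits without folding.

REM CPU/SMP client: answer "idle" at the priority prompt
"C:\FAH\SMP\fah6.exe" -configonly

REM GPU clients: answer "low" at the priority prompt (repeat per instance)
"C:\FAH\GPU1\Folding@home.exe" -configonly
"C:\FAH\GPU2\Folding@home.exe" -configonly
```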


----------



## TIGR (Feb 19, 2010)

Ah, thank you BUCK. I had everything set to low. How much of a hit does SMP take from going to idle?


----------



## BUCK NASTY (Feb 19, 2010)

TIGR said:


> Ah, thank you BUCK. I had everything set to low. How much of a hit does SMP take from going to idle?


Feeding 4 Nvidia GPUs will cause a 25-40% decrease in CPU output. SMP set to idle will scavenge whatever CPU cycles the GPUs leave over. This is the best way to maximize PPD with your configuration. Good luck, and let us know how your production improves.


----------



## bogmali (Feb 19, 2010)

The multiple-card thing without dummy plugs works, but it's kinda buggy. I also have two GX2s, and I just leave my dummy plugs connected to avoid having to restart so often.


----------



## TIGR (Feb 19, 2010)

Thanks guys, currently running 25k PPD with the two GX2s and Athlon. Because I just use the regular SMP client I never get more than about 1500 PPD out of it. Not sure if I should be trying something else or just stick with what I've got. May toss a 9800GT in there tonight too but am working on getting a different rig up and running first.

Edit: stupid 548 pointers....


----------



## theonedub (Feb 25, 2010)

I thought I would bump this thread rather than make a new one for my related experience.

I have been noticing that with WCG going full bore, my folding GPU usage, especially on 1888 and 472 work units (units that cycle usage), has been very erratic. The Precision graphs looked like large mountains with wide valleys between them. So using *-configonly*, I set the core priority to *low* (up from idle). Now my GPU usage looks like this on a 472 with WCG going in the background:

*[screenshot: steady GPU usage on a 472 WU with WCG running]*

That's what it used to look like only with WCG suspended. I hope this picks my PPD and efficiency up, because my two 275s at the clocks in the screenshot have been struggling to produce 16k total, which I find highly lacking for 240-shader cards. I'm going to keep an eye on my Folding PPD for an increase, and on my WCG PPD to see if it is adversely affected.


----------



## TIGR (Feb 25, 2010)

theonedub, I'm seeing that on some GPUs as well, but PPD seems to be fine. Both systems are getting around 25-30k PPD even though one shows the erratic peaks and valleys you posted on all GPUs while the other shows solid ~100% usage for three GPUs and erratic usage for two GPUs.


----------



## theonedub (Feb 25, 2010)

TIGR said:


> theonedub, I'm seeing that on some GPUs as well, but PPD seems to be fine. Both systems are getting around 25-30k PPD even though one shows the erratic peaks and valleys you posted on all GPUs while the other shows solid ~100% usage for three GPUs and erratic usage for two GPUs.



Weird. Maybe I am expecting too much out of my 275s then, lol? I'm going to watch it over the next couple of days, and if the numbers suggest no difference, I will move it back to how it was. No point in putting them under (what looks to be) more stress if there is no benefit.

Thanks for the input!


----------

