# WCG GPU Client Discussion Thread



## BinaryMage (Nov 30, 2011)

As announced this past September, WCG is planning to release a GPU application, initially for the Help Conquer Cancer project.

Quote from forums:



> Hello WCGrid members,
> 
> The September 2011 update to the Help Conquer Cancer project has been posted. In this update, we announce work presented earlier this year at the High-Performance Computing Symposium in Montreal: the development of an OpenCL (GPU) implementation of HCC. This GPU version is currently running in our lab. We are pleased with its performance, and anticipate its eventual launch on the World Community Grid.
> 
> ...



Timing-wise (thank you, KieX), a beta period is slated to start sometime in January 2012. (If you want to participate, you'll need to opt in on the WCG website.)

If this goes through as planned, our whole crunching strategy will likely change. GPUs are far, far more efficient than CPUs in FLOPS (floating-point operations per second). For example, an overclocked i7 2600K, costing about $310, puts out about 14 GFLOPS. (reference) A video card at the same price point, the HD 6970, puts out 2700 GFLOPS - roughly 193 times what the i7 does. Now, GFLOPS do not directly translate into BOINC crunching power - but they are related. An i7 puts out maybe 5,000 PPD on an average BOINC project. The 6970 will do 250,000 PPD - a 50-fold increase. (source) Now, these GPU credit numbers are not from WCG, and the actual numbers from WCG GPU crunching will probably differ somewhat. But nonetheless, crunching with a video card instead of a CPU will likely be more efficient by well over an order of magnitude. The official list of supported cards is here, but any AMD card supporting OpenCL or nVidia card supporting CUDA should work.
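
To put those numbers side by side, here's a throwaway sketch of the arithmetic, using the post's own estimates (none of these figures come from WCG, and real PPD varies wildly by project):

```
# Rough cost-efficiency comparison, using the estimates quoted above.
# These are forum-sourced ballpark figures, not official WCG numbers.

def ratio(gpu: float, cpu: float) -> float:
    """How many times larger the GPU figure is than the CPU one."""
    return gpu / cpu

cpu_gflops, gpu_gflops = 14, 2700     # overclocked i7 2600K vs. HD 6970
cpu_ppd, gpu_ppd = 5_000, 250_000     # typical BOINC points per day

print(f"Raw FLOPS advantage: ~{ratio(gpu_gflops, cpu_gflops):.0f}x")  # ~193x
print(f"Credit (PPD) advantage: ~{ratio(gpu_ppd, cpu_ppd):.0f}x")     # ~50x
```

The gap between the ~193x FLOPS ratio and the ~50x credit ratio is exactly the "GFLOPS do not directly translate into BOINC crunching power" caveat above.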

Now, many of you are probably frustrated right now. The 2600K you just bought for crunching - you now think it was maybe not the best investment. I like to think of it like this: we crunch to help people. Not to win contests. Not to accumulate points. Your i7 isn't going to do any less work. It's not going to help people any less. But hopefully, using video cards in the future will allow us to help people more. If the GPU application is, say, 40 times more efficient than the CPU one (in tasks finished per day at a given price point), then switching to buying GPUs means the project can obtain hopefully useful data much, much faster than it could otherwise. More efficient ways to help people? Sounds good to me, and that's the bottom line.


----------



## (FIH) The Don (Nov 30, 2011)

this will be VERY interesting, may even make me invest in 2-3 more 5870s just for this, as they are dirt cheap now and will get cheaper


----------



## Bow (Nov 30, 2011)

Sweet put my 6950's to work


----------



## f22a4bandit (Nov 30, 2011)

This is awesome news! I wonder how points would scale in crossfire/SLI?


----------



## (FIH) The Don (Nov 30, 2011)

im thinking it will be like folding, where sli/cf has to be disabled


----------



## KieX (Nov 30, 2011)

Thanks for putting up the thread BinaryMage. Might wanna add the links from the Daily Numbers thread to your OP to keep it all in one place.

As far as information on start dates:





> At this time we are not prepared to give you a date for the start of GPU for HCC but we can tell you this:
> 
> 1. At this time, the plan is to start Beta testing before the end of January, 2012. This may change as conditions arise.
> 
> 2. Because of the large variety of processors, we will probably have a longer than average Beta test.


 Source

Looks like it's still some time away from full implementation for end users. Anyone willing to participate in the BETA will need to remember to opt in for it on the WCG website.

Now... the scoring system... that's going to be interesting. According to WCG, the current points are based on a 1 GFLOPS machine, running full time, producing 100 units of credit in 1 day (~14 BOINC points).
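
Under that model, expected credit scales linearly with sustained throughput. A minimal sketch of the conversion (the function name is mine; the constants are just the 100-credit-per-GFLOPS-day and ~14-BOINC-point figures described above):

```
# WCG's stated baseline: a machine sustaining 1 GFLOPS, running full
# time, earns 100 WCG credit per day (~14 BOINC points). Illustrative only.

def daily_credit(sustained_gflops: float) -> tuple[float, float]:
    """Return (WCG credit/day, approx. BOINC points/day)."""
    wcg = sustained_gflops * 100.0
    boinc = wcg * 14.0 / 100.0
    return wcg, boinc

print(daily_credit(1.0))   # (100.0, 14.0) -- the baseline machine
```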

That would certainly entail a massive PPD difference over anything a CPU can produce (GPU vs CPU GFLOPS). And I can't imagine they'll want to go the route of F@H and start differentiating points :shadedshu

A lot of information is still to come, but it looks very promising.

http://boinc.berkeley.edu/flops.php


----------



## f22a4bandit (Nov 30, 2011)

Besides the PPD output (speculation at this point), another great thing about using video cards is their universality. You don't have to worry about new sockets; it's just plug-and-play. Our video cards aren't stressed that much anyway unless we're gaming, editing, etc., so this just makes using your machine that much easier.


----------



## Chicken Patty (Nov 30, 2011)

F22 has a very good point.  A lot easier to stack up points with the GPU's than it is with CPU's.


----------



## F150_Raptor (Nov 30, 2011)

You're also going to have a hell of a lot more heat and a higher power bill.  Both of those were the reasons why I stopped using GPUs with F@H.


----------



## Chicken Patty (Nov 30, 2011)

F150_Raptor said:


> You're also going to have a hell of a lot more heat and a higher power bill.  Both of those were the reasons why I stopped using GPUs with F@H.



That is very true.  But I already fold as it is, so for me it's not much of a big deal.  Plus, I have a fixed monthly rate, so unless it's a huge jump I don't think they'll say anything.  It's just bad for those who will take the big hit.


----------



## KieX (Nov 30, 2011)

Chicken Patty said:


> F22 has a very good point.  A lot easier to stack up points with the GPU's than it is with CPU's.



If the points are calculated the same way for GPU then yes.

At the moment it's only a small part of one project which will have support. Might be a lottery for the first Work Units. I'm feeling lucky


----------



## f22a4bandit (Dec 1, 2011)

F150_Raptor said:


> You're also going to have a hell of a lot more heat and a higher power bill.  Both of those were the reasons why I stopped using GPUs with F@H.



Great point - I don't know why I didn't think of that! But for those like CP with a fixed electricity bill, it is a great advantage.


----------



## KieX (Dec 1, 2011)

F150_Raptor said:


> You're also going to have a hell of a lot more heat and a higher power bill.  Both of those were the reasons why I stopped using GPUs with F@H.



Although that's also partly due to the bigadv bonus system, right?  In F@H until recently an i7 could outdo a couple of GPUs.

I don't think WCG, being IBM-sponsored, wants to go down that route of varying bonuses, so it may be that GPUs really are better than CPUs for this type of project.

From my GPU folding days I can certainly vouch for 500W+ dual-GPU setups being way less efficient than a good CPU. All speculation until there's any confirmation, of course. But here's to hoping


----------



## BinaryMage (Dec 1, 2011)

F150_Raptor said:


> You're also going to have a hell of a lot more heat and a higher power bill.  Both of those were the reasons why I stopped using GPUs with F@H.



True indeed. There are differences, though. First, BOINC assigns credit roughly based on how much data you crunch, while F@H does not necessarily. GPUs are far more efficient than CPUs in FLOPS/watt, so BOINC crunching with GPUs is more efficient in both work/watt and credit/watt, while F@H crunching with GPUs is more efficient in work/watt but not credit/watt. In addition, F@H's GPU system is mainly built for CUDA, and doesn't use the more powerful (in raw compute) AMD GPUs as effectively as it could.


----------



## FordGT90Concept (Dec 1, 2011)

The reason I switched from F@H to BOINC is because BOINC didn't use GPUs.  The reason is simple: CPUs are more accurate.  An incorrect value often gets caught immediately on a CPU, whereas a GPU requires testing on a separate machine to verify its correctness - and even then not with 100% certainty.

We're doing science here: slow, correct results are more desirable than fast, incorrect results.

If TPU starts adding GPU work to its project list, I'll have to stop running BOINC for TPU.  I refuse to compromise science for an arbitrary, altogether meaningless number.


----------



## BinaryMage (Dec 1, 2011)

FordGT90Concept said:


> The reason I switched from F@H to BOINC is because BOINC didn't use GPUs.  The reason is simple: CPUs are more accurate.  An incorrect value often gets caught immediately on a CPU, whereas a GPU requires testing on a separate machine to verify its correctness - and even then not with 100% certainty.
> 
> We're doing science here: slow, correct results are more desirable than fast, incorrect results.
> 
> If TPU starts adding GPU work to its project list, I'll have to stop running BOINC for TPU.  I refuse to compromise science for an arbitrary, altogether meaningless number.



Really? Where did you hear that? I did some quick Google-fu and came up with nothing...

And anyway, BOINC tasks for both GPUs and CPUs are double- or triple-checked, with multiple computers performing the same tasks. I doubt BOINC projects would use GPUs if they didn't produce valid data.
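
The redundancy scheme described here (sending the same work unit to several machines and only accepting results that agree) can be sketched roughly like this. It's a deliberate simplification: BOINC's real validators are project-specific and considerably more involved, and the function name and tolerance are mine:

```
# Toy version of quorum-based validation: a work unit's result is only
# accepted once enough independent hosts return matching values.

def validate(results: list[float], quorum: int = 2,
             tolerance: float = 1e-6):
    """Return the agreed value, or None if no quorum has been reached."""
    for a in results:
        agreeing = [b for b in results if abs(a - b) <= tolerance]
        if len(agreeing) >= quorum:
            return sum(agreeing) / len(agreeing)  # canonical result
    return None

assert validate([1.0, 1.0]) == 1.0   # two hosts agree: accept
assert validate([1.0, 2.0]) is None  # mismatch: the WU gets re-issued
```

This is why a lone flaky GPU (or CPU, for that matter) can't silently poison the science: its result simply never reaches quorum.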


----------



## ChaoticAtmosphere (Dec 1, 2011)

Well, electricity is no concern for me as it's included in my rent, and it's why I have a 10,000 BTU air conditioner in my room dedicated to cooling my rig... almost like a server room

More good news, I plan on buying a second HD6870 in 2 weeks  (Or maybe I should buy the bulldozer, hmmmm decisions decisions.....)


----------



## twilyth (Dec 1, 2011)

Hell, I'll load up on 6970's or 7xxx's if each one can do more work than an overclocked 2600k.


----------



## KieX (Dec 1, 2011)

FordGT90Concept said:


> The reason I switched from F@H to BOINC is because BOINC didn't use GPUs.  The reason is simple: CPUs are more accurate.  An incorrect value often gets caught immediately on a CPU, whereas a GPU requires testing on a separate machine to verify its correctness - and even then not with 100% certainty.
> 
> We're doing science here: slow, correct results are more desirable than fast, incorrect results.
> 
> If TPU starts adding GPU work to its project list, I'll have to stop running BOINC for TPU.  I refuse to compromise science for an arbitrary, altogether meaningless number.



Might want to take a look at the PDF with their initial findings: http://www.cs.toronto.edu/~juris/WCG/UPDATE-SEP2011.pdf

It includes a section on their findings with regard to accuracy. As you said, GPU is always going to trade off some accuracy, but it seems the researchers find this a tolerable margin in order to run alongside the CPU WUs.
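
For context on where that accuracy gap comes from: consumer GPUs of this era do their fast math in single precision (float32), while CPU science code typically runs in double precision (float64), and the drift is easy to demonstrate with nothing more than repeated addition (a generic illustration, not the actual HCC workload):

```
import struct

def f32(x: float) -> float:
    """Round a Python float to the nearest single-precision (float32) value."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Accumulate 0.1 a hundred thousand times in single vs. double precision.
n = 100_000
s32, s64 = 0.0, 0.0
for _ in range(n):
    s32 = f32(s32 + f32(0.1))   # every operation rounded to float32
    s64 = s64 + 0.1             # native double precision

err32, err64 = abs(s32 - 10000.0), abs(s64 - 10000.0)
assert err32 > err64   # single precision drifts far further from 10000.0
```

Whether an error on that scale matters depends entirely on the algorithm, which is presumably what the tolerable-margin analysis in the PDF is about.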


----------



## Chicken Patty (Dec 1, 2011)

I won't argue with Ford on the accuracy/GPU subject because he does have a valid point, but keep in mind that the higher that "meaningless #" is, the more research is getting done.

Now, is this just a personal thing where you would not like to crunch for a team that uses GPUs as well, or do you refuse to crunch with GPUs?  I mean, no one here can hold you back, obviously, but as the consistent and dedicated cruncher you are, I'm sure no one here would like to see you go.  I know I sure as heck wouldn't.  Not only that, but you've been of great help to the community in other things, not just the research.


----------



## ChaoticAtmosphere (Dec 2, 2011)

Hmmmmm, Maybe a bulldozer then.....


----------



## (FIH) The Don (Dec 2, 2011)

twilyth said:


> Hell, I'll load up on 6970's or 7xxx's if each one can do more work than an overclocked 2600k.



i hope that this gpu project wont have nvidia favored like it is in f@h

i hope for some equal competition in this, just for once


----------



## Bow (Dec 2, 2011)

(FIH) The Don said:


> i hope that this gpu project wont have nvidia favored like it is in f@h
> 
> i hope for some equal competition in this, just for once


----------



## Chicken Patty (Dec 2, 2011)

ChaoticAtmosphere said:


> Hmmmmm, Maybe a bulldozer then.....



I'm really dying to have some money left over and maybe build a BD rig.  Just dying to see what it can do.


----------



## BinaryMage (Dec 2, 2011)

(FIH) The Don said:


> i hope that this gpu project wont have nvidia favored like it is in f@h
> 
> i hope for some equal competition in this, just for once



The opposite will likely be true. AMD cards, by raw compute power, are _way_ more powerful than comparably priced nVidia ones, usually by about a factor of two. (For example, the HD 6990 has roughly 5 TFLOPS versus the GTX 590's 2.5.) Folding@home, which was initially designed for CUDA (nVidia), hasn't ever really used AMD cards very efficiently. BOINC, however, does. Assuming this follows similar trends as the other BOINC projects, you will get roughly double the crunching ability, and thus credit, from an AMD GPU versus a similarly priced nVidia one. Hope that cheers you up!


----------



## ChaoticAtmosphere (Dec 2, 2011)

I'm decided on bulldozer.....I'll get 2nd 6870 next week.


----------



## Chicken Patty (Dec 2, 2011)




----------



## FordGT90Concept (Dec 4, 2011)

KieX said:


> It includes a section on their findings with regards to accuracy. Like you said, GPU is always going to trade off some accuracy but it seems that the researchers find this to be a tolerable margin in order to run simultaneously with CPU WU.


Yes, there are some situations where a quick result is more important than a 100% accurate result.  Then again, those sorts of projects should be run on dedicated Tesla or Stream systems run by the school--not put on distributed computing.  If you're going to spend years to get a result, it had better damn well be accurate or you're wasting everyone's time.

An inaccurate result doesn't move science forward--it only stagnates it.  If you make a "discovery" based on some overclocked GPUs that generated bad results, it wasn't a discovery at all.  Someone, somewhere will eventually question the results and you're back to square one.  They effectively just wasted years' worth of clocks to get a worthless result.




Chicken Patty said:


> Now is this just a personal thing that would not like to Crunch for a team that uses GPU's as well, or you refuse to crunch with GPU's?


I refuse to do scientific calculations on GPUs, period.  If TPU added GPU work to the queue, I would probably switch my computers over to teamless and give them their own stack of projects to work on (no GPU, obviously).


----------



## ChaoticAtmosphere (Dec 4, 2011)

Chicken Patty said:


> I'm really dying to have some money left over and maybe build a BD rig.  Just dying to see what it can do.





FordGT90Concept said:


> Yes, there are some situations where a quick result is more important than a 100% accurate result.  Then again, those sorts of projects should be run on dedicated Tesla or Stream systems run by the school--not put on distributed computing.  If you're going to spend years to get a result, it had better damn well be accurate or you're wasting everyone's time.
> 
> An inaccurate result doesn't move science forward--it only stagnates it.  If you make a "discovery" based on some overclocked GPUs that generated bad results, it wasn't a discovery at all.  Someone, somewhere will eventually question the results and you're back to square one.  They effectively just wasted years' worth of clocks to get a worthless result.
> 
> ...



Well, after reading all this, I'm beginning to side with Ford. I took part in the F@H project with my 3870 and, well, it just wasn't as satisfying as WCG. Besides, I bought my GPU primarily to play HD video games. So chances are I will opt out of crunching with my beloved 6870.

Also, in light of this article, I decided to hold off on the BD purchase and get a 2nd 6870 instead.


----------



## FordGT90Concept (Dec 4, 2011)

Bulldozer might be good when they move to 22nm.  Until then, only stick them in budget machines. XD


----------



## KieX (Dec 4, 2011)

FordGT90Concept said:


> Yes, there are some situations where a quick result is more important than a 100% accurate result.  Then again, those sorts of projects should be run on dedicated Tesla or Stream systems run by the school--not put on distributed computing.  If you're going to spend years to get a result, it had better damn well be accurate or you're wasting everyone's time.
> 
> An inaccurate result doesn't move science forward--it only stagnates it.  If you make a "discovery" based on some overclocked GPUs that generated bad results, it wasn't a discovery at all.  Someone, somewhere will eventually question the results and you're back to square one.  They effectively just wasted years' worth of clocks to get a worthless result.
> 
> ...



I agree with you on the accuracy being vital to the research. Wasted cycles aren't good for anyone involved.

At the moment it's just one project that will share GPU WUs with CPU. Also, the BOINC config file allows the GPU to be disabled so that only CPU WUs are used.

You can therefore continue running WCG minus the GPU WUs and/or the HCC project that uses that method. There are still 8 other WCG projects, with more to come, that are CPU-only and worthwhile causes. I hope that you wouldn't leave knowing you can choose what you contribute towards. Or is it a principle-based choice? (I would always prefer for you to stay)


----------



## ChaoticAtmosphere (Dec 4, 2011)

Yes Ford, you are a tenured TPU member and it would be sad to see you go simply on the basis that some members decide to use their GPU. 

I can assure you that I have decided against it and I do thank you for pointing out the fact that GPU's provide less accurate results.


----------



## FordGT90Concept (Dec 4, 2011)

KieX said:


> You can therefore continue running WCG minus the GPU WU and/or the HCC project that uses that method. There are still 8 other WCG projects, with more to come that are CPU only and worthwhile causes. I hope that you wouldn't leave knowing you can choose what you contribute towards. Or is it a principle based choice? (i would always prefer for you to stay)


I have found the option for opting out of Help Conquer Cancer and did so (log in to WCG, go to My Projects).  Are you certain that's the only one involving GPUs?

Uh, or do GPU results get relegated to strictly opting in to the beta program?


----------



## KieX (Dec 4, 2011)

FordGT90Concept said:


> I have found the option for opting out of Help Conquer Cancer and did so (log in to WCG, go to My Projects).  Are you certain that's the only one involving GPUs?
> 
> Uh, or do GPU results get relegated to strictly opting in to the beta program?



The BETA program is opt-in only. Those GPU WUs won't appear in that program till either later this month or January. The BETA phase will likely take a few months of testing; WCG admins have already said it will be much longer than regular testing due to the nature of GPU computing.

For all we know, we won't see any GPU WUs on the live HCC project till mid-2012.

Whenever Help Conquer Cancer does go live with both types of WU, you can still choose to do CPU WUs only (and not receive GPU work) by adding this to cc_config.xml:

```
<cc_config> 
    <options> 
        <no_gpus>1</no_gpus>
    </options> 
</cc_config>
```

Hope that helps


----------



## twilyth (Dec 4, 2011)

Thanks for the code, but in the newer versions of BOINC, isn't there an option for this in the advanced menu?

Personally, I'm going to dump F@H at the end of the month so that I'm ready, willing and able to take beta wu's.  Hopefully the opt-in option will be announced so I don't have to check the site every couple of days.


----------



## KieX (Dec 4, 2011)

twilyth said:


> Thanks for the code, but in the newer versions of boinc, isn't there an option for this in the advanced menu?



You can disable the GPU by selecting Suspend GPU from the menu. But it will still register that your GPU is capable and ready, so although this works in theory, if you really want to be safe from doing those WUs like Ford, it's best to edit the config file.









twilyth said:


> Personally, I'm going to dump F@H at the end of the month so that I'm ready, willing and able to take beta wu's.  Hopefully the opt-in option will be announced so I don't have to check the site every couple of days.



You need to opt in to BETA testing as it's not enabled by default. Log on to your WCG page, select Beta Testing from the left-hand side, then choose which profiles you want the WUs to go to. Just be aware that it covers testing WUs from all projects, so there is a chance of getting bad WUs. Beta WUs are also normally very rare (the Beta badge is the most difficult Bronze/Silver to earn), so even with it enabled, don't expect the usual number of WUs you normally get from WCG.






If the BETA is successful, it's likely that the default setting for HCC will be to enable GPU WUs whenever there's compatible hardware and the BOINC preferences are set to allow it.

That's as much as I know from what the WCG admins have shared, so that should hopefully help those who don't want to join to stay trouble-free, and those who do want to join to know where to look.

Personally, I participate in Beta testing as I'm all for helping the IBM staff get stable and well-tested WUs before they go live. Whether I'll join HCC GPU once it's out... I don't know yet. I'll wait till the testing is over and the researchers announce what level of accuracy they achieved.


----------



## twilyth (Dec 4, 2011)

Thanks K.  I've been signed up for betas for a few years, but I had a brain fart about this.  For some reason I thought I would have to opt in for the new project since I currently don't accept Cancer project WUs.  I do shit like that all of the time (miss connections that should be obvious to me and the like). 

And thanks for pointing out the importance of changing the config file too.


----------



## KieX (Dec 4, 2011)

I'm guilty of TL;DR posts sometimes  Normally forget who I'm replying to and go for generic babble in case someone else might need to know


----------



## twilyth (Dec 4, 2011)

Absolutely.  I tend to do that as well but I tend to get criticized for being condescending (not here, but elsewhere).  I think you manage to do it while coming off as just being helpful.  FML.


----------



## FordGT90Concept (Dec 5, 2011)

Mine doesn't even have the Suspend GPU option and it was downloaded yesterday. 

...maybe it is because I did the no_gpus option.


I think I'm just going to stay out of HCC.  I looked at my server's current tasks after changing it, and only about 5 of probably 100 were HCC.  I've already contributed a lot to them anyway:
Project: Help Conquer Cancer
Points Generated: 2,481,087
Results Returned: 7,294
Total Run Time: 3:044:12:11:00

3 years' worth of CPU cycles...

We've had a good run. XD


----------



## BinaryMage (Mar 15, 2012)

Update (yes I'm still here lol):
Beta WUs (Windows only currently) are now out!

https://secure.worldcommunitygrid.org/forums/wcg/viewthread_thread,32803

Can't seem to get any on my 4850 right now.. hopefully they'll get that up and running ASAP.


----------



## de.das.dude (Mar 16, 2012)

so where do we get the GPU Client? i am eager to start crunching on my new GTS 450!


----------



## mstenholm (Mar 16, 2012)

de.das.dude said:


> so where do we get the GPU Client? i am eager to start crunching on my new GTS 450!



Copied from WCG forum:

No, in order to get Beta (my insert: GPU) WUs you need to do two things:

1. On the "Beta Testing" panel you need to have the "Participate in Beta Testing" checked for the profile(s) you have assigned to the device(s).
2. On the "Device Manager->Device Profiles" panel you need to have "If my computer can process work on my graphics card, then please send me work to run on my graphics card for the projects that I have selected above." checked in the profile you have assigned to the device(s).

Note: the Device Profile verbiage is misleading for Beta work. You do not need to have the HCC project selected in the profile to get Betas, just the two things above.


----------



## de.das.dude (Mar 17, 2012)

i still cant find the link to dload it!


----------



## mstenholm (Mar 17, 2012)

de.das.dude said:


> i still cant find the link to dload it!



You have it already. Look at post #36


----------



## de.das.dude (Mar 17, 2012)

ahh... pic wasn't loading. i have a slow net.

nothing's happening.


----------



## ChaoticAtmosphere (Mar 17, 2012)

I tried the GPU client, and with what FordGT90Concept said, plus seeing the performance of my computer while it was computing, I am opting out.

1. My video card was bought primarily for visual eye candy in my gaming.

2. They represent at least 40% of the cost of my computer - not worth the contribution they would make to the project. And with that investment, I'd rather preserve them; they are more fragile, IMO.

My decision is firm. Sorry all.

On a positive note, I expect my 8 core Gaming/Crunching machine to be functional by May this year.


----------

