
TPU's WCG/BOINC Team

Yeah, it takes a while once you start, and it usually takes about a week for the PPD to even out.
 
Everybody is worried about the swine flu, but since the XS mutants invaded, crunching fever has been running rampant. Attempting to quarantine the contagion by creating a WCG subforum has only served to accelerate its spread. I fear TPU is doomed. :laugh:
 
I thought Fits started it:laugh:
 
Glad to be a Pie-eating part of that.
 
Glad to be a Pie-eating part of that.

Yeah!! Got my first bite of Pie today :rockout:

[Attachment: Pie no10.jpg]
 
Server is now crunching. :D

Now I wish I had a quad core server. :(
 
Oh look Blueberry. I can live with that.
 
I got my old POS Athlon 800MHz set up to crunch today... it's not too swift, but I was like "wtf? it's not doing anything anyway!" so every little bit counts, right? Its first work unit will take 17 hours to complete :laugh:
 
Hey guys. I just contacted Buck Nasty, the moderator for the TPU F@H team, and I asked him this question... I was wondering if we may be able to come to a mutually beneficial agreement. WCG primarily runs on CPU power, while folding tends to get its greatest power from GPUs. What if we combined our efforts? F@H members could join and crunch for WCG, and in return, WCG members could join F@H and run their GPUs for the cause. What do you guys think? Would you guys be willing to make this commitment?
 
I've folded for Team #37726 since 2005, so it's not for me.

But I'll be sending my WCG points back to XS in a bit, so my vote isn't all that critical. Good idea.
 
Hey guys. I just contacted Buck Nasty, the moderator for the TPU F@H team, and I asked him this question... I was wondering if we may be able to come to a mutually beneficial agreement. WCG primarily runs on CPU power, while folding tends to get its greatest power from GPUs. What if we combined our efforts? F@H members could join and crunch for WCG, and in return, WCG members could join F@H and run their GPUs for the cause. What do you guys think? Would you guys be willing to make this commitment?

I have had massive problems getting F@H to run smoothly on my card, but yes, the day I get it sorted / Stanford gets it sorted is the day my electricity bill shoots up.
 
Hey guys. I just contacted Buck Nasty, the moderator for the TPU F@H team, and I asked him this question... I was wondering if we may be able to come to a mutually beneficial agreement. WCG primarily runs on CPU power, while folding tends to get its greatest power from GPUs. What if we combined our efforts? F@H members could join and crunch for WCG, and in return, WCG members could join F@H and run their GPUs for the cause. What do you guys think? Would you guys be willing to make this commitment?

This is the best way to get more interest in and create camaraderie between the teams. We have many guys who run WCG on their CPUs and F@H on their GPUs on XS.

Start a thread and have people post trades. For instance, I'll run my 8800GT for TPU's F@H team if someone will run at least a dual core on TPU's WCG team.
 
I got my old POS Athlon 800MHz set up to crunch today... it's not too swift, but I was like "wtf? it's not doing anything anyway!" so every little bit counts, right? Its first work unit will take 17 hours to complete :laugh:

My server will take 6-7 hours and my desktop will take about 4. (desktop is at stock atm)
 
Your challenge is to keep it. There are quite a few crunchers nipping at your heels wanting that piece of the pie. Could be time to get your credit card out!;)

Already looks like I have been passed :mad:...but I have some more backup coming in the next week or so ;)


Hey guys. I just contacted Buck Nasty, the moderator for the TPU F@H team, and I asked him this question... I was wondering if we may be able to come to a mutually beneficial agreement. WCG primarily runs on CPU power, while folding tends to get its greatest power from GPUs. What if we combined our efforts? F@H members could join and crunch for WCG, and in return, WCG members could join F@H and run their GPUs for the cause. What do you guys think? Would you guys be willing to make this commitment?

I already do this and don't really see a big hit on my PPD, as the GPU's PPD is significantly better than the SMP client's.
I have ordered a new card to compensate for the difference anyway :)


I have had massive problems getting F@H to run smoothly on my card, but yes, the day I get it sorted / Stanford gets it sorted is the day my electricity bill shoots up.


WhiteLotus, if you need any more help to get the GPU2 client running you can PM me if you like and I will do my best to help.
 
Hey guys. I just contacted Buck Nasty, the moderator for the TPU F@H team, and I asked him this question... I was wondering if we may be able to come to a mutually beneficial agreement. WCG primarily runs on CPU power, while folding tends to get its greatest power from GPUs. What if we combined our efforts? F@H members could join and crunch for WCG, and in return, WCG members could join F@H and run their GPUs for the cause. What do you guys think? Would you guys be willing to make this commitment?
I won't run F@H for many reasons:
1) Their clients are crap and they show no intent to fix them.
2) The GPU client is a PITA to disable whenever I want to game (horrible FPS if it isn't disabled).
3) I don't like the attitude over at the F@H forums where they encourage overclocking while doing science. Overclocking = inaccurate results = bad science.
4) An extension on point 3, all they care about are points, not science.
5) F@H is only useful on GeForce cards--they make that clear with their point system. The favoritism they show toward NVIDIA products is downright sickening.

My conclusion: All they care about is results. They don't care if they are good or bad and they especially don't care about the people that contribute whatever they can for the cause. I've seen it said many times: They don't want people to dust off Pentium II computers to fold on because it slows all their godly GeForce cards down. *bleep* that and *bleep* them.


WCG/BOINC (Berkeley) is going places; F@H (Stanford) isn't.
 
I always had the same idea, but I can't do it on my PC :(
my 8600GT gets suffocated every time I run it
 
I won't run F@H for many reasons:
1) Their clients are crap and they show no intent to fix them.
2) The GPU client is a PITA to disable whenever I want to game (horrible FPS if it isn't disabled).
3) I don't like the attitude over at the F@H forums where they encourage overclocking while doing science. Overclocking = inaccurate results = bad science.
4) An extension on point 3, all they care about are points, not science.
5) F@H is only useful on GeForce cards--they make that clear with their point system. The favoritism they show toward NVIDIA products is downright sickening.

My conclusion: All they care about is results. They don't care if they are good or bad and they especially don't care about the people that contribute whatever they can for the cause. I've seen it said many times: They don't want people to dust off Pentium II computers to fold on because it slows all their godly GeForce cards down. *bleep* that and *bleep* them.


WCG/BOINC (Berkeley) is going places; F@H (Stanford) isn't.

Would you please post factual information supporting this statement?

After 5 years of FAH and BOINC, I find this statement completely inaccurate. I am sure that there are many others who take great exception to this ignorant statement.
 
Why do you think computers BSOD when overclocked? A critical bit didn't reach its destination or stay accurate in its container, reporting a value outside of what is expected. The result is a crash. For example, say you have an nForce driver at 0x80000000 in the memory stack, and the reference to it had a bit in memory that didn't switch rapidly enough, causing the value to be 0x80100000. The next device that attempts to access the nForce driver won't find what it is looking for and will subsequently crash. If it was Windows, cue BSOD.

An error occurs when a value is generated that is unexpected or outside of normal parameters. Unstable computers do this at stock (bad processor, bad memory, bad sectors on a hard drive, bad motherboard). Stable computers never do it. The farther any of those listed components are overclocked, the more likely it is to occur.


Put simply, you have to understand how binary works (it only takes one bit being stuck/wrong to report a completely different value than what is intended) to understand how easy it is for a computational error to occur. There are billions of opportunities for this to happen every second in every computer. Overclocking greatly increases the odds that it will happen.
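To put a number on this, here is a minimal Python sketch of the point, using the hypothetical driver addresses from the BSOD example above (bit 20 is assumed as the bit that fails to switch):

```python
# One stuck bit is enough to turn a valid address into garbage.
addr = 0x80000000             # where the driver actually sits (hypothetical)
stuck_bit = 1 << 20           # the single bit that failed to switch
corrupted = addr | stuck_bit  # the value the hardware actually reported

print(hex(corrupted))  # 0x80100000 -- points at nothing useful
```

The two addresses differ in exactly one of 32 bits, yet they are over a megabyte apart in the address space, which is why a single flipped bit can crash a system or silently corrupt a scientific result.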

ECC is required in computers with large memory banks to prevent this from happening in a situation where it is not only likely, but inevitable (there's a lot of surface area for an electron to hit and cause a bit to randomly flip).



BOINC has at least two computers calculating everything, so if someone screws up, it errors out and the result is thrown away. F@H has no such mechanism to prevent computational errors.
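The redundancy mechanism described here can be sketched in a few lines of Python (a simplified illustration of the quorum idea, not actual BOINC server code):

```python
from collections import Counter

def quorum_validate(results, quorum=2):
    """Accept a value only if at least `quorum` independent hosts agree on it;
    otherwise return None so the work unit would be reissued."""
    value, count = Counter(results).most_common(1)[0]
    return value if count >= quorum else None

print(quorum_validate([3.14159, 3.14159]))  # 3.14159 -- two hosts agree, accepted
print(quorum_validate([3.14159, 3.15001]))  # None -- mismatch, work is resent
```

A single overclock-induced error on one host simply fails the comparison and gets recomputed, which is why redundant validation tolerates flaky clients.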


Moreover, GPUs are more likely to error than CPUs because they don't handle any critical information. A GPU could have a bad binary switch in it and you might never even know it.
 
Why do you think computers BSOD when overclocked? A critical bit didn't reach its destination or stay accurate in its container, reporting a value outside of what is expected. The result is a crash. For example, say you have an nForce driver at 0x80000000 in the memory stack, and the reference to it had a bit in memory that didn't switch rapidly enough, causing the value to be 0x80100000. The next device that attempts to access the nForce driver won't find what it is looking for and will subsequently crash. If it was Windows, cue BSOD.

An error occurs when a value is generated that is unexpected or outside of normal parameters. Unstable computers do this at stock (bad processor, bad memory, bad sectors on a hard drive, bad motherboard). Stable computers never do it. The farther any of those listed components are overclocked, the more likely it is to occur.


Put simply, you have to understand how binary works (it only takes one bit being stuck/wrong to report a completely different value than what is intended) to understand how easy it is for a computational error to occur. There are billions of opportunities for this to happen every second in every computer. Overclocking greatly increases the odds that it will happen.

ECC is required in computers with large memory banks to prevent this from happening in a situation where it is not only likely, but inevitable (there's a lot of surface area for an electron to hit and cause a bit to randomly flip).



BOINC has at least two computers calculating everything, so if someone screws up, it errors out and the result is thrown away. F@H has no such mechanism to prevent computational errors.


Moreover, GPUs are more likely to error than CPUs because they don't handle any critical information. A GPU could have a bad binary switch in it and you might never even know it.

Some projects have a quorum of one with no redundancy unless there is an error.

The premise that overclocks make bad computations is inaccurate; unstable overclocks making errors is accurate.

The errors produced by my farm have always been traced to memory degrading or OS/HDD issues, in that the OS corrupts or the old HDDs that I use wear out. OS corruption in my experience is greater with Windows; I switched to a Linux OS and since then have never seen a BSOD.

I have no issues with your comments relating to FAH, as I left when the project would no longer allow me to queue Tinkers:p, and their forum had several "mods" that should not be allowed out in public without police supervision:D

If you require, I can give you the attributes to my farm and their cumulative accomplishments to provide you with a clearer picture of my experience.
 
Some projects have a quorum of one with no redundancy unless there is an error.
The ones that really need two to run a quorum are those where every result is used to create and calculate a subsequent result. That is, if there is an error early on, it will be exponentially off in the end. There are also some situations where you know what your answer should be (at least close to it), so you'll know rather quickly if it was wrong. Whether or not there is redundancy is up to the people adding the project to WCG to decide.


The premise that overclocks make bad computations is inaccurate; unstable overclocks making errors is accurate.
One and the same to me. Most people who overclock their computers eventually have to lower their clocks because the computer becomes unstable over time (the parts degrade). Every time that happens, you risk producing a bad result. Running on the edge of stability isn't good for science.


The errors produced by my farm have always been traced to memory degrading or OS/HDD issues, in that the OS corrupts or the old HDDs that I use wear out. OS corruption in my experience is greater with Windows; I switched to a Linux OS and since then have never seen a BSOD.
Unix handles unexpected errors differently than Windows. Memory degradation can be the result of overclocking or of normal wear and tear. Again, the redundancy at BOINC, which is lacking at F@H, is what overcomes these "x factors."
 
The ones that really need two to run a quorum are those where every result is used to create and calculate a subsequent result. That is, if there is an error early on, it will be exponentially off in the end. There are also some situations where you know what your answer should be (at least close to it), so you'll know rather quickly if it was wrong. Whether or not there is redundancy is up to the people adding the project to WCG to decide.



One and the same to me. Most people who overclock their computers eventually have to lower their clocks because the computer becomes unstable over time (the parts degrade). Every time that happens, you risk producing a bad result. Running on the edge of stability isn't good for science.



Unix handles unexpected errors differently than Windows. Memory degradation can be the result of overclocking or of normal wear and tear. Again, the redundancy at BOINC, which is lacking at F@H, is what overcomes these "x factors."

I'll give this one more jab and drop it, as we have differing, unmovable opinions.

I don't think you can include "most" in your statement. An avid bencher will probably not run their equipment on a DC project; they are probably more interested in the suicide run... The overclockers who participate in DC are more likely to obtain a solid, "moderate" overclock that IS stable. I have, in my experience, never downclocked because of instability. Am I the rule? Probably not, nor am I the exception.

Overclocked computers are not bad for DC; unstable computers are.

One final query: is a box-stock computer capable of error? If so, it also would be liable to produce inaccurate/erroneous results... correct?
 
The main problem with distributed computing would be errors on the client side. GPUs are especially prone to this, like Ford says, and you would only know because the image in a 3D program would become artifacted. When they are folding, they could produce a lot of errors, which could lead to a giant waste of time.
 
The main problem with distributed computing would be errors on the client side. GPUs are especially prone to this, like Ford says, and you would only know because the image in a 3D program would become artifacted. When they are folding, they could produce a lot of errors, which could lead to a giant waste of time.


I am newly involved with GPU computing. I have not done FAH/GPU but only participate at GPUGrid.

I don't have enough practical experience with GPUs to be able to make a valid assessment.

I have had very few problems thus far, only with work scheduling... and that is a BOINC client issue.
 