# FAH TPU Top 100



## cine.chris (Jan 20, 2021)

A TPU guy got there first...
EOC Top Point Producers





I was close...
I've been tuning & tweaking, getting all the Nvidia GPUs onto Linux.  While I'll never make a Top100 rank like @MightyMayfield, I'm really close to a Top100 Producer spot... the EOC 24hr (7-day avg) is the stat I watch. Although my #s have been weak the last couple of days, I guess others fared worse, as I moved up a chunk! Not what I expected to see.


----------



## phill (Jan 20, 2021)

Loving all these stats you can pull and show


----------



## Jacky_BEL (Jan 24, 2021)

cine.chris said:


> A TPU guy got there first...
> EOC Top Point Producers
> 
> ...


You ranked up into the Top 100 I see, 99th place, so well done!
How much of this comes from moving to Linux?


----------



## cine.chris (Jan 24, 2021)

Jacky_BEL said:


> You ranked up into the Top 100 I see , 99th place, so well done !
> How much of this comes from moving to Linux?


TPU has a good team!
Moving to Linux was a big piece of the changes I've made, especially with ALL my GPUs being Nvidia and the CUDA core released at the end of September.
I still have a couple of Win10 systems for desktop work.
Also, I've moved to Supermicro server-class hardware for the dedicated folders.
The desktop mobos just don't have enough PCIe lanes to work with.
And it was cheaper to buy & operate! Turns out the power efficiency is much better too.
This guy (photo attached) is putting out ~8M PPD. The mobo, CPU & RAM (with a heatsink I replaced) were US$350.
Just checked... he's currently running at 8.19M.
I've attached my Linux FAHClient Install Guide for Ubuntu 20.04.  I also run Linux Mint and like its interface much more than Ubuntu MATE's.
Bus-wise, I could attach two additional GPUs to the system in the photo, but where would they go? Powering them would be a challenge too.
If anyone has an old system around, give the Linux install a try.

I did a scatterplot of the September transition from OpenCL to Linux & CUDA.  The more powerful cards like the 3070, 2070s & 2060s really benefited from the switch.
It's a crude plot, but it communicates the effects of both CUDA & Linux.  You can see the three different atom-count levels in the groupings.

To share how simple it is to build a dedicated server-class Linux Folder, this is the primary server that I'm working on.
Dual Xeon E5-2630 v3, Supermicro X9DR3, 8GB DDR3 1066 RAM (which I upgraded later). Cost was US$170, free shipping.
This is how it looks now...  four PCIe x16 slots that can be configured to bifurcate!
I have the pieces to complete the fan bar & it'll be working soon.
I should point out that I don't fold on CPUs; a system like this isn't power-efficient for CPU folding, but it does have 80 PCIe3 lanes to manage GPUs.


----------



## Jacky_BEL (Jan 29, 2021)

You mention PCIe lanes.
But do PCIe lanes matter that much for folding, I wonder?
Have you found interesting info on how much of a difference it makes running GPUs at x16, x8, x4 or x1?
The way I "think" about it is that most of the time is spent on calculations on the GPU, and data transfer is only a fraction of the total time.
Of course, I could be totally wrong in my assumption, maybe this topic has already been resolved somewhere on the net.


----------



## thebluebumblebee (Jan 29, 2021)

I don't believe that PCIe lanes matter all that much.  Imagine how many lanes he had:

> First pic's of 6x GTX 970 F@H rig (www.techpowerup.com)
> "Been teasing you guys for a while and I've had it running for a couple of weeks now on 2x 970s, but the K9A2 Platinum died (respect). Should have the replacement Gigabyte GA-990FXA-UD7 early next week. Until then, the cards just sit and await their new Queen. Hopefully the PSU is up to the task..."

----------



## mstenholm (Jan 29, 2021)

The PCIe lanes have been discussed a lot on the official FAH forum, and as far as I remember you have to have at least x4; even x4 takes a hit of a few percent. I never took part in that discussion, but I have seen a slight decrease at x8 on some projects. Project 17800, which I run now on a 2070 at x16, uses 14% bus.


----------



## cine.chris (Jan 30, 2021)

The EVGA Folders have good discussions on this topic. 
Their consensus was that a PCIe3 x4 slot meant a ~10% drop. But they would run gaggles of 2080 Tis; on a 1660 Ti, it might not be noticeable.
Due to the variation in WUs, unless you are logging data, it's often difficult to tell a difference.  I've logged 9,847 WUs since Sept, so the data I share isn't based on casual observation or impressions. I even change the client designation with any config changes, so setup data isn't commingled.
PCIe3 x8 is fine for most (all?) GPUs; the smaller system above is all PCIe3 x8.
Also, the more powerful the GPU, the higher the required bandwidth.
A 2060 will take a 15-20% hit on a *PCIe2* x4 slot.
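Those percentages track with simple bandwidth arithmetic. A rough sketch using nominal per-lane payload rates after encoding overhead (real-world throughput is a bit lower):

```python
# Theoretical one-way PCIe payload bandwidth per direction.
# Per-lane rates after encoding overhead: gen2 uses 8b/10b (~0.500 GB/s),
# gen3 uses 128b/130b (~0.985 GB/s).
PER_LANE_GBPS = {2: 0.500, 3: 0.985}

def pcie_bandwidth(gen: int, lanes: int) -> float:
    """Nominal one-way payload bandwidth in GB/s for a given generation and link width."""
    return PER_LANE_GBPS[gen] * lanes

for gen, lanes in [(3, 16), (3, 8), (3, 4), (2, 4)]:
    print(f"PCIe{gen} x{lanes}: {pcie_bandwidth(gen, lanes):5.2f} GB/s")
```

A PCIe2 x4 slot moves only about half of what a PCIe3 x4 does, which fits the larger hit on the 2060.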
Also, I was seeing a significant hit on an AMD X570 PCH slot.  I even swapped the two 2060s to verify it was the X570 hardware.
But that's just one data point.  My expectations for that box faded quickly.
Also, throttling a GPU isn't necessarily a bad thing, as it reduces the power load & operational temps.
I work to keep everything I run under a 70C max.
However, I like to tinker and find combos that work well together.  My last change didn't work; I was reverting everything at 5 AM.
Weird as it sounds, the workloads interact.  Of course, I'm sure the Folding experts will tell you that's silly nonsense.
All good topics for setting up gear & running real tests. 
One real test is worth a thousand 'expert' opinions!


----------



## Praystation (Feb 8, 2021)

do you guys think i could make a dent in the rank with my rig?


----------



## Jacky_BEL (Feb 8, 2021)

Praystation said:


> do you guys think i could make a dent in the rank with my rig?


With that many cores, you need to have the configuration right so all cores can be used.
I've read about it, but I don't remember exactly what it was. I believe you had to insert a line in a config file telling it how many cores to use.
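If it's the setting I'm thinking of, it lives in FAHClient's `config.xml` as a `cpus` value on a CPU slot. A hedged sketch (the slot id and the count of 32 below are illustrative, not from this thread):

```xml
<config>
  <!-- A CPU folding slot; v='-1' lets the client pick the
       thread count, an explicit value pins it. -->
  <slot id='0' type='CPU'>
    <cpus v='32'/>
  </slot>
</config>
```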


----------



## cine.chris (Feb 8, 2021)

Praystation said:


> do you guys think i could make a dent in the rank with my rig?


Yes, but be prepared to deal with the temps!
You could ask on the Folding@Home Discord server; they have some avid CPU Folders.

Certain core count groups are better utilized than others...
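To illustrate why (an assumption drawn from common GROMACS domain-decomposition advice, not from anything measured in this thread): CPU work units are often said to run best on thread counts whose prime factors are small, so a hypothetical helper like this would pick the largest "friendly" count at or below what you have:

```python
# Sketch: find the largest thread count <= available whose prime
# factorization contains only small primes (here, 2 and 3), since
# large prime factors (7, 11, 13, ...) are commonly reported to
# hurt domain decomposition in CPU folding work units.

def largest_prime_factor(n: int) -> int:
    """Largest prime factor of n (returns 1 for n == 1)."""
    f, p = 1, 2
    while p * p <= n:
        while n % p == 0:
            f, n = p, n // p
        p += 1
    return n if n > 1 else f

def suggested_cpus(available: int, max_prime: int = 3) -> int:
    """Largest count <= available whose prime factors are all <= max_prime."""
    for n in range(available, 0, -1):
        if largest_prime_factor(n) <= max_prime:
            return n
    return 1

print(suggested_cpus(28))  # 28 = 2*2*7, so it steps down to 27 = 3*3*3
```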


----------

