
TPU's WCG/BOINC Team

No one asked what the thread was. No inquiring minds? Hard to believe that the thread is from January! Seems like only a month or so....
I said, "it was someone who's been on TPU for a long time, maybe even longer than me." I've been on TPU since 2009, easy_rhino's been around since 2006. I also thought that james888 had responded in that thread. He was actually the first responder. If anyone is interested: Do "gaming" PCs require i5 procs?
I knew what thread it was; I just couldn't find it. I think in my memory it also got merged with the thread from the guy asking for the lowest-power-usage computer (not the real title), because he was looking at i3s at first.
 
Anyone experienced with blade servers? I'm considering a new project and could use some brains to pick.

Edit: Potentially related, how much interest would there be for inexpensive WCG hosting? :p
 
Anyone experienced with blade servers? I'm considering a new project and could use some brains to pick.

Edit: Potentially related, how much interest would there be for inexpensive WCG hosting? :p
There's a thread on the main forum about this - don't know if you saw it - https://secure.worldcommunitygrid.org/forums/wcg/viewthread_thread,37011

They talk about renting cloud servers and there are price quotes. So there is definitely interest, which I personally found surprising as you'll see if you read the thread. How much interest there might be on our team though is a different question, but if the price is right, I can see some people being interested. I was actually thinking about doing something like this with my dual hex cores and running them for slightly more than the cost of electricity.
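
For a rough idea of what "slightly more than the cost of electricity" would look like, here's a quick sketch; the wattage and electricity rate are my assumptions, not anyone's actual numbers:

```python
# Quick sketch of what "slightly more than the cost of electricity" works out
# to for a dual hex-core box. Wattage and electricity rate are assumptions --
# plug in your own numbers.
WATTS = 250                 # assumed wall draw under full crunching load
RATE = 0.12                 # assumed electricity price in $/kWh
HOURS_PER_MONTH = 24 * 30

kwh_per_month = WATTS / 1000 * HOURS_PER_MONTH
electricity_cost = kwh_per_month * RATE
asking_price = electricity_cost * 1.2   # "slightly more" = a 20% margin here

print(f"~{kwh_per_month:.0f} kWh/month -> ${electricity_cost:.2f} electricity, "
      f"ask ~${asking_price:.2f}/month")
# ~180 kWh/month -> $21.60 electricity, ask ~$25.92/month
```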
 
Anyone experienced with blade servers? I'm considering a new project and could use some brains to pick.
I have some. What are your questions?
 
I have some. What are your questions?
Mainly things like recommended brands, special equipment required, and any special concerns I'd need to look out for if I'm grabbing everything off of eBay. Also, are there any other cost-effective alternatives for high-density computing?
I'm eyeballing an IBM BladeCenter H. Looks like it's compatible with a lot of generations of blades? Some listings mention, apart from the case itself, special power cables, PSUs (fairly obvious), a KVM, some fan assembly, and switches. What's standard practice as far as equipment? Are all these extras common, or should I expect them built-in?

Bar-napkin math puts 14 IBM dual-X5370 rigs at about $30 a month each, with about a one-year ROI.
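
If anyone wants to sanity-check that bar-napkin math, here's roughly how the numbers line up; the hardware budget is just what those figures imply, not a real eBay quote:

```python
# Sketch of the bar-napkin numbers from the post: 14 blades at ~$30/month
# each, with "about a year" to ROI. The only derived figure is the implied
# hardware budget -- it's not an actual eBay quote.
BLADES = 14
NET_PER_BLADE = 30.0        # $/month per blade, from the post
TARGET_ROI_MONTHS = 12      # "about a year ROI"

monthly_total = BLADES * NET_PER_BLADE
implied_hardware_budget = monthly_total * TARGET_ROI_MONTHS
print(f"${monthly_total:.0f}/month total -> ~${implied_hardware_budget:.0f} "
      f"hardware budget for a {TARGET_ROI_MONTHS}-month ROI")
# $420/month total -> ~$5040 hardware budget for a 12-month ROI
```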
 
Hey guys. Do you think another 8 gigs of RAM would help my output at all? I have 8GB now: Kingston Red 1600 running at 1866, non-XMP-profile DIMMs. Also, if I do buy more, would a 16GB XMP kit be better than just buying two more 4GB sticks?
 
Hey guys. Do you think another 8 gigs of RAM would help my output at all? I have 8GB now: Kingston Red 1600 running at 1866, non-XMP-profile DIMMs. Also, if I do buy more, would a 16GB XMP kit be better than just buying two more 4GB sticks?
No, it wouldn't; the impact from RAM is negligible. I have not been able to see a statistically significant difference in PPD going from bare-minimum, slow, single-channel RAM to more, faster RAM.
 
Cool, that's all I needed to know. Thanks, Kai.
 
Cool, that's all I needed to know. Thanks, Kai.
In general, I've found that you want (2 + T/4) GB of RAM in your system (at the very least) for optimal performance, where T is the number of threads you're running. You can skate by with less (e.g., 2GB is enough for a four-thread system that does nothing but crunch), but that amount will let you run any of the WCG projects without issue.

But to back up my earlier claim, I have three i5-2400 systems running: one with a single 2GB stick of DDR3-10666, one with 1x2GB + 1x1GB of DDR3-10666, and one with 4x2GB of DDR3-12800, and all three perform almost exactly the same, averaged over a long enough period. On any given day one may (and will) outperform the other two, but on the whole there's no meaningful difference.
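
If you want to plug in your own thread count, here's that rule of thumb as a quick calculator; just a convenience sketch of the formula above:

```python
# Ion's rule of thumb from above: at least (2 + T/4) GB of RAM for a system
# crunching on T threads. Just a convenience calculator for the formula.
def min_ram_gb(threads: int) -> float:
    """Suggested minimum RAM in GB for `threads` crunching threads."""
    return 2 + threads / 4

for t in (4, 8, 16, 32):
    print(f"{t:2d} threads -> {min_ram_gb(t):.0f} GB minimum")
# 4 threads -> 3 GB minimum ... 32 threads -> 10 GB minimum
```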
 
Mainly things like recommended brands, special equipment required, and any special concerns I'd need to look out for if I'm grabbing everything off of eBay. Also, are there any other cost-effective alternatives for high-density computing?
I'm eyeballing an IBM BladeCenter H. Looks like it's compatible with a lot of generations of blades? Some listings mention, apart from the case itself, special power cables, PSUs (fairly obvious), a KVM, some fan assembly, and switches. What's standard practice as far as equipment? Are all these extras common, or should I expect them built-in?

Bar-napkin math puts 14 IBM dual-X5370 rigs at about $30 a month each, with about a one-year ROI.
The IBM blade chassis are somewhat modular, in the sense that the fan units, switches, etc. can be swapped out without taking everything apart. Great for redundant systems and hot-swapping. They definitely use server power cables that are bigger and beefier than your standard PC or rack-server cable. IIRC, the KVM is part of the unit (I could be mistaken), in that you hook up a monitor, keyboard and mouse (or a multiport KVM) to the chassis and then press the button on the front of a blade to assign it to the KVM. Same thing for the optical drive. There's also a Java-based management app that includes a VNC-like console, but I think there are a few things in the chassis setup that need the direct connection. Here are the specs (in case you haven't already pulled them up):

http://www-03.ibm.com/systems/bladecenter/hardware/chassis/bladeh/specs.html

We've been very happy with the IBM blades, and the density is nice. We can fit three chassis in a standard rack with a KVM (it goes between the three chassis) and switching, and still have some extra room. Much nicer than the 2U x35XX series we had, where we could only get about 15-17 per rack depending on what other stuff was in it.

HP and Dell also make blades, but we've not been as impressed. Cisco makes some UCS chassis servers that are similar in nature to the IBM Flex chassis, which gets you eight beefy servers in 6 or 8U. We have our VDI environment on Cisco and our server VM environment on IBM. I can't remember off the top of my head which blades are which, but I think there's at least one HS21 in there. (It was all there before I got into my role.) One of the chassis has some blades with S771 Xeons and others with S1366. The newer Sandy Bridge blades are in the Flex or UCS. Some of the dual-1366 blades we use have 18 RAM slots in them - 144GB easy with 8GB sticks :pimp:

BTW, these things are very loud and move a ton of air. Not something you want to have in your basement, but I'm guessing you're not going there ;)
 
BTW, these things are very loud and move a ton of air. Not something you want to have in your basement, but I'm guessing you're not going there ;)
I disagree. I would love to have one in my basement just for the geeky pleasure. As long as I couldn't hear it upstairs, that is. It would also deter people from wanting to come to my house, which I find to be added value.

If I really went this route, and I won't unless I'm a mega-millionaire, I would want to watercool the whole thing anyway. I would figure it out!
 
It would also deter people from wanting to come to my house, which I find to be added value.
:toast:
So it's not just me then! I feel a whole lot better.:laugh:
 
As long as I couldn't hear it upstairs, that is. If I really went this route, and I won't unless I'm a mega-millionaire, I would want to watercool the whole thing anyway. I would figure it out!
You'd definitely hear it upstairs. Our DC is probably 500-600 square feet with two huge AC units (redundancy), nine racks, four VNXs, and two Centeras (decommed, so lots of empty space right now). The whole thing screams when you open the door, which sucks for the Help Desk guys right outside of it. Even with all that noise, you could hear these when standing behind the rack (even with my crappy ears and tinnitus).

Asetek was making some watercooling options for DCs:

http://asetek.com/data-center/data-center-coolers.aspx
 
I have seen those Asetek coolers, but I would rather go full custom if possible. I also remember reading how Microsoft and other data-center companies have thought about renting out people's basements for server farms; this would heat the home. I like this idea. If I were uber-rich, I would heat my home entirely with a crunching server farm. One can dream.
 
My only regret is that I have but one thank to give.
The IBM blade chassis are somewhat modular, in the sense that the fan units, switches, etc. can be swapped out without taking everything apart. Great for redundant systems and hot-swapping. They definitely use server power cables that are bigger and beefier than your standard PC or rack-server cable. IIRC, the KVM is part of the unit (I could be mistaken), in that you hook up a monitor, keyboard and mouse (or a multiport KVM) to the chassis and then press the button on the front of a blade to assign it to the KVM. Same thing for the optical drive. There's also a Java-based management app that includes a VNC-like console, but I think there are a few things in the chassis setup that need the direct connection. Here are the specs (in case you haven't already pulled them up):
Looks like it's all required to get it working?

http://www-03.ibm.com/systems/bladecenter/hardware/chassis/bladeh/specs.html

We've been very happy with the IBM blades, and the density is nice. We can fit three chassis in a standard rack with a KVM (it goes between the three chassis) and switching, and still have some extra room. Much nicer than the 2U x35XX series we had, where we could only get about 15-17 per rack depending on what other stuff was in it.
Nicer in just the setup and general usability of it too?

HP and Dell also make blades, but we've not been as impressed. Cisco makes some UCS chassis servers that are similar in nature to the IBM Flex chassis, which gets you eight beefy servers in 6 or 8U. We have our VDI environment on Cisco and our server VM environment on IBM. I can't remember off the top of my head which blades are which, but I think there's at least one HS21 in there. (It was all there before I got into my role.)
My only concern with density was down the line if I actually get a cabinet. For now, it's probably just going to sit in the garage.
One of the chassis has some blades with S771 Xeons and others with S1366. The newer Sandy Bridge blades are in the Flex or UCS. Some of the dual-1366 blades we use have 18 RAM slots in them - 144GB easy with 8GB sticks :pimp:
That RAM seems excessive for WCG, but at least it's capable. o.0 Are you guys using anywhere near that much?

BTW, these things are very loud and move a ton of air. Not something you want to have in your basement, but I'm guessing you're not going there ;)
Uh oh. Earplugs-required loud? I was hoping its own room would at least be enough. Are they difficult to keep cool? I figure about 200-250W per blade times 14 is about 3500W, with essentially all of it ending up as heat. (Edit: I see the discussion)
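
For reference, here's my back-of-envelope on the cooling side of that; the per-blade wattage is just my guess from above:

```python
# Back-of-envelope on the heat: practically every watt a cruncher pulls from
# the wall ends up as heat in the room. Wattage is my guess from above.
BLADES = 14
WATTS_PER_BLADE = 250                   # upper end of the 200-250W estimate

total_watts = BLADES * WATTS_PER_BLADE
btu_per_hour = total_watts * 3.412      # 1 W ~= 3.412 BTU/hr
tons_of_cooling = btu_per_hour / 12000  # 1 ton of AC = 12,000 BTU/hr

print(f"{total_watts} W -> {btu_per_hour:.0f} BTU/hr -> "
      f"~{tons_of_cooling:.1f} tons of cooling")
# 3500 W -> 11942 BTU/hr -> ~1.0 tons of cooling
```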


This is starting to sound like it'd have to go in the garage, which may have humidity issues. Is this feasible anywhere other than a colo room?

(I think it's either try this or swap a Mazda BP turbo into my old Escort. I feel like something around here needs to haul ass, and it's either going to be virtual or physical.)
 
Hey guys. Do you think another 8 gigs of RAM would help my output at all? I have 8GB now: Kingston Red 1600 running at 1866, non-XMP-profile DIMMs. Also, if I do buy more, would a 16GB XMP kit be better than just buying two more 4GB sticks?
Ion is right. I run between 4 and 8GB of memory in my dedicated crunchers. I have 16GB in my main rig, and there isn't much difference in PPD between it and its twin running 8GB. Even the FX rig running 4GB of 1600 gets right around the same PPD.
 
Had some tweaking urges yesterday. Got the main rig stable at 4.4GHz @ 1.2V and ran BOINC at 100% for 6 hours while I slept. Checked it when I got up and max temps were 67°C on any core. Mind you, it was already at 4.3GHz @ 1.25V, but less power consumed with a clock bump makes me happy. I can't see it helping much more in the PPD department, but I had to do something since I can't afford the 6-core @GhostRyder has for sale and the challenge is soon.
I will check on it when I get home after work, and if all is good, I think this may be my sweet spot.
 
My only regret is that I have but one thank to give.

Looks like it's all required to get it working?


Nicer in just the setup and general usability of it too?


My only concern with density was down the line if I actually get a cabinet. For now, it's probably just going to sit in the garage.

That RAM seems excessive for WCG, but at least it's capable. o.0 Are you guys using anywhere near that much?


Uh oh. Earplugs-required loud? I was hoping its own room would at least be enough. Are they difficult to keep cool? I figure about 200-250W per blade times 14 is about 3500W, with essentially all of it ending up as heat. (Edit: I see the discussion)


This is starting to sound like it'd have to go in the garage, which may have humidity issues. Is this feasible anywhere other than a colo room?

(I think it's either try this or swap a Mazda BP turbo into my old Escort. I feel like something around here needs to haul ass, and it's either going to be virtual or physical.)

H chassis:

chassis.jpg


Fan module (top left), Brocade fiber switch (bottom left), KVM (middle) and PSU (right)

parts.jpg


Standard Ethernet switch:

switch.jpg


This is an H chassis running with about 12 of the blades in use. Some of these are S771 Xeons, while five are dual-socket E5-2660s or 2680s with 212GB or 228GB of RAM. You can see the power connectors in the top corners of the chassis (beige plugs), the dual KVMs on the right, and the standard Cisco switches on the left. These use a different fiber switch pair (located along the top and bottom) than the Brocades above; the Brocades would slide in next to the KVMs or the Ethernet switches. IIRC the blades need HBAs installed in them to use fiber, but again, I think that's not needed for you.

running.jpg


I took a short video of this running, but it's a 14MB file and I don't think I can attach it to this post. I could email it to ya ;)

Putting yours in the garage might be possible if you don't have to worry about humidity and can vent the exhaust outside somehow. How close are your neighbors? :laugh:

The IBMs are just better overall as far as hardware, support, tools, etc. As for the RAM question: yes, we can utilize more than what we have. Most of the VMs we run are low on proc usage, but RAM utilization is high, even on the 228GB'ers above (we try not to go over 70% RAM per host due to VMware HA).
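
To put that 70% rule of thumb in numbers (my rough framing, not official VMware guidance):

```python
# Rough illustration of that 70% rule of thumb (my framing, not official
# VMware guidance): leave headroom so a failed host's VMs can restart on
# the surviving blades.
HOST_RAM_GB = 228       # per-blade RAM, from the post above
TARGET_UTIL = 0.70      # keep allocated VM RAM under ~70% per host

usable_gb = HOST_RAM_GB * TARGET_UTIL
headroom_gb = HOST_RAM_GB - usable_gb
print(f"Plan for ~{usable_gb:.0f} GB of VM RAM per blade, "
      f"leaving ~{headroom_gb:.0f} GB as HA failover headroom.")
# Plan for ~160 GB of VM RAM per blade, leaving ~68 GB as HA failover headroom.
```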
 
I'll be passing FIH The Don in the team stats soon. :fear: So I gotta know, do I need to bring a Pepsi or something?
 
LOL, that's a great commercial!


That is really great stuff right there, very awesome.



My brother and I are always in this ongoing, never-ending battle of Coke vs. Pepsi, and I have to say that Coke takes it for me. The only thing I will give to Pepsi is their Wild Cherry Pepsi, as it's very good, even better than Cherry Coke. However, other than that, Pepsi is buuuuuh :laugh:
 
New setup, ready to go for the Challenge! :toast:
x5672.png
 
sweet man!!!!

Just got my 4770K running now! Working on fan control from the BIOS now. It's running low and I have the same temp at stock as at full speed... xD
 
sweet man!!!!

Just got my 4770K running now! Working on fan control from the BIOS now. It's running low and I have the same temp at stock as at full speed... xD
It's a pretty nice setup--running in the mid-60s °C while crunching, not too loud, and with 16 threads going at once (and at 3.46GHz) it ought to do a good job. The first batch of WUs ought to finish sometime overnight, so I can probably get a (very rough) PPD estimate on it as soon as tomorrow :toast:
I'm still installing Windows Updates on it now--not exactly helped by the 5400RPM HDD :ohwell:

Either way, for just over $200 it's a winner :toast:

EDIT: A very tentative just-over-9k PPD (based on a single WU, so not a very good estimate, but it's the best I have ATM). That means it pretty much ties with the 3930k @ 4.5GHz as my second-best cruncher (behind the Quad Opty setup).
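
For anyone curious how I ballpark PPD from a single WU, it's basically this; the per-WU credit and runtime below are made-up illustrative values, not this rig's actual results:

```python
# How I ballpark PPD from one finished WU. The per-WU credit and runtime are
# made-up illustrative values, NOT this rig's actual results.
THREADS = 16            # threads crunching at once, per the post above
CREDIT_PER_WU = 90.0    # assumed BOINC credit granted per WU
HOURS_PER_WU = 3.8      # assumed runtime of one WU on one thread

wus_per_day = THREADS * 24 / HOURS_PER_WU
ppd_estimate = wus_per_day * CREDIT_PER_WU
print(f"~{wus_per_day:.0f} WUs/day -> ~{ppd_estimate:.0f} PPD")
# ~101 WUs/day -> ~9095 PPD
```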
 