# AM1 Athlon 5350 FreeNAS file server



## TRWOV (Jun 5, 2014)

So I've been tasked with overhauling the office "server" (an Iomega Home Media unit) after its HDD failed. Thankfully I had an "unofficial" backup and managed to avert disaster.

So they said, "here are 10,000 pesos, get me a server" and I thought, OK, FreeNAS it is...

Target size is 2TB. We had 360GB of data on the "server" after 5 years of operation, so 2TB should last a lifetime.

Parts list:

ASUS AM1I-A mITX
Athlon 5350
2x8GB DDR3-1333
5 x 1TB Black WD drives (4 for RAIDZ2, 1 spare) 
Syba 4 port SATA card (SIL3124)
Samsung 2.5" 16GB MLC SSD (ZIL drive)
Sparkle FSP-400GHS 400w SFX PSU 80+ bronze
Acteck BERN mATX low profile case.
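A quick back-of-the-envelope check on that sizing (a sketch; the growth rate is just the old box's 5-year average):

```python
# Back-of-the-envelope sizing check for the new NAS.
# Historical data: 360 GB accumulated over 5 years on the old "server".
yearly_growth_gb = 360 / 5  # ~72 GB/year average

# RAIDZ2 spends two drives' worth of space on parity, so with 4 x 1 TB
# in the pool, usable space is (4 - 2) x 1 TB = 2 TB.
drives, drive_tb, parity = 4, 1, 2
usable_tb = (drives - parity) * drive_tb

years_to_fill = (usable_tb * 1000 - 360) / yearly_growth_gb
print(f"usable: {usable_tb} TB, ~{years_to_fill:.0f} years until full")
```

At the historical rate, "a lifetime" isn't far off.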

I got a few parts today:







I'll keep you posted


----------



## Sinzia (Jun 5, 2014)

subb'd, I love freeNAS builds!


----------



## bencrutz (Jun 5, 2014)

subbed too


----------



## H3LLSMAN (Jun 5, 2014)

Cool


----------



## TRWOV (Jun 5, 2014)

Reading up on some FreeNAS threads and their own Hardware Recommendations page, I think I don't actually need a ZIL drive: sharing will be over SMB (Windows environment), and a ZIL drive is recommended for NFS or for heavy I/O. Maybe I'll use the SSD for the OS, but I'll decide that after the tests.


----------



## TRWOV (Jun 15, 2014)

I received the board and SATA card over the week:







So I started with the build yesterday.



Here's the Athlon 5350. The heatsink is very small, almost comically so.







AM1. Some assembly required.




The heatsink doesn't come with the push pins installed. You have to put them in.





Make sure to put them in correctly.

Now let's install the CPU:










I don't know if I was doing something wrong, but pushing the pins into the motherboard mounting holes was way harder than I expected. The easiest way I found was seating one pin and then the other... it was either that or grow a third hand.

Doesn't seem too secure, does it?





Everything is in place...






Now to mod the case.....


----------



## _JP_ (Jun 15, 2014)

Awesome pick for a NAS! Subscribed.
Could you please confirm whether the cooler pins are 85mm apart? (If you have the time, of course.)
I'm trying to find out if any cooler mods are possible, but I haven't found a reliable source for the distance between the pin holes of the retention system.
I'm thinking a Nofan CR-80 could be adapted to it.


----------



## newtekie1 (Jun 15, 2014)

TRWOV said:


> I don't know if I was doing something wrong or what but pushing the pins into the motherboard mounting holes was way harder than I thought. Easiest way that I found was putting one and then the other... it was either that or grow a third hand.



Yeah, it is definitely a pain.  That is the way I've been doing it too, haven't really had a problem.  Though, once installed properly, it definitely is secure.  I could lift the entire system up by the heatsink and it wouldn't budge.



_JP_ said:


> Could you please confirm if the cooler pins are 85mm apart? (If you have the time, of course)



According to AMD's tech doc on the cooler design, they should be 85mm apart (well, 84.9, but close enough).


----------



## TRWOV (Jun 15, 2014)

Modding the case.


The Acteck Bern case is meant to be used with a vertical stand. Since this thing will weigh a lot due to the HDDs, I wanted to keep it flat. Unfortunately, that meant blocking the PSU intake.





It comes with an el-cheapo 500w(?) SFX PSU which will, of course, be discarded.






To the spares bin!!!





I had these feet from another case gathering dust.

The front feet were easy, since I used the holes from the side (now bottom) vent:





After some measuring and drilling:


----------



## TRWOV (Jun 15, 2014)

Now onto the SATA card:


The card comes with a full height bracket. Thankfully I had a low profile bracket lying around.





It doesn't fit completely but should do the trick:





It has a SIL3124 controller, which seems to be fully supported by FreeNAS 9.x (according to the FreeBSD 9.2 hardware notes: http://www.freebsd.org/releases/9.2R/hardware.html ).


----------



## TRWOV (Jun 15, 2014)

Tying up and some problems...

I had this lying around doing nothing so I thought I'd use it for this build:






but after a quick test fitting it became apparent that there would be a problem connecting the hard drives:






So I just took the HDD cage from it:






Better! Securing it was a bitch. Had to drill several holes to get it right.


The FSP PSU that will be powering this build:





It doesn't have 80+ certification but FSP states that it meets 80+ Bronze.

Since I couldn't use the Evercool Armor's fan bracket, I removed the PCI covers and affixed an 80mm fan with double-sided mounting tape:











Wrapping up...





What a mess. I need shorter and thinner cables.

Ok, now let's install FreeNAS. Burn CD, blah, blah, blah. Reboot.





It sure takes its sweet time. I'm used to my unRAID home server being up and running in a minute. I don't know if it's because it was the first startup, but it took about 5 minutes for the main screen to display.

Closing up:






I'll take the build on site tomorrow and make some tests before deploying.


----------



## Sinzia (Jun 15, 2014)

Looks good! I may have to build something similar... What's the model number of the raid card, and are you doing JBOD and letting FreeNAS do the magic, or are you using the on-card raid?


----------



## TRWOV (Jun 15, 2014)

I'm using the card in non-RAID mode (no logical volumes), just to get around the AM1 platform's lack of SATA ports.

This is the card:
http://www.newegg.com/Product/Produ...4027&cm_re=syba_4_sata-_-16-124-027-_-Product

Remember that it doesn't come with a low profile bracket, even though the listing says "Low profile ready".



One last thing:





There. Perfect.


----------



## newtekie1 (Jun 15, 2014)

I'm interested in what's under the heatsink of that card. The SIL chip handles all the data processing; it doesn't have a separate onboard chip for RAID calculation, and it doesn't seem to have any onboard cache that would need cooling either... so what is under that heatsink? I wonder if they just slapped a heatsink over essentially nothing to make the card look like some of the higher-end cards...


----------



## TRWOV (Jun 15, 2014)

I assume it's a PCI-to-PCIe bridge, since the SIL3124 is a PCI controller. I don't mind, as I'm sure the 250MB/s of bandwidth will be enough for 7200RPM HDDs over a gigabit connection, more so with ZFS overhead lowering throughput.

FreeNAS picks up the drives just fine:


----------



## TRWOV (Jun 16, 2014)

The build is hitting 65w at idle, which is... kind of high for what I wanted (I was expecting ~45w). Maybe I should have gone with a Sempron 3850?

I guess I'll fiddle with the BIOS settings and see if I can turn down the GPU frequency and such. I think I'll run some tests with the CPU at 1.3GHz to emulate a Sempron and see what I get.


----------



## Aquinus (Jun 16, 2014)

TRWOV said:


> The build is hitting 65w on idle which is... kind of high for what I wanted (I was expecting ~45w). Maybe I should have gone with a Sempron 3850?
> 
> I guess I'll fiddle with the bios settings and see if I can turn down the GPU frequency and such. I think I'll make some test with CPU freq at 1.3Ghz to emulate a Sempron and see what I get.



If all 5 WD blacks are spun up, I wouldn't be surprised to see the drives alone pulling 35 to 40-watts, plus the RAID controller pulls a little bit, plus memory pulls a little bit, plus there are losses on the PSU and on the VRMs. 65w for the entire rig isn't all that bad IMHO.

I'm assuming you measured power draw at the wall? If so, the actual DC usage is probably closer to 50-55 watts, which doesn't sound unrealistic for what you're doing with the machine.


----------



## TRWOV (Jun 16, 2014)

Yep, at the wall. I dunno, maybe I was being too optimistic. The drives are rated for 10w (5v @ 0.68A, 12V @ 0.55A) and I could only fit 4 of them, so I was expecting ~65w during read/write operation. The system spikes to 80w at startup; maybe that's what I'll get in operation.


----------



## Aquinus (Jun 17, 2014)

Perhaps, but don't forget that the PCI-E card draws power too, and the CPU can't power-gate the PCI-E root complex while it's in use. PCI-E x1 low-profile is rated for 10 watts, and "high-power" (standard for x4, x8, and x16) is 25 watts, so it's not unrealistic to expect the SATA card to be drawing some of that load.

Even if all the drives were consuming between 5 and 10 watts a pop, that's 20 to 40 watts (I thought all 5 were powered up), plus 10 to 25 watts max for the card, plus 5-10 watts for the RAM, and a 20-watt TDP on the CPU, which *should* idle at a fraction of that. So worst case the rig might draw 95 watts, which seems high to me, but that's the upper bound of what each device might draw. On the other hand, the drives might consume as little as 20 watts for all 4, plus 5-10 watts for the card, 5 for RAM, and 5 for the CPU at idle, which totals 35-40 watts as a lower bound.

So with a ~95-watt upper bound, a ~40-watt lower bound, and a realistic draw at the wall of 65w, taking into account efficiency losses (15%, assuming just the PSU?), the real power draw you're seeing is ~55 watts. I'd place that on the lower end of the 40-95 watt range, which sounds entirely realistic to me from a number-crunching perspective.
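The same arithmetic as a quick sketch (the per-component watt figures are this thread's estimates, not measurements, and the 15% PSU loss is an assumption):

```python
# Rough idle power budget for the NAS, in watts, using the thread's
# per-component estimates for lower and upper bounds.
low  = {"hdds (4x)": 20, "sata card": 5,  "ram": 5,  "cpu idle": 5}
high = {"hdds (4x)": 40, "sata card": 25, "ram": 10, "cpu tdp": 20}

wall_watts = 65          # measured at the wall
psu_efficiency = 0.85    # assume ~15% loss in the PSU at this load

dc_watts = wall_watts * psu_efficiency
print(f"lower bound: {sum(low.values())} W, upper bound: {sum(high.values())} W")
print(f"estimated DC draw: {dc_watts:.1f} W")
```

~55W DC lands near the bottom of the 35-95W envelope, which is the point above.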


----------



## newtekie1 (Jun 17, 2014)

That is one of the problems with using WD Black drives: they have basically the worst idle power usage. And they'll actually spike higher than 10w at startup, because drives draw far more power while spinning up. That's why most RAID controllers that support more than 4 drives offer, or at least give the option for, staggered spin-up. I would have gone with Red drives for this application, though. They're rated for a max of only 3w, and they officially support RAID.

Plus, at 65w you're only at about 15% load on that power supply, which means its efficiency is probably in the 75% range, putting the real power usage around 45w. Given the hard drives, that's about as good as you're going to get.


----------



## TRWOV (Jun 17, 2014)

yeah, I guess I'll look into getting a 200-250w PSU. Thankfully FSP has that covered too: http://www.sparklepower.com/pdf/PC/FSP200-50GSV-5K.PDF

About Reds: they cost a ton here for some reason (put "Enterprise" in front of something and charge $100 more). I went with Blacks because of their supposedly higher reliability over Blues. I'm using RAIDZ2, so whether or not they officially support RAID is a moot point.

Fiddling around in FreeNAS, it looks like there are some power saving options; I'll have to read up on those so as to not fuck things up, and in the meantime I'll lower the GPU frequency. That should shave off a couple of watts. Not that I'm trying to break records with this thing, but I'd like it to be as efficient as possible.


----------



## _JP_ (Jun 17, 2014)

TRWOV said:


> Fiddling around on FreeNAS it looks as there are some power saving options;


I was going to ask if you had played with those in the meantime. Late to the party... it's becoming a trend for me...
What that option does is enable the powerd algorithm to cycle through the P-states relative to CPU load.
I think the default for FreeNAS is "adaptive", which works a lot like Windows does with C'n'Q/SpeedStep active.
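For reference, the underlying knobs are FreeBSD's powerd settings; a minimal rc.conf sketch of the equivalent (the FreeNAS GUI sets these for you, so hand-editing is just for illustration):

```
# /etc/rc.conf (sketch) -- let powerd scale P-states with CPU load
powerd_enable="YES"
powerd_flags="-a adaptive"   # mode while on AC power; also: minimum, maximum, hiadaptive
```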


----------



## TRWOV (Jun 19, 2014)

Just in time, Ars Technica has a FreeNAS vs NAS4Free shootout: http://arstechnica.com/information-...as-distribution-shootout-freenas-vs-nas4free/

It almost made me go with NAS4Free for the simplified GUI (I've only set up two unRAID media servers so far), but I guess the snapshot functionality will be extremely handy to have, so FreeNAS stays.


----------



## ne6togadno (Jun 19, 2014)

http://arstechnica.com/information-...-and-atomic-cows-inside-next-gen-filesystems/


----------



## TRWOV (Jun 30, 2014)

I forgot to share the numbers 

It looks like the build was extremely overkill 




14GB of free RAM. I think I won't need the SSD in there after all.


Reads go up to ~90MB/s; writes top out at ~60MB/s.












I don't know if it's ZFS or the SATA card limiting the write speed, but it's a 2x improvement over the previous solution, whose maximum write speed was in the low 30s.
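For context, a rough sketch of where those figures sit against the gigabit wire speed (the ~7% protocol overhead is an assumption, not a measured value):

```python
# Theoretical ceiling for SMB transfers over gigabit Ethernet.
line_rate_mbps = 1000                    # 1 Gb/s link
raw_mb_per_s = line_rate_mbps / 8        # 125 MB/s on the wire
overhead = 0.07                          # assume ~7% Ethernet/IP/TCP/SMB overhead
ceiling = raw_mb_per_s * (1 - overhead)  # realistic best case

read_mb_s = 90
print(f"ceiling ~{ceiling:.0f} MB/s; reads hit {read_mb_s / ceiling:.0%} of it")
```

So reads are already within striking distance of the wire; writes are where the pool (or the card) is the bottleneck.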


----------



## Aquinus (Jun 30, 2014)

Are those transfers over gigabit on Samba?


----------



## TRWOV (Jul 1, 2014)

yes


----------



## Nordic (Jul 1, 2014)

I would call it headroom to grow into rather than overkill. But yes, overkill.


----------



## TRWOV (Jul 4, 2014)

Looks like it wasn't as overkill as I thought; on large transfers, free RAM goes down to 6GB.


----------



## newtekie1 (Jul 4, 2014)

I believe that's because FreeNAS uses RAM as a write buffer. The 5350 can't calculate the parity fast enough, so data waiting to be written piles up in RAM. Better hope you don't have a power outage.
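If you did want writes to reach the disks sooner, ZFS has knobs for it; a hedged sketch (the pool/dataset name is hypothetical, and sync=always costs a lot of write throughput):

```
# Force synchronous semantics on a dataset (hypothetical name "tank/office"):
zfs set sync=always tank/office

# Or flush transaction groups more often than the 5-second default (FreeBSD):
sysctl vfs.zfs.txg.timeout=1
```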


----------



## TRWOV (Jul 4, 2014)

I have an UPS


----------



## newtekie1 (Jul 4, 2014)

I'd still find a way to disable the RAM cache, especially since you aren't using ECC memory.


----------



## TRWOV (Jul 5, 2014)

You jinx!!! We just had a blackout!!!  Scrubbing now... I was going to do it tomorrow anyway.


----------



## newtekie1 (Jul 5, 2014)

Going back to your power consumption issue: you might try undervolting the processor. I just put together a very similar build with a 5350 and an AM1M-A. It doesn't have any hard drives in it yet, just two SSDs in RAID1 for the OS, but it's only pulling ~25w idle from the wall. However, that is with a 650w 80+ Gold power supply.

I was able to raise the CPU multiplier to 21 and still lower the voltage by 0.1v. That doesn't sound like a lot, but it's about a 20% drop in voltage. I could probably drop it by 0.15v, but I haven't tried yet. And if you aren't too concerned with performance, you could drop the multiplier and lower the voltage even further. Heck, drop the multiplier to 13 and lower the voltage, and you'd basically have a 3850.


----------



## TRWOV (Jul 6, 2014)

Nah, I think I'll leave it at that. I wouldn't want to risk system stability over a couple of watts; I think getting a 200w PSU would be more beneficial. I'll surely slow down the graphics core, though.


----------



## newtekie1 (Jul 6, 2014)

Yeah, I'm running mine through 24 hours of OCCT to make sure it is stable.


----------



## messerchmidt (Dec 28, 2014)

The ASUS board allegedly supports ECC RAM, why didn't you use it? Is FreeNAS 9.3 working OK?

ZFS is a RAM pig; it uses about 1GB per 1TB of storage space, and it uses ECC to self-heal.
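That rule of thumb applied to this build, as a sketch (the 1GB-per-TB figure is a community guideline rather than a hard requirement, and the 8GB baseline is FreeNAS's commonly cited minimum):

```python
# Common ZFS sizing guideline: ~1 GB of RAM per 1 TB of raw storage,
# on top of a baseline for the OS and ARC to breathe.
baseline_gb = 8          # commonly cited FreeNAS minimum
raw_storage_tb = 4       # 4 x 1 TB in the pool
recommended_gb = baseline_gb + raw_storage_tb * 1

print(f"suggested RAM: {recommended_gb} GB (this build has 16 GB)")
```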


----------



## TRWOV (Dec 28, 2014)

The only board with "confirmed" (as in, a few guys say it works) ECC support is the AM1M-A, and it seems to be picky about which DIMMs it accepts. Only a few guys have tried ECC injection tests (and failed), so memtest reporting ECC support might be a bug.


----------



## messerchmidt (Dec 28, 2014)

I ended up going Xeon 1230v3 + Supermicro + ECC because I was a bit worried about data failures. Overkill and a power pig. I wish AMD had released their vaporware Opteron X1150 to the masses.


----------



## messerchmidt (Jan 7, 2015)

The way we test for ECC on the Intel platform does not work with current AMD systems, according to the people on the FreeNAS forum. Without being able to verify it, and given how picky ZFS apparently is, they suggest sticking to Intel.


----------

