# Silent Plex/Backup Server



## bermel72 (Apr 14, 2016)

I need a Plex server in my life lol. I fell in love with Plex after playing around in the Play Store on my OPO one day; I downloaded it, found it interesting, and later that night installed it on my laptop as a server. Now I stream all my music from my laptop to my phone and my PS4 almost every day. I love the fact that you can stream music outside of your home network as well. Anyways...

As I said previously, I need a more dedicated Plex server. I don't want to spend a lot of money on it, maybe $500 at the most or somewhere around that (USD of course). It needs to do a couple of things.

1. Needs to store my music
2. Needs to store my videos
3. One separate drive for my computer backups (laptop: 500GB SSD, my desktop: 500GB SSD, fiancée's computer: 500GB SSD)
4. I need it to stream at the most to 4 different devices.
5. My internet is 90 Mbps download and 3 Mbps upload; will that be fast enough to use inside my network?
6. It needs to be as silent as possible, since it will be sitting right under my desk; the more compact the better.

Thanks.


----------



## puma99dk| (Apr 14, 2016)

How many devices do you need to stream to at the same time?

I run a Plex server myself on my gaming rig that my sis has access to, on a 90/90Mbit connection, and she streams at original bitrate, meaning no quality loss over the original file. It runs fine even while I'm gaming; she hasn't complained.


----------



## bermel72 (Apr 14, 2016)

puma99dk| said:


> How many devices u need to stream to at the same time?
> 
> I run a Plex server myself on my gaming rig that my sis has access to, on a 90/90mbit connecting and she steams in original bitrate, that means no quality loss over the original file and it runs fine even with me gaming she haven't complained.



Like 3 devices at the most.


----------



## N-Gen (Apr 14, 2016)

My plex file server is listed in my sig (Basilisk) and it's enough for up to 4 simultaneous HD streams on Plex. 5th one is hit or miss, might work might not. I mainly use it to stream to Raspberry Pi devices although any media I watch on my PCs is also via Plex. It's also hosting all of my photos and their backups, the drives are almost full so throughput internally isn't stellar, so to speak.

Total cost is very cheap, the most expensive parts were the Node 804 at €110 and the EVGA Supernova G2 550W Gold at €100, 8GB RAM was about €44, board was ~€40, HDDs, well you can pick HDDs. OS runs off a 16GB flash drive. So yeah even with my parts (exc. HDDs) you'd be way under budget. Power draw is relatively low, I'm getting on average 55w at the wall with 5 HDDs spinning.

Adding to this: Your internet connection measures what you can receive/send outside of your network. You're going to either have a 100Mbps connection or 1Gbps connection within your house, even newer WiFi is >300Mbps so you're more than safe. My Raspberry Pi devices stream over wireless N so 300Mbps max and I have had no issues whatsoever.


----------



## Frick (Apr 14, 2016)

@N-Gen What is the bottleneck? I'm assuming CPU? Is there a need for that much memory?


----------



## N-Gen (Apr 14, 2016)

The bottleneck should be the CPU, yes. There is no need for 8GB of RAM; in fact the system was built around 4GB, but I found a new matching stick on Amazon for €22 so I couldn't pass it up. Sometimes the system uses close to the full 8GB, but that's related to ZFS rather than Plex. Bear in mind there's 20TB (17TB after RAIDZ) worth of HDD space with only about 4TB free in there, and RAM helps a lot in that kind of file system. If you built a FreeNAS machine you'd be told to use 1GB of RAM per 1TB of data (which isn't necessarily true; it's a bit more complicated than that).


----------



## BigPaPaRu (Apr 14, 2016)

I use a 3-year-old computer as my server. Nothing special: a POS GT 620, two 1TB hard drives, 8GB of memory, and Windows 10. I have it running Plex, Kodi, uTorrent, Netflix, Amazon Video, and VLC. It is silent, always on, and has no problem streaming to multiple people all day.


----------



## newtekie1 (Apr 14, 2016)

bermel72 said:


> I have an internet that is 90 MB/s download and 3 MB/s upload, will that be fast enough to use inside my network.



That won't have any effect on streaming inside your network, only streaming outside your network.  And 3 Mbps upload should be enough for streaming to one or two devices outside your network without a problem.
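As a rough sanity check on that, here's the arithmetic (the bitrates are illustrative assumptions, not Plex defaults; real transcode targets vary):

```python
# Rough estimate of simultaneous remote streams an upload link can carry.
# Bitrates are illustrative assumptions, not measured Plex values.

def max_streams(upload_mbps: float, stream_kbps: float) -> int:
    """How many streams of a given bitrate fit in the upload pipe."""
    return int(upload_mbps * 1000 // stream_kbps)

# 3 Mbps upload, music at ~320 kbps:
print(max_streams(3, 320))   # 9 music streams
# 3 Mbps upload, SD video transcoded down to ~1.5 Mbps:
print(max_streams(3, 1500))  # 2 video streams
```

So music outside the network is easy; video will be limited to a couple of heavily transcoded streams.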

My suggestion:
http://pcpartpicker.com/p/7hPwMp


----------



## alucasa (Apr 14, 2016)

A NUC makes a decent Plex server, plus a 2TB 2.5" HDD.


----------



## bermel72 (Apr 14, 2016)

What about this:

PCPartPicker part list / Price breakdown by merchant

*CPU:* AMD FX-8350 4.0GHz 8-Core Processor  ($149.99 @ Newegg)
*CPU Cooler:* NZXT Kraken X61 106.1 CFM Liquid CPU Cooler  ($120.39 @ NZXT)
*Motherboard:* Asus M5A78L-M/USB3 Micro ATX AM3+ Motherboard  ($56.98 @ Newegg)
*Memory:* G.Skill Ripjaws X Series 8GB (2 x 4GB) DDR3-1866 Memory  ($48.99 @ Newegg)
*Storage:* Samsung 850 EVO-Series 120GB 2.5" Solid State Drive  ($68.99 @ Amazon)
*Storage:* Hitachi Deskstar NAS 4TB 3.5" 7200RPM Internal Hard Drive  ($159.99 @ Newegg)
*Case:* Fractal Design Node 804 MicroATX Mid Tower Case  ($115.55 @ Amazon)
*Power Supply:* EVGA SuperNOVA G2 550W 80+ Gold Certified Fully-Modular ATX Power Supply  ($71.49 @ Newegg)
*Keyboard:* Logitech Wireless Combo MK270 Wireless Standard Keyboard w/Optical Mouse  ($20.95 @ Amazon)
*Total:* $813.32
_Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2016-04-14 11:47 EDT-0400_


----------



## N-Gen (Apr 14, 2016)

bermel72 said:


> What about this:
> 
> PCPartPicker part list / Price breakdown by merchant
> 
> ...



You're only going to be retrieving files from it, an FX-8350 is way overkill for your needs.


----------



## bermel72 (Apr 14, 2016)

N-Gen said:


> You're only going to be retrieving files from it, an FX-8350 is way overkill for your needs.



I'm sorry, I don't follow what you are saying. It may be doing transcoding when streaming the videos and music. Do you think an FX-6300 would be better? I looked on PassMark and the FX-8350 rates higher than the i5 and lower than the i7, so I thought it was a good tradeoff.


----------



## newtekie1 (Apr 14, 2016)

bermel72 said:


> What about this:
> 
> PCPartPicker part list / Price breakdown by merchant
> 
> ...




I thought the budget was $500?  If you are willing to spend that much, give me a few and I'll put something together.

Edit:  This would be better.

PCPartPicker part list / Price breakdown by merchant

*CPU:* Intel Core i5-6500 3.2GHz Quad-Core Processor  ($194.99 @ Newegg) 
*CPU Cooler:* Cooler Master Hyper 212X 82.9 CFM CPU Cooler  ($39.99 @ Newegg) 
*Motherboard:* ASRock B150M Pro4S Micro ATX LGA1151 Motherboard  ($78.99 @ Newegg) 
*Memory:* Corsair Vengeance LPX 8GB (1 x 8GB) DDR4-2400 Memory  ($27.99 @ Newegg) 
*Storage:* Patriot Blaze 120GB 2.5" Solid State Drive  ($42.99 @ NCIX US) 
*Storage:* Seagate Archive 8TB 3.5" 5900RPM Internal Hard Drive  ($214.99 @ B&H) 
*Case:* Fractal Design Node 804 MicroATX Mid Tower Case  ($79.99 @ SuperBiiz) 
*Power Supply:* EVGA 500W 80+ Bronze Certified ATX Power Supply  ($34.99 @ NCIX US) 
*Keyboard:* Logitech K400 Plus Wireless Mini Keyboard w/Touchpad  ($29.69 @ SuperBiiz) 
*Total:* $744.61
_Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2016-04-14 12:43 EDT-0400_


----------



## JunkBear (Apr 14, 2016)

bermel72 said:


> I need a plex server in my life lol, I fell in love with plex after I was playing around in the play store on my OPO one day, downloaded it found it interesting and later that night installed it on my laptop as a server, now I stream all my music from my laptop to my phone and my ps4, almost everyday. I love the fact you can stream music outside of your home network as well. Anyways
> 
> As I said previously I need a more dedicated plex server, I don't want to spend alot of money on it maybe like $500 at the most or somewhere around that (USD of course). It needs to do a couple things.
> 
> ...



Too bad you are in the USA. I just got my hands on a brand new Lenovo ThinkCentre M73 SFF with a 500GB HDD, 4GB DDR3, and an i5-4430. The full setup including screen, keyboard and mouse is $350 Canadian.


----------



## bermel72 (Apr 14, 2016)

newtekie1 said:


> I thought the budget was $500?  If you are willing to spend that much, give me a few and I'll put something together.
> 
> Edit:  This would be better.
> 
> ...



This setup should run FreeNAS, correct? I would rather spend a little bit more and be able to run FreeNAS as a headless operating system than sit there and deal with Windows. Also, do you know if, with FreeNAS installed, there's any way to access documents outside of your home fairly easily?

Basically what I want from this system (if possible on FreeNas)

1. Plex Media Server (Don't plan on upgrading for a couple years with this so it needs to be stable).
2. Backup service (laptop: 500GB SSD, 2x desktops: 500GB SSD each, plus my media on the server and an extra TB for my important documents)
3. To access files, music, and movies outside my home network from my phone's mobile network (really important).
4. I need it silent so it can sit next to me while I'm on my computer, and it will be wired via Ethernet.

Thanks.


----------



## N-Gen (Apr 14, 2016)

While you can run FreeNAS, you will not get support from the FreeNAS community if you don't follow their basic guidelines, which are 1GB of *ECC* RAM per 1TB of data, with a minimum of 8GB ECC. While this is not required by FreeNAS or FreeBSD, it seems to be required by the community. They will refuse to support you on the basis of "you didn't follow the guidelines".

You can install ownCloud on FreeNAS to access files externally.


----------



## Cybrnook2002 (Apr 14, 2016)

For your requirements, you are PERFECT to run unRAID (the same as I do, 2x actually). It is at its core a media server with parity protection (soon to have dual parity), but it also supports VMs and "Docker" containers. They have a pre-built Plex Docker image; all you do is download it, do some basic config (there is a video you can follow), and off it goes.

You would need to buy the license. But I have another setup that is perfect for unRAID, it is:

Q6600
SuperMicro MB (http://www.newegg.com/Product/Product.aspx?Item=N82E16813182151)
4GB (2x2) DDR2
----------------------------
I would sell these for $125 shipped. The HDDs are on you to get; the board supports 6.

This is more than enough to be a file server, host your shares, run backups, and host your Plex app for streaming.


----------



## newtekie1 (Apr 14, 2016)

bermel72 said:


> This setup should run freenas on it correct? I would rather spend a little bit more and be able to run freenas as a headless operating system then sit there and deal with windows as an operating system. Also do you know if you install FreeNas if theres anyway to access documents outside of your home fairly easily?



It should run FreeNAS, but I wouldn't.  FreeNAS used to be nice, but all the crap now, plus a community that has turned from a helpful source of information into a bunch of stuck-up pre-teens who think their shit don't stink, has made it not worth it.

I just run Windows headless...


----------



## bermel72 (Apr 14, 2016)

newtekie1 said:


> It should run FreeNAS, but I wouldn't.  FreeNAS used to be nice, but all the crap now, and the community has turned from a helpful source of information to a bunch of stuck up pre-teens that think their shit don't stink, has made it not worth it.
> 
> I just run Windows headless...



I'm not against that, but I would like some sort of HDMI switch, and I can't find any that will support a 1440p 144Hz monitor. I would really like to get things set up so that I can switch between the two systems if there is ever any sort of problem. Long story short, if that doesn't work I could always use something like TeamViewer as a remote control to run everything, but that's a pain in the you know what.

*Edit:*

I just realized I am completely overthinking this. This is the monitor I am using. That being said, I should be able to run my main computer on the DisplayPort input and my server on the HDMI input, meaning I could just switch inputs on the monitor and not need a switch at all, correct?


----------



## newtekie1 (Apr 14, 2016)

bermel72 said:


> I'm not against that but I would like to have some sort of HDMI switch, but I can't find any that will support a 1440p 144Hz monitor. I just would really like to get things setup and be able to switch between the two systems if there is ever any sort of problem. I suppose long story short if that doesn't work I could always use something like Team Viewer as a remote control source to run everything but that's a pain in the you know what.



Just use Remote Desktop.

Or just connect the server to one of the extra inputs on your monitor, and use the monitor to switch.  Who cares if the server is connected with VGA/DVI and only runs at 1920x1080? You won't be seeing it enough for it to matter.


----------



## bermel72 (Apr 14, 2016)

newtekie1 said:


> I thought the budget was $500?  If you are willing to spend that much, give me a few and I'll put something together.
> 
> Edit:  This would be better.
> 
> ...



That's pretty impressive that you got the 8TB drive in there. This will surely work, so back to my previous question: will running Windows 7 Ultimate on this work? Also:

1. Would I be able to back up multiple computers over the internet on this? And I'm assuming you just set it up like a normal backup process?
2. Would I be able to access files outside my home network, and how would this work?


----------



## newtekie1 (Apr 14, 2016)

bermel72 said:


> will running Windows 7 Ultimate on this work?



Yep, that is what I used on my home server for years.



bermel72 said:


> Would I be able to back up multiple computers over the internet on this? And I'm assuming you just set it up like a normal backup process?



Over the internet or over your local network?  Backing up multiple computers over the local network to your server is easy: just do it like a normal backup, but select the network drive as the backup location.

Getting computers to back up over the internet is quite a bit harder.
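For what it's worth, a local-network backup really is just a copy job to the share. A minimal sketch in Python (the paths are hypothetical examples; Windows Backup or any dedicated tool does the same thing with more polish):

```python
# Mirror a folder into a dated folder on a network share.
# The example paths in the comment below are hypothetical.
import shutil
from datetime import date
from pathlib import Path

def backup(source: str, dest_root: str) -> Path:
    """Copy the source folder to <dest_root>/<YYYY-MM-DD>/<folder name>."""
    target = Path(dest_root) / date.today().isoformat() / Path(source).name
    shutil.copytree(source, target, dirs_exist_ok=True)
    return target

# e.g. backup(r"C:\Users\me\Documents", r"\\SERVER\backups\laptop")
```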



bermel72 said:


> Would I be able to access files outside my home network, and how would this work?



Yes.  You can set up FTP server software (FileZilla Server is free) on your server.  Then you can access your files over the internet through the FileZilla client.  If you want to do things like play music/movies over the internet, Plex is your answer there.
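For anyone curious what that looks like programmatically, Python's built-in `ftplib` can pull a file from a FileZilla Server (or any FTP server); the host, login, and paths here are placeholders:

```python
# Download one file from a home FTP server.
# Host, credentials, and paths are placeholders, not real values.
from ftplib import FTP

def fetch(host: str, user: str, password: str, remote: str, local: str) -> None:
    """Log in and retrieve a single remote file in binary mode."""
    with FTP(host) as ftp:                 # connects on port 21 by default
        ftp.login(user, password)
        with open(local, "wb") as f:
            ftp.retrbinary(f"RETR {remote}", f.write)

# fetch("myserver.example.com", "bermel", "secret", "music/track.mp3", "track.mp3")
```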


----------



## bermel72 (Apr 14, 2016)

newtekie1 said:


> Yep, that is what I used on my home server for years.
> 
> 
> 
> ...



Perfect. Backups would just be within my own network, and FileZilla would work, along with Plex. Cool, I know what I'm going to build now. Thanks so much; I am stealing the build you came up with too.


----------



## bermel72 (Apr 14, 2016)

newtekie1 said:


> Yep, that is what I used on my home server for years.
> 
> 
> 
> ...



Do you know if FileZilla is supported on Android, or what a good app is that works with FileZilla Server?


----------



## N-Gen (Apr 14, 2016)

I wouldn't run the 8TB alone without a backup unless it's used as intended: cold storage. If it's critical data, don't leave it just on the 8TB; have an extra copy. You should be doing this REGARDLESS of what drive you're using for data, not just the 8TB.

I can report it works well formatted as NTFS and under ZFS. It definitely will not work under BTRFS if you opt to use that, due to some problems within the file system itself; I've tried it and it was horrible. All in all, they're pretty OK drives for the price. In my case I mirror the critical data from my RAIDZ (RAID5) array to it so I have an extra copy, and keep other non-critical stuff on it. It's your data though, and this is just my 2c.


----------



## bermel72 (Apr 14, 2016)

N-Gen said:


> I wouldn't run the 8TB without backup just alone unless it's used how intended, cold storage. If it's critical data don't leave it just on the 8TB, have an extra copy. You should be doing this REGARDLESS of what drive you're using for data, not just the 8TB.
> 
> I can report it works well formatted as NTFS and ZFS filesystems. Definitely will not work under BTRFS if you opt to use it due to some problems within the file system itself, I've tried it and it was horrible. But all in all, pretty ok drives overall and for the price. In my case I mirror the critical data from my RAIDZ(RAID5) array to it so I have an extra copy and keep other non-critical stuff on it. All in all, it's your data and this is just my 2c.



I was planning on this all said and done:

PCPartPicker part list / Price breakdown by merchant

*CPU:* Intel Core i5-6500 3.2GHz Quad-Core Processor  ($194.99 @ Newegg)
*CPU Cooler:* Cooler Master Hyper 212X 82.9 CFM CPU Cooler  ($39.99 @ Newegg)
*Motherboard:* ASRock B150M Pro4S Micro ATX LGA1151 Motherboard  ($78.99 @ Newegg)
*Memory:* Corsair Vengeance LPX 8GB (1 x 8GB) DDR4-2400 Memory  ($27.99 @ Newegg)
*Storage:* Samsung 850 EVO-Series 250GB 2.5" Solid State Drive  ($88.00 @ Amazon)
*Storage:* Western Digital Red 6TB 3.5" 5400RPM Internal Hard Drive  ($234.37 @ Amazon)
*Storage:* Western Digital Red 6TB 3.5" 5400RPM Internal Hard Drive  ($234.37 @ Amazon)
*Storage:* Western Digital Red 4TB 3.5" 5900RPM Internal Hard Drive  ($149.99 @ Newegg)
*Storage:* Western Digital Red 4TB 3.5" 5900RPM Internal Hard Drive  ($149.99 @ Newegg)
*Storage:* Western Digital Red 4TB 3.5" 5900RPM Internal Hard Drive  ($149.99 @ Newegg)
*Case:* Fractal Design Node 804 MicroATX Mid Tower Case  ($84.99 @ NCIX US)
*Power Supply:* EVGA SuperNOVA G2 550W 80+ Gold Certified Fully-Modular ATX Power Supply  ($71.49 @ Newegg)
*Keyboard:* Logitech K400 Plus Wireless Mini Keyboard w/Touchpad  ($29.99 @ Amazon)
*Total:* $1535.14
_Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2016-04-14 15:41 EDT-0400_

Basically I have a couple reasons for expanding my budget out so far...

1. Raid 5 (Mirrored Volume) for the 2x4TB Red Drives, for mission critical files.
2. Raid 5 (Mirrored Volume) for the 2x6TB Red Drives, for media (music & movies) - Plex
3. Red 4TB Drive for backups on my other computers.
4. SSD for Windows 7 Ultimate to function and transcode on.
5. Modular PSU, I've had non-modular in the past and they make a mess.
6. Thoughts? Recommendations? Options?


----------



## newtekie1 (Apr 14, 2016)

bermel72 said:


> Do you know if Filezilla is supported by android or what a good app is that supports FileZilla?



I use AndFTP. It will let you access your FTP server. If you just want to play media, you use plex.

There also isn't a real point to multiple RAID arrays like that. Just get 4x6TB in RAID 5. You'll have about the same amount of storage space and it will all be redundant.
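The space math behind that suggestion, for anyone following along (single-parity RAID 5 gives up one drive to parity; a two-drive mirror gives up half):

```python
# Usable capacity for the two layouts being compared (sizes in TB).

def raid5_usable(drives: int, size_tb: float) -> float:
    """RAID 5: one drive's worth of capacity goes to parity."""
    return (drives - 1) * size_tb

def mirror_usable(size_tb: float) -> float:
    """Two-drive mirror: usable space equals one drive."""
    return size_tb

# Original plan: 2x6TB mirror + 2x4TB mirror + a lone 4TB backup drive
print(mirror_usable(6) + mirror_usable(4) + 4)  # 14 TB, split across three volumes
# Suggested instead: 4x6TB in one RAID 5 array
print(raid5_usable(4, 6))                       # 18 TB, all of it redundant
```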


----------



## N-Gen (Apr 14, 2016)

With that many disks I'd go RAID6 (or RAIDZ2 under ZFS on Linux/FreeBSD/FreeNAS), and I'd get 2 sticks of RAM, even if it's 2x4GB rather than 2x8GB, to enable dual channel.


----------



## bermel72 (Apr 14, 2016)

newtekie1 said:


> I use AndFTP. It will let you access your FTP server. If you just want to play media, you use plex.
> 
> There also isn't a real point to multiple RAID arrays like that. Just get 4x6TB in RAID 5. You'll have about the same amount of storage space and it will all be redundant.





N-Gen said:


> With a large amount of disks I'd go RAID6 (or RAIDZ2 under ZFS on Linux/FreeBSD/FreeNAS) and I'd get 2 sticks or RAM even if it's 2x4GB not 2x8GB to enable dual channel.



Thanks guys, this is why I love this community, one of the best forums on the net if you ask me. That makes a lot of sense. From a software standpoint, doesn't Windows 7 Ultimate only do RAID 0, 1, 5, 10? Or in Windows terms, mirrored, striped, and spanned volumes. How would you set up a complicated RAID array like this? Also, how does this RAID array work?


----------



## N-Gen (Apr 14, 2016)

You could do it within Windows or via the controller on your motherboard; it all depends. Within the OS is what we call software RAID; when you use dedicated controllers, whether on the motherboard or as an expansion card, that's called hardware RAID.

Pros for hardware RAID:

- Can be inexpensive (HighPoint) for high throughput
- Dedicated controller

Cons for hardware RAID:

- If the controller fails you need to replace it with a similar/identical controller
- Some cards (LSI, Adaptec) can be very expensive

Pros for software RAID:

- Better data integrity checks, depending on the feature set of the chosen OS/file system
- Does not require a hardware RAID controller
- Fast, given the right hardware

Cons for software RAID:

- Some software-RAID file systems are near impossible, or very expensive, to recover
- Requires higher-spec components for good throughput
If you use hardware RAID cards as HBAs instead of an actual HBA, you may come across issues with the arrays "belonging" to the device; if the card failed you'd need to replace it with an identical card. I've tested this, and with my card it wasn't the case (the array worked fine regardless of how it was plugged in), but it seems to be something that can happen.

That's a very high-level view to give you an idea of what to expect. I have used both, and moved from hardware RAID to software RAID using ZFS because of the data integrity checks it provides. I'm sure Microsoft is developing something on a similar scale. If you're not using Seagate Archive (SMR-type) HDDs you could also try Rockstor, which uses BTRFS; I've tried it, it's well supported with dedicated developers and quite easy to use. RAID arrays, in whichever configuration, are easy to use, and you can usually set them up through a GUI to keep it simple. If you need basic info on RAID levels you should be able to find it for whichever file system you choose.


----------



## Cybrnook2002 (Apr 14, 2016)

Cybrnook2002 said:


> For your requirements, you are PERFECT to run (the same as I do, 2 x actually) unRAID. It is at it's core a media server with parity protection (Soon to have dual parity). But also supports VM's and "Docker" containers. They have a pre-built Plex Docker image, all you do is download it, do some basic config (there is a video you can follow), and off it goes.
> 
> 
> 
> ...


Last I will add, here is a pic of my home server (soon to have dual parity disks)


Runs headless in the basement, and I use IPMI to manage the hardware.


----------



## JunkBear (Apr 14, 2016)

FTP ... Fuck The Pussies  servers?!


----------



## newtekie1 (Apr 15, 2016)

bermel72 said:


> Thanks guys this is why I love this community, one of the best forums on the net if you ask me, that makes alot of since, from a software standpoint doesn't Windows 7 Ultimate only do Raid 0, 1, 5, 10? Or in windows they call it mirroring, striping, and spanning volumes. How would you setup a complicated raid array like this? Also how does this raid array work?



You'll want to use the RAID built into the motherboard, or a dedicated card.  Not the Windows RAID.



N-Gen said:


> If the controller fails you need to replace it with a similar/identical controller



All of your information was pretty good except this.  With the Intel controller, RAID arrays have been compatible with almost all other Intel controllers for at least a generation back.  So if you create an array on a B150 motherboard and that motherboard fails, you can move the drives to pretty much any other halfway modern Intel motherboard, and the array will be recognized and work.  The same is true with HighPoint.  In fact, they pride themselves on the fact that an array created with any of their add-on cards will work with any other of their add-on cards; HighPoint talks about this in their FAQ.

So, yeah, you have to replace the controller with one from the same brand, but it doesn't have to be identical.


----------



## taz420nj (Apr 15, 2016)

Bermel72 - a few things you will want to consider..

It's a VERY bad idea to use RAID 5 for an array that large using large (>2TB) drives.   You only have a single drive's worth of redundancy, and if a second drive fails while a rebuild is in progress - which is a very real possibility, because rebuilding stresses the remaining drives in the array, which are the same age and usually from the same batch as the one that failed - you lose the entire array.

Software RAID - especially a parity flavor - is very CPU intensive.  This is becoming less of a problem as CPUs get more powerful, but there are still potential pitfalls when implementing it on a transcoding media server.  Even with really good hardware, the machine will get swamped if there are multiple high intensity operations happening at the same time..  For example if someone is streaming a movie that Plex has to transcode (and with only 3Mbps upload it will have to transcode everything if you're mobile) and you start copying a rip to the array, the stream(s) might stutter. 

Your servers should ALWAYS have a UPS with unattended shutdown configured, but with software RAID you also run the risk of array problems if the server loses power during a write operation.  Software RAID does not have a safe battery backed write cache, so if power is lost or the system crashes during a write, it will automatically start a verify operation - which not only chews up CPU and bottlenecks the array bandwidth but in a parity array that large it will likely take a day or more to complete. 

Call me old school, but I still recommend a good hardware RAID card.  You can get demoted enterprise hardware on eBay really cheap.  For example, an 8-port LSI 9560 with BBU can be had for under $100.  Old, but they work great, and if it shits the bed you can easily find another one (LSI 9000-series cards can also be swapped for other models within the 9000 series without issue).  RAID cards also support staggered spinup (powering the drives up one at a time so your power supply doesn't have a heart attack at powerup); software RAID can't do that.  Just always make sure you get one with the BBU (battery backup unit).  And HighPoint sucks, stay away from them.

Now going back to the actual array.... 

There are a couple of very good (and way too often dismissed/ignored) reasons to use more drives vs bigger drives.  One: 2TB is the largest drive where the array stays in an "acceptable" danger zone of losing a second drive on failure of any single drive.  The larger the drives, the longer a rebuild takes and the bigger the beating the remaining drives take, not only from the rebuild operation itself but because all data from the missing drive must be calculated on the fly from parity if the array is still being accessed during the rebuild (and if you don't have a hot spare or an extra drive on hand, it will be degraded until you can get one).

Two is the parity cost.  2TB drives are cheap; 4 and 6TB drives are not.  In a RAID 5 array you lose 1 full drive's worth of space to parity.  In RAID 6 you lose 2 drives' worth, which means the larger the drives, the larger the parity cost (both in price AND unusable space).  So if you use 4x6TB drives in RAID 6 (sorry @N-Gen, only a fool would use 6TB drives in RAID 5), you're spending $500 just on parity and will only have 12TB free, whereas 8x2TB drives in RAID 6 will give you the same protection against 2 drive failures, 3x faster recovery in case of a failure, and leave your data less vulnerable to total loss, PLUS it'll cost you $330 less.  With RAID it is cheaper to use MORE drives, not larger ones.
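The arithmetic in that comparison, spelled out (the drive prices are rough 2016-era ballpark figures taken as assumptions, about $250 per 6TB and $84 per 2TB):

```python
# RAID 6 usable space and cost for the two layouts compared above.
# Prices are assumed 2016-era ballpark figures, not current ones.

def raid6_usable_tb(drives: int, size_tb: int) -> int:
    """RAID 6: two drives' worth of capacity goes to parity."""
    return (drives - 2) * size_tb

def cost(drives: int, price_each: int) -> int:
    return drives * price_each

print(raid6_usable_tb(4, 6), cost(4, 250))  # 12 TB usable for $1000
print(raid6_usable_tb(8, 2), cost(8, 84))   # 12 TB usable for $672
```

Same usable space, the same two-drive protection, and roughly $330 cheaper with the smaller drives.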

Food for thought..


----------



## taz420nj (Apr 15, 2016)

Cybrnook2002 said:


> Last I will add, here is a pic of my home server (soon to have dual parity disks)



unRAID is weird.  It's basically software based RAID3/4..  And nobody uses RAID 3/4 because the write performance sucks.


----------



## bermel72 (Apr 15, 2016)

taz420nj said:


> Bermel72 - a few things you will want to consider..
> 
> It's a VERY bad idea to use RAID 5 for an array that large using large (>2TB) drives.   You only have a single drive's worth of redundancy, and if a second drive fails while a rebuild is in progress - which is a very real possibility, because rebuilding stresses the remaining drives in the array, which are the same age and usually from the same batch as the one that failed - you lose the entire array.
> 
> ...




Haha, you sound like you know what you're talking about, so all in all hardware RAID is better? The Fractal Design 804 has 10 usable 3.5" bays, which equals about 20TB if I use a couple of SATA expansion cards and your approach of 2TB drives instead of 6TB drives. Alright, fair enough, that's still more than enough space for me. As for RAID 6, would someone please explain what it is and what the difference between RAID 5 and 6 is? And I can't find a way to do RAID 6 with just software; does Windows only support hardware RAID 6? To get around the limitation of the motherboard only having 6 SATA III ports you would just throw in a couple of expansion cards, correct, or would the hardware RAID card take care of that for you?


----------



## taz420nj (Apr 15, 2016)

bermel72 said:


> Haha you sound like you know what your talking about, so all in all a hardware raid is better? The Fractal Design 804 have 10 usable 3.5" which equals about 20TB if I use a couple of SATA expansions and your way of using 2TB drives instead of 6TB drives, alright fair enough thats still more then enough space for me. As for the Raid 6 would someone please explain to me what this is and what the difference between a RAID 5 and 6 is? And I can't find a way to do a RAID 6 with just the software, does Windows support only hardware RAID 6? To get around the limitation of the motherboard only having 6 SATA III ports you would just throw in a couple expansions correct, or would the hardware RAID take care of that for you?



In my opinion, yes hardware RAID is the way to go (but again you'll always have those that unequivocally stand by software RAID).   You'd plug all of your array drives into the RAID card, NOT the motherboard.  You'd use the ports on the motherboard only for your boot/OS drive and your optical drive.  Do not install Windows on the array.

If it has 10 drive bays, that would allow you 16TB worth using 2TB drives.  As I said, with RAID6 you'll lose two drives' worth of capacity for parity (the redundancy data).  RAID 5 and 6 work exactly the same except that RAID 5 keeps enough parity data on each drive to allow the array to survive the loss of one drive, while RAID 6 keeps enough parity data on each drive to allow the array to survive the loss of TWO drives.  This is the important difference when you get into larger arrays.  The laws of probability (and that prick Murphy) dictate that once a single large drive in an array fails, there is a significant chance of a second drive failing soon after (again due to the increased stress of the rebuild, age of the remaining drives, etc).  If the second drive fails before the first drive can be completely rebuilt, the array data can only survive if there is a second set of parity data available to finish rebuilding both drives.  In a RAID 5 array there is no additional parity available if a second drive fails, so the entire array is destroyed.  This is where RAID 6 has the advantage.

If you really think you're going to fill up 16TB within a short timeframe and you're hell-bent on that 10-bay case, then you can use 7x 4TB drives (remember you lose two drives' worth of space) in RAID 6 to give you 20TB.  I still would not use the 6TB drives. 20TB is actually about the break-even point where the 4TB drives become cheaper to use (well, it's about $5 more expensive at 20TB, but any further expansion will be cheaper ), and the 2 drives' worth of parity should still keep you protected.  You would also want to use the 12 port card, so you still have room to expand.

Windows will have nothing to do with the RAID.  It will all be handled by the card.  To Windows it will simply look like a single 16 or 20TB drive.

And if you use that case, make sure you install four fans in the top to keep the drives cool.

And also as far as your desire for headless operation, you can use Remote Desktop + FTP as has already been suggested (I would also install a VPN server, which would allow you to access your entire network, including drive shares), or you can install TeamViewer (free for personal use).  It can do file transfers (albeit a little slow), and they even have an Android app.


----------



## bermel72 (Apr 15, 2016)

taz420nj said:


> In my opinion, yes hardware RAID is the way to go (but again you'll always have those that unequivocally stand by software RAID).   You'd plug all of your array drives into the RAID card, NOT the motherboard.  You'd use the ports on the motherboard only for your boot/OS drive and your optical drive.  Do not install Windows on the array.
> 
> If it has 10 drive bays, that would allow you 16TB worth using 2TB drives.  As I said, with RAID6 you'll lose two drives' worth of capacity for parity (the redundancy data).   RAID 5 and 6 work exactly the same except that RAID 5 keeps enough parity data on each drive to allow the array to survive the loss of one drive. RAID 6 keeps enough parity data on each drive to allow the array to survive the loss of TWO drives.  This is the important difference when you get into larger arrays.  The laws of probability dictate that once a single large drive in an array fails, there is a significant chance of a second drive failing soon after (again due to the increased stress of the rebuild, age of the remaining drives, etc).  If the second drive fails before the first drive can be completely rebuilt, the array data can only survive if there is a second set of parity data available to rebuild both drives.  If this is a RAID 5 array, there is no additional parity available if a second drive fails, so the entire array is destroyed. This is where RAID 6 has the advantage.
> 
> ...



Lots of good information here, I love it. I think hardware RAID would make things easier on me, and no, 16TB is plenty for me; I just had the mindset that once I build it I wouldn't be upgrading or rebuilding it for a while. I like the idea of the hardware array and Windows just seeing it as one massive drive. I'm definitely not set on that case and would consider a different one if suggestions were made.


----------



## wolar (Apr 15, 2016)

taz420nj said:


> In my opinion, yes hardware RAID is the way to go (but again you'll always have those that unequivocally stand by software RAID).   You'd plug all of your array drives into the RAID card, NOT the motherboard.  You'd use the ports on the motherboard only for your boot/OS drive and your optical drive.  Do not install Windows on the array.
> 
> If it has 10 drive bays, that would allow you 16TB worth using 2TB drives.  As I said, with RAID6 you'll lose two drives' worth of capacity for parity (the redundancy data).   RAID 5 and 6 work exactly the same except that RAID 5 keeps enough parity data on each drive to allow the array to survive the loss of one drive. RAID 6 keeps enough parity data on each drive to allow the array to survive the loss of TWO drives.  This is the important difference when you get into larger arrays.  The laws of probability (and that prick Murphy ) dictate that once a single large drive in an array fails, there is a significant chance of a second drive failing soon after (again due to the increased stress of the rebuild, age of the remaining drives, etc).  If the second drive fails before the first drive can be completely rebuilt, the array data can only survive if there is a second set of parity data available to finish rebuilding both drives.  If this is a RAID 5 array, there is no additional parity available if a second drive fails, so the entire array is destroyed. This is where RAID 6 has the advantage.
> 
> ...


I think that case is bad for this scenario: in the drive chamber you can only put fans in front and back, and the drives will sit in one big mass, maybe getting too hot.
I think a case with direct airflow over the drives will serve better.


----------



## taz420nj (Apr 15, 2016)

I'll let someone else make case suggestions but honestly that one doesn't look bad.  However if you need more than 10 bays I think that's probably going to shift you into full tower territory.

And I don't know what your video habits are like, but I didn't fill mine up anywhere near as fast as I expected.  I built it a couple of years ago with 4x2TB drives (6TB available).  I have about 400 movies (probably 350 in HD, the rest SD), a bunch of TV series, probably 5000-6000 songs, and other miscellaneous stuff on my server.  Right now I'm JUST at the point where I need to add some more drives to the array.  Mine is RAID5, but when I have some spare cash I will be building a new array on a RAID6 card.








wolar said:


> I think that case is bad for this scenario , in the drives chamber you can only put fans infront and back and the drives will be one big mass , maybe getting too hot.
> I think a case with direct airflow on the drives will serve better



There are 4x120mm fan mounts directly above the drive hangers on the top of the box.  Plenty of airflow.


----------



## bermel72 (Apr 15, 2016)

taz420nj said:


> I'll let someone else make case suggestions but honestly that one doesn't look bad.  However if you need more than 10 bays I think that's probably going to shift you into full tower territory.
> 
> And I don't know what your video habits are like but I didn't fill mine up anywhere near as fast as I expected.  I built it a couple years ago with 4x2TB drives (6TB available).  I have about 400 movies (probably 350 in HD, the rest SD), a bunch of TV series, probably 5000-6000 songs, and other miscellaneous stuff on my server..  Right now I'm JUST to the point where I need to add some more drives to the array.  Mine is RAID5, but when I have some spare cash I will be building a new array on a RAID6 card.
> 
> ...



So I went searching for a RAID card and I'm totally lost. Is there some sort of guide that can break this all down for me? I feel frustrated at this point because I have no idea what I'm even looking at on Newegg or Google.


----------



## taz420nj (Apr 15, 2016)

New/retail they are very expensive (about $500).  I'm talking about used ones.

This one is the best deal I see at the moment.  The only thing is you'll need to buy the 4-to-1 SATA cables for it.  The cables that come with it are for connecting it to a 12-drive backplane (which isn't what you're using).  The ones you'll need fan out to 4 SATA plugs from each of the card's ports.

http://www.ebay.com/itm/AMCC-9650SE...032890?hash=item28157a5e3a:g:Z44AAOSwgApXBSmN

These are the cables ($15 for 2; you'd only need 3, so you'll have a spare):

http://www.ebay.com/itm/2x-Mini-10G...095311?hash=item20e642ddcf:g:GloAAOxyx0JTgAoj


----------



## wolar (Apr 15, 2016)

taz420nj said:


> I'll let someone else make case suggestions but honestly that one doesn't look bad.  However if you need more than 10 bays I think that's probably going to shift you into full tower territory.
> 
> And I don't know what your video habits are like but I didn't fill mine up anywhere near as fast as I expected.  I built it a couple years ago with 4x2TB drives (6TB available).  I have about 400 movies (probably 350 in HD, the rest SD), a bunch of TV series, probably 5000-6000 songs, and other miscellaneous stuff on my server..  Right now I'm JUST to the point where I need to add some more drives to the array.  Mine is RAID5, but when I have some spare cash I will be building a new array on a RAID6 card.
> 
> ...


You can't mount the drive hangers and the fans at the same time; that's one problem with the case.


----------



## taz420nj (Apr 15, 2016)

wolar said:


> You cannot mount the drive hanger and also the fans , thats one problem with the case



I don't see anything anywhere to substantiate that claim.  There is a disclaimer about large radiators interfering with the drive cages, but not bare fans. I find it hard to believe nobody would have pointed that out, given the number of reviews the case has had.


----------



## N-Gen (Apr 15, 2016)

newtekie1 said:


> You'll want to use the RAID built into the motherboard, or a dedicated card.  Not the Windows RAID.
> 
> 
> 
> ...



You're right, which is why I put "similar/identical" and not just "similar", but I should have elaborated further and mentioned that they have to use the same algorithm.

For others mentioning the drive capacity of the Node 804, I know quite a number of people who have filled it to capacity without any issues whatsoever. Airflow in general is good; it's a well-built case. Summer is close, so I'll be able to report how temps hold up during the coming inferno, but all in all, half full so far, it's a great case.

Regarding hardware vs. software RAID and which is best: it comes down to personal opinion and requirements. Software RAID can easily end up being more expensive than hardware RAID if your requirements are high. One of the reasons I moved from hardware to software RAID was the possibility of RAID card failure. The way the card is used now, as an HBA, if it fails I can replace it with any other RAID card/HBA, or connect the drives to the motherboard, and still be able to access the array. I could also move the array from machine to machine without any issues and just mount it.

Hardware RAID is easier to set up in my opinion, and for simple applications HighPoint offers great cards on the cheap. I've owned my 2720SGL for 3 years and it's always delivered what was promised. I even had a power cut during a RAID rebuild, right after swapping out a faulty disk, and after all my worrying, when the power came back on it just continued where it left off; all my data is still intact. I just want the data integrity features ZFS gives; I would have been happy with BTRFS, but it doesn't work well with SMR disks. Having a large amount of data (16TB+), foreseeing massive data growth, and with most of it being critical, those features are a big deal to me.

But like I said, they both work and both are used in production systems within large businesses, so with a bit of homework you can determine which method is best for your needs.


----------



## N-Gen (Apr 15, 2016)

taz420nj said:


> I don't see anything anywhere to substantiate that claim.  There is a disclaimer about large radiators interfering with the drive cages but not bare fans. I hardly think that's something that nobody would point out given the number of reviews it has had.



Adding to this, it doesn't seem possible to mount the top fans along with the HDDs, because the fans mount on the inside of the case and there are only a few mm before you hit the HDD cage. However, the top is still vented, and the 8 disks that hang there have enough space between them for air to flow well front to back, provided good fans are installed.

The only tricky bit with the case is the cage right above the PSU. Cables will be a bit tight, so the best option would be angled connectors; otherwise there's gonna be a lot of bending, trust me, it's annoying.


----------



## bermel72 (Apr 15, 2016)

N-Gen said:


> Adding to this, it doesn't seem possible to mount the top fans along with the HDDs, because the fans mount on the inside of the case and there are only a few mm before you hit the HDD cage. However, the top is still vented, and the 8 disks that hang there have enough space between them for air to flow well front to back, provided good fans are installed.
> 
> The only tricky bit with the case is the cage right above the PSU. Cables will be a bit tight, so the best option would be angled connectors; otherwise there's gonna be a lot of bending, trust me, it's annoying.



Just get some hyperboreas with 90 CFM.


----------



## Cybrnook2002 (Apr 15, 2016)

taz420nj said:


> unRAID is weird.  It's basically software based RAID3/4..  And nobody uses RAID 3/4 because the write performance sucks.


Hmm, I guess my 100+MB/s writes directly to the array (bypassing cache for large writes over 500+GB) would beg to differ, but oh well. (Not trying to stir the pot, just providing some real-world use cases.)

P.S. Perhaps at 200+MB/s (teaming), or if I were to set up 10Gb LAN in the house, I would see "slower" (used loosely here) writes to the array, but as it stands now my home is wired for gigabit and I max it out, so I am happy.

In the future, if I start upgrading pieces to 10Gb, then I will write directly to cache. (Though I would need to invest in some high-dollar NICs, switches, and a router that are all 10Gb capable; I don't see this happening anytime soon for my home use.)

Also, unRAID keeps whole files per disk (not striped), so if I lost a disk, only the data on that disk would be lost, not the entire array (not counting the fact that I can still rebuild from parity, and soon dual parity in unRAID 6.2, so errors in the array are corrected exactly and not "assumed" corrected from parity). As well, I am using XFS as my file system.



N-Gen said:


> You're right, it's why I put similar/identical not only similar but I think I should have elaborated further and mentioned that they have to have the same algorithm.
> 
> For others mentioning the drive capabilities of the Node 804, I know quite a number of people that filled it to capacity without any issues whatsoever. Flow in general is good, it's well built case. We're close to summer so I can report how temps will be during the coming inferno but all in all so far, half full, it's a great case.
> 
> ...



This is good advice


----------



## newtekie1 (Apr 15, 2016)

taz420nj said:


> It's a VERY bad idea to use RAID 5 for an array that large using large (>2TB) drives. You only have a single drive's worth of redundancy, and if a second drive fails while a rebuild is in progress - which is a very real possibility, because rebuilding stresses the remaining drives in the array, which are the same age and usually from the same batch as the one that failed - you lose the entire array.



I believe this is a holdover from a long time ago.  Rebuild times are not as long as they used to be.  Heck, when 2TB drives first came out, I built a RAID5 array with them, and the rebuild time was over a day.  Now, I just built an array with 6TB drives, and the rebuild time was just under 24 hours. So it took about the same amount of time to rebuild an array of 6TB drives today as it did to rebuild one of 2TB drives several years ago.  The controllers are faster, the computer CPUs are faster (in the case where the controller uses the CPU for calculations), and the drives have gotten faster (particularly write speeds).  This all changes with SMR, because write speed suffers greatly on those drives, so those are the only type of drive I would recommend avoiding in a RAID5 array.

Plus, having more disks means way more points of failure.  You are a lot more likely to have a drive fail if you have 20 drives than if you have 4.

Finally, I'm pretty sure it has been mentioned, but RAID is not a replacement for backups.  Your RAID array should be backed up, even if it is backed up to something non-redundant.  The money you're talking about spending on a dedicated RAID card and extra drives would be better spent on a couple of extra 6TB drives to perform backups to.

This is my current server.  F is my main RAID5 array; G is another non-redundant array that F gets backed up to nightly.  Adding a backup is more important than RAID6.
http://tpuminecraft.servebeer.com/pictures/raid.png


----------



## N-Gen (Apr 15, 2016)

newtekie1 said:


> I believe this is a holdover from a long time ago.  Rebuild times are not as long as they used to be.  Heck, when 2TB drives first came out, I built a RAID5 array with them, and the rebuild time was over a day.  Now, I just built an array with 6TB drives, and the rebuild time was just under 24 hours. So it took about the same amount of time to rebuild an array of 6TB drives today as it did to rebuild one of 2TB drives several years ago.  The controllers are faster, the computer CPUs are faster (in the case where the controller uses the CPU for calculations), and the drives have gotten faster (particularly write speeds).  This all changes with SMR, because write speed suffers greatly on those drives, so those are the only type of drive I would recommend avoiding in a RAID5 array.
> 
> Plus, having more disks means way more points of failure.  You are a lot more likely to have a drive fail if you have 20 drives than if you have 4.
> 
> ...



With my little HighPoint, a 4x3TB RAID5 array full of data took around 36 hours to rebuild when using the RAID card's controller. I haven't rebuilt under software yet.

And yet again, like everyone else, I will keep stressing backups. Don't learn the hard way like the rest of us.

Here's mine:


----------



## newtekie1 (Apr 15, 2016)

N-Gen said:


> And yet again, like everyone else I will keep stressing on backup. Don't learn the hard way like the rest of us.




Exactly this.  I'll spend money on a backup long before I spend money on a fancy RAID controller and a bunch of smaller drives.

I use the highpoint 642L, connected to two 5-bay external enclosures.  I upgraded to that from a Highpoint 2300.  And before that I didn't use RAID at all.

But to the point, I just want to share a little story to show why backup is important.  Back when I was running just 3 400GB hard drives, not in RAID, I had one 400GB drive completely full of movies.  I was young, and of course had no backup.  I had just finished ripping and encoding some new DVDs I got, and I went to delete the temporary folders I had created for the ripped files.  But I had two Explorer windows open, one with the temp folders highlighted and the other with my movies folder highlighted.  The wrong window had focus; I hit Shift+Del and clicked Yes.  And right when I clicked Yes, I realized what I had done, and my heart skipped a beat.  I had just deleted almost 400GB of movies... with no backup.  Of course I had all the original DVDs, but it took so much time to re-rip and re-encode all those movies!  Sadly, some of the DVDs were damaged and I couldn't re-rip them.


----------



## taz420nj (Apr 15, 2016)

newtekie1 said:


> I believe this is a hold over from a long time ago.  Rebuild times are not as long as they used to be.  Heck, when 2TB drives first came out, I built a RAID5 array with them, and rebuild time was over a day.  Now, I just built an array with 6TB drives, and the rebuild time was just under 24 hours. So it took the same amount of time to rebuilt an array with 6TB drive today as it did to rebuilt with 2TB drives several years ago.  The controllers are faster, the computer CPUs are faster(in the case where the controller is using the CPU for calculations), and the drives have gotten faster(particularly write speeds).  This all changes with SMR, because the write speed suffers greatly on those drives, so those would be the only type of drive I would recommend avoiding in a RAID5 array.
> 
> Plus, having more disks means way more points of failure.  You are a lot more likely to have a drive fail if you have 20 drives than if you have 4.
> 
> ...



First of all, it's not a holdover from a long time ago.  It's something we were warned about when larger drives started coming out.  In fact, the same warning has been raised that in a couple of years we will be in the same position with RAID6 vs large drives, necessitating the creation of a triple-parity RAID level.

Drives have NOT gotten significantly faster within their classes either.  At the end of the day it's still a 5400RPM consumer-grade SATA drive that only has a max read/write of 150MB/s, not a 15K SAS enterprise drive.  Real-world sustained transfer over a long period of time (as would be the case during a rebuild) isn't going to be anywhere near that - especially as the drives fill up.
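That bottleneck puts a back-of-envelope floor under rebuild times. A sketch (assuming, as a simplification, that the whole replacement drive must be rewritten at one sustained rate; the rates are just the figures being argued in this thread):

```python
def rebuild_hours(drive_tb: float, sustained_mb_s: float) -> float:
    """Best-case rebuild time: the entire replacement drive rewritten at a
    steady sustained rate; controller overhead and host load only add to this."""
    return drive_tb * 1e12 / (sustained_mb_s * 1e6) / 3600

print(round(rebuild_hours(6, 150), 1))  # 6TB at a spec-sheet 150 MB/s: ~11.1 h
print(round(rebuild_hours(6, 75), 1))   # at half that real-world rate: ~22.2 h
```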

Your examples of array "rebuild" times make no sense.  You "just built" an array with 6TB drives and already had to do a rebuild?  What did you use, Seagates? And you do realize the size of the array isn't what determines the rebuild time; it's the amount of data on it. So if your array of 2TB drives and your array of 6TB drives both had the same amount of data on them, then yeah, they'd take generally the same amount of time to rebuild.

As far as "RAID isn't a backup" yeah I know.  It's a compromise that gives you a reasonable margin of safety for huge amounts of non-critical data without having to do 1:1 backup.  Remember most of us are using it for video and music - all of which is already compressed, so a backup will be the same size as the array.  When you get into the tens of terabytes it's an expensive waste of hardware to have 1:1 backup on a home media server - for the OP's 16TB array that's another $800 in drives.   Yeah it'll suck if your luck is that bad and you smoke three drives and the array dies but it won't be the end of the world.


----------



## Kursah (Apr 16, 2016)

I run a super cheap Dell PERC 6/i with BBU, a SATA adapter cable, and right now 4x2TB drives in a 5.5TB RAID5 array. Not the best performance, but solid for what it is, hosting Plex library files and 6-8 running VMs with a wide variety of tasks, and I have zero issues. As said above, a BBU is critical if you're getting a RAID card. In my experience with my home RAID, the battery became discharged and the array dropped from write-back to write-through mode, and struggled to write more than 4-5MB/s. It was horrendous. I resolved the issue with a few firmware updates and a freshly charged BBU. I do plan to go to a 6- or 8-drive RAID6 in the future.

I considered 4TB drives until I did some more research and finally had to fix a large array that another IT company had sold to a school we recently gained. It didn't help that the school was accessing it non-stop, but it took the better part of a week to successfully rebuild the array. Guess what happened a couple of weeks after that? Yep, another drive failed. Thankfully we had talked them into a NAS backup system with a RAID6 array, and we are now rebuilding their server array this summer. A ticking time bomb until then, though... 4TB+ drives in an array are for those who are patient and have really, really good backups, and possibly a failover data store/server to keep them alive during the long rebuilds that will happen.

Honestly, I get some of the white-label enterprise 2TB HDDs off of fleabay (or a seller that also sells there) and keep a couple extra on hand because they are cheap enough. The array rebuild took around 6 hours when I had one drive fail. I've got to keep a tight budget, and so far I can't complain about the performance of these drives. The one that failed came with the 4 I bought from a fellow TPU-er who had used them for years, so I expected a failure, and the price was fair enough that I had no complaints about cheaply replacing one.

I do run a backup with a Server 2012 R2 Storage Spaces setup (more for testing purposes than serious deployment) using 2x2TB and 1x1.5TB drives, not in any kind of redundancy mode (so damn slow...); I needed the space to allow for incremental backups, which I didn't have with redundancy. It works well enough, and I get SMART report notifications through the event logs, so I'm not worried... but I also sysadmin my own site and network.

Not saying my setup is the best, but in a home lab situation, the old, cheap PERC 6/i plus white-label 2TB enterprise drives in RAID 5/6 is a good way to go in my experience. I was able to buy a spare PERC 6/i for around $15-20 shipped. Sure, it's old tech, long replaced by the H700 and beyond, but most folks won't notice what they're missing unless they'd gone with SSD arrays or SAS drives in the first place, and then you're not at the cheap budget price point anymore anyway.

I agree that even with an array one should back up, but with an array with redundancy and parity, hopefully the backup is a "rather have it and not need it" kind of situation. And at least if a drive (RAID5) or two (RAID6) fail in the array, keeping a couple of spares on hand means things can be fixed in a night or a day or two. It's all good experience. If someone is truly serious about their data, then being cheap isn't going to solve anything. If I lost all my data I'd be pissed, but I also plan to grow the array, change to RAID6, and I'm building up a secondary storage array for backup purposes. We shall see. To each their own.

My core server, with my VMs including the ones that run Plex, TeamSpeak, RD Gateway, etc., runs great with this array. I run the core OS on an old Samsung 840 120GB SSD, everything else on the array, and the backup on the Storage Spaces combined volume. So far it works great and is super stable. I guess it depends on what direction you want to take; mine is kind of overcomplicated for what it needs to be, but I'm gaining experience on stuff I need to support professionally and am able to make things work within my budget, which is a win-win for me!


----------



## newtekie1 (Apr 16, 2016)

taz420nj said:


> First of all it's not a holdover from a long time ago. It's something that we were warned about when larger drives started coming out. In fact, the same warning has been raised that in a couple years we will be in the same position with RAID6 vs large drives, necessitating the creation of a triple parity RAID level.



Yes, it is a holdover from a long time ago.  The controllers have gotten faster at rebuilding, because the processors calculating parity have gotten way faster.  That is the same reason everyone used to warn about RAID5/6 being extremely slow at writing, but they actually aren't that slow anymore.  The drastic increase in drive write speed, as well as the large increase in CPU power that can be used to calculate the rebuild, leads to much faster rebuild times.



taz420nj said:


> Drives have NOT gotten significantly faster in their classes either. At the end of the day it's still a 5400RPM consumer grade SATA drive that only has a max read/write of 150MB/s, not a 15k SAS enterprise drive. Real world sustained transfer over a long period of time (as would be the case during a rebuild) isn't going to be anywhere near that - especially as the drives fill up.



Very untrue.  It may only be 5400RPM, but its sequential write averages 125MB/s.  Just to put that into a little perspective, the WD VelociRaptor, the fastest 10,000RPM consumer drive you could get back in 2008, only had an average sequential write speed of around 100MB/s.  The first 2TB drives that came out were in the sub-80MB/s average write range.



taz420nj said:


> Your examples of array "rebuild" times make no sense. You "just built" an array with 6TB drives and already had to do a rebuild? What did you use, Seagates? And you do realize the size of the array isn't what determines the rebuild time, it's the amount of data on it. So if your array of 2TB drives and your array of 6TB drives both had the same amount of data on them then yeah they'd take generally the same amount of time to rebuild.



1.) Seagate drives are currently the most reliable on the market.
2.) Every array I build, I fill about 60% full with data, then pull a drive and rebuild it to see how long it takes.
3.) If the time to rebuild is the same regardless of whether you are using 2TB or 6TB drives, and it is only determined by how much data you have stored, then what is the point of using 2TB drives over 6TB drives?  The rebuild time will be the same.  You're arguing against 6TB drives because you say they take too long to rebuild, but you just said the rebuild time depends on the amount of data on the array, not the size of the drives.  THAT makes no sense.



taz420nj said:


> As far as "RAID isn't a backup" yeah I know. It's a compromise that gives you a reasonable margin of safety for huge amounts of non-critical data without having to do 1:1 backup. Remember most of us are using it for video and music - all of which is already compressed, so a backup will be the same size as the array. When you get into the tens of terabytes it's an expensive waste of hardware to have 1:1 backup on a home media server - for the OP's 16TB array that's another $800 in drives. Yeah it'll suck if your luck is that bad and you smoke three drives and the array dies but it won't be the end of the world.



That is largely what I use my home server for, and I have a 1:1 backup.  It definitely isn't an expensive waste of hardware.  Once you accidentally delete your entire movie folder, you'll know it wasn't a waste at all. 

Of course, in the OP's case, he doesn't need an exact 1:1, since a good portion of the data stored on the server will already be backup data from his desktops.  So he really only has to back up the unique data on the array.  The OP could probably get away with an 8TB external as a backup right now; that's about $200.  Once that starts to fill up, either get another one, or by that time there will probably be even bigger externals he can replace the 8TB with.  Something tells me the OP doesn't plan to fill the array with data right away and is building it with a lot of future storage space in mind, so 8TB of backup space should be enough for a while.


----------



## taz420nj (Apr 16, 2016)

newtekie1 said:


> Yes, it is a hold over from a long time ago.  The controllers have gotten faster at rebuilding, because the processors calculating parity have gotten way faster.  That is the same reason that everyone used to warn about RAID5/6 being extremely slow writing, but they actually aren't that slow anymore.  The drastic increase in drive write speed, as well as the large increase in CPU power that can be used to calculate the rebuild, lead to much faster rebuilt time.



Hey McFly, the speed of the parity calculation is moot when the drive's write speed is the bottleneck on a rebuild.



> Very untrue.  It may only be 5400RPM, but it's sequential write averages 125MB/s.  Just to put that into a little perspective, a WD Velociraptor, the 10,000 RPM fastest consumer drive you could get back in 2008 only had an average sequential write speed of around 100MB/s.  *The first 2TB drives that came out were in the sub-80MB/s average write area.*



No, they weren't.  The WD20EADS (the very first 2TB drive on the market, in case you were wondering) was 100MB/s - and that was one of those lame 5,200RPM "Green" drives.  The first 2TB Caviar Blue, a 5,400RPM drive, was 130MB/s.  Not too long after that, though, the Green and Blue drives stopped being usable in RAID because of the TLER issue in newer models.  Oh, and by the way, in 2008 VelociRaptor drives were in the 140-150MB/s range.  So I don't know where you're getting your wrong numbers from, but mine come straight from the spec sheets.



> 1.) Seagate are currently the most reliable drives on the market.


Maybe on your planet but not here on Earth.  HGST is still king.  I wouldn't even use a Seagate drive as a doorstop.



> 2.) Every array I build, I fill about 60% full with data, then pull a drive and rebuild it to see how long it takes.


Intentionally shortening the life of the drives "just to see how long a rebuild takes" is one of the dumbest things I've heard in a while.  Thanks for the laugh.



> 3.) If the time to rebuild is the same, regardless of if you are using 2TB drives or 6TB, and it is only determined by how much data you have stored.  What is the point of using 2TB drives over 6TB drives?  The rebuild time will be the same.  You're arguing against 6TB drives because you say it takes too long to rebuild, but then just said the rebuild time depends on the amount of data on the array, not the size of the drives.  THAT makes no sense.



Once again, McFly, you are comparing apples to bongo drums.  The likely scenario is that you're comparing the time it took to rebuild a full or nearly full array of 2TB drives with the time it took to rebuild a mostly empty array of 6TB drives - because you won't convince me for a second that the two arrays were the same capacity (wasting 6TB of parity space on an array smaller than 24TB would be moronic), nor is there any way in hell, given that, that they took anywhere near the same amount of time to rebuild if both were 60% full.



> That is largely what I use my home server for, and I have a 1:1 backup.  It definitely isn't an expensive waste of hardware.  Once you accidentally delete your entire movie folder, you'll know it wasn't a waste at all.



Yeah because everyone is as careless as you are.



> Of course, in the OP's case, he doesn't need an exact 1:1, since a good portion of the data stored on the server will already be backup data from his desktops.  So he really only has to backup the unique data on the array.  The OP could probably get away with a 8TB external as a backup right now, that's about $200.  Once that starts to get filled up, either get another, or by that time there will probably be even bigger externals he can just replace the 8TB with.  Something tells me the OP doesn't plan to fill the array with data right away, and is building it with a lot of future storage space in mind, so 8TB of backup space should be enough for a while.



So what, you have Seagate stock or something? Their drives are garbage.  They always have been and always will be.  That's why they're cheap.  And on top of that, you think putting said garbage into an external enclosure where it can get knocked around is a swell idea??


----------



## puma99dk| (Apr 16, 2016)

taz420nj said:


> Maybe on your planet but not here on Earth.  HGST is still king.  I wouldn't even use a Seagate drive as a doorstop.



HGST is a really good brand; personally I go with WD enterprise hard drives or Reds.

As for Seagate, it's not a brand I trust a lot. I've even read they're going to be sued in the US over a lot of unreliable drives, because customers receive new/repaired drives from RMA and they die within about 24 hours of running.


----------



## N-Gen (Apr 16, 2016)

The only issues with Seagate are certain batches at release. In my case, I was one of the unlucky people to buy into the first 3TB disks on the market, the infamous ST3000DM drives. In a 4-disk array I've replaced 7 disks; one failed after less than a month of 24/7 use. I now have 2 ST3000DM drives from a late batch and 2 SV35 disks in the same 4-disk array, without a single hiccup over the last year of 24/7 running.


----------



## newtekie1 (Apr 16, 2016)

taz420nj said:


> Hey McFly, the speed of the parity calculation is moot when the drive's write speed is the bottleneck on a rebuild.



When the bottleneck varies, it does matter.  In the past, when it was recommended not to use more than a 2TB drive, we had RAID controllers that were slow as well as slow write speeds on the drives.  At the time it was a twofold problem.  Today, both problems have been greatly reduced.  How is this hard to understand?



taz420nj said:


> No. They weren't. The WD20EADS (the very first 2TB on the market in case you were wondering) was 100MB/s - and that was one of those lame 5,400RPM "Green" drives. The first Caviar Blue 5,400 2TB was 130MB/s. Not too long after that though the green and blue drives stopped being usable in RAID because of the TLER issue in newer models. Oh and by the way the first VelociRaptor drives were in the 140-150MB/s range. So I don't know where you're getting your wrong numbers from but mine are coming right from the spec sheets.



Oh, you believe the spec sheets? How cute...  Go look at some actual benchmarks.  Too lazy to do that?  No problem, I already did, because I actually inform myself before posting.  Here are some:

2008 - Fastest Drive Tested is the 10,000RPM WD Velociraptor - 101MB/s
WD20EADS - You claim 100MB/s - Reality = 78MB/s (Hey, didn't I say under 80MB/s?)
6TB WD RED - 175MB/s

You still want to say drives haven't gotten significantly faster over time?  Do you not consider almost 100MB/s, over double the speed, significant?  Because I do.
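To put those transfer rates in rebuild terms, here is a quick back-of-the-envelope sketch. It assumes the benchmark figures quoted above hold as sustained sequential write speed for the whole rebuild, which is optimistic but fine for comparison:

```python
def rebuild_hours(data_tb, write_mb_s):
    """Hours needed to rewrite data_tb terabytes at a sustained write_mb_s MB/s."""
    return data_tb * 1e12 / (write_mb_s * 1e6) / 3600

# 2TB on a 78 MB/s WD20EADS vs 6TB on a 175 MB/s 6TB Red:
old = rebuild_hours(2, 78)    # roughly 7 hours
new = rebuild_hours(6, 175)   # roughly 9.5 hours
```

Even with the speed more than doubling, a full 6TB drive still takes longer to rebuild than a full 2TB one did, which is the crux of the disagreement in this thread.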



taz420nj said:


> Maybe on your planet but not here on Earth. HGST is still king. I wouldn't even use a Seagate drive as a doorstop.



Not according to what I've read here.  Waits for BS Backblaze link...



taz420nj said:


> Intentionally shortening the life of the drives "just to see how long a rebuild takes" is one of the dumbest things I've heard in a while. Thanks for the laugh.



Drives are meant to be used, and the lifespan is not greatly affected by a rebuild.  Also, hard drives don't have a limited number of reads/writes, so no, I'm not shortening the lifespan.



taz420nj said:


> Once again McFly, you are comparing apples to bongo drums. The likely scenario is you're comparing the time it took to rebuild a full or nearly full array of 2TB drives with the time it took to rebuild a mostly empty array of 6TB drives - because you won't convince me for a second that the two arrays were of the same capacity (since wasting 6TB of parity space on an array smaller than 24TB is moronic), nor is there any way in hell that given that fact if they were both 60% full that they took anywhere near the same amount of time to rebuild.



They did.  It is amazing what several years of increased hard drive write speeds and more powerful RAID controllers can do.  Again, I've already told you, both arrays were about 60% full.  I never said the two arrays were of the same capacity, the array of 2TB drives was significantly smaller.  Both were 3 drive arrays.



taz420nj said:


> Yeah because everyone is as careless as you are.



I guess you're perfect?  You're telling me you've never accidentally deleted something?  Ever?  I call complete bullshit on that.



taz420nj said:


> So what, you have Seagate stock or something? Their drives are garbage. They always have been and always will be. That's the reason they're cheap. And on top of that you think putting said garbage into an external enclosure where it can get knocked around is a swell idea??



Now I know you are just a troll.  So I'm done here.


----------



## bermel72 (Apr 16, 2016)

What did I start by asking this question? This thread is amazing, haha. I still think the 1:1 ratio is the best solution. Let me ask this: would 6x6TB drives be good for that, 18TB backed up onto 18TB?


----------



## newtekie1 (Apr 16, 2016)

bermel72 said:


> What did I start by asking this question. This thread is amazing. Haha. I still think the 1 to 1 ratio is the best solution, let me ask this would 6x6TB drives be good for this, 18TB backed up on 18TB?



Yes, assuming you mean two 3x6TB RAID5 arrays.  Of course, in the end that will give you about 10TB of actual usable space.
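The usable-space figure works out as follows; this sketch assumes two 3x6TB RAID5 arrays with one mirroring the other, so effective space equals a single array's capacity, and the "about 10TB" matches the binary-units (TiB) number the OS will actually report:

```python
def raid5_usable_tb(drives, drive_tb):
    """Usable capacity of a RAID5 array: one drive's worth goes to parity."""
    return (drives - 1) * drive_tb

per_array = raid5_usable_tb(3, 6)      # 12 TB (decimal) per 3x6TB array
as_tib = per_array * 1e12 / 2**40      # about 10.9 TiB, as Windows reports it
```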

You'll also need an add-on RAID controller for sure to do this.  You can find Intel boards with 6 SATA ports wired to the Intel controller (they all have to be wired to the Intel controller if you want to use the motherboard's RAID), but then you won't have a port for the SSD for your OS.


----------



## bermel72 (Apr 16, 2016)

newtekie1 said:


> Yes, assuming you mean two 3x6TB RAID5 arrays.  Of course, in the end that will give you about 10TB of actual usable space.
> 
> You'll also need an add-on RAID controller for sure to do this.  You can find Intel boards with 6 SATA ports wired to the Intel controller (they all have to be wired to the Intel controller if you want to use the motherboard's RAID), but then you won't have a port for the SSD for your OS.



So a motherboard with 7 or 8 SATA 3 ports wouldn't work for software raid?


----------



## newtekie1 (Apr 17, 2016)

bermel72 said:


> So a motherboard with 7 or 8 SATA 3 ports wouldn't work for software raid?



If you are going to use Software RAID, that will be fine.  But I recommend against it.


----------



## bermel72 (Apr 17, 2016)

newtekie1 said:


> If you are going to use Software RAID, that will be fine.  But I recommend against it.



Any reason as to why?


----------



## newtekie1 (Apr 17, 2016)

bermel72 said:


> Any reason as to why?



Basically, the speed just isn't there.  Software RAID is slow.


----------



## bermel72 (Apr 17, 2016)

newtekie1 said:


> Basically, the speed just isn't there.  Software RAID is slow.



Ah. Well, a good RAID card is like $500; by the time I get this all figured out it'll be like 2 grand, which is a little much for me. I was looking at the $1000 to $1500 range.


----------



## newtekie1 (Apr 17, 2016)

bermel72 said:


> Ah. Well a good RAID card is like $500, by the time I get this all figured out it'll be like 2 grand, which is a little much for me. I was looking at the $1000 to $1500 range.



http://www.newegg.com/Product/Product.aspx?Item=N82E16816115096

http://www.newegg.com/Product/Product.aspx?Item=N82E16816115064

The Highpoint 2680 is a good starter RAID card that won't break the bank.  Add a couple of SFF-8087 to SATA cables and you've got a decent setup that will handle up to 8 drives in RAID for about $120.  You use that just for the data drives, and connect your OS SSD to the motherboard.

This will allow you to start with your 6 data drives and still give you some room to expand in the future by adding 2 more.  The 2680 also supports OCE (Online Capacity Expansion), so you can just attach another 6TB drive in the future and use the Highpoint GUI to add it to an array to expand that array's size.  It goes through a rebuild process to add the extra drive, and once that is done, the space is available.


----------



## bermel72 (Apr 17, 2016)

Hmmm...what's a good processor and motherboard for a decent raid setup, and how much ram would you recommend, I'll show you what I have so far.

PCPartPicker part list / Price breakdown by merchant

*CPU:* AMD FX-8350 4.0GHz 8-Core Processor  ($149.99 @ Newegg) 
*CPU Cooler:* NZXT Kraken X41 106.1 CFM Liquid CPU Cooler  ($94.59 @ NZXT) 
*Motherboard:* Asus M5A99FX PRO R2.0 ATX AM3+ Motherboard  ($107.99 @ Newegg) 
*Memory:* G.Skill Ripjaws X Series 8GB (2 x 4GB) DDR3-1866 Memory  ($48.99 @ Newegg) 
*Storage:* Samsung 850 EVO-Series 250GB 2.5" Solid State Drive  ($87.77 @ OutletPC) 
*Video Card:* EVGA GeForce GTX 750 Ti 2GB Video Card  ($104.99 @ NCIX US) 
*Case:* Fractal Design Define R5 (Black) ATX Mid Tower Case  ($89.99 @ Newegg) 
*Power Supply:* EVGA SuperNOVA P2 650W 80+ Platinum Certified Fully-Modular ATX Power Supply  ($74.99 @ NCIX US) 
*Wired Network Adapter:* Intel EXPI9301CTBLK 10/100/1000 Mbps PCI-Express x1 Network Adapter  ($27.99 @ Amazon) 
*Case Fan:* Fractal Design GP14-WT 68.4 CFM 140mm  Fan  ($11.99 @ NCIX US) 
*Case Fan:* Fractal Design GP14-WT 68.4 CFM 140mm  Fan  ($11.99 @ NCIX US) 
*Total:* $811.27
_Prices include shipping, taxes, and discounts when available_
_Generated by PCPartPicker 2016-04-16 22:17 EDT-0400_


----------



## taz420nj (Apr 17, 2016)

newtekie1 said:


> http://www.newegg.com/Product/Product.aspx?Item=N82E16816115096
> 
> http://www.newegg.com/Product/Product.aspx?Item=N82E16816115064
> 
> ...



LOL, Highpoint cards are junk.  They're called "fakeRAID" cards for a reason: they don't have an onboard RPU/XOR engine, they're basically just HBAs that offload all of the RAID operations to the CPU - and that's why they're dirt cheap new.  They are no better than software RAID performance-wise.

@bermel72 I already told you that there's nothing wrong with getting a used/decommissioned card off eBay.  On the last page I linked to a good one that'll cost you $60 including cables.  These are cards that cost $500 new a few years ago.
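For a sense of the work that XOR engine (or, on a fakeRAID card, the host CPU) actually does, here is a minimal Python illustration of RAID5-style parity and rebuild; real controllers do this across whole stripes at bus speed, this just shows the principle:

```python
def xor_parity(blocks):
    """Parity block: byte-wise XOR across all data blocks in a stripe."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

def rebuild_missing(parity, surviving):
    """XOR is its own inverse: parity XOR surviving blocks = the lost block."""
    return xor_parity([parity] + surviving)

stripe = [b"\x01\x02", b"\x10\x20"]
p = xor_parity(stripe)
assert rebuild_missing(p, [stripe[0]]) == stripe[1]  # lost block recovered
```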


----------



## newtekie1 (Apr 17, 2016)

bermel72 said:


> Hmmm...what's a good processor and motherboard for a decent raid setup, and how much ram would you recommend, I'll show you what I have so far.
> 
> PCPartPicker part list / Price breakdown by merchant
> 
> ...



Ditch the AMD.  This is better and cheaper:

PCPartPicker part list / Price breakdown by merchant

*CPU:* Intel Core i5-6600K 3.5GHz Quad-Core Processor  ($233.99 @ SuperBiiz) 
*CPU Cooler:* NZXT Kraken X41 106.1 CFM Liquid CPU Cooler  ($94.59 @ NZXT) 
*Motherboard:* ASRock Z170M Pro4S Micro ATX LGA1151 Motherboard  ($87.99 @ Newegg) 
*Memory:* G.Skill NT Series 8GB (2 x 4GB) DDR4-2400 Memory  ($32.99 @ Newegg) 
*Storage:* Samsung 850 EVO-Series 250GB 2.5" Solid State Drive  ($87.77 @ OutletPC) 
*Case:* Fractal Design Define R5 (Black) ATX Mid Tower Case  ($89.99 @ Newegg) 
*Power Supply:* EVGA SuperNOVA P2 650W 80+ Platinum Certified Fully-Modular ATX Power Supply  ($74.99 @ NCIX US) 
*Wired Network Adapter:* Intel EXPI9301CTBLK 10/100/1000 Mbps PCI-Express x1 Network Adapter  ($27.99 @ Amazon) 
*Case Fan:* Fractal Design GP14-WT 68.4 CFM 140mm  Fan  ($11.99 @ NCIX US) 
*Case Fan:* Fractal Design GP14-WT 68.4 CFM 140mm  Fan  ($11.99 @ NCIX US) 
*Total:* $754.28
_Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2016-04-16 22:31 EDT-0400_


----------



## bermel72 (Apr 17, 2016)

taz420nj said:


> LOL Highpoint cards are junk.  They're called "fakeRAID" cards for a reason.  They don't have an onboard RPU/XOR engine, they are basically just HBAs that offload all of the RAID operations to the CPU - and that's why they're dirt cheap new. They are no better than software RAID performance wise.
> 
> @bermel72 I already told you that there's nothing wrong with getting a used/decommissioned card off ebay.   On the last page I linked to a good one that'll cost you $60 including cables..  These are cards that cost $500 new a few years ago.



And I didn't say you were wrong, I just like to hear different options and see what suits my needs better. Currently tho I don't have an eBay account, and while I wouldn't be against making one, I'm weird about buying used things. Like I said tho, I'm not against it.


----------



## bermel72 (Apr 17, 2016)

newtekie1 said:


> Ditch the AMD.  This is better and cheaper:
> 
> PCPartPicker part list / Price breakdown by merchant
> 
> ...



Awesome! I wanted an Intel but I must have completely looked over that board. Nice. Now I'll have a 5820K, 6700K, and a 6600K going; a whole family of Intels, I think.


----------



## newtekie1 (Apr 17, 2016)

bermel72 said:


> And I didn't say you were wrong I just like to hear different options and see what suits my needs better. Currently tho I don't have an eBay account and while I wouldn't be against making an account I'm weird about buying used things. Like I said tho I'm not against it.



There is a reason the card he linked to is only $30.  I'll give you a hint, try to find the product page on the manufacturer's website.


----------



## bermel72 (Apr 17, 2016)

Hmmm, based on this video I think I will go with RAID 6.


----------



## taz420nj (Apr 17, 2016)

newtekie1 said:


> When the bottleneck varies, it does matter.  In the past, when it was recommended to not use more than a 2TB drive, we had RAID controllers that were slow as well as slow write speeds on the drives.  At the time it was a two fold problem.  Today, both problems have been greatly reduced.  How is this hard to understand?



Except, once again, write speeds are NOT increasing anywhere near fast enough to make a rebuild operation significantly faster - not to a degree that leaves the array out of danger.




> Oh you believe the spec sheets, how cute...  Go look at some actual benchmarks.  Too lazy to do that, no problem, I already did, because I actually inform myself before posting, here are some:
> 
> 2008 - Fastest Drive Tested is the 10,000RPM WD Velociraptor - 101MB/s
> WD20EADS - You claim 100MB/s - Reality = 78MB/s (Hey, didn't I say under 80MB/s?)
> 6TB WD RED - 175MB/s



While we're talking about "actual" benchmarks: what, you didn't think I'd notice your link for the 6TB Red is for a 7200RPM "Red Pro" drive that costs over $120 more each than the 5,400RPM Red you want the OP to buy?



Hmmmm...  Looks like the 5400's writes average out to 123.4MB/s.  Quite a bit short of your claim.





> You still want to say drives haven't gotten significantly faster over time?  Do you not consider almost 100MB/s, over double the speed, significant?  Because I do.



Yeah I just blew that argument out of the water.



> No according to what I've read here.  Waits for BS Backblaze link...


Oh, how cute, you had to go to some obscure French site to find someone who claims Seagate has "fewer returns".  Pardonnez-moi if I believe Backblaze, since they have tens of thousands of these drives in operation and know the rate at which they fail.



> Drives are meant to be used, and the life span is not greatly affected by a rebuild.  Also, hard drives don't have a limited number of reads/writes, so no, I'm not shortening the lifespan.



Every single operation the drive performs gets it closer to encountering an unrecoverable bit error (the entire basis of the problem with large drives in RAID5).  The second drive doesn't actually have to "fail" completely.  All that has to happen is for an unrecoverable bit error to be hit during the parity reconstruction, and the rebuild will crash - destroying the array.
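The scale of that risk can be estimated from the consumer-drive spec of one unrecoverable read error per 10^14 bits; this is a common datasheet figure, and actual field rates vary, so treat the sketch as an order-of-magnitude estimate:

```python
import math

URE_PER_BIT = 1e-14  # typical consumer-drive spec: 1 error per 1e14 bits read

def rebuild_ure_probability(surviving_drives, drive_tb):
    """Chance of hitting at least one URE while reading every surviving
    drive end-to-end during a RAID5 rebuild (Poisson approximation)."""
    bits_read = surviving_drives * drive_tb * 1e12 * 8
    return 1 - math.exp(-URE_PER_BIT * bits_read)

# Rebuilding a 6x6TB RAID5 means reading 5 full 6TB drives:
risk = rebuild_ure_probability(5, 6)   # roughly 0.9 at the spec-sheet rate
```

This is exactly why large-drive RAID5 gets the warnings it does, and why RAID6's second parity drive matters.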



> They did.  It is amazing what several years of increased hard drive write speeds and more powerful RAID controllers can do.  Again, I've already told you, both arrays were about 60% full.  I never said the two arrays were of the same capacity, the array of 2TB drives was significantly smaller.  Both were 3 drive arrays.



Wow.  Amazing.  So you are running with literally two-thirds overhead loss (a third of raw capacity to RAID parity, then half of what's left for the 1:1 backup) to store your movies??  LOL, good to know you can afford to piss money away like that on such a pointless setup.  Why not simply put it in RAID1?  Or don't use RAID at all, just copy it to two sets of drives?  At least that way you're only running with 50% overhead.

And sorry, but there's no way in hell or on Earth that it took the same amount of time to rebuild 2.5TB worth of data as it did to rebuild 8TB of data.  Even if the 75% increase in real-world write speed you're claiming were true (it's not, and last I checked it isn't "over double" either), it's nowhere near the roughly threefold speedup that feat would require.  Care to check your math?
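Checking the overhead arithmetic on a mirrored pair of RAID5 arrays; note the two losses compound rather than add, which is the sketch's whole point:

```python
def usable_fraction(drives_per_array):
    """Fraction of raw capacity left after RAID5 parity plus a 1:1 mirror
    of the whole array onto a second identical array."""
    after_parity = (drives_per_array - 1) / drives_per_array  # RAID5 loss
    return after_parity / 2                                   # mirror halves it

overhead = 1 - usable_fraction(3)   # about 0.67: two-thirds of raw capacity
```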



> I guess you're perfect?  You're telling me you've never accidentally deleted something?  Ever?  I call complete bullshit on that.


Of course I have. But my storage folders on the array have permissions in force that don't allow deletion.   To delete anything from there requires me to log into the server directly using a specific account.



> Now I know you are just a troll.  So I'm done here.



But at least I know what I'm talking about....


----------



## taz420nj (Apr 17, 2016)

newtekie1 said:


> There is a reason the card he linked to is only $30.  I'll give you a hint, try to find the product page on the manufacturer's website.


Uhh, duh, it's an old card.  Of course it won't be on their site.  But it'll still do what he needs for a fraction of what a new card costs.  This is a f'n media server, not an enterprise setup.


----------



## bermel72 (Apr 17, 2016)

taz420nj said:


> Uhh duh, it's an old card.  Of course it won't be on their site.  But it'll still do what he needs for a fraction of what a new card costs.  This is a f'n media server, not an enterprise setup.



...But my data is important...right?... The only major thing I care about is my backups, but it would be nice to have everything in redundancy. You know...reasons.


----------



## taz420nj (Apr 17, 2016)

bermel72 said:


> ...But my data is important...right?.... The only major thing I care about is my backups but it would be nice to have everything in redundancy. You know..reasons.



Of course it's important.  My comment was regarding the fact that you are not running this array on a heavily used business network with large, frequently accessed databases that would necessitate a brand new expensive top of the line RAID card in order to keep up with day to day operations.  You're using it to store media and play it back, which is for the most part going to be written once and then read.  It will only have to deal with a limited amount of throughput, so you can absolutely use something from last generation and it will more than suit your needs.


----------



## bermel72 (Apr 17, 2016)

taz420nj said:


> Uhh duh, it's an old card.  Of course it won't be on their site.  But it'll still do what he needs for a fraction of what a new card costs.  This is a f'n media server, not an enterprise setup.



Got a question for you @taz420nj. The thing is, I would rather have something new that comes with a warranty (I work for Vizio tech support and understand the importance of warranties and customer service). So with that being said, would this be okay?


----------



## bermel72 (Apr 17, 2016)

taz420nj said:


> Of course it's important.  My comment was regarding the fact that you are not running this array on a heavily used business network with large, frequently accessed databases that would necessitate a brand new expensive top of the line RAID card in order to keep up with day to day operations.  You're using it to store media and play it back, which is for the most part going to be written once and then read.  It will only have to deal with a limited amount of throughput, so you can absolutely use something from last generation and it will more than suit your needs.



Correct, I see my laughing faces didn't go through on the first part of that. That second part was serious tho.


----------



## taz420nj (Apr 17, 2016)

bermel72 said:


> Got a question for you @taz420nj so the thing is I would rather have something new that comes with a warranty (I work for vizio tech support and understand the importance of warranties and customer service). So with that being said would this be okay?


No.  That's just a SATA card. You don't want anything that mentions "HBA" (Host Bus Adapter).  It needs to be an actual RAID card.  Those are the ones that cost $500+ new.

As far as a warranty in this case, you'd need to have a used card fail 10-20 times (depending on the price) to even come close to the price of buying a new card that comes with a 1 year warranty.  Do cards fail? Sure. But with enterprise cards it'll more likely be from something like a power surge or physically broken connector than it just 'dying'.  I haven't had one die yet.

Another example you can relate to, since you mentioned you work for Vizio:  I have a TV on my patio (under a roof, but outside nonetheless).  An actual "outdoor" TV that is waterproof, dustproof, etc. costs about $2,000.  Instead I bought a $350 Vizio, knowing full well that it may not survive.  But even if I have to replace it every other year or so, it'll still take over 10 years for it to cost the $2,000 an actual outdoor TV would've cost.  And you know what?  It's been there for over 3 years and still works great.


----------



## bermel72 (Apr 17, 2016)

taz420nj said:


> No.  That's just a SATA card. You don't want anything that mentions "HBA" (Host Bus Adapter).  It needs to be an actual RAID card.  Those are the ones that cost $500 new.
> 
> As far as a warranty in this case, you'd need to have a used card fail 10-20 times (depending on the price) to even come close to the price of buying a new card that comes with a 1 year warranty.  Do cards fail? Sure. But with enterprise cards it'll more likely be from something like a power surge or physically broken connector than it just 'dying'.  I haven't had one die yet.
> 
> Another example you can relate to since you mentioned you work for Vizio..  I have a TV on my patio (under a roof but outside nonetheless)..  An actual "outdoor" TV that is waterproof, dustproof, etc costs about $2,000.  Instead I bought a $350 Vizio, knowing full well that it may not survive.. But even if I have to replace it every other year or so, it'll still take over 10 years for it to cost the $2,000 an actual outdoor TV would've cost..  And know what?  It's been there for over 3 years and still works great.



Well, thanks for that - if you have any problems with it, let me know...haha...you've got your own personal rep for Vizio TVs. Anyways, I'll check it out and see what I can come up with. I guess I never really thought of it that way; eBay doesn't seem so bad. So as long as it doesn't say HBA and it's an LSI, it's good? Cool cool.


----------



## taz420nj (Apr 17, 2016)

bermel72 said:


> Well thanks for that, if you have any problems with it let me know...haha...you got your own personal rep for Vizio TVs. Anyways I'll check it out and see what I can come up with. I guess I never really thought of it that way, eBay doesn't seem so bad, so as long as it doesn't say HBA in it and it's an LSI it's good? Cool cool.



Haha, thanks!  The one I linked you to is what you should use.  It's dirt cheap, it's got 12 ports (so you'll still have room to add more drives if you use a bigger case), it comes with the battery backup unit, and it was popular enough that you won't have a problem finding another if you do happen to have a failure.  I use the 8-port version myself and I know for a fact it will handle what you're going to do with it.  I've had mine running 10 simultaneous streams (2 transcoded, 8 direct play) while downloading from my seedbox at 50Mbps and unRARing, without a hiccup.


----------



## bermel72 (Apr 17, 2016)

taz420nj said:


> Haha thanks!   The one I linked you to is what you should use.  It's dirt cheap, it's got 12 ports so you will still have room to add more drives if you use a bigger case, it comes with the battery backup unit, and it's one that was very popular so you won't have a problem finding a new one if you do happen to have a failure. I use the 8 port version myself and I know for a fact it will handle what you're going to do with it.  I've had mine running 10 simultaneous streams (2 transcoded, 8 direct play) while downloading from my seedbox at 50Mbps and unRARing, without a hiccup.



Holy crap...alright, all bets are off...going on a hunt for that card. 10 streams? That's amazing; that means every device in my house could be streaming something and it wouldn't even hiccup. Impressive. I'm debating running a monitor on the i5 6600K setup, just at simple 1080p, and having a dedicated monitor for my server (it's a new toy so I need to play with it), or else getting a basic graphics card that can support 1440p and switching inputs on my monitor. Would the integrated graphics on the 6600K support this, or would you recommend something like a 780 Ti for $100?


----------



## taz420nj (Apr 17, 2016)

bermel72 said:


> Holy crap...alright all bets are off....going on a hunt for that card. 10 streams? That's amazing, that means every device in my house could be streaming something and it wouldn't even hiccup. Impressive, I am debating on running a monitor on an i5 6600K setup, just at a simple 1080p and having a dedicated monitor for my server (it's a new toy so I need to play with it) either that or getting a basic graphics card that can support 1440p and switching inputs on my monitor, would the integrated graphics on the 6600K support this or would you recommend something like a 780 Ti for $100?



LOL yeah that was something that I set up specifically to stress it, it's not the normal scenario.  But your HD streams will be somewhere in the 10-15Mbps range. 10 streams will be 150Mbps - the card and drives can handle 6 times that throughput.
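That throughput claim is easy to sanity-check; the sketch assumes 15 Mbps per HD stream, the upper end of the range quoted above:

```python
def aggregate_mbps(streams, per_stream_mbps=15):
    """Total network/disk load from N simultaneous direct-play HD streams."""
    return streams * per_stream_mbps

load = aggregate_mbps(10)   # 150 Mbps for ten HD streams
as_mb_s = load / 8          # under 19 MB/s of mostly-sequential reads,
                            # a small fraction of what the array sustains
```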

On a server that is going to run mostly headless, I honestly wouldn't bother with anything but the onboard graphics.  It'll definitely support 1080p; not sure about 1440p, but honestly that's not necessary for a server.


----------



## bermel72 (Apr 17, 2016)

taz420nj said:


> LOL yeah that was something that I set up specifically to stress it, it's not the normal scenario.  But your HD streams will be somewhere in the 10-15Mbps range. 10 streams will be 150Mbps - the card and drives can handle 6 times that throughput.
> 
> On a server that is going to run mostly headless, I honestly wouldn't bother with anything but the onboard graphics.   It'll definitely support 1080p, not sure about 1440p but honestly that's not necessary for a server.



Eh maybe I'll just install teamviewer and be done with it.


----------



## taz420nj (Apr 17, 2016)

One other thing, just to be clear: any stream that requires transcoding (remote streams, or clients that can't direct play a particular file) will still tax your CPU.  Plex recommends a PassMark score of at least 1500 per transcoded HD stream, on top of everything else the computer will be doing.  So with the 6600K I wouldn't recommend more than 3 transcodes running simultaneously, because that's over 50% of the CPU's power (for comparison, my server is dual Xeons with a total PassMark close to the 6600K's - I think it's around 7400).  Streams that can be direct-played by the client (such as an MKV on the Vizio Plex app) don't have that requirement, so you can literally run as many as your network can support.
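Plex's sizing guideline can be expressed as a simple budget. This is a sketch: the 1500-PassMark-per-1080p-transcode figure is the rule of thumb cited above, and the reserve fraction is an assumed knob for "everything else the box is doing":

```python
PASSMARK_PER_HD_TRANSCODE = 1500  # rough Plex guideline per 1080p transcode

def max_transcodes(cpu_passmark, reserve_fraction=0.5):
    """Simultaneous HD transcodes that fit in the CPU budget, holding back
    reserve_fraction of the PassMark score for other work."""
    usable = cpu_passmark * (1 - reserve_fraction)
    return int(usable // PASSMARK_PER_HD_TRANSCODE)

max_transcodes(7400)        # 2 with half the CPU held in reserve
max_transcodes(7400, 0.0)   # 4 flat-out
```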


----------



## bermel72 (Apr 17, 2016)

taz420nj said:


> One other thing just to be clear, any stream that requires transcoding (remote streams, or clients that can't direct play a particular file) will still tax your CPU.  Plex recommends a passmark score of at least 1500 per transcoded HD stream in addition to everything else the computer will be doing.  So with the 6600K I wouldn't recommend more than 3 transcodes running simultaneously because that's over 50% of the CPU's power (for comparison my server is dual Xeons with a total Passmark close to the 6600K, I think it's around 7400). Streams that can be direct-played by the client (such as an MKV on the Vizio Plex app) don't have that requirement so you can literally run as many as your network can support.



PCPartPicker part list / Price breakdown by merchant

*CPU:* Intel Core i5-6600K 3.5GHz Quad-Core Processor  ($244.99 @ Newegg)
*CPU Cooler:* NZXT Kraken X41 106.1 CFM Liquid CPU Cooler  ($94.59 @ NZXT)
*Motherboard:* Asus SABERTOOTH Z170 MARK 1 ATX LGA1151 Motherboard  ($209.99 @ Amazon)
*Memory:* Kingston Savage 8GB (2 x 4GB) DDR4-2800 Memory  ($59.99 @ Newegg)
*Storage:* Samsung 850 EVO-Series 250GB 2.5" Solid State Drive  ($87.99 @ Amazon)
*Storage:* Western Digital Red 2TB 3.5" 5400RPM Internal Hard Drive  ($86.99 @ Amazon)
*Storage:* Western Digital Red 2TB 3.5" 5400RPM Internal Hard Drive  ($86.99 @ Amazon)
*Storage:* Western Digital Red 2TB 3.5" 5400RPM Internal Hard Drive  ($86.99 @ Amazon)
*Storage:* Western Digital Red 2TB 3.5" 5400RPM Internal Hard Drive  ($86.99 @ Amazon)
*Storage:* Western Digital Red 2TB 3.5" 5400RPM Internal Hard Drive  ($86.99 @ Amazon)
*Storage:* Western Digital Red 2TB 3.5" 5400RPM Internal Hard Drive  ($86.99 @ Amazon)
*Storage:* Western Digital Red 2TB 3.5" 5400RPM Internal Hard Drive  ($86.99 @ Amazon)
*Storage:* Western Digital Red 2TB 3.5" 5400RPM Internal Hard Drive  ($86.99 @ Amazon)
*Case:* Fractal Design Define R5 (Black) ATX Mid Tower Case  ($89.99 @ Newegg)
*Power Supply:* EVGA SuperNOVA P2 650W 80+ Platinum Certified Fully-Modular ATX Power Supply  ($74.99 @ NCIX US)
*Wired Network Adapter:* Intel EXPI9301CTBLK 10/100/1000 Mbps PCI-Express x1 Network Adapter  ($27.99 @ Amazon)
*Case Fan:* Fractal Design GP14-WT 68.4 CFM 140mm  Fan  ($11.99 @ NCIX US)
*Case Fan:* Fractal Design GP14-WT 68.4 CFM 140mm  Fan  ($11.99 @ NCIX US)
*Other:* AMCC 9650SE-12/16ML 9650SE-12ML 12-Port PCIe SATA II Raid Controller w/ Cables  ($29.99)
*Other:* 2x Mini 10Gbps SAS SFF-8087 36Pin to 4 SATA 7Pin HDD Hard Drive Splitter Cable  ($15.20)
*Total:* $1655.61
_Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2016-04-17 01:18 EDT-0400_

It's going to be set up in RAID 6, so I'd lose two drives' worth, leaving 14TB usable and 4TB for parity. Is this a safe setup, or would you recommend an i7? I don't think I'll ever be streaming to more than 3 devices that need transcoding. Everything will be running through the PS4 Plex app, at least on TVs (2 TVs, 2 PS4s), and the only time it should have to transcode is when streaming to a phone, tablet, laptop, etc. I took your idea about the 2TB drives; that will be plenty for a few years yet. I also went with a really expensive motherboard (Asus Sabertooth) for its claims about being extremely reliable and robust.
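A quick check of the RAID 6 parity arithmetic for the 9x2TB drive list above (sketch in decimal TB, before filesystem overhead):

```python
def raid6_usable_tb(drives, drive_tb):
    """RAID6 keeps dual parity: two drives' worth of capacity is reserved."""
    assert drives >= 4, "RAID6 needs at least 4 drives"
    return (drives - 2) * drive_tb

raid6_usable_tb(9, 2)   # 14 TB usable, 4 TB consumed by parity
```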

Edit:
Also, would you recommend Windows 10 Home/Pro or Windows 7 Ultimate? I already have Windows 7, but I am okay with purchasing Windows 10 Home/Pro; I would rather use Windows 10 if possible.


----------



## newtekie1 (Apr 17, 2016)

taz420nj said:


> But at least I know what I'm talking about....



The guy who quoted spec sheets, and has obviously never actually done rebuilds on large arrays and recommends an almost 10 year old RAID card that doesn't even have proper driver support, or a proper place to download the driver says he knows what he is talking about... HA!



taz420nj said:


> Uhh duh, it's an old card. Of course it won't be on their site. But it'll still do what he needs for a fraction of what a new card costs. This is a f'n media server, not an enterprise setup.



It doesn't matter if it is old; the support from the manufacturer is clearly not there.  And if the hardware is good, it doesn't matter that it is old, it will still be on the manufacturer's website for the people still using it.  That is why these cards are selling for dirt cheap.  It is hard to even find the driver for the damn thing, and it will probably only get harder as time goes by.  Once a manufacturer removes a product completely from its website, like it never even existed, that is probably a good sign the hardware has reached the end of its useful life.  Yeah, you can still do some digging on their website and find the driver download, but it took far too long to find it already, and who knows how long it will be available.  So in a couple of years, when you go to re-install Windows, oops, no more driver.  Sure, you can go buy another cheap card off eBay, _but_ then you run into the main drawback of hardware RAID: because the array was built on that old card, you have to find one that is compatible with it, or lose all your data and rebuild the array from scratch.

Also, because it is an almost 10-year-old design, it's slow.  It uses a very weak PowerPC-based processor, which means RAID 5/6 performance is horrible.  Ironically, I've actually used that card back in the day.  For its time, it was good.  But the biggest problem was RAID 5/6 write performance; the processor they used was simply too weak.  The only saving grace was the onboard cache, so if you enabled write-back, then write operations were good enough as long as you didn't fill up the cache.  But with only 256MB, that fills up pretty quickly.

The Highpoint card I recommended might use the main CPU to do the RAID calculations, but so what?  The CPUs in computers have come a long way in 10 years.  Back when the 9650SE was popular, we were still on single-core, or if you were really ballin' dual-core, Netburst processors.  They needed all the CPU power they had for other tasks; they couldn't spare cycles for RAID, and the RAID calculations would load up a Netburst core pretty hard.  But today, it is completely different.  We are talking about putting a quad-core Skylake chip in his machine.  It has more than enough power to do RAID calculations without even breaking a sweat.  In fact, I know for a fact (because I actually have experience on the subject) that the Highpoint card will easily do sustained writes to a RAID 5 array of over 100MB/s, which for a server on a gigabit network is easily fast enough.  I also know that the 9650SE will do the same, but only until the 256MB cache fills up; then the write speeds drop down to the ~20MB/s mark.  That's how slow it is at actually writing to the RAID 5 array.
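To put the write-back cache point in perspective, here is a rough back-of-the-envelope sketch (the ~100MB/s sustained and ~20MB/s post-cache numbers are the estimates from this post, not benchmark results):

```python
def seconds_until_cache_full(cache_mb, incoming_mbps, drain_mbps):
    """Time until a write-back cache fills when data arrives faster
    than the array can drain it; after that, writes run at drain speed."""
    if incoming_mbps <= drain_mbps:
        return float("inf")  # the cache never fills
    return cache_mb / (incoming_mbps - drain_mbps)

# 256MB cache, gigabit-speed incoming writes (~110MB/s),
# ~20MB/s RAID 5 drain speed on the old card:
print(round(seconds_until_cache_full(256, 110, 20), 1))  # -> 2.8 seconds
```

So a large file copy over the network outruns the 256MB cache in under three seconds, after which writes crawl at the array's raw RAID 5 speed.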

If you are going to recommend a card off eBay, and there is nothing wrong with that, recommend something worth using. You can find good Adaptec cards, like this one, on eBay for not much money.  My recommendation of the Highpoint was just a recommendation of something inexpensive that was new, and I only recommended it to replace the onboard controller, because his build was filling up all the Intel ports with hard drives and leaving no port for his SSD. At the time we were talking about 6x6TB drives, 2 arrays of 3 drives, with a 1:1 backup.  The Highpoint would let him put all the hard drives he wanted at the time on the RAID card, and put the SSD on the Intel ports.

It is bad advice to recommend the card you did, and it shows you don't know what you are talking about.  All of your recommendations so far have shown how little knowledge you have on the subject, actually...



bermel72 said:


> Going to be set up in RAID 6, so I'd lose two drives' worth of capacity, leaving 12TB usable and 4TB going to parity. Is this a safe setup, or would you recommend an i7? I don't think I'll ever be streaming to more than 3 devices that need transcoding. Everything will run through the PS4 Plex app, at least on the TVs (2 TVs, and I have 2 PS4s), and the only time it should have to transcode is when streaming to a phone, tablet, laptop, etc. I took your idea about the 2TB drives; that will be plenty for a few years yet. I also went with a really expensive motherboard (Asus Sabertooth) for its claims of being extremely reliable and robust.
> 
> Edit:
> Also, would you recommend Windows 10 Home/Pro or Windows 7 Ultimate? I already have Windows 7, but I am okay with purchasing Windows 10 Home/Pro; I would rather use Windows 10 if possible.



Don't use 2TB drives.  There is no point, and you leave yourself no room for expansion.  If anything, get 5x4TB drives.  Price-wise, it works out about the same, but it gives you 3 open spots for adding drives in the future.

And, I've already explained why not to go with the AMCC card.  It's total junk.  The Adaptec I posted above would be better if you want a complete hardware RAID.

Finally, I would use Win10 Pro.


----------



## bermel72 (Apr 17, 2016)

newtekie1 said:


> The guy who quoted spec sheets, and has obviously never actually done rebuilds on large arrays and recommends an almost 10 year old RAID card that doesn't even have proper driver support, or a proper place to download the driver says he knows what he is talking about... HA!
> 
> 
> 
> ...



Damn, that was a lot to read, but I got through it all. Alright, well, I've already decided that I want RAID 6 on hardware RAID; if what you say is true, then replacing the card down the line would be easier. I'm not out to start a war between members, I just want some solid advice on the performance between the two so I can make a decision from there.


----------



## newtekie1 (Apr 17, 2016)

bermel72 said:


> Damn, that was a lot to read, but I got through it all. Alright, well, I've already decided that I want RAID 6 on hardware RAID; if what you say is true, then replacing the card down the line would be easier. I'm not out to start a war between members, I just want some solid advice on the performance between the two so I can make a decision from there.



If you are cool with buying off ebay, then the Adaptec card I posted is a good start.  It has the features you need and is a good performer.


----------



## bermel72 (Apr 17, 2016)

newtekie1 said:


> If you are cool with buying off ebay, then the Adaptec card I posted is a good start.  It has the features you need and is a good performer.



Not really. I need something that's less than $100, though maybe the one that was suggested earlier from Newegg. I'd just rather buy something new, but I don't really want to spend $500 on a single card. I don't even have an eBay account.


----------



## newtekie1 (Apr 17, 2016)

The card I posted earlier from Newegg doesn't do RAID 6, only RAID 5. The next model up does, but it is over $100. 

http://www.newegg.com/Product/Product.aspx?Item=N82E16816115100


----------



## bermel72 (Apr 18, 2016)

Hmmm... found one for $200 that I might go with.


----------



## bermel72 (Apr 20, 2016)

taz420nj said:


> One other thing just to be clear, any stream that requires transcoding (remote streams, or clients that can't direct play a particular file) that will still tax your CPU.  Plex recommends a passmark score of at least 1500 per transcoded HD stream in addition to everything else the computer will be doing.  So with the 6600k I wouldn't recommend more than 3 transcodes running simultaneously because that's over 50% of the CPU's power (for comparison my server is dual Xeons with a total Passmark close to the 6600k, I think it's around 7400). Streams that can be direct-played by the client (such as an MKV on the Vizio Plex app) don't have that requirement so you can literally run as many as your network can support.




PCPartPicker part list / Price breakdown by merchant

*CPU:* Intel Core i7-5820K 3.3GHz 6-Core Processor  ($394.98 @ Newegg)
*CPU Cooler:* NZXT Kraken X61 106.1 CFM Liquid CPU Cooler  ($119.99 @ Amazon)
*Motherboard:* EVGA Classified EATX LGA2011-3 Motherboard  ($275.99 @ Newegg)
*Memory:* G.Skill Ripjaws V Series 32GB (4 x 8GB) DDR4-3200 Memory  ($169.99 @ Newegg)
*Storage:* Samsung 850 EVO-Series 500GB 2.5" Solid State Drive  ($149.99 @ Newegg)
*Storage:* Western Digital Red 4TB 3.5" 5900RPM Internal Hard Drive  ($149.99 @ Newegg)
*Storage:* Western Digital Red 4TB 3.5" 5900RPM Internal Hard Drive  ($149.99 @ Newegg)
*Storage:* Western Digital Red 4TB 3.5" 5900RPM Internal Hard Drive  ($149.99 @ Newegg)
*Storage:* Western Digital Red 4TB 3.5" 5900RPM Internal Hard Drive  ($149.99 @ Newegg)
*Storage:* Western Digital Red 4TB 3.5" 5900RPM Internal Hard Drive  ($149.99 @ Newegg)
*Storage:* Western Digital Red 4TB 3.5" 5900RPM Internal Hard Drive  ($149.99 @ Newegg)
*Video Card:* Asus Radeon R9 390X 8GB Video Card  ($399.98 @ Newegg)
*Case:* Phanteks Enthoo Series Primo Aluminum ATX Full Tower Case  ($219.99 @ Newegg)
*Power Supply:* EVGA SuperNOVA P2 850W 80+ Platinum Certified Fully-Modular ATX Power Supply  ($109.99 @ NCIX US)
*Optical Drive:* LG WH16NS40 Blu-Ray/DVD/CD Writer  ($59.98 @ Newegg)
*Operating System:* Microsoft Windows 10 Home Full - USB (32/64-bit)  ($119.99 @ Amazon)
*Case Fan:* Phanteks PH-F140SP_BK 82.1 CFM 140mm  Fan  ($16.99 @ Newegg)
*Case Fan:* Phanteks PH-F140SP_BK 82.1 CFM 140mm  Fan  ($16.99 @ Newegg)
*Case Fan:* Phanteks PH-F140SP_BK 82.1 CFM 140mm  Fan  ($16.99 @ Newegg)
*Case Fan:* Phanteks PH-F140SP_BK 82.1 CFM 140mm  Fan  ($16.99 @ Newegg)
*Case Fan:* Phanteks PH-F140SP_BK_RLED 82.1 CFM 140mm  Fan  ($20.98 @ Newegg)
*Case Fan:* Phanteks PH-F140SP_BK_RLED 82.1 CFM 140mm  Fan  ($20.98 @ Newegg)
*Case Fan:* Fractal Design GP12-WT 52.3 CFM 120mm  Fan  ($11.99 @ Amazon)
*Case Fan:* Fractal Design GP12-WT 52.3 CFM 120mm  Fan  ($11.99 @ Amazon)
*Monitor:* BenQ XL2730Z 144Hz 27.0" Monitor  ($509.99 @ Amazon)
*Headphones:* Kingston HyperX Cloud II 7.1 Channel Headset  ($99.00 @ Amazon)
*Other:* 3WARE Cable, 1 Unit Of 1 Meter Multi-lane Internal (SFF-8087) Serial Ata Breakou  ($19.39)
*Other:*  3WARE Cable, 1 Unit Of 1 Meter Multi-lane Internal (SFF-8087) Serial Ata Breakou  ($19.39)
*Other:*  LSI Logic Megaraid Eight-Port 6Gb/s PCI Express 3.0 SATA+SAS RAID Controller LSI00330  ($449.99)
*Total:* $4144.47
_Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2016-04-19 22:28 EDT-0400_

I combined my server and main computer all into one; it will literally just run 24/7, which is fine, and it'll save room. Do you think this will work for a RAID 6 setup?
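Taz's rule of thumb quoted above can be turned into a quick estimate (1500 Passmark points per HD transcode is the guideline as quoted; the idea of reserving a fraction of the CPU for everything else the machine does is my assumption):

```python
def max_transcodes(passmark, per_stream=1500, reserve_fraction=0.3):
    """Estimate simultaneous HD transcodes, keeping a fraction of
    the CPU free for the OS and other tasks."""
    available = passmark * (1 - reserve_fraction)
    return int(available // per_stream)

# taz420nj's dual Xeons, roughly comparable to a 6600K (~7400 Passmark):
print(max_transcodes(7400))  # -> 3, matching his "no more than 3" advice
```

Direct-play streams skip this budget entirely, since the server just shuffles bytes without re-encoding.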


----------



## n-ster (Apr 20, 2016)

Idk why you have so many fans, but I'd suggest http://m.newegg.com/Product?ItemNumber=35-709-027&iscoz=true for PWM and acoustics for all your 140mm needs (well, it has 120mm screw holes, so be careful about compatibility). The XP versions are okay too and have 140mm holes. 

A Silent Base 800 or similar could probably do the job just fine in terms of cases, no? 

The X61 Kraken is great, though I personally am a fan of air coolers for 24/7 use, with the Dark Rock Pro 3 and PH-TC14PE being my favourite options. 

I'm also just assuming that lower noise is better; if it isn't, then my recommendations would change.


----------



## bermel72 (Apr 25, 2016)

The Passmark rating is about 9k on this processor, and it will need to handle about 3 live transcodes at most within my home network. Also, with the file system FreeNAS uses, does it need a dedicated drive for mirroring like RAID would? I did some "light" reading on the file system, and from what I understand it does not. Most likely I would want to use RAID-Z2 or RAID-Z3 for redundancy. Any suggestions would be wonderful.

PCPartPicker part list / Price breakdown by merchant

*CPU:* Intel Xeon E3-1231 V3 3.4GHz Quad-Core Processor  ($241.80 @ Amazon)
*CPU Cooler:* Noctua NH-D15 82.5 CFM CPU Cooler  ($88.49 @ Amazon)
*Motherboard:* Supermicro X10SL7-F Micro ATX LGA1150 Motherboard  ($201.98 @ Newegg)
*Memory:* Crucial 32GB (2 x 16GB) Registered DDR3-1600 Memory  ($185.91 @ Amazon)
*Storage:* Samsung 850 EVO-Series 250GB 2.5" Solid State Drive  ($84.99 @ NCIX US)
*Storage:* Western Digital Red 6TB 3.5" 5400RPM Internal Hard Drive  ($239.99 @ Amazon)
*Storage:* Western Digital Red 6TB 3.5" 5400RPM Internal Hard Drive  ($239.99 @ Amazon)
*Storage:* Western Digital Red 6TB 3.5" 5400RPM Internal Hard Drive  ($239.99 @ Amazon)
*Case:* NZXT H440 (Matte Black) ATX Mid Tower Case  ($109.37 @ Amazon)
*Power Supply:* EVGA SuperNOVA GS 550W 80+ Gold Certified Fully-Modular ATX Power Supply  ($74.99 @ Amazon)
*Total:* $1707.50
_Prices include shipping, taxes, and discounts when available_
_Generated by PCPartPicker 2016-04-24 21:51 EDT-0400_
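On the RAID-Z question above: ZFS doesn't use a dedicated parity drive; parity is striped across all drives in the vdev, and usable space works out to roughly (n - parity level) drives. A minimal sketch (approximate; ZFS padding and metadata overhead are ignored):

```python
def raidz_usable_tb(num_drives, drive_tb, parity_level):
    """Approximate usable capacity of a RAID-Z vdev.
    parity_level: 1 = RAID-Z1, 2 = RAID-Z2, 3 = RAID-Z3."""
    if num_drives <= parity_level:
        raise ValueError("need more drives than parity disks")
    return (num_drives - parity_level) * drive_tb

# 3 x 6TB in RAID-Z2, as in the part list above:
print(raidz_usable_tb(3, 6, 2))  # -> 6 (TB, roughly)
```

Note that RAID-Z2 on only 3 drives leaves just one drive's worth of usable space; wider vdevs make the parity overhead proportionally smaller.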


----------

