# RAID6: Samsung or WD?



## Albuquerque (May 7, 2012)

Your opinion counts 

I'm building a new 2008 R2 server that will act as a VM host for a plethora of boxes, including a WHS 2011 instance, another 2008 R2 instance for hosting some online games (MineCraft, some old Telnet games), and some other nonsense. Because the box will be running 24/7 and will only occasionally get "busy", I'm going to spec it out with equipment that has good idle power characteristics.  And because it will be serving as the backup instance for all my other home Windows devices thanks to WHS, I need to make sure that data doesn't go off the deep end.

I'm going to stack it all up using laptop drives: a pair of WD Scorpio Black 320GB drives in RAID1 + Z77 caching SSD for the OS and apps volume, and then eight 1TB 9.5mm laptop drives all connected to a Highpoint 2720 SGL in RAID6 for the data volume.

The question is: which 1TB laptop drives to use? You decide


----------



## Solaris17 (May 7, 2012)

which one do you like better?


----------



## Disparia (May 7, 2012)

The cheapest ones? 

Seems a bit elaborate for what you need. Not that I'm against such things, but sometimes the more practical solution wins out - like 5-6 drives on the motherboard controller. When that's outlived, then I'd expand to another controller.


----------



## Albuquerque (May 7, 2012)

Motherboard (Z77 Mini-ATX) controller will be 'busy' doing a pair of WD Scorpio Black 320GB drives in RAID1 along with an SSD for caching.

Want to talk about elaborate? Look at my system specs


----------



## Aquinus (May 7, 2012)

Jizzler said:


> The cheapest ones?
> 
> Seems a bit elaborate for what you need. Not that I'm against such things, but sometimes the more practical solution wins out - like 5-6 drives on the motherboard controller. When that's outlived, then I'd expand to another controller.



Just make sure they support some form of TLER.

If you have the money and want an enterprise drive solution consider WD RE4 as they're built specifically for RAID.
http://wdc.com/en/products/products.aspx?id=30



Albuquerque said:


> Motherboard (Z77 Mini-ATX) controller will be 'busy' doing a pair of WD Scorpio Black 320GB drives in RAID1 along with an SSD for caching
> 
> Want to talk about elaborate? Look at my system specs



It's not that elaborate. How fast does the SSD RAID go? My two Force GTs hit 1GB/s easy.


----------



## Cotton_Cup (May 7, 2012)

Well, just grab either; both are good. But if you're going to RAID them, Samsung might be the better choice (not sure about this; people say Western Digital has some RAID problems, but I can't confirm that).


----------



## Disparia (May 7, 2012)

Albuquerque said:


> Motherboard (Z77 Mini-ATX) controller will be 'busy' doing a pair of WD Scorpio Black 320GB drives in RAID1 along with an SSD for caching
> 
> Want to talk about elaborate? Look at my system specs



Oh silly me, I should have gathered from your post that you wanted a Rube Goldberg approach to your server. Sometimes it needs to be spelled out for me to catch on. 




Aquinus said:


> Just make sure they support some form of TLER.
> 
> If you have the money and want an enterprise drive solution consider WD RE4 as they're built specifically for RAID.
> http://wdc.com/en/products/products.aspx?id=30



I will... when it matters. Not so much here.


----------



## Albuquerque (May 7, 2012)

What part of "laptop drives" did the two of you miss?  

And as for Rube Goldberg?  In what way?  RAID1 + Caching for OS and apps, and a RAID6 volume for data via an 8-port SAS RAID card.  Unless of course it's a "Rube Goldberg" approach because I'm not using twenty 15,000RPM enterprise SCSI drives plumbed into my own $10,000 SAN device that feeds a $4000 5U racked server?

No.  I'm building a server for *home* with *good idle power characteristics* which will be based on *commodity hardware* using *low power devices* such as *laptop drives*.    Not a big ask, if you look around...


----------



## ERazer (May 7, 2012)

Have you Googled TLER, if you're serious about RAID?

NM, you want laptop drives.

I'd go with the WD Blues then; I've never had a bad experience with WD.


----------



## Disparia (May 7, 2012)

Albuquerque said:


> And as for Rube Goldberg?  In what way?  RAID1 + Caching for OS and apps, and a RAID6 volume for data via an 8-port SAS RAID card.  Unless of course it's a "Rube Goldberg" approach because I'm not using twenty 15,000RPM enterprise SCSI drives plumbed into my own $10,000 SAN device that feeds a $4000 5U racked server?



No... that would be even more elaborate than your current plan, to the point of being needlessly excessive. Perhaps we were thinking of two different Rube Goldbergs.




Albuquerque said:


> No.  I'm building a server for *home* with *good idle power characteristics* which will be based on *commodity hardware* using *low power devices* such as *laptop drives*.    Not a big ask, if you look around...



Exactly. You have simple needs which is why I gave a simple setup suggestion. If you have other reasons for planning it out as you have then by all means, continue on. It can even be "because I want to" - I've used that plenty of times for doing things the way I do.


----------



## Albuquerque (May 7, 2012)

Your suggestion doesn't work within the guidelines I have provided (there are no Mini-ATX motherboards that support eight RAID ports, and I'm not aware of ANY that support RAID6), so please provide another one.


----------



## W1zzard (May 7, 2012)

get the cheapest ones as raid 6 will provide plenty of protection against hdd failure


----------



## Albuquerque (May 7, 2012)

Cheapest is probably the Sammy, but only by like $2 depending on where they're purchased from (Amazon for the WD, vs NewEgg for the Sammy.)  I posted this same question on another forum that I am a long-time member of, and everyone there is leaning towards WD as well.


----------



## Solaris17 (May 7, 2012)

Albuquerque said:


> Cheapest is probably the Sammy, but only by like $2 depending on where they're purchased from (Amazon for the WD, vs NewEgg for the Sammy.)  I posted this same question on another forum that I am a long-time member of, and everyone there is leaning towards WD as well.



WD is a very old company, but any HDD is prone to failure. Honestly, between the Sammy and the WD I don't think either is more susceptible to failure than the other. Just go for the one that's cheaper.


----------



## dhdude (May 7, 2012)

I'd go WD as those Scorpio Blues are amazing drives for the money, and I've had a few problems with a few recent Samsung drives in RAID


----------



## theeldest (May 9, 2012)

The only reason I'd recommend against the Western Digital is the lack of RAID support in their non-RE series drives.

But I'm not sure if this extends to their 2.5" drives.

Do you need laptop drives, or just 2.5" drives? (i.e. are 15mm 2.5" drives going to fit in your enclosure, or do we need to stick to 9.5mm?)


----------



## Sasqui (May 9, 2012)

Do the WD Blue laptop drives support RAID?  I've read plenty of complaints that the desktop Black versions don't...


----------



## FordGT90Concept (May 9, 2012)

Neither.  Seagate Constellation or Western Digital XE.

If you're looking to save money, switch to 3.5" drives and look at the Western Digital RE4 and Seagate Constellation ES drives.


----------



## theeldest (May 9, 2012)

OK, let's get some concrete power numbers. I'm going to look at idle power, as I don't know if that controller will let the drives go to standby.

*Summary*
WD Blue: 0.59 Watts
Seagate 7200rpm: 2.95 Watts (15mm z-height)
Samsung: 0.7 Watts


*My thoughts*
Depending on how much load you actually put on the storage during peak usage the upgrade from 5400 to 7200 may be worth your while. Even at about 3 watts per drive, you end up better than 3.5" drives (5 3.5" 5400 rpm drives in RAID6 = 25.5 watts @ idle; 8 2.5" 7200rpm drives in RAID6 = 23.6 watts).
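The arithmetic above is easy to replay; here's a trivial sketch. The per-drive figures are the spec numbers quoted in this post, except 5.1 W, which is an assumed typical idle for a 3.5" 5400 rpm drive:

```python
# Idle-power totals for the candidate layouts, ignoring controller overhead.
# 5.1 W per 3.5" 5400 rpm drive is an assumption; 2.95 W is the quoted idle
# for the 2.5" 7200 rpm Constellation.

def array_idle_watts(drive_count: int, idle_watts_per_drive: float) -> float:
    """Total idle draw of the whole array."""
    return drive_count * idle_watts_per_drive

three_five_inch = array_idle_watts(5, 5.1)   # about 25.5 W for five 3.5" drives
two_five_inch = array_idle_watts(8, 2.95)    # 23.6 W for eight 2.5" 7200 rpm drives
```

Even with three more spindles, the 2.5" 7200 rpm layout comes in under the 3.5" option at idle.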

The Seagate is obviously more expensive, but if you'd like the performance it may be a viable option (you can buy it here).

Otherwise the Samsung will use less power and we know it's not purposely crippled for RAID setups whereas the Western Digital _*may*_ be crippled (TLER).

*Sources*
Western Digital 2.5" 1TB 5400rpm Scorpio Blue
	Read/Write	1.4 Watts
	Idle	0.59 Watts
	Standby	0.18 Watts
	Sleep	0.18 Watts

Seagate 2.5" 1TB 7200rpm Constellation
	Idle	2.95 Watts
	Read/Write	3.84 Watts

Samsung 1TB 5400rpm Spinpoint
	Read/Write	2.2 Watts
	Idle	0.7 Watts
	Standby	0.2 Watts

Samsung 2TB 5400rpm F4 review


----------



## FordGT90Concept (May 9, 2012)

Constellation is an enterprise 7200 RPM drive, the rest are consumer 5400 RPM drives.  Different categories.


----------



## theeldest (May 9, 2012)

FordGT90Concept said:


> Constellation is an enterprise 7200 RPM drive, the rest are consumer 5400 RPM drives.  Different categories.



Added a note on 15mm z-height for the constellation.


----------



## Disparia (May 9, 2012)

Sasqui said:


> Do the WD Blue laptop drives support RAID?  I've read plenty of complaints that the desktop Black versions don't...



"RAID Support Yes/No" doesn't exist, or at least I don't believe it's as cut and dry as others make it out to be.

WD recommends Greens/Blues/Blacks for RAID _with limitations_. Currently those limitations are:

- Used in consumer RAID solutions (ICHxR, SBxxx, some dedicated, some software)
- RAID-0 or RAID-1 only.
- No more than two drives in an array.

There is an exception to the last two. If you're a system builder which provides support to the end-user, you may use these drives in larger and more complex RAID arrays. For all other situations WD recommends their enterprise-level drives.

Being my own system builder, I test and support my own arrays


----------



## FordGT90Concept (May 9, 2012)

Any kind of drive can be used in any kind of RAID.  RAID is a controller technology, not a drive technology.

Those exceptions are listed because they don't want people complaining to them about frequent hard drive failures in a RAID.  Remember that all RAIDs have their limitations on number of failures.  Exceed that number, all data is lost.


----------



## theeldest (May 9, 2012)

FordGT90Concept said:


> Any kind of drive can be used in any kind of RAID.  RAID is a controller technology, not a drive technology.
> 
> Those exceptions are listed because they don't want people complaining to them about frequent hard drive failures in a RAID.  Remember that all RAIDs have their limitations on number of failures.  Exceed that number, all data is lost.



There's a specific issue with Western Digital non-RE series drives and support for TLER. When the time to recover from an error is too long, the drive appears unresponsive to the controller and gets dropped from the RAID.

In the past, TLER could be enabled on all of Western Digital's drives. But to move RAID users to the RE series they imposed a block on enabling TLER on desktop drives.

So, yes, there are limitations to which drives can be used with which controllers. Some controllers can get around this problem. I am unfamiliar with the controller the OP mentioned.
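A toy sketch of that interaction. All timeout numbers here are assumptions for illustration: hardware controllers are often said to wait on the order of 8 seconds, TLER caps the drive's internal recovery around 7 seconds, and a desktop drive without TLER may retry a bad sector for a minute or more:

```python
# Illustrative sketch of the TLER interaction described above; the timeout
# values are assumptions, not specs for any particular controller.

CONTROLLER_TIMEOUT_S = 8.0  # assumed controller patience before dropping a drive

def drive_stays_in_array(error_recovery_s: float) -> bool:
    """True if the drive reports back before the controller drops it."""
    return error_recovery_s <= CONTROLLER_TIMEOUT_S
```

Under these assumptions a TLER-capped recovery (~7 s) stays in the array, while a deep desktop-style retry (60 s or more) gets the drive marked dead even though it may be perfectly healthy.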


Western Digital
Wikipedia


----------



## Disparia (May 9, 2012)

"Problem" doesn't seem like the right word here. Earlier you _could_ adjust the TLER value, but you didn't have to when using the drive in a consumer RAID situation. They have such long timeouts that if one of my drives actually exceeded it - I'd be glad that it got dropped. It's not a workaround, it's just how it is.

What was happening before the block was IT departments buying dozens to hundreds of Black drives, adjusting the TLER value and using them on controllers with stricter timeouts. To WD, each box (20 drives) of Blacks sold where RE should have been used was a loss of ~$2K. With that kind of savings, it was easy to have spares on hand to cope with the marginally higher failure rate. WD didn't block TLER to squeeze consumers for more coin, they did it so IT would fall in line with their product tiers 

However, I also agree a bit with Ford. Drives are not the only fruit in the RAID salad - there's the BIOS/drivers/etc. Passing support to system builders who will validate their systems was smart.


----------



## FordGT90Concept (May 9, 2012)

theeldest said:


> There's a specific issue with Western Digital non-RE series drives and support for TLER. When the time to recover from an error is too long then the drives will act as if they're unresponsive and be dropped from the RAID.
> 
> In the past, TLER could be enabled on all of Western Digital's drives. But to move RAID users to the RE series they imposed a block on enabling TLER on desktop drives.
> 
> ...


Sounds to me like people shouldn't be buying Western Digital drives then.  Crippling drives just so you have to buy a more expensive model is business conduct that should be shunned.


----------



## theeldest (May 9, 2012)

Jizzler said:


> What was happening before the block was IT departments buying dozens to hundreds of Black drives, adjusting the TLER value and using them on controllers with stricter timeouts. To WD, each box (20 drives) of Blacks sold where RE should have been used was a loss of ~$2K. With that kind of savings, it was easy to have spares on hand to cope with the marginally higher failure rate. WD didn't block TLER to squeeze consumers for more coin, they did it so IT would fall in line with their product tiers





FordGT90Concept said:


> Sounds to me like people shouldn't be buying Western Digital drives then.  Crippling drives just so you have to buy a more expensive model is business conduct that should be shunned.



I'm actually OK with companies artificially differentiating their products. Take i7 vs. i5 and Hyper-Threading, or K-series vs. locked.

It generally helps to drive prices apart. If you don't need the features, you pay less. If you do, you pay more.


The reason I switched from WD to Samsung in my fileserver is the lack of 'enterprise' quality 5400rpm drives from Western Digital. Average file size on my server is 4GB. I don't think I really need blistering access times if I can save the power.

Heck, only Seagate goes down to 7.2k rpm in the 15mm form factor. Everyone else is 10k or 15k.

Where are the power efficient 'enterprise' drives? 


But back to the OP:
Do your enclosures need actual laptop-sized drives? Is performance important?

I think the best option is the Seagate Constellation if performance is important and your enclosure can handle the 15mm drives.

Otherwise the best option is the cheapest non-WD drive.


----------



## FordGT90Concept (May 9, 2012)

Enterprise drives have more stringent quality controls that result in a longer average lifespan (substantially longer, usually).  That is differentiation enough for everyone except Western Digital, apparently. 

Take your i7 vs. i5 example, for instance: Intel's quality control process results in binning. The chips that operate the most flawlessly end up as i7, while those that operate at less than optimal end up as i5.

Western Digital is taking a perfectly fine product and intentionally breaking a function of it so it won't work with industry standards.  That's damn near criminal.




theeldest said:


> Where are the power efficient 'enterprise' drives?


Hybrid or solid state.  Enterprise drives are designed for both performance and longevity.  5400 RPM doesn't fit in that first category.


----------



## Aquinus (May 10, 2012)

FordGT90Concept said:


> Western Digital is taking a perfectly fine product and intentionally breaking a function of it so it won't work with industry standards. That's damn near criminal.



Stop making those claims unless you've actually used a Western Digital RE4, an *enterprise grade disk designed for RAID*, or the Caviar Blue, which *has TLER*. I think you should learn a little bit about WD's offerings before you start making such claims. If you take a look at their *consumer grade hard drives*, you will see that it basically breaks down to this (this is off their website).

WD Caviar Blue
Performance and reliability for everyday computing. 

WD Caviar Green
High-capacity, cool, quiet, eco-friendly storage. 

WD Caviar Black
Maximum performance for power computing.

Okay, that's pretty simple and straightforward, right?

Now SATA enterprise offering:

WD RE4
Massive capacity, 64 MB cache, 1.2 million hours MTBF, and a 5-year limited warranty, WD RE4 drives offer an ideal combination of high capacity, optimum performance, and *24x7 reliability for enterprise applications*.

I think that is the key phrase you're looking for. I turned off Patrol Read on my RAID-5 and it's running like a dream, with 1 Hitachi drive, 1 WD Black from 2012, and 1 WD Green from 2008. I don't think you have to worry about it too much; honestly, I've had nothing but good luck with WD.

Western Digital got an RMA drive from work yesterday afternoon (it was a WD Green from 2009); they had the replacement out and shipped the next morning. If that isn't service, I don't know what is.

Thanks for dealing with my rant; I just had to defend WD, as they're a great company with great customer service and plenty of reliable drives. We have 12 Caviar Blacks at work from 2007; they're all still running and haven't skipped a beat. They've been running 24/7 since we got the servers.




theeldest said:


> Where are the power efficient 'enterprise' drives?



I'm not convinced that "Enterprise" and "power efficient" belong in the same phrase. In all honesty, when I want to copy a database, I don't care if it's eating 5 watts or 15 watts, I just want the database to copy as fast as it can as reliably as it can.


----------



## Albuquerque (May 10, 2012)

Nice, I love the conversation that has ensued since last I checked 

The array in question is pure data storage; a few video streams _might_ come out of it simultaneously, but that's going to be rare.  A single one of these drives could maintain two 1080p MKV datastreams without issue; eight of these spindles in a RAID6 config are going to far exceed my usage pattern (in terms of overall performance).

That data is not life-critical, but it is data that I really do not want to lose. We're talking pictures and video of my child's first moments in this life, electronic copies of financial or legal documents  (we have paper copies too, also in a safe), and other such detail.  Thus, RAID6 is where I would _like_ to be.  We also have some cloud storage that will maintain the most 'critical' of data, but I really don't need my array taking a huge dump and taking all my 'slightly less than critical' data with it.

So, it goes like this: I need a moderate to high amount of data integrity, I need low to very low power draw at idle (the overwhelming majority of this box's life will be at idle), I want a reasonable amount of storage space, I want it to take up a tiny amount of physical space (HTPC size) and I do not require a significant amount of speed.  RAID6 on laptop drives seems to fit most of that, so long as I don't get owned in the face by RAID compatibility issues (TLER being obvious.)

I think that should clear up most of the questions


----------



## FordGT90Concept (May 10, 2012)

Aquinus said:


> Stop making those claims unless you've actually used a Western Digital RE4


My server has an RE3 replacing one of the Barracuda ES drives that failed.

My computer has two non-ES Barracuda drives that have been in RAID0 for over 7 years.  My server has two non-ES drives that have been in RAID1 for 5 years.

I buy nothing less than Caviar Black from WD and I have one in my server external backup.


@Albuquerque: Hot-spares are very important.  If you are doing 8 in your RAID6, that doesn't leave any room for a hot-spare.  The controller supports it.


----------



## Albuquerque (May 10, 2012)

RAID6 can lose two drives before I lose data; I'm not sure that I would want an additional hot spare on top of that.  I already have another 1TB drive at home that I could immediately swap in on a single drive failure; if the stress involved in the rebuild takes down the entire array, well, then I'm screwed, aren't I?  A hot spare wouldn't save me from that anyway, though.
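For what it's worth, the capacity/tolerance trade being weighed here is simple to state; a tiny sketch of the generic RAID6 math (not specific to the Highpoint card):

```python
# RAID6 dedicates two drives' worth of space to parity, in exchange for
# surviving any two simultaneous drive failures.

RAID6_PARITY_DRIVES = 2  # also the number of failures the array survives

def raid6_usable_tb(drives: int, size_tb: float) -> float:
    """Usable capacity of a RAID6 array, ignoring formatting overhead."""
    return (drives - RAID6_PARITY_DRIVES) * size_tb
```

Eight 1TB laptop drives give 6TB usable, with any two allowed to die before data is at risk.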


----------



## FordGT90Concept (May 10, 2012)

I have a similar Highpoint RAID card.  It keeps track of the SMART data, and if a drive is marked by SMART as a failure, it automatically replaces the drive with the hot spare.  Once the hot spare has been initialized, it sounds an audio alarm.  Look at the errors in the management software and it will tell you the model/serial number of the failed drive.  Yank it out and, in its place, put in another hot spare.

Hot spares basically mean the window in which the RAID is compromised is limited to the time it takes to initialize the spare (likely less than an hour).

You're right that you don't need it.  In fact, I would probably put those resources (spent on spares) into an external backup should the volume catastrophically fail.


----------



## Albuquerque (May 10, 2012)

I could probably just build a messaging system in Windows task scheduler to generate an email event when the system event log data is captured from the Highpoint controller -- if it exposes such an event in the system event log.

I know the controller itself also has the ability to do messaging; I had already considered looking into that anyway.


----------



## theeldest (May 10, 2012)

To me: RAID 6 = RAID 5 with an active hot spare. 

Given your use, the Samsung you mentioned in the original post is probably the best bet (lower power draw than the Western Digital, AND you don't need to worry about the WD TLER issue; win-win).

I've got 5400 rpm Samsung drives (though they're 3.5") in my server (backups and XBMC) and they give no problem for video streams. Given the larger number of data drives (your 6 vs my 3) you'll certainly have no problem.


*And @Aquinus:*
Not all enterprise activities really need performance. Offhand, I can think of email archiving and video surveillance systems. For both of these you want the most storage per year per dollar, i.e. you'd pay a bit more for higher-efficiency drives.

I think we'll go that direction as CIOs/IT Directors/etc are becoming more conscious of the cost of running things in addition to cost of buying equipment. 

Heck, the big server OEMs are putting 80 Plus Titanium PSUs in systems this year. Large organizations stand to save quite a bit of money by cutting OpEx.

Until solid state tech becomes competitively priced per gigabyte we'll continue to see tiered storage. (SSD -> 15k/10k SAS -> 7.2k -> 5.4k?). It's not too far out of the question to think manufacturers would add that last step.


----------



## Albuquerque (May 10, 2012)

To that tiered data point -- my RAID1 array for the OS and apps will be using the Z77 chipset, of which I'll be leveraging the SSD cache option.  A lot of the build-your-own-hybrid approaches are making a LOT of sense now, both for home use and enterprise.


----------



## theeldest (May 10, 2012)

Albuquerque said:


> To that tiered data point -- my RAID1 array for the OS and apps will be using the Z77 chipset, of which I'll be leveraging the SSD cache option.  A lot of the build-your-own-hybrid approaches are making a LOT of sense now, both for home use and enterprise.



The Intel SSD caching does actually work really well. I'm using it to accelerate a RAID 10 in my desktop. I can't imagine that Z68 vs Z77 really makes much difference.

The only thing to consider is Maximized vs Enhanced (the two caching options). The difference is whether writes are cached (maximized DOES cache writes).

For what you're doing, it's a toss-up whether one would be better than the other. Your cache will have lower throughput than the array for sequential writes and would thus slow it down. But the caching algorithm already takes large files into account and will not cache them (so if you're copying a 1GB+ file, it'll skip the cache, even if you're using Maximized mode).

Just FYI.
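That Enhanced-vs-Maximized decision can be summed up in a small sketch. The 1GB sequential-bypass cutoff is an assumption taken from the example above, not a documented Intel SRT threshold:

```python
# Enhanced mode is write-through (writes always go straight to the array);
# Maximized is write-back (writes are cached), except that large sequential
# transfers are assumed to bypass the SSD cache entirely.

LARGE_SEQUENTIAL_BYTES = 1 * 1024**3  # assumed bypass threshold (~1 GB)

def write_goes_to_ssd_cache(mode: str, transfer_bytes: int) -> bool:
    if mode == "enhanced":
        return False  # write-through: the SSD never absorbs writes
    if mode == "maximized":
        return transfer_bytes < LARGE_SEQUENTIAL_BYTES
    raise ValueError(f"unknown caching mode: {mode}")
```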


----------



## Albuquerque (May 10, 2012)

I haven't decided which caching mode I'm going to use yet.  Maximized is certainly an option as the box plugs into a UPS and I have the software configured to properly acknowledge power issues and shut the box off gracefully during extended blackouts.

As for performance: I have the OS + Apps array (dual 320GB WD Scorpio Blacks in RAID1) and the Hyper-V array (dual 750GB WD Scorpio Blacks in RAID1) already running right now in the new box.  So far, the OS performance is fantastic even without the caching.  The whole system goes from power-button-click to CTRL+ALT+DEL screen in about 15 seconds.  Given what I'm seeing right now, it may be more advantageous to enable the caching on the Hyper-V array to get all the VMs booted quickly.

The obvious solution is to tinker with it


----------



## theeldest (May 10, 2012)

Albuquerque said:


> I haven't decided which caching mode I'm going to use yet.  Maximized is certainly an option as the box plugs into a UPS and I have the software configured to properly acknowledge power issues and shut the box off gracefully during extended blackouts.
> 
> As for performance: I have the OS + Apps array (dual 320GB WD Scorpio Blacks in RAID1) and the Hyper-V array (dual 750GB WD Scorpio Blacks in RAID1) already running right now in the new box.  So far, the OS performance is fantastic even without the caching.  The whole system goes from power-button-click to CTRL+ALT+DEL screen in about 15 seconds.  Given what I'm seeing right now, it may be more advantageous to enable the caching on the Hyper-V array to get all the VM's booted quickly.
> 
> The obvious solution is to tinker with it



In any event, sounds like you have some fun times ahead.

Be sure to give us an update when you get everything settled on!


----------



## Albuquerque (May 10, 2012)

Will do.  No matter which drives I use, I'll pummel them with way too many writes over the course of several days to ensure one (or more) don't go astray due to TLER or other issues.  Once I feel like I've proven them sufficiently stable, I'll start building the guest OS instances and can really get into the performance tuning.

One thing I did right: the whole box at idle (the four drives and two 120mm fans spinning, but the KVM disconnected) consumes 32W at the wall according to my Kill-a-Watt E3.  It's an MSI Z77 board, an i5-3570K, 16GB of RAM, an LG Blu-ray drive and the Highpoint RAID card with no drives attached, all connected to a PC Power and Cooling 500W 80+ Bronze modular unit.  I even have Aero Glass and audio enabled on my Win2K8 R2 OS, which I'm sure adds to the tally just a tad.

Not bad


----------



## theeldest (May 12, 2012)

Albuquerque said:


> Will do.  No matter which drives I use, I'll pummel them with way too many writes over the course of several days to ensure one (or more) don't go astray due to TLER or other issues.  Once I feel like I've proven them sufficiently stable, I'll start building the guest OS instances and can really get into the performance tuning.
> 
> One thing I did right: the whole box at idle (the four drives and two 120mm fans spinning, but the KVM disconnected) consumes 32W at the wall according to my Kill-a-Watt E3.  It's an MSI Z77 board, an i5-3570K, 16GB of RAM, an LG Blu-ray drive and the Highpoint RAID card with no drives attached, all connected to a PC Power and Cooling 500W 80+ Bronze modular unit.  I even have Aero Glass and audio enabled on my Win2K8 R2 OS, which I'm sure adds to the tally just a tad.
> 
> Not bad




That's pretty awesomely low power draw. *What case are you using for this?*

I'm considering upgrading my XBMC/fileserver to get lower power draw and you've given me some good ideas. I'd also like a smaller case (preferably an HTPC case but they're too wide for where I'd like to put it--I have 13"x13")


----------



## Albuquerque (May 14, 2012)

I'm using this case:  http://www.silverstonetek.com/product.php?pid=314

I ordered the eight 1TB WD drives on Thursday via Amazon Prime; it turns out that 2-day free shipping also includes Saturday!  I have all eight wired and connected; I started the RAID initialization on Sunday and it finished late last night.  I loaded up WHSv2 on a Hyper-V instance with the OS as a VHD file on the OS+App array for now (still missing one 750GB drive for the virtual machines, should arrive tomorrow) and the RAID6 data array direct-linked through the VM.

I loaded a 100GB test file onto the single 750GB drive and have a robocopy process that has been continually re-writing that file all over the RAID6 array since about 6:30am PST this morning.  It's noon now, and it has yet to toss a drive after six hours of continuous writes...  So far, TLER on laptop drives appears to be a no-show, or at least it's not impacting the Highpoint 2720's ability to do RAID6 correctly.

I can't see exactly how many passes have run (I didn't think to write that into a log), so I cannot say how fast the array is.  I can tell from Resource Monitor that the speed limit is the read rate of the 750GB drive; the RAID6 array is apparently capable of more than ~110MB/sec sustained writes.  I'll do some ATTO or something similar when I get home and post some pics.
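For anyone repeating this kind of soak test, here's a hypothetical Python equivalent of the robocopy loop that also keeps the per-pass log I skipped (the paths are placeholders):

```python
# Minimal soak-test loop: rewrite one large test file over the array
# repeatedly, recording how long each pass takes so sustained write
# throughput can be read back later.

import os
import shutil
import time

def soak(src: str, dst: str, passes: int) -> list[float]:
    """Copy src over dst `passes` times; return MB/s for each pass."""
    rates = []
    for i in range(passes):
        t0 = time.monotonic()
        shutil.copyfile(src, dst)  # rewrite the target in place
        elapsed = time.monotonic() - t0
        mb = os.path.getsize(dst) / 1e6
        rates.append(mb / max(elapsed, 1e-9))
        print(f"pass {i + 1}: {rates[-1]:.1f} MB/s")
    return rates
```

Pointing `src` at the 100GB test file and `dst` at the RAID6 volume would approximate the robocopy run, with a throughput figure per pass instead of guesswork afterward.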

It appears that, with the Highpoint card loaded up and eight drives stacked in, the box is going to idle right around the 40W mark.  Loading up WHSv2 (from an ISO file) onto the HyperV instance "peaked" the box out at around 46W.  Before I left the house, the E3 device was recording around 43W running the continuous copy from 750GB drive to the RAID array.

I should've bought a smaller PSU


----------



## theeldest (May 14, 2012)

Albuquerque said:


> I'm using this case:  http://www.silverstonetek.com/product.php?pid=314



That's a FINE looking case. I might have an upgrade in my future ...



> So far, TLER on laptop drives appears to be a no-show, or at least it's not impacting the Highpoint 2720's ability to do RAID6 correctly.



Good news gets better!



> I'll do some ATTO or something similar when I get home and post some pics.



For disk benching head over to this thread: http://www.techpowerup.com/forums/showthread.php?t=166005



> I should've bought a smaller PSU



What size did you use?


----------



## Albuquerque (May 14, 2012)

theeldest said:


> That's a FINE looking case. I might have an upgrade in my future ...


I put a 3.5" -> dual 2.5" converter in every available 3.5" bay for a total of ten 2.5" drives, and then a 5.25" -> 3.5" -> dual 2.5" converter below the BDRW drive for another pair.  This gives me a total of twelve 2.5" slots, which is an immense amount of storage space for such a small chassis.  And it's DEAD quiet, even with the included fans at 100% speed.



theeldest said:


> What size did you use?


PC Power and Cooling 500W 80+ Bronze modular unit: http://www.amazon.com/dp/B0064XAJ30/?tag=tec06d-20

It got excellent reviews on jonnyGURU for efficiency and power regulation.  And the rebate made it cheap.  Modular cables also keep the case clean; I'll take some pics of the inside.


----------



## Albuquerque (May 16, 2012)

eldest, I couldn't get your IOMeter to stop complaining about sector size.  I actually can't spend much time on it tonight, as I have a cranky 7-month-old and a tired wife to take care of.  I did snap some pics and ATTO benches, though:

The E3 meter showing 35W at idle:

The innards:

ATTO run from inside the WHS virtual machine (so the array volume is "passed through" Hyper-V, which gives interesting and slightly odd results)

ATTO run from the host operating system, so it produces a far more reliable / stable result

And snapshots of the configuration pages of the Highpoint 2720SGL array controller









Stats on the rig:
MSI Z77-GD45 board
i5-3570k processor (default settings)
Silverstone PS07B case
PC Power and Cooling 500W 80+ Bronze MKIII modular power supply
LG 12x BDRW/DVDRW
2 x 8GB Mushkin Value RAM DDR3-1333 (undervolted to 1.35V)
2 x WD Scorpio Black 7200RPM 320GB drives on the ICH10-R RAID1 for OS+Apps
1 x OCZ Agility 3 240GB SSD for acceleration of the OS+Apps volume and scratch space
8 x WD Scorpio Blue 5400RPM 1TB drives on the Highpoint 2720SGL RAID6 for data
1 x WD Scorpio Black 7200RPM 750GB drive on the ICH10-R for VHD storage (second drive hasn't arrived yet to build the RAID1 array)

I think that about covers it.


----------



## Albuquerque (May 17, 2012)

Aww, nobody wanted a low-power home server?    I thought it came out well; it's especially worth noting that the WD drives on the RAID6 array haven't committed suicide either.


----------



## Albuquerque (Oct 17, 2012)

It's been almost six months since I posted my last update in this thread; I figured someone might stumble onto it while researching and want to know how it _really_ turned out.

A few months ago, Highpoint released an updated web interface utility that can now show the SMART status of all the drives connected to the controller.  It just so happens that two of the drives have reported SMART correctable read faults over the months. Despite those errors, the array has never once gone into a "degraded" or "failed" state. This points either to favorable TLER behavior in the WD Scorpio Blue laptop drives, or to a very "forgiving" Highpoint RAID card.
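For anyone wondering why TLER keeps coming up in this thread, here's a toy sketch of the failure mode it prevents. The timeout numbers are assumptions for illustration only, not anything published by Highpoint or WD:

```python
# Toy model (not real firmware): a RAID controller only waits so long for a
# drive to answer a read before declaring the whole drive dead. TLER caps how
# long the drive spends retrying a bad sector, so it reports the error in time
# and the controller simply rebuilds that sector from parity instead.
CONTROLLER_TIMEOUT_S = 9.0  # assumed controller patience, varies by card

def drive_survives(recovery_time_s: float) -> bool:
    """True if the drive reports its read error before the controller gives up."""
    return recovery_time_s < CONTROLLER_TIMEOUT_S

print(drive_survives(7.0))   # TLER-style capped retries: drive stays in the array
print(drive_survives(60.0))  # desktop-style deep recovery: drive gets dropped
```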

Both errors were logged, but both were corrected without any array state change. In fact, the array currently has an uptime of over 130 days, during which it has faithfully served as the data volume for my WHSv2 virtual machine. That means it has downloaded and played back countless gigabytes of video and audio files in wildly varying formats and bitrates, along with 130 nights' worth of backups from the household computers with the WHSv2 client installed. It also serves as the file target for backups of the host server itself.

So, after six months of full operation, I'm comfortable claiming that the Highpoint 2720SGL SAS RAID controller is 99.999% compatible with the WD Scorpio Blue 1TB laptop drives in pretty much any RAID configuration that makes sense. No TLER fallout, no drives dropped from the array, and smooth sailing even when SMART reports issues.

Anyone considering a low power, low profile, inexpensive and large data volume set should at least consider this as an option IMO.


----------



## snabe (Jan 16, 2013)

*thanks*

just registered on the forums to thank you for sharing your experience
found this thread looking for the same idea: safe storage + low power + low cost
the rocketraid 2720sgl still has no rivals in terms of price/performance
i decided to go with 8x hitachi travelstars because they have a 3-year warranty
and are a bit cheaper too, with sata3 as a bonus
the disks are installed in a supermicro CSE-M28SAB enclosure:

http://www.supermicro.com/a_images/products/Accessories/CSE-M28SAB-OEM.jpg

unbeatable small form factor, but pricey and not really compatible with highpoint's enclosure management (the overheat/fan-fail LED does not work despite using supermicro sas cables with sideband)

it was a pain to get the controller to work on my msi k9nd board, but the advantage of
a fake-raid controller is that you don't need the controller's bios to configure it (option rom issue)
i flashed the firmware via pcie hot plug (booted to dos without the controller, plugged it in live and flashed firmware 1.5)
works like a charm
configuration via the web tool: 8x 1tb in raid6, 4k sectors and 64kb stripes, write-back and disk cache on
windows auto-aligned the volume at 128mb or something, seems ok
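As a sanity check on that layout, the back-of-the-envelope geometry works out like this (plain arithmetic, nothing controller-specific):

```python
# RAID6 geometry for 8x 1TB drives with 64 KiB stripes, as configured above.
n_drives = 8
drive_tb = 1
stripe_kib = 64

data_drives = n_drives - 2                  # RAID6 spends two drives' worth on parity
usable_tb = data_drives * drive_tb          # capacity left for actual data
full_stripe_kib = data_drives * stripe_kib  # size of one full-stripe write

print(usable_tb, full_stripe_kib)  # 6 TB usable, 384 KiB per full stripe
```

Writes that land as full 384 KiB stripes avoid the RAID6 read-modify-write penalty, which is part of why large sequential transfers look so much better than small ones on these cards.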

performance was really good with an empty array: nearly 600MB/s read and 500MB/s write 
not bad for a 'hardware assisted' raid and cheap laptop drives

but here are some real-world scores with the array 50% filled:

http://img835.imageshack.us/img835/4651/attow2k8r2arrayfilled.jpg

also noticed write performance decreased a bit after switching from w2k3 x64 to w2k8r2, despite the win7 / server 2008 driver being newer
and the array gets slower with more data on it, as expected

seems the (cpu-dependent) performance is suffering from the processor's power saving features
write speeds go down if i use the balanced power profile in windows

when i bought the drives (4 each from different distributors) i expected at least one drive with quality issues...
but no, all 8 disks have run without a single smart event in 3 months
maybe the hitachis were a good choice


i'm sorry for necroing this thread but i hope some more people with low power/low cost arrays will report here...
i'm also curious about your experience with the wd disks

greets from germany


----------



## Albuquerque (Oct 18, 2013)

Sooo, it's been a year.  Guess what?  One of my server drives died.

But guess what else: it wasn't one of the RAID6 drives   I have two other RAID1 volumes on that server, one for the OS + apps and another for all the virtual machine files.  One of the drives holding the virtual machine files is the one that died; the horrible click of death ultimately did it in.

Funnily enough, it was a WD Black drive too.  Meh.  The cheaper, certainly-not-RAIDable WD Blue drives are still running strong as ever, quietly and efficiently taking care of business.

I'm replacing the dead drive with a new pair of HGST 1TB 7200rpm drives instead, just because I need the extra room.


----------

