# Best Raid Controller Card?



## fraya713 (Jan 23, 2014)

Looking to replace my motherboard RAID config with a standalone dedicated RAID controller card

Currently setup a RAID 10 with 4 of these guys:
http://www.amazon.com/dp/B00691WMJG/?tag=tec06d-20

What are your thoughts? I definitely think I can get better performance.

Thanks all!


----------



## Steevo (Jan 23, 2014)

How much do you want to spend? There are some cards out that cost over a grand, some that cost $20, why do you want to use those drives specifically, do you have available PCIe slots?


----------



## fraya713 (Jan 23, 2014)

Steevo said:


> How much do you want to spend? There are some cards out that cost over a grand, some that cost $20, why do you want to use those drives specifically, do you have available PCIe slots?


I already bought the drives, so it's just what I am using.
I'm not looking to spend anything over the $500ish range if I can help it.

It's a personal PC that I use to game a lot and also have some work that I do on it.
Just looking to get the RAID speed without it impacting my processing performance if I can help it.

Thanks again!


----------



## Mindweaver (Jan 23, 2014)

I would not use those drives in a RAID 10 array. I would use an enterprise drive with that array. I think you'll only run into problems using your current drives.


----------



## Easy Rhino (Jan 23, 2014)

If you care about data integrity you won't use a RAID card with those hybrid drives. RAID works by writing data across several physical disks. Hybrid drives work by storing heavily used data in cache. Your RAID card will not recognize the differing data stored across the array in the cache, leading to corruption and possibly a file system disaster.


----------



## Mindweaver (Jan 23, 2014)

Easy Rhino said:


> If you care about data integrity you won't use a raid card with those hybrid drives. RAID works by writing data across several physical discs. Hybrid drives work by storing heavily used data in cache. Your raid card will not recognize that differing data stored across the array in the cache leading to corruption and possibly a file system disaster.


Well said!


----------



## Steevo (Jan 23, 2014)

As long as things like NCQ are turned off on the controller, and you are aware that the "SSD" portion of the drives will wear out faster than intended due to the cache policy of the RAID card vs. the cache policy of the disks, something like this should work:

http://www.newegg.com/Product/Product.aspx?Item=N82E16816115059

This, plus the correct battery backup for the card, plus making sure to use a good UPS for the system with low-power shutdown, and there should be few issues. You might saturate the SATA II bus on the card, but with the cache and each drive's throughput you will only notice a difference if you are running benchmarks, and perhaps a second on Windows boot or other disk-intensive pure write or read activity. But if you were looking for performance there, you should have gone a different direction with the disks/storage anyway.


----------



## Aquinus (Jan 23, 2014)

I agree with others here, don't go with hybrid drives in a RAID setup. That's only asking for trouble.

Personally, I like LSI cards. We have the 4i variant of this one in one of our servers at work and it's our best performing RAID card (in RAID-6, with 4 drives versus some of our 6-disk arrays with 3Ware/Adaptec cards.)

I always get at least WD Blacks for RAID. I would recommend WD RE series drives or Seagate Constellation ES drives if you're "getting serious" about SATA RAID.

Edit: Are you still using the rig on your account or is that old? What are you putting this into?


----------



## fraya713 (Jan 24, 2014)

Aquinus said:


> I agree with others here, don't go with hybrid drives in a RAID setup. That's only asking for trouble.
> 
> Personally, I like LSI cards. We have the 4i variant of this one in one of our servers at work and it's our best performing RAID card (in RAID-6, with 4 drives versus some of our 6-disk arrays with 3Ware/Adaptec cards.)
> 
> ...




No, those specs are really old. I'm certainly not talking enterprise or corporate-level drive redundancy here, just a simple RAID setup that isn't over the top in price and where I'm not losing performance to my motherboard RAID config. I've confirmed that the SSHDs can be set up in RAID 0 and 1; however, 5, 6, and 10 haven't really been tested (that said, I've been running mine for a good 6-8 months without issue).



> http://h10025.www1.hp.com/ewfrf/wc/document?docname=c03821563&cc=us&dlc=en&lc=en#N122
> *Can I use SSHD with RAID?*
> Yes you can, with excellent results, use RAID 0 and RAID 1 configurations. An SSHD in RAID 0, as compared to a single-drive configuration, delivers performance as much as 70% higher.
> Standard drives in RAID configuration should not be used with SSHD drives as this will nullify the benefits of using a hybrid drive.
> Because current drives are not designed for multi-drive vibrations, they have not been tested with RAID 5, 6, or 10, and are not recommended.




*My PC Specs*
*Operating System:* Microsoft Windows 7 64-bit Ultimate Edition
*Processor:* Intel Core i7-920 Bloomfield 2.66GHz (overclocked to 4.0 GHz) Quad-Core Processor
*Motherboard:* ASUS P6T Deluxe V2 LGA 1366 Intel X58 ATX Intel Motherboard
*Cooling:* Prolimatech Megahalems Rev.B CPU Cooler with Antec 120mm Blue LED Fan + Antec 120mm Blue LED Case Fan (x8) + EVERCOOL 50mm Case Fan (x3) + Antec 200mm top fan
*Memory:* Kingston HyperX 12GB DDR3 SDRAM 1600 Desktop Memory
*Video Card(s):* EVGA ACX Cooler GeForce GTX 780 3GB GDDR5
*Hard Disk(s):* Seagate Momentus XT 750 GB 7200RPM SATA 6Gb/s 32 MB Cache 2.5 Inch Solid State Hybrid Drive (x4) in RAID 10 for 1.36 TB
*CD/DVD Drive:* Pioneer Black SATA Blu-ray Disc/DVD/CD Writer
*LCD/CRT Monitor:* BenQ High Performance Gaming 120hz 27-Inch Screen LED-Lit Monitor
*Case:* Antec Twelve Hundred Black Steel ATX Full Tower Computer Case
*Sound Card:* ASUS Xonar Essence STX Virtual 7.1 Headphone AMP Card
*Power Supply:* EVGA SuperNOVA NEX1500 Classified 1500W 80 PLUS GOLD Certified Modular Power Supply
*Speakers / Headphones:* SENNHEISER PC350 Circumaural Headset
*Card Reader:* AFT PRO-35U All-in-one USB 2.0 Card Reader
*Keyboard:* Logitech G19 USB Gaming Keyboard
*Mouse / Mousepad:* RAZER DeathAdder Black 3500 dpi Mouse
/ RAZER eXactMat and eXactRest
*Other Hardware:* Logitech QuickCam Orbit USB 2.0 WebCam


----------



## Steevo (Jan 24, 2014)

Why RAID 10 and not RAID 5? Just as much redundancy and failure tolerance, but more storage and speed.


----------



## fraya713 (Jan 24, 2014)

Steevo said:


> Why RAID 10 and not RAID 5? Just as much redundancy and failure tolerance, but more storage and speed.


I decided on RAID 10 because I could potentially rebuild my RAID faster if a drive did fail. I also based this decision on never hitting the 1.36 TB of my combined RAID, so more space would've just been simply that... more space. Also, I believe RAID 10 arrays are more reliable and have more integrity when rebuilding against corrupted data, since there are two sources to compare from.

Basically I made the decision based on my needs at the time and so far it hasn't been an issue.

After doing a little reading, I may need to check whether Smart Response Technology is even turned on in my Intel RAID application to use that cache space I've had.

Ultimately, I've had no problems with my disks or my RAID configuration, but a co-worker and I got into a RAID conversation and he brought up the fact that motherboard RAID configs have a processing-performance impact, so I wanted to see if it was worth venturing into a dedicated RAID controller for my PC.


----------



## The Von Matrices (Jan 24, 2014)

Replying directly to the OP: if you're doing RAID 10, you will see negligible performance benefit from using a dedicated controller. The SSHDs will not max out the uplink to the processor, and there is no computation of parity data that a RAID controller could accelerate. The only real advantage of a dedicated card would be that you could move the drives among platforms without having to reformat.



Steevo said:


> Why RAID 10 and not RAID 5? Just as much redundancy and failure tolerance, but more storage and speed.



RAID 5 won't be any faster reading, and it will be much slower writing due to the parity calculations.  The only advantage of RAID 5 is that you would get 3/4 the max capacity versus 1/2 with RAID 10.  Of course, RAID 5 has its own list of horror stories during recovery regarding corrupt parity data, which you do not have with RAID 10.
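The capacity tradeoff being described works out like this; a minimal Python sketch (the function name and TB figures are illustrative only, and "usable" here ignores filesystem/formatting overhead, which is why the OP sees ~1.36 TB rather than 1.5 TB):

```python
def usable_tb(n_drives: int, drive_tb: float, level: str) -> float:
    """Rough usable capacity for common RAID levels."""
    if level == "raid0":
        return n_drives * drive_tb        # pure striping, no redundancy
    if level in ("raid1", "raid10"):
        return n_drives * drive_tb / 2    # every block is mirrored
    if level == "raid5":
        return (n_drives - 1) * drive_tb  # one drive's worth of parity
    if level == "raid6":
        return (n_drives - 2) * drive_tb  # two drives' worth of parity
    raise ValueError(f"unknown level: {level}")

# Four 750 GB (0.75 TB) SSHDs, like the OP's array:
print(usable_tb(4, 0.75, "raid10"))  # 1.5
print(usable_tb(4, 0.75, "raid5"))   # 2.25
```

So with four drives, RAID 5 gives 3/4 of raw capacity versus RAID 10's 1/2, exactly the 3/4-vs-1/2 figure above.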



Easy Rhino said:


> If you care about data integrity you won't use a raid card with those hybrid drives. RAID works by writing data across several physical discs. Hybrid drives work by storing heavily used data in cache. Your raid card will not recognize that differing data stored across the array in the cache leading to corruption and possibly a file system disaster.





Aquinus said:


> I agree with others here, don't go with hybrid drives in a RAID setup. That's only asking for trouble.
> 
> Personally, I like LSI cards. We have the 4i variant of this one in one of our servers at work and it's our best performing RAID card (in RAID-6, with 4 drives versus some of our 6-disk arrays with 3Ware/Adaptec cards.)
> 
> I always get at least WD Blacks for RAID. I would recommend WD RE series drives or Seagate Constellation ES drives if you're "getting serious" about SATA RAID.



The cache on the SSHDs is non-volatile and is managed by the disk's controller.  To the RAID controller, it's no different than a conventional hard drive; I see no reason why the SSHDs would have any less data integrity than conventional hard drives.  The main issue would be with the lack of time limited error recovery causing drives to drop from the array, which is just as much of a problem in the Western Digital Black drives you recommend.


----------



## Steevo (Jan 24, 2014)

I haven't used a controller card that couldn't rebuild on the fly since the Intel Pentium Pro days. RAID5 performance degradation during a live rebuild on a 20-user system was about 25% for 12 hours for me. I could even have scheduled it to run offline that night, but I didn't want to wait or stick around and babysit.



The Von Matrices said:


> RAID 5 won't be any faster reading, and it will be much slower writing due to the parity calculations.  The only advantage of RAID 5 is that you would get 3/4 the max capacity versus 1/2 with RAID 10.  Of course, RAID 5 has its own list of horror stories during recovery regarding corrupt parity data, which you do not have with RAID 10.



Hardware RAID5 is, with real-world data, **close to** as fast as two drives in RAID0 for reads, and only slightly slower in writes. Older cards with slow CPUs, a slower SCSI bus, and multiple drives on the same bus were slower. I have personally tested 4x 3TB drives in multiple configurations, and RAID5 was the safest, highest performance for the dollar.


----------



## fraya713 (Jan 24, 2014)

The Von Matrices said:


> snip


Thanks for that. Ultimately, my main concern was the RAID leeching CPU performance. I game a lot, and though I understand there isn't a lot of data reading while I'm actually in game (besides map loading, etc.), if there is anything impacting my CPU performance I definitely want to try to nip it in the bud.

The question was more academic, and I've learned a lot from this; it seems a RAID controller would really only be ideal and worthwhile in a dedicated SAN/NAS situation for direct redundant backup or file storage.



Steevo said:


> Hardware RAID5 is as fast as two drives in RAID0 for reads, and only slightly slower in writes. Older cards with slow CPU's and slower SCSI bus and multiple drives on the same bus were slower. I have personally tested 4 3TB drives in multiple configurations and RAID5 was the safest, highest performance for the dollar.



Understood, and price comparison was definitely a concern, hence why I went with the SSHDs: they were the best bang for the buck when comparing size and performance. So as far as the drive choice and RAID choice go, it's pretty much explained.


----------



## Steevo (Jan 24, 2014)

http://www.tweaktown.com/reviews/4561/lsi_megaraid_sas_9265_8i_raid_controller_review/index6.html


----------



## newtekie1 (Jan 24, 2014)

fraya713 said:


> Thanks for that, ultimately, my main concern was the raid leaching cpu performance - I game a lot and though I understand there isn't a lot of data reading while I actually am in game (besides map loading, etc), if there is anything impacting my cpu performance I definitely want to try to nip it in the bud. The question was more academic and I've learned a lot from this as it seems a raid controller would really only be ideal and worth while in a designated SAN/NAS situation for direct redundant backup or file storage.



You'd be surprised at how little CPU power is really used by software/firmware RAID controllers. Yeah, they use some, but it is relatively nothing with today's CPUs. It used to be an issue to worry about back in the PIII days, when a software RAID controller might use 25% of a 1GHz processor. But that really only amounts to 250MHz. And with today's more efficient processors and multiple cores, you're talking maybe 100MHz on one of four cores. You'll never notice that.
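The back-of-envelope arithmetic there can be checked directly; a quick sketch (the MHz figures are the post's own rough estimates, and the 2.66 GHz quad-core is an assumption matching the OP's stock i7-920):

```python
# PIII era: software RAID eating ~25% of a 1 GHz CPU.
p3_overhead_mhz = 0.25 * 1000            # 250 MHz equivalent

# Modern quad-core: ~100 MHz of work spread over four 2.66 GHz cores.
modern_overhead = 100 / (4 * 2660)       # fraction of total CPU capacity

print(round(p3_overhead_mhz))            # 250
print(f"{modern_overhead:.1%}")          # 0.9%
```

Under 1% of total CPU, which is the "you'll never notice that" point in numbers.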

At this point, you'd be better off spending that money on a decent-sized SSD for your OS and main programs/games, and using the hard drives as a storage area. If you haven't come close to filling the 1.36TB you have, then that $500 you were willing to spend on a RAID card would be far better spent on a 480GB SSD like this one. http://www.newegg.com/Product/Product.aspx?Item=N82E16820226255


----------



## Easy Rhino (Jan 24, 2014)

fraya713 said:


> No, those specs are really old. I'm certainly not talking enterprise or corporate level drive redundancy here, just a simple RAID setup that isn't over the top in price and where I'm not losing performance based on my motherboard RAID config. I've confirmed that the SSHD's can be setup in RAID 0 and 1, however, 5,6 and 10 haven't really been tested (however, I've been running mine for a good 6-8 months without issue.)



I have no doubt RAID 0 works with SSHDs, but for how long, and how stable? With traditional hard drives in RAID, if you lose power the array remains intact since all information is available. If you lose power with SSHDs, then you lose what is in cache, which apparently is controlled by the HDD controller, which means it has to be stored. Since the cache won't be written in time, and your cache is part of the file system, you could lose your entire file system to corruption, since your controller now has to tell your RAID card what info it has in cache.

I really don't see how this is a good option at all. If you want speed, buy two SSDs and use software RAID 0 to great effect. That would be cheaper than a $500 RAID card, of which you will only be using 25% of the ability.


----------



## yogurt_21 (Jan 24, 2014)

newtekie1 said:


> You'd be surprised at how little CPU power is really used by Software/Firmware RAID controllers. Yeah, they use some, but it is relatively nothing with todays CPUs.  It used to be an issue to worry about back in the PIII days, when a software RAID controller might use 25% of a 1GHz processor.  But that really only amounts to 250MHz.  And with today's more efficient processors and multiple cores, you're talking maybe 100MHz on one of four cores.  You'll never notice that.
> 
> At this point, you'd be better off spending that money on a decent sized SSD for your OS and main programs/games.  And using the hard drives as a storage area.  If you haven't come close to filling the 1.36TB you have, then that $500 you were willing to spend on a RAID card would be far better spent on a 480GB SSD like this one. http://www.newegg.com/Product/Product.aspx?Item=N82E16820226255



This to me makes the most sense with your setup. If you're already using mobo RAID, you're not going to see anything close to a $500 value return by getting a RAID card. But a $300 480GB SSD? You'd get a value return out of that: $200 cheaper, likely faster, while also offering more storage. Also, you wouldn't have to worry about backing up data and re-creating the RAID. You could keep your RAID 10 as-is and simply install Windows on the SSD.


----------



## Useful Idiot (Jan 24, 2014)

Easy Rhino said:


> If you care about data integrity you won't use a raid card with those hybrid drives. RAID works by writing data across several physical discs. Hybrid drives work by storing heavily used data in cache. Your raid card will not recognize that differing data stored across the array in the cache leading to corruption and possibly a file system disaster.


Where exactly are you getting this information? A link, please? This is flat-out misleading and wrong. The SSHD handles the LBA abstraction; it is transparent to the RAID controller.


Mindweaver said:


> Well said!



No, it was not. 



Steevo said:


> As long as things like NCQ are turned off on the controller and you are aware your "ssd" portion of the drives will wear out faster than intended due to the cache policy of the RAID card VS the cache policy of the disks.



So much misinformation. You would not want to turn off NCQ; that would lead to single-QD performance. The drives internally manage the cache of the SSHD, and the cache of the RAID controller will only increase the longevity of the SSHD. The RAID controller cache takes random data and sequentializes it, which is friendlier to NAND.



Aquinus said:


> I agree with others here, don't go with hybrid drives in a RAID setup. That's only asking for trouble.



it will be fine  



The Von Matrices said:


> RAID 5 won't be any faster reading, and it will be much slower writing due to the parity calculations.  The only advantage of RAID 5 is that you would get 3/4 the max capacity versus 1/2 with RAID 10.  Of course, RAID 5 has its own list of horror stories during recovery regarding corrupt parity data, which you do not have with RAID 10.



You might wish to read a primer on RAID 5. RAID 5 produces faster reads (× the number of drives) but typically only writes at the speed of one drive.



newtekie1 said:


> You'd be surprised at how little CPU power is really used by Software/Firmware RAID controllers. Yeah, they use some, but it is relatively nothing with todays CPUs.  It used to be an issue to worry about back in the PIII days, when a software RAID controller might use 25% of a 1GHz processor.  But that really only amounts to 250MHz.  And with today's more efficient processors and multiple cores, you're talking maybe 100MHz on one of four cores.  You'll never notice that.
> 
> At this point, you'd be better off spending that money on a decent sized SSD for your OS and main programs/games.  And using the hard drives as a storage area.  If you haven't come close to filling the 1.36TB you have, then that $500 you were willing to spend on a RAID card would be far better spent on a 480GB SSD like this one. http://www.newegg.com/Product/Product.aspx?Item=N82E16820226255



Agreed. Unless you are pushing 250K+ IOPS, CPU utilization will be negligible. Storage Spaces would allow for concatenation of the two SSHDs, and a fast SSD boot disk is uber. The bad thing about SSHDs: at the end of the day they aren't fast, only 5,400 RPM. No getting around that. Buy an SSD.


----------



## The Von Matrices (Jan 24, 2014)

Easy Rhino said:


> I have no doubt RAID 0 works with SSHDs, but for how long, and how stable? With traditional hard drives in RAID, if you lose power the array remains intact since all information is available. If you lose power with SSHDs, then you lose what is in cache, which apparently is controlled by the HDD controller, which means it has to be stored. Since the cache won't be written in time, and your cache is part of the file system, you could lose your entire file system to corruption, since your controller now has to tell your RAID card what info it has in cache.
> 
> I really don't see how this is a good option at all. If you want speed, buy two SSDs and use software RAID 0 to great effect. That would be cheaper than a $500 RAID card, of which you will only be using 25% of the ability.



I do not understand why you think traditional hard drives are better.  I interpret that you're worried about write-back caching, but the SSHD's NAND write-back cache is non-volatile and won't be lost if the disk loses power.  I think what you're also missing is that the SSHDs he's talking about are completely self-contained.  The RAID controller sees only one volume per SSHD; all the caching is done by the drive's controller and is completely transparent to the RAID controller.

In SSHDs, the only data that can be lost in a power loss is a small amount of data in the controller's SDRAM. However, this is nothing exclusive to SSHDs; _all_ storage media, including pure HDDs and SSDs (except a few SandForce models), have this SDRAM cache. Certain disks can be forced to operate in write-through mode, which greatly improves data security in the event of a power loss, but you likely won't find that feature in anything but enterprise-level RAID controllers and hard drives.



Useful Idiot said:


> Where exactly are you getting this information, a link please? This is flat out misleading and wrong. The SSHD handles LBA abstraction, it is transparent to the RAID controller.
> 
> No, it was not.
> 
> ...



Thank you for helping to clear up the misinformation.



Useful Idiot said:


> You might wish to read a primer on RAID 5. RAID 5 produces faster reads (× the number of drives) but typically only writes at the speed of one drive.



RAID 5 can theoretically write at the speed of an N−1 drive RAID 0; of course, most RAID 5 arrays are constrained by the speed of parity calculations and in effect write much slower than that. However, saying it only writes at the speed of one drive is a vast oversimplification and ignores the varying speeds at which RAID controllers and CPUs can calculate parity.

Modern RAID controllers can compute RAID 5 parity at about 3/4 the speed of RAID 0.
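For reference, the parity being argued over here is a plain XOR across each stripe, which is why dedicated XOR engines on hardware cards speed it up. A toy sketch (hypothetical four-byte blocks, nothing like a real on-disk layout):

```python
from functools import reduce

def xor_parity(blocks: list[bytes]) -> bytes:
    """XOR equal-length blocks together, as RAID 5 does per stripe."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

stripe = [b"AAAA", b"BBBB", b"CCCC"]   # data blocks on three members
parity = xor_parity(stripe)            # stored on the fourth member

# Lose the middle drive; rebuild its block from the survivors plus parity:
rebuilt = xor_parity([stripe[0], stripe[2], parity])
assert rebuilt == stripe[1]
```

Every write must update the parity block too, which is where the write penalty (and the controller's parity throughput) comes from.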

(Attached benchmark images: top, RAID 0 performance; bottom, RAID 5 performance.)


----------



## Useful Idiot (Jan 24, 2014)

The Von Matrices said:


> RAID 5 can theoretically write at the speed of a N-1 drive RAID 0; of course, most RAID 5 arrays are constrained by the speed of parity calculations and in effect write much slower than that.  However, saying it only writes at the speed of one drive is a vast oversimplification and ignores the varying levels of speed that RAID controllers and CPUs can calculate parity.


...hence the inclusion of the word 'typically'. In NAS usage, and with some 'cheaper' RAID controllers and HBAs, this can be a reality due to parity overhead.


----------



## Easy Rhino (Jan 24, 2014)

Well I learned something new today.


----------



## newtekie1 (Jan 24, 2014)

Useful Idiot said:


> You might wish to read a primer on RAID 5. RAID 5 produces faster reads (× the number of drives) but typically only writes at the speed of one drive.



Yep, in fact my RAID5 reads and writes at almost the same speed. It is a constant 120 MB/s+ across the entire array, which maxes out my gigabit network, so I'm cool with that. I think the array is actually limited by the fact that it runs off a PCI-E x1 card; I'm kind of kicking myself for not getting the PCI-E x4 card.



Useful Idiot said:


> Agreed. Unless you are pushing 250K+ IOPS CPU utilization will be negligible. Storage Spaces would allow for concatenation of the two SSHDs, and a fast SSD boot disk is uber. The bad thing about SSHD, at the end of the day they arent fast, only 5,400 RPM. No getting around that. Buy an SSD.



The OP's SSHD drives are actually 7,200 RPM; it is only the latest generation that is 5,400 RPM, and the 3.5" drives are also 7,200 RPM.


----------



## Mindweaver (Jan 24, 2014)

Useful Idiot said:


> No, it was not.



Really, I think not? I would not use those drives for any type of RAID array, because I care about my data. They are not built for that purpose. The money spent on those drives could easily have got him 10k enterprise drives with error recovery. I have used many RAID arrays in a production environment. Now, can he use those drives? Yes, but why? I'm just suggesting that his money could have been spent better. He bought the drives thinking they would be faster because of the hybrid SSD cache, but in reality he is better off getting regular SSDs. You are putting a lot of faith in a hybrid drive that is not built for RAID.



Useful Idiot said:


> You might wish to read a primer on RAID 5. RAID 5 produces faster reads (× the number of drives) but typically only writes at the speed of one drive.



Really? Maybe in a software RAID array, but with a hardware RAID array the write is much faster. Below is one of my RAID5 arrays using 4x 150GB WD Raptor 10k drives and a 3ware 9650SE. Keep in mind it had about 15+ users reading and writing when I performed this test, but without anyone using it, it's closer to 350 MB/s read and 350 MB/s write.


----------



## Steevo (Jan 24, 2014)

Useful Idiot said:


> So much misinformation. You would not want to turn off NCQ, that would lead to single QD performance. The drives internally manage the cache of the SSHD, and the cache of the RAID controller will only increase the longevity of the SSHD. The RAID controller cache takes random data and sequentializes it, which is friendlier to NAND.
> 
> 
> You might wish to read a primer on RAID 5. RAID 5 produces faster reads (× the number of drives) but typically only writes at the speed of one drive.




 

See that little segment that says 8D R5 Fast Path? It's all of 73 slower at maximum than the RAID0 option.


On a multiple-controller-depth-to-physical-medium setup such as this, NCQ can increase the overhead and increase the queuing time in real-world reads and writes. It's the RAID card's job to handle the disks; use the Fast Path setup on these LSI cards. They wouldn't have made it and shown such great numbers if their logic was somehow worse than what the standard is.


Lastly, TRIM will not pass through on these disks with a RAID card, other features Windows implements for SSDs will not pass through either, and since the RAID card will be blissfully unaware there is cache on the hybrid drive, it will treat them as standard mechanical drives and perform standard mechanical drive operations, which will get run through the cache and wear it out ever so slightly faster.


----------



## The Von Matrices (Jan 24, 2014)

As much as we all may argue about what is best, RAID remains a tradeoff between reliability and cost.  There is no "optimal" RAID configuration, since all RAID configurations have a greater than zero chance of losing all data.  What the person building the RAID needs to do is identify what failure rate is acceptable and then design a RAID configuration around that.


----------



## Steevo (Jan 24, 2014)

Compared to even single disks, RAID5, 1, and 10 are the only real redundant and fault-tolerant arrays that minimize risk.


----------



## Useful Idiot (Jan 24, 2014)

Steevo said:


> View attachment 54265
> On a multiple controller depth to physical medium setup such as this NCQ can increase the overhead and increase the queuing time in real world reads and writes. Its the RAID cards job to handle the disks, use the setup for Fast Path on these LSI cards, they wouldn't have made it and show such great numbers if their logic was somehow worse than what the standard is.
> 
> 
> Lastly the TRIM will not pass through on these disks with a RAID card, other features windows implements for SSD's will not pass though, and since the RAID card will be blissfully unaware there is cache on the hybrid drive it will treat them as standard mechanical drives, and perform standard mechanical drive operations, which will get run through the cache and wear it out ever so slightly faster.



The multiple-controller-depth paragraph above, I have no idea what you are saying, to be honest. It reads as gibberish; I have tried to interpret it five times now. Could you possibly post that in English?

TRIM is inconsequential on an SSHD. It doesn't use TRIM commands; it isn't an SSD. You lack a basic understanding of SSHD technology. The internal algorithms promote and demote information to the cache based upon block-level analysis of the data patterns. The data is not directly deleted from the cache by the filesystem, which is responsible for issuing the TRIM command. Thus, no need for the TRIM command.



The Von Matrices said:


> As much as we all may argue about what is best, RAID remains a tradeoff between reliability and cost.  There is no "optimal" RAID configuration, since all RAID configurations have a greater than zero chance of losing all data.  What the person building the RAID needs to do is identify what failure rate is acceptable and then design a RAID configuration around that.



Agreed. Data replication is a must, regardless of the RAID setup. RAID is not backup.



Steevo said:


> Compared to even single disks RAID5, 1, or 10 are the only real redundant and fault tolerant arrays that minimize risk.



So RAID 50, 60, and other nested RAID arrays aren't redundant? Check again.


----------



## Zedicus (Jan 24, 2014)

Hardware RAID on a budget: I recommend the Adaptec 5805. It is not the newest kid in town, but it's more capable than about any home system will require.

The new 6805 would be great also, but it's over double the price, and you will not push enough utilization to see a performance difference. Do not get a 6xxx series with an E in the model; they are HBA-style cards (software).

If you do need an HBA, I would go with a Dell H310 or H200: low-cost, abundant, and capable of doing the job. (HBA = Host Bus Adapter; it's a bridge, not a hardware implementation. But if you wanted to dabble in ZFS or SnapRAID or some software variant, then they are the way to go.)


----------



## newtekie1 (Jan 24, 2014)

Mindweaver said:


> Really I think not? I would not use those drives for any type of RAID array, because I care about my data. They are not built for that purpose. The money spent for those drives could have easily got him enterprise drives with error recovery and are 10k. I have and have used many RAID arrays in a production environment. Now can he use those drive? Yes, but why? I'm just suggesting that his money could have been spent better. He is buying the drives thinking it will be faster because of the Hybrid SSD Cache, but in reality he is better off getting regular SSD's. You are putting a lot of faith in a hybrid drive that is not built for RAID.



Using an SSHD is no worse than using a standard desktop drive. Seagate desktop drives actually seem to work far better in RAID than Western Digital drives in my experience; I believe this is because Seagate doesn't actually disable the error-recovery features while WD does. Yes, for about the same price he could have gone with Constellation drives, but performance would have been worse. The SSD cache on these drives actually does work with RAID, so these drives work very well in RAID.




The Von Matrices said:


> As much as we all may argue about what is best, RAID remains a tradeoff between reliability and cost.  There is no "optimal" RAID configuration, since all RAID configurations have a greater than zero chance of losing all data.  What the person building the RAID needs to do is identify what failure rate is acceptable and then design a RAID configuration around that.



RAID 0 is the only RAID level that increases risk; all the others add redundancy and are less risky than a single hard drive.
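A quick back-of-the-envelope illustrates the point. Assuming independent drive failures and a made-up 5% annual failure rate per drive (purely illustrative, not a real drive spec):

```python
# Probability of losing the array's data within a year, assuming
# independent drive failures at an illustrative 5% annual rate each.
p = 0.05  # assumed per-drive annual failure probability

# Single drive: data lost if the one drive fails.
single = p

# RAID 0 (2 drives): data lost if *either* drive fails.
raid0 = 1 - (1 - p) ** 2

# RAID 1 (2 drives): data lost only if *both* drives fail.
raid1 = p ** 2

print(f"single drive: {single:.4f}")  # 0.0500
print(f"RAID 0:       {raid0:.4f}")  # 0.0975
print(f"RAID 1:       {raid1:.4f}")  # 0.0025
```

RAID 0 roughly doubles the chance of losing everything, while RAID 1 cuts it by a factor of twenty.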


----------



## Mindweaver (Jan 24, 2014)

newtekie1 said:


> Using a SSHD is no more worse than using a standard desktop drive.



I never said it was worse than a standard desktop drive, buddy, and with that said, I wouldn't use a standard drive either. Plus, I just think for the money that was spent he would be better off with a RAID-specific drive over these hybrid drives. I hope these drives work well for the OP, and I'm definitely interested in his results when it's complete. Also, I think we are all trying to help the OP.


----------



## The Von Matrices (Jan 24, 2014)

newtekie1 said:


> RAID 0 is the only RAID level that increases risk; all the others add redundancy and are less risky than a single hard drive.



You're right, but there are endless discussions on the internet as to which RAID level is best, with many people insisting on using the same RAID level for all applications.  My argument is that the RAID level used depends on the application; certain data is more valuable than other data.  It might make sense to use a less redundant RAID level if your data can be easily copied from backup or the internet.  In that case, spending lots of money to ensure a very high level of data security is money wasted.

RAID is a tradeoff of three factors, and you simply cannot design an array that excels in all three factors:

Low cost
High performance
High data security

I value low cost and high data security with no regard for performance, so I use RAID 6 for my home storage.  Someone like the OP, who values high performance and high data security with little regard to cost, might choose RAID 10.  Someone who values low cost and high performance with no care about data integrity might choose RAID 0.
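To put rough numbers on the tradeoff, here is a sketch comparing usable capacity and guaranteed fault tolerance for a hypothetical 4-drive array at each common level:

```python
# Rough comparison of common RAID levels on a 4-drive array:
# usable capacity fraction vs. how many drive failures are always survivable.
n = 4  # drives in the array

levels = {
    # level: (drives' worth of usable capacity, failures always survivable)
    "RAID 0":  (n,      0),  # striping only: all capacity, no redundancy
    "RAID 10": (n / 2,  1),  # mirrored pairs: half capacity; survives 1
                             # failure always, 2 if they hit different pairs
    "RAID 5":  (n - 1,  1),  # single parity: n-1 capacity, survives 1
    "RAID 6":  (n - 2,  2),  # double parity: n-2 capacity, survives 2
}

for level, (usable, survivable) in levels.items():
    print(f"{level:7s} usable: {usable / n:.0%}  survives: {survivable} failure(s)")
```

No row wins on both columns at once, which is the whole point: capacity (cost) and redundancy pull against each other.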


----------



## Steevo (Jan 24, 2014)

Useful Idiot said:


> The multiple controller depth paragraph above i have no idea what you are saying to be honest. It reads as gibberish, I have tried to interpret it five times now. Could you possibly post that in English?
> 
> so raid 50, 60 and other nested RAID arrays aren't redundant? Check again.


Both the disk and the RAID card have caching algorithms for data. In my real-world experience with HDDs featuring 64MB of cache on hardware-based controller cards, NCQ has almost no effect in a RAID 5 array where parity calculation and caching are already in use.
Depending on the existing queue depth and request length, it can have a minimal positive or negative effect.


I am unaware of the caching policies used by SSHDs from different manufacturers. I agree many are probably based on real-world block usage, but many seem to feature real-time caching of files as well, meaning a RAID card flushing back to the disks from its own cache will cause odd reads and writes to the drives' internal SSD cache.


I don't know a single person on this board or others who uses, or would use, nested arrays for a personal system. Way to take it overboard.


----------



## newtekie1 (Jan 25, 2014)

Mindweaver said:


> I never said it was worse than a standard desktop drive, buddy, and with that said, I wouldn't use a standard drive either. Plus, I just think for the money that was spent he would be better off with a RAID-specific drive over these hybrid drives. I hope these drives work well for the OP, and I'm definitely interested in his results when it's complete. Also, I think we are all trying to help the OP.



I never said you said it was worse than a standard desktop drive, _buddy_. My point was that, in my experience, desktop drives from Seagate work just as well as the enterprise drives, and the SSHD drives perform better.




The Von Matrices said:


> You're right, but there are endless discussions on the internet as to which RAID level is best, with many people insisting on using the same RAID level for all applications.  My argument is that the RAID level used depends on the application; certain data is more valuable than other data.  It might make sense to use a less redundant RAID level if your data can be easily copied from backup or the internet.  In that case, spending lots of money to ensure a very high level of data security is money wasted.
> 
> RAID is a tradeoff of three factors, and you simply cannot design an array that excels in all three factors:
> 
> ...



Yes, you certainly can design an array that excels at all three factors.  My RAID5 array was faster than a single drive, more secure than a single drive, and relatively low cost since it used standard desktop drives.


----------



## The Von Matrices (Jan 25, 2014)

newtekie1 said:


> Yes, you certainly can design an array that excels at all three factors.  My RAID5 array was faster than a single drive, more secure than a single drive, and relatively low cost since it used standard desktop drives.



By low cost I meant low cost per capacity, in which case you actually support my original post.  Your RAID 5 is more expensive per capacity and slower than a RAID 0 even if your configuration excels at data security.  RAID is all about trading storage space for redundancy, so it will never be less expensive for the same capacity compared to JBOD or non-redundant RAID 0.  You will never be able to design an array that is better than any other configuration at all three factors simultaneously.
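A worked example makes the cost-per-capacity point concrete. Assume four hypothetical 2TB drives at $100 each (made-up prices, purely illustrative):

```python
# Cost per usable TB for the same four hypothetical 2TB drives at $100 each.
drive_tb, drive_cost, n = 2, 100, 4
total_cost = n * drive_cost  # $400 no matter which level you pick

usable_tb = {
    "RAID 0":  n * drive_tb,         # 8 TB usable: no redundancy
    "RAID 5":  (n - 1) * drive_tb,   # 6 TB usable: one drive's worth of parity
    "RAID 10": (n // 2) * drive_tb,  # 4 TB usable: everything mirrored
}

for level, tb in usable_tb.items():
    print(f"{level:7s} ${total_cost / tb:.2f} per usable TB")
```

RAID 0 comes out cheapest per usable terabyte; the redundancy in RAID 5 and RAID 10 is exactly what drives the cost per capacity up.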


----------



## newtekie1 (Jan 25, 2014)

The Von Matrices said:


> By low cost I meant low cost per capacity, in which case you actually support my original post.  Your RAID 5 is more expensive per capacity and slower than a RAID 0 even if your configuration excels at data security.  RAID is all about trading storage space for redundancy, so it will never be less expensive for the same capacity compared to JBOD or non-redundant RAID 0.  You will never be able to design an array that is better than any other configuration at all three factors simultaneously.


I see what you're saying.


----------



## Mindweaver (Jan 25, 2014)

newtekie1 said:


> I never said you said it was worse than a standard desktop drive, _buddy_. My point was that, in my experience, desktop drives from Seagate work just as well as the enterprise drives, and the SSHD drives perform better.



Oh, I see, buddy. After thinking about it a while, why wouldn't an SSHD be worse than a standard drive? I mean, the SSHD has twice the parts that can fail: the mechanical part and the SSD part. I don't have any experience with SSHDs; will the SSHD still work if the SSD part fails? I know the answer if the mechanical part fails, and that answer is "_No_". This is part of the reason I would never use an SSHD in RAID.


----------



## newtekie1 (Jan 25, 2014)

Mindweaver said:


> Oh, I see, buddy. After thinking about it a while, why wouldn't an SSHD be worse than a standard drive? I mean, the SSHD has twice the parts that can fail: the mechanical part and the SSD part. I don't have any experience with SSHDs; will the SSHD still work if the SSD part fails? I know the answer if the mechanical part fails, and that answer is "_No_". This is part of the reason I would never use an SSHD in RAID.


The SSDs in these drives are extremely simple compared to traditional SSDs.  They are more like an SD card than an SSD, actually.  Normal SSDs have multiple flash chips; if just one goes bad, the whole drive is toast.  The SSDs on these SSHDs are a single chip.  Sure, that's an extra chip on the PCB that could fail, but is it likely?  No.

I'm sure the next question is what happens when the SSD wears out.  It isn't going to wear out in the life of the drive.  Seagate actually uses 16GB flash chips on the drives with 8GB of SSD cache, and 8GB chips on the early drives with 4GB of cache.  They do this so they can have 50% over-provisioning, which makes the SSD last a very, very long time; normal SSDs use something like 10% over-provisioning.  Plus, since the cache is only written to a small amount, the life of the SSD is basically a non-issue.
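For a rough sense of scale (the P/E cycle count and daily write volume below are illustrative assumptions, not Seagate specs):

```python
# Rough endurance estimate for an 8GB cache backed by a 16GB flash chip.
# All numbers here are illustrative assumptions, not manufacturer specs.
flash_gb = 16             # physical NAND on the chip
cache_gb = 8              # exposed cache size (50% over-provisioned)
pe_cycles = 3000          # assumed program/erase cycles per NAND cell
cache_writes_gb_day = 10  # assumed data funneled through the cache daily

# Wear-leveling spreads writes over all 16GB of physical flash,
# not just the 8GB presented as cache.
total_writable_gb = flash_gb * pe_cycles
years = total_writable_gb / cache_writes_gb_day / 365
print(f"~{years:.0f} years before the NAND wears out")  # ~13 years
```

Even with these conservative assumptions, the flash outlives the mechanical part of the drive by a wide margin.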

Now, what happens if that SSD does fail?  Well, I'm not entirely sure.  However, since the data isn't stored solely on the SSD but also on the hard drive portion, the data should be safe if the SSD fails.  In fact, I would guess the drive would continue to function, though at the speed of a traditional hard drive.  Then again, it could also be just like what happens when a traditional cache on a hard drive fails.


----------



## fraya713 (Jan 27, 2014)

Wow looks like this conversation went a long way.

If you're interested in how my SSHDs are doing with my motherboard RAID 10 config, I will answer that with "so far so good"; I've been running them for about 8 months or so without a hitch.

I was expecting more of a performance boost, so I was concerned that the SSD part of my drives wasn't working because they were configured as RAID. However, after some simple Google searching, it appears that these drives can and do work with RAID configs.

So after investigating further, I updated my Intel Matrix Storage Manager and found that Write-Cache Buffer Flushing was enabled and the cache mode was off.

I turned the cache mode to "Write Back" (you need to turn Write-Cache Buffer Flushing off before doing this), and I believe it is now configured to use the SSD part of my hard drives. Time will tell if this prompts a performance boost, but it's worth knowing that the SSD part probably won't work without these setting adjustments in the Intel Matrix Storage Manager.


----------



## Mindweaver (Jan 27, 2014)

That's good to know. So, have you had any luck with finding a RAID Card? I would say you'll see a good improvement with Hardware RAID. I've had a lot of success with my LSI 3Ware 9650SE card, but it's still pretty high in price. I've heard some people have some success with the HighPoint RocketRaid 640L, and it's around 100 bucks. If I'm not mistaken I think Kreij bought one to use with SSD's, but I don't know if he used RAID10 or not.


----------



## fraya713 (Jan 27, 2014)

Mindweaver said:


> That's good to know. So, have you had any luck with finding a RAID Card? I would say you'll see a good improvement with Hardware RAID. I've had a lot of success with my LSI 3Ware 9650SE card, but it's still pretty high in price. I've heard some people have some success with the HighPoint RocketRaid 640L, and it's around 100 bucks. If I'm not mistaken I think Kreij bought one to use with SSD's, but I don't know if he used RAID10 or not.


After all the mixed responses, I wasn't sure if getting a RAID controller was the best choice or not. 
The SATA II on the LSI kind of bothers me; SATA III would definitely be preferable, right?


----------



## The Von Matrices (Jan 27, 2014)

You are completely mistaken about what you are doing.  The SSD part of your hard disks is enabled 100% of the time; there is nothing you can do to enable or disable it.

What you have done is enable write back caching on the OS.  *This is very dangerous for data security*, especially when it is run at the OS level.  If you care about your data, do not use it.

Write back caching means that when your OS sends a command to write data to disk, it is not written to disk immediately but is actually written into a RAM cache.  Once the data is in the RAM cache, the OS (and you) receive a confirmation that the data has been written to disk (even though it hasn't yet).  Then, in the background, the RAID manager moves data from the RAM cache to the array as fast as the disks will accept it.  The problem is that if the system loses power or crashes, then all the data that is in RAM but not yet written to disk is lost forever.
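A toy model makes the failure mode obvious (this is an illustration of the general write-back idea, not Intel's actual implementation):

```python
# Toy model of write-back caching: writes are acknowledged once they hit
# a RAM buffer; a crash before the background flush loses them forever.
class WriteBackDisk:
    def __init__(self):
        self.disk = {}   # data actually persisted on the platters
        self.cache = {}  # data acknowledged but still only in RAM

    def write(self, key, value):
        self.cache[key] = value
        return "ack"  # OS reports success immediately; data isn't on disk yet

    def flush(self):
        self.disk.update(self.cache)  # background writer catches up
        self.cache.clear()

    def crash(self):
        self.cache.clear()  # power loss or OS crash: unflushed data is gone

d = WriteBackDisk()
d.write("report.doc", "v1")
d.flush()                    # v1 made it to disk
d.write("report.doc", "v2")  # acknowledged as written...
d.crash()                    # ...but the crash hits before the flush
print(d.disk["report.doc"])  # prints "v1": the acknowledged v2 is lost
```

The application was told `v2` was safely written, yet after the crash only `v1` exists, which is exactly the silent corruption described above.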

This loss of data can be caused by a power outage, or, since the Intel RAID is run by the OS, even an improper shutdown of the OS (such as a system crash or an improper restart) can cause all data in the RAM cache to be lost.  This corruption can happen silently, and you might not even know it is occurring.  Just checking to see whether the files you wrote before the crash are on the disk isn't enough: the master file table can be fine and show your files as on the disk, but one day you will go to open those files and find that parts of them are corrupt.

Professional RAID cards that use write back caching use RAM on the RAID card itself (separating it from the OS) so that an OS level crash does not corrupt the data in the card's RAM.  But that still means that the cache can be lost in a power outage.  Any smart person using a RAID card either has a battery backup on the card to preserve the cache in a power loss or disables the cache altogether.


----------



## fraya713 (Jan 27, 2014)

Thanks sir. So to verify, do I want to keep Write-cache buffer flushing on and Cache-Mode to off or read only?


----------



## The Von Matrices (Jan 27, 2014)

fraya713 said:


> Thanks sir. So to verify, do I want to keep Write-cache buffer flushing on and Cache-Mode to off or read only?



If you have system memory to spare, use read-only, since that has no chance for data loss and can speed up reading of frequently accessed files.  The only reason you would want to disable the cache completely is if your system was very low on memory.


----------



## fraya713 (Jan 27, 2014)

The Von Matrices said:


> If you have system memory to spare, use read-only, since that has no chance for data loss and can speed up reading of frequently accessed files.  The only reason you would want to disable the cache completely is if your system was very low on memory.


Thanks. I guess when I read what it did, I thought it would use the SSD part of my hard drives as the cache instead of my actual RAM.
I've set it back to Read-Only, as my data is important to me (and so is the time it would take to configure my computer again )


----------



## Mindweaver (Jan 28, 2014)

Yea, it's only advisable to use write-back on a hardware RAID controller with a BBU (_Battery Backup Unit_) in case of power loss. I would suggest getting a hardware RAID card. I think you would see a performance increase even using SATA II, but the 640L is SATA III.


----------



## Aquinus (Jan 28, 2014)

The Von Matrices said:


> What you have done is enable write back caching on the OS. *This is very dangerous for data security*, especially when it is run at the OS level. If you care about your data, do not use it.



I would say that this is true unless you were running server-grade hardware on a UPS. I have confidence that a Xeon and ECC memory (running 100% stock, like server hardware should) will be fine. Write-through would probably be optimal, because you still cache data even while waiting for writes to flush to the array.


----------



## The Von Matrices (Jan 28, 2014)

Mindweaver said:


> I would suggest getting a hardware RAID card. I think you would see a performance increase even using SATA II, but the 640L is SATA III.



There might be a performance increase going from SATA II to SATA III, but given the same SATA spec, those RocketRAID cards will not be any better than integrated RAID.  Those low end RocketRAID cards are software RAID as well.  It doesn't matter either way because RAID 10 involves no parity calculations.

He has an X48 system, so the only benefit would be that he can plug the RocketRAID card into one of the PCIe 2.0 slots on the northbridge (PCIe 2.0 x4 = 2GB/s) and not be constrained by the 1GB/s uplink from the southbridge.  Of course, I am not sure about the performance of SSHDs and whether 4 of them can actually break 1GB/s.  If they can't, then the bottleneck is the drives themselves and the add-on card is pointless.
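Rough math with assumed per-drive throughput figures (ballpark numbers for drives of that era, not measured results):

```python
# Back-of-the-envelope: can 4 SSHDs saturate the 1GB/s southbridge uplink?
# Per-drive throughput figures are rough assumptions, not benchmarks.
hdd_seq_mb_s = 180    # assumed sustained sequential speed from the platters
ssd_burst_mb_s = 400  # assumed burst speed when reads hit the NAND cache
drives = 4

uplink_mb_s = 1000     # ~1 GB/s uplink from the southbridge
pcie2_x4_mb_s = 2000   # ~2 GB/s for a PCIe 2.0 x4 slot on the northbridge

platter_total = drives * hdd_seq_mb_s   # 720 MB/s combined from platters
burst_total = drives * ssd_burst_mb_s   # 1600 MB/s combined cache bursts

print(platter_total <= uplink_mb_s)  # True: platter reads fit in the uplink
print(burst_total <= uplink_mb_s)    # False: only cache-hit bursts exceed it
```

Under these assumptions, sustained transfers never bottleneck on the southbridge; only simultaneous cache hits on all four drives could, and those bursts are short-lived.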



Aquinus said:


> I would say that this is true unless you were running server-grade hardware on a UPS. I have confidence that a Xeon and ECC memory (running 100% stock, like server hardware should) will be fine. Write-through would probably be optimal, because you still cache data even while waiting for writes to flush to the array.



I agree; your situation is different because you are taking every precaution to prevent system crashes.  My storage server with a RAID card has no battery backup, so I have all its write caches (including the on-disk caches) disabled.


----------



## Steevo (Jan 28, 2014)

I run it, but I'm willing to live with the risks (I back up), and since the Agility drives don't feature a cache that needs flushing...


Any benchmark numbers for us to ogle?


----------



## Aquinus (Jan 28, 2014)

The Von Matrices said:


> I agree; your situation is different because you are taking every precaution to prevent system crashes. My storage server with a RAID card has no battery backup, so I have all its write caches (including the on-disk caches) disabled.



Even without a BBU or UPS, write-through is useful for reads. There is no risk of data loss, because data is written to the disk synchronously with the cache. It's supposed to improve read speeds when recently written data is accessed shortly after being written.
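A toy sketch of the write-through idea (an illustration of the general mechanism, not any specific controller's implementation):

```python
# Toy model of write-through caching: every write goes to disk *and* the
# cache synchronously, so a crash can never lose acknowledged data, while
# reads of recently written data are served from fast RAM.
class WriteThroughDisk:
    def __init__(self):
        self.disk = {}   # data persisted on the platters
        self.cache = {}  # RAM copy kept for fast re-reads

    def write(self, key, value):
        self.disk[key] = value   # persisted synchronously...
        self.cache[key] = value  # ...and cached at the same time
        return "ack"             # acknowledged only after both succeed

    def read(self, key):
        if key in self.cache:
            return self.cache[key]  # fast path: served from RAM
        return self.disk[key]       # slow path: served from the platters

d = WriteThroughDisk()
d.write("log.txt", "entry1")
# Even if the system crashed right here, "entry1" is already on disk.
print(d.read("log.txt"))  # prints "entry1", served from the RAM cache
```

The tradeoff is that writes run at disk speed instead of RAM speed, which is exactly why it's safe without a BBU.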


----------

