# RAID. Give me the facts.



## Evo85 (Jul 5, 2009)

Know up front I have never set up a RAID before. Never really found it to be needed for me. But, I am toying with the idea and want to get some things cleared up before I do.

First, from what I understand RAID drives should be exact matches. Same Manufacturer, Capacity, and speeds. True?

RAID setups tend to be more prone to failure. True?

RAID setups can decrease Read/Write/Access times dramatically. True?

Thanks for any insight you can provide here. I am weighing the benefits vs. the possible cons.


----------



## newtekie1 (Jul 6, 2009)

Evo85 said:


> First, from what I understand RAID drives should be exact matches. Same Manufacturer, Capacity, and speeds. True?



This is recommended, but not always necessary. Most controllers will deal with mismatched drives, but having identical drives is best.



Evo85 said:


> RAID setups tend to be more prone to failure. True?



Depends on the type of RAID you do.

If you do RAID0, then yes, they are more prone to failure, since if one drive fails, everything is lost. This means that if you use 2 drives in a RAID0 setup you roughly double the chance of failure; if you use 3 drives, you triple it.
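The rough doubling and tripling follows from basic probability. A quick sketch; the 1% per-drive failure rate below is an assumed illustrative number, not a measured one:

```python
# RAID0 loses everything if ANY member drive fails, so the array's
# failure probability is P(at least one drive fails) = 1 - (1 - p)^n,
# assuming independent failures at rate p per drive.

def array_failure_probability(p_single: float, drives: int) -> float:
    """Chance that at least one of `drives` independent drives fails."""
    return 1 - (1 - p_single) ** drives

p = 0.01  # assumed failure rate of a single drive over some period
for n in (1, 2, 3):
    print(f"{n} drive(s): {array_failure_probability(p, n):.4f}")
# For small p the result is roughly n * p: about double the risk with
# 2 drives, triple with 3.
```

For small per-drive rates this is close to n x p, which is where the "double/triple" rule of thumb comes from.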

Other forms of RAID are less prone to failure, as the whole point of the other forms of RAID is to add redundancy: if a drive fails, no data is lost. Some forms of RAID can handle multiple drive failures before any data is lost.
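As a rough reference for how the common levels trade capacity for redundancy, here is a simplified sketch (equal-sized drives assumed; real arrays have more nuance than this):

```python
# Usable capacity and fault tolerance for common RAID levels, given
# n equal drives. Simplified: ignores nested levels, hot spares, etc.

def raid_summary(level: str, n: int, drive_gb: float):
    """Return (usable_gb, drive_failures_survived) for n equal drives."""
    if level == "RAID0":
        return n * drive_gb, 0        # striping only: no redundancy
    if level == "RAID1":
        return drive_gb, n - 1        # every drive holds a full copy
    if level == "RAID5":
        return (n - 1) * drive_gb, 1  # one drive's worth of parity
    if level == "RAID6":
        return (n - 2) * drive_gb, 2  # two drives' worth of parity
    raise ValueError(f"unknown level: {level}")

for level, n in [("RAID0", 2), ("RAID1", 2), ("RAID5", 4), ("RAID6", 4)]:
    usable, survives = raid_summary(level, n, 640.0)
    print(f"{level}, {n}x640GB: {usable:.0f}GB usable, "
          f"survives {survives} drive failure(s)")
```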



Evo85 said:


> RAID setups can decrease Read/Write/Access times dramatically. True?



Yes, RAID setups can do this. RAID0 is the common setup used for speed, though other levels can also improve speed, and some can actually decrease it.


----------



## CrAsHnBuRnXp (Jul 6, 2009)

Read this guide I made. See if it can help you out. 

http://forums.techpowerup.com/showthread.php?t=43572&highlight=raid+arrays+explained

I still vote for sticky for that thing.


----------



## Kursah (Jul 6, 2009)

I had 2xWD400 40GB drives in RAID0 and then went to 2xWD6400AAKS drives in RAID0; my thread is somewhere in this section. The bench results were pretty nice, but real-world results didn't change much, and some games had much worse loading times. I went back to a no-RAID setup; I might try it again with SSDs, but for me it didn't make enough difference to warrant running. After trying it out for a few months and making comparisons to a single 6400AAKS, I decided that for my uses no RAID was the best route, and the safest to boot. Doesn't mean you shouldn't try it... it's definitely interesting, and not having to load drivers while installing Vista (at least in my application) was nice.


----------



## Homeless (Jul 6, 2009)

Kursah said:


> I had 2xWD400 40GB drives in RAID0 and then went to 2xWD6400AAKS drives in RAID0; my thread is somewhere in this section. The bench results were pretty nice, but real-world results didn't change much, and some games had much worse loading times. I went back to a no-RAID setup; I might try it again with SSDs, but for me it didn't make enough difference to warrant running. After trying it out for a few months and making comparisons to a single 6400AAKS, I decided that for my uses no RAID was the best route, and the safest to boot. Doesn't mean you shouldn't try it... it's definitely interesting, and not having to load drivers while installing Vista (at least in my application) was nice.



I find that RAID controllers play a part in determining the speed here. If you are using a software RAID like the Intel ICHxxR controller, my personal experience is that it will load Windows slower than a single drive, but once Windows is fully loaded everything is faster. With a native hardware RAID, it feels like everything is gliding along all the time.


----------



## Homeless (Jul 6, 2009)

Evo85 said:


> Know up front I have never set up a RAID before. Never really found it to be needed for me. But, I am toying with the idea and want to get some things cleared up before I do.
> 
> First, from what I understand RAID drives should be exact matches. Same Manufacturer, Capacity, and speeds. True?
> 
> ...



The only insight I can add that hasn't already been said by newtekie is: make sure your system is 100% stable. It may just be bad luck on my part, but if my system wasn't absolutely 100% stable, my RAID would die somewhere along the way (ICH10R controller). After running 3.5GHz on my chip for a few months, my RAID0 magically died one day and I had no idea why. I ran the system through Prime/OCCT/Linpack and found that after some long period of time it was unstable. I got it 100% stable and haven't had problems since.


----------



## Davidelmo (Jul 9, 2009)

I'm using RAID0 on my ex58 ud5 motherboard.

Basically, it doubles the speed of a single HDD. File transfers sometimes break 200MB/s and games load way quicker.

Obviously don't use it for long-term storage. But I just use it for Windows, games, etc. All documents go onto a separate drive.


----------



## newtekie1 (Jul 9, 2009)

Homeless said:


> I find that RAID controllers play a part in determining the speed here. If you are using a software RAID like the Intel ICHxxR controller, my personal experience is that it will load Windows slower than a single drive, but once Windows is fully loaded everything is faster. With a native hardware RAID, it feels like everything is gliding along all the time.



I find that people tend to be confused about what a software RAID is and what a hardware RAID is. Onboard RAID controllers are not software RAID, even though they are often labelled as such.

Software RAID was originally a term that referred to, literally, RAID done in software: using Windows or another OS to take two or more drives, which may or may not be on the same controller, and put them into a RAID configuration. Nothing is done via hardware; everything is managed by software. This isn't really done anymore.

Onboard RAID is not truly software RAID; it actually is hardware RAID. However, instead of having a dedicated controller that does all the work, some of the work is offloaded to the CPU. This should not affect boot times, though: the RAID exists and is initialized long before Windows begins to boot.



Homeless said:


> The only insight I can add that hasn't already been said by newtekie is: make sure your system is 100% stable. It may just be bad luck on my part, but if my system wasn't absolutely 100% stable, my RAID would die somewhere along the way (ICH10R controller). After running 3.5GHz on my chip for a few months, my RAID0 magically died one day and I had no idea why. I ran the system through Prime/OCCT/Linpack and found that after some long period of time it was unstable. I got it 100% stable and haven't had problems since.



This is very true with onboard RAID controllers.  Since they use the CPU to do most of their work, if the CPU isn't stable, the work offloaded to the CPU can be affected by the instability, and cause problems.

This is one of the many reasons I don't use onboard RAID.


----------



## FordGT90Concept (Jul 9, 2009)

Evo85 said:


> First, from what I understand RAID drives should be exact matches. Same Manufacturer, Capacity, and speeds. True?


True, with one caveat: ideally, everything is equal except the manufacturing lot. Drives from separate lots are less likely to fail at the same time. I don't practice this (largely because it is difficult to achieve), although it is a good idea.




Evo85 said:


> RAID setups tend to be more prone to failure. True?


True. The more drives you have, the more likely it is that at least one drive will fail. RAID, however, provides redundancy which minimizes the impact (except RAID0). I have three RAIDs in two different computers with zero hard drive failures. Hard drives are actually probably the longest-lived devices in a computer (infant mortality is anywhere between 3-5 years, operational life is measured in decades) so long as they are handled with care.



Evo85 said:


> RAID setups can decrease Read/Write/Access times dramatically. True?


False. Random access times are generally only as fast as the slowest drive. What RAID improves is sequential read/write performance, not random access (or baseline hard drive performance).
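A toy model of that distinction, using assumed round numbers for a single 7200RPM drive of the era (80MB/s sequential, 13ms access; illustrative figures, not measurements):

```python
# Striping (RAID0) splits a large transfer across all members, so
# sequential throughput scales with drive count. Access time is a
# mechanical seek plus rotational latency and is NOT shortened by
# striping; a request still waits on the member drive serving it.

SINGLE_SEQ_MBPS = 80.0   # assumed sequential throughput, one drive
SINGLE_ACCESS_MS = 13.0  # assumed average access time, one drive

def raid0_sequential_mbps(drives: int) -> float:
    return SINGLE_SEQ_MBPS * drives

def raid0_access_ms(drives: int) -> float:
    return SINGLE_ACCESS_MS  # unchanged no matter how many drives

print(raid0_sequential_mbps(2))  # 160.0: transfers scale up
print(raid0_access_ms(2))        # 13.0: seeks do not
```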


----------



## Kursah (Jul 9, 2009)

Homeless said:


> I find that RAID controllers play a part in determining the speed here. If you are using a software RAID like the Intel ICHxxR controller, my personal experience is that it will load Windows slower than a single drive, but once Windows is fully loaded everything is faster. With a native hardware RAID, it feels like everything is gliding along all the time.



Which is great, and equals more junk stuffed into your PC. That's fine if you're running a server or are a power user, but for the regular gamer, IMO it's kind of a waste of money and space when that same budget could be put to better use, like a more powerful GPU, or a cooler so an extra OC is attained at lower temps, which would return a bigger performance increase. And with SSDs getting cheaper, matching some of the best 2-drive non-SSD RAIDs, and offering simply devastating seek times, it's almost pointless. I agree there's a solid market there, but the line between casual and power user gets drawn pretty broadly.

I did use the ICH10R chipset, and it was fine, but not nearly impressive. If I were more of a bench whore I'd still be using RAID, but as a gamer there was really no major benefit; I'd be better off taking the money spent on better SAS drives and RAID controllers and getting a newer OCZ Vertex SSD or similar that would be faster to begin with. That's where the grey area is, but at least there are a few options. Frankly, I'm very content with 3x6400AAKS drives in non-RAID; I have plenty of space, and the speed is tolerable for my needs until SSDs or other technologies actually hit what I feel is the right balance of price, space, and performance.


----------



## FordGT90Concept (Jul 9, 2009)

There's little difference in performance between onboard RAID and $100-200 RAID cards (as seen with the Highpoint RocketRAID 2300).  You only see performance increases in RAID when you look at enterprise-class RAID cards that start around $1000 and head north into the tens of thousands of dollars which have dedicated processors and memory.


A single SSD provides better performance than a very large array of 7200RPM HDDs configured in RAID0; however, at a significant decrease in reliability and longevity. If sheer performance is your sole objective, an SSD is about the only way to go.


----------



## largon (Jul 10, 2009)

Short answer is: 
Use only RAID1, 5 or 6. 
Ignore RAID0. The MB/s increase it gives _means nothing_, unless you do heavy video editing.


----------



## AsRock (Jul 10, 2009)

Evo85 said:


> Know up front I have never set up a RAID before. Never really found it to be needed for me. But, I am toying with the idea and want to get some things cleared up before I do.
> 
> First, from what I understand RAID drives should be exact matches. Same Manufacturer, Capacity, and speeds. True?
> 
> ...




Same size, same speed. Although I like to keep to the same manufacturer as well.

RAID0, like others say, surely is more prone to failure. I had a hard shutdown about 4 days ago in which my apps (RAID5) and games (RAID0) partitions were gone, although Intel's Matrix Storage Manager fixed it all, including the RAID0 partition. I also had a bad time when I had to remove all the drives from the case because I wanted to spray the case; even though all the HDDs and cables were numbered, the whole RAID failed. But I booted an OS off a spare HDD and fixed it like new.

Hard shutdowns can cause issues, but normally they're easily fixed; it depends on how bad it was. I've never lost anything yet.

Running RAID has only increased speed for me. But instead of running 1 RAID5, I opted for 2 RAID5 setups, so if one is doing some heavy work, the other can do something else with no issue, like playing a game for example. It improved Windows and game loading times.

Overall I like it; then again, I have enough space to back everything up to for the just-in-case moment.


----------



## newtekie1 (Jul 10, 2009)

FordGT90Concept said:


> There's little difference in performance between onboard RAID and $100-200 RAID cards (as seen with the Highpoint RocketRAID 2300).  You only see performance increases in RAID when you look at enterprise-class RAID cards that start around $1000 and head north into the tens of thousands of dollars which have dedicated processors and memory.



Performance-wise, yes. There is very little difference between onboard and a dedicated card.

However, ease of use, ease of portability, and reliability are different stories.

Most people with a RAID0 array won't care about these things, and IMO RAID0 is the only time onboard should be used.

However, if you are doing any kind of redundant array, it is pointless to use onboard.  The RAID array is tied to the motherboard.  

If you ever want to put in a new motherboard, for whatever reason, it becomes a problem.  You either have to back the entire array up, then move the drives to the new board and re-setup the array and restore the backup, or lose all the data.

Then there is the issue with overclocking, and corrupting the RAID array, that we already talked about.

Add to that that most onboard controllers do not support OCE/ORLM. Intel just finally added this to their latest ICH10R onboard controller. That's something even the basic dedicated cards (such as the RR2300) offer.

Plus, I just like the fact that with a dedicated card, I can pull the entire array out of one machine and place it in another if I ever need to.  For example: If I decide to sell off my main rig, I can just stick my RAID array in any one of my other machines and have access to all my data in 5 minutes or less.


----------



## Disparia (Jul 10, 2009)

Onboard does RAID-5 well enough for a low-end server. I've got 6 x WD2500YS off an Intel 5000P board in one of my servers. Network transfers to it top out at about 120MB/s (1Gb x 2 trunk). Straight benches keep just above 200MB/s. I know - probably doesn't apply to many people here.

On my home box I currently have 4 x 640GB Blacks in RAID-5 off an ICH10R. HD Tune benches it at 300MB/s top, 245MB/s average, with a 12ms access time. Not the best, but I had an equal need of speed, capacity, and redundancy at a low cost.

By the end of the month I hope to have an Adaptec 5405 and 5805 here at work, along with WD RE3's (1TB x 8, 320GB x 16). I'm upgrading two older servers and building two new ones, so I'll have some benches to share then. The 5405 might interest some; with a $330 street price it's not hideously expensive and should be able to handle any four SSDs you throw at it. It wouldn't be too shabby with mechanical drives either.


----------



## FordGT90Concept (Jul 10, 2009)

newtekie1 said:


> However, ease of use, ease of portability, and reliability are different stories.


Ease of use depends on the motherboard/card combo, how many arrays are in operation, the hierarchy of the arrays, etc. On my Tyan/Highpoint combo, for instance, I've spent days trying to get it to work right because the OS was on the Highpoint and the backup volumes were on the Intel.

Dedicated cards and associated arrays are easier to transfer from computer to computer because there is little risk in losing the data on those drives.  Dedicated cards also tend to have more options than onboard.




newtekie1 said:


> Most people with a RAID0 array won't care about these things, and IMO RAID0 is the only time onboard should be used.


I have no problem doing RAID0, 1, 01 and 10 onboard.  They all have simple control schemes which aren't demanding at all on the system.  I am wary of using onboard for RAID5/6 and other levels that have complex controller requirements (heavier system load).





newtekie1 said:


> If you ever want to put in a new motherboard, for whatever reason, it becomes a problem.  You either have to back the entire array up, then move the drives to the new board and re-setup the array and restore the backup, or lose all the data.


Whenever you are dealing with critical data on a RAID array, you want no less than two backups of that data on two separate mediums (two computers, a computer and a tape, a computer and a stack of DVDs, a portable hard drive and a tape, etc.).  If I can't achieve that, I am wary of doing it until I gain access or invest in the necessary hardware to do it.




newtekie1 said:


> Then there is the issue with overclocking, and corrupting the RAID array, that we already talked about.


Overclocking (or running any component out of spec) introduces new hazards into the equation. Regardless, any unstable system has the potential to corrupt data on a hard drive during write operations and return inaccurate values during read operations. As with anything else, when you increase the workload, you increase the risk of an error occurring.




newtekie1 said:


> Add to that that most onboard controllers do not support OCE/ORLM. Intel just finally added this to their latest ICH10R onboard controller. That's something even the basic dedicated cards (such as the RR2300) offer.


OCE and ORLM are enterprise-class RAID functions, that is, required for zero-downtime critical mainframes and systems. Consumer-class can afford to shut down for an hour to make changes. Whenever the system isn't critical, it should be shut down anyway for the sake of not throwing the power supply a curveball.


----------



## newtekie1 (Jul 10, 2009)

Jizzler said:


> Onboard does RAID-5 well enough for a low-end server.



No, it doesn't.  The board dies, the data is gone.  Where is your redundancy now?  RAID5 is not meant for performance, it is meant for redundancy, which running it on an onboard controller decreases.



FordGT90Concept said:


> I have no problem doing RAID0, 1, 01 and 10 onboard.  They all have simple control schemes which aren't demanding at all on the system.  I am wary of using onboard for RAID5/6 and other levels that have complex controller requirements (heavier system load).



It isn't about system load.  It is about redundancy. Onboard might be sufficient for RAID1 with only 2 drives, but beyond that, go with a dedicated card.



FordGT90Concept said:


> Whenever you are dealing with critical data on a RAID array, you want no less than two backups of that data on two separate mediums (two computers, a computer and a tape, a computer and a stack of DVDs, a portable hard drive and a tape, etc.).  If I can't achieve that, I am wary of doing it until I gain access or invest in the necessary hardware to do it.



To me, all data is critical. And this is very good advice right here. All of my data is backed up in at least one other location. That doesn't mean I want the hassle of having to restore it if I can avoid it. If spending a couple hundred dollars saves me that hassle even once (and it has, more than once actually), then it has paid for itself.



FordGT90Concept said:


> Overclocking (or running any component out of spec) introduces new hazards into the equation. Regardless, any unstable system has the potential to corrupt data on a hard drive during write operations and return inaccurate values during read operations. As with anything else, when you increase the workload, you increase the risk of an error occurring.



Very true, but the danger is increased significantly when using the onboard RAID controller.



FordGT90Concept said:


> OCE and ORLM are enterprise-class RAID functions, that is, required for zero-downtime critical mainframes and systems. Consumer-class can afford to shut down for an hour to make changes. Whenever the system isn't critical, it should be shut down anyway for the sake of not throwing the power supply a curveball.



OCE and ORLM can be about zero downtime, but they also have to do with simply increasing the storage capacity of the RAID array, online or not. Onboard controllers lack this function, and it has nothing to do with enterprise-class RAID. A home user would certainly like the ability to add a drive to an array and increase its size from time to time; that is what OCE does. Up until the ICH10R, you couldn't make changes to an array after it was created. To increase the array, it needed to be deleted, then re-created with the new drive included, and then restored from a backup. OCE and ORLM aren't just about saving the user from a reboot and downtime.


----------



## Disparia (Jul 10, 2009)

newtekie1 said:


> No, it doesn't.  The board dies, the data is gone.  Where is your redundancy now?  RAID5 is not meant for performance, it is meant for redundancy, which running it on an onboard controller decreases.



It goes to one of the three other servers with that identical board, or any board with one of the other compatible ICH chipsets that can read the array. To me, the "it dies with the board" thinking died out when we stopped using those IDE RAID controllers that used to come on Pentium 2/Pentium 3 boards.

Let's see, 200MB/s is more than 80MB/s (single drive), so I'll have to bottleneck the array in some fashion to make it fit your definition. j/k


----------



## newtekie1 (Jul 10, 2009)

Jizzler said:


> It goes to one of the three other servers with that identical board, or any board with one of the other compatible ICH chipsets that can read the array. To me, the "it dies with the board" thinking died out when we stopped using those IDE RAID controllers that used to come on Pentium 2/Pentium 3 boards.



And the ICH allows you to recover arrays from those drives?  I don't think so...


----------



## Disparia (Jul 11, 2009)

Thanks for holding. Your post is important to us.

I have systems here at my house with Intel Q965/ICH8DO and P45/ICH10R motherboards. On the first system I created, using two drives, a 120GB RAID-0 array, with a RAID-1 array utilizing the remaining space. I installed the drives in the second machine, and the two arrays were picked up on boot and were usable in the OS.

Now I fully admit that this does not suffice as proof of my original statement, besides my not having a third drive for RAID-5 testing*. We'll have to wait till I'm back in the office on Monday, and even then I may have to wait until the new servers arrive so that I have some systems and arrays to play with.

Also, to clarify my thoughts on this: it's not that I'm trying to say or imply that every ICHxR works with every other ICHxR with all possible combinations of arrays - it's that when "the board dies" I will have a means of recovering the data.

* Last week I recovered 1TB+ off a degraded RAID-5 array (3 of the 4 disks). It was made on an ICH10R and recovered on an ICH10R, but on different boards. Over the years I've experienced the more common situations, IMO, and those experiences have been positive. So much so, in fact, that I'd bet on it. (Of course, this betting man is not a complete fool; my personal and work data is backed up elsewhere.)


----------

