# RAID 5 for games?



## hat (May 20, 2018)

I'm upgrading my system sometime in the coming months, or when Ice Lake comes out, which would be a longer wait, but yeah...

Anyways, I was mulling over storage solutions, as I currently have just a 128GB SSD and a 500GB SSD. Games are getting bigger and bigger, and I could get 3 1TB drives for the price of one 1TB SSD, throw them in RAID 5, and wind up with 2TB storage. Then, I can use that for games and anything else for a long time. I'm not too concerned with write performance, but I do want good read performance. Would 3 drives in RAID 5 be good for that, or would I maybe be better off with 2 2TB drives in RAID 1?


----------



## cucker tarlson (May 20, 2018)

Get a HDD and cache it with an SSD. Btw, I know games are getting bigger, but who keeps all of them installed? Do you play 20 at once? A 500GB SSD is more than enough for games, and 1TB is plenty. I suggest getting a second 500GB SSD if you can't fit them on one. HDDs are good for storage, but performance- and noise-wise they're absolutely awful compared to SSDs.

A 2TB HDD, even in RAID 5, stands no chance against a 1TB SSD for games. It's only better for data that doesn't need constant quick access.


----------



## therealmeep (May 20, 2018)

Personally, I have 2 WD Blacks in RAID 0 and they don't hold a candle to my 960 EVO or even my MX300. SSDs are just such a big performance jump. Something like R6 takes ~30 seconds to load a map on the Black array, ~5 seconds on the MX300, and less than a second on the 960.


----------



## hat (May 20, 2018)

That's what they said back when I got my 512GB SSD, which is now in another system. I wasn't impressed with the performance in games at that time...


----------



## cucker tarlson (May 20, 2018)

If you want price/performance, then get a fast HDD like a Barracuda Pro and a 32GB Optane stick. It's a helium HDD with 256MB of cache, and it's fast for a HDD.

https://www.purepc.pl/pamieci_masow...eagate_barracuda_pro_10_tb_hel_yeah?page=0,10


----------



## therealmeep (May 20, 2018)

Unfortunately, his current machine won't support Optane/NVMe.


----------



## cucker tarlson (May 20, 2018)

He's upgrading. Read what the man wrote.

I think AMD's StoreMI is a neat option, especially when you combine it with a big SSD, 500GB or more. Still, I'd rather have all my games on a dedicated SSD, but overall it's a very good option.


----------



## MrGenius (May 20, 2018)

Again with the helium HDD thing? Like that makes it faster? B to the S.

I briefly had one of the Barracuda Pro 256MB 8TB drives. It wasn't as fast as the Toshiba X300 128MB 5TB (which I also briefly had) in sequential R/W, and was only slightly faster in random R/W. Yet the Seagate cost more than twice as much. NOT impressed. Returned it (just like I did the Toshiba before it, for being as loud as a chainsaw). And that was the last time I will ever spend another dime on an HDD. I've gone SSD now... and ain't never going back!

Anyway. I saw that link posted for a Micron 1100 2TB SSD the other day. $268 on sale. Throw your games on one of those instead of dicking around with an HDD RAID array. You'll be glad you did.


----------



## hat (May 21, 2018)

I somehow managed to forget all about RAID 0. I wouldn't really care if it blew up and I had to download my games again. I could RAID 0 two 1TB drives for a total cost of about $90 today (Seagate Barracudas), giving me 2TB for games and whatever other crap. A single 1TB SSD still costs $190 for the cheapest one on Newegg. So I could save about $100 and get more storage space. Question is, is the performance _really_ there? As I mentioned before, it sure didn't seem like it back when I picked up a 512GB SSD, coming from a single 500GB WD Caviar Black.

The OS is already on a 128GB SSD at this point, so the storage solution in question is pretty much just for games and other non critical bulk storage.


----------



## Jetster (May 21, 2018)

For drives bigger than 2TB, RAID 5 is obsolete. So in this case, sure. But RAID 0 would be better
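For context, the usual argument behind the "RAID 5 is obsolete on big drives" claim is the risk of an unrecoverable read error (URE) during a rebuild, when every surviving drive must be read end to end. A back-of-envelope sketch, assuming the commonly quoted consumer spec-sheet rate of one URE per 10^14 bits read (an assumption for illustration, not a figure from this thread):

```python
import math

# P(at least one URE) while rebuilding a degraded RAID 5 array.
# Assumes the consumer spec-sheet worst case of 1 URE per 1e14 bits.
URE_RATE = 1e-14  # errors per bit read

def rebuild_failure_probability(drive_tb: float, n_drives: int) -> float:
    """Chance of hitting a URE while reading the n-1 surviving drives fully."""
    bits_read = drive_tb * 1e12 * 8 * (n_drives - 1)
    return 1 - math.exp(-URE_RATE * bits_read)  # Poisson approximation

print(f"3x1TB rebuild: {rebuild_failure_probability(1, 3):.0%} chance of a URE")
print(f"3x4TB rebuild: {rebuild_failure_probability(4, 3):.0%} chance of a URE")
```

With 1TB drives the risk is still modest, which is why the "obsolete" argument is usually aimed at much larger arrays; real-world URE rates are also often better than the spec-sheet number.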


----------



## Th3pwn3r (May 21, 2018)

Are you guys really not realizing how fast SSDs are in comparison to HDDs? Maybe you've never come across something with long load times...


----------



## hat (May 21, 2018)

Jetster said:


> For drives bigger than 2 Tb RAID 5 is obsolete. So in this case sure. But RAID 0 would be better


Yeah, not sure how I missed the possibility of raid 0 in this case. Been thinking too much about other types of raid for other applications lately...


----------



## newtekie1 (May 21, 2018)

hat said:


> Yeah, not sure how I missed the possibility of raid 0 in this case. Been thinking too much about other types of raid for other applications lately...



In my experience, RAID0 is great for sequential read/write speeds, but the random speeds aren't that great.

If it were me, I'd buy a single 2TB hard drive and an inexpensive 240-250GB SSD, then use Primocache to set up the SSD as a cache for the HDD. It actually works pretty well. Yeah, it isn't always as fast as an SSD, but it's definitely a boost over just a normal HDD. And once I upgraded to the 8700K and 1080Ti I really noticed the HDD causing stutter in big open-world games, but adding the SSD cache eliminated that.


----------



## cucker tarlson (May 21, 2018)

MrGenius said:


> Again with the helium HDD thing? Like that makes it faster? B to the S.
> 
> I briefly had one of the Barracuda Pro 256MB 8TB drives. Wasn't as fast as the Toshiba X300 128MB 5TB(I also  briefly had) in sequential R/W. And was only slightly faster in random R/W. Yet the Seagate cost more than twice as much. NOT impressed. Returned it(just like I did with the Toshiba prior, for being as loud as a chainsaw). And that was the last time I will ever spend another dime on an HDD. I've gone SSD now...and ain't never going back!
> 
> Anyway. Seen that link posted for a Micron 1100 2TB SSD the other day. $268 on sale. Throw your games on one of those. You'll be glad you did. Instead of dicking around with an HDD RAID array.


Again, *prove* me wrong. All you do is bark at me while the reviews say I'm right. This is not how winning arguments works. It's faster than a WD Black almost every time in real-world testing.

https://www.tomshardware.com/reviews/seagate-barracuda-pro-10tb-hdd,5210-2.html
https://uk.hardware.info/reviews/79...p-storage-wars-test-results-pcmark8-subscores

Synthetic sequential testing doesn't matter for how your programs run at all. Do you really think it does?

Where did you get the twice-the-price number from? That is *absolutely not true*. The X300 5TB is PLN620 here, the Barracuda Pro 6TB is PLN900. That's PLN124/TB for the Toshiba and PLN150/TB for the Seagate, and it's well worth it, with the Seagate being faster, quieter, and carrying a 5-year warranty.

How do you talk to loudmouths who ignore facts and invent their own? Blocked.


----------



## eidairaman1 (May 21, 2018)

newtekie1 said:


> In my experience, RAID0 is great for sequential read/write speeds, but the random speeds aren't that great.
> 
> If it was me, I'd buy a single 2TB hard drive, and an inexpensive 240-250GB SSD, then use Primocache to setup the SSD as a cache for the HDD.  It actually works pretty well.  Yeah, it isn't always as fast as an SSD, but it definitely is a boost over just a normal SSD.  And once I upgraded to the 8700K and 1080Ti I really noticed the HDD causing stutter in big open world games, but adding the SSD cache eliminated that.



What about an SSHD?


----------



## hat (May 21, 2018)

I just realized I could go even cheaper and get another 500GB WD Black and put it in RAID 0 with the one I already have. I don't need all 128GB of my SSD for my OS and programs either... currently I'm scraping by on a 30GB partition for that. I figure I could RAID some hard drives, and use the rest of my SSD either to store the most demanding games or to set it up as a cache for the RAID array, though it would only be 80GB or so.


----------



## phill (May 21, 2018)

I just tend to go along with this thought: my internet sucks, so it doesn't matter how fast I can load a program/game up, I'd still be waiting for someone else loading the map or something. So why worry if it takes 3 seconds or 30? Beyond that it's just personal preference.

Plus I would say that SSDs for the 'big games' like GTA 5, Battlefield etc. would probably make more sense, but then it might depend on what games you play, at what res, and so on... There are a few variables. It just depends on your budget, your wants for your system, and what it can support, I think.

I would like to go all SSDs if I'm honest, but even with SATA 3 based models, I'd believe that standard single drives would be much easier to deal with than RAIDed setups...


----------



## hat (May 21, 2018)

I've never messed with RAID myself previously, so I'm kind of interested in it as a learning experience if nothing else...


----------



## phill (May 21, 2018)

hat said:


> I've never messed with RAID myself previously, so I'm kind of interested in it as a learning experience if nothing else...



Oh, it's definitely a learning experience. But may I suggest that you try it with no important data? Use benchmarks or something if you want to just have a go at it... then if anything does go wrong, you won't be stressing out...

Oh but definitely backup everything first !!


----------



## hat (May 21, 2018)

Yeah, it's just for storing games and potentially other storage that isn't critical. If something goes wrong, not the end of the world.


----------



## biffzinker (May 21, 2018)

hat said:


> I've never messed with RAID myself previously, so I'm kind of interested in it as a learning experience if nothing else...


I learned about RAID0 with flat ribbon cables running at ATA/133 MB/s per drive on an older Abit board with a Promise controller integrated on the mobo. The drives were from Maxtor if I remember right. Worked well enough at the time.


----------



## Aquinus (May 21, 2018)

If you don't want to re-install your entire Steam library when a drive fails, RAID-5 is the way to go. RAID-5 only has a couple of downsides compared to RAID-0, and it has some upsides too:

- RAID-5 offers redundancy, so you won't lose all of your stuff on a single drive failure.
- RAID-5 offers similar *read* performance to RAID-0, which tends to scale with the number of disks you have.
- RAID-5 has worse write performance than RAID-0 due to the need to store parity data for redundancy.

I personally have a Steam library on both my RAID-5 (HDDs) and my RAID-0 (SSDs). The bigger games that I don't play as often go on the RAID-5; the smaller or more frequently played games go on the SSD RAID-0, and that works out fairly well for me.
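The redundancy comes from parity: each stripe stores the XOR of its data chunks, so any one missing chunk can be rebuilt by XOR-ing everything that survives. A minimal sketch (illustrative only; real arrays stripe data and rotate parity across all the drives):

```python
# XOR parity, the mechanism behind RAID 5's single-drive fault tolerance.

def xor_blocks(blocks):
    """XOR equal-length byte blocks together, byte by byte."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

# One stripe on a 3-drive array: two data chunks plus one parity chunk.
d0, d1 = b"GAMEDATA", b"MOREDATA"
parity = xor_blocks([d0, d1])

# The drive holding d1 dies; rebuild its chunk from the survivors.
rebuilt = xor_blocks([d0, parity])
assert rebuilt == d1
print("rebuilt chunk:", rebuilt)
```

This is also why writes are slower: every write must update the parity chunk as well as the data chunk.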

FWIW, RAID-5 read speeds can be pretty good. This is what I get with 4x1TB drives. Bigger drives are going to have better transfer speeds too, since a single 1TB can't really break 130MB/s at its best. It's the access time that makes a hard drive feel slow, which is why SSDs feel fast.

RAID-5 (4x1TB WD Black): _[benchmark screenshot]_

RAID-0 SSDs (Corsair Force GT 120GB): _[benchmark screenshot]_

Single 1TB WD Black: _[benchmark screenshot]_


----------



## cucker tarlson (May 21, 2018)

If you're thinking about SSDs in RAID 0 for just regular use, forget it. I've been there. I had 850 Pros in RAID 0 and it was hard to see any benefit; it was no faster than a single drive. A single 850 Pro in RAPID mode was more responsive in daily use than two of them in RAID 0.

All in all, the best setup would be a separate SSD for games. Period. I'd rather have a larger SSD than cache a HDD with one. And why would you wanna get rid of your 128GB SSD? It's perfect for a system drive. I'd be glad to have one myself and just leave the rest of my drives (256+256+512) for games and data.

I think the worst part about your RAID 5 setup is having three 7200rpm drives running all the time. They're noisy, and they fail more often than decent SSDs due to the mechanical parts.


----------



## hat (May 21, 2018)

I wouldn't get rid of my SSD. I'm keeping it as a boot drive for sure. The question was whether it would be worth it to partition it out, keeping 40GB or so for the system and using the rest as a cache, or as a separate (albeit small) drive for the most demanding games.

I couldn't care less if I have to redownload my Steam library, or if whatever other insignificant data on the drive goes. I don't care much about write performance either in this case. Still, $20 more for a third drive for better read performance wouldn't hurt, and the redundancy doesn't hurt either. Then I'd have a decent storage solution that more than likely isn't gonna blow up, so I could store other stuff on it, but it's only 1TB. Or just go big and SSD the whole thing.

Too many decisions. At least I have a while to work it all out and decide what I want to do... I've still got months of saving to go before I can do anything at all, let alone potentially holding out until Ice Lake comes around.


----------



## John Naylor (May 21, 2018)

Well, I can tell you this... we conducted two experiments:

1.  Desktop system... 5 users over 6 weeks. Each system had (2) 256GB Samsung Pros, (2) Seagate 2TB 7200rpm SSHDs and (1) 7200rpm HDD. Users were told that we had installed new AV/system monitoring software and were asked to report any periodic performance issues in booting, application or gaming usage. The system had multiple OS installs, set up such that it could be booted off any of the drive types, and these were changed daily via the boot menu without anyone's knowledge. Over the 6 weeks, one user reported one instance where boot time "seemed slower".

2.  Twin laptops... same 5 users, two laptops, one equipped with an SSD and HDD, one equipped with an SSHD. No reported differences.

This does not show that all those storage devices are the same speed... it shows that the difference is small enough that users don't notice it. These are measured results:

Boot Times:
HD = 21.2 seconds
SSHD = 16.5 seconds
SSD = 15.6 seconds

AutoCAD load large file times were identical which was puzzling

Game load times with MMO were also identical which was attributed to server handshaking being the bottleneck


3.  The desktop box was initially set up, prior to the above test, with RAID 0 on the SSDs and RAID 1 on the SSHDs... after 3 months, the arrays were broken. Interestingly, when we called Samsung they advised that RAID was neither supported nor recommended.

4.  We have tested RAID about every three years over the last dozen or so years... looking for answers as to why we saw no gain in either the apps that we use or gaming (RAID does have a place in animation, rendering, and video editing), and collected the following about 10 years ago... nothing's changed:

===============================================

http://en.wikipedia.org/wiki/RAID_0#RAID_0

_RAID 0 is useful for setups such as large read-only  NFS servers where mounting many disks is time-consuming or impossible and redundancy is irrelevant.

RAID 0 is also used in some gaming systems where performance is desired and data integrity is not very important. However, real-world tests with games have shown that RAID-0 performance gains are minimal, although some desktop applications will benefit.[1][2]_

http://www.anandtech.com/printarticle.aspx?i=2101
_"We were hoping to see some sort of performance increase in the game loading tests, but the RAID array didn't give us that. While the scores put the RAID-0 array slightly slower than the single drive Raptor II, you should also remember that these scores are timed by hand and thus, we're dealing within normal variations in the "benchmark".

Our Unreal Tournament 2004 test uses the full version of the game and leaves all settings on defaults. After launching the game, we select Instant Action from the menu, choose Assault mode and select the Robot Factory level. The stop watch timer is started right after the Play button is clicked, and stopped when the loading screen disappears. The test is repeated three times with the final score reported being an average of the three. In order to avoid the effects of caching, we reboot between runs. All times are reported in seconds; lower scores, obviously, being better.  In Unreal Tournament, we're left with exactly no performance improvement, thanks to RAID-0

If you haven't gotten the hint by now, we'll spell it out for you: there is no place, and no need for a RAID-0 array on a desktop computer. The real world performance increases are negligible at best and the reduction in reliability, thanks to a halving of the mean time between failure, makes RAID-0 far from worth it on the desktop.

Bottom line: RAID-0 arrays will win you just about any benchmark, but they'll deliver virtually nothing more than that for real world desktop performance. That's just the cold hard truth."_

http://www.techwarelabs.com/articles/hardware/raid-and-gaming/index_6.shtml
_".....we did not see an increase in FPS through its use. Load times for levels and games was significantly reduced utilizing the Raid controller and array. As we stated we do not expect that the majority of gamers are willing to purchase greater than 4 drives and a controller for this kind of setup. While onboard Raid is an option available to many users you should be aware that using onboard Raid will mean the consumption of CPU time for this task and thus a reduction in performance that may actually lead to worse FPS. An add-on controller will always be the best option until they integrate discreet Raid controllers with their own memory into consumer level motherboards."_

http://www.hardforum.com/showthread.php?t=1001325
_"However, many have tried to justify/overlook those shortcomings by simply saying "It's faster." Anyone who does this is wrong, wasting their money, and buying into hype. Nothing more."_

http://jeff-sue.suite101.com/how-raid-storage-improves-performance-a101975
_"The real-world performance benefits possible in a single-user PC situation is not a given for most people, because the benefits rely on multiple independent, simultaneous requests. One person running most desktop applications may not see a big payback in performance because they are not written to do asynchronous I/O to disks. Understanding this can help avoid disappointment."_

http://www.scs-myung.com/v2/index. [...] om_content
_"What about performance? This, we suspect, is the primary reason why so many users doggedly pursue the RAID 0 "holy grail." This inevitably leads to disappointment by those that notice little or no performance gain.....As stated above, first person shooters rarely benefit from RAID 0. Frame rates will almost certainly not improve, as they are determined by your video card and processor above all else. In fact, theoretically your FPS frame rate may decrease, since many low-cost RAID controllers (anything made by Highpoint at the time of this writing, and most cards from Promise) implement RAID in software, so the process of splitting and combining data across your drives is done by your CPU, which could better be utilized by your game. That said, the CPU overhead of RAID0 is minimal on high-performance processors."_

Even the HDD manufacturers limit RAID's advantages to very specific applications, and none of them involve gaming:

http://westerndigital.com/en/products/raid/
=========================================================================

5.  BTW, other than for that test box, we have not installed a HDD in over 7 years. None of the SSDs we installed (gotta be 50+) has failed. I have replaced two older SSDs, well, 3 counting a warranty replacement that also failed, but that was when SSDs were relatively new and didn't have the lifespan of current drives.

Yes, you can post benchmarks all day long, and I will agree that there is a HUGE speed difference in benchmarks... but what those benchmarks simulate are things that are not performed often in a normal day. I just don't have the need to move 500GB of files every day... yes, I have 2TB of backups, but after the 1st one, the rest only take seconds since they are incremental. When I decide that the work day is over and it's now "play time", I start the game load, walk away to grab a snack, and it's all ready when I sit down again. My son does the same thing, and who cares if a game loads in 23 versus 21 seconds while he's logging in to Discord, putting his headphones on, and opening his browser to the game-related web pages that he uses as a resource?

We recommend an SSD paired with an SSHD in every build... if budget restrictions mean, say, dropping the GFX card down a tier, starting with the SSHD and adding the SSD later is the recommended option. We put a backup OS install on the SSHD anyway. Given your system specs, if you're sticking with just 8GB of RAM and budget is an issue, I'd:

a)  Use the 500 GB SSD for OS and fav games.
b)  Use the 128 GB for pagefile , temp files and maybe even a RAM drive
c)   Use a 2 TB SSHD Firecuda 7200 rpm

In gaming, the older Seagate SSHD was 50% faster than the WD Blacks, far greater than anything you will see with RAID 0 on the desktop in gaming... the THG Hard Drive charts aren't loading today, but here's the link:

http://www.tomshardware.com/charts/


----------



## newtekie1 (May 21, 2018)

eidairaman1 said:


> What about a sshd?



The small SSD cache on SSHDs doesn't really help performance a whole lot.



hat said:


> I just realized I could go even cheaper and get another 500GB WDC Black and put it in RAID 0 with the one I already have. I don't need all 128GB of my SSD for my OS and programs either... currently I'm skimming by on a 30GB partition for that. I figure I could RAID some hard drives, and use the rest of my SSD to either store the most demanding games on, or set it up as a cache for the RAID array, though it would only be 80GB or so.



An 80GB cache would be better than nothing. I ran with a 60GB cache on my 2TB drive for years. I just recently upgraded to a 480GB cache, and only because I replaced my MX200 OS drive with a new SSD and didn't really have another use for the MX200.


----------



## phill (May 22, 2018)

Aquinus said:


> If you don't want to re-install your entire steam library if a drive fails, RAID-5 is the way to go. RAID-5 only has a couple downsides compared to RAID-0 but, it has some upsides too.
> 
> RAID-5 offers redundancy so, you won't lose all of your stuff if you have a single drive failure.
> RAID-5 offers similar *read* performance as RAID-0 which tends to scale to the number of disks you have.
> ...



I like the testing software, is that on Linux, perchance?

Give it a go and let us know your findings!!


----------



## Aquinus (May 23, 2018)

phill said:


> I like the testing software, is that in Linux per chance??


Yup, it's my daily driver. ...and the software:

_[screenshot]_

phill said:


> Give it a go and let us know your findings!!


Running Linux or running games off the RAID-5? I've been running Ubuntu exclusively for a couple years now and it's fine for what I do with it. As for running games off the RAID-5, I already do that and it's fast enough. Obviously not as fast as the SSDs but, it's fast enough where it doesn't really bother me at all.


----------



## kastriot (May 23, 2018)

RAID 5 is obsolete, replaced by RAID 6, but you need an extra HDD, and you get no performance drop when one HDD goes bad...


----------



## las (May 23, 2018)

Mechanical drives suck for that. They are only for storage IMO. Way too slow compared to SSDs, and RAID 5 on an onboard controller is NOT good regardless...

Get another 500GB SSD and run RAID0, or go M.2 -> 1TB Samsung 970 Evo

My 2x 500GB 850 EVOs have been running flawlessly in RAID 0 for years now. Very fast. The only con is a 3s longer boot time. Every benchmark and game loading time is faster in the RAID 0 config (I tested several titles and benchmarks when I bought the 2nd). RAID 0 is fine for this: zero important data (game saves and everything else are backed up in the cloud + network).

Going M.2 NVME on next build tho (no RAID).


----------



## Mindweaver (May 23, 2018)

I saw this yesterday right before I left the office. I meant to reply when I got home but forgot... One question I've not seen asked (or I overlooked it) is: do you plan to use software RAID or hardware? I would never use RAID 5 in a software array. If you do plan to use hardware, be sure to have a BBU so you can enable write-back cache for better performance. If you plan to use software RAID, then I would go with RAID 0, or for the best of both worlds RAID 10, but that's double the drives. Think of it as two RAID 0 arrays mirrored.

Years ago I used 3x 250GB WD RE4 drives in RAID 0 for my game drive and it was fast, but nothing compared to today's SSDs. I would suggest buying a big mechanical storage drive and a 256-512GB SSD, and using something like *Steam Mover* to move the games you are playing to the SSD. I would do this over even setting up an SSD cache drive. It creates symbolic links. You can create a batch file to do this, but Steam Mover works fairly easily for anyone (_I feel like a spammer now... but if the bot ban hammers me, I know the guy that owns the site... lol_). I have created my own program, but it's not as polished as Steam Mover, hehe. @FordGT90Concept may have a symbolic link program of his on here somewhere, but like I said, I've never run into any issues with Steam Mover. I've not used it in years, though, since I created my own.
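A rough sketch of what a tool like Steam Mover does under the hood (this is not Steam Mover's actual code, and the paths are made up): move the game folder to the SSD, then leave a symbolic link at the old location so Steam still finds everything where it expects it.

```python
import shutil
from pathlib import Path

def move_with_symlink(game_dir: str, ssd_dir: str) -> None:
    """Move a game folder to the SSD and symlink the old path to it."""
    src = Path(game_dir)
    dst = Path(ssd_dir) / src.name
    shutil.move(str(src), str(dst))                # relocate the game
    src.symlink_to(dst, target_is_directory=True)  # old path -> new home

# Hypothetical usage (paths are examples only):
# move_with_symlink(r"D:\Steam\steamapps\common\BigGame", r"C:\FastSSD")
# Note: on Windows, creating symlinks needs admin rights or Developer Mode.
```

Moving the folder back and deleting the link reverses the operation, which is why this trick is handy for rotating whichever game you're currently playing onto the SSD.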

Also, on a side note: I have 2x 250GB SSDs in RAID 0 and I don't notice any better performance than my 850 Pro 256GB in games and programs. Sure, the benchmarks say it's faster, but in the real world I don't notice it. I've only ever really noticed it when copying huge files like DBs and backups, but nothing in programs or games. Actually, there was only really one game it made a difference in, and that was Rage with its megatextures, but that was on a PCI-E SSD with 1200 write and 1400 read.


----------



## FordGT90Concept (May 23, 2018)

I do not.  The only Steam-related program I have is to fix Steam installscript errors.

Steam mostly runs from itself.  If you just copy the Steam directory to another drive and run Steam, it will update everything itself.

Any kind of RAID is fine for games.  All games have systems to handle slow loading.  RAID5 will have faster read performance than a single drive but with a slight latency penalty.

My Origin, Steam, and GOG libraries are on a single 6 TB hard drive.  It's 3/4 full.  I'm going to have to buy a ~12 TB hard drive soon and copy everything over.


----------



## Mindweaver (May 23, 2018)

FordGT90Concept said:


> I do not.  The only Steam-related program I have is to fix Steam installscript errors.
> 
> Steam mostly runs from itself.  *If you just copy the Steam directory to another drive and run Steam, it will update everything itself*.
> 
> ...



That's good to know. Yeah, I haven't used Steam Mover in a while, nor the program I wrote to move the games folder and create symbolic links. Currently I have a 1TB SSHD (_it was a gift_) as my main drive for games, and then a few SSDs for the games I play the most... which is slim to none now, lol. I mainly play FO4VR (_when I get time_), and I did create symbolic links for the mods, only because I didn't want to have them installed twice for pancake FO4 and FO4VR.


----------



## therealmeep (May 23, 2018)

Personally, I put the most-played games from Steam on my 960 EVO, put the rest on my RAID 0 2x2TB WD Blacks (~300 gigs free), and leave the bazillion gigs of Battlefield games and other Origin crap on my 4TB data drive.


----------



## Deleted member 178884 (May 23, 2018)

I wonder how most people here even live off 1TB. I've hit 1.1TB of storage used on my Toshiba X300 6TB and it's rising fast. I'd recommend Toshiba's X300 HDDs. If you're looking for raw performance, I'd take a look at Kingston SSDs; here's my CrystalDiskMark video (skip to around 4:30):

_[video embed removed]_

When you dump the cash on a 12TB drive, go helium; they are excellent performers with a great capacity range.


----------



## TheoneandonlyMrK (May 23, 2018)

hat said:


> That's what they said back when I got my 512gb ssd, which is now in another system. I wasn't impressed with the performance in games at that time...


Times change; some games are now a nightmare on a HDD, for example PUBG.
OP, buy one 3TB drive and just get familiar with Steam's move-folder option, moving games between your new HDD and old 512GB SSD. Win-win, and cheap too.


----------



## hat (May 23, 2018)

The 512GB SSD isn't an option anymore. It's in another system and has been for some time, and that's where it's likely to stay.

For those asking about what type of RAID I would use specifically, that would most likely be software RAID from within Windows. I'm not too happy about that either, because software RAID uses CPU cycles. I would set it up from the BIOS, but I've learned recently that isn't true hardware RAID either. The only way to get true hardware RAID is to buy a separate controller card, which I won't do, because I'm cheap and getting a real hardware RAID controller card isn't something cheap people do. That's expensive, and the whole idea behind using RAID is to save some money, because it would be cheaper than an SSD.


----------



## therealmeep (May 23, 2018)

Personally, I have my drives software-RAIDed in RAID 0 and have yet to have an issue with them. On my home server I ran RAID off the mobo chipset, but both drives failed due to heat (HAF XB EVO hot-swap bays suck for cooling in 24/7 operation). Now there are 2x 3TB Seagate NAS drives (NOT IronWolfs) in it, on a dedicated card, with no issues.


----------



## TheoneandonlyMrK (May 23, 2018)

hat said:


> The 512gb ssd isn't an option anymore. It's in another system and has been for some time, and that's where it's likely to stay.
> 
> For those asking about what type of raid I would use specifically, that would most likely be software raid from within Windows. Not too happy about that either though because software raid uses CPU cycles. I would set it up from bios, but I've learned recently that isn't true hardware raid either. The reason for running raid like this would be because I'm cheap and getting a real hardware raid controller card isn't something cheap people do
> 
> ...


Well then, personally I would go with option D: get two NVMe SSDs and that 3TB HDD. Use one 256GB NVMe for the OS, a new 512GB NVMe for favorite and in-play games, and the 3TB for storage.
I ran 3x Western Digital Blacks in RAID 0 for a few years until I got an SSD. I found it pointless, given that latency increased a bit, with occasional lag during really heavy CPU use, and the read speeds were not as quick. Then I got a volume corruption. That's life.


----------



## FordGT90Concept (May 24, 2018)

hat said:


> The 512gb ssd isn't an option anymore. It's in another system and has been for some time, and that's where it's likely to stay.
> 
> For those asking about what type of raid I would use specifically, that would most likely be software raid from within Windows. Not too happy about that either though because software raid uses CPU cycles. I would set it up from bios, but I've learned recently that isn't true hardware raid either. The reason for running raid like this would be because I'm cheap and getting a real hardware raid controller card isn't something cheap people do
> 
> ...


Highpoint cards are affordable and good. The RocketRAID in my server is 11 years old now and going strong.

Just beware that none of Highpoint's cards support GPT booting. You do not want to make a RAID array built on them your boot volume (because you'll be limited to 2TB capacity).
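For reference (my addition, not the poster's explanation): that 2TB boot-volume ceiling falls out of legacy MBR partition tables, which address sectors with 32-bit LBA values and 512-byte sectors. A quick sketch of the arithmetic:

```python
# Largest volume addressable by an MBR partition table:
# 32-bit LBA sector numbers x 512-byte sectors.
SECTOR_BYTES = 512
MAX_SECTORS = 2**32  # 32-bit logical block address

max_bytes = MAX_SECTORS * SECTOR_BYTES
print(max_bytes, "bytes =", max_bytes / 10**12, "TB")  # just under 2.2 TB
```

GPT uses 64-bit LBA, which is why a GPT-bootable controller sidesteps the limit entirely.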

Operating system RAID < chipset RAID < PCIe AIB RAID

The only problem with chipset RAID is that you can only transfer the RAID from like (e.g. Intel) to like (Intel).  If you want to move it from Intel to AMD, you have to start from scratch.


----------



## hat (May 24, 2018)

It seems I messed up my OP with a simple typo. I mentioned I had a 128GB SSD for the OS, and a 500GB SSD. If that were the case... this thread wouldn't exist, lol. The 500GB is a HDD, _not_ an SSD. I _did_ have a 512GB SSD at one time, but it's now in another system.

I considered 3 1TB drives in RAID 5 initially because they're cheap, cheaper even than a single 1TB SSD. Then I went and mentioned RAID 1 for some reason, which is nowhere near what I need for this. I'm talking about mostly game installs and other data I wouldn't be too upset to lose. I don't need redundancy beyond the convenience factor: if/when a drive does die, I wouldn't have to reinstall games. Though I have in the past collected cool stuff that I can no longer find, mostly Quake-related material... remembering that, I'd think RAID 5 would be the way to go. Losing stuff just sucks.

About types of raid, I found this @FordGT90Concept 
https://skrypuch.com/raid/

This basically says onboard RAID ("fake RAID") is horrible, so if you're going to do that, you may as well use software RAID. Hardware RAID can be really bad too, if the controller fails...


----------



## FordGT90Concept (May 24, 2018)

I wouldn't trust that website because it doesn't even know what the difference is:  "Fake RAID" is a host bus adapter (HBA), whereas "Hardware RAID" is a host bus controller (HBC).  I also completely disagree with the article, because HBAs live below the software stack, so the RAID is completely invisible to software that doesn't directly interface with the drivers.  This isolates the RAID from all software environments, and that's a huge plus, even if there isn't extra hardware facilitating the RAID.

Let's go point by point:
> Use RAID 0 or 1.
CPUs these days have gobs of power to perform parity checks.  It's perfectly fine to run RAID5 on a modern CPU.

>Can't nest RAID arrays.
>Create RAID arrays using whole disks only.
Gee, I wonder why: Redundant Array of Inexpensive Disks.  This is exactly the point I was making above: software RAID isn't implemented below the software stack.  It's just file system trickery to spread the data out over multiple drives.  Legit RAID doesn't do that, because the entire point of it is to prevent data loss from mechanical failure.  What the author is advocating here is complicating RAID for no apparent reason.

>Pray your motherboard never dies or keep several identical ones on hand.
False.  Like I said, AMD can detect AMD and Intel can detect Intel.  My RAID1 migrated from a dual socket 771 board to a single socket 1151.  I was amused because I didn't even intend to keep it.  It was actually fairly difficult to purge the RAID1 completely from both drives (think I ended up erasing them).  It's also hilarious that the author brings this up because software RAID locks you into the specific operating system that built it.

>Can't have hot spares and can't support hot swappable drives.
True on the first point but no one looking at a four drive RAID should care.  I did RAID5+1 hot spare for a decade.  The volume was starting to get full and I thought to myself: "why am I powering a drive that's doing utterly nothing on the 0.00000000001% chance that two drives will die at the same time?"  Also, I have all that data backed up on an external so should that 0.00000000001% come to pass, I'd just shrug and order two replacement drives instead of one.  So I did the sane thing and put that drive into the RAID and increased the capacity of the RAID by 50%.  Seriously, if you aren't talking 8+ drives, it's not even worth considering.  Even then, I seriously wouldn't promote the idea unless you have 16+ drives in the RAID.  In which case, you wouldn't be using RAID 0, 1, 5, nor 10 anyway so this point is entirely moot.

Virtually all newer motherboards support hot plugging.  You just have to tell the board which ports to enable it on.  Not that I would hot swap in the first place.  I've been bitten by enough fans to know that's a terrible idea.


Like I said: Operating system RAID < chipset RAID (HBA) < PCIe AIB RAID (HBC)

There are a lot of software RAID advocates out there, and I really don't understand it. Software RAID has more overhead than HBA RAID, and a lot of the arguments for it are nonsensical.  Let me go point by point again:


> In Linux, you can create RAID devices using any regular block device (including whole drives, partitions, regular files, other RAID devices, etc) with mdadm. You can mix and match RAID levels using RAID 0, 1, 4, 5, 6, 10 and linear (linear not really being RAID per se, but it's handled by the same framework). You can also arbitrarily nest RAID devices, so you can create a RAID 0 of RAID 6s of RAID 1s if that's what floats your boat. You can also physically rip the drives out of one machine, plug them into another, and your RAID array(s) will continue working as before, with no twiddling needed.


Only in Linux.  If the drives are plugged into controllers different from the ones in the machine they came from, and Linux doesn't support those controllers, it's boned.



> mdadm of course supports all of the features you'd expect like hot spares, hot swappable drives (hardware permitting), but it also has several other useful features. Of particular note is that you can grow a RAID 5 array completely online (it calls this feature reshaping). That is, take an n drive array with n-1 capacity, add an additional drive and (completely online) end up with an n+1 drive array with n capacity. Furthermore, you can add in as many drives as you'd like and compose them into the same array, hanging them off the ports on the motherboard, ports on an expansion card, external drives, drives on the network...


"Hardware permitting."  Yeah, those are hardware features.  Implementing them in software is pretty easy but, you see, if the hardware doesn't support it, it's not supported, period.  Software can never supersede hardware, so why is software so fantastic?

Remember how I was talking about integrating the hot spare on my RocketRAID above?  That was done entirely by the RocketRAID card, and it continued to work whether or not the operating system was loaded.  Hell, that RocketRAID DID integrate a hot spare into the RAID while I was sleeping one time.  You know what's fantastic about RAID controllers? They'll even do it while the computer is sitting at the BIOS screen, no operating system loaded at all.  Oh, and 0% CPU load rebuilding the array too, though it was noticeably slow (because that's just the way it is with RAID5).

Yes, enterprise can use software RAID across multiple systems and cards but that's the only use-case where software RAID makes sense: after the hardware itself is redundant.  As I said before, I see no sense in software RAID until you exceed 16 drives.



> Well, that all sounds great, but what about performance? The good news is that performance of Software RAID is generally on par with Hardware RAID and almost always (significantly) better than Fake RAID. You might not have noticed, but in the last decade or two, CPUs have become very fast, greatly outpacing hard drive speed. Even with a full RAID 5 resync in progress with many fast drives, you're unlikely to see more than 25% CPU usage, and that's just on a single core, these days you probably have at least 4 cores. RAID levels that don't involve parity (0, 1, 10, linear) incur essentially no CPU load.


That's freakin hilarious, because the bottleneck is the drives themselves.  Software RAID is never on par with hardware RAID, because with hardware RAID the parity work doesn't even hit the system bus.  "Fake RAID" is also faster than software RAID because it's happening at the hardware level instead of in software.  When software tries to do something, it has to go through hundreds of lines of instruction before it reaches the data.  For example, when a read/write is requested, software first has to interpret its own instructions, then send the request to the kernel, which interprets it through its file system, which invokes the driver, and finally the read/write operations commence.  Every step along the way, checks are made to ensure the requests are valid.  File systems often have some error correction code too. None of that happens with "fake RAID" or "hardware RAID."  The operating system sees only a single drive, and the RAID accepts those commands and does what it needs to do regardless of how many drives are under it.  Software RAID is fundamentally inefficient.


You can tell an HBA from an HBC because HBCs have dedicated RAM.


----------



## hat (May 24, 2018)

How often would RAID 5, or any other form of RAID, be doing these parity checks that consume CPU power? It seems to me it would happen mostly when writing, which in my case wouldn't happen all too often.


----------



## FordGT90Concept (May 24, 2018)

Every single read and write does parity checks.  If the sector size is 512 bytes, writing 1,000,000 bytes would result in no less than 1954 parity calculations.  When reading, it recalculates parity and checks it against the stored checksum to determine if the data integrity is good.  If not, it uses the information to determine which drive oopsie whoopsied (it logs and fixes it).
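To make the arithmetic above concrete, here is a minimal Python sketch of the XOR parity used by RAID 5, modelling a hypothetical 3-drive stripe (illustrative only; a real controller does this in firmware or silicon):

```python
import math

SECTOR = 512  # bytes per sector, as in the post above

def xor_sectors(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length sectors; this is the RAID 5 parity operation."""
    return bytes(x ^ y for x, y in zip(a, b))

# Write path: a stripe of two data sectors yields one parity sector.
data0 = bytes([0xAA]) * SECTOR
data1 = bytes([0x5C]) * SECTOR
parity = xor_sectors(data0, data1)

# Read path: recompute parity and compare against the stored sector.
assert xor_sectors(data0, data1) == parity

# Recovery: if the drive holding data1 dies, XOR the survivors to rebuild it.
rebuilt = xor_sectors(data0, parity)
assert rebuilt == data1

# The post's arithmetic: 1,000,000 bytes across 512-byte sectors, rounded up.
sectors = math.ceil(1_000_000 / SECTOR)
print(sectors)  # 1954
```

The recovery step works because XOR is its own inverse: parity XOR any one surviving sector reproduces the missing one.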


----------



## Aquinus (May 24, 2018)

Mindweaver said:


> I would never use RAID 5 in a software array.


If this were the mid-2000s, I would agree with you, but with modern hardware there isn't really a good reason to use hardware RAID unless you're building a server and you just want to make it rock solid. Even with full-on software RAID 5 in mdadm, I was able to get enough performance to saturate 1Gbps Ethernet, which is good enough for me when it comes to archival and streaming video.


FordGT90Concept said:


> Any kind of RAID is fine for games. All games have systems to handle slow loading. RAID5 will have faster read performance than a single drive but with a slight latency penalty.


Ehhh, not as much as you would think. Check out those benchmarks I ran; access latency on the RAID-5 versus a single disk in the array was actually almost the same.


FordGT90Concept said:


> The only problem with chipset RAID is that you can only transfer the RAID from like (e.g. Intel) to like (Intel). If you want to move it from Intel to AMD, you have to start from scratch.


By the same token, though, if I took my setup and dropped it into another board that supports RSTe, it would be detected out of the box. The same will likely happen if you stick with AMD when starting with it, and the same deal with HighPoint or LSI. It's not really a problem, just something to be aware of when going with RAID. This might actually even be an argument for software RAID: as long as the RAID is supported in the OS and you can boot, it will be detected, since it isn't hardware dependent. Just a thought.


FordGT90Concept said:


> It's just file system trickery to spread the data out over multiple drives.


I don't agree with that statement. Software RAID is still acting on the device as if it were a RAID controller; the difference is that commands that would typically be issued to a controller are handled by the driver instead (which is really what FakeRAID in setups like RSTe does to a very large extent). The driver knows the topology of the RAID, so when it issues a read or write it, like a regular hardware controller, figures out which disk the data resides on and then accesses it. Software versus hardware doesn't change that; it's just a matter of what's handling it, the driver or dedicated RAID hardware. Nothing stops you (in Linux) from using a software RAID device as a block device with no file system and just writing bytes directly to the array. File systems have nothing to do with software RAID implementations.


FordGT90Concept said:


> Only in Linux. If the drives are plugged into different controllers than the machine they came from and Linux doesn't support those controllers, it's boned.


With software RAID, if Linux doesn't support the controller, you won't see the disks at all. If the controller is supported and you can at least see the disks, mdadm will work fine, as all it needs is the ability to interact with disks like anything else in the OS does. If you can read and write to a disk, it's fair game for software RAID.  I'm not saying software RAID is the best option; I'm saying it's not a bad option when you have no other good options, or if your performance needs are met by something like mdadm (say, streaming video or archiving).



FordGT90Concept said:


> "hardware permitting"  Yeah, those are hardware features. Implementing them in software is pretty easy but, you see, if the hardware doesn't support it, it's not supported period. Software can never supercede hardware so why is software so fantastic?


This doesn't need to be a pissing contest. There is a time and a place for mdadm, and that isn't all the time. The benefits of software RAID are flexibility and a low barrier to entry, because you don't really need any special hardware to start using it. Performance isn't better but *it's good enough a lot of the time*, which for some people is okay, because benchmarks aren't everything.


FordGT90Concept said:


> "Fake RAID" is also faster than software RAID because it's happening at the hardware level instead of software.


No it's not. That's flat out incorrect. It's handled by the *driver*, which is kernel-space code, you know, a lot like software RAID. The only difference is that things like figuring out disk topology are determined by the controller, but actually reading and writing to the array is completely dependent on the driver.


FordGT90Concept said:


> When software tries to do something, it has to go through hundreds of lines of instruction before it reaches the data. For example, software has to interpret its own instructions a read/write is requested, then it has to send it to the kernel, which interprets it through its file system, which invokes the driver, and finally read/write operations commence.





FordGT90Concept said:


> Every single read and write does parity checks.  If the sector size is 512 bytes, writing a 1,000,000 bytes would result in no less than 1954 parity calculations.  When reading, it recalculates parity and checks it against the stored checksum to determine if the data integrity is good.  If not, it uses the information to determine which drive oopsie whoopsied (it logs and fixes it).


I'm not sure that's an entirely accurate statement for all RAID implementations. I don't think most controllers are going to check parity unless there is a reason to, whether that's because they're doing a validation run or because they hit a block with a read issue. I don't think parity is calculated on every read; otherwise, read performance would be a lot worse and far more similar to write speeds, which it isn't.


----------



## FordGT90Concept (May 24, 2018)

Aquinus said:


> No it's not. That's a flat out incorrect. It's handled by the *driver* which is kernel space code handling it, you know, a lot like software RAID. The only difference is that things for figuring out disk topology are determined by the controller but, actually reading and writing to the array is completely dependent on the driver.


It's technically firmware RAID, not driver RAID.  Features of the RAID can be enabled and disabled from within the firmware itself; however, more complex tasks like rebuilding the RAID only occur while the driver is loaded.




Aquinus said:


> I'm not sure that's an entirely accurate statement for all RAID implementations. I don't think most controllers are going to check parity unless there is a reason to, whether that's because it's doing a validation run or because it hit a block where there was an issue reading, I don't think it calculates parity on every read, otherwise performance would be a lot worse and far more similar to write speeds, which they're not.


All drives perform the read operation simultaneously, so in a minimum RAID5: disk0 reads one sector that contains data, disk1 reads one sector that also contains data, and disk2 reads one sector that contains parity data.  Parity is recalculated from disk0 and disk1 and checked against the stored parity from disk2.  If it matches, the data is made available to software; if not, the error is fixed and then the data is made available.  Like I said, it adds a little latency (<1ms), but since you're getting double the data per read, it ends up being a lot more throughput.
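The read path described here can be sketched in Python. This is a toy model of one stripe read on a 3-disk RAID 5, not any controller's actual firmware:

```python
SECTOR = 512

def read_stripe(disk0: bytes, disk1: bytes, stored_parity: bytes) -> bytes:
    """Model of one stripe read: all three sectors arrive together."""
    # Recompute parity from the two data sectors and compare it against
    # the parity sector read from the third disk.
    recomputed = bytes(a ^ b for a, b in zip(disk0, disk1))
    if recomputed != stored_parity:
        raise IOError("parity mismatch: a disk returned bad data")
    # Two sectors of user data come back per stripe read: roughly double
    # a single disk's throughput for one parity check's worth of latency.
    return disk0 + disk1

d0 = b"\x11" * SECTOR
d1 = b"\x22" * SECTOR
p = bytes(a ^ b for a, b in zip(d0, d1))
assert read_stripe(d0, d1, p) == d0 + d1
```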


----------



## newtekie1 (May 24, 2018)

First, I'm going to address Software RAID vs. Firmware RAID vs. Hardware RAID.

Software RAID is RAID handled entirely by the OS.  This type has the highest overhead, and it is also the most unreliable in my experience, though the reliability can vary greatly depending on the OS itself.  For example, Windows software RAID is pretty horrible, but FreeNAS software RAID is decent (but then again, FreeNAS was built from the ground up with software RAID in mind).  My advice is: don't use software RAID if you are running Windows.

Firmware RAID is RAID that is configured by the controller, but most of the heavy-lifting calculations are handled by the computer's CPU.  Motherboard onboard RAID falls into this category, as do most inexpensive "hardware" RAID cards, such as most Highpoint cards.  This isn't a dig at Highpoint; I use them, and they are really good RAID cards for the money.  If you are looking to get into RAID, IMO, picking up an inexpensive Highpoint card is a good idea.  But heck, even using the onboard RAID works too.  There is still system overhead, but with modern CPUs it isn't noticeable.  Most of these controllers use a single-threaded calculation, so at most they will load a single core, and really only on writes; reads don't cause any meaningful load.  With a modern 4+ core processor, this isn't a big deal for a home system.

Hardware RAID is RAID that is handled entirely by the controller.  All the calculations are done by the controller, so there is very little system overhead.  This is important in a server environment where there is a heavy load on the server constantly, but in a home environment it usually isn't.  Obviously, hardware RAID controllers can be expensive, but we are seeing very good deals on used ones come up on eBay as data centers replace their older cards with new ones.


Second, I'd like to address the statement that games have a mechanism to deal with slow hard drives.  Yes, this is true to an extent.  However, a lot of games end up with issues from a slow hard drive, particularly large open-world games.  I've noticed in Far Cry 5 in particular that a slow hard drive causes stutter, and I've even noticed it in GTA:V as well.  It finally prompted me to replace my HDD from 2013 with a better one, as well as adding an SSD cache to it.  The stuttering is now gone.

Finally, what to do in the OP's situation.  I'm going to go back to my original suggestion and say a single hard drive with an SSD cache.  You can get a decent 3TB hard drive these days for under $50.  Pair it with an inexpensive SSD (or use part of the one already in the system) using PrimoCache, and you've got a great setup for running games.  Even if you throw in a dedicated 240GB SSD for the cache, with the $30 for PrimoCache you're still looking at under $150.
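As a quick sanity check on the figures in this suggestion (the $50 HDD and $30 PrimoCache prices come from the post; the 240GB SSD price is my own ballpark assumption, circa 2018):

```python
# Rough cost check for the suggested HDD + SSD-cache setup.
hdd_3tb = 50      # decent 3TB hard drive (price from the post)
ssd_240gb = 65    # dedicated 240GB cache SSD (assumed ballpark, not from the post)
primocache = 30   # PrimoCache license (price from the post)

total = hdd_3tb + ssd_240gb + primocache
print(total)  # 145, i.e. under the $150 the post mentions
```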


----------



## CrAsHnBuRnXp (May 24, 2018)

I have a RAID guide stickied on this forum. If you want to check it out, link is in my sig.


----------



## Mindweaver (May 24, 2018)

@Aquinus - True, it has gotten better, but for speed without the need for redundancy, I would only use RAID 0. If I needed redundancy and speed on a software RAID array, I would spend the extra cash on a RAID 10 array. Most average users make the mistake of thinking RAID 0 arrays are just 2x drives, and that going with a RAID 5 array using 3x drives would be quicker... which is wrong (_not saying you fall into this category_). RAID 0 can handle more than 2x drives, but it can be a double-edged sword, because adding too many drives can have a negative impact on your system with software RAID. Around 2005-2007 I used a software RAID 0 array with 3x WD RE4 250GB drives, which gave me 750GB of storage at around 170-200 MB/s read/write.

Also, users wanting to use RAID need to know the importance of labeling your RAID array.


----------



## FordGT90Concept (May 24, 2018)

RAID10 is wasteful, with only 50% of the total drive capacity available.  RAID5 gives n-1 capacity.  If we were talking 3-drive RAID5 versus 2-drive RAID0, the read performance would be about the same, but the write performance of RAID5 is much worse.

Basic overview of common levels: https://www.datarecovery.net/articles/raid-level-comparison.aspx

I would never recommend RAID0 with more than two drives.  The risk of data loss keeps going up and up.
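The capacity trade-offs being argued over in this thread are easy to tabulate. A quick sketch, with hypothetical helper names, for n identical drives of c TB each:

```python
# Usable capacity for common RAID levels, n identical drives of c TB each.
# Function names are illustrative, not from any library.
def raid0(n, c):  return n * c          # striping, no redundancy
def raid1(n, c):  return c              # n-way mirror: one drive's capacity
def raid5(n, c):  return (n - 1) * c    # one drive's worth lost to parity
def raid10(n, c): return n * c / 2      # striped mirrors: 50% usable (n even)

# The options discussed in this thread:
print(raid5(3, 1))   # 3x 1TB in RAID 5  -> 2 TB usable, survives one failure
print(raid1(2, 2))   # 2x 2TB in RAID 1  -> 2 TB usable, survives one failure
print(raid10(4, 1))  # 4x 1TB in RAID 10 -> 2.0 TB usable
```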



Mindweaver said:


> Around 2005-2007 I used a RAID 0 software array with 3x WD RE4 250gb which gave me 750gb storage with around 170-200 write / read.


Here's my 11 year old RAID5:



Heh, I thought it felt painfully slow.  Well, there's the proof.  It needs upgrading but, meh, drives are still good.


----------



## Mindweaver (May 24, 2018)

FordGT90Concept said:


> RAID10 is wasteful only having 50% of the drive total capacity available.  RAID5 is n-1 capacity.  If we were talking 3 drive RAID5 versus 2 drive RAID0, the read performance would be about the same but write performance of RAID5 is much worse.
> 
> Basic overview of common levels: https://www.datarecovery.net/articles/raid-level-comparison.aspx
> 
> ...


Yea, I never said RAID10 would be cheap.  Also, I had to retire my personal RAID 0 game drive array for the same reason.. it worked but was showing its age.. lol


----------



## urbuntus (Jun 21, 2018)

cucker tarlson said:


> Get a HDD and cache it with an SSD.


So TRUE!


----------



## Deleted member 178884 (Jun 21, 2018)

urbuntus said:


> Get a HDD and cache it with an SSD.
> So TRUE!


Optane will do even better.


----------



## cucker tarlson (Jun 21, 2018)

I still don't understand why people wanna cache HDDs. I wanna have control over what is on the SSD and what isn't. I don't want the system to manage it; eventually it'll waste space on stuff I don't want on the SSD, and the stuff I do want there will get pushed out.
People worry about GB/$ too much. If I had to choose between a 500GB SSD and a 5TB HDD for games, I'd take the SSD all day. It's not just the speed and quietness, it's also this:



> In terms of endurance, TechReport reveals that the majority of consumer-quality SSDs tend to be able to endure more than 700TB of reading & writing, with a few others surviving up to an exceptional 2.5 petabytes. They also found that TLC-type SSDs generally had less endurance than their MLC counterparts.
> 
> 
> 
> ...



And you wanna tell me you're thinking about having three of those?

I have 3TB+1TB HDDs myself, but they mostly just serve for data I write then delete or upload, and they're so cheap per GB that I don't care if they die in the middle of this sentence.


----------



## kastriot (Jun 21, 2018)

RAID 5 is outdated; RAID 6 is what you need: no speed degradation, but higher cost. Though I would use RAID 10 for this: capacity, speed, and redundancy.


----------

