# Best stripe and cluster size for RAID 0?



## Deleted member 24505 (Sep 30, 2006)

I had my two SATA II 80 GB drives in RAID 0, but I had to re-install. I'm using them both separately now, but it is way too slow.

If I set up RAID 0 again, what are the best stripe and cluster sizes? As you can see from the HD Tach thread, it was pretty fast before, and now it is really slow at game loading, so I am going to RAID them up again.


----------



## Alec§taar (Sep 30, 2006)

tigger69 said:


> I had my two SATA II 80 GB drives in RAID 0, but I had to re-install. I'm using them both separately now, but it is way too slow.
> 
> If I set up RAID 0 again, what are the best stripe and cluster sizes? As you can see from the HD Tach thread, it was pretty fast before, and now it is really slow at game loading, so I am going to RAID them up again.



I'd like some feedback on this too... mainly from the hardware side and the RAID controller/firmware perspective.

I *THINK* I went with a 16k stripe size, but I'm not sure!

(I'd have to check, and should have yesterday, since I updated the controller's firmware/BIOS from 2.9.0.10 to 2.9.0.22, plus its driver, all from Promise)...

E.g., typically I format my disks to use 4096-byte (4 kB) clusters, because this is the size at which the memory manager of the OS pages program data in and out of memory, so I figured: "makes sense, keep it consistent with that"!

There was a reason for that, sort of, but I am NOT sure it applies at this level.

(Especially for paging - it already has the advantage of being "raw written", AFAIK, which is faster than normal filesystem access, and of being driven by a Ring 0/RPL 0 device driver, faster than what usermode/Ring 3/RPL 3 apps get.)
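As a quick sanity check on the 4 kB figure above, the memory-manager page size can be read directly from Python's standard library (nothing RAID-specific here; on typical x86 systems it is 4096 bytes):

```python
import mmap

# Page size the OS memory manager uses when paging program data
# in and out of RAM; on most x86 systems this prints 4096 (4 kB).
print(mmap.PAGESIZE)
```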

I later found out that pagefile.sys access is performed in 16 kB sized streams!

Now, I am NOT 110% sure of this, but perhaps this is a "READ AHEAD" mechanism of a sort for it!

Again though: NOT totally sure of why this is done this way!

(It makes little sense to me that it should read/write at a different rate/size than the memory manager does when moving page frames in/out of RAM for running programs.)

I have heard tell online that a stripe size as large as 64k is the fastest performing for RAID, but I don't know... I can't just go and "nuke it" now, though; too much work and too long a tuned-up setup to destructively DUMP it now.
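For anyone trying to picture what the stripe size actually controls, here is a minimal sketch of how RAID 0 maps a logical byte offset to a member disk. The round-robin layout is the standard textbook scheme; the sizes and disk counts are illustrative, not taken from any particular controller:

```python
# Minimal sketch of RAID 0 address mapping: a logical byte offset is
# translated to (member disk, offset on that disk) via round-robin striping.

def raid0_map(offset: int, stripe_size: int, num_disks: int) -> tuple[int, int]:
    """Return (disk_index, offset_on_disk) for a logical byte offset."""
    stripe_index = offset // stripe_size   # which stripe overall
    within = offset % stripe_size          # position inside that stripe
    disk = stripe_index % num_disks        # round-robin across members
    row = stripe_index // num_disks        # full stripes already placed on each disk
    return disk, row * stripe_size + within

# With a 16 kB stripe on two disks: bytes 0..16383 land on disk 0,
# bytes 16384..32767 on disk 1, then back to disk 0, and so on.
```

With a 16k stripe, any read larger than 16 kB spans both disks; with a 128k stripe, reads up to 128 kB stay on one disk.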

SO, waiting on feedback too!

APK

P.S.=> "Number 5 is alive - NEEDS INPUT!!!", lol... apk


----------



## Deleted member 24505 (Sep 30, 2006)

Anyone who says RAID makes no difference is wrong, IMHO. I have noticed the difference without it.

And I don't know if it's true, but I heard Win XP only likes a 128k stripe. Don't know about the cluster size, though. It would be nice if you could confirm this for me/us; you may be better at finding out than me.

I may do it tonight or tomorrow morning, so if you can find out, m8, it would be very nice.


----------



## Alec§taar (Sep 30, 2006)

Well, I found this via GOOGLE:

http://www.pcguide.com/ref/hdd/perf/raid/concepts/perfStripe-c.html



BETTER (or rather, more direct than the Anandtech one I posted initially):

=======================

"The second important parameter is the stripe size of the array, sometimes also referred to by terms such as block size, chunk size, stripe length or granularity. This term refers to the size of the stripes written to each disk. RAID arrays that stripe in blocks typically allow the selection of block sizes in kiB ranging from 2 kiB to 512 kiB (or even higher) in powers of two (meaning 2 kiB, 4 kiB, 8 kiB and so on.) Byte-level striping (as in RAID 3) uses a stripe size of one byte or perhaps a small number like 512, usually not selectable by the user.

Warning: Watch out for sloppy tech writers and marketing droids who use the term "stripe width" when they really mean "stripe size". Since stripe size is a user-defined parameter that can be changed easily--and about which there is lots of argument :^)--it is far more often discussed than stripe width (which, once an array has been set up,  is really a static value unless you add hardware.) Also, watch out for people who refer to stripe size as being the combined size of all the blocks in a single stripe. Normally, an 8 kiB stripe size means that each block of each stripe on each disk is 8 kiB. Some people, however, will refer to a four-drive array as having a stripe size of 8 kiB, and mean that each drive has a 2 kiB block, with the total making up 8 kiB. This latter meaning is not commonly used.

The impact of stripe size upon performance is more difficult to quantify than the effect of stripe width: 

Decreasing Stripe Size: As stripe size is decreased, files are broken into smaller and smaller pieces. This increases the number of drives that an average file will use to hold all the blocks containing the data of that file, theoretically increasing transfer performance, but decreasing positioning performance.

Increasing Stripe Size: Increasing the stripe size of the array does the opposite of decreasing it, of course. Fewer drives are required to store files of a given size, so transfer performance decreases. However, if the controller is optimized to allow it, the requirement for fewer drives allows the drives not needed for a particular access to be used for another one, improving positioning performance.

Tip: For a graphical illustration showing how different stripe sizes work, see the discussion of RAID 0.

Obviously, there is no "optimal stripe size" for everyone; it depends on your performance needs, the types of applications you run, and in fact, even the characteristics of your drives to some extent. (That's why controller manufacturers reserve it as a user-definable value!) There are many "rules of thumb" that are thrown around to tell people how they should choose stripe size, but unfortunately they are all, at best, oversimplified. For example, some say to match the stripe size to the cluster size of FAT file system logical volumes. The theory is that by doing this you can fit an entire cluster in one stripe. Nice theory, but there's no practical way to ensure that each stripe contains exactly one cluster. Even if you could, this optimization only makes sense if you value positioning performance over transfer performance; many people do striping specifically for transfer performance."

=======================
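The decreasing/increasing tradeoff the excerpt describes can be sketched numerically. This is only an illustration of the idea (how many member disks a single contiguous file spans at a given stripe size), with made-up sizes:

```python
import math

# Smaller stripes spread one file across more member disks (better transfer
# rate, worse positioning); larger stripes touch fewer disks per file,
# leaving the others free for other accesses.

def drives_touched(file_size: int, stripe_size: int, num_disks: int) -> int:
    """How many member disks a contiguous file of file_size bytes spans."""
    stripes = math.ceil(file_size / stripe_size)
    return min(stripes, num_disks)

# A 64 kB file on a 2-disk array:
#   16 kB stripe  -> 4 stripes -> both disks busy with the transfer
#   128 kB stripe -> 1 stripe  -> only one disk involved
```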

* BUT, I'd hold off until we get some more feedback... once you commit it, AFAIK there's NO WAY to non-destructively reset it in your RAID controller or the firmware on the mobo etc. (if not a separate card).

APK

P.S.=> Gotta fly, I haven't even read it myself yet, but figured I'd put it out as "food for thought" & a reference for yourself, myself, & others... Time to fly here though; it's ballgame & beer time with pals (back to the "REAL WORLD" outside the Matrix here)...

Well, while waiting on my friends to come pick me up (who will doubtless be late for their own funerals, lol), I edited out the Anandtech link & found the better reference above; the crucial excerpt is quoted there... apk


----------



## Deleted member 24505 (Oct 1, 2006)

Well, I've done it, Alec, with a 16k stripe. If you look at my new test result on the HDD bench thread, it's 333 MB/sec burst - even quicker than before. Don't know why! I had a 128k stripe before.


----------



## Alec§taar (Oct 1, 2006)

tigger69 said:


> Well, I've done it, Alec, with a 16k stripe. If you look at my new test result on the HDD bench thread, it's 333 MB/sec burst - even quicker than before. Don't know why! I had a 128k stripe before.



Yes, I noticed that in your scores on HD Tach 3.0 benchmark here:

http://forums.techpowerup.com/showthread.php?p=160467#post160467

Whatever the case may be, per the quote excerpt above (about there being no single overall "perfect/best" stripe size, because it depends on the type and size of the data you'll mostly be working with and all that)?

You've apparently found a BETTER size for your needs & all that!

APK


----------

