# RAID transfer SLOOOW



## taz420nj (Jul 9, 2017)

Ok, so I got my new array set up on Plywood, and I started transferring media over from the old one - but the transfer rate is horrendous.  For the first few minutes it was screaming along at 350MB/s, basically maxing out the SATA2 interfaces on the old RAID card; then it dropped down to 45MB/s.  Now I know RAID 5/6 have a parity penalty, but the H700 is a decent card and it shouldn't drop that low when the buffer fills up (and the drop occurred after dozens of GB had been copied; the card has a 512MB cache).

I read about stripe size and cluster size causing performance issues where large files are concerned, so I have tried several different combinations (including ReFS), and now ALL transfers are topping out at 45-50MB/s - it doesn't even start fast and slow down anymore.  WTF???

HD Tune Pro shows benchmark performance on the H700 to be over 800MB/s read and write and on the LSI to be roughly 300MB/s read and write on an 8GB file (the average size of my media).  I'll post screenshots in a bit.

Here's the setup..

SuperMicro X7DBE motherboard
2x Xeon E5430 Quad Core
24GB RAM

Original Array:
3Ware/LSI 9550-8 PCI-X (SATA2) w/BBU (in a 133MHz PCI-X slot)
4x Hitachi Deskstar SATA3 2TB 5900 RPM

New Array:
Dell PERC H700 (SAS2/SATA3) Integrated PCIe x8 (in a x8 PCIe 2.0 slot)
512MB Cache, BBU installed, Write-Back enabled (Dirty Cache light comes on during transfers so cache is functioning properly)
8x Hitachi Ultrastar SATA3 2TB 7200RPM
Connected via 2x SFF-8087 SAS to 4xSATA3 fanout cables

Any ideas?

What stripe size/cluster size/file system should I be using?  The video files are large, but each one has associated screenshots/posters/XML files for library purposes, and then there's my music library and programs (mostly ISOs of install media).
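For reference, here's a quick back-of-envelope sketch of the sequential throughput this array should manage in the best case; the ~150 MB/s per-drive figure is an assumption (roughly typical for a 7200 RPM 2TB drive), not a measured number:

```python
def raid6_seq_write_estimate(drives: int, per_drive_mbs: float) -> float:
    """Best-case sequential write for RAID 6: data is striped across
    (drives - 2) disks; two drives' worth of bandwidth goes to parity."""
    return (drives - 2) * per_drive_mbs

# 8-drive RAID 6, assuming ~150 MB/s sustained per drive:
print(raid6_seq_write_estimate(8, 150.0))  # → 900.0
```

Even with a generous allowance for controller overhead, a healthy 8-drive RAID 6 should land in the hundreds of MB/s for large sequential writes, so a sustained 45MB/s points at something other than raw disk speed.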


----------



## blobster21 (Jul 9, 2017)

If the "_PCIE Max Payload Size_" setting is available in your motherboard's BIOS, how is it configured right now ? 128B ? 256B ?

i have had some issues with my adaptec RAID adapters in the past with anything but the default value (128B)


----------



## taz420nj (Jul 9, 2017)

I'll have to check but I do remember a PCIe setting with 128 and 256B as the options..  "Coalesce"?  It was set at 128B.

Edit:  Yeah PCI-E I/O performance is set to Coalesce/128B


----------



## blobster21 (Jul 9, 2017)

As for your question about stripe size, i'm going with the default value of 256K on both RAID0 and RAID5EE.

The cluster size is a different matter, as my overlying file system is ext4, so my setting probably doesn't apply to you, though i am basically storing/accessing the same kind of data as you, size-wise (you are on NTFS, aren't you ?)


----------



## taz420nj (Jul 9, 2017)

The default stripe/chunk on the H700 is actually 64K..  It was originally at the default (and NTFS formatted at the default cluster size, which I believe is 4K) when I was getting the fast-then-slowdown behavior..  I have tried 32K, 128K, and 256K..

And yes it is a Windows server, so I have NTFS and ReFS available.
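A side note on the stripe sizes being tried above: for parity RAID the number that matters for big sequential writes is the full-stripe size (chunk size times data drives), since an aligned write that covers a whole stripe lets the controller compute parity without a read-modify-write cycle. A small sketch using the drive counts from this thread:

```python
def full_stripe_kib(chunk_kib: int, drives: int, parity_drives: int) -> int:
    """Full-stripe size in KiB: an aligned write of this size avoids the
    read-modify-write penalty on parity RAID."""
    return chunk_kib * (drives - parity_drives)

# 8-drive RAID 6 (2 parity drives) at the H700's default 64K chunk, then 256K:
print(full_stripe_kib(64, 8, 2))   # → 384
print(full_stripe_kib(256, 8, 2))  # → 1536
```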


----------



## blobster21 (Jul 9, 2017)

so both cards bench pretty well when they are tested separately, and then the speed is stuck around 50MB/s when it comes to moving data from one array to the other....

Could you plug a spare SSD into an empty motherboard SATA port, and transfer some data from/to the largest array ? At this point it's worth verifying whether the problem occurs only when both arrays are involved...

Edit : just another guess : there could be some bottleneck because both RAID adapters share the same resources, you know how the slots sometimes have their lanes closely intertwined...do you have some spare slots to move the cards apart from each other ?


----------



## newtekie1 (Jul 9, 2017)

I almost wonder if the Windows write cache isn't responsible for the high transfer speeds at first, then the slow down after several GB has been "written".  If I'm not mistaken, when you format a volume with ReFS, it automatically turns off the Windows Write caching because it is supposed to be reliable.  That is why your transfers are always slow when you use ReFS, but fast at first when you use NTFS.

I bet if you formatted it to NTFS, but turned off the Windows Write caching for that volume, it would also always be slow.


----------



## eidairaman1 (Jul 9, 2017)

@cdawall


----------



## cdawall (Jul 9, 2017)

Is the card/chipset over-temping? That'll do it as well especially on the older ones.


----------



## Aquinus (Jul 10, 2017)

taz420nj said:


> And yes it is a Windows server


Write buffering. Thanks, Windows. What you're seeing is the read speed from the drives you're copying off of until the buffer is filled (which could be as much as 4GB.) Then it drops down to what you're seeing for write speed at the destination because you're no longer writing directly to memory and are then bottlenecked by writing to the target.

Stress test the new array first before putting it to use.


----------



## taz420nj (Jul 10, 2017)

blobster21 said:


> so both cards bench pretty well when they are tested separately, and then the speed is stuck around 50MB/s when it comes to moving data from one array to the other....
> 
> Could you plug a spare SSD into an empty motherboard SATA port, and transfer some data from/to the largest array ? At this point it's worth verifying whether the problem occurs only when both arrays are involved...
> 
> Edit : just another guess : there could be some bottleneck because both RAID adapters share the same resources, you know how the slots sometimes have their lanes closely intertwined...do you have some spare slots to move the cards apart from each other ?



I don't have a spare SSD, just the one in my laptop, and I don't feel like pulling it apart right now.. I tried copying from a 1TB hard drive and it maxed out at 70MB/s..  Still way slower than it should be.

It can't be shared resources; one card is PCI-X and the other is PCIe.



newtekie1 said:


> I almost wonder if the Windows write cache isn't responsible for the high transfer speeds at first, then the slow down after several GB has been "written".  If I'm not mistaken, when you format a volume with ReFS, it automatically turns off the Windows Write caching because it is supposed to be reliable.  That is why your transfers are always slow when you use ReFS, but fast at first when you use NTFS.
> 
> I bet if you formatted it to NTFS, but turned off the Windows Write caching for that volume, it would also always be slow.



I only tried ReFS last..  It was topping out at 45MB/s with every combination of factors after the original.  Even after going back to the original configuration (64K stripe, NTFS default cluster size), it will not go back to the "fast burst" that I was getting before.



cdawall said:


> Is the card/chipset over-temping? That'll do it as well especially on the older ones.



I doubt it, there's a fan blowing right past it.



Aquinus said:


> Write buffering. Thanks, Windows. What you're seeing is the read speed from the drives you're copying off of until the buffer is filled (which could be as much as 4GB.) Then it drops down to what you're seeing for write speed at the destination because you're no longer writing directly to memory and are then bottlenecked by writing to the target.
> 
> Stress test the new array first before putting it to use.



If this array were on another really old slow card like the 9550, I might buy that.  But I don't believe for a second that the H700 is only capable of calculating parity at 45MB/s.  It's an enterprise card that's only one generation old.


----------



## Aquinus (Jul 10, 2017)

taz420nj said:


> If this array were on another really old slow card like the 9550, I might buy that.  But I don't believe for a second that the H700 is only capable of calculating parity at 45MB/s.  It's an enterprise card that's only one generation old.


In RAID, the entire array is only as fast as the weakest link. It only takes one drive to slow the entire thing down with any RAID level. If the slow drive gets kicked out and the array becomes degraded, it's not unrealistic for the degraded array to be faster than before the bad disk was kicked out, but until then, the slowest drive will hinder the array.

Edit: Could you get a dump of the SMART stats for all of the drives in the 8-disk array? Also, is that RAID 5 or 6?

Edit 2: GSmartControl can read through most RAID controllers. I would suggest using this for either Windows or Linux (if you're in a GUI environment.)


----------



## taz420nj (Jul 10, 2017)

None of the drives are reporting any issues.  If there were a problem it would benchmark around the same rate, but it's going at full expected speed.

The new array is RAID 6.

Here are screenshots...  In the last one it started at like 100, then dropped almost instantly to 30, then it settled in right around 45-46...


----------



## Toothless (Jul 10, 2017)

@FordGT90Concept


----------



## blobster21 (Jul 10, 2017)

i have seen this behaviour with regular non-RAID disk-to-disk copies, and also through Samba shares.

The set of icons in the quick launch toolbar makes me think you are running 2012 (R2 ?), and if my memory serves, i was on 2012R2 when this occurred.

If you want to rule out the influence of write caching during your file copy, you could try this advice :



> The server most likely has a high System cache usage; monitor the "system cache resident bytes" counter, it's located under the "Memory" object in perfmon. As a workaround you can use Xcopy with the /J parameter to use un-buffered IO
> 
> Try Xcopy /J -- it uses un-buffered IO and should not have an overhead on the system cache
> 
> ...


----------



## taz420nj (Jul 10, 2017)

blobster21 said:


> i have seen this behaviour with regular non-RAID disk-to-disk copies, and also through Samba shares.
> 
> The set of icons in the quick launch toolbar makes me think you are running 2012 (R2 ?), and if my memory serves, i was on 2012R2 when this occurred.
> 
> If you want to rule out the influence of write caching during your file copy, you could try this advice :


Yeah it's R2.. I'll give that a try when I get home from work.

Sorry for the delay, off for the weekend so I have some time to mess with this..

I am seeing the same transfer rate using xcopy /j...


----------



## blobster21 (Jul 15, 2017)

Ouch 

Speaking of file copy performance, how fast would each RAID array be if you copied and pasted a large folder internally ? (aka from D: to D:, and then from G: to G:)

I have the feeling that the system bus is the culprit....


----------



## Steevo (Jul 15, 2017)

Did you perform a long format on the array once it was initialized, to make sure everything was set up? Also remember that RAID 5 has to calculate parity for every stripe it writes, and if the format wasn't a full one, the card may still be doing its background initialization at the same time as you're writing data, which will be horrendously slow.


----------



## taz420nj (Jul 15, 2017)

The plot thickens...  I tried a couple different folders (nothing special, all have the same file types - .mkv video file, a few .jpgs and an xml), and some do seem to transfer at full speed....  This one transferred 8GB in about 15 seconds..


----------



## Aquinus (Jul 15, 2017)

Big files are going to write very quickly, because sequential write speed is where RAID shines. If you're copying a ton of small files, it's going to slow right down, because you're doing more, smaller writes and touching the file system metadata for every file being written. As a result, the transfer rate for small files will always be lower than for a single large file. SSDs mitigate this, but it is most definitely a thing for rotational media drives. So what you're copying actually has a lot to do with how quickly it will copy.
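To put rough numbers on that, here's a tiny per-file-overhead model; the 0.05 s cost per file is a made-up illustrative figure standing in for seeks and metadata updates, not a measurement:

```python
def effective_mbs(total_mb: float, files: int, seq_mbs: float, per_file_s: float) -> float:
    """Effective throughput when every file adds a fixed overhead cost."""
    return total_mb / (total_mb / seq_mbs + files * per_file_s)

# Moving the same 4000 MB as one big file vs. 400 small files:
print(round(effective_mbs(4000, 1, 300.0, 0.05), 1))    # → 298.9
print(round(effective_mbs(4000, 400, 300.0, 0.05), 1))  # → 120.0
```

The sequential rate barely moves for one big file, but the fixed per-file cost starts to dominate once the file count climbs.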


----------



## taz420nj (Jul 15, 2017)

Steevo said:


> Did you perform a long format on the array once it was initialized, to make sure everything was set up? Also remember that RAID 5 has to calculate parity for every stripe it writes, and if the format wasn't a full one, the card may still be doing its background initialization at the same time as you're writing data, which will be horrendously slow.


No I quick formatted. Do you have any idea how long a 12TB long format would take lol?



Aquinus said:


> Big files are going to write very quickly, because sequential write speed is where RAID shines. If you're copying a ton of small files, it's going to slow right down, because you're doing more, smaller writes and touching the file system metadata for every file being written. As a result, the transfer rate for small files will always be lower than for a single large file. SSDs mitigate this, but it is most definitely a thing for rotational media drives. So what you're copying actually has a lot to do with how quickly it will copy.



There's only a few small files in each folder, maybe up to 15.  And it always seems to transfer the video file first, so that's what it's measuring.  I've also tried just transferring the actual video file without the small files.


----------



## Aquinus (Jul 15, 2017)

Aquinus said:


> Edit: Could you get a dump of the SMART stats for all of the drives in the 8-disk array?


@taz420nj I'm still standing by this one. The attributes will tell us the state of the drives or if there have been any problems, even if it's not currently having one. Something could stand out and SMART is literally the first thing you should check with respect to any kind of storage issue.


----------



## taz420nj (Jul 15, 2017)

Aquinus said:


> @taz420nj I'm still standing by this one. The attributes will tell us the state of the drives or if there have been any problems, even if it's not currently having one. Something could stand out and SMART is literally the first thing you should check with respect to any kind of storage issue.



Like I said, not showing any issues.  And yes I know about the hours on them, it's already been taken care of with the place I bought them from.

It's disks 3-10.


----------



## blobster21 (Jul 15, 2017)

After reviewing the photos of your media server, it seems possible to move the PERC H700 to the second PCI-E 8x slot; it would be interesting to test it in this configuration.

You could also do it the other way around and move the LSI 9550 to the other PCI-X 133MHz slot, since you've got one spare ?

Correct me if i'm wrong, but your LSI adapter is plugged into a PCI-X 100MHz slot, according to the user's manual, page 10 (X7DBE rev 2.0 has a slight color scheme modification for the PCI-X 100MHz slot; the one on the bottom is no longer blue but it's still 100MHz nonetheless)

Interestingly, on page 69, i read that you can also "enable a selected device as the PCI bus master", in the PCI-X/E slots pool ( _Slot1 PCI-X 100 MHz ZCR, Slot2 PCI-X 133MHz, Slot3 PCI-X 133MHz, Slot4 PCI-Exp x4, Slot5 PCI-Exp x8, and Slot6 PCI-Exp 8x_ )

Yes indeed, the disk report shows no problems on the RAID6 array nor on the LSI one.


----------



## Steevo (Jul 15, 2017)

taz420nj said:


> No I quick formatted. Do you have any idea how long a 12TB long format would take lol?
> 
> 
> 
> There's only a few small files in each folder, maybe up to 15.  And it always seems to transfer the video file first, so that's what it's measuring.  I've also tried just transferring the actual video file without the small files.




Yes, yes I do. I have done over 20TB in RAID 5. Many controllers and Windows will only initialize a small portion of a large volume to speed up setup, at the expense of performance later, unless you tell the RAID card to perform a full initialization - and on cheap RAID cards the only way to get that is a full format or a free-space wipe.


----------



## Aquinus (Jul 15, 2017)

```
- Disk: #0: WDC WD1600BEVS-00RST0 --

    Hard Disk Summary
   -------------------
    Hard Disk Number . . . . . . . . . . . . . . . . : 0
    Interface  . . . . . . . . . . . . . . . . . . . : Intel RAID #0/0 [11/0 (0)]
    Hard Disk Model ID . . . . . . . . . . . . . . . : WDC WD1600BEVS-00RST0
    Firmware Revision  . . . . . . . . . . . . . . . : 04.01G04
    Hard Disk Serial Number  . . . . . . . . . . . . : WD-WXE107092151
    Total Size . . . . . . . . . . . . . . . . . . . : 152627 MB
    Power State  . . . . . . . . . . . . . . . . . . : Active
    Logical Drive(s) . . . . . . . . . . . . . . . . : C: []
    Current Temperature  . . . . . . . . . . . . . . : 26 °C
    Power On Time  . . . . . . . . . . . . . . . . . : 1650 days, 1 hours
    Estimated Remaining Lifetime . . . . . . . . . . : 9 days
    Health . . . . . . . . . . . . . . . . . . . . . : #------------------- 9 % (Critical)
    Performance  . . . . . . . . . . . . . . . . . . : #################### 100 % (Excellent)

    There are 129 bad sectors on the disk surface. The contents of these sectors were moved to the spare area.
    Based on the number of remapping operations, the bad sectors may form continuous areas.
    Problems occurred between the communication of the disk and the host 450 times.
    In case of sudden system crash, reboot, blue-screen-of-death, inaccessible file(s)/folder(s), it is recommended to verify data and power cables, connections - and if possible try different cables to prevent further problems.
    More information: http://www.hdsentinel.com/hard_disk_case_communication_error.php
    It is recommended to examine the log of the disk regularly. All new problems found will be logged there.
      It is recommended to backup immediately to prevent data loss.
```

You say it drops down to ~40MB/s? Would it be a coincidence that the maximum transfer rate for your C: drive, which might be in crisis, matches the speed you're seeing?

```
Maximum Transfer Rate  . . . . . . . . . . . . . : 41475 KB/s
```

The winning question: Is your swap file on the C: drive? Poor swap performance can impact copy speeds.

Better question: Is your C: drive about to die?


----------



## taz420nj (Jul 15, 2017)

I have tried both PCIe slots, and yeah I realized the end slot is 100MHz.  It's currently in the third slot (tried both).



Aquinus said:


> ```
> - Disk: #0: WDC WD1600BEVS-00RST0 --
> 
> Hard Disk Summary
> ...



C: is a pair of drives in RAID1.  Yes one of the drives is growing a bad patch.  They are going to be replaced.

I may buy that, but why do some folders transfer at full speed?  Isn't the /J switch supposed to bypass the OS buffer?  And the C: drives are only SATA1, so if it's transferring to the swap first, I should never see any transfer between the two arrays greater than half that interface.


----------



## Aquinus (Jul 15, 2017)

Try disabling your swap file or moving it to the fastest array you have.

It may be the case that just enough is getting cached to hit the swap file, more than what you have for available system memory on that machine. You can use the resource monitor to look for swap activity and see if it matches when it slows down.


----------



## blobster21 (Jul 15, 2017)

It would be extremely disappointing to have that much RAM and be tied to the inferior performance of disk swapping....the performance monitoring in the earlier screenshots suggested that the system used 9-10GB of RAM max at any given time.


----------



## taz420nj (Jul 15, 2017)

Disabled the swap file, no change.. (yes I rebooted)

System is only using about 4GB..  I even checked during an Explorer copy and it didn't chew it all up like it's buffering...


----------



## Aquinus (Jul 15, 2017)

According to the report you uploaded it says:

```
-- Partition Information --

Logical Drive                           Total Space         Free Space          Free Space               Used Space
C: (Disk: #0-1)                         148.7 GB            77.1 GB              52 %                    #########-----------
D: RAID-5 (Disk: #11-12-13-14)          5587.9 GB           29.4 GB               1 % (Low)              ###################-
G: RAID-6 (Disk: #3-4-5-6-7-8-9-10)     11174.9 GB          11148.6 GB          100 %                    --------------------
I: TeraDrive (Disk: #2)                 931.5 GB            8.0 GB                1 % (Low)              ###################-
```

If the RAID-5 disk is that filled up, is it possible that fragmented data is making it hard to get good read speeds? I'm just guessing at this point. I'm not really sure what's going on but, the RAID-5 array being practically full stood out to me.


----------



## taz420nj (Jul 15, 2017)

I don't see how that could be, since stuff is written once and rarely deleted.  It may have some fragmented parts, but I can't imagine it being that bad.


----------



## OneMoar (Jul 16, 2017)

did you turn off write-cache buffer flushing in Device Manager for all the drives ?

you absolutely need to do that with RAID 5


----------



## taz420nj (Jul 16, 2017)

What about RAID 6?  The 5 array is only going to be reads.


----------



## OneMoar (Jul 16, 2017)

taz420nj said:


> What about RAID 6?  The 5 array is only going to be reads.


write-cache buffer flushing needs to be off with any RAID setup that has a hardware RAID card
the card manages buffer control; having it enabled in Windows is known to murder speeds

the exception to this is a two- or four-drive RAID 0

the reason it hurts is that it effectively makes the write buffer flush twice, because the RAID card is acting as the drive cache
on a multi-drive or high-level array it kills performance


----------



## taz420nj (Jul 16, 2017)

Okay, that did seem to make a little difference.. Still slow, but 20MB/s better..


----------



## OneMoar (Jul 16, 2017)

try turning write caching all the way off


----------



## taz420nj (Jul 16, 2017)

OneMoar said:


> try turning write caching all the way off


----------



## OneMoar (Jul 16, 2017)

probably need to change the write policy with LSI's MegaCLI tool
megacli64 should have been included with the drivers

here is the basic syntax
https://wiki.mikejung.biz/LSI#Configure_LSI_to_disable_cache_with_a_bad_BBU_with_MegaCLI


----------



## taz420nj (Jul 24, 2017)

Sigh....  I thought I had this issue beat.  I didn't change anything else at this point; I'd rather have been doing yardwork lol..  Rebooted the machine and tried copying folders one at a time..  Got full speed, over 200MB/s consistently.  So I tried doing a mass move, about 15 folders.  The first four (about 40GB, including tiny .jpg files) blazed through - hitting over 300MB/s at the second peak.  Then it hit a wall and dropped like a rock.  So seriously..  Please...  Somebody must know what in the F*&K is going on here?







edit:

It seems to be all over the place at this point..  No consistency in the speed whatsoever..






Edit again...

It almost seems like it has to do with the files themselves.  I don't mean big vs small files, because it is always during the media file itself.  I mean one movie file seems to transfer a lot faster than another..  They are all .mkv, but that shouldn't matter because bits are bits as far as copying goes, right?  Because when it got to this file, look what happened...


----------



## OneMoar (Jul 24, 2017)

fragmentation ? did you change any of the caching settings with megacli64  ?

you could also have a drive or a card not playing nice / going out

sadly at this stage I think you may be looking at an array rebuild (again)


----------



## taz420nj (Jul 24, 2017)

OneMoar said:


> fragmentation ? did you change any of the caching settings with megacli64  ?
> 
> you could also have a drive or a card not playing nice / going out
> 
> sadly at this stage I think you may be looking at an array rebuild (again)



I don't see what settings I could change with MegaCLI that would make any difference..  It's got a BBU and it is already set to Write Back.  I did that through the option rom bios when I set up the array.  It's not an SSD array so setting it to Write Through is only going to hurt performance.  Already been to that movie with the old card before I had the BBU for it.

The two RAID cards are the only cards in the machine - and they are on two completely different buses.  The old card is PCI-X and the new one is PCIe.

I have rebuilt that array about 36 times already with different settings/formatting.  What good would doing it again do?

Seriously, I'm not shitting on you - I appreciate the help - but the definition of insanity is doing the same thing over and over and expecting a different result.


----------



## Steevo (Jul 24, 2017)

Is the card getting hot?

Is the battery dead?

http://www.j0e.us/2011/10/20/hard-l...ghnMAM&usg=AFQjCNFG-h1qHV7AWa83EB8dVxfeeuIiYQ


----------



## OneMoar (Jul 24, 2017)

taz420nj said:


> I don't see what settings I could change with MegaCLI that would make any difference..  It's got a BBU and it is already set to Write Back.  I did that through the option rom bios when I set up the array.  It's not an SSD array so setting it to Write Through is only going to hurt performance.  Already been to that movie with the old card before I had the BBU for it.
> 
> The two RAID cards are the only cards in the machine - and they are on two completely different buses.  The old card is PCI-X and the new one is PCIe.
> 
> ...


did you try write-through mode ? write-back is slower than write-through in some cases when using large amounts of cache

is the board throttling the cards' PEG lanes ? is the PCI-E power management disabled ?

do you have enough PCI-E lanes to keep the cards fed ?

that board is pretty old; it might come down to not having enough PCI-E bandwidth, or it could just be one of the drives performing below par and holding things up


----------



## zwing688 (Jul 25, 2017)

I signed up after reading this thread because I am experiencing huge write speed drops with Adaptec 71605 controller under Windows10.

CPU: Intel i7-3930K ; 32GB RAM ; Adaptec 71605 (without BBU) Firmware 32106 and drivers v7.5.0.52013 ; 8 WD Black 2.5" 750GB WD7500BPKX hard disks with available space set as 750GB RAID-50 boot then 1800GB RAID-6 and 1638GB RAID-6 ; Windows 10 x64 1607 14393.1378

Under Windows7SP1 and Linux I saw no speed drops in writing operations.

With Windows10 there must be some bug involving RAID controllers that makes writing slow down.. Is the OS write cache completely broken for RAID ? And reading here, it seems that Windows Server versions are affected too.  All known Microsoft spyware/malware stuff has been disabled with DoNotSpy10

Reading files from RAID partitions is always fast .. although no more than 160MB/s in Windows10 anyway; higher in Windows7.
Writing files to RAID partitions starts fast, in the 120MB/s range, then drops down to 10MB/s or 20MB/s. It always happens after 4GB (regardless of whether it's huge files of 10GB+ or a bunch of small files totaling more than 4GB).
Using the ExtremeCopy Free utility I managed to get a sustained 70MB/s write speed on RAID partitions. It doesn't use the Windows OS write cache calls but its own.
The problem is with all the programs that use the Windows write cache and so slow down a lot. The system is way slower than under Windows7 or Linux anyway.
None of the Windows10 monthly updates (manually installed from the Microsoft Catalog) has fixed the issues so far.

I sent a report to Adaptec support about these issues. I hope they will be able to fix it with Microsoft and the other RAID controller manufacturers quickly.


----------



## blobster21 (Jul 25, 2017)

Good point, zwing688.

After having my file server on Microsoft OSes for years ( Windows 7, then Windows 2012R2 and ultimately Windows 10), i moved to Ubuntu MATE Zesty, mostly because i randomly experienced things i couldn't pinpoint, explain, or reproduce at will.

Don't get me wrong, it worked remarkably well most of the time, but on some occasions i ran into inexplicable slowdowns like the ones @taz420nj  and @zwing688 described.

So far i've had no complaints about transfer speed to/from my arrays.

It would be interesting to boot from a live Linux medium and do a couple of transfers from your old array to the new one.  Any recent distro most likely has native support for the PERC H700 and the older LSI 9550 RAID adapters.

You would probably not get the highest speed possible out of them, because both arrays have been formatted as NTFS/ReFS and the ntfs-3g implementation on Linux is not as efficient as the native NTFS support on Microsoft OSes, but you should see stable, consistent transfer speeds, which would indicate that there's something wrong with the OS itself.


----------



## taz420nj (Jul 29, 2017)

I'm thinking it probably is fragmentation of the data like @OneMoar and @Aquinus said..  How that happens when you basically copy once and rarely delete, I'm not sure, but I started a defrag and it said (even though it is set to defrag automatically - which it obviously isn't doing) that drive D was 48% fragmented.  It is now 63% done with pass 6..  It has been running for 6 solid days.  I should've just let it copy at the shit speed, it would've been done by now lol - minus the god awful thrashing it's been putting on the drives.


----------



## blobster21 (Jul 29, 2017)

> it would've been done by now lol - minus the god awful thrashing it's been putting on the drives.



i would have done the same thing, it's over only when everything has been tried at least once. But 6 days is overkill !


----------



## Aquinus (Jul 29, 2017)

taz420nj said:


> I'm thinking it probably is fragmentation of the data like @OneMoar and @Aquinus said..  How that happens when you basically copy once and rarely delete, I'm not sure, but I started a defrag and it said (even though it is set to defrag automatically - which it obviously isn't doing) that drive D was 48% fragmented.  It is now 63% done with pass 6..  It has been running for 6 solid days.  I should've just let it copy at the shit speed, it would've been done by now lol - minus the god awful thrashing it's been putting on the drives.


It may not be fragmentation, depending on *what* you're copying. I know that with sequential read/write my RAID-5 is pretty fast, but small files will kill throughput. I wouldn't be surprised if the jumps in write speed are directly related to when larger files are being copied. For example, you would likely see higher write speeds copying one 4GB file versus 40 x 100MB files, and it probably gets even worse if you go down to 400 x 10MB files, because of the nature of random reads on rotational media drives, which mimics the kind of thing you see when large files get fragmented. Technologies like NCQ try to mitigate these problems by efficiently figuring out where the read/write head needs to be for a number of buffered operations, but the reality is that HDDs suck at these kinds of workloads, and RAID tends to harm performance for random reads and writes.


----------



## taz420nj (Jul 30, 2017)

Aquinus said:


> It may not be fragmentation, depending on *what* you're copying. I know that with sequential read/write my RAID-5 is pretty fast, but small files will kill throughput. *I wouldn't be surprised if the jumps in write speed are directly related to when larger files are being copied.* For example, you would likely see higher write speeds copying one 4GB file versus 40 x 100MB files, and it probably gets even worse if you go down to 400 x 10MB files, because of the nature of random reads on rotational media drives, which mimics the kind of thing you see when large files get fragmented. Technologies like NCQ try to mitigate these problems by efficiently figuring out where the read/write head needs to be for a number of buffered operations, but the reality is that HDDs suck at these kinds of workloads, and RAID tends to harm performance for random reads and writes.



It's not. As you can see above, there were a few times when the speed jumped, but it does not in any way correlate to just "big" files.  The first set of peaks was four folders - each with a single 8-10GB MKV, 15-20 JPGs, and an XML - which transferred between 200 and 300MB/s.  Then it was 45-100MB/s shit for the next 10 folders (of the same makeup), then the last folder spiked to almost 350MB/s.  The tiny files in each folder transfer pretty much instantly, then the MKV.


----------



## taz420nj (Jul 30, 2017)

blobster21 said:


> I would have done the same thing, it's over only when everything has been tried at least once. But 6 days is overkill!



It just started a 7th pass..  Don't know how many it's going to do..  But this one started as soon as the C drive finished (only took 30 minutes or so).  This one is 6TB full though..

Edit:  I was looking around and I saw one post from someone that said they had to defrag a 6TB array and it took close to a MONTH!


----------



## OneMoar (Jul 30, 2017)

yea this is why we defrag our drives before we raid them ....

check back in a month when its done


----------



## taz420nj (Jul 30, 2017)

OneMoar said:


> yea this is why we defrag our drives before we raid them ....
> 
> check back in a month when its done



Uhrrr...  It was empty when it was RAIDed 5 years ago.  The drives were brand new.  ALL drives are empty (or become empty) when they are put into an array, because initialization destroys all existing data - and data isn't transferred in a fragmented state.  So I don't know where you were going with that..


----------



## OneMoar (Jul 30, 2017)

have you considered UnRaid ? 
I was half joking large data set is large


----------



## taz420nj (Jul 30, 2017)

OneMoar said:


> have you considered UnRaid ?
> I was half joking large data set is large


Not really.. Isn't it basically software RAID4?  I think the fact that no major storage vendor supports RAID4 at the hardware level these days says a lot about it.  And their own claims of 30-40MB/s writes unless you have an SSD cache pool are pretty pathetic too, IYAM.


----------



## heartog (Dec 15, 2018)

Was the problem fixed at the end? I have the exact issue but with windows software RAID.


----------



## eidairaman1 (Dec 15, 2018)

heartog said:


> Was the problem fixed at the end? I have the exact issue but with windows software RAID.



The thread itself is over a year old. Typically, if you have slow performance and a 5400 RPM drive, that will be part of the cause; otherwise it could be the controller card.


----------



## heartog (Dec 15, 2018)

eidairaman1 said:


> The thread itself is over a year old. Typically, if you have slow performance and a 5400 RPM drive, that will be part of the cause; otherwise it could be the controller card.


Yes, I do have 5400 RPM drives, but that wouldn't be the issue when it used to do sequential reads/writes at 220MB/s and now does 40MB/s. BTW, I said I'm using software RAID. I replied because I want to see if the OP ever found a solution.


----------



## eidairaman1 (Dec 15, 2018)

heartog said:


> Yes, I do have 5400 RPM drives, but that wouldn't be the issue when it used to do sequential reads/writes at 220MB/s and now does 40MB/s. BTW, I said I'm using software RAID. I replied because I want to see if the OP ever found a solution.



Sorry, rotation rate correlates to overall data transfer speed. Being onboard, I'd get a controller card.


----------



## Gorstak (Dec 15, 2018)

I think you're leading the guy on a wild goose chase... all good comments and advice, but nothing that will help with transfer speeds.
Basically, transfer speed depends not just on cache and the actual speed of the drive, but also on the content you are transferring...
many small files always copy very slowly, since it takes a while for the system to locate and access them...
big files depend mostly on their density... it's not the same to copy a light big file, such as a Linux ISO image, as an MKV movie file... MKV has a much higher density and will take longer to copy
if your speed ranges from 45 to 500MB/s, it's perfectly normal.


----------



## heartog (Dec 15, 2018)

Gorstak said:


> I think you're leading the guy on a wild goose chase... all good comments and advice, but nothing that will help with transfer speeds.
> Basically, transfer speed depends not just on cache and the actual speed of the drive, but also on the content you are transferring...
> many small files always copy very slowly, since it takes a while for the system to locate and access them...
> big files depend mostly on their density... it's not the same to copy a light big file, such as a Linux ISO image, as an MKV movie file... MKV has a much higher density and will take longer to copy
> if your speed ranges from 45 to 500MB/s, it's perfectly normal.



My whole drive is all H.264 TS files, all 10Mbps in bitrate. Speeds had been fine for the past 2 years, with files ranging from 2 to 100GB all copying at stable speeds. I've tested file transfers from SSDs, HDDs, and the network, and it all turns out the same: speed drops.


----------



## eidairaman1 (Dec 15, 2018)

heartog said:


> My whole drive is all H.264 TS files, all 10Mbps in bitrate. Speeds had been fine for the past 2 years, with files ranging from 2 to 100GB all copying at stable speeds. I've tested file transfers from SSDs, HDDs, and the network, and it all turns out the same: speed drops.


Controller problem then, or a cable or PSU.


----------



## taz420nj (Dec 16, 2018)

Are you slowing down on reading or writing?  My issue was in trying to transfer data from an older smaller array to a new bigger one. The transfer rates would fluctuate wildly, with periods upwards of 350MB/s and then dropping drastically, then screaming along again.  The data is all MKV packed video.  My issue was definitely due to really bad fragmentation of the old array.  Files that were contiguous were flying and fragmented files crawled.   I ended up canceling the defrag and just letting it transfer as it wanted because the defrag was thrashing the drives and I didn't want them to start failing.  Fragmentation wouldn't cause an existing array's write speeds to suddenly drop that much though.
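For anyone who lands here with the same symptom: timing each file's copy individually makes the slow (likely fragmented) ones obvious. A rough Python sketch - the media paths in the comment are just placeholders, not my actual layout:

```python
import os
import shutil
import time

def copy_with_throughput(src_dir, dst_dir):
    """Copy each file individually and report its throughput in MB/s,
    so slow (likely fragmented) files stand out from contiguous ones."""
    os.makedirs(dst_dir, exist_ok=True)
    results = []
    for name in sorted(os.listdir(src_dir)):
        src = os.path.join(src_dir, name)
        if not os.path.isfile(src):
            continue  # skip subfolders for this rough check
        size = os.path.getsize(src)
        start = time.perf_counter()
        shutil.copy(src, os.path.join(dst_dir, name))
        elapsed = time.perf_counter() - start
        results.append((name, size / max(elapsed, 1e-9) / 1e6))
    return results

# Hypothetical paths - point these at your own source and destination:
# for name, mbps in copy_with_throughput(r"D:\Media", r"E:\Media"):
#     print(f"{mbps:8.1f} MB/s  {name}")
```

On Windows, `defrag D: /A` analyzes a volume's fragmentation without actually defragging it, which is a much cheaper way to confirm the cause than the multi-day defrag I started.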


----------



## eidairaman1 (Dec 16, 2018)

taz420nj said:


> Are you slowing down on reading or writing?  My issue was in trying to transfer data from an older smaller array to a new bigger one. The transfer rates would fluctuate wildly, with periods upwards of 350MB/s and then dropping drastically, then screaming along again.  The data is all MKV packed video.  My issue was definitely due to really bad fragmentation of the old array.  Files that were contiguous were flying and fragmented files crawled.   I ended up canceling the defrag and just letting it transfer as it wanted because the defrag was thrashing the drives and I didn't want them to start failing.  Fragmentation wouldn't cause an existing array's write speeds to suddenly drop that much though.



I was replying to the new guys posts is what, thanks for replying back though


----------



## Solaris17 (Dec 16, 2018)

Gorstak said:


> big files depend mostly on their density... it's not the same to copy a light big file, such as a Linux ISO image, as an MKV movie file...



This is total misinformation. Data streams do not care how a file is packaged.


----------



## John Naylor (Dec 16, 2018)

We put RAID on a desktop every 3 or 4 years and give it a run.... as of yet, outside of benchmarks and workstation apps (video editing, animation, rendering), we've never realized any advantage.  If you can afford the T & E to play with RAID, methinks you're better off putting your time and money into SSDs.


----------



## taz420nj (Dec 17, 2018)

John Naylor said:


> We put RAID on a desktop every 3 or 4 years and give it a run.... as of yet, outside of benchmarks and workstation apps (video editing, animation, rendering), we've never realized any advantage.  If you can afford the T & E to play with RAID, methinks you're better off putting your time and money into SSDs.



RAID 0 is not the topic of this discussion.  We use RAID 5/6 to give our media collections a degree of fault tolerance in the event of a drive failure.  Yes, we all know RAID is not a backup, but when you have a collection that spans a dozen terabytes and growing, 1:1 backup (because media files are incompressible) is expensive and tedious for material that is not critical or irreplaceable (just a royal pain in the ass and time-consuming to replace).  SSDs are still far too expensive for a bulk storage application as well, especially because their main feature (speed) is pretty much useless here.  Even when you factor in a controller (a decent used server pull can be bought for a fraction of the price of a new one) and parity overhead, HDD RAID using 2TB enterprise drives is still only about $100/TB, versus SSD at $150/TB.
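The parity arithmetic behind that $/TB figure, as a quick sketch - the prices are placeholder assumptions I picked to land near the $100/TB number, not actual quotes:

```python
# Back-of-the-envelope cost per usable TB for the 8-drive RAID 6 above.
# The prices below are placeholder assumptions, not actual quotes.
drives = 8
drive_tb = 2
drive_price = 140        # assumed: used 2TB enterprise HDD, USD
controller_price = 80    # assumed: used server-pull RAID card, USD

usable_tb = (drives - 2) * drive_tb   # RAID 6 spends two drives on parity
total_cost = drives * drive_price + controller_price
print(f"{usable_tb} TB usable at ${total_cost / usable_tb:.0f}/TB")
# -> 12 TB usable at $100/TB with these assumed prices
```

Swap in your own prices; the point is that even after losing two drives to parity, the HDD array comes in well under SSD per usable terabyte.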


----------



## John Naylor (Dec 18, 2018)

RAID should be a topic of discussion when the OP is talking about not realizing anticipated performance.


----------



## taz420nj (Dec 18, 2018)

John Naylor said:


> RAID should be  a topic of discussion when OP is talking about not realizing anticipated performance.



He hasn't been back yet to answer my question, but it doesn't sound like he's "not realizing anticipated performance" - he has had a drastic dropoff in ACTUAL performance.  And once again, nobody is talking about RAID 0, because it is irrelevant to the topic.


----------



## fredaroony (Feb 16, 2019)

I'm having the exact same problem with MKV files, and I've really noticed it since I've gone 4K. I have an HP Microserver Gen8 with an HP P222 RAID controller configured in RAID5, and it seems to be an issue with certain MKV files: some go very fast and some stall after a few seconds of transfer.

I have zero issues between the OS drive, which is a single SSD, and an external USB3-attached drive. The only issue is the RAID5 array. I put in my old Adaptec 3405 RAID card, which I thought fixed the issue, but it came back last night with one file. Even the files that were an issue before now work, but then another won't... it's bizarre.

I looked at defragging, but the drive was already at 0% fragmentation, so I don't see that as the issue.

Running Server 2016 fully patched.

I'm stumped!


----------



## SoNic67 (Feb 16, 2019)

fredaroony said:


> The only issue is the RAID5 array


What are your actual speeds on this RAID5? I have found that some cheap controllers are really bad at RAID5 (software/host-based).
https://www.techpowerup.com/forums/threads/post-your-crystaldiskmark-speeds.250319/


----------



## fredaroony (Feb 16, 2019)

SoNic67 said:


> What are your actual speeds on this RAID5? I have found that some cheap controllers are really bad at RAID5 (software/host-based).
> https://www.techpowerup.com/forums/threads/post-your-crystaldiskmark-speeds.250319/



The speeds are fine, and neither card is a cheapo card. It seems to be specific files, i.e. maybe 1 out of 30 files will do it.


----------

