# Suddenly slow performance from Intel RAID



## Violet_Shift (Sep 2, 2020)

I've had this RAID array running for years now. It's a 4x 3TB RAID-5.

I migrated the RAID to a new system when I moved countries, and it's been fine since. Now, however, I've run into problems.

Last week, while transferring a large file, there was some sort of write error and the RAID had to verify. Since then, my computer has been behaving very strangely.

Although the verification succeeded, the array's performance has plummeted. It takes a long time to read anything from it, and it's almost constantly making noises from small read operations.

Windows now takes over 20 minutes to start, so I dread restarting - I presume this is related to the exceptionally slow performance.


I presume that there is a problem with one of the drives or something, but everything is showing as normal to Intel's tools.

CrystalDiskInfo is giving me this:






I'm not sure if this is the cause of the problems, if I should remove this disk from the array, or what. I'm manually verifying again just to check, but if there's something wrong with this disk I'm likely to just yank it and replace it.

Before anyone asks, yes I do have external backups of mission-critical data. I mostly use this for games and media, and I noticed the speed issues when trying to browse folders of photos.

No games installed on the RAID will even run decently.

(Part of me is just tempted to go and buy a freakin' big HDD, since 9TB of storage is hardly huge anymore).


----------



## R-T-B (Sep 2, 2020)

That one disk is relocating sectors so yeah, it's on its way out.


----------



## Violet_Shift (Sep 2, 2020)

Thanks; I find it hard to interpret these reports. I'll buy a new drive immediately and swap it in.  It's odd that it's the youngest drive of the group, but then again it is a manufacturer recertified one, so I guess I was playing with fire when I bought it.


----------



## Solaris17 (Sep 2, 2020)

That drive is toast.

but eh, that's part of the curve. Things generally fail either early on or near the end of their life cycle.


----------



## pavle (Sep 2, 2020)

53°C? Your disk is overheating; deal with that before anything else. And yes, bad sectors too. Put some fans in front of your disks...


----------



## Violet_Shift (Sep 2, 2020)

I presume that parameter counts down rather than up, since all the other drives show "200" in that space?



pavle said:


> 53°C? Your disk is overheating; deal with that before anything else. And yes, bad sectors too. Put some fans in front of your disks...



It's one of the two mounted on the rear side of the motherboard... I'll have to think of a solution for this as well, but thanks for informing me.


----------



## Equus_Ferus_Caballus (Sep 2, 2020)

Violet_Shift said:


> I presume that parameter counts down rather than up, since all the other drives show "200" in that space?



That raw value counts up, since it's logging the number of bad sectors the drive finds (the normalized value next to it counts down from its starting point of 200). I'd replace the drive and get some cooling on them, because running above 50°C reduces drive lifespan.
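To make the counting directions concrete, here's a minimal Python sketch of how a SMART attribute is typically interpreted. The thresholds and sample numbers are illustrative assumptions, not values taken from the screenshot:

```python
# Sketch of SMART attribute interpretation (illustrative values only).
# The *normalized* value starts at a vendor-chosen ceiling (200 on many
# WD drives) and counts DOWN toward a failure threshold, while the *raw*
# value counts UP with each event the attribute logs.

def attribute_status(normalized, threshold, raw):
    """Classify a single SMART attribute reading."""
    if normalized <= threshold:
        return "FAILING"   # the drive itself considers this attribute failed
    if raw > 0:
        return "WARNING"   # events logged, but still above the threshold
    return "OK"

# Hypothetical readings for attribute 05 (reallocated sectors) on four
# drives: (normalized, threshold, raw).
drives = {
    "disk0": (200, 140, 0),
    "disk1": (200, 140, 0),
    "disk2": (193, 140, 57),  # the suspect: value dropping, raw climbing
    "disk3": (200, 140, 0),
}

for name, (value, thresh, raw) in drives.items():
    print(name, attribute_status(value, thresh, raw))
```

So "200" across the board just means "never logged an event"; the drive only reports outright failure once the normalized value sinks to the threshold.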


----------



## kiriakost (Sep 2, 2020)

pavle said:


> Perhaps mount a fan on the bottom of the disk; I have that in some of my computers.



My 0.2 Euro cents: effective HDD cooling is possible with a DC fan blowing air from any direction other than the bottom plate.
The bottom plate (the PCB controller) gets cooled by using the HDD body as a heatsink.



Violet_Shift said:


> I'm not sure if this is the cause of the problems



*C5 and C6* are the problem.
I have been using Intel RAID for over a decade; the 05 indication is not considered a problem.
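A rough Python sketch of that distinction, with made-up raw counts (the attribute IDs 05/C5/C6 are the standard SMART ones; everything else here is an illustrative assumption): 05 sectors have already been remapped to spares and no longer slow reads down, while C5/C6 sectors are still unresolved, so the drive retries them on every access - which is exactly what long, noisy small reads look like.

```python
# Classify a drive's bad-sector situation from raw SMART counts.
# C5 (current pending) and C6 (offline uncorrectable) sectors are
# unresolved and cause read retries; 05 (reallocated) sectors have
# already been remapped by the firmware.

PENDING = {0xC5, 0xC6}   # unresolved: cause retries on every read
REMAPPED = {0x05}        # already handled by the firmware

def worst_problem(raw_by_id):
    """Return which class of sector problem the raw counts show."""
    if any(raw_by_id.get(attr, 0) > 0 for attr in PENDING):
        return "pending sectors - expect slow, retry-heavy reads"
    if any(raw_by_id.get(attr, 0) > 0 for attr in REMAPPED):
        return "remapped sectors only - watch the trend"
    return "clean"

print(worst_problem({0x05: 30, 0xC5: 8, 0xC6: 8}))  # hypothetical suspect drive
print(worst_problem({0x05: 2}))                     # remapped only
```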


----------



## DrCR (Sep 2, 2020)

Solaris17 said:


> but eh, that's part of the curve. Things generally fail either early on or near the end of their life cycle.


Isn't the statistic something like: if a drive lasts 3 or 4 years, it's likely to last 10?

The drives in my NAS are 12 years old at this point.


----------



## Ahhzz (Sep 2, 2020)

DrCR said:


> Isn't the statistic something like: if a drive lasts 3 or 4 years, it's likely to last 10?
> 
> The drives in my NAS are 12 years old at this point.


I'll have to check to see if I can find it again, but I believe you're pretty close. I think the article I read found that most computer components fail within a short period of time from the factory (something like 18 months for hard drives, I think), but after that they tend to get replaced due to replacing the computer itself, upsizing, or simple component obsolescence, not actual failure.


----------



## SnakeDoctor (Sep 2, 2020)

The drive looks to be on its way out. If you install Hard Disk Sentinel, it will give you a user-friendly report on the state of your hard drive.
No need to run tests that could further damage a failing drive. Seems the hard drive has multiple bad sectors.





Download Hard Disk Sentinel: www.hdsentinel.com


----------



## kiriakost (Sep 2, 2020)

CrystalDiskInfo is sufficient for diagnosis.
It just needs some poking around in the settings so you can see meaningful numbers (hours of total operation).
Here are meaningful numbers from my system.


----------



## Violet_Shift (Sep 2, 2020)

Do these warnings explain the abysmally slow performance from the RAID? I'm tempted to remove the disk and put it into degraded mode to check, but I really don't want to risk screwing up the array.

I'm honestly tempted to go back to a single-disk solution, since I had to switch the entire array out after it was initially built out of a pack of bad Seagates. After four drive swaps, I'm in "Is this more trouble than it's worth?" territory.

I'll install HDSentinel for more diagnostics...

Oh boy, this thing is a liability. I'm searching for viable replacements now, but unfortunately it seems that 3TB 7200RPM drives are rare these days.


----------



## DRDNA (Sep 2, 2020)

Violet_Shift said:


> Do these warnings explain the abysmally slow performance from the RAID? I'm tempted to remove the disk and put it into degraded mode to check, but I really don't want to risk screwing up the array.
> 
> I'm honestly tempted to go back to a single-disk solution, since I had to switch the entire array out after it was initially built out of a pack of bad Seagates. After four drive swaps, I'm in "Is this more trouble than it's worth?" territory.
> 
> ...


Yes, they do explain it. Replace the drive.


----------



## John Naylor (Sep 2, 2020)

Another thing to consider is what exactly RAID is doing for you other than impressive benchmarks. We test RAID every three years or so ... it is somewhat satisfying seeing the benchmarks. But on the desktop, I find it more trouble than it's worth. In the last test ...

(2) Samsung Pro SSDs in RAID 0
(2) Seagate 2 TB SSHDs in RAID 1

a)  As usual great benchmarks
b)  In application / gaming based benchmarks, RAID 0 was slightly slower.

Contacted Samsung, which advised that they neither recommend nor support RAID. On the RAID 1 array there was no performance impact, but about once a month the system would boot finding one drive missing ... Turning the system on, waiting a bit, and restarting solved the problem. After about 4 months, I broke both arrays and have seen no ill effects. Performance on the SSDs is faster but not in any user-observable way. The mirrored drives are maintained by free software which runs at 12 noon and midnight, and there is no more "missing drive" problem.

RAID definitely has its uses ... we were using a NAS for about 10 years. Best thing about it, for us: in a fire I could just grab the box and run. If we had more concurrent users on the SOHO network, we'd look at it again, but even with the video and music server activity, w/ (6) LAN-connected PCs, (3) printers, (6) lappies and a couple of TVs, we just don't have enough I/O to see any impact. The Small Office side and Home side activity are rarely concurrent, and disk usage is rather light. A CAD operator may open an 8 GB file in the A.M., and intermittent saves are stored locally on their box until the file is saved back to the main drive before it's closed. That might happen twice a day.

RAID is a valuable tool ... we just don't have anything on our boxes that can benefit in any way from using it. Well, it should speed up backups ... but as those happen between 12 midnight and 4 am, they have no user impact.


----------



## Violet_Shift (Sep 2, 2020)

Browsing for replacement now... Are Toshiba drives good?

A direct replacement with the same model is unaffordable because it seems to have been discontinued, and about the only reasonably priced offering in 3TB / 7200RPM is a Toshiba. I'm hesitant to go away from WD or Seagate but at the moment I just want to stop my array from blowing up.


----------



## pavle (Sep 3, 2020)

Toshiba drives with P in their name are not that good, but for data storage (not an OS drive) they might do - keep 'em cool though.


----------



## Violet_Shift (Sep 3, 2020)

The one I was looking at was a Toshiba DT101ACA, if that helps.

And yeah, the amount of data throughput here will not be that high - I use an SSD for my OS.


----------



## kiriakost (Sep 3, 2020)

Violet_Shift said:


> I'm tempted to remove the disk and put it into degraded mode to check



This is not a flat car tire; this is a light bulb that has burned out.
Move on ...


----------



## pavle (Sep 3, 2020)

Violet_Shift said:


> The one I was looking at was a Toshiba DT101ACA, if that helps.
> 
> And yeah, the amount of data throughput here will not be that high - I use an SSD for my OS.


I looked a little at the comments on Newegg, and yeah, 3 stars out of 5 (700+ reviews); one says they stopped functioning after 2 years, but for data they're OK. Again - cool them; they should be under 40°C for maximum life.


----------



## Violet_Shift (Sep 4, 2020)

I got it, since the stability of my array for now matters to me.

If I get a failure in 3 years or so I'll likely buy a single disk solution, but this seems to work as a stopgap.

Thanks everyone for the help.


----------



## kiriakost (Sep 4, 2020)

Violet_Shift said:


> I got it, since the stability of my array for now matters to me.
> 
> If I get a failure in 3 years or so I'll likely buy a single disk solution, but this seems to work as a stopgap.
> 
> Thanks everyone for the help.



Sorry for the bad news, but a modern HDD is no more than a three-year investment.
Some people vote in favor of SSDs, even while those lose capacity every single day.

If I were determined to build a RAID 1 today for terabytes of disk space, I would:
a) Buy six NEW HDDs.
b) Use two in RAID 1 as the main drives.
c) Use two in RAID 1 as backup drives.
d) Keep two in their anti-static bags as spares, to replace any HDD that goes bad in the future.


----------

