
Western Digital Launches 20 TB NAS HDD with 64 GB iNAND

Joined: Jul 5, 2013 | Messages: 28,260 (6.75/day)
I am not kidding, 300TB is only 15 times the capacity.
My point was that the VAST majority of hard drives will not break 3 or 4 full-drive reads/writes a year, let alone 15x.

The drive is rated for 268MB/s sustained, so it only takes ~311h of continuous transfers to hit this workload limit. There are 8760 or 8784 hours in a year - WD uses the former.
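
For reference, that math checks out. A quick check in Python, using the 268 MB/s sustained rate and the 300 TB/yr workload rating quoted above:

workload = 300e12              # bytes: the 300 TB/yr workload rating
rate = 268e6                   # bytes/s: WD's max sustained transfer rate
print(f"{workload / rate / 3600:.0f} h")   # ~311 h of the 8760 h in a year
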
You're making a lot of assumptions and complaining about something that happens only in very rare situations and will NOT happen in a standard PC or NAS box.
 
Joined: Jun 29, 2018 | Messages: 542 (0.23/day)
My point was that the VAST majority of hard drives will not break 3 or 4 full-drive reads/writes a year, let alone 15x.
This HDD is not in the same segment as the vast majority of HDDs.
You're making a lot of assumptions and complaining about something that happens only in very rare situations and will NOT happen in a standard PC or NAS box.
I am writing from experience. A NAS drive that dares to call itself "Pro" should have the specs to match, and this one doesn't.
 
Joined: Jul 5, 2013 | Messages: 28,260 (6.75/day)
I am writing from experience.
So am I. 40 years' worth.
A NAS drive that dares to call itself "Pro" should have the specs to match, and this one doesn't.
That is an opinion that carries no merit.

300TB per year is not only a reasonable usage expectation for the market segment it is being targeted at, but is in fact on the VERY generous side.
 
Joined: Jun 29, 2018 | Messages: 542 (0.23/day)
So am I. 40 years' worth.

OK, Boomer ;)

That is an opinion that carries no merit.

300TB per year is not only a reasonable usage expectation for the market segment it is being targeted at, but is in fact on the VERY generous side.

When I analyzed the situation with numbers, you dismissed it as well, so I have no idea what you consider "merit" other than an opinion that aligns with your own.
Take a simple ZFS RAID1 (or btrfs, which Synology prefers) with 2 of those drives, which is a reasonable use case for "a NAS box". If you fill it up and leave the default scrub period of once a month, you'll be left with at most 60TB worth of workload per drive per year ("at most" because ZFS write amplification depends on what exactly you're doing with it).
I don't know how else I can explain my problem with this, so as I wrote before: if you're fine with it then be my guest.
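
A minimal sketch of that budget math in Python, assuming the pool really is full at 20 TB and scrubbed monthly (a scrub reads every byte of stored data):

capacity_tb = 20               # assumed full: the whole 20 TB is in use
rated_tb = 300                 # WD's per-drive annual workload rating
scrub_tb = 12 * capacity_tb    # monthly scrub reads every stored byte
print(rated_tb - scrub_tb)     # 60 TB/yr left per drive for actual use
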
 
Joined: Jul 5, 2013 | Messages: 28,260 (6.75/day)
When I analyzed the situation with numbers, you dismissed it as well, so I have no idea what you consider "merit" other than an opinion that aligns with your own.
Your numbers are flawed in the context that they do NOT apply to the use-case scenario this drive is intended for.
Take a simple ZFS RAID1 (or btrfs, which Synology prefers) with 2 of those drives, which is a reasonable use case for "a NAS box". If you fill it up and leave the default scrub period of once a month, you'll be left with at most 60TB worth of workload per drive per year ("at most" because ZFS write amplification depends on what exactly you're doing with it).
Even if that is correct (which I doubt), that is 60TB per year, spread across 2 drives. A far cry from the 300TB per year stated for each drive. Are you seeing the flaw in your logic yet?
I don't know how else I can explain my problem with this
Your explanation fails at context. That's on you, not me, not Western Digital nor anyone else. Your complaint is meritless and baseless when the facts of probable usage are taken into account.
 
Joined: Jun 29, 2018 | Messages: 542 (0.23/day)
Your numbers are flawed in the context that they do NOT apply to the use-case scenario this drive is intended for.

Even if that is correct (which I doubt), that is 60TB per year, spread across 2 drives. A far cry from the 300TB per year stated for each drive. Are you seeing the flaw in your logic yet?

First, it is 60TB left per drive, which is obvious from the calculations. Second, my previous calculations and the ZFS ones are mirrored by the analysis piece on ServeTheHome. Their numbers are more rounded than mine, but the scale is the same. They also go off on an interesting tangent about whether mainstream SSD endurance is comparable to what WD did here.

Additionally, one of the commenters there noted that this "Pro" drive has a worse non-recoverable read error rate than other WD drives: for example, WD Green is rated at <1 error in 10^14 bits read, while this WD Red "Pro" is <10 in 10^14 - an order of magnitude difference.

Your explanation fails at context. That's on you, not me, not Western Digital nor anyone else. Your complaint is meritless and baseless when the facts of probable usage are taken into account.

I don't know what you think the context is, but this drive is advertised as a "Pro" NAS drive for up to 24 bays with 24x7 operation, not normal PCs and not small NASes. It doesn't fit the advertised role with such a low workload rating and worse error specs than the Green series.
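
For scale, a rough worst-case reading of those spec-sheet bounds at this capacity (they are upper bounds, not measured rates, so the numbers below are limits, not expectations):

bits = 20e12 * 8                          # ~1.6e14 bits on a full 20 TB drive
print("Green:  ", bits * 1 / 1e14)        # bound: ~1.6 errors per full read
print("Red Pro:", bits * 10 / 1e14)       # bound: ~16 errors per full read
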
 
Joined: Jul 5, 2013 | Messages: 28,260 (6.75/day)
First, it is 60TB left per drive
2 x 300TB = 600TB
600TB - 60TB = 540TB

The SIMPLE math shows the real deal.

You clearly fail at math. At this point it seems clear you're either a troll or a fool. I really don't care which. You're done here.
 
Joined: Jun 29, 2018 | Messages: 542 (0.23/day)
2 x 300TB = 600TB
600TB - 60TB = 540TB

The SIMPLE math shows the real deal.

You clearly fail at math. At this point it seems clear you're either a troll or a fool. I really don't care which. You're done here.

You clearly didn't understand what I wrote, at all. Not only that, but you ignored every other argument and simply declared victory, which is hilarious.

Scrubbing a ZFS RAID1 once a month takes up to 240TB of workload from each of the drives.
2x 300TB = 600TB
2x 300TB - 2x 12x 20TB = 2x 60TB left for normal operations.
That is 60TB per drive in a RAID1, but don't forget how RAID1 works. Writes take workload from both drives simultaneously. Reads take from one, unless the data is corrupt and the checksums don't match - then it is compared with the other mirror.
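
A sketch of that accounting as a simplified per-drive model (my own simplification; per_drive_tb is an illustrative helper, and real mirrors balance reads across both disks):

def per_drive_tb(writes_tb, reads_tb, stored_tb, scrubs=12):
    # writes hit both mirrors; reads are assumed split evenly between
    # them; each scrub reads all stored data on each drive
    return writes_tb + reads_tb / 2 + scrubs * stored_tb

print(per_drive_tb(0, 0, 20))        # 240 TB/yr from monthly scrubs alone
print(300 - per_drive_tb(0, 0, 20))  # 60 TB/yr of the rating left per drive
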
 
Joined: Jul 5, 2013 | Messages: 28,260 (6.75/day)
You clearly didn't understand what I wrote, at all.
OR you did not state your points in clear enough detail. I just went back and re-read everything, and I did not misunderstand you.
Scrubbing a ZFS RAID1 once a month
First, why would you need to do that? Scrubbing only needs to be done yearly for most workload situations, or when an error is discovered, or when resilvering.
Doing so monthly is not needed and is a COMPLETE waste of time and resources.
takes up to 240TB of workload from each of the drives.
Second, that number is a load of nonsense. The "scrubbing" functions only read as much data as is stored on the drive. Nothing more, nothing less. Why? Because scrubbing is just re-verifying the data on the array and correcting errors if found (rare).
That is 60TB per drive in a RAID1, but don't forget how RAID1 works. Writes take workload from both drives simultaneously. Reads take from one, unless the data is corrupt and the checksums don't match - then it is compared with the other mirror.
Third, you assume that everyone using these drives in a NAS with a ZFS implementation will be using RAID1. Most do not, even in NAS racks with only 2 drive bays. Most use some form of striped array with parity such as RAIDZ-1, RAIDZ-2 or RAIDZ-3, depending on the options offered by the NAS device.

So once again, your point isn't one.
Then I suggest you ignore me now.
It's getting mighty tempting...
 
Joined: Jun 29, 2018 | Messages: 542 (0.23/day)
First, why would you need to do that? Scrubbing only needs to be done yearly for most workload situations, or when an error is discovered, or when resilvering.
Doing so monthly is not needed and is a COMPLETE waste of time and resources.
Once a month is the default, for example, in Debian, which uses ZFS-on-Linux (OpenZFS). Nobody uses Oracle's ZFS apart from Oracle, due to licensing. I'm not saying that their documentation is wrong, but rather that it is aimed at a different class of hardware with different assumptions.
Frequent scrubbing is advised with very high-capacity drives, especially this one, which has a worse error-rate spec than even the WD Green, not to mention the enterprise drives, which are rated two orders of magnitude better.
Second, that number is a load of nonsense. The "scrubbing" functions only read as much data as is stored on the drive. Nothing more, nothing less. Why? Because scrubbing is just re-verifying the data on the array and correcting errors if found (rare).
That's why I wrote "If you fill it up", obviously meaning "use the whole 20TB".
Third, you assume that everyone using these drives in a NAS with a ZFS implementation will be using RAID1.
It's an easy-to-understand example.
Most do not, even in NAS racks with only 2 drive bays
This drive is not meant for those according to WD; it's a "Pro" HDD for bigger NASes with up to 24 drives, which I wrote as well. I've used RAID10 in bigger NASes due to its performance characteristics, so it's not as clear-cut.
Most use some form of striped array with parity such as RAIDZ-1, RAIDZ-2 or RAIDZ-3, depending on the options offered by the NAS device.
That changes nothing, since those are scrubbed as well. And so is btrfs, which Synology uses in their NAS appliances, for example.
Parity RAIDs can also have higher read amplification than RAID1, which uses up more of the workload, while RAID1 can read different data from different mirrors at the same time. I wrote "can have" because RAIDZ behaves a bit differently than classical parity RAIDs.
It's getting mighty tempting...
I've yet to see you provide a compelling argument, so go ahead :)
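
For scale, a single scrub pass over a full drive is itself a long job. A rough estimate, assuming the scrub can sustain the drive's 268 MB/s sequential rating (real pools are usually slower):

capacity = 20e12               # bytes on a full 20 TB drive
rate = 268e6                   # bytes/s, the drive's best-case sequential rate
print(f"{capacity / rate / 3600:.1f} h per scrub pass")   # ~20.7 h
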
 
Joined: Jul 5, 2013 | Messages: 28,260 (6.75/day)
It's an easy-to-understand example.
That few use, which makes it a very poor example.
Once a month is the default, for example, in Debian, which uses ZFS-on-Linux (OpenZFS). Nobody uses Oracle's ZFS apart from Oracle, due to licensing. I'm not saying that their documentation is wrong, but rather that it is aimed at a different class of hardware with different assumptions.
Frequent scrubbing is advised with very high-capacity drives, especially this one, which has a worse error-rate spec than even the WD Green, not to mention the enterprise drives, which are rated two orders of magnitude better.
You still don't need to, and should NEVER, be doing a monthly scrub. And no, it's NOT a default setting, even in Debian.
This drive is not meant for those according to WD; it's a "Pro" HDD for bigger NASes with up to 24 drives, which I wrote as well.
Which ALSO means RAID1 will rarely be used with these drives. So once again, your point isn't one.
That changes nothing, since those are scrubbed as well. And so is btrfs, which Synology uses in their NAS appliances, for example.
Parity RAIDs can also have higher read amplification than RAID1, which uses up more of the workload, while RAID1 can read different data from different mirrors at the same time. I wrote "can have" because RAIDZ behaves a bit differently than classical parity RAIDs.
Thank you for displaying for everyone the shocking lack of merit to your argument.
I've yet to see you provide a compelling argument, so go ahead
Irony, wish granted. Not wasting any more of my time...
 
Joined: Jun 29, 2018 | Messages: 542 (0.23/day)
You still don't need to, and should NEVER, be doing a monthly scrub. And no, it's NOT a default setting, even in Debian.

It IS a default setting in Debian, Ubuntu, Proxmox and many other ZoL implementations. Why are you lying?

From http://deb.debian.org/debian/pool/contrib/z/zfs-linux/zfs-linux_2.1.2-1~bpo11+1.debian.tar.xz, path /debian/zfsutils-linux.cron.d:
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

# TRIM the first Sunday of every month.
24 0 1-7 * * root if [ $(date +\%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/trim ]; then /usr/lib/zfs-linux/trim; fi

# Scrub the second Sunday of every month.
24 0 8-14 * * root if [ $(date +\%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/scrub ]; then /usr/lib/zfs-linux/scrub; fi
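
For what it's worth, the guard in that scrub line works like this: cron fires at 00:24 on days 8 through 14, and the date +\%w test keeps only Sunday, which together select the second Sunday of each month. The same check restated in Python (is_second_sunday is an illustrative helper, not part of the package):

import datetime

def is_second_sunday(d: datetime.date) -> bool:
    # days 8-14 of a month contain exactly one Sunday: the second one
    return 8 <= d.day <= 14 and d.weekday() == 6

print(is_second_sunday(datetime.date(2022, 1, 9)))   # True
print(is_second_sunday(datetime.date(2022, 1, 2)))   # False
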
It was added back in 2016:
zfs-linux (0.6.5.6-2) unstable; urgency=medium
...
* Scrub all healthy pools monthly from Richard Laager

Which ALSO means RAID1 will rarely be used with these drives. So once again, your point isn't one.
If only RAID10 used RAID1 internally...
Thank you for displaying for everyone the shocking lack of merit to your argument.
Irony, wish granted. Not wasting any more of my time...
It's OK, you simply ignored all the other stats, arguments and the STH article :)
 