I thought write endurance was mostly related to the total amount of data written, not the number of write operations. The way I see it, downloaded files should write roughly their own size, once, then be read many times if you seed the torrent, hence WORM. I do appreciate the explanation given on QLC vs SLC; although I did know most of that, I learned about 4 KB blocks and how smaller writes can affect NAND life. Having said that, aren't most torrents divided into 4 KB blocks anyway?
I know the drive is not 128GB, but you only get the benefits of SLC in those 128GB, that's what I meant.
I do realize I made a mistake in my original post: I thought the drive worked like the StoreMI software and moved the "hotter" files to the SLC portion of the drive. I see now that the SLC is only used for write operations, like on most QLC + pseudo-SLC drives on the market.
How does this differ from competing drives on the market, though? Isn't the pseudo-SLC NAND on those drives as durable as this one's?
You seem to imply that because the SLC in this drive is fixed, it would reach higher endurance than the competition. Thing is, we don't know how high the SLC endurance rating is on the other drives: the number they give is based on the QLC's endurance, since the SLC, being dynamic in size, is not guaranteed. But in a real-world scenario I would prefer dynamic SLC caching, as it offers a larger cache without taking away the possibility of filling the entire drive if I wanted to.
Another way of putting it would be:
If I buy a 2TB Intel 665p drive and leave ~400GB unformatted, I would always have a guaranteed SLC cache. Let's assume, for the sake of argument, that the resulting SLC is 128GB in size. What difference would that make compared to the ~$100 more expensive Enmotus drive? Apart from the bundled software, of course.
Sorry for the late reply, went away over the weekend so no time to respond until now. Anyhow, while in theory I agree that most of what you are saying ought to be the case, it largely isn't. (Note that all of this assumes that the pSLC cache is a fixed portion of the flash, which is how I interpret the wording of the original article here.)
- SSDs with dynamic pSLC caching don't have a fixed portion of flash set aside for caching, but rotate the cache across the entire drive over time. (There are drives with non-dynamic pSLC, but those typically have very small caches (~10GB for 1TB drives) and AFAIK still move the cache around the drive during use.) This helps with wear leveling, as a static cache would mean the rest of the drive gets a lot more wear than the cache portion - I'll get back to how this drive likely overcomes that. They also typically scale the size of the pSLC cache depending on how full the drive is - for example, the Intel 660p 1TB drive has a maximum cache size of 140GB, but gradually reduces this as the drive fills, down to 12GB. Here's a graph showing how this scales across the three SKUs in that series.
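To make that scaling concrete, here's a rough sketch of how a dynamic cache shrinks as the drive fills, using the 660p 1TB endpoints quoted above (~140GB empty, ~12GB full). Note the linear interpolation is purely my assumption for illustration - the real firmware follows a stepped curve from Intel's spec sheet, not a straight line.

```python
# Illustrative model only: linear shrink of a dynamic pSLC cache as the
# drive fills. Endpoint sizes are the Intel 660p 1TB figures; the linear
# shape in between is an assumption, not Intel's actual (stepped) curve.

def dynamic_cache_size_gb(used_fraction: float,
                          max_cache_gb: float = 140.0,
                          min_cache_gb: float = 12.0) -> float:
    """Approximate cache size (GB) for a given drive fullness (0.0 to 1.0)."""
    used_fraction = min(max(used_fraction, 0.0), 1.0)
    return max_cache_gb - (max_cache_gb - min_cache_gb) * used_fraction

print(dynamic_cache_size_gb(0.0))  # 140.0 - empty drive gets the full cache
print(dynamic_cache_size_gb(0.5))  # 76.0  - half full, cache roughly halved
print(dynamic_cache_size_gb(1.0))  # 12.0  - full drive keeps only the minimum
```

The point is simply that the cache you can count on is the small number, not the big one.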
- The size of the pSLC cache in dynamic-cache drives is not affected by unformatted space or overprovisioning (at least in any drive I've ever seen a review of), so leaving ~400GB unformatted sadly won't give you a guaranteed pSLC cache. It will give you massive overprovisioning, which helps with wear leveling and thus extends the usable lifetime of the drive, as the unformatted parts will then be available as spare cells for any flash that gets retired due to wear. But at best it will guarantee that the pSLC cache never shrinks below the level it is at with ~400GB of free space.
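One more wrinkle with the 400GB thought experiment: even if the firmware did repurpose that space, raw QLC doesn't convert to pSLC one-to-one. A QLC cell stores 4 bits and a pSLC cell stores 1, so the physical best case is a 4:1 ratio (actual firmware behavior varies, this is just the upper bound):

```python
# Back-of-envelope: QLC flash operated in single-bit (pSLC) mode holds
# a quarter of its QLC capacity. The 4:1 ratio is the physical limit;
# what any given firmware actually does with spare area is up to it.

def qlc_to_pslc_gb(qlc_gb: float, bits_per_cell: int = 4) -> float:
    """Capacity of raw QLC flash when used as single-bit pSLC."""
    return qlc_gb / bits_per_cell

print(qlc_to_pslc_gb(400))  # 100.0 - ~400GB unformatted backs at most ~100GB of pSLC
print(128 * 4)              # 512   - a fixed 128GB pSLC region eats ~512GB of raw QLC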
- While it's obvious that write amplification is bad, and that the ideal in terms of wear would be for any write operation to write exactly as much data as is needed, this is essentially impossible with flash storage. If flash controllers interfaced with the NAND on a per-bit level, the controllers would need massive performance increases (and thus power draw) to reach current performance levels, the interconnects between NAND and controller would be massively more complex (and require enormous board space), and the RAM needed for caching drive metadata would balloon (current drives need ~1GB of DRAM per TB of storage; moving from 4 KB blocks to 1-byte blocks might not multiply RAM needs by a direct 4096x, but it would be a major increase). The reason some server SSDs have larger block sizes is that they are models meant (nearly) exclusively for sequential writes, which are slightly faster with a larger block size, but they perform terribly for random writes. On the other hand, a smaller-than-4 KB block size would absolutely kill sequential write speeds. So, for consumer usage, and until someone comes up with a better technology than NAND flash, we're stuck with 4 KB pages being read and rewritten (with erases happening at much larger block granularity) for each write done to an SSD. And it's a compromise that all in all works pretty well, as long as the drive is able to minimize write amplification - which most drives these days do reasonably well.
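Two quick sanity checks on the numbers in that bullet, assuming a typical flat FTL mapping (4 KB pages, 4-byte table entries - common values, not any specific controller's design):

```python
# Check 1: the ~1GB-of-DRAM-per-TB rule of thumb falls straight out of a
# flat logical-to-physical map at 4 KB granularity with 4-byte entries.
def mapping_table_bytes(capacity_bytes: int, page: int = 4096, entry: int = 4) -> int:
    """DRAM needed for a flat L2P mapping table."""
    return (capacity_bytes // page) * entry

print(mapping_table_bytes(2**40) // 2**30)  # 1 - i.e. 1 GiB of DRAM per TiB of flash

# Check 2: sub-page writes still touch whole pages, which is where the
# amplification on small writes comes from.
def write_amplification(write_size: int, page: int = 4096) -> float:
    """Bytes physically written per byte of useful data, page-aligned."""
    pages = -(-write_size // page)  # ceiling division
    return pages * page / write_size

print(write_amplification(512))   # 8.0 - a 512-byte write rewrites a full 4 KB page
print(write_amplification(4096))  # 1.0 - page-sized writes are the sweet spot
```

Shrinking the mapping granularity below 4 KB inflates that table (and the controller work per byte) proportionally, which is the RAM/complexity argument above in numbers.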
- Torrents are split into blocks, but their size varies a lot, and even 4 KB blocks wouldn't really help, as that would still require the torrent client to cache the data in RAM until an entire block is downloaded - data needs to be stored from the first bit downloaded, after all. Given that most torrents are downloaded non-sequentially from multiple sources simultaneously at varying speeds, the write pattern is essentially random. This is what makes torrents a worst-case scenario for SSDs.
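You can see the randomness directly by counting how fragmented the writes are. The sketch below compares 100 sub-piece writes arriving in order versus at random file positions (16 KB is a typical client request size, but that's an assumption - sizes vary):

```python
# Illustrative only: how fragmented a torrent's writes look at 4 KB page
# granularity when pieces arrive out of order from many peers, versus the
# same data written sequentially.
import random

PAGE = 4096
SUBPIECE = 16 * 1024  # common BitTorrent request size (assumption, clients vary)

def contiguous_runs(offsets, length=SUBPIECE):
    """Number of separate contiguous fragments the writes form on disk."""
    pages = sorted({off // PAGE + i for off in offsets for i in range(length // PAGE)})
    runs = 1
    for a, b in zip(pages, pages[1:]):
        if b != a + 1:
            runs += 1
    return runs

random.seed(1)
file_size = 64 * 1024 * 1024
seq_offsets = list(range(0, 100 * SUBPIECE, SUBPIECE))          # in-order download
rand_offsets = random.sample(range(0, file_size, SUBPIECE), 100)  # swarm download

print(contiguous_runs(seq_offsets))   # 1 - the controller sees one big sequential write
print(contiguous_runs(rand_offsets))  # close to 100 scattered fragments
```

Same bytes, wildly different write pattern - and the scattered case is the one the drive actually gets from a swarm.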
- The thing is, a 128GB pSLC cache doesn't just benefit 128GB worth of the drive, as the chances of hammering the drive with >128GB of writes fast enough that it can't clear its cache are ... well, zero in a consumer use case. Unless you have a habit of copying multiple game installs between different NVMe SSDs (or from a RAM disk, I guess), you'll never saturate the drive enough to fill the cache. It will be clearing itself continuously every chance it gets, shuffling data over to QLC as soon as there is an opening in the queue or the workload is light enough that the controller has capacity to spare - as do all pSLC drives. For consumer uses, the controller has capacity to spare essentially all the time.
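To put a (very rough) number on that: model the cache as filling at the burst write rate minus a background flush rate, and draining between bursts. Every rate below is a made-up illustrative figure, not a measurement of this or any drive:

```python
# Toy model of pSLC cache occupancy. CACHE_GB matches the 128GB discussed
# above; the 0.5 GB/s background flush rate and burst rates are purely
# illustrative assumptions.

CACHE_GB = 128.0
DRAIN_GBPS = 0.5  # assumed background flush rate to QLC

def peak_cache_fill(bursts, idle_gap_s=5.0):
    """bursts: list of (write_rate_GBps, duration_s). Returns peak fill in GB."""
    fill = peak = 0.0
    for rate, dur in bursts:
        fill = min(CACHE_GB, fill + max(0.0, rate - DRAIN_GBPS) * dur)
        peak = max(peak, fill)
        fill = max(0.0, fill - DRAIN_GBPS * idle_gap_s)  # drain during idle gap
    return peak

# One 50GB game install at 2 GB/s (25 s) barely dents the cache:
print(peak_cache_fill([(2.0, 25.0)]))               # 37.5
# Even two back-to-back only get it to ~72GB:
print(peak_cache_fill([(2.0, 25.0), (2.0, 25.0)]))  # 72.5
```

Under this (admittedly crude) model you'd need several such copies with almost no idle time in between to actually hit the ceiling, which matches the "zero in a consumer use case" claim.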
- Yes, the pSLC of other QLC drives is (likely) exactly as durable as this (though with a caveat I'll get back to). The possible difference comes from the persistence of the cache, both in size and in the portion of flash used: a persistently large cache means the chance of filling it doesn't increase as the drive fills (unlike the 660p above - filling a 12GB cache is far more likely than filling a 128GB one), and at that size the drive will, under pretty much any consumer usage scenario, have plenty of time to figure out the optimal way of writing the cached data to QLC while minimizing write amplification. This is of course down to firmware tuning, but dynamic caches are under much more pressure to empty themselves, as some of their cells might soon be called on to work as QLC, while a setup like this potentially has the headroom to do a more thorough job of cache flushing, ensuring less write amplification while maintaining write performance. The QLC part of this drive will see slightly more wear from cache flushes than a drive with dynamic pSLC, as those writes are spread over just 75% of the NAND rather than 100% of it. That ought, however, to be compensated for by lower write amplification, as the drive can pretty much always afford to wait for a full 4 KB of data before moving it off the pSLC, no matter the workload. And that is precisely where a solution like this has a chance to beat the competition: competing drives will (ideally) wear evenly across the entire drive, while for this drive the portion of flash used for pSLC will essentially never see meaningful wear. Given sufficient minimization of write amplification, this can result in an overall increase in longevity over current dynamic solutions. Of course, a dynamic solution with a huge cache and firmware tuned to minimize write amplification at all costs would be even better, but AFAIK no such drive exists.
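Here's the trade-off as arithmetic. Total host writes before the QLC wears out scale with (fraction of flash taking QLC wear) x (P/E cycles) / (write amplification). The 1000 P/E cycle figure and the WA factors below are illustrative assumptions, not rated specs for any of these drives:

```python
# Back-of-envelope endurance comparison: dynamic cache (wear spread over
# 100% of the NAND, but assumed higher write amplification under pressure)
# vs. fixed pSLC (QLC wear concentrated on 75% of the NAND, but assumed
# lower write amplification). All constants are illustrative assumptions.

QLC_PE_CYCLES = 1000  # assumed QLC program/erase endurance
RAW_TB = 2.0          # the 2TB example drive

def host_writes_tb(qlc_fraction: float, write_amp: float) -> float:
    """Total host TB writable before the QLC portion is worn out."""
    return RAW_TB * qlc_fraction * QLC_PE_CYCLES / write_amp

print(host_writes_tb(1.00, 2.0))           # 1000.0 - dynamic cache, WA of 2.0
print(round(host_writes_tb(0.75, 1.2), 1)) # 1250.0 - fixed pSLC, WA of 1.2
```

With these (assumed) numbers, the WA reduction more than pays for concentrating the wear on 75% of the flash - which is the whole bet this drive's design is making. Flip the WA assumptions and the dynamic drive wins, which is why firmware tuning is the deciding factor.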
This turned all long and rambly on me, but I hope you see a bit more of where I'm coming from. Torrenting on an SSD is an edge case, but this drive ought to handle it better than most competitors.