Friday, July 3rd 2020

Enmotus, Company Behind Original StoreMI, Launches FuzeDrive NVMe SSD

Enmotus is the company behind the FuzeDrive software on which the original AMD StoreMI technology is based, software that juggles data among your various physical storage devices based on heat (frequency of access) to improve performance. The company has now come up with its first hardware product, the FuzeDrive NVMe SSD. Built in the M.2-2280 form-factor, the drive offers 1.6 TB of capacity and combines a Phison E12-series controller with 96-layer 3D QLC NAND flash memory. The drive uses a PCI-Express 3.0 x4 host interface.

Performance numbers for the FuzeDrive 1.6 TB SSD, as rated by its makers, include up to 3,470 MB/s sequential reads, up to 3,000 MB/s sequential writes, and an endurance rating of 5,000 TBW. The drive uses a 128 GB SLC cache to speed up write performance in moderate bursts. There's more to this drive than the hardware alone: Enmotus bundles software that juggles data between the 128 GB pseudo-SLC and QLC areas, and of course the FuzeDrive software that lets you build volumes of up to 15 TB in size by throwing in fixed physical drives of any shape and size. Enmotus is pricing the FuzeDrive 1.6 TB NVMe SSD at $349.

Update Jul 3rd: We've learned from Enmotus that this drive has a permanent 128 GB SLC cache that is exclusive of the 1.6 TB QLC user area. We believe the drive is likely a 2 TB QLC design in which a quarter of the raw flash (512 GB of QLC) is permanently assigned to work as SLC, with a 30,000 P/E cycle rating. The FuzeDrive firmware transfers hot data between the SLC and QLC areas of the drive.
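Conceptually, the heat-based tiering works along these lines. The snippet below is purely illustrative - hypothetical names and toy block counts, not Enmotus' actual algorithm:

```python
# Illustrative sketch of heat-based tiering; not Enmotus' actual firmware logic.
from collections import Counter

SLC_BLOCKS = 4  # toy capacity; the real drive reserves a 128 GB SLC region

class TieredVolume:
    def __init__(self):
        self.heat = Counter()  # access count ("heat") per logical block
        self.slc = set()       # blocks currently resident in the fast SLC tier

    def access(self, block):
        self.heat[block] += 1
        self._rebalance()

    def _rebalance(self):
        # keep the hottest blocks on SLC, demote everything else to QLC
        hottest = {b for b, _ in self.heat.most_common(SLC_BLOCKS)}
        for b in hottest - self.slc:
            print(f"promote block {b}: QLC -> SLC")
        for b in self.slc - hottest:
            print(f"demote block {b}: SLC -> QLC")
        self.slc = hottest

vol = TieredVolume()
for b in [1, 2, 1, 1, 3, 4, 5, 1, 2, 2]:  # simulated accesses
    vol.access(b)
```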

27 Comments on Enmotus, Company Behind Original StoreMI, Launches FuzeDrive NVMe SSD

#1
Athlonite
5,000 TBW? That's a bold claim.
Posted on Reply
#3
nemesis.ie
Why not PCIe gen 4? Seems like a missed opportunity, especially for a new player.
Posted on Reply
#4
Valantar
Pretty neat to see this as an integrated solution, though I'm still wishing for something like this where the cache is 3D XPoint/Optane. Someone just needs to make a dual-mode controller ...

And yes, 2TB raw capacity (well, after overprovisioning) with 512GB of QLC set to act exclusively as SLC is the only thing that makes sense with conventional flash die capacities and channel counts.
Posted on Reply
#5
Vader
It feels awkward to bring this technology to QLC NAND, which doesn't really need help during READ operations. And the fixed SLC cache feels dubious: why take away the user's option of using the entire drive? Dynamic caching works better, IMO.
Posted on Reply
#6
Valantar
VaderIt feels awkward to bring this technology to QLC NAND, which doesn't really need help during READ operations. And the fixed SLC cache feels dubious: why take away the user's option of using the entire drive? Dynamic caching works better, IMO.
I would guess the argument is that this delivers consistent performance no matter how full the drive is. Also, isn't the SLC cache used just for writes? Given the software roots of the company behind this, I would assume there are some special firmware tricks here to make this better than a conventional dynamic-cache drive (though of course it remains to be seen whether that actually works).
Posted on Reply
#7
zlobby
spectatorxQLC - e-waste.
While I totally agree with you, I must also take into account the countless non-tech-savvy people who have no clue what voltage or current is, let alone what the difference between SLC, MLC, QLC, etc. is, and why it matters.
Exactly these kinds of people are being targeted by such scams, err, 'products'. Manufacturers will always prey on such people, and it's our duty to protect the commonfolk from the greasy, greedy fingers of the modern manufacturers. :rockout:
Posted on Reply
#8
Valantar
Oh, right, 'cause the average PC user tends to hammer their drive with writes. Sure.

Most TLC SSDs barely scratch a few percent (if that) of their total write endurance before the PC they live in is replaced wholesale. QLC is a perfectly viable alternative for budget and/or high-capacity flash storage, and its endurance is passable for anything but enthusiast use, given properly tuned firmware.
Posted on Reply
#9
zlobby
ValantarOh, right, 'cause the average PC user tends to hammer their drive with writes. Sure.
Err, torrents? Video and photo editing?

Why would anyone buy a 2 TB SSD as a sole boot disk?
Posted on Reply
#10
Valantar
zlobbyErr, torrents? Video and photo editing?

Why would anyone buy a 2 TB SSD as a sole boot disk?
Game drive? That is the clear use case for a drive like this. And a) are you saying the average PC user downloads torrents? LOL. b) who downloads torrents to an SSD anyhow? The amount of writes required for video or photo editing is also entirely within the capabilities of a QLC drive.
Posted on Reply
#11
zlobby
ValantarGame drive? That is the clear use case for a drive like this. And a) are you saying the average PC user downloads torrents? LOL. b) who downloads torrents to an SSD anyhow? The amount of writes required for video or photo editing is also entirely within the capabilities of a QLC drive.
a) Yes.
b) Me, for one. Even my NAS boxes are on SSD.
c) (I guess?) Many creators and editors I know use north of 500GB of scratch files on an SSD. And that's what they do all day. With the rest of the SSD full of project files and raw content, it's easy to kill every wear-leveling algo.
Posted on Reply
#12
Valantar
zlobbya) Yes.
b) Me, for one. Even my NAS boxes are on SSD.
c) (I guess?) Many creators and editors I know use north of 500GB of scratch files on an SSD. And that's what they do all day. With the rest of the SSD full of project files and raw content, it's easy to kill every wear-leveling algo.
a) sorry, but they don't. It's a small minority at best.
b) Well, I guess this is the one SSD that might actually suit that use case, given that it has 128GB of flash that is always SLC, making its write endurance near endless. All the random writes go there, where they get bundled into larger writes and moved to the QLC. QLC write amplification thus becomes near zero. Those 128GB of SLC will make this last far longer running torrents 24/7 than any TLC drive.
c) Scratch disks don't use that many writes, they're mainly for reads - it's a typical WORM (write once read many) workload. Unless you're doing a professional-level amount of video work this is a non-issue. And if you are doing video professionally, why are you looking at stuff like this anyhow? Get proper equipment that is made for the job.
Posted on Reply
#13
zlobby
Valantara) sorry, but they don't. It's a small minority at best.
b) Well, I guess this is the one SSD that might actually suit that use case, given that it has 128GB of flash that is always SLC, making its write endurance near endless. All the random writes go there, where they get bundled into larger writes and moved to the QLC. QLC write amplification thus becomes near zero. Those 128GB of SLC will make this last far longer running torrents 24/7 than any TLC drive.
c) Scratch disks don't use that many writes, they're mainly for reads - it's a typical WORM (write once read many) workload. Unless you're doing a professional-level amount of video work this is a non-issue. And if you are doing video professionally, why are you looking at stuff like this anyhow? Get proper equipment that is made for the job.
I stopped reading after your answer to a). If that's what you truly believe, further discussion will be futile.
Posted on Reply
#14
Valantar
zlobbyI stopped reading after your answer to a). If that's what you truly believe, further discussion will be futile.
You were specifically talking about
zlobbythe countless non-tech-savvy people who (...) are being targeted by such scams, err, 'products'.
Are you now saying that these people are savvy enough to use bittorrent? Some of them might have been five or ten years ago - out of sheer necessity if nothing else - but current streaming services (in combination with the wave of torrent sites being closed some years back) have decimated global torrent traffic.

And, again, the actual (not dynamic) SLC cache here would make this drive eminently suitable for torrent downloads, unlike other SSDs.

That you seem to be suggesting that these same people are semi-professional video editors is verging on the absurd.

My point: The stupid hate against QLC needs to stop. It's a perfectly reasonable tradeoff for the vast majority of PC users, and it gives us more fast storage for roughly the same money, even if it does need better firmware and controllers to not perform terribly when stressed. It's not suitable for write-heavy enthusiast use (though admittedly not much enthusiast use is that write-heavy either), but that's fine. Not all parts are - or should be - made for that market. And while the overall need for >1TB non-enthusiast SSDs is relatively small, game install sizes are a significant driver for larger drives for non-enthusiasts. Installing 10-30 large-ish games, keeping them updated, and periodically uninstalling some and installing something new won't move the needle on write endurance over the expected lifespan of a drive like this.
Posted on Reply
#15
kayjay010101
zlobbyI stopped reading after your answer to a). If that's what you truly believe, further discussion will be futile.
Or in other words, you have no comeback when you got rolled in an argument and are trying to save face and leave the discussion.
Posted on Reply
#16
zlobby
kayjay010101Or in other words, you have no comeback when you got rolled in an argument and are trying to save face and leave the discussion.
Non sequitur. Stepping away from an argument doesn't automatically make me wrong.

I don't know what countries you two are from, but apart from some Western European countries (most notably Germany) and parts of the US, people are downloading tons of torrents. Pull up the top 10 torrents on any tracker and look at the IPs of the peers. Most torrent clients even put little country flags next to the IPs.

I could also cite you some global IP traffic statistics, but to what end?
Posted on Reply
#17
Valantar
zlobbyNon sequitur. Stepping away from an argument doesn't automatically make me wrong.

I don't know what countries you two are from, but apart from some Western European countries (most notably Germany) and parts of the US, people are downloading tons of torrents. Pull up the top 10 torrents on any tracker and look at the IPs of the peers. Most torrent clients even put little country flags next to the IPs.

I could also cite you some global IP traffic statistics, but to what end?
... that doesn't tell us anything at all about how many people are actually downloading torrents. Got any stats on total amount of data transferred through bittorrent compared to, say, Netflix? Any trustworthy user count data? A small group of extremely active users (let's say on the scale of 100 000 - 1 000 000 worldwide, though even 10x that would be small) can make any network seem extremely active despite it being comparatively small when you look at relevant alternatives (mostly streaming services). And it's pretty well established that the core user group of BT is data hoarders with a voracious appetite for any and all forms of media. How many torrents do you see with hundreds of thousands or millions of seeds and peers?

And while stepping away from an argument doesn't automatically make you wrong, it does leave arguments contrary to yours unanswered, which (given that the arguments are reasonably sound) undermines your credibility. Refusing to address issues raised about your reasoning only serves to highlight said issues.

Though this source is admittedly a few years out of date, it was the most comprehensive overview I found in three minutes of DuckDuckGo searching. At least the numbers come from Cisco, so they should be about as accurate as one could hope to get.

Given that torrents fall under the heading of "file sharing", we can see that it represents a tiny proportion of global data traffic. And, given that media shared through torrents tends to be large files (at least when compared to non-video web sites, but also when compared to streaming video at the same resolution), one can relatively safely assume that this correlates to an overall low number of actual torrent traffic, and thus users and downloads.
Posted on Reply
#18
zlobby
Valantar... that doesn't tell us anything at all about how many people are actually downloading torrents. Got any stats on total amount of data transferred through bittorrent compared to, say, Netflix? Any trustworthy user count data? A small group of extremely active users (let's say on the scale of 100 000 - 1 000 000 worldwide, though even 10x that would be small) can make any network seem extremely active despite it being comparatively small when you look at relevant alternatives (mostly streaming services). And it's pretty well established that the core user group of BT is data hoarders with a voracious appetite for any and all forms of media. How many torrents do you see with hundreds of thousands or millions of seeds and peers?

And while stepping away from an argument doesn't automatically make you wrong, it does leave arguments contrary to yours unanswered, which (given that the arguments are reasonably sound) undermines your credibility. Refusing to address issues raised about your reasoning only serves to highlight said issues.

Though this source is admittedly a few years out of date, it was the most comprehensive overview I found in three minutes of DuckDuckGo searching. At least the numbers come from Cisco, so they should be about as accurate as one could hope to get.

Given that torrents fall under the heading of "file sharing", we can see that it represents a tiny proportion of global data traffic. And, given that media shared through torrents tends to be large files (at least when compared to non-video web sites, but also when compared to streaming video at the same resolution), one can relatively safely assume that this correlates to an overall low number of actual torrent traffic, and thus users and downloads.
TL;DR, like, literally. At least I hope you feel better, if anything.
Over and out, like, literally.
Posted on Reply
#19
Valantar
zlobbyTL;DR, like, literally. At least I hope you feel better, if anything.
Over and out, like, literally.
More like disappointed that you for some weird reason are refusing to take part in a reasonable on-topic debate. Oh well.
Posted on Reply
#20
Vader
Valantarb) Well, I guess this is the one SSD that might actually suit that use case, given that it has 128GB of flash that is always SLC, making its write endurance near endless. All the random writes go there, where they get bundled into larger writes and moved to the QLC. QLC write amplification thus becomes near zero. Those 128GB of SLC will make this last far longer running torrents 24/7 than any TLC drive.
I don't get this part. I mean, buy a $349 SSD to get 128 GB of usable space? Why would the algorithm always use the SLC for the current download anyway? That implies you aren't accessing any other files, otherwise they would be "hotter" and would take priority in the SLC.
Isn't torrent downloading another case of a WORM-type workload? How does the drive's configuration achieve longer write endurance? It's the same QLC NAND in the end. Besides, I don't see how it's relevant in a torrenting scenario; if you're really worried about write endurance, you've probably downloaded way past the SLC cache anyway.
Posted on Reply
#21
zlobby
VaderI don't get this part. I mean, buy a $349 SSD to get 128 GB of usable space? Why would the algorithm always use the SLC for the current download anyway? That implies you aren't accessing any other files, otherwise they would be "hotter" and would take priority in the SLC.
Isn't torrent downloading another case of a WORM-type workload? How does the drive's configuration achieve longer write endurance? It's the same QLC NAND in the end. Besides, I don't see how it's relevant in a torrenting scenario; if you're really worried about write endurance, you've probably downloaded way past the SLC cache anyway.
Shh, you are trying reason and logic where there is none! :D
Posted on Reply
#22
Valantar
VaderI don't get this part. I mean, buy a $349 SSD to get 128 GB of usable space? Why would the algorithm always use the SLC for the current download anyway? That implies you aren't accessing any other files, otherwise they would be "hotter" and would take priority in the SLC.
Isn't torrent downloading another case of a WORM-type workload? How does the drive's configuration achieve longer write endurance? It's the same QLC NAND in the end. Besides, I don't see how it's relevant in a torrenting scenario; if you're really worried about write endurance, you've probably downloaded way past the SLC cache anyway.
Torrents are definitely not WORM, they are typically a heap of random writes over a relatively long period of time. So many, many writes, but potentially also many reads.

As for the drive having only 128GB of accessible space, where did you get that from? I think you need to go back and reread something, because you clearly misread it. The drive has a fixed-size 128GB SLC write cache (which the user doesn't see or have direct access to) alongside 1.6TB of user-visible capacity. All writes first go to the write buffer unless it is full or the algorithm has some other reason to send writes directly to QLC (of which there are few that apply with a 128GB buffer, nor is there any real chance of it filling up). Why? Because the buffer allows the drive to aggregate writes into larger blocks that can be written to the QLC more optimally - this (along with SLC being faster to write to) is the main reason pseudo-SLC caching exists in the first place. As torrent downloads typically come from multiple sources simultaneously, at different speeds and comprising different parts of the same file, a good SLC cache allows the drive to hold on to these chunks until it can write them together rather than on the fly. The SLC being a write cache doesn't affect accessing other files already on the drive, as reading from the QLC directly is fast enough; the SLC buffer is not involved in read operations (except for data that has yet to be moved to the QLC).

And as for "downloading past the cache": that's not how write caching works. The cache is continuously emptied as long as the controller has available processing power and bandwidth, writing its contents to QLC as soon as the opportunity arises. Unless your downloads are pushing the drive to 100% active time, it will be continuously flushing its write cache. Even if the drive is 99% full, the write cache should be entirely empty once the drive has been idle for a handful of seconds. ("Idle time" for an SSD controller can be a break of a few milliseconds between operations, and that's still enough time to do some cleanup and cache flushing, though flushing large amounts of data from the cache obviously takes more time.)
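To put that flushing behavior into concrete terms, here's a toy model (my own illustration with made-up transfer rates, obviously not the actual firmware):

```python
# Toy model of a fixed pSLC write buffer drained to QLC during idle gaps.
# Made-up rates and workload; not the FuzeDrive's actual firmware behavior.
import random

CACHE_GB = 128  # fixed SLC write buffer

class Drive:
    def __init__(self):
        self.cache_used = 0.0   # GB currently sitting in the SLC buffer
        self.qlc_written = 0.0  # GB flushed to QLC so far

    def write(self, gb):
        # incoming writes land in the SLC buffer first; in consumer
        # workloads it never actually fills up
        self.cache_used = min(CACHE_GB, self.cache_used + gb)

    def idle(self, seconds, flush_gb_per_s=1.0):
        # every idle gap, however short, is used to drain the buffer to QLC
        flushed = min(self.cache_used, seconds * flush_gb_per_s)
        self.cache_used -= flushed
        self.qlc_written += flushed

d = Drive()
for _ in range(1000):
    d.write(random.uniform(0.0, 0.05))  # bursty small writes, e.g. torrent chunks
    d.idle(random.uniform(0.1, 1.0))    # short gaps between bursts
print(f"SLC buffer occupancy after the workload: {d.cache_used:.2f} GB")
```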

As for how pseudo-SLC achieves better write endurance, and especially a fixed cache, I will try to give you the cliff notes version of how flash endurance works, as it seems to be needed. This will take a bit of time though, as it needs a low level explanation. For the tl;dr: it's more difficult to tell 16 variations of data (QLC) from each other than 2 (SLC), and this difficulty gets worse as the flash wears down. Setting a fixed portion of flash as SLC (which essentially never wears out) thus allows for writes to the QLC to be grouped together for ideal 4k block sizes and spread across available flash for optimal wear leveling.

The lower write endurance of higher-bit-per-cell flash comes from simple physics: the more bits per cell, the more charge levels the controller needs to be able to distinguish. Flash essentially works by trapping electrons inside a "cell" and then measuring the voltage. For SLC it's two voltage levels equating to a single bit of data, 1 or 0. For MLC it's 4 (00, 01, 10, 11), for TLC it's 8 (000, 001, 010, etc.), and for QLC it's 16 distinct charge levels, all in a cell so small it can at best contain a couple hundred electrons, though often fewer (the last generations of planar, non-3D NAND had cells so small they barely fit a dozen electrons). The fewer electrons, the less leeway you have between your voltage levels, and the harder it is to distinguish between them: let's say we have a theoretical flash cell with room for up to 64 electrons. You then have a 32-electron margin between your 1 and 0 states for a single bit of data if it's an SLC cell, but only a 4-electron margin for QLC.

The voltage difference of a cell with 0 or 1 or even a handful of electrons in it is of course very, very small. Over time, electrons will inevitably get stuck, either in the cell or in the gates surrounding it. This then complicates measuring the voltage, as the stuck electrons influence the reading. Suddenly you have just 63 actual electrons under your control. For SLC this is no problem - distinguishing 2 levels is still simple. Distinguishing 16 is on the other hand much trickier, as you now have a weirdly skewed set of possible voltage levels, all of which are then less than four electrons apart. And the more electrons get stuck, the harder this gets, until you are no longer able to use the cell and it must be retired. For QLC, this could take just a couple of stuck electrons if the cell is small enough. For SLC, you could in theory have more than fifty stuck electrons in a cell with room for 64 and still easily get the correct reading every time, as long as the controller can account for the wear - at 60 stuck electrons you would still have the same margin for the reading as a single bit in a brand-new, never-used QLC cell! This is also why QLC has much lower write speeds - given the much lower tolerances, the controller needs to take more time to ensure the write operation completes correctly.

For the flexible pSLC caches most SSDs use, the endurance is far less than for "real" SLC, as the cell must still be able to work as QLC/TLC (depending on the drive) if the controller asks it to. If the SLC cache is fixed, you can treat the cache fully as SLC, with essentially the same write endurance and speed. Which means practically infinite endurance and very, very fast writes.
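Spelling out the arithmetic in that 64-electron example (toy numbers only, not real cell geometry):

```python
# Arithmetic behind the 64-electron example above; purely illustrative.
capacity = 64  # electrons our hypothetical cell can hold

# margin between adjacent charge levels when the cell is brand new
for name, levels in [("SLC", 2), ("MLC", 4), ("TLC", 8), ("QLC", 16)]:
    print(f"{name}: {levels} levels -> ~{capacity // levels} electrons of margin")

# wear: electrons stuck in the cell/gates shrink the controllable range
stuck = 60
print(f"SLC cell with {stuck} stuck electrons still has {capacity - stuck} "
      "controllable electrons - the same margin a brand-new QLC cell starts with")
```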

This plays together with what is called write amplification: SSDs have a base data block size of 4 KB (some enterprise drives have larger blocks, up to 128 KB). Any write smaller than this still requires the drive to (if the block already has data in it: read, erase, and then) write a full 4 KB block - so writing a single bit of data alone would, without any type of caching, result in wear on the drive equal to writing 4 KB of data. This means that over the lifetime of an SSD, the total amount of writes it is subjected to can be notably higher than the actual amount of data written to it, which of course causes it to wear out even more rapidly. Caching helps avoid this, as you can then save up all of these tiny writes, bundle them together, and not write anything to the QLC until you actually have a full 4 KB of data to write. Both fixed and dynamic caches help with this, and a RAM cache can too, but the latter runs the risk of data loss in case of a power failure.
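A quick back-of-the-envelope illustration of that effect (the 64-byte write size is made up purely for the example):

```python
# Back-of-the-envelope write amplification example with a made-up write mix.
BLOCK = 4096  # bytes per block, as above

writes, size = 1000, 64          # a thousand tiny 64-byte writes
payload = writes * size

# each tiny write hitting QLC directly costs a full block (read/erase/write)
uncached = writes * BLOCK
# the same data aggregated in the SLC buffer and flushed as whole blocks
cached = -(-payload // BLOCK) * BLOCK  # round up to whole blocks

print(f"write amplification without caching: {uncached / payload:.1f}x")  # 64.0x
print(f"write amplification with caching:    {cached / payload:.2f}x")    # ~1.02x
```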

Hope this clears things up a bit for you :)
zlobbyShh, you are trying reason and logic where there is none! :D
Thank you for always bringing such valuable insight to the debate. You're really begging to be reported, aren't you? I mean, if you actually had any basis for disagreeing with me you would be likely to actually argue your case. Instead you just sit back and toss out insults and ad hominems. Real classy. Doesn't leave much confidence in your side of the argument.
Posted on Reply
#23
Vader
ValantarTorrents are definitely not WORM, they are typically a heap of random writes over a relatively long period of time. So many, many writes, but potentially also many reads.
I thought write endurance was mostly related to the amount of data written, not the number of write operations. The way I see it, downloaded files should write the same amount of data as their size, once, and then be read many times if you seed the torrent - thus WORM. I do appreciate the explanation given on QLC vs SLC; although I knew most of that already, I learned about 4 KB blocks and how smaller writes can affect NAND life. Having said that, aren't most torrents divided into 4 KB blocks anyway?
ValantarAs for the drive having only 128GB of accessible space, where did you get that from?
I know the drive is not 128 GB, but you only get the benefits of SLC in those 128 GB - that's what I meant.

I do realize I made a mistake in my original post: I thought the drive worked like the StoreMI software and moved the "hotter" files to the SLC portion of the drive. I see now that this is only used for write operations, like most QLC + pseudo-SLC drives on the market.
ValantarSetting a fixed portion of flash as SLC (which essentially never wears out) thus allows for writes to the QLC to be grouped together for ideal 4k block sizes and spread across available flash for optimal wear leveling.
How does this differ from competing drives on the market, though? Aren't those drives' pseudo-SLC NAND cells as durable as this one's?
You seem to imply that because the SLC in this drive is fixed, it would reach higher endurance than the competition. Thing is, we don't know how high the SLC endurance rating is on the other drives, as the number they give is based on the QLC's endurance, since the SLC, being dynamic in size, is not guaranteed. But in a real-world scenario I would prefer dynamic SLC caching, as it offers a larger cache size without taking away the possibility of filling the entire drive if I wanted to.

Another way of putting it would be:
If I buy a 2 TB Intel 665p drive and leave ~400 GB unformatted, I would always have a guaranteed SLC cache. Let's assume, for the sake of argument, that the resulting SLC is 128 GB in size. What difference would it make in comparison to the ~$100 more expensive Enmotus drive? Apart from the bundled software, of course.
Posted on Reply
#24
Valantar
VaderI thought write endurance was mostly related to the amount of data written, not the number of write operations. The way I see it, downloaded files should write the same amount of data as their size, once, and then be read many times if you seed the torrent - thus WORM. I do appreciate the explanation given on QLC vs SLC; although I knew most of that already, I learned about 4 KB blocks and how smaller writes can affect NAND life. Having said that, aren't most torrents divided into 4 KB blocks anyway?

I know the drive is not 128 GB, but you only get the benefits of SLC in those 128 GB - that's what I meant.

I do realize I made a mistake in my original post: I thought the drive worked like the StoreMI software and moved the "hotter" files to the SLC portion of the drive. I see now that this is only used for write operations, like most QLC + pseudo-SLC drives on the market.

How does this differ from competing drives on the market, though? Aren't those drives' pseudo-SLC NAND cells as durable as this one's?
You seem to imply that because the SLC in this drive is fixed, it would reach higher endurance than the competition. Thing is, we don't know how high the SLC endurance rating is on the other drives, as the number they give is based on the QLC's endurance, since the SLC, being dynamic in size, is not guaranteed. But in a real-world scenario I would prefer dynamic SLC caching, as it offers a larger cache size without taking away the possibility of filling the entire drive if I wanted to.

Another way of putting it would be:
If I buy a 2 TB Intel 665p drive and leave ~400 GB unformatted, I would always have a guaranteed SLC cache. Let's assume, for the sake of argument, that the resulting SLC is 128 GB in size. What difference would it make in comparison to the ~$100 more expensive Enmotus drive? Apart from the bundled software, of course.
Sorry for the late reply, went away over the weekend so no time to respond until now. Anyhow, while in theory I agree that most of what you are saying ought to be the case, it largely isn't. (Note that all of this assumes that the pSLC cache is a fixed portion of the flash, which is how I interpret the wording of the original article here.)

- SSDs with dynamic pSLC caching don't have a fixed portion of flash set aside for caching, but rotate the cache across the entire drive as time passes. (There are drives with non-dynamic pSLC, but those typically have very small caches (~10GB for 1TB drives) and AFAIK still move the cache around the drive during use.) This does help with wear leveling, as a static cache would mean the rest of the drive gets a lot more wear than the cache portion - I'll get back to how this drive likely overcomes that. They also typically scale the size of their pSLC cache depending on how full the drive is - for example, the Intel 660p 1TB drive has a maximum cache size of 140GB, but gradually reduces this as the drive is filled, down to 12GB (see the rough sketch of that scaling after this list of points). Here's a graph showing how this scales across the three SKUs for that series.


- The size of the pSLC cache in dynamic cache drives is not affected by unformatted space or overprovisioning (at least in any drive I've ever seen a review of), so leaving ~400GB unformatted sadly won't give you a guaranteed pSLC cache. It will give you massive overprovisioning, which helps with wear leveling and thus extends the usable lifetime of the drive as the unformatted parts of the drive will then be available as spare cells for any flash that gets retired due to wear. But at best it will guarantee that the pSLC cache never shrinks below the level it is at with ~400GB of free space.

- While it's obvious that write amplification is bad, and that the ideal in terms of wear would be for any write operation to only write exactly as much data as is needed, this is essentially impossible with flash storage. If flash controllers interfaced with the NAND on a per-bit level, the controllers would need massive performance increases (and thus power draw) to reach current performance levels, the interconnects between NAND and controller would need to be massively more complex (and require enormous board space), and the RAM needed for caching drive information would expand dramatically (current drives need ~1GB of DRAM per TB of storage - moving from 4k blocks to 1b blocks might not result in direct 4096x multiplication of RAM needs, but likely a major increase). The reason some server SSDs have larger block sizes is that they are models meant for (nearly) only sequential writes, which are slightly faster with a larger block size, but they perform terribly for random writes. On the other hand a smaller-than-4k block size would absolutely kill sequential write speeds. So, for consumer usage and until someone comes up with a better technology than NAND flash, we're stuck with 4k blocks being read, erased and rewritten for each write done on an SSD. And it's a compromise that all in all works pretty well as long as the drive is able to minimize write amplification. These days most do this reasonably well.

- Torrents are split into blocks, but their size varies a lot, and even 4kb blocks wouldn't really help, as that would still require the torrent application to cache the data in RAM until the entire block is downloaded. Data needs to be stored from the first bit downloaded, after all. Given that most torrents are downloaded non-sequentially from multiple sources simultaneously at varying speeds, the write pattern is essentially random. This is what makes torrents a worst-case scenario for SSDs.

- The thing is, a 128GB pSLC cache doesn't just benefit that much of the drive, as the chances of hammering the drive with >128GB of writes fast enough for it to not be able to clear its cache are ... well, zero in a consumer use case. Unless you have a habit of copying multiple game installs between different NVMe SSDs (or from a RAM disk I guess), you'll never saturate the drive enough to fill the cache. It will be clearing itself continuously every chance it gets, shuffling data over to QLC as soon as there is an opening in the queue or the workload is light enough that the controller has capacity to spare - as do all pSLC drives. For consumer uses, the controller has capacity to spare essentially all the time.

- Yes, the pSLC of other QLC drives is (likely) exactly as durable as this (though with a caveat I'll get back to soon). The possible difference there comes from the persistence of the cache, both in size and portion of the flash used: A persistent large cache means that the chance of filling the cache doesn't increase as the drive fills (unlike the 660p above - filling a 12GB cache is far more likely than a 128GB one), and the size of the cache means it will under pretty much any consumer usage scenario have plenty of time to figure out the optimal way of writing the cached data to QLC while minimizing write amplification. This is of course down to firmware tuning, but dynamic caches are under much more pressure to empty themselves as some of the cells might soon be called on to work as QLC, and a setup like this will potentially have the headroom to do a more thorough job in its cache flushing, ensuring less write amplification while maintaining write performance. The QLC part of this drive will be worn slightly more than a drive with pSLC from cache flushes, as those writes will be spread over just 75% of the NAND rather than 100% of it in a drive with dynamic caching. This ought however to be compensated for by lower write amplification, as the drive can pretty much always afford to wait for a full 4k of data to write before moving data off the pSLC, no matter the workload. And that is precisely where a solution like this has a chance to beat the competition. Competing drives will (ideally) wear evenly across the entire drive, while for this drive the portion of flash used for the pSLC will essentially never see wear. Given sufficient minimization of write amplification, this can result in an overall increase in longevity over current dynamic solutions. Of course a dynamic solution with a huge cache and firmware tuned to minimize write amplification at all costs would be even better, but AFAIK no such drive exists.
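As mentioned in the first point, here's a rough numerical sketch of dynamic vs. fixed cache sizing. The 140GB/12GB endpoints are the 660p figures from above; the linear taper in between is my simplification, not Intel's actual stepping:

```python
# Rough model of a dynamic pSLC cache shrinking as the drive fills.
# Endpoints are the Intel 660p 1TB figures cited above; the linear
# taper in between is an assumption for illustration only.
MAX_CACHE_GB, MIN_CACHE_GB = 140, 12
FIXED_CACHE_GB = 128  # the FuzeDrive's permanent SLC region

def dynamic_cache_gb(fill: float) -> float:
    """Approximate pSLC cache size at a given drive fill level (0.0-1.0)."""
    return MAX_CACHE_GB - (MAX_CACHE_GB - MIN_CACHE_GB) * fill

for fill in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"{fill:>4.0%} full: dynamic ~{dynamic_cache_gb(fill):5.1f} GB "
          f"vs fixed {FIXED_CACHE_GB} GB")
```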

This turned all long and rambly on me, but I hope you see a bit more of where I'm coming from. Torrenting on an SSD is an edge case, but this drive ought to handle it better than most competitors.
Posted on Reply
#25
Vader
No worries, I'm happy to see a detailed response.

I can see now where you're coming from.

However, there is something else we haven't talked about, and that is network bandwidth vs QLC write speeds. Looking at the data in other reviews, QLC seems to settle at around 100 MB/s; it takes an almost saturated gigabit connection to overcome that, which is not all that common. There is still usefulness in the drive, I think; in the real world most people would be using the drive for other stuff as well, so that would take away some of the overhead.
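Quick numbers on that point (rounded, and ignoring protocol overhead):

```python
# Rough comparison of a saturated gigabit link vs. direct-to-QLC write speed.
gigabit_mb_s = 1000 / 8   # ~125 MB/s of payload on a fully saturated gigabit link
qlc_direct_mb_s = 100     # rough steady-state direct-to-QLC figure from reviews

print(f"Gigabit Ethernet: ~{gigabit_mb_s:.0f} MB/s")
print(f"Direct-to-QLC writes: ~{qlc_direct_mb_s} MB/s")
print("Only a nearly saturated gigabit connection outpaces the QLC itself.")
```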

Upon further examination, I believe there's a bigger picture that we've missed so far. With the included software, there's the possibility of arranging big storage spaces with (probably) mechanical HDDs, with this SSD working as a cache drive. With such an arrangement, I can see a good chance of this drive being more useful than competing drives, even more so than in the torrenting scenario.
Posted on Reply