Friday, July 3rd 2020
Enmotus, Company Behind Original StoreMI, Launches FuzeDrive NVMe SSD
Enmotus is the company behind the FuzeDrive software on which the original AMD StoreMI technology is based, which juggles data among your various physical storage devices based on heat (frequency of access) to improve performance. The company has now come up with its first hardware product, the FuzeDrive NVMe SSD. Built in the M.2-2280 form factor, the drive offers 1.6 TB of capacity and combines a Phison E12-series controller with 96-layer 3D QLC NAND flash memory. The drive uses a PCI-Express 3.0 x4 host interface.
Performance numbers for the FuzeDrive 1.6 TB SSD, as rated by its maker, include up to 3,470 MB/s sequential reads, up to 3,000 MB/s sequential writes, and an endurance rating of 5,000 TBW. The drive uses a 128 GB SLC cache to speed up write performance in moderate bursts. There's more to this drive than just its hardware: Enmotus includes software that juggles data between the 128 GB pseudo-SLC and QLC areas, and of course the FuzeDrive software that lets you build volumes of up to 15 TB in size by throwing in fixed physical drives of any shape and size. Enmotus is pricing the FuzeDrive 1.6 TB NVMe SSD at $349.

Update Jul 3rd: We've learned from Enmotus that this drive has a permanent 128 GB SLC cache that is exclusive of the 1.6 TB QLC user area. We believe this is possibly a 2 TB QLC drive in which a quarter of the user area is permanently assigned to work as SLC, rated for 30,000 P/E cycles. The FuzeDrive firmware transfers hot data between the SLC and QLC areas of the drive.
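To make the tiering idea concrete, here is a minimal sketch of heat-based promotion/demotion logic. Everything in it (the names, the access threshold, the block granularity) is an illustrative assumption, not Enmotus code:

```python
# Illustrative sketch of heat-based data tiering as described above.
# All names and thresholds are assumptions; this is not FuzeDrive code.
from collections import defaultdict

HOT_THRESHOLD = 8  # accesses per window before a block counts as "hot" (assumed)

class TieredVolume:
    def __init__(self):
        self.access_counts = defaultdict(int)  # block -> accesses this window
        self.fast_tier = set()                 # blocks resident on the fast tier

    def record_access(self, block: int) -> None:
        self.access_counts[block] += 1

    def rebalance(self) -> None:
        """Promote frequently accessed blocks, demote cold ones."""
        for block, count in self.access_counts.items():
            if count >= HOT_THRESHOLD:
                self.fast_tier.add(block)      # promote to SLC/NVMe
            else:
                self.fast_tier.discard(block)  # demote to the slow tier
        self.access_counts.clear()             # start a new measurement window
```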
27 Comments on Enmotus, Company Behind Original StoreMI, Launches FuzeDrive NVMe SSD
And yes, 2TB raw capacity (well, after overprovisioning) with 512GB of QLC set to act exclusively as SLC is the only thing that makes sense with conventional flash die capacities and channel counts.
Exactly that kind of person is targeted by such scams, err, 'products'. Manufacturers will always prey on such people, and it's our duty to protect the common folk from the greasy, greedy fingers of modern manufacturers. :rockout:
Most TLC SSDs barely scratch a few percent (if that) of their total write endurance before the PC it lives in is replaced wholesale. QLC is a perfectly viable alternative for budget and/or high capacity flash storage, and its endurance is passable for anything but enthusiast use given properly tuned firmware.
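For some rough numbers behind that claim (all inputs are assumed typical values, not measurements):

```python
# Back-of-the-envelope check of the endurance claim above.
# All inputs are assumptions chosen as typical values.
tbw_rating = 600          # TB, a common rating for a 1 TB TLC drive (assumed)
writes_per_day_gb = 20    # host writes per day (assumed)
years = 5                 # typical life of the PC before replacement (assumed)

total_written_tb = writes_per_day_gb * 365 * years / 1000
print(f"{total_written_tb:.1f} TB written = "
      f"{100 * total_written_tb / tbw_rating:.1f}% of rated endurance")
# -> 36.5 TB written = 6.1% of rated endurance
```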
Why would anyone buy a 2TB SSD as a sole boot disk?
b) Me, for one. Even my NAS boxes are on SSD.
c) (I guess?) Many creators and editors I know use north of 500GB of scratch files on an SSD. And that's what they do all day. With the rest of the SSD full of project files and raw content, it's easy to kill every wear-leveling algo.
b) Well, I guess this is the one SSD that might actually suit that use case, given that it has 128GB of flash that is always SLC, making its write endurance near endless. All the random writes go there, get bundled into larger writes, and are then moved to the QLC. QLC write amplification thus stays minimal. Those 128GB of SLC will make this last far longer running torrents 24/7 than any TLC drive (see the sketch at the end of this post).
c) Scratch disks don't use that many writes, they're mainly for reads - it's a typical WORM (write once read many) workload. Unless you're doing a professional-level amount of video work this is a non-issue. And if you are doing video professionally, why are you looking at stuff like this anyhow? Get proper equipment that is made for the job.
And, again, the actual (not dynamic) SLC cache here would make this drive eminently suitable for torrent downloads, unlike other SSDs.
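To illustrate the mechanism I'm describing, here's a minimal sketch of write coalescing through a persistent SLC buffer. The page size and function names are assumptions for illustration, not the drive's actual firmware:

```python
# Sketch of how a persistent SLC buffer can coalesce small random writes
# into full-page QLC programs. Page size and names are assumed.
QLC_PAGE = 16 * 1024          # bytes per QLC program unit (illustrative)
slc_buffer: list[bytes] = []  # host writes land here first, at SLC endurance

def host_write(data: bytes) -> None:
    """Accept a host write into the SLC region; no QLC wear yet."""
    slc_buffer.append(data)

def flush_to_qlc() -> None:
    """During idle time, move data to QLC in full pages only, so every
    QLC program is fully utilized and write amplification stays near 1."""
    blob = b"".join(slc_buffer)
    full = len(blob) // QLC_PAGE * QLC_PAGE
    program_qlc(blob[:full])          # whole pages -> minimal amplification
    slc_buffer[:] = [blob[full:]]     # the remainder waits for more data

def program_qlc(blob: bytes) -> None:
    pass  # stand-in for the actual QLC program operation
```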
That you seem to be suggesting that these same people are semi-professional video editors is verging on the absurd.
My point: The stupid hate against QLC needs to stop. It's a perfectly reasonable tradeoff for the vast majority of PC users, and it gives us more fast storage for roughly the same money, even if it does need better firmware and controllers to not perform terribly when stressed. It's not suitable for write-heavy enthusiast use (though admittedly not much enthusiast use is that write-heavy either), but that's fine. Not all parts are - or should be - made for that market. And while the overall need for >1TB non-enthusiast SSDs is relatively small, game install sizes are a significant driver for larger drives for non-enthusiasts. Installing 10-30 large-ish games, keeping them updated, and periodically uninstalling some and installing something new won't move the needle on write endurance over the expected lifespan of a drive like this.
I don't know what countries you two are from, but apart from some West European countries (most notably Germany) and parts of the US, people download tons of torrents. Fetch the top 10 torrents on any tracker and look at the IPs of the peers. Most torrent clients even put small country flags next to the IPs.
I could also cite you some world IP traffic statistics but to what end?
And while stepping away from an argument doesn't automatically make you wrong, it does leave arguments contrary to yours unanswered, which (given that the arguments are reasonably sound) undermines your credibility. Refusing to address issues raised about your reasoning only serves to highlight said issues.
Though this source is admittedly a few years out of date, it was the most comprehensive overview I found in three minutes of DuckDuckGo searching. At least the numbers come from Cisco, so they should be about as accurate as one could hope to get.
Given that torrents fall under the heading of "file sharing", we can see that they represent a tiny proportion of global data traffic. And, given that media shared through torrents tends to be large files (at least when compared to non-video web sites, but also when compared to streaming video at the same resolution), one can relatively safely assume that this correlates to an overall low volume of actual torrent traffic, and thus few users and downloads.
Over and out, like, literally.
Isn't torrent downloading another case of a WORM-type workload? How does the drive configuration achieve longer write endurance? It's the same QLC NAND in the end. Besides, I don't see how it is relevant in a torrenting scenario; if you're really worried about write endurance, you've probably downloaded way past the SLC cache anyway.
As for how pseudo-SLC achieves better write endurance, and especially a fixed cache, I will try to give you the Cliff's Notes version of how flash endurance works, as it seems to be needed. This will take a bit of time though, as it needs a low-level explanation. The tl;dr: it's harder to tell 16 voltage states (QLC) apart than 2 (SLC), and this difficulty gets worse as the flash wears down. Setting a fixed portion of flash as SLC (which essentially never wears out) thus allows writes to the QLC to be grouped together into ideal 4k block sizes and spread across the available flash for optimal wear leveling.
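A quick way to see that tl;dr in numbers (the voltage window is an arbitrary assumed value; only the ratios matter):

```python
# Back-of-the-envelope version of the tl;dr above: the same cell voltage
# window has to be split into more states the more bits you store per cell.
window_mv = 3200  # assumed usable threshold-voltage window, in millivolts

for name, bits in (("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)):
    levels = 2 ** bits
    margin = window_mv / levels
    print(f"{name}: {levels:2d} states, ~{margin:.0f} mV between states")
# SLC:  2 states, ~1600 mV between states
# QLC: 16 states, ~200 mV between states - far less room for wear-induced drift
```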
Hope this clears things up a bit for you :)

Thank you for always bringing such valuable insight to the debate. You're really begging to be reported, aren't you? I mean, if you actually had any basis for disagreeing with me, you would be likely to actually argue your case. Instead you just sit back and toss out insults and ad hominems. Real classy. It doesn't leave much confidence in your side of the argument.
I do realize I made a mistake in my original post: I thought the drive worked like the StoreMI software and moved the "hotter" files to the SLC portion of the drive. I see now that this is only used for write operations, like most QLC + pseudo-SLC drives on the market. How does this differ from competing drives, though? Isn't the pseudo-SLC NAND in those drives just as durable as this one's?
You seem to imply that because the SLC in this drive is fixed, it would reach higher endurance than the competition. Thing is, we don't know how high the SLC endurance rating is on the other drives, as the number they give is based on the QLC's endurance, since the SLC, being dynamic in size, is not guaranteed. But in a real-world scenario I would prefer dynamic SLC caching, as it offers a larger cache size without taking away the possibility of filling the entire drive if I wanted to.
Another way of putting it would be:
If I buy a 2TB Intel 665p drive and leave ~400GB unformatted, I would always have a guaranteed SLC cache. Let's assume, for the sake of argument, the resulting SLC is 128GB in size. What difference would it make in comparison to the ~$100 more expensive Enmotus drive? Apart from the bundled software, of course.
- SSDs with dynamic pSLC caching don't have a fixed portion of flash set aside for caching, but rotate the cache around across the entire drive as time passes. (There are drives with non-dynamic pSLC, but those typically have very small caches (~10GB for 1TB drives) and AFAIK still move the cache around the drive during use.) This does help with wear leveling, as a static cache would mean the rest of the drive gets a lot more wear than the cache portion - I'll get back to how this drive likely overcomes that. They also typically scale the size of their pSLC cache depending on how full the drive is - for example, the Intel 660p 1TB drive has a maximum cache size of 140GB, but gradually reduces this as the drive is filled, down to 12GB. Here's a graph showing how this scales across the three SKUs for that series.
- The size of the pSLC cache in dynamic cache drives is not affected by unformatted space or overprovisioning (at least in any drive I've ever seen a review of), so leaving ~400GB unformatted sadly won't give you a guaranteed pSLC cache. It will give you massive overprovisioning, which helps with wear leveling and thus extends the usable lifetime of the drive as the unformatted parts of the drive will then be available as spare cells for any flash that gets retired due to wear. But at best it will guarantee that the pSLC cache never shrinks below the level it is at with ~400GB of free space.
- While it's obvious that write amplification is bad, and that the ideal in terms of wear would be for any write operation to write only exactly as much data as is needed, this is essentially impossible with flash storage. If flash controllers interfaced with the NAND on a per-bit level, the controllers would need massive performance increases (and thus power draw) to reach current performance levels, the interconnects between NAND and controller would need to be massively more complex (and require enormous board space), and the RAM needed for caching drive information would expand dramatically (current drives need ~1GB of DRAM per TB of storage - moving from 4k blocks to 1-byte blocks might not result in a direct 4096x multiplication of RAM needs, but likely a major increase). The reason some server SSDs have larger block sizes is that they are models meant for (nearly) only sequential writes, which are slightly faster with a larger block size, but they perform terribly for random writes. On the other hand, a smaller-than-4k block size would absolutely kill sequential write speeds. So, for consumer usage and until someone comes up with a better technology than NAND flash, we're stuck with 4k blocks being read, erased and rewritten for each write done on an SSD. And it's a compromise that all in all works pretty well as long as the drive is able to minimize write amplification. These days most do this reasonably well. (There's a quick worked example of this after the list.)
- Torrents are split into blocks, but their size varies a lot, and even 4kb blocks wouldn't really help, as that would still require the torrent application to cache the data in RAM until the entire block is downloaded. Data needs to be stored from the first bit downloaded, after all. Given that most torrents are downloaded non-sequentially from multiple sources simultaneously at varying speeds, the write pattern is essentially random. This is what makes torrents a worst-case scenario for SSDs.
- The thing is, a 128GB pSLC cache doesn't just benefit 128GB worth of the drive: the chances of hammering the drive with >128GB of writes fast enough for it to not be able to clear its cache are ... well, zero in a consumer use case. Unless you have a habit of copying multiple game installs between different NVMe SSDs (or from a RAM disk, I guess), you'll never saturate the drive enough to fill the cache. It will be clearing itself continuously every chance it gets, shuffling data over to QLC as soon as there is an opening in the queue or the workload is light enough that the controller has capacity to spare - as do all pSLC drives. For consumer uses, the controller has capacity to spare essentially all the time.
- Yes, the pSLC of other QLC drives is (likely) exactly as durable as this (though with a caveat I'll get back to soon). The possible difference comes from the persistence of the cache, both in size and in the portion of the flash used: a persistent large cache means that the chance of filling the cache doesn't increase as the drive fills (unlike the 660p above - filling a 12GB cache is far more likely than a 128GB one), and the size of the cache means it will, under pretty much any consumer usage scenario, have plenty of time to figure out the optimal way of writing the cached data to QLC while minimizing write amplification. This is of course down to firmware tuning, but dynamic caches are under much more pressure to empty themselves, as some of the cells might soon be called on to work as QLC, and a setup like this will potentially have the headroom to do a more thorough job in its cache flushing, ensuring less write amplification while maintaining write performance. The QLC part of this drive will be worn slightly more by cache flushes than a drive with dynamic pSLC, as those writes are spread over just 75% of the NAND rather than 100% of it. This ought however to be compensated for by lower write amplification, as the drive can pretty much always afford to wait for a full 4k of data before moving anything off the pSLC, no matter the workload. And that is precisely where a solution like this has a chance to beat the competition: competing drives will (ideally) wear evenly across the entire drive, while for this drive the portion of flash used for the pSLC will essentially never see wear. Given sufficient minimization of write amplification, this can result in an overall increase in longevity over current dynamic solutions. Of course a dynamic solution with a huge cache and firmware tuned to minimize write amplification at all costs would be even better, but AFAIK no such drive exists. (A rough endurance budget for this layout is sketched after the list.)
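Here's the quick worked example of write amplification referenced above; the sizes are illustrative assumptions:

```python
# Worked example of write amplification for small random writes.
# Sizes are illustrative assumptions, not any specific drive's geometry.
block = 4096          # bytes erased and reprogrammed per operation
host_write = 512      # bytes the host actually wanted to write

wa_uncoalesced = block / host_write
print(f"Naive 512 B random write: WA = {wa_uncoalesced:.0f}x")   # 8x

# With an SLC staging area, eight such writes can be gathered into one
# full block before anything touches the QLC:
wa_coalesced = block / (8 * host_write)
print(f"Coalesced via SLC cache:  WA = {wa_coalesced:.0f}x")     # 1x
```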
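And the rough endurance budget, using the figures from the article plus one assumption (the QLC P/E rating, which isn't published):

```python
# Rough endurance budget using the numbers from the article, plus one
# assumption: the QLC P/E rating, which Enmotus does not state.
slc_gb, slc_pe = 128, 30_000    # fixed SLC region and its rated P/E (article)
qlc_tb, qlc_pe = 1.6, 1_000     # user area; ~1,000 P/E is an assumed figure

slc_budget_tb = slc_gb * slc_pe / 1000   # writes the SLC region can absorb
qlc_budget_tb = qlc_tb * qlc_pe          # QLC budget at write amplification ~1

print(f"SLC region alone: ~{slc_budget_tb:,.0f} TB of writes")  # ~3,840 TB
print(f"QLC area (WA~1):  ~{qlc_budget_tb:,.0f} TB of writes")  # ~1,600 TB
# The 5,000 TBW rating only makes sense if most churn stays in the SLC
# region and flushes to QLC are well coalesced - which is the whole point.
```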
This turned all long and rambly on me, but I hope you see a bit more of where I'm coming from. Torrenting on an SSD is an edge case, but this drive ought to handle it better than most competitors.
I can see now where you're coming from.
However, there is something else we haven't talked about, and that is network bandwidth vs. QLC write speeds. Looking at the data in other reviews, QLC seems to settle at around 100MB/s; it takes an almost saturated gigabit connection to overcome that, which is not all that common. I think there is still usefulness in the drive; in the real world most people would be using the drive for other stuff as well, so that would take away some of the headroom.
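Quick arithmetic on that point (the 100MB/s figure is the one quoted from reviews; the rest follows from it):

```python
# How long a saturated gigabit link takes to fill the 128GB cache,
# given the ~100 MB/s sustained QLC write speed quoted from reviews.
gigabit_mbps = 1000 / 8          # ~125 MB/s of incoming torrent traffic
qlc_drain_mbps = 100             # sustained QLC write speed (from reviews)
cache_gb = 128

net_fill = gigabit_mbps - qlc_drain_mbps            # cache grows at ~25 MB/s
minutes = cache_gb * 1000 / net_fill / 60
print(f"~{minutes:.0f} min of saturated gigabit before the cache fills")
# -> ~85 min, and any pause in the download lets the cache drain again
```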
Upon further examination, I believe there's a bigger picture we've missed so far. With the included software, there's the possibility of arranging big storage spaces with (probably) mechanical HDDs, with this SSD working as a cache drive. With such an arrangement, I can see a good chance of this drive being more useful than competing drives, even more so than in the torrenting scenario.