Thursday, August 29th 2024

Disabled SLC Cache Tested on M.2 SSD, Helps Performance in Some Cases

Gabriel Ferraz, maintainer of the TechPowerUp SSD database and content creator, recently published an article examining how SLC (Single-Level Cell) caching affects SSD performance. Using a Pichau Aldrin Pro 2 TB SSD with an Innogrit IG5236 controller and YMTC 128-layer TLC NAND, Gabriel uncovered both the advantages and the potential drawbacks of the feature. With the SLC cache enabled, acting as a high-speed write buffer, the SSD achieved write speeds of up to 6.5 GB/s, but only until 691 GB had been written. Beyond that point, speeds dropped to 2.2 GB/s and then to 860 MB/s as the drive filled up.

Disabling the SLC cache delivers more consistent performance, a steady 2.1 GB/s across the drive's entire capacity, but with lower peak speeds. Testing also examined the impact on power consumption and efficiency. With the SLC cache active, the SSD consumed approximately 5 W while delivering over 3000 MB/s of bandwidth. Disabling the cache reduced power consumption, but at the cost of roughly halving bandwidth to around 1900 MB/s, resulting in lower overall efficiency. Maximum power consumption with the cache enabled peaked at 7.3 W, compared to a lower figure when operating in constant TLC mode. Below, you can see some of the performance benchmarks published on The Overclock Page.
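As a rough illustration of what those numbers imply for a long transfer, here is a minimal back-of-the-envelope sketch in Python. The 691 GB cache size and the speed tiers come from the article; the point where the cached drive falls from 2.2 GB/s to 860 MB/s is not stated, so the 1400 GB boundary below is purely an assumption, and the model ignores folding overhead and thermals.

def write_time_s(total_gb, tiers):
    """Seconds to write total_gb, given (tier_end_gb_or_None, speed_gb_per_s) tiers."""
    elapsed, written = 0.0, 0.0
    for end, speed in tiers:
        end = total_gb if end is None else min(end, total_gb)
        if end > written:
            elapsed += (end - written) / speed
            written = end
        if written >= total_gb:
            break
    return elapsed

cached = [(691, 6.5), (1400, 2.2), (None, 0.86)]   # 1400 GB boundary is assumed
cache_off = [(None, 2.1)]                          # steady speed with the SLC cache disabled

for total in (200, 691, 1500):
    print(f"{total:>5} GB: cached ~{write_time_s(total, cached):5.0f} s, "
          f"cache off ~{write_time_s(total, cache_off):5.0f} s")

For anything that fits inside the cache, the cached configuration finishes roughly three times faster, which lines up with the file-copy results discussed below.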
Interestingly, in real-world scenarios such as game loading times and Windows boot speeds, the difference between cached and non-cached performance was minimal. Synthetic game benchmarks and Windows boot tests showed negligible variation, suggesting that current software rarely takes advantage of the extra speed the SLC cache offers, most likely because software workloads are dominated by random 4K operations, which NAND flash handles far less efficiently than the sequential operations it excels at. File transfer tests, however, tell a different story: copying large files and game installations took more than twice as long with the cache disabled, highlighting the significant advantage of the SLC cache in data-intensive sequential tasks.
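The random-versus-sequential point is easy to probe on any drive. Below is a minimal sketch (not the methodology used in the article) that compares large sequential writes against random 4 KiB writes on Linux or macOS; the file name is arbitrary, and because only a single fsync is issued at the end, the OS page cache will flatter both numbers somewhat.

import os, random, time

PATH = "ssd_scratch.bin"            # arbitrary scratch file on the SSD under test
FILE_BYTES = 256 * 1024 * 1024

def bench(block_size, sequential):
    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT, 0o644)
    buf = os.urandom(block_size)
    offsets = list(range(0, FILE_BYTES, block_size))
    if not sequential:
        random.shuffle(offsets)      # random order approximates 4K random I/O
    start = time.perf_counter()
    for off in offsets:
        os.pwrite(fd, buf, off)
    os.fsync(fd)                     # flush so the drive, not RAM, is being timed
    elapsed = time.perf_counter() - start
    os.close(fd)
    return FILE_BYTES / elapsed / 1e6   # MB/s

print(f"sequential 1 MiB blocks: {bench(1 << 20, True):8.1f} MB/s")
print(f"random     4 KiB blocks: {bench(4096, False):8.1f} MB/s")
os.remove(PATH)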
For complete benchmarks and in-depth explanation, check out the original article by Gabriel.
Source: The Overclock Page

44 Comments on Disabled SLC Cache Tested on M.2 SSD, Helps Performance in Some Cases

#26
Wirko
InVasManiWould be nice if the drive allowed you to disable SLC cache at a certain % full threshold.
Why would that be useful?

Anyway, I've seen occasional reports that SLC cache becomes ineffective when the SSD is close to full. I tend to blame internal fragmentation for that, as SLC caching probably needs large extents of contiguous free space, so it can write large chunks of data sequentially (random/fragmented would be slow).
GabrielLP14First of all, thanks for all the comments, and I hope you guys liked the content. My next one will be disabling the DRAM cache in an NVMe SSD to see real-world scenarios, and we hope to see the "REAL" difference
If I'm allowed to make a suggestion, here it is: please record the SMART data and report how much data each of your benchmarks writes to the SSD. As far as I'm aware, no SSD reviewer does that. OS booting and game loading probably write little data, but it would be nice to have proof. That would be the reason why disabling the SLC cache has little effect.
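For anyone who wants to try that suggestion, a minimal sketch of the idea on Linux with nvme-cli installed might look like the following; the device path is an example, the script needs root, and the JSON field name can vary between nvme-cli versions.

import json, subprocess

DEV = "/dev/nvme0"   # example device, adjust to your system (requires root)

def host_bytes_written():
    # NVMe "Data Units Written" is reported in units of 1,000 x 512 bytes.
    out = subprocess.run(["nvme", "smart-log", DEV, "--output-format=json"],
                         capture_output=True, text=True, check=True).stdout
    return json.loads(out)["data_units_written"] * 512_000

before = host_bytes_written()
input("Run the benchmark (boot, game load, file copy), then press Enter...")
after = host_bytes_written()
print(f"Host writes during the test: {(after - before) / 1e9:.2f} GB")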
#27
InVasMani
It looked like beyond a certain % threshold it performed worse. So maybe with the right threshold point you could get a bit better balance between cache vs no cache.
#28
GabrielLP14
SSD DB Maintainer
WirkoWhy would that be useful?

Anyway, I've seen occasional reports that SLC cache becomes ineffective when the SSD is close to full. I tend to blame internal fragmentation for that, as SLC caching probably needs large extents of contiguous free space, so it can write large chunks of data sequentially (random/fragmented would be slow).


If I'm allowed to make a suggestion, here it is: please record the SMART data and report how much data each of your benchmarks writes to the SSD. As far as I'm aware, no SSD reviewer does that. OS booting and game loading probably write little data, but it would be nice to have proof. That would be the reason why disabling the SLC cache has little effect.
I don't do that since the SSDs are secondary disks
bugA DRAM cache vs HMB would also be nice, but I guess it's hard to pick two drives that are similar enough for it to be a somewhat apples-to-apples comparison.
It's hard to do that since the controllers support either HMB or DRAM. Only a handful support both
#29
Waldorf
"I dont always write 700 GB, but if i do, i prefer slc..."
Please transfer data responsibly. :D

(for those outside the Americas, check the Dos Equis commercials on YouTube)
#30
lexluthermiester
dgianstefaniIt's not about performance, it's about cheap.
Cheap is as cheap does... And if it looks like garbage and acts like garbage...
#31
chrcoluk
Sunlight91That's why I don't like it when TechPowerUp rates a large SLC cache as something positive. 1000-2000 MB/s write speed is still plenty for most applications. But when it drops to 600 MB/s, or worse, 100 MB/s for QLC drives, then it's just awful. Even your Internet speed can be faster than that.
I would think the reason is obvious: essentially, the longer the transfer goes on, the less likely a real use case will encounter it. The drives with the smallest SLC cache can be exhausted in some real-world cases, but drives with the largest SLC cache will probably never hit the scenario where the pSLC is exhausted with a huge backlog of data having to be moved out of it.

For all my drives, e.g., the biggest likely sustained write is when/if I am migrating data from a drive it is replacing, a one-off event.

After I got my SN850X I did move a couple of hundred gigs worth of games off my 980 Pro, though. But I won't be doing this sort of thing often. Plus it wasn't all in one go, one game at a time, with gaps in between.
InVasManiWould be nice if the drive allowed you to disable SLC cache at a certain % full threshold.
Probably the worst time to get rid of it; pSLC also increases endurance, and you want that if the drive is nearly full.
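To put the "how long does a transfer have to run" point in numbers, here is a minimal sketch using the 691 GB cache of the tested drive plus a couple of smaller example cache sizes and source speeds (everything except the 691 GB figure is just an illustrative assumption).

# Minutes of continuous writing needed to exhaust a pSLC cache of a given size.
# 691 GB is the tested drive's cache; the other sizes and rates are examples.
cache_sizes_gb = [25, 100, 691]
source_rates_mb_s = {"1 GbE download": 115, "SATA SSD source": 550, "NVMe-to-NVMe copy": 6500}

for name, rate in source_rates_mb_s.items():
    line = ", ".join(f"{gb} GB in {gb * 1000 / rate / 60:.1f} min" for gb in cache_sizes_gb)
    print(f"{name:>18}: {line}")

Even in a saturated NVMe-to-NVMe copy, the tested drive's cache lasts almost two minutes of uninterrupted writing; over gigabit Ethernet it would take well over an hour and a half to empty.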
#32
InVasMani
chrcolukI would think the reason is obvious: essentially, the longer the transfer goes on, the less likely a real use case will encounter it. The drives with the smallest SLC cache can be exhausted in some real-world cases, but drives with the largest SLC cache will probably never hit the scenario where the pSLC is exhausted with a huge backlog of data having to be moved out of it.

For all my drives, e.g., the biggest likely sustained write is when/if I am migrating data from a drive it is replacing, a one-off event.

After I got my SN850X I did move a couple of hundred gigs worth of games off my 980 Pro, though. But I won't be doing this sort of thing often. Plus it wasn't all in one go, one game at a time, with gaps in between.


Probably the worst time to get rid of it; pSLC also increases endurance, and you want that if the drive is nearly full.
I didn't really take the endurance angle into account, but that's a fair consideration. I was simply looking at it from a performance angle, where it could make sense, and maybe it's generally fine to have that option for a typical consumer as well, not sure. I don't think most consumers write to disk too heavily, to be honest, so a lot of endurance concerns are probably a bit overstated. Basically, you could perhaps look at it a bit like short-stroking an HDD, but kind of in reverse with the cache. Not a perfect analogy, perhaps, but from a performance angle it's a bit of an inverse scenario. The whole purpose of short stroking was to minimize the performance cratering from seek access.
#33
Wirko
chrcolukProbably the worst time to get rid, pSLC also increases endurance, and you want that if the drive is nearly full.
How can pSLC increase endurance?
InVasMania lot of endurance concerns are probably a bit overstated
Yes, agreed. Those who are overly worried about endurance AND actually do demanding stuff with their SSDs, such as lots of small file writing/updating, AND are too cheap to buy a higher tier or enterprise SSD, should simply leave a couple hundred gigabytes free.
#34
GabrielLP14
SSD DB Maintainer
chrcolukI would think the reason is obvious: essentially, the longer the transfer goes on, the less likely a real use case will encounter it. The drives with the smallest SLC cache can be exhausted in some real-world cases, but drives with the largest SLC cache will probably never hit the scenario where the pSLC is exhausted with a huge backlog of data having to be moved out of it.
Precisely right.
#35
Scrizz
bugYeah, comparing enterprise QLC to consumer TLC. So very relevant. Care to compare prices as well?
You actually think the price difference comes from the NAND? :laugh:
The NAND is the same. There might be some binning, but it's the same NAND.
Price difference mainly comes from Controller/Firmware/Support; you're paying for the R&D.
It would actually make more sense from a supply chain/cost perspective to have just 1 "type" of NAND.
#36
bug
ScrizzYou actually think the price difference comes from the NAND? :laugh:
The NAND is the same. There might be some binning, but it's the same NAND.
Price difference mainly comes from Controller/Firmware/Support; you're paying for the R&D.
It would actually make more sense from a supply chain/cost perspective to have just 1 "type" of NAND.
Where did I say the price comes from the NAND? I just said it's an apples-to-oranges comparison, not least because enterprise drives are engineered for endurance.
#37
Wirko
ScrizzYou actually think the price difference comes from the NAND? :laugh:
The NAND is the same. There might be some binning, but it's the same NAND.
Price difference mainly comes from Controller/Firmware/Support; you're paying for the R&D.
It would actually make more sense from a supply chain/cost perspective to have just 1 "type" of NAND.
A 30 TB enterprise SSD costs twice as much as the 15 TB version of the same model. 60 TB is twice as much again. Same controller, firmware, support, R&D, probably same PCB.

I'm aware I'm making an enterprise-to-enterprise comparison instead of enterprise-to-consumer but still. There must be a significant price difference due to the NAND.
#38
lexluthermiester
ScrizzThe NAND is the same. There might be some binning, but it's the same NAND.
Um, no it's not. TLC and QLC are NOT the same. If you really think that, you need to go do some reading...
#39
bug
lexluthermiesterUm, no it's not. TLC and QLC are NOT the same. If you really think that, you need to go do some reading...
I think he meant the QLC NAND that goes into enterprise drives is the same as the one that goes into consumer drives, therefore it is ok to compare enterprise and consumer drives. We know it isn't, but I believe that's what he meant.
#40
lexluthermiester
bugI think he meant the QLC NAND that goes into enterprise drives is the same as the one that goes into consumer drives, therefore it is ok to compare enterprise and consumer drives. We know it isn't, but I believe that's what he meant.
Oh, I think I missed that context. However, THAT is also very incorrect.
#43
Wirko
bugOne is 4 chips / 1Tbit, the other is 6 chips / 1Tbit. That's the most common trick of the enterprise drives. Endurance is just as crappy as consumer drives, but there's 50% more chips to spread the wear.
No, there's something else @Scrizz is pointing the finger at: the N38A die can hold 1 Tb in QLC mode (consumer SSD) or 3/4 Tb in TLC mode (enterprise SSD). This dual use is a rare exception. Making a QLC die work with fewer bits per cell is certainly possible but not trivial (the usual 16 KiB page size becomes ... what? 12 KiB?). Maybe the N38A was optimised for both QLC and TLC.

Enterprise drives also employ eMLC, eTLC, eQLC. This may mean different things to different manufacturers but, as Intel explained in the MLC era, it's made up of three components: binned NAND, more overprovisioned space, and slower writing. Slower writing is more accurate and can push lower voltages to the storage cells when writing and erasing. The voltages for erasing are higher than those for writing; that's probably how it has to be, and so I assume that erasing contributes most to NAND wear.

So that "some" binning is actually not something to overlook, and may increase the (market) value of a NAND die considerably.
#44
lexluthermiester
WirkoThis dual use is a rare exception.
Very rare.
WirkoThe voltages for erasing are higher than those for writing, that's probably how it has to be
Correct...
Wirkoso I assume that erasing contributes most to NAND wear.
...also correct.