
100% of my QLC drives are now dead.

Heh. That's why I'm saving for a sweet SLC. My first Intel still works, though it's only 80GB; it could hold the OS alone, but I keep it as a spare, just in case.
TLC is more than good enough for consumers. SLC is unnecessary.
 
They don't increase density by making the cells smaller. Since the invention of 3D NAND, they have been increasing the number of layers to increase capacity, and as a result, drive endurance has improved since the last days of planar NAND.
That is correct. You will note, however, that the 4th page of the first article says that because of V-NAND, Samsung was able to go back from 16 nm to 40 nm and make the cells big again. Since then, the manufacturing process has again shrunk to 15 nm, so we're back where MLC hit a roadblock. Of course, we can escape upwards, as you correctly pointed out. But while it seems we used all 3 axes to shrink things down until we got V-NAND, we are now limited to playing with the Z axis.
 
That is correct. You will note, however, that the 4th page of the first article says that because of V-NAND, Samsung was able to go back from 16 nm to 40 nm and make the cells big again. Since then, the manufacturing process has again shrunk to 15 nm, so we're back where MLC hit a roadblock. Of course, we can escape upwards, as you correctly pointed out. But while it seems we used all 3 axes to shrink things down until we got V-NAND, we are now limited to playing with the Z axis.
I don't believe that we have gone back to 15 nm. TechInsights shows that all critical dimensions are very close to 40 nm or higher for all leading-edge NAND makers.
 
Apologies, I misread something when I looked up some specs. Indeed, it seems the fab node isn't usually mentioned. Hopefully we're still in the safe zone.
But my original point still stands: past a certain point, you can't keep shoving more charge levels into a cell and hope to read them back if you don't make the cell bigger.
 
Apologies, I misread something when I looked up some specs. Indeed, it seems the fab node isn't usually mentioned. Hopefully we're still in the safe zone.
But my original point still stands: past a certain point, you can't keep shoving more charge levels into a cell and hope to read them back if you don't make the cell bigger.
I agree and that's why I'm leery of what PLC will do to consumer SSDs.
 
I agree and that's why I'm leery of what PLC will do to consumer SSDs.
If the NAND makers really try to stuff 32 levels into one cell and fail, they will attempt to stuff as many as possible. 21 levels is enough for 13 bits per three cells. 26 levels is enough for 14 bits per three cells.
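For anyone who wants to check those figures: with L charge levels per cell and n cells grouped together, you can recover ⌊n · log2(L)⌋ whole bits. A quick sketch of the arithmetic (plain Python, nothing drive-specific):

```python
import math

# Whole bits recoverable from a group of `cells` cells with `levels` levels each.
def bits_per_group(levels: int, cells: int = 3) -> int:
    return math.floor(cells * math.log2(levels))

for levels in (16, 21, 26, 32):
    print(f"{levels} levels -> {bits_per_group(levels)} bits per 3 cells")
# 16 -> 12, 21 -> 13, 26 -> 14, 32 -> 15
```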
 
TLC is more than good enough for consumers. SLC is unnecessary.
Yeah, but for how long? I build PCs with the aim that they last 10 years, more if possible.

Stability and uptime are a must.
 
TLC is more than good enough for consumers. SLC is unnecessary.
Unnecessary yes.... but also fast :D

Isn't unnecessary the name of the game here in the enthusiast community?

I agree and that's why I'm leery of what PLC will do to consumer SSDs.
See, this is why I was hypothesizing that QLC may in time become the good guy of SSDs, like TLC is the good guy today.
 
Yeah, but for how long? I build PCs with the aim that they last 10 years, more if possible.

Stability and uptime are a must.
Especially if you're budgeting for SLC drives... Optane/PCM is much more enduring.
I've gone through spec sheets, and no NAND advertises/specifies 10+ year (or even 5+ year) data retention.

Question:
Are QLC drives more sensitive to thermals, voltage changes, transients, etc. than TLC drives?
(The lil 'sleeve' on Steam Deck drives seems to be a bidirectional shield; maybe WiFi/LTE/5G modem related?)
Other than poor cooling, a laptop should not be killing drives like this.
(Laptop's 3.3V rail on the fritz?)

I've managed to kill 3 NVMe drives:
1. I accidentally kneeled on one and broke it into two pieces.
2. I shorted out surface-mount components on a 16GB M10 Optane with a thermal spreader.
3. I cooked a PM963 960GB 'server' NVMe drive, installing and playing Unity Engine games off it while 'naked' and plugged into a USB bridge (known to run drives hot).

So, the only times I've seen anything like this, it was thermal/voltage related...

There's clearly some common factor among your QLC drives that's giving them extraordinarily short lives (in your usage).
You're not BSing, but hardly anyone is grasping how unusual that is, or asking "why?".

You have established a pattern: not one or two anecdotal failures, but a pattern of QLC drives failing prematurely.
[attachment: SMART screenshot] QED.
99.0% remaining health/writes, yet a warning is flagged and the drive is effectively inoperable.

In my limited-experience opinion:
These QLC units (specifically, the ones in your possession and use) are either doing tons of internal re-writes/error correction and/or are chronically overheating.
Please note: I'm not accusing you of doing anything wrong; rather, I'm trying to drill down to a defect or failure in the machine these are dying in.
This is abnormal, even within the known issues with QLC drives.

Can you all stop your anti-QLC circlejerk and USE YOUR BRAINS for 5 seconds?

If QLC NAND was so bad that drives using it were consistently failing at the rate that OP has experienced, do you really think (and this is the part where you need to use your brains) that companies would be selling products using QLC? No, they would not, because they would be haemorrhaging money and customer satisfaction like no tomorrow.
A lil harsh, but true.
There's no need to defend QLC; it has factually improved to the point that manufacturers feel confident selling enterprise and industrial NAND devices using it.
It has downsides and is not appropriate for all use cases and scenarios, but its reliability is specifiable.
What I do know is that I, and millions of others, have and use QLC drives without problems, and will continue to do so.
The QLC hate train (IMHO) originates from SSD manufacturers pushing immature QLC devices onto consumers, enthusiasts, and gamers. Effectively, we got forced into beta testing a new technology, and at a premium. The hate is earned, even if inapplicable today.

Nothing is going to help the OP. The drives are dead. Nothing anyone in these forums, or anywhere else on the internet, can say will help them. The purpose of this thread is simply to let everyone know what people's experiences have been. It just adds to the already expansive knowledge base of experience with QLC. It is only natural for others to chime in.
Disagree.
OP has something wrong; TLC SSDs are merely less susceptible (or not susceptible at all) to whatever that issue is.
And figuring that out will help others who may be researching why their NVMe drives keep dying.
IMO, that's what forums like TPU are best at. Ex: I've found helpful ancient posts from you and eldairman, etc. in my own research on issues.

4 dead QLC drives out of 4 QLC drives is not an indictment of QLC; it's an indictment of something else. What that is, I don't know.
Very Very much, 'this'.

I have a question: are QLC SSDs more prone to malfunction from bad DC off the power supply than TLC SSDs?
If I were the OP, I would take my PC to a computer repair shop and ask them to check the power supply.
It doesn't matter how good a power supply is; bad things can happen to one, even from the beginning, because something is faulty somewhere inside it.
While I've not yet found a direct answer, the logically inescapable answer is yes: QLC has to distinguish 16 charge states per cell versus TLC's 8, so its read margins are tighter and more sensitive to disturbance.
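A purely illustrative sketch of why those margins shrink (the threshold-voltage window below is a made-up round number, not a device spec):

```python
# Illustrative only: given a fixed usable threshold-voltage window, more states
# per cell means a narrower margin between adjacent states.
VT_WINDOW_MV = 6400  # assumed window in millivolts; real windows vary by process and wear

for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4), ("PLC", 5)]:
    states = 2 ** bits
    print(f"{name}: {states:2d} states, ~{VT_WINDOW_MV / states:.0f} mV per state")
```

Halving the per-state window with each added bit is why QLC is plausibly more sensitive to supply noise, heat, and cell wear than TLC.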

With only my limited knowledge and experience... I'd say OP has something wrong with power delivery.

I'm not sure if OP was clear that all these drives died in the same machine, but I'd assume stray voltage on earth/ground could cause R/W issues with QLC.
 
That could also be caused by the folder you copied containing lots of small files. Windows' copy function sucks when it comes to lots of small files, and it'll regularly drop into single-digit MB/s, if not KB/s.
It was an ISO, so that seems unlikely.
A Crucial BX500, I think.
 
Did the OP ever give a list of the models that were failing?
 
Yeah, but for how long? I build PCs with the aim that they last 10 years, more if possible.

Stability and uptime are a must.
That means redundancy, not overpriced hardware. Just more of it.
 
I own ~10 NVMe SSDs. Four of them are QLC drives.
ALL TLC drives are 100% fine.
ALL QLC drives died within days to a couple of months of light usage (game installations).
Now my latest, a 10-week-old Crucial P3 Plus 4TB, is starting to die and is unusable (games take 15 minutes to load instead of seconds, textures often never load, and SMART shows 898 critical errors).
A failure rate of 100% is insanity... I'll never buy another QLC drive in my life.
[attachments: SMART screenshots]
DAYS? That's definitely not the NAND's fault, then. Probably a controller kink not worked out properly.
A couple of months? I COULD see that, but if your usage is actually light, even that sounds like a stretch to blame on the NAND. Then again, I've never had a QLC drive in my life to know for sure.
 
Yeah, but for how long? I build PCs with the aim that they last 10 years, more if possible.

Stability and uptime are a must.
The Crucial BX300 was (well, still is) 3D NAND MLC, with a specified write endurance of 160 TB for the 480 GB model. The MX200 had (well, still has) similar endurance data but was built with 16 nm planar MLC. Would you choose either of those for a new build rather than a modern TLC SSD from whichever brand you trust most?
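To put that 160 TB rating in perspective against a 10-year build target, a quick back-of-the-envelope calculation (decimal units; the 10-year horizon is borrowed from the comment above):

```python
# Rough endurance math for a 480 GB drive rated at 160 TB written (TBW).
capacity_gb = 480
tbw_gb = 160_000  # 160 TB expressed in GB

full_drive_writes = tbw_gb / capacity_gb    # ~333 complete drive fills
gb_per_day_over_10y = tbw_gb / (10 * 365)   # ~44 GB/day, every day, for 10 years

print(f"{full_drive_writes:.0f} full-drive writes")
print(f"{gb_per_day_over_10y:.0f} GB/day sustained over 10 years")
```

Most desktop workloads write a small fraction of ~44 GB/day, which is why rated endurance is rarely the limiting factor on healthy drives.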
 
The Crucial BX300 was (well, still is) 3D NAND MLC, with a specified write endurance of 160 TB for the 480 GB model. The MX200 had (well, still has) similar endurance data but was built with 16 nm planar MLC. Would you choose either of those for a new build rather than a modern TLC SSD from whichever brand you trust most?
'twas a gift :D

It's showing its age already; games take their sweet time to load. M.2 and U.2 slots are free, but there's no way to get a decent drive where I live, so I'll wait till I can do some shady smuggling move again.
 
Did the OP ever give a list of the models that were failing?
He said one of them was a 4TB Crucial P3 Plus (Phison E21 and 176-layer QLC, I'm pretty sure).

Which happens to be identical to the 4TB Fanxiang S660 I returned. Maybe I dodged a bullet there...
 
When Samsung says its 990 Pro NVMe drives are 'MLC 3-bit', is that just a fancy way of saying TLC with a proprietary controller? So it's not actually MLC on the 990 Pro? Samsung does literally say 'MLC 3-bit' on the product page for it... so... I'm just confused at this point.
 
I own ~10 NVMe SSDs. Four of them are QLC drives.
ALL TLC drives are 100% fine.
ALL QLC drives died within days to a couple of months of light usage (game installations).
Now my latest, a 10-week-old Crucial P3 Plus 4TB, is starting to die and is unusable (games take 15 minutes to load instead of seconds, textures often never load, and SMART shows 898 critical errors).
A failure rate of 100% is insanity... I'll never buy another QLC drive in my life.
[attachments: SMART screenshots]
Can you post the full information, not a cut-down version?
You can get warnings like that from overheating as well.

Some drives also report NAND writes vs. total host writes, so there are multiple values that all add meaning to the problem.
[attachment: example SMART screenshot]


The question, obviously, is what's killing your drives. Are they connected to CPU or chipset lanes when they fail? All in the same PC?

If they're all on the one AM5 system, I'd be finding out what voltages/rails the NVMe slots are using. It could be something like higher SoC voltages affecting the CPU-attached slot, or just a downright faulty slot/board.
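For anyone wanting to supply the 'full information' asked for above: on Linux, smartmontools can dump the complete NVMe health log as JSON. A minimal sketch (assumes smartmontools 7+ for the -j flag, typically needs root; /dev/nvme0 is a placeholder device path):

```python
# Pull the full NVMe SMART/health log via smartctl's JSON output.
import json
import subprocess

raw = subprocess.run(
    ["smartctl", "-a", "-j", "/dev/nvme0"],  # -j: JSON output (smartmontools 7+)
    capture_output=True, text=True,
).stdout
log = json.loads(raw)["nvme_smart_health_information_log"]

# NVMe "data units" are 1000 x 512-byte blocks, per the NVMe spec.
print("Media errors:    ", log["media_errors"])
print("Percentage used: ", log["percentage_used"], "%")
print("Host TB written: ", log["data_units_written"] * 512_000 / 1e12)
```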
 
Can you post the full information, not a cut-down version?
You can get warnings like that from overheating as well.

Some drives also report NAND writes vs. total host writes, so there are multiple values that all add meaning to the problem.
[attachment: example SMART screenshot]

The question, obviously, is what's killing your drives. Are they connected to CPU or chipset lanes when they fail? All in the same PC?

If they're all on the one AM5 system, I'd be finding out what voltages/rails the NVMe slots are using. It could be something like higher SoC voltages affecting the CPU-attached slot, or just a downright faulty slot/board.
You can rule this out; he said back there that one of the SSDs died in his laptop. I was wondering if it could be a problem with the quality control of the specific brand (Crucial).
 
You can rule this out; he said back there that one of the SSDs died in his laptop. I was wondering if it could be a problem with the quality control of the specific brand (Crucial).
I remember a story about a NAND factory getting shut down for a while due to contamination. I wonder if any of that bad NAND made it to market?
 
You can rule this out; he said back there that one of the SSDs died in his laptop. I was wondering if it could be a problem with the quality control of the specific brand (Crucial).
I remember a story about a NAND factory getting shut down for a while due to contamination. I wonder if any of that bad NAND made it to market?
A bad batch that all of OP's QLC SSDs came from, then?
 
A bad batch that all of OP's QLC SSDs came from, then?
Unlikely. I have had several QLC-based drives die as well. Different brands, but mostly Samsung. QLC is unsuitable as main storage for ANY device. That's why I quit buying them for anything but external or secondary storage.
 