# Are there wear problems from partitioning a SSD?



## Shrek (Aug 3, 2022)

What if I make a _*small*_ partition on a solid-state drive and then write to it a lot, will I cause unbalanced wear? or can the drive compensate?


----------



## Ferather (Aug 3, 2022)

I don't think you will have much of an issue: by default there is a page file and a hibernation file on C:\, so if you install to an SSD it will get all the normal read-writes as usual.
I have had my Samsung 850 Evo since 2014-2015, with constant use and even partition restores, and it still performs as if it were new.

The Evo is now secondary storage (an M.2 drive is my main), and I use a larger external drive for mass storage if needed.





The partition structure allows me to backup-restore.


----------



## Shrek (Aug 3, 2022)

Yes, but a small OS partition of say 64 GB will get thrashed a lot more than if one gives the OS the whole SSD.


----------



## Ferather (Aug 3, 2022)

The partition will make the read-writes more isolated to that section of the SSD; in the same way, though, if you don't change the files' size or location on the drive, the page file should use the same section.
I would also expect that a page file of fixed size does not change location on the drive, and therefore all read-writes to it would hit the same section of the drive?

If the page file expanded and there was data at the end of the original file, it would expand into unused space.

Either way the drive gets the same overall read-writes. Correct me if I'm wrong.


----------



## Edwired (Aug 3, 2022)

It shouldn't really matter whether there are multiple partitions on the SSD or one partition with the rest left as empty, unused space. If storage is a problem, go bigger on a regular hard drive, preferably 7200 RPM, and leave the SSD primarily for the operating system.


----------



## Shrek (Aug 3, 2022)

Ferather said:


> The partition will make the read-writes more isolated to that section of the SSD; in the same way, though, if you don't change the files' size or location on the drive, the page file should use the same section.
> I would also expect that a page file of fixed size does not change location on the drive, and therefore all read-writes to it would hit the same section of the drive?



That is my point... if I just work on one end of my desk, I'll wear out that section long before the desk would wear out if I were to use the whole area.


----------



## Ferather (Aug 3, 2022)

If you don't mind me asking, what do you intend to use the partition for? If it's something like a cache, then yes, it will wear out specifically that section of the drive (start to end of the partition).
If you are talking about something like a page file, which has a fixed size and will always read-write to the same part of the drive, it's no different (start to end of file).

An example: [File1: 400 MB][File2: 200 MB][Page: 16 GB][End of drive] | All page read-writes will go to the drive section populated by the page file.


----------



## Shrek (Aug 3, 2022)

I don't, but this thread:

> Looks like my SSD finally died
>
> Looks like after 10 years, my Plextor M5S drive finally gave up the ghost. It started with oddball lockups, which I just kinda shrugged off and rebooted. I haven't had any issues with data loss or anything that I've noticed, but now it refuses to boot into Windows. I've already tried swapping...
>
> www.techpowerup.com

speaks of a 97GB OS partition in a 1TB SSD

So, I got to wondering.


----------



## eidairaman1 (Aug 4, 2022)

Shrek said:


> What if I make a _*small*_ partition on a solid-state drive and then write to it a lot, will I cause unbalanced wear? or can the drive compensate?


Trim compensates by writing 1s in the area to make a wall per se.


----------



## dnm_TX (Aug 4, 2022)

Shrek said:


> speaks of a 97GB OS partition in a 1TB SSD


I really don't see any point in doing this. If an SSD fails, it fails as a whole; it's not as if just half fails and the other half stays usable.
@Shrek  better look into some free (or hacked) RAM drive alternatives so you can reserve part of your memory for temp files and whatnot, and basically reduce the writing on the SSD (you can run your browser there and make a script to copy the necessary files back at restart/power off, I know I do).

P.S. For example, here is my secondary SSD, barely used for its age (at least 8+ years old): 1 TB written, but look at the hours. It's at 97% life.
If it were true that only extensive writing kills them, then logically it should still be at 100%:


----------



## Count von Schwalbe (Aug 4, 2022)

Many here have said that they leave part of a drive unallocated for overprovisioning, is that what you had in mind?


----------



## Dr. Dro (Aug 4, 2022)

No, partitioning will not spend write cycles, and partitioning does *not* limit wear to a certain area of the SSD, due to wear leveling algorithms. Some newer SSD designs are actually single-die, which means that the entire capacity of the drive is on a single NAND chip.

Do not worry about it; modern operating systems beginning with Windows 8 are not only optimized for SSDs but designed first and foremost to operate with them, and Windows 7 onwards has native TRIM support. If you are running a low-RAM (4-8 GB) system that will rely on the swap file a lot, and you are genuinely concerned because you are running an extremely low-quality SSD, then just move the swap file to an HDD. Do not disable it on a low-RAM system, or you will experience out-of-memory errors all the time; if possible, do not disable it at all, and don't set a small one either. Contrary to popular belief, a small page file will *not* help with write endurance, since data in it will be replaced with fresh data more frequently as the pages are exhausted.

Honestly, I am baffled that in 2022 concern over SSD write endurance for regular desktop users is still so common. The source of all this fear is FUD posted on forums by skeptics in the 2008-2011 time frame, when SSDs were still expensive (and, paradoxically, at their highest possible endurance thanks to the lithography and SLC/MLC designs; my 11-year-old 160 GB Intel 320 with 180 TB of lifetime writes and 97% life left is there to tell the tale). In 99.99% of cases, a single-die QLC "nightmare tier" SSD is going to last over a decade, if not decade*s*, on a gaming PC. If one has very high write requirements, they'll know what to buy: MLC drives.



Count von Schwalbe said:


> Many here have said that they leave part of a drive unallocated for overprovisioning, is that what you had in mind?



This is not necessary, as overprovisioning does not work that way. There is extra capacity, not exposed to the host, assigned for overprovisioning, just like the spare area on HDDs.


----------



## dnm_TX (Aug 4, 2022)

Dr. Dro said:


> Honestly, I am baffled that in 2022 concern over SSD write endurance for regular desktop users


Well... he bought a budget SSD (endurance is... unknown, best description I guess), so it's understandable to take some precautions.


----------



## Shrek (Aug 4, 2022)

Count von Schwalbe said:


> Many here have said that they leave part of a drive unallocated for overprovisioning, is that what you had in mind?



I have a 256GB SSD on the way and I intend to run Windows on it and nothing else (the programs will reside on hard drives) so that should leave a lot of breathing space.



dnm_TX said:


> Well... he bought a budget SSD (endurance is... unknown, best description I guess), so it's understandable to take some precautions.



Indeed.



eidairaman1 said:


> Trim compensates by writing 1s in the area to make a wall per se.



Can't quite follow you on this one, a wall?


Part of my paranoia comes from a Lenovo laptop I upgraded for my daughter; it had a 128GB SSD (Union Memory AV310) and 4GB of RAM, so it was probably paging a bit. I started messing with the removed drive and in no time at all it was dead. I moved the laptop to a 512GB Western Digital Blue SSD and 8GB of RAM, and it's been running just fine since then.


----------



## Count von Schwalbe (Aug 4, 2022)

Shrek said:


> I have a 256GB SSD on the way and I intend to run Windows on it and nothing else (the programs will reside on hard drives) so that should leave a lot of breathing space.


So you are wondering whether to use a small partition for overprovisioning or a larger one to spread the wear?


----------



## eidairaman1 (Aug 4, 2022)

Dr. Dro said:


> No, partitioning will not spend write cycles, and partitioning does *not* limit wear to a certain area of the SSD, due to wear leveling algorithms. Some newer SSD designs are actually single-die, which means that the entire capacity of the drive is on a single NAND chip.
> 
> Do not worry about it; modern operating systems beginning with Windows 8 are not only optimized for SSDs but designed first and foremost to operate with them, and Windows 7 onwards has native TRIM support. If you are running a low-RAM (4-8 GB) system that will rely on the swap file a lot, and you are genuinely concerned because you are running an extremely low-quality SSD, then just move the swap file to an HDD. Do not disable it on a low-RAM system, or you will experience out-of-memory errors all the time; if possible, do not disable it at all, and don't set a small one either. Contrary to popular belief, a small page file will *not* help with write endurance, since data in it will be replaced with fresh data more frequently as the pages are exhausted.
> 
> ...


7 Definitely was optimized for SSDs


----------



## Shrek (Aug 4, 2022)

Count von Schwalbe said:


> So you are wondering whether to use a small partition for overprovisioning or a larger one to spread the wear?



No, I intend to use a single partition; but
"Looks like my SSD finally died" (TechPowerUp Forums)
made me wonder if what this guy was doing was a good idea.

By running just OS on the SSD I should be fine even if it fails; I'll keep the hard drive bootable so I can be quickly up and running.

At least I didn't go for the $33 1TB SSD on ebay ;-) One even gets to choose the color!


----------



## R-T-B (Aug 4, 2022)

eidairaman1 said:


> Trim compensates by writing 1s in the area to make a wall per se.


No, TRIM sets empty blocks to the unprogrammed state (this also tells the controller what is not in use). It does not write anything, or build "walls."

Anyway, TRIM and the block remapping/wear leveling that all SSDs have make this a nonissue.


----------



## Dr. Dro (Aug 4, 2022)

dnm_TX said:


> Well...he bought a budget SSD(endurance is....unknown(best description,i guess )) so it's understandable to take some precautions.



Yes, that still stands, though. My current boot drive is a WD SN350 480GB, its official endurance rating is a meek 60 TBW for the lifetime of the drive.






It hasn't used a single bit of spare area yet. I'm not a light user, and I knew what I was getting into, but I'm not worried. It will handily outlive my PC and see use in the one I build next.

Even the crappiest QLC SSDs will do 0.2 DWPD, and on a 500 GB drive that is writing 100 GB a day. Do you write 100 GB every single day to your SSD?
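
The endurance arithmetic above is easy to sanity-check. A minimal sketch (the 3-year warranty period is my assumption, not a datasheet figure):

```python
# Convert a TBW endurance rating to drive-writes-per-day (DWPD), and a
# DWPD figure back to a daily write budget. Figures are illustrative.

def dwpd(tbw_terabytes: float, capacity_gb: float, warranty_years: float) -> float:
    """Drive Writes Per Day implied by a TBW rating over the warranty period."""
    total_writes_gb = tbw_terabytes * 1000      # TB written -> GB written
    days = warranty_years * 365
    return total_writes_gb / days / capacity_gb

# WD SN350 480 GB, rated 60 TBW (per the post), assuming a 3-year warranty:
print(f"{dwpd(60, 480, 3):.3f} DWPD")   # ~0.114 drive writes per day
# Daily write budget of a 500 GB drive at 0.2 DWPD:
print(f"{0.2 * 500:.0f} GB/day")        # 100 GB/day
```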



eidairaman1 said:


> 7 Definitely was optimized for SSDs



Windows 7 was aware of SSDs, but at the time it was developed and released, SATA SSDs on AHCI controllers were all that existed, so it was basically TRIM-capable and aware that a drive was of a solid-state type, and that was about it. It was not optimized for nor is it natively compatible with modern PCI Express/NVMe-type SSDs, getting it to work with an NVMe drive requires custom boot drivers and there's no logic to optimize access for higher queue depths than AHCI affords. Windows 8 on the other hand was already designed with this specification in mind, and 8.1 will boot vanilla on one.

I believe there is a hotfix/software update that backports NVMe support to Windows 7, but it was released long after Service Pack 1, so there is no (official) updated installation media. But that doesn't matter; no one should be running Windows 7 anymore.


----------



## hat (Aug 4, 2022)

I didn't split the drive because I was worried about wearing it out. I split the drive because I don't need anywhere near 1TB for Windows, so I created a small partition for Windows to live in while reserving the remainder as a fast storage partition that won't perish if I have to reinstall Windows. I can just wipe out the Windows partition and start over, while the rest of the data on the other partition is untouched. It has nothing to do with trying to preserve the life of the drive. I expect it to wear evenly just like it would normally, and is designed to do, regardless of what I do with partitioning.


----------



## Shrek (Aug 4, 2022)

That is what I don't understand, if one limits paging to a small partition, how can the wear be even across the drive?


----------



## AusWolf (Aug 4, 2022)

Interesting question. It would be nice to know whether SSDs provision their unused space across the whole drive or per individual partition.

Personally, I prefer several smaller drives to a single large one with several partitions. They're easier to replace if they fail without the need to backup or replace too much data at any one time.


----------



## R-T-B (Aug 4, 2022)

Shrek said:


> That is what I don't understand, if one limits paging to a small partition, how can the wear be even?


The controller does all the wear leveling magic.  In short what you think is always the same sector, often isn't.
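
The point above can be sketched with a toy flash translation layer; real controllers are vastly more complex, so treat this as a deliberately oversimplified illustration:

```python
# Toy model of an SSD's flash translation layer (FTL): the OS rewrites the
# "same" logical sector over and over, but the controller redirects each
# write to the least-worn physical block. Grossly simplified sketch.
class ToyFTL:
    def __init__(self, physical_blocks: int):
        self.erase_counts = [0] * physical_blocks
        self.mapping = {}  # logical sector -> physical block currently holding it

    def write(self, logical_sector: int):
        # Pick the physical block with the fewest erases (wear leveling).
        target = min(range(len(self.erase_counts)),
                     key=self.erase_counts.__getitem__)
        self.erase_counts[target] += 1
        self.mapping[logical_sector] = target

ftl = ToyFTL(physical_blocks=8)
for _ in range(80):
    ftl.write(logical_sector=0)   # "same sector" from the OS's point of view
print(ftl.erase_counts)           # wear spread evenly: [10, 10, ..., 10]
```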


----------



## Shrek (Aug 4, 2022)

That is why I bring up this madness... to learn.

When fixing my family's hardware only the best will do, but when it comes to my own stuff the same rule no longer holds; is that hypocrisy?


----------



## hat (Aug 4, 2022)

Shrek said:


> That is what I don't understand, if one limits paging to a small partition, how can the wear be even across the drive?


I could be mistaken in my assumption, but I don't think partitions matter to the wear leveling algorithms. At the level wear leveling works at, it's all just sectors and bytes.


----------



## Shrek (Aug 4, 2022)

hat said:


> I could be mistaken in my assumption, but I don't think partitions matter to the wear leveling algorithms. At the level wear leveling works at, it's all just sectors and bytes.



You might well be right, but I'll not be risking it; for me the whole 256GB SSD is for the OS and the rest resides on a hard drive.


----------



## hat (Aug 4, 2022)

Shrek said:


> You might well be right, but I'll not be risking it; for me the whole 256GB SSD is for the OS and the rest resides on a hard drive.


That's what I was doing before, but I couldn't get a smaller drive... at least not at a physical store nearby. So I partitioned off the excess capacity, so I could use it for something else without risking whatever I store there being lost if I have to reinstall Windows.


----------



## qubit (Aug 4, 2022)

Shrek said:


> What if I make a _*small*_ partition on a solid-state drive and then write to it a lot, will I cause unbalanced wear? or can the drive compensate?


That's a good question. I don't know the specifics of any particular model, but I'd hazard that logically it should make no difference.

SSDs use all of the free space to ensure that data gets written to different blocks as much as possible. Therefore, there's memory mapping between the logical structure one sees in Windows and the physical structure so it shouldn't matter how you've partitioned it. The one time where it really will make a difference is when the SSD starts to become full since then there will only be relatively few free blocks left which will end up getting hammered, eg a 90% full SSD. And then it will fail faster, or at least that part will.
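
The nearly-full case above is simple arithmetic under a worst-case assumption (no static wear leveling, i.e. the controller never relocates cold data; real drives do mitigate this):

```python
# Rough illustration: if 90 of 100 blocks hold static data and only free
# blocks are recycled, all write traffic concentrates on the remaining 10.
# Made-up numbers, worst-case assumption of no static wear leveling.
TOTAL_BLOCKS, STATIC_BLOCKS, WRITES = 100, 90, 1000
free_blocks = TOTAL_BLOCKS - STATIC_BLOCKS   # only 10 blocks take all traffic
print(WRITES / free_blocks)    # 100.0 erases per free block
print(WRITES / TOTAL_BLOCKS)   # 10.0 if wear could spread across every block
```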


----------



## Dr. Dro (Aug 4, 2022)

Shrek said:


> You might well be right, but I'll not be risking it; for me the whole 256GB SSD is for the OS and the rest resides on a hard drive.



It does not matter, the wear leveling control is done at the controller level.



qubit said:


> That's a good question. I don't know the specifics of any particular model, but I'd hazard that logically it should make no difference.
> 
> SSDs use all of the free space to ensure that data gets written to different blocks as much as possible. Therefore, there's memory mapping between the logical structure one sees in Windows and the physical structure so it shouldn't matter how you've partitioned it. The one time where it really will make a difference is when the SSD starts to become full since then there will only be relatively few free blocks left which will end up getting hammered, eg a 90% full SSD. And then it will fail faster, or at least that part will.



Indeed, it makes no difference, and because it works the way you described, SSDs include spare area that is not exposed to the host to serve as overprovisioning. On older SSD models, up to an entire NAND chip was assigned for this specific purpose; nowadays, with the advent of 3D layered NAND, it can be a portion of the die that is marked off as spare area and never written to unless the controller detects that a given cell within the structure has croaked. The controller then reads the data contained within, moves it to a block in the spare area, and remaps it, marking the original sector as unusable.

See the old Intel drive I have; AnandTech has a review of the 300 GB model:

The Intel SSD 320 Review: 25nm G3 is Finally Here (www.anandtech.com)

This review shows that the device marketed as 300 GB capacity actually has 320 GB worth of NAND installed, for example. It's good to keep in mind that Windows uses the wrong nomenclature, GB (power-of-10), to describe what are actually GiB (power-of-2), adding even more confusion to the mix. A "1 TB" partition (1,000,000 MB) actually holds ~931 GiB, and Windows calls this GB anyway.
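
Both numbers above fall straight out of the arithmetic (the Intel 320 figures are from the review cited in the post):

```python
# The power-of-10 vs power-of-2 confusion, in numbers.
tb_decimal = 1_000_000_000_000             # what "1 TB" on the box means
gib = tb_decimal / 2**30                   # what Windows displays (calling it "GB")
print(f"1 TB = {gib:.0f} GiB")             # 1 TB = 931 GiB

# Overprovisioning of the Intel 320 "300 GB" model per the review above:
raw_nand_gb, exposed_gb = 320, 300
print(f"spare area: {(raw_nand_gb - exposed_gb) / exposed_gb:.1%}")  # 6.7%
```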


----------



## qubit (Aug 4, 2022)

@Dr. Dro Yes, that overprovisioning helps a lot. I believe some of the cheapest SSDs from no-name manufacturers don't even have it.


----------



## Dr. Dro (Aug 4, 2022)

qubit said:


> @Dr. Dro Yes, that overprovisioning helps a lot. I believe some of the cheapest SSDs from no-name manufacturers don't even have it.



It isn't as necessary with modern layered NAND; even a single-die QLC drive can have spare area nowadays, like I mentioned earlier. The SN350 does.

Sure, these are far less reliable than the scheme Intel used with the 320 series back then (backup capacitors for power failure protection, generous overprovisioning with an additional MLC die, and all the bells and whistles you'd expect), but they still exceed the needs of a basic or intermediate PC user. I'm just not entirely comfortable with the concept of PLC (5 bits per cell), though.


----------



## Valantar (Aug 4, 2022)

hat said:


> I could be mistaken in my assumption, but I don't think partitions matter to the wear leveling algorithms. At the level wear leveling works at, it's all just sectors and bytes.


That was my thinking as well - don't wear levelling algorithms make block and sector assignments on NAND essentially arbitrary, meaning that any piece of data can be located physically anywhere on the die regardless of partitioning? If so, partitioning wouldn't matter except on a file management level and the minuscule write amplification caused by writes to different partitions not being combined into the same write.


----------



## Wirko (Aug 4, 2022)

R-T-B said:


> The controller does all the wear leveling magic.  In short what you think is always the same sector, often isn't.


Exactly, and that's one of the most basic things to understand about SSDs. I'd even say that it almost never is the same sector.

You can't just rewrite data in flash memory - well, you can, but the block needs to be erased first, which takes about a millisecond, so you can't get any kind of performance that way. Instead, the controller copies the block's old contents plus the newly written contents to a new block. That new block was erased at an earlier time, when the drive was idle and had enough time for its housekeeping jobs, and was then put in the spare area.
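
The copy-to-a-fresh-block behaviour described above can be sketched like this (block names and data are made up; a real controller works at page/block granularity with far more bookkeeping):

```python
# Sketch: NAND can't be overwritten in place, so an update is programmed
# into a block that was erased earlier, during idle housekeeping, and the
# old block is queued for background erase. Highly simplified.
erased_pool = ["blockB", "blockC"]        # pre-erased during idle time
erase_queue = []                          # stale blocks, erased later (~1 ms each)
live = {"blockA": b"old page data"}       # block currently holding the data

def rewrite(old_block: str, new_data: bytes):
    fresh = erased_pool.pop(0)            # grab a pre-erased block...
    live[fresh] = new_data                # ...and program the new contents into it
    del live[old_block]                   # the old block is now stale
    erase_queue.append(old_block)         # queue it for background erase

rewrite("blockA", b"new page data")
print(live)           # {'blockB': b'new page data'}
print(erase_queue)    # ['blockA']
```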

Even SD cards use wear leveling, although cheaper ones apparently don't - people who use them as Raspberry Pi system drives care more than others about that:

What are the best SD cards to use in a Raspberry Pi? (reprage.com)


----------



## lexluthermiester (Aug 4, 2022)

Shrek said:


> What if I make a _*small*_ partition on a solid-state drive and then write to it a lot, will I cause unbalanced wear? or can the drive compensate?


You shouldn't worry too much.


----------



## Wirko (Aug 4, 2022)

Dr. Dro said:


> Windows 7 was aware of SSDs, but at the time it was developed and released, SATA SSDs on AHCI controllers were all that existed, so it was basically TRIM-capable and aware that a drive was of a solid-state type, and that was about it. It was not optimized for nor is it natively compatible with modern PCI Express/NVMe-type SSDs, getting it to work with an NVMe drive requires custom boot drivers and there's no logic to optimize access for higher queue depths than AHCI affords. Windows 8 on the other hand was already designed with this specification in mind, and 8.1 will boot vanilla on one.


TRIM needs to be done more than once to be really effective.
The first time is when a file is deleted from the file system, and the OS communicates to the SSD that blocks are now free. Windows 7 does that. But the SSD controller sometimes ignores TRIM commands because it has more important work to do.
So Windows occasionally sends TRIM commands for all free blocks on the file system - that's called "retrim". Probably when doing other disk management jobs like defrag. Windows 7 doesn't do that but 8 does. However, it can be done manually even on XP, using tools like Intel's Toolbox.
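
The two-stage behaviour described above can be modelled as a toy sketch (LBA numbers are made up, and "drive busy" stands in for whatever reason a controller might drop the hint):

```python
# Toy model of TRIM vs. retrim. A per-delete TRIM may be dropped by a busy
# controller; a retrim re-sends TRIM for every free sector on the file system.
free_lbas = {100, 101, 102, 500, 501}   # sectors the file system knows are free
controller_free = set()                 # what the SSD controller has been told

def trim(lbas, drive_busy=False):
    if not drive_busy:                  # a busy controller may ignore the hint
        controller_free.update(lbas)

trim({500, 501}, drive_busy=True)       # on-delete TRIM, silently dropped
trim({100, 101, 102})                   # on-delete TRIM, accepted

def retrim():
    trim(free_lbas)                     # re-send TRIM for *every* free sector

retrim()
print(controller_free == free_lbas)     # True - retrim caught the dropped hint
```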

About the queue depth - can the OS affect it at all, or does it depend entirely on applications? I can't seem to find any info that newer Windows somehow handle these queues better. I'd just expect Windows utilities like Explorer to improve over time in this regard.


----------



## ThrashZone (Aug 4, 2022)

Shrek said:


> You might well be right, but I'll not be risking it; for me the whole 256GB SSD is for the OS and the rest resides on a hard drive.


Hi,
All my OS SSDs are this size.
I do have some personal files like music/images/programs on them too, so not all personal files are on different storage drives, but the backups are.
None have outright died except one that Linux Mint killed long ago by never running TRIM on it; it seemed to be a Crucial firmware bug clashing with Mint 17.
The replacement MX100 256GB still works to this day.

I don't partition SSDs, though, beside a single system reserved partition at the front; the rest spans the drive, except for a little unallocated space at the end for the firmware to use for overprovisioning.


----------



## Ferather (Aug 4, 2022)

Very interesting information coming up, a good read. Overview of SSD Structure and Basic Working Principle(2) (elinfor.com)


----------



## Wirko (Aug 4, 2022)

Ferather said:


> Very interesting information coming up, a good read. Overview of SSD Structure and Basic Working Principle(2) (elinfor.com)


Anandtech of old, with Anand himself, also had very good articles on the SSD basics. This part is relevant to the discussion here:




The SSD Anthology: Understanding SSDs and New Drives from OCZ (www.anandtech.com)


----------



## Dr. Dro (Aug 4, 2022)

Wirko said:


> TRIM needs to be done more than once to be really effective.
> The first time is when a file is deleted from the file system, and the OS communicates to the SSD that blocks are now free. Windows 7 does that. But the SSD controller sometimes ignores TRIM commands because it has more important work to do.
> So Windows occasionally sends TRIM commands for all free blocks on the file system - that's called "retrim". Probably when doing other disk management jobs like defrag. Windows 7 doesn't do that but 8 does. However, it can be done manually even on XP, using tools like Intel's Toolbox.
> 
> About the queue depth - can the OS affect it at all, or does it depend entirely on applications? I can't seem to find any info that newer Windows somehow handle these queues better. I'd just expect Windows utilities like Explorer to improve over time in this regard.



Disk I/O is controlled by kernel-mode code, so I would very much consider it something the OS can affect. And yeah, manual TRIM was possible on XP and Vista; they just weren't designed for automatic maintenance of SSDs at all. Those OSes are not even aware of the difference between a mechanical and a solid-state drive; that awareness was first implemented in Windows 7.



Valantar said:


> That was my thinking as well - don't wear levelling algorithms make block and sector assignments on NAND essentially arbitrary, meaning that any piece of data can be located physically anywhere on the die regardless of partitioning? If so, partitioning wouldn't matter except on a file management level and the minuscule write amplification caused by writes to different partitions not being combined into the same write.



Not only that, but due to the real-time encryption used on most SSD controllers, the data that is actually programmed into the NAND bears no resemblance to the logical data, as it is often AES-encrypted to prevent unauthorized physical data retrieval attacks.


----------



## Shrek (Aug 4, 2022)

I am still trying to understand why NAND wears out; even the good stuff seems to be rated for only maybe 600 write cycles.


----------



## Dr. Dro (Aug 4, 2022)

Shrek said:


> I am still trying to understand why NAND wears out, even the good stuff is only good for maybe 600 writes.



Physics. This is because the floating gates are programmed by a change in physical state by means of an electrical current, and the constant changes in the physical state of the memory cause the material to lose its properties over time, ceasing to function correctly; it's similar in concept to electromigration in CPUs. The more states a given cell can accommodate, the more sensitive to this it becomes, and this is the reason why newer lithography nodes with ever-smaller cells and multi-bit NAND designs all pay a cost in write endurance. Single-bit-per-cell NAND (SLC) only has to distinguish two charge states per cell, 2-bit-per-cell MLC four, and TLC and QLC eight and sixteen respectively. This becomes more evident with newer drives, and has been fought off with smarter controllers and advanced wear leveling algorithms that maximize the useful life of the hardware. Reading the current state of the NAND does not cause wear.

The reason why we don't all use 50 nm SLC like the X25-E is *cost*. To retain the same capacity, many more chips are required, and the older the lithography node, the less space-efficient it is, which means larger cells and less data density per device. Even today, with all the advances in solid-state flash memory technology, you'll find that MLC designs such as the Samsung Pro series are still significantly more expensive, and this _isn't_ just because they're Samsung drives.
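
The state count doubles with every added bit per cell, which is why each extra bit shrinks the margin between charge levels:

```python
# Bits per cell vs. charge states the controller must tell apart.
# The state count is simply 2**bits, so it doubles with every added bit.
for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4), ("PLC", 5)]:
    print(f"{name}: {bits} bit(s)/cell -> {2**bits} charge states to distinguish")
# SLC: 1 bit(s)/cell -> 2 charge states to distinguish
# ...
# PLC: 5 bit(s)/cell -> 32 charge states to distinguish
```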


----------



## Shrek (Aug 4, 2022)

Changes physical state? I thought it was quantum tunneling.

Electromigration makes sense, but a CPU can sustain billions of transitions a second and endure for decades.


----------



## Dr. Dro (Aug 4, 2022)

Shrek said:


> Changes physical state? I thought it was quantum tunneling.



Yes, that is what quantum tunneling is used for. The Wikipedia article on this is actually quite concise; if you read it, you will understand the underlying mechanics rather well, IMHO.









Flash memory - Wikipedia (en.wikipedia.org)
				




Oh, I just saw your edit. To address the electromigration equivalence I made: sure, but it's about volatility. When powered off, CPUs and DRAM (transistorized logic) completely lose their current state and must re-initialize from zero. The ability to retain state without being powered up (i.e. being non-volatile) is what requires these physical changes to the device's state.

Think about a Game Boy cartridge: they used battery-backed memory. If the battery runs dry, being a RAM device, it erases itself and the save data is lost. This approach was probably chosen because programmable flash chips suitable for save data were far too expensive at the time these were manufactured.


----------



## Shrek (Aug 4, 2022)

Not sure I believe that data retention is just 1 year
The Truth About SSD Data Retention (anandtech.com)


----------



## Dr. Dro (Aug 4, 2022)

Shrek said:


> Not sure I believe that data retention is just 1 year
> The Truth About SSD Data Retention (anandtech.com)



That article describes a drive whose NAND endurance has already been spent. In real-world conditions, with a drive kept at ambient temperature that hasn't been written past its usefulness, you can probably expect data retention in the range of decades without any corruption. In the words of the article itself:



> As always, there is a technical explanation to the data retention scaling. The conductivity of a semiconductor scales with temperature, which is bad news for NAND because when it's unpowered the electrons are not supposed to move as that would change the charge of the cell. In other words, as the temperature increases, the electrons escape the floating gate faster that ultimately changes the voltage state of the cell and renders data unreadable (i.e. the drive no longer retains data).
> 
> For active use the temperature has the opposite effect. Because higher temperature makes the silicon more conductive, the flow of current is higher during program/erase operation and causes less stress on the tunnel oxide, improving the endurance of the cell because endurance is practically limited by tunnel oxide's ability to hold the electrons inside the floating gate.
> 
> All in all, there is absolutely zero reason to worry about SSD data retention in typical client environment. Remember that the figures presented here are for a drive that has already passed its endurance rating, so for new drives the data retention is considerably higher, typically over ten years for MLC NAND based SSDs. If you buy a drive today and stash it away, the drive itself will become totally obsolete quicker than it will lose its data. Besides, given the cost of SSDs, it's not cost efficient to use them for cold storage anyway, so if you're looking to archive data I would recommend going with hard drives for cost reasons alone.



Honestly, if anyone who's still an SSD skeptic after all of this read up on the drawbacks of spinning magnetic storage... they'd never touch an HDD again!


----------



## Edwired (Aug 4, 2022)

The way I understand it, SSDs work on a different scale compared to regular hard drives. The only true test is to find a cheap SSD, make a small partition on it, abuse that partition while the rest of the drive sits empty, and then run tests to see what effect it had on the drive. That's all I can think of.


----------



## pryinglynx.digital (Aug 4, 2022)

hat said:


> I could be mistaken in my assumption, but I don't think partitions matter to the wear leveling algorithms. At the level wear leveling works at, it's all just sectors and bytes.


Correct. Partitions just act as headers, and are a range of LBA addresses.


Shrek said:


> That is what I don't understand, if one limits paging to a small partition, how can the wear be even across the drive?


Here is a good short form of how it works on a controller level, which is what handles the LBA addressing of the data.
This happens below the partition level, so partitioning a drive will not cause wear unless you request a write operation to "run over" the existing data in those LBA addresses.
If you want a more detailed example I could create a flowchart 

https://www.elinfor.com/knowledge/overview-of-ssd-structure-and-basic-working-principle2-p-11204
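To make the "below the partition level" point concrete, here is a toy Python sketch of a flash translation layer. `ToyFTL` and its least-worn-block policy are invented for illustration only, not any real controller's algorithm:

```python
# Toy illustration of why partitions are invisible to wear leveling.
# The names and the least-worn-block policy here are invented for
# illustration; real controllers are far more sophisticated.

class ToyFTL:
    """Minimal flash translation layer: remaps every logical write to
    whichever physical block currently has the least wear."""

    def __init__(self, num_physical_blocks):
        self.erase_counts = [0] * num_physical_blocks
        self.mapping = {}  # logical block address -> physical block

    def write(self, lba):
        # Pick the least-worn physical block (ignores free-page tracking,
        # garbage collection, etc. that a real FTL would do).
        phys = min(range(len(self.erase_counts)),
                   key=lambda b: self.erase_counts[b])
        self.erase_counts[phys] += 1
        self.mapping[lba] = phys

ftl = ToyFTL(num_physical_blocks=8)
# Hammer a tiny "partition" (just LBAs 0 and 1) with 200 writes:
for _ in range(100):
    for lba in (0, 1):
        ftl.write(lba)

spread = max(ftl.erase_counts) - min(ftl.erase_counts)
print(ftl.erase_counts)  # [25, 25, 25, 25, 25, 25, 25, 25]
print(spread)            # 0: wear is even despite writing only two LBAs
```

Even though the host only ever writes two logical addresses, the wear ends up spread evenly over all physical blocks, which is the whole point of the remapping layer.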


----------



## Dr. Dro (Aug 4, 2022)

Edwired said:


> The way I understand with ssd is on different scale compared to regular hard drive. Only true test is find a cheap ssd then partition it to a certain size then abuse it to see what happen to the ssd that has the smallest size partition while the rest is empty then do tests on it to see effect it has on the drive. That all I could think of



This is pointless, as I and numerous other people on the thread have mentioned: wear leveling algorithms exist and are independent of the partition table and the file system itself.


----------



## Edwired (Aug 4, 2022)

It isn't pointless, as it needs to be done; otherwise we're never going to get results showing what would actually happen.


----------



## Solaris17 (Aug 4, 2022)

Shrek said:


> will I cause unbalanced wear?



No, the file system has no concept of this.


----------



## Wirko (Aug 4, 2022)

Dr. Dro said:


> Physics. This is because the logic gates are programmed by a change in physical state by means of an electrical current, and the constant changes in the physical state of the memory cause the material to lose its property over time, ceasing to function correctly, it's similar in concept to electromigration in CPUs. The more states that a given cell can accomodate, the more sensitive to this electrical current it becomes, and this is the reason why newer lithography nodes with ever smaller logic gates and multi-bit NAND cells all have a cost in write endurance.


Is it correct to call the programming process a change in physical state? No chemical reactions occur and atoms are not moved, it's just electrons that get pushed into the insulating gate, using the quantum tunneling effect. (However, in the slow process of degradation, atoms are moved and/or chemical changes occur.)



Dr. Dro said:


> Single bit per cell NAND (SLC), operates with a single programmable state of "0" and "1", while the 2 bits per cell MLC design does "0" "1" and "2", for example. TLC and QLC have 3 and 4 programmable states in addition to empty.


Not true ... sadly. The misleading names are not helpful here. QLC stores four bits per cell, which means 2^4 = 16 distinct levels of charge. Those 16 may include "empty" or not, who knows.

It would even be possible to store something like 11 levels per cell, yielding 10 bits per 3 cells (11^3 = 1331 > 1024), or 13 levels per cell, yielding 11 bits per 3 cells (13^3 = 2197 > 2048). I know those are weird numbers but 232 layers is a weird number too.
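That level-counting arithmetic can be checked in a few lines; `bits_per_group` is just a made-up helper name for the floor-of-log2 calculation:

```python
import math

def bits_per_group(levels, cells):
    """Whole bits storable in a group of `cells` cells when each cell
    distinguishes `levels` charge levels: floor(log2(levels^cells))."""
    return math.floor(cells * math.log2(levels))

print(bits_per_group(2, 1))   # 1  (SLC: 2 levels)
print(bits_per_group(16, 1))  # 4  (QLC: 2^4 = 16 levels)
print(bits_per_group(11, 3))  # 10 (11^3 = 1331 > 1024)
print(bits_per_group(13, 3))  # 11 (13^3 = 2197 > 2048)
```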


----------



## Dr. Dro (Aug 4, 2022)

Wirko said:


> Is it correct to call the programming process a change in physical state? No chemical reactions occur and atoms are not moved, it's just electrons that get pushed into the insulating gate, using the quantum tunneling effect. (However, in the slow process of degradation, atoms are moved and/or chemical changes occur.)
> 
> 
> Not true ... sadly. The misleding names are not helpful here. QLC stores four bits per cell, which means 2^4 = 16 distinct levels of charge. Those 16 may include "empty" or not, who knows.
> ...



I've definitely oversimplified it a bit, but you are correct. I would argue for the change in physical state because a significant level of wear occurs, but perhaps that's just the speed at which it occurs naturally. It's fascinating stuff, and I'll admit I still have plenty to learn about it.


----------



## Ferather (Aug 4, 2022)

I remember hybrid mechanicals with nand (SSHD), according to Google they still exist.


----------



## Shrek (Aug 4, 2022)

I have three Seagate FireCuda 2TB 3.5" SSHDs and like them; that is what I use at the moment.


----------



## chrcoluk (Aug 4, 2022)

It won't add wear; the sectors are not assigned 1-to-1 like on spindles.

Manufacturers even recommend leaving unpartitioned space for over-provisioning purposes.

The utilisation level is what affects wear, since high utilisation reduces the drive's ability to wear-level.


----------



## Shrek (Aug 4, 2022)

Now THAT is a useful observation.


----------



## Wirko (Aug 4, 2022)

chrcoluk said:


> It wont add wear, the sectors are not 1 to 1 assigned like on spindles.
> 
> Manufacturers even recommend leaving unpartitioned space for over provisioning purposes.
> 
> The utilisation level is what affects wear as that reduces the ability of wear levelling.


This document by Intel explains some basics and also has some calculations for Intel drives. Basically, any free space within a partition is as good as unpartitioned (=guaranteed to be always free) space. As long as you're able to keep ~20% of total drive space free, most of the time, you're good, and you gain a lot of endurance. Most of the time doesn't even mean all of the time, so you can still do some large transfers that fill up all of the drive occasionally.

Edit: even on spindles it's not 1 to 1. The difference is that the mapping doesn't change every day, it only does when the drive finds new bad sectors.
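The point that free space inside a partition is as good as unpartitioned space boils down to simple bookkeeping. This is my own illustration of that equivalence, not the Intel document's exact model:

```python
# Effective spare area the controller can use for wear leveling is
# (roughly) unpartitioned space plus trimmed free space inside
# partitions. Simplified sketch, not Intel's formula.

def spare_fraction(total_gb, unpartitioned_gb, free_in_partitions_gb):
    return (unpartitioned_gb + free_in_partitions_gb) / total_gb

# 1TB drive, fully partitioned but kept 20% free:
print(spare_fraction(1000, 0, 200))    # 0.2
# Same drive, 10% left unpartitioned plus 10% free inside partitions:
print(spare_fraction(1000, 100, 100))  # 0.2 - equivalent spare area
```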


----------



## Count von Schwalbe (Aug 4, 2022)

To make it more confusing, Optane (3DXPoint) was true phase change storage; not that it is relevant now that it has been phased out.


----------



## Edwired (Aug 5, 2022)

Count von Schwalbe said:


> To make it more confusing, Optane (3DXPoint) was true phase change storage; not that it is relevant now that it has been phased out.


Intel thought it would be a game changer, but they were losing money on the whole deal, last I checked.


----------



## Wirko (Aug 5, 2022)

Count von Schwalbe said:


> To make it more confusing, Optane (3DXPoint) was true phase change storage; not that it is relevant now that it has been phased out.


I see what you did there; and yes, it just changed to vapour phase.

Seriously though, the phase change in Optane is a change in crystalline or amorphous structure, so it's a rearrangement of atoms/molecules, right?


----------



## Mussels (Aug 5, 2022)

Shrek said:


> What if I make a _*small*_ partition on a solid-state drive and then write to it a lot, will I cause unbalanced wear? or can the drive compensate?


Since over provisioning leaves a small unused partition and helps spread writes out over the drive, I assume the drive itself will spread the wear out
It's not being mapped like a mech drive, your partition still includes a portion of multiple memory chips, if not all of them


My OS SSD's always have a 100GB C: partition to reduce data loss and effort when reinstalling an OS, and i've not seen any increase in wear


----------



## ThrashZone (Aug 5, 2022)

chrcoluk said:


> It wont add wear, the sectors are not 1 to 1 assigned like on spindles.
> 
> Manufacturers even recommend leaving unpartitioned space for over provisioning purposes.
> 
> The utilisation level is what affects wear as that reduces the ability of wear levelling.


Hi,
Yep, if you've ever used Samsung Magician and had it do the over-provisioning for you, you'd see unallocated space afterwards; this is all the firmware needs.

I'll add that the firmware also only needs zero activity to do its work/ hibernation...


----------



## OneMoar (Aug 5, 2022)

the fawks are you people smoking

No, that's not how it works. SSDs don't have physical partitions like a mechanical disk.
The partition is just a box; where the data is put in the box is entirely up to the controller, and the controller is going to manage its wear leveling automatically.
Can we please stop even thinking about SSD endurance in the year 2022, it's completely irrelevant
./endofthread


----------



## lexluthermiester (Aug 5, 2022)

Dr. Dro said:


> and yeah, manual TRIM was possible on XP and Vista, they just weren't designed for automatic maintenance of SSDs at all.


ALL modern SSDs handle TRIM functions native to the drive controller. OS compatibility is no longer a concern and has not been since around 2014.



Shrek said:


> I am still trying to understand why NAND wears out, even the good stuff seems only good for maybe 600 writes.


The technical details are complicated, however the simple explanation is as follows: all NAND cells require phased voltage applications to change their data state. With each change, the voltage degrades the cell to a small degree. With SLC, only a single voltage change is applied to set data and one to reset to zero. With MLC, two voltage applications are required, then one to reset to zero. With TLC, three voltage applications are required and one to reset to zero. QLC: four plus one. The voltage applied at each step is increased with each application on a per-write basis. This effect is cumulative, i.e. as the cell wears, more voltage must be used to alter its states. At a certain point the NAND cell will literally burn out from extended voltage exposure, rendering it non-functional.

So now you can understand that the electro-chemistry involved in NAND flash is destructive by design. With each data-change operation the cells involved are degraded. Because of how NAND works, this is an unavoidable result. So the fewer changes made, the better, and the more refined the chemical process (resulting in greater resistance to degradation and better durability), the better. For this reason SLC or pSLC will always be the best option, MLC/pMLC the next best, and so on.

Perhaps it's now easier to understand why I have such a problem with QLC based NAND and why I consider it garbage for Boot/OS usage.


----------



## dnm_TX (Aug 5, 2022)

*OFF TOPIC* but to lighten the mood a bit.
@Shrek some wise guy, playing smart, trying to make a buck, selling them for almost double on Craigslist (or they could very well be the free ones from Micro Center):


----------



## Mussels (Aug 5, 2022)

OneMoar said:


> the fawks are you people smoking
> 
> No thats not how it works SSDs don't have physical partitions like a mechanical disk
> the partition is just a box. where the data is put in the box is entirely up to the controller and the controller is going to manage its wear leveling automatically
> ...


People aren't gunna learn if they don't ask questions and get answers


----------



## lexluthermiester (Aug 5, 2022)

OneMoar said:


> the fawks are you people smoking
> 
> No thats not how it works SSDs don't have physical partitions like a mechanical disk
> the partition is just a box. where the data is put in the box is entirely up to the controller and the controller is going to manage its wear leveling automatically
> ...





Mussels said:


> People aren't gunna learn if they don't ask questions and get answers


Then there are those who think they have the answers, but don't because they either misunderstand key parts of the equation or simply never learned the correct information to begin with.


----------



## delshay (Aug 5, 2022)

I always use "Over Provisioning". This drive is around 8-10 years old. ...It eats up 190GB on a 2TB SSD at the recommended (default) setting.


----------



## P4-630 (Aug 5, 2022)

delshay said:


> I always use "Over Provisioning" This drive is around 8 - 10 years old.   ...It eat's up 190GB on a 2TB SSD recommended setting (default).


I also use the recommended magician overprovisioning on all my SSD's/NVMe's. (don't use mechanical drives anymore)


----------



## Shrek (Aug 5, 2022)

dnm_TX said:


> *OFF TOPIC* but to to lighten the mood a bit.
> @Shrek some wise guy,playing smart,trying to make a buck,selling them for almost double on Craigslist(or they could be very well the free once from Microcenter) :
> 
> View attachment 257029



Yeah, I knew about the offer, but the store is too far away, so I paid $25 on eBay.


----------



## Valantar (Aug 5, 2022)

delshay said:


> I always use "Over Provisioning" This drive is around 8 - 10 years old.   ...It eat's up 190GB on a 2TB SSD recommended setting (default).


The 850 Pro 2TB launched in mid 2015, so 7 years at the most


----------



## delshay (Aug 5, 2022)

Valantar said:


> The 850 Pro 2TB launched in mid 2015, so 7 years at the most



OK, I can't remember when I bought it, but I know I got it when it first came out.

EDIT: the 1TB came out in 2014, which makes it 8 years old (I have that drive also). So I got mixed up about which year the drives came out; the 2TB came out a year later.


----------



## P4-630 (Aug 5, 2022)

delshay said:


> I always use "Over Provisioning" This drive is around 8 - 10 years old.   ...It eat's up 190GB on a 2TB SSD recommended setting (default).



Got an 850 Pro 512GB using *since 2016* (did not set overprovisioning at that time though.)
But still going strong now using with over provisioning.


----------



## Ferather (Aug 5, 2022)

dnm_TX said:


> ....
> P.S. For example. Here is my secondary SSD,barely used for it's age(at least 8+ years old) 1TB written but look at the hours. It's at 97% life.
> But if logically thinking here that only extensive writing is killing them then i should be at 100% still:
> 
> View attachment 256909






Mine is about the same age, slightly less hours, but a lot more total writes, 3% more wear than yours.
Originally this drive was the primary for 7-8 years, with separate page file partition.


----------



## Palladium (Aug 5, 2022)

~10TB writes in 6 years is like a drop in the endurance bucket though.


----------



## Ferather (Aug 5, 2022)

6% wear for ~46TB over 7-8 years.

----

In terms of unallocated space, I did some research before making my partitions, and I have 140MB of unallocated MSR; not sure if that's relevant at all.

My original partition layout for the SSD and Windows was the following:


GPT

480MB - NTFS, label: Recovery
280MB - FAT32, label: EFI
140MB - Unallocated, MSR, label: Reserved

40.02GB (40981MB), NTFS, label: Windows
408.02GB (417813MB), NTFS, label: Master
16.84GB (17245MB), NTFS, label: Page


Still much the same now, but as secondary:


----------



## ThrashZone (Aug 5, 2022)

Mussels said:


> People aren't gunna learn if they don't ask questions and get answers


Hi,
Guessing you missed the */end of thread* bit; kudos for that, and to all the others who keep commenting.


----------



## chrcoluk (Aug 5, 2022)

Wirko said:


> This document by Intel explains some basics and also has some calculations for Intel drives. Basically, any free space within a partition is as good as unpartitioned (=guaranteed to be always free) space. As long as you're able to keep ~20% of total drive space free, most of the time, you're good, and you gain a lot of endurance. Most of the time doesn't even mean all of the time, so you can still do some large transfers that fill up all of the drive occasionally.
> 
> Edit: even on spindles it's not 1 to 1. The difference is that the mapping doesn't change every day, it only does when the drive finds new bad sectors.


That's true. I never said you had to leave it unpartitioned, but various manufacturers still recommend doing so, probably because people can accidentally use free space easily.


----------



## Mussels (Aug 7, 2022)

Palladium said:


> ~10TB writes in 6 years is like a drop in the endurance bucket though.


There are drives with really low TBW lifespans these days, which would be scary for anyone except users like that.
I found some new drives for sale right now with only 40TBW. Sad to think I'd kill that drive in 1-2 years with just Windows and Steam.

Mine are at 55TBW (1TB) and 32TBW (2TB, newer) - I'm not a heavy user, that's just like 3 years and 1.5 years as OS and gaming drives.
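A quick back-of-envelope sketch of those numbers (`years_of_life` is a made-up helper; the write rate is taken from the figures quoted in this post):

```python
# Rough lifetime estimate: TBW rating divided by observed write rate.
def years_of_life(tbw_rating_tb, tb_written, years_in_service):
    tb_per_year = tb_written / years_in_service
    return tbw_rating_tb / tb_per_year

# The 1TB OS/gaming drive above: 55 TB written in ~3 years (~18 TB/year)
print(round(years_of_life(40, 55, 3), 1))    # 2.2 - a 40TBW drive at this rate
print(round(years_of_life(1200, 55, 3), 1))  # 65.5 - a 1,200TBW drive at the same rate
```

At that write rate a 40TBW drive really would hit its rating in roughly two years, which matches the "1-2 years" estimate above.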



ThrashZone said:


> Hi,
> Guessing you missed the* /end of thread* bit kudos for that and all others that keep commenting


So uh, why are you in charge of that?
You aren't a moderator, or the OP of the thread - and you're giving out incorrect terrible advice that's not based in reality whatsoever.
Take some time out to reflect on the attitude, before it's forced on you.


----------



## Wirko (Aug 7, 2022)

/end of thread means end of end of thread, I'm fine with that
./endofthread, as it was originally put, means ... something scary in Linux?



Mussels said:


> I found some new drives for sale right now, with only 40TBW.


What capacity? Also, do they look at least half-legit, judging by things like price/TB?


----------



## delshay (Aug 7, 2022)

I just want to point out that some older Samsung SSDs are still under warranty despite their age. Do they still offer 10+ year warranties in the year 2022?


----------



## lexluthermiester (Aug 7, 2022)

Wirko said:


> /end of thread means end of end of thread, I'm fine with that
> ./endofthread, as it was originally put, means ... something scary in Linux?
> 
> 
> What capacity? Also, do they look at least half-legit, judging by things like price/TB?


You're picking a fight with a moderator when no one said anything to you. You want to *not* do that. Just throwing it out there...


----------



## Valantar (Aug 7, 2022)

Mussels said:


> There are drives with really low TBW lifespan these days, which would be scary for anyone except users like that
> I found some new drives for sale right now, with only 40TBW. Sad to think i'd kill that drive in 1-2 years with just windows and steam.
> 
> Mine are at 55TBW (1TB) and 32TBW (2TB, newer) - I'm not a heavy user, that's just like 3 years and 1.5 years as OS and gaming drives


Wait, what? What capacities are those 40TBW drives? That sounds scary low indeed - but remember that it's a lot harder to write that much data to a lower capacity drive in the first place, as the overhead over "files that are always there" on a system install is much lower. If they're 256GB or something, I wouldn't be worried even by that rating.


----------



## lexluthermiester (Aug 7, 2022)

Valantar said:


> Wait, what? What capacities are those 40TBW drives?


Low end QLC based drives.


----------



## Valantar (Aug 7, 2022)

lexluthermiester said:


> Low end QLC based drives.


I did a quick skim across cheap QLC drives on Newegg, and while most didn't list TBW ratings, the lowest I saw were 512GB drives at ~170TBW. Now, I also discovered that 128GB QLC SSDs actually exist (ewwwww), and I can't imagine those having much of an endurance rating. But then again you wouldn't really be able to write that much to a 128GB drive anyway, so the point is mostly moot. On the other hand, if there are 512GB or larger QLC drives with endurance ratings that low, I'd treat them as WORM drives, and definitely not use them as a system drive. They can't be that much cheaper than a cheap TLC drive.


----------



## P4-630 (Aug 7, 2022)

delshay said:


> Do they still offer 10+ year warranty in the year 2022?


No, all of them including the Pro SSD/NVMe's are now 5 years warranty only.


----------



## delshay (Aug 7, 2022)

P4-630 said:


> No, all of them including the Pro SSD/NVMe's are now 5 years warranty only.



You would think that with all the improvements over the years it would remain the same or get better, but it's being cut in half. I wonder why, because this is a step backwards.


----------



## P4-630 (Aug 7, 2022)

delshay said:


> You would think with all the improvement's over the years it will remain the same or better, but it's being cut in half.  I wonder why, because this is a step backwards.


As far as I know the 2.5" 850 series was the last with 10 years.
I still have an 850 Pro 512GB with 3d v-nand (MLC).

Samsung now only makes drives with 3d v-nand (TLC) or 3d v-nand (QLC).
The TLC comes with a 5 year warranty
The QLC comes with a 3 year warranty (the cheapest)


----------



## lexluthermiester (Aug 7, 2022)

P4-630 said:


> The TLC comes with a 5 year warranty
> The QLC comes with a 3 year warranty (the cheapest)


This clearly shows a lack of confidence in QLC..


----------



## P4-630 (Aug 7, 2022)

lexluthermiester said:


> This clearly shows a lack of confidence in QLC..


They are fine for game storage, I just wouldn't use them as OS drive with windows pagefile...


----------



## ThrashZone (Aug 7, 2022)

Mussels said:


> So uh, why are you in charge of that?
> You aren't a moderator, or the OP of the thread - and you're giving out incorrect terrible advice that's not based in reality whatsoever.
> Take some time out to reflect on the attitude, before it's forced on you.


Hi,
A tad out of place; OneMoar originally said it, I just gave kudos for people not listening to his statement.

Not sure what terrible/bad advice you're referring to either.



ThrashZone said:


> Hi,
> All my os ssd's are this size
> I do have some personal files like music/ images/ programs on them to so not all personal files are on different storage drives but back ups are.
> None have out and out died except one linux mint killed long ago by never running trim on it seemed to be a crucial firmware bug clash with mint 17
> ...


Can't be this



ThrashZone said:


> Hi,
> Yep if you've ever used samsung magician and had it do the over improvising for you you'd see unallocated space after this is all the firmware needs.
> 
> I'll add the firmware also only needs zero activity to do it's work/ hibernation...


Or this


----------



## lexluthermiester (Aug 7, 2022)

P4-630 said:


> They are fine for game storage, I just wouldn't use them as OS drive with windows pagefile...


Sure, not a whole lot of data changing happens to a game install once installed. QLC drives make good and less expensive drives for data backups as well. Just not an OS/heavy use drive.


----------



## Mussels (Aug 11, 2022)

This thread might have needed a cleanup, but any perceived grumpiness got sorted out via PM's
Let's keep things nice, we all seem to have different views on SSD's but seriously - there are garbage drives out there, and a lot of garbage information too. Things change in the SSD world, common knowledge from the SLC days means nothing on a QLC drive.

*This thread has changed topic, because we got the answer to the OP's question (no) - but we're still discussing the actual concern of SSD wear*


I'll summarise the entire mess below into this: you can buy 1TB drives right now that range from 1,200TBW down to 80TBW.
I've only spent 15 minutes looking into this; I'm sure worse drives exist out there.




Smaller SSDs are at the highest risk of racking up TBW, because the simple fact of running out of room means you need to delete things and likely re-create them later. Even automated Windows tasks like the page file behave this way, with greater storage space helping alleviate re-writes.


I wrote some big annoyed rant a while back about Samsung's naming scheme and how a 980 Pro has half the TBW of a 970 Pro - every new release (Evo, Evo Plus, Evo Plus Plus, whatever - was it the SD cards that did that?) was fairly consistent, until now, when it went backwards.



Valantar said:


> Wait, what? What capacities are those 40TBW drives? That sounds scary low indeed - but remember that it's a lot harder to write that much data to a lower capacity drive in the first place, as the overhead over "files that are always there" on a system install is much lower. If they're 256GB or something, I wouldn't be worried even by that rating.



Modern SSDs went backwards in TBW, fast. There are a lot of 250GB-and-under drives with low TBW ratings, and some brands refuse to even advertise them, giving you "hours" instead.

Lower capacity drives often see more writes, not fewer - a PC gamer is going to delete games to install new ones far more often than a user who installs one and leaves it there.
Deleting to free up space, only to re-create later, is the worst-case scenario here.




*From here on I'm only comparing NVMe drives that are for sale today.
Ranked with Samsung as reference, then best to worst.*

Keep in mind, these are considered the top-tier premium drives by manufacturers





Sticking to just the 1TB models since every series has them:
980 Pro: 600
980: 600
970 pro: 1200
970 Evo plus: 600
970 evo: 600

980 (regular)




Evo plus range:




970 evo range




What about their QLC range, well known for being cheap, at the reduction of lifespan?

360TBW. Honestly, it's low but it's not terrible - and they get much more reasonable on the bigger models.




So if samsung, the king of consumer gaming SSD's are going backwards (the 980 series) what about other brands?


Team MP34:
Huh. That's actually impressive.




XPG's SX8200 Pro?
Not so bad, on par with samsung.




Crucial P2 series:
Basically, halve samsungs. Except for the 970 pro, quarter that.




Kingston's NV1 range:
Oh, making crucial look good here.




WD green? Oh no. Oh fuck no. 80 TBW for the TLC 960GB and 100TBW for 1TB and 2TB QLC

*From 1200TBW to 80TBW*.





In SATA SSDs, things are just depressing.
These are generally on par with small NVMe drives, but you can imagine these drives end up with data deleted and re-created far more often than bigger drives that can retain data more easily.
These 40TBW drives wouldn't last me a year as an OS drive.

Kingston A400:




Crucial BX500:




WD don't even list the TBW for their WD Green SATA drives; they know it's that bad. They state "up to 1 million hours" instead for all drives.


----------



## Valantar (Aug 11, 2022)

Mussels said:


> This thread might have needed a cleanup, but any perceived grumpiness got sorted out via PM's
> Let's keep things nice, we all seem to have different views on SSD's but seriously - there are garbage drives out there, and a lot of garbage information too. Things change in the SSD world, common knowledge from the SLC days means nothing on a QLC drive.
> 
> *This thread has changed topic, because we got the answer to the OP's question (no) - but we're still discussing the actual concern of SSD wear*
> ...


That's a bunch of useful info, thanks!

One objection though: I don't agree with smaller drives being prone to more writes as a general/linear rule. Why? Because as size goes down, they don't have room for further writes on top of the base software that stays installed at all times - whether that is an OS and basic software, or a game drive with the handful of games that are played regularly and kept. With current AAA game install sizes, larger SSDs also afford installing multiple large games in a way that simply isn't possible on a smaller drive even with deleting a lot of stuff. In short, this being a linear relation seems too simple to me. I would assume there's a kind of bell curve for SSD writes/lifetime, where the amplification due to space constraints that you describe most likely happens in the 500-1000GB range, but is unlikely to happen at 256GB, let alone 128GB - and a 2TB or larger drive might see a relatively lower amount of writes per capacity due to there being less need to delete stuff. Nobody running a 256GB SSD is going to regularly install 50+GB games, uninstall them soon after, and install another 50+GB game in their place. And if you're running a 128GB SSD, even as a game drive, you don't have the capacity to do a lot of writes at all, assuming you're not just stress testing the SSD with writes constantly.

Sadly all of my old low capacity drives are either in my NAS or hooked to consoles that don't allow for host writes to be read from them (why TrueNAS doesn't let you see total host writes though, that's weird), but my old 500GB 960 Evo system drive - which was paired with a 500GB SATA SSD as a game drive, but games were installed on both due to capacity - has seen ~31TB written since I bought it in late 2017, being used as a portable drive (with a not insignificant amount of writes) since I upgraded to a 2TB drive last year. That 2TB drive has seen 16TB written since April last year - and, of course, there's a big boost in writes early in a drive's lifetime. Normalizing for capacity, the 500GB drive has seen more writes/year than the 2TB drive (31/5/0.5 = 12.4 drive writes/year vs. 16/1.3/2 = 6.2), but in raw data the 2TB drive has seen ~2x the data written per year.
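The normalization in that last paragraph, spelled out as plain arithmetic (no assumptions beyond the figures quoted):

```python
# Drive writes per year = TB written / years in service / capacity in TB
dw_500gb = 31 / 5 / 0.5   # 960 Evo 500GB: ~31TB written over ~5 years
dw_2tb   = 16 / 1.3 / 2   # 2TB drive: ~16TB written over ~1.3 years

print(round(dw_500gb, 1))  # 12.4 drive writes/year
print(round(dw_2tb, 1))    # 6.2 drive writes/year
# Per unit of capacity the small drive worked ~2x harder, even though
# the 2TB drive absorbed ~2x the raw data per year.
```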


----------



## Mussels (Aug 11, 2022)

In my gaming group we have two users with 240GB SSD's
Both have to uninstall one game, to fit another (Be it CoD, WoW, Ark, or whatever) - they're my insight into this.
They eventually go back, adding a whole extra set of (large) writes that a bigger drive would never have had to do

You're using NAS drives in a NAS, which isn't exactly the same thing as Windows or game drives, where a user has to make regular decisions about deleting A to fit B, and then deleting B to fit C or to go back to A

I do agree it's only a potential situation, but as it's one i hear about regularly from those two gamers it does come to mind as a problem

One of them regularly downloads shows to his SSD, watches, deletes - and then rewatches a week or two later. He's managed to kill three WD greens this year alone, because he simply can't accept torrenting to it is sheer madness.


----------



## Valantar (Aug 11, 2022)

Mussels said:


> In my gaming group we have two users with 240GB SSD's
> Both have to uninstall one game, to fit another (Be it CoD, WoW, Ark, or whatever) - they're my insight into this.
> They eventually go back, adding a whole extra set of (large) writes that a bigger drive would never have had to do


I get that, but how often does that happen, and does that cancel out the "Sure, I've got room for this game too" effect of higher capacity drives? And, the lower capacity you go, the higher the chance of you not being able to clear up enough room to install something new without sacrificing something you'd really prefer to keep. Unless you have the privilege of extremely fast internet access, I guess.


Mussels said:


> You're using NAS drives in a NAS, which isn't exactly the same thing as windows or games drives, where a user has to make regular decisions about deleting A to fit B, and then deleting B to fit C or go back to A


Hm? I said I can't read out the host writes data from my NAS, so I sadly can't check those (older, lower capacity drives). The ones I wrote about were my previous and current main system drives, the older one of which now lives in an external enclosure as a "thumbdrive".


Mussels said:


> I do agree it's only a potential situation, but as it's one i hear about regularly from those two gamers it does come to mind as a problem
> 
> One of them regularly downloads shows to his SSD, watches, deletes - and then rewatches a week or two later. He's managed to kill three WD greens this year alone, because he simply can't accept torrenting to it is sheer madness.


Well, torrenting to any low grade SSD is madness, so ... that is what it is. The write amplification on those small blocks of data must be insane. At this rate it sounds like he'd be better off just ... buying an external HDD? Or an internal one if he's got the room? Heck, if he's going through SSDs at that rate he could probably save money in the long run just buying a NAS, even with their high base cost. But that's another issue entirely  This definitely sounds like a rather extreme edge case IMO.


----------



## Jism (Aug 11, 2022)

A 500GB drive here has 64GB of over-provisioning. That is used to extend the drive's life, in a way. The more you assign to it (just an unallocated chunk of space), the longer your drive will last.

Partitioning doesn't really help at all; it's the controller that decides which bit is stored where. These aren't traditional HDDs with potential bad sectors.


----------



## Valantar (Aug 11, 2022)

Jism said:


> A 500GB drive here has 64GB of overprovisioning. That is used to extend the drive's life: the more you assign to it (just an unallocated chunk of space), the longer your drive will last.
> 
> Partitioning doesn't help at all really; I think it's the controller that decides which bit is stored where. These aren't traditional HDDs with potential bad sectors.


I think you've got the OP's question backwards: they were asking if partitioning would increase SSD wear, not if it could be used to work around bad sectors. But that's pretty much settled (as you say, the controller puts data wherever it wants, so partitions only exist at the file system level); as @Mussels said above, the topic has changed to one of general SSD wear discussion.


----------



## birdie (Aug 11, 2022)

1. There's no relationship between the sectors your OS writes to and the sectors the SSD writes to. That direct relationship exists for HDDs and classic (non-SSD) USB flash drives; on an SSD there's always a translation layer in order to facilitate wear levelling.
2. What actually matters is the amount of free ("discarded") space on your drive: the smaller it is, the faster it will die. If you have 99.99% of your drive full and you constantly overwrite the remaining 0.01%, it will wear out a lot faster than if you keep at least 10% free.

In Windows, `defrag C: /L` (this is _not_ for defragmenting; the `/L` flag performs a retrim, i.e. tells the SSD which sectors/blocks are _actually_ free - this task should run in Windows by default, weekly or monthly, I don't remember now) will take care of that. In Linux you should really use the `discard` mount option.

Overprovisioning is a moot topic. My 1TB HDD has exactly 2MB for that: SMART bad sector reallocation. My 1TB SSD? I've no effing clue - no utility detects it or shows its size.  Just to feel safe, I don't fill up my SSD.


----------



## Shrek (Aug 11, 2022)

Isn't that why there is overprovisioning?

Are you suggesting one defrags an SSD?


----------



## Valantar (Aug 11, 2022)

Shrek said:


> Isn't that why there is overprovisioning?
> 
> Are you suggesting one defrags an SSD?


I think what they're referring to is TRIM.


----------



## Count von Schwalbe (Aug 11, 2022)

I have a 512GB game SSD. However, I only play 1-3 games at once, so I uninstall and reinstall regularly.


----------



## Valantar (Aug 11, 2022)

Count von Schwalbe said:


> I have a 512GB game SSD. However, I only play 1-3 games at once, so I uninstall and reinstall regularly.


What kind of scale are we talking for the rotation here? <10GB games? 10-50? >50? How long have you had the drive, and what is the total host writes count? More data is always good


----------



## lexluthermiester (Aug 11, 2022)

Shrek said:


> Are you suggesting one defrags an SSD?


NO!! Never do that! There is NO reason or purpose whatsoever for ANYONE to defrag an SSD!


----------



## Count von Schwalbe (Aug 11, 2022)

lexluthermiester said:


> NO!! Never do that! There is NO reason or purpose whatsoever for ANYONE to defrag an SSD!


To make someone not technical feel better.   

It does not damage it in any way, no?


----------



## lexluthermiester (Aug 11, 2022)

Count von Schwalbe said:


> It does not damage it in any way, no?


Are you serious? Yes, it does actually. Because of the way SSD's work and the fact that there is no practical reason or need to defrag, defragging an SSD only causes extra wear on the drive due to all the extra Erase/Write cycles. Defrag is absolute MURDER on an SSD. *Do NOT do it...*


----------



## Count von Schwalbe (Aug 11, 2022)

lexluthermiester said:


> Are you serious? Yes, it does actually. Because of the way SSD's work and the fact that there is no practical reason or need to defrag, defragging an SSD only causes extra wear on the drive due to all the extra Erase/Write cycles. Defrag is absolute MURDER on an SSD. *Do NOT do it...*


Ok, I meant manual defrag. How much harm does doing it once do?


----------



## Frick (Aug 11, 2022)

P4-630 said:


> They are fine for game storage, I just wouldn't use them as OS drive with windows pagefile...



Why not?


----------



## lexluthermiester (Aug 11, 2022)

Count von Schwalbe said:


> Ok, I meant manual defrag. How much harm does doing it once do?


Why would you want to? Access times are no different for data on an SSD regardless of the sectors they're stored at. With HDDs, it's important because of the performance variances that exist on the HDD platters. With SSDs it does not matter. 

However, to answer your question, you will invoke a LOT of extra erase/write cycles because of how data is shifted around. You will eat up erase/write cycles while gaining no benefit at all.



Frick said:


> Why not?


Because QLC durability and performance is lacking in comparison to TLC.


----------



## P4-630 (Aug 11, 2022)

Frick said:


> Why not?


Less TBW with shorter warranty.

I personally wouldn't.

3d v-nand (TLC) is better and more reliable for longer term.


----------



## Count von Schwalbe (Aug 11, 2022)

Valantar said:


> What kind of scale are we talking for the rotation here? <10GB games? 10-50? >50? How long have you had the drive, and what is the total host writes count? More data is always good


Ummm - Let me check on that this evening.



lexluthermiester said:


> Why would you want to? Access times are no different for data on an SSD regardless of the sectors they're stored at. With HDDs, it's important because of the performance variances that exist on the HDD platters. With SSDs it does not matter.
> 
> However, to answer your question, you will invoke a LOT of extra erase/write cycles because of how data is shifted around. You will eat up erase/write cycles while gaining no benefit at all.


Ah - I have done it before to satisfy some people with little computer knowledge. Their computer runs a lot faster if they spend some time watching a progress bar...


----------



## lexluthermiester (Aug 11, 2022)

Count von Schwalbe said:


> Ah - I have done it before to satisfy some people with little computer knowledge. Their computer runs a lot faster if they spend some time watching a progress bar...


For future reference, SSD's gain nothing but extra sector wear from defragmentation. Hard Drives yes, there are serious benefits. Solid State Drives, no.


----------



## Frick (Aug 11, 2022)

P4-630 said:


> Less TBW with shorter warranty.
> 
> I personally wouldn't.
> 
> 3d v-nand (TLC) is better and more reliable for longer term.





lexluthermiester said:


> Because QLC durability and performance is lacking in comparison to TLC.



Ok so assume I'll write as much data as I've done so far (which I won't), round it up to 2TB in 5 months, and it's rated for 120 TBW. I'll replace it for other reasons long before it dies from overuse, or it dies for other reasons. It was roughly half the price of the closest TLC drive. The vast majority of HDDs have/had a 2 year warranty, and we weren't worried about them, and the same is true for older SSDs. Also, backups is a thing.

The performance bit is true, but people talk about QLC drives like they can die at any point if you even think about putting an OS on them. Sure, other drives are better, but if the price difference is big enough and performance isn't important or if money is an issue QLC drives are just fine.
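To spell that arithmetic out, here's a back-of-the-envelope Python sketch (the numbers are the ones above; it ignores write amplification, so real NAND wear would be somewhat higher):

```python
# Rough endurance estimate: how long until the rated TBW is reached
# at the current write rate. All figures are from the post above.
tbw_rating = 120        # drive's rated endurance, terabytes written
tb_written = 2          # observed host writes so far
months_elapsed = 5

tb_per_year = tb_written * 12 / months_elapsed   # projected yearly writes
years_to_rating = tbw_rating / tb_per_year       # years until rated TBW

print(f"{tb_per_year:.1f} TB/year, ~{years_to_rating:.0f} years to rated TBW")
# → 4.8 TB/year, ~25 years to rated TBW
```

At that rate the warranty and the drive's other failure modes matter long before the NAND wears out.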


----------



## lexluthermiester (Aug 11, 2022)

Frick said:


> Ok so assume I'll write as much data as I've done so far (which I won't), round it up to 2TB in 5 months, and it's rated for 120TBw. I'll replace it for other reasons long before it dies from overuse, or it dies for other reasons. It was roughly half the price of the closest TLC drive. The vast majority of HDDs have/had a 2 year warranty, and we weren't worried about them, and the same is true for older SSDs. Also, backups is a thing.


That is only part of the problem, there is the performance penalty. As the drive fills up, write operations slow down as the drive struggles to keep up with the write-cycle process involved with QLC NAND.

My advice is the same now as it was when QLC was new: Do not use it for an OS drive.


----------



## Count von Schwalbe (Aug 11, 2022)

lexluthermiester said:


> For future reference, SSD's gain nothing but extra sector wear from defragmentation. Hard Drives yes, there are serious benefits. Solid State Drives, no.


PEBKAC is reduced...


----------



## Frick (Aug 11, 2022)

lexluthermiester said:


> That is only part of the problem, there is the performance penalty. As the drive fills up, write operations slow down as the drive struggles to keep up with the write-cycle process involved with QLC NAND.
> 
> My advice is the same now as it was when QLC was new: Do not use it for an OS drive.



So the ancient wisdom of not going over 90% stands? And it's not like average Joes (or 99% of users here, THAT WAS HYPERBOLE) do anything that qualifies as write-intensive, unless you all suddenly store databases on your C drives with many GBs worth of writes every day or constantly run out of RAM.


----------



## lexluthermiester (Aug 11, 2022)

Frick said:


> So the ancient wisdom of not going over 90% stands?


I would say yes. So if you are going to buy QLC, over purchase. If you want a 1TB drive, buy a 2TB drive and partition 90% of it for use. That should ease up the performance constraints. If you want 2TB, buy a 4TB drive, etc, etc.


Frick said:


> And it's not like average Joes (or 99% of users here, THAT WAS HYPERBOLE) do anything that qualifies as write-intensive, unless you all suddenly store databases on your C drives with many GBs worth of writes every day or constantly run out of RAM.


Fair points. However I know plenty of users who fill up their drives and wonder why performance takes a dive.


----------



## Wirko (Aug 11, 2022)

Mussels said:


> In Sata SSD's, things are just depressing.


It's not all that depressing. There's also the Teamgroup CX2 (1 TB, 800 TBW, 65 EUR, largest is 2 TB) and there's some older stuff from 2018: the Silicon Power Ace A55 (1 TB, 500 TBW, 70 EUR, largest is 2 TB) and the Patriot Burst non-Elite (960 GB, 835 TBW, 70 EUR). In fact, when I checked geizhals.eu for cheap high-TBW drives, I found far more SATA than NVMe models.


----------



## Wirko (Aug 11, 2022)

lexluthermiester said:


> Because QLC durability and performance is lacking in comparison to TLC.


The worst thing about QLC drives is the common perception that they're worse than everything else out there in every possible way; that they're the end result of the race to the bottom; that all components are of inferior quality because they are cheap; that they will probably fail long before their rated TBW is reached; that they will not fail gracefully and become read-only; that the data will evaporate after a couple weeks without power; and so on.
And I'm not saying the perception is wrong.



Valantar said:


> Well, torrenting to any low grade SSD is madness, so ... that is what it is. The write amplification on those small blocks of data must be insane.


That seems logical and is probably true; however, I'd very much like to see what the actual write amplification is in this real-world situation. Several tens of gigabytes would need to be downloaded via torrent, with the TBW data read out from the drive before and after. If the client is at all smart and doesn't write to disk very often, and if the SSD firmware is at all smart and doesn't flush the pSLC cache to QLC too often, then the write amplification may even be reasonable (not that I know what's reasonable).


----------



## Count von Schwalbe (Aug 11, 2022)

Count von Schwalbe said:


> Ummm - Let me check on that this evening.


Ok, the game drive is an OEM Intel Optane H10, pulled out of an Asus laptop. It had maybe 150GB written before becoming a game drive? 2 Windows installs and assorted software installs. 
Click to enlarge. 






I have installed the following on this drive.

Steam
Epic Games Store
2K launcher
Amazon Games Launcher
Ubisoft Connect
Gameloop (emulator)

Bioshock Remastered
PUBG
Prey
XCOM 2
Assassin's Creed: Origins + Curse of the Pharaohs DLC
Wolfenstein: The New Order
Crusader Kings 2
Just Die Already
Magic The Gathering Arena
Borderlands 3 
Heroes and Generals WW2
Age of Conquest 4
Jedi Knight 2: Jedi Outcast
Jedi Knight: Jedi Academy

All of those have been deleted except XCOM 2, PUBG, AC: Origins, Jedi Knight 2: Jedi Outcast, and Age of Conquest 4. I can't be arsed to figure out how much each game is, but you can if you like.


----------



## Valantar (Aug 11, 2022)

Count von Schwalbe said:


> Ok, the game drive is an OEM Intel Optane H10, pulled out of an Asus laptop. It has had maybe 150GB written before becoming game drive? 2 Windows Installs and assorted software install.
> Click to enlarge.
> 
> View attachment 257725
> ...


Cool, that's interesting. How long has it been in use (including as a laptop drive)?


----------



## Count von Schwalbe (Aug 12, 2022)

Valantar said:


> Cool, that's interesting. How long has it been in use (including as a laptop drive)?


Crud, meant to mention that. It was pulled out of the Asus when we got it; we had another of the same model with an existing Windows install and a bunch of software already, so we just swapped the drives when it was received (that ended badly, but that's another story). IIRC the laptop may have been refurbished, so I'm not sure how much had been written before. I have had it in use for around 6-8 months, in my Lenovo that is doing temporary gaming duty.


----------



## Valantar (Aug 12, 2022)

Assuming 6 months, that's 1.5 TB / 0.5 TB / 0.5 years = 6 drive writes/year, or pretty much exactly the same rate as my 2TB drive so far. But that's also just a rate of ~3TB written/year so far, which is a lot less data.
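Spelled out as a quick sketch (the helper name is mine; "TB written" here means SMART host writes, which ignores the drive's internal write amplification):

```python
# "Full drive writes per year": total host writes, normalised by capacity
# and by time in service. Useful for comparing drives of different sizes.
def drive_writes_per_year(tb_written: float, drive_tb: float, years: float) -> float:
    return tb_written / drive_tb / years

# The example above: ~1.5 TB written to a 0.5 TB drive over ~0.5 years
rate = drive_writes_per_year(1.5, 0.5, 0.5)
print(rate)  # → 6.0
```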


----------



## Mussels (Aug 12, 2022)

Shrek said:


> Isn't that why there is overprovisioning?
> 
> Are you suggesting one defrags an SSD?


Over-provisioning leaves some free space.
Because data is spread evenly over the SSD, if you leave 10% of the SSD unpartitioned, every single memory chip has 10% free (this is an oversimplification, but it's close enough - with proprietary tech and firmwares etc. we'll never know for sure).
That helps them out, since they can spread the wear a little more evenly.

Windows' defrag utility does TRIM on SSD's, so while you don't defrag 'em - the name and commands still use the word.


The worst thing that can happen is an almost full drive will have to spread files over everything - you could write 100MB, but that could require filling and erasing every single flash module. Like how a single 1KB file can take 4KB of space, if you had to split 100MB over the final 500MB of your SSD, you would use a LOT more writes than on an empty drive


(It's the whole layers shenanigans - a drive with 192 layers requires all 192 layers to get wiped and re-written for even 1KB of new data to be written)



As for the defragging of an SSD:
Oh god i hope you were joking


If not... you'll burn out the SSD. Fast.
And ummm... SSD's are fast because when they read, the data comes in from multiple chips at once, like in a RAID 0 array - defragging puts the data together, which would burn out the drive just to slow it down. Why do it?


----------



## Shrek (Aug 12, 2022)

Mussels said:


> As for the defragging of an SSD:
> Oh god i hope you were joking
> 
> 
> ...



My way of politely questioning birdie








						Are there wear problems from partitioning a SSD?
					

6% wear for ~46TB over 7-8 years.  ----  In terms of unallocated, I did some research before making my partitions, and I have 140mb unallocated MSR, not sure if that's relevant at all.  My original partition layout for the SSD and Windows was the following:   GPT  480mb - NTFS, label: Recovery...




					www.techpowerup.com


----------



## Count von Schwalbe (Aug 12, 2022)

Shrek said:


> My way of politely questioning birdie
> 
> 
> 
> ...


 He was saying that the Defrag utility in Windows runs TRIM on an SSD, right?


----------



## Wirko (Aug 12, 2022)

Mussels said:


> Over provisioning leaves some free space
> Because data is spread evenly over the SSD, if you leave 10% of the SSD unpartitioned every single memory chip has 10% free (this is an oversimplification, but it's close enough - with proprietary tech and firmwares etc we'll never know for sure)
> That helps them out, since they can spread the wear a little more evenly


That's wear leveling, and you're right, it's probably a bit less efficient when the drive is close to full.

However, there's write amplification to consider, too. The controller keeps track of everything - partially written blocks, empty blocks, blocks that can be erased. When it receives data to be written, it can optimise writes better if it has a large enough stock of blocks in a suitable state. If that's not the case, then it must sometimes do the most damaging thing - erase blocks that still hold mostly valid data, copying their contents elsewhere first - which leads to high WA.
(A page is 4-16 KB and is the minimum unit that can be written; a block, also called an "erase block", is around 1-8 MB and is the minimum unit that can be erased.)
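To put a number on it: WA is just the ratio of what the NAND actually programs to what the host asked to write. A toy sketch with made-up numbers:

```python
# Write amplification = bytes programmed to NAND / bytes the host wrote.
# A perfectly empty drive approaches WA ~1; a nearly full one can be much worse.
def write_amplification(host_mb: float, nand_mb: float) -> float:
    return nand_mb / host_mb

# Hypothetical near-full-drive case: the host writes 100 MB, but the
# controller first has to relocate 400 MB of still-valid pages out of
# the erase blocks it wants to reclaim, so the NAND sees 500 MB total.
wa = write_amplification(100, 100 + 400)
print(wa)  # → 5.0
```

Every unit of WA above 1 is wear the host never asked for, which is why free ("discarded") space matters so much.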



Mussels said:


> The worst thing that can happen is an almost full drive will have to spread files over everything - you could write 100MB, but that could require filling and erasing every single flash module. Like how a single 1KB file can take 4KB of space, if you had to split 100MB over the final 500MB of your SSD, you would use a LOT more writes than on an empty drive


There would still be 100 MB of writes but there would be potentially 500 MB of blocks to erase beforehand - a case of a very high WA.


Mussels said:


> (It's the whole layers shenanigans - a drive with 192 layers requires all 192 layers to get wiped and re-written for even 1KB of new data to be written)


The controller can, and does, add data to a partially written block, so this can't be true in all cases. The thing I don't understand well is when this is possible and when it isn't.


Mussels said:


> As for the defragging of an SSD:
> Oh god i hope you were joking


Windows, even 11, does sometimes defragment an SSD. But I'm sure it's done very conservatively, with the smallest amount of writes possible. If a file - for example one of the Windows registry files - is horribly fragmented, I mean tens of thousands of fragments, it still makes sense to reduce that. Not to perfection, but maybe to a couple hundred fragments. That should make file access less random and more sequential, and save users from testing the lower limit of random read/write IOPS every day.

Edit: Classic tools such as Windows 98 defragmenter and Norton Speed Disk remain useful too. Is there a greater satisfaction than watching the construction of a mandala wall of ordered colourful bricks where there was chaos before? And it can't cost more than 50¢ (per show) when an entire QLC SSD is about $70/TB.


----------



## Assimilator (Aug 12, 2022)

Count von Schwalbe said:


> He was saying that Defrag utility in Windows runs TRIM on a SSD, right?


Defrag on an SSD either TRIMs, or is a no-op. Given how smart current SSD controllers are, even the first is likely a no-op because the controller likely treats commands like TRIM that can affect block allocation, and therefore performance, as hints rather than pure "do this now" orders - the controller automatically runs TRIM itself at quiet times when the drive is at low load anyway.



Wirko said:


> Windows, even #11, does sometimes defragment an SSD. But I'm sure it's done very conservatively, with smallest amount of writes possible. If a file, for example one of Windows registry files, is horribly fragmented - I mean, tens of thousands of fragments - it still makes sense to reduce that. Not to perfection but maybe to a couple hundred fragments. It should make file access less random and more sequential and save users from testing the lower limit of random read/write IOPS every day.


Uh, no. Completely, utterly, horribly wrong. As has already been explained multiple times in this thread, *the operating system doesn't know whether files on an SSD are fragmented or not, because the SSD controller handles file allocation itself internally and does not need to expose that information externally.*


----------



## Shrek (Aug 12, 2022)

Why does a hard drive slow up when near full if it can just wait till things are quiet?


----------



## Assimilator (Aug 12, 2022)

Shrek said:


> Why does a hard drive slow up when near full if it can just wait till things are quiet?


Are you asking "why don't hard drives automatically defragment themselves"?

* hard drives never have a quiet time; they're so slow that they're always busy
* hard drive defragmentation is an extremely performance-intensive process that makes the drive's terrible performance far far worse, so it cannot happen at a quiet time because as per the previous point, there is no quiet time!
* hard drives were created in a time before multi-gigabyte files and the need to prevent said files from becoming fragmented
* hard drive defragmentation was thus invented by operating systems to overcome file fragmentation issues, not by hard drive manufacturers (who treated it as something that's not their problem)


----------



## trieste15 (Aug 12, 2022)

lexluthermiester said:


> For future reference, SSD's gain nothing but extra sector wear from defragmentation. Hard Drives yes, there are serious benefits. Solid State Drives, no.





Assimilator said:


> Defrag on an SSD either TRIMs, or is a no-op. Given how smart current SSD controllers are, even the first is likely a no-op because the controller likely treats commands like TRIM that can affect block allocation, and therefore performance, as hints rather than pure "do this now" orders - the controller automatically runs TRIM itself at quiet times when the drive is at low load anyway.
> 
> 
> Uh, no. Completely, utterly, horribly wrong. As has already been explained multiple times in this thread, *the operating system doesn't know that files on an SSD are fragmented or not because the SSD controller handles file allocation itself internally and does not need to expose that information externally.*











						The real and complete story - Does Windows defragment your SSD?
					

There has been a LOT of confusion around Windows, SSDs (hard drives), and ...




					www.hanselman.com
				




Storage Optimizer will defrag an SSD once a month *if volume snapshots are enabled*. This is by design and necessary due to slow volsnap copy on write performance on fragmented SSD volumes. It’s also somewhat of a misconception that fragmentation is not a problem on SSDs. If an SSD gets too fragmented you can hit maximum file fragmentation (when the metadata can’t represent any more file fragments) which will result in errors when you try to write/extend a file. Furthermore, more file fragments means more metadata to process while reading/writing a file, which can lead to slower performance.

When he says volume snapshots or "volsnap" he means the Volume Shadow Copy system in Windows. This is used and enabled by Windows System Restore when it takes a snapshot of your system and saves it so you can rollback to a previous system state. I used this just yesterday when I installed a bad driver. A bit of advanced info here - Defrag will only run on your SSD if volsnap is turned on, and volsnap is turned on by System Restore as one needs the other. You _could _turn off System Restore if you want, but that turns off a pretty important safety net for Windows.

@Wirko is correct.


----------



## Valantar (Aug 12, 2022)

Shrek said:


> Why does a hard drive slow up when near full if it can just wait till things are quiet?


Do you mean hard drives or SSDs? For HDDs, it's due to not having a large enough sequential free area on the disk to write to, forcing it into extremely slow random writes. For an SSD, it's due to not having enough free space to achieve a high level of parallelism, as well as having to erase and re-write blocks rather than just writing to empty or partially empty ones.


----------



## Shrek (Aug 12, 2022)

Assimilator said:


> Are you asking "why don't hard drives automatically defragment themselves"?



My bad... force of habit... I meant

Why does a *solid state* hard drive slow up when near full if it can just wait till things are quiet?


----------



## trieste15 (Aug 12, 2022)

Mussels said:


> And ummm... SSD's are fast because when they read, the data comes in from multiple chips at once like in a RAID 0 array - defragging puts them together, which would burn out the drive to slow it down. Why?


This doesn't happen.
The defrag utility doesn't rearrange actual blocks on the SSD; it rearranges fragments in the logical layout of the filesystem that the SSD's controller presents to the OS.

Writes and reads still occur needlessly, but they are still spread across multiple chips, transparently handled by the controller.


----------



## Wirko (Aug 12, 2022)

Assimilator said:


> the controller likely treats commands like TRIM that can affect block allocation, and therefore performance, as hints rather than pure "do this now" orders - the controller automatically runs TRIM itself at quiet times when the drive is at low load anyway.


Yes, those are just hints. Pretty much necessary hints because without them, the controller would have no way of knowing which blocks can be erased. It would have to analyse the file system metadata and try to determine that, and that's a dirty job. It somehow worked for me (Windows XP + SSD) and it somehow works for disks in RAID arrays (TRIM is not always usable in those).


Assimilator said:


> Uh, no. Completely, utterly, horribly wrong. As has already been explained multiple times in this thread, *the operating system doesn't know that files on an SSD are fragmented or not because the SSD controller handles file allocation itself internally and does not need to expose that information externally.*


Wow. 
Well, the bold part is true, with a small correction: the SSD controller does not understand the concept of files, it only knows logical blocks/sectors. It maps those to SSD blocks and pages, which is one source of fragmentation. That's unavoidable - it's what makes wear leveling possible. The controller may do some kind of defrag internally, or at least have algorithms to limit fragmentation, but we don't know that, and the OS doesn't know it either.
But there's the other part you didn't mention: if a file is already badly fragmented at the file system level, then it's at least as badly fragmented inside the SSD blocks and pages, right?


----------



## Dr. Dro (Aug 12, 2022)

Shrek said:


> Why does a *solid state* hard drive slow up when near full if it can just wait till things are quiet?



Because it will begin to exhaust the available programmable blocks. More programmable blocks available means more data can be written at once, which greatly increases write speed.


----------



## lexluthermiester (Aug 12, 2022)

trieste15 said:


> The real and complete story - Does Windows defragment your SSD?
> 
> 
> There has been a LOT of confusion around Windows, SSDs (hard drives), and ...
> ...


Did you actually READ that article? Context is important and you've missed some.


----------



## trieste15 (Aug 12, 2022)

lexluthermiester said:


> Did you actually READ that article? Context is important and you've missed some.


Do point out what I have missed - substantiate your assertion.


----------



## lexluthermiester (Aug 12, 2022)

trieste15 said:


> Do point out what I have missed - substantiate your assertion.


I'll take a crack at it:


> December 04, 2014


There you go. That article was relevant some 8 years ago, when SSD's were made primarily of SLC and MLC NAND and sector management was not always handled by the drive controller. The Windows Defrag service does NOT defrag SSD's any longer, and hasn't since shortly before that article was written.

Sector management and trim functionality have been handled exclusively by the SSD controller, universally, since about 2016. The OS (any of them) is no longer involved. This is an industry-standard aspect of SSD NAND controllers. Why? Because of NAND cell wear leveling: no OS addresses cell wear leveling, as no OS can see that information. All the OS can see is the S.M.A.R.T. data. Having the SSD controller manage sectors and wear leveling is a necessity for proper sector and cell wear management.
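For reference, the host-writes figures people have been quoting in this thread come from that same S.M.A.R.T. data. A minimal sketch of pulling the number out of smartctl's NVMe text output (the parsing and the sample line are illustrative, not from any post here; smartmontools reports "Data Units Written" where one unit is 1000 sectors of 512 bytes, i.e. 512,000 bytes):

```python
import re

# Extract total host writes (in decimal terabytes) from smartctl -a output
# for an NVMe drive. Assumes smartmontools' "Data Units Written" line format.
def host_writes_tb(smart_text: str) -> float:
    m = re.search(r"Data Units Written:\s+([\d,]+)", smart_text)
    if not m:
        raise ValueError("no Data Units Written line found")
    units = int(m.group(1).replace(",", ""))
    return units * 512_000 / 1e12  # one data unit = 512,000 bytes

# Hypothetical sample line as smartctl prints it:
sample = "Data Units Written:                 90,000,000 [46.0 TB]"
print(round(host_writes_tb(sample), 2))  # → 46.08
```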

Additionally, there are many who state clearly that no one should ever let Windows defrag or manually defrag an SSD. It is a fundamentally flawed notion.








						Enable or Disable Defragmentation for SSD in Windows 11/10
					

Learn how to disable defrag for SSD in Windows 11/10. In Windows 11/10, defragmentation for Solid State Disks is enabled.




					www.thewindowsclub.com
				




Even manufacturers CLEARLY state this;








						Should You Defrag an SSD?
					

There are many questions around whether to defrag an SSD. Head to Crucial for expert advice on solid state drives and whether defragging is necessary.




					www.crucial.com
				




Folks, do *NOT* defrag your SSD's. Doing so will put them in an early grave and will potentially cost you time, headache and money. Turn the Windows defrag service off and leave it off!


----------



## trieste15 (Aug 12, 2022)

lexluthermiester said:


> I'll take a crack at it:
> 
> There you go. That article was relevant some 8 years ago when SSD's where made primarily of SLC and MLC NAND and sector management was not always handled by the drive controller. The Windows Defrag service does NOT defrag SSD's any longer and hasn't since shortly before that article was written.
> 
> ...


The article I linked is not talking about sector management. It is talking about OS filesystem metadata fragmentation.

Please link a clear article describing changed filesystem design in Win 11 if you're so certain that metadata fragmentation is no longer an issue.


----------



## lexluthermiester (Aug 13, 2022)

trieste15 said:


> The article I linked is not talking about sector management. It is talking about OS filesystem metadata fragmentation.
> 
> Please link a clear article describing changed filesystem design in Win 11 if you're so certain that metadata fragmentation is no longer an issue.


I don't need to. I did provide at least one MANUFACTURER link that clearly states NOT to defrag your SSD. 








						Should You Defrag an SSD?
					

There are many questions around whether to defrag an SSD. Head to Crucial for expert advice on solid state drives and whether defragging is necessary.




					www.crucial.com
				





> To summarize, do not defrag an SSD
> The answer is short and simple — do not defrag a solid state drive. *At best it won't do anything, at worst it does nothing for your performance and you will use up write cycles.*


You're continuing to miss context. I'm not holding your hand on this matter. Figure it out for yourself or continue in ignorance.


----------



## trieste15 (Aug 13, 2022)

lexluthermiester said:


> I don't need to. I did provide at least one MANUFACTURER link that clearly states NOT to defrag your SSD.
> 
> 
> 
> ...


Then you'd better not use Windows, since Windows does defrag the SSD once a month (ish) to maintain filesystem access speed. The irony of losing context seems to be lost on you.

Go ahead and cripple your own system, just don't advise others to do the same. And learn to swallow ego, some people who are providing the right info aren't doing so to win, only to help others optimise systems properly.

Calm down, read my posts a bit slower and the extracted paragraphs. I doubt you even opened the article I linked. Cos that's what I used to do a couple of years ago when I was about to be proven wrong. 

-----

"I dug deeper and talked to developers on the Windows storage team and this post is written in conjunction with them to answer the question, once and for all

"WHAT'S THE DEAL WITH SSDS, WINDOWS AND DEFRAG, AND MORE IMPORTANTLY, IS WINDOWS DOING THE RIGHT THING?"​*It turns out that the answer is more nuanced than just yes or no, as is common with technical questions.*

The short answer is, yes, Windows does sometimes defragment SSDs, yes, it's important to intelligently and appropriately defrag SSDs, and yes, Windows is smart about how it treats your SSD."


----------



## lexluthermiester (Aug 13, 2022)

trieste15 said:


> Go ahead and cripple your own system, just don't advise others to do the same.


I disable the defrag service by default, *always*. Been doing that since the Windows XP days. Have never had a problem. Ever. I will give advice that I know works well as I see fit.


trieste15 said:


> And learn to swallow ego


Take your own advice, thank you.


trieste15 said:


> some people who are providing the right info aren't doing so to win


But you're not and I have already proven this fact.


trieste15 said:


> only to help others optimise systems properly.


You're not helping anyone.


trieste15 said:


> Calm down


Oh? I am perfectly calm. It's silly that you think otherwise. Quit with the personal remarks.


trieste15 said:


> I doubt you even opened the article I linked.


Oh? Then how did I quote that nearly 8-year-old article? I did in fact read it. I then disregarded it, as it is old information based on things that no longer apply.


trieste15 said:


> Cos that's what I used to do a couple of years ago when I was about to be proven wrong.


Interesting insight. One critical flaw in your logic: I am not wrong. Defragging SSDs made anytime during the past 6 or 7 years, for any reason, is a very bad idea. Full stop, end of discussion.

You have not proven your assertions. I have proven mine. Quit arguing.


----------



## Mussels (Aug 13, 2022)

Wirko said:


> That's wear leveling, and you're right, it's probably a bit less efficient when the drive is close to full.


Thanks, I forgot the name

The thing here is that DRAMless drives, TLC, QLC etc all make that 'bit less' bigger and bigger



Assimilator said:


> Defrag on an SSD either TRIMs, or is a no-op. Given how smart current SSD controllers are, even the first is likely a no-op because the controller likely treats commands like TRIM that can affect block allocation, and therefore performance, as hints rather than pure "do this now" orders - the controller automatically runs TRIM itself at quiet times when the drive is at low load anyway.
> 
> 
> Uh, no. Completely, utterly, horribly wrong. As has already been explained multiple times in this thread, *the operating system doesn't know that files on an SSD are fragmented or not because the SSD controller handles file allocation itself internally and does not need to expose that information externally.*


Defrag programs can look at the drive and see fragments - they literally see every file as a chaotic crazy ass mess

Windows is smart enough to know the difference and run TRIM instead, but I've seen that break on closed OSes in the past - cloning mech to SSD, and then Windows defragged the SSD

Here's Defraggler, an up-to-date, still commonly used program






Oh no my nvidia driver






An OS doing a cleanup of caches is not defragging them - that's deleting whatever existed, creating a new copy without unneeded fluff, and then writing the new version with fewer fragments - if space permits

It gives you optimise and defrag options, which uhhh... not great for beginners?






Oh god optimise is not TRIM




Well thats a start






Shrek said:


> Why does a hard drive slow up when near full if it can just wait till things are quiet?


1. Because they're a circle, and the outer edge of a circle moves faster, so it gives higher transfer speeds

2. Because you don't have space to write new files in a single solid line (contiguous file, mmm fancy words), so not only are the files in a slower part of the disk, they're more heavily fragmented too.

And... how long is it meant to wait?

If you're writing 5GB of small files and it's writing away at 10MB/s since they're fragmented and messy (yes, they can get slower than that), you could be waiting hours for the drive to 'not be busy' - they have like 64MB of cache, not enough to outlast that sort of thing.
SMR drives can be even worse.


Defragmenting is literally grabbing a file in its smaller pieces, and sticking it into an empty part of the drive with enough room to fit the entire file in one solid piece.
A drive's access time could be 20ms, so a 500-piece file could have 10,000ms of idle wait time for the drive to seek between its parts, and I've had large files (BD rips, 40GB+) with tens of thousands of fragments
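That seek arithmetic can be sketched in a couple of lines; the 20 ms seek time and 500-fragment file are just the illustrative numbers from the post, not measurements:

```python
# Back-of-envelope seek overhead for reading a fragmented file on a
# mechanical drive. The 20 ms default is an assumed average seek time.
def fragmented_read_overhead_ms(fragments: int, seek_ms: float = 20.0) -> float:
    """Idle time the drive spends seeking between a file's fragments."""
    return fragments * seek_ms

print(fragmented_read_overhead_ms(500))      # 10000.0 ms of pure seeking
print(fragmented_read_overhead_ms(40_000))   # a badly fragmented 40GB+ rip
```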

Over time defraggers got fancy enough to 'consolidate files', which meant finding already contiguous files and moving them to empty spaces near the start/outer edge of the disk - the worst offenders (small files) got a speed boost (the outer tracks are the fastest), with lots of free space for large files at the far end

Ultimate Defrag was great at visualising this
(Not my screenshots, source was blurry AF. blue/red/green is what we care about here)
Normal drive:




optimised drive
(You could set folder locations for performance and archive, aka outside and inside)


----------



## ThrashZone (Aug 13, 2022)

trieste15 said:


> The article I linked is not talking about sector management. It is talking about OS filesystem metadata fragmentation.
> 
> Please link a clear article describing changed filesystem design in Win 11 if you're so certain that metadata fragmentation is no longer an issue.


Hi,
Win-11 is not mainstream and has about the same number of luddites using it as us win-7 users


----------



## Mussels (Aug 13, 2022)

Oh, I figured out a better answer to the OP's original question, thanks to UDefrag's images below

A partition on a mech disk, gives the first partition the best performance. It's dividing a circle into rings.

An SSD is spread over various NAND chips - but a 10% sized partition doesn't say okay, C: is NAND 1 and D: gets 2 through 10 - it gives C: 10% of each NAND chip
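A toy sketch of that idea (not any real controller's algorithm - the block counts and write counts are made up): a wear-levelling mapping layer means even a "hot" 10-block partition ends up wearing the whole pool of physical blocks evenly.

```python
# Toy flash-translation-layer: logical writes from any partition get
# remapped to the least-worn physical block, so a small, heavily-written
# partition still spreads its wear across the entire drive.
PHYSICAL_BLOCKS = 200

wear = [0] * PHYSICAL_BLOCKS   # erase count per physical block
mapping = {}                   # logical block -> physical block

def write(logical_block: int) -> None:
    # Wear levelling: always pick the least-worn physical block.
    target = min(range(PHYSICAL_BLOCKS), key=lambda b: wear[b])
    wear[target] += 1
    mapping[logical_block] = target

# Hammer a "small partition" of just 10 logical blocks, 2,000 times.
for i in range(2_000):
    write(i % 10)

print(max(wear) - min(wear))   # 0 -- wear stays uniform despite a tiny hot partition
```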






Shrek said:


> My bad... force of habit... I meant
> 
> Why does a *solid state* hard drive slow up when near full if it can just wait till things are quiet?


Because it has to erase things to write - and it needs enough free space and time to erase a block to write the original + new data.
Do that enough times (especially with smaller files, wasting writes) and it's not a fast process

Using a RAM cache like with primocache can alleviate that stress, if you trust your stability




ThrashZone said:


> Hi,
> Win-11 is not mainstream and has about the same number of luddites using it as us win-7 users


W11 is mainstream - just because it's slow on things like Steam (where half the users are using Core 2 Duos and onboard graphics in Windows XP) doesn't mean it's not mainstream



trieste15 said:


> Then better don't use Windows, since Windows does defrag the SSD once a month (ish) to maintain filesystem access speed. The irony of losing context seems to be lost on you.
> 
> Go ahead and cripple your own system, just don't advise others to do the same. And learn to swallow ego, some people who are providing the right info aren't doing so to win, only to help others optimise systems properly.
> 
> ...


Your answers here are good - yes, fragmentation can still occur; Windows will defrag individual files if necessary under specific circumstances - but not by the same rules as defragging a mech drive

You guys don't need to fight over this one - although I've had similar arguments with lex on other topics. He's fine to say what he does on his systems and why, but often gives advice that, while fine for him, could have negative complications for others. I think it's a matter of defending his choices, and simply poor wording at times


----------



## ThrashZone (Aug 13, 2022)

Hi,
Only mainstream OS I see here is win-10


----------



## claes (Aug 13, 2022)

Not arguing, but was curious since actual Windows developers were suggesting that Windows defrags SSDs. From what I’ve read, it defragments the file system's metadata, which the controller is unaware of, due to limits in various file systems on how much metadata can be stored based on the block size. The premise seems logically sound to me, but I don’t know enough about file systems to understand why metadata would need to be defragmented on flash-based storage.

Looking into it further, I found this doc that says defrag will defragment an SSD as of 2021:








defrag
Reference article for the defrag command, which locates and consolidates fragmented files on local volumes to improve system performance.
docs.microsoft.com
				




But optimize-volume will skip defragmentation on SSDs?








Optimize-Volume (Storage)
Use this topic to help manage Windows and Windows Server technologies with Windows PowerShell.
docs.microsoft.com
				




This IT guy says it does happen, but only when the fragmentation threshold is reached, like that old Hanselman article suggests:





SOLVED: Does Windows Defrag SSD’s & What Is SSD Optimization? | Up & Running Technologies, Tech How To's
In this article we explain what defrag is and why it is generally a bad…
www.urtech.ca
				












There’s also this recent bug where Windows defragmented SSDs too often, but at this point I’m not even sure what’s being defragmented:








Windows 10 Alert: Defragger bug defrags SSD Drives too often
With the release of Windows 10 version 2004, the Windows Defragger has become a mess as it starts to defrag SSD drives too often, perform trim on non-SSD drives, and forgets when it last optimized a drive.
www.bleepingcomputer.com
				




But… why? Didn’t Windows 8 (IIRC) increase the maximum file size to some absurd number (200GB+?). When would the metadata become an issue when Windows has exceeded NTFS limitations in its implementation and RAIDs aren’t defragged?



Mussels said:


> Well thats a start


I’m even more lost now lol, never tried to defragment a SSD but good to know


Mussels said:


> but not by the same rules as defragging a mech drive


This makes sense to me, but why aren’t they documenting the difference? I saw a user on OCN whose SSD had 5% fragmentation, then ran Defraggler, but only saw a 1% write increase relative to drive space used, which might suggest a difference in how the mediums are handled, but I’ve never done this math on a HDD.


----------



## trieste15 (Aug 13, 2022)

Mussels said:


> Oh i figured a better answer for the OP's original question, thanks to UDefrags images below
> 
> A partition on a mech disk, gives the first partition the best performance. It's dividing a circle into rings.
> 
> ...


Thanks for being a voice of reason.
I had already decided to stand down and not bother fighting, right before seeing your post, but for the record I'm not backing down from sharing the knowledge I've gained when I myself was confused during the transition period from HDD to SSD. My first SSD was a Samsung 830, 256 GB.

I'm not advocating a user go and seek out a way to run a HDD style defrag on their SSD. I'm simply saying to trust that Windows 10/11 are designed by rational, competent engineers in the Storage team, and there's no need to "help them further" or "disable defrag, it's of the devil".

Just choose how regularly one prefers the default disk maintenance tool to run trim/retrim, and then forget it.

Going to see how primocache works, thanks! I'm currently using IMDisk to create a RAM disk as temp drive, dynamically allocating up to 4 GB for this purpose.


----------



## lexluthermiester (Aug 13, 2022)

trieste15 said:


> I'm simply saying to trust that Windows 10/11 are designed by rational, competent engineers in the Storage team, and there's no need to "help them further" or "disable defrag, it's of the devil".


The problem with that statement is that drive manufacturers advise against defragging an SSD. Windows itself warns against it. Common sense says those two examples are enough.



claes said:


> I’m even more lost now


As the old saying goes, when in doubt, don't.

What more does one really need?


----------



## trieste15 (Aug 13, 2022)

trieste15 said:


> Going to see how primocache works, thanks!


I think I don't need to use this app, as my current SSD is an NVMe, SX 8200 Pro.


----------



## claes (Aug 13, 2022)

lexluthermiester said:


> As the old saying goes, when in doubt, don't.
> 
> What more does one really need?


Sure, but Windows devs who built a custom NTFS with extended block size limitations and scheduled drive optimization tailored to SSDs think it’s sometimes necessary after analysis due to block size limitations. Why?

Obvs wouldn’t encourage anyone to do it manually but I’m not understanding something about the limits of block sizes (I think?) and am curious.


----------



## lexluthermiester (Aug 13, 2022)

claes said:


> Sure, but Windows devs who built a custom NTFS with extended block size limitations and scheduled drive optimization tailored to SSDs think it’s sometimes necessary after analysis due to block size limitations. Why?


Honestly, no idea. The rationale does not make sense.


claes said:


> Obvs wouldn’t encourage anyone to do it manually but I’m not understanding something about the limits of block sizes (I think?) and am curious.


I think you're referring to a problem that existed years ago but has since been solved.


----------



## claes (Aug 13, 2022)

With respect, did you follow the links in my post? I’m not sure it’s been resolved, nor do I have any reason to believe so, given that the defrag docs from last year explicitly say Windows defragments hard drives. Idk what this means — do you?

Because I don’t actually know, and really appreciate and am heartened that two frustrated forum users put their feelings aside over a pointless debate (that I’m invested in) to come to an agreement while demonstrating humility, I’m not going to elaborate. I know we have beef, and we both get a certain joy out of it, but I’d hope you’d do the same (unless of course you have a change log or something to point to!).


----------



## lexluthermiester (Aug 13, 2022)

claes said:


> With respect, did you follow the links in my post? I’m not sure it’s been resolved, nor do I have any reason to believe so, given that the defrag docs from last year explicitly say Windows defragments hard drives. Idk what this means — do you?


I did. What was stated in those documents is in direct conflict with known facts. Everyone who understands SSD NAND on even a basic level knows that NAND has a limited number of Program/Erase cycles. As Mussels stated correctly above, SSD controllers do not handle NAND as continuous memory, but instead break it up into striped sectors like a RAID array. It's actually more complicated than that, but you get the idea. As Windows itself warns against defragging, I'm inclined to lean toward the idea that Windows does not actually do so, by default. Even if it does, the extent to which data is reordered is likely minimal.

Due to the automatic sector allocation and wear-leveling all drives now do, if the Windows defrag service is actually running, it shouldn't be because of the extra wear such would impose on a drive. This conflicting and confusing information is exactly why I do not let Windows manage itself.

So where does that leave us all? We know NAND cells have a limited lifespan. We know that drive makers advise not allowing defrag operations and we know that Windows itself warns about it.


----------



## WarthogARJ (Aug 13, 2022)

In direct answer to the question:
NO.
There are ZERO wear problems due to partitioning an SSD.

An SSD does NOT use the file/partition system in order to write to any specific PLACE on the SSD.
Your data can be spread out all over the SSD.

There might be OTHER issues associated with a LOT of partitions, but wear is NOT one of them.

You can read the first answer in this link.
And if you aren't sure, then go to his references.
Disadvantages of partitioning an SSD?


----------



## Mussels (Aug 13, 2022)

ThrashZone said:


> Hi,
> Only main stream os I see here is win-10
> 
> View attachment 257923


Most popular and mainstream are not the same words, nor do they have the same meaning



trieste15 said:


> Going to see how primocache works, thanks! I'm currently using IMDisk to create a RAM disk as temp drive, dynamically allocating up to 4 GB for this purpose.


My advice: Set a 2GB delayed write cache, don't bother with a read cache.

If and when you write to the drives, it's got enough buffer for a mech drive to catch up without any issues, and on SSDs the delay reduces writes.
Primocache can also re-use deleted files (recurring files like a browser cache from the same website), so it can reduce the amount of writes done to an SSD over time. It's not large in the MB sense, but I've seen greater than 50% write reductions on my C: partition, just by delaying small singular writes while browsing the web


I'll try and get a screenshot, the stats reset when I reboot the PC so it's currently zeroed

D: drive (Users folders)
Almost nothing has been written, so nothing was cached
That 1.6% missing here is either still in the cache, or never needed to be written - if deferred blocks is at 0, then it's all been written and that's the amount of writes you've saved.






C: drive, after 30 minutes: Totally different story




29.1% reduction
10.2% still in RAM
Trimmed Blocks - 2141 - This is data that avoided getting written, because it was either on the disk already and 'un-deleted', or was no longer needed by the time the 60-second write delay had ended.




You could think of this like a log file being written to every second - by delaying the writes to every 60 seconds, you've saved 59 writes every minute it's running
(Reality isn't quite like that, as Windows has its own limited caching system, but it's a clear example)
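The log-file example above can be modelled as a toy write-back cache. This is only a sketch of the coalescing idea, not Primocache's actual implementation; the 60-second window and file name are illustrative:

```python
class DeferredWriteCache:
    """Toy delayed write-back cache: repeated writes to the same file
    within the delay window are coalesced into a single flush to disk."""

    def __init__(self):
        self.pending = {}   # path -> latest data, not yet on the drive
        self.flushes = 0    # writes that actually reached the drive

    def write(self, path: str, data: str) -> None:
        # A repeat write to the same file just replaces the pending copy.
        self.pending[path] = data

    def flush(self) -> None:
        # Called once per delay window (e.g. every 60 seconds).
        self.flushes += len(self.pending)
        self.pending.clear()

# A log written once per second for a minute...
cache = DeferredWriteCache()
for second in range(60):
    cache.write("app.log", f"line {second}")
cache.flush()
print(cache.flushes)   # 1 -- the drive saw one write instead of 60
```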


So depending how you view this: Wowee, I've saved 64MB of writes.
Or: Wowee, my OS drive will live 30% longer






> As they say on their website: You risk data loss or corruption if you have an unstable system, or suffer power outages. Use a smaller delay if you're worried, and a UPS if you can.


----------



## Shrek (Aug 13, 2022)

Mussels said:


> My advice: Set a 2GB delayed write cache, dont bother with a read cache.



Would this cause issues during a power cut or crash? (I have an SSD in mind as cache as opposed to RAM)

How does the Mac fusion drive technology work?


----------



## trieste15 (Aug 13, 2022)

What's the difference between deferred-writes and Windows' own "turn off write-cache buffer flushing" ?


----------



## chrcoluk (Aug 13, 2022)

Mussels said:


> This thread might have needed a cleanup, but any perceived grumpiness got sorted out via PM's
> Let's keep things nice, we all seem to have different views on SSD's but seriously - there are garbage drives out there, and a lot of garbage information too. Things change in the SSD world, common knowledge from the SLC days means nothing on a QLC drive.
> 
> *This thread has changed topic, because we got the answer to the OP's question (no) - but we're still discussing the actual concern of SSD wear*
> ...


Yep, Kingston have been going backwards; when I last bought some, I deliberately got old ones from eBay, as the newer models even back then were a clear downgrade.

But even the older ones I have now have issues.  I own 4 Kingstons; 2 are dead, no detection or anything. These drives barely got used - I tried to use them after they had been powered down for ages, and they were just dead.  One has a weird issue I have never heard of before, where it reports full when it's not, in an Xbox; it still does it after a fresh format of the filesystem.  It was used for a while to record game clips, so heavy write use.  One still works normally.

Currently all my Samsungs are OK. I have 2 really old 830s; they run way slower than new, but with no errors in active use or stalling issues on reads, suggesting functionally they're still fine, just with lower peak speeds.  The 850 Pro has had a ton of use, including time in my PS4 Pro as the main drive (so again clip recordings); it runs like new, full performance, no active errors.  The drive feels like it will last forever.  Also 3 860 Evos, 1 870 Evo, 1 970 Evo and 1 980 Pro.

Finally, I own 2 early-revision MX500s with the wear-cycling issue; both developed signs of failure after a while - one had only light use, the other moderate use in a laptop.

Also in the mSATA interface, I've got an 860 Evo and a Kingston SSD; I replaced the Kingston in my pfSense firewall a few months back after, for unknown reasons, the boot sectors got corrupted.

On DRAM-less SSDs: in a Tom's Hardware article before they first came to market, an industry insider warned that TBW would nosedive due to the mapping tables needing to be written constantly instead of being held in an onboard RAM cache, and the cells holding those cannot be remapped with wear levelling. This may have changed since, as it was an old article, but because of it I avoid DRAM-less SSDs.

Of the drives I own that did develop issues, none were anywhere near their rated endurance.  Only my two 830s have significant usage of their rated erase cycles.


----------



## Shrek (Aug 13, 2022)

Mussels said:


> View attachment 257639



Would it be simpler to also specify how many times a drive can be completely re-written?

That way the

Samsung 970 Pro becomes: 1200
Samsung 980 Pro becomes: 600
just a thought
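The conversion Shrek is suggesting is just TBW divided by capacity. A small sketch, assuming the quoted figures are for the 1 TB models (the capacities are an assumption, not stated in the post):

```python
def full_drive_writes(tbw: float, capacity_tb: float) -> float:
    """Express a rated terabytes-written figure as whole-drive write cycles."""
    return tbw / capacity_tb

# Figures from the post, assuming 1 TB drives:
print(full_drive_writes(1200, 1.0))   # 970 Pro -> 1200.0 cycles
print(full_drive_writes(600, 1.0))    # 980 Pro -> 600.0 cycles
```

The same TBW rating on a bigger drive means fewer full-drive cycles, which is why the two metrics can rank drives differently.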


----------



## Valantar (Aug 13, 2022)

Shrek said:


> Would it be simpler to also specify how many times a drive can be completely re-written?
> 
> That way the
> 
> ...


In a way, but TBW is more legible: it states an amount of data, which lends itself to simple comparisons with other amounts of data (something most computer users are at least somewhat familiar with), instead of requiring a conversion first. So TBW is more useful for reading against actual human use of the drive, while full drive writes is more legible if what you're after is some kind of nominally like-for-like comparison of drive endurance on a more abstract/less everyday level. That means TBW is far more useful of a metric overall, but not the be-all, end-all of describing drive endurance.


----------



## Count von Schwalbe (Aug 13, 2022)

Valantar said:


> In a way, but TBW is more legible: it states an amount of data, which lends itself to simple comparisons to other amounts of data (which is something most computer users are at least somewhat familiar with), instead of requiring a conversion to reach this. So, TBW is more useful for reading against actual human use of the drive, full drive writes is more legible if what you're after is some kind of nominally like-for-like comparison of drive endurance on a more abstract/less everyday level. Which means that TBW is far more useful of a metric overall, but not the be-all, end-all of describing drive endurance.


Yes and no - I see both sides. TBW is great for comparing drives of the same size, but if you are comparing drives of different sizes, it can provide a different perspective.


----------



## Valantar (Aug 13, 2022)

Count von Schwalbe said:


> Yes and no - I see both sides. TBW is great for comparing drives of the same size, but if you are comparing drives of different sizes, it can provide a different perspective.


To some degree, yes, and as we've discussed above the amount of writes you're likely to perform varies with drive size. Still, if what you're after is a simple, legible, relatable measure for what a drive can handle, TBW is much more of that than full drive write cycles, regardless of size.


----------



## claes (Aug 13, 2022)

lexluthermiester said:


> I did. What was stated in those documents is in direct conflict with known facts.


Sure, but “facts” that Microsoft themselves dispute and provide current documentation and bug fixes for.


lexluthermiester said:


> As Windows itself warns against defraging, I'm inclined to lean toward the idea that Windows does not actually do so, by default. Even if it does, the extent to which data is reordered is likely minimal.


With respect, your inclination doesn’t clarify the mixed messaging that MS presents. Why would they put out a fix for their SSD defragmentation feature if they didn’t have one that needed to be fixed?


lexluthermiester said:


> So where does that leave us all? We know NAND cells have a limited lifespan. We know that drive makers advise not allowing defrag operations and we know that Windows itself warns about it.


And yet MS thinks it’s sometimes a good idea to defragment metadata after analysis for some reason relating to block size limitations. Why? We can assume this is different from a HDD defrag — how so?

It’s a boring question, and probably doesn’t interest or affect the vast majority of MS users or forum goers. Maybe curiosity killed the cat, but I’m just out here trying to live dangerously.



Spoiler: Words



Honestly I’m over here talking about format and volume limitations and acknowledging that the controller is oblivious to these things, all the while pointing out MS’s documentation on the question — a conversation requires meeting a person where they’re at, not hand waving because you don’t have the answers, which I certainly don’t expect you to have (it’s not like MS is providing them, at least as far as my research went)… it’s okay not to know and to make your own decisions based on your own knowledge, but I am curious and would like to know more


----------



## Valantar (Aug 13, 2022)

claes said:


> And yet MS thinks it’s sometimes a good idea to defragment metadata after analysis for some reason relating to block size limitations. Why? We can assume this is different from a HDD defrag — how so?


If it is only defragmenting metadata then that's quite different from a regular HDD defrag - essentially not touching normal files at all, but just various file system data and possibly even lower level metadata. Should have a vastly smaller impact on drive endurance simply due to the much, much smaller amount of data being worked on.


----------



## lexluthermiester (Aug 13, 2022)

claes said:


> Sure, but “facts” that Microsoft themselves dispute and provide current documentation and bug fixes for.


They contradict themselves frequently. I really don't take microsoft documentation seriously.


claes said:


> With respect, your inclination doesn’t clarify the mixed messaging that MS presents.


Wasn't intending to. Don't care what microsoft presents in their documentation.


claes said:


> Why would they put out a fix for their SSD defragmentation feature if they didn’t have one that needed to be fixed?


Because it's microsoft. In that company, the situation of the right hand not knowing what the left hand is doing happens frequently. This has the effect of misinformation frequently being documented, making microsoft look like monkeys diddling a football. So "why" is nearly impossible to answer and equally irrelevant.


claes said:


> And yet MS thinks it’s sometimes a good idea to defragment metadata after analysis for some reason relating to block size limitations. Why?


Ok, that one has an answer. Fragmenting drive metadata can (but does not always) inhibit performance. However, that problem was solved many years ago when SSD makers moved all drive-internal functions to the drive NAND controller and baked it all into the drive firmware package. Operating systems no longer have any effect on such functionality.


claes said:


> We can assume this is different from a HDD defrag — how so?


That assumption would be correct. HDD defragmenting is a completely different task.


claes said:


> not hand waving because you don’t have the answers


It's not about not having the answers, it's about not wanting to spend the huge amount of time to hand-hold people to flesh those answers out. If people are not happy/satisfied/understanding concerning an answer provided, they need to do their own research and dig deeper for themselves.



Valantar said:


> If it is only defragmenting metadata then that's quite different from a regular HDD defrag


Very much so, and correct. In the case of older SSDs, that function was needed from time to time, but only rarely.


----------



## claes (Aug 13, 2022)

lexluthermiester said:


> Ok, that one has an answer. Fragmenting drive metadata can(but does not always) inhibit performance. However, that problem was solved many years ago when SSD makers moved all drive internal fuctions to the drive NAND controller and baked it all into the drive firmware package. Operating systems no longer have any effect on such functionality.


The drive controller is unaware of the filesystem format and volume attributes, which not only includes block size but also the metadata written by the file system. These are all determined by the OS and, to some extent (like block size and filesystem used), the user — much like the previous discussion on partitions. I think you are misunderstanding the question.


lexluthermiester said:


> That assumption would be incorrect. HDD defragmenting is a completely different task.


You are agreeing with my assumption, not showing that it’s incorrect. We are both assuming they are different tasks — I even provided data to attempt to demonstrate as much, which also shows that Windows can and does defragment SSDs.


lexluthermiester said:


> It's not about not having the answers, it's about not wanting to spend the huge amount of time to hand-hold people to flesh those answers out. If people are not happy/satisfied/understanding concerning an answer provided, they need to do their own research and dig deeper for themselves.


I have, and am attempting to. You’re uninterested, and that’s okay, but it’s not hand holding, it’s hand waving. Please consider your own advice.



lexluthermiester said:


> Very much so and correct. In the case of older SSD's, that function was needed from time to time, but only rarely.


I don’t think you can demonstrate that either of these claims are true — Windows does defrag modern SSDs, and we don’t actually know why or how often, at least in terms of the body of knowledge this thread has provided.


----------



## lexluthermiester (Aug 13, 2022)

claes said:


> You are agreeing with my assumption, not showing that it’s incorrect.


Sorry, see edit. I meant correct. HDD's do work a very different way.


claes said:


> I have, and am attempting to. You’re uninterested, and that’s okay, but it’s not hand holding, it’s hand waving. Please consider your own advice.


I'm not talking about you in particular, just people in general. However, the answers to the question of defragging an SSD are very simple: Don't do it. Why is easy to understand. Explaining microsoft's nonsensical documentation is not easy and I choose not to bother. This is not because I don't think you personally are worth it, only that the effort itself isn't worth it. Does that make sense?


claes said:


> and we don’t actually know why or how often


But we don't need to know why. It is a choice made on top of the mountain of poor choices microsoft has made concerning Windows configurations. Explaining them is pointless and not worth the effort & time. Helping people understand a better choice is simple and productive.


----------



## claes (Aug 14, 2022)

The same could be said of @Shrek ’s original question — no one needs to know why, but there’s an answer — why not pursue it? I’ve conceded as much, and we’ve both identified that it doesn’t matter to you, so I’m not sure what all the words are about.

To be sure, there’s an explicit reason why metadata fragmentation on an SSD matters to Microsoft — snapshot integrity. Windows defrags metadata to prevent excessive block sizes as a normal part of its operation. It’s actually incorrect to simply say “don’t do it,” even if for many users it might not matter, or the chance that it does is one in a million.

Idk I like the enlightenment and science, so technical questions are interesting to me. Feel free to generalize, but I’d like a technical answer, and am hoping fellow forum members can guide me to one. It’s clear you’re uninterested, and that’s okay, but please don’t be dismissive — if you don’t care, why bother?


----------



## Dr. Dro (Aug 14, 2022)

claes said:


> The same could be said of @Shrek ’s original question — no one needs to know why, but there’s an answer — why not pursue it? I’ve conceded as much, and we’ve both identified that it doesn’t matter to you, so I’m not sure what all the words are about.
> 
> To be sure, there’s an explicit reason why metadata fragmentation on an SSD matters to Microsoft — snapshot integrity. Windows defrags metadata to prevent excessive block sizes as a normal part of it’s operation. It’s actually incorrect to simply say “don’t do it,” even if for many users it might not matter, or the chance that it does is one in a million.
> 
> Idk I like the enlightenment and science, so technical questions are interesting to me. Feel free to generalize, but I’d like a technical answer, and am hoping fellow forum members can guide me to one. It’s clear you’re uninterested, and that’s okay, but please don’t be dismissive — if you don’t care, why bother?



I wonder if the metadata issue would be solved with a modern, self-healing file system. Windows is still incapable of booting from a ReFS volume, and it seems that Microsoft has decided to position it as a premium feature, too: instead of extending ReFS support to basic editions of Windows such as Home, they actually removed it from Pro and created a new SKU, Pro for Workstations, that adds support for this file system back.


----------



## Mussels (Aug 14, 2022)

trieste15 said:


> What's the difference between deferred-writes and Windows' own "turn off write-cache buffer flushing" ?


They're exact opposites.
Windows' write-cache buffer IS a cache, but it's a lot smaller, with no delay.




Shrek said:


> Would this cause issues during a power cut or crash? (I have an SSD in mind as cache as opposed to RAM)
> 
> How does the Mac fusion drive technology work?


Yes. A 60-second buffer would mean up to 60 seconds of data loss.
As for corruption, the risk is exactly the same as normal. NTFS and GPT, for example, have redundancy features to reduce corruption (shadow copies for "un-delete", not being as easy to corrupt as an MBR partition table, etc.).

No idea. I don't use a Mac.
The RAM cache is great for writing, to remove issues from writing more than a drive can handle.
Using an SSD is for a READ cache, to buffer frequently read files from multiple mechanical drives. Like a 500GB SSD caching the most common small files off your 40TB of Word documents or whatever.
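For the curious, that SSD-as-read-cache idea can be sketched as a tiny LRU cache in Python. This is a toy model with made-up names, not any real caching product: the "backing store" stands in for the mechanical drives, and the cache keeps only the hottest blocks.

```python
from collections import OrderedDict

class ReadCache:
    """Toy LRU read cache, modeling an SSD fronting slow mechanical drives."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()  # block id -> data, ordered by recency
        self.hits = 0
        self.misses = 0

    def read(self, block, backing_store):
        if block in self.cache:
            self.hits += 1
            self.cache.move_to_end(block)  # mark as most recently used
            return self.cache[block]
        self.misses += 1
        data = backing_store[block]  # slow path: hit the mechanical drive
        self.cache[block] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return data

hdd = {n: f"data-{n}" for n in range(10)}  # pretend mechanical drive
cache = ReadCache(capacity=3)
for block in [1, 2, 1, 3, 1, 4, 1]:  # block 1 is "hot"
    cache.read(block, hdd)
print(cache.hits, cache.misses)  # prints: 3 4
```

The hot block keeps getting served from the cache while rarely-touched blocks fall out, which is the whole point of putting an SSD in front of bulk storage.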





chrcoluk said:


> Yep, Kingston have been going backwards; when I last bought some, I deliberately got old ones from eBay, as the newer models even back then were a clear downgrade.
> 
> But even with the older ones I now actually have issues. I own 4 Kingstons, 2 are dead, no detection or anything; these drives barely got used, I tried to use them after they were powered down for ages and they were just dead. One has a weird issue I have never heard of before where it reports space full when it's not, in an Xbox; it still does it after a new format of the filesystem. It was used for a while to record game clips, so heavy write use. One still works normally.
> 
> ...


DRAMless drives use system RAM, via HMB (Host Memory Buffer).
Overall, it's a pretty decent alternative.
Some drives using HMB without DRAM actually have extremely good performance: "Second-fastest PCIe 3.0 SSD we ever tested"





Shrek said:


> Would it be simpler to also specify how many times a drive can be completely re-written?
> 
> That way the
> 
> ...





Valantar said:


> In a way, but TBW is more legible: it states an amount of data, which lends itself to simple comparisons to other amounts of data (which is something most computer users are at least somewhat familiar with), instead of requiring a conversion to reach this. So, TBW is more useful for reading against actual human use of the drive, full drive writes is more legible if what you're after is some kind of nominally like-for-like comparison of drive endurance on a more abstract/less everyday level. Which means that TBW is far more useful of a metric overall, but not the be-all, end-all of describing drive endurance.





Count von Schwalbe said:


> Yes and no - I see both sides. TBW is great for comparing drives of the same size, but if you are comparing drives of different sizes, it can provide a different perspective.



Eeeeeeehhh
TBW makes more sense. Otherwise users have to do the math themselves.

You get a hardware reading for your drive in software: okay, cool, it's got 17.8 rewrites left, and that figure is useless.
At least with TBW you know, literally, how many terabytes of data you have left to go. You don't sit there and think "ah yes, let's copy 0.67 SSDs' worth of data to this drive... oh wait, that's the wrong capacity, let me do the math again"
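In case anyone wants the arithmetic spelled out, here it is as a quick Python sketch. The figures are made up for illustration (a 600 TBW rating on a 1 TB drive is just an example, not any specific model):

```python
def full_drive_writes(tbw, capacity_tb):
    """Convert a rated TBW figure into equivalent full-drive rewrites."""
    return tbw / capacity_tb

def remaining_tbw(rated_tbw, written_tb):
    """What most users actually want: terabytes left under the rating."""
    return rated_tbw - written_tb

# Hypothetical 1 TB drive rated at 600 TBW:
print(full_drive_writes(600, 1))    # 600 full rewrites
print(full_drive_writes(600, 0.5))  # same TBW on a 0.5 TB drive: 1200
print(remaining_tbw(600, 17))       # 583 TB of headroom left
```

The second call is the comparison problem in a nutshell: the same TBW rating turns into a different rewrite count the moment the capacity changes, which is why rewrites don't compare across drives.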




claes said:


> Sure, but “facts” that Microsoft themselves dispute and provide current documentation and bug fixes for.
> 
> With respect, your inclination doesn’t clarify the mixed messaging that MS presents. Why would they put out a fix for their SSD defragmentation feature if they didn’t have one that needed to be fixed?
> 
> ...



You're pretty much correct on this.

MS had some VERY specific situations where super-fragmented files were an issue on an SSD, in enterprise or server-style setups.
They have a method to defrag them, optimised to do as little writing as possible.

That's it. It's not the same as defragging an entire disk, or treating it like a mechanical drive at all. It's defragging a single file in problematic situations only, and it's probably done in the background for us already but rarely gets triggered: "Does the file have 500+ fragments? No? Leave it TF alone."
If yes, make a copy into contiguous free space as reported by the drive, and mark the old locations for TRIM to clear up.
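That trigger logic can be sketched in a few lines of Python. The 500-fragment threshold and every name here are illustrative guesses, not Microsoft's actual implementation; a real defragmenter copies data into contiguous free space and TRIMs the old locations, while this model just shows the decision and its result:

```python
def maybe_defragment(extents, free_start, threshold=500):
    """extents: list of (start, length) fragments of one file.

    If the file has `threshold` or more fragments, rewrite it as one
    contiguous extent starting at free_start; otherwise leave it
    untouched, since the extra writes would cost SSD endurance for
    no benefit.
    """
    if len(extents) < threshold:
        return extents  # leave it alone
    total = sum(length for _, length in extents)
    return [(free_start, total)]  # old extents would then be TRIMmed

# A file shredded into 600 one-block fragments gets consolidated:
shredded = [(i * 10, 1) for i in range(600)]
print(len(maybe_defragment(shredded, free_start=100000)))  # 1

# A file with a handful of fragments is left alone:
print(len(maybe_defragment([(0, 8), (50, 8)], free_start=100000)))  # 2
```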


----------



## gdp77 (Aug 14, 2022)

Shrek said:


> That is my point... if I just work on one end of my desk, I'll wear out that section long before the desk would wear out if I were to use the whole area.



Your analogy is wrong. You may see 2 or more partitions on your SSD, but that is only what the SSD's controller shows you. In reality, the SSD's on-board controller doesn't care and will use all of the SSD's cells evenly.
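A toy Python model of what the controller's flash translation layer does (every name here is invented for illustration; real FTLs are vastly more complex): partitions only carve up *logical* addresses, and each write gets redirected to whichever physical block has the least wear.

```python
class FlashTranslationLayer:
    """Toy model of an SSD controller's logical-to-physical mapping."""

    def __init__(self, physical_blocks):
        self.erase_counts = [0] * physical_blocks
        self.mapping = {}  # logical block -> physical block
        self.free = set(range(physical_blocks))

    def write(self, logical_block):
        old = self.mapping.get(logical_block)
        # Pick the least-worn free physical block (simplified wear leveling).
        target = min(self.free, key=lambda b: self.erase_counts[b])
        self.free.remove(target)
        self.mapping[logical_block] = target
        self.erase_counts[target] += 1
        if old is not None:
            self.free.add(old)  # old copy is invalidated and reclaimed

ftl = FlashTranslationLayer(physical_blocks=8)
for _ in range(80):
    ftl.write(logical_block=0)  # hammer one tiny "partition"
print(ftl.erase_counts)  # wear spreads evenly across all 8 physical blocks
```

Even though the host rewrote a single logical block 80 times, the erase counts end up balanced across the whole device, which is the desk analogy failing: the "desk" quietly swaps in a fresh surface under your elbow on every write.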


----------



## lexluthermiester (Aug 14, 2022)

gdp77 said:


> Your analogy is wrong. You may see 2 or more partitions on your SSD, but that is only what the SSD's controller shows you. In reality, the SSD's on-board controller doesn't care and will use all of the SSD's cells evenly.


We've been over that. You've missed the conversation a bit.


----------



## Valantar (Aug 14, 2022)

Mussels said:


> Eeeeeeehhh
> TBW makes more sense. Otherwise users have to do the math themselves.
> 
> You get a hardware reading for your drive in software: okay, cool, it's got 17.8 rewrites left, and that figure is useless.
> At least with TBW you know, literally, how many terabytes of data you have left to go. You don't sit there and think "ah yes, let's copy 0.67 SSDs' worth of data to this drive... oh wait, that's the wrong capacity, let me do the math again"


Yep, exactly this. If your denomination requires math to become understandable in a real-world usage situation, it is not a generally legible denomination. And that might be fine for a bunch of use cases, but not this one. It would be similar to a car's gas tank not measuring from full to empty, but instead zeroing out every time it was filled, and then counting the consumed liters/gallons since filling. Like ... sure, that's useful in a way, but now I have to know how large my tank is and then calculate how much is actually left to get the most immediately useful information from what I'm being shown.


----------



## Count von Schwalbe (Aug 14, 2022)

I guess I am weird then...

I see it like:
TBW=This many gallons of gas to drive from here to Chicago 
TDW*=This many tanks of gas to drive from here to Chicago

I guess TBW is more useful for smaller writes and TDW for larger ones. 

*Total Drive Writes


----------



## Mussels (Aug 15, 2022)

Count von Schwalbe said:


> I guess I am weird then...
> 
> I see it like:
> TBW=This many gallons of gas to drive from here to Chicago
> ...


No, it's more like being measured by minutes at the pump station, when every pump has a different flow rate.
Because it's only useful for a static drive size/one location; the moment you want to compare to anywhere else, you can't.

It'd only be useful if every time you used the drive you filled it completely. No one ever uses a drive that way; we write bytes at a time, and TB covers the entire range (thanks to decimal points).


----------



## Zareek (Nov 10, 2022)

I used to set up my page file on the secondary hard drive, thinking it boosted performance. With my first SSD, I still kept the page file on a spinner; I was worried about all the extra writes to my write-limited, super expensive storage. I figured out pretty quickly that if you have a reasonable amount of RAM, the page file barely gets used.

With my current boot drive, a 1TB Samsung 970 EVO Plus, I never moved the page file. It's nearly three and a half years old now, and it is the primary application drive. Only games get stored on the game drive, and random stuff like program installers gets saved to another SSD. In 3 years and 4 months, I've written 17 TB. Yeah, 17 TB, and this PC is on 12 to 16 hours a day, every day, and is used a lot! I use it for everything but work.

My point is the drive will die from something else or be obsolete a LONG TIME before I reach the 1200 TBW. The page file isn't an issue. In case you are wondering, for the first two years I had 16 GB of RAM, and now I have 32 GB. I don't baby the drive in any way; the page file is set to Windows 10 defaults, automatic sizing on the boot drive. It defaults to something between 16 MB and almost 5 GB.


----------

