I think I should patent "Continuous Defragmentation". Method and apparatus and all. But, as always, too many people have thought of useful things before I did. Such as these guys.
What I mean is: the file system driver should gather statistics about the files being read. It would detect and count situations where a sequential read has to jump all over the disk because of fragmentation, and temporarily store (in RAM) some metadata about these problematic parts of files. That operation would consume very little time. Some time later, perhaps when the disk is idle, a process akin to a garbage collector would defragment only those specific parts of specific files, prioritizing the fragments that are read most often, and trying to reduce free-space fragmentation along the way.
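Purely as an illustrative sketch of that idea (the names `FragmentTracker`, `record_read`, `idle_defrag_pass`, and so on are all invented here, and a real version would live inside the file system driver, not in user space):

```python
from collections import defaultdict

class FragmentTracker:
    """Counts how often a sequential read of a file has to seek because
    the next extent is not physically adjacent to the previous one."""

    def __init__(self):
        # file path -> number of non-contiguous jumps observed during reads
        self.jump_counts = defaultdict(int)

    def record_read(self, path, extents):
        """extents: list of (start_block, length) pairs in read order."""
        for (start, length), (next_start, _) in zip(extents, extents[1:]):
            if start + length != next_start:   # a seek was required here
                self.jump_counts[path] += 1

    def hottest(self, n=10):
        """Files whose fragmentation hurt reads the most, worst first."""
        ranked = sorted(self.jump_counts.items(),
                        key=lambda kv: kv[1], reverse=True)
        return [path for path, _ in ranked[:n]]


def idle_defrag_pass(tracker, defragment_file, disk_is_idle):
    """Garbage-collector-style pass: only runs while the disk is idle and
    only touches the files that actually caused seeks."""
    for path in tracker.hottest():
        if not disk_is_idle():
            break                        # yield to real I/O immediately
        defragment_file(path)            # e.g. rewrite extents contiguously
        del tracker.jump_counts[path]    # stats start fresh for this file
```

The point of the design is that the hot path (counting jumps during reads) is nearly free, while the expensive work is deferred, interruptible, and aimed only at fragments that demonstrably cost seeks.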
I would of course throw in some advanced (but still dumb) statistics, to make the whole process somewhat adaptive depending on what's going on on the disk. Voilà, artificial intelligence!
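One cheap "dumb statistic" that would make it adaptive, again just as an assumed example building on the sketch above, is to decay the jump counts over time so the ranking tracks current access patterns instead of all-time history:

```python
DECAY = 0.5  # hypothetical tuning knob: halve each count per pass

def decay_counts(tracker):
    """Exponentially age the jump counts so files that were hot last
    week but are no longer read fall out of the defrag queue."""
    for path in list(tracker.jump_counts):
        tracker.jump_counts[path] *= DECAY
        if tracker.jump_counts[path] < 1:
            del tracker.jump_counts[path]   # forget cold files entirely
```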
Diskeeper used to outright prevent fragmentation: it made sure files were always written to disk in one contiguous piece. Unfortunately, they no longer offer it, and all its features are now rolled into their enterprise products.
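Applications can't reproduce that driver-level behavior themselves, but a loosely related user-space trick on Linux is to preallocate the full file size before writing, which gives the filesystem a chance to reserve one contiguous extent. A minimal sketch, best-effort only, with error handling omitted:

```python
import os

def write_contiguously(path, data):
    """Preallocate the whole file before writing so the filesystem can
    try to place it in a single extent (Linux; no guarantee of contiguity)."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.posix_fallocate(fd, 0, len(data))  # reserve the space up front
        os.write(fd, data)
    finally:
        os.close(fd)
```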