
Critical Flaw in Windows 10 Could Corrupt Your Hard Drive

Joined
Dec 16, 2017
Messages
2,919 (1.15/day)
System Name System V
Processor AMD Ryzen 5 3600
Motherboard Asus Prime X570-P
Cooling Cooler Master Hyper 212 // a bunch of 120 mm Xigmatek 1500 RPM fans (2 ins, 3 outs)
Memory 2x8GB Ballistix Sport LT 3200 MHz (BLS8G4D32AESCK.M8FE) (CL16-18-18-36)
Video Card(s) Gigabyte AORUS Radeon RX 580 8 GB
Storage SHFS37A240G / DT01ACA200 / ST10000VN0008 / ST8000VN004 / SA400S37960G / SNV21000G / NM620 2TB
Display(s) LG 22MP55 IPS Display
Case NZXT Source 210
Audio Device(s) Logitech G430 Headset
Power Supply Corsair CX650M
Software Whatever build of Windows 11 is being served in Canary channel at the time.
Benchmark Scores Corona 1.3: 3120620 r/s Cinebench R20: 3355 FireStrike: 12490 TimeSpy: 4624
Microsoft has MORE than its share of problems, but the NTFS file system isn't one of them.
I think NTFS is due for some upgrades, honestly. Though I agree that for the majority of users, especially in normal use cases, it's reliable enough.

A while back Ars Technica wrote an interesting article about that:

 

Aquinus

Resident Wat-man
Joined
Jan 28, 2012
Messages
13,171 (2.81/day)
Location
Concord, NH, USA
System Name Apollo
Processor Intel Core i9 9880H
Motherboard Some proprietary Apple thing.
Memory 64GB DDR4-2667
Video Card(s) AMD Radeon Pro 5600M, 8GB HBM2
Storage 1TB Apple NVMe, 4TB External
Display(s) Laptop @ 3072x1920 + 2x LG 5k Ultrafine TB3 displays
Case MacBook Pro (16", 2019)
Audio Device(s) AirPods Pro, Sennheiser HD 380s w/ FIIO Alpen 2, or Logitech 2.1 Speakers
Power Supply 96w Power Adapter
Mouse Logitech MX Master 3
Keyboard Logitech G915, GL Clicky
Software MacOS 12.1
Now that is a flaw. Copying a file without actually copying the file? Flawed design indeed.
CoW is not a design flaw, it is literally a design decision. If you have a 10GB file and you copy it (not change it, copy it), why do you need to store a second copy of the data if the data didn't change? It's a waste of space. The file reference and metadata are copied, but the actual data isn't unless one of those "files" changes. Only then will it become a different copy, but until that happens, there is absolutely no reason why you can't share that data under the hood. It's almost like you're advocating for wasting disk space.
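To make that concrete, here's a minimal sketch of a CoW copy in Python, assuming Linux and a reflink-capable filesystem like btrfs or XFS (FICLONE is the real ioctl from <linux/fs.h>; the file names are made up):

```python
import fcntl

# FICLONE ioctl request number from <linux/fs.h>. It asks the filesystem
# to clone the source file's extents into the destination, so both files
# share the same on-disk blocks until one of them is written to.
FICLONE = 0x40049409

def reflink_copy(src_path: str, dst_path: str) -> None:
    """Create a copy-on-write clone of src_path at dst_path."""
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        fcntl.ioctl(dst.fileno(), FICLONE, src.fileno())

# Cloning a 10GB file this way finishes almost instantly and allocates no
# new data blocks; new blocks are only written when either copy changes.
# On a filesystem without reflink support (e.g. ext4) this raises OSError.
reflink_copy("big_file.bin", "big_file_clone.bin")
```

(`cp --reflink=always` does the same thing from the shell.)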
Those are nice features, however NTFS has similar features of its own.
NTFS only supports LZ77, isn't configurable, and doesn't support checksumming files at the FS level.
Interesting, but not a flaw so much as a deliberate engineering choice.
Now this is what I would call a poor engineering choice, not CoW.
You kinda failed to sell your argument there.

Microsoft has MORE than its share of problems, but the NTFS file system isn't one of them.
Do you have trouble reading or something? Do I need to quote myself again?
A filesystem is pretty useless if it's not stable, reliable, or error-proof, so that's a pretty low bar. The benefits come from all the features that these file systems implement beyond doing what any FS should be capable of doing. Just because Windows can't run them doesn't mean NTFS is better. It means that Microsoft shoves it down your throat without giving you the option.
 
Joined
Jul 5, 2013
Messages
27,840 (6.68/day)
CoW is not a design flaw, it is literally a design decision. If you have a 10GB file and you copy it (not change it, copy it), why do you need to store a second copy of the data if the data didn't change? It's a waste of space.
That's an opinion. Stupid design idea. What happens when a change is made to one of the "copies"? Does the actual copy function happen then? One way or the other an actual copy function is going to happen. For it not to happen on command is a design flaw, not a design decision. Technology needs to follow user commands. If I command a file copy, I expect the file to be copied, bit for bit, as commanded at the moment I command it. A file system should NEVER have the "option" to do something other than what a user has instructed.
It's almost like you're advocating for wasting disk space.
"Wasting" is a subjective term. What I expect from my technology is that it follows my commands as I issue them, the way that I issue them and does NOT take liberties interpreting what I "might" mean or need in future or whatever. So no, I care not about "wasting disk space". It's my space to waste if I so choose.
NTFS only supports LZ77, isn't configurable, and doesn't support checksumming files at the FS level.
Incorrect. NTFS has, since the days of NT 4.0, supported error correction, which requires a form of checksum calculation. It might be MS's private brand of it, but it is there.
Do you have trouble reading or something? Do I need to quote myself again?
No you don't. What you need to do is re-read your own statements and think very carefully about the context you are attempting, and failing, to convey.

You're still not selling your point. Would you like to continue trying?
 

Aquinus

Resident Wat-man
That's an opinion. Stupid design idea. What happens when a change is made to one of the "copies"? Does the actual copy function happen then? One way or the other an actual copy function is going to happen. For it not to happen on command is a design flaw, not a design decision. Technology needs to follow user commands. If I command a file copy, I expect the file to be copied, bit for bit, as commanded at the moment I command it. A file system should NEVER have the "option" to do something other than what a user has instructed.
It's a filesystem, not a block device. I would expect it to do what I ask it to do the most efficient way possible. There is zero reason to duplicate data on the filesystem. You're advocating for wasting space, it's really that simple.
"Wasting" is a subjective term. What I expect from my technology is that it follows my commands as I issue them, the way that I issue them and does NOT take liberties interpreting what I "might" mean or need in future or whatever. So no, I care not about "wasting disk space". It's my space to waste if I so choose.
It's wasting space because there is absolutely no good reason to duplicate data on the same disk. As the consumer of that data, you shouldn't care where on the disk it comes from. You should only care that it's there and is valid. Once again, you're advocating for duplicating data, which is wasteful if it's on the same disk. There aren't any ifs about this, it simply is. If it were on another disk, that'd be a different story, but it's not. Structural sharing when data is immutable is never a bad thing.
Incorrect. NTFS has, since the days of NT 4.0, supported error correction, which requires a form of checksum calculation. It might be MS's private brand of it, but it is there.
Wat? CRC errors you hear about in Windows aren't from the filesystem but from the disk during transfers. NTFS does not store checksums.
No you don't. What you need to do is re-read your own statements and think very carefully about the context you are attempting, and failing, to convey.

You're still not selling your point. Would you like to continue trying?
Says the guy who doesn't have his facts straight and who has an ass backwards view of copy on write. It's almost like you've never used a sparse disk image before. Maybe we should just store every zero for empty space in a disk image because you asked for a copy. :kookoo:
 
Joined
May 19, 2019
Messages
46 (0.02/day)
System Name Home brew
Processor Intel i7 8700k 4.5g
Motherboard Asus RoG Strix Z370 H
Cooling Noctua NH-U12 S/LG window air conditioner
Memory G Skill Ripjaws V ddr 4 3200 32 gb.
Video Card(s) Sapphire Vega 64
Storage WD 240gb 3D nand SSD/WD 1tb 3D nand SSD
Display(s) Acer XF240 H 144hz Freesync
Case Rosewill Viper Z mid tower
Audio Device(s) Motherboard/EV 740 2 way & pwrd woofer/Logitech wireless headset #?
Power Supply EVGA supernova 850 G2
Mouse Cougar Revenger S
Keyboard Havit KB390L
Software Windows 10
Judging from everyone's replies, I'm not sure I want to install Windows 10 on all my computers. I use Windows 10 regularly, but my old workhorse "B-52" browsing machine is still on Windows 7. Aside from a few hardware updates and security (for which I use 3rd-party security programs), I find gliding along with Windows 7 still pretty convenient. However, I do get perturbed about how often Windows keeps updating itself. I know Microsoft works with the Govt. and I sometimes find Windows 10's ability to spy a little unnerving. Just what do they look for? I know all copies of Windows have had spy tools implanted since NT, but to what degree I have no clue.
I wonder whether updates to these spy programs have a direct or indirect impact as Windows keeps updating.
These are the spy programs I've been notified of and have kept tabs on. I don't know what they do. The last notice I think was in 2016.
NSAKEY_key2
ADVAPI32
Does anyone have any clues about how they pass information along when asked to do so? If I uninstall them, what harm could it do? I could just say, if asked, "Oh, it looked like a form of malware so I uninstalled it." On the other hand, could it lock Windows if I do?
 
Joined
Jul 5, 2013
Messages
27,840 (6.68/day)
It's a filesystem, not a block device. I would expect it to do what I ask it to do the most efficient way possible. There is zero reason to duplicate data on the filesystem. You're advocating for wasting space, it's really that simple.
That is a series of opinions. And you're welcome to them.
It's wasting space because there is absolutely no good reason to duplicate data on the same disk.
Also an opinion, especially if the copied data is going to be modified. Wasted space or not, I expect the copy function to actually make a copy of whatever I commanded it to copy. That is simple common sense and a functional requirement.
Wat? CRC errors you hear about in Windows aren't from the filesystem but from the disk during transfers. NTFS does not store checksums.
Your citation isn't one. SuperUser is a public forum and does not qualify as a citable source. But the following does. Do read.
Says the guy who doesn't have his facts straight and who has an ass backwards view of copy on write. It's almost like you've never used a sparse disk image before. Maybe we should just store every zero for empty space in a disk image because you asked for a copy.
Typical straw-man argument. You need a mirror.
 

Aquinus

Resident Wat-man
That is a series of opinions. And you're welcome to them.
Also an opinion, especially if the copied data is going to be modified. Wasted space or not, I expect the copy function to actually make a copy of whatever I commanded it to copy. That is simple common sense and a functional requirement.
It's not an opinion. It's a waste of space because there is no good reason to copy data multiple times. You're free to give me good examples of why you want to store the same data on disk multiple times though. Your view is misguided and you've yet to prove to me why it's not. Until it's modified, why would you proactively waste space? It's dumb because your assumption rests on the premise that it will eventually be modified, which is quite the expectation.
Your citation isn't one. SuperUser is a public forum and does not qualify as a citable source. But the following does. Do read.
That link says nothing about NTFS having checksums. :kookoo:
Typical straw-man argument. You need a mirror.
That's because you're not giving any good reason and you're just saying I'm wrong. I'm not misrepresenting anything, your take is just bad. Take your own advice and stop being a tool.
 
Joined
Jul 16, 2014
Messages
8,198 (2.16/day)
Location
SE Michigan
System Name Dumbass
Processor AMD Ryzen 7800X3D
Motherboard ASUS TUF gaming B650
Cooling Arctic Liquid Freezer 2 - 420mm
Memory G.Skill Sniper 32gb DDR5 6000
Video Card(s) GreenTeam 4070 ti super 16gb
Storage Samsung EVO 500gb & 1Tb, 2tb HDD, 500gb WD Black
Display(s) 1x Nixeus NX_EDG27, 2x Dell S2440L (16:9)
Case Phanteks Enthoo Primo w/8 140mm SP Fans
Audio Device(s) onboard (realtek?) - SPKRS:Logitech Z623 200w 2.1
Power Supply Corsair HX1000i
Mouse SteelSeries Esports Wireless
Keyboard Corsair K100
Software windows 10 H
Benchmark Scores https://i.imgur.com/aoz3vWY.jpg?2
You're free to give me good examples of why you want to store the same data on disk multiple times though
Mainframe data redundancy was a huge over-protective thing at one time because of the failure rates of hard drives and how easily tape drive data could be lost from a simple stupid mistake. Imagine a whole RAID array lost without "a copy". Some are still concerned about losing data, so the practice of storing duplicate data continues.
 

Aquinus

Resident Wat-man
Mainframe data redundancy was a huge over-protective thing at one time because of the failure rates of hard drives and how easily tape drive data could be lost from a simple stupid mistake. Imagine a whole RAID array lost without "a copy". Some are still concerned about losing data, so the practice of storing duplicate data continues.
We're talking about file systems, not block devices. I'm not advocating for people not to back up their data to different disks or to use technologies like RAID. I'm saying that copying the same data on the same logical disk is wasteful and copy on write helps prevent redundancy. It literally lets you get more out of the storage you have.
 
Joined
Aug 21, 2015
Messages
1,725 (0.51/day)
Location
North Dakota
System Name Office
Processor Ryzen 5600G
Motherboard ASUS B450M-A II
Cooling be quiet! Shadow Rock LP
Memory 16GB Patriot Viper Steel DDR4-3200
Video Card(s) Gigabyte RX 5600 XT
Storage PNY CS1030 250GB, Crucial MX500 2TB
Display(s) Dell S2719DGF
Case Fractal Define 7 Compact
Power Supply EVGA 550 G3
Mouse Logitech M705 Marathon
Keyboard Logitech G410
Software Windows 10 Pro 22H2
We're talking about file systems, not block devices. I'm not advocating for people not to back up their data to different disks or to use technologies like RAID. I'm saying that copying the same data on the same logical disk is wasteful and copy on write helps prevent redundancy. It literally lets you get more out of the storage you have.
Are CoW systems by necessity journalled? Or is there another mechanism that directs a read op on the copy to the original, and forces a write on the copy if the original is deleted or altered? I don't know much about filesystems.
 

Aquinus

Resident Wat-man
Are CoW systems by necessity journalled? Or is there another mechanism that directs a read op on the copy to the original, and forces a write on the copy if the original is deleted or altered? I don't know much about filesystems.
Most implementations of structural sharing under the hood have references to the spot in memory or on disk, and much like the garbage collector in an application or in a VM like the JVM, those areas are cleaned up once all references are gone. There are a number of strategies for accomplishing this, but the most relevant similar technology I can think of is persistent data structures. Copy on write essentially means that data becomes immutable because it's not updated in place, which is a requirement for something like this because a new version requires copying only the non-shared parts, depending on how the underlying technology handles it. Much like MVCC, the "changed" data doesn't get overwritten, it gets stored somewhere else and the old area gets cleaned up later. A big advantage here is that since the old data is never written over, a crash during a transfer does not corrupt the old data. The new data just doesn't get saved. NTFS actually does something like this via its transaction system, but it's still not checksummed so you can't validate down the line to make sure that no bits flipped after the write is complete.
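As a toy illustration of that structural-sharing idea (a persistent linked list, nothing filesystem-specific; the Node type is made up for this sketch):

```python
from typing import NamedTuple, Optional

class Node(NamedTuple):
    value: int
    rest: Optional["Node"]

def prepend(lst: Optional[Node], value: int) -> Node:
    # A "new version" is just a new head pointing at the old list;
    # nothing gets copied, the entire old structure is shared.
    return Node(value, lst)

v1 = prepend(prepend(prepend(None, 3), 2), 1)  # version 1: [1, 2, 3]
v2 = prepend(v1, 0)                            # version 2: [0, 1, 2, 3]

# v1 is untouched and fully shared inside v2. When the last reference
# to a version disappears, the GC reclaims only the unshared nodes,
# much like a CoW filesystem frees blocks once nothing references them.
assert v2.rest is v1
```

Same principle, different scale: the filesystem's references to spots on disk play the role of these pointers.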
 
Joined
Dec 16, 2017
Messages
2,919 (1.15/day)
Are CoW systems by necessity journalled? Or is there another mechanism that directs a read op on the copy to the original, and forces a write on the copy if the original is deleted or altered? I don't know much about filesystems.
They are, not so much by necessity for the CoW feature itself, but because there is hardly ever a reason to disable journaling. In general, I think all modern filesystems (ext4, NTFS, btrfs, ZFS, etc.) are journaled.

How the duplicate copies are handled depends on the filesystem, but btrfs for example simply makes a pointer that says that DuplicateFile uses the same blocks (clusters, disk sectors, etc.) as OriginalFile, which makes creating a duplicate an instant operation, regardless of the file size. As long as at least one of that set of files (Original and Duplicate/s) uses those blocks, they're not marked for deletion.

Let's say you change one of those files. The change is written down in new filesystem blocks and then the filesystem adds and/or changes pointers that for a human would read like "DuplicateFile uses the same blocks as OriginalFile except it replaces block 5 with block 12879821 and discards blocks 50 and 51". So the filesystem shows the original file as it was, and the duplicate that changed shows up with the changes you made to it.

CoW is also important for file system snapshots, as it basically saves the entire state of the filesystem in an instant. Say you took a snapshot of the drive at 5:30:01 AM, July 4th 2019. The snapshot is instantaneous, as all changes after the snapshot are written down in new blocks, without changing the original ones. And these changes can be "atomic", that is, they don't have to save the whole file in new blocks; the filesystem writes down only the exact changes made and nothing more. So, if you "flip a bit" in a 400 GB file, the operation won't require writing down the full 400 GB file; it will simply save the change and add a pointer in the filesystem index so that it knows to look for the specific block of that file that was changed when reading said file.
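Here's a toy model of that pointer bookkeeping in Python (made-up block numbers, nothing like btrfs's actual on-disk format):

```python
# The "disk" is a dict of numbered blocks; a file is just a list of
# block numbers. Copying a file copies only the pointer list.
disk = {1: b"aaaa", 2: b"bbbb", 3: b"cccc"}
files = {"OriginalFile": [1, 2, 3]}

# Instant duplicate: a new pointer list, zero data blocks written.
files["DuplicateFile"] = list(files["OriginalFile"])

# Change the middle of DuplicateFile: the new data goes into a NEW
# block, and only DuplicateFile's pointer list is updated (CoW).
disk[4] = b"BBBB"
files["DuplicateFile"][1] = 4

# A block is only freeable once no file references it anymore.
referenced = {b for blocks in files.values() for b in blocks}
free_blocks = [b for b in disk if b not in referenced]

print(files)        # {'OriginalFile': [1, 2, 3], 'DuplicateFile': [1, 4, 3]}
print(free_blocks)  # [] - block 2 is still used by OriginalFile
```

A snapshot is the same trick applied to the whole filesystem tree: freeze the current pointer lists and write all later changes to new blocks.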

EDIT: Ninja-ed by Aquinus, lol
 

Aquinus

Resident Wat-man
EDIT: Ninja-ed by Aquinus, lol
No, I like your response better. Snapshotting is such a nice feature of filesystems like btrfs. It's such a powerful tool. I'd actually argue that ext4 is inferior to btrfs because it lacks this.
 

Aquinus

Resident Wat-man
You're missing some context, read a little closer.
I did, and there are no references to checksums or hashes. It only retries and recovers from a bad write. If something happens to the data after it has already been written, you're SoL. Your source even plainly spells this out. The checksum isn't to ensure that data is correct just after it has been written. It's to ensure that it's still correct any arbitrary amount of time after it was written. There is a big difference.
 

Aquinus

Resident Wat-man
What is the purpose of checksumming and hashing? Hmm? Data integrity, is it not? Let's review...

That's the TITLE of the information page. It's even in the web address:
http://ntfs.com/data-integrity.htm
Just because it commits writes in transactions doesn't mean that it ensures the integrity of your data in the long term, only at the time of writing. This does not ensure that the data written 5 years ago is still the way it should be. That's the point of the checksum, not to make sure that the write was successful like with NTFS' transaction system. What NTFS does is to ensure that write failures due to something like a crash or power loss event won't corrupt data already on disk which you get out of the box with a CoW solution. It does not ensure the correctness of data after it has been written and committed.
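For anyone following along, this is the kind of long-term check a checksumming filesystem does per block, sketched by hand in userland (the manifest file name and helper functions are made up for this example):

```python
import hashlib
import json

MANIFEST = "checksums.json"  # hypothetical sidecar file for this sketch

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def record(paths: list[str]) -> None:
    # Run once, right after the data is written and known good.
    with open(MANIFEST, "w") as f:
        json.dump({p: sha256_of(p) for p in paths}, f)

def verify() -> None:
    # Run any arbitrary amount of time later: a mismatch means the bytes
    # on disk silently changed after the write was committed (bit rot),
    # which no amount of write-time transaction logging can catch.
    with open(MANIFEST) as f:
        expected = json.load(f)
    for path, digest in expected.items():
        print(("OK" if sha256_of(path) == digest else "CORRUPT"), path)
```

ZFS and btrfs do this per block automatically, and with redundancy they can repair on the spot instead of just reporting.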
You were saying?
Apparently things you've been ignoring.
NTFS actually does something like this via its transaction system, but it's still not checksummed so you can't validate down the line to make sure that no bits flipped after the write is complete.
 
Joined
Jul 5, 2013
Messages
27,840 (6.68/day)
Just because it commits writes in transactions doesn't mean that it ensures the integrity of your data in the long term, only at the time of writing. This does not ensure that the data written 5 years ago is still the way it should be. That's the point of the checksum, not to make sure that the write was successful like with NTFS' transaction system. What NTFS does is to ensure that write failures due to something like a crash or power loss event won't corrupt data already on disk which you get out of the box with a CoW solution. It does not ensure the correctness of data after it has been written and committed.

Apparently things you've been ignoring.
You're dismissing the entire context of the point by marginalizing the proven data-integrity functionality built into NTFS in a vain attempt to support your argument. This debate is over. You have failed to prove your original claim about NTFS. Move along.
 

Aquinus

Resident Wat-man
You're dismissing the entire context of the point by marginalizing the proven data-integrity functionality built into NTFS in a vain attempt to support your argument. This debate is over. You have failed to prove your original claim about NTFS. Move along.
I've posted several times about when it ensures that data integrity: at write time. There is no checksum and nothing to let you validate data integrity after the write transaction has been committed.
Just because it commits writes in transactions doesn't mean that it ensures the integrity of your data in the long term, only at the time of writing. This does not ensure that the data written 5 years ago is still the way it should be. That's the point of the checksum, not to make sure that the write was successful like with NTFS' transaction system. What NTFS does is to ensure that write failures due to something like a crash or power loss event won't corrupt data already on disk which you get out of the box with a CoW solution. It does not ensure the correctness of data after it has been written and committed.
I did, and there are no references to checksums or hashes. It only retries and recovers from a bad write. If something happens to the data after it has already been written, you're SoL. Your source even plainly spells this out. The checksum isn't to ensure that data is correct just after it has been written. It's to ensure that it's still correct any arbitrary amount of time after it was written. There is a big difference.
Wat? CRC errors you hear about in Windows aren't from the filesystem but from the disk during transfers. NTFS does not store checksums.
Also:
This debate is over. You have failed to prove your original claim about NTFS. Move along.
You were saying?
Typical straw-man argument. You need a mirror.
^: I've gotten responses like this every time I've attempted to provide more context. You've yet to prove anything yourself, yet your attitude remains bad and you keep digging your heels in despite not even being able to address any of the things I brought up. You're right, the conversation is over, but it's not because I failed to prove my point. It's because you failed to listen. Since you seem to suck at listening, I'll re-quote yet another thing I said, because I'm getting tired of being a broken record for some schmuck who can't read, is ignorant, or both.
That's because you're not giving any good reason and you're just saying I'm wrong. I'm not misrepresenting anything, your take is just bad. Take your own advice and stop being a tool.
 
Joined
Mar 6, 2017
Messages
3,332 (1.18/day)
Location
North East Ohio, USA
System Name My Ryzen 7 7700X Super Computer
Processor AMD Ryzen 7 7700X
Motherboard Gigabyte B650 Aorus Elite AX
Cooling DeepCool AK620 with Arctic Silver 5
Memory 2x16GB G.Skill Trident Z5 NEO DDR5 EXPO (CL30)
Video Card(s) XFX AMD Radeon RX 7900 GRE
Storage Samsung 980 EVO 1 TB NVMe SSD (System Drive), Samsung 970 EVO 500 GB NVMe SSD (Game Drive)
Display(s) Acer Nitro XV272U (DisplayPort) and Acer Nitro XV270U (DisplayPort)
Case Lian Li LANCOOL II MESH C
Audio Device(s) On-Board Sound / Sony WH-XB910N Bluetooth Headphones
Power Supply MSI A850GF
Mouse Logitech M705
Keyboard Steelseries
Software Windows 11 Pro 64-bit
Benchmark Scores https://valid.x86.fr/liwjs3
There is no checksum and nothing to let you validate data integrity after the write transaction has been committed.
My argument now is... OK, the data is corrupt, the data isn't what it's supposed to be. What are you going to do about it? It's not like you can do anything once the data has been corrupted except restore it from another source of the same data.

As for corrupted data happening, I guess the reasoning behind NTFS not having built-in checksums is that Microsoft doesn't believe that data corruption is as big of a deal as you may think it is on a consumer level. Perhaps at the data center level, like in Microsoft Azure, they have that kind of support, but that's at the corporate level and that's the kind of stuff that one would expect at that level. Whereas at the home level, data corruption has happened to me twice in ten years, and that's only because the drive itself was failing, at which point I assumed all the data was corrupt. But again, that's why you have data backups and why you should always have multiple copies of the data that you can't lose. Don't trust one repository for the most important of your data; have it in multiple places.

I, myself, have the most important of my data saved in the cloud in Microsoft OneDrive. Vacation photos mostly, the kinds of memories that I absolutely cannot lose. There's other data that I keep saved up in the cloud, but that's a story for another day. Suffice it to say, if the data is that important to you, keep it in multiple places and, most importantly, keep a copy off site so that if a disaster happens (God forbid a house fire, robbery, flood, electrical damage, etc.), that data is safely stored somewhere you can recover it from.

As for CoW, I really don't see a problem considering that drives are reaching absolutely ungodly high capacities so at this point, CoW is a bit of a moot point. When you have a ten TB drive, who gives a damn? Are you really going to care? Nope, not I.
 
Joined
Dec 16, 2017
Messages
2,919 (1.15/day)
As for corrupted data happening, I guess the reasoning behind NTFS not having built-in checksums is that Microsoft doesn't believe that data corruption is as big of a deal as you may think it is on a consumer level. Perhaps at the data center level, like in Microsoft Azure, they have that kind of support, but that's at the corporate level and that's the kind of stuff that one would expect at that level
There is checksum support on ReFS, introduced with Windows Server 2012. By default, it only checksums metadata, but file data checksums can be turned on. However, if the checksum doesn't match, the file is deleted. There is no repair there. Btrfs at the very least can detect a bit flip and repair the affected file (though I'd like to know how far the repair can go before it hits a dead-end).
Whereas at the home level, data corruption has happened to me twice in ten years, and that's only because the drive itself was failing, at which point I assumed all the data was corrupt. But again, that's why you have data backups and why you should always have multiple copies of the data that you can't lose. Don't trust one repository for the most important of your data; have it in multiple places.
Agreed. Home users and most prosumers will probably not care (and dare I say should not care?) at all as long as they are using at least some sort of backup.
As for CoW, I really don't see a problem considering that drives are reaching absolutely ungodly high capacities so at this point, CoW is a bit of a moot point
To be fair, it's not so much for the capacity, but rather that it's useful for snapshotting the drive in a quick and painless fashion. And without the snapshots consuming too much space, especially if you want to do something like a snapshot per hour or so, which is useful for rolling back changes or keeping multiple versions of a file.

Also, someone must be asking for CoW support, seeing as it is supported not just on ZFS and BTRFS, but also on APFS.
 
Joined
Mar 6, 2017
Messages
3,332 (1.18/day)
Btrfs at the very least can detect a bit flip and repair the affected file (though I'd like to know how far the repair can go before it hits a dead-end).
I still wouldn't trust it. If I knew that a file was corrupt, I wouldn't trust any repair of it. Replace it from a known good source.
To be fair, it's not so much for the capacity, but rather that it's useful for snapshotting the drive in a quick and painless fashion. And without the snapshots consuming too much space, especially if you want to do something like a snapshot per hour or so, which is useful for rolling back changes or keeping multiple versions of a file.
Windows does have this capability; it's called Shadow Copies.
 
Joined
Aug 20, 2007
Messages
21,471 (3.40/day)
System Name Pioneer
Processor Ryzen R9 9950X
Motherboard GIGABYTE Aorus Elite X670 AX
Cooling Noctua NH-D15 + A whole lotta Sunon and Corsair Maglev blower fans...
Memory 64GB (4x 16GB) G.Skill Flare X5 @ DDR5-6000 CL30
Video Card(s) XFX RX 7900 XTX Speedster Merc 310
Storage Intel 905p Optane 960GB boot, +2x Crucial P5 Plus 2TB PCIe 4.0 NVMe SSDs
Display(s) 55" LG 55" B9 OLED 4K Display
Case Thermaltake Core X31
Audio Device(s) TOSLINK->Schiit Modi MB->Asgard 2 DAC Amp->AKG Pro K712 Headphones or HDMI->B9 OLED
Power Supply FSP Hydro Ti Pro 850W
Mouse Logitech G305 Lightspeed Wireless
Keyboard WASD Code v3 with Cherry Green keyswitches + PBT DS keycaps
Software Gentoo Linux x64 / Windows 11 Enterprise IoT 2024
Hi,
Nope just a bad example of yours really but I do have a gun carry permit = Check ;)

There is more to security than OS patches and the MS browser, which have always been targets
Remote access..... disabled = Check
Mbam Pro license = Check
Ublock Origin browser security = Check
Don't use MS apps = Check
Using an MS OS that is obsolete, with a built-in netstack that will never get patched again = Check?

Now that is a flaw. Copying a file without actually copying the file? Flawed design indeed.
You just described write buffering.

ReFS is the future for Windows anyways.
 