Monday, January 18th 2021

Critical Flaw in Windows 10 Could Corrupt Your Hard Drive

Windows security is taken seriously, as the OS runs on millions of PCs around the world; even so, flaws slip through and are regularly found by external researchers. Given the sheer size of the code base of a modern OS like Windows 10, plenty of bugs and security flaws are still waiting to be discovered. Today, thanks to cybersecurity researchers, we have learned that NTFS, the file system used by Windows 10, contains a bug that can corrupt your hard drive simply by referencing a specially crafted path.

The flaw is triggered when an end-user on Windows 10 accesses the NTFS attribute called "$i30" in a specific way. "$i30" is the NTFS search index attribute: it contains a list of the files and subfolders in a directory, and can even include entries for deleted files and folders. Once the malformed path is accessed, whether from the command line (CMD) or even from within a browser, Windows starts displaying warnings that the "File or directory is corrupted and cannot be read". The OS then prompts the user to restart the machine and repair the damaged drive, at which point the Windows disk check utility runs. Once corrupted, Windows 10 reports that the Master File Table (MFT) of the affected disk is corrupted and the volume can no longer be used. Windows 10 is vulnerable from version 1803 up to the current release, and a fix is expected to be released soon.
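For readers wondering whether their own machine falls in the reported range, the small Python sketch below simply checks the local Windows build number. It assumes the commonly cited mapping of version 1803 to build 17134, and it is only a version check, not a test for the vulnerability itself.

```python
# Heuristic check: is this Windows 10 build inside the range reported as
# affected? Assumes version 1803 == build 17134 (our mapping, not from the
# report). This does NOT probe the flaw itself.
import sys

if sys.platform == "win32":
    build = sys.getwindowsversion().build
    if build >= 17134:
        print(f"Build {build}: inside the range reported as affected; "
              "be careful with untrusted paths and shortcuts until a fix ships.")
    else:
        print(f"Build {build}: older than the reported affected range.")
else:
    print("Not a Windows system.")
```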
Sources: Jonas L (Twitter), Siam Alam (Twitter), via Security Newspaper

124 Comments on Critical Flaw in Windows 10 Could Corrupt Your Hard Drive

#76
lexluthermiester
MelvisWindows 7 users be like
Yeah, pretty much.
windwhirlI think NTFS is due for some upgrades, honestly.
It could do with a few improvements, sure.
Posted on Reply
#77
Aquinus
Resident Wat-man
lexluthermiesterNow that is a flaw. Copying a file without actually copying the file? Flawed design indeed.
CoW is not a design flaw, it is literally a design decision. If you have a 10GB file and you copy it (not change it, copy it,) why do you need to store a second copy of the data if the data didn't change? It's a waste of space. The file reference and metadata is copied, but the actual data isn't unless one of those "files" changes. Only then will it become a different copy, but until that happens, there is absolutely no reason why you can't share that data under the hood. It's almost like you're advocating for wasting disk space.
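To make that concrete, here is a toy Python sketch of copy-on-write semantics. It is purely illustrative (the class and names are made up; no real filesystem works at this level), but it shows why the "copy" is cheap and when the actual data duplication happens.

```python
# Toy illustration of copy-on-write: copy() duplicates only the reference;
# the data is duplicated lazily, the first time one of the copies is written.

class CowFile:
    def __init__(self, data: bytearray):
        self._data = data        # may be shared with other CowFile objects
        self._shared = False

    def copy(self) -> "CowFile":
        clone = CowFile(self._data)   # no data duplication here
        clone._shared = True
        self._shared = True
        return clone

    def write(self, offset: int, payload: bytes) -> None:
        if self._shared:
            self._data = bytearray(self._data)  # private copy made only now
            self._shared = False
        self._data[offset:offset + len(payload)] = payload

    def read(self) -> bytes:
        return bytes(self._data)


original = CowFile(bytearray(b"ten gigabytes of data, pretend"))
duplicate = original.copy()   # instant, regardless of size
duplicate.write(0, b"TEN")    # only now does duplicate get its own buffer
assert original.read().startswith(b"ten")   # original is untouched
```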
lexluthermiesterThose are nice features, however NTFS has similar features of its own.
NTFS only supports LZ77, isn't configurable, and doesn't support checksumming files at the FS level.
lexluthermiesterInteresting, but not a flaw so much as it's deliberate engineering choice.
Now this is what I would call a poor engineering choice, not CoW.
lexluthermiesterYou kinda failed to sell your argument there.

Microsoft has MORE than its share of problems, but the NTFS file system isn't one of them.
Do you have trouble reading or something? Do I need to quote myself again?
AquinusA filesystem is pretty useless if it's not stable, reliable, or error proof so that's a pretty low bar. The benefits come from all the features that these file systems implement beyond doing what any FS should be capable of doing. Just because Windows can't run them doesn't mean NTFS is better. It means that Microsoft shoves it down your throat without giving you the option.
Posted on Reply
#78
lexluthermiester
AquinusCoW is not a design flaw, it is literally a design decision. If you have a 10GB file and you copy it (not change it, copy it,) why do you need to store a second copy of the data if the data didn't change? It's a waste of space.
That's an opinion. Stupid design idea. What happens when a change is made to one of the "copies"? Does the actual copy function happen then? One way or the other an actual copy function is going to happen. For it not to happen on command is a design flaw, not a design decision. Technology needs to follow user commands. If I command a file copy, I expect the file to be copied, bit for bit, as commanded at the moment I command it. A file system should NEVER have the "option" to do something other than what a user has instructed.
AquinusIt's almost like you're advocating for wasting disk space.
"Wasting" is a subjective term. What I expect from my technology is that it follows my commands as I issue them, the way that I issue them and does NOT take liberties interpreting what I "might" mean or need in future or whatever. So no, I care not about "wasting disk space". It's my space to waste if I so choose.
AquinusNTFS only supports LZ77, isn't configurable, and doesn't support checksumming files at the FS level.
Incorrect. NTFS has, since the days of NT 4.0, supported error correction, which requires a form of checksum calculation. It might be MS's private brand of it, but it is there.
AquinusDo you have trouble reading or something? Do I need to quote myself again?
No you don't. What you need to do is re-read your own statements and think very carefully about the context you are attempting, and failing, to convey.

You're still not selling your point. Would you like to continue trying?
Posted on Reply
#79
Aquinus
Resident Wat-man
lexluthermiesterThat's an opinion. Stupid design idea. What happens when a change is made to one of the "copies"? Does the actual copy function happen then? One way or the other an actual copy function is going to happen. For it not to happen on command is a design flaw, not a design decision. Technology needs to follow user commands. If I command a file copy, I expect the file to be copied, bit for bit, as commanded at the moment I command it. A file system should NEVER have the "option" to do something other than what a user has instructed.
It's a filesystem, not a block device. I would expect it to do what I ask it to do in the most efficient way possible. There is zero reason to duplicate data on the filesystem. You're advocating for wasting space, it's really that simple.
lexluthermiester"Wasting" is a subjective term. What I expect from my technology is that it follows my commands as I issue them, the way that I issue them and does NOT take liberties interpreting what I "might" mean or need in future or whatever. So no, I care not about "wasting disk space". It's my space to waste if I so choose.
It's wasting space because there is absolutely no good reason to duplicate data on the same disk. As the consumer of that data, you shouldn't care where on the disk it comes from. You should only care that it's there and is valid. Once again, you're advocating for duplicating data, which is wasteful if it's on the same disk. There aren't any ifs about this, it simply is. If it were on another disk, that'd be a different story, but it's not. Structural sharing when data is immutable is never a bad thing.
lexluthermiesterIncorrect. NTFS has, since the days of NT 4.0, supported error correction, which requires a form of checksum calculation. It might be MS's private brand of it, but it is there.
Wat? CRC errors you hear about in Windows aren't from the filesystem but from the disk during transfers. NTFS does not store checksums.
superuser.com/questions/566113/does-windows-calculate-crcs-to-check-every-file-operation
lexluthermiesterNo you don't. What you need to do is re-read your own statements and think very carefully about the context you are attempting, and failing, to convey.

You're still not selling your point. Would you like to continue trying?
Says the guy who doesn't have his facts straight and who has an ass backwards view of copy on write. It's almost like you've never used a sparse disk image before. Maybe we should just store every zero for empty space in a disk image because you asked for a copy. :kookoo:
Posted on Reply
#80
Muck Muster
Judging from everyone's replies, I'm not sure I wanna install Windows 10 on all my computers. I use Windows 10 regularly but my B-52 browser is still Windows 7. Clearly, aside from a few hardware updates and security (which I leave to 3rd-party security programs), I find gliding along with Windows 7 still pretty convenient. However, I do get perturbed about how often Windows keeps updating itself. I know Microsoft works with the Govt. and sometimes I feel Windows 10's ability to spy is a little unnerving. Just what do they look for? I know all copies of Windows since NT have had spy tools implanted inside, but to what degree, I have no clue.
I wonder if these updates to spy programs have a direct or indirect impact as Windows keeps updating.
These are the former spy programs I've been notified of and kept tabs on. I don't know what they do. The last I think was in 2016.
NSAKEY_key2
ADVAP132
Does anyone have any clues on how they conduct passing information when asked to do so? If I uninstall them, what harm could it do? I could just say, if asked, "Oh it looked like a form of malware so I uninstalled it." On the other hand, could it lock Windows if I do?
Posted on Reply
#81
lexluthermiester
AquinusIt's a filesystem, not a block device. I would expect it to do what I ask it to do in the most efficient way possible. There is zero reason to duplicate data on the filesystem. You're advocating for wasting space, it's really that simple.
That is a series of opinions. And you're welcome to them.
AquinusIt's wasting space because there is absolutely no good reason to duplicate data on the same disk.
Also an opinion, especially if the copied data is going to be modified. Wasted space or not, I expect the copy function to actually make a copy of whatever I commanded it to copy. That is simple common sense and a functional requirement.
AquinusWat? CRC errors you hear about in Windows aren't from the filesystem but from the disk during transfers. NTFS does not store checksums.
superuser.com/questions/566113/does-windows-calculate-crcs-to-check-every-file-operation
Your citation isn't one. SuperUser is a public forum and does not qualify as a citable source. But the following does. Do read..
ntfs.com/data-integrity.htm
AquinusSays the guy who doesn't have his facts straight and who has an ass backwards view of copy on write. It's almost like you've never used a sparse disk image before. Maybe we should just store every zero for empty space in a disk image because you asked for a copy.
Typical straw-man argument. You need a mirror.
Posted on Reply
#82
Aquinus
Resident Wat-man
lexluthermiesterThat is a series of opinions. And you're welcome to them.
lexluthermiesterAlso an opinion, especially if the copied data is going to be modified. Wasted space or not, I expect the copy function to actually make a copy of whatever I commanded it to copy. That is simple common sense and a functional requirement.
It's not an opinion. It's a waste of space because there is no good reason to copy data multiple times. You're free to give me good examples of why you want to store the same data on disk multiple times though. Your view is misguided and you've yet to prove to me why it's not. Until it's modified, why would you proactively waste space? It's dumb because your assumption rests on the premise that it will be eventually modified which is quite the expectation.
lexluthermiesterYour citation isn't one. SuperUser is a public forum and does not qualify as a citable source. But the following does. Do read..
That link says nothing about NTFS having checksums. :kookoo:
lexluthermiesterTypical straw-man argument. You need a mirror.
That's because you're not giving any good reason and you're just saying I'm wrong. I'm not misrepresenting anything, your take is just bad. Take your own advice and stop being a tool.
Posted on Reply
#83
DeathtoGnomes
AquinusYou're free to give me good examples of why you want to store the same data on disk multiple times though
Mainframe data redundancy was a huge over-protective thing at one time because of the failure rates of hard drives and how easily tape drive data could be lost from a simple stupid mistake. Imagine a whole RAID array lost without "a copy". Some are still concerned about losing data, so the practice of storing duplicate data continues.
Posted on Reply
#84
Aquinus
Resident Wat-man
DeathtoGnomesMainframe data redundancy was a huge over-protective thing at one time because of the failure rates of hard drives and how easily tape drive data could be lost from a simple stupid mistake. Imagine a whole RAID array lost without "a copy". Some are still concerned about losing data, so the practice of storing duplicate data continues.
We're talking about file systems, not block devices. I'm not advocating for people to not backup their data to different disks or to use technologies like RAID. I'm saying that copying the same data on the same logical disk is wasteful and copy on write helps prevent redundancy. It literally lets you get more out of the storage you have.
Posted on Reply
#85
80-watt Hamster
AquinusWe're talking about file systems, not block devices. I'm not advocating for people to not backup their data to different disks or to use technologies like RAID. I'm saying that copying the same data on the same logical disk is wasteful and copy on write helps prevent redundancy. It literally lets you get more out of the storage you have.
Are CoW systems by necessity journalled? Or is there another mechanism that directs a read op on the copy to the original, and forces a write on the copy if the original is deleted or altered? I don't know much about filesystems.
Posted on Reply
#86
Aquinus
Resident Wat-man
80-watt HamsterAre CoW systems by necessity journalled? Or is there another mechanism that directs a read op on the copy to the original, and forces a write on the copy if the original is deleted or altered? I don't know much about filesystems.
Most implementations of structural sharing under the hood have references to the spot in memory or on disk, and much like the garbage collector in an application or in a VM like the JVM, those areas are cleaned up once all references are gone. There are a number of strategies for accomplishing this, but the most relevant similar technology I can think of is persistent data structures. Copy on write essentially means that data becomes immutable because it's not updated in place, which is a requirement for something like this, because a new version requires copying of the non-shared parts, depending on how the underlying technology handles it. Much like MVCC, the "changed" data doesn't get overwritten; it gets stored somewhere else, then the old area gets cleaned up later. A big advantage here is that since the old data is never written over, a crash during a transfer does not corrupt the old data. The new data just doesn't get saved. NTFS actually does something like this via its transaction system, but it's still not checksummed, so you can't validate down the line to make sure that no bits flipped after the write is complete.
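A rough Python sketch of that reference-counting idea, with made-up names and no claim to match how btrfs or any particular filesystem lays things out on disk: blocks are never overwritten in place, a "change" writes new blocks, and old blocks are reclaimed only once nothing references them.

```python
# Sketch: block-level structural sharing with reference counts.

class BlockStore:
    def __init__(self):
        self.blocks = {}      # block_id -> bytes
        self.refcount = {}    # block_id -> number of file versions using it
        self._next_id = 0

    def put(self, data: bytes) -> int:
        bid = self._next_id
        self._next_id += 1
        self.blocks[bid] = data
        self.refcount[bid] = 0
        return bid

    def acquire(self, bid: int) -> None:
        self.refcount[bid] += 1

    def release(self, bid: int) -> None:
        self.refcount[bid] -= 1
        if self.refcount[bid] == 0:   # "garbage collect" unreferenced blocks
            del self.blocks[bid], self.refcount[bid]


store = BlockStore()
shared = store.put(b"unchanged data")
store.acquire(shared)   # referenced by file A
store.acquire(shared)   # referenced by file B (the "copy")

new = store.put(b"modified data")   # the change lands in a *new* block
store.acquire(new)
store.release(shared)               # file B now points at the new block

assert shared in store.blocks       # file A still sees the old, intact data
```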
Posted on Reply
#87
windwhirl
80-watt HamsterAre CoW systems by necessity journalled? Or is there another mechanism that directs a read op on the copy to the original, and forces a write on the copy if the original is deleted or altered? I don't know much about filesystems.
They are, not so much by necessity for the CoW feature itself, but because there is hardly ever a reason to disable journaling. In general, I think all modern filesystems (ext4, NTFS, btrfs, ZFS, etc.) are journaled.

Regarding how the duplicate copies are handled depends on the filesystem, but btrfs for example simply makes a pointer that says that DuplicateFile uses the same blocks (clusters, disk sectors, etc.) as OriginalFile, which makes creating a duplicate an instant operation, regardless of the file size. As long as at least one of that set of files (Original and Duplicate/s) uses those blocks, they're not marked for deletion.

Let's say you change one of those files. The change is written down in new filesystem blocks and then the filesystem adds and/or changes pointers that for a human would read like "DuplicateFile uses the same blocks as Original File except it replaces block 5 for block 12879821 and discards blocks 50 and 51". So the filesystem shows the original file as it was and the duplicate that changed shows up with the changes you may have made to it.

CoW is also important for file system snapshots, as it basically saves the entire state of the filesystem in an instant. Say you took a snapshot of the drive at 5:30:01 AM, July 4th 2019. The snapshot is instantaneous, as all changes after the snapshot are written down in new blocks, without changing the original ones. And these changes can be "atomic", that is, they don't have to save the whole file in new blocks; the file system writes down only the exact changes made and nothing more. So, if you "flip a bit" in a 400 GB file, the operation won't require writing down the full 400 GB file, it will simply save the change and add a pointer in the filesystem index so that it knows to look for the specific block of that file that was changed when reading said file.
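For anyone who wants to see this in practice: assuming a CoW-capable filesystem (btrfs, or XFS with reflink) mounted at a hypothetical /mnt/btrfs, GNU cp can request exactly this kind of shared-block copy.

```python
# Sketch: ask GNU cp for a reflink ("shared blocks") copy. Paths are
# hypothetical; --reflink=always fails rather than silently falling back
# to a full data copy on filesystems without reflink support.
import subprocess

src = "/mnt/btrfs/OriginalFile"
dst = "/mnt/btrfs/DuplicateFile"

subprocess.run(["cp", "--reflink=always", src, dst], check=True)
# Returns almost instantly regardless of file size: only metadata/pointers
# are written, and data blocks stay shared until either file is modified.
```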

EDIT: Ninja-ed by Aquinus, lol
Posted on Reply
#88
Aquinus
Resident Wat-man
windwhirlEDIT: Ninja-ed by Aquinus, lol
No, I like your response better. Snapshotting is such a nice feature of filesystems like btrfs. It's such a powerful tool. I'd actually argue that ext4 is inferior to btrfs because it lacks this.
Posted on Reply
#89
lexluthermiester
AquinusThat link says nothing about NTFS having checksums.
You're missing some context, read a little closer.
Posted on Reply
#90
Aquinus
Resident Wat-man
lexluthermiesterYou're missing some context, read a little closer.
I did and there are no references to checksums or hashes. It only retries and recovers from a bad write. If something happens to the data after it has already been written, you're SoL. Your source even plainly spells this out. The checksum isn't to ensure that data is correct just after it has been written. It's to ensure that it's always correct after being written after any arbitrary amount of time. There is a big difference.
Posted on Reply
#91
lexluthermiester
AquinusI did and there are no references to checksums or hashes.
What is the purpose of checksumming and hashing? Hmm? Data integrity, is it not? Let's review...
Data Integrity and Recoverability with NTFS
That's the TITLE of the information page. It's even in the web address;
ntfs.com/data-integrity.htm

You were saying?
Posted on Reply
#92
Aquinus
Resident Wat-man
lexluthermiesterWhat is the purpose of checksumming and hashing? Hmm? Data integrity, is it not? Let's review...

That's the TITLE of the information page. It's even in the web address;
ntfs.com/data-integrity.htm
Just because it commits writes in transactions doesn't mean that it ensures the integrity of your data in the long term, only at the time of writing. This does not ensure that the data written 5 years ago is still the way it should be. That's the point of the checksum, not to make sure that the write was successful like with NTFS' transaction system. What NTFS does is to ensure that write failures due to something like a crash or power loss event won't corrupt data already on disk which you get out of the box with a CoW solution. It does not ensure the correctness of data after it has been written and committed.
lexluthermiesterYou were saying?
Apparently things you've been ignoring.
AquinusNTFS actually does something like this via its transaction system, but it's still not checksummed so you can't validate down the line to make sure that no bits flipped after the write is complete.
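Since NTFS itself stores no per-file checksums, that kind of long-term verification has to happen a layer above the filesystem. A minimal sketch of the idea, with hypothetical paths: record a digest at backup time, verify it later to catch silent corruption.

```python
# Sketch: do manually what a checksumming filesystem does automatically.
import hashlib, json, pathlib

def sha256_of(path: pathlib.Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB at a time
            h.update(chunk)
    return h.hexdigest()

manifest = pathlib.Path("checksums.json")
data_file = pathlib.Path("archive/photos-2016.zip")   # hypothetical file

# At write/backup time:
manifest.write_text(json.dumps({str(data_file): sha256_of(data_file)}))

# Years later:
recorded = json.loads(manifest.read_text())
if sha256_of(data_file) != recorded[str(data_file)]:
    print("Silent corruption detected:", data_file)
```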
Posted on Reply
#93
lexluthermiester
AquinusJust because it commits writes in transactions doesn't mean that it ensures the integrity of your data in the long term, only at the time of writing. This does not ensure that the data written 5 years ago is still the way it should be. That's the point of the checksum, not to make sure that the write was successful like with NTFS' transaction system. What NTFS does is to ensure that write failures due to something like a crash or power loss event won't corrupt data already on disk which you get out of the box with a CoW solution. It does not ensure the correctness of data after it has been written and committed.

Apparently things you've been ignoring.
You're dismissing the entire context of the point by marginalizing the proven data integrity functionality built-in to NTFS in a vain attempt to support your argument. This debate is over. You have failed to prove your original claim about NTFS. Move along.
Posted on Reply
#94
Aquinus
Resident Wat-man
lexluthermiesterYou're dismissing the entire context of the point by marginalizing the proven data integrity functionality built-in to NTFS in a vain attempt to support your argument. This debate is over. You have failed to prove your original claim about NTFS. Move along.
I've posted several times about when it ensures data integrity, which is at write time. There is no checksum and nothing to enable you to validate data integrity after the write transaction has been committed.
AquinusJust because it commits writes in transactions doesn't mean that it ensures the integrity of your data in the long term, only at the time of writing. This does not ensure that the data written 5 years ago is still the way it should be. That's the point of the checksum, not to make sure that the write was successful like with NTFS' transaction system. What NTFS does is to ensure that write failures due to something like a crash or power loss event won't corrupt data already on disk which you get out of the box with a CoW solution. It does not ensure the correctness of data after it has been written and committed.
AquinusI did and there are no references to checksums or hashes. It only retries and recovers from a bad write. If something happens to the data after it has already been written, you're SoL. Your source even plainly spells this out. The checksum isn't to ensure that data is correct just after it has been written. It's to ensure that it's always correct after being written after any arbitrary amount of time. There is a big difference.
AquinusWat? CRC errors you hear about in Windows aren't from the filesystem but from the disk during transfers. NTFS does not store checksums.
Also:
lexluthermiesterThis debate is over. You have failed to prove your original claim about NTFS. Move along.
lexluthermiesterYou were saying?
lexluthermiesterTypical straw-man argument. You need a mirror.
^: I've gotten responses like this every time I've attempted to provide more context. You've yet to prove anything yourself, yet your attitude remains bad and you keep digging your heels in despite not even being able to address any of the things I brought up. You're right, the conversation is over, but it's not because I failed to prove my point. It's because you failed to listen. Since you seem to suck at listening, I'll re-quote yet another thing I said, because I'm getting tired of being a broken record for some schmuck who can't read, is ignorant, or both.
AquinusThat's because you're not giving any good reason and you're just saying I'm wrong. I'm not misrepresenting anything, your take is just bad. Take your own advice and stop being a tool.
Posted on Reply
#95
trparky
AquinusThere is no checksum and nothing to enable you to validate data integrity after the write transaction has been committed.
My argument now is... OK, the data is corrupt, the data isn't what it's supposed to be. What are you going to do about it? It's not like you can do anything once the data has been corrupted except restore it from another source of the same data.

As for corrupted data happening, I guess that the reasoning behind NTFS not having built-in checksums is because Microsoft doesn't believe that data corruption is as big of a deal as you may think it is on a consumer level. Perhaps at the data center level like in Microsoft Azure they have that kind of support but that's at the corporate level and that's the kind of stuff that one would expect at that level. Whereas at the home level, data corruption has happened to me twice in ten years, and that's only because the drive itself was failing, at which point I assumed all the data was corrupt. But again, that's why you have data backups and why you should always have multiple copies of the same data that you can't lose. Don't trust one repository for the most important of your data, have it in multiple places.

I, myself, have the most important of data saved in the cloud in Microsoft OneDrive. Vacation photos mostly, the kinds of memories that I absolutely cannot lose. There's other data that I keep saved up in the cloud but that's a story for another day. Suffice it to say, if the data is that important to you, keep it in multiple places and most importantly, keep it off site so if a disaster happens like God forbid a house fire, robbery, flood, electrical damage, etc. that data is safely stored offsite where you can recover said data.

As for CoW, I really don't see a problem considering that drives are reaching absolutely ungodly high capacities so at this point, CoW is a bit of a moot point. When you have a ten TB drive, who gives a damn? Are you really going to care? Nope, not I.
Posted on Reply
#96
windwhirl
trparkyAs for corrupted data happening, I guess that the reasoning behind NTFS not having built-in checksums is because Microsoft doesn't believe that data corruption is as big of a deal as you may think it is on a consumer level. Perhaps at the data center level like in Microsoft Azure they have that kind of support but that's at the corporate level and that's the kind of stuff that one would expect at that level
There is checksum support on ReFS, introduced with Windows Server 2012. By default, it only checksums metadata, but file data checksums can be turned on. However, if the checksum doesn't match, the file is deleted; there is no repair there. Btrfs, at the very least, can detect a bit flip and repair the affected file (though I'd like to know how far the repair can go before it hits a dead end).
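If memory serves, per-file integrity streams on ReFS are toggled with the Storage module's Set-FileIntegrity cmdlet on Windows Server. A hedged sketch, shelling out from Python, with a hypothetical ReFS volume at R:\ and a hypothetical file path:

```python
# Sketch: enable and then inspect ReFS integrity streams for one file by
# calling PowerShell's Storage-module cmdlets. Volume and path are made up,
# and this assumes a Windows Server SKU that ships Get-/Set-FileIntegrity.
import subprocess

target = r"R:\data\important.bin"

subprocess.run([
    "powershell", "-NoProfile", "-Command",
    f"Set-FileIntegrity -FileName '{target}' -Enable $True; "
    f"Get-FileIntegrity -FileName '{target}'",
], check=True)
```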
trparkyWhereas at the home level, data corruption has happened to me twice in ten years, and that's only because the drive itself was failing, at which point I assumed all the data was corrupt. But again, that's why you have data backups and why you should always have multiple copies of the same data that you can't lose. Don't trust one repository for the most important of your data, have it in multiple places.
Agreed. Home users and most prosumers will probably not care (and dare I say should not care?) at all as long as they are using at least some sort of backup.
trparkyAs for CoW, I really don't see a problem considering that drives are reaching absolutely ungodly high capacities so at this point, CoW is a bit of a moot point
To be fair, it's not so much for the capacity, but rather that it's useful for snapshotting the drive in a quick and painless fashion. And without the snapshots consuming too much space, especially if you want to do something like a snapshot per hour or so, which is useful for rolling back changes or keeping multiple versions of a file.

Also, someone must be asking for CoW support, seeing as it is supported not just on ZFS and BTRFS, but also on APFS.
Posted on Reply
#97
trparky
windwhirlBtrfs at the very least can detect a bit flip and repair the affected file (though I'd like to know how far the repair can go before it hits a dead-end)
I still wouldn't trust it. If I knew that a file was corrupt, I wouldn't trust any repair of it. Replace it from a known good source.
windwhirlTo be fair, it's not so much for the capacity, but rather that it's useful for snapshotting the drive in a quick and painless fashion. And without the snapshots consuming too much space, especially if you want to do something like a snapshot per hour or so, which is useful for rolling back changes or keeping multiple versions of a file.
Windows does have this capability, it's called Shadow Copies.
Posted on Reply
#98
R-T-B
ThrashZoneHi,
Nope just a bad example of yours really but I do have a gun carry permit = Check ;)

There is more to security than os patches and MS browser which have always been targets
Remote access..... disabled = Check
Mbam Pro license = Check
Ublock Origin browser security = Check
Don't use MS apps = Check
Using ms os that is obsolete with a built in netstack that will never get patched again, check?
lexluthermiesterNow that is a flaw. Copying a file without actually copying the file? Flawed design indeed.
You just described write buffering.

ReFS is the future for Windows anyways.
Posted on Reply
#99
lexluthermiester
R-T-BUsing ms os that is obsolete with a built in netstack that will never get patched again, check?
As long as you don't use the built-in Windows Firewall, that's not really a problem.
R-T-BYou just described write buffering.
Write-buffering actually completes the writes at some point.
Posted on Reply
#100
R-T-B
lexluthermiesterAs long as you don't use the built-in Windows Firewall, that's not really a problem.
Yet. But someday it will be. Hopefully by the time that comes around though most users will have migrated.
lexluthermiesterWrite-buffering actually completes the writes at some point.
True. At least, usually true. There are some exceptions, but they're not generally worth mentioning. JFS on Linux, for example, has the amazing property of a write buffer that won't flush until it's full: that's right, no timeout.

No one really uses that anymore though.
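For completeness, an application that can't live with "flush whenever the buffer happens to fill" semantics can force its writes to stable storage itself. A small sketch, file name hypothetical:

```python
# Sketch: push buffered writes to the device instead of waiting for the
# filesystem's write-back policy.
import os

with open("journal.log", "ab") as f:
    f.write(b"committed record\n")
    f.flush()              # flush Python's userspace buffer to the kernel
    os.fsync(f.fileno())   # ask the kernel to flush it to stable storage
```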
Posted on Reply