Friday, July 19th 2024

Faulty Windows Update from CrowdStrike Hits Banks and Airlines Around the World

A faulty software update pushed to enterprise computers by cybersecurity firm CrowdStrike has taken millions of computers offline, most of them in commercial or enterprise environments or running as Azure deployments. CrowdStrike provides periodic software and security updates to commercial PCs, enterprise PCs, and cloud instances with a high degree of automation. The latest update reportedly breaks the Windows boot process, causing blue screens of death (BSODs) and, if configured, invoking Windows Recovery. Enterprises tend to bulletproof the bootloaders of their client machines and disable Microsoft's generic Windows Recovery tools, which means businesses around the world are left with large numbers of machines that will each need manual fixing. The so-called "Windows CrowdStrike BSOD deluge" has hit critical businesses such as banks, airlines, supermarket chains, and TV broadcasters. Meanwhile, sysadmins on Reddit are wishing each other a happy weekend.
Source: The Verge

234 Comments on Faulty Windows Update from CrowdStrike Hits Banks and Airlines Around the World

#176
Dark Revenger
OnasiWat. It’s a security update for kernel level operation, from my understanding. ANY OS can be bricked by such a thing. Modern Windows, for all its flaws, is at its core incredibly robust. Why are we acting like MS engineers (and I do mean engineers, not people who shove marketing driven shit on top of a good core) are incompetent mole-people who fail at basic tasks?


Nothing would change then; the potential for failure would only increase with wider adoption. Linux isn't some fantabulous mythical unbreakable OS which can never go wrong. It has comparatively fewer issues and fewer security concerns to patch for because it's used less. That's it.
And yes, many critical tasks already run under some form of Linux, sure. But there are things where it isn’t feasible.
This. All of it.
Posted on Reply
#177
Vayra86
R-T-BOr not. I fought tooth and nail to avoid it. And I did. Might not be possible everywhere but at least at my lowly records storage role it was possible. I just have to jump through a longer list of OTHER compliance proofs, but worth it to avoid headaches like this.
Just reboot in safe mode now and you are golden, except that's not allowed in almost any business or gov environment :)
Posted on Reply
#178
windwhirl
DaemonForceHow would you feel if you didn't eat breakfast today? :rolleyes:
:) Whatever you tried to say is completely lost on me :)
Posted on Reply
#179
R-T-B
Vayra86Oh is that why gov sites all over the globe are down? O365 included? You might want to double-check your info. I'm not getting mine from a news site. Even despite MS redundancy and maximum-reliability policies, those went down simply because MS lost four data locations in the US. Closer to my workspace, we lost Azure DevOps.

Additionally, we aren't out of the woods yet even with the CrowdStrike update rolled back, contrary to what news outlets are saying now.
You would have to be running Azure WITH CrowdStrike.
Posted on Reply
#180
unwind-protect
bugOn Linux this would be a simple script that iterates over machines and sshes into each one. I'd be surprised if PowerShell doesn't have something similar.
If you have a kernel module causing a panic on boot before ssh comes up, it would not.

You'd have to boot into single user mode, too.
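Just to illustrate what the "simple script" bug describes would look like, here's a minimal Python sketch (the hostnames and the cleanup command are made up, and, as above, it only works if the box still boots far enough for sshd to answer):

    import subprocess

    # Hypothetical inventory; in practice this would come from your CMDB or Ansible.
    hosts = ["box01.example.net", "box02.example.net"]

    for host in hosts:
        # Placeholder cleanup: remove the offending module and reboot.
        subprocess.run(
            ["ssh", host, "sudo rm -f /tmp/broken_module.ko && sudo reboot"],
            check=False,  # keep going even if one host is already unreachable
        )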

The real difference, of course, is that Linux and FreeBSD users generally only run kernel modules that came with the kernel, not some closed-source third-party garbage. Except for the NVIDIA drivers. Errr...
mab1376TBF if Microsoft offered user-mode APIs into kernel events, it wouldn't be necessary to install a kernel driver.
FreeBSD has dtrace, Linux has eBPF. But we can't know whether those would be sufficient for CrowdStrike. They have a Linux version; I bet it uses a kernel module, too.
Posted on Reply
#181
Makaveli
AssimilatorPeople running in Azure were completely unaffected...
I run an Azure environment and we had no issues.
Posted on Reply
#182
remixedcat
Would something like Dell iDRAC work to fix servers that have this, so you don't have to go to them physically?
Posted on Reply
#183
DaemonForce
I could see iDRAC fixing this by one-time booting a WinPE image scripted to automatically delete \Windows\System32\drivers\CrowdStrike\C-00000291*.sys from the first three drive letters (mounting is a little weird) and then rebooting the machine normally.
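Roughly speaking, the cleanup step boils down to something like this (sketched in Python for readability; a real WinPE startup script would more likely be a batch or PowerShell one-liner, and the drive letters tried here are just a guess):

    import glob
    import os

    # Inside WinPE the Windows volume is often not C:, so try a few letters.
    for drive in ("C:", "D:", "E:"):
        pattern = drive + r"\Windows\System32\drivers\CrowdStrike\C-00000291*.sys"
        for path in glob.glob(pattern):
            try:
                os.remove(path)
                print("deleted", path)
            except OSError as err:
                print("could not delete", path, "-", err)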
Posted on Reply
#184
remixedcat
DaemonForceI could see iDRAC fixing this by one-time booting a WinPE image scripted to automatically delete \Windows\System32\drivers\CrowdStrike\C-00000291*.sys from the first three drive letters (mounting is a little weird) and then rebooting the machine normally.
Awesome, that's cool.. I'll tell ppl that then.. the only PowerEdge stuff I've messed with recently is storage stuff
Posted on Reply
#185
Redwoodz
R-T-BActually banks are amongst those having issues. I'd carry cash for a bit.
The banks and governments are trying to shove a cashless society on us; please, everyone, do not let them!

All tech fails at some point
Posted on Reply
#186
Caring1
HTCSo ... I just went to the hypermarket ... and it was affected by this CrowdStrike problem ...

Thing I found weird is that only the SELF SERVICE payment area was affected: non self-service WAS NOT affected.
I went shopping before and half the checkouts were down with the Windows logo on screen, first time I've seen that. o_O
damricLas Vegas, late last night/early this morning:

I've seen similar on overhead digital signage on highways.
Posted on Reply
#187
R-T-B
RedwoodzThe banks and governments are trying to shove a cashless society on us, please everyone do not let them!
Again, conspiracy theories don't belong here. Can you still get cash? Yes? Then we don't need to theorize about anything else.
Posted on Reply
#188
Launcestonian
No problems where I am, everything smooth as, at least as of now.
Posted on Reply
#189
the54thvoid
Intoxicated Moderator
Stay away from the off-topic conspiracy theories. This was a major problem caused by a major oversight (and possibly lax attitude to update roll-outs). Hopefully it's a wake-up call for organisations to not cut corners and tie in redundancies.
Posted on Reply
#190
HTC
Caring1I went shopping before and half the checkouts were down with the windows logo on screen, first time I've seen that.
Couldn't see the checkouts' screens (there are only 4).

A security guard was placing two shopping carts "fixed together with something" to block access to these checkouts, which is why I asked if it was related to this global issue, to which he said yes.
Posted on Reply
#191
ZoneDymo
Solaris17The title is wrong, I wouldn't put much stake in it. This is and is only a crowdstrike issue; they even admitted it.

If you really want to blame someone, try your management, which underfunded the IT dept so badly that it didn't have the budget to roll this out to testing before it hit mass deployment.

For the rest, please keep wack conspiracy theories away from the thread.
wack conspiracies?
what?
Posted on Reply
#192
chrcoluk
Am I the only one here who had never heard of CrowdStrike until yesterday?
Posted on Reply
#193
HTC
chrcolukAm I the only one here who had never heard of CrowdStrike until yesterday?
Nope ...
Posted on Reply
#195
Evildead666
Yes, this affected our Global Business yesterday.
Had great fun helping end-users try to get their machines back online, and then explain why they then couldn't access any company services.
It mostly came back online quite quickly, but our AD was still having problems yesterday evening, causing problems for user authentication, which is used across most of our sites and services... so the sites and services were up, but people couldn't log into them.
Our Bitlocker key server wasn't available for most of yesterday morning, but came back up pretty quickly thankfully.

We are expecting a few things to still be down on Monday, as there aren't very many people available to go to the still-down critical machines manually.

Just want to put in my 2c that this shouldn't have happened.
Any deployment should be tested before release, and even then rolled out first to one or two "test" customers who get better support and lower prices in exchange for helping with testing and taking on the risk.

Hats off to all those sysadmins that have to spend their whole weekend, and more, getting these systems back up manually.
Posted on Reply
#196
mab1376
www.csoonline.com/article/2872861/crowdstrike-ceo-apologizes-for-crashing-it-systems-around-the-world-details-fix.html
CrowdStrike updates configuration files for the endpoint sensors that are part of its Falcon platform several times a day. It calls those updates “Channel Files.”
The defect was in one it calls Channel 291, the company said in Saturday’s technical blog post. The file is stored in a directory named “C:\Windows\System32\drivers\CrowdStrike\” and with a filename beginning “C-00000291-” and ending “.sys”. Despite the file’s location and name, the file is not a Windows kernel driver, CrowdStrike insisted.
Channel File 291 is used to pass the Falcon sensor information about how to evaluate “named pipe” execution. Windows systems use these pipes for intersystem or interprocess communication, and are not in themselves a threat — although they can be misused.
“The update that occurred at 04:09 UTC was designed to target newly observed, malicious named pipes being used by common C2 [command and control] frameworks in cyberattacks,” the technical blog post explained.
However, it said, “The configuration update triggered a logic error that resulted in an operating system crash.”
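For anyone who hasn't run into them, a Windows named pipe is just a file-like endpoint under \\.\pipe\. A minimal client-side sketch in Python (the pipe name is made up, so nothing will be listening unless you stand up a server first):

    PIPE = r"\\.\pipe\example_pipe"  # hypothetical pipe name

    try:
        # An existing named pipe can be opened like an ordinary file on Windows.
        with open(PIPE, "r+b", buffering=0) as pipe:
            pipe.write(b"ping")
            print(pipe.read(64))
    except OSError as err:
        print("no server is listening on", PIPE, "-", err)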
Posted on Reply
#197
Solaris17
Super Dainty Moderator
ZoneDymowack conspiracies?
what?
That last line wasn't for you; that was a general warning to the thread, given the posts that were deleted which y'all can't see. I figured "For the rest" was clear enough, but I guess not.
Posted on Reply
#198
Redwoodz
the54thvoidStay away from the off-topic conspiracy theories. This was a major problem caused by a major oversight (and possibly lax attitude to update roll-outs). Hopefully it's a wake-up call for organisations to not cut corners and tie in redundancies.
All tech fails in some fashion eventually because we can't predict every scenario. That's not a conspiracy. It's a wake-up call to make sure we don't rely too heavily on automated systems.
Posted on Reply
#199
the54thvoid
Intoxicated Moderator
RedwoodzAll tech fails in some fashion eventually because we can't predict every scenario. That's not a conspiracy. It's a wake-up call to make sure we don't rely too heavily on automated sytems.
Those posts were removed, you can't see them.
Posted on Reply
#200
azrael
Not sure if this has been mentioned, since I couldn't bring myself to read all 199 comments. At least for the first couple of pages, the uninformed seem to blame Microsoft for this. As much as Microsoft screws up, this particular issue isn't on them... in any way, shape, or form.

The reason why this happened is because the CrowdStrike agent is a boot level driver. This means that it gets loaded pretty much before most of anything else, except when you boot in Safe Mode. Then, only absolutely necessary drivers are loaded. You also need Safe Mode to be able to delete the offending file, since in a regular session (when the PC wouldn't crash) the file would be in use and thus locked.
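You can see that for yourself in the registry: boot-start drivers have Start = 0 under their service key. A quick Python sketch (the service key name for the CrowdStrike sensor is assumed here; point it at whatever driver you want to inspect):

    import winreg

    # Assumed service key for the CrowdStrike sensor driver; any driver's key works.
    key_path = r"SYSTEM\CurrentControlSet\Services\CSAgent"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
        start, _ = winreg.QueryValueEx(key, "Start")

    # 0 = boot-start, 1 = system, 2 = automatic, 3 = manual, 4 = disabled
    print("Start =", start)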

I must admit, when I read about the fix I couldn't believe my eyes. A file with the .sys extension is usually a driver. This means actual executable code. Usually anti-malware and HIPS applications work with some form of pattern file. CrowdStrike really does distribute its "signature" updates as executable code. And therein lies the problem. I don't know how many of you know about coding and pointers in particular, but here goes: CrowdStrike tried to call some code in that update (C-00000291*.sys). The problem was, the file CrowdStrike had pushed contained zeros. Now, when you try to call or dereference a pointer of 0 (nullptr), that just won't fly. Usually, to get around potential nullptrs you make a check for it before trying to use the pointer. You can also use try/catch statements. Apparently, someone at CrowdStrike didn't think this was necessary. And... BOOOM!
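To make that concrete, here's a tiny user-space sketch of the two defensive patterns being described, with made-up names (the real failure was in kernel mode, where there's no safety net and the fault takes the whole machine down):

    # "handler" stands in for a pointer read out of a content update file.
    def run_entry_checked(handler):
        if handler is None:          # guard before use: the check that was apparently missing
            return "skipped bad entry"
        return handler()

    def run_entry_caught(handler):
        try:                         # or catch the failure instead of crashing
            return handler()
        except TypeError:
            return "caught bad entry"

    print(run_entry_checked(None))                     # -> skipped bad entry
    print(run_entry_caught(None))                      # -> caught bad entry
    print(run_entry_checked(lambda: "ran normally"))   # -> ran normally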

At the company where I work, we also got hit pretty hard by this issue. While our company is actually on the smaller side, the corporation that owns us uses CrowdStrike. A lot of us are tech-savvy, being developers. Still, we weren't able to help ourselves, because these days you're not allowed to have admin permissions on your workstation. Our consultants are issued laptops, which, because they're used both on- and off-site, are BitLocker-encrypted. That's not necessarily a problem, because each consultant has their key. What they don't have is the recovery key, which for some reason is needed when you actually manage to get into Repair Mode. We had to have our sysadmin take a break from his vacation to help get us up and running again. Many systems are still down, because there was only time to bring the most important ones back online.

And yes, this could just as easily have hit *nix and macOS. But the majority of businesses out there use Windows. Like it or not.
Posted on Reply