Friday, July 19th 2024

Faulty Windows Update from CrowdStrike Hits Banks and Airlines Around the World

A faulty software update pushed by cybersecurity firm CrowdStrike has taken millions of computers offline, most of them commercial or enterprise machines or Azure deployments. CrowdStrike provides periodic software and security updates to commercial PCs, enterprise PCs, and cloud instances, with a high degree of automation. The latest update reportedly breaks the Windows bootloader, causing blue screens of death (BSODs) and, where configured, invoking Windows Recovery. Enterprises tend to bulletproof the bootloaders of their client machines and disable Microsoft's generic Windows Recovery tools, which means businesses around the world are left with large numbers of machines that will each need manual fixing. The so-called "Windows CrowdStrike BSOD deluge" has hit critical businesses such as banks, airlines, supermarket chains, and TV broadcasters. Meanwhile, sysadmins on Reddit are wishing each other a happy weekend.
Source: The Verge

149 Comments on Faulty Windows Update from CrowdStrike Hits Banks and Airlines Around the World

#26
P4-630
Let AI solve it....:D


At least then it turns out to be useful....
#27
Onasi
efikkanAnd yes, Microsoft certainly deserves blame for how easily their systems break, and for how tedious it is to roll back.
Wat. It’s a security update for kernel level operation, from my understanding. ANY OS can be bricked by such a thing. Modern Windows, for all its flaws, is at its core incredibly robust. Why are we acting like MS engineers (and I do mean engineers, not people who shove marketing driven shit on top of a good core) are incompetent mole-people who fail at basic tasks?
DavenIs it just me or do others think critical IT and societal infrastructure services need to switch from Windows to Linux?
Nothing would change then; the potential for failure will increase with wider adoption. Linux isn't some fantabulous mythical unbreakable OS which can never go wrong. It has comparatively fewer issues and fewer security concerns to patch for because it's used less. That's it.
And yes, many critical tasks already run under some form of Linux, sure. But there are things where it isn’t feasible.
ZoneDymoSo you feel MS is in no way to blame? Aren't they the ones who have a contract with this firm? Is it not up to MS to check and verify this crap before letting it through?
MS isn’t the ones who contract this firm, no. Where did you even infer it?
#28
Chomiq
P4-630Let AI solve it....:D


At least then it turns out to be useful....
AI already did the QA checks...
#29
P4-630
ChomiqAI already did the QA checks...
So it should fix it too.
#30
Robits
We are in an era of incompetence; get used to it.
#31
Assimilator
OnasiWat. It’s a security update for kernel level operation, from my understanding. ANY OS can be bricked by such a thing. Modern Windows, for all its flaws, is at its core incredibly robust. Why are we acting like MS engineers (and I do mean engineers, not people who shove marketing driven shit on top of a good core) are incompetent mole-people who fail at basic tasks?

MS isn’t the ones who contract this firm, no. Where did you even infer it?
I don't like quoting myself, but:
AssimilatorYou're expecting the anti-Microsoft crowd to be capable of basic reading comprehension...
#32
mb194dc
This is a major clusterfuck and the focus will be on CrowdStrike's QA and update release procedure...

Prayers for the admins dealing with this, especially those who have to manually access BitLocker-encrypted machines one by one. If they have the keys.
#33
Chomiq
Their first mistake was rolling an update to Production on a Friday.
#34
P4-630
Some good news too:
Not a problem for F1 as the show goes on.
:clap: :D
#35
Onasi
@Chomiq
This is a good point, actually. Good practice is to not roll shit out before weekends or, god forbid, long holidays. But maybe there was some rapid response fix or vulnerability protection they felt needed to be applied ASAP. Who even knows, at this point.
#37
Wirko
efikkanHaving client PCs go offline may not be surprising, but seeing banks, traders, airlines, media companies etc. have their central services go offline from an update, that's just ridiculous. Come on guys, it's not 1995 any more, this level of incompetence isn't excusable. If you're making billions you can afford properly trained staff and a properly managed tech "stack" with whatever failovers, backups, recovery images/procedures, etc. are needed to ensure reliability and security.
Assuming this affected client PCs primarily, or exclusively: companies don't just have "failovers" for those. Or any other *quick* recovery procedure if many of them fail all at once.
#39
izy
How can this even happen?
#40
Wirko
ChomiqTheir first mistake was rolling an update to Production on a Friday.
Or maybe they found out that companies spend three days to recover from an average Microsoft (and SAP, Adobe and Oracle) Patch Tuesday.
#41
Bones
First off, I'll say I don't know whether this would fall under MS's automatic updating scheme, which I do not like, period.
I have known it to wreck things before (personally saw this happen at work one morning after an overnight forced update, on Win 10 no less) and lead to downtime and all the rest you'd expect.

Regardless of that, it's a major screwup and the fallout will certainly cause some heads to roll somewhere.

I also feel for the IT guys having to address this, because you know some are clocking in and just learning about it, and that would include the boss..... Depending on the boss and the sheer number of machines affected, it may be a really bad & long day for those guys.
#42
bug
WonkoTheSaneUKPour one out for sysadmins, who have just learned that the fix is to log into each affected PC one at a time and delete the single bad file from each one.
It's going to be a loooooooooooooooooooooooooooooooooooooooong day for those in bigger organizations!
On Linux this would be a simple script that iterates over machines and sshes into each one. I'd be surprised if PowerShell doesn't have something similar.
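For illustration only, here's a minimal sketch of the kind of fleet-wide cleanup loop described above, assuming the hosts were reachable over SSH (which a BSOD-looping Windows box is not). The hostnames and admin account are made up, and the C-00000291*.sys pattern and path follow the widely circulated manual-fix instructions rather than anything in this thread:

```python
#!/usr/bin/env python3
"""Sketch of a fleet-wide cleanup loop. Assumptions: SSH reachability,
hypothetical hostnames, and the widely reported C-00000291*.sys pattern."""
import subprocess

HOSTS = ["host01.example.corp", "host02.example.corp"]  # illustrative only
# Deletes the offending channel file(s) from the default CrowdStrike path.
REMOTE_CMD = r'del /q "C:\Windows\System32\drivers\CrowdStrike\C-00000291*.sys"'

for host in HOSTS:
    try:
        # Run the delete command over SSH; a host that is down just gets logged.
        result = subprocess.run(
            ["ssh", f"admin@{host}", REMOTE_CMD],
            capture_output=True, text=True, timeout=30,
        )
        status = "cleaned" if result.returncode == 0 else f"failed (rc={result.returncode})"
    except subprocess.TimeoutExpired:
        status = "unreachable (timed out)"
    print(f"{host}: {status}")
```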

On another note, this is why I insist that most software I install edits my boot loader. Or at least that it installs some kernel-level shenanigans (looking at you, anti-cheats). /s
#43
64K
MS update and chaos ensues. If you've never had your PC borked by an MS update then consider yourself blessed. MS is notorious for rolling out updates with QA that is pitiful.

#44
mab1376
64KMS update and chaos ensues. If you've never had your PC borked by an MS update then consider yourself blessed. MS is notorious for rolling out updates with QA that is pitiful.
This was 100% caused by CrowdStrike and not Microsoft.

The fix can only be done manually from recovery mode. This will take days to weeks to repair at scale.
#45
WonkoTheSaneUK
bugOn Linux this would be a simple script that iterates over machines and sshes into each one. I'd be surprised if PowerShell doesn't have something similar.

On another note, this is why I insist that most software I install edits my boot loader. Or at least that it installs some kernel-level shenanigans (looking at you, anti-cheats). /s
Sadly, many organizations use thousands of BitLocker-enabled PCs, which require individual visits to repair.
#46
mb194dc
mab1376This was 100% caused by CrowdStrike and not Microsoft.

The fix can only be done manually from recovery mode. This will take days to weeks to repair at scale.
There are automated ways to fix it in some environments. The problem is drive encryption... I seriously wonder if the question will be asked: why do you need BitLocker or an equivalent on PCs that don't have any sensitive data on them?

It's the people whose keys are also on crashed servers who are most FUBAR. Even if they have them somewhere, they have to do it all manually. If there are no keys, I guess it's time to restore from backups.
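Purely as an illustration of the "if they have the keys" part: a hypothetical helper that reads an exported hostname-to-recovery-key list (the CSV layout is an assumption; in practice keys would come from an AD, Entra ID, or MBAM export) and prints the standard manage-bde unlock command a technician would type at each machine's recovery prompt:

```python
#!/usr/bin/env python3
"""Hypothetical helper for the manual BitLocker unlock step discussed above.
The CSV layout (hostname,recovery_key) is an assumption; manage-bde -unlock
with a 48-digit recovery password is the standard BitLocker command line."""
import csv
import sys

def unlock_commands(csv_path: str):
    """Yield one manage-bde unlock command per machine in the exported list."""
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["hostname"]
            key = row["recovery_key"]  # 48-digit numerical recovery password
            # What a tech would type at the recovery environment's command prompt
            yield f"{host}: manage-bde -unlock C: -RecoveryPassword {key}"

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "keys.csv"
    for command in unlock_commands(path):
        print(command)
```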
#47
mab1376
WonkoTheSaneUKSadly, many organizations use thousands of BitLocker-enabled PCs, which require individual visits to repair.
Exactly the boat I'm in... I'm the infosec manager so I'm just the one documenting the wreckage.
#48
Chomiq
From a buddy of mine working at MS:
"There was an outage confined in central US datacenters but it was resolved hours before crowdstrike shat its pants"
#49
mab1376
mb194dcThere are automated ways to fix it in some environments. The problem is drive encryption... I seriously wonder if the question will be asked: why do you need BitLocker or an equivalent on PCs that don't have any sensitive data on them?

It's the people whose keys are also on crashed servers who are most FUBAR. Even if they have them somewhere, they have to do it all manually. If there are no keys, I guess it's time to restore from backups.
The problem with that is having a way to classify PCs with and without sensitive info and dynamically enrolling them in BitLocker. Most of our PCs have sensitive info, since we're an electronics company. There are very few without, such as receptionists, janitors/maintenance, etc. The effort isn't worth the reward in that case.

Besides, even without it the effort is equivalent, since we use LAPS.
#50
Wirko
mb194dcThere are automated ways to fix it in some environments. The problem is drive encryption..
Another problem is PCs that won't boot.

Although ... isn't there a thing called Intel Management Engine, which system admins can use to access disks and everything on a PC even if it's turned off or unable to boot?