this post was submitted on 24 Jul 2024
216 points (96.2% liked)

Technology

[–] 0x0@programming.dev 41 points 5 months ago* (last edited 5 months ago) (4 children)

> On Wednesday, CrowdStrike released a report outlining the initial results of its investigation into the incident, which involved a file that helps CrowdStrike’s security platform look for signs of malicious hacking on customer devices.
>
> The company routinely tests its software updates before pushing them out to customers, CrowdStrike said in the report. But on July 19, a bug in CrowdStrike’s cloud-based testing system — specifically, the part that runs validation checks on new updates prior to release — ended up allowing the software to be pushed out “despite containing problematic content data.”
>
> ...
>
> When Windows devices using CrowdStrike’s cybersecurity tools tried to access the flawed file, it caused an “out-of-bounds memory read” that “could not be gracefully handled, resulting in a Windows operating system crash,” CrowdStrike said.

Couldn't it, though? 🤔

> And CrowdStrike said it also plans to move to a staggered approach to releasing content updates so that not everyone receives the same update at once, and to give customers more fine-grained control over when the updates are installed.

I thought they were already supposed to be doing this?

[–] whatwhatwhatwhat@lemmy.world 9 points 4 months ago (1 children)

The fact that they weren’t already doing staggered releases is mind-boggling. I work for a company with a minuscule fraction of CrowdStrike’s user base / value, and even we do staggered releases.

[–] foggenbooty@lemmy.world 3 points 4 months ago (1 children)

They do have staggered releases, but it's a bit more complicated. The client that you run does have versioning and you can choose to lag behind the current build, but this was a bad definition update. Most people want the latest definitions to protect themselves from zero-days. The whole thing is complicated and a bit wonky, but the real issue here is CrowdStrike's kernel driver not validating the content of the definition before loading it.
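For illustration, here's roughly what "validating the content of the definition before loading it" could look like. A minimal sketch, assuming a completely hypothetical definition format (magic bytes, a record table, payloads), not CrowdStrike's actual channel-file layout:

```python
import struct

MAGIC = b"DEFS"                    # hypothetical magic bytes, not the real format
HEADER = struct.Struct("<4sHHI")   # magic, version, reserved, record_count
RECORD = struct.Struct("<II")      # offset and length of each record's payload

def validate_definition(blob: bytes) -> None:
    """Refuse a definition blob before anything tries to actually parse it."""
    if len(blob) < HEADER.size:
        raise ValueError("blob shorter than header")
    magic, version, _reserved, record_count = HEADER.unpack_from(blob, 0)
    if magic != MAGIC:
        raise ValueError("bad magic bytes")
    if version != 1:
        raise ValueError(f"unsupported version {version}")
    table_end = HEADER.size + record_count * RECORD.size
    if table_end > len(blob):
        raise ValueError("record table runs past the end of the blob")
    for i in range(record_count):
        offset, length = RECORD.unpack_from(blob, HEADER.size + i * RECORD.size)
        # every payload must stay inside the blob -- this is the out-of-bounds case
        if offset < table_end or offset + length > len(blob):
            raise ValueError(f"record {i} points outside the blob")

# a well-formed blob passes; a truncated one is rejected before any real parsing happens
good = HEADER.pack(MAGIC, 1, 0, 1) + RECORD.pack(HEADER.size + RECORD.size, 4) + b"\x00" * 4
validate_definition(good)
try:
    validate_definition(good[:10])
except ValueError as err:
    print("rejected:", err)
```

In a kernel driver the same checks would be in C and the failure path would be "skip this file and log it" rather than an exception, but the bounds checks are the whole idea.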

[–] whatwhatwhatwhat@lemmy.world 2 points 4 months ago

Makes sense that it was a definitions update that caused this, and I get why that’s not something you’d want to lag behind on like you could with the agent. (Putting aside that one of the selling points of next-gen AV/EDR tools is that they’re less reliant on definitions updates compared to traditional AV.) It’s just a bit wild that there isn’t more testing in place.

It’s like we’re always walking this fine line between “security at all costs” vs “stability, convenience, etc”. By pushing definitions as quickly as possible, you improve security, but you’re taking some level of risk too. In some alternate universe, CS didn’t push definitions quickly enough, and a bunch of companies got hit with a zero-day. I’d say it’s an impossible situation sometimes, but if I had to choose between outage or data breach, I’m choosing outage every time.

[–] Plopp@lemmy.world 3 points 5 months ago (1 children)

> Couldn't it, though? 🤔

IANAD and AFAIU, not in kernel mode. Things like trying to read non-existent memory in kernel mode are supposed to crash the system, because continuing could be worse.

[–] 0x0@programming.dev 2 points 4 months ago (1 children)

I meant: couldn't they test for a NULL pointer?

[–] chaospatterns@lemmy.world 1 points 4 months ago* (last edited 4 months ago)

They could, and clearly they should have, but hindsight is 20/20. Software is complex and there are a lot of places where invalid data could come in.
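And for the record, the "test for a NULL pointer" part is basically a one-line guard. A minimal stand-in in Python rather than kernel C, with made-up names, just to show the shape of the check:

```python
def lookup_action(table, index):
    """Return the handler at `index`, or None instead of reading where we shouldn't."""
    # the guard being discussed: check before you dereference/index anything
    if table is None or not (0 <= index < len(table)):
        return None
    return table[index]

handlers = ["log", "quarantine", "block"]
assert lookup_action(handlers, 1) == "quarantine"
assert lookup_action(handlers, 99) is None   # out-of-range index is refused, not read
assert lookup_action(None, 0) is None        # the "NULL pointer" case is refused, not crashed
```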

[–] cheddar@programming.dev 2 points 4 months ago (1 children)

> The company routinely tests its software updates before pushing them out to customers, CrowdStrike said in the report. But on July 19, a bug in CrowdStrike’s cloud-based testing system — specifically, the part that runs validation checks on new updates prior to release — ended up allowing the software to be pushed out “despite containing problematic content data.”

It is time to write tests for tests!
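Only half joking: the release gate itself can be unit-tested by feeding it deliberately broken content and requiring that it rejects every sample. A minimal sketch with an entirely made-up gate, not CrowdStrike's actual validation step:

```python
def release_gate(blob: bytes) -> bool:
    """Stand-in for the cloud-side validation step; True means 'safe to ship'."""
    return len(blob) >= 8 and blob.startswith(b"DEFS") and blob[4:8] != b"\x00" * 4

def test_gate_rejects_garbage():
    bad_blobs = [
        b"",                      # empty file
        b"DEFS",                  # truncated header
        b"DEFS" + b"\x00" * 4,    # header present but empty payload
        b"XXXX" + b"\xff" * 16,   # wrong magic bytes
    ]
    for blob in bad_blobs:
        assert not release_gate(blob), f"gate wrongly passed {blob!r}"

test_gate_rejects_garbage()
print("gate rejected every malformed sample")
```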

[–] Passerby6497@lemmy.world 1 points 4 months ago

My thought is to have a set of machines that have to run the update for a while, and all of them have to pass to allow it to move forward; if any single machine doesn't pass, it halts any further rollout.
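That's basically a canary ring. A minimal sketch of the idea, with hypothetical host names and a stubbed-out health check (not how any real deployment pipeline is wired up):

```python
import time

def run_update_on(host: str, healthy_after_update) -> bool:
    """Install the update on one host and health-check it (both stubbed out here)."""
    time.sleep(0.01)                   # stand-in for "run the update for a while"
    return healthy_after_update(host)  # stand-in for real telemetry / health checks

def staged_rollout(canary_hosts, fleet_hosts, healthy_after_update) -> bool:
    # every canary must survive the update before anything reaches the wider fleet
    for host in canary_hosts:
        if not run_update_on(host, healthy_after_update):
            print(f"canary {host} failed its health check -- halting rollout")
            return False
    print("all canaries healthy -- continuing to the wider fleet")
    for host in fleet_hosts:
        run_update_on(host, healthy_after_update)  # a real system would stagger this in waves too
    return True

# an update that breaks "canary-02" never reaches the fleet
bad_update = lambda host: host != "canary-02"
assert staged_rollout(["canary-01", "canary-02"], [f"host-{i:02}" for i in range(5)], bad_update) is False
```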

[–] AA5B@lemmy.world 1 points 4 months ago* (last edited 4 months ago) (1 children)

> a bug in CrowdStrike’s cloud-based testing system

Always blame the tests. There are so many dark patterns in this industry, including blaming QA for being the last group to touch a release, that I never believe “it’s the tests”.

There’s usually something more systemic going on: something like this gets missed by project management and developers, or maybe they have a blind spot and assume it will never happen, or maybe there’s a lack of communication or planning, or maybe they outsourced testing to the cheapest offshore providers, or maybe everyone is under huge time pressure, but sure, “it’s the tests”.

Ok, maybe I’m not impartial, but when I’m doing a root cause on how something like this got out, my employer expects a better answer than “it’s the tests”.

[–] aStonedSanta@lemm.ee 2 points 4 months ago

There was probably one dude at CrowdStrike going, “Uh, hey guys???” 😆