After reading this article, I had a few dissenting thoughts; maybe someone will share their perspective?

The article suggests not running critical workloads virtually, based on failure scenarios in the hosting environment (such as ransomware on the hypervisor).

That does invite the 'all your eggs in one basket' argument, so I agree that running at least one instance of a service on physical hardware could be justified. But threat actors will try to time their attacks against both if possible; adding complexity cuts both ways here.

I don't really agree with the comments about patching, however. The premise that a physical workload or instance would be patched or updated more often than a virtual one seems unfounded. Hesitance to patch is more about weighing uptime against downtime, breakage, and risk, in my opinion.

Is your organization running critical workloads virtualized like everything else, a combination of physical and virtual, or a combination of all of the above plus off-prem cloud solutions?

top 35 comments
[–] Im_old@lemmy.world 28 points 1 month ago (2 children)

That article is SO wrong. You don't run just one instance of a tier-1 application. Instances live in separate DCs, on separate networks, and the firewall rules allow only application traffic; management (RDP/SSH) comes from another network, through bastion servers. At the very least you have daily/monthly/yearly (yes, yearly) backups, and you take snapshots before patching or app upgrades.

Or you move to containers, with bare hypervisors deployed in minutes via netinstall and configured via Ansible. You got infected? Too bad: reinstall and redeploy. There will be downtime, but nothing horrible. The DBs/storage are another matter of course, but that's why you have synchronous and asynchronous replicas, read-only replicas, offsites, etc.

But for the love of whatever you hold dear, don't run stuff on bare metal because "what if the hypervisor gets infected". Consider the attack vector and work around that.
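
For what it's worth, here's a rough sketch of what that "reinstall and redeploy" flow can look like when scripted, assuming a netinstall/PXE service that reimages hosts on reboot; the host names, inventory, and playbook paths below are hypothetical placeholders:

```python
#!/usr/bin/env python3
# Rough sketch of "reinstall and redeploy", assuming a netinstall/PXE
# service reimages a host when it reboots into rescue mode. Host names,
# inventory, and playbook paths are hypothetical placeholders.
import subprocess
import sys

HOSTS = ["hv01.example.internal", "hv02.example.internal"]  # hypothetical
INVENTORY = "inventory/hypervisors.ini"                     # hypothetical
PLAYBOOK = "playbooks/hypervisor.yml"                       # hypothetical

def redeploy(host: str) -> None:
    # Kick off the reimage (real setups would use IPMI/Redfish or PXE).
    subprocess.run(["ssh", f"rescue@{host}", "reboot"], check=True)
    # A real runbook would wait here for the reimage to finish and the
    # host to come back up before configuring it.
    subprocess.run(
        ["ansible-playbook", "-i", INVENTORY, "--limit", host, PLAYBOOK],
        check=True,
    )

if __name__ == "__main__":
    for host in sys.argv[1:] or HOSTS:
        redeploy(host)
```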

[–] thirteene@lemmy.world 4 points 1 month ago

You can prevent downtime by mirroring your container repository and keeping a cold stack in a different cloud service. We wrote up an LOE (level of effort) and decided the extra maintenance wasn't worth it just to plan for provider failures. But then providers only sign contracts if you are in their cloud, so you end up doing it anyway.

Unfortunately, most victims aren't following best practices, let alone industry standards. The author definitely learned the wrong lesson, though.
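
For the mirroring idea above, a minimal sketch of copying images to a standby registry in another cloud; the registry names and image list are hypothetical, and it assumes skopeo is installed and both registries accept your credentials:

```python
#!/usr/bin/env python3
# Sketch of mirroring images so a cold stack in another cloud can pull
# them. Registry names and the image list are hypothetical.
import subprocess

PRIMARY = "registry.example.com"        # hypothetical primary registry
MIRROR = "mirror.other-cloud.example"   # hypothetical standby registry
IMAGES = ["app/api:1.4.2", "app/worker:1.4.2"]

for image in IMAGES:
    # skopeo copies an image between registries without a local daemon.
    subprocess.run(
        ["skopeo", "copy",
         f"docker://{PRIMARY}/{image}",
         f"docker://{MIRROR}/{image}"],
        check=True,
    )
```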

[–] redfox@infosec.pub 1 points 2 weeks ago (1 children)

Good comments.

Do you think there's still a lot of traditional or legacy thinking in IT departments?

Containers aren't new, and neither is infrastructure as code, but the ability to redeploy a major application stack, or even significant chunks of the enterprise, with automation plus restored data is newer.

[–] Im_old@lemmy.world 2 points 2 weeks ago

There is so much old and creaky stuff lying around that people have no idea what it does. Beige boxes in a cabinet where, when we had to decommission one, the only way to figure out what it did was the scream test: turn it off and see who screams!

Or stuff that was deployed as IaC by an engineer who then left, so it ended up managed via "ClickOps" with the documentation never updated.

When people talk about tier-1 systems, they often forget the peripheral stuff required to make them work. Sure, the super mega shiny ERP system is clustered, with FT and DR, off-site backups, etc. But it talks to the rest of the world through an internal SMTP server running on a Linux box under the stairs, connected to a single consumer-grade switch (I've seen this; the dust bunnies were almost sentient lol).

Everyone wants the new shiny stuff but nobody wants to take care of the old stuff.

Or they say "oh we need a new VM quickly, we'll install the old way and then migrate to a container in the cloud". And guess what, it never happens.

[–] CameronDev@programming.dev 19 points 1 month ago* (last edited 1 month ago) (3 children)

If the hypervisor or any of its components are exposed to the Internet

Lemme stop you right there, wtf are you doing exposing that to the internet...

(This is directed at the article writer, not OP)

[–] redfox@infosec.pub 3 points 1 month ago

Lol, even in 2024, with free VPN/overlay solutions available... they just won't stop exposing control-plane things to the public Internet...

[–] umami_wasbi@lemmy.ml 2 points 1 month ago (1 children)

Well. Misconfiguration happens, and sadly, quite often.

[–] CameronDev@programming.dev 2 points 1 month ago

Sure, but the author makes it sound like that's their standard way of doing things, which is insane.

And if you do have a misconfiguration, the rational thing is to fix it, not dump the entire platform.

[–] terminhell@lemmy.world 2 points 1 month ago (1 children)

True horrors

Like, that's what VPNs and jump boxes are for, at the very least.

[–] CameronDev@programming.dev 2 points 1 month ago (1 children)

Wanna bet they expose SSH on port 22 to the internet on their "critical" servers? 🤣

[–] terminhell@lemmy.world 2 points 1 month ago (1 children)

I've been tempted to set up a honeypot like this lol

[–] CameronDev@programming.dev 1 points 1 month ago

You'll definitely get lots of login attempts. I used to have SSH exposed on port 22; hundreds of attempts per day.

It would be interesting to see what the post-login behavior was.
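
For anyone tempted to try it, a toy listener along these lines only takes a few lines. This sketch just logs each client's SSH version banner; capturing actual post-login behavior would need a real honeypot (e.g. something paramiko-based), and the port and banner here are arbitrary choices:

```python
#!/usr/bin/env python3
# Toy SSH-probe logger: sends a plausible server banner and records the
# client's banner. Not a real honeypot; post-login behavior would need
# something paramiko-based. Port and banner are arbitrary choices.
import datetime
import socket

PORT = 2222  # redirect tcp/22 here with a firewall rule, rather than running as root

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", PORT))
    srv.listen()
    while True:
        conn, (addr, _port) = srv.accept()
        with conn:
            conn.settimeout(5)
            # SSH clients send their version string after the server's.
            conn.sendall(b"SSH-2.0-OpenSSH_9.6\r\n")
            try:
                banner = conn.recv(256).strip()
            except OSError:
                banner = b""
            print(f"{datetime.datetime.now().isoformat()} {addr} {banner!r}")
```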

[–] floofloof@lemmy.ca 18 points 1 month ago (2 children)

Most organizations will avoid patching due to the downtime alone, instead using other mitigations to avoid exploitation. 

If you can't patch because of downtime, maybe you are cheaping out too much on redundancy?
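
With even two-node redundancy, the downtime excuse mostly disappears: drain one node, patch it, put it back, repeat. A rough sketch of that loop, where "lb-ctl" is a placeholder for whatever drain/undrain tooling your load balancer actually provides and the host names are made up:

```python
#!/usr/bin/env python3
# Rough sketch of rolling patching across a redundant pool. "lb-ctl" is a
# placeholder for your load balancer's drain/undrain tooling, and the
# host names are made up.
import subprocess

POOL = ["app01.example.internal", "app02.example.internal"]

for host in POOL:
    # Take the node out of rotation (placeholder command).
    subprocess.run(["lb-ctl", "drain", host], check=True)
    # Patch it while its twin carries the traffic; reboot separately if
    # the kernel changed.
    subprocess.run(
        ["ssh", host, "sudo apt-get update && sudo apt-get -y upgrade"],
        check=True,
    )
    # A real runbook would wait for health checks to pass before this.
    subprocess.run(["lb-ctl", "undrain", host], check=True)
```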

[–] PiJiNWiNg@sh.itjust.works 3 points 1 month ago

That immediately stuck out to me as well; what a lame excuse not to patch. I've been in IT for a while now, and I've never worked in a shop that would let that slide.

[–] redfox@infosec.pub 2 points 1 month ago

Yeah, that's pretty risky at this point in time.

I guess the MBA people weigh the total cost of revenue and reputation loss from things like ransomware recovery and backup restoration against the cost of making their IT systems resilient?

Personally, I don't think so (in many cases), or they'd spend more money on planning and resilience.

[–] catloaf@lemm.ee 16 points 1 month ago

"Don't use virtualization", says exec whose product doesn't run on virtualization

[–] superkret@feddit.org 14 points 1 month ago* (last edited 1 month ago) (3 children)

I work for a newspaper. It has been published without fail every single day since 1945 (when my country was still basically just rubble, deservedly).
So even if all our systems are encrypted by ransomware, the newspaper MUST BE ABLE TO BE PRINTED, as a matter of principle.
We run all our systems virtualized, because anything else would be unmaintainable, and it's a 24/7 operation.

But we also have a copy of the most essential systems running on bare metal, completely air-gapped from everything else, including the internet.
Even I, as the admin, can't access them remotely in any way. If I want to, I have to walk over to another building.

In case of a ransomware attack, the core team meets in a room with internal-only wifi and is given emergency laptops from storage with our software preinstalled. They produce the files for the paper, save them to a USB stick, and deliver that to the printing press.

[–] redfox@infosec.pub 7 points 1 month ago (1 children)

Seems like your org has taken resilience and response planning seriously. I like it.

[–] superkret@feddit.org 5 points 1 month ago (1 children)

Another newspaper in our region was unprepared and got ransomwared. They're still not back to normal, over a year later.
After that, our IT basically got a blank check from the executives to do whatever is necessary.

[–] redfox@infosec.pub 5 points 1 month ago (1 children)

Blank check

Funny how that often seems to be the case. They need to see the consequences, not just be warned. An 'I told you so' moment...

[–] superkret@feddit.org 2 points 1 month ago

I'm just glad they got to see the consequences in another company.
Their senior IT admin had a heart attack a month after the ransomware attack.

[–] 0x0@programming.dev 3 points 1 month ago (1 children)

save them on a USB stick

...which is also kept with the air-gapped system and tossed once used, I assume...

[–] superkret@feddit.org 4 points 1 month ago

There's several for redundancy, in their original packaging, locked in a safe, and replaced yearly.

[–] umami_wasbi@lemmy.ml 2 points 1 month ago (1 children)

How do you keep the air-gapped system in sync?

[–] superkret@feddit.org 3 points 1 month ago (1 children)

We don't. It's a separate, simplified system that only lets the core team members access the layout, editing, and typesetting software that is installed locally on the bare-metal servers.
In emergency mode, they get written articles and images from the reporters via otherwise unused, remotely hosted email addresses, with Signal as a second fallback.
They build the pages from that, send them to the printers, and the paper is printed old-school using photographic plates.

[–] umami_wasbi@lemmy.ml 2 points 1 month ago (1 children)

That's a very high degree of BCDR planning, and quite costly I assume.

[–] superkret@feddit.org 2 points 1 month ago* (last edited 1 month ago)

It costs less than our cybersecurity insurance, which will probably drop us on a technicality when the day comes.
And it's not entirely an economic decision. The paper is family-owned in the third generation, historically significant as one of the oldest papers in the country, and absolutely no one wants to be the one in charge the first time it ever fails to print.

[–] linearchaos@lemmy.world 11 points 1 month ago

Heh, whatever you do, don't do what everybody in the world has been doing successfully for the past 20 years.

[–] solrize@lemmy.world 6 points 1 month ago

Most everything everywhere is virtual these days, even when the host hardware is single-tenant. Companies running hosted applications on bare metal are rare. I run personal stuff that way because Proxmox was too much hassle, but a more serious user would have just dealt with it.

[–] 0x0@programming.dev 4 points 1 month ago

If the virtual machine borks, spin it back up. That's a plus.

Some things should still run at least one instance on bare metal, like domain controllers.

It's not one-size-fits-all.

[–] ramielrowe@lemmy.world 3 points 1 month ago* (last edited 1 month ago) (2 children)

If we boil this article down to its most basic point, it actually has nothing to do with virtualization. The true issue here is centralized infra/application management. The article references two ESXi CVEs that deal with compromised management interfaces.

Imagine a scenario where we avoid virtualization by running Kubernetes on bare-metal nodes, with each Pod assigned exclusively to a Node. If a threat actor can exploit a vulnerability to access the Kubernetes management interface, they can immediately compromise everything within that cluster. We don't even need a container management platform: imagine a collection of bare-metal nodes managed by Ansible via Ansible Automation Platform (AAP). If a threat actor can reach and exploit AAP, they can compromise everything managed by that AAP instance.

The author fundamentally misattributes the issue to virtualization. The issue is centralized management, and there are still significant benefits to using higher-order centralized management solutions.
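
To make the blast-radius point concrete: whoever can issue commands through the central manager can, by design, act on every node it manages at once. A minimal sketch driving plain Ansible from Python, where the inventory path is hypothetical and Ansible stands in for any central controller:

```python
#!/usr/bin/env python3
# Sketch of the blast-radius point: one credential on the central manager
# reaches every node it controls. Ansible is just the example here, and
# the inventory path is hypothetical.
import subprocess

# A single ad-hoc command fans out to every host in the inventory; anyone
# who can run this (legitimately or not) controls all of them at once.
subprocess.run(
    ["ansible", "all", "-i", "inventory/prod.ini", "-m", "ping"],
    check=True,
)
```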

[–] redfox@infosec.pub 2 points 1 month ago

Agreed.

Don't we all use centralized management because there's cost and risk involved when we don't?

More management complexity, missed systems, etc.

So we're balancing risk against operational costs.

And for this discussion, the same argument holds if you swap virtualization for container or automation solutions.

[–] francisfordpoopola@lemmy.world 1 points 1 month ago (1 children)

Would you care to expand on this? I understand many of the pieces mentioned, but I'm not an expert and am trying to learn.

[–] ramielrowe@lemmy.world 1 points 1 month ago (1 children)

In a centralized management scenario, the central controlling service needs the ability to control everything registered with it. So, if the central controlling service is compromised, it is very likely that everything it controlled is also compromised. There are ways to mitigate this at the application level, like role-based and group-based access controls. But if the service itself is compromised, rather than an individual's credentials, then those application-level protections can likely all be bypassed. You can mitigate this a bit by giving each tenant its own deployment of the controlling service, with network isolation between tenants. But even that is still not foolproof.

Fundamentally, security is not solved by any one golden thing. You need layers of protection, so that if one layer is compromised, the others are hopefully still safe.
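
As a toy illustration of the role-based mitigation mentioned above (all user, role, and tenant names are made up): a per-tenant RBAC check limits what a stolen user credential can do, though, per the caveat, it does nothing if the controlling service itself is compromised:

```python
#!/usr/bin/env python3
# Toy per-tenant RBAC check, illustrating the mitigation above. It limits
# what a stolen *user* credential can do, but does nothing if the
# controlling service itself is compromised. All names are made up.

ROLE_GRANTS = {
    "viewer": {"read"},
    "operator": {"read", "restart"},
    "admin": {"read", "restart", "deploy", "delete"},
}

# Roles are assigned per tenant, so one leaked credential only reaches
# the tenants it was explicitly granted.
ASSIGNMENTS = {
    ("alice", "tenant-a"): "admin",
    ("alice", "tenant-b"): "viewer",
    ("bob", "tenant-b"): "operator",
}

def allowed(user: str, tenant: str, action: str) -> bool:
    role = ASSIGNMENTS.get((user, tenant))
    return role is not None and action in ROLE_GRANTS[role]

assert allowed("alice", "tenant-a", "delete")
assert not allowed("alice", "tenant-b", "delete")  # different tenant
assert not allowed("bob", "tenant-a", "read")      # no role there at all
```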

[–] francisfordpoopola@lemmy.world 1 points 1 month ago* (last edited 1 month ago)

Makes perfect sense. I'm not as familiar with the admin side of things.

TY for taking the time to explain.