this post was submitted on 13 Jun 2024
733 points (97.9% liked)

Microsoft is pivoting its company culture to make security a top priority, President Brad Smith testified to Congress on Thursday, promising that security will be "more important even than the company’s work on artificial intelligence."

Satya Nadella, Microsoft's CEO, "has taken on the responsibility personally to serve as the senior executive with overall accountability for Microsoft’s security," Smith told Congress.

His testimony comes after Microsoft admitted that it could have taken steps to prevent two aggressive nation-state cyberattacks from China and Russia.

According to Microsoft whistleblower Andrew Harris, Microsoft spent years ignoring a vulnerability while he proposed fixes to the "security nightmare." Instead, Microsoft feared it might lose its government contract by warning about the bug and allegedly downplayed the problem, choosing profits over security, ProPublica reported.

This apparent negligence led to one of the largest cyberattacks in US history, and officials' sensitive data was compromised due to Microsoft's security failures. The China-linked hackers stole 60,000 US State Department emails, Reuters reported. And several federal agencies were hit, giving attackers access to sensitive government information, including data from the National Nuclear Security Administration and the National Institutes of Health, ProPublica reported. Even Microsoft itself was breached, with a Russian group accessing senior staff emails this year, including their "correspondence with government officials," Reuters reported.

[–] tabular@lemmy.world 88 points 4 months ago (2 children)

Pick one:

  • security
  • proprietary OS
[–] Dudewitbow@lemmy.zip 81 points 4 months ago (3 children)

You can have a proprietary OS that's secure, but the problem is once you get to the point where you're selling data and, of course, allowing anything to be installed, it's no longer secure.

[–] tabular@lemmy.world 19 points 4 months ago* (last edited 4 months ago) (2 children)

You can't verify it's secure if it's proprietary, so it's never secure? Having control over other people's computing creates bad incentives to gain at your users' expense, so you should lose trust from day 1.

[–] fuckwit_mcbumcrumble@lemmy.dbzer0.com 42 points 4 months ago (1 children)

You can have audits done on proprietary software. Just because the public can't see it doesn't mean nobody else can.

[–] tabular@lemmy.world 3 points 4 months ago (1 children)

That just moves the required trust from the 1st party to a 2nd or 3rd party. Unreasonable trust.

[–] fuckwit_mcbumcrumble@lemmy.dbzer0.com 32 points 4 months ago (2 children)

Do you yourself actually audit the software you use, or do you just trust what others say?

[–] circuscritic@lemmy.ca 21 points 4 months ago* (last edited 4 months ago)

Wait....you don't audit every package and dependency before you compile and install?

That's crazy risky my man.

Me? I know security and take it seriously, unlike some people here. I'm actually almost done with my audit and should be ready to finally boot Fedora 8 within the next 6-8 months.

[–] tabular@lemmy.world 0 points 4 months ago* (last edited 4 months ago) (2 children)

This is like asking if you do scientific experiments yourself or do you trust others' results. I distrust private prejudice and trust public, verifiable evidence that's survived peer review.

[–] TropicalDingdong@lemmy.world 18 points 4 months ago* (last edited 4 months ago) (1 children)

Scientists in the room who have to base their experiments on other people's data and results:

Tongue in cheek, but this is actually giving me a particular headache because of some results (not mine) that should never have been published.

[–] tabular@lemmy.world -2 points 4 months ago

That sucks, but the answer to bad results is still more/better tests 😇

[–] fuckwit_mcbumcrumble@lemmy.dbzer0.com 11 points 4 months ago (2 children)

If you're a big enough organization (like the US government) you can pay anyone you want (or even your own people) to audit Microsoft's code.

[–] dfeldman@hachyderm.io 8 points 4 months ago

@fuckwit_mcbumcrumble @tabular I’ve never worked at Microsoft, but I worked at a different enterprise company and they did indeed fly in representatives of different governments who got free access to the code on a company laptop in a conference room to look for any back doors. I always thought it was silly because it is impossible to read all the code.

[–] tabular@lemmy.world -3 points 4 months ago* (last edited 4 months ago)

If I'm a government I'm hella criminalising the sharing of proprietary software.

[–] Dudewitbow@lemmy.zip 11 points 4 months ago* (last edited 4 months ago) (2 children)

I'd argue the unknown can't be used to say whether something is technically secure or insecure. If that kind of reasoning is applied, then any OS using non-open-source hardware is insecure, because the VHDL/Verilog code is not verifiable.

Unless everyone is running an open source RISC-V core or an FPGA for their hardware, it's a game of moving goalposts over where someone plants said flag.

[–] tabular@lemmy.world 1 points 4 months ago* (last edited 4 months ago) (1 children)

Consider people counting paper votes in an election. Multiple political parties are motivated by their own self-interest to watch the counting and prevent each other from faking votes. That is a security feature, and without it the validity of the election has a critical unknown, making it very sussy.

An OS using proprietary software is like an electronic voting machine: we pretend it's secure to feel better about a failing we can't change.

[–] Dudewitbow@lemmy.zip 2 points 4 months ago* (last edited 4 months ago) (1 children)

The problem is that bad actors have direct access to said voting machines. In the case of OS security, the people creating the OS typically aren't the bad actors in question, which goes back to the goalpost situation. Unless you know how everything is designed from the ground up (including the hardware code, in whatever language it's written), you're just setting an arbitrary goalpost: the typical NSA backdoor, or a foreign backdoor via hardware, is independent of the OS. Bluntly placing the goalpost only at the OS stage ignores that you can apply it to any part of the chain (the chip designer, the hardware assembler, the OS designer, the software maker). Setting it at the OS level fundamentally means all OSes are insecure by nature unless you're actively running on an FPGA that's constantly getting updates.

For instance, any CPU with speculative execution is fundamentally insecure, and that's virtually all modern processors. Never mind the OS when the door is already open at the CPU level.
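(For context on the speculative execution point above: this is Spectre variant 1, CVE-2017-5753, a bounds-check bypass. Below is a minimal illustrative gadget in C, a sketch of the vulnerable pattern rather than a working exploit; the array names and sizes are hypothetical, following the layout of published Spectre PoCs.)

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical victim data. */
uint8_t array1[16];
size_t  array1_size = 16;
uint8_t array2[256 * 512];  /* probe array: a distinct cache region per byte value */

/* The bounds check below can be speculatively bypassed: while the branch
 * predictor guesses x < array1_size, the CPU may read array1[x] out of
 * bounds and leave a trace in the cache via the dependent load from
 * array2, which an attacker can later recover with a timing side channel. */
uint8_t victim_function(size_t x) {
    if (x < array1_size)
        return array2[array1[x] * 512];
    return 0;
}
```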

[–] tabular@lemmy.world 3 points 4 months ago

When I think of bad actors and software, I think of security against 3rd parties as well as the intentions of the authors themselves. Not just security, but also privacy and any other anti-features users wouldn't want. That applies to the OS, apps, or drivers. Hardware indeed has concerns like software does, which is just a wider conversation about security, which in turn is part of user/consumer rights.

[–] rambling_lunatic@sh.itjust.works 1 points 4 months ago

Security is in degrees. The highest level would indeed use open-source hardware. I hope to build a rig like that someday.

[–] tengkuizdihar@programming.dev 13 points 4 months ago (1 children)

Sure, it's secure, but is it verifiably secure?

[–] TORFdot0@lemmy.world 7 points 4 months ago (1 children)

I mean, you can provide audit findings and results, and that's a pretty big part of vendor management and due diligence. But at some point you have to accept the risk of using open source software that can be susceptible to supply chain hacks, might be poorly maintained, etc., or accept the risk of taking the closed source company's documentation at face value (which can also be poorly maintained and susceptible to supply chain attacks).

There's got to be some level of risk tolerance to do business, and open source doesn't actually reduce risk. But it can at least reduce enshittification.

[–] cybersandwich@lemmy.world 7 points 4 months ago (2 children)

It's pretty hilarious when people act like being open source means it's "more secure". It can be, but it's absolutely not guaranteed. The xz debacle comes to mind.

There are tons of bugs in open source software. Linux has had its fair share.
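(For context, the "xz debacle" is CVE-2024-3094: a backdoor shipped in xz/liblzma releases 5.6.0 and 5.6.1. Below is a rough C sketch that flags those release strings at runtime, using liblzma's lzma_version_string() API; distro-patched builds may report different strings, so treat this as a heuristic, not an audit.)

```c
/* Compile with: cc check_xz.c -llzma */
#include <stdio.h>
#include <string.h>
#include <lzma.h>

int main(void) {
    /* Ask the liblzma this process actually linked for its version. */
    const char *v = lzma_version_string();
    printf("liblzma runtime version: %s\n", v);

    if (strcmp(v, "5.6.0") == 0 || strcmp(v, "5.6.1") == 0)
        puts("WARNING: this release shipped the CVE-2024-3094 backdoor.");
    else
        puts("Not one of the known backdoored release strings.");
    return 0;
}
```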

[–] PopOfAfrica@lemmy.world 12 points 4 months ago (1 children)

The XZ thing is actually a great point in open source's favor. All it took was some dude to figure it out.

If you try to inject malicious code, you will be found out. That can't happen with proprietary software.

[–] cybersandwich@lemmy.world 2 points 4 months ago* (last edited 4 months ago)

It highlighted some pretty glaring weaknesses in OSS as well. Overworked maintainers, unvetted contributors, etc.

The XZ thing seems like we got "lucky" more than anything. But that type of attack may have been successful already or in progress elsewhere. It's not like people are auditing every line of every open source tool/library. It takes really talented devs and researchers to truly audit code.

I mean, I certainly couldn't do it for anything semi-advanced, super clever, or obfuscated the way the XZ thing was.

But I agree, that the fact we could audit it at all is a plus. The flip side is: an unvetted bad actor was able to publish these changes because of the nature of open source. I'm not saying bad actors can't weasel their way into Microsoft, but that's a much higher bar in terms of vetting.

[–] tabular@lemmy.world 3 points 4 months ago* (last edited 3 months ago)

Proprietary software has to be caught being insecure to be "guilty" of being insecure. Free software can be publicly verified, effectively "proven innocent", a much higher standard.
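(One concrete mechanism behind "publicly verifiable" is reproducible builds: anyone can rebuild the audited source and check that the result matches the shipped artifact. Below is a minimal sketch of the comparison step in C using OpenSSL's EVP digest API; the file name and expected digest are hypothetical placeholders.)

```c
/* Compile with: cc verify.c -lcrypto */
#include <stdio.h>
#include <string.h>
#include <openssl/evp.h>

int main(void) {
    const char *path = "release.tar.gz";   /* hypothetical artifact */
    const char *expected =                  /* hypothetical published SHA-256 */
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855";

    FILE *f = fopen(path, "rb");
    if (!f) { perror(path); return 1; }

    /* Stream the file through SHA-256. */
    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    EVP_DigestInit_ex(ctx, EVP_sha256(), NULL);
    unsigned char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, f)) > 0)
        EVP_DigestUpdate(ctx, buf, n);
    fclose(f);

    unsigned char md[EVP_MAX_MD_SIZE];
    unsigned int mdlen = 0;
    EVP_DigestFinal_ex(ctx, md, &mdlen);
    EVP_MD_CTX_free(ctx);

    /* Hex-encode and compare against the published digest. */
    char hex[2 * EVP_MAX_MD_SIZE + 1];
    for (unsigned int i = 0; i < mdlen; i++)
        sprintf(hex + 2 * i, "%02x", md[i]);
    printf("computed: %s\n", hex);
    puts(strcmp(hex, expected) == 0 ? "MATCH" : "MISMATCH");
    return 0;
}
```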

[–] TWeaK@lemm.ee 3 points 4 months ago

That's the crux of it here. Microsoft wanted into the data game whose rewards they saw Facebook and Google reaping. However, Microsoft still charges you for the software they use to harvest your data.

[–] Cosmos7349@lemmy.world 11 points 4 months ago

I mean what they have to do is obvious, right? Only one of these two options can help increase ad revenue.