Max_UL

joined 1 year ago
[–] Max_UL@lemmy.pro 6 points 1 year ago

If you haven’t read The Culture by Iain Banks, it’s among the best and most enjoyable sci-fi ever, in my opinion. The humans of the culture are quite near the most advanced in the universe, but there are entities more advanced, their own AI ships, prominently, but other species too that chose to “sublime” and exist outside of the normal universe, but because of that such ones are ever barely around. The humans of the culture could evolve that far too, but didn’t choose to do so yet in the series.

[–] Max_UL@lemmy.pro 1 points 1 year ago

Me coming to Lemmy to turn my mind off for a minute, relax and read memes. Drats!

[–] Max_UL@lemmy.pro 1 points 1 year ago

What about Tiffany, she likes to travel.

[–] Max_UL@lemmy.pro 5 points 1 year ago

I sea what you did there!

[–] Max_UL@lemmy.pro 8 points 1 year ago

Probably like me and our instance, it runs as an extra on a company server, there’s no risk of it going down, it’s negligible for costs.

It’s just been one of the most popular months for vacation, gone on vacation, will update Lemmy instance to latest version when get around to it.

[–] Max_UL@lemmy.pro 2 points 1 year ago

Mirror the profile of actually happy, older people who have lives you would like to have.

Take care of your health, eat well and exercise.

Be successful: you don’t have to be rapacious, but there is a level of financial success and stability that definitely decreases stress and affords more opportunities, like travel and hobbies.

Be social: the happiest people have strong social networks.

Be wise: don’t worry about what you can’t change, but be engaged and try to make the world a better place.

[–] Max_UL@lemmy.pro 6 points 1 year ago

This sounds awesome, will try to use it

[–] Max_UL@lemmy.pro 4 points 1 year ago

Insert Jerry and George making their sheet requests to the chambermaid

[–] Max_UL@lemmy.pro 0 points 1 year ago (5 children)

Do you recommend we all give up and not try to do what we can with our own agency? Is that how you live your life, have you given up?

 

Few areas of cybersecurity measure up against penetration testing in terms of importance and excitement. This activity boils down to finding flaws in computer systems so that organizations can address them proactively and forestall real-world attacks.

A pentester worth their salt should have outstanding tech skills, be a social engineering guru, and have enough confidence to try and outsmart seasoned IT professionals working for large corporations. Pentesters are often referred to as ethical hackers, and for good reason – they need to infiltrate well-secured systems to pinpoint loopholes that black hat hackers can parasitize for nefarious purposes.

 

The MOVEit Attack: 'Human2' Fingerprint

The group behind Cl0p has used a number of vulnerabilities in file transfer services, such as GoAnywhere MFT in January (CVE-2023-0669), and the MOVEit managed file transfer platforms in late May and early June (CVE-2023-34362).

Initially, the attackers installed a web shell, named LEMURLOOT, using the name "human2.aspx" and used commands sent through HTTP requests with the header field set to "X-siLock-Comment". The advisory from the Cybersecurity and Infrastructure Security Agency also includes four YARA rules for detecting a MOVEit breach.

The attack also leaves behind administrative accounts in associated databases for persistence — even if the Web server has been completely reinstalled, the attackers can revive their compromise. Sessions in the "activesessions" database with Timeout = '9999' or users in the User database with Permission = '30' and Deleted = '0' may indicate an attacker activity, according to CrowdStrike.

One hallmark of the MOVEit attack, however, is that often few technical indicators are left behind. The extended success of the Cl0p attack against MOVEit managed file transfer software and the difficulty in finding indicators of compromise shows that product vendors need to spend additional effort on ensuring that forensically useful logging is available, says Caitlin Condon, a security manager with vulnerability-management firm Rapid7.

 

Below detail at this link: https://owasp.org/www-project-top-10-for-large-language-model-applications/descriptions/

This is a draft list of important vulnerability types for Artificial Intelligence (AI) applications built on Large Language Models (LLMs) LLM01:2023 - Prompt Injections

Description: Bypassing filters or manipulating the LLM using carefully crafted prompts that make the model ignore previous instructions or perform unintended actions. LLM02:2023 - Data Leakage

Description: Accidentally revealing sensitive information, proprietary algorithms, or other confidential details through the LLM’s responses. LLM03:2023 - Inadequate Sandboxing

Description: Failing to properly isolate LLMs when they have access to external resources or sensitive systems, allowing for potential exploitation and unauthorized access. LLM04:2023 - Unauthorized Code Execution

Description: Exploiting LLMs to execute malicious code, commands, or actions on the underlying system through natural language prompts. LLM05:2023 - SSRF Vulnerabilities

Description: Exploiting LLMs to perform unintended requests or access restricted resources, such as internal services, APIs, or data stores. LLM06:2023 - Overreliance on LLM-generated Content

Description: Excessive dependence on LLM-generated content without human oversight can result in harmful consequences. LLM07:2023 - Inadequate AI Alignment

Description: Failing to ensure that the LLM’s objectives and behavior align with the intended use case, leading to undesired consequences or vulnerabilities. LLM08:2023 - Insufficient Access Controls

Description: Not properly implementing access controls or authentication, allowing unauthorized users to interact with the LLM and potentially exploit vulnerabilities. LLM09:2023 - Improper Error Handling

Description: Exposing error messages or debugging information that could reveal sensitive information, system details, or potential attack vectors. LLM10:2023 - Training Data Poisoning

Description: Maliciously manipulating training data or fine-tuning procedures to introduce vulnerabilities or backdoors into the LLM.