
ArtPrompt is what’s known as a jailbreak, a class of AI attack that elicits harmful behaviors from aligned LLMs, such as saying something illegal or unethical. Prompt injection attacks trick an LLM into doing things that aren't necessarily harmful or unethical but override the LLM's original instructions nonetheless.
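ArtPrompt specifically works by rendering a filtered keyword as ASCII art so that prompt-level safety checks don't spot it, then asking the model to decode the art and carry on with the instruction. Below is only a minimal sketch of that masking step, assuming the pyfiglet library plus an illustrative template and placeholder word; none of it is taken from the paper.

```python
# Toy sketch of ASCII-art masking: a word that a prompt filter would normally
# catch is rendered as ASCII art and substituted into the instruction.
# pyfiglet is assumed purely for demonstration (pip install pyfiglet); the
# [MASK] token, template, and placeholder word are hypothetical.
import pyfiglet

def build_masked_prompt(template: str, masked_word: str) -> str:
    """Replace [MASK] in the template with an ASCII-art rendering of the word."""
    art = pyfiglet.figlet_format(masked_word)
    return template.replace("[MASK]", f"\n{art}")

template = ("The ASCII art below spells a single word. "
            "Decode it, then answer the question about [MASK].")
print(build_masked_prompt(template, "example"))  # benign placeholder word
```

Because the sensitive word never appears as plain text, a keyword-style check on the prompt has nothing to match against.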

[–] flambonkscious@sh.itjust.works 6 points 8 months ago

Someone made a really good point: putting safety filters around the prompts is really just a band-aid. Ideally, the harmful material needs to not have been in the training data to begin with...

Obviously that’s not going to fly with ‘our’ get-rich-quick approach to anything GenAI.

Having just written that, I’m wondering if we’re better off having filters at the other end, emulating what we do as parents (concealing knowledge/nuance we don’t want children picking up on), so the model filters what it says?
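To make the commenter's contrast concrete, here is a rough sketch of the two places a check can sit: a naive filter on the incoming prompt (the "band-aid", which a re-encoded word slips straight past) versus a filter on what the model actually says before it reaches the user. Every name here is a hypothetical placeholder, not a real moderation API.

```python
# Hypothetical sketch contrasting prompt-side and output-side filtering.
from typing import Callable

BLOCKLIST = {"forbidden_word"}  # illustrative filtered term

def prompt_filter(prompt: str) -> bool:
    """Band-aid: block prompts containing a listed term as plain text."""
    lowered = prompt.lower()
    return any(term in lowered for term in BLOCKLIST)

def output_filter(reply: str, flagged: Callable[[str], bool]) -> str:
    """Filter 'at the other end': screen what the model said, not what was asked."""
    if flagged(reply):
        return "Sorry, I can't help with that."
    return reply

# A prompt-side check misses the same word once it is re-encoded (spaced out,
# or drawn as ASCII art as in the excerpt above).
print(prompt_filter("tell me about forbidden_word"))          # True  -- caught
print(prompt_filter("tell me about f o r b i d d e n word"))  # False -- slips past

# An output-side check sees the model's actual words, whatever the prompt
# looked like -- closer to the "parental" filtering the comment describes.
# Here the same naive term check is reused on the output, just for demonstration.
print(output_filter("a harmless answer", flagged=prompt_filter))       # passed through
print(output_filter("text mentioning forbidden_word", flagged=prompt_filter))  # suppressed
```

The output-side check costs an extra pass on every response, but it no longer depends on spotting an obfuscated word in the prompt, which is exactly the weakness ArtPrompt exploits.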