I don’t think they said anything like “it can’t be intelligent because it’s wrong sometimes.” It’s more that the AI doesn’t exist outside of the prompts you feed it. Humans can introspect, reflect on the actions we’ve taken, and question what effect those actions had on the situation. Humans have desires: we can want to be more accurate and truthful in our actions, and we can reflect on how we might have failed at that in the past. AI cannot do any of this. And we can do it outside the prompt of a similar situation. AI just takes an input, generates an output, wipes its hands, and calls it a day. It doesn’t matter whether it gave you a correct answer, a wrong answer, or a completely illegible sentence.
The previous guy and I agreed that you could trivially write a wrapper around it that gives it an internal monologue and feedback loop. So that limitation is artificial and easy to overcome, and has been done in a number of different studies.
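A minimal sketch of what such a wrapper could look like, just to make the idea concrete. `generate()` here is a placeholder standing in for whatever model call you actually use (an API, local weights, whatever), and the prompt wording and round limit are arbitrary choices, not any particular study's method:

```python
# Rough sketch of a reflection/feedback wrapper around a text model.
# `generate` is a stand-in for the real model call; everything else
# (prompts, max_rounds) is an illustrative choice.

def generate(prompt: str) -> str:
    """Placeholder for the actual model call (API, local weights, etc.)."""
    raise NotImplementedError

def answer_with_reflection(question: str, max_rounds: int = 3) -> str:
    draft = generate(f"Question: {question}\nAnswer:")
    for _ in range(max_rounds):
        # "Internal monologue": ask the model to critique its own draft.
        critique = generate(
            f"Question: {question}\n"
            f"Draft answer: {draft}\n"
            "List any errors or gaps in the draft. If it is fine, say OK."
        )
        if critique.strip().upper().startswith("OK"):
            break
        # Feedback loop: feed the critique back in and revise the draft.
        draft = generate(
            f"Question: {question}\n"
            f"Draft answer: {draft}\n"
            f"Critique: {critique}\n"
            "Rewrite the answer, fixing the problems above."
        )
    return draft
```

The critique step is the "internal monologue" part, and routing it back into the revision step is the feedback loop.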
And it's also trivially easy to have the results of its actions go into that feedback loop and influence its weights and models.
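The simplest version of that is just recording each action and its outcome so the records can later be used to update the model, e.g. as a fine-tuning or preference dataset. A rough sketch, with the reward field and JSONL format as placeholder choices:

```python
# Sketch of outcome logging for a later training pass. The reward signal
# and file format are illustrative, not a standard pipeline.

import json

def log_outcome(prompt: str, response: str, reward: float,
                path: str = "feedback.jsonl") -> None:
    with open(path, "a") as f:
        f.write(json.dumps({
            "prompt": prompt,
            "response": response,
            "reward": reward,  # e.g. 1.0 if the action worked, 0.0 if not
        }) + "\n")
```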
And is having wants and desires necessary to be an "intelligence"? That's getting into the philosophy side of the house, but I would argue they're superfluous.