this post was submitted on 30 Aug 2023
229 points (91.9% liked)

I used to think typos meant that the author (and/or editor) hadn't checked what they wrote, so the article was likely poor quality and less trustworthy. Now I'm reassured that it's a human behind it and not a glorified word-prediction algorithm.

[–] j4k3@lemmy.world 8 points 1 year ago (5 children)

Think of AI more like human cultural consciousness that we collectively embed into everything we share publicly.

It's a tool that is available for anyone to tap into. The thing you are complaining about is not the AI; it is the result of the person who wrote the code that generated the output. They are leveraging a tool, but the tool is not the problem. This is like blaming Photoshop because a person uses it to make child porn.

[–] Niello@kbin.social 0 points 1 year ago* (last edited 1 year ago) (2 children)

Photoshop is a general-purpose image editing tool that is mostly harmless. That's not the same for AI. The people who created these models and allow others to use them do so without enough consideration of the risks, which they know are much, much higher than something like Photoshop's.

What you say applies to Photoshop because the devs know what it can do, and the possible damage from misuse is within reason. The AI you are talking about is controlled by the companies that create the models and use them to provide services. It follows that it is their responsibility to make sure their products are not harmful to the extent they are, especially when the consequences are not fully known.

Your reasoning is the equivalent of saying it's the kid's fault for getting addicted to predatory mobile games and wasting excessive money on them. Except that it's not entirely their fault, and these programs aren't just neutral tools but tools customised to the will of their owners (the companies that own them). So there is such a thing as an evil tool.

It's the responsibility of those companies, the people involved, and lawmakers to make the new technology safe, with minimal negative impact on society, rather than to chase their own profits while ignoring the moral choices.

[–] j4k3@lemmy.world 5 points 1 year ago (1 children)

This is not true. You do not know all the options that exist, or how they really work. I do. I am only using open-source, offline AI; I do not use anything proprietary. All of the LLMs are just a complex system of categories combined with a network that calculates which word should come next. Everything else is external to the model. The model itself is nothing like an artificial general intelligence. It has no persistent memory. The only thing it actually does is predict the next word.
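
The "predict what word should come next" idea can be sketched with a toy example. This is only an illustration, not how a real LLM is built: real models use a neural network trained on vast text, while this sketch just counts which word follows which in a made-up corpus and greedily picks the most frequent follower.

```python
from collections import Counter, defaultdict

# Made-up toy corpus; a real LLM learns far richer statistics with a neural network.
corpus = "the cat sat on the mat and the cat slept".split()

# Count bigrams: how often each word follows each preceding word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent word seen after `word`, or None if unseen."""
    followers = counts[word]
    return followers.most_common(1)[0][0] if followers else None

# Generate text greedily, one next-word prediction at a time.
word = "the"
out = [word]
for _ in range(4):
    word = predict_next(word)
    if word is None:
        break
    out.append(word)
print(" ".join(out))
```

Each step looks only at the current context and emits the likeliest continuation; nothing is "remembered" between runs, which is the point being made above about persistent memory.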

[–] Niello@kbin.social 1 points 1 year ago* (last edited 1 year ago)

Do you always remember things as is? Or do you remember an abstraction of it?

You also don't need to know everything about something to be able to interpret risks and possibilities, btw.
