this post was submitted on 04 Apr 2025
Technology
I've been assuming it's because they truly have no idea how this tech works
Hey.
I've been in tech for 20 years. I know Python, Java, and C#. I've worked with TensorFlow and language models. I understand this stuff.
You absolutely could train an AI on safe material to do what you're saying.
Stable Diffusion and OpenAI have not guaranteed that they trained their models on safe material.
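To make the "train on safe material" point concrete: the gating step is not exotic. Here's a minimal, hypothetical sketch of curating a training set so the model only ever sees human-approved files. The function names and the use of SHA-256 are my own illustration (real moderation pipelines typically use perceptual hashes like PhotoDNA rather than cryptographic ones), but the allowlist logic is the same.

```python
import hashlib
from pathlib import Path

# Hypothetical sketch: keep only training images whose hash appears
# in a vetted allowlist, so nothing unreviewed reaches the model.
# SHA-256 stands in here for the perceptual hashing a real pipeline
# would use; the gating logic is identical.

def sha256_of(path: Path) -> str:
    """Content hash of a file, used as its identity in the allowlist."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def curated_dataset(image_dir: Path, allowlist: set[str]) -> list[Path]:
    """Return only the files a human reviewer has already approved."""
    return [p for p in sorted(image_dir.iterdir())
            if p.is_file() and sha256_of(p) in allowlist]
```

Anything not explicitly approved simply never enters training, which is the guarantee the big labs haven't made.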
It's like going to buy a burger and the restaurant telling you, "We can't guarantee there's no human meat in here." At best it's lazy. At worst it's abusive.
I mean, there is no photograph of a school bus with Pegasus wings diving down to the Titanic, but I bet one of these AIs can crank out that picture. If it can do that...?
OK, but by that definition Google should be banned, because their crawler isn't guaranteed not to pick up CP.
In my opinion, if the technology involves casting a huge net and then creating an abstracted product from whatever the net catches, with no human seeing the steps in between, is it really causing any actual harm?