this post was submitted on 01 Aug 2023
520 points (82.6% liked)


An Asian MIT student asked AI to turn an image of her into a professional headshot. It made her white with lighter skin and blue eyes.::Rona Wang, a 24-year-old MIT student, was experimenting with the AI image creator Playground AI to create a professional LinkedIn photo.

[–] AbouBenAdhem@lemmy.world 92 points 1 year ago* (last edited 1 year ago) (2 children)

They should just call AIs “confirmation bias amplifiers”.

[–] ghariksforge@lemmy.world 27 points 1 year ago (1 children)

AI learns what is in the data.

[–] brambledog@infosec.pub -1 points 1 year ago (1 children)

The AI we have isn't "learning". It's pre-trained.

[–] rebelsimile@sh.itjust.works 12 points 1 year ago

The “pre-training” is learning. Models are often then fine-tuned with additional training (that’s the training that isn’t the ‘pre-training’), i.e. more learning, to achieve specific results.

[–] ShakeThatYam@lemmy.world 14 points 1 year ago (2 children)
[–] postmateDumbass@lemmy.world 2 points 1 year ago (1 children)

Humans will identify stereotypes in AI-generated materials that match the dataset.

Assume the dataset will grow and eventually mimic reality.

How will the law handle discrimination based on data-supported stereotypes?

[–] Pipoca@lemmy.world 3 points 1 year ago (1 children)

Assume the dataset will grow and eventually mimic reality.

How would that happen, exactly?

Stereotypes themselves and historical bias can bias data. And AI trained on biased data will just learn those biases.

For example, in surveys, white people and black people self-report similar levels of drug use. However, for a number of reasons, poor black drug users are caught at a much higher rate than rich white drug users. If you train a model on arrest data, it'll learn that rich white people don't use drugs much but poor black people do tons of drugs. But that simply isn't true.

[–] postmateDumbass@lemmy.world 1 points 1 year ago (1 children)

The datasets will get better because people have started to care.

Historically, much of the data used was whatever was easy and cheap to acquire. Surveys of classmates. Arrest reports. Publicly available, government-curated data.

Good data costs money and time to create.

The more people fact-check, the more flaws can be found and corrected. The more attention the dataset gets, the more funding is likely to come in to resurvey or whatever.

It's part of the peer review thing.

[–] Pipoca@lemmy.world 1 points 1 year ago

It's not necessarily a matter of fact checking, but of correcting for systemic biases in the data. That's often not the easiest thing to do. Systems run by humans often have outcomes that reflect the biases of the people involved.

The power of suggestion runs fairly deep with people. You can change a hiring manager's opinion of a resume by changing only the name at the top of it. You can change the terms a college kid in a winemaking program uses to describe a white wine by adding a bit of red food coloring. Blind auditions for orchestras result in significantly more women being picked than unblinded auditions.

Correcting for biases is difficult, and it's especially difficult on very large data sets like the ones you'd use to train ChatGPT. I'm really not very hopeful that ChatGPT will ever reflect only justified biases, rather than the biases of the broader culture.

[–] altima_neo@lemmy.zip -1 points 1 year ago

That's just stupid and shows a lack of understanding of how this all works.