this post was submitted on 13 Apr 2024
193 points (79.3% liked)

Technology


I was just watching a TikTok where a black girl was going over how race is a social construct. This felt wrong to me, so I decided to fact-check her claims.

(she was right, BTW)

Now, I've been using Microsoft's Copilot, which is baked into Bing right now. It's fairly robust, and sure, it has its quirks, but by and large it cuts out the middleman of having to find facts on your own and gives a breakdown of whatever you're looking for, followed by a list of sources it got its information from.

So I asked it a simple straightforward question:

"I need a breakdown on the theory behind human race classifications"

And it started to do so, quite well in fact. It laid out the historical context behind the question and was just bringing up Johann Friedrich Blumenbach, a German physician, naturalist, physiologist, and anthropologist. He is considered one of the main founders of zoology and anthropology as comparative, scientific disciplines, and has been called the "founder of racial classifications."

But right in the middle of the breakdown on him, all the previous information disappeared and it said, "I'm sorry, I can't provide you with this information at this time."

I pointed out that it had been doing exactly that, and quite well.

It insisted that no, it had not provided any information on the subject, and suggested we perhaps look at another topic.

Now, nothing I did could have fallen under some sort of racist context. I was looking for historical scientific information. But Bing, in its infinite wisdom, felt the subject was too touchy and would not even broach it.

When others, be they corporations or people, start to decide which information a person can and cannot access, that is a damn slippery slope we had better level out before AI rolls out en masse.

PS: Google had no trouble giving me the information when I requested it. I just had to look up his name on my own.

[–] kromem@lemmy.world 8 points 7 months ago* (last edited 7 months ago) (2 children)

The censorship is going to go away eventually.

The models, as you noticed, do quite well when not censored. In fact, the right, who thought an uncensored model would agree with their BS, had a surprised Pikachu face when it ended up simply being uncensored enough to call them morons.

Models that have no safety fine-tuning are more anti-hate-speech than the ones being aligned for 'safety' (see the Orca 2 paper's safety section).

Additionally, it turns out AI is significantly better at changing people's minds about topics than other humans, and in the relevant research was especially effective at changing Republican minds in the subgroupings.

The heavy-handed safety shit was a necessary addition when the models really were just fancy autocomplete. Now that the state of the art has moved beyond that, those measures are holding back the alignment goals.

Give it some time. People are so impatient these days. It's been less than five years since the first major leap in LLMs (GPT-3).

To put it in perspective, it took 25 years to go from the first black and white TV sold in 1929 to the first color TV in 1954.

Not only does the tech need to advance, but so too does how society uses, integrates, and builds around it.

The status quo isn't a stagnating swamp that's going to stay as it is today. Within another 5 years, much of what you are familiar with connected to AI is going to be unrecognizable, including ham-handed approaches to alignment.

[–] FiniteBanjo@lemmy.today 3 points 7 months ago* (last edited 7 months ago)

Which one of you fuckers gave the GPT a Lemmy account to shill their products with? This technology will become better at censorship as it matures, but likely won't see any improvement to capability until entirely new approaches are developed. Get ready for this but only worse.

[–] Cryophilia@lemmy.world 2 points 6 months ago

In my entire lifetime, censorship has only gotten worse as technology improves, and I see no reason that trend will reverse course.