this post was submitted on 09 Sep 2024

TechTakes

Revered friends. I wrote a thing. Mainly because I had a stack of stuff on Joseph Weizenbaum on tap and the AI classroom thing was stuck in my head. I don't know if it's good, but it's certainly written.

[–] UnseriousAcademic@awful.systems 3 points 2 months ago (1 children)

As in with Eliza, where we interpret there as being humanity behind it? Or that ultimately "humans demanding we leave stuff to humans because those things are human" is OK?

[–] MajorHavoc@programming.dev 1 points 2 months ago (1 children)

As in with Eliza, where we interpret there as being humanity behind it?

This one. It helps explain some of the unfounded excitement and overconfidence we're seeing. It's not all unfounded, but the uncanny valley AI has stepped into makes it natural to want to root for it.

Honestly, I'm kind of reminded of some of the philosophy around semiotics and authorship. Like, when reading a story, part of the interpretation comes from constructing a mental image of the author talking to a mental image of the audience, and the way those mental images get constructed can color how we read and understand the text.

In that sense, the tendency to construct a mental image of a person talking through ChatGPT or Eliza makes much more sense. I've been following the Alex Jones interviews of ChatGPT, and the illusion is much weaker when listening to the conversation than when it's mediated through text, which is probably a good sign for those of us who like actual people. Even when interactive, chatting through text is sufficiently less personal that it's easier to fill in all the extra humanity, though as Alex himself shows in those interviews, it is definitely not impossible to get fooled through other media.

But that's at the ground level of interaction, and it's probably noteworthy that the press releases for all these policies are not being written by a bot. This tendency to fill in a human being definitely lines up with the tech-authoritarian tendency, which OP has discussed elsewhere, to dehumanize both their victims and, more significantly, themselves. I think the way they talk about themselves and the people who work on their "side" is, if anything, more alarming than the way they talk about their victims.