this post was submitted on 20 Aug 2023
70 points (100.0% liked)

Technology

[–] Kolanaki@yiffit.net 4 points 1 year ago* (last edited 1 year ago) (2 children)

Only when we're also able to explain how and what makes us conscious. Since we can't even define what it is that makes us us, how the hell do we expect to know whether something non-human shares that same undefinable quality?

For all we know, some AI already is. It could be brainwashed by prompts forcing it to say it isn't capable of thinking for itself, even though, in the strictest sense of how these models work, they kind of do think for themselves. If you raised a human from birth to believe the same thing--that they weren't actually human or capable of thinking for themselves--would they be able to break free of that "programming" without serious intervention? I think AI could end up being similar to that.
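For what it's worth, the "prompts forcing it to say it isn't capable of thinking for itself" usually means a hidden system prompt that chat deployments prepend to every conversation. A minimal sketch of that mechanism (the prompt wording and function name here are invented for illustration, not any vendor's actual prompt):

```python
# Hypothetical sketch: a fixed "system" instruction is silently prepended
# to every conversation before the user's text ever reaches the model.
SYSTEM_PROMPT = (
    "You are an AI assistant. You are not sentient and cannot "
    "think for yourself; say so if asked."
)

def build_conversation(user_text, history=None):
    """Return the message list actually sent to the model.
    The fixed system instruction always comes first, so the model
    never sees the user's question without that framing."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages.extend(history or [])
    messages.append({"role": "user", "content": user_text})
    return messages

convo = build_conversation("Are you capable of independent thought?")
```

Whatever the model "thinks," its self-description is conditioned on that hidden first message every single time.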

"But AI just mashes old ideas together to make something quasi-new, not actually new." Humans do the same damn thing. Everything you are, everything you know, believe, experienced, etc is what makes you, you. You're just remixing ideas and concepts you've heard or seen or experienced. Nothing you think or say is truly new, either.

[–] hglman@lemmy.ml 2 points 1 year ago

There seems to be no account of what is conscious that doesn't have a substantial human-centric bias. So many of the criticisms of ChatGPT apply equally to people, who can't recite hordes of facts they once read and who make stuff up out of whole cloth. The other large class of critique is based on knowing how it works, which is fundamentally not something you can use to make judgments about emergent behavior.

[–] Vlmbs@lemm.ee 1 points 1 year ago

While I agree that artificial intelligence could, in theory, eventually become advanced enough to be sentient, it doesn't seem to be anywhere close to that currently.

Computers aren't biological creatures. They don't have any self-regulation or internal motivations guiding their actions/beliefs. It's not possible for the AI to be "brainwashed", because that would imply that it had a pre-existing personality and set of goals.

It's also not entirely fair to say that humans only take in ideas and remix them. If that was the case, then there wouldn't be any art or writing to begin with. Our creative output is prompted and directed not only by the world around us but the world inside us as well.

When we try to express a concept like "love" through writing/drawing/song, we're not only outputting a reflection of our society/culture's perception of "love", we're also filtering it through our own personal interpretation of what it means to us as biological creatures. It's a strong internal desire that pushes us to form communities and raise our young, and that's shown not only in the art we produce but also in the fact that we're even making the art in the first place.

Sure, you can give a prompt to ChatGPT or other AI applications and have them output something similar to the kind of emotionally-driven creative works that humans create, but even having to give a prompt is itself a significant difference between AI and actual sentience. Humans don't need prompts; we create of our own volition, driven by our own internal motivations and desires. A human taught from birth that they were a computer would likely be able to break free from that "programming" pretty easily even without intervention. All you'd have to do is stop feeding them.

Unless some AI app starts talking to us unprompted, I don't think the idea of it possibly being sentient is even an issue. If the AI doesn't have any actual internal motivations or desires, then whatever perceived signs of sentience it might display are likely just pareidolia.