this post was submitted on 28 Jun 2025
629 points (94.1% liked)

Technology

We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it is simply guessing which token (a word or fragment of a word) is most likely to come next in the sequence, based on the data it’s been trained on.
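
To make “guessing what comes next” concrete, here is a deliberately tiny sketch in plain Python (a toy word-frequency model, nothing like a production LLM): it counts which word tends to follow which in a small corpus, then “writes” by repeatedly sampling from those counts. Real models do the same kind of next-token prediction, just with neural networks trained on vastly more data.

```python
import random
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny corpus.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1  # record what came next in the training data

def generate(start, length=8):
    out = [start]
    for _ in range(length):
        counts = following.get(out[-1])
        if not counts:
            break
        words, weights = zip(*counts.items())
        out.append(random.choices(words, weights=weights)[0])  # sample the next word
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the rug . the dog"
```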

This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

Philosopher David Chalmers calls the question of how our physical bodies and brains give rise to conscious experience the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal mental states with sensory representations of the body’s condition (changes in heart rate, sweating and much more).

Given the paramount importance of the human senses and emotions for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, which is a machine, and consciousness, which is a human phenomenon.

https://archive.ph/Fapar

top 50 comments
[–] benni@lemmy.world 20 points 2 hours ago (1 children)

I think we should start by not following this marketing speak. The sentence "AI isn't intelligent" makes no sense. What we mean is "LLMs aren't intelligent".

[–] undeffeined@lemmy.ml 4 points 38 minutes ago

I make a point of always referring to it as an LLM, exactly to make the point that it's not an intelligence.

[–] fodor@lemmy.zip 5 points 4 hours ago

Mind your pronouns, my dear. "We" don't do that shit because we know better.

[–] Bogasse@lemmy.ml 15 points 7 hours ago

The idea that RAG "extends their memory" is also complete bullshit. We've literally just finally built a working search engine, but instead of giving it a nice interface, we only let chatbots use it.
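
For readers who haven't met the acronym: RAG (retrieval-augmented generation) is, at its core, exactly that, a search step whose results get pasted into the chatbot's prompt. A minimal sketch, with a toy keyword search and a hypothetical ask_llm() call standing in for whatever model is actually used:

```python
# Minimal RAG sketch: search first, then paste the hits into the prompt.
documents = [
    "Lemmy is a federated link aggregator.",
    "RAG systems retrieve documents and feed them to a language model.",
    "Em dashes are longer than en dashes.",
]

def retrieve(query, docs, k=2):
    """Toy keyword-overlap search; a real system would use a proper index."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(d.lower().split())), d) for d in docs]
    return [d for score, d in sorted(scored, reverse=True)[:k] if score > 0]

def build_prompt(query, docs):
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("What does a RAG system do?", documents)
print(prompt)
# answer = ask_llm(prompt)  # hypothetical model call, not a real library function
```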

[–] aceshigh@lemmy.world 16 points 9 hours ago* (last edited 2 hours ago) (4 children)

I’m neurodivergent, and I’ve been working with AI to help me learn about myself and how I think. It’s been exceptionally helpful. A human wouldn’t have been able to help me because I don’t use my senses or emotions like everyone else, and I didn’t know it... AI excels at mirroring and support, which was exactly what was missing from my life. I can see how this could go very wrong with certain personalities…

E: I use it to give me ideas that I then test out solo.

[–] Xande@discuss.tchncs.de 3 points 2 hours ago

So, you say AI is a tool that worked well when you (a human) used it?

[–] PushButton@lemmy.world 10 points 5 hours ago* (last edited 5 hours ago) (1 children)

That sounds fucking dangerous... You really should consult a HUMAN expert about your problem, not an algorithm made to please its interlocutor...

[–] SkyeStarfall@lemmy.blahaj.zone 0 points 50 minutes ago

I mean, sure, but that's really easier said than done. Good luck getting good mental healthcare for cheap in the vast majority of places.

[–] Snapz@lemmy.world 23 points 7 hours ago (1 children)

This is very interesting... because the general saying is that AI is convincing to non-experts in the field it's speaking about. So in your specific case, you are actually saying that you aren't an expert on yourself, therefore the AI's assessment is convincing to you. Not trying to upset you; it's genuinely fascinating how that theory holds true here as well.

[–] aceshigh@lemmy.world 2 points 2 hours ago

I use it to give me ideas that I then test out. It’s fantastic at nudging me in the right direction, because all that it’s doing is mirroring me.

[–] biggerbogboy@sh.itjust.works 5 points 9 hours ago (1 children)

Are we twins? I do the exact same and for around a year now, I've also found it pretty helpful.

[–] Liberteez@lemm.ee 5 points 6 hours ago

I did this for a few months when it was new to me, and I still go to it when I'm stuck pondering something about myself. I usually move on from the conversation by the next day, though, so it's just an inner-dialogue enhancer.

[–] bbb@sh.itjust.works 19 points 13 hours ago (2 children)

This article is written in such a heavy ChatGPT style that it's hard to read. Asking a question and then immediately answering it? That's AI-speak.

[–] JackbyDev@programming.dev 11 points 7 hours ago

Asking a question and then immediately answering it? That's AI-speak.

HA HA HA HA. I UNDERSTOOD THAT REFERENCE. GOOD ONE. 🤖

[–] sobchak@programming.dev 14 points 13 hours ago (1 children)

And excessive use of em-dashes, which is the first thing I look for. He does say he uses LLMs a lot.

[–] bbb@sh.itjust.works 15 points 10 hours ago* (last edited 10 hours ago) (2 children)

"…" (Unicode U+2026 Horizontal Ellipsis) instead of "..." (three full stops), and using them unnecessarily, is another thing I rarely see from humans.

Edit: Huh. Lemmy automatically changed my three full stops to the Unicode character. I might be wrong on this one.

[–] mr_satan@lemmy.zip 5 points 7 hours ago (1 children)

Am I… AI? I do use ellipses and (what I now see are) en dashes for punctuation. Mainly because they're longer than hyphens and look better in a sentence. An em dash looks too long.

However, that's on my phone. On a normal keyboard I use 3 periods and 2 hyphens instead.

[–] Sternhammer@aussie.zone 3 points 5 hours ago (1 children)

I’ve long been an enthusiast of unpopular punctuation—the ellipsis, the em-dash, the interrobang‽

The trick to using the em-dash is not to surround it with spaces, which tend to break up the text visually. So, this feels good—to me—whereas this — feels unpleasant. I learnt this approach from reading typographer Erik Spiekermann's book, *Stop Stealing Sheep & Find Out How Type Works*.

[–] mr_satan@lemmy.zip 1 points 3 hours ago

My language doesn't really have hyphenated words or different dashes. It's mostly punctuation within a sentence. As such there are almost no cases where one encounters a dash without spaces.
