this post was submitted on 28 Jun 2025
834 points (94.6% liked)

Technology

72062 readers
2782 users here now


We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which word (or word fragment) will come next in a sequence, based on the data it’s been trained on.
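To make the "statistical machine" point concrete, here is a deliberately toy sketch (a bigram word model, nowhere near a real LLM's scale or architecture) of what "guessing the next word from counted patterns" means. All names and the tiny corpus are invented for illustration:

```python
import random
from collections import Counter, defaultdict

# Toy training data: the "ocean of human text", shrunk to one sentence.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word follows which -- this is the entire "learning" step.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def guess_next(word):
    # Pick the statistically most frequent follower: pure probability,
    # no understanding of cats or mats involved.
    return counts[word].most_common(1)[0][0]

# "Write" by repeatedly guessing the next word.
word, output = "the", ["the"]
for _ in range(4):
    word = guess_next(word)
    output.append(word)
print(" ".join(output))
```

Real models predict over tens of thousands of tokens with billions of learned parameters rather than raw bigram counts, but the core loop (score candidates, emit the likely one, repeat) is the same shape.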

This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).

Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.

https://archive.ph/Fapar

(page 3) 50 comments
[–] fodor@lemmy.zip 6 points 16 hours ago

Mind your pronouns, my dear. "We" don't do that shit because we know better.

[–] aceshigh@lemmy.world 17 points 22 hours ago* (last edited 15 hours ago) (8 children)

I’m neurodivergent, I’ve been working with AI to help me learn about myself and how I think. It’s been exceptionally helpful. A human wouldn’t have been able to help me because I don’t use my senses or emotions like everyone else, and I didn’t know it... AI excels at mirroring and support, which was exactly missing from my life. I can see how this could go very wrong with certain personalities…

E: I use it to give me ideas that I then test out solo.

[–] biggerbogboy@sh.itjust.works 5 points 21 hours ago (1 children)

Are we twins? I do the exact same and for around a year now, I've also found it pretty helpful.

[–] Liberteez@lemm.ee 8 points 18 hours ago

I did this for a few months when it was new to me, and still go to it when I am stuck pondering something about myself. I usually move on from the conversation by the next day, though, so it's just an inner dialogue enhancer.

load more comments (7 replies)
[–] bbb@sh.itjust.works 21 points 1 day ago (2 children)

This article is written in such a heavy ChatGPT style that it's hard to read. Asking a question and then immediately answering it? That's AI-speak.

[–] sobchak@programming.dev 17 points 1 day ago (1 children)

And excessive use of em-dashes, which is the first thing I look for. He does say he uses LLMs a lot.

[–] bbb@sh.itjust.works 18 points 22 hours ago* (last edited 22 hours ago) (2 children)

"…" (Unicode U+2026 Horizontal Ellipsis) instead of "..." (three full stops), and using them unnecessarily, is another thing I rarely see from humans.

Edit: Huh. Lemmy automatically changed my three full stops to the Unicode character. I might be wrong on this one.
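(For what it's worth, the two really are distinct at the string level, whatever the client renders; a quick, purely illustrative Python check:)

```python
import unicodedata

ellipsis = "\u2026"   # the single Unicode character "…"
dots = "..."          # three separate full stops

print(len(ellipsis), len(dots))      # one character vs. three
print(unicodedata.name(ellipsis))    # HORIZONTAL ELLIPSIS
print(ellipsis == dots)              # different strings entirely
```

So if Lemmy "changed" the dots, that substitution happened in the editor or renderer, not in the underlying text comparison.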

[–] mr_satan@lemmy.zip 6 points 19 hours ago (4 children)

Am I… AI? I do use ellipses and (what I now see is) en dashes for punctuation. Mainly because they are longer than hyphens and look better in a sentence. Em dash looks too long.

However, that's on my phone. On a normal keyboard I use 3 periods and 2 hyphens instead.

[–] Sternhammer@aussie.zone 4 points 17 hours ago (1 children)

I’ve long been an enthusiast of unpopular punctuation—the ellipsis, the em-dash, the interrobang‽

The trick to using the em-dash is not to surround it with spaces, which tend to break up the text visually. So, this feels good—to me—whereas this — feels unpleasant. I learnt this approach from reading typographer Erik Spiekermann’s book, *Stop Stealing Sheep & Find Out How Type Works*.

[–] mr_satan@lemmy.zip 3 points 15 hours ago (2 children)

My language doesn't really have hyphenated words or different dashes. It's mostly punctuation within a sentence. As such there are almost no cases where one encounters a dash without spaces.

load more comments (2 replies)
load more comments (3 replies)
[–] sqgl@sh.itjust.works 4 points 22 hours ago

> Edit: Huh. Lemmy automatically changed my three full stops to the Unicode character.

Not on my phone it didn't. It looks as you intended it.

load more comments (1 replies)
[–] psycho_driver@lemmy.world 14 points 1 day ago (1 children)

Hey, AI helped me stick it to the insurance man the other day. I was futzing around with coverage amounts on one of the major insurance companies' websites pre-renewal to try to get the best rate, and it spit up a NaN renewal amount for our most expensive vehicle. It let me go through with the renewal at less than $700 and now says I'm paid in full for the six-month period. It's been days now with no follow-up... I'm pretty sure AI snuck that one through for me.

[–] laranis@lemmy.zip 14 points 23 hours ago (4 children)

Be careful... If you get in an accident I guaran-god-damn-tee you they will use it as an excuse not to pay out. Maybe after a lawsuit you'd see some money but at that point half of it goes to the lawyer and you're still screwed.

load more comments (4 replies)
[–] Imgonnatrythis@sh.itjust.works 51 points 1 day ago (4 children)

Good luck. Even David Attenborough can't help but anthropomorphize. People will feel sorry for a picture of a dot separated from a cluster of other dots. The play by AI companies is that it's human nature for us to want to give just about every damn thing human qualities. I'd explain more, but as I write this my smoke alarm is beeping a low battery warning, and I need to go put the poor dear out of its misery.

[–] audaxdreik@pawb.social 23 points 1 day ago

This is the current problem with "misalignment". It's a real issue, but it's not "AI lying to prevent itself from being shut off" as a lot of articles tend to anthropomorphize it. The issue is (generally speaking) it's trying to maximize a numerical reward by providing responses to people that they find satisfactory. A legion of tech CEOs are flogging the algorithm to do just that, and as we all know, most people don't actually want to hear the truth. They want to hear what they want to hear.

LLMs are a poor stand-in for actual AI, but they are at least proficient at the thing they are actually doing. Which leads us to things like this: https://www.youtube.com/watch?v=zKCynxiV_8I

load more comments (3 replies)
[–] mechoman444@lemmy.world 12 points 1 day ago* (last edited 1 day ago) (19 children)

In that case let's stop calling it AI, because it isn't, and use its correct abbreviation: LLM.

load more comments (19 replies)
[–] Geodad@lemmy.world 33 points 1 day ago (8 children)

I've never been fooled by their claims of it being intelligent.

It's basically an overly complicated series of if/then statements that tries to guess the next series of inputs.

[–] kromem@lemmy.world 22 points 23 hours ago (6 children)

It very much isn't and that's extremely technically wrong on many, many levels.

Yet still one of the higher up voted comments here.

Which says a lot.

[–] Hotzilla@sopuli.xyz 1 points 11 hours ago* (last edited 11 hours ago) (1 children)

Calling these new LLMs just if statements is quite an oversimplification. They are technically something that has not existed before, and they enable use cases that were previously impossible to implement.

This is far from general intelligence, but there are now solutions to a few coding problems that were near impossible five years ago.

Five years ago I would have laughed in your face if you had suggested writing code that summarizes a description inputted by a user. Now I laugh: hand over your wallet, because I need to call an API or buy a few GPUs.

load more comments (1 replies)
load more comments (5 replies)
[–] anzo@programming.dev 16 points 1 day ago* (last edited 1 day ago)

I love this resource, https://thebullshitmachines.com/ (see, for example, lesson 1).

> In a series of five- to ten-minute lessons, we will explain what these machines are, how they work, and how to thrive in a world where they are everywhere.
>
> You will learn when these systems can save you a lot of time and effort. You will learn when they are likely to steer you wrong. And you will discover how to see through the hype to tell the difference.

Also, Anthropic (ironically) has some nice paper(s) about the limits of "reasoning" in AI.

load more comments (6 replies)
[–] some_guy@lemmy.sdf.org 20 points 1 day ago (1 children)

People who don't like "AI" should check out the newsletter and / or podcast of Ed Zitron. He goes hard on the topic.

[–] kibiz0r@midwest.social 18 points 1 day ago* (last edited 1 day ago) (1 children)

Citation Needed (by Molly White) also frequently bashes AI.

I like her stuff because, no matter how you feel about crypto, AI, or other big tech, you can never fault her reporting. She steers clear of any subjective accusations or prognostication.

It’s all “ABC person claimed XYZ thing on such and such date, and then 24 hours later submitted a report to the FTC claiming the exact opposite. They later bought $5 million worth of Trumpcoin, and two weeks later the FTC announced they were dropping the lawsuit.”

load more comments (1 replies)
[–] RalphWolf@lemmy.world 24 points 1 day ago (9 children)

Steve Gibson on his podcast, Security Now!, recently suggested that we should call it "Simulated Intelligence". I tend to agree.

load more comments (9 replies)