this post was submitted on 12 Jun 2025
333 points (96.9% liked)

Fuck AI

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

[–] lime@feddit.nu 3 points 1 day ago (2 children)

it's such a weird stretch, honestly. songs and conversations are no different from predictive text, it's just more of it. expecting it to do logic after ingesting more text is like expecting a chicken to lay Kinder eggs just because you feed it more.

[–] WanderingThoughts@europe.pub 2 points 17 hours ago

It helped that this advanced autocorrect could score highly on many university-level exams. That might also mean the exams don't test logic and reasoning as well as the teachers think they do.

[–] kogasa@programming.dev 3 points 22 hours ago* (last edited 22 hours ago)

Not necessarily do logic, but mimic it, like it can mimic coherent writing and basic conversation despite only being a statistical token muncher. The hope is that there's sufficient information in the syntax to model the semantics, in which case a sufficiently complex and well-trained model of the syntax is also an effective model of the semantics. This apparently holds up well for general language tasks, meaning "what we mean" is well-modeled by "how we say it." It's plausible, at face value, that rigorous argumentation is also a good candidate, which would give language models some way of mimicking logic by talking through a problem. In practice it just isn't very good at it right now. Maybe a better language model could manage it; maybe not at any reasonable cost.
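
For anyone wondering what "statistical token muncher" looks like concretely, here is a minimal sketch in Python of the underlying idea, next-token prediction: a toy bigram model that only counts which word tends to follow which and samples from those counts. The corpus and names are invented purely for illustration; a real LLM replaces the count table with a neural network over long contexts, but the training objective is the same "predict the next token" game.

```python
import random
from collections import defaultdict, Counter

# Toy corpus, invented purely for illustration.
corpus = ("the model predicts the next token and the model picks "
          "the most likely token from what it has seen").split()

# Count how often each word follows each other word: pure syntax statistics.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token(prev: str) -> str:
    """Sample the next word in proportion to how often it followed `prev`."""
    counts = follows[prev]
    if not counts:                      # dead end: fall back to a random word
        return random.choice(corpus)
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights, k=1)[0]

# Generate a few words: locally fluent output, with no reasoning anywhere.
word = "the"
generated = [word]
for _ in range(8):
    word = next_token(word)
    generated.append(word)
print(" ".join(generated))
```

Scale that count table up to billions of parameters and thousands of tokens of context and you get coherent paragraphs, which is exactly the "model the syntax and hope the semantics comes along" bet described above.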