this post was submitted on 28 Jan 2025
95 points (99.0% liked)

Asklemmy


A loosely moderated place to ask open-ended questions


The US blocked high-power graphics cards to specific countries, and then got all shaken up when its money moat was pole-vaulted by an embargoed country wielding jank cards.

Why is this a big deal, exactly?

Who benefits if the US has the best AI, and who benefits if it's China?

Is this like the Space Race, where it's just an effort to spit on each other, but ultimately no one really loses, and cool shit gets made?

What does AI "supremacy" mean?

[–] fckreddit@lemmy.ml 2 points 1 week ago (4 children)

Don't believe the hype: LLMs are not AI. Not even close. They are, in fact, much closer to pattern recognition models. Fundamentally, our brains are able to 'understand' any query posed to them. The only problem is that we don't know what 'understanding' even means. How can we then judge whether a model is capable of understanding, or whether its output is just whatever is statistically most likely?
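The "statistically most likely" point can be sketched with a toy bigram model. This is a deliberately crude illustration, nothing like how a real LLM works internally, but it shows a system emitting fluent-looking continuations purely from word-frequency patterns, with no notion of meaning:

```python
from collections import Counter, defaultdict

# Toy bigram "language model": pure pattern recognition over word pairs.
# The corpus and function names here are made up for illustration.
corpus = (
    "the cat sat on the mat . "
    "the cat ate the fish . "
    "the dog sat on the rug ."
).split()

# Count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    """Return the statistically most likely next word -- no truth,
    no understanding, just frequency counts."""
    return follows[word].most_common(1)[0][0]

print(most_likely_next("the"))  # 'cat' (follows 'the' in 2 of 6 cases)
```

The model "answers" confidently but it is only replaying patterns from its training text, which is the commenter's point writ small.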

Second, can AI even know what human experience is like? We cannot give AI inputs in the exact form we receive them. In fact, we cannot input the sensations of touch, flavor, or smell to AI at all. So AI, as of yet, cannot tell you what freshly baked bread smells or feels like, for example. Human experience is still our domain. That means our inspirations are intact and AI cannot create works of art that feel truly human.

Finally, AI by default has no concept of true or false. It takes every statement in its training data as true unless statements are labelled individually by hand. Of course, such an approach doesn't scale to petabytes of text data. So LLMs tend to hallucinate, because again they are only giving out the text that is statistically most likely given the input.

In short, we are still missing many pieces of the puzzle that is true AI. We know it is possible because we exist, but that's about it. Sure, AI is doing better than humans in specific cases, but it is nowhere close to humans in understanding and reasoning.

[–] davel@lemmy.ml 9 points 1 week ago

That’s all well and good—that LLMs aren’t AGI—but not really what’s being asked.
