this post was submitted on 17 Mar 2024

There's an extraordinary amount of hype around "AI" right now, perhaps even greater than in past cycles, where we've seen an AI bubble about once per decade. This time, the focus is on generative systems, particularly LLMs and other tools designed to generate plausible outputs that either make people feel like the response is correct, or where the response is sufficient to fill in for domains where correctness doesn't matter.

But we can tell the traditional tech industry (the handful of giant tech companies, along with startups backed by the handful of most powerful venture capital firms) is in the midst of building another "Web3"-style froth bubble because they've again abandoned one of the core values of actual technology-based advancement: reason.

[–] Windex007@lemmy.world 16 points 8 months ago* (last edited 8 months ago)

I agree that the author didn't do a great job explaining, but they are right about a few things.

Primarily, LLMs are not truth machines. That is just flatly and plainly not what they are. No researcher, not even OpenAI, makes such a claim.

The problem is the public perception that they are, or that they almost are. Because a lot of the time, they're right. They might even be right more frequently than some people's dumber friends. And even when they're wrong, they sound right; they still sound smarter than most people's smartest friends.
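A toy sketch of why "plausible" and "true" diverge (the model, tokens, and probabilities here are entirely made up for illustration): a language model scores continuations by how likely they are given its training text, so greedy decoding can confidently emit a fluent falsehood whenever the wrong answer is better represented than the right one.

```python
# Hypothetical next-token distribution for one prefix. The probabilities
# reflect how often each continuation might appear in training text,
# NOT whether the resulting statement is true.
NEXT_TOKEN = {
    ("the", "capital", "of", "australia", "is"): {
        "sydney": 0.55,    # common misconception, heavily represented in text
        "canberra": 0.40,  # the correct answer
        "melbourne": 0.05,
    },
}

def most_plausible(prefix):
    """Greedy decoding: return the single highest-probability next token."""
    dist = NEXT_TOKEN[tuple(prefix)]
    return max(dist, key=dist.get)

print(most_plausible(["the", "capital", "of", "australia", "is"]))
# greedy decoding picks "sydney": plausible, fluent, and wrong
```

Nothing in the decoding step consults a fact; it only ranks continuations by likelihood, which is the gap between "sounds right" and "is right" that the comment is pointing at.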

So, I think that the point is that there is a perception gap between what LLMs are, and what people THINK that they are.

As long as the perception is more optimistic than the reality, a bubble of some kind will exist. But just because there is a "reckoning" somewhere in the future doesn't imply it will crash to nothing. It just means investment will align more closely with realistic expectations as it becomes clearer what those expectations even are.

LLMs are going to revolutionize, and also destroy, many industries. They will absolutely and fundamentally change the way we interact with technology. No doubt... but for applications which strictly demand correctness, they are not appropriate tools. And investors don't really understand that yet.