this post was submitted on 13 Nov 2024
661 points (95.0% liked)

Technology

(page 2) 50 comments
[–] Mushroomm@sh.itjust.works 13 points 3 days ago

It's been 5 minutes since the new thing did a new thing. Is it the end?

[–] Boxscape@lemmy.sdf.org 31 points 4 days ago (5 children)

Well duhhhh.
Language models are insufficient.
They also need:

[–] KeenFlame@feddit.nu 16 points 3 days ago (3 children)

I am so tired of the AI hype and hate. Please give me my gen art interest back; please just make programming art obscure again, I beg of you.

[–] nl4real@lemmy.world 7 points 3 days ago

Fingers crossed.

[–] Etterra@lemmy.world 18 points 3 days ago

Good. I look forward to all these idiots finally accepting that they drastically misunderstood what LLMs actually are and are not. I know their idiotic brains can only grasp simple concepts like "line must go up" and follow them like religious tenets, so I'm sure they'll waste everyone's time and increase enshittification with some other new bullshit once they quietly remove their broken (and unprofitable) AI from stuff.

[–] rational_lib@lemmy.world 11 points 3 days ago (17 children)

As I use Copilot to write software, I have a hard time seeing how it'll get better than it already is. The fundamental problem of all machine learning is that the training data has to be good enough to solve the problem. So the problems I run into make sense, like:

  1. Copilot can't read my mind and figure out what I'm trying to do.
  2. I'm working on an uncommon problem where the typical solutions don't work.
  3. Copilot is unable to tell when it doesn't "know" the answer, because of course it's just simulating communication and doesn't really know anything.

Problems 2 and 3 could be alleviated, though probably not solved completely, with more and better data or engineering changes; but obviously AI developers started by training the models on the most useful data and the strategies they think work best. Problem 1 seems fundamentally unsolvable.

I think there could be some more advances in finding more and better use cases, but I'm a pessimist when it comes to any serious advances in the underlying technology.

[–] OsrsNeedsF2P@lemmy.ml 1 points 2 days ago

> 1. Copilot can't read my mind and figure out what I'm trying to do.

Try writing comments
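Something like this, for instance (a made-up example; the function and its behaviour are hypothetical, it's just the kind of intent-revealing comment that gives Copilot context to complete from):

```python
from datetime import datetime, timezone

# Parse an ISO 8601 timestamp string and return a timezone-aware datetime
# in UTC. Return None if the string is empty or not valid ISO 8601.
# Treat naive timestamps (no offset) as already being in UTC.
def parse_timestamp_utc(raw: str):
    if not raw:
        return None
    try:
        dt = datetime.fromisoformat(raw)
    except ValueError:
        return None
    if dt.tzinfo is None:
        return dt.replace(tzinfo=timezone.utc)
    return dt.astimezone(timezone.utc)
```

The point is that the comment spells out intent and edge cases up front, so the model isn't left guessing what you're trying to do.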

[–] homesweethomeMrL@lemmy.world 32 points 4 days ago (5 children)

"The economics are likely to be grim," Marcus wrote on his Substack. "Sky high valuation of companies like OpenAI and Microsoft are largely based on the notion that LLMs will, with continued scaling, become artificial general intelligence."

"As I have always warned," he added, "that's just a fantasy."

[–] dog_@lemmy.world 6 points 3 days ago
[–] shortwavesurfer@lemmy.zip 19 points 4 days ago

Because nobody could have possibly seen that coming. /s
