this post was submitted on 19 Oct 2023
535 points (96.7% liked)


Black Mirror creator unafraid of AI because it’s “boring”: Charlie Brooker doesn’t think AI is taking his job any time soon because it only produces trash

[–] SkyeStarfall@lemmy.blahaj.zone 109 points 1 year ago (58 children)

The thing with AI is that it mostly produces trash right now.

But look back to 5 years ago: what were people saying about AI? Hell, many thought that the kind of art AI can make today would be impossible for it to create! ...And then it suddenly did. Well, it wasn't actually sudden, and the people in the space probably saw it coming, but still.

The point is, we keep getting better at creating AIs that do stuff we thought was impossible a few years ago, stuff we said would show true intelligence if an AI could do it. And yet, every time some new impressive AI gets developed, people say it sucks, is boring, is far from good enough, etc. Meanwhile it slowly, every time, creeps closer to us, replacing a few jobs here and there on the fringes. Sure, it's not true intelligence, and it still doesn't beat humans, but it beats most of them, on demand, and what happens when inevitably better AIs get created?

Maybe we're in for another decades-long AI winter... or maybe we're not, and plenty more AI revolutions are just around the corner. I think AI's current capabilities are frighteningly good, and not something I expected to happen this soon. The last decade or so has seen massive progress in this area; who's to say where the current path stops?

[–] Telodzrum@lemmy.world 50 points 1 year ago (35 children)

Nah, nah to all of it. LLMs are a parlor trick, and not a very good one. If we are ever able to make a general artificial intelligence, that's an entirely different story. But text prediction on steroids doesn't move the needle.

[–] GnuLinuxDude@lemmy.ml 16 points 1 year ago (3 children)

Sam Altman (creator of the freakish retina-scanning-based Worldcoin) would agree, it seems. The current path for LLMs and GPT seems to be in something of a bind: to seriously improve on what it currently does, it needs to do something different, not more of the same. And figuring out something different could be very hard. https://www.wired.com/story/openai-ceo-sam-altman-the-age-of-giant-ai-models-is-already-over/

At least that's what I understand of it.

[–] TheWiseAlaundo@lemmy.whynotdrs.org 12 points 1 year ago* (last edited 1 year ago) (2 children)

He's not saying "AI is done, there's nothing else to do, we've hit the limit", he's saying "bigger models don't necessarily yield better results like we had initially anticipated"

Sam recently went before Congress and advocated for limiting model sizes as a means of regulation, because, at the time, he believed bigger would generally always mean better outputs. What we're seeing now is that if a model is too large it will have trouble producing truthful output, which is super important to us humans.

And honestly, I don't think anyone should be shocked by this. Our own human brains have different sections that control different aspects of our lives. Why would an AI brain be different?

[–] gregoryw3@lemmy.ml 3 points 1 year ago

The future of AI is definitely going toward a manager/agent model. It allows an AI to handle all kinds of tasks without being limited to one model or method. We're already seeing this with ChatGPT using Mathematica for math questions. Soon we could see art AIs using different models and methods based on text input.
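The manager/agent idea above can be sketched as a simple router: a "manager" classifies each request and hands it to a specialized "agent" (tool). This is a minimal illustrative sketch only; the `route` function, the agent names, and the keyword heuristic are all invented for this example and don't correspond to any real framework's API.

```python
def math_agent(query: str) -> str:
    # Stand-in for a call out to a symbolic math engine.
    return f"[math engine] evaluating: {query}"

def art_agent(query: str) -> str:
    # Stand-in for a call out to an image-generation model.
    return f"[image model] rendering: {query}"

def chat_agent(query: str) -> str:
    # Default: answer with the general-purpose language model.
    return f"[LLM] responding to: {query}"

def route(query: str) -> str:
    """Crude manager: pick an agent by keyword, else fall back to chat."""
    lowered = query.lower()
    if any(w in lowered for w in ("integrate", "solve", "equation")):
        return math_agent(query)
    if any(w in lowered for w in ("draw", "paint", "image")):
        return art_agent(query)
    return chat_agent(query)

print(route("solve x^2 = 4"))
print(route("draw a cat in a spacesuit"))
```

In a real system the manager would itself be a model making the routing decision (e.g. via tool/function calling) rather than a keyword match, but the shape is the same: one coordinator, many specialized backends.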

[–] Browning@lemmings.world 1 points 1 year ago

I gather that this is partly because data sizes haven't been going up with model sizes. That is likely to change soon as synthetic data starts to overtake organic data in both quantity and quality.
