this post was submitted on 22 May 2024
346 points (91.4% liked)

Technology

[–] essteeyou@lemmy.world 1 points 6 months ago (1 children)

It can solve existing problems in new ways, which might be handy.

[–] funkless_eck@sh.itjust.works 4 points 6 months ago (1 children)

"can"

"might"

Sure. But, like I said, those are subject to a lot of caveats: humans have to set the experiments up and ask the right questions to get those answers.

[–] essteeyou@lemmy.world 1 points 6 months ago (2 children)

That's how it currently is, but I'd be astounded if it didn't progress quickly from now.

[–] funkless_eck@sh.itjust.works 1 points 6 months ago* (last edited 6 months ago) (1 children)

I would be extremely surprised if, before 2100, we see AI that has no human operator and no data-scientist team, even at a third-party distributor - and where those claims are neither a lie nor a weaselly marketing stunt ("technically the operators are contractors and not employed by the company," etc.).

We invented the printing press 584 years ago, and it still requires a team of human operators.

[–] essteeyou@lemmy.world 1 points 6 months ago (1 children)

A printing press is not a technology with intelligence. It's like saying we still have to manually operate knives... of course we do.

[–] funkless_eck@sh.itjust.works 0 points 6 months ago* (last edited 6 months ago) (1 children)

The comment I originally replied to claimed AI will design the autonomous machines.

It will not. It will facilitate some of the research done by humans to aid in designing machinery that is deliberately operated by humans.

To my knowledge, the only autonomous machine that exists is a Roomba, which moves blindly around until it physically strikes an object, rotates by a random angle, and continues in the new direction until it hits something else (a bump-and-turn loop, sketched below).

Even then, it is controlled with an app and, on more expensive models, some boundary setting.

It is extremely generous to call that "autonomy."
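
As a rough illustration of that bump-and-turn behavior, here is a minimal Python sketch. The room size, step length, and re-orientation rule are assumptions for illustration, not how any real Roomba is implemented.

```python
import math
import random

# Minimal bump-and-turn simulation: drive straight until hitting a wall,
# then pick a random new direction and keep going. Illustrative only.

WIDTH, HEIGHT = 10.0, 8.0   # room dimensions (assumed)
STEP = 0.1                  # distance moved per tick (assumed)

def inside(x, y):
    """True if the point is still inside the room."""
    return 0.0 <= x <= WIDTH and 0.0 <= y <= HEIGHT

def simulate(ticks=1000, seed=42):
    random.seed(seed)
    x, y = WIDTH / 2, HEIGHT / 2               # start in the middle of the room
    heading = random.uniform(0, 2 * math.pi)
    path = [(x, y)]
    for _ in range(ticks):
        nx = x + STEP * math.cos(heading)
        ny = y + STEP * math.sin(heading)
        if inside(nx, ny):
            x, y = nx, ny                      # keep driving straight
        else:
            heading = random.uniform(0, 2 * math.pi)  # "bump": random new direction
        path.append((x, y))
    return path

if __name__ == "__main__":
    path = simulate()
    x_end, y_end = path[-1]
    print(f"simulated {len(path) - 1} ticks, ended at ({x_end:.2f}, {y_end:.2f})")
```

The point being: no map and no model of the room, just collisions and random re-orientation.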

[–] essteeyou@lemmy.world 1 points 6 months ago (1 children)

I was in a self-driving taxi yesterday. It didn't need to bump into things to figure out where it was.

[–] funkless_eck@sh.itjust.works 1 points 6 months ago

Fair - I thought they all got recalled, but I guess they're back. But I'd also counter that Waymo is extremely limited in where it can operate - roughly 10 miles max - which, relevant to my original point, was entirely hand-mapped and calibrated by human operators, and the rides are monitored and directed by a control center responding in real time to the car's feedback.

Like my printing press example, it still takes a large human team to operate the "self"-driving car.

[–] FiniteBanjo@lemmy.today 1 points 6 months ago (1 children)

OpenAI themselves have made it very clear that scaling up their models has diminishing returns and that they're incapable of moving forward without entirely new models being invented by humans. A short while ago they proclaimed that they could possibly make an AGI if they got several trillion USD in investment.

[–] essteeyou@lemmy.world 1 points 6 months ago (1 children)

Five years ago, I don't think most people thought ChatGPT, or Stable Diffusion/Midjourney/etc., was possible.

We're in an era of insane technological advancement, and I don't think it'll slow down.

[–] FiniteBanjo@lemmy.today 2 points 6 months ago* (last edited 6 months ago) (2 children)

Okay, but the people who made the advancements are telling you it has already slowed down. Why don't you understand that? A flawed chatbot and some art-theft machines that can't draw hands aren't exactly world-changing either, tbh.

[–] essteeyou@lemmy.world 0 points 6 months ago

There are other people in the world. Some of them are inventing completely new ways of doing things, and one of those ways could lead to a major breakthrough. I'm not saying a GPT LLM is going to solve the problem; I'm saying AI will.

[–] lanolinoil@lemmy.world -1 points 6 months ago (1 children)

This is such a rich-country-centric view that I can't stand it. LLMs have already given the world maybe its greatest gift ever -- access to a teacher.

Think of the 800 million poor children in the world and their access to a Khan Academy-level teacher on any subject imaginable, with a cellphone or computer as all they need. How could that not have value, and is pearl-clutching about drawing skills becoming devalued really all you can think about?

[–] FiniteBanjo@lemmy.today 1 points 6 months ago* (last edited 6 months ago) (1 children)

Anything you learn from an LLM has a margin of error that makes it dangerous and harmful. It hallucinates documentation and fake facts like an asylum inmate. And it's so expensive compared to just having real teachers that it's all pointless. We've got humans; we don't need more humans. Adding labor doesn't solve the problem with education.

[–] lanolinoil@lemmy.world 0 points 6 months ago (1 children)

Bro, I was taught by a textbook in the US in the '00s that the Statue of Liberty was painted green.

No math teacher I ever had actually knew the level of math they were teaching.

Humans hallucinate all the time. Almost 1 billion children don't even have access to a human teacher - hence the boon to humanity.

[–] FiniteBanjo@lemmy.today 2 points 6 months ago* (last edited 6 months ago) (1 children)

Those textbooks and the people who regurgitate their contents are the training data for the LLM. Any statement you make about human incompetence is multiplied by an LLM. If they don't have access to a human teacher, then they probably don't have PCs and AI subscriptions either.

[–] lanolinoil@lemmy.world 0 points 6 months ago

Yeah, but there's that statistics thing where, as N increases, the alpha/beta error (false positives and false negatives) goes away.
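
As a rough sketch of the statistical idea being gestured at here: under standard assumptions (a real effect of fixed size and a fixed significance level alpha), the Type II error rate beta shrinks as the sample size N grows. The effect size, alpha, and sample sizes below are illustrative assumptions, written in Python using only the standard library.

```python
import math
import random

# Illustrative simulation: with a real effect present (true mean = 0.2, sigma = 1),
# estimate the Type II error rate (beta) of a one-sided z-test at alpha = 0.05
# for increasing sample sizes N. All constants are assumptions for illustration.

TRUE_MEAN = 0.2
SIGMA = 1.0
Z_CRIT = 1.645          # one-sided critical value for alpha = 0.05
TRIALS = 2000           # simulated experiments per sample size

def beta_estimate(n, rng):
    """Fraction of experiments that fail to detect the real effect at size n."""
    misses = 0
    for _ in range(TRIALS):
        sample_mean = sum(rng.gauss(TRUE_MEAN, SIGMA) for _ in range(n)) / n
        z = sample_mean / (SIGMA / math.sqrt(n))   # test H0: mean = 0
        if z <= Z_CRIT:                            # failed to reject a false H0
            misses += 1
    return misses / TRIALS

if __name__ == "__main__":
    rng = random.Random(0)
    for n in (10, 50, 200, 800):
        print(f"N = {n:4d}  estimated beta = {beta_estimate(n, rng):.3f}")
```

Under these assumptions, beta falls toward zero as N grows, while alpha stays fixed at 0.05 by construction.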