this post was submitted on 10 Apr 2024

TechTakes

[–] self@awful.systems 26 points 7 months ago (5 children)

Speaking at an event in London on Tuesday, Meta’s chief AI scientist Yann LeCun said that current AI systems “produce one word after the other really without thinking and planning”.

Because they struggle to deal with complex questions or retain information for a long period, they still “make stupid mistakes”, he said.

Adding reasoning would mean that an AI model “searches over possible answers”, “plans the sequence of actions” and builds a “mental model of what the effect of [its] actions are going to be”, he said.

wait, you mean the same models that supposed AI researchers were swearing had “glimmerings of intelligent reasoning” and “a complex world model” really were just outputting the most likely next word for a prompt? the current models are just fancy autocomplete but now that there’s a new product to sell, that one will be the real thing? and of course, the new models are getting pre-announced as revolutionary as interest in this horseshit in general takes a nosedive.

LeCun said Meta was working on AI “agents” that could, for instance, plan and book each step of a journey, from someone’s office in Paris to another in New York, including getting to the airport.

these must be the multi-agent models that AI fans won’t shut the fuck up about now that multi-modal LLMs are here and disappointing. is it just me or does the use case for this sound fucking stupid? like, there’s apps that do this already. this shit was solved already by application of the least-terrible surviving algorithms from the first AI boom. what the fuck is the point of re-solving travel planning, but now incredibly expensive and you can’t trust the results?
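[The “least-terrible surviving algorithms from the first AI boom” alluded to here are classical graph search. A minimal sketch using Dijkstra's algorithm on a toy Paris-to-New-York itinerary; all node names and edge weights below are invented for illustration:]

```python
import heapq

def dijkstra(graph, start, goal):
    """Classic shortest-path search: always expand the cheapest frontier node."""
    frontier = [(0, start, [start])]  # (cost so far, node, path taken)
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, step_cost in graph.get(node, []):
            if neighbor not in seen:
                heapq.heappush(frontier, (cost + step_cost, neighbor, path + [neighbor]))
    return None

# Hypothetical travel graph; edge weights are minutes per leg of the trip.
graph = {
    "office_paris": [("cdg_airport", 45)],
    "cdg_airport": [("jfk_airport", 480)],
    "jfk_airport": [("office_ny", 60)],
}
print(dijkstra(graph, "office_paris", "office_ny"))
# → (585, ['office_paris', 'cdg_airport', 'jfk_airport', 'office_ny'])
```

[Deterministic, cheap to run, and the answer is verifiably optimal for the given graph, which is the commenter's point.]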

[–] sailor_sega_saturn@awful.systems 19 points 7 months ago (1 children)

Ah yes, "getting to the airport", one of the great unsolved challenges in computing.

[–] self@awful.systems 15 points 7 months ago

in order to solve the Traveling Salesman Problem, the first step is to use a machine model to confirm the user isn’t a salesman

[–] blakestacey@awful.systems 19 points 7 months ago

So, if anyone is keeping score, the promise of Artificial Intelligence has descended from "the computers on Star Trek" to "spicy ticket-booking".

[–] froztbyte@awful.systems 17 points 7 months ago* (last edited 7 months ago) (2 children)

the thing that bothers me about that LeCun statement is that it's another of those not-even-wrong fuckers with an implicit assumption: that the problem is not that it doesn't have intelligence, just that the intelligence isn't very advanced yet - "oh yeah it just didn't think ahead! that's why foot in mouth! it's like your drunk friend at a party!"

which, y'know, is not the case. but they all fucking speak with that implicit foundation, as though the intelligence is proven fact instead of total suggestion (I wanted to say "conjecture", but that isn't the right word either)

these must be the multi-agent models that AI fans won’t shut the fuck up about now that multi-modal LLMs are here and disappointing. is it just me or does the use case for this sound fucking stupid?

it's also the pitch I keep seeing from a number of places, including that rabbit or whatever the fuck thing? and, frankly, can we not? these goddamn things can barely parse sentences and keep context, and someone wants to tell me that a model use is for it to plan my travel? with visas and flight times and transfers? never mind all the extra implications of accounting for real-world issues (e.g. political sensitivity), preferences in sight-seeing, data privacy considerations (visiting friends)…

like it's just a gigantic fucking katamari ball of nope

[–] carlitoscohones@awful.systems 13 points 7 months ago

someone wants to tell me that a model use is for it to plan my travel?

I don't think any of these people have ever traveled. Honestly, I used to work for a company where the corporate travel people mostly lived in a small village in Germany, and their recommendations could be insane sometimes, but at least they knew what being a human was like.

[–] dgerard@awful.systems 6 points 7 months ago* (last edited 7 months ago) (1 children)

bro, bro. i'm not going to answer your question about the obvious and glaring problems, but here read these three preprints that are very exciting about the possibilities!!! no i can't just explain in my own words what they say. but if you cannot refute the mathematics (you can tell it's real maths because it's got squiggly symbols in it) then you must acquit

[–] self@awful.systems 4 points 7 months ago

if you cannot refute, you must not compute

[–] V0ldek@awful.systems 13 points 7 months ago* (last edited 7 months ago)

This is yet another example of the people calling the shots here being completely detached from the reality of an average person and bereft of imagination.

Surely the plebs would all want to have an underpaid secretary that plans your private jet trips for you so that you don't have to interact with anyone. It's the dream! I can't imagine a life without that, surely they need it too!