this post was submitted on 13 Mar 2024
875 points (99.8% liked)

Programmer Humor


Post funny things about programming here! (Or just rant about your favourite programming language.)

top 49 comments
[–] gregorum@lemm.ee 270 points 8 months ago* (last edited 8 months ago) (2 children)

lmao... when you give an LLM unlimited power and an ill-defined role, it assumes the position of a shitty project manager, of course

[–] stevedidwhat_infosec@infosec.pub 69 points 8 months ago (1 children)

Its learning capabilities are clearly unrivaled.

I kinda feel like GPT is what you'd get if you skipped college and just went with the apprenticeship strategy, but its apprenticeship was with Reddit posts.

Good enough, but every now and then it has some wildly inaccurate shit sprinkled in, just enough to make you question the integrity of the whole thing.

LLMs (unless implemented with general knowledge AI) will never be accurate or more than a novelty toy. It feels close to being I, Robot, but right now it's just an abacus. The future won't be about one model; it'll be about orchestration of models, or the development of model ecosystems, to make a better overall symphony as the product/tool.

[–] brbposting@sh.itjust.works 24 points 8 months ago (2 children)

LLMs (unless implemented with general knowledge AI) will never be accurate or more than a novelty toy.

I see Bing horribly confabulate all the time (and sometimes subsequently gaslight).

Thus I was surprised at last month’s Klarna news:

Wonder what’s going on behind the scenes.

[–] Trainguyrom@reddthat.com 19 points 8 months ago (2 children)

This is the value I see in AI: letting human agents work way faster. An AI which is trained on your previous human-managed tickets and suggests the right queue, status, and response, but still allows the human agents to ultimately approve or rewrite the AI response before sending, would save a mountain of work for any kind of queue work and chat support work. Roughly the shape I mean, as a sketch below.
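
A minimal sketch of that human-in-the-loop flow (`suggest_triage` here is a hypothetical stand-in for the model, not any real API, and certainly not whatever Klarna actually runs):

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    queue: str        # e.g. "billing", "shipping"
    status: str       # e.g. "needs-info", "resolved"
    draft_reply: str

def suggest_triage(ticket_text: str) -> Suggestion:
    """Stand-in for a model trained on your past human-managed tickets."""
    return Suggestion("billing", "needs-info",
                      "Hi! Could you share your order number so we can take a look?")

def send_reply(text: str) -> None:
    print("sent:", text)

def handle(ticket_text: str) -> None:
    s = suggest_triage(ticket_text)
    print(f"suggested queue={s.queue}, status={s.status}\ndraft: {s.draft_reply}")
    action = input("[a]pprove / [e]dit / [r]eject? ").strip().lower()
    if action == "a":
        send_reply(s.draft_reply)             # agent approves the AI draft as-is
    elif action == "e":
        send_reply(input("your rewrite: "))   # agent keeps final say
    # "r": ticket falls back to the ordinary manual queue
```

The point is the model never sends anything on its own; it just saves the agent the typing.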

[–] Kidplayer_666@lemm.ee 12 points 8 months ago

I bet that 75% of support requests are from people who didn't read the FAQ, and if you can get humans out of handling those, it's much better for both sides.

[–] theneverfox@pawb.social 10 points 8 months ago

People just don't get it... LLMs are unreliable, casual, and easily distracted/incepted.

They're also fucking magic.

That's the starting point - those are the traits of the technology. So what is it useful for?

You said drafting basically - and yeah, absolutely. Solid use case.

Here's the biggest one right now, IMO - education. An occasionally unreliable tutor is actually better than a perfect one - it makes you pay attention. Hook it into docs or a search through unstructured comments? It can rephrase for you, dumb it down or just present it casually. It can generate examples, and even tie concepts together thematically

Text generation - this is niche for "proper" usage, but very useful. I'm making a game, and I want an arbitrarily large number of quest chains with dialogue. We're talking every city in the US (for now). I don't need high quality or perfect accuracy - I need to take a procedurally generated quest and fluff it up with some dialogue, something like the sketch below.
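
Roughly this kind of pipeline (a sketch; `llm` is a stand-in for whatever local or hosted model you'd actually call). The procedural generator stays in charge of the facts, and the model only adds flavor:

```python
import random

CITIES = ["Austin", "Boise", "Tampa"]         # would be every city in the US
ITEMS = ["lost ledger", "prototype drone"]

def generate_quest() -> dict:
    """Procedural generation decides the facts of the quest."""
    return {"city": random.choice(CITIES), "item": random.choice(ITEMS)}

def llm(prompt: str) -> str:
    """Stand-in for whatever model you'd actually use."""
    return "placeholder dialogue"

def fluff(quest: dict) -> str:
    # The model only writes flavor text around fixed facts, so occasional
    # weirdness can't break the quest logic itself.
    return llm(
        f"Write two lines of NPC dialogue asking the player to recover "
        f"a {quest['item']} somewhere in {quest['city']}. Keep it casual."
    )

print(fluff(generate_quest()))
```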

Assistants - if you take your news feed or morning brief (or most anything else), they can present the information in a more human way. It can curate, summarize, or even make a feed interactive with conversation. They can even do fantastic transcriptions and pretty good image recognition to handle all sorts of media

There's plenty more, but here's the thing - none of those are particularly economically valuable. Valuable at an individual/human level, but not something people are willing to pay for.

The tech is far from useless... Even in its current state, running on minimal hardware, it can do all sorts of formerly impossible things.

It's just being sold as what they want it to be, not what it is

[–] JCreazy@midwest.social 7 points 8 months ago (1 children)

If the AI works, then fantastic. It's inevitable that it's going to get used by companies, but the issue is companies using it without understanding what it does or what it's capable of doing.

[–] brbposting@sh.itjust.works 5 points 8 months ago

without understanding

Or without caring, too, eh?

[–] blazeknave@lemmy.world 14 points 8 months ago

I laughed my ass off at this! So well put!

Today, my last 3 messages to Gemini were all pretty much: "cool! We're agreed on the framework and tone etc in which you'll communicate this thing to me. Now please, create the fucking thing already"

[–] Rentlar@lemmy.ca 216 points 8 months ago (1 children)

Oh! I can do better than the LLM to write code:

// TODO (Linus Torvalds) write this app for me, kthxbye.
[–] nxdefiant@startrek.website 135 points 8 months ago (2 children)
[–] JoYo@lemmy.ml 39 points 8 months ago* (last edited 8 months ago)

@torvalds@social.kernel.org has a fedi account.

[–] ikidd@lemmy.world 14 points 8 months ago

Well, nVidia just got told.

[–] CaptDust@sh.itjust.works 95 points 8 months ago (1 children)

Copilot isn't wrong, they are the best person for the task. 🤷

[–] ekky@sopuli.xyz 66 points 8 months ago (1 children)

AI: "Can I copy your work?"

Phil: "Just don't make it obvious."

AI:

[–] gregorum@lemm.ee 11 points 8 months ago

Oh, great. Now it’s able to copy being lazy and confused, too. Fuck!

[–] toastal@lemmy.ml 70 points 8 months ago (1 children)
[–] Sibbo@sopuli.xyz 14 points 8 months ago

Oh nice! Does this exist for the EU as well?

[–] glimse@lemmy.world 54 points 8 months ago (3 children)

Can someone explain why April is nervous about having the username April? I don't get it

[–] bdonvr@thelemmy.club 78 points 8 months ago (1 children)

Other people reading TODO(April) will probably assume that feature is slated for April, the month.

[–] glimse@lemmy.world 9 points 8 months ago

Thank you, that makes sense.

[–] PM_Your_Nudes_Please@lemmy.world 40 points 8 months ago

Because if you didn’t know better, someone seeing “TODO(April)” would probably assume it means “do this sometime in April.” Especially since we’re in the middle of March, with April just around the corner. She’s probably about to get e-mail bombed by git requests.

[–] shootwhatsmyname@lemm.ee 11 points 8 months ago (1 children)

Apathetically Program Ruthless International Launches

might start a war.

[–] glimse@lemmy.world 4 points 8 months ago

Oh so it's like C.O.O.K.S.!

[–] JoYo@lemmy.ml 35 points 8 months ago (2 children)

lol, did you post Xcrement of a post from Bluesky?

[–] Blisterexe@lemmy.zip 42 points 8 months ago (1 children)

I'm so sad that people are going to Bluesky instead of Mastodon.

[–] FoD@startrek.website 9 points 8 months ago* (last edited 8 months ago) (1 children)

I really tried, a few times, and I just can't make it exciting. I find it so boring to search for people and tags I want to follow. That said, I wasn't a huge Twitter user before, and I don't have Bluesky. I'm just hoping one day Mastodon clicks with me.

[–] Obi@sopuli.xyz 20 points 8 months ago

You're like me: you just don't like user-based sites. I simply much prefer to follow topics rather than people. I fucking hate people, why would I follow them online?

[–] Templa@beehaw.org -1 points 8 months ago (1 children)

"Why are people not using Mastodon?"

Mastodon Users: "lololol you are posting excrement from an inferior platform"

You could just... Not engage with posts you don't like, y'know?

[–] JoYo@lemmy.ml 1 points 8 months ago

The fuck? I didn't mention Mastodon at all. Please never join Mastodon.

[–] mesamunefire@lemmy.world 27 points 8 months ago (1 children)

So how exactly does this work?

[–] Sibbo@sopuli.xyz 83 points 8 months ago (1 children)

Copilot is just an LLM trained on all GitHub code. Hence it gives you random stuff from some open source code bases.

[–] gibmiser@lemmy.world 56 points 8 months ago (1 children)

Not random, the most used stuff. It means this man is either very productive, or else he is constantly having to be told to do his job.

[–] Sibbo@sopuli.xyz 35 points 8 months ago

It is enough if his name appears just once. Modern language models take more than just the last few characters as context.

[–] alexdeathway@programming.dev 14 points 8 months ago (3 children)

Does anybody mind explaining, how this might have happened?

[–] manny_stillwagon@mander.xyz 71 points 8 months ago (2 children)

Copilot is an LLM. So it's just predicting what should come next, word by word, based off the data it's been fed. It has no concept of whether or not its answer makes sense.

So if you've scraped a bunch of open source GitHub projects that this guy has worked on, he probably has a lot of TODOs assigned to him in various projects. When Copilot sees you typing "TODO(" it tries to predict what the next thing you're going to type is. And a common thing to follow "TODO(" in its data set is this guy's username, so it goes ahead and suggests it, whether or not the guy is actually on the project or suggesting him would make any sort of sense.
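
You can get the intuition from a toy frequency model (this isn't how Copilot works internally, just the "most common continuation wins" idea):

```python
from collections import Counter

# Toy "training data": the kind of lines scraped from open source repos.
corpus = [
    "TODO(phil) fix the flaky test",
    "TODO(phil) remove this hack",
    "TODO(phil) handle the error case",
    "TODO(april) update the docs",
]

def suggest_completion(prefix: str) -> str:
    """Return the most frequent continuation of `prefix` in the data."""
    seen = Counter(
        line[len(prefix):].split(")")[0]
        for line in corpus
        if line.startswith(prefix)
    )
    return seen.most_common(1)[0][0]

print(suggest_completion("TODO("))  # -> "phil", no matter who's on *your* project
```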

[–] pearsaltchocolatebar@discuss.online 8 points 8 months ago (3 children)

You can absolutely add constraints to control for hallucinations. Copilot apparently doesn't have enough, though.
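
For example, a guard as simple as checking suggested assignees against the people actually on the repo would catch this one (a hypothetical filter, not something Copilot is known to do):

```python
import re

def constrain_todo(suggestion: str, project_members: set[str]) -> str:
    """Drop completions that name someone who isn't on this project."""
    m = re.match(r"TODO\((\w+)\)", suggestion)
    if m and m.group(1).lower() not in project_members:
        return "TODO: "   # fall back to a neutral completion
    return suggestion

print(constrain_todo("TODO(phil) fix this", {"april", "sam"}))
# -> "TODO: "  (phil isn't a member, so the hallucinated assignee is dropped)
```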

[–] kadu@lemmy.world 42 points 8 months ago (1 children)

If GitHub Copilot is anything like Windows Copilot, I can't say I'm surprised.

"Please minimize all my windows"

"Windows are glass panes invented by Michael Jackson in imperial China, during the invasion of the southern sea. Sources 1 2 3"

[–] Darkassassin07@lemmy.ca 20 points 8 months ago

Lmao. That's even better when you consider the Copilot button replaced the 'show desktop' (i.e. 'minimize all my windows') button.

[–] shootwhatsmyname@lemm.ee 16 points 8 months ago

My guess is that Copilot was using a ton of other lines as context, so in that specific case his name was a more likely match for the next characters

[–] jherazob@beehaw.org 1 points 8 months ago

No matter how many constraints you add, it's never enough. That's the weakness of a model that only knows language and nothing else.

[–] alexdeathway@programming.dev 2 points 8 months ago

I thought it synced some requests and assigned projects to another user. (I saw an ad about GitHub Copilot managing issues and writing PR descriptions some time ago.)

[–] dojan@lemmy.world 16 points 8 months ago* (last edited 8 months ago) (1 children)

It’s no different from GPT knowing the plot of Aliens or who played the main role in Matilda.

It's seen enough code to recognise the pattern: it knows an author name goes in there, and Phil Nash is likely a prolific enough author that it just plopped his name in there. It's not intelligence, just patterns.

[–] planish@sh.itjust.works 5 points 8 months ago

"Yeah this sounds like a Phil Nash sort of problem, I'll just stick him in here."

[–] ilinamorato@lemmy.world 15 points 8 months ago

The other answers are great, but if I were to be a bit more laconic:

Copilot is spicy autocorrect. It autocorrected that todo to insert that guy's name because he gets a lot of todos.

[–] TrickDacy@lemmy.world 1 points 8 months ago (1 children)

I use Copilot every day and I have zero clue what this means, but I assume it has nothing to do with Copilot...?

[–] whatsisface@sh.itjust.works 53 points 8 months ago (1 children)

Copilot is regurgitating the code he wrote, complete with comments addressed to himself.

[–] TrickDacy@lemmy.world 0 points 8 months ago (1 children)
[–] Adanisi@lemmy.zip 10 points 8 months ago* (last edited 8 months ago)

This is exactly what it does. It steals code and does not abide by its license.

Often it also removes or changes the license/attribution.