this post was submitted on 13 May 2025

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

[–] vga@sopuli.xyz 1 points 18 hours ago* (last edited 18 hours ago) (6 children)

So how do you tell apart AI contributions to open source from human ones?

[–] Irelephant@lemm.ee 5 points 7 hours ago
  1. see if the code runs
[–] froztbyte@awful.systems 3 points 7 hours ago

for anyone who finds this thread in the future: "check whether vga@sopuli.xyz contributed to this codebase" is an easy hack for this test

[–] V0ldek@awful.systems 12 points 13 hours ago

It's usually easy: just check whether the code is nonsense

[–] Architeuthis@awful.systems 24 points 17 hours ago* (last edited 17 hours ago) (1 children)

To get a bit meta for a minute, you don't really need to.

The first time an LLM makes a substantial contribution to a serious issue in an important FOSS project, unassisted and without caveats, the PR people of the company that trained it will make absolutely sure everyone and their fairy godmother knows about it.

Until then, it's probably OK to treat claims that chatbots can handle a significant bulk of non-boilerplate coding tasks in enterprise projects by themselves the same way you'd treat claims of haunted houses: you don't need to debunk every separate witness testimony. It's self-evident that a world with an afterlife that freely intertwines with daily reality would be notably and extensively different from the one we're currently living in.

[–] self@awful.systems 15 points 17 hours ago (1 children)

if it’s undisclosed, it’s obvious from the universally terrible quality of the code, which wastes volunteer reviewers’ time in a way that legitimate contributions almost never do. the “contributors” who lean on LLMs also can’t answer questions about the code they didn’t write or help steer the review process, so that’s a dead giveaway too.

[–] kuberoot@discuss.tchncs.de 12 points 18 hours ago (1 children)

GitHub, for one, colors the icon red for AI contributions and green/purple for human ones.