this post was submitted on 20 May 2025
228 points (99.1% liked)

Select Topic Area: Product Feedback

Feature Area: Issues

Body:

I find the following two news items on the front page:

https://github.blog/changelog/2025-05-19-creating-issues-with-copilot-on-github-com-is-in-public-preview/

https://github.blog/changelog/2025-05-19-github-copilot-coding-agent-in-public-preview/

This says to me that GitHub will soon start allowing users to submit issues which they did not write themselves and which were machine-generated. I would consider these issues/PRs to be both a waste of my time and a violation of my projects' code of conduct¹. Filtering out AI-generated issues/PRs will become an additional burden for me as a maintainer, wasting not only my time but also the time of the submitters (who generated "AI" content I will not respond to) and of your servers (which prepared output I will close unread).

As I am not the only person on this website with "AI"-hostile beliefs, the most straightforward way to avoid wasting everyone's effort would be for GitHub to give accounts and repositories a checkbox (or similar setting) that blocks the built-in Copilot tools on designated repos, or on all repos under an account. If we are not granted such a tool and "AI" junk submissions become a problem, I may be forced to take drastic measures, such as closing issues and PRs on my repos entirely and moving issue hosting to sites such as Codeberg that do not have these maintainer-hostile tools built directly into the website.

Note: Because it appears that both issues and PRs written this way are posted by the "copilot" bot, a straightforward way to implement this would be to let users simply block the "copilot" bot. In my testing, however, it appears that you have special-cased "copilot" so that it is exempt from the block feature.

[image]

So you could satisfy my feature request by just not doing that.

¹ I don't at this time have codes of conduct on all my projects, but I will now be adding them for the purpose of barring "AI"-generated submissions.

[–] Bezier@suppo.fi 22 points 9 hours ago (1 children)

An LLM could be useful for explaining to people how they failed to follow simple guidelines, like not including the software version or filing multiple issues in one. However, issues written by AI can fuck right off.

[–] thebestaquaman@lemmy.world 17 points 8 hours ago* (last edited 8 hours ago)

I would absolutely be happy to have a feature where an LLM could read previous issues, the doc pages, and the FAQ/wiki; then you could query it about your issue to (a) see whether it is a legitimate issue, (b) check that the issue you submit contains the needed info, and (c) help you link previous issues/PRs that refer to relevant stuff.
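Point (c) can be illustrated without any LLM at all. The sketch below is hypothetical and not any GitHub or Copilot API: a real triage assistant would use an LLM or embeddings for retrieval, but a plain word-overlap (Jaccard) score over past issue text is enough to show the "suggest related issues" step. The function name and the sample data are made up for illustration.

```python
# Hypothetical sketch of point (c): suggest past issues related to a new
# report. A real assistant would use an LLM or embeddings; here a simple
# Jaccard word-overlap score stands in for the retrieval step.

def suggest_related(new_issue: str, past_issues: dict[str, str], top_n: int = 3) -> list[str]:
    """Rank past issues (id -> text) by word overlap with the new issue."""
    new_words = set(new_issue.lower().split())
    scored = []
    for issue_id, text in past_issues.items():
        words = set(text.lower().split())
        # Jaccard similarity: shared words / all distinct words.
        overlap = len(new_words & words) / max(len(new_words | words), 1)
        scored.append((overlap, issue_id))
    scored.sort(reverse=True)
    return [issue_id for score, issue_id in scored[:top_n] if score > 0]

past = {
    "#101": "crash when opening large file on windows",
    "#102": "feature request dark mode for editor",
    "#103": "crash on startup after update large memory use",
}
print(suggest_related("app crash opening a large file", past))
# → ['#101', '#103']
```

The point of the sketch is that the helpful version of this feature assists the *human* reporter with linking and deduplication before they submit, rather than generating the issue text itself.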

Never in hell do I want an LLM to be generating issues by itself.