Select Topic Area

Product Feedback

Feature Area

Issues

Body
I see the following news item on the front page:
https://github.blog/changelog/2025-05-19-github-copilot-coding-agent-in-public-preview/
This says to me that GitHub will soon start allowing users to submit issues which they did not write themselves and which were machine-generated. I would consider these issues/PRs to be both a waste of my time and a violation of my projects' code of conduct¹. Filtering out AI-generated issues/PRs will become an additional burden on me as a maintainer, wasting not only my time, but also the time of the submitters (who generated "AI" content I will not respond to) and the time of your servers (which had to generate a submission I will close without reply).
As I am not the only person on this website with "AI"-hostile beliefs, the most straightforward way to avoid wasting everyone's effort would be for GitHub to give accounts/repositories a checkbox or similar setting blocking use of the built-in Copilot tools on designated repos, or on all repos on the account. If we are not granted such controls, and "AI" junk submissions become a problem, I may be forced to take drastic actions such as closing issues and PRs on my repos entirely and moving issue hosting to sites such as Codeberg which do not have these maintainer-hostile tools built directly into the website.
Note: Because it appears that both issues and PRs written this way are posted by the "copilot" bot, a straightforward way to implement this would be to let users simply block the "copilot" bot. In my testing, however, it appears that you have special-cased "copilot" so that it is exempt from the block feature (a concrete sketch of the request I mean follows below):
[Screenshot: attempting to block the "copilot" bot via the block feature]
So you could satisfy my feature request by just not doing that.
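For concreteness, here is roughly what that block would look like. The "Block a user" endpoint (`PUT /user/blocks/{username}`) is GitHub's real, documented REST API; the login `copilot`, the token scope, and the outcome described in the comments are my assumptions based on the testing above.

```python
# Minimal sketch, assuming the bot's login is literally "copilot".
# The endpoint is GitHub's documented "Block a user" REST API
# (PUT /user/blocks/{username}); the failure mode in the comments
# reflects my testing above, not documented behavior.
import requests

TOKEN = "ghp_..."  # placeholder: a personal access token with the `user` scope

resp = requests.put(
    "https://api.github.com/user/blocks/copilot",
    headers={
        "Accept": "application/vnd.github+json",
        "Authorization": f"Bearer {TOKEN}",
        "X-GitHub-Api-Version": "2022-11-28",
    },
)

# 204 would mean the block was recorded. In my testing the bot appears
# to be exempt, which is exactly the special-casing I am asking you to remove.
print(resp.status_code)
```

If the special-casing were removed, this one request (or the equivalent UI button) would be the entire feature.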
¹ I don't at this time have codes of conduct on all my projects, but I will now be adding them for the purpose of barring "AI"-generated submissions.
I would absolutely be happy to have a feature where an LLM could read previous issues, the doc pages, and the FAQ/wiki, and then you could query it regarding your issue to (a) see if it is a legitimate issue, (b) check that the issue you submit contains the information maintainers need, and (c) help you link previous issues/PRs referring to relevant material (a rough sketch of what I mean follows below).
Never in hell do I want an LLM to be generating issues by itself.
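To be concrete about the assistive version I would welcome, here is a minimal sketch. Everything in it is hypothetical: `Doc`, `search_similar`, and `triage_prompt` are illustrative names, and the keyword-overlap retrieval is a deliberately naive stand-in for whatever GitHub would actually use. The point is the shape of the tool: the human submitter stays in the loop, and nothing is filed automatically.

```python
# Hypothetical sketch of the assistive triage feature described above.
# Nothing here is a real GitHub API; retrieval is naive keyword overlap
# standing in for whatever a production system would actually use.
from dataclasses import dataclass

@dataclass
class Doc:
    """An existing issue, PR, doc page, or FAQ/wiki entry."""
    title: str
    body: str
    url: str

def search_similar(draft: str, corpus: list[Doc], k: int = 3) -> list[Doc]:
    """Rank existing material by keyword overlap with the draft issue."""
    words = set(draft.lower().split())
    return sorted(
        corpus,
        key=lambda d: len(words & set(f"{d.title} {d.body}".lower().split())),
        reverse=True,
    )[:k]

def triage_prompt(draft: str, related: list[Doc]) -> str:
    """Build the question the *submitter* asks the model: is this issue
    legitimate, is anything missing, and which existing threads relate?"""
    context = "\n".join(f"- {d.title} ({d.url})" for d in related)
    return (
        f"Possibly related existing issues/docs:\n{context}\n\n"
        f"Draft issue:\n{draft}\n\n"
        "Is this likely a duplicate? What information is missing? "
        "Which of the items above should be linked?"
    )

# The model's answer goes back to the human, who edits and submits (or
# doesn't). At no point does the model open an issue or PR by itself.
```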