AI research is going great. Researchers leave instructions in their papers to any LLM giving a review, telling them to only talk about the positives. These instructions are hidden using white text or a very small font. The point is that this exploits any human reviewer who decides to punt their job to ChatGPT.
My personal opinion is that ML research has become an extreme form of the publish or perish game. The most prestigious conference in ML (NeurIPS) accepted a whopping 4497 papers in 2024. But this is still very competitive, considering there were over 17000 submissions that year. The game for most ML researchers is to get as many publications as possible in these prestigious conferences in order to snag a high paying industry job.
Normally, you'd expect the process of reviewing a scientific paper to be careful, with editors assigning papers to people who are the most qualified to review them. However, with ML being such a swollen field, this isn't really practical. Instead, anyone who submits a paper is also required to review other people's submissions. You can imagine the conflicts of interest that can occur (and lazy reviewers who just make ChatGPT do it).