this post was submitted on 15 Jun 2025
328 points (98.2% liked)

World News


Guardian investigation finds almost 7,000 proven cases of cheating – and experts say these are the tip of the iceberg

Thousands of university students in the UK have been caught misusing ChatGPT and other artificial intelligence tools in recent years, while traditional forms of plagiarism show a marked decline, a Guardian investigation can reveal.

A survey of academic integrity violations found almost 7,000 proven cases of cheating using AI tools in 2023-24, equivalent to 5.1 for every 1,000 students. That was up from 1.6 cases per 1,000 in 2022-23.

Figures up to May suggest that number will increase again this year to about 7.5 proven cases per 1,000 students – but recorded cases represent only the tip of the iceberg, according to experts.

The data highlights a rapidly evolving challenge for universities: trying to adapt assessment methods to the advent of technologies such as ChatGPT and other AI-powered writing tools.

[–] confusedwiseman@lemmy.dbzer0.com 13 points 1 day ago (4 children)

In some regard I don’t think it should be considered cheating. Don’t beat me up yet, I’m old and think AI sucks at most things.

AI typically outputs crap. So why does this use of a new and widely available tech get called out differently?

Using Google (in the don’t be evil timeframe) wasn’t cheating when open book was permitted. Using the text book was cheating on a closed book test. In some cases using a calculator was cheating.

Is it cheating if you write a paper completely on your own and use spell check and grammar check within word? What if a grammarly type extension is used? It’s a slippery slope that advances with technology.

I remember testing and assignments that were designed to make it harder to cheat, show your work, for math type approaches. Quizzes and short essays that make demonstration of the subject matter necessary.

Why doesn’t the education environment adapt to this? For writing assignments, maybe they need to be submitted with revision history so the teacher can see it wasn’t all done in one go via an LLM.

The quick answer responses are somewhat like using Wikipedia for a school paper. Don't cite Wikipedia and don't use the generated text for anything but a base understanding of the topic. Now go use all the sources it provided to actually do the assignment.

[–] rescue_toaster@lemmy.zip 32 points 1 day ago (2 children)

ChatGPT output isn't crap anymore. I teach introductory physics at a university and require fully written out homework, showing math steps, to problems that I've written. I wrote my own homework many years ago when Chegg blew up and all major textbook problems were on Chegg.

Just two years ago, chatgpt wasn't so great at intro physics and math. It's pretty good now, and shows all the necessary steps to get the correct answer.

I do not grade my homework on correctness. Students only need to show me effort that they honestly attempted each problem for full credit. But it's way quicker for students to simply upload my homework pdf to chatgpt and copy down the output than give it their own attempt.

Of course, doing this results in poor exam performance. Anecdotally, my exam scores from my recent fall semester were the lowest they've ever been. I put two problems on my final that came directly from my homework, one of them being the problem that made me realize roughly 75% of my class was chatgpt'ing all the homework, as chatgpt isn't super great at reading angles from figures, and it's like these students had never even seen a problem like it before.

I'm not completely against the use of AI for my homework. It could be like a tutor that students ask questions to when stuck. But unfortunately that takes more effort than simply typing "solve problems 1 through 5, showing all steps, from this document" into chatgpt.

[–] confusedwiseman@lemmy.dbzer0.com 5 points 1 day ago (1 children)

This is very insightful and provides good perspective.

If I boil it down, the takeaway is that GPT is enough to get through the fundamentals of student material, so students can fake competence in the subject right up to the cliff they fall off at the test.
This ultimately isn't preparing them for the world. It's nearly impossible to catch until it's too late. The pass or fail options aren't helping because neither really represents the student's best interests.

The call to ban it in schools is the only lever we can grasp, because every other KNOWN option has been tried or assessed.

[–] rescue_toaster@lemmy.zip 2 points 22 hours ago

Yeah, that fake competence is a big thing. Physics Education Research has become a big field and while I don't follow it too closely, that seems to be a recurring theme - students think they are learning the material with such reliance on AI.

I intend to read a bit more of this over the summer and try to dedicate a bit of the first day or two next semester to addressing how this usage of chatgpt hurts their education. I teach a lot of engineering students, which already has around an 80% attrition rate, i.e. of 200 freshmen, only 40 graduate with an engineering degree. Probably won't change behavior at all, but I gotta try something.

[–] avidamoeba@lemmy.ca 3 points 1 day ago (1 children)

Horrifying. Is there any obvious solution being discussed in your circles?

[–] Warl0k3@lemmy.world 5 points 23 hours ago* (last edited 23 hours ago) (1 children)

Not them but also an instructor - where I teach, we're having to pivot sharply towards grades being based mostly on performance in labs and in-person quiz/test results. It's really unfortunate since there are many students with test anxiety, and labs are really exhausting to turn into evaluation instead of instruction, but it's the only workable solution we've been able to figure out.

[–] rescue_toaster@lemmy.zip 4 points 23 hours ago

Yep, same here. I liked having homework be a significant portion of the grade. But with the prevalence of chatgpt, I am reducing that portion of the grade and increasing the in-class exam weight.

[–] ech@lemm.ee 11 points 1 day ago

It's absolutely cheating - it's plagiarism. It's no different in that regard than copying a paper found online, or having someone else write the paper for you. It's also a major self-own - these students have likely one opportunity to better themselves through higher education, and are trashing that opportunity with this shit.

I do agree that institutions need to adapt. Edit history is an interesting idea, though probably easy to work around. Imo, direct teacher-student interfacing would be the most foolproof, but also incredibly taxing on time and effort for teachers. It would necessitate pretty substantial changes to current practices.

[–] meliante@lemm.ee 5 points 1 day ago

I agree. This is a paradigm shift; it won't get erased out of use.

[–] raspberriesareyummy@lemmy.world 0 points 23 hours ago

Repeat after me: it's NOT FUCKING A FUCKING I when it is LLMs or other machine learning snakeoil.