I read about a teacher who had her students use ChatGPT to write an essay, and then review it and highlight the inaccuracies.
Professors
Attribution statement: I have stolen this text from the Professors subreddit with the hopes of providing an alternate community on Lemmy for us.
This community is BY professors FOR professors. Whether you are tenured, tenure-stream, a lecturer, adjunct faculty, or grad TA, if you are instructional faculty or work with college students in a similar capacity, this forum is for you to talk with colleagues. This community is not for students. While students may lurk and occasionally comment, they should identify themselves as students, and comments are subject to removal at mods’ discretion.
SYLLABUS
This community is a place for professors to BS with each other, share professional concerns, get advice and encouragement, vent (oh yes, especially that), and share memes. It has previously been described as “kind of a 'teacher's lounge' for college professors.” This community is not for non-professors to ask questions of professors or about The Life™; it is for professors to ask each other questions.
As such, we ask all posters to abide by the following rules:
- No student posts/comments: This is a place for those teaching at the college level to discuss and share. While some student posts or comments may sneak by, and Mods may allow a richly upvoted post or comment that has spawned useful discussion to remain, that is the exception, NOT the rule.
- Don't Be Inappropriate: No weird sexual fantasy stuff, no confessions of crushes, no questions about dating or anything of that nature. Any posts of this type will most likely be removed without question, explanation, or hesitation.
- No Incivility: No personal attacks, racism, or any other diatribes against students, or each other, that cross the line of civility. For that matter, attacks IN GENERAL are not tolerated. Disagree, challenge, vent, express frustration, but don’t cross that line. Attacks, hostility, or inappropriate conduct/content of any kind may result in a ban (temporary or permanent) at the Mods’ discretion.
- No "How do I become a professor?": Go to the website of the school you want to teach at. Look at the job listings. If the position you want is available, look at the qualifications. If you don't have those qualifications, get them. Apply for the job. That's it.
- No Spam/Surveys: No spam, no external surveys. We are not here to be marketed to; we're a bunch of academics who are here to goof off, vent, get advice, and share stories from the podium. Using the poll function in a post is, however, acceptable to let users weigh in on how they feel about an issue. For IRB-approved surveys, you can message the Mods with a pitch and we will consider allowing it.
- No Bigotry: Racism, bigotry, sexism, homophobia, or any other similarly despicable behavior will get your comment(s)/post(s) removed and you muted or banned. We will try not to penalize politically challenging speech (we mods are only human, after all), but it is essential that it be delivered thoughtfully and with circumspection. Low-effort sloganeering and hashtag-mentality posting will be removed; offensive content will result in a mute or ban. You will not always agree with the mods’ decisions in this regard, but it is the price we pay to have this little corner of cyberspace to ourselves.
I’ve heard of that one too. It seems like a good idea to try to show the pitfalls.
Where I’m at it’s a circus. Some professors encourage use, while others have banned it entirely. Because the consequences are so inconsistent, some students have been referred to student conduct for plagiarism while other classes simply impose their own informal punishments.
It’s quite distressing to find out that the restructuring of your own sentences can be considered plagiarism, even though the main idea is still your own.
I don’t teach, but I can say that it helps a lot of students who aren’t great at expressing ideas on paper produce a somewhat coherent argument. Anecdotally, ChatGPT fails at professional-level concepts such as medical law or chemotherapy regimens and will easily land you in a heap of trouble.
But if you're having a hard time coming up with ideas for an essay related to “the economic downturn of the early 2000s and the political climate that fueled the depression” or “war in the Middle East,” I wouldn’t call it unfair to smash a few key research points into an app that basically does the “explain” part of the essay you worked hard to build a setup for.
It’s gross and lazy when the whole thing is obviously crappy AI hallucination talk. It’s nice when it is actually used as a tool that enhances the work.
Edit: I also just realized this is a community for professors. Thank you for all the hard work you all put into us students. A lot of our lives are secretly shaped around some of the behaviors we pick up from you, and most importantly the things we learn from you. For some of us you are the shimmering hope of escaping a more traditional way of life.
The education we take home after the day is done, and into the future careers that you all help build, is massive, and I sometimes think y’all are so humble up your own asses that it sends you to cringe land to hear it. Thanks all!
At least here it doesn’t sound like I’m begging for a grade.
Indeed, it does not sound like grade begging in this context. I'm sure that your professors would agree that you're welcome, and thanks for acknowledging their work as well. I hope you find a way to say this to them.
Mostly, I agree with you about acceptable AI use, although this is still very fluid and subject to change. There's no significant difference, to my mind, in working with a human sounding board and working with AI in that role. The problem comes when the AI generates most or all of the final product and the student submits it as their own. That's not even close to acceptable, particularly in a writing class where the entire point is learning to produce one's own good writing. However, as you note, other profs have different perspectives depending on their course objectives and professional fields. That is appropriate.
My issue in the original post was students who either mostly or entirely copy the prompt into an AI generator and then submit the result for a grade. Such essays hardly ever actually address the question or even follow instructions. They would fail in any circumstance, but the nature of their creation also violates the course's academic integrity policy.
Even text spinners, while useful to improve a few words to express an idea, can land students in trouble. When spinners are overused, the student's voice (and sometimes their entire message) is lost. No one should want that. I've had students fail assignments because the submission no longer represented their own writing; rather, it reflected lengthy periods refining the input. It isn't plagiarism according to my class definition, but it's also not acceptable.
You do raise an interesting idea: How long until we need to include a lesson on crafting appropriate AI prompts in order to help students use them as tools and not as unpaid ghost writers? That's probably a very different, deep, and interesting rabbit hole.
At any rate, even though it technically violates the community rules, I hope the mods leave your message here. To my mind, it's good to have the student perspective as we wrestle with this new ~~menace~~, er, "tool" in education settings. Thanks for posting.
For reference, my working practice this semester is to treat unauthorized AI use (we discuss what is authorized repeatedly) as an academic integrity violation. I'll begin an inquiry if at least two different AI detectors indicate that a majority of a submission was AI generated (either ≥50% of sentences or ≥50% probability that the entire paper was written by AI). So far, guilty students have either immediately confessed or tried a variety of stalling tactics. One had me emailing with the AI for a week, offering one excuse after another until the F was recorded and we moved on. Another relayed Helicopter Parent's instruction that I was to be lenient in grading and to stop talking with Student; that didn't go as they expected. Here at the end of the semester, others have simply ignored multiple emails, seemingly trying to run out the clock (hey, it works in sportsball).
I'll give students a fair chance to explain, and there have been cases where those explanations passed muster. I'm completely happy to base a judgment on the preponderance of evidence. But they have to actually offer some evidence, and neither my patience, my time, nor the semester is infinite.