I’m Tim Gorichanaz, and this is Ports, a newsletter about design and ethics. This is a special post on how teachers ought to respond to issues of cheating with ChatGPT and similar generative AI systems.
While the rest of the world has been grappling with fears of job loss, economic revolution and apocalypse (however overblown) since ChatGPT's release in November, the teachers of the world seem to be primarily concerned with cheating.
Okay, I’m being a little facetious. But if you look at the articles about ChatGPT in the world of teaching, you’d come to the same conclusion.
These worries are valid. A survey from January, about two months after ChatGPT's release, showed that about 90% of students had used ChatGPT to help with homework, half had used it for an at-home test or quiz, half had used it to write an essay, and about one-quarter had used it to create an outline for a paper. At the same time, over 70% of students believed the program should be banned at their institution.
So teachers are, naturally, trying to do something about this.
Cheating, of course, is as old as school. It’s not a new thing for teachers to have to deal with. And I think the lessons learned about cheating in prior eras can show us teachers the way forward in the age of ChatGPT.
The Individual Approach to Cheating
One way to think about cheating is in terms of individual character. Students who cheat are lazy at best, shortchanging themselves of their education. Perhaps they are morally bad and should be corrected.
Before ChatGPT, this mindset led to punitive academic-integrity policies, technological safeguards, surveillance and suspicion. These measures sometimes have their place. In the world of ChatGPT, this mindset has led to the rise of supposed AI detectors, tools that are meant to reveal whether students are using AI to generate the text they submit as their own.
The biggest problem with AI detectors is that they are simply not reliable. Even if a given tool is broadly accurate, it still flags AI-generated text as human-written sometimes and vice versa. Individual teachers will make their own decisions, but I would not feel comfortable using such a flawed tool even as one piece of evidence among many. Misinformation is worse than no information.
Another problem is that these tools set up an arms race between students, teachers, generative AI developers and AI detection developers. It's not a productive use of anyone's time, and at any given moment it's a losing situation for at least one of the parties involved. It pits students and teachers against each other, leading to a learning environment of animosity, resentment and aggression. Education should not be like that.
And beyond all this, AI detectors may miss the point entirely. A recent op-ed by an undergrad in The Chronicle of Higher Education illustrates this well. In the author's experience, students are using ChatGPT to develop thesis statements and generate supporting points, not to generate whole essays in a single click. "The ideas on the paper can be computer-generated while the prose can be the student's own." While this sort of collaboration still requires some intellectual effort on the student's part, it's surely not quite what teachers have in mind when assigning an essay.
The Environmental Approach to Cheating
Fortunately, there is an alternative, and I am grateful to educator James Lang for spelling it out in his book Cheating Lessons.
We can assume, as Aristotle said, that all people by nature desire to know. Yes, sometimes our students are in our classrooms because the law requires them to be; and yes, sometimes they are only there to get a credential for their resume. So maybe I'm being naive, but I like to think that my students are in front of me because, at bottom, they want to learn something. And even if that desire isn't sparked before they walk in the door, maybe I can do my best to spark it in the first week of class. I'm trusting Aristotle here.
If our students basically want to learn, why do they sometimes resort to cheating? Maybe it’s out of convenience sometimes, or because of pressure or lack of preparation. In Cheating Lessons, Lang puts it a little more formally. He asks: Are there aspects of the learning environment that influence students to resort to cheating?
Lang finds four types of learning environments that make students more likely to cheat:
when the emphasis is on performance
when the stakes are high
when motivation is extrinsic
when expectations for success are low
If you are a teacher, this framework alone may help you see some ways to create learning environments where students will be less likely to (have to) cheat. Some questions that come to mind for me:
Can you emphasize process over product? For instance, rather than assigning an essay and evaluating only the final product, could you assess students on all stages of the essay-writing journey?
Can you split up large assignments into more numerous, lower-stakes ones?
Can you help students cultivate intrinsic motivation? For instance, by assigning work that is relevant to their life, by articulating the reasoning behind each assignment, and by offering choice in the work students do?
Can you increase students’ expectations for themselves? For instance, by helping them see what they are learning through self-reflection and conversation?
As we finish out this school year and prepare for the next one, these are all good questions to mull over. ChatGPT caught the world by surprise midway through this academic year, but with the summer ahead of us we have the opportunity to enter the coming year a little better prepared.
Thank you for reading. See you later this week with our regular weekly piece. As ever, if you have comments or feedback to share on Ports or any of my pieces, feel free to leave a comment or reach out to me directly.