In the past few weeks some articles in high-profile publications have alerted people outside of education to a problem teachers have thought about nearly continuously since late 2022: students use large language models to cheat.
New York magazine published a piece titled “Everyone is Cheating Their Way Through College.” The New Yorker asked “What Happens After AI Destroys Writing?” and David Brooks devoted a column to a not-yet-published (and rightly criticized) study purporting to show that you don’t really think much if you lean on ChatGPT as you produce written work. (This is one of those instances where the study has some problems—it’s underpowered, and there’s debate about whether the EEG analysis is appropriate to the use to which it was put—but the broad conclusion is probably right.)
Despite this concern, tech companies (with an invitation from the White House and an assist from, improbably, the American Federation of Teachers) are pouring resources into ensuring that AI makes its way into schools more formally, and is not restricted to off-the-record homework assistance and furtive exam help.
This is old stuff: tech companies offering an unfounded, glittering picture of the possibilities that their products can bring to education. It’s often delivered with an undertone of threat (those who don’t sign on will be left behind). Never mind that the glittering future was not delivered by
1) One-to-one laptops
2) Smartboards
3) Massively Open Online Courses
4) Open Educational Resources
5) Flipped classrooms
I mean, we have Kahoot, but whether it’s all been a net positive is at least open to debate.
This time, tech companies are not even bothering to paint the shiny future for us. Apparently that’s not needed, but I am no more optimistic that LLMs will “change everything” given that these other innovations delivered much narrower, weaker benefits than we were promised.
So. Cheating for sure, and benefits that are unnamed and uncertain.
This all sounds pretty grim, but I see a possible silver lining here. Addressing the cheating problem created by LLMs could improve instruction. Here’s why.
I hope I’m right in thinking that forgoing writing assignments is a non-starter. ChatGPT can’t kill the school essay because it’s too useful to us.
Educators cannot directly influence learning. Learning is a product of student thinking and we can’t directly control student thought. We set particular tasks expecting that the tasks will guide student thought in particular ways, and we think we know which skills and knowledge will result. I give exams in my Introduction to Cognition class, and those exams mostly test memory for vocabulary terms and new concepts. I use this assessment because I want students to commit basic terms and concepts to memory, which is a good goal for an introductory class.
But my advanced seminar, High Level Cognition, includes no exams because my goal is not that students commit more content to memory. My students are all majors in either psychology or cognitive science, and they have a number of introductory courses under their belts. It’s time for them to conduct their own research, and to assemble ideas into a comprehensive, deep analysis. That’s an appropriate goal in many classes, and I don’t know of a task superior to writing an in-depth paper to assure that students engage in that cognitive work. That’s why I say that we can’t forgo writing assignments in school.
So how do we assure that students are actually doing the cognitive work, and not offloading it to an LLM? Software products designed to detect AI writing have not been successful. If they were, I expect students would simply tweak their output so that it evaded the detector. (And before long there would likely be another product promising to relieve students of this minor task.)
Another solution is to ask students to write using Google Docs, or another platform where the instructor can review drafts. That practice seems like no more than a nuisance for a student wanting to use an LLM. You have to tell it to write an incomplete draft with some mistakes, then type it into Google Docs, then later get the LLM to improve the draft, and so on.
I know of one way to save the essay: conferencing. Meeting with each student so they can explain and defend their ideas, and their choices in how they organized and expressed those ideas, seems to me the most likely way to encourage students to generate these ideas on their own. At the very least, they must understand and be prepared to explain and defend what ChatGPT has written for them.
Before students had access to LLMs, I required conferences in some of my classes, and made them optional in others. I found that feedback was more effective because it could be more finely tuned. I can ask the student “what were you trying to do here?” And my feedback is much more carefully considered. Every instructor wonders how many students read and really process written feedback on written work, and how many glance at the grade and ignore the rest. If the student is there with me, they will not only take the feedback in, they will think about it.
In addition to providing more focused feedback that’s more likely to be attended to, you can also respond to the student’s emotional needs. You can encourage those who struggled, better explain a grade to an angry student, assure a student who thinks poor performance means they don’t belong in your class, or persuade the just-tell-me-what-you-want student to take a risk.
Conferencing does not take much more time. I do not grade the essays and then meet with students to provide feedback. I read the essay for the first time during the conference and explain to the student how I am responding as I read. (“See how the last sentence of this paragraph made me think you were going to talk about this? But then in the next sentence you talk about that….”) It’s hard to recreate that feeling 48 hours later.
I’ll also mention that the near universal response from my students is (1) they find these conferences stressful and (2) they find them extraordinarily useful.
LLMs are prompting me to make conferences required in all my classes. That’s the biggest boon to my students I’ve seen from LLMs thus far.
I really like this idea. I’d be interested to see how it works in the high school classroom, specifically middle school with regard to what the rest of the class is doing while the teacher is conferencing with a student.
I think you’re right about AI inadvertently improving learning. I think it’s forcing us to go back to the essentials of good teaching and learning, stripping away the dross and leaving us with something better. In my subject of English, I get my kids to write essays in class, by hand. They don’t need to research widely, simply engage closely with the text. Students are given lessons for planning. During the evenings between lessons, some kids might seek AI assistance. They’d still need to remember what they read and reproduce it, which I don’t think would be a net loss.
Thanks for this post. Such a fan of your work!
Not sure if you or others have seen this tool https://www.youtube.com/watch?v=u4aAqKZ3EfY. I haven’t used it (and I’m not affiliated with this company in any way). Even if you don’t use this Chrome extension, it offers helpful advice on how you might “force” students to use Google Docs so you can see their writing process. This is helpful when conferences (so helpful, as you point out) are not possible due to time constraints.
Of course, students will find workarounds to that too. And then tech developers will come up with new tools to work around the workarounds. Which brings us right back to the larger point you are making, which is that writing is thinking and that’s what we want students to do.
Whether or not students cheat on assignments is an age-old problem; what’s new are the technologies that enable them to do it, and the technologies that teachers and professors use to “catch them.”
So the bigger question we need to ask ourselves is how do we get the large majority of students to want to learn in the first place? If students are cheating because the work we give them is too hard, too easy, too boring or they don’t see the use of it, I think that’s on us.
@Marcus Luther has written extensively on his Substack about AI and writing. Worth a look, especially for high school teachers.