LLM Chatbots Are the Inscrutable Unknown Variable in Lesson Plans
The cognitive consequences of using LLM Chatbots in class can be subtle and hard to predict
The more I read about the use of LLM chatbots like ChatGPT in education, the more I think that the applications pitched as most promising are not well thought through.
The most glaring example is an LLM chatbot serving as a tutor, which I think is based on faulty assumptions about human motivation. But that topic requires a lengthy treatment, so I’ll leave it for another day.
But here’s another example: an LLM chatbot as a partner in the writing process, as offered in a number of places, for example, this popular guide published by Elon College. It’s sometimes framed as an exercise in realism: once out of school, students will, of course, use LLM chatbots, so it makes sense for them to use them now, just as they will in the future. Good writers will not have LLMs write for them, but will make wise use of their capabilities.
There are assumptions baked into this tack. One is the facile expectation that doing a task now is always the best preparation for doing that task later; in other words, that there are “working with LLMs” skills that must be burnished. The plausible alternative is that the “working with LLMs” skills are trivial, and the skills that really need development relate to research, thinking, and writing. In that case, students would benefit more from an assignment in which they were responsible for all aspects of the work.
Or perhaps the “partnering” strategy is a nod to the reality that students are going to use LLMs anyway, so partnering is meant to put some guardrails on that use. That assumes the assignment will make it less likely that students will use LLMs in proscribed ways. Would it? I have no idea, and I don’t think anyone else does either.
Here’s one more example. When educators are advised to have students partner with an LLM chatbot in the writing of papers, they are almost always told that students should cite the LLM, just like any other source. But this advice shows a narrow understanding of the purpose of citations.
The motivation behind telling students to cite an LLM chatbot is academic integrity: you cite your sources to give credit for ideas you didn’t generate on your own. You are avoiding plagiarism, and that’s a good purpose.
But you also cite sources for the sake of credibility. You are showing where your information comes from, with the goal of assuring the reader that the information in your argument is trustworthy. Citing an LLM doesn’t really serve this purpose, because you don’t know where the information came from. You’re citing an aggregator: the information that went into the aggregator is cloaked, and the method of aggregation is not understood by the reader. (There are LLM chatbots that cite sources, so far with mixed success.)
Another reason we cite sources is verification. If I want to challenge the accuracy of your paper, one thing I might do is look up your sources to see whether your attributions are accurate. But that’s a tricky matter with an LLM, because the same prompt can yield a different response each time.
My aim here is to point out that an LLM chatbot is a cognitive wild card. The effects of including it in assignments may be subtle and difficult to appreciate. That suggests educators should be thoughtful, deliberate, and cautious in their use of LLM chatbots in the classroom, which runs counter to one of the persuasive techniques of enthusiasts: “you don’t want to be left behind!” We’ve bought that argument more than once in the last twenty years, and it hasn’t served us well.
The other persuasive technique is to argue “it’s already here—deal with it.” I would rephrase that as “it’s upon us, so we must deal with it,” and there is a world of difference in that small change. I haven’t forgotten the people who made the same argument advocating that educators embrace the use of smartphones in class, one-to-one Chromebooks, and YouTube, Twitter, and TikTok as learning platforms. I remember, and I think they know I remember.

Appreciate the thoughts. LLMs seem like a wild card, and there is a rush to implement and use them in class without much thought.
Would love to hear more about school-supplied one-to-one devices such as Chromebooks and iPads in the classroom. Most of the focus has been on cell phones. Anyone doing work in this area?
Most AI plans that ask for “citation” are really, I think, asking for disclosure of assistance. I have added this to my syllabus and am putting the burden of proof on the students, who should save chat transcripts, drafts, screenshots, and other evidence of their acceptable use.