21 May

AI in the university classroom


Ah, the winter of 2023 – after ChatGPT publicly launched in November 2022, university instructors everywhere had a collective freakout when we realized that students could engage in academic misconduct in an entirely new way. Dealing with misconduct was always part of our jobs, but this was different: AI-facilitated misconduct is more sophisticated and far easier for students to commit. The writing that AI generates appears original and is of decent quality.

Even now, spring of 2024, whenever instructors gather – online or in-person – the discussion quickly turns to the issue of AI in student work.

Like many others, in winter 2023, I responded by engaging with the various AI detection tools (which we now know are unreliable). I was spending hours each week copying and pasting text into detection tools, becoming angrier by the minute.

I also revised my assignments and activities to be more “AI-proof.” This was, and continues to be, incredibly time consuming. Good assignments and activities take time to develop, and I typically need to offer them a few times before finalizing the instructions and expectations. This also came just after the COVID-19 pandemic, when all of us had already spent a great deal of time creating and revising assignments. And while “AI-proof” optimally means that the requirements are more complex, so as to evade AI, in practice it often just means “harder.”

The other major outcome of revising assignments was that it changed the classroom vibe and shifted the grade distribution. Students who always did well on assignments continued to do well as the assignments became more complex; many of them even appreciated the added complexity. Then there are the students who, in the before times, did poorly on assignments. Some of them are tempted to use AI, and even on the “AI-proof,” more complex assignments, AI probably lets them do decently, or at least better than they would have on their own. The students I believe are most upset about this entire situation are the ones who did okay in the before times: with a bit of effort, they could get a B-/C+ on an assignment and go on with their lives. The “AI-proof,” more complex assignments mean they can no longer earn that B-/C+ with a little effort; they are getting barely-passing grades, and they are very angry. Finally, there are students who perhaps never considered serious misconduct in the before times, but AI is so tempting and appears to do a decent job, so they have adopted it.

Similarly, instructors have rewritten multiple choice quiz and exam questions to be harder to answer with AI – especially with AI browser plug-ins like “StudyBuddy” that can answer questions even under a lockdown browser (which my university does not use, but still). Those previous B-/C+ students used to do okay on a multiple choice quiz or exam, but with more complex questions, they are no longer passing. And again, they are angry.

Not everyone is creating more “AI-proof” assignments, however, which may engender divisions between instructors. And as AI advances, what “AI-proof” means must advance with it. For example, I’ve downloaded an AI-based app that lets me point my smartphone camera at my teenager’s algebra homework problem; within seconds, the app explains how to solve the problem in four different ways. This has been a huge asset for us parents helping with math homework, but I presume a student could use the same app to cheat.

Another strategy some people use is creating assignment rubrics that “punish,” or at least do not reward, AI-generated text. Related to this is adding requirements that are harder for AI to do well, like asking for direct quotes with page numbers or requiring that students engage with a specific number of the assigned materials. While this seems to “work” more or less, if the goal is to ensure that students engage in authentic writing, it does not entirely resolve the issue of students submitting AI-generated work.
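
As an aside, a requirement like “at least three direct quotes with page numbers” can be spot-checked automatically before the close reading begins. Here is a minimal Python sketch of that idea; the pattern and threshold are illustrative, and a real checker would need to handle smart quotes and other citation styles:

```python
import re

def meets_quote_requirement(text, min_quotes=3):
    """Count direct quotes followed by a page citation like (p. 12) or (pp. 12-14).

    A rough heuristic only: it assumes straight double quotes and one simple
    citation format, so real submissions would need a more forgiving pattern.
    """
    pattern = r'"[^"]+"\s*\(pp?\.?\s*\d+(?:\s*[-–]\s*\d+)?\)'
    found = re.findall(pattern, text)
    return len(found) >= min_quotes, len(found)

sample = 'As Smith argues, "framing shapes perception" (p. 42).'
ok, n = meets_quote_requirement(sample, min_quotes=1)
print(f"cited quotes found: {n}; requirement met: {ok}")
```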

Some are requiring access to a document’s history, and there are tools that can analyze a Google Doc’s revision history for evidence of actual typing. I have also been using Sherpa, an AI-based interview tool whereby students upload their work and are interviewed about it (or are interviewed about material that you’ve assigned). I have experimented with making Sherpa interviews worth more or fewer points in the class, and in general I’ve really enjoyed it as a tool. However, both the document-history and video-interview “solutions” are difficult to manage in a larger class.
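
The intuition behind the document-history approach is simple: typed text tends to arrive as many small revisions, while pasted text arrives in large jumps. Here is a minimal sketch of that heuristic, assuming revision events have already been exported as (time, characters added) pairs; the event format and thresholds are my own illustrative assumptions, not how any particular tool works:

```python
from dataclasses import dataclass

@dataclass
class RevisionEvent:
    seconds_in: float   # time since the document was created
    chars_added: int    # characters inserted in this revision

def looks_paste_heavy(events, paste_size=400, max_pasted_share=0.5):
    """Flag a history where most text arrived in a few large insertions.

    Typed text tends to show up as many small revisions; pasted text shows
    up as big single jumps. Both thresholds are illustrative, not calibrated.
    """
    total = sum(e.chars_added for e in events)
    pasted = sum(e.chars_added for e in events if e.chars_added >= paste_size)
    return total > 0 and pasted / total > max_pasted_share

history = [RevisionEvent(60, 35), RevisionEvent(300, 48), RevisionEvent(900, 1500)]
print(looks_paste_heavy(history))  # True: one revision carried most of the text
```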

Many people are moving “back” to in-class writing with blue books or paper exams. This introduces grading labor, and we lose much of the efficiency and tooling that digital assignments afforded. I really liked being able to tell the course management system to shuffle questions and answers, compared to making four different versions of an exam myself, copying and pasting questions, and managing a key.
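
That said, if paper versions become unavoidable, the shuffling itself is easy to reproduce with a short script. Here is a minimal Python sketch that builds seeded exam versions along with their answer keys; the questions and the four-version count are placeholders:

```python
import random

# Each question: (prompt, answer options, index of the correct option).
QUESTIONS = [
    ("Which level of measurement has a true zero point?",
     ["Nominal", "Ordinal", "Interval", "Ratio"], 3),
    ("A good hypothesis must be:",
     ["Proven", "Falsifiable", "Popular", "Complicated"], 1),
]

def make_version(questions, seed):
    """Shuffle question order and answer order; return the exam and its key."""
    rng = random.Random(seed)           # seeded, so versions are reproducible
    order = questions[:]
    rng.shuffle(order)
    exam, key = [], []
    for prompt, options, correct in order:
        opts = options[:]
        rng.shuffle(opts)
        exam.append((prompt, opts))
        key.append("ABCD"[opts.index(options[correct])])
    return exam, key

for v in range(4):                      # four versions, each with its own key
    _, key = make_version(QUESTIONS, seed=v)
    print(f"Version {v + 1} key: {'-'.join(key)}")
```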

This is all to say – my primary job is not to be a writing or composition instructor. I have the option to assign less written work in my courses, or none at all. There are writing and composition instructors developing new ways to address AI in their classrooms, and I look forward to seeing what they develop and how it can be applied in other courses.

But it is important to consider the bigger picture. I have thought about this quite a bit and have discussed it with many trusted colleagues. One such colleague told me that they are trying to focus entirely on the students who are authentically engaging with the materials. Another colleague told me that after the first week or two, if a student continues to submit AI answers, they no longer provide any feedback, just a grade. Some others say that this AI panic will pass, just like the Wikipedia panic or the calculator panic before it. I am trying to get to a better place emotionally about this. I have spent many hours being angry about AI misconduct, and I will never get those hours back. I have tried to unpack why it feels so offensive to me. Is it because I spend so much time trying to provide students with a great educational experience, and an AI-generated response seems to spit at that? Is it because I worry about the value of a course, an education, and even a grade, and I fear that those ideas will become meaningless if many students do not actually do the work?

One of my responses has been to embrace AI in my own life and work. Over the last year or so, I’ve developed a lot of AI tools and tricks, treating AI as my personal assistant, especially for tedious tasks. I believe that I have “claimed back” many hours because of AI, so that gives me some peace.

I think it is also important to acknowledge that AI is not going away. Last week I listened to a podcast with José Bowen, a noted AI education expert. I’ve really enjoyed his co-authored book “Teaching with AI.” In the podcast, he said this about the future of work: “A senior radiologist still needs to check your scan. But the junior jobs, the intern jobs, the rough draft of the press release, all of those sorts of things are no longer gonna be jobs. So we have to get our students to do the part that we value with critical thinking, right, asking better questions, making sure the output is correct, and making sure the output is excellent, not just okay or average. And so I think AI has changed what we can accept as average and mediocre quality.”

Two important things here. First, it is 100% true that a lot of what “junior” jobs currently do is going to be replaced by AI, and in a very short period of time. Our current university students are going to enter a workforce where jobs they would previously have been qualified for will now be done by AI. Second, what we understand as average and mediocre is going to change. He gave an example earlier in the podcast with regard to spelling and spell checkers: “Many of us started our careers, we were still grading spelling, or at least it was a line on the rubric. And now I expect perfect spelling, because if there’s a spelling mistake, I just say no, use your spell checker. All those little red lines, fix them, and then resubmit. Right? I’m not gonna accept this because in the workplace, no one’s gonna accept your spelling errors. So the technology changed the standard that we accepted.”

Following José, I really believe that we need to prepare our students for the AI working world they will enter, and I’m committed to bringing more AI into my classroom activities and modeling positive, ethical use. For example, in an assignment where students design a poster for a middle school classroom, I instruct students to ask AI whether the wording on their poster is appropriate and understandable for most 7th grade students. I have also designed an assignment where students receive instant AI feedback on their conceptualization and operationalization. This has significantly improved the assignment, as students can receive personalized feedback before they submit the work to me.
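
For the curious, here is a minimal sketch of what such an instant-feedback step might look like, assuming the OpenAI Python client; the model name, prompt, and sample input are illustrative rather than my exact setup:

```python
from openai import OpenAI  # assumes the OpenAI Python SDK; any LLM API would do

client = OpenAI()  # expects an OPENAI_API_KEY in the environment

FEEDBACK_PROMPT = (
    "You are a teaching assistant in a research methods course. The student "
    "will paste a concept, their conceptualization (definition), and their "
    "operationalization (how they will measure it). Give brief, specific "
    "feedback on how well the three align. Do not rewrite their work."
)

def instant_feedback(student_text: str) -> str:
    """Return formative feedback the student can read before submitting."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": FEEDBACK_PROMPT},
            {"role": "user", "content": student_text},
        ],
    )
    return response.choices[0].message.content

print(instant_feedback(
    "Concept: social isolation. Conceptualization: a lack of meaningful "
    "social contact. Operationalization: number of text messages sent per day."
))
```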

But I am still wading through the challenges of learning assessment, and of how to feel okay about the fact that some students are going to use AI to do the assignment for them, leaving me to grade something that was not authentically created by the student. I think my colleague who has turned their focus to the students who are engaging authentically probably has the right idea. But I also worry about my future with a surgeon who used AI instead of learning something in medical school, right? In the meantime, I am going to try to engage in more meditative thinking on this – with the time that I’ve gained back thanks to AI tools.