10 Jul

Summer 2025 verbal AI policy + AI actions + AI policy statement


[silly AI image of me]

This is an update to an earlier AI policy statement, with additions covering the AI policy I give verbally in person and what I’m currently doing to try to decrease inappropriate AI usage. This also riffs on a previous blog post that I made about AI in the university classroom.

Verbal policy

Verbally – this is what I say on the first day of class. I’m not putting it directly into my statement though, yet. I find that it is more meaningful verbally. “AI is obviously a big part of our world now and I want to talk about how AI is used in this classroom and what my policies are around AI. Of course I do want you to read the policy in the syllabus, but, basically, this is my attitude [note that this is loosely based on this wonderful Ted Chiang piece in the New Yorker]: There are different ways to lift weights. If you worked at Fred Meyer [our local large grocery/household supply chain], and your task was to get a big crate of bananas from the warehouse to the produce section, and there was a forklift nearby, it makes total sense for you to use the forklift to lift and carry that huge crate of bananas. But also, let’s talk about another way that we can lift a weight – like people engage in weightlifting, right? Why do people do that? [Students will say “To get strong” “To get fit” “To better themselves” “To get muscles”] Exactly! So, in this class, I want you to think of AI use this way – if you’re using AI to better yourself, to get smarter – then it is probably okay. But if you’re using AI like a forklift, to basically do the work for you, then it is probably not okay.” [I’ve found that this really “clicks” for students and they will refer to it often, even in discussing a group member’s behavior: ‘Marissa was totally forklifting and I told her not to.’]… “So let’s talk more specifically… I think that we ALL can agree that using AI to generate something and pass that content off as our own work is NEVER okay in a classroom environment, right? [students nod] And, extending that, that is also probably the case in most situations. So, at the most basic level, know that you can NEVER use AI to generate something for this class – writing, graphics, citations, etc. – and not disclose that you used AI to do it.
Moreover, in situations when you do use AI, I want you to get in the habit of disclosing it and being transparent about how you used it. For example, saving the chatlog, taking screenshots, telling me exactly what prompt and what tool you used. This is how we’re going to be transparent. Also, in this class there is a lot of group work, and AI use in group work gets a little more questionable because not everyone in the group may have the same attitudes toward AI. AND let’s say that someone uses AI for group work and that part of the assignment is marked down. Certainly in those situations, group members are more upset than they would be if it were human error. So when you’re in groups, I want you to be even more thoughtful and transparent about AI use, because it impacts more than just you. Finally, I know that some students use AI to summarize readings for them. I know that I can’t stop you from doing that. However, I do want to ask you to reflect upon the role that AI may be playing in your course performance. If you’re using AI to summarize the readings and you’re not getting the quiz scores that you want, for example, I REALLY want you to think about trying something else and try not using the AI for a while to see how it goes.”

Then I talk to the students about AI in our world… “I also want to talk a bit about why we DO use AI in this classroom environment, and you’ll probably find that in my classroom, we will use AI fairly often. I think that it is extremely important that all of you develop AI literacy and understand the ethics of its use, including being transparent about using it. Further, to be totally honest with you all, I know that you all are going to be entering a workforce where there is a LOT of AI. And I think back to when I graduated college, and the kinds of jobs that my friends and I applied for – like entry-level office jobs – and the kinds of tasks that we were doing are now easily done by AI. I don’t want to freak you all out, and I’m sure that a lot of you are already thinking about this, but there are already, and will continue to be, some big changes in work due to AI. So, as an educator, I feel responsible to help you all prepare for that. So I PROMISE you that every week in this class, we’re going to be working on skills that will help you to be BETTER than AI. We’re going to focus on things that AI is not good at, for the foreseeable future.”

What I’m doing

In my in-person classes, I’ve moved to a lot of in-class writing and in-class quizzes. I no longer use course management site quizzes or even “clicker” or app-based quizzes. I presume most educators know this by now, but there are dozens of plug-ins and apps for AI to answer those questions. During in-class writing and quizzes, I have students put their technology away. For bigger exams, I have them put phones, smart watches, ear buds, etc. in plastic storage bags up at the front of the room. I’m printing things out, and for multiple choice I’m using ZipGrade. My university has a bubble sheet scanning service, but ZipGrade allows me to scan their sheets instantly. This does use a lot of paper and adds work, compared to the automatic grading of the recent past, but I’ve found that, more or less, it works well. I also make multiple versions of every quiz or exam. For in-class writing, I’m having students write on printer paper – I tried lined paper, but the scanner doesn’t like it. I then immediately scan their written work, just to have a record of it. I give the written work back to them and they have 24 hours to turn that handwritten paper into a Google Doc. I give them instructions on how to use the OCR scanning functions on their phones, and they get the hang of it after 1 or 2 attempts. It only takes them a minute or so to do this. This way I can read things on a screen, typed, and they are allowed to make minor edits. Overall, this is working for me, but I have had to cut material to free up more in-class time for writing, because I’m not comfortable with out-of-class writing right now.

For bigger class projects, or on the rare occasion that I’m teaching online, I have students write IN Google Docs and give me edit access so that I can see the history of the document. I use various plug-ins that scan the work to see if they really typed it. Revision History is a popular one. The latest version of Revision History also says that it can detect ‘text to speech’ – because some students will generate writing in an AI tool and then have the AI tool read it out loud to Google Docs, so it appears that there is real typing. There are also “ghost writing” tools, but I hope that my efforts are enough to discourage this. It does take students a bit of time to get used to sharing their document and to sometimes being held accountable for not working in the Google Doc. Another thing that I’m doing is that my writing assignments are HIGHLY structured and tied to the material, and all references/citations must include page numbers or time stamps – including for citations of assigned materials. I also have group projects in my in-person class, which generally do decrease AI usage. For my online class this summer, I’m having students also highlight/annotate in the actual sources to show me where they drew the reference from. I believe that I will do something similar next academic year for a larger project with writing: students will need to highlight PDFs and upload those PDFs to a Google Drive for me to review. [I hate doing all of this, but the problem is truly that common.] The biggest remaining problem that I’m encountering regularly is hallucinated citations, even within a small set of materials.

Yet, there are a TON of AI-related classroom activities. For example, I have students use AI comic illustration generators to create comic strips about theories. They have to input the “right” information though – they can’t just ask the tool to make a comic strip about social identity theory. I also have a number of chatbots for student support on big assignments. Students can ask the chatbot for help or feedback. I’ve spent many months making those chatbots have the right tone and not “give away” the answers, while also being useful.

I also have a “low tech” classroom. Students can bring devices, but I let them know when they can bring them out – like for a group activity. I don’t want students to have devices out if I’m lecturing (briefly, because my classes are flipped), but I do record everything important and I post it online for students later that day. I tell students that if they really believe that they need to have a device for note taking, they don’t need formal accommodations or anything like that, but they do need to meet with me to talk through the pros and cons.

Sometimes people ask me if I think that AI is having an impact on student writing, critical thinking, etc. And I do think that it is, but not in entirely obvious ways AND it is difficult to separate this out from the fact that current undergraduates had much of high school during COVID. I do think that it is true that fewer students are doing the assigned readings before class and I do think that there is general malaise.

Syllabus policy

Artificial Intelligence and Large Language Model Policy: We know that artificial intelligence text generators like ChatGPT and other tools like Grammarly and Quillbot are powerful tools that are increasingly used by many. And while they can be incredibly useful for some tasks (creating lists of things, for example), they are not a replacement for critical thinking and writing. Artificial intelligence text generators and editors are “large language models” – they are trained to reproduce sequences of words, not to understand or explain anything. It is algorithmic linguistics. To illustrate, if you ask ChatGPT “The first person to walk on the moon was…” it responds with “Neil Armstrong.” But what is really going on is that you’re asking ChatGPT, “Given the statistical distribution of words in the publicly available English-language data that you know, what words are most likely to follow the sequence ‘the first person to walk on the moon was’?” and ChatGPT determines that the words most likely to follow are “Neil Armstrong.” It is not actually thinking, just predicting. Learning how to use artificial intelligence well is a skill that takes time to develop. Moreover, there are many drawbacks to using artificial intelligence text generators for assignments and proofreading and editing.
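[An aside for readers of this post, not part of the syllabus text: the “predicting, not thinking” idea can be sketched in a few lines of code. This is a deliberately tiny, hypothetical word-frequency model – real LLMs use neural networks trained on vastly larger corpora – but the core move is the same: pick the statistically likeliest continuation.]

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows each word in a tiny
# corpus, then predict the next word by picking the most frequent follower.
corpus = (
    "the first person to walk on the moon was neil armstrong "
    "the commander of apollo 11 was neil armstrong "
    "the moon landing was in 1969"
).split()

followers = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    followers[word][nxt] += 1

def predict_next(word):
    # Return the word most often observed after `word` in the corpus.
    return followers[word].most_common(1)[0][0]

print(predict_next("was"))  # "neil" -- a frequency lookup, not understanding
```

The toy model “knows” that “was” is usually followed by “neil” only because of how often that pair appears in its training text; nothing about the moon landing is understood.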

Some of those limitations include: 

  • Artificial intelligence text generators like ChatGPT are sometimes wrong. If the tool gives you incorrect information and you use it on an assignment, you are held accountable for it. If the proofreading introduces terminology that is not as precise as the terminology in course materials or used differently than in course materials, you are held accountable for it.
  • There is also a drawback in using artificial intelligence tools like Grammarly or Quillbot to “proofread” or “edit” your original writing – it may change your text so much that it no longer reflects your original thought or it may use terminology incorrectly. 
  • There are drawbacks in using AI language translation tools. There may be misunderstandings and a lack of precision. This is true for students translating course materials as well as students translating their own work into English.
  • The text that artificial intelligence text generators provide you is derived from another human’s original writing and likely multiple other humans’ original writing. As such, there are intellectual property and plagiarism considerations.
  • Most, if not all, artificial intelligence text generators are not familiar with our materials or my lectures and as such, will not draw from that material when generating answers. This will result in answers that will be obviously not created by someone enrolled in the course. It is likely that your assignment will not be graded as well if you’re not using course material to construct your writing. 
  • I spend a great deal of time and energy bringing together materials for students to engage with. When students use AI summarizing tools instead of reading/watching the assigned material, it is certain that some of the nuances of the materials will be missed. And this is likely to reflect poorly in students’ assignments.
  • Answers written by artificial intelligence text generators are somewhat detectable with software and we will use the software to review answers that seem unusual. We will have to be cautious in our use of such tools, but if multiple detectors find that something is likely to have been written with AI, that will be used as evidence of misconduct.
  • AI is likely to produce “C” level work at best. For some things in life, “C” level is okay. But please be aware that as AI continues to develop and can do more and more tasks that humans used to do, you as a future employee and worker in the world will need to demonstrate that you can do a better job than AI. If you are using AI in this course to do the work for you, you’re not developing yourself to be BETTER than AI. You’re not learning skills or content that will matter. Consider AI-generated work as your new competition and that you need to do better work than that. Further, if AI can produce “C” level work circa 2018, very soon that will not be considered a passing grade. Instead of banning AI, instructors are going to “ban” all “C” level work (circa 2018). We’ve already seen that most instructors have raised their standards since AI became widely available. Currently, it is unlikely that even well crafted AI work will allow you to pass this course. Rubrics are designed so that AI-generated work is unlikely to get high marks.    
  • I have tried to design this course to help you develop yourself, your knowledge, and skills for a world in which AI will be doing more of the types of tasks that traditionally were done by recent university graduates in the workplace. AI will not be able to replace original thinking, problem solving, critical thinking, strategic thinking, emotional intelligence, ethical decision making, collaboration, and global/cultural awareness. Let’s work together to help prepare you for your future. 

It is okay for you to use artificial intelligence text generators in this course, BUT:

  • You must use them in a way that helps you learn, not hampers learning. Remember that these are tools to assist you in your coursework, not a replacement for your own learning of the material, critical thinking ability, and writing skills.
  • The only acceptable use of AI on assignments in COM 303 is for proofreading (like Grammarly or Quillbot). This should only be for simple grammar checks, not extensive rewriting, and absolutely not for generating original text. Using AI to write an answer in another language and translate it is also not within acceptable use for this course. 
  • Do not use AI to write original material such as Hypothesis annotations.
  • Tools like StudyBuddy or other techniques to “take pictures” of quiz questions or to get answers to quiz questions are 100% not allowed. 
  • It is acceptable to use AI in COM 303 to provide you with other explanations of concepts or organize your notes and there is no need to disclose these. However, if the AI gives you incorrect information and you use that incorrect information on an assignment, you will be held accountable for it.
  • Be transparent: If you used an AI tool for proofreading, you must include both your original writing and the AI-version so that I may see both and determine if the answer that you submitted reflects your original thought. And I expect that you will include a short paragraph at the end of the assignment or in the final 0 point question in the quiz/exam that explains what you used the artificial intelligence tool for and why. (For example: “I used Grammarly to give me feedback on my sentence structure on question 6. English is my 3rd language and I like using AI as a proofreading tool.” It is not required to disclose using AI for studying, but you can if you want to: “I read the book and listened to the lecture on measurement reliability and I didn’t fully understand it, so I asked ChatGPT to give me other examples which helped my understanding.” Or “I did not understand a term in the textbook and I asked ChatGPT to explain it to me.”)
  • If you are using artificial intelligence tools to help you in this class and you’re not doing well on assignments, I expect that you will reflect upon the role that the tool may play in your class performance and consider changing your use.
  • If artificial intelligence tools are used in ways that are nefarious or unacknowledged, you may be subject to the academic misconduct policies detailed earlier in the syllabus.
  • If there is unauthorized AI work in group assignments, ALL students in the group will be held accountable for the AI work and the associated outcomes, whether that be a reduced score or a formal misconduct report.
