10 Jul

Summer 2025 verbal AI policy + AI actions + AI policy statement

This is an update to an earlier AI policy statement, with some additions: the AI policy I give verbally in person, and what I’m currently doing to try to decrease inappropriate AI usage. This also riffs on a previous blog post that I made about AI in the university classroom.

Verbal policy

Verbally – this is what I say on the first day of class. I’m not putting it directly into my statement yet; I find that it is more meaningful delivered verbally. “AI is obviously a big part of our world now and I want to talk about how AI is used in this classroom and what my policies are around AI. Of course I do want you to read the policy in the syllabus, but, basically, this is my attitude [note that this is loosely based on this wonderful Ted Chiang piece in the New Yorker]: There are different ways to lift weights. If you worked at Fred Meyer [our local large grocery/household supply chain], and your task was to get a big crate of bananas from the warehouse to the produce section, and there was a forklift nearby, it makes total sense for you to use the forklift to lift and carry that huge crate of bananas. But let’s also talk about another way that we can lift a weight – people engage in weightlifting, right? Why do people do that? [Students will say “To get strong” “To get fit” “To better themselves” “To get muscles”] Exactly! So, in this class, I want you to think of AI use this way – if you’re using AI to better yourself, to get smarter – then it is probably okay. But if you’re using AI like a forklift, to basically do the work for you, then it is probably not okay.” [I’ve found that this really “clicks” for students and they will refer to it often, even in discussing a group member’s behavior: ‘Marissa was totally forklifting and I told her not to.’]… “So let’s talk more specifically… I think that we ALL can agree that using AI to generate something and pass that content off as our own work is NEVER okay in a classroom environment, right? [students nod] And, extending that, that is also probably the case in most situations. So, at the most basic level, know that you can NEVER use AI to generate something for this class – writing, graphics, citations, etc. – and not disclose that you used AI to do it.
Moreover, in situations when you do use AI, I want you to get in the habit of disclosing it and being transparent about how you used it. For example, saving the chat log, taking screenshots, telling me exactly what prompt and what tool you used. This is how we’re going to be transparent. Also, in this class there is a lot of group work, and AI use in group work gets a little more questionable because not everyone in the group may have the same attitudes toward AI. AND let’s say that someone uses AI for group work and that part of the assignment is marked down. Certainly in those situations, group members are more upset than they would be if it was human error. So when you’re in groups, I want you to be even more thoughtful and transparent about AI use, because it impacts more than just you. Finally, I know that some students use AI to summarize readings for them. I know that I can’t stop you from doing that. However, I do want to ask you to reflect upon the role that AI may be playing in your course performance. If you’re using AI to summarize the readings and you’re not getting the quiz scores that you want, for example, I REALLY want you to think about trying something else and try not using the AI for a while to see how it goes.”

Then I talk to the students about AI in our world… “I also want to talk a bit about why we DO use AI in this classroom environment and you’ll probably find that in my classroom, we will use AI fairly often. I think that it is extremely important that all of you develop AI literacy and understand the ethics of its use, including being transparent about using it. Further, to be totally honest with you all, I know that you all are going to be entering a workforce where there is a LOT of AI. And I think back to when I graduated college, and the kinds of jobs that my friends and I applied for – like entry-level office jobs – and the kinds of tasks that we were doing are now easily done by AI. I don’t want to freak you all out, and I’m sure that a lot of you are already thinking about this, but there are already, and will continue to be, some big changes in work due to AI. So, as an educator, I feel responsible to help you all prepare for that. So I PROMISE you that every week in this class, we’re going to be working on skills that will help you to be BETTER than AI. We’re going to focus on things that AI is not good at, for the foreseeable future.”

What I’m doing

In my in-person classes, I’ve moved to a lot of in-class writing and in-class quizzes. I no longer use course management site quizzes or even “clicker” or app-based quizzes. I presume most educators know this by now, but there are dozens of plug-ins and apps for AI to answer those questions. During these in-class writings and quizzes, I have students put their technology away. For bigger exams, I have them put phones, smart watches, ear buds, etc. in plastic storage bags up at the front of the room. I’m printing things out, and for multiple choice I’m using ZipGrade. My university has a bubble sheet scanning service, but ZipGrade allows me to scan their sheets instantly. This does use a lot of paper and adds work, compared to the automatic grading of the recent past, but I’ve found that, more or less, it works well. I also make multiple versions of every quiz or exam. For in-class writing, I’m having students write on printer paper – I tried lined paper, but the scanner doesn’t like it. I then immediately scan their written work, just to have a record of it. I give the written work back to them and they have 24 hours to turn that handwritten paper into a Google Doc. I give them instructions on how to use OCR scanning functions on their phones and they get the hang of it after 1 or 2 attempts. It only takes them a minute or so to do this. This way I can read their work typed, on a screen, and they are allowed to make minor edits. Overall, this is working for me, but I have had to cut out material to use more in-class time for writing because I’m not comfortable with out-of-class writing right now.

For bigger class projects or on the rare occasion that I’m teaching online, I have students write IN Google Docs and give me edit access so that I can see the history of the document. I use various plug-ins that scan the work to see if they really typed it. Revision History is a popular one. The latest version of Revision History also says that it can detect ‘text to speech’ – because some students will generate writing in an AI tool and then have the AI tool read it out loud to Google Docs, so it appears that there is real typing. There are also “ghost writing” tools, but I hope that my efforts are enough to discourage this. It does take students a bit of time to get used to sharing their document and to sometimes being held accountable for not working in the Google Doc. Another thing that I’m doing is that my writing assignments are HIGHLY structured and tied to the material, and all references/citations must include page numbers or time stamps – including for citations of assigned materials. I also have group projects in my in-person class, which generally do decrease AI usage. For my online class this summer, I’m having students also highlight/annotate in the actual sources to show me where they drew the reference from. I believe that I will do something similar next academic year for a larger writing project: students will need to highlight PDFs and upload those PDFs to a Google Drive for me to review. [I hate doing all of this, but the problem is truly that common.] The biggest remaining problem that I’m encountering regularly is hallucinated citations, even within a small set of materials.

Yet, there are a TON of AI-related classroom activities. For example, I have students use AI comic illustration generators to create comic strips about theories. They have to input the “right” information though – they can’t just ask the tool to make a comic strip about social identity theory. I also have a number of chatbots for student support on big assignments. Students can ask the chatbot for help or feedback. I’ve spent many months making those chatbots have the right tone and not “give away” the answers, while also being useful.

I also have a “low tech” classroom. Students can bring devices, but I let them know when they can bring them out – like for a group activity. I don’t want students to have devices out if I’m lecturing (briefly, because my classes are flipped), but I do record everything important and I post it online for students later that day. I tell students that if they really believe that they need to have a device for note taking, they don’t need formal accommodations or anything like that, but they do need to meet with me to talk through the pros and cons.

Sometimes people ask me if I think that AI is having an impact on student writing, critical thinking, etc. And I do think that it is, but not in entirely obvious ways AND it is difficult to separate this out from the fact that current undergraduates had much of high school during COVID. I do think that it is true that fewer students are doing the assigned readings before class and I do think that there is general malaise.

Syllabus policy

Artificial Intelligence and Large Language Model Policy: We know that artificial intelligence text generators like ChatGPT and other tools like Grammarly and Quillbot are powerful tools that are increasingly used by many. And while they can be incredibly useful for some tasks (creating lists of things, for example), they are not a replacement for critical thinking and writing. Artificial intelligence text generators and editors are “large language models” – they are trained to reproduce sequences of words, not to understand or explain anything. It is algorithmic linguistics. To illustrate, if you ask ChatGPT “The first person to walk on the moon was…” it responds with Neil Armstrong. But what is really going on is that you’re asking ChatGPT “Given the statistical distribution of words in the publicly available data in English that you know, what words are most likely to follow the sequence ‘the first person to walk on the moon was’?” and ChatGPT determines that the words that are most likely to follow are “Neil Armstrong.” It is not actually thinking, just predicting. Learning how to use artificial intelligence well is a skill that takes time to develop. Moreover, there are many drawbacks to using artificial intelligence text generators for assignments, proofreading, and editing.

Some of those limitations include: 

  • Artificial intelligence text generators like ChatGPT are sometimes wrong. If the tool gives you incorrect information and you use it on an assignment, you are held accountable for it. If the proofreading introduces terminology that is less precise than the terminology in course materials, or that is used differently than in course materials, you are held accountable for it.
  • There is also a drawback in using artificial intelligence tools like Grammarly or Quillbot to “proofread” or “edit” your original writing – it may change your text so much that it no longer reflects your original thought or it may use terminology incorrectly. 
  • There are drawbacks in using AI language translation tools. There may be misunderstandings and a lack of precision. This is true for students translating course materials as well as students translating their own work into English.
  • The text that artificial intelligence text generators provide you is derived from another human’s original writing and likely multiple other humans’ original writing. As such, there are intellectual property and plagiarism considerations.
  • Most, if not all, artificial intelligence text generators are not familiar with our materials or my lectures and, as such, will not draw from that material when generating answers. This will result in answers that are obviously not created by someone enrolled in the course. Your assignment is likely to be graded lower if you’re not using course material to construct your writing.
  • I spend a great deal of time and energy bringing together materials for students to engage with. When students use AI summarizing tools instead of reading/watching the assigned material, it is certain that some of the nuances of the materials will be missed. And this is likely to show in students’ assignments.
  • Answers written by artificial intelligence text generators are somewhat detectable with software and we will use the software to review answers that seem unusual. We will have to be cautious in our use of such tools, but if multiple detectors find that something is likely to have been written with AI, that will be used as evidence of misconduct.
  • AI is likely to produce “C” level work at best. For some things in life, “C” level is okay. But please be aware that as AI continues to develop and can do more and more tasks that humans used to do, you as a future employee and worker in the world will need to demonstrate that you can do a better job than AI. If you are using AI in this course to do the work for you, you’re not developing yourself to be BETTER than AI. You’re not learning skills or content that will matter. Consider AI-generated work as your new competition and that you need to do better work than that. Further, if AI can produce “C” level work circa 2018, very soon that will not be considered a passing grade. Instead of banning AI, instructors are going to “ban” all “C” level work (circa 2018). We’ve already seen that most instructors have raised their standards since AI became widely available. Currently, it is unlikely that even well crafted AI work will allow you to pass this course. Rubrics are designed so that AI-generated work is unlikely to get high marks.    
  • I have tried to design this course to help you develop yourself, your knowledge, and skills for a world in which AI will be doing more of the types of tasks that traditionally were done by recent university graduates in the workplace. AI will not be able to replace original thinking, problem solving, critical thinking, strategic thinking, emotional intelligence, ethical decision making, collaboration, and global/cultural awareness. Let’s work together to help prepare you for your future. 

It is okay for you to use artificial intelligence text generators in this course, BUT:

  • You must use them in a way that helps you learn, not hampers learning. Remember that these are tools to assist you in your coursework, not a replacement for your own learning of the material, critical thinking ability, and writing skills.
  • The only acceptable use of AI on assignments in COM 303 is for proofreading (like Grammarly or Quillbot). This should only be for simple grammar checks, not extensive rewriting, and absolutely not for generating original text. Using AI to write an answer in another language and translate it is also not within acceptable use for this course. 
  • Do not use AI to write original material such as Hypothesis annotations.
  • Tools like StudyBuddy or other techniques to “take pictures” of quiz questions or to get answers to quiz questions are 100% not allowed. 
  • It is acceptable to use AI in COM 303 to provide you with other explanations of concepts or organize your notes and there is no need to disclose these. However, if the AI gives you incorrect information and you use that incorrect information on an assignment, you will be held accountable for it.
  • Be transparent: If you used an AI tool for proofreading, you must include both your original writing and the AI version so that I may see both and determine if the answer that you submitted reflects your original thought. And I expect that you will include a short paragraph at the end of the assignment, or in the final 0-point question in the quiz/exam, that explains what you used the artificial intelligence tool for and why. (For example: “I used Grammarly to give me feedback on my sentence structure on question 6. English is my 3rd language and I like using AI as a proofreading tool.”) It is not required to disclose using AI for studying, but you can if you want to. (For example: “I read the book and listened to the lecture on measurement reliability and I didn’t fully understand it, so I asked ChatGPT to give me other examples which helped my understanding.” Or “I did not understand a term in the textbook and I asked ChatGPT to explain it to me.”)
  • If you are using artificial intelligence tools to help you in this class and you’re not doing well on assignments, I expect that you will reflect upon the role that the tool may play in your class performance and consider changing your use.
  • If artificial intelligence tools are used in ways that are nefarious or unacknowledged, you may be subject to the academic misconduct policies detailed earlier in the syllabus.
  • If there is unauthorized AI work in group assignments, ALL students in the group will be held accountable for the AI work and the associated outcomes, whether that be a reduced score or a formal misconduct report.

12 Jul

Summer 2024 AI policy statement

Oh what a few years it has been with AI. This is built off of my previous statement. But after reading Teaching with AI, I thought more about the authors’ discussion of AI producing C-level work and that the “new” standards should be better than AI. Those authors argue that instead of banning AI, we should be banning C-level work. This ties a bit to what I’ve discussed before about evolving standards.

Research methods class policy:

Artificial Intelligence and Large Language Model Policy

We know that artificial intelligence text generators like ChatGPT and other tools like Grammarly and Quillbot are powerful tools that are increasingly used by many. And while they can be incredibly useful for some tasks (creating lists of things, for example), they are not a replacement for critical thinking and writing. Artificial intelligence text generators and editors are “large language models” – they are trained to reproduce sequences of words, not to understand or explain anything. It is algorithmic linguistics. To illustrate, if you ask ChatGPT “The first person to walk on the moon was…” it responds with Neil Armstrong. But what is really going on is that you’re asking ChatGPT “Given the statistical distribution of words in the publicly available data in English that you know, what words are most likely to follow the sequence ‘the first person to walk on the moon was’?” and ChatGPT determines that the words that are most likely to follow are “Neil Armstrong.” It is not actually thinking, just predicting. Learning how to use artificial intelligence well is a skill that takes time to develop. Moreover, there are many drawbacks to using artificial intelligence text generators for assignments, quiz answers, proofreading, and editing.

Some of those limitations include: 

  • Artificial intelligence text generators like ChatGPT are sometimes wrong (this is sometimes described as “hallucinating”). (For example, for our sampling assignment, I had ChatGPT generate lists of Pokemon that can evolve and not evolve and it was wrong for 15% of them.) If the tool gives you incorrect information and you use it on an assignment, you are held accountable for it. If the proofreading introduces terminology that is less precise than the terminology in course materials, or that is used differently than in course materials, you are held accountable for it.
  • There is also a drawback in using artificial intelligence tools like Grammarly or Quillbot to “proofread” or “edit” your original writing – it may change your text so much that it no longer reflects your original thought or it may use terminology incorrectly. Further, in COM 382, you are not being evaluated on your writing, so there is no need to use extensive proofreading.
  • The text that artificial intelligence text generators provide you is derived from another human’s original writing and likely multiple other humans’ original writing. As such, there are intellectual property and plagiarism considerations.
  • Most, if not all, artificial intelligence text generators are not familiar with our textbook or my lectures and, as such, will not draw from that material when generating answers. This will result in answers that are obviously not created by someone enrolled in the course. Your assignment is likely to be graded lower if you’re not using course material to construct your writing. For example, AI does not understand the difference between measurement validity and study validity. AI does not understand the difference between ethics more broadly and research ethics.
  • Answers written by artificial intelligence text generators are somewhat detectable with software and we will use the software to review answers that seem unusual. We will have to be cautious in our use of such tools, but if multiple detectors find that something is likely to have been written with AI, that will be used as evidence of misconduct.
  • AI is likely to produce “C” level work at best. For some things in life, “C” level is okay. But please be aware that as AI continues to develop and can do more and more tasks that humans used to do, you as a future employee and worker in the world will need to demonstrate that you can do a better job than AI. If you are using AI in this course to do the work for you, you’re not developing yourself to be BETTER than AI. You’re not learning skills or content that will matter. Consider AI-generated work as your new competition and that you need to do better work than that. Further, if AI can produce “C” level work circa 2018, very soon that will not be considered a passing grade. Instead of banning AI, instructors are going to “ban” all “C” level work (circa 2018). We’ve already seen that most instructors have raised their standards since AI became widely available. Currently, it is unlikely that even well crafted AI work will allow you to pass this course. Rubrics are designed so that AI-generated work is unlikely to get high marks.    
  • I have tried to design this course to help you develop yourself, your knowledge, and skills for a world in which AI will be doing more of the types of tasks that traditionally were done by recent university graduates in the workplace. AI will not be able to replace original thinking, problem solving, critical thinking, strategic thinking, emotional intelligence, ethical decision making, collaboration, and global/cultural awareness. Let’s work together to help prepare you for your future. 

It is okay for you to use artificial intelligence text generators in this course, BUT:

  • You must use them in a way that helps you learn, not hampers learning. Remember that these are tools to assist you in your coursework, not a replacement for your own learning of the material, critical thinking ability, and writing skills.
  • The only acceptable use of AI on assignments (quizzes, tickets, etc.) in COM 382 is for proofreading (like Grammarly or Quillbot). This should only be for simple grammar checks, not extensive rewriting, and absolutely not for generating original text. And in COM 382 you are not being evaluated on your grammar, so we discourage this use, while acknowledging that some students want to use it.
  • Do not use AI to write original material such as Hypothesis annotations and quiz answers.
  • Tools like StudyBuddy or other techniques to “take pictures” of quiz questions or to get answers to quiz questions are 100% not allowed. 
  • It is acceptable to use AI in COM 382 to provide you with other explanations of concepts or organize your notes and there is no need to disclose these. However, if the AI gives you incorrect information and you use that incorrect information on an assignment, you will be held accountable for it.
  • Be transparent: If you used an AI tool for proofreading, you must include both your original writing and the AI version so that I may see both and determine if the answer that you submitted reflects your original thought. And I expect that you will include a short paragraph at the end of the assignment, or in the final 0-point question in the quiz/exam, that explains what you used the artificial intelligence tool for and why. (For example: “I used Grammarly to give me feedback on my sentence structure on question 6. English is my 3rd language and I like using AI as a proofreading tool.”) It is not required to disclose using AI for studying, but you can if you want to. (For example: “I read the book and listened to the lecture on measurement reliability and I didn’t fully understand it, so I asked ChatGPT to give me other examples which helped my understanding.” Or “I did not understand a term in the textbook and I asked ChatGPT to explain it to me.”)
  • If you are using artificial intelligence tools to help you in this class and you’re not doing well on assignments, I expect that you will reflect upon the role that the tool may play in your class performance and consider changing your use.
  • If artificial intelligence tools are used in ways that are nefarious or unacknowledged, you may be subject to the academic misconduct policies detailed earlier in the syllabus. 

Then within the course, there are module-level learning objectives, and I’ve added a list of specific “AI-proof” skills to those learning objectives. For example…

Module 3 Learning Objectives

1. Define measurement in the context of social scientific research and explain its importance.

2. Differentiate between key terms such as theory, concepts, variables, attributes, constants, hypotheses, and observations.

3. Explain the difference between independent and dependent variables and identify them in research scenarios.

4. Describe the processes of conceptualization and operationalization, and apply them to research examples.

5. Distinguish between manifest and latent constructs, providing examples of each.

6. Identify and explain the four levels of measurement (nominal, ordinal, interval, and ratio), and classify variables according to these levels.

7. Compare and contrast categorical and continuous variables, providing examples of each.

8. Define measurement validity and reliability, and explain their importance in research.

9. Identify and describe different types of measurement validity (face, content, criterion-related, construct, convergent, and discriminant validity).

10. Recognize and explain various methods for assessing measurement reliability (test-retest, split-half, inter-coder reliability).

11. Analyze the tension between measurement validity and reliability, and discuss strategies for balancing them in research design.

12. Evaluate the strengths and weaknesses of different measurement approaches for studying diverse populations, including marginalized groups.

13. Apply principles of inclusive measurement practices to create more representative and culturally sensitive research instruments.

14. Identify potential sources of random and systematic error in measurement and suggest ways to minimize them.

15. Critically assess the implications of high and low measurement reliability and validity combinations in research scenarios.

Regarding helping students become “better” than AI, here is my syllabus statement: I have tried to design this course to help you develop yourself, your knowledge, and skills for a world in which AI will be doing more of the types of tasks that traditionally were done by recent university graduates in the workplace. AI will not be able to replace original thinking, problem solving, critical thinking, strategic thinking, emotional intelligence, ethical decision making, collaboration, and global/cultural awareness. Let’s work together to help prepare you for your future.

Module 3 contributes to developing these skills:

  1. Original thinking:
    • Students learn to create conceptual definitions, which requires synthesizing information and developing unique understandings of complex concepts.
    • The process of operationalization encourages students to think creatively about how to measure abstract concepts.
  2. Problem solving:
    • Students learn to tackle the challenge of translating abstract concepts into measurable variables.
    • They must find solutions to balance validity and reliability in measurement.
  3. Critical thinking:
    • The module encourages students to critically evaluate different types of measurement and their appropriateness for various research scenarios.
    • Students learn to assess the strengths and weaknesses of different measurement approaches.
  4. Strategic thinking:
    • Students learn to strategically choose between different levels of measurement based on research goals and statistical analysis requirements.
    • They must think strategically about how to balance validity and reliability in research design.
  5. Emotional intelligence:
    • The discussion on inclusive measurement practices for marginalized groups helps students develop empathy and cultural sensitivity.
    • Understanding the complexities of measuring social and psychological constructs requires emotional intelligence.
  6. Ethical decision making:
    • The module addresses ethical considerations in measurement, particularly regarding inclusive practices and representation of diverse populations.
    • Students learn to make ethical decisions about how to operationalize concepts in ways that are fair and representative.
  7. Collaboration:
    • The emphasis on established measures and building upon previous research underscores the collaborative nature of scientific inquiry.
    • Group activities and discussions encourage collaborative learning and problem-solving.
  8. Global/cultural awareness:
    • The module highlights the importance of considering cultural context in measurement, especially when studying diverse populations.
    • Students learn to be aware of potential biases and limitations in measurement across different cultural contexts.

By learning these complex processes of conceptualization and operationalization, students develop skills that go beyond simple information retrieval or basic analysis. These skills require nuanced understanding, contextual awareness, and creative problem-solving – areas where human intelligence still far surpasses AI capabilities. This module prepares students to engage in the type of high-level thinking and decision-making that will remain valuable and uniquely human in an AI-augmented workplace.


21 May

AI in the university classroom

Ah, the winter of 2023 – after ChatGPT publicly launched in November 2022, university instructors everywhere had a collective freakout when everyone realized that students could engage in all sorts of misconduct in an entirely new way. Certainly academic misconduct was always a part of our jobs, but this was different. AI-facilitated misconduct was more sophisticated and obviously far easier for students to use. The writing that AI can generate seems original and of decent quality.

Even now, spring of 2024, whenever instructors gather – online or in-person – the discussion quickly turns to the issue of AI in student work.

Like many others, in winter 2023, I responded to this by engaging with the various AI detection tools (which we now know are unreliable). I was spending hours each week copying and pasting text into detection tools and I was becoming angrier by the minute.

I also revised my assignments and activities to be more “AI-proof.” This was and continues to be incredibly time consuming. Good assignments and activities take time to develop, and typically I need to offer them a few times before finalizing the instructions and expectations. Further, this was just following the COVID-19 pandemic, when all of us had already spent a great deal of time changing and creating new assignments and activities. And while “AI-proof” optimally means that the requirements are more complex, so as to evade AI, it often just means that things are “harder.”

The other major outcome of the assignment revision was that it changed the classroom vibe and the grade distribution. Students who always did well on assignments before continued to do well on assignments that became more complex; many of them even appreciated the more complex assignments. Then there were the students who, in the before times, did poorly on the assignments. Some of these students are tempted to use AI to complete their assignments, so even with the “AI-proof,” “more complex” assignments, with AI they can probably do decently, or at least better than they would have without AI. Then there are the students that I believe are the most upset about this entire situation – the students who did okay in the before times. By exerting only a bit of effort, they could get a B-/C+ or so on an assignment and then go on with their lives. The “AI-proof,” “more complex” assignments mean that these students are no longer able to get a B-/C+ with a little bit of effort. They are getting barely-passing grades due to the increased complexity. Those students are very angry. A final category of students are those who perhaps never considered engaging in serious misconduct in the before times, but AI is so tempting and appears to do a decent job, so they have adopted it.

Similarly, on multiple choice quizzes or exams, instructors have re-written questions to be harder to answer with AI – especially AI browser plug-ins like “StudyBuddy” that can answer questions even when there is a lockdown browser (which my university does not use, but still). Those previous B-/C+ students used to be able to do okay on a multiple choice quiz or exam in the past, but now with more complex questions, they are not passing anymore. And again, they are angry.

Not everyone is creating more “AI-proof” assignments however, which may engender divisions between instructors. Also, as AI advances, what “AI-proof” means also must advance. For example, I’ve downloaded an AI-based app whereby I can point my smartphone camera at my teenager’s algebra homework problem and the app explains how to solve the problem in 4 different ways, in seconds. This has been a huge asset for us parents to help with math homework, but I also presume that a student could use the same app to cheat.

Another strategy that some people use is creating assignment rubrics that somehow can “punish” or at least “not reward” AI-generated text. Related to this is adding requirements to assignments that are harder for AI to do well, like asking for direct quotes and page numbers, or requiring that students engage with a specific number of the assigned materials. While this seems to “work” more or less, if the goal is to ensure that students engage in authentic writing, it does not entirely resolve the issue of students submitting AI-generated work.

Some are requiring access to a document’s history, and there are tools that can analyze a Google Doc history for actual typing. And I have been using Sherpa, an AI-based interview tool whereby students upload their work and are interviewed about it (or are interviewed about material that you’ve assigned). I have experimented with having Sherpa interviews be worth more or fewer points in the class, and in general I’ve really enjoyed it as a tool. However, both the document history and the video interview “solutions” are difficult to manage in a larger class.

Many people are moving “back” to in-class writing with blue books or paper exams. This also introduces labor in terms of grading and we lose a lot of the efficiency and tools that digital assignments afforded. I really liked being able to tell the course management system to shuffle questions and answers, compared to me making 4 different versions of an exam, copying and pasting questions, and managing a key.

This is all to say – my primary job is not to be a writing or composition instructor. I have options to not assign as much or any written work in my courses. There are writing and composition instructors who are developing new ways to address AI in their classrooms and I look forward to seeing what they develop and seeing how that can be applied in other courses.

But it is important to consider the bigger picture. I have thought about this quite a bit and have discussed it with many trusted colleagues. One such colleague told me that they are trying to entirely focus on the students who are authentically engaging with the materials. Another colleague told me that after the first week or two, if a student continues to submit AI answers, they no longer provide any feedback, just a grade. Some others say that this AI panic will pass just like the Wikipedia panic or the calculator panic before it. I am trying to get to a better place emotionally about this. I have spent many hours being angry about AI misconduct and I will never get those hours back. I have tried to unpack why it feels so offensive to me. Is it because I spend so much time trying to provide them with a great educational experience and an AI generated response seems to spit at that? Is it because I worry about the value of a course, an education, and even a grade and I fear that those ideas will become meaningless if many students do not actually do the work?

One of my responses has been to embrace AI in my own life and work. Over the last year or so, I’ve developed a lot of AI tools and tricks, treating AI as my personal assistant, especially for tedious tasks. I believe that I have “claimed back” many hours because of AI, so that gives me some peace.

I think that it is also important to acknowledge that AI is not going away. Last week I listened to a podcast with José Bowen, a noted AI education expert. I’ve really enjoyed his co-authored book “Teaching with AI.” But in the podcast, he said this about the future of work: “A senior radiologist still needs to check your scan. But but the junior jobs, the intern jobs, the rough draft of the press release, all of those sorts of things are no longer gonna be jobs. So we have to get our students to do the part that we value with critical thinking, right, asking better questions, making sure the output is correct, and and making sure the output is excellent, not just okay or average. And so I think AI has changed what we can accept as average and mediocre quality.” Two important things here: one, it is 100% true that a lot of what “junior” jobs are currently doing is going to be replaced by AI, and in a very short period of time. So our current university students are going to be entering a workforce where jobs that they would previously be qualified for will now be done by AI. Two, what we understand as average and mediocre is going to change. He gave this example earlier in the podcast with regard to spelling and spell checkers: “Many of us started our careers, we were still grading spelling or giving at least it was a line on the rubric. And now I expect perfect spelling because if there’s a spelling mistake, I just say no, use your spell checker. All those little red lines, fix them, and then resubmit. Right? I’m not gonna accept this because in the workplace, right, it’s not gonna no one’s gonna accept your spelling errors. So the technology changed the standard that we accepted.”

Following José, I really believe that we do need to prepare our students for the AI working world that they will enter. And I’m committed to bringing more AI into my classroom activities and modeling positive and ethical use. For example, in an assignment where students have to design a poster for a middle school classroom, I instruct students to ask AI if the wording used on their poster is appropriate and understandable by most 7th grade students. I have designed an assignment where students receive instant feedback on their conceptualizations and operationalizations. This has significantly improved the assignment, as they are able to receive personalized feedback before they submit the assignment to me.

But I am still wading through the challenges of learning assessment, and how to feel okay about the fact that some students are going to use AI to do the assignment for them, leaving me to grade something that was not authentically created by the student. I think that my colleague who has turned their focus to the students who are engaging authentically probably has the right idea. But I also worry about my future with a surgeon who used AI instead of learning something in medical school, right? In the meantime, I am going to try to engage in more meditative thinking on this, with the time that I’ve gained back with AI tools.

21 Aug

AI/LLM policy statements

This is what it feels like to be teaching sometimes: me, teaching to a room of robots (made with Bing Image Creator, powered by DALL-E).

It seems that all I can think about lately is AI in the classroom. I wanted to share my summer 2023 AI/LLM policy statement for my class. Feel free to use with attribution.

Important to note:

  • This is for a research methods class with very little writing, so if your class has writing in it, YMMV.
  • I’m currently teaching remote and asynchronous classes, so I don’t have the same options for in-class assignments as others do.
  • I have MANY other tools to discourage AI/LLM use in my courses. (Hypothesis social annotations, video reactions, etc.)
  • I give FREQUENT reminders to students about this policy.
  • This is a work-in-progress. I took a summer workshop on AI/LLM in the classroom that helped me refine it. I read through Aleksandra Urman’s work on this. I am in dozens of AI/LLM-in-the-classroom Facebook groups and subreddits. All of this contributed.
  • It is ESSENTIAL that your policy aligns with your university policy and what your student conduct group’s policy is as well as how they react to such cases.
  • AI detection software is quite flawed. A single tool for detection is insufficient as evidence of AI. And these tools are notorious for flagging non-native English writing as AI. More on this here.

Artificial Intelligence and Large Language Model Policy

We know that artificial intelligence text generators like ChatGPT and other tools like Grammarly and Quillbot are powerful tools that are increasingly used by many. And while they can be incredibly useful for some tasks (creating lists of things, for example), they are not a replacement for critical thinking and writing. Artificial intelligence text generators and editors are “large language models” – they are trained to reproduce sequences of words, not to understand or explain anything. It is algorithmic linguistics. To illustrate, if you ask ChatGPT “The first person to walk on the moon was…” it responds with Neil Armstrong. But what is really going on is that you’re asking ChatGPT “Given the statistical distribution of words in the publicly available data in English that you know, what words are most likely to follow the sequence ‘the first person to walk on the moon was’?” and ChatGPT determines that the words most likely to follow are “Neil Armstrong.” It is not actually thinking, just predicting. Learning how to use artificial intelligence well is a skill that takes time to develop. Moreover, there are many drawbacks to using artificial intelligence text generators for assignments, quiz answers, proofreading, and editing.

Some of those limitations include: 

  • Artificial intelligence text generators like ChatGPT are sometimes wrong. (For example, for our sampling assignment, I had ChatGPT generate lists of Pokemon that can evolve and not evolve and it was wrong for 15% of them.) If the tool gives you incorrect information and you use it on an assignment, you are held accountable for it. If the proofreading introduces terminology that is less precise than the terminology in the course materials, or uses it differently than the course materials do, you are held accountable for it.
  • There is also a drawback in using artificial intelligence tools like Grammarly or Quillbot to “proofread” or “edit” your original writing – it may change your text so much that it no longer reflects your original thought or it may use terminology incorrectly. Further, in COM 382, you are not being evaluated on your writing, so there is no need to use extensive proofreading.
  • The text that artificial intelligence text generators provide you is derived from another human’s original writing and likely multiple other humans’ original writing. As such, there are intellectual property and plagiarism considerations.
  • Most, if not all, artificial intelligence text generators are not familiar with our textbook or my lectures and as such, will not draw from that material when generating answers. This will result in answers that will be obviously not created by someone enrolled in the course. It is likely that your assignment will not be graded as well if you’re not using course material to construct your writing. For example, AI does not understand the difference between measurement validity and study validity. AI does not understand the difference between ethics more broadly and research ethics. 
  • Answers written by artificial intelligence text generators are detectable with software and we will use the software to review answers that seem unusual. We will have to be cautious in our use of such tools, but if multiple detectors find that something is likely to have been written with AI, that will be used as evidence of misconduct.

It is okay for you to use artificial intelligence text generators in this course, BUT:

  • You must use them in a way that helps you learn, not hampers learning. Remember that these are tools to assist you in your coursework, not a replacement for your own learning of the material, critical thinking ability, and writing skills.
  • The only acceptable use of AI on assignments (quizzes, tickets, etc.) in COM 382 is for proofreading (like Grammarly or Quillbot). This should only be for simple grammar checks, not extensive rewriting. And in COM 382 you are not being evaluated on your grammar, so we discourage this use, while acknowledging that some students want to use it.
  • It is acceptable to use AI in COM 382 to provide you with other explanations of concepts or organize your notes and there is no need to disclose these. However, if the AI gives you incorrect information and you use that incorrect information on an assignment, you will be held accountable for it.
  • Be transparent: If you used an AI tool for proofreading, you must include both your original writing and the AI-version so that I may see both and determine if the answer that you submitted reflects your original thought. And I expect that you will include a short paragraph at the end of the assignment or in the final 0 point question in the quiz/exam that explains what you used the artificial intelligence tool for and why. (For example: “I used Grammarly to give me feedback on my sentence structure on question 6. English is my 3rd language and I like using AI as a proofreading tool.” It is not required to disclose using AI for studying, but you can if you want to: “I read the book and listened to the lecture on measurement reliability and I didn’t fully understand it, so I asked ChatGPT to give me other examples which helped my understanding.” Or “I did not understand a term in the textbook and I asked ChatGPT to explain it to me.”)
  • If you are using artificial intelligence tools to help you in this class and you’re not doing well on assignments, I expect that you will reflect upon the role that the tool may play in your class performance and consider changing your use.
  • If artificial intelligence tools are used in ways that are nefarious or unacknowledged, you may be subject to the academic misconduct policies detailed earlier in the syllabus.

02 Feb

Wordle me this

Although my research is primarily about technology and inequality in Armenia and Azerbaijan, I do dabble occasionally in studying games. Also all of my teaching, undergrad and grad, is on broader technology and society, so I keep up with the research.

I got into Wordle like many others did in January 2022 and I tweeted about it. A tech journalist saw me tweeting about it and contacted me for an email interview. I replied and gave some thoughts. This has turned into me being interviewed about Wordle quite a bit in the past few weeks. I’ll archive them here.

My main points:

  • Wordle is really easy to pick up and get started with (no app, no login, etc.).
  • Being forced to only play once a day on the official Wordle page is nice compared to other social media “breaks” where it is easy to get sucked in.
  • It allows for a performance of being “smart” or “intellectual” by sharing results.
  • During the pandemic in particular, people are really tired and don’t have a ton of bandwidth to interact with others, but sharing Wordle results allows people to be social with very little labor.
  • One can feel part of a community of fellow Wordle players or part of the “in-crowd” or at least the “intellectual” crowd
  • There are now clones in many languages and to me, this is getting very interesting – folks are playing in a second language, folks are playing in their heritage language.
  • There are people trying to figure out the best starter word, which is fun.
  • There is already backlash about sharing results, and I suspect that the sharing of results will die out soon.
  • Now that the New York Times has bought Wordle, eventually they will put it behind their games paywall, which is currently $5/month. People online are annoyed about paying for it, but IMHO, NYT Games is probably the best home for it. Their existing games are really nice and Wordle fits in well. And it is nice that the inventor got paid.

What you should know: Wordle. The Hawk Newspaper. February 8, 2022.

Wordle and the future of the internet’s favorite word game. NPR’s On Point [radio interview]. February 4, 2022.

What Makes Wordle So Popular? Psychologists Explain Its Appeal. GameSpot. February 1, 2022.

Wordle. KNX In Depth [radio interview]. February 1, 2022.

Why Is Everyone Suddenly Playing Wordle? Psychologists Explain. Inc. January 30, 2022.

Wordle is a deceptively easy game for burnt-out pandemic shut-ins. Vox. January 20, 2022.

15 Dec

Questions I’m asked as a recommendation writer

These are questions that are asked of me regarding self-funded professional master’s programs in marketing and communication. I’m skipping the questions regarding how well I know the person, etc.

These questions are to be answered in addition to a full letter.

Program A’s questions

  1. Please rate the applicant in the following areas:
  • Motivation
  • Analytical skills
  • Intellectual capacity
  • Communication skills
  • Interpersonal skills
  2. What do you consider the applicant’s strengths and/or weaknesses?
  3. Describe a specific situation where you have observed the applicant using critical thinking skills or applied a new skill.
  4. How would you describe the applicant’s leadership skills?

Program B’s questions

  1. Please list three to five adjectives describing the applicant’s strengths.
  2. Please compare the applicant’s performance to that of his or her peers.
  3. What does the applicant do best?
  4. If you were giving feedback to the applicant regarding his or her professional performance and personal effectiveness, in what areas would you suggest he or she work to improve?
  5. How does the applicant accept constructive criticism or handle conflicts?
  6. How effective are the applicant’s interpersonal skills in the workplace?
  7. On the below scale, please rate the applicant’s individual vs. team orientation (1 = most effective as an individual contributor; 5 = focused exclusively on the team). Please elaborate on your rating:
  8. Please give an example of how the applicant has demonstrated leadership.
  9. Is there anything else you feel we should know?

Please evaluate the applicant by entering the following quality ratings for the traits below: Truly exceptional (Top 2%), Outstanding (Top 10%), Very good (Top 20%), Good (Top third), Average (Middle third), Poor (Bottom third).

  • Intellectual Ability
  • Maturity
  • Quantitative Ability
  • Analytical Skills
  • Poise/Professionalism
  • Initiative
  • Personal integrity/ethics
  • Interpersonal skills/ability to work well with others
  • Sense of humor
  • Verbal English Communication Skills
  • Written English communication skills
  • Self confidence
  • Leadership ability
  • Future managerial or business success
  • Please provide us with your overall impression of the applicant

Program C’s questions

What are the applicant’s chief weaknesses or areas of growth?

Rating 0-5

  • Integrity
  • Interpersonal Relations
  • Oral Communications
  • Self-Awareness
  • Analytical Ability
  • Research Ability
  • Initiative
  • Potential for Success in Chosen Field
  • Maturity
  • Self-Confidence
  • Written Communication
  • Overall Evaluation

Program D’s questions

Rate 0-5

  • Communication and writing skills
  • Intellectual curiosity, originality and independence in thinking
  • Initiative
  • Interpersonal skills, ability to work well with others
  • Leadership ability and potential
  • Academic and analytical ability
  • Problem-solving ability
  • Flexibility, adaptability, willingness to learn new skills
  • Organizational ability
  • Maturity and professionalism
  • Research and reporting skills
  • Integrity

Program E’s questions

Rating 0-5

  • Academic performance
  • Intellectual ability
  • Written communication skills
  • Oral communication skills, including willingness to contribute valuably to discussion/debate where applicable
  • Analytical skills, including research and critical thinking skills where applicable

20 Aug

Manuscript trimming tips

Word and page limits exist for a reason, but they present a challenge. These are my top tips for trimming manuscripts down.

  1. Cut references

I know that this can be really difficult but these take up a lot of space. Hopefully you’re using a reference manager already, so cutting is a bit easier.

What I do is go to my references and search for the surname in the text.

I then will make a comment if it is used only once. I usually add something like “But it is a meta-analysis” or “But it is in the journal that we’re submitting to,” or sometimes I’ll delete it immediately. These one-offs can be really difficult. I will also ask myself if it is at all possible that some other reference that I used says the same thing, or if I absolutely need to cite that paper only once.

I’ll also consider if I’m citing the same person/team multiple times in the same citation and ask myself if I have to do so. Perhaps one of the citations is the key theoretical piece, so I can’t avoid it. But if it is a smaller finding that they had in both a 2012 and a 2016 paper and perhaps the 2016 paper is in a better venue or is more widely cited, I’ll cut the 2012 reference.

Similarly with the same person, I’ll do a scan for them throughout the paper. For example, if in the entire paper I cited B, T, and R 2012, T and B 2009, and R, T, and B 2016 each 4 times, but I only cited R, B, and T 2015 once, I’ll re-skim the 2015 paper and ask myself if I absolutely need to include it or if the finding was in one of the other papers as well.

Thinking about the venue is important too. For example, let’s say I’m working on a paper about walruses playing board games with an outcome of better walrus solidarity. And I’m submitting this paper to a journal that is really focused on board game playing and less on solidarity or walruses. While I cannot remove all of the citations that tie back to walrus or solidarity literature, I should prioritize the board game playing literature as that is the journal’s audience and reviewers will come from that field. But I do always have older versions of the paper that have all of the references in it just in case the reviewers ask why there isn’t more theorizing and literature from Walrus Studies.

Finally, the most heartbreaking reference cuts are studies with too many authors and/or really long titles. This is presuming that references count towards the word count.
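To make the surname-scanning step above concrete, here is a rough sketch of how it could be automated (assuming a plain-text manuscript with author-year citations; the function name and sample text are my own invention, not a tool I actually use):

```python
import re
from collections import Counter

def single_use_citations(manuscript_text, surnames):
    """Count in-text mentions of each cited surname and return the one-offs."""
    counts = Counter()
    for name in surnames:
        # Match the surname as a whole word, e.g. "Smith" in "(Smith, 2012)"
        counts[name] = len(re.findall(r"\b" + re.escape(name) + r"\b", manuscript_text))
    return [name for name, n in counts.items() if n == 1]

text = "As Smith (2012) argued... (see also Jones, 2016; Smith, 2015)."
print(single_use_citations(text, ["Smith", "Jones"]))  # ['Jones']
```

The one-off surnames it returns are the candidates for the “do I absolutely need this?” review, not automatic cuts.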

2. Cut words

Do searches for common adverbs. Delete transitional words (this is painful for me).

3. Merge words

Sometimes hyphens are appropriate and can save space.

4. Wordiness

You’re probably being too wordy. Try to read the paragraph out loud to yourself or have your computer read it to you. It can be easier to hear the problems. I also sometimes do better when I’m editing a printed version of the paper versus on a screen. I think that taking a break from the manuscript also helps.

5. Text -> Table

Sometimes you can turn text into tables. This will reduce words and if the journal doesn’t count tables toward the word count, save you a ton of words. However, if page count is the issue, tables sometimes can be longer than text.

6. If qualitative/interview based, look for redundant participant quotes

It is easy to fall in love with a great direct quote or example from a participant. Sometimes they’re just so delicious and represent the theme so well. But I find that a lot of people will have 3 or even more examples for a particular theme. And that is okay, but one must also remember that some of them have to be cut eventually.

Ask yourself if the quote is absolutely necessary to illustrate the theme or is so perfect that it really sells the theme in a sincere way. Look at each quote in comparison with every other quote within the theme and ask yourself if they both need to be there.

Also keep a table of quote/example counts by participant. Sometimes some participants are chattier or articulate themselves better and we lean more heavily on their quotes. It is important to have a count at the end so you don’t accidentally have many more quotes from a few participants. This is not to say to purposefully manipulate your quote choices or include some participants artificially. Rather, seeing that you’re already quite heavy with quotes from “Alice” can help you make decisions between two quotes more easily.
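The quote-count table above is easy to keep programmatically; here is a minimal sketch (the participant pseudonyms and themes are invented for illustration):

```python
from collections import Counter

# Hypothetical list of (participant, theme) pairs, one per quote used in the draft
quotes = [
    ("Alice", "solidarity"), ("Alice", "play"), ("Alice", "solidarity"),
    ("Bob", "play"), ("Carol", "solidarity"),
]

# Tally how many quotes come from each participant, heaviest first
by_participant = Counter(participant for participant, _ in quotes)
for participant, n in by_participant.most_common():
    print(f"{participant}: {n} quote(s)")
```

Seeing “Alice: 3” at the top of such a tally is exactly the signal that can tip a decision between two otherwise equally good quotes.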

7. Redundancy in the findings section

Sometimes we have a fairly complicated framework and we need to remind our reader what the conceptual definition of a particular theme was. However, this does take up a lot of space. This is another one that is painful for me.

8. Could this be two papers?

Sometimes there is so much going on that you can split the manuscript into two papers. Honestly, this happens to me about 75% of the time. This does require work to ensure that the theoretical scaffolding is different and that you’re not reusing findings. However, in some cases it might make sense to divide.

9. Have an editing partner

Certainly if there are multiple authors in a study, someone else can look at the manuscript. But you may also be able to have a friend with whom you trade editing/trimming tasks.

10. Check your conclusion/discussion

Sometimes we get a little bit freewheeling at the end of the manuscript. This can be a place where entire sections could be removed.

25 Mar

Facebook in Armenia, March 2020

It has been a while since I last blogged about Facebook use in the Caucasus. Again, here is a guide to how I get these data. Click on the tags for previous rates: here is September 2017, here is May 2018, and here is December 2018.

As of March 2020, there are about 1,500,000 Facebook users in Armenia, according to Facebook. That is 50% of the total population, and 46% of the population over age 14 (Facebook technically isn’t available to those under 13.) There is a bit of growth since December when Facebook ads estimated 1,400,000 users (47% of the population).

As far as gender, 50% of the total male population, or 64% of males over age 14, are on Facebook, and 53% of the total female population, or 49% of the female population over age 14, are on Facebook. So there are some gender differences, but probably within the margin of error.

Just looking at the 15-24 year olds, 82% of them are on the site (a drop from two years ago; I suspect people have moved to Instagram): 82% of young men and 87% of young women.

Some trends to look for – as in the rest of the world, young people are moving more toward Instagram. Everyone is moving toward WhatsApp and other private messaging services.

25 Mar

2017 Internet access in Armenia

With the move toward online learning all over the world, someone asked me about Internet access in Armenia. The most recent publicly available data that we have is from the 2017 Caucasus Barometer. Here are a few relevant statistics. Please note that the survey respondents are a mix of household answers (like owning a computer) and individual answers (frequency of Internet use), so we are making some methodological leaps in claiming that this would reflect the access that young people would have.

It is difficult to ascertain the presence of children in the household using the Caucasus Barometer online analysis tool. When I have some time, I will subtract the number of adults in the household from the total number of household residents to create a new variable, “Number of children,” and use it to re-run all of the analyses described below.
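The derived variable is just a subtraction per household record; a minimal sketch (the field names and values here are hypothetical, since the actual Caucasus Barometer variable names differ):

```python
# Hypothetical household records from a survey extract
households = [
    {"residents": 4, "adults": 2},
    {"residents": 2, "adults": 2},
    {"residents": 5, "adults": 3},
]

# Derived variable: number of children = total residents minus adults
for hh in households:
    hh["children"] = hh["residents"] - hh["adults"]

print([hh["children"] for hh in households])  # [2, 0, 2]
```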

Another caveat – in homes across the world, adults working from home are having to share their technology with their children. There may also be greater demands on home Internet access and adult work use becomes prioritized over children’s.

Analysis

Overall, 29% of Armenian adults never access the Internet. This has remained fairly stable for the past few years. The 2017 CB did not ask why people did not use the Internet, but in previous years, the answers varied and were not all tied to resource access issues.

Home Internet access is far more available in the capital city, although mobile Internet (and nearly two-thirds of Armenians have mobile Internet) certainly bridges that gap for many households.

Nonetheless, in 2017, over a third of rural respondents never accessed the Internet.

Mobile Internet does vary a bit by urbanness. Two-thirds or nearly two-thirds of Yerevan residents and regional urban center residents have mobile Internet, while a little over half of rural residents do.

While mobile phones have come a long way (and nearly all Armenians have owned a mobile phone for over a decade), no one can deny that some activities are conducted more easily using a personal computer. As of 2017 58% of households had a computer. This does not mean that people do not have access to computers at cafes, work, or school. However, in terms of considering distance learning or working from home, the lack of a computer may be a barrier for some.

Urbanness has always been an important part of the Armenian digital divide story. Personal computer ownership is far higher in the capital (67%) than in regional urban environments (56%) or rural areas (51%).

11 Dec

Facebook in Azerbaijan, December 2019

It has been quite a while since I last blogged about Facebook use in the Caucasus. Again, here is a guide to how I get these data. Click on the tags for previous rates.

According to Facebook, as of December 2019, around 3,300,000 Azerbaijanis, about 35% of the total population, or more accurately, 32% of the population over age 14, are on Facebook.

Over half of all Azerbaijani men (over age 14) are on Facebook (well, 61%) and 29% of Azerbaijani women (over age 14) are on Facebook. This has been the trend for as long as I’ve been tracking this.

Looking at just youth, about 44% of Azerbaijanis ages 15-24 use Facebook (this is a drop from last year!). 60% of males that age and 27% of females that age.

As always, these numbers are to be taken with a grain of salt. This is information from Facebook ads.