12 Jul

Summer 2024 AI policy statement

Oh, what a few years it has been with AI. This builds on my previous statement. But after reading Teaching with AI, I thought more about the authors’ discussion of AI producing C-level work and their argument that the “new” standard should be doing better than AI. Those authors argue that instead of banning AI, we should be banning C-level work. This ties a bit to what I’ve discussed before about evolving standards.

Research methods class policy:

Artificial Intelligence and Large Language Model Policy

We know that artificial intelligence text generators like ChatGPT, along with tools like Grammarly and Quillbot, are powerful and increasingly widely used. And while they can be incredibly useful for some tasks (creating lists of things, for example), they are not a replacement for critical thinking and writing. Artificial intelligence text generators and editors are “large language models” – they are trained to reproduce sequences of words, not to understand or explain anything. It is algorithmic linguistics. To illustrate, if you ask ChatGPT “The first person to walk on the moon was…” it responds with Neil Armstrong. But what is really going on is that you’re asking ChatGPT “Given the statistical distribution of words in the publicly available English-language data that you know, what words are most likely to follow the sequence ‘the first person to walk on the moon was’?” and ChatGPT determines that the words most likely to follow are “Neil Armstrong.” It is not actually thinking, just predicting. Learning how to use artificial intelligence well is a skill that takes time to develop. Moreover, there are many drawbacks to using artificial intelligence text generators for assignments, quiz answers, proofreading, and editing.
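To make the “just predicting” point concrete, here is a toy illustration of next-word prediction by counting word sequences. This is nothing like ChatGPT’s actual implementation (real large language models are neural networks trained on billions of tokens, and the three-sentence corpus below is invented for illustration), but the core idea of choosing a statistically likely continuation, rather than consulting any understanding, is the same:

```python
from collections import Counter

# A toy next-word "predictor": count which word follows a given phrase
# in a tiny "training" corpus, then return the most frequent follower.
corpus = [
    "the first person to walk on the moon was neil armstrong",
    "historians agree the first person to walk on the moon was neil armstrong",
    "the first person to walk on the moon was an american astronaut",
]

def predict_next(prompt: str) -> str:
    """Return the word that most often follows `prompt` in the corpus."""
    prompt_words = prompt.lower().split()
    n = len(prompt_words)
    counts = Counter()
    for sentence in corpus:
        words = sentence.split()
        for i in range(len(words) - n):
            if words[i:i + n] == prompt_words:
                counts[words[i + n]] += 1
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("the first person to walk on the moon was"))  # prints: neil
```

The predictor outputs “neil” only because that word followed the prompt most often in its data; it has no concept of the moon or of Neil Armstrong, which is exactly the distinction the policy is drawing.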

Some of those limitations include: 

  • Artificial intelligence text generators like ChatGPT are sometimes wrong (this is sometimes described as “hallucinating”). (For example, for our sampling assignment, I had ChatGPT generate lists of Pokémon that can and cannot evolve, and it was wrong for 15% of them.) If the tool gives you incorrect information and you use it on an assignment, you are held accountable for it. If the proofreading introduces terminology that is less precise than the terminology in course materials, or uses it differently than the course materials do, you are held accountable for it.
  • There is also a drawback in using artificial intelligence tools like Grammarly or Quillbot to “proofread” or “edit” your original writing – it may change your text so much that it no longer reflects your original thought or it may use terminology incorrectly. Further, in COM 382, you are not being evaluated on your writing, so there is no need to use extensive proofreading.
  • The text that artificial intelligence text generators provide you is derived from another human’s original writing and likely multiple other humans’ original writing. As such, there are intellectual property and plagiarism considerations.
  • Most, if not all, artificial intelligence text generators are not familiar with our textbook or my lectures and, as such, will not draw from that material when generating answers. This will result in answers that were obviously not created by someone enrolled in the course. Your assignment is not likely to be graded as well if you’re not using course material to construct your writing. For example, AI does not understand the difference between measurement validity and study validity. AI does not understand the difference between ethics more broadly and research ethics.
  • Answers written by artificial intelligence text generators are somewhat detectable with software, and we will use such software to review answers that seem unusual. We will have to be cautious in our use of these tools, but if multiple detectors find that something is likely to have been written with AI, that will be used as evidence of misconduct.
  • AI is likely to produce “C”-level work at best. For some things in life, “C”-level is okay. But please be aware that as AI continues to develop and can do more and more tasks that humans used to do, you, as a future employee and worker in the world, will need to demonstrate that you can do a better job than AI. If you are using AI in this course to do the work for you, you’re not developing yourself to be BETTER than AI. You’re not learning skills or content that will matter. Consider AI-generated work your new competition: you need to do better work than that. Further, if AI can produce “C”-level work (circa 2018), very soon that will not be considered a passing grade. Instead of banning AI, instructors are going to “ban” all “C”-level work (circa 2018). We’ve already seen that most instructors have raised their standards since AI became widely available. Currently, it is unlikely that even well-crafted AI work will allow you to pass this course. Rubrics are designed so that AI-generated work is unlikely to get high marks.
  • I have tried to design this course to help you develop yourself, your knowledge, and skills for a world in which AI will be doing more of the types of tasks that traditionally were done by recent university graduates in the workplace. AI will not be able to replace original thinking, problem solving, critical thinking, strategic thinking, emotional intelligence, ethical decision making, collaboration, and global/cultural awareness. Let’s work together to help prepare you for your future. 

It is okay for you to use artificial intelligence text generators in this course, BUT:

  • You must use them in a way that helps you learn, not hampers learning. Remember that these are tools to assist you in your coursework, not a replacement for your own learning of the material, critical thinking ability, and writing skills.
  • The only acceptable use of AI on assignments (quizzes, tickets, etc.) in COM 382 is for proofreading (like Grammarly or Quillbot). This should only be for simple grammar checks, not extensive rewriting, and absolutely not for generating original text. And in COM 382 you are not being evaluated on your grammar, so we discourage this use, while acknowledging that some students want to use it.
  • Do not use AI to write original material such as Hypothesis annotations and quiz answers.
  • Tools like StudyBuddy or other techniques to “take pictures” of quiz questions or to get answers to quiz questions are 100% not allowed. 
  • It is acceptable to use AI in COM 382 to provide you with other explanations of concepts or organize your notes and there is no need to disclose these. However, if the AI gives you incorrect information and you use that incorrect information on an assignment, you will be held accountable for it.
  • Be transparent: If you used an AI tool for proofreading, you must include both your original writing and the AI-version so that I may see both and determine if the answer that you submitted reflects your original thought. And I expect that you will include a short paragraph at the end of the assignment or in the final 0 point question in the quiz/exam that explains what you used the artificial intelligence tool for and why. (For example: “I used Grammarly to give me feedback on my sentence structure on question 6. English is my 3rd language and I like using AI as a proofreading tool.” It is not required to disclose using AI for studying, but you can if you want to: “I read the book and listened to the lecture on measurement reliability and I didn’t fully understand it, so I asked ChatGPT to give me other examples which helped my understanding.” Or “I did not understand a term in the textbook and I asked ChatGPT to explain it to me.”)
  • If you are using artificial intelligence tools to help you in this class and you’re not doing well on assignments, I expect that you will reflect upon the role that the tool may play in your class performance and consider changing your use.
  • If artificial intelligence tools are used in ways that are nefarious or unacknowledged, you may be subject to the academic misconduct policies detailed earlier in the syllabus. 

Then within the course, there are module-level learning objectives, and I’ve added a list of specific AI-“proof” skills to those learning objectives. For example…

Module 3 Learning Objectives

1. Define measurement in the context of social scientific research and explain its importance.

2. Differentiate between key terms such as theory, concepts, variables, attributes, constants, hypotheses, and observations.

3. Explain the difference between independent and dependent variables and identify them in research scenarios.

4. Describe the processes of conceptualization and operationalization, and apply them to research examples.

5. Distinguish between manifest and latent constructs, providing examples of each.

6. Identify and explain the four levels of measurement (nominal, ordinal, interval, and ratio), and classify variables according to these levels.

7. Compare and contrast categorical and continuous variables, providing examples of each.

8. Define measurement validity and reliability, and explain their importance in research.

9. Identify and describe different types of measurement validity (face, content, criterion-related, construct, convergent, and discriminant validity).

10. Recognize and explain various methods for assessing measurement reliability (test-retest, split-half, inter-coder reliability).

11. Analyze the tension between measurement validity and reliability, and discuss strategies for balancing them in research design.

12. Evaluate the strengths and weaknesses of different measurement approaches for studying diverse populations, including marginalized groups.

13. Apply principles of inclusive measurement practices to create more representative and culturally sensitive research instruments.

14. Identify potential sources of random and systematic error in measurement and suggest ways to minimize them.

15. Critically assess the implications of high and low measurement reliability and validity combinations in research scenarios.

Regarding helping students become “better” than AI, my syllabus statement reads: I have tried to design this course to help you develop yourself, your knowledge, and skills for a world in which AI will be doing more of the types of tasks that traditionally were done by recent university graduates in the workplace. AI will not be able to replace original thinking, problem solving, critical thinking, strategic thinking, emotional intelligence, ethical decision making, collaboration, and global/cultural awareness. Let’s work together to help prepare you for your future.

Module 3 contributes to developing these skills:

  1. Original thinking:
    • Students learn to create conceptual definitions, which requires synthesizing information and developing unique understandings of complex concepts.
    • The process of operationalization encourages students to think creatively about how to measure abstract concepts.
  2. Problem solving:
    • Students learn to tackle the challenge of translating abstract concepts into measurable variables.
    • They must find solutions to balance validity and reliability in measurement.
  3. Critical thinking:
    • The module encourages students to critically evaluate different types of measurement and their appropriateness for various research scenarios.
    • Students learn to assess the strengths and weaknesses of different measurement approaches.
  4. Strategic thinking:
    • Students learn to strategically choose between different levels of measurement based on research goals and statistical analysis requirements.
    • They must think strategically about how to balance validity and reliability in research design.
  5. Emotional intelligence:
    • The discussion on inclusive measurement practices for marginalized groups helps students develop empathy and cultural sensitivity.
    • Understanding the complexities of measuring social and psychological constructs requires emotional intelligence.
  6. Ethical decision making:
    • The module addresses ethical considerations in measurement, particularly regarding inclusive practices and representation of diverse populations.
    • Students learn to make ethical decisions about how to operationalize concepts in ways that are fair and representative.
  7. Collaboration:
    • The emphasis on established measures and building upon previous research underscores the collaborative nature of scientific inquiry.
    • Group activities and discussions encourage collaborative learning and problem-solving.
  8. Global/cultural awareness:
    • The module highlights the importance of considering cultural context in measurement, especially when studying diverse populations.
    • Students learn to be aware of potential biases and limitations in measurement across different cultural contexts.

By learning these complex processes of conceptualization and operationalization, students develop skills that go beyond simple information retrieval or basic analysis. These skills require nuanced understanding, contextual awareness, and creative problem-solving – areas where human intelligence still far surpasses AI capabilities. This module prepares students to engage in the type of high-level thinking and decision-making that will remain valuable and uniquely human in an AI-augmented workplace.


21 May

AI in the university classroom

Ah, the winter of 2023 – after ChatGPT publicly launched in November 2022, university instructors everywhere had a collective freakout when everyone realized that students could engage in all sorts of misconduct in an entirely new way. Certainly academic misconduct was always a part of our jobs, but this was different. AI-facilitated misconduct was more sophisticated and obviously far easier for students to use. The writing that AI can generate seems original and of decent quality.

Even now, spring of 2024, whenever instructors gather – online or in-person – the discussion quickly turns to the issue of AI in student work.

Like many others, in winter 2023, I responded to this by engaging with the various AI detection tools (which we now know are unreliable). I was spending hours each week copying and pasting text into detection tools and I was becoming angrier by the minute.

I also revised my assignments and activities to be more “AI-proof.” This was, and continues to be, incredibly time consuming. Good assignments and activities take time to develop, and typically I need to offer them a few times before finalizing the instructions and expectations. Further, this came just after the COVID-19 pandemic, when all of us had already spent a great deal of time changing and creating new assignments and activities. And while “AI-proof” optimally means that the requirements are more complex, so as to evade AI, in practice it often just means that things are “harder.”

The other major outcome of the assignment revision was that it changed the classroom vibe and affected the grade distribution. Students who always did well on assignments before continued to do well as the assignments became more complex; many of them even appreciated the more complex assignments. Then there were the students who, in the before times, did poorly on the assignments. Some of these students are tempted to use AI to complete their assignments, so even with the “AI-proof,” “more complex” assignments, with AI they can probably do decently, or at least better than they would have without AI. Then there are the students who I believe are the most upset about this entire situation: the students who did okay in the before times. By exerting only a bit of effort, they could get a B-/C+ or so on an assignment and then go on with their lives. The “AI-proof,” “more complex” assignments mean that these students can no longer get a B-/C+ with a little bit of effort. They are getting barely-passing grades due to the increased complexity, and they are very angry. Another category of students are those who perhaps never considered engaging in serious misconduct in the before times, but AI is so tempting and appears to do a decent job, so they have adopted it.

Similarly, on multiple choice quizzes or exams, instructors have re-written questions to be harder to answer with AI – especially AI browser plug-ins like “StudyBuddy” that can answer questions even when there is a lockdown browser (which my university does not use, but still). Those previous B-/C+ students used to be able to do okay on a multiple choice quiz or exam in the past, but now with more complex questions, they are not passing anymore. And again, they are angry.

Not everyone is creating more “AI-proof” assignments, however, which may engender divisions between instructors. Also, as AI advances, what “AI-proof” means must also advance. For example, I’ve downloaded an AI-based app whereby I can point my smartphone camera at my teenager’s algebra homework problem, and the app explains how to solve the problem in four different ways, in seconds. This has been a huge asset for us parents helping with math homework, but I also presume that a student could use the same app to cheat.

Another strategy that some people use is creating assignment rubrics that somehow “punish,” or at least do not reward, AI-generated text. Related to this is adding requirements to assignments that are harder for AI to do well, like asking for direct quotes and page numbers or requiring that students engage with a specific number of the assigned materials. While this seems to “work” more or less, if the goal is to ensure that students engaged in authentic writing, it does not entirely resolve the issue of students submitting AI-generated work.

Some are requiring access to a document’s history, and there are tools that can analyze a Google Doc’s history for actual typing. I have been using Sherpa, an AI-based interview tool whereby students upload their work and are interviewed about it (or are interviewed about material that the instructor has assigned). I have experimented with making Sherpa interviews worth more and fewer points in the class, and in general, I’ve really enjoyed it as a tool. However, both the document-history and video-interview “solutions” are difficult to manage in a larger class.

Many people are moving “back” to in-class writing with blue books or paper exams. This also introduces labor in terms of grading, and we lose a lot of the efficiency and tools that digital assignments afforded. I really liked being able to tell the course management system to shuffle questions and answers, compared to making four different versions of an exam, copying and pasting questions, and managing a key.

This is all to say – my primary job is not to be a writing or composition instructor. I have options to not assign as much or any written work in my courses. There are writing and composition instructors who are developing new ways to address AI in their classrooms and I look forward to seeing what they develop and seeing how that can be applied in other courses.

But it is important to consider the bigger picture. I have thought about this quite a bit and have discussed it with many trusted colleagues. One such colleague told me that they are trying to focus entirely on the students who are authentically engaging with the materials. Another colleague told me that after the first week or two, if a student continues to submit AI answers, they no longer provide any feedback, just a grade. Some others say that this AI panic will pass just like the Wikipedia panic or the calculator panic before it. I am trying to get to a better place emotionally about this. I have spent many hours being angry about AI misconduct, and I will never get those hours back. I have tried to unpack why it feels so offensive to me. Is it because I spend so much time trying to provide students with a great educational experience, and an AI-generated response seems to spit at that? Is it because I worry about the value of a course, an education, and even a grade, and I fear that those ideas will become meaningless if many students do not actually do the work?

One of my responses has been to embrace AI in my own life and work. Over the last year or so, I’ve developed a lot of AI tools and tricks, treating AI as my personal assistant, especially for tedious tasks. I believe that I have “claimed back” many hours because of AI, so that gives me some peace.

I think that it is also important to acknowledge that AI is not going away. Last week I listened to a podcast with José Bowen, a noted AI education expert. I’ve really enjoyed his co-authored book “Teaching with AI.” In the podcast, he said this about the future of work: “A senior radiologist still needs to check your scan. But but the junior jobs, the intern jobs, the rough draft of the press release, all of those sorts of things are no longer gonna be jobs. So we have to get our students to do the part that we value with critical thinking, right, asking better questions, making sure the output is correct, and and making sure the output is excellent, not just okay or average. And so I think AI has changed what we can accept as average and mediocre quality.” Two important things here: first, it is 100% true that a lot of what “junior” jobs are currently doing is going to be replaced by AI, and in a very short period of time. So our current university students are going to be entering a workforce where jobs that they would previously have been qualified for will now be done by AI. Second, what we understand as average and mediocre is going to change. He gave this example earlier in the podcast with regard to spelling and spell checkers: “Many of us started our careers, we were still grading spelling or giving at least it was a line on the rubric. And now I expect perfect spelling because if there’s a spelling mistake, I just say no, use your spell checker. All those little red lines, fix them, and then resubmit. Right? I’m not gonna accept this because in the workplace, right, it’s not gonna no one’s gonna accept your spelling errors. So the technology changed the standard that we accepted.”

Following José, I really believe that we do need to prepare our students for the AI working world that they will enter. And I’m committed to bringing more AI into my classroom activities and modeling positive and ethical use. For example, in an assignment where students have to design a poster for a middle school classroom, I instruct students to ask AI if the wording used on their poster is appropriate and understandable by most 7th grade students. I have designed an assignment where students receive instant feedback on a conceptualization and operationalization. This has significantly improved the assignment, as they are able to receive personalized feedback before they submit the assignment to me.

But I am still wading through the challenges of learning assessment and how to feel okay about the fact that some students are going to use AI to do the assignment for them, and then I have to grade something that was not authentically created by the student. I think that my colleague who has turned their focus to the students who are engaging authentically probably has the right idea. But I also worry about my future with a surgeon who used AI instead of learning something in medical school, right? In the meantime, I am going to try to engage in more meditative thinking on this – with the time that I’ve gained back with AI tools.

21 Aug

AI/LLM policy statements

This is what it feels like to be teaching sometimes (made with Bing Image Creator, powered by Dall-E)

Me, teaching to a room of robots

It seems that all I can think about lately is AI in the classroom. I wanted to share my summer 2023 AI/LLM policy statement for my class. Feel free to use with attribution.

Important to note:

  • This is for a research methods class with very little writing, so if your class has writing in it, YMMV.
  • I’m currently teaching remote and asynchronous classes, so I don’t have the same options for in-class assignments as others do.
  • I have MANY other tools to discourage AI/LLM use in my courses. (Hypothesis social annotations, video reactions, etc.)
  • I give FREQUENT reminders to students about this policy.
  • This is a work-in-progress. I took a summer workshop on AI/LLM in the classroom that helped me refine it. I read through Aleksandra Urman’s work on this. I am in dozens of AI/LLM-in-the-classroom Facebook groups and subreddits. All of this contributed.
  • It is ESSENTIAL that your policy aligns with your university policy and what your student conduct group’s policy is as well as how they react to such cases.
  • AI detection software is quite flawed. A single tool for detection is insufficient as evidence of AI. And these tools are notorious for flagging non-native English writing as AI. More on this here.

Artificial Intelligence and Large Language Model Policy

We know that artificial intelligence text generators like ChatGPT, along with tools like Grammarly and Quillbot, are powerful and increasingly widely used. And while they can be incredibly useful for some tasks (creating lists of things, for example), they are not a replacement for critical thinking and writing. Artificial intelligence text generators and editors are “large language models” – they are trained to reproduce sequences of words, not to understand or explain anything. It is algorithmic linguistics. To illustrate, if you ask ChatGPT “The first person to walk on the moon was…” it responds with Neil Armstrong. But what is really going on is that you’re asking ChatGPT “Given the statistical distribution of words in the publicly available English-language data that you know, what words are most likely to follow the sequence ‘the first person to walk on the moon was’?” and ChatGPT determines that the words most likely to follow are “Neil Armstrong.” It is not actually thinking, just predicting. Learning how to use artificial intelligence well is a skill that takes time to develop. Moreover, there are many drawbacks to using artificial intelligence text generators for assignments, quiz answers, proofreading, and editing.

Some of those limitations include: 

  • Artificial intelligence text generators like ChatGPT are sometimes wrong. (For example, for our sampling assignment, I had ChatGPT generate lists of Pokémon that can and cannot evolve, and it was wrong for 15% of them.) If the tool gives you incorrect information and you use it on an assignment, you are held accountable for it. If the proofreading introduces terminology that is less precise than the terminology in course materials, or uses it differently than the course materials do, you are held accountable for it.
  • There is also a drawback in using artificial intelligence tools like Grammarly or Quillbot to “proofread” or “edit” your original writing – it may change your text so much that it no longer reflects your original thought or it may use terminology incorrectly. Further, in COM 382, you are not being evaluated on your writing, so there is no need to use extensive proofreading.
  • The text that artificial intelligence text generators provide you is derived from another human’s original writing and likely multiple other humans’ original writing. As such, there are intellectual property and plagiarism considerations.
  • Most, if not all, artificial intelligence text generators are not familiar with our textbook or my lectures and, as such, will not draw from that material when generating answers. This will result in answers that were obviously not created by someone enrolled in the course. Your assignment is not likely to be graded as well if you’re not using course material to construct your writing. For example, AI does not understand the difference between measurement validity and study validity. AI does not understand the difference between ethics more broadly and research ethics.
  • Answers written by artificial intelligence text generators are detectable with software, and we will use such software to review answers that seem unusual. We will have to be cautious in our use of these tools, but if multiple detectors find that something is likely to have been written with AI, that will be used as evidence of misconduct.

It is okay for you to use artificial intelligence text generators in this course, BUT:

  • You must use them in a way that helps you learn, not hampers learning. Remember that these are tools to assist you in your coursework, not a replacement for your own learning of the material, critical thinking ability, and writing skills.
  • The only acceptable use of AI on assignments (quizzes, tickets, etc.) in COM 382 is for proofreading (like Grammarly or Quillbot). This should only be for simple grammar checks, not extensive rewriting. And in COM 382 you are not being evaluated on your grammar, so we discourage this use, while acknowledging that some students want to use it.
  • It is acceptable to use AI in COM 382 to provide you with other explanations of concepts or organize your notes and there is no need to disclose these. However, if the AI gives you incorrect information and you use that incorrect information on an assignment, you will be held accountable for it.
  • Be transparent: If you used an AI tool for proofreading, you must include both your original writing and the AI-version so that I may see both and determine if the answer that you submitted reflects your original thought. And I expect that you will include a short paragraph at the end of the assignment or in the final 0 point question in the quiz/exam that explains what you used the artificial intelligence tool for and why. (For example: “I used Grammarly to give me feedback on my sentence structure on question 6. English is my 3rd language and I like using AI as a proofreading tool.” It is not required to disclose using AI for studying, but you can if you want to: “I read the book and listened to the lecture on measurement reliability and I didn’t fully understand it, so I asked ChatGPT to give me other examples which helped my understanding.” Or “I did not understand a term in the textbook and I asked ChatGPT to explain it to me.”)
  • If you are using artificial intelligence tools to help you in this class and you’re not doing well on assignments, I expect that you will reflect upon the role that the tool may play in your class performance and consider changing your use.
  • If artificial intelligence tools are used in ways that are nefarious or unacknowledged, you may be subject to the academic misconduct policies detailed earlier in the syllabus.

02 Feb

Wordle me this

Although my research is primarily about technology and inequality in Armenia and Azerbaijan, I do dabble occasionally in studying games. Also all of my teaching, undergrad and grad, is on broader technology and society, so I keep up with the research.

I got into Wordle like many others did in January 2022, and I tweeted about it. A tech journalist saw me tweeting about it and contacted me for an email interview. I replied and gave some thoughts. This has turned into me being interviewed about Wordle quite a bit in the past few weeks. I’ll archive the interviews here.

My main points:

  • Wordle is really easy to pick up (no app, no login, etc.).
  • Wordle is easy to get started with.
  • Being forced to only play once a day on the official Wordle page is nice compared to other social media “breaks” where it is easy to get sucked in.
  • It allows for a performance of being “smart” or “intellectual” by sharing results.
  • During the pandemic in particular, people are really tired and don’t have a ton of bandwidth to interact with others, but sharing Wordle results allows people to be social with very little labor.
  • One can feel part of a community of fellow Wordle players or part of the “in-crowd” or at least the “intellectual” crowd.
  • There are now clones in many languages and to me, this is getting very interesting – folks are playing in a second language, folks are playing in their heritage language.
  • There are people trying to figure out the best starter word, which is fun.
  • There is already backlash about sharing results, and I suspect that the sharing of results will die out soon.
  • Now that the New York Times has bought Wordle, eventually they will put it behind their games paywall, which is currently $5/month. People online are annoyed about paying for it, but IMHO, NYT Games is probably the best home for it. Their existing games are really nice and Wordle fits in well. And it is nice that the inventor got paid.

What you should know: Wordle. The Hawk Newspaper. February 8, 2022.

Wordle and the future of the internet’s favorite word game. NPR’s On Point [radio interview]. February 4, 2022.

What Makes Wordle So Popular? Psychologists Explain Its Appeal. GameSpot. February 1, 2022.

Wordle. KNX In Depth [radio interview]. February 1, 2022.

Why Is Everyone Suddenly Playing Wordle? Psychologists Explain. Inc. January 30, 2022.

Wordle is a deceptively easy game for burnt-out pandemic shut-ins. Vox. January 20, 2022.

15 Dec

Questions I’m asked as a recommendation writer

These are questions that are asked of me regarding self-funded professional masters programs in marketing and communication. I’m skipping the questions regarding how well I know the person, etc.

These questions are to be answered in addition to a full letter.

Program A’s questions

  1. Please rate the applicant in the following areas:
  • Motivation
  • Analytical skills
  • Intellectual capacity
  • Communication skills
  • Interpersonal skills

  2. What do you consider the applicant’s strengths and/or weaknesses?
  3. Describe a specific situation where you have observed the applicant using critical thinking skills or applying a new skill.
  4. How would you describe the applicant’s leadership skills?

Program B’s questions

  1. Please list three to five adjectives describing the applicant’s strengths.
  2. Please compare the applicant’s performance to that of his or her peers.
  3. What does the applicant do best?
  4. If you were giving feedback to the applicant regarding his or her professional performance and personal effectiveness, in what areas would you suggest he or she work to improve?
  5. How does the applicant accept constructive criticism or handle conflicts?
  6. How effective are the applicant’s interpersonal skills in the workplace?
  7. On the below scale, please rate the applicant’s individual vs. team orientation (1 = most effective as an individual contributor; 5 = focused exclusively on the team). Please elaborate on your rating:
  8. Please give an example of how the applicant has demonstrated leadership.
  9. Is there anything else you feel we should know?

Please evaluate the applicant by entering the following quality ratings for the traits below: Truly exceptional (Top 2%), Outstanding (Top 10%), Very good (Top 20%), Good (Top third), Average (Middle third), Poor (Bottom third).

  • Intellectual Ability
  • Maturity
  • Quantitative Ability
  • Analytical Skills
  • Poise/Professionalism
  • Initiative
  • Personal integrity/ethics
  • Interpersonal skills/ability to work well with others
  • Sense of humor
  • Verbal English Communication Skills
  • Written English communication skills
  • Self confidence
  • Leadership ability
  • Future managerial or business success
  • Please provide us with your overall impression of the applicant

Program C’s questions

What are the applicant’s chief weaknesses or areas of growth?

Rating 0-5

  • Integrity
  • Interpersonal Relations
  • Oral Communications
  • Self-Awareness
  • Analytical Ability
  • Research Ability
  • Initiative
  • Potential for Success in Chosen Field
  • Maturity
  • Self-Confidence
  • Written Communication
  • Overall Evaluation

Program D’s questions

Rate 0-5

  • Communication and writing skills
  • Intellectual curiosity, originality and independence in thinking
  • Initiative
  • Interpersonal skills, ability to work well with others
  • Leadership ability and potential
  • Academic and analytical ability
  • Problem-solving ability
  • Flexibility, adaptability, willingness to learn new skills
  • Organizational ability
  • Maturity and professionalism
  • Research and reporting skills
  • Integrity

Program E’s questions

Rating 0-5

  • Academic performance
  • Intellectual ability
  • Written communication skills
  • Oral communication skills, including willingness to contribute valuably to discussion/debate where applicable
  • Analytical skills, including research and critical thinking skills where applicable

20 Aug

Manuscript trimming tips

Word and page limits exist for a reason, but they present a challenge. These are my top tips for trimming manuscripts down.

1. Cut references

I know that this can be really difficult, but references take up a lot of space. Hopefully you’re using a reference manager already, which makes cutting a bit easier.

What I do is go through my references and search for each surname in the text.

If a reference is used only once, I’ll make a comment. I usually add something like “But it is a meta-analysis” or “But it is in the journal that we’re submitting to,” or sometimes I’ll delete it immediately. These one-offs can be really difficult. I will also ask myself whether some other reference that I used says the same thing or whether I absolutely need that paper for this one claim.
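As a rough illustration of that surname search, a small script can flag author-date citations that appear only once in a draft. (The regex, function name, and sample text here are my own inventions for the sketch; real in-text citation formats vary, so treat this as a starting point rather than a reliable parser.)

```python
import re
from collections import Counter

def one_off_citations(manuscript_text):
    """Count in-text author-date citations like (Smith, 2012) or
    Jones (2016) and return the ones that appear only once."""
    # Matches "Surname, 2012", "Surname (2012)", "Surname et al., 2012", etc.
    pattern = re.compile(r"([A-Z][a-z]+)(?: et al\.)?,? \(?(\d{4})\)?")
    counts = Counter(pattern.findall(manuscript_text))
    return [f"{name} {year}" for (name, year), n in counts.items() if n == 1]

text = ("Walrus play is social (Smith, 2012; Jones, 2016). "
        "Jones (2016) extended this to board games.")
print(one_off_citations(text))  # Jones appears twice, so only Smith is flagged
```

Each flagged entry is a candidate for the “do I really need this?” question, not an automatic cut.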

I’ll also consider if I’m citing the same person/team multiple times in the same citation and ask myself if I have to do so. Perhaps one of the citations is the key theoretical piece, so I can’t avoid it. But if it is a smaller finding that they had in both a 2012 and a 2016 paper and perhaps the 2016 paper is in a better venue or is more widely cited, I’ll cut the 2012 reference.

Similarly with the same person, I’ll do a scan for them throughout the paper. For example, if in the entire paper I cited B, T, and R 2012, T and B 2009, and R, T, and B 2016 each four times, but I only cited R, B, and T 2015 once, I’ll re-skim the 2015 paper and ask myself if I absolutely need to include it or if the finding was in one of the other papers as well.

Thinking about the venue is important too. For example, let’s say I’m working on a paper about walruses playing board games with an outcome of better walrus solidarity. And I’m submitting this paper to a journal that is really focused on board game playing and less on solidarity or walruses. While I cannot remove all of the citations that tie back to walrus or solidarity literature, I should prioritize the board game playing literature as that is the journal’s audience and reviewers will come from that field. But I do always have older versions of the paper that have all of the references in it just in case the reviewers ask why there isn’t more theorizing and literature from Walrus Studies.

Finally, the most heartbreaking reference cuts are studies with too many authors and/or really long titles. This is presuming that references count towards the word count.

2. Cut words

Do searches for common adverbs. Delete transitional words (this is painful for me).

3. Merge words

Sometimes hyphens are appropriate and can save space.

4. Wordiness

You’re probably being too wordy. Try to read the paragraph out loud to yourself or have your computer read it to you. It can be easier to hear the problems. I also sometimes do better when I’m editing a printed version of the paper versus on a screen. I think that taking a break from the manuscript also helps.

5. Text -> Table

Sometimes you can turn text into tables. This will reduce words and if the journal doesn’t count tables toward the word count, save you a ton of words. However, if page count is the issue, tables sometimes can be longer than text.

6. If qualitative/interview-based, look for redundant participant quotes

It is easy to fall in love with a great direct quote or example from a participant. Sometimes they’re just so delicious and represent the theme so well. But I find that a lot of people will have three or more examples for a particular theme. That is okay, but remember that some of them will have to be cut eventually.

Ask yourself if the quote is absolutely necessary to illustrate the theme or is so perfect that it really sells the theme in a sincere way. Look at each quote in comparison with every other quote within the theme and ask yourself if they both need to be there.

Also keep a table of quote/example counts by participant. Sometimes some participants are chattier or articulate themselves better and we lean more heavily on their quotes. It is important to have a count at the end so you don’t accidentally have many more quotes from a few participants. This is not to say to purposefully manipulate your quote choices or include some participants artificially. Rather, seeing that you’re already quite heavy with quotes from “Alice” can help you make decisions between two quotes more easily.
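That running tally of quotes per participant doesn’t need anything fancy. A minimal sketch, with made-up participants and quotes:

```python
from collections import Counter

# Hypothetical (participant, quote) pairs pulled from a findings draft.
quotes = [
    ("Alice", "It felt like a community."),
    ("Alice", "I shared my score every day."),
    ("Bob", "I only played on weekends."),
    ("Alice", "My friends all compared answers."),
]

# Tally how many quotes each participant contributes.
counts = Counter(participant for participant, _ in quotes)
for participant, n in counts.most_common():
    print(participant, n)  # heaviest-quoted participants first
```

Seeing “Alice 3, Bob 1” at a glance is exactly the kind of imbalance the table is meant to surface.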

7. Redundancy in the findings section

Sometimes we have a fairly complicated framework and we need to remind our reader what the conceptual definition of a particular theme was. However, this does take up a lot of space. This is another one that is painful for me.

8. Could this be two papers?

Sometimes there is so much going on that you can split the manuscript into two papers. Honestly, this happens to me about 75% of the time. This does require work to ensure that the theoretical scaffolding is different and that you’re not reusing findings. However, in some cases it might make sense to divide.

9. Have an editing partner

Certainly if there are multiple authors in a study, someone else can look at the manuscript. But you may also be able to have a friend with whom you trade editing/trimming tasks.

10. Check your conclusion/discussion

Sometimes we get a little bit freewheeling at the end of the manuscript. This can be a place where entire sections could be removed.

25 Mar

Facebook in Armenia, March 2020

It has been a while since I last blogged about Facebook use in the Caucasus. Again, here is a guide to how I get these data. Click on the tags for previous rates – here’s September 2017, May 2018, and December 2018.

As of March 2020, there are about 1,500,000 Facebook users in Armenia, according to Facebook. That is 50% of the total population, and 46% of the population over age 14 (Facebook technically isn’t available to those under 13.) There is a bit of growth since December when Facebook ads estimated 1,400,000 users (47% of the population).

As far as gender, 50% of the total male population, or 64% of males over age 14, are on Facebook, as are 53% of the total female population, or 49% of the female 14+ population. So there are some gender differences, but probably within the margin of error.

Just looking at the 15-24 year olds, 82% of them are on the site (this is a drop from 2 years ago — I suspect people have moved to Instagram), 82% of young men and 87% of young women.

Some trends to look for – as in the rest of the world, young people are moving more toward Instagram. Everyone is moving toward WhatsApp and other private messaging services.

25 Mar

2017 Internet access in Armenia

With the move toward online learning all over the world, someone asked me about Internet access in Armenia. The most recent publicly available data that we have is from the 2017 Caucasus Barometer. Here are a few relevant statistics. Please note that the survey responses are a mix of household-level answers (like owning a computer) and individual-level answers (frequency of Internet use), so we are making some methodological leaps in claiming that this would reflect the access that young people would have.

It is difficult to ascertain the presence of children in the household using the Caucasus Barometer online analysis tool. When I have some time I will create a variable that subtracts the number of adults in the household from the total number of household residents to create a new variable called “Number of children” — I’d like to use that to look at all of the analyses described below in the future.
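The derived variable itself is trivial to compute once the data are exported. A minimal sketch with invented household records (the field names and numbers are mine, not the Caucasus Barometer’s):

```python
# Hypothetical survey records: total household residents and number of
# adults, per surveyed household.
households = [
    {"id": 1, "residents": 5, "adults": 2},
    {"id": 2, "residents": 2, "adults": 2},
    {"id": 3, "residents": 4, "adults": 3},
]

# Number of children = total residents minus adults.
for h in households:
    h["children"] = h["residents"] - h["adults"]

# Subset the analyses to households with at least one child.
with_children = [h["id"] for h in households if h["children"] > 0]
print(with_children)  # households 1 and 3
```

Filtering on the new variable would then let all of the analyses below be rerun for households with children only.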

Another caveat – in homes across the world, adults working from home are having to share their technology with their children. There may also be greater demands on home Internet access, and adult work use may become prioritized over children’s.

Analysis

Overall, 29% of Armenian adults never access the Internet. This has remained fairly stable for the past few years. The 2017 CB did not ask why people did not use the Internet, but in previous years, the answers varied and were not all tied to resource access issues.

Home Internet access is far more available in the capital city, although mobile Internet (and nearly two-thirds of Armenians have mobile Internet) certainly bridges that gap for many households.

Nonetheless, in 2017, over a third of rural respondents never accessed the Internet.

Mobile Internet does vary a bit by urbanness. Two-thirds or nearly two-thirds of Yerevan residents and regional urban center residents have mobile Internet, while a little over half of rural residents do.

While mobile phones have come a long way (and nearly all Armenians have owned a mobile phone for over a decade), no one can deny that some activities are conducted more easily on a personal computer. As of 2017, 58% of households had a computer. This does not mean that people do not have access to computers at cafes, work, or school. However, in terms of considering distance learning or working from home, the lack of a computer may be a barrier for some.

Urbanness has always been an important part of the Armenian digital divide story. Personal computer ownership is far higher in the capital (67%) than in regional urban environments (56%) or rural areas (51%).

11 Dec

Facebook in Azerbaijan, December 2019

It has been quite a while since I last blogged about Facebook use in the Caucasus. Again, here is a guide to how I get these data. Click on the tags for previous rates.

According to Facebook, as of December 2019, around 3,300,000 Azerbaijanis, about 35% of the total population, or more accurately, 32% of the population over age 14, are on Facebook.

Over half of all Azerbaijani men (over age 14) are on Facebook (well, 61%) and 29% of Azerbaijani women (over age 14) are on Facebook. This has been the trend for as long as I’ve been tracking this.

Looking at just youth, about 44% of Azerbaijanis ages 15-24 use Facebook (this is a drop from last year!): 60% of males that age and 27% of females that age.

As always, these numbers are to be taken with a grain of salt. This is information from Facebook ads.

12 Apr

Social media and bullying in Azerbaijan

This week a young woman in Azerbaijan took her own life as a result of possible bullying. Video of the act and subsequent events were widely shared and discussed on social media. As a result, many Azerbaijanis are discussing the problem of bullying in schools. Here’s a summary of the case in English.

Here’s a NodeXL hashtag analysis of #BullinqəSon (End Bullying)

Here’s a NodeXL hashtag analysis of #Elinaüçünsusma (Don’t be silent about Elina) — this is more popular

Some caveats here regarding any sort of hashtag analysis:

  • Twitter data downloads like this are always incomplete, as it is impossible to get the full dataset.
  • The results are a little skewed because a lot of the users are tweeting at the President and First Lady to do something. Obviously they have a lot of followers, so a lot of these “importance” metrics are impacted by that.
  • Twitter isn’t a great venue to consider Azerbaijani public discussion of such topics, in particular this one that is of great interest to young people. I’ve seen far more on Facebook, Instagram, TikTok, etc. I can only imagine WhatsApp has a great deal of this as well.
  • Social media are always performative. The need to let one’s audience know that they care about this issue is not always the same as discussion about solutions.
  • Social media “influencers” sometimes feel compelled to have a hot take on the topic of the day and sometimes they’ll say provocative things because it leads to more engagement. There have been a few “celebrities” in Azerbaijan, especially on Instagram, doing this. I wouldn’t take this as the complete story.

And some general thoughts on social media campaigns, with a bullying angle. I’m not a bullying expert, but I’ve talked to people that are and these thoughts are influenced by that. (Thanks to them! Especially Lindsay Blackwell and her excellent work on cyberbullying.)

  • For campaigns to be taken seriously by young people, they need to feel sincere. I’m under the impression that some of these campaigns have been started by people that young people in Azerbaijan may not follow and that may not be entirely relatable. Decades of research shows that adult-created campaigns aimed at youth frequently fail. I worked on an environmental campaign aimed at middle school and high school students and I cannot even begin to tell you how much money was spent on materials that the young people laughed at. We ran focus groups to see what sort of messages resonated with the young people and where they were most likely to be influenced (both media channels and with peers) and it was entirely different from what the campaign organizers had done.
  • Campaigns need to be multifaceted, especially when there is a goal of behavioral change. There is currently criticism that schools are securing windows (the young woman jumped out of a window), but at the same time, such an act is probably a good idea and can be done immediately. It does not mean that the school is not working on other strategies and actions to reduce bullying and its effects.
  • Who is the target of the campaign? Is it bullies asking them to not bully other children? Is it those that are the victims of bullying telling them to be strong? Or is it potential bystanders, asking them to intervene? The messages will be different!
    • With bystanders in particular, which is the hypothetical largest audience, in any campaign, people need to be told what to do. For example, in the US, there was a campaign about forest fires that said “Only you can prevent forest fires” but the campaign did not actually tell people how to prevent forest fires! So, in the bullying case, potential bystanders need instructions about what to do if they see a classmate being bullied. (And at least at my own child’s school, there is an entire curriculum about this from the very early years.)
    • Victims need different messaging about what they can do as well as a sense that there are others out there experiencing bullying. According to my expert colleagues, bullying feels very isolating and it is hard to see that there are others in the same position. So campaigns whereby people disclose that they were bullied and what they did about it can be helpful.
    • The bullies themselves are also children and conventional wisdom says that children that are bullies are not infrequently subject to problems at home, have mental health issues, etc. – they also need help.
    • Schools and parents also need help! Law enforcement too!
  • For campaigns to be effective with young people, the message needs to also be relatable and from an authentic figure. For example, a gorgeous 30-year-old actress saying that she was bullied in school may not be believable in the eyes of a 13-year-old who sees that actress leading a glamorous life and looking beautiful. We see that a bit with the “It Gets Better” type campaigns with celebrity focuses versus “#metoo” whereby the majority of the messaging comes from “regular” people.
  • Figuring out who is influential among young people in Azerbaijan (and on what platform), who is still relatable, and whether that person in fact experienced bullying (believably so) or was an intervening bystander would be a very good tactic. In fact, in the words of one of my bullying expert colleagues, that would be infinitely more helpful than a Ministry of Education campaign that takes months or years to design.
  • Decades of research shows that suicide has a potential copycat effect. It is incredibly irresponsible for media outlets (and individual users) to share the video where this young woman takes the action.
  • In Azerbaijan there are laws related to suicide and it is a criminal offense to “cause” someone to commit suicide. I have to admit that this seems strange to me as an American. Certainly it is not possible to demonstrate this beyond reasonable doubt, and the mental health issues that those considering suicide are facing are numerous. There is a great deal of social media speculation about this young woman’s family, the role that the school administration played or didn’t play in her death, the young woman’s romantic and sexual life, etc. In my opinion, such discussions do little to help anyone – those grieving or those trying to reduce bullying and its effects.

As a takeaway: although this young woman’s death is tragic, it has led to the very real problem of bullying in schools being discussed more widely in Azerbaijan, and that is a good thing. Those that want to help – both immediately and in the long term – would be well-advised to look at the existing work on anti-bullying campaigns before jumping in.