Talking to Your Students About Course Evaluations
A How-To Guide
I’m a precariously employed academic teaching a bunch of courses online in the midst of COVID-19. I don’t have a lot of time to write a long essay about how and why we should talk to our students about student evaluations of teaching, but here’s something quick and dirty you can use this academic term. Yes, this academic term, even next week or the week after!
I’m writing this to you, my fellow instructors, because I know that a lot of you (though certainly not all) normally don’t hold space for this kind of discussion in your courses. I get it. For most of my teaching career I’ve been too afraid to talk about many, many of the things that affect my working conditions and my students’ learning conditions (such is the nature of precarity). I’m here to tell you that breaking the silence around evals feels good and right. I’m also here to offer you a structure for how to do that.
Let’s start with your mentality.
1. Get out of your head the idea that you shouldn’t do this. Students fill out course evaluations too many times in their college careers not to know why they’re doing it and what impact it has on their campus community. I teach a lot of first-year students and I see it as my duty to help them navigate institutions and transfer their critical thinking to everything that they do at the university and beyond. Related to this, I believe in transparency. There are things that we know but our students don’t. It’s our job to share that knowledge so students can be as mindful as possible in their engagement with the university as an institution and system. Talking to students about evals is about transparency. There is no social justice on a college campus without institutional transparency.
2. Don’t go into this hoping to raise your scores. I have no idea if talking to students will mean higher numbers for you. My evidence is anecdotal. I haven’t conducted a scientific study on this. If you know of one, please share it with me! More than that, I think it’s a good idea to not go into a conversation about evals with a “gaming the system” mentality, as problematic as the system is. Go into it with a commitment to transparency and don’t focus on the numbers. Yes, the numbers are important and many careers depend on them. The long game is to change how the university assesses us. It shouldn’t be to bump up our scores.
Now, let’s get down to brass tacks: what you’ll do in your classroom. I suggest setting aside 20 minutes for the conversation.
1. Tell your students why you’re doing this. In general, I keep lecturing to a bare minimum, so I carry over this approach to the convo about evals. I do, however, introduce the topic by acknowledging that we’re about to talk about course evaluations, that we’re doing this because it’s a feature of their college experience, and that it’s good for them to transfer their critical thinking skills to an activity they’ll perform over and over again. I even give a rough estimate of the number of evals my students will fill out in their college career. For example, my university is on the quarter system and our students normally fill out two sets of evals (numerical evals online, standardized for the whole college, and narrative forms specific to the department). Just remember to keep this part brief so you can get to the interactive portion.
2. Find out what they know. I usually start by asking students to answer a question like “What do you think is the purpose of course evaluations?” When I did face-to-face instruction, I used the platform PollEverywhere and had students reply to the question anonymously, on their laptops or phones. When teaching on Zoom, I had students type into the chat, under their names. The lack of anonymity didn’t deter them and I saw responses from all or almost all of my students. I then read some of their answers out loud, identifying themes. If answers are notable, I ask students to elaborate. I also affirm much of what they say, add to their comments or gently correct them, being very clear that everything I say is to the best of my knowledge. For example, I’ll observe that a lot of students wrote that the purpose of course evals is to help the instructor improve the course and their teaching; I’ll then confirm that I do indeed take student feedback seriously and give them an example of how I’ve previously changed my course based on student responses. If students mention administrators and/or department chairs, I’ll explain to them a bit about the process by which we get evaluated, identifying the different individuals (not names but positions) and committees that use evals to make decisions about hiring, retaining and promoting faculty.
If none of my students mention supervisors/administrators/chairs, I ask another question: “Who do you think reads the summary reports of course evaluations?” Often, this is how I get students to realize that it’s not just the instructor or other students who read evaluation results, but also university management.
3. Get them to think critically. Again, I primarily teach first-year writing and my job is to facilitate the development of students’ critical thinking and writing. I also currently teach on the topic of social media. Both of these things set me up nicely for getting students to push against simplistic assumptions about anonymous course evaluations. It’s very likely that whatever it is you’re teaching, from Statistics to Philosophy, has some kind of readymade lead-in for getting students to engage with evals in the context of your classroom. What I do is pose another question, “What do you think are some potential limitations or problems that accompany anonymous course evaluations?” Students can answer this question anonymously or in the chat as well. This is when things get interesting! Students get to reflect on their own experience with these forms and be pretty real about it. They’ll often say that they fill out the forms quickly, barely thinking about them. I’ve heard a student say that when she likes a professor (notice the language!), she gives the same high score for every category, just as she gives the lowest score for every category to the professor she doesn’t like. Because in my course we talk about the opportunities and perils that come with communicating anonymously on digital platforms, students are quick to note that anonymity can result in cruel, vengeful commentary. They also usually state that grade expectations probably play a critical role in how students rate instructors.
Things get real spicy when students bring up the issue of BIAS. Why did I put that in all caps? Well, it comes up a lot in student responses! And in the context of our conversation, they’re eager to bring up bias but are less sure about how it plays out. This is when I bring out the big proverbial guns and introduce students to some of the research on bias in student evaluations of teaching. If you’re reading this essay, you are likely well aware of the countless studies and meta-analyses questioning whether evals actually measure teaching effectiveness. How many studies I discuss depends on how much time I have for this portion of the class (I never dedicate an entire session). One study I regularly reference is Boring, Ottoboni and Stark’s “Student evaluations of teaching (mostly) do not measure teaching effectiveness.” I project it and we talk about some of their findings, including “SET are biased against female instructors by an amount that is large and statistically significant” and “Gender biases can be large enough to cause more effective instructors to get lower SET than less effective instructors.” Another great study to use here is Chávez and Mitchell’s “Exploring Bias in Student Evaluations: Gender, Race, and Ethnicity.”
Now, you’re possibly worried that touching on the subject of bias will be alienating to students because they’ll feel that you’re leveraging the critique against them personally, but I haven’t found that students take a defensive position here. In my experience, they feel comfortable acknowledging this reality because it’s consistent with what we’re already discussing in our class and what they’re learning in some of their other courses, from Psychology to Ethnic Studies. To what extent that acknowledgement alters student behavior is a more complicated question, but research suggests that being aware of bias can have the short-term impact of offsetting it.
4. Bonus round: share your evaluations report with your students. Full disclosure, I did this after a student asked me what these looked like. So um, yeah, I projected one of my survey summary results to the whole class. Here’s the thing: at my university, these are available for all students and colleagues to see. It’s not as if my students couldn’t look them up. But my first-year students aren’t necessarily aware of this or in the habit of consulting them. Because I committed myself to transparency, I put my money where my mouth was and showed them the results. This gave them a preview of the questions and an opportunity to raise objections to them without my prodding. What I really wanted to highlight, though, was the emphasis on the cumulative score. I don’t know what your university survey summaries are like, but mine averages out all the scores and puts the number at the top. This means that we essentially get a mark, on a scale of 1 to 5. At my university, that number matters. The number is compared to department means and then college means. We are told that we’re not merely evaluated based on this score (and the rest of the survey results) but that narrative evals, our teaching materials and peer observations also matter. What we don’t know, however, is how much a committee weighs each category.
Look, once you put a grade on an instructor’s performance in the classroom, that grade is hard to look away from. We focus on it. Our chairs focus on it. The evaluation committees focus on it. I know this because it is brought up over and over again. Most of the colleagues I talk to about this stuff (which is many) think of themselves as individuals who either get good or bad scores. This is terrible for both teachers and students. It means that we’re focusing on the wrong things. It means that we’re operating under the fear of punishment. It means that the system is broken. Our students need to know about this. To my delight, they’re generally receptive to this knowledge and curious about the ways in which the sausage is made. That’s why I plan on continuing to hold this conversation in my classroom and to encourage other instructors, particularly contingently employed instructors, to do the same.
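If it helps to see just how reductive that top-line number is, here’s a minimal sketch of the kind of averaging I’m describing. The ratings, question labels, and department mean are entirely made up, and your institution’s actual formula and weighting are almost certainly different; this just shows how many individual judgments collapse into a single mark.

```python
# Hypothetical sketch of how a cumulative eval score gets produced.
# All numbers and question labels are invented; real summary reports
# and weighting formulas vary by institution.

def cumulative_score(responses):
    """Flatten every numeric response (1-5 scale) into one average mark."""
    flat = [score for question in responses for score in question]
    return round(sum(flat) / len(flat), 2)

# One instructor's responses: each inner list holds the ratings
# students gave on a single survey question.
my_responses = [
    [5, 4, 5, 3, 4],  # e.g. "The course was well organized"
    [4, 4, 5, 4, 3],  # e.g. "The instructor encouraged participation"
    [5, 5, 4, 4, 4],  # e.g. "Overall teaching effectiveness"
]

my_score = cumulative_score(my_responses)  # the single mark at the top
department_mean = 4.31  # made-up comparison figure from the report

print(f"My score: {my_score}")
print(f"Gap from department mean: {my_score - department_mean:+.2f}")
```

Fifteen separate judgments become one number, and that number becomes a gap from a mean, which is exactly the part of the report everyone ends up staring at.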
A few final suggestions.
Stay flexible. The conversation might take longer than you expect or go in directions you didn’t anticipate. I find that this is usually good. It means that the students are engaged and taking an active role in the discussion.
Take opportunities to sympathize. Our students are intimately familiar with how various, at times arbitrary or at least reductive, metrics determine their fates. For my first-year students, much of education has been about meeting standardized test targets and maintaining a very specific GPA. They sympathize and often draw these parallels themselves, knowing the pitfalls of numeric evaluation.
Acknowledge your discomfort if you’re feeling it. Every time I’ve held this conversation with my students, I’ve told them something like “This is very awkward for me to bring up but it’s important for you to be as informed as possible.” If you feel uncomfortable, which you likely will, it will probably come across, but that’s OK. Remember that vulnerability from the teacher is often met with vulnerability from the students. When you open up, they open up. If you’re sweating through this conversation, just say so! Remember, you’re doing it for transparency! Transparency and sweat are a better combo than we’ve been taught to believe.
OK, now I must return to my regularly scheduled grading session.
If you end up trying out this method or have experience in holding the conversation in your unique way, please share!