“What Did You Like Least About This Instructor?” Student Evaluations and the Pimping Out of Higher Education

“Would You Recommend This Class?”

This is one of the first questions universities ask students on end-of-semester evaluation forms — even for mandatory classes, which students have no choice but to take and which grad students often have no choice but to teach. In most cases, this is like a restaurant critic asking a vegan if they would “recommend” the veal chop, and then trashing the restaurant when they say “no.”

Fact: students’ answers to questions like these can directly affect their teachers’ careers. Universities often turn to student evaluations when assigning classes, distributing teaching awards, considering professors for tenure or raises, and even hiring new faculty.

Unfortunately, students rarely have these consequences in mind when they evaluate us. In fact, when asked to evaluate a class, undergraduates are often guided by the basic pleasure/pain principle: in most cases, instances of “liking” or “disliking” have far more sway than “learning” or “not learning.” For example, colleagues of mine who teach 8 a.m. classes frequently receive lower scores on the “recommendation” question, as do those who teach unpopular (but required) subjects. Is this a reflection on the teacher’s skill? Of course not — but it will be treated as one.

Many educators have spoken out (often anonymously) about the fear evaluations can engender at particular institutions. Some professors have learned to equate challenging their students (i.e., the very task for which they’re hired) with risking their careers. In cases like these, evaluations are policing difficulty in the classroom, damaging the fabric of the educational system by privileging student satisfaction over learning.

A slew of recent studies has prompted much discussion of the inadmissibility of student evaluations. These studies have found (among other things) that evaluations “are more sensitive to students’ gender bias and grade expectations than they are to teaching effectiveness” and that “gender biases can be large enough to cause more effective instructors to get lower [scores] than less effective instructors” (Boring, Ottoboni, and Stark, Jan. 2016). In other words, student evals hinge on clear personal biases that result in lower “effectiveness” scores not only for very conscientious graders, but also for ALL WOMEN INSTRUCTORS (gender pay gap, anyone?).

This is not to say that all teachers with great evaluations are layabouts, or that all teachers who get bad evals are suffering under the yoke of integrity, but rather to point out a general finding: evaluations (as they are currently applied) are worse than useless — they can actually be dangerous. Moreover, this danger stretches beyond the careers of individual instructors and threatens both to hinder the progress of gender equality and to lower the caliber of instruction your kids can expect when they go to college.

A Recalibration

It’s not that undergrads are unable to evaluate teachers; the problem is that they’re unable to evaluate teachers as teachers. This is, in large part, because they’re being asked to evaluate them as products.

This should come as no surprise. There are precious few of us left in public education who haven’t heard a student say something like: “I’m paying for this class; give me the grade I need.” Considering the commercialization of higher education (in which college degrees have become like fuel or food — indispensable commodities without which people are barred from even an averagely decent life), students are preconditioned to evaluate the whole college experience based on whether or not it performs “as advertised.”

So unless your university markets itself as a no-guarantees thunderdome in which accolades are hard-won, this usually means a few things:

  • Did the grade I got match the grade I wanted?
  • Was the teacher strict about course policies/grading?
  • Did I find classes boring?

These are questions students know how to answer — and these are the questions they DO answer, regardless of what they’re being asked. They have purchased a product called “an education,” and (for what it cost) they want what they ordered.

But it’s not their fault, really, is it? Universities encourage students to view their educations (teachers included) as products through the very questions they ask. Most standardized evaluations include questions like “What did you LIKE most about this instructor?”, “What did you LIKE least about this instructor?”, and, of course, “Would you RECOMMEND this class?”

Replace “class” and “teacher” with “Bluetooth speaker” or “hair dryer,” and it’s no different from shopping on Amazon. How are students to know that they might be ruining someone’s career?

Moreover, even if schools did ask students to evaluate their teachers’ skill — in other words, to evaluate their teachers as teachers — they wouldn’t be able to. How could they possibly? We cannot expect students to assess pedagogical goals — goals that we ourselves only fully grasped after years of graduate training — and to do so while trying to learn the course material themselves. That’s unfair and absurd, and we shouldn’t be asking it of them.

So what should we be asking our students?

Student feedback is valuable; there’s no question about that. In particular, there is a short-answer segment on most evaluations — a segment in which students describe (in their own words, not in multiple-choice bubbles) what they found interesting, distinctive, helpful, and/or unhelpful about the class. Some schools offer teachers the option to add questions to this section, and I think I’m justified in saying that all instructors appreciate, and have benefited from, reading some of these comments. This kind of feedback is vital for keeping our courses fresh and student-oriented.

Unfortunately, this is the only part of evaluations that universities tend not to care about (or even read). Instead, they look at the statistics generated by the multiple-choice section — the section that asks questions like “Would you recommend this class?”

Let’s take that question — “Would you recommend this class?” — and break it down. What information does it presume to uncover? There are no guiding criteria whatsoever: would the student recommend the class based on…what? Its cost? The teacher’s sense of humor? Its meeting time? Its length? How easy it was to get a good grade? How much they learned? Where it meets on campus? Whether or not there’s a convenient sandwich shop on the way? So many things (few of which have any discernible link to education) factor into answering this question that it ends up asking precisely nothing. The data it yields are necessarily indistinct, because the question has no specificity whatsoever — and yet universities use it to rank instructors specifically in order of their “effectiveness.”

If you flubbed a quiz question on medieval Europe, nobody would use it to fail you in algebra class — but this is precisely the logic at play in student evaluations.

So instead of asking students anything-goes, personal-preference questions like “Would you recommend this class?”, why not give them a specific voice by inviting them to be course contributors? When asked to add my own personalized questions to evaluations, I include things like:

“Did you find [specific text] useful for learning [specific skill]?”

Or:

“Is there a particular in-class activity you found useful, or can you think of one that future students might benefit from?”

The advantage of questions like these is threefold:

  1. They enfranchise students by positioning them as authors (not just consumers) of the course.
  2. Answers can offer specific course improvements (e.g.: keep or cut this text, do more or less of this activity, etc.).
  3. Their specificity minimizes vague and useless comments — when you’re asked about one particular detail, it’s a lot harder to waffle on about the early class meeting time, the instructor’s fashion sense, or how you wish you had gotten an A.

In closing, we must recalibrate not only the content of student evaluations, but also the uses to which institutions put them. When assembled correctly, student evaluations ARE valuable documents, with enormous potential for improving our classes and giving our students a voice. They are NOT, under any circumstances, real, informed assessments of our effectiveness as teachers, and they must not be used as such.