Ratemyprofessors.com … Should You Really Trust It? Think Again.

Now in my third year of university, I have relied heavily on ratemyprofessors.com when creating my schedules. In my experience, the site is a useful tool for choosing courses: it provides ratings from students who have (supposedly) taken the course, giving pre-enrolled students a glimpse of what to expect.

In brief, Ratemyprofessors.com provides college students with lists of professors from many different universities. Each professor can be rated on overall quality, helpfulness, clarity, easiness, and interest. Students are also given the option to rate the professor’s level of “hotness” (irrelevant to course content or teaching ability), indicate whether a textbook was used, report the grade they received, and so on. When rating a professor’s helpfulness, clarity, and easiness, students are given a scale resembling a six-point bipolar Likert scale, with 0 being the lowest rating in the category and 5 the highest. However, instead of every item ranging from “strongly disagree” to “strongly agree,” each item has its own response anchors. Easiness, for instance, runs from 0 (no description indicated) through “Hardest thing I’ve ever done,” “Makes you work for it,” “The usual,” and “Easy A,” up to “Show up and pass,” whereas Helpfulness runs from 0 (no description indicated) through “No help here,” “You have to beg for help,” “If you ask for help, it’s there,” and “Most likely to help,” up to “Saved my semester.” This inconsistency across scales can confuse raters, which in turn can affect their responses. Lastly, raters can be more specific through an open-ended question asking them to provide written detail about the professor and/or the course (in 350 characters).

Of course there are benefits to such a rating site: it provides students with firsthand opinions about potential courses and professors, helping them determine whether a course fits their academic needs and preferences. However, after learning more about questionnaires and reliability in PSYC 406 (Psychological Tests) at McGill, I have begun to question whether these ratings are truly reliable.

With that being said, some critics consider the website’s reliability questionable for multiple reasons. First and foremost, I have noticed that many professors have mixed reviews: some students rate the professor as amazing on every item, while others claim the exact opposite. This inconsistency across individual students’ responses was my first indicator that the site lacks reliability. Ideally, if the items in the rating scale captured a professor’s real teaching ability, we would see relatively consistent scores on all five items. Given such low inter-rater reliability (disagreement among raters and a lack of homogeneity in their responses), one must consider that these ratings may not accurately represent a professor’s teaching abilities. It is also easy for a student to overlook the number of responses behind each professor’s score. One professor may be rated extremely highly on the basis of only three ratings, which is nowhere near enough information on which to base a decision to take a class, while another may receive a mediocre rating based on over one hundred. The number of ratings is therefore an essential statistic to examine when reviewing professor scores.
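To make this concrete, here is a minimal sketch (the ratings and professor labels are entirely made up for illustration) of the two statistics a cautious reader might check before trusting a score: the spread of the ratings, which signals inter-rater disagreement, and the number of ratings behind the average.

```python
from statistics import mean, stdev

# Hypothetical overall-quality ratings on the site's 0-5 scale.
prof_a = [5, 5, 4]                              # only 3 ratings, all glowing
prof_b = [4, 3, 5, 2, 4, 3, 1, 5, 4, 3] * 12    # 120 mixed ratings

for name, ratings in [("Prof A", prof_a), ("Prof B", prof_b)]:
    # n = how much evidence the average rests on;
    # sd = how much the raters disagree with one another.
    print(f"{name}: n={len(ratings)}, "
          f"mean={mean(ratings):.2f}, sd={stdev(ratings):.2f}")
```

Prof A looks better on the raw average, but three ratings say very little; Prof B’s large standard deviation is exactly the kind of rater disagreement (low inter-rater reliability) the site never surfaces next to the headline number.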

Furthermore, there are many potential sources of bias in the ratings. For example, a student may have disliked a professor on a personal level and therefore given a subjective rather than objective evaluation. Any such bias skews the results, obscuring whatever information could have been extracted.

In an attempt to combat these many sources of bias, the site has established guidelines informing raters about what is and is not acceptable in a rating. This may mitigate the site’s low reliability, although I personally doubt that the majority of raters read these guidelines before rating.

As a result, before treating the ratings on Ratemyprofessors.com as a reliable measure of a professor’s teaching skills, one has to consider the unreliability of these subjective ratings. The site is a tool that provides some value, but each student is different, so the ratings should be treated only as a secondary source when gathering information about courses and making enrollment decisions.
