He said / She said: Gendered Language in Psychological Tests

Response bias is a central issue in test construction. According to a recent study by Vainapel et al. (2015), an overlooked form of response bias is the use of masculine generics, which involves employing the masculine form of words as generic terms intended to refer to both men and women. To illustrate, imagine a survey administered to both men and women that utilizes the pronoun “he”, but never “she.”

Gender-neutral language can be achieved by including both the masculine and feminine versions of words. The solution therefore seems straightforward: why not just employ “he / she” or “they” in all test questions? For a language such as English, where few words are gendered, the occasional slash or pluralization is enough to arrive at a gender-neutral test. However, in languages like French or Hebrew, most parts of speech are gendered, meaning that verbs, adjectives, pronouns, and nouns each have a masculine or feminine version. Consequently, constructing a gender-neutral test becomes far more complicated.

A critical question emerges: how does the use of masculine generics in questionnaires and surveys affect psychological testing? Vainapel et al. (2015) investigated this issue using two versions of a self-report test measuring motivation. One version was written in masculine-generic language; the other was adjusted to be gender-neutral. Both were in Hebrew and administered to college men and women.

Overall, Vainapel et al. (2015) found that, relative to questionnaires phrased in a gender-neutral manner, masculine-generic questionnaires biased women’s self-reported motivation: women reported lower levels of intrinsic goal orientation and task value on the masculine-generic version. This finding has important implications for the field of psychological testing: the use of masculine generics undermines a test’s validity.

An interesting follow-up question is why gendered language biases test responses. Vainapel et al. (2015) suggest two possibilities: women may identify better with phrases that apply to both men and women, thus increasing self-report accuracy, or the use of masculine generics may diminish women’s situational motivation. Additionally, I think it would be worthwhile to explore the link between gendered language response bias and stereotype threat, as well as sexism and gender stereotypes more broadly.

Given these concerns for test validity, future research should expand upon and correct the limitations of Vainapel et al.’s (2015) study. For example, the motivation test used by Vainapel et al. was translated from English to Hebrew without assessing the translated test’s psychometric properties. A necessary line of research is thus to investigate whether Vainapel et al.’s findings replicate across a variety of tests in different languages, measuring constructs with differing degrees of gender relevance, and including tests that employ only feminine language.

To understand the scope of the problem, it is also necessary to assess how common the use of masculine-generic language actually is in psychological testing, especially across languages. I would conjecture that the use of male-only language in questionnaires has declined as our society has become more sensitive to gender equality and diversity. For example, during a recent class project that involved completing twenty tests developed by my peers, I noticed no gendered language. Moreover, in my own questionnaire, I intentionally strived to phrase questions in a manner open to gender diversity. However, as stated above, gender-neutral language is considerably easier to achieve in English than in more heavily gendered languages. Consequently, research in this area should also consider less tedious solutions to masculine generics than including both the feminine and masculine versions of all adjectives, verbs, nouns, pronouns, etc. in every test question.

Overall, researchers should take into account and continue to investigate the effects of gendered language on test validity. It is surprising that prior to Vainapel et al. (2015), no studies explored gendered language as a form of response bias. What other types of response bias are we failing to consider?

Works cited:

Vainapel, S., Shamir, O. Y., Tenenbaum, Y., & Gilam, G. (2015). The dark side of gendered language: The masculine-generic form as a cause of self-report bias. Psychological Assessment, 27, 1513–1519. https://doi.org/10.1037/pas0000156
