The Ethical Dilemmas of Gathering and Presenting Gender Data on Employment Platforms

Nitzan Hallel
Aug 23

Recently, platforms like Instagram, Twitter and Slack began allowing users to share their pronouns in their profiles (he/him, she/her, they/them). What started with non-binary individuals wishing to identify themselves more accurately continued as a user-led initiative to show support for the LGBTQ+ community, and is now becoming standard practice adopted by the platforms.

In the context of an employment platform, this raises interesting ethical questions. While there is a risk that gender data would be used by hiring companies to discriminate against women and other gender minorities, it could also be used to support affirmative action and improve gender equality in recruitment. So, is it acceptable to ask people to provide their gender or pronoun on the profile they use to apply for a job?

Hiding gender to avoid discrimination

A 2014 study by Hays examined the role of unconscious bias in candidate selection decisions. A group of 1,029 male and female hiring managers were asked to review the CV of a candidate for a hypothetical Regional Sales Manager role and rate their skills. Half received the CV of ‘Susan Campbell’; the other half received an identical CV with the candidate’s name changed to ‘Simon Cook’. The study found that 62 per cent of respondents would interview Simon, but only 56 per cent would interview Susan.

This is a clear example of why candidates might not want to share gender information, or even their name, in an attempt to avoid unconscious bias and possible discrimination.

Should designers of employment platforms take a similar stance? Should they avoid collecting demographic data from candidates to help eliminate unconscious bias in candidate selection?

Choosing not to present gender information for this reason is in line with the ‘veil of ignorance’ theory of justice developed in the 1970s by John Rawls, who argued that if we were all blind to identity attributes such as race and gender, we would design a fairer society.

An example of how SEEK is currently helping protect candidates against discrimination is a feature developed in its Talent Search platform, a platform that allows recruiters to search and proactively connect with relevant candidates. A checkbox option allows recruiters to hide candidates’ names so they can focus on candidates’ experience and skills, helping reduce gender and ethnicity bias in the hiring process.

Targeting females to correct imbalance

An alternative approach suggests that collecting and presenting data about candidates’ gender can help to achieve more equitable outcomes.

In a recent study conducted by SEEK Lead UX Researcher, Caylie Panuccio, talent acquisition staff in large organisations reported that they have targets or goals in place to make sure they hire for gender balanced teams. To achieve these targets — for example, to ensure half the shortlisted candidates are female — knowing the candidates’ gender can be beneficial. Having gender data available can also help employers identify where in the recruitment process female candidates drop off.

In contrast to Rawls, some feminist thinkers argue that as a diverse society, we should not be blind to identity attributes; instead, we should recognise and celebrate differences and address each group’s unique needs. This creates a more equitable society.

So, given these conflicting viewpoints, should employment platforms hide candidate gender to prevent discrimination or unconscious bias? Or should they present this information to support affirmative action by companies?

Or is there a middle ground?

There are risks and benefits to both showing and hiding gender data. But what if we think about this in a more nuanced way, considering what gender data platforms should collect and present, how they do it, to whom they present these data and for what purpose?

Presenting gender data at an individual level in the candidate selection process could be risky because of unconscious bias. On the other hand, recruiters who wish to hire for diversity want to know who the individuals are so they can ensure a balanced shortlist. As a middle ground, what if, to begin with, gender information were only shared in aggregate statistical analysis, where candidates are not identified as individuals?
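To make the idea concrete, here is a minimal sketch of aggregate-only reporting. The candidate records and field names are purely illustrative, and the minimum-group-size threshold is an assumption: groups too small to hide an individual are folded into a combined bucket, so the report never points back to any one candidate.

```python
from collections import Counter

# Hypothetical candidate records; field names and values are illustrative only.
candidates = [
    {"name": "A", "gender": "female"},
    {"name": "B", "gender": "male"},
    {"name": "C", "gender": "female"},
    {"name": "D", "gender": "non-binary"},
    {"name": "E", "gender": None},  # chose not to disclose
]

def aggregate_gender(candidates, min_group_size=3):
    """Report gender only as aggregate counts, never per individual.

    Groups smaller than min_group_size are folded into 'other/undisclosed'
    so that no single candidate can be identified from the report.
    """
    counts = Counter(c["gender"] or "undisclosed" for c in candidates)
    report = {"other/undisclosed": 0}
    for gender, n in counts.items():
        if gender != "undisclosed" and n >= min_group_size:
            report[gender] = n
        else:
            report["other/undisclosed"] += n
    return report
```

The threshold mirrors the intuition behind k-anonymity: a count of one in a small shortlist is effectively an individual disclosure, so it should never surface on its own.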

Some employment platforms, for example, provide companies with aggregated insights about how their ads perform in terms of gender equality. This allows hirers to adjust their advertising to attract more diverse candidates. In a recent interview with a large tech recruiter, the participant told me that if she saw that her ads were attracting fewer females than the market, she would adjust her ads to highlight things that were more important to females such as flexible hours.

Collecting gender information directly from users allows users to self-identify, including in non-binary ways. However, some users may choose not to indicate their gender out of fear of being discriminated against. In that case, the aggregate data set may be partial, providing a less accurate representation of the gender distribution.

In an attempt to present a fuller data set, platforms may choose to infer all candidates’ gender based on information in their profile, like their first name. However, inferring gender from a person’s name involves the risk of labelling people against their own preferences, and not respecting gender fluidity and non-binary options. Further, there is the risk of simply getting it wrong, especially for gender-neutral names (e.g. Kim, Sam, Chris).

So which approach is better? Perhaps platforms can combine both: inferring from names and giving users the option to self-identify if they wish to. In this way the name-inference algorithm could improve and correct its assumptions based on the optional self-identified data. But we must acknowledge that this hybrid solution still risks mislabelling people.
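The hybrid approach can be sketched as a simple precedence rule. The name lookup table below is a stand-in assumption (a real inference model would be far more sophisticated, and still error-prone); the key point is that self-identification always wins, and ambiguous names yield “unknown” rather than a guess.

```python
# Illustrative-only lookup; stands in for a real name-inference model.
NAME_LOOKUP = {"susan": "female", "simon": "male"}

def resolve_gender(self_identified, first_name):
    """Prefer self-identified gender; fall back to name inference.

    Returns (gender, source) so downstream consumers know how reliable
    the label is. Gender-neutral names (e.g. Kim, Sam, Chris) resolve
    to (None, "unknown") rather than a wrong guess.
    """
    if self_identified:
        return self_identified, "self-identified"
    inferred = NAME_LOOKUP.get(first_name.lower())
    if inferred:
        return inferred, "inferred"
    return None, "unknown"
```

Carrying the source label alongside the value lets aggregate reports distinguish confirmed data from inferred data, which matters when measuring how accurate the inference actually is.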

What about companies that don’t have diversity hiring policies and are unaware of, or unable to make, D&I a priority? Should platforms still surface gender data to them? Even if platforms provide insights and help educate employers about writing gender-neutral ads, there is a risk that some employers will not want to be educated and will use the data in biased ways.

To mitigate this risk, platforms could introduce a mechanism where gender information is only shown to employers with a demonstrated commitment to gender equity in recruitment. For example, to view gender data, platforms could require companies to show evidence that they support gender diversity, complete dedicated training programs, or sign a statement committing to non-discriminatory use of gender information. Access to gender data could expire after several months to ensure ongoing compliance.
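Such a gating mechanism could look like the sketch below. The qualification criteria and the 180-day validity window are assumptions for illustration; the essential properties are that access is granted only after the checks pass, and that it lapses automatically rather than lasting forever.

```python
from datetime import datetime, timedelta

class GenderDataAccess:
    """Grant time-limited access to gender data for vetted employers.

    An employer qualifies only after completing training and signing a
    non-discrimination statement; access lapses after `validity_days`.
    Criteria and duration are illustrative assumptions, not policy.
    """

    def __init__(self, validity_days=180):
        self.validity = timedelta(days=validity_days)
        self._granted = {}  # employer_id -> timestamp of the grant

    def grant(self, employer_id, completed_training, signed_statement, now=None):
        """Record a grant only if every qualification check passes."""
        if completed_training and signed_statement:
            self._granted[employer_id] = now or datetime.utcnow()
            return True
        return False

    def has_access(self, employer_id, now=None):
        """Access holds only within the validity window of the last grant."""
        granted_at = self._granted.get(employer_id)
        if granted_at is None:
            return False
        return ((now or datetime.utcnow()) - granted_at) <= self.validity
```

Expiry forces employers to re-qualify periodically, which is how the “ongoing compliance” requirement above could be enforced in practice.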

Gender data can be used to influence policies to promote a more diverse and equitable workforce, in collaboration with companies, universities and government.

Aggregated data about gender imbalance can help target specific roles or sectors where policies or programs are needed to increase gender diversity. For example, hard evidence of the under-representation of women in tech and in STEM study led to the wide adoption of leadership programs for women in the field. Camp SEEK, for example, is a program to introduce the creative and varied careers available in the tech industry to girls and non-binary young people aged 14–16. Ongoing collection of gender data can help monitor the impact of such programs and facilitate improvements where necessary.

Gender equality indicators could also be displayed on companies’ profiles. Companies increasingly include diversity statements within their recruitment ads for competitive advantage. Gender data provided by platforms can help companies substantiate such statements. This could help candidates decide if they want to work for a certain company or not, based on its gender equality commitments and performance.

Conclusion

To conclude, gender equality in recruitment is a complex issue. Collecting and sharing gender data may be misinterpreted as enabling discrimination; however, when the risks and benefits are weighed and the right safeguards are in place, it could be a necessary step towards more diverse workplaces.

While here I focused on gender equality, similar complexities apply to collecting data about age, Aboriginal and Torres Strait Islander status and disability. Each requires further research and a sensitive design approach that includes consultation with community groups or organisations to arrive at the right solution.

This post is an invitation to start the conversation. Please share in the comments below if you have more thoughts on what designers in employment platforms should consider when thinking about this topic.

Here are a few interesting resources for further reading on this subject:

  1. A recent article from dscout outlines the complexity of gathering personal demographic information like ethnicity.
  2. WGEA Data Explorer: this tool presents gender equality data submitted by organisations that report to the Australian Workplace Gender Equality government agency.
  3. Ethical Product Design: Should We Make the Unconscious Conscious? This post explains the thinking that led to the development of the “Hide Names” feature in SEEK’s Talent Search platform.

Thanks to Rob Scherer, Liz Fritzlaff, Wlad Chagas, Daniel Jimenez Nassar, Leah Connolly and Chris Riley for their comments on an earlier version of this post.
