Generative AI in Online Employment Markets: Risks and Mitigations for Employers

Tao Zhang
Published in SEEK blog · 10 min read · Aug 27, 2023

In this blog, Dr Tao Zhang, a Data Scientist in Artificial Intelligence & Platform Services (AIPS) at SEEK, Melbourne, writes about the risks posed by generative AI in online job markets and discusses strategies for mitigating these risks for employers.

Photo generated by DALL·E

In the last post, we discussed the potential risks and mitigation methods that Generative AI (GenAI) may bring to online employment markets. Our main emphasis was on how it impacts individuals seeking employment and the ways to effectively shield them from these risks. Now, it is time to shift the spotlight onto the other crucial player in this game — the employers.

In this post, we will dive into the perspective of employers, understanding how they, too, might encounter certain challenges in the wake of GenAI’s integration into the digital employment scene. We will look at the specific risks and mitigation methods that were pointed out during our in-depth discussions at the ChatLLM23 workshop. From this viewpoint, our goal is to form a balanced perspective, shining light on not just the possible hurdles but also the exciting opportunities that can sprout from GenAI’s implementation.

Risks and mitigation methods for employers

Based on the discussions in the workshop, we realised that using GenAI in the hiring process is not as clear-cut as one might think. There are, in fact, risks associated with its use that need to be addressed to maximise its potential and secure a fair and efficient recruitment process. Since GenAI is a subset of AI, some risks and their mitigation methods are inherent to AI in general (such as transparency risk), while others are unique to GenAI. Through the discussions, we categorised these risks into five distinct groups, as represented in Figure 1. We then suggest mitigation methods for each risk, aimed at the different stakeholders engaged in the responsible use of GenAI.

Figure 1: The risks and mitigation methods when employers use GenAI

Fairness

Risk — bias in job ads generated by GenAI

GenAI can help write job ads. However, a significant risk from this application is the potential for bias in job ads generated by GenAI. For instance, if the data used to train the AI includes job ads that use language skewed towards a particular gender, age group, or ethnicity, the AI might continue to use similar biased language in the job ads it generates. This could lead to job ads that discourage certain groups of people from applying. Moreover, if the AI is trained with data that suggests certain roles are predominantly filled by job seekers with a specific background or set of skills, it might overemphasise these factors in the job ads, potentially discouraging a diverse range of job seekers.

For example, suppose an AI system has been trained on a dataset of job ads for software engineer positions from the past few decades, a period in which the tech industry has been predominantly male. The original ads might frequently use words like “rockstar” or “ambitious” which, according to linguistic studies, can inadvertently appeal more to male applicants. If the GenAI system learns from this data, it could generate job ads for software engineer positions that continue to use this language. Thus, even though the employer has no intention of discouraging female applicants, the GenAI-produced job ad could subtly perpetuate a gender bias, making the position seem less appealing to potential female applicants. This kind of unintentional bias could limit the diversity of applicants and perpetuate existing disparities in the industry.

Mitigation methods

  1. AI developers and researchers need to carefully curate the data used to train/fine-tune the GenAI system. This should involve examining and adjusting the training data to ensure it is diverse and free from biased language. The more varied and representative the training data, the less likely the GenAI is to pick up and perpetuate biases.
  2. AI developers and researchers could create a feedback loop to collect and act on feedback from hirers and job seekers. This feedback can help identify biases and adjust the AI’s outputs accordingly.
  3. Hirers need to review the generated job ads carefully to make sure they promote equal opportunity and are welcoming to a diverse range of potential applicants; a simple automated language check that can support this review is sketched below.
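
To support the review in point 3, here is a minimal sketch of an automated check that flags gender-coded wording in a job-ad draft. The word lists are illustrative assumptions, not a vetted lexicon; a production checker would rely on a maintained, regularly reviewed one.

```python
import re

# Illustrative, non-exhaustive word lists. Linguistic studies associate
# terms like these with gender-skewed appeal in job ads; these sets are
# assumptions for demonstration, not a validated resource.
MASCULINE_CODED = {"rockstar", "ninja", "ambitious", "dominant", "competitive"}
FEMININE_CODED = {"supportive", "collaborative", "nurturing", "dependable"}

def flag_coded_language(ad_text: str) -> dict:
    """Return any gender-coded terms found in a job-ad draft."""
    words = set(re.findall(r"[a-z]+", ad_text.lower()))
    return {
        "masculine_coded": sorted(words & MASCULINE_CODED),
        "feminine_coded": sorted(words & FEMININE_CODED),
    }

draft = "We want an ambitious rockstar engineer to join our team."
print(flag_coded_language(draft))
# {'masculine_coded': ['ambitious', 'rockstar'], 'feminine_coded': []}
```

A flagged term is a prompt for a human rewrite, not an automatic rejection; context still matters.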

Risk — bias in finding suitable job seekers

GenAI in the recruitment domain can change the way suitable job seekers are identified for job roles. For example, GenAI can be used to create models that understand the job requirements and assess the match between job seekers’ qualifications as presented in their resumes and the job requirements. However, an inherent risk related to this technology is the potential bias in the job seeker selection process. If the GenAI system has been trained on data that include biased information, it may inadvertently replicate these biases when evaluating job seekers.

For example, suppose GenAI’s training data consists of resumes from past successful job seekers who predominantly come from certain schools, industries or geographical regions. In that case, GenAI might develop a bias favouring such backgrounds. As such, it may underestimate the value of job seekers with diverse, non-traditional experiences and qualifications, even though they can bring unique perspectives and skills to the role. Also, if GenAI is trained to focus on specific keywords or phrases in resumes, this could work against job seekers who have the required skills but express them differently.

Mitigation methods

  1. These potential biases stress the importance of careful management and oversight of GenAI in recruitment. AI developers and researchers should ensure that the data used to train or fine-tune the GenAI is unbiased and representative.
  2. Companies should establish a regular evaluation method for their systems, involving comprehensive audits. Such audits could dive into the system’s decision-making processes and data sources and assess any patterns of bias; one simple audit metric is sketched after this list. This helps identify and rectify biases, ensuring the fair use of GenAI.
  3. Companies should adopt a balanced approach combining AI-driven screening with human judgement to promote a fair and diverse selection process. For example, in our previous post, we proposed an aptitude-based approach to select job seekers based on estimated capability and commitment to perform or learn a task.
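
To illustrate the audits in point 2, the sketch below computes one widely used fairness signal: the ratio of the lowest to the highest group selection rate, which the “four-fifths” rule of thumb flags when it falls below 0.8. The records and group labels here are synthetic, and a real audit would examine many more metrics and attributes.

```python
from collections import Counter

def selection_rates(candidates):
    """Compute per-group shortlisting rates from audit records.

    `candidates` is a list of (group, shortlisted) pairs, e.g. a periodic
    dump of the screening system's decisions joined with audit metadata.
    """
    totals, shortlisted = Counter(), Counter()
    for group, selected in candidates:
        totals[group] += 1
        shortlisted[group] += int(selected)
    return {g: shortlisted[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest group selection rate.

    The 'four-fifths' rule of thumb treats a ratio below 0.8 as a
    signal that warrants closer investigation.
    """
    return min(rates.values()) / max(rates.values())

# Synthetic audit records: (group label, was the candidate shortlisted?)
records = [("A", True), ("A", False), ("A", True),
           ("B", False), ("B", True), ("B", False)]
rates = selection_rates(records)
print(rates)                    # roughly {'A': 0.667, 'B': 0.333}
print(disparate_impact(rates))  # 0.5, below 0.8, so investigate further
```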

Explainability risk

Risk — failure to explain the response produced by GenAI

One challenge with using GenAI is the risks associated with explainability. Specifically, the difficulty in comprehending and explaining the responses or decisions generated by GenAI. When GenAI is used to evaluate applications, match job seekers to job roles or answer job seeker queries, it uses complex algorithms or models that make extensive calculations based on the data it has been trained on. While this can greatly speed up the recruitment process and help identify patterns or insights that might be missed by humans, it also has a downside. The decision-making process of GenAI can be highly complex and not easily understandable to humans. This lack of explainability can lead to misunderstandings and frustration among job seekers who receive responses or decisions from GenAI systems. They might question the fairness of the selection process and whether their application was fully and accurately evaluated.

For instance, if a job seeker’s application is rejected by a GenAI-driven recruitment tool, it may not be entirely clear why this decision was made. The tool might have factored in numerous variables from the job seeker’s resume, job description and training data to arrive at this conclusion. However, the exact reasoning behind the decision might be challenging to decipher and explain.

Mitigation methods

  1. AI developers and researchers can design GenAI with greater explainability. A widely adopted approach is to equip GenAI with mechanisms that refer to the underlying reasons or data influencing the generation of specific content. In other words, the AI should not generate output in a vacuum; it should also provide insight into the information that shaped its outcomes.
  2. AI developers and researchers can employ a method known as ‘chain-of-thought prompting’ to have GenAI lay out its decision-making procedure. To produce an explanation for a given decision, the GenAI is prompted to walk through the sequence of steps it took, identifying which features (e.g. skills, qualifications) were most influential. For instance, the AI might explain that it ranked a job seeker highly because their skills closely matched the skills of successful job seekers in its training data.
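
Here is a minimal sketch of what such a prompt might look like, assuming access to any instruction-tuned LLM; the wording, step structure and rating scale are illustrative assumptions, not a prescribed template.

```python
def build_explanation_prompt(job_description: str, resume: str) -> str:
    """Assemble a chain-of-thought style prompt that asks the model to
    show its reasoning before giving a match verdict."""
    return (
        "You are assisting a recruiter.\n\n"
        f"Job description:\n{job_description}\n\n"
        f"Candidate resume:\n{resume}\n\n"
        "Think step by step:\n"
        "1. List the key requirements of the role.\n"
        "2. For each requirement, quote the resume evidence for or "
        "against it.\n"
        "3. Only then give an overall match rating from 1 to 5, with a "
        "one-line reason."
    )

# The resulting prompt can be sent to any instruction-tuned LLM. Because
# the model must cite evidence per requirement, the features driving its
# rating (skills, qualifications) become visible and reviewable.
```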

Service quality risk

Risk — a well-crafted CV may contain unreliable information about a job seeker

In the recruitment landscape, GenAI might introduce risks to service quality, specifically regarding the limits of a CV in fully representing a job seeker’s actual proficiency and potential. A CV, as a fundamental part of the job application process, provides a condensed overview of a job seeker’s qualifications, experiences and skills. When job seekers use GenAI to enhance their resumes, these documents may become more polished and professional, potentially helping the job seekers stand out in the eyes of recruiters.

However, a well-crafted CV, created with the aid of GenAI, might not accurately reflect a job seeker’s true abilities, potential and fit for a role. It may overemphasise specific skills or experiences, while under-representing others, especially those that are harder to quantify, like interpersonal skills, problem-solving abilities or cultural fit within a team. Further, job seekers may use GenAI to inflate their qualifications or to align their profiles more closely with job descriptions. This could leave recruiters having difficulty assessing a job seeker’s actual capabilities.

Mitigation methods

  1. Companies need a comprehensive recruitment process that extends beyond just reviewing CVs. Other evaluation methods, such as interviews, skill tests, assignments and reference checks, could be employed to gain a more accurate understanding of a job seeker’s abilities and fit for a role. Many jobs require practical skills, so skill-based tests or tasks can be used to verify the competencies listed on a resume; depending on the role, these can be subject-specific quizzes, coding tasks, design tasks and so on (a minimal cross-check of claimed versus tested skills is sketched after this list).
  2. Job seekers are encouraged to be more transparent regarding their use of GenAI. Being transparent and straightforward about when and how GenAI is used to build resumes will give more context and reduce the questions hirers have.
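
As a sketch of the cross-check suggested in point 1, the function below compares skills claimed on a resume against scores from skill-based assessments. The skill names, scores and pass mark are illustrative assumptions; real scores would come from whatever assessment platform the company uses.

```python
def verify_claims(claimed_skills, test_results, pass_mark=0.7):
    """Compare skills claimed on a CV with scores from skill-based tests.

    `test_results` maps skill -> score in [0, 1]. Skills without a test
    result are reported separately so a human can follow up.
    """
    verified, unverified, untested = [], [], []
    for skill in claimed_skills:
        score = test_results.get(skill)
        if score is None:
            untested.append(skill)
        elif score >= pass_mark:
            verified.append(skill)
        else:
            unverified.append(skill)
    return {"verified": verified, "unverified": unverified,
            "untested": untested}

print(verify_claims(["python", "sql", "kubernetes"],
                    {"python": 0.9, "sql": 0.5}))
# {'verified': ['python'], 'unverified': ['sql'], 'untested': ['kubernetes']}
```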

Risk — dependence on GenAI

Overreliance on GenAI could also pose a service-quality risk. When recruiters begin to rely too heavily on GenAI, they might neglect their professional instincts and experience, aspects that play crucial roles in effective recruitment. GenAI can automate many aspects of the hiring process, including writing job ads, engaging with job seekers via chatbots and evaluating job seekers, but it cannot fully understand the complexities and nuances of human behaviour, personality and potential. For instance, a job seeker might not have all the listed qualifications for a role but might be a perfect fit due to their unique blend of soft skills, adaptability and potential to grow. Furthermore, overreliance on GenAI might lead to a lack of personal touch in the recruitment process. Human interaction is essential in making job seekers feel valued and in assessing aspects such as cultural fit and interpersonal skills, which an AI system might not fully grasp.

Mitigation methods

  1. Companies should strike a balance between leveraging the advantages of GenAI and benefiting from the irreplaceable human touch in hiring decisions.
  2. Hirers should be trained to understand the limitations of AI and how it can best support their work. They should be encouraged to use AI as a guide rather than an infallible decision-maker.

Transparency risk

Risk — lack of transparency about the use of GenAI

Transparency is a critical component of ethical business practices. In recruitment, the use of GenAI introduces new challenges to maintaining transparency. The risk emerges when job seekers are unaware they are interacting with an AI model. For instance, they may receive automated responses to their queries or follow-ups without knowing these communications are generated by GenAI. This could lead to misunderstandings, as job seekers might assume they are interacting with a human recruiter. They may also feel deceived or uncomfortable if they find out later that AI was used without their knowledge.

Mitigation methods

  1. Companies should be transparent about their use of GenAI during the recruitment process. They should inform job seekers about the extent to which AI is involved, how it influences decision-making, and how their data is used and protected. This upholds ethical standards and promotes trust and understanding between job seekers and the company.
  2. Companies may also need to establish guidelines or policies on disclosing the use of such tools and on what level of AI assistance is acceptable; one way to build disclosure into a chatbot is sketched below.
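
As one way to implement such a policy, here is a minimal sketch of a chatbot wrapper that discloses AI involvement up front. The `generate_reply` function is a hypothetical placeholder for whatever calls the GenAI backend, not a specific product API, and the disclosure wording is illustrative.

```python
AI_DISCLOSURE = (
    "You are chatting with an AI assistant. A human recruiter reviews "
    "shortlisting decisions, and you can ask at any time how your data "
    "is used."
)

def respond_with_disclosure(generate_reply, user_message: str,
                            first_turn: bool) -> str:
    """Prepend a disclosure notice to the first AI reply in a chat.

    `generate_reply` stands in for the function that calls the GenAI
    backend; it is an assumption for this sketch.
    """
    reply = generate_reply(user_message)
    return f"{AI_DISCLOSURE}\n\n{reply}" if first_turn else reply

# Example with a stubbed backend:
print(respond_with_disclosure(
    lambda m: "Thanks, your application was received.",
    "Did you get my application?", first_turn=True))
```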

Security and privacy risk

Risk — privacy violation

GenAI tools can scan vast amounts of data online to evaluate job seekers. They can examine digital footprints across social media platforms, blogs, online publications and so on to form a comprehensive profile of a job seeker. However, if misused, they can violate a job seeker’s privacy. For instance, a recruiter using these AI tools might dive deeper into a job seeker’s personal life by accessing information that is not relevant or appropriate to the hiring process. They could uncover and consider personal details such as political views, religious beliefs or family matters, which are considered irrelevant in recruitment due to ethical considerations and anti-discrimination laws.

Mitigation methods

  1. Companies need to set boundaries on how GenAI is used in recruitment. Policies should clearly define what kind of online data can be considered during recruitment, with a clear emphasis on job-related information only (see the allow-list sketch after this list).
  2. Companies must also ensure they comply with relevant data protection and privacy laws and maintain transparency with job seekers about what data they collect and how it is used in the hiring process.
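
One way to enforce such boundaries is to encode the policy directly in the data pipeline. The sketch below drops any scraped attribute that is not on a job-related allow-list before a profile reaches a screening model; the field names are illustrative assumptions.

```python
# Fields the recruitment policy permits the screening tool to see.
# Exclusion is the default: anything not listed here is dropped.
ALLOWED_FIELDS = {"name", "skills", "work_history", "education",
                  "certifications"}

def apply_data_policy(profile: dict) -> dict:
    """Keep only allow-listed, job-related attributes of a profile."""
    return {k: v for k, v in profile.items() if k in ALLOWED_FIELDS}

scraped = {
    "name": "J. Doe",
    "skills": ["python"],
    "political_views": "...",  # never reaches the screening model
    "religion": "...",         # likewise dropped by default
}
print(apply_data_policy(scraped))
# {'name': 'J. Doe', 'skills': ['python']}
```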

Risk — private data leakage from training/finetuning datasets

Companies sometimes need to fine-tune GenAI models on specific datasets to enhance utility and accuracy. In recruitment, this data might include resumes, cover letters or employee records, all containing detailed, private information about individuals. However, this process can inadvertently lead to privacy breaches. For example, if a GenAI model has been fine-tuned on data containing private information, it might reproduce or display these details when generating content.

Mitigation methods

  1. AI developers and researchers need to anonymise and aggregate data as much as possible before using it to train or fine-tune a GenAI model; a minimal redaction pass is sketched after this list.
  2. Companies should have strict data use policies in place and ensure they comply with all relevant data protection regulations.
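
As a sketch of the redaction step in point 1, the snippet below replaces two obvious identifier types using regular expressions before a resume enters a fine-tuning dataset. The patterns are deliberately minimal; a real pipeline would layer a dedicated PII-detection tool and human spot-checks on top.

```python
import re

# Minimal redaction patterns for demonstration only: they catch common
# email and phone formats but will miss names, addresses and edge cases.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s()-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace obvious personal identifiers with placeholder tokens
    before a document is added to a fine-tuning dataset."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact Jane at jane.doe@example.com or +61 3 9123 4567."))
# Contact Jane at [EMAIL] or [PHONE].
```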

Summary

In this post, we delved into some of the potential risks of using GenAI in online employment from a hiring perspective, along with corresponding mitigation strategies. This list is not exhaustive, and we encourage readers to suggest further potential risks and mitigations. Addressing these risks becomes crucial as GenAI becomes more integral to the hiring process. To ensure a positive impact, we recommend adopting responsible mitigation tactics. The key to harnessing the benefits of GenAI while minimising risks lies in managing AI responsibly and by intent: adherence to ethical standards, robust privacy and security protocols, and a sustained dedication to transparency and fairness.
