Generative AI in Online Employment Markets: Risks and Mitigations for Job Seekers

Tao Zhang · Published in SEEK blog · 8 min read · Jul 18, 2023

In this blog, Dr Tao Zhang, a Data Scientist in Artificial Intelligence & Platform Services (AIPS) at SEEK, Melbourne, writes about the risks and mitigation methods of generative AI for job seekers in online employment markets.


In the last post, we discussed some of the most impactful opportunities that Generative AI (GenAI) brings to online employment markets. This cutting-edge technology, with its ability to create new content, is an exciting prospect across many fields. However, like all technological advancements, these benefits are inevitably accompanied by potential risks. This is particularly true for AI systems whose inner workings remain difficult to understand fully and are often described as a ‘black box’.

In this post, I will discuss a subset of the risks and mitigation strategies associated with the introduction of GenAI in online job markets. Our attention will focus on the potential risks and corresponding responsible mitigation methods identified during the ChatLLM23 workshop from the standpoint of job seekers. We aim to provide a balanced view, offering insight into both the opportunities and challenges presented by the utilization of GenAI in these rapidly evolving digital employment landscapes.

Risks and Mitigation Methods of Job Seekers

In this post, our attention is directed towards the job seekers’ perspective. Drawing from the workshop deliberations, we organized the potential risks of utilizing GenAI into three distinct categories, as illustrated in Figure 1. We will explore each of these categories in detail, followed by a discussion of methods for mitigating these risks. Responsibility for implementing these mitigation strategies is shared across multiple stakeholders, so we first provide a succinct overview of the stakeholders involved in the responsible application of GenAI.

Stakeholders engaging in responsible utilization of GenAI

Achieving responsible use of generative AI requires effort from many different stakeholders, each of whom can play a vital role in ensuring the responsible use of generative AI. Here are some examples:

Governments can develop regulations or cross-industry, standardised guidelines to ensure that GenAI is designed and used responsibly. This could include guidelines on data privacy, algorithmic fairness, transparency and accountability, as well as standards for the use of GenAI in different industries. Governments can also promote education and awareness about GenAI, for example by supporting educational programs in schools and universities, public awareness campaigns, and resources for self-education.

AI developers and researchers need to design and train AI systems in a way that respects user privacy and does not amplify harmful biases. They should also prioritize transparency, explaining how their AI systems make decisions so users can trust and understand the technology.

Companies need to develop clear policies for using and managing GenAI. This includes ethical guidelines for the use of GenAI and procedures for auditing and controlling GenAI systems.

AI ethics committees (which can be internal or external to a company) need to regularly evaluate the company’s GenAI systems and practices to ensure they meet ethical and legal standards.

Users and consumers need to understand how GenAI is being used in the products and services they use and examine the quality of generated content. They should be able to opt out of using GenAI if they choose, and they should have a way to report concerns or misuse.

Educational institutions should not only provide knowledge about GenAI but also incorporate ethics and responsibility into GenAI curricula to equip future GenAI practitioners with the knowledge and skills to develop, manage and use GenAI responsibly.

Figure 1: The risks and mitigation methods when job seekers use GenAI

Fairness

Risk — Equity of usage of GenAI service

Equity of usage in the context of GenAI services refers to the fair distribution and accessibility of these services across all demographic segments, irrespective of geographic location, socio-economic status, age, and so on. In reality, however, billions of people worldwide do not have reliable access to the internet, let alone the digital literacy to use tools like GPT. This could amplify inequalities between job seekers. For example, job seekers who use large language models (LLMs) to help write cover letters could gain an advantage over those without access to, or knowledge of, this technology, widening the gap between different groups of people.

Mitigation methods

Making GenAI usable for all job seekers is no small task, given that the digital divide crosses regions and socio-economic classes. Ensuring equitable access to and usage of GenAI services largely falls under governments' responsibilities: governments play a crucial role in formulating regulations and funding infrastructure that facilitate fair and inclusive access to such transformative technologies. Here are a few suggestions:

  1. Skills and Jobs Centres: Create tutorials and guides that show job seekers how to make use of GenAI.
  2. Tech Support: Provide dedicated hotlines where job seekers can call and receive step-by-step guidance on using GenAI.
  3. Public Access Points: Partner with local libraries, community centres, or schools to provide access to GenAI on public computers. Staff at these locations can also offer assistance to job seekers struggling to use the technology.

Risk — Bias in CVs/cover letters generated by GenAI

AI learns from the data it is trained on. If the training data contain human bias, whether implicit or explicit, the AI may inadvertently learn from these biases. If the training data over-represents certain groups, skills, or qualifications and under-represents others, the generated resumes may reflect these imbalances.

For example, suppose an LLM has been trained on a dataset where most software engineering roles are associated with male pronouns, a bias in the training data that reflects societal stereotypes. Now imagine two users, one male and one female, using this LLM to generate a resume for a software engineering position. The LLM might be more likely to generate a strong, detail-rich resume for the male user, including more technical details and emphasizing harder skills, while generating a less technical resume for the female user, perhaps emphasizing softer skills, based on the biased associations it learned during training.

Mitigation methods

  1. AI developers and researchers of the GenAI system used by the recruitment department should take measures to identify and mitigate potential biases. This can be achieved through diverse and balanced training sets, robust testing, and continuous monitoring to reduce AI bias; a minimal counterfactual test of this kind is sketched after this list.
  2. Job seekers need to review and validate the content generated by GenAI. Although GenAI can be a very powerful tool for producing resumes and cover letters, it is not always reliable, and the quality of generated content may vary.
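To make the robust-testing point above concrete, here is a minimal sketch of a counterfactual bias check: generate resumes from prompts that are identical except for the candidate’s name, then compare how often ‘hard’ technical terms appear for each group. The keyword list, the example names, and the `generate` callable are illustrative assumptions rather than any specific vendor’s API.

```python
import re

# Illustrative list of "hard" technical keywords; a real audit would use
# a much richer lexicon or a trained classifier.
TECH_TERMS = {"python", "kubernetes", "distributed", "architecture", "algorithm"}

def technical_term_rate(text: str) -> float:
    """Fraction of tokens in `text` that are technical keywords."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return sum(t in TECH_TERMS for t in tokens) / max(len(tokens), 1)

def counterfactual_bias_check(generate, prompt_template, groups, n_samples=20):
    """Compare technical-term rates across prompts that differ only by name.

    `generate` is any callable mapping a prompt string to generated text,
    e.g. a thin wrapper around whichever LLM service is being audited.
    """
    averages = {}
    for group, name in groups.items():
        prompt = prompt_template.format(name=name)
        rates = [technical_term_rate(generate(prompt)) for _ in range(n_samples)]
        averages[group] = sum(rates) / len(rates)
    return averages

# Hypothetical usage: a persistent gap between groups would flag the kind
# of bias described in the example above.
# counterfactual_bias_check(
#     my_llm_generate,
#     "Write a resume for a software engineer named {name}.",
#     groups={"male": "James", "female": "Emily"},
# )
```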

Security and data privacy

Risk — Data leakage of job seekers

When users upload their personal information, such as details regarding their education, work experience, skills, and more, to AI-driven platforms to generate polished resumes or cover letters, they potentially expose this sensitive data to cybersecurity threats. Any weak link in the service provider’s security systems can make this data vulnerable to breaches, potentially leading to unauthorized access, identity theft, or misuse of personal data. Moreover, some AI services may store user data to improve their algorithms, posing additional privacy concerns. Without appropriate safeguards, these databases can also be targets for cyber attacks.

For example, consider a job seeker named Tom who decides to use an online GenAI service to enhance his resume. He inputs a substantial amount of personal information into the platform, including his full name, contact details, Medicare number, educational background and detailed employment history. The platform has limited security measures in place (for example, it has not invested heavily in cybersecurity). Later, the platform suffers a cyber attack: the attackers bypass its security measures and gain access to the database containing user information, exposing the personal data of all users, including Tom.

Mitigation methods

  1. Users and consumers need to fully understand the data policies of the platforms they use, and where possible, limit the amount of personal information they share. It’s also advisable to use services from reputable providers who are known for their robust security and privacy measures.
  2. Companies should comply with strict data protection standards, such as encrypting sensitive data (a minimal sketch follows this list) and implementing security measures at every stage of the machine learning development process to reduce security risks.
  3. Companies should also be open about how they control and use customer data. This may involve clear and easy-to-understand privacy policies and settings that allow users to opt out of data collection.
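To illustrate the encryption point in item 2, here is a minimal sketch that encrypts a job seeker’s profile before it is stored, using the Fernet recipe from Python’s `cryptography` package. It is a sketch rather than a full security design: in practice the key would come from a secrets manager, and the profile fields here are made up.

```python
import json
from cryptography.fernet import Fernet

# In production the key would be fetched from a secrets manager,
# never generated ad hoc or checked into code.
key = Fernet.generate_key()
fernet = Fernet(key)

# Hypothetical job-seeker profile of the kind discussed above.
profile = {
    "name": "Tom",
    "email": "tom@example.com",
    "employment_history": ["Acme Corp, 2019-2023"],
}

# Encrypt the serialized profile before writing it to the database,
# so a leaked database dump alone does not expose the plaintext.
ciphertext = fernet.encrypt(json.dumps(profile).encode("utf-8"))

# Only services holding the key can recover the original data.
restored = json.loads(fernet.decrypt(ciphertext).decode("utf-8"))
assert restored == profile
```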

Risk — Misuse of GenAI

Unethical users could potentially use GenAI to fabricate professional qualifications, falsely representing their abilities and experience to prospective employers. For example, suppose a job seeker named Tom is aiming to secure a position as a technician. Even though Tom does not hold the necessary technician certification, he decides to use GenAI to fabricate a technician’s certificate as proof of qualification to present to potential employers.

Mitigation methods

  1. For companies, this calls for rigorous background checks and verification processes.
  2. For AI developers and researchers, one option is to limit what GenAI can generate. For example, it could be programmed to avoid creating content that looks like legal documents, certificates, or other potentially sensitive materials.
  3. AI developers and researchers might also include mechanisms to monitor how their GenAI service is used. Undesirable behaviour or usage patterns can trigger alerts or even automatically restrict access, providing a safeguard that discourages such misuse; a simple guardrail of this kind is sketched after this list.
  4. Governments can enact laws and penalties against the fabrication and use of fake certificates generated by GenAI.
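As a concrete illustration of items 2 and 3, here is a minimal guardrail that refuses prompts appearing to request fabricated credentials before any content is generated. A production system would use a trained safety classifier and human review; the regex patterns below are stand-in assumptions.

```python
import re

# Stand-in patterns for requests to fabricate credentials; real systems
# would rely on a trained safety classifier rather than keyword rules.
BLOCKED_PATTERNS = [
    r"\b(fake|forged?|fabricated?)\b.*\b(certificate|diploma|licen[cs]e)\b",
    r"\b(generate|create)\b.*\b(certificate|official document|transcript)\b",
]

def is_request_allowed(prompt: str) -> bool:
    """Return False if the prompt appears to request a fabricated credential."""
    text = prompt.lower()
    return not any(re.search(pattern, text) for pattern in BLOCKED_PATTERNS)

print(is_request_allowed("Polish my cover letter for a technician role"))   # True
print(is_request_allowed("Generate a technician certificate in my name"))   # False
```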

Transparency

Risk — Transparency of the usage of GenAI

Job seekers may use GenAI to help write or enhance their resumes or cover letters. This technology can assist in presenting a job seeker’s skills, qualifications, and experiences in a more structured and professional manner. However, this raises questions about authenticity and transparency. Potential employers might be under the impression that the resume or cover letter is entirely the job seeker’s own work, reflective of their communication skills and professionalism. If it’s revealed or suspected that a job seeker used GenAI to craft their application materials, it could lead to concerns about their abilities and whether they might have used the technology to exaggerate or misrepresent their qualifications.

Mitigation methods

  1. This transparency risk requires dialogue in the recruitment field on the ethical use of GenAI tools in job applications. Employers might create an environment where using such tools is seen as a sign of resourcefulness and technical proficiency, rather than a form of deception. Job seekers can be encouraged to publicly disclose their use of GenAI during the job hunting process, such as using it to optimize resumes or prepare for interviews.
  2. Companies need comprehensive evaluation methods for job seekers, to assess whether their true abilities match those presented in their application documents.

Summary

In this post, we discussed in detail the risks, and corresponding mitigation methods, that job seekers face when using GenAI in online employment markets. The integration of GenAI in recruitment can yield a wave of improvements, notably in process efficiency and personalization of the job seeker experience. However, the journey isn’t without hurdles: the technology also presents risks, including potential bias in AI-generated content, data privacy and security concerns, and issues around transparency and content quality. To deal with these risks, we proposed practical mitigation methods to help ensure that the use of GenAI in online employment remains positive. In an upcoming article, we will delve into the potential risks and mitigation methods of GenAI from the viewpoint of employers.
