Image by Patricio González from Pixabay

AI Ethics at StepStone

Infusing ethics into AI systems as a stimulating intellectual, technical and organisational challenge

Roberta Barone
5 min read · Sep 5, 2022


Jobs are our job

At StepStone, we use artificial intelligence and data science to develop JobTech, the infrastructure supporting the digitization of the labour market and making it more functional. Through increasingly more accurate candidate-job matching and other technologies, StepStone’s products shape users’ job searches and recruiters’ candidate searches, potentially influencing the careers of many.

For most of us, our jobs play a key role in the formation of our identities; our livelihoods often depend on them; and our workplaces, real or virtual, can represent a relevant portion of our social interactions.

As a company working in digital recruiting, we are aware of the impact that our technologies can have on such an important aspect of our lives, and we want to make sure that this impact is positive.

To achieve this goal, we must look at our products through the lens of AI ethics.

What is ethics?

The word “ethics” comes from the ancient Greek ήθος (éthos), which originally meant something along the lines of “custom, habit, character” of a person or a group of people. Over time, ethics has come to indicate the science of governing our habits, actions and character, guiding us towards decisions that are morally good and right. Incidentally, the more clearly characterised word “moral” comes from the Latin mos, moris, also pointing to the same concepts of habit, custom and character.

So, ethics couples what we usually do with the concept of what is morally good and right. As any human who has ever lived knows, though, what is morally good and right often escapes a clear-cut definition and shared understanding, from the trivial (should Alice tell Adrien that she ate from his box of cereal this morning?) to the existential (should euthanasia be permitted and, if so, under what circumstances?), and this is the reason why ethics has evolved into a structured discipline which deals, in varying degrees of sophistication, with difficult moral questions.

What is AI ethics?

AI ethics is ethics applied to the field of artificial intelligence, and it has been defined as the branch of ethics that studies and evaluates moral problems related to data, algorithms and corresponding practices, in order to formulate and support morally good solutions (1).

In Europe, over the past few years, the different guidelines and interpretations of applied ethics have coalesced around a set of principles that have their origin in the Charter of Fundamental Rights of the European Union.

In particular, the EU has rooted its recommendations for Trustworthy AI in the following four ethical principles:

1. Autonomy
2. Fairness
3. Prevention of harm
4. Explicability (or explainability).

The fourth and last principle is the only domain-specific one, referring in particular to the opacity of AI systems. It has sparked a very rich research field, Explainable AI (XAI).

Ethics at the frontier of innovation in AI

At first glance, building products with AI ethics in mind might seem to stifle innovation and hinder financial growth. At StepStone, we believe the opposite is true.

Infusing ethics into AI systems is a stimulating intellectual, technical, and organisational challenge. This is reflected in the exploding body of academic research devoted to the study of ethical issues originating from the use of AI systems, and to the different techniques for mitigating them.

Source: Google Scholar

As an example that is very close to us at StepStone, let’s take the 2011 paper Evidence That Gendered Wording in Job Advertisements Exists and Sustains Gender Inequality, by Waterloo and Duke universities (2). The paper showed that job ads using “gendered” words (words that are male- or female-coded without being specifically masculine or feminine) deterred candidates of the other gender from applying to that position. Picking up from this study, Totaljobs (StepStone’s UK branch) performed its own analysis in 2018, which eventually formed the basis for the development of our Gender Bias Decoder, a tool that detects gendered words in job ads and provides alternative, more balanced wording.
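The core idea behind such a decoder can be sketched in a few lines of Python. Everything below is illustrative, not StepStone’s actual implementation: the word lists are a tiny sample of the male- and female-coded categories identified in the 2011 paper, and the alternatives table and function name are hypothetical.

```python
import re

# Tiny illustrative samples of male- and female-coded terms, drawn from the
# categories identified by Gaucher et al. (2011); a production tool would
# use much larger, curated lists.
MALE_CODED = {"competitive", "dominant", "leader", "ambitious", "decisive"}
FEMALE_CODED = {"supportive", "collaborative", "interpersonal", "nurturing", "loyal"}

# Hypothetical neutral rewordings for a few coded terms.
NEUTRAL_ALTERNATIVES = {
    "dominant": "leading in the market",
    "competitive": "driven",
    "nurturing": "attentive",
}

def decode_job_ad(text: str) -> dict:
    """Flag gendered words in a job ad and suggest more balanced wording."""
    words = re.findall(r"[a-z]+", text.lower())
    male_hits = sorted(set(words) & MALE_CODED)
    female_hits = sorted(set(words) & FEMALE_CODED)
    suggestions = {w: NEUTRAL_ALTERNATIVES[w]
                   for w in male_hits + female_hits
                   if w in NEUTRAL_ALTERNATIVES}
    return {"male_coded": male_hits,
            "female_coded": female_hits,
            "suggestions": suggestions}

report = decode_job_ad("We need a dominant, competitive leader "
                       "with strong interpersonal skills.")
```

Running this on the sample ad flags “competitive”, “dominant” and “leader” as male-coded and “interpersonal” as female-coded, and offers rewordings where the table has one. The real work, of course, lies in building and validating the word lists and alternatives, not in the lookup itself.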

This is a good case study of how the acknowledgement of an ethical issue by academic research led to social awareness and change through an innovative technical solution.

Incidentally, the Gender Bias Decoder tool brought visibility and press coverage to the company (3), which had a cascading positive effect on brand perception.

From isolated experiments to “ethics by design”

Building pilot products is a great starting point to experiment with AI ethics, but our goal is more ambitious: to design all our products with ethics in mind.

Not an easy task! The definition of AI ethics involves data, algorithms and corresponding practices. It shows how ethics is not something that exists in isolation, an “ethical” cog that can replace an “unethical” one, leaving a company’s processes undisturbed.

There are many situations where things can go wrong when looking at only one component. To give three short examples, a product can use anonymized, unbiased data, but use algorithms that disregard the autonomy of its users; another can have a fair decision-making system that feeds on historically biased data; a third can be built on ethical datasets and algorithms, but be completely opaque.

Ethics is thus a cross-departmental effort, and it requires time, coordination, and a shared basis for action. Data, legal, engineering and design teams must be on the same page in order to build ethically consistent products and move toward ethics by design.

Different case studies (4) show the best way forward to reach the goal: adopting an ethical framework or code of conduct; instituting ethics boards that function as both repositories of knowledge and enforcement bodies; creating a culture of AI ethics that permeates the company, starting from the onboarding phase; and ultimately leveraging the innovation brought by AI ethics to create shared messaging, communicated consistently by sales, PR, branding and marketing teams.

In the next posts we will explore different aspects of AI Ethics, from its philosophical meaning to its recent popularisation and the reasons for it, to the legislation it inspired and the best practices identified so far to operationalise it.

The first of this series of posts will focus on the ethical principle with the longest history, which recently took a prominent place in the debate about artificial intelligence: Autonomy.

Notes and references

(1) The Ethics of Artificial Intelligence: Principles, Challenges and Opportunities (Floridi, 2022)

(2) Evidence That Gendered Wording in Job Advertisements Exists and Sustains Gender Inequality

(3) See, for example: How hidden bias can stop you getting a job (BBC); The subtle way most job adverts discriminate against genders (Cosmopolitan); Can Diversity Checker Bots Help Level Gender Equality? (Forbes)

(4) See, for example: IBM case study (WEF, 2021); Microsoft case study (Markkula center for applied ethics, 2021)

Read more about the technologies we use or take an inside look at our organisation & processes.
Interested in working at StepStone? Check out our careers page.
