Autonomy in AI: a very human principle

Roberta Barone
The Stepstone Group Tech Blog
8 min read · Dec 1, 2022


When we think of ethical AI systems, algorithmic bias and fairness are likely the first topics that come to mind. But the principle of autonomy, sometimes excluded from ethical AI frameworks, is just as important. In this post we define autonomy in connection with our freedom to choose and to act upon our choices, and we explore how certain AI systems interact with it. To illustrate, we look at the pre-selection of content in recommender systems and its implications for autonomy.

What is autonomy?

Autonomy can be defined in different ways, but we all have a sense of what happens when we have it and when we don’t. It has something to do with our ability to act according to our self-determined choices, desires, reasoning, and inclinations; we can probably also think of examples both at the individual and collective level, like “an autonomous child” or “an autonomous region”.

There are many empirical studies supporting the connection between perceived autonomy and well-being, or life satisfaction, and they make for interesting reads [1]; furthermore, this principle crosses cultures and finds expression in various articles of the UN’s Universal Declaration of Human Rights [2].

Even without empirical proof or theoretical support, though, many would agree on the importance of having autonomy, and on the fact that more autonomy is usually better than less of it.

But what is autonomy, really? We can go on and on about it: we can even argue that everyone is determined by society and culture, or worse, by genetics, and thus real autonomy cannot exist. If this is starting to look a bit like the debate about free will, it’s because the two concepts overlap.

However, to avoid getting sucked into the rabbit hole, let’s just posit for now that, in general, philosophers distinguish two separate conditions that need to be present simultaneously to satisfy the principle of autonomy [3]:

  1. (a certain extent of) liberty to choose without controlling influences, and
  2. (a certain extent of) liberty to intentionally act upon those self-determined choices.

I add the qualifier “a certain extent” here because, in the context of this post, autonomy is to be understood in harmony with other ethical principles, for example the principle of non-maleficence. Institutional and legal limits on individual liberty serve to protect the autonomy of others, so that, in practice, no cartoon villain, however autonomous, would make the cut. As the old saying goes, one’s autonomy ends where another’s begins.

Despite his strong preference for doing exactly whatever he wants, Elon Musk is not the champion of autonomy we are looking for

When machines choose for us: Autonomy in AI systems

If we accept the premises above, when a technology impacts our freedom to (1) choose and (2) act upon our choices, it is impacting our autonomy.

Even “dumb” technologies can meddle with our autonomy in complex ways (to name one, cars give us freedom of movement while limiting the range of other transportation modes for us and others), but information technologies in general, and AI systems in particular, are especially well placed to challenge this principle through different mechanisms. In our digital day-to-day lives, recommender systems based on machine learning are the technologies that most interfere with our freedom of choice, and for this reason we will focus on them in the next section.

The way digital processes are designed, for instance through UX techniques, can also effectively “nudge” us toward a specific outcome and manipulate our choices, but I won’t include these techniques in the present discussion, for two reasons: first, because they mostly don’t require machine learning or automation, and second, because UX techniques can actually be leveraged to make recommender systems more supportive of users’ autonomy. UX “gone bad”, related to the aptly named dark patterns [4], is a fascinating topic that deserves a separate post.

I guess they were out of free unicorns (image source)

Recommender systems

The algorithms behind recommender systems were born out of the necessity to sort through the deluge of information on the Internet, with the intention of making it more relevant and useful, coupled with an interest in promoting certain results over others.

This can also be done through manual filters, but while those rely on users actively expressing their preferences to deliver a selection of results, recommenders infer these preferences, using statistical methods applied to large amounts of data collected about users (and sometimes their peer group), without the user necessarily having to signal any of them.

The lack of intentional expression of preferences is famously epitomized by TikTok, where a user’s inclinations are inferred mostly through implicit information about how they interact with the content on the app, to create a precisely personalised feed.
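To make this inference concrete, here is a minimal sketch of item-based collaborative filtering over implicit feedback, in Python. The interaction counts and function names are invented for illustration; production recommenders rely on far richer signals and models.

```python
import numpy as np

# Toy implicit-feedback matrix: rows are users, columns are items.
# Entries count interactions (clicks, views, watch-time buckets) that
# the user never explicitly declared as preferences.
interactions = np.array([
    [3, 0, 1, 0],  # user 0
    [2, 0, 0, 1],  # user 1
    [0, 4, 0, 2],  # user 2
    [0, 3, 1, 2],  # user 3
], dtype=float)

def item_similarity(matrix):
    """Item-item cosine similarity computed from the interaction columns."""
    norms = np.linalg.norm(matrix, axis=0, keepdims=True)
    normalized = matrix / np.where(norms == 0, 1, norms)
    return normalized.T @ normalized

def recommend(user, k=2):
    """Score unseen items by similarity to items the user interacted with."""
    scores = item_similarity(interactions) @ interactions[user]
    scores[interactions[user] > 0] = -np.inf  # hide already-seen items
    return list(np.argsort(scores)[::-1][:k])

print(recommend(user=0))  # preferences inferred purely from behaviour
```

Note that the user never tells the system what they like: the “preference” is entirely a statistical artefact of past behaviour, which is precisely where the autonomy question begins.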

Recommenders were conceived by researchers [5], and quickly found commercial applications, first in e-commerce and later in other digital platforms.

Yes, Amazon, you are spot on

Other recommenders that nearly anyone who is online in 2022 will have come across are content recommender systems for music, articles, posts, videos; people recommenders in social networks, dating sites, networking platforms; and job recommenders in online recruiting platforms.

Recommender systems in online recruiting

A great example of the abundance problem recommender systems tried to respond to comes from online recruiting. Candidates can now apply for jobs quickly and at almost no cost; while this is a welcome improvement, it can have the side effect of making it exceedingly easy to apply for multiple positions, sometimes regardless of how well the candidate matches the job description [6].

Under certain conditions the number of applications can get so high that it’s difficult for HR departments to go through all of them: candidate recommender systems, developed to address this issue, automatically sort through the pile and help recruiters by presenting them with a manageable number of candidates to choose from.

The challenges on the candidate’s end are similar. It’s hard to navigate the sea of job ads, now not even limited by geography thanks to the proliferation of remote working options; job recommender systems go through the available ads and select the ones that best match the candidate’s preferences and characteristics, according to the model.

At both ends, recommenders typically rely on a mix of intentionally given and implicitly inferred information about a candidate. This can affect autonomy in important ways when:

  1. We don’t have the possibility to opt out of the collection of implicit signals;
  2. We can’t check why the system offered certain results, or
  3. We can’t modify these results, in case they are not a good fit (a toy sketch of these safeguards follows the list).
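To make these three points tangible, here is a toy sketch of what such safeguards could look like in code. Every name in it (RecommendationSettings, Recommendation, filter_for_user) is hypothetical and illustrates the idea only; it is not any real platform’s API.

```python
from dataclasses import dataclass, field

@dataclass
class RecommendationSettings:
    """Hypothetical per-user autonomy controls."""
    collect_implicit_signals: bool = True               # 1. opt-out switch
    excluded_reasons: set = field(default_factory=set)  # 3. user feedback

@dataclass
class Recommendation:
    item_id: str
    reasons: list  # 2. why the system offered this result, shown to the user

def filter_for_user(recs, settings):
    """Apply the user's autonomy settings to a raw recommendation list."""
    kept = []
    for rec in recs:
        # 1. If the user opted out, drop results that exist only
        #    because of implicitly collected signals.
        if not settings.collect_implicit_signals and all(
                reason.startswith("inferred:") for reason in rec.reasons):
            continue
        # 3. Drop results whose reasons the user explicitly rejected.
        if settings.excluded_reasons.intersection(rec.reasons):
            continue
        kept.append(rec)
    return kept

recs = [
    Recommendation("job-42", ["matches skill: Python", "inferred: viewed similar ads"]),
    Recommendation("job-77", ["inferred: peers applied here"]),
]
settings = RecommendationSettings(collect_implicit_signals=False)
for rec in filter_for_user(recs, settings):
    print(rec.item_id, "shown because:", "; ".join(rec.reasons))  # 2. explanation
```

Running this keeps job-42 (it also has an explicit, skill-based reason) and drops job-77, which exists only because of inferred signals the user has opted out of.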

The added problem of bias

Even when we are happy to delegate the grunt work of sorting through candidates and jobs to a recommender, if the criteria employed by the system are not auditable at least to some extent, errors in the model can take time to spot, and this can lead to unfair results.

Candidate and job recommenders sometimes come under scrutiny when they are opaque, i.e. when they deliver results based on “black box” calculations that are difficult or impossible to explain, and can hide spurious correlations: correlations that hold statistically but carry no real-world meaning. An example often quoted in the literature about recruiting recommenders is the following: Amazon tested (and later discarded) a recruiting algorithm that used machine learning to infer which candidates would be successful if hired, based on the characteristics of successful employees. Unfortunately, those successful employees were mostly male. Even though the model did not take gender into account, the algorithm found a way around this and predicted success more often for male candidates, penalising candidates whose CVs contained proxies for gender, for instance candidates who went to women’s colleges or played in women’s sports teams.
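The proxy mechanism is easy to reproduce with synthetic data. The numbers below are invented and have nothing to do with Amazon’s actual data or model; they only show how a feature correlated with gender can carry the bias even after gender itself is removed from the inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical historical data mirroring the anecdote: gender is NOT a
# model feature, but one CV feature correlates with it, and the
# historical hiring label skews male.
is_female = rng.random(n) < 0.5
womens_college = is_female & (rng.random(n) < 0.4)    # proxy for gender
was_hired = np.where(is_female, rng.random(n) < 0.2,  # biased labels
                                rng.random(n) < 0.6)

# Any model trained on (womens_college -> was_hired) never sees gender,
# yet the proxy alone already predicts a lower hiring rate:
print("hire rate, proxy present:", was_hired[womens_college].mean())
print("hire rate, proxy absent: ", was_hired[~womens_college].mean())
```

Even this toy check makes the point: auditable criteria matter, because a model fitted to such labels will faithfully learn the proxy and reproduce the bias.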

The double ethical whammy to autonomy AND fairness makes recruiting recommender systems particularly sensitive. Because of this they are singled out by the EU AI Act (and the GDPR before it), and recently became the object of one of the first laws about algorithmic hiring, in New York City [7].

So how do we address this challenge? How can we support job seekers and employers with tools to navigate the large numbers of opportunities while respecting their autonomy?

Because this topic is so relevant for us at StepStone, it will be explored in a separate post.

Delegating choice: is it that bad?

There is no clear-cut answer. We might have a point in thinking that it is not such a big deal when YouTube serves us a playlist that we have not deliberately curated, when Amazon suggests our next read or Google gives precedence to a certain search result — we may even really like the suggestions. We would probably also recognise, though, that things change when we talk about different kinds of recommender systems, like job recommenders, as they can materially impact our livelihoods.

However, some researchers argue that there is a subtler way in which delegating our choices to automated systems can harm our autonomy and ultimately our well-being [8]. According to them, forming our preferences and making active choices are not innate but acquired abilities, and they wither if they are not exercised.

In this sense, losing the habit of choosing makes us worse at it, and the cognitive effort required becomes progressively more unsustainable, to the point where it is difficult to make any meaningful choices at all. It is a bit like giving up on the intentional formation of our identity, and this can lead to a loss of autonomy and, consequently, a diminished satisfaction with our lives. But there are ways to prevent this worst-case scenario, and to ensure technology works to make our lives easier while helping us flourish as human beings.

In the next few posts, we will talk about the state of research, and how recommender systems can be designed to support autonomy.

A caveat: as much as we like to think about technology as a magical tool that will eventually rid humanity of all its problems, the first step to respecting users’ autonomy in recommender systems has something to do with being transparent about the (very unmagical) statistical techniques used to select results, honest about the fact that results can be flawed, and humble about conceding that on some occasions users might be the ones who know best about their own preferences.

Notes and references

[1] The value of autonomy for the good life (Leonie C. Steckermeier, 2020)

[2] See for example the UN’s Universal Declaration of Human Rights, articles 3, 18, and 19

[3] Tom L. Beauchamp and James F. Childress, Principles of Biomedical Ethics (8th ed.), Oxford University Press, 2019; cited in Respect for Human Autonomy in Recommender Systems (Lav R. Varshney, Salesforce Research, 2020)

[4] “Dark patterns are user interface design choices that benefit an online service by coercing, steering, or deceiving users into making unintended and potentially harmful decisions” (Dark Patterns at Scale: Findings from a Crawl of 11K Shopping Websites, Mathur et al., 2019)

[5] A brief history of recommender systems (Zhenhua Dong et al., 2022)

[6] Hidden Workers: Untapped Talent (Fuller, J. et al., 2021, Harvard Business School)

[7] NYC Proposes Rules in Advance of 2023 Automated Employment Decision Tools Law (Mintz, Nov 2022)

[8] On the Ethics of Public Nudging: Autonomy and Agency (Schubert, 2015)

Read more about the technologies we use or take an inside look at our organisation & processes.
Interested in working at The Stepstone Group? Check out our careers page.
