Too Important To Automate

The case for AI-free zones (and saving our morality from the future)

Say you work in Human Resources at a large corporation and get roughly a hundred applications on your desk every day. Would you appreciate a pre-sorting mechanism that helps you filter out the 80% of applications riddled with bad spelling? Would you bother sending those 80% a rejection note, or use a software feature to do so automatically? What if you had to handle not a hundred but a thousand applications a day? At what point would you let the software send successful applicants a contract, negotiate their salaries, and hire them all automatically? What if the software also offered to identify inefficient employees? What if it found that Human Resources staff are not really that necessary anymore? If it fires you, who is responsible — the program, the manufacturer, or you, for installing it in the first place? Would it feel different to be fired by a program than by your supervisor? Why?

Smart futures

What might seem like a far-fetched future scenario is the reality we're already living in. ResumeterPro, one of many software firms selling so-called 'Applicant Tracking Systems', claimed that 72% of the resumes their software handles are filtered out before a human even gets to see them. That was one year ago. Last month — a mere ten months later — a Japanese insurance company announced plans to lay off almost 30% of its workforce and replace them with IBM's Watson, which will determine payment amounts to policyholders.

These examples are just a small glimpse of how artificial intelligence (AI) and smart technologies will impact our society. We're steering into a future that abounds with smartness: smart assistants, smart inboxes, smart cars, smart contracts, smart drugs, smart homes, smart cities. And while the application of AI in many of these areas promises great value in terms of efficiency, we would be well advised not to confuse smart with better.

The duty of prudence

To see why this is an issue, it helps to take a closer look at what we mean by 'smart' exactly: artificial intelligence is intelligent because it learns. The most widely applied (and probably most hyped) of the machine learning techniques these days is so-called deep learning: an artificial neural network that works by generalising probabilities from looking at (a lot of) data.
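To make this concrete, here is a minimal, purely illustrative sketch in Python (using scikit-learn) of the kind of learning described above. The hiring scenario, feature names, and data are invented for illustration and are not taken from any real screening product; the point is only that such a model abstracts whatever patterns, including biased ones, are present in the historical decisions it is trained on.

```python
# Minimal illustrative sketch (not any real screening product):
# a small neural network that "learns" who gets hired purely by
# generalising patterns from past decisions.
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical historical data: one row per past applicant.
# Columns: [years_of_experience, number_of_typos, months_unemployed]
X_past = np.array([
    [5, 0, 0],
    [2, 7, 12],
    [8, 1, 2],
    [1, 9, 24],
    [6, 0, 1],
    [3, 5, 18],
])
# Labels: 1 = was hired, 0 = was rejected (by human recruiters).
y_past = np.array([1, 0, 1, 0, 1, 0])

# The network retrospectively abstracts general rules from these outcomes...
model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X_past, y_past)

# ...and applies them to a new applicant as a probability.
new_applicant = np.array([[4, 2, 6]])
print(model.predict_proba(new_applicant))
# Whatever biases shaped the historical labels are now baked into
# that probability; the model has no notion of why it decides as it does.
```

The sketch has no moral intent anywhere in it: it simply reproduces the statistics of past decisions, which is exactly why the question of how and what such systems learn matters.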

Given the anticipated impact it will have on virtually all levels of our future society, it would be highly irresponsible of us to be negligent about how and what it learns exactly. There is a palpable chance that the principles learned by our smart companions do not exactly meet our moral and ethical beliefs. And that can be a serious problem: examples range from lewd chat bots ("trained" by equally lewd interlocutors), to Google Ads for high-paying jobs not being displayed to women, to algorithms replicating racist patterns in criminal justice. The severity and potential ramifications of flawed AI are alarming — and the fact that most of the AI in question is kept undisclosed as a corporate secret does not help the matter.

Thus, prudence is indispensable when designing and applying AI, and public and democratic scrutiny is needed if we are to live up to our moral responsibility to strive towards the world we understand as desirable. We — companies, judiciaries, policy makers, but also users, citizens, and "operators" of smart products and tools in general — would be well advised to embrace this basic principle of responsibility and the inalienable duty of prudence that comes with it.

The case for AI-free zones

There is a greater catch, however. And it applies even if we one day had "flawless" AI, acting in complete concordance with our moral beliefs in every single case:

I argue that we should never allow any AI to make decisions that affect a person’s autonomy, because doing so would inevitably disrespect that person’s human dignity.

This, I claim, is because AI is genuinely incapable of treating a human with respect by morally valuing their dignity. Since allowing it to violate human dignity would be morally unacceptable, it follows that whenever people's autonomy is at stake, we are morally obliged to decide personally rather than automatically. This means we are obliged to establish AI-free zones: areas too important to automate. Let's take a closer look at these arguments, which draw heavily on Kant's concept of morality.

The duty of respect

It is safe to say that when we refer to a human's autonomy, we implicitly refer to their dignity, since autonomy — the capacity for self-determination, self-fulfilment, and free choice of one's destiny — is an integral trait of dignity. Furthermore, human dignity demands that a person is always treated as an end in itself, rather than only as a means to some other goal. This moral way of valuing a person is what we call respect. Acting upon this understanding is thus a moral virtue and duty, and intending to live up to it can be considered a moral act.

Since only rational agents can form their own moral beliefs and intentionally act upon them, only they can perform moral acts. Conversely, if a non-rational actor acts identically, it might produce the same functional outcome, but the action lacks moral value.

This is the case with AI, and more specifically with the deep learning approach illustrated above: it works by observing the world (data) and retrospectively abstracting general principles. Moral intent is replaced here by inferred probabilities of accuracy, making its decisions both highly functional and amoral. AI thus cannot be understood as a moral agent and is therefore inherently incapable of moral acts — including the act of respecting another person as an end in itself.

In short: every human is entitled to be treated with respect. Respect is a moral virtue that only rational agents are capable of achieving. AI is not a rational agent and hence cannot treat humans with respect. Such respect is, however, mandatory whenever a person's autonomy is interfered with. Now, where does this leave us?

Striking a blow for Human Dignity

In order to defend human dignity, it is mandatory for us as a society to reserve decisions that interfere with a person's autonomy to humans (rational agents) exclusively. This is needed to protect human dignity from being disregarded by our smart companions, which are incapable of treating it morally.

Which decisions and areas this "interference with autonomy" covers in detail remains to be discussed, but that such areas exist is indisputable, obliging us to mark out 'safe spaces' for human dignity to thrive in a smart future society. In other words: let's have AI-free zones!

Reasonable Doubt

Before we discuss the extent and demarcation of these areas, let's quickly review some potential objections to my argument.

Edginess is everywhere

First, one might argue that moral decision making is only needed in so-called 'edge cases': extreme scenarios in otherwise harmless applications. The famed autopilot that has to choose between harming its passengers and harming a bystander is an example of such a case. The argument here is that these edge cases do not justify restricting AI altogether, because while they can conceivably arise in any field, their actual likelihood of occurrence is vanishingly small. Allowing them to hinder greater benefits would be inappropriate and thus unreasonable.

Edge cases point to the notion of moral dilemmas: contradictions between two applicable maxims that make acting upon both of them mutually exclusive, and thus impossible. However, such dilemmas are in fact rather common in our society: from the underlying social principles at work when hiring an applicant (don't we restrict applicants in their freedom of self-development by not giving them a job?), to making ethical decisions about patient management at a hospital (which patient should be treated first, and why?), all the way to questions of good governance and social justice when, say, managing public spending (should we fund a new school or subsidise cancer research?).

Comparing the incomparable

The greater problem here is thus not insignificance but incommensurability: the value at stake arises from the moral beliefs of a rational agent and is invaluable, resisting any common measure.

While morally unsolvable, these problems have to be dealt with practically. As a society we do so by giving humans the freedom to act upon their own moral judgements — and holding them legally (not morally) accountable if their judgement conflicts with our laws. This way, instead of imposing one universal and coherent moral code, we leave moral responsibility (and the dilemmas that come with it) with the agents themselves. This neatly preserves the moral freedom of the agent by allowing for de facto relative morality and discourse — a practical and robust way for a society to deal with moral dilemmas.

AI, however, functions by calculating probabilistic results. When confronted with the value of human dignity, it thus has two options: neglecting it altogether, or representing it as a discrete, computable figure. Such a quantified representation, however, would strip dignity of its incommensurability and put it in relation to other quantities — violating the supreme value of human dignity, which is not relatable, measurable, or computable.

Both of the alternatives AI has when confronted with human dignity — neglecting it or quantifying it — would thus disrespect its inherent incommensurability, again making AI unfit to deal with dignity.

Fuzzy lines

Calling for rational agents whenever moral decisions are involved could, however, lead us to abandon AI altogether, as there is a moral dimension behind practically every decision we make. One might say it is virtually impossible to draw the line between morally sensitive and morally safe decisions, and that establishing AI-free zones would therefore be impractical.

This is a valid concern, as we're dealing with a recursive problem here: the definition of 'what is okay to automate', or rather 'what decision involves a moral judgement', is itself a moral judgement. The line distinguishing AI-free areas from AI-fine areas is thus hard to draw if we want it to be consistent across various situations — it is fuzzy and hard to pin down. How to deal with this?

As with all moral challenges, we must direct the problem back to ourselves, asking whether we as moral agents consider it justifiable and ethically appropriate to employ AI in a specific case — or to refrain from doing so. While this does not 'let us off the hook' of moral responsibility, making that judgement does require information and transparency about the matter at hand. All the more reason to make sure our AI is transparent, well documented, and comprehensible. Open Source licensing and thorough public review processes, by the way, can be valuable tools to this end.

Implications

These are ambitious challenges, but it has been done before: our society has dealt effectively with many other ethical judgements, implementing practical mechanisms to handle them collectively and evolving shared legal understandings of what is acceptable — and what is not. The authorisation and regulation of pharmaceutical drugs is just one example among many.

  • Preservation Areas
    Declaring zones where human dignity is presumably affected by decision making could be a first step. These areas would likely include:
      • labor: decisions related to employing and paying staff, as well as defining working conditions
      • education: decisions affecting the self-development potential of pupils, including grading them
      • health: ethical decisions about the treatment of patients
      • justice: any legal assessments, especially those concerning sentencing and punishment
      • culture & social policies: decisions concerning public funding and subsidies for social groups, cultural goods, and public policies
      • environment: decisions affecting the welfare and autonomy of present and future generations
  • Institutional support
    Providing institutions of appeal for people who feel they have been treated in an undignified manner by an AI. These should be legally embedded and have the power to overrule, and declare legally void, any decision made by an AI. Existing ethics committees can be a good reference in this regard.
  • Informed decision making
    In these areas at least, any AI should undergo specific assessment. This does not mean it has to be forbidden altogether, but moral responsibility must be carried by a rational agent, even if their decision is informed or assisted by AI. Again: comprehensible and transparent processes are morally mandatory here. No black boxes!
  • Mandatory public domain for AI
    As a general measure to allow for public scrutiny, one could argue for a binding law requiring any AI applied in the public realm to be licensed as Open Source. This could be paired with a mandate to implement opt-in mechanisms wherever possible.
Our smart future is right ahead. Let's make sure it's a responsible future too, so that we can morally enjoy it as well.