Do Robots Act for Reasons?

There are reasons why robots and other machines behave as they do at any given time. Knowledge of how they were programmed, combined with knowledge about the environment in which they behaved, will give us their structuring and triggering causes, respectively. Information about each of these is, individually or jointly (depending on our standpoint and interests), crucial to the explanation of their actions.

I have argued elsewhere that the reasons why we act should not be conflated with the reasons for which we act, but all I shall presume here is that the latter are at most a subset of the former. If I miss a lecture because I overslept, my failing to wake up is a reason that explains why I didn’t make it on time, but it is not a reason for which I missed the lecture, in the sense in which oversleeping might be a consideration I act upon (as in the case where I go to see my GP because I’m constantly oversleeping). Our question, then, is whether robots can act in the light of considerations which they take to favour their course of action, viz. agential reasons.

The question is important because if machines are going to make important autonomous decisions for us, it matters that those decisions are made for reasons. The argument for this is simple: decision-makers ought to be accountable for their actions, and accountability is inextricably tied to the practice of offering reasons for one’s decisions. The ability to act for a reason is thus a necessary (but not sufficient) condition for being given the responsibility of decision-making. Without it, humans may wish to consult machines, but they should stop short of allowing them complete choice over any non-trivial actions.

How would we know whether any given robot is acting for reasons? We could of course ask it why it acted as it did and see if it offers an explanation in terms of its own reasons. If it persistently fails to do so, the signs aren’t good. But if it manages to offer convincing answers, we can progress to thinking about whether or not its claims are veridical (clearly any machine can be programmed merely to mention reasons).


What might machine explanations in terms of their agential reasons look like? Revealing the algorithm behind the agency in question would produce an explanation of the robot’s behaviour, but it would not be evidence of its having had a reason for that behaviour. For that, it would need to render its behaviour intelligible by telling the end user what the reasoning behind it was. It would be a mistake to get bogged down here in philosophical debate over whether or not machines can reason. There is clearly a sense in which they can, one strong enough to allow for sensible questions about machine reasoning.

To give a simple example, a search engine might ask the user whether they meant something different from what they typed. There are various reasons why it might ask such a thing. Perhaps the original string of words typed by the user would have produced no hits, or far fewer hits than the engine’s suggestion. Or perhaps the suggestion is simply a more popular search term with other users. Or again, perhaps the machine’s behaviour is the result of a specific rule to intervene with a particular suggestion in response to one or more keywords, e.g. Google’s tweak to bring up the Samaritans’ phone number in response to queries about suicide methods.
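To make that third sort of cause concrete, here is a minimal, purely hypothetical sketch in Python of how a keyword-triggered intervention might sit alongside ordinary “did you mean” logic. It is not Google’s actual implementation; the function, the trigger terms, and the helpline banner are illustrative assumptions only.

```python
from typing import Optional

# Purely hypothetical sketch; not any real search engine's implementation.
CRISIS_KEYWORDS = {"suicide", "kill myself"}        # assumed trigger terms
HELPLINE_BANNER = "Samaritans: call 116 123 (UK)"   # assumed intervention text

def suggestions(query: str, hit_count: int,
                popular_alternative: Optional[str]) -> list[str]:
    """Return the messages shown above the ordinary results for a query."""
    messages = []

    # Rule 1: a hard-coded intervention fires whenever a trigger keyword appears.
    if any(keyword in query.lower() for keyword in CRISIS_KEYWORDS):
        messages.append(HELPLINE_BANNER)

    # Rule 2: offer "Did you mean ...?" when the query yields few or no hits
    # and a more popular alternative is available.
    if popular_alternative and hit_count < 10:
        messages.append(f"Did you mean: {popular_alternative}?")

    return messages

# Example: both rules fire, so the helpline appears above the reworded suggestion.
print(suggestions("painless suicide methods", hit_count=3,
                  popular_alternative="suicide prevention"))
```

The point of the sketch is only that knowing which rule fired gives us a structuring-cause explanation of the engine’s output; it tells us nothing, by itself, about whether the system acted in the light of a reason.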

Such explanations could reveal why the engine behaved as it did in response to a particular query. Have we now revealed an agential reason for its action? For this to be plausible, the machine must at the very least be able to tell us why it pulled up the Samaritans above anything else. But this alone may not be good enough. For one thing, the answer it gives to the user needn’t be the answer it gives to the user’s loved ones. Moreover, it would also need to be prepared to offer a reason why it nonetheless followed its top suggestion with pages containing information on suicide methods. That curious juxtaposition is one in need of a particular kind of justification, one which would again vary depending on who was enquiring. Needless to say, machines are a long way off from managing anything remotely approximate, even by comparison with speechless animals.

There is little evidence, then, to suggest that they currently act for reasons of their own. Whether they are in principle incapable of such action is a much tougher question. We would first need to understand what sort of thing a machine’s reason might be. Such obstacles, I hope to have shown, are not purely empirical; they require a conceptual re-imagining of machine reasoning that is robo-centric.


A version of this story is forthcoming in The Philosopher’s Magazine.
