Virtue Ethics and Emerging Technologies

This article originally appeared in the Monday Magazine of 3 Quarks Daily.

In 2007 Wesley Autrey noticed a young man, Cameron Hollopeter, having a seizure at a subway station in Manhattan. Autrey borrowed a pen and used it to keep Hollopeter’s jaw open. After the seizure, Hollopeter stumbled and fell from the platform onto the tracks. As Hollopeter lay there, Autrey noticed the lights of an oncoming train, and so he jumped in after him. Once on the tracks, however, he realized there would not be enough time to get Hollopeter out of harm’s way. Instead, he protected Hollopeter by moving him to a drainage trench between the tracks and throwing his body over Hollopeter’s. Both of them narrowly avoided being hit by the train; the call was close enough that Autrey had grease on his hat afterwards. For this Autrey was awarded the Bronze Medallion, New York City’s highest award for exceptional citizenship and outstanding achievement.

In 2011, Karl-Theodor zu Guttenberg, a member of the Bundestag, was found to have plagiarized large parts of his doctoral dissertation after a month-long public outcry. He had copied whole sections from newspapers, undergraduate papers, speeches, and even from his own supervisor; roughly half of the entire dissertation consisted of uncited work. Thousands of doctoral students and professors in Germany signed a letter lambasting then-chancellor Angela Merkel’s weak response. Eventually his degree was revoked, and he resigned from the Bundestag.

Now we might ask: what explains this variation in human behaviour? Why did Guttenberg plagiarize his PhD, and why did Autrey put his life in danger to save a stranger?

The most intuitive way to explain this is to appeal to different character traits. It seems that Guttenberg lacked honesty, as he was willing to contravene German law and academic standards to save himself some time and effort. Conversely, Autrey was courageous: despite placing himself in harm’s way, he acted to save a stranger’s life.

Both courage and dishonesty are part of our moral vocabulary, and, more specifically, we think of them as a virtue and a vice. Virtues are those dispositions or character traits that we praise persons for possessing; vices are those that we find them blameworthy for possessing. In virtue ethics, one of the three main branches of normative ethics (the other two being deontological and consequentialist ethics), virtues play a foundational role in moral appraisal, and the rightness or wrongness of an action is to be cashed out in terms of stable character traits. Thus, the focal point of virtue ethics is the agent.

This is in contrast to the other two normative approaches mentioned earlier. In consequentialist ethics (of which utilitarianism is the most common offshoot), the focus is, naturally, on the consequences of people’s actions. In Autrey’s case above, we could say that he acted in a morally praiseworthy fashion because his actions led to him saving a life, which increased (or maximized) overall utility (or ‘happiness’). Deontological ethics (of which Kant’s moral philosophy is the most famous offshoot) concerns itself with how we ought to show respect for persons and the principles that we act on. For example, in Guttenberg’s case, we might say that he violated the maxim “do not lie”, and therefore ought to be held responsible and blamed. For the consequentialist, however, such maxims or principles play a secondary role, as the rule “do not lie” might justifiably be broken so long as breaking it produces more desirable consequences than truth telling.

These traditional moral theories have dominated discussions about what we ought to do in many different situations. The question I now want to ask is which of them, if any, is up to the task of helping us chart a course through emerging technologies. I am not going to claim that one of these theories is the best all things considered, but rather suggest that one might be best in the specific case of emerging technologies.

So, while the sketch I have presented here is obviously not the full story, the point was merely to outline some essential components of the three ethical theories. Before seeing how they fare in the face of emerging technologies, we first need to get a handle on a key characteristic of these technologies: they often create novel moral situations in which traditional moral concepts might be challenged. For example, self-driving cars might challenge what we think of as a ‘moral agent’, Lethal Autonomous Weapon Systems (LAWS) might challenge our traditional concept of ‘moral responsibility’, and the emergence of Big Data techniques challenges ideas of what ‘privacy’ entails in the digital age. Let us go through each theory one by one.

Consequentialist theories face problems due to the unpredictable effects that emerging technologies may have. If we have high levels of uncertainty regarding the potential consequences of our actions (and how they interact with technology), then the abstract calculation of utility seems almost impossible. Moreover, holding people responsible for consequences that they could not have reliably foreseen would seem unfair.

For the deontologist, a problem emerges from the theory’s focus on fixed or universal moral laws. With emerging technology, we must pay attention to the specific context in which it might be embedded, and while such universal principles may be of help, they are insufficient by themselves. Having an abstract moral principle does not help us when we are attempting to figure out the ways in which novel technologies might challenge our existing understandings of morality — by, for example, extending the class of entities that might be counted as moral agents (such as LAWS).

This leaves virtue ethics, which I think holds out the best hope for our dealings with emerging technology. This is because of its focus on prudential wisdom, and its emphasis on the ability to respond appropriately in a variety of situations. This avoids the problem of unpredictability, as the focus is not on consequences, but on inculcating certain moral virtues. It is here that we can note one of the essential features of virtue ethics: the focus on right action (or excellence) is not only about the action itself, nor is it wholly concerned with following the correct rules. Rather, the central feature is the cultivation of right action over time, and virtuous states emerge, however slowly, from continued action. In this way virtues are a kind of muscle that we need to train properly. The way we build a virtue like temperance, for example, is by observing those whom we believe embody this virtue and attempting to model our behaviour on theirs. Such exemplary persons serve as the guiding star in our cultivation of the moral virtues. This is not to say that this is the only way to get a handle on the virtues, however. As noted above, prudential wisdom also plays an important role.

This provides us with a very useful frame to use when analyzing emerging technologies, as we are not stuck trying to find a priori criteria for moral truths but can rather focus on the conditions which enable human flourishing and use these in the design of various technologies.

Take digital communication, which includes instant messaging applications but which for my purposes will exclude things like video calls (once again calling attention to how we need to be specific when assessing any technology). The utilitarian might say that face-to-face interaction maximizes utility because people enjoy it more, and that we should therefore try to minimize virtual interaction. The deontologist might say that computer-mediated interaction is intrinsically less valuable than face-to-face interaction. Of course, these example arguments are merely illustrative — there might be much more compelling consequentialist and deontological arguments. In each case, however, notice that the conclusion is that the technology must be either good or bad, with little room for context. Virtue ethics, by contrast, gives us the possibility of providing far more nuanced responses. We might say, as Shannon Vallor does, that

“The morally significant virtues of perseverance and commitment tend to have weaker expression in present online social environments where anonymity is combined with low entry and exit barriers, when compared with social environments (online and offline) with less anonymity and/or higher entry/exit barriers.”

Thus, virtue ethics gives us a way of understanding the effects that technology may have on agents as they interact with it over time and in different contexts. It can also make salient how exactly certain virtues might be promoted or hindered, and thus does not rely on either a strict technological determinism or an entirely constructivist understanding of technology. “Technology” is not some monolith that, when investigated, gives us binary answers about how it ought to be used. Rather, each specific technology will have its own specific ways of influencing our ability to flourish as humans, and virtue ethics provides us with a helpful way of tracking this influence.
