Morality and User Interface Design are two topics that do not seem closely related at first glance. After all, interface design is about “nice” and usable interfaces, and we as Usability Engineers and User Interface Designers don’t have to make decisions with any severe moral impact, right? Well, how you design a user interface for a mobile phone may not be something Immanuel Kant would have bothered himself with, were he alive today. But what about, say, user interfaces for weapon control?

M.L. Cummings at MIT wrote an interesting article, “Creating Moral Buffers in Weapon Control Interface Design” [abstract], in which she looks at military and medical settings and describes the moral implications that interface design decisions in those areas inevitably have.
Her basic argument is that a user interface can create a “gap” between a person’s actions and their consequences, which results in psychological/emotional (and in some cases also physical) distancing from those consequences and therefore in a diminished sense of accountability and responsibility: the moral buffer.
In addition, users have a tendency to anthropomorphise computers. (Those of you who have ever yelled at your computer when it didn’t do what it was supposed to do will know what she is talking about.) This, together with the cognitive limitations a stressful situation can produce and the moral buffer described above, can even lead to users assigning moral authority to computers in certain situations. This may seem rather theoretical, at least as long as you are not a patient in a hospital where the staff relies on a system like APACHE, which determines “at what stage of a terminal illness treatment would be futile”.

Usability Engineers and User Interface Designers should be aware of this issue, which affects every area where an interface has to be designed for a system that influences the well-being of humans, or the lack thereof, as with weapon control…

Two thoughts come to mind:

  • Can interface design also have the contrary effect, creating a deeper sense of moral involvement in the user?
  • Are there other moral pitfalls in a Usability Engineer’s or User Interface Designer’s work, even when not concerned with life-critical systems?

For the first question, I think it is possible. Ironically, an area that Cummings names as one encouraging emotional detachment could provide an example: video gaming. It is true that war can seem like a video game at times, and this can definitely alter the perception of what happens. But fortunately there are other types of games besides shooters. Take “The Sims” as an example. People spend hours caring for those computer-generated characters, providing them with a nice home and helping them advance in “life”. Nothing they do has any significant impact on “real life” (except for the time users may lack for other activities), and yet players care very much about the well-being of their “friends”. So this special kind of interface seems to cater to the tendency to anthropomorphise the computer by giving interface elements human form. (For “The Sims” players, it may even feel odd to think of the characters as “interface elements”. But that’s what they are: you click on them, you get context menus, everything is there…)
So does that mean that every interface should look like a game or show little people to the user? Probably not. But maybe the standard questions “What constitutes the task?” and “What information is needed to fulfil the task?” should be supplemented with “What does the user need to realize the implications of his actions?” If it’s Sims walking around, so be it. For other user types it may be numbers and statistics. The point is that “traditional” interface design often takes the easy way out by putting a narrow focus on the task and not caring about anything else, such as consequences.

This also answers the second question: Usability Engineers and User Interface Designers may ignore the moral impact of their work whenever their focus becomes too narrow. And the classical focus on users and their tasks may be exactly that: too narrow! We want to help users do their work more efficiently and comfortably. But designing systems in a way that allows this may also make it possible to do the same work with fewer people. So in this case it is not the consequences of the users’ tasks that the Usability Engineer should consider, but rather the consequences of his own work, which improves the efficiency of users working with a system.

The basic lesson is (and hopefully this does not come as a surprise to you) that Usability Engineers’ work is conducted in a context that may be larger than the one analysed in Contextual Analyses. This may not always be as obvious as it is with weapon control interfaces. Sometimes one could think that the system one is dealing with is a very clear-cut entity, used to fulfil a clearly defined task, with no other (moral) connection to the “real world”. As seen above, one is well advised to think again…

Originally published on my blog at http://anotherusefulblog.blogspot.com/2005/11/morality-and-user-interface-design.html