How Can We Be Responsible for the Future of AI?
Are we responsible for the future? In some very basic sense of responsibility we are: what we do now will have a causal effect on things that happen later. However, such causal responsibility is not by itself enough to settle whether we have obligations towards the future.
Be that as it may, there are still instances where we do have such obligations. For example, our failure to adequately address the causes of climate change (causes which are, in large part, us) will ultimately lead to future generations suffering. An important question to consider is whether we ought to bear some moral responsibility for future states of affairs (known as forward-looking, or prospective, responsibility). In the case of climate change, it does seem as though we have a moral obligation to do something, and that should we fail, we are on the hook. One significant reason for this is that we can foresee that our actions (or inactions) now will lead to certain desirable or undesirable consequences. When we try to apply this way of thinking about prospective responsibility to AI, however, we might run into some trouble.
AI-driven systems are often by their very nature unpredictable, meaning that engineers and designers cannot reliably foresee what might occur once the system is deployed. Consider the case of machine learning systems that discover novel correlations in data. In such cases, the programmers cannot predict what results the system will spit out. Indeed, the entire purpose of using such a system is to uncover correlations that are, in some cases, impossible to see with human cognitive powers alone.
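To make the point concrete, here is a minimal, purely illustrative sketch (the column names and threshold are hypothetical, and this stands in for far more complex systems): the routine is fully transparent to its author, yet which correlations it reports depends entirely on the data it is given, and so cannot be known before deployment.

```python
# Illustrative sketch: a correlation-mining routine whose outputs the
# programmer cannot know in advance, because they depend entirely on
# whatever structure happens to exist in the input data.
import numpy as np
import pandas as pd


def surprising_correlations(df: pd.DataFrame, threshold: float = 0.8):
    """Return pairs of columns whose absolute correlation exceeds `threshold`."""
    corr = df.corr().abs()
    cols = corr.columns
    pairs = []
    for i in range(len(cols)):
        for j in range(i + 1, len(cols)):
            if corr.iloc[i, j] > threshold:
                pairs.append((cols[i], cols[j], float(corr.iloc[i, j])))
    return pairs


# The code itself is simple and inspectable, but the list it prints
# is unknowable until the (here randomly generated) data arrive.
rng = np.random.default_rng(0)
data = pd.DataFrame(rng.normal(size=(500, 6)),
                    columns=[f"feature_{k}" for k in range(6)])
print(surprising_correlations(data, threshold=0.1))
```

The author of this code knows exactly how it works, and still cannot say in advance which pairings it will flag; scaled up to modern machine learning systems, that gap between understanding the mechanism and foreseeing its outputs only widens.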
Thus, the threat seems to come from the fact that we lack a reliable way to anticipate the consequences of AI, which perhaps makes it impossible for us to be responsible for it in a forward-looking sense.
Essentially, the innovative and experimental nature of AI research and development may undermine the relevant control required for reasonable ascriptions of forward-looking responsibility. However, as I hope to show, when we reflect on technological assessment more generally, we may come to see that the fact that we cannot predict future consequences does not necessarily mean there is a “gap” in forward-looking obligation.
When evaluating AI, we are in effect engaging in some form of Technological Assessment (TA), which involves trying to understand the effects that various technologies have had, do have, and could have. Obviously, my concern here is with what effects the technology could have in the future. An interesting point of departure on this journey is to reflect on the German translation of technological assessment: Technikfolgenabschätzung.
Within this word, we find folgen, which translates literally to “consequences” in English. Here we see how embedded the role of consequences is in TA, and rightly so. Prospective knowledge (knowledge about the future) is by its very nature uncertain, and so we need to devise means of reducing this uncertainty. This leads to attempts to develop mechanisms for anticipating what the future might hold, which act as a guide to how we might structure our decision making in the present with respect to novel technology.
When assessing technology, however, what exactly are we evaluating? A straightforward answer might be that we are assessing, well, technology. But this would imply that there is something out there which is technology as such. Since technology is always embedded in a given social environment, technological assessment cannot be only about technology. Another target for TA might be the consequences of technology. We might think that TA is concerned with predicting or estimating the impact that a given technology might have. Indeed, it is exactly this understanding of TA that seems to pose a problem for AI, as it is precisely our handle on these consequences that AI threatens.
However, there are two problems with this view:
The first is that the consequences of technology are never the consequences of the technology alone: they are the result of varied and evolving interactions between technical, social, and institutional factors.
Second, the consequences of technology do not yet exist. Strictly speaking, then, TA cannot be about these consequences per se, but only about our expectations, projections, or imaginings of what they might be. In this way, we come to see that when evaluating technology, it is not enough to simply state that we should be concerned with the consequences of a specific technology. Rather, we must be sensitive to the ways in which our projections and visions of new technologies come to shape how they are developed, deployed, and used. Technology is not developed in a vacuum: to do research, scientists must acquire funds. They need to sell an idea and convince those in charge of funding (who are often not experts in the field) that their investment will have a decent return. Thus, the acquisition of funds is often less about the science or technology itself and more about what it could make possible.
Recognition of this allows us to extend TA beyond consequentialist reasoning and to supplement such an approach with an investigation into the potential meaning of a given technology, in order to uncover hermeneutic knowledge. Hermeneutics is concerned with interpretation, and thus centers discussion on questions of how the technology might change social configurations in its arena of deployment. Instead of only looking at the potential consequences of the technology, we need to train our attention on giving an adequate account of what the technology means. This meaning is never “stable”, because interpretation is an iterative process (often called the “hermeneutic circle”): once we take the time to understand the social meaning of a technology, we do not come back to our original starting position. Rather, the process of uncovering meaning creates a kind of spiral, whereby new inputs are interpreted by society in various ways and come to influence our understanding of the specific technology. In this way we pivot from trying to predict consequences to approaches which instead focus on the process of development.
For example, we might ask about the consequences of predictive policing. Unfortunately, with the benefit of hindsight, we can see that the results have been damaging for the communities in which such systems have been used. Hyper-surveillance partly produces crime (in the form of more arrests for petty crimes, for example), especially when police know they are being deployed to areas predicted to be crime hotspots and are on the lookout for criminal behaviour, creating a “guilty until proven innocent” scenario.
The point of this example is that before deploying such systems, we should not merely look at the consequences of the technology, but must also critically investigate how the technology will be embedded and what that might mean for the communities it will affect. This illuminates how (and why) we can enrich our assessment of technology by adding a hermeneutic perspective, which can better inform how we think about what the “consequences” of technology might be.
What does any of this have to do with AI and moral obligation? When assessing a technology, unreliable knowledge about its consequences does not foreclose our ability to investigate the societal meaning that the technology may hold. Therefore, while it may be true that the creators of AI systems cannot fully appreciate what the consequences of their systems might be (in a narrow sense), they can still take the time to investigate their systems’ societal significance.
What this means is that while AI and novel technologies complicate our ability to fairly apportion and comprehend our forward-looking responsibilities, they do not undermine our ability to do so.