ICTC’s Tech & Human Rights Series

Self-Driving Cars and the Politics of Innovation

An Interview with Dr. Jack Stilgoe

Kiera Schuller
ICTC-CTIC

--

On April 8, 2020, ICTC spoke with Dr. Jack Stilgoe, Senior Lecturer in the Department of Science & Technology Studies at University College London, where he researches and teaches the governance of emerging technologies. Dr. Stilgoe is the Principal Investigator of the Driverless Futures? Project, a three-year social science project looking at the governance of self-driving cars. He is a frequent contributor to The Guardian, and his recent book is called Who’s Driving Innovation? In this interview, Kiera and Dr. Stilgoe discuss the governance of self-driving cars, COVID-19, the ethics and politics of innovation, and how autonomous technologies are not in fact autonomous.

Kiera: Thank you so much for speaking with me today, Dr. Stilgoe. You are focused on a very important political and social question of our time: how, and whether, we can meaningfully control and guide the technologies that are starting to run our lives. Can you tell me briefly about your background and what you do in your current position(s)?

Dr. Stilgoe: Absolutely. Before I became an academic, I had a background in policy. I worked with the Royal Society in London for a number of years, and before that at a policy think tank called Demos. I currently work on artificial intelligence (AI) issues with policy bodies such as the Alan Turing Institute, Doteveryone, and other London-based organizations trying to broaden the debate on AI.

At the moment, I’m particularly focused on AI and self-driving cars, not because I come from that area (my previous work was on geoengineering and nanotechnology) but because when hype grows around an area of emerging technology, it tends to narrow down the discussion around that technology, and I consider that problematic. My interest is in broadening those discussions and making them more democratic. A lot of what I do is trying to bring new voices into debates about emerging technologies; I do a lot of public dialogue work on these issues, most recently on self-driving cars.

Kiera: At University College London, you teach courses on “Governing Emerging Technologies” and “Responsible Science and Innovation.” What do “governance” and “responsibility” mean in the context of these technologies?

Dr. Stilgoe: The central problem — the problem that I start with — is that the governance of emerging technologies often comes too late. By the time we know where the advantages and drawbacks of technology lie and how they impact society, it can be too late to do much about it, and so we live with the consequences of technologies. Too often we recognize the problems in hindsight. So the question is: can we consider and do things in an anticipatory manner, rather than waiting until after the consequences become visible?

On responsibility, to the extent that scientists and innovators have some privileged knowledge, I want to ask what their responsibilities should be in anticipating some of the issues around new technologies. This should be a public conversation. There are other questions of responsibilities, particularly when you are looking at combinations between humans and automated systems; for example, where does responsibility lie in the event of something going wrong? But my interest is more in the first issue: the question of responsibilities of research scientists and innovators making new things, as far as they understand the potential uses of those things in the future.

Kiera: Is there a particular set or framework of guidelines that you think could be used at the level of innovators or scientists to achieve this, like the UN Guiding Principles on Business and Human Rights?

Dr. Stilgoe: I have resisted identifying one particular set of values to apply to technology developments. I am more interested in this question: if we do want to see particular aspects of public value or particular outcomes from technology, how do we get those fed in upstream? Those could be values such as human rights, protection of fundamental principles of privacy, solidarity, autonomy, or any other set of values that society considers important. For example, I’ve done a lot of work with the European Union (EU) on Responsible Research and Innovation, where they do apply a very particular set of values — the values that define the EU’s activities — and their question is ultimately, “How do you get those values embedded in research and innovation?” But my interest is in the model or process rather than the specific content: in models that could apply regardless of what sort of values you want to embed. It is a question of politics.

Kiera: One specific topic that you have spent a lot of time researching is autonomous vehicles (AVs). You are currently leading a three-year project looking at the governance of self-driving cars and have written a book related to the topic. What are the major ethical questions or concerns with AVs?

Dr. Stilgoe: I’m actually critical of the framework of “ethics,” because I think it can narrow down the discussion, especially when it comes to AVs. Specifically, there has been so much early excitement about AVs being a possible “test” for practical ethics, such as the Trolley Problem. A few ethicists jumped on this problem and thought, “Oh, here is a new real-life example of the thought experiment we have been teaching for so long,” and I think that discussion has taken on a life of its own.

Of course, there are real ethical questions, particularly to do with the ethics of testing a technology in public. With AVs, for example, in order for the technology to develop to a point at which the public can be confident that it is doing what developers want it to do, it needs to be tested on public roads, and there are very real ethical questions about treating a public space as a laboratory. Conventionally, we regard laboratories as spaces that are contained, or spaces in which we can be very clear about who is and who isn’t a willing participant. The ethical principle of informed consent is very important. It is much more problematic — and public — if you are testing an AV on the open road. The death of Elaine Herzberg, the first bystander to be killed by a car driven by software, brought those ethical questions to the fore. And there are other ethical questions in the design of the technology as well, such as “How safe is safe enough?” That is not an engineering question; that is a question of values. But I would say ethics is often too narrow a way of looking at these things; ultimately, these are questions of politics.

Kiera: That leads perfectly to my next question. You’ve written, in the context of AVs, that “in a discussion that has been dominated by science, engineering and narrow questions of ethics, there is a need to draw attention to the old questions of politics: Who wins? Who loses? Who decides? Who pays?” What are the politics of the development of emerging technologies? What does that look like?

Dr. Stilgoe: First, it means not starting with the technology per se, but starting with the problem that technology developers are looking to solve. The first political question would be, “What do we want in the future, and how might technology serve those ends?” This is the reverse of the conventional question around technology, which is, “How will technology change our lives?” Second, while knowing who will win and who will lose from technology is very difficult because most technologies are profoundly uncertain, we can often know enough to ask the right questions.

The philosopher Langdon Winner argued that technologies have politics. For example, a nuclear power plant demands different political arrangements from micro-generated renewable energy. The differences aren’t just technical; they are also differences of political vision, leading to different uses, risks, benefits, and regulatory setups. We can’t know in advance what the political constitutions of particular technologies will be, but we can often anticipate some key questions, even if we are unsure of the precise details. With autonomous vehicles, for example, we can ask questions such as, “Are these likely to improve or worsen current inequalities and injustices within transportation, in terms of who has access to mobility in particular places?” We can ask, “What sorts of companies or agencies are going to be able to deliver services that rely on autonomous vehicles?” If those services depend on software, for example, it’s likely we will see the same kinds of economies of scale — monopolies or oligopolies — that we see in other technology industries, which could be problematic for transportation. The point is that we can ask these questions about the political economy at early stages, even if we are uncertain about what the precise outcomes will be.

Kiera: Turning to the idea of responsibility, what kinds of responsibility for the outcomes of a technology lie at different stages of its lifecycle: for example, with scientific researchers, innovators, investors, companies, customers/users, or government?

Dr. Stilgoe: There is a conventional division of responsibility between different agents in technology development. The assumption is that scientists and innovators produce technologies that are value-neutral, and it is up to the public sector and regulators to decide if and how we want to use them. Here, the responsibility for protecting the public interest sits with the regulator, while scientists and technologists are simply contributors to the marketplace of ideas. But we need to come up with a model that is much more collaborative than that. We need to challenge that old division of labour because innovators can certainly think about things like, “What might ‘good’ AI look like? How transparent do we want our AI to be? Who should own algorithms? Who should have access to data?” These are upstream questions of responsibility that shouldn’t just be placed upon governments or regulators.

Kiera: You have noted before that despite the term “autonomous vehicle,” these technologies are far from autonomous if you consider them through a “social scientific lens.” In what ways are AVs not autonomous?

Dr. Stilgoe: Yes, the term “autonomous vehicles” is and should be in quotations. First, a so-called autonomous vehicle is not just a lone robot interpreting and navigating the world on its own. For “autonomous” technologies or systems to work, they must be connected in various ways — connected to other digital systems and also to the infrastructures that support them. It’s the same way that a car does not simply run on its own; for a car to work, it needs fuel, roads, road signs, maps, rules of the road, and people to enforce those rules. It needs a massively complex environment — what social scientists like me call a “sociotechnical system” — in order to function. If you look at all the parts of the system required for a so-called “autonomous” vehicle to work, you can see that it is very complicated.

But there is also a more profound critique of the idea of “autonomy,” which is that technology doesn’t come from nowhere. There is a Silicon Valley idea of technology as an autonomous force in the world, one that evolves and grows and that we can do nothing about but speed up or slow down. But that is not true; technologies are created by people and created to serve particular purposes, and so we need to think about the directions in which innovators are driving technology and ask whether we want it to go in that direction or whether we’d like it to go in another. That is another reason we need to critique this idea of autonomy.

Kiera: I am very interested in the social impacts of technology, particularly on human rights and inequality. What are some concrete implications of some emerging technologies on human rights or existing inequalities?

Dr. Stilgoe: The idealized story often told about technology is that it can challenge existing power structures: that it’s disruptive, that it’s ruthlessly meritocratic, that anyone can innovate, that innovations can serve people who have lost out on other aspects of progress — even that innovation can break down existing social class structures. However, the evidence shows that technologies can exacerbate, rather than narrow, inequalities. Tech innovations are often made by, and benefit, privileged people.

I think we need to examine where technology has not followed the above pattern. We should be looking for examples where innovation has genuinely tackled inequality, in order to see what is required. What are the processes and purposes behind innovations that have done a lot to tackle inequality? For example, what is it about mobile phones and certain agricultural innovations in some parts of the world that have made massive improvements for people in low-income countries?

Kiera: AVs get a lot of attention these days, but you also conduct research on a range of other emerging technologies such as genetically modified crops, nanotechnologies and geoengineering. Could you talk about the risks, benefits, and governance needs of another area of technology?

Dr. Stilgoe: Certainly. It is a little difficult because the hype tends to come first and the questions come later; if only we could find a way to reverse that process, things would be much easier. But I think AI in healthcare is a particularly problematic one because of a dynamic similar to the one behind AVs: there appears to be a strong justification for the need for innovation, and that justification can foreclose other important discussions and obscure thoughtful, democratic debate around the direction of new technologies. For example, people justifying AVs will tell you that over a million people die in car accidents each year and that we need to solve that problem. Cars are indeed a public health catastrophe, but the desire for quick fixes can foreclose thoughtful conversations about the direction of AVs and of technology more broadly. Similarly, in healthcare, there are various problems that require innovation — from bureaucratic problems about how to aggregate health data, to scientific imperatives to link genomics to health outcomes — but these shouldn’t justify careless innovation.

Kiera: To close, how do you see the current “lay of the land”? Are these issues being taken seriously? By whom? Or does more need to be done?

Dr. Stilgoe: I can actually offer a somewhat optimistic story here. For AI, one of the interesting things is that because the momentum has been extraordinary — the scale of investment and hype has been so large and so fast — there has been a rapid realization that questions of governance are important. Those realizations have largely come because of some fairly well-publicised mistakes, and they are often framed in terms of ethics, but there is at least some consideration of values and rights and a recognition that engineers and scientists cannot, on their own, fully understand the issues that bedevil all of us. Thanks to this, there has been some reaching out for other voices, though mainly by consulting experts such as philosophers and lawyers rather than through anything more genuinely democratic. Still, I do see greater potential for constructive engagement and democratic debate here than with other technologies, where the technology has been developed, supported, and presented to the public as an effective fait accompli, and the public is then left to take it or leave it. So I would like to think that some lessons of past technologies have been learned when it comes to AI. At the same time, given the speed and scale of change, there remains cause for concern that, in the excitement, some of those questions may get drowned out.

At the moment, the COVID-19 pandemic is creating a strong demand for technological fixes such as vaccines, treatments, or apps that gather data on the disease. There is currently a lot of enthusiasm for contact-tracing apps that use smartphones as sensors. The benefits could be huge; if we get this right, it could be a way to give people back some freedom of movement in this context. But legitimate concerns about surveillance, data ownership, and function creep should not be ignored. There is a danger that people who call for “responsible innovation” in contact-tracing apps, for example, themselves get labelled as “irresponsible” because of the desperation for solutions.

Dr. Jack Stilgoe is a Senior Lecturer in the Department of Science & Technology Studies at University College London, where he researches and teaches about the governance of emerging technologies. He is the Principal Investigator of the Driverless Futures? Project, a three-year social science project looking at the governance of self-driving cars. He is a Fellow of the Alan Turing Institute and a Trustee of Involve, a public participation think tank, and has been a frequent contributor to The Guardian. His recent book is called Who’s Driving Innovation?
Kiera Schuller is a Research & Policy Analyst at ICTC, with a background in human rights and global governance. Kiera holds an MSc in Global Governance from the University of Oxford and launched ICTC’s Human Rights Series in 2020 to explore the emerging ethical and human rights implications of new technologies, such as AI and robotics, in Canada and globally, particularly on issues such as privacy, equality and freedom of expression.

ICTC’s Tech & Human Rights Series:

Our Tech & Human Rights Series dives into the intersections between emerging technologies, social impacts, and human rights. In this series, ICTC speaks with a range of experts about the implications of new technologies such as AI for a variety of issues like equality, privacy, and freedom of expression, whether positive, neutral, or negative. The series also explores questions of governance, participation, and the use of technology for social good.

--