Jack Stilgoe on Responsible Innovation

How can we govern emerging technologies?

Subtle Engine
9 min read · Nov 10, 2017


Metropolis’s Rotwang (not a photo of Jack): do mad scientists make irresponsible innovators?

“The development of full artificial intelligence could spell the end of the human race.”

So said Stephen Hawking to the BBC’s Rory Cellan-Jones a few weeks ago. Hawking’s caution followed a similar statement about artificial intelligence from tech entrepreneur Elon Musk earlier this year.

How do we balance the need to innovate, creating new technologies to help us address the big challenges we face (some of them caused by the side-effects of past technologies), against the potential risks of those same emerging technologies?

Dr Jack Stilgoe, Senior Lecturer in Social Studies of Science at UCL’s Department of Science and Technology Studies, kindly agreed to be interviewed about the relationship between technology and society, and about responsible innovation.

This post contains highlights (rather than a full transcript) from an interview with Jack conducted in 2014 and is published with his permission.

To hear about future interviews or other blog posts, follow Subtle Engine on Twitter or sign up for emails using the form at the bottom of this page.

Jack introduces himself as an academic…

I am a science policy academic, who has also worked in and around science policy-making for a few years. I’m interested in the relationship between experts and society, and have got involved in debates about democracy and public engagement in emerging science and technology issues.

…with a healthy scepticism towards some technologies:

Some of my students see me as a technophobe. I terrify them by telling them that technology is taking us to Hell in a handcart … which is not to say that I am anti-technology. I don’t think that anybody sensible would ever claim to be anti-technology. They might be anti a particular technology, like the Luddites, who are often characterised as being anti-technology in general.

Jack explains why developing policy that copes with emerging technologies is so difficult:

My abiding policy interest is in how you govern: how do you come up with policies that deal with the pervasive ignorance we face when approaching emerging technologies? We don’t know what they will become, we don’t know what their benefits will be, we don’t know what their risks will be, but it’s good to think about governing them before it’s too late, otherwise we get locked in to particular technologies. Langdon Winner, the sociologist of technology, writes of ‘technological somnambulism’, the idea that we are sleepwalking into technology. So the question is “How can you be a bit more aware, a bit more attuned to technology before it’s too late?”

When it comes to technology policy, our society seems less well equipped and less confident in its debates than in other areas of policy:

I would make a strong argument for the need for a critical analysis of technology… What we need with technology is the same sort of critique that we would apply to other areas of political power that affect our lives in various ways.

We should be able to talk about technology policy in the same way that we talk about health and education policy.

There’s a need for an analytical focus, rather than just taking whatever Facebook or Google say about self-driving cars at face value. We should be able to ask important questions of those claims.

Are there factors specific to technology that prevent us from developing this ability to discuss what we want from technology?

The big difference [from other policy areas] is that … scientists and technologists claim to be able to have unique access to truth about the world, which can shut down democratic debate. Science claims access to the answer, and the space for questions of choice and values gets closed down.

Jack researches and teaches what is now known as responsible innovation at UCL:

Responsible innovation is just a new way of talking about ideas about governing technology that aren’t new.

Writers like David Collingridge wrote [in the 80s] about the social control of technology: how you can never anticipate where it will take us, yet we need to control it at the point at which it is possible to control, in its early stages.

That’s a central dilemma. How you govern that dilemma is something that social scientists have been interested in for decades. There are various [principles and frameworks] and responsible innovation is a new way of talking about those sorts of things.

Jack lists several key principles inherent in responsible innovation:

The sort of principles that would fall into these approaches would be the need to improve anticipation: how do you encourage scientists and innovators to better anticipate the intended and unintended consequences of innovation?

Then there are principles to do with the inclusion of new voices. How could we democratise these discussions if we believe that technology is too important to be left to technologists alone, how can we get new voices involved, whether those are public voices or the voices of philosophers, historians, social scientists?

There are arguments to do with reflexivity: how to encourage scientists to reflect on their own assumptions. So if you went into Facebook you might say “when you come up with principles of privacy, would everybody share your idealised worldview?” And Facebook, a Silicon Valley company staffed by engineers and young men, may have a unique worldview that wouldn’t be widely shared, so let’s reflect on what alternative complexions of that might look like.

Finally, the point would be about responsiveness. How do you respond in the light of those new questions, how do you allow for science and technology to get better at changing direction, avoiding lock-in, avoiding irreversibility, so that you don’t end up down a cul-de-sac, how do you create the potential to be more open?

So [responsible innovation is about] those sorts of high-level principles, and you might think about how they would work out in different domains if you’re a neuroscientist or a geoengineering researcher or whatever.

Geoengineering has been a focus of research for Jack, and this field of applied science, together with GM crops, provides a good case in point:

I think geoengineering is a really interesting example. Consider what might have happened with it in a different era: geoengineering at one level is a set of proposals or techno-fixes, with all of the dangers that techno-fixes bring.

You can imagine that during the Cold War, scientists would just develop these things that they talked about in private, and maybe tested some things, and you would have got into a situation where you’re suddenly looking at a ‘Star Wars’ scale technological roll-out.

But now what you’re seeing is that geoengineering research is taking place in a very messy, interdisciplinary, contingent way, where the scientists don’t pretend that they’ve got all the answers … On the whole, geoengineering has become an interesting experiment in responsibility.

[This is] partly because of lessons learned during the GM crops debate in Europe. So GM was the example of where it went ‘wrong’ according to some policymakers, but according to other people it went ‘right’, because that’s how discussions should happen: in a messy, uncontrolled, politicised way.

Responsible innovation sounds like a step towards a more constructive relationship between society and technology, but it also sounds unusual and fairly far-removed from our everyday experience of technology. So what’s stopping responsible innovation from being commonplace?

A lot of the barriers are soft (or not so soft, because actually they end up being very powerful), but they’re not formal legislative barriers; they’re cultural.

One of the barriers to responsible innovation is a pervasive division of moral labour, you might call it, within science and innovation: a set of assumptions that scientists take responsibility for some things, society takes responsibility for others and engineers have slightly different responsibilities…

The implicit idea is that scientists’ responsibility is to each other as a community and especially if they claim to be basic scientists, they would say: “my responsibility is to truth, I don’t make futures, I make truth”. Engineers might say “I have a particular responsibility to users”, or when you make things, your responsibility is for the life of that thing, and their responsibility professionally is embedded in various codes of conduct, as with doctors. So there is a well worked out, if still informal, division of labour.

Jack also points to the lack of science and technology in politics:

Science and technology are almost nowhere in formal politics. You need a high-level conversation about science and technology in parliament and in government, and a lot of those conversations are just devolved to research funders and universities, which I think is a shame.

My personal sense on the debate [over whether we need more scientists and technologists in government] is that actually what we need is more science in government but that doesn’t equate to more scientists… What is actually missing is the capacity to generate and make sense of scientific advice within government…

Jack’s prescription for furthering responsible innovation is for a dialogue that bridges the division of labour he identifies:

I think a lot of it is about empowering the scientists themselves to be willing to have those conversations; to talk about what they’re doing and what their intentions are, what their motivations are for the work that’s happening in their laboratories and companies. Empowering them to do so involves changing structures and incentives, new sorts of funding mechanisms for interdisciplinary work and collaboration, new ways to measure science. What do we consider to be excellent science?

Such changes could help bring responsible innovation to universities, but what about scientists and technologists working in the private sector?

It’s really hard. You can have open conversations with university scientists that are impossible to have in a corporate context. There are various possibilities for opening up corporate innovation activities.

You could say, if you were Facebook: rather than just hiring engineers to design your privacy settings, why not hire philosophers to advise on those things, hire women? Maybe the gender balance of companies is a highly important part of the design process. Thinking through all of these possibilities and opening up might lead to more responsible outcomes.

Are Hawking and Musk’s warnings, and the media coverage generated, helpful interventions?

I don’t think they are, no. They fall into a typical pattern in which technologies are sold as incredibly potent, hugely exciting, but also hugely dangerous; that’s the dynamic.

There’s this nice phrase from a philosopher of technology called Alfred Nordmann — speculative ethics — which is the idea that when it comes to technology we speculate about the ethics; are they going to kill us all, are they going to save us all? Actually the effect of science and technology on our lives is far more mundane but far more profound.

People’s experience of artificial intelligence, of how AI is changing our lives, is through things like Google Maps, right? Not computers that are going to take over and shut down the human race, because that’s ridiculous and far-fetched.

Jack argues that while anticipating the implications of an emerging technology is important, so are those more mundane conversations about technology:

The idea that you can postpone technological threats means that you leapfrog debates that you need to have now about what science and technology are doing to us. So it annoys me when the technologists themselves say we should talk about existential risk. No! Let’s talk about what’s happening now!

Jack identifies Ray Kurzweil as another rather problematic contributor to the science and technology debate:

[Kurzweil’s date for the singularity is] a manifestation of a deterministic worldview which I think is part of the problem, which suggests that science and technology will do things because they follow their own internal logic, and therefore we can take that logic to mean that by 2046 the human race will no longer exist.

The presumption of determinism is that we can do nothing about that. So it immediately takes you towards fatalism: who’s responsible? God knows. What can we do about it? Nothing. And we can either ban the technology completely, or just wait and deal with it when it happens. That’s not our relationship with technology, nor should it be.

Lastly, Jack mentions (in response to a question) that his interest in technology and society is not driven by a particular belief, but that all voices need the right tools to be able to engage in debate:

My interest is (regardless of your particular interest) in your vision of the good life so that … we can have a discussion about how technology fits into that. So if you’re a Christian, you need the tools to be able to discuss how Christianity and technology interact. I have very little to say about the religion and technology question except that I think religion is an entirely legitimate voice to have in that debate, and that religious perspectives should be included.

With many thanks to Jack for his time.

This interview was originally published in December 2014.

To hear about future interviews or other blog posts, follow Subtle Engine on Twitter or sign up for occasional email updates below:
