AI Carries High Risk. Policymakers Must Treat It That Way.

Titiksha Vashist
The InTech Dispatch
4 min read · Sep 12, 2020

AI is everywhere. Can we use policy tools and participatory methods to regulate it better?

“AI for All?” Illustration: Author.

The age of Artificial Intelligence
From grading students to autonomous weapon systems, AI is becoming ubiquitous. According to research by PwC, AI is likely to add US$15.7 trillion to the global economy by 2030. Governments across the world have also been quick to adopt AI, as automated systems find their way into public services and criminal justice systems, and into policy documents as a national priority. As a recent BCG article stated, “Suddenly, the risk of doing nothing feels much greater than the risk of introducing new technology and processes.” States have leveraged AI at an unprecedented scale during the COVID-19 pandemic.

But here is the problem. AI is not like other conventional developments in science and technology. Along with its immense potential to transform society for good, AI is risky. Really risky. The risks range from social and political harms all the way up to existential risk. The nature of narrow AI (which performs specific tasks, like driving your car or running facial recognition) as well as general AI (which could, in totality, cognitively outperform humans) demands that we understand, adapt to, and regulate it differently. These ways can be found in the most unlikely of places, like environmental science.

What is Post-Normal Science?

In 1993, Silvio Funtowicz and Jerome Ravetz wrote about “post-normal science” (PNS): an approach for problems in which facts are uncertain, values are in dispute, stakes are high, and decisions are urgent. Such challenges require a novel way of doing science, distinct from both the hard and applied sciences.

The PNS approach lays out ways of thinking about new science, keeping in mind its influence on policy and its wide-ranging impact on people. The authors argued that we must shift focus from truth to quality, and that science must be taken out of the expert’s lab and into the public space, allowing citizens to actively participate in knowledge creation.

Problem solving in fields like the environmental sciences carries complex social and political implications, and new ways of addressing these must evolve. The PNS approach owes a debt to Thomas Kuhn’s revolutionary work on the nature of scientific advancement, as well as to the Science and Technology Studies (STS) practice of interrogating the process of “science-making”.

The Precautionary Principle & Risk Society

The Precautionary Principle was adopted in the 1992 Rio Declaration, and has since become a part of policy and governance, including technology policy. The Principle gives decision-makers a framework for exercising adequate caution in cases where scientific evidence regarding an environmental or health hazard is incomplete or uncertain. It holds that a lack of scientific certainty is no reason to postpone action to avoid potentially serious or irreversible harms. The principle is invoked when the stakes are particularly high, placing the tool in the risk-governance box of a decision-maker’s briefcase. It is also part of the Maastricht Treaty, a founding treaty of the EU.

The Precautionary Principle flips the question from “prove this is harmful” to “prove this is safe”. It helps decision-makers exercise caution and is particularly useful in the high-risk scenarios characteristic of modern societies, from nuclear disasters to climate change: scenarios where the scientific community cannot ascertain the facts, risks are global, and uncertainties are often manufactured by human action (as sociologist Ulrich Beck warned us).

Applying the Precautionary Principle to AI would bring in greater reflection at a time when the industry is focussed on fast-paced development that creates lopsided and biased outcomes, from racial bias and sexism to the UK exam-grading controversy, all of which expose cracks in society that AI cannot fix, and often exacerbates. Technology creates its own risks, and more technology cannot be the solution.

Creating Better Futures

AI ticks all the boxes to be considered, and regulated as, a post-normal phenomenon. Expanding public policy models beyond the standard approach taken with the hard sciences will help conversations on technology and AI become more aware of risk, more deliberate, and more inclusive of all stakeholders, and help them address real problems rather than a ‘singularity and solutionism’ driven view of technology.

This might allow us to navigate the different implications of AI in varied settings, and to create responsibility-centered frameworks that require proving solutions do not create more harm.

We must also move towards a more participatory science, a science which includes the people it was designed for — the citizen patient, the mother vaccinating her child, the agricultural producer working the soil, and today, the racial and ethnic minorities who are vulnerable to facial recognition algorithms deployed in the criminal justice system.

Policymakers, academics and philosophers are also increasingly anticipating the need to place environmental concerns next to technological development. One such example is the blue and green project by Oxford philosopher Luciano Floridi, who states: “Green is the color of any environment: not only parks, but also urban spaces and the infosphere itself. It is the environment as ecology. Blue is everything digital. The objective is to assemble them intelligently, so that the environment is saved by digital technology and that digital generates a successful sustainable economy.”

Bringing tools from environmentalism into technology policy might also help us place technological concerns within environmental questions, as ecological sustainability must accompany technological ambition.

I thank Shyam Krishnakumar for his insightful comments on this article.

Like what you are reading?

We write on emerging tech, politics, culture and us with an Indian focus every fortnight. Subscribe for free: www.bit.ly/IntechDispatch.
