UCL IIPP Blog

Can humans get a grip on AI before it’s too late?

By Laurie Macfarlane

This blog is a follow-up to the event 'AI for good? Platforms, ethics and public value', part of IIPP's 'Walking the talk: Getting serious about the UN Sustainable Development Goals' series. A recording of the event can be watched above.

For many people, the words 'Artificial Intelligence' conjure up images of evil human-like robots in Hollywood movies such as 'I, Robot'. But just because the depiction of AI on our cinema screens is often far-fetched doesn't mean AI has yet to make its mark. In reality, it surrounds us every day.

AI powers the search engines we use to browse the internet. It selects playlists for us on our favourite streaming services. And it helped to develop the vaccines that protected us during the Covid-19 pandemic.

The relevant question today is not ‘will AI ever exist?’, it is ‘will AI technologies help us address the major challenges we are confronting, or will they make them worse?’

This was one of the questions posed in the first webinar in IIPP's 'Walking the talk: Getting serious about the UN Sustainable Development Goals' event series.

Posing the question was Gabriela Ramos, Assistant Director-General for the Social and Human Sciences at UNESCO. Ramos fears that on current trajectories, AI technologies may be compounding our problems rather than helping to solve them.

This is because technology cannot be separated from its human creators. In practice, algorithms often end up mimicking the same unconscious biases as those who design them. And when we live in a world of vast social, economic, racial and gender-based inequalities, this can be a big problem.

Ramos pointed out that 85% of all global AI projects are being developed by male-only teams, particularly in the Global North. As a result, there have been many examples where algorithms have been found to have a bias against women, and particularly women of colour.

She also pointed to a recent case in the Netherlands, where algorithms used by the Ministry of Social Services were found to have wrongly accused thousands of people of child welfare fraud, many of whom were from an immigrant background. The scandal ultimately led to the downfall of the government.

According to Ramos, the solution lies in establishing clear regulations on ethics and inclusion before technologies are deployed — not afterwards. In the absence of robust ex-ante regulation, the discriminatory outcomes that we observe in the real world will continue to be translated into the digital world.

To kickstart action, Ramos’s organisation, UNESCO, recently adopted the ‘Recommendation on the Ethics of Artificial Intelligence’ at its General Conference. The document aims to “provide a basis to make AI systems work for the good of humanity, individuals, societies and the environment and ecosystems, and to prevent harm.”

Photo by Andy Kelly on Unsplash

Speaking alongside Ramos was Carissa Véliz, Associate Professor in Philosophy at the Institute for Ethics in AI. Véliz explained how the development of medical ethics offers valuable lessons for regulating AI. "At the moment, pretty much anyone can design an algorithm and let it loose into the world without any kind of supervision".

This used to be the case in medicine, which resulted in a huge amount of pain and suffering. But today all medicines are strictly regulated by public authorities, and subject to rigorous peer review and randomised control trials before they are allowed to be sold. “We should learn from agencies like the US Food and Drug Administration (FDA), and apply those lessons to algorithms”, Véliz said.

Ian Hogarth, IIPP Visiting Professor and author of the annual State of AI Report, also drew inspiration from the world of science and medicine. "If you want to engineer a pathogen, you have to do it in a biosafety lab, which is regulated at a global level". The contrast with the present free-for-all in the development of algorithms couldn't be more stark.


According to Hogarth, a logical step would be to create an equivalent of biosafety labs for AI. In other words: regulators should create a series of highly regulated and controlled environments where algorithms can be thoroughly trialled and tested before being released into the wider world. And crucially, time is of the essence.

While discriminatory social outcomes present an enormous challenge, Hogarth believes that the rate at which computing power is expanding could pose an even more serious risk. Between 2000 and 2010, the 'intelligence' of AI systems was doubling roughly every two years. But from 2010 onwards, Hogarth says, it has been doubling every 3–4 months. Many AI experts now view this exponential increase in AI capabilities as a serious existential threat.

“By making AI more powerful without understanding how to make it more safe, we are playing a dangerous game”, he said.

One risk is that AI capabilities inevitably get applied to military purposes, creating ever more sophisticated ways to cause death and destruction. But many experts also fear an even greater risk: the prospect of machine intelligence one day surpassing that of humans. And according to Hogarth, it might not be that far away.

One influential study concluded that if current trends continue, a machine with comparable intelligence to the human brain will be developed by 2052. If that is the case, then perhaps the world of 'I, Robot' isn't that far away after all.

This raises the question: can humans get a grip on AI before it's too late?

Read IIPP’s recent policy report, ‘Crouching tiger, hidden dragons’ which reveals how 10-K disclosure rules help Big Tech conceal market power and expand platform dominance.

The official blog of the UCL Institute for Innovation and Public Purpose | Rethinking how public value is created, nurtured and evaluated | Director @MazzucatoM | https://www.ucl.ac.uk/bartlett/public-purpose/
