Published in Social Finance UK

Some thoughts on ethics and artificial intelligence

What are the moral principles that should be applied to developing and using AI?

Image: mikemacmarketing.

By Elizabeth Osta

As I set out to write this blog, I felt a little overwhelmed. So much is happening in the world of ethics and artificial intelligence (AI), and my understanding was bound to be incomplete. I felt I needed AI to make sense of it.

I decided to get some help from Quid, a software company that specialises in text-based data analysis. I’ve known of them since their first steps as a ‘contextual platform’, running algorithms that mine everything published on the internet as well as many other databases.

On their website I found the Artificial Intelligence Index Report 2019 (PDF).

It is fitting that one of the most comprehensive reports on the state of AI could only be the product of an algorithm. The report makes the case, backed up by numbers, that AI is here to stay, as if there were any doubt. More and more people are learning AI and getting jobs in the field, and it attracts large amounts of investment.

The report provided me with an opportunity to select a few charts on the status of AI in relation to ethics, and to offer my own interpretation of them.

Judging by the number of papers written, conferences planned, and university courses developed, a lot is going on in AI ethics. Several ethical AI frameworks have also been developed.

Number of titles at AI conferences mentioning ethics keywords, 1969–2018. Source: Prates et al., 2019
Number of ethical AI frameworks produced 2016–2019, by type of organisation. Source: PwC

Ethics in relation to AI (in a sample of universities) is taught mostly in a non-technical context, with fewer than half of these courses taught within technical departments.

Tech ethics courses, by department. Source: Tech Ethics Curriculum (Casey Fiesler), 2019

The good news is that there seems to be a consistent set of ethical challenges across the globe, albeit with some differences across countries.

Ethical Challenges covered across AI Principle Documents. Source: PwC

China is most interested in safety and security, Canada and the UK in privacy, South Africa in interpretability and sustainability, Switzerland and the US in fairness. Germany is quite evenly balanced, albeit with a larger focus on human rights.

Most mentioned ethics categories, by source country.

When it comes to news coverage on ethics in AI, much of the reporting focuses on frameworks, data privacy and facial recognition.

Quid network with 3,661 news articles on AI Ethics from August 12, 2018 to August 12, 2019.

I found it interesting that whilst frameworks are widely reported, other areas are of greater public interest: Is my data private? Can I delete it (and by the way, who owns the data)? Should a system recognise me automatically from the measurements of my face?

Facial recognition is important because it’s where we cross the line between the digital world and the physical world. And that’s scary. We accept that in the digital world we are ‘known’ through data that is not fully our own, but it’s concerning when the digital and the physical cross over: our physical identity can be matched to our online identity, and two worlds that were separate become one.

Percent of worldwide news coverage monitored by GDELT that mentioned “machine learning”, “deep learning”, “TensorFlow” and “artificial intelligence”.

When it comes to AI news (without ‘ethics’ as a keyword), one of the most reported topics is job losses (‘robo-killers’ and autonomous weapons, which would fit squarely into the ethics categories, do not even come close). In short, we mostly care about whether AI is going to make us redundant.

Interestingly, this concern over ‘replaceability’ does not appear in the ethical challenges. So why, if this is what matters most, are we not seeing it as a main part of the ethical frameworks? Should the frameworks consider not only the outputs of AI but its overarching impact on society? And why does ethics not cover that part?

From a corporate viewpoint, organisations seem mostly concerned with cybersecurity and regulatory requirements when it comes to AI, although there are also concerns about privacy and explainability.

Organisations taking steps to mitigate risk from AI. Source: McKinsey & Company

The good news is that AI can support many use cases related to the UN Sustainable Development Goals. It would be good to see more work that translates ‘the good that AI can do’ from a conceptual ethical standpoint into practice.

AI use cases that support the UN Sustainable Development Goals. Source: McKinsey Global Institute

A great summary of ethics topics relevant to AI is offered in a paper authored by the World Economic Forum. The paper highlights nine ethical areas of concern:

  • unemployment
  • inequality
  • humanity (how AI affects our behaviour)
  • how we guard against AI mistakes
  • how we eliminate bias
  • how we code safer AI
  • how we protect against unintended consequences
  • how we stay in control
  • how we define the rights and accountabilities of robots.

Finally, as I write this blog the world is battling the Covid-19 pandemic. AI may play a significant role in this battle, and it is likely to shape our attitudes towards technology in the future. A few early considerations: AI, big data and surveillance have come to the forefront of how countries are attempting to contain the pandemic; several efforts to find a cure are underway, helped by AI modelling; and media companies are responding to increased concern about the spread of fake news.

As a technology optimist, my personal hope is that AI will assist in resolving the challenges of the pandemic and that this will improve our trust in technology. We may also start to realise the importance of automation in keeping the basic infrastructure of the world running when humans can’t.

Elizabeth Osta leads the Collaborative Technology Initiative, incubated by Social Finance.

Sign up for the Social Finance mailing list.
