Will our ethical considerations scale alongside the widespread deployment of AI?

EVR
Professor Rose Luckin’s EDUCATE
5 min read · Mar 30, 2020

Imagine if a stranger took a photograph of your pupils through a gap in a fence as they played at break-time, and then used it to find out their personal details — such as their names and where they live.

It sounds frightening, and beyond the realm of imagination for many of us. But the reality is that the technology to do this already exists and is in use by law enforcement and intelligence agencies in the United States.

It has been developed by a company called Clearview AI, which has produced an app that can match up images taken by cameras with the millions of photos it has ‘scraped’ off social media such as Facebook and Twitter. By doing so it can, of course, access personal information.

Facial recognition technology is increasingly being used in education systems around the world, leading to growing concerns around ethics. In China, for example, facial recognition is being used in schools to gauge pupils’ responses in class. Yawning or looking bored suggests to a teacher that the lesson needs to be more interesting and engaging. However, the authorities in China have stated recently that they plan to “curb and regulate” the use of facial recognition tools amid concerns over privacy.

In the United States, it has been used as a security measure to detect expelled students under surveillance by the police and to stop them entering school or attending events they are barred from. But it has also raised questions about the amount of information being gathered about young people, their activities and associations, as well as the accuracy of facial recognition on darker skin tones.

In Australia, meanwhile, facial recognition is being trialled to record absenteeism among students. Commentators are debating the ethics of using such technology without the consent of young people. They are asking: should we be using facial recognition just because it is available?

They raise an important question. Artificial intelligence (AI) has the capability to bring huge benefits to teaching and learning, but understandable concerns remain, particularly over the safety of children. Technology that identifies individuals and their personal details feels especially threatening and dangerous because it can compromise young people’s security. And where do we draw the line on the right to privacy of association?

But technology can also bring benefits. A teacher who works remotely with hundreds of children every week, as happens in Uruguay, could use facial recognition to identify each child, enabling a more personalised relationship and better engagement between student and teacher. It can also be extremely useful in helping the elderly and those with poor eyesight to quickly recognise friends and family, as well as offering a secure way for people to log on to their technology without the need for passwords that can be forgotten or hacked.

All of us who work with technologies such as AI need to help people understand what it is, and what it can and cannot do, so that they are not surprised when they discover the possibilities. This will help them to make informed decisions about the personal information they share publicly, including images.

This week, the Institute for Ethical AI in Education (IEAIED), of which I am a co-founder, is publishing its Interim Report, Towards a shared vision of ethical AI in education. It is timely because it comes as ever more intrusive and powerful uses of AI enter public awareness and debate.

The IEAIED Founders, L-R, Priya Lakhani, Sir Anthony Seldon, Professor Rose Luckin at the Speaker’s House, Westminster
The IEAIED © Brody Herberman, Century Tech

Our report seeks to spark a discussion around the ethics of AI by identifying its benefits and pitfalls, and by proposing a framework for its development and use that can mitigate some of the challenges. For example, we believe that AI should only be used for educational purposes where there are clear indications that it will genuinely benefit learners, either at an individual or collective level, and not when significant risks are involved.

We want to see learners, parents and educators become better informed about what AI is and how it can benefit teaching and learning, while at the same time being discerning about what is, and is not, ethical. Should this knowledge and awareness be part of initial teacher training and continuous professional development?

The report also seeks to examine how best to ensure that educators can, for example, override the decisions taken by AI assessment systems so that these processes are fair and transparent.

AI is a continuously evolving technology, and we do not yet fully know or understand its capabilities. But at the IEAIED we believe that we need a shared vision for ethical conduct in the sector, one that manages its design and development and offers confidence to a public that feels increasingly violated and imperilled by its presence in day-to-day life.

Perhaps the ultimate point will always be this: the fact that AI allows us to do something is not, by itself, enough to persuade us that we should do it. Our AI must bring tangible and important benefits. We must ensure that it does.

Author: Rose Luckin, Director, EDUCATE & Professor of Learner Centred Design, Institute of Education, UCL

Professor Luckin is the Co-Founder of the UK’s Institute for Ethical AI in Education, along with Sir Anthony Seldon, Vice-Chancellor of the University of Buckingham, and Priya Lakhani, social impact entrepreneur and CEO of Century Tech. Rose has a particular interest in how AI techniques can be used to enable more effective, continuous, formative assessment processes and tools. Her 2018 book, Machine Learning and Human Intelligence: The Future of Education for the 21st Century, describes how we can best benefit from using AI to support teaching and learning, and how the prevalence of AI in our future means that we need to revise what and how we teach and learn now. She has also published numerous academic articles, authored two monographs and edited two paper collections.

Originally published February 2020 by EDUCATE/Tes Global

EVR is an AI consultancy for education and training institutions