Facing the ethics of surveillance technology

Dorothy Lepkowska
Professor Rose Luckin’s EDUCATE
Jun 12, 2020 · 4 min read
[Image: Rose on the podium at EDUCATE’s 2019 Demo Day with a mic in her hand]

Surveillance technology is thought to have played a major part in tackling the coronavirus crisis in South Korea and elsewhere. One of the methods used, facial recognition, is increasingly being adopted in schools and education systems, often with ethical implications, says Rose Luckin, director of EDUCATE Ventures.

Imagine if a stranger took a photograph of your pupils through a gap in a fence as they played at break-time, and then used it to find out their personal details — such as their names and where they live.

It sounds frightening. But the reality is that the technology to do this already exists and is in use among law enforcement and intelligence agencies in the United States.

It has been developed by a company called Clearview AI, which has produced an app that can match up images taken by cameras with the millions of photos it has ‘scraped’ off social media such as Facebook and Twitter. By doing so it can, of course, access personal information.

Facial recognition technology is increasingly being used in education systems around the world, raising growing concerns about ethics. In China, for example, this technology can gauge pupils’ responses in class: yawning or looking bored suggests to a teacher that the lesson needs to be more interesting and engaging. However, the Chinese authorities have recently stated that they plan to “curb and regulate” the use of facial recognition tools amid concerns over privacy.

‘In China, for example, this technology can gauge pupils’ responses in class. Yawning or looking bored suggests to a teacher that the lesson needs to be more interesting and engaging’

In the United States, it has been used as a security measure to identify expelled students who are under police surveillance and to stop them from entering school or attending events from which they are barred. But it has also raised questions about the amount of information being gathered about young people, their activities and associations, as well as the accuracy of facial recognition on darker skin tones.

In Australia, meanwhile, facial recognition is being trialled to record absenteeism among students. Commentators are debating the ethics of using such technology without the consent of young people. They are asking: should we be using facial recognition just because it is available?

They raise an important question. Artificial intelligence (AI) has the capability to bring huge benefits to teaching and learning, and, as the example of South Korea suggests, to public health. But understandable concerns remain over the safety of students. Technology that identifies individuals and their personal details feels particularly threatening and dangerous because it can compromise young people’s security. And where do we draw the line on the right to privacy of association?

But the technology can also bring benefits. For example, a teacher working remotely with hundreds of children every week, as happens in Uruguay, would be able to identify each child using facial recognition, enabling a more personalised relationship and better engagement between student and teacher. It can also be extremely useful in helping the elderly and those with poor eyesight to quickly recognise friends and family, as well as offering a secure way for people to log in to their technology without the need for passwords that can be forgotten or hacked.

All of us who work with technologies such as AI need to help people understand what it is, and what it can and cannot do, so that they are not surprised when they discover the possibilities. This will help them to make informed decisions about the personal information they share publicly, including images.

Earlier this year, the Institute for Ethical AI in Education (IEAIED), of which I am a co-founder, published its interim report, Towards a shared vision of ethical AI in education. It is timely because it comes as ever more intrusive and powerful uses of AI enter public awareness and debate.

Our report seeks to spark a discussion around the ethics of AI by identifying its benefits and pitfalls, and by proposing a framework for its development and use that can mitigate some of the challenges. For example, we believe that AI should only be used for educational purposes where there are clear indications that it will genuinely benefit learners, whether at an individual or collective level, and not when significant risks are involved.

We want to see learners, parents and educators become better informed about what AI is and how it can benefit teaching and learning, while at the same time being discerning about what is, and is not, ethical. Should this knowledge and awareness be part of initial teacher training and continuing professional development?

‘we believe that AI should only be used for educational purposes where there are clear indications that it will genuinely benefit learners either at an individual or collective level’

The report also seeks to examine how best to ensure that educators can, for example, override the decisions taken by AI assessment systems, so that these processes remain fair and transparent.

AI is a continuously evolving technology, and we do not yet fully know or understand its capabilities. Perhaps the experience of South Korea during the coronavirus pandemic, and its citizens’ acceptance of surveillance as simply a fact of life, will in time change how we perceive this technology. But there is no doubt that many people continue to feel violated and imperilled by its presence in day-to-day life.

Perhaps the ultimate question will always be — just because AI allows us to do something, does it mean that we should?

Authors: Dorothy Lepkowska, EDUCATE Comms Lead, and Rose Luckin, Director, EDUCATE, and Professor of Learner Centred Design, Institute of Education, UCL. Rose is also a co-founder of the Institute for Ethical AI in Education.


Dorothy is the Communications Lead on EDUCATE Ventures, and former education correspondent of several national newspapers.