Predictive Futures: The Normalisation of Monitoring and Surveillance in Education

Tactical Tech
8 min read · Sep 28, 2020


By Daisy Kidd

Tactical Tech’s Youth initiative, starting in 2020

In 2020, tens of thousands of high school students across the United Kingdom took to the streets to protest the use of an algorithm that predicted their end-of-school grades. The algorithm had lowered almost 40% of grades, meaning some pupils were no longer eligible for their chosen university or college. The algorithmic scoring impacted students from lower-income backgrounds the most, despite warnings of the ‘potential risk of bias’ raised in a parliamentary education committee a month prior. Days later, the UK Education Secretary Gavin Williamson backtracked on the decision, stating that pupils who had been marked down would now receive their original teacher-assessed grades. This was a historic U-turn, noted as one of the few times that the government have acknowledged the ‘discriminatory potential of algorithmically-informed policy regimes.’

Four months earlier, students at 12 universities across the United States staged protests to ban the use of facial recognition technology on campus, with students at a further 36 universities signing online petitions. The digital rights group Fight for the Future launched a campaign that ran facial recognition technology on 400 photos of faculty members and sports teams from UCLA (the university that had originally proposed facial recognition technology). The software incorrectly matched 58 of those individuals with a mugshot database, and the majority of those misidentified individuals were people of colour. In another change of course, UCLA abandoned plans to introduce facial recognition technology with the statement: ‘We have determined that the potential benefits are limited and are vastly outweighed by the concerns of the campus community.’

…data monitoring, surveillance and prediction technologies used in schools are often rolled out without the students’ knowledge, let alone participation.

The grading fiasco in the UK and the facial recognition protests in the US show how predictive technologies can exacerbate bias. As with most predictive algorithms, the systems are based largely on historical training data, which can be limited, incorrect or unfair. In the former case, high-achieving students in previously low-achieving schools had their results tied to the reputation of the school and the performance of pupils from the previous three years, with their individual achievements becoming a footnote in their future. In the latter case, Fight for the Future’s anti-facial recognition campaign revealed that the technology produced unreliable yet unsurprising results. The fact that a UCLA football team member was falsely matched with a photo in a mugshot database with ‘100% accuracy’, despite looking nothing like the convicted man, is a harsh reality of even the best facial recognition software.
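To make the mechanism concrete, here is a deliberately simplified sketch in Python. It is not the actual model used in the UK, and the data and names are invented; it only illustrates the core problem of predicting grades from a school’s historical distribution, handed out in teacher rank order, so that a student’s own attainment never enters the calculation.

```python
# Illustrative toy only -- not the actual 2020 grading model.
# It shows how a prediction built from a school's historical grade
# distribution caps strong students in historically low-scoring schools,
# regardless of their own work.

def predict_grades(historical_grades, teacher_rank_order):
    """Give this year's cohort the school's past grades: the best
    historical grade goes to the top-ranked student, and so on."""
    best_first = sorted(historical_grades)  # 'A' sorts before 'B', etc.
    return dict(zip(teacher_rank_order, best_first))

past_results = ["B", "C", "C", "D", "D"]            # the school's history
ranking = ["Amira", "Ben", "Chloe", "Dev", "Ella"]  # teachers' rank order

print(predict_grades(past_results, ranking))
# {'Amira': 'B', 'Ben': 'C', 'Chloe': 'C', 'Dev': 'D', 'Ella': 'D'}
# Even if Amira's teacher assessed her at an A, the historical
# distribution contains no A to give her.
```

In this toy version, the only inputs are the school’s past results and a ranking; the individual’s coursework, mock exams or teacher-assessed grade play no role at all, which is the pattern the protests objected to.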

A hopeful thread that ties these two stories together is that students were successful in blocking these technologies from being used. As a result of campaigning, media attention, petitioning and mobilizing, they were listened to. However, data monitoring, surveillance and prediction technologies used in schools are often rolled out without the students’ knowledge, let alone participation. From nursery to university, there are huge amounts of data collected on children and young people, from inside and outside of the educational system. A report from the UK-based organisation Big Brother Watch notes that ‘Children in schools and young adults at universities are subject to state and commercial surveillance perhaps more than any other community in England.’ This data can range from basic personally identifying information like name, date of birth and gender, to browsing history, search terms, location data, contact lists, and behavioral information. It’s a similar story in the US, where according to a report by the Electronic Frontier Foundation, ‘educational technology services often collect far more information on kids than is necessary and store this information indefinitely’.

The ‘trove’ of sensitive information collected through the educational system is increasingly linked to third-party data sets about individuals and households. This data is bought from data brokers to make early interventions based on algorithmically predicted behaviour and to design predictive models for use in classrooms. In other words, the masses of data that are collected on children and young people are later fed back to them in the form of predictive technologies, along the lines of the grading algorithm but over much longer periods of time, and without public knowledge. Professor and author Shoshana Zuboff¹ has written about a similar process in the marketplace where companies such as Google and Facebook use surplus behavioural data to build ‘prediction products’ which are ‘designed to forecast what we will feel, think and do: now, soon, and later.’ These ‘predictive future markets’ thrive because they are selling the idea of certainty, whether it be a car insurance broker wanting to minimise risk, a police force wanting to reduce crime, or a teacher wanting to avoid disruption in class.

Data collection and monitoring are now a basic requirement of schools in many countries, often under the guise of modernising and cutting costs, which has led to a burgeoning ‘ed tech’ industry, the shorthand for digital technologies in education. For years, technology companies such as Google and Microsoft have been encroaching on, and dominating, the ed tech market. As of 2020, 25 million students and teachers use Chromebooks, laptops that run on Google’s Chrome operating system. These devices often come with behavioural monitoring packages such as GoGuardian, which offer filtering, managing and monitoring ‘on and off campus’. As students become normalised to these platforms, they sign up to free Google products such as Gmail and Google Docs, which are currently used by 90 million students and teachers globally. These products fall outside of the privacy controls that are enforced in schools, thereby giving Google access to more detailed behavioural data about households and individuals. According to a 2020 lawsuit brought against Google by New Mexico’s Attorney General: ‘When students log into their school Chromebooks, Google turns on a feature that syncs its Chrome browser with other devices used by a student on that account’, building a full data profile that goes far beyond what is necessary. This is the latest in a string of legal actions over Google’s collection of young people’s data, including a September 2019 case brought by the US Federal Trade Commission and the New York Attorney General, in which Google agreed to pay $170 million for harvesting children’s data from YouTube.

The global pandemic has provided a fertile space for surveillance companies and ed tech platforms who are scrambling to respond to new demand.

The coronavirus pandemic has compounded the problem of monitoring and surveillance in schools and created a significant boost in the ed tech market. Since March 2020, Google Classroom has doubled its active users, and usage of its video conferencing tool, relied on by many schools, has increased by 900%. With 68% of countries depending on remote learning, there is an entire market waiting to fill the void of online learning, homeschooling and even contact tracing for when pupils go back to school. Behavioural monitoring apps such as LiveSchool (US), which compare students’ performance across classes and schools and share real-time behavioural scoring and performance details with administrators, teachers, parents and the kids themselves, are already planning for extended periods of home schooling. One data scientist at Hoonuit, another behavioural monitoring platform, claimed that ‘predictive analytics […] will be especially useful during the 2020–21 academic year when face-to-face learning is limited.’ New formulas are being introduced, such as the ‘student growth percentile’ methodology, which ranks each student’s progress against that of peers with similar prior results in an attempt to measure growth. The makers of these technologies claim they will be extremely valuable for comparing a student who has been home-schooled with a student who has gone back to school. Beyond behavioural analysis, companies that provide beacon tracking technology, such as Volan, are hoping to market their services as contact tracing systems by tracking the location of students around campus.
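As a rough illustration of the kind of comparison a growth-percentile approach involves, here is a simplified sketch in Python with invented data. The published methodology uses quantile regression over several prior years of scores; this toy version simply ranks a student’s current score against ‘academic peers’ who had similar prior scores.

```python
# A deliberately simplified sketch of a "student growth percentile"
# style comparison, with invented data. Not the published methodology,
# which fits quantile regressions over multiple prior years.

def growth_percentile(student, cohort, prior_window=5):
    """Percentage of 'academic peers' (students with a similar prior
    score) whose current score the student meets or beats."""
    peers = [c for c in cohort
             if abs(c["prior"] - student["prior"]) <= prior_window
             and c is not student]
    if not peers:
        return None  # no comparable peers, no percentile
    beaten = sum(1 for p in peers if student["current"] >= p["current"])
    return round(100 * beaten / len(peers))

cohort = [
    {"name": "A", "prior": 60, "current": 72},
    {"name": "B", "prior": 62, "current": 65},
    {"name": "C", "prior": 58, "current": 70},
    {"name": "D", "prior": 61, "current": 80},
    {"name": "E", "prior": 90, "current": 91},  # not an academic peer of A
]

print(growth_percentile(cohort[0], cohort))  # A beats 2 of 3 peers -> 67
```

Even in this stripped-down form, the score a student receives depends entirely on how other children performed, which is what makes such measures attractive for ranking and monitoring, and contentious as a basis for decisions about individuals.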

The global pandemic has provided a fertile space for surveillance companies and ed tech platforms who are scrambling to respond to new demand. But beyond crisis response, the steady proliferation of these technologies is leading towards an increasingly documented, scored and profiled learning space. Students are left in the dark whilst their behaviour, identity and search history are collected without their knowledge, and fed back to them in the form of predictive technologies. The results of this can be shocking, such as the blatant bias that finds its way into algorithms or facial recognition software. But it can also go undetected: a subtler form of monitoring and surveillance that disguises itself as protective security measures, public health safety on campus and keeping student behaviour in check. Students and parents who become aware of the ‘big mother’ role of technology in schools are faced with a lack of transparency and a lack of choice, leading either to a breakdown in trust or an acceptance of the surveillance status quo.

Students are left in the dark whilst their behaviour, identity and search history are collected without their knowledge, and fed back to them in the form of predictive technologies

The fact that students are beginning to stand up for their digital rights and push back against these technologies because of their inherent bias may demonstrate a shift towards more participatory and transparent learning environments. It suggests that, as these experimental technologies become more widely used in education, students may begin to react more strongly against the inequities they introduce. By protesting against and overturning the use of the technology, the students bridge the gap between its use and its social and political impact, claiming the territory between as their own. As one teenager describes: ‘My generation has shown itself to be resilient, robust and ready to make change.’

To take advantage of this heightened awareness and motivation, civil society organisations, teachers and parents need to step in to provide a digital rights education that explains how this technology works and how it is impacting what young people care about. It needs to go beyond digital literacy towards a ‘systemic literacy’², an education about how digital technologies have invaded public and private spheres for profit. And most importantly, young people need to be brought into the conversation, so they can make a meaningful contribution that is proactive rather than responsive in the face of crisis.

Tactical Tech’s new youth initiative will engage young people to critically reflect on how they want the digital environment they inhabit and will inherit to look. We will work alongside young people, their educators and parents, and other civil society organisations, to provide a systemic literacy and rights-focused education to young people so that they can take back control of their tech.

Daisy Kidd works on Tactical Tech’s Youth project. Read the first in this series, What the Future Wants, about how digital technologies designed for young people are often addictive, unhealthy or unsafe, and what we should be doing about it.

Thank you to Christy Lange, Stephanie Hankey and Sasha Ockenden for their feedback on this article, and to Yiorgos Bagakis for the illustration.

1 Shoshana Zuboff, The Age of Surveillance Capitalism (Profile Books: 2019)

2 James Bridle, New Dark Age (Verso: 2018)


Tactical Tech

Tactical Tech is an international NGO that engages with citizens and civil-society organisations to explore and mitigate the impacts of technology on society.