Work in the time of AI

DataKind UK · Aug 21, 2020

By Stef Garasto, DataKind UK ethics committee member

‘AI at work’ was DataKind UK’s second fully remote book club. The reading list included topics ranging from workplace monitoring and surveillance to algorithmic discrimination in hiring.

Image by kiquebg from Pixabay

Digital workplaces

For this book club, we discussed work at a time when work as we know it has changed drastically. Because of the pandemic, many businesses have accelerated the shift to a digital-first way of working. Perhaps unsurprisingly, the pros and cons of remote working, as well as the heightened necessity of using digital tools to participate in both work and social life, were recurrent topics throughout the conversation.

When it comes to work, book club participants’ concerns were not necessarily linked to automation and AI. Issues of surveillance at work pre-date automation, as demonstrated by one participant’s stories of working in a call centre: calls logged and monitored, break times tracked to the minute. But has automation made things worse?

A skewed balance of power

Participants remarked that algorithms and AI have the capacity to enable the rapid acceleration of processes in the workplace, meant to increase efficiency and, ultimately, productivity. But for people at the mercy of such algorithms, this can be a scary thought.

A lack of transparency was a worry for many: not only about how the AI works, but also about what data are collected, how they are used, and by whom. The multifaceted, black-box nature of automated processes was also linked to a lack of accountability and an erosion of trust between employees and employers. A different system of checks and balances, along with tools for employees to challenge algorithmic surveillance and decision-making, were seen as important parts of the solution.

Image by mohamed Hassan from Pixabay

Algorithms are blunt tools

Algorithms tend to rely on what can be measured, what is easy to measure, or what a given person decided was worth measuring. This can lead to problems, such as using flawed proxies for (or ignoring outright) what cannot be measured, or optimising for the wrong metric. For example, employee performance in call centres might be evaluated based on the length of a call rather than on customer satisfaction. And hiring algorithms can rely on the quantification of facial expressions and emotional cues (which, some argued, is impossible to do).

If the goal of workplace algorithms is to increase productivity, but those algorithms can only rely on what can be measured, does this lead to false assumptions about what makes employees more productive?

Participants mentioned emotional wellbeing, autonomy, and a trust-based (as opposed to surveillance-based) relationship with one’s employer as factors that can increase productivity. However, these are often not measured by workplace algorithms, and might even be negatively affected by their use.

Trust (or the lack thereof) was a recurrent theme in the discussion: trust in the algorithms, and in those developing or using them. A crucial element of trust is open communication and transparency, not only about the technical details, but also about the underlying goals of the system.

But transparency alone is not sufficient. Even if we knew exactly what was happening within an algorithmic system and why, we would still need the ability to challenge the outcome, and to choose whether to participate in the system at all. As we continue to develop and use AI, there is no guarantee that transparency, the right to challenge, and the right to opt out will go hand in hand.

Our reading list


The full reading list, plus some suggested prompt questions, can be found here.
