Working with AI: Learning from expert systems in industry to design for collaboration, not replacement

Cecile Boulard
Research Stories by Naver Labs Europe
Jan 13, 2021 · 5 min read

When Artificial Intelligence (AI) is introduced into work activities, it's often perceived as a technology that will replace humans in their jobs [1], yet the tales behind this perception are grounded in the “substitution myth” [2]. We’ve already seen that layers of automation don’t necessarily remove workers [3] and, from the literature, we know that people will have to cooperate with (rather than be replaced by) AI, because human intelligence is necessary to bootstrap these systems and ensure their performance in many different kinds of context. If technology designers remain stuck in the substitution myth, they won’t make the effort required to create technology that cooperates appropriately with humans. An interesting article by Tom Simonite, “When AI Can’t Replace a Worker, It Watches Them Instead” [4], highlights a possible situation where humans work under the control of AI, and we should figure out what the impact of this type of cooperation would be. A source of inspiration here dates back to the 80s, when the fields of human factors and safety science produced findings on how automation has to be designed to support human activities, not just at the level of basic repetitive tasks but at the cognitive level of process supervision, diagnosis and decision-making.

Having studied human factors and the teams who supervise nuclear power plants, I learned how work is performed in a dynamic environment and how workers collaborate within the whole socio-technical system. In these environments, anticipation is key, and it is strongly supported by accurate ‘Situation Awareness’ on the part of the workers.

Along with the chemical, aircraft and healthcare industries, nuclear plants began to use “expert systems” in the 60s. These systems introduced a layer of automation that aimed to replicate part of human cognition with a set of “if-then” rules. There are a number of studies on how the collaboration between operators and expert systems took place and what the main difficulties were and, although these findings may be well known within the ‘human factors’ community, they’re worth recalling for the benefit of everyone concerned with AI in the workplace [2].
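To make the “if-then” mechanism concrete, here’s a minimal sketch of a rule-based system in Python. The sensor names, thresholds and actions are invented for illustration and don’t correspond to any real plant logic:

```python
# Minimal sketch of a rule-based "expert system": each rule pairs an
# "if" condition over observed facts with a "then" recommendation.
# All names and thresholds below are hypothetical.

def rule_high_pressure(facts):
    if facts["primary_pressure_bar"] > 155:        # hypothetical limit
        return "open relief valve"

def rule_low_coolant_flow(facts):
    if facts["coolant_flow"] < 0.8 * facts["nominal_flow"]:
        return "start backup pump"

RULES = [rule_high_pressure, rule_low_coolant_flow]

def infer(facts):
    """Fire every rule whose condition matches; collect recommendations."""
    return [action for rule in RULES if (action := rule(facts)) is not None]

print(infer({"primary_pressure_bar": 160,
             "coolant_flow": 300, "nominal_flow": 400}))
# -> ['open relief valve', 'start backup pump']
```

Even in this toy form, the limitation is visible: the system only covers the situations its authors wrote rules for, which is precisely why human supervision remains necessary.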

The main aim of expert systems is to replace humans for some tasks, yet in practice we observe that, even in highly automated systems, there’s still a need for human beings to supervise, adjust, maintain, expand or improve them [2, 5]. While trying to design a fully automated system, we actually end up with a human-machine system, where the interaction between humans and the machine has to be anticipated if we want the global system to perform well.

To grasp the impact of using intelligent machines in human activities, the concept of Situation Awareness is useful. It’s defined as the human’s “internal model of the world around him/her at any moment in time” [6]. In a recent publication [7], Endsley discusses the impact of driving automation in autonomous cars on human drivers. She calls it ‘the automation conundrum’, which is eerily reminiscent of the ‘ironies of automation’ highlighted by Bainbridge in the 80s [5].

Let’s take a closer look at the concept of Situation Awareness and how it’s affected in autonomous cars, where the idea is that automation will improve road safety. “The ability for automation to incrementally add to existing safety levels assumes that people’s performance will remain independent of the system autonomy, however, some 40 years of research on human interaction with automation shows this not to be the case” [7]. Looking at the figures, “Waymo’s vehicles can travel approximately 5,600 miles on average before a human driver has the need to intervene, or before the vehicle disengages on its own due to some detected problem. In comparison, a human driver travels over 490,000 miles between accidents and over 95 million miles between fatal accidents. While not all disengagements would have necessarily resulted in an accident if a human driver had not been able to intervene, this comparison shows that vehicle automation efforts still have a long way to go to come even close to current levels of safety provided by human drivers”. The introduction of expert systems in industry showed that improvements in automation are linked to a decrease in people’s performance. One of the reasons is that it’s difficult for people to maintain accurate Situation Awareness when they can count on automation.
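To put the quoted figures side by side, here’s a quick back-of-the-envelope comparison, bearing in mind the quote’s own caveat that not every disengagement would have ended in an accident:

```python
# Back-of-the-envelope comparison of the mileage figures quoted above [7].
waymo_miles_per_disengagement = 5_600
human_miles_per_accident = 490_000
human_miles_per_fatal_accident = 95_000_000

print(human_miles_per_accident / waymo_miles_per_disengagement)        # ~87.5
print(human_miles_per_fatal_accident / waymo_miles_per_disengagement)  # ~16,964
```

By this rough measure, human drivers go almost 90 times further between accidents than Waymo’s vehicles go between disengagements, which is the “long way to go” the quote refers to.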

“Automation has a significant effect on lowering the situation awareness of the operator, creating out-of-the-loop performance deficits. People have been shown to be both slow at detecting when the automation is in a situation that it is not programmed to handle, and slow at determining the cause of the problem for successful intervention. These issues occur because of:

(1) Poor vigilance when people become monitors, often coupled with increased trust or over-reliance on the automation,

(2) Limited information on the behaviour of the automation and/or the relevant system and environment information due to either intentional or unintentional design decisions, and

(3) A reduced level of cognitive engagement that comes from becoming a passive processor rather than an active processor of information” [7].

These observations of the impact of automation have major consequences, two of which are:

- “the more automation is added to a system, and the more reliable and robust that automation is, the less likely that human operators overseeing the automation will be aware of critical information and able to take over manual control when needed

- “due to the inability to reliably respond to the unexpected, imperfect automation is doomed to creating new types of accidents as it degrades human performance” [7].

What happens to Situation Awareness in autonomous cars points to how humans can be impacted by losing abilities. The saying ‘practice makes perfect’ is meaningful here, and it leads to further questions, such as: what’s going to happen to humans if AI prevents us from practicing?

So, if we want the whole system to maintain or even improve its performance, workers need to be provided with an understanding of the situation and to maintain their skills. One way to do this is to keep the human in the decision loop. This is particularly challenging because recent improvements in AI have come from deep learning, where models have been shown to be less effective when they’re more explainable (and vice versa). We need to find a way to balance AI performance whilst keeping us humans very much on board.
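What might keeping the human in the decision loop look like in practice? One common pattern is to let the system handle the cases it’s confident about and route uncertain cases to a person, so that operators keep exercising their judgment. Here’s a minimal sketch with a toy model; the names and threshold are my own invention, an illustration of the pattern rather than a reference implementation:

```python
# Minimal sketch of a confidence-based "human in the loop" pattern.
# The model, threshold and operator interface are all hypothetical.

CONFIDENCE_THRESHOLD = 0.9  # assumption: tuned per task and risk level

def predict_with_confidence(case):
    """Stand-in for a real model: returns (decision, confidence in [0, 1])."""
    decision = "approve" if case["score"] > 0.5 else "reject"
    confidence = abs(case["score"] - 0.5) * 2   # toy confidence measure
    return decision, confidence

def ask_operator(case, suggestion):
    """Stand-in for a UI that shows the case plus the model's suggestion,
    keeping the operator's Situation Awareness and skills exercised."""
    print(f"Operator review needed: {case} (model suggests {suggestion!r})")
    return "operator decision"

def decide(case):
    decision, confidence = predict_with_confidence(case)
    if confidence >= CONFIDENCE_THRESHOLD:
        return decision                                # easy case: automated
    return ask_operator(case, suggestion=decision)     # hard case: deferred

print(decide({"score": 0.97}))   # high confidence -> automated: 'approve'
print(decide({"score": 0.55}))   # low confidence  -> routed to the human
```

The design choice that matters is the deferral path: the operator sees the case together with the model’s suggestion, which helps preserve Situation Awareness instead of reducing the human to a passive monitor.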

References

1: Frey, C. B., and Osborne, M.A. (2017) “The future of employment: How susceptible are jobs to computerisation?” Technological forecasting and social change 114: 254–280.

2: Zouinar, M. (2020) “Évolutions de l’Intelligence Artificielle: quels enjeux pour l’activité humaine et la relation Humain‑Machine au travail?.” Activités 17–1.

3: Pethokoukis, J. (2016) “What the story of ATMs and bank tellers reveals about the ‘rise of the robots’ and jobs.” AEIdeas, American Enterprise Institute.

4: Simonite, T. (2020) “When AI Can’t Replace a Worker, It Watches Them Instead.” Wired.

5: Bainbridge, L. (1983) “Ironies of automation.” Analysis, Design and Evaluation of Man–Machine Systems. Pergamon, 129–135.

6: Endsley, M. R. (1988) “Situation awareness global assessment technique (SAGAT).” Proceedings of the IEEE 1988 National Aerospace and Electronics Conference. IEEE.

7: Endsley, M. R. (2018) “Situation awareness in future autonomous vehicles: Beware of the unexpected.” Congress of the International Ergonomics Association. Springer, Cham.

8: Dekker, S. and Woods, D. (2002) “MABA-MABA or abracadabra? Progress on human–automation co-ordination.” Cognition, Technology & Work 4.4: 240–244.

Cover Image by: Freepik.com
