The future of human-robot interaction: A look at three scenarios
Interaction between robots and humans will become an important feature of the industrial workplace of the future. But what sort of impact will this have on society — and what will that mean for how the technology evolves? The IfM’s Cyber-Human Lab takes a look.
By Thomas Bohné, Benedikt Krieger and Andreas Archenti
One aspect of the development of robotic technology in industry appears clear: humans and robots will increasingly interact to accomplish tasks.
Research on how they will interact points more and more to a relationship of collaboration, with humans and robots pursuing shared goals alongside one another, rather than humans always driving the task forward.
But will such human-robot teams be harmonious, or will this industrial development lead to social and economic tensions as the growing prevalence of robots is perceived as a threat?
At the Cyber-Human Lab at the IfM, we examined the current trajectory of robotics development alongside wider social and economic trends to sketch out three possible futures, guided by a series of key questions.
Three questions for the future of human-robot interaction
1) How can robots accommodate the differences of their human teammates?
Robots will be equipped with the ability to adapt their level of autonomy so that they are flexible and able to take on more or less responsibility in achieving the shared goal, depending on the different skills of their human teammates. But how those differences are communicated — how they are understood by robots, and whether humans trust robots to understand them — is still contested.
2) How can humans and robots make their intentions and demands understood by their respective teammate?
Technology which enables communication with robots modelled on human-to-human interactions is regarded as the way forward. This means having the ability to communicate through gestures, speech, appearance, tactile and other haptic interfaces, as well as other channels. However, how these are combined, and in which instances they are most helpful for both the robot and the human, remain open issues.
3) How do humans perceive their robotic teammates?
As robots move into spaces previously solely occupied by humans, trust in robots will be a major factor in how human-robot teams succeed. Will making robots appear more human-like facilitate this trust? The psychological and social effects of robot teammates will be a major factor in how they evolve.
Three future scenarios
Using research on human-robot interaction and considering wider socio-economic trends, we conceived three distinct future scenarios: one where the evolution of the technology is shaped primarily by social concerns, one where economic competition determines the technological trajectory, and a middle ground between the two.
These are simplifications, but they should serve to spark thinking about the future of human-robot interaction and the responsibility of designers as well as corporations to take into account the societal implications of their creations. Although we do not expect any single scenario to play out exclusively, different organisations may find themselves on one of the three trajectories.
Scenario one — Social scepticism prevails
This scenario features increasing disparities, both social and financial, among the workforce, combined with an increasingly hostile public view of robots.
Companies are facing economic pressure to become more efficient and productive while the organisation of labour hinders technology-facilitated efficiency gains.
One area where the divergence of views is most prominent is the possibility of building new skills and re-skilling. Workers and labour unions see this skill building as a chance for the workforce to gain upward social mobility by becoming eligible for other jobs. At the same time, economic pressure on companies makes them unable to offer large-scale re-skilling programmes and limits skill building to a minimum in order to maintain a viable level of competitiveness.
However, this renders the workforce unable to adopt large-scale technological innovation, as they lack the required skills. A gridlock ensues, impeding innovation adoption.
Given growing mistrust between companies and their workers, the control handed to human workers is motivated mainly by necessity rather than trust-based empowerment. Workers are likely to stay in control of robots because doing so preserves some of the process knowledge needed to sustain operations.
Communication design will be focused on human-to-robot communication. Robot-to-human communication, where needed, is limited to giving the human awareness of the robot's environment.
Companies are able to increase the level of autonomy of robots but unable to take the humans out of the control loop entirely. Efficiency gains in turn are limited by the level of autonomy handed to the robots. It is not necessary for the human and the robot to share their workspace.
With human control serving as the compromise between the workforce and companies, trust in this scenario might develop after all, over time. However, the workforce will perceive robots as agents of companies in which trust has diminished. This scenario does not allow for a further transition of robots from tools to teammates.
Scenario two — The economics of replacement
In this scenario, societal issues are outweighed by the economic need for automation to remain competitive. Thus, any pushback from society is limited to the fringes.
As in the first scenario, skills and education are contested. Companies forced to introduce technology and automate are likewise forced to train their workforce for new technologies. However, this training is kept to a minimum due to the economic pressure on the company. Thus, worker mobility is not significantly increased, and more innovative technology cannot be introduced.
Skill building is first and foremost aimed at ensuring that the workforce is able to intervene in automated operations to keep them on track towards given goals. Literacy in robotic technology is of secondary concern. Efficiency gains rely on the continuous development of more advanced robotics.
This might lead to distrust in robots, but that matters less in this scenario. As robot technology becomes increasingly capable of emulating human capabilities, human intervention is needed less and less. Communication is geared towards helping humans understand the robot's decision-making.
Remaining workers are increasingly taken out of both the control loop and the design loop. Supervisory roles allow workers to claim some degree of goal-setting responsibility, while instructor roles ensure some degree of process-related work. Both roles allow the company to harness the efficiency gains provided by high levels of robot autonomy.
Ageing societies and shrinking workforces in many countries may ease the economic pressure, but the affected parts of society still feel under threat.
Scenario three — Moderated innovation
We see possibilities for consumers or innovative companies to pave the way to a middle ground. Instead of focusing on pure economic pressure to achieve efficiency, companies look to cooperative and collaborative approaches to robotic technologies. Research agendas are influenced to account for humans in the loop of technology and to consider human factors as well as societal implications of innovation strategies.
Skill building plays a crucial role in ensuring the levels of experience and expertise necessary to work in human-robot teams. A greater understanding of the workings of robots is needed. Education in robotics can be seen as empowering humans, as it enables them to contribute to the design of human-robot teams.
Re-skilling also prevents a workforce backlash. Robots in this scenario are perceived less as agents of the company than as peers to the human workers, offering a way out of tedious, dirty and dangerous work. Attitudes towards robots are projected to become increasingly benevolent as evidence of their benefits permeates society, facilitated by education.
Thus, this scenario enables uniquely human abilities to be employed to the benefit of both employers, by enabling efficiency and flexibility, and employees, by empowering them to truly leverage their abilities.
As the relationship between human and robot becomes dynamic, building trust in the robot as a teammate becomes a significant design challenge. Both the human and the robot need to be able to express themselves and be understood by the other in a number of different ways. Two-way communication becomes essential.
Which future will we choose?
What is clear from each of these scenarios is that the design of robots is about to take on increasing societal importance, and it needs to incorporate disciplines beyond robotics.
Robots present a number of novel ways humans can interact with technology, and there are many ways this interaction can be designed. People need to better understand the possibilities of robots, but also learn what continues to differentiate humans from the technology, even as it becomes more autonomous.
This knowledge is essential to understanding how human-robot interaction should be designed and how that interaction can be beneficial.
This means that increasingly the focus should shift from keeping humans in the control loop — that is, hands-on directing of the technology — to keeping a range of interested parties in the design loop — ensuring that the design of human-robot teaming and collaboration benefits everyone.
Organisations and companies will play a crucial role. Continuous learning, up-skilling and re-skilling; decentralised decision-making; and participative organisation might become more important as the number of robots increases in companies.
If such steps are not taken early enough, technological developments are on track to substantially disrupt social structures.
While the development of robots with ever more human skills, such as conversation, empathy, collaboration and emotions, will challenge our perception of the very nature of being human, it is we humans who will decide to what extent and in which ways robots will be able to challenge us.
About the Authors
A team from the University of Cambridge and KTH Royal Institute of Technology collaborated on this project.
Benedikt Krieger and Dr Thomas Bohné are researchers at the Cyber-Human Lab of the University of Cambridge. The lab focuses on how technologies can be used to augment human work and improve human performance in industry.
Prof. Andreas Archenti is chair professor in Industrial Dependability. He divides his time between his research group at the Department of Sustainable Production Development and the Department of Production Engineering. He is also director of the Center for Design and Management of Manufacturing Systems at KTH.
For further information about this research project or other ongoing projects, please contact the Head of the Cyber-Human Lab, Dr Thomas Bohné (email@example.com).