Robothings for Social Good — 🖐🎤💥 MicDrop #11 by Cristina Zaga

This is a transcript of the speech given by Cristina Zaga, Ph.D. candidate at the HMI group (University of Twente) and visiting scholar at the RiG lab (Cornell University), during MicDrop #11, ‘Robothings for Social Good’, held on April 10th, 2019 at the Digital Society School in Amsterdam.


How many of you work with robots? How many of you are afraid of robots? How many of you don’t know anything about robots and would like to know more? If you read the newspapers or follow Instagram, it’s everywhere: robots are coming, robots are here! I work with robots all the time, and I don’t see them coming just yet. And everybody is saying: they are taking away our jobs. But if we design well, this will not happen. Clearly, though, we are getting scared.

I understand why you are getting scared. Does anybody see a resemblance between these robots from Boston Dynamics and the Demogorgon from Stranger Things? Would you like a scary monster walking around and opening your door? Probably not, and that is understandable. Among the robots we are making, there are also ones that make us react with: “Oh, cute!”. For instance, have you ever seen Pepper? It’s cute and reassuring, at least in its looks. Another example is Paro, designed for companionship and catered to older adults. Pepper and Paro are called ‘social robots’ because they are specifically designed to communicate with humans and other robots on a social level.

Paro, a robotic pet designed for companionship and catered to older adults.

Paro is definitely a cute and interesting robotic pet, but it is kind of a black box: we cannot modify its algorithms. I would argue that ‘social robots’ can also be a little scary, because they challenge our ethics and our values. In the movie ‘Ik ben Alice’, the consumer robot “Little Sofia” is given to seniors for a period of three months to provide company and (arguably) emotional support in their everyday life. One of the seniors gets so attached that we have to ask: what happens when the robot is taken away? Do we want these instances of attachment to happen? These are important questions that we should ask ourselves, both as users and as designers.

Ik ben Alice screenshot

According to the literature, a ‘social robot’ is an autonomous robot that interacts and communicates with humans or other autonomous physical agents by following social behaviour and rules attached to its role.

They communicate with us on a social level to support us in our everyday life. It looks like an awesome piece of tech: a robot to help. But I would like to invite you not to believe the hype. When I started my doctoral research, I believed the hype very much. I tried to develop a robot that helps children develop social skills: to motivate them to be more collaborative, help each other, stay engaged in a task and share objects with each other. Why not have a robot support their development of social skills, then? But when I tested it with elementary school kids, I noticed that they did not “buy” it. They were engaged with the robot, yes, but their expectations were not met. They found the behaviours of the robot, modelled on those of human teachers, not believable. At some point, they were teasing the robot and disengaging.

A robot looks a lot like a human, but it does not have what makes us human.

So kids have high expectations when they are presented with a somewhat humanoid robot, and when those expectations are not met, they lose engagement. That would rarely, if ever, happen with the physical presence of a teacher. Do kids really need robots for their formal and non-formal education? The debate is open. There are positive and encouraging results in the literature, but I argue that a human-like replica of a teacher might not be the way to go.

A humanoid robot teaching a class of children.

I started to feel uneasy, to experience ethical dilemmas. Are we going in the right direction, trying to model a robot like a human? There is a contradictio in terminis: how could this robot, this “thing”, be capable of teaching a human child? The robot is often still learning itself, based only on simple reactive algorithms, or is simply remotely controlled.

Where does this desire for robots come from? It is the desire to replicate ourselves.

I started to reflect on the human desire to create robots, trying to understand whether traditions and narratives play a role. Our desire to create a perfect, better version of ourselves, a robot, actually dates back to Greek antiquity. A more recent example of this desire to replicate and perfect ourselves, to make us more efficient and effective, is the expressionist movie Metropolis (1927). It is an example of how things can go wrong: the movie presents the robot as a mechanical servant with enhanced capabilities, but also with human traits. In ‘Metropolis’ the robot is portrayed as a magnificent and powerful monster, one that makes mechanical behaviour the new normal for a mechanised society. A very grim narrative about robots. At the same time, sci-fi has gifted us with far more positive visions of robots: the panacea. These visions reassure us that robots can be good, that we can make friends with them. Both are narratives that generate the biases and the background that accompany our understanding of robotics and automation. Nobody is immune: scientists, engineers and people like me, working in Human-Robot Interaction, are affected too.

C-3PO from the Star Wars universe is probably the most famous example of a robotic ‘panacea’.

I have amazing respect for the engineers working in robotics.

Robotics and automation started in engineering and mechatronics. Of course, engineers (most of the time) come from a technocentric point of view: knowing what is possible, finding technological solutions, realizing the idea of intelligent automation. Engineers want to make things more efficient, using algorithms, sensors and mechatronics to collaborate with people. So we engineers and human-robot interaction designers asked: how can we make human-robot interaction pleasant and engaging, and what effects does it have on a conceptual level? Much of the high-quality research in human-robot interaction actually proves that social, human-like interaction brings about better user experience, trust and social engagement. But it is still challenging to see how human-like social robots will play out in the long run. Many neuroscientists have also explored the boundaries and opportunities of robotics. There, the main paradigm (with much, much simplification) is to reverse-engineer humans: to understand, by developing robots, how humans think, perceive, learn and feel. The research rooted in neuroscience is invaluable, as it also teaches us about social cognition. What could be better than robots that use the same models as we do?

Another group of professionals interested in robots are designers, who look for the social affordances of automation: the needs and meanings around what it means to interact with and experience robots.

After my first experiences in HRI, and after feeling uneasy with the interaction paradigms for child-robot interaction, I found the design perspective quite fitting. Design tries to respond to people’s needs and think of solutions. I found that design should be integrated with engineering and machine learning to develop robotic technology that is meaningful for people. So in my own research, I follow three pillars:

1. Meaning + Ecologies + Needs = Design.

  • Find the meaning in interaction by working with users.
  • Work with everyone who is in the ecology, not just one stakeholder.
  • Do it with people in mind; technological exploration will come later.

2. Design for and with people: Design for ‘Social Good’.

  • If we want to design something for everyday life, we should strive to design for social good: that is, for societal empowerment, wellbeing, diversity and equity.
  • We should involve the people that are going to use the robots. This is called participatory design.

3. Robothings.

Instead of designing with human replicas in mind, design for ecologies of objects, of things. We should strive, at first, to explore how to embed robotics and automation in the everyday objects that are already around us: objects we know and need, or at least things we can easily form a mental model of. A lot of our everyday objects can carry robotic technology. Instead of modelling their behaviours and looks on us, we should explore the unique functions and behaviours through which they can communicate in human terms. That means exploring the animacy and agency of the “things” before endowing them with human traits, visual appearance, movements and voice. In my own research, for example, we make toys more intelligent by leveraging the dynamics that already exist between children, using a robot’s simple play behaviours as a way to nudge towards collaboration. Conflicts in childhood are really important: they are how children learn to solve interpersonal problems and regulate collaboration. Therefore, we want robothings to support that with only a gentle nudge towards collaboration; we don’t want a robot to be the parent or teacher and give a normative reaction to children’s conflicts. The same goes for robots for older adults: we might want to enhance the things and objects in their everyday life to give support. Maybe we should step away from paradigms of pure companionship, as we do not want to substitute human connections with robotic ones.

Most robots for children are developed for children on the autism spectrum, to teach them how to communicate according to OUR standards. But what if we adjusted the robot so that it teaches us how we should communicate with children on the spectrum?

The hardest thing with robots is finding a way for them to communicate with you. Children seem to be able to play with the illusion of life while they make sense of the world around them. As a researcher, I notice that children often see things that I do not see.

Children don't want humanoid robots per se, but a robot that plays with them and makes sense of what they are doing. They want robots designed for them. There are other ways to design how robots communicate with us, and many researchers, myself included, are using the unique way humans make sense of movement and other behaviours to explore new ways for robots to communicate. Please look at the following video:


This video does not have a given meaning. It was used by Heider and Simmel (1944) to study how people generate explanations of the (apparent) behaviour of abstract geometric forms. The researchers found that humans are compelled to ascribe intentions, goals and even personalities to abstract moving objects on a screen. In other words, humans can ascribe social communication and intentionality from non-verbal behaviours alone. We could use this human tendency to make sense of movements and other nonverbal behaviours to design robot behaviours that are not one-to-one replicas of human behaviours. Nonverbal behaviour was the first means humans used to communicate with each other; it is the origin of human communication and of language itself. Humans and animals evolved to communicate socially through nonverbal behaviours.

I have designed robots following the three pillars discussed above, leveraging the human tendency to make sense of movements and actions. The first example is Push-one, a robothing designed to support children’s collaboration. During puzzle games and other games, it can push objects around. The robot is designed to participate in the games just by doing and playing, a natural way for children to interact. It shares objects with the children when they need them, or takes an object away when they are arguing about the game. These are really simple behaviours that nudge toward collaboration and conflict resolution. The robot is not teaching the children; it gives a nudge and lets them figure things out. The robot does not do much more than that: pushing things around is its social behaviour. It is an honest design: you do not expect much more from the robot than shuffling things around. So far I have talked about education and how robothings could empower children in learning social skills via play.
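
To make the “gentle nudge” idea concrete, here is a minimal sketch of how such reactive rules could be written, in Python. It is an illustration under assumptions: the talk does not describe Push-one’s actual controller, and every name below is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class PlayState:
    """Hypothetical snapshot of the play session, e.g. from the robot's sensing."""
    child_needs_object: bool  # a child has run out of puzzle pieces
    children_arguing: bool    # a conflict over an object is detected

def nudge(state: PlayState) -> str:
    """Reactive rules in the spirit of Push-one: the robot only pushes
    objects around; it never instructs or sanctions the children."""
    if state.children_arguing:
        return "push_contested_object_away"  # defuse, don't adjudicate
    if state.child_needs_object:
        return "push_spare_object_to_child"  # enable sharing
    return "idle"  # otherwise, stay out of the way

# Example: a child needs a piece and nobody is arguing.
print(nudge(PlayState(child_needs_object=True, children_arguing=False)))
# -> push_spare_object_to_child
```

The point of keeping the rules this small is exactly the honesty mentioned above: the robot’s whole social repertoire is pushing things.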

Now let’s shift to another domain: wellbeing. Not getting enough sleep is detrimental to our health and wellbeing. Yet it is difficult to say no to the many distractions that make us procrastinate our bedtime: Netflix, social media scrolling, work. We have worked on robothings for human empowerment and wellbeing, and our speculative design project ‘Snoozle’ is the result. Imagine a pillow that calls you to bed and helps you relax and fall asleep. We thought of this robothing as an alternative to voice assistants and other devices. We started from the idea of reversing the current paradigm of tech for behaviour change, which tends to be (sometimes) paternalistic and normative. We wanted to play with the objects already part of the sleep ecology to explore different ways to motivate sleep and support relaxation.
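
As a flavour of what “inviting rather than commanding” could look like in software, here is a minimal Python sketch of a Snoozle-like bedtime cue. The talk only describes the concept; the bedtime hour, breathing rhythm, and actuator interface below are all assumptions for illustration.

```python
import time
from datetime import datetime

BEDTIME_HOUR = 23        # assumed target bedtime
BREATHS_PER_MINUTE = 6   # a slow rhythm meant to invite relaxation

class StubActuator:
    """Stand-in for the pillow's inflate/deflate hardware."""
    def inflate(self): print("pillow rises...")
    def deflate(self): print("pillow falls...")

def breathing_pulse(actuator, minutes: float) -> None:
    """Move slowly, like calm breathing, instead of beeping or flashing."""
    period = 60 / BREATHS_PER_MINUTE
    for _ in range(int(minutes * BREATHS_PER_MINUTE)):
        actuator.inflate(); time.sleep(period / 2)
        actuator.deflate(); time.sleep(period / 2)

def evening_check(actuator) -> None:
    """One gentle cue after bedtime: no alarms, no locked screens,
    no guilt-tripping notifications."""
    if datetime.now().hour >= BEDTIME_HOUR:
        breathing_pulse(actuator, minutes=1)

evening_check(StubActuator())
```

The design choice here is the reversal the project aims at: the object suggests rest through its own bodily behaviour, and ignoring it carries no penalty.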

In a similar vein, starting from an idea of Rei Lee, a PhD student at Cornell University, we have explored the context of serendipity and connecting with people. Millennials are often seen as glued to their smartphones. During my time at Cornell, my colleagues and I saw a lot of students who were not talking to each other. So Rei Lee envisioned and designed a robot that blows bubbles as a way to bring people together. We worked on this speculative design project to spark discussion about how we could design tech that really brings us together.

All these examples underline how we could design with social good and with people in mind. Let’s do something for social good! I would like to end with my points on how to do this:

  • We need new roles for AI-driven embodied technology.
  • We should design with meaning and take into account ecologies and people.
  • Say no to the humanoid technology push.
  • The majority of technology does not feel like it is made for us. Let’s change this.
  • Don’t forget other disciplines. Multidisciplinarity is difficult but it is vital for designing tech for social good.
  • Let’s work together to design for social good.

The DSS Mic Drops are inspirational, interactive, provocative master classes given by expert researchers and practitioners, on topics that relate to design, tech, societal challenges and how we can make the world a better place by integrating technology more wisely and humanely.
