How Therapists Control Robots Used in Therapy with Children with Autism

Saad Elbeleidy
Mines Robotics
6 min read · Jun 14, 2021

This post is a summary of “Analyzing Teleoperation Interface Usage of Robots in Therapy for Children with Autism,” which will appear at IDC 2021 and which I coauthored with Daniel Rosen, Dan Liu, Aubrey Shick, and Tom Williams. If you prefer video, check out the presentation we’ll be giving at IDC.

In the US, 14% of school-age children receive special education services, either through their school or directly through a therapy provider. These services span a vast selection of therapeutic modalities, including talk therapy, where children communicate with a therapist directly through various activities; music and dance therapy, which uses music and physical motion to engage children; and art therapy, which can provide an effective creative outlet for children and bridge gaps in learning and communication.

While therapy is traditionally facilitated by a human, research has shown socially assistive robots (SARs) to also be highly effective in delivering therapy to children with autism. There is an opportunity to combine the advantages of human-led and robot-led therapy: robot-led therapy produces similar learning outcomes while increasing children’s engagement. Engagement in therapy is crucial, and robots can help make children feel comfortable in that setting. So how do these robots work in therapy?

We’ve partnered with Fine Art Miracles (FAM) to learn more. FAM is a service nonprofit that provides experiential therapy to children and the elderly who may be experiencing challenges.

A group session with children and a therapist controlling a Misty robot

The photo above shows an example session of a program run by FAM. The children sit in a circle with a therapist, and in the middle a Misty robot interacts with a child. The therapist uses a tablet to control the robot’s motion and dialogue through the PEERbots app.

Below is an example of what the therapist may see on their tablet:

An example of what the PEERbots app looked like at the time of the data we collected.

PEERbots is a software nonprofit; its app is open-source, with code available on GitHub. You can download the app from the App Store or Play Store and use it to control another device’s verbalization and, if that device is a Misty robot from Misty Robotics, its motion.
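PEERbots’ own device-to-device messaging isn’t covered here, but as a rough illustration of what controlling another device’s verbalization involves, here is a minimal Python sketch that sends a text-to-speech command to a Misty II over its REST API. The route and payload follow Misty’s documented Speak command as best I recall; treat both, along with the placeholder IP, as assumptions to verify against the current API reference.

```python
import requests

MISTY_IP = "192.168.1.42"  # placeholder: the robot's address on the local network

def speak(text: str) -> None:
    """Ask the robot to verbalize `text` with its onboard text-to-speech.

    Assumes Misty II's REST Speak command at /api/tts/speak; verify the
    route and payload against the current Misty API documentation.
    """
    response = requests.post(
        f"http://{MISTY_IP}/api/tts/speak",
        json={"text": text},
        timeout=5,
    )
    response.raise_for_status()

# The kind of dialogue option a therapist might tap in the PEERbots app:
speak("Hi! My name is Misty. What's your name?")
```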

Disclaimer: Aubrey Shick, one of the coauthors of this work, is the founder of PEERbots. Through this collaboration, Dan Liu, another coauthor of this work, has become a member of PEERbots’ Board of Directors, and I am now in the process of joining their board as well.

Since this collaboration began, PEERbots has changed significantly; this is what a more recent configuration of the app looks like:

A recent example of what the PEERbots app’s controller interface looks like

In the center of either example, you see dialogue options for the teleoperator to select; each button makes the connected robot verbalize that dialogue. On the left are buttons for the various collections the teleoperator has loaded, which let them organize dialogue options more effectively. On the right, the teleoperator can edit the content of the options, which lets the interface double as both an authoring and a teleoperation tool. So, how are therapists actually using this software?
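To make that layout concrete, here is a minimal sketch of the data model the screenshots imply: named collections of dialogue options, where each option carries the text the robot will verbalize and can be edited in place. The Python types and names are hypothetical; PEERbots’ actual implementation on GitHub is organized differently.

```python
from dataclasses import dataclass, field

@dataclass
class DialogueOption:
    """One button in the center panel: tapping it makes the robot speak."""
    label: str  # short text shown on the button
    text: str   # full utterance sent to the robot

@dataclass
class Collection:
    """A named group of options (left panel), e.g. one per lesson."""
    name: str
    options: list[DialogueOption] = field(default_factory=list)

    def add(self, label: str, text: str) -> DialogueOption:
        # Authoring: the same interface that plays content can also edit it.
        option = DialogueOption(label, text)
        self.options.append(option)
        return option

# Example: a lesson collection with a feedback option mixed in.
lesson = Collection("Colors lesson")
lesson.add("Intro", "Today we're going to talk about colors!")
lesson.add("Good job", "Good job!")
```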

We looked at usage logs from two 8-week programs run by FAM in special education classes. The data we received contains the dialogue verbalized during sessions, which we’ll refer to as “session content,” as well as the dialogue options that were authored, which we’ll refer to as “authored content.” A key thing to note is that some authored content is never used in a session. We looked at all the unique instances of these options and coded them to identify key themes.
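Our theme coding was done by hand, but the authored-versus-session distinction is easy to check mechanically. A minimal sketch, assuming each log reduces to a list of utterance strings (the real logs carry more structure):

```python
def unused_authored_content(authored: list[str], session: list[str]) -> set[str]:
    """Return authored dialogue options that were never verbalized in session."""
    return set(authored) - set(session)

authored = ["Today we're going to talk about colors!", "Good job!", "What's your favorite color?"]
session = ["Today we're going to talk about colors!", "Good job!"]
print(unused_authored_content(authored, session))
# {"What's your favorite color?"}
```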

Amount of content belonging to each identified content theme

We coded the content and arrived at five themes: lesson content, rapport building, feedback, attention management, and ignorance. As the chart above shows, lesson content, rapport building, and feedback make up the majority of session content.

- Lesson content is the key information the teleoperator aims to deliver: the therapeutic intervention itself.
- Rapport-building dialogue aims to establish or continue a relationship with the child.
- Feedback options provide positive or constructive feedback about the child or their actions.
- Attention management content is differentiated from feedback in that its core intent is to redirect the child’s attention. We found that teleoperators kept a dedicated collection for this theme, whereas most collections were authored around a particular lesson.
- Ignorance dialogue is used when the robot doesn’t have a pre-authored response and is either buying the teleoperator time to author one or won’t be able to respond. An example: “That’s a good comment, let me think about that.”

Using these content themes, we identified several patterns in how therapists use robots in sessions.

1. We found a repeated ordering of content themes within sessions. Sessions mostly start with lesson content interspersed with feedback and end with rapport building; some sessions also open with rapport building.

2. These themes exhibit different dialogue architectures. Lesson content is largely sequential: the teleoperator is mostly just delivering content in order. With rapport building, by contrast, the child’s response heavily influences the next dialogue option verbalized (see the sketch after this list).

3. While collections were used to separate different lessons, content with similar intent was duplicated both within and across collections. This was most visible with feedback dialogue. For example, “good job” and “awesome” were prevalent across collections and usually placed after questions.

4. When examining session content, we found that robots rarely verbalize the same content more than once within a session.
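To picture the contrast in finding 2: lesson content behaves like an ordered script the teleoperator steps through, while rapport building behaves more like a branching map keyed on the child’s response. A minimal sketch with hypothetical content (the fallback reuses the ignorance example from earlier):

```python
# Lesson content: sequential delivery; the teleoperator mostly advances a script.
lesson_script = [
    "Today we're going to talk about feelings.",
    "Sometimes we feel happy. Can you show me a happy face?",
    "Sometimes we feel sad. What helps you feel better when you're sad?",
]

# Rapport building: the next utterance depends on the child's response.
# Keys are coarse response categories a therapist might recognize in the moment.
rapport_branches = {
    "child shares name": "It's so nice to meet you! I love that name.",
    "child stays quiet": "That's okay, we can just sit together for a bit.",
    "child asks about robot": "I'm a robot named Misty, and I love making new friends!",
}

def next_lesson_line(step: int) -> str:
    return lesson_script[step]

def next_rapport_line(child_response: str) -> str:
    # Fall back to an ignorance response when nothing pre-authored fits.
    return rapport_branches.get(child_response, "That's a good comment, let me think about that.")
```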

Based on these patterns, we present three key design recommendations for improving interfaces in this context.

R1. Authoring and Teleoperation interfaces for dialogue should have custom views for each content category, and simplify switching between them.

R2. Teleoperation interfaces for dialogue should be able to handle dynamic dialogue content so that teleoperators can easily customize the content to different individuals.

R3. Teleoperation interfaces for dialogue should present suggested options that a therapist may want to select, based on previously selected options (a simple version is sketched below).
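A simple version of R3 could mine past session logs for what tends to follow what. A minimal sketch, assuming logs reduce to ordered lists of selected utterances; the ranking heuristic here is ours, not an evaluated design:

```python
from collections import Counter, defaultdict

def build_followups(sessions: list[list[str]]) -> dict[str, Counter]:
    """Count, for each option, which options were selected immediately after it."""
    followups: dict[str, Counter] = defaultdict(Counter)
    for session in sessions:
        for prev, nxt in zip(session, session[1:]):
            followups[prev][nxt] += 1
    return followups

def suggest(followups: dict[str, Counter], last_selected: str, k: int = 3) -> list[str]:
    """Suggest the k options most often selected after `last_selected`."""
    return [option for option, _ in followups[last_selected].most_common(k)]

logs = [
    ["What color is this?", "Good job!", "What color is this?", "Awesome!"],
    ["What color is this?", "Good job!", "Let's try another one."],
]
print(suggest(build_followups(logs), "What color is this?"))
# ['Good job!', 'Awesome!']
```

Notice how this would surface the pattern from finding 3: feedback like “Good job!” gets suggested right after a question.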

We hope that by sharing the insights we’ve uncovered, teleoperation interfaces for therapy with children with autism can improve.

Check out the MIRRORLab for more research on this topic and related work, and let us know if you have any comments or want to collaborate in some capacity.

Thank you to our partners at Fine Art Miracles and PEERbots! Please check out their work as well.

Header photo credit: Andy Kelly (@askkell)


Saad Elbeleidy
Mines Robotics

Robot Teleoperation Interface Researcher interested in Machine Learning, Data Visualization, Algorithmic Bias, and Food