A revised version of the robot: an iRobot Create equipped with a Dancer board (a custom-designed PCB with a microcontroller plus IR emitters and receivers)

Teaching a Robot How to Dance

David Ng
Vertical Learning
Dec 7, 2016


In 2011, a small team began working on an introductory computer science course for high school students. Daniel had been software lead for the second-generation Roomba at iRobot; Lee was an associate professor in computer science at Bridgewater State University; Tim was a roboticist and graduate of the M.I.T. Media Lab; and I had been a middle school teacher and curriculum specialist.

While we never finished developing the course, the results were promising when we tested components of the course on Amalia and Ariana, two high school juniors with little interest or experience in either robotics or computer science. In fact, Amalia and Ariana went on to major in computer science at Boston University and Carnegie Mellon, respectively. Although it would be impossible to say if they chose those majors because of their work with us, we did design the course to be powerful—and when I say powerful, I mean literally path-altering.

Deep down, we believed most students would benefit from and enjoy the application of computational thinking across a wide range of fields—and we hoped to demonstrate that in this course for our students, the world, and ourselves. Of course, if successful, it would only make sense for students to change paths and continue studying and applying computational thinking on their own. That was, after all, the goal.

An authentic experience

Creating a path-altering learning experience isn’t hard. If you truly believe computational thinking is a skill most students would benefit from and enjoy, then you simply enable them to have an authentic computational thinking experience—one in which they are the expert and they experience what mastery looks and feels like. But how do you do that for beginners?

A slow build-up isn’t feasible because they won’t stay engaged long enough to reach the payoff. You have to show them, as quickly as possible: this is what you can do as a computational thinker, and you are more than capable of developing these skills. A passive experience where you can see but not do doesn’t work either: thinking computationally is different from observing computational thinking, and watching others do something doesn’t convince you that you can do it, too.

Ground, scale, leverage

To create an authentic and sufficiently powerful learning experience, we attempted to distill what roboticists and computer scientists do down to the bare essentials. In our challenge, students would engage in the engineering design process, diagnose and debug issues, and navigate abstraction layers by constructing and applying a robust mental model of the robot. This mental model had to be grounded, so students could think and act like the robot, and it had to be complex enough to account for interactions between hardware, software, and the environment.

In the end, we decided to de-emphasize mechanical design in the course because we felt it would take too long for students to develop those skills on top of everything else we needed them to do. While the students would not be designing the hardware themselves, they would still need to understand the hardware to construct mental models of the robot.

As we designed the course, we kept five key principles in mind:

  1. keep the core set of skills and concepts needed to solve a challenge small so students can jump straight into problem-solving;
  2. enable students to ground their mental models so they can reason and problem-solve from a solid foundation, and feel secure enough to engage in inquiry and work independently as experts;
  3. focus on skills and concepts that can be leveraged to solve a diverse set of compelling problems;
  4. focus on skills and concepts that scale as you grow, as opposed to skills and concepts that are replaced or discarded as you grow; and
  5. select the most interesting and complex challenge solvable using the core set of skills and concepts within the constraints of the course.

Behavior-based programming and dancing

Everything depends on enabling students to construct grounded mental models. If the students are uncertain in their understanding, they won’t be able to engage in inquiry, problem-solve effectively, or scale and revise their mental models later on—and the experience won’t be nearly as authentic or powerful. By using behavior-based programming instead of a traditional imperative programming language, we dramatically simplified the mental model required to understand the robot, and made it possible for beginners to rapidly program the robot to exhibit complex behaviors and react to the environment by following a set of simple condition-action rules.

With this basic mental model in mind, the most interesting and complex challenge we could think up using behavior-based programming involved teaching a robot how to dance. Note, this is not the same as programming a robot to perform a scripted dance routine to a fixed piece of music, which requires very little computational thinking. It means programming a robot to improvise a dance on its own to an unknown piece of music and with an unknown robot dance partner. The robot needs to synchronize its movement to the music while staying in close proximity to—but not colliding with—its partner.

When we shared this challenge with adults, the roboticists and computer scientists were enthralled. It represented the kind of computational thinking they did every day, and they wanted to work on the challenge themselves. But few people thought this was a challenge typical students could take on, especially students with little interest or background in robotics or computer science. That told us we were on the right track—the challenge was both authentic and rewarding to solve.

Testing our hypothesis

We decided to invite two high school juniors, Amalia and Ariana, to test our design. We needed a challenge two beginners, going from zero to 60, could solve in under 90 minutes. One of the intermediate steps in Teaching a Robot How to Dance is programming one robot to follow another robot. Instead of using two robots, we challenged Amalia and Ariana to program one robot to follow a beacon across the dance floor.

We began by helping Amalia and Ariana construct a grounded mental model of the robot. They would need a complete and concrete understanding of how the robot works before they could take the lead in programming the robot and diagnosing issues. Daniel, acting as the instructor, guided them by posing questions and encouraging them to experiment with the robot itself.

Constructing a mental model of the robot

The first step was understanding how the robot sees the beacon. The beacon emits an infrared (IR) signal: light at a wavelength invisible to the human eye. The robot detects this light using four IR sensors mounted facing front, back, left, and right. But reading about sensors and understanding how they work are two different things. Amalia and Ariana figured out how the sensors work by formulating and testing their own theories on the robot itself. To understand how the sensors work, try the simulation below.

To run the simulation on its own page, open it in CodePen and then change the view to full page

The second step was understanding how the robot moves across the dance floor. The robot has two drive wheels: one on the left and one on the right. By setting the wheels to turn at various velocities, the robot is able to move in arcs and straight lines. This is known as differential steering.
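If you want the math behind differential steering, here is a minimal sketch in Python (my own illustration with an assumed wheel spacing; none of this code is from the course): the average of the two wheel velocities gives the robot’s forward speed, and their difference, divided by the distance between the wheels, gives its turning rate.

    WHEEL_BASE_MM = 235.0  # assumed spacing between the drive wheels, in mm

    def body_motion(left_mm_s, right_mm_s):
        """Convert left/right wheel velocities into forward speed and turn rate."""
        forward_mm_s = (left_mm_s + right_mm_s) / 2.0          # straight-line speed
        turn_rad_s = (right_mm_s - left_mm_s) / WHEEL_BASE_MM  # positive = turning left
        return forward_mm_s, turn_rad_s

    print(body_motion(200, 200))   # (200.0, 0.0): equal wheels, drive straight
    print(body_motion(-200, 200))  # (0.0, ~1.7 rad/s): opposite wheels, spin in place
    print(body_motion(100, 200))   # (150.0, ~0.43 rad/s): unequal wheels, arc to the left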

There was some initial discussion about the best way for Amalia and Ariana to drive the robot. Should we implement a basic driving API so they could simply command the robot to drive forward or turn right? Or should we expose them to differential steering and have them drive the robot by setting the two wheel velocities? In the end, we decided to have them drive the robot by setting wheel velocities because: (1) differential steering isn’t that hard to understand; and (2) we wanted them to feel as though they were interacting with the robot at the most concrete level possible, not interacting through a black box or abstraction layer. Try the second simulation below to understand how the drive wheels work.

To run the simulation on its own page, open it in CodePen and then change the view to full page

The third step was understanding how the behavior system links the drive wheels to the IR sensors. The robot operates by running through a loop one hundred times a second. At the start of the loop, the robot gathers readings from its four sensors. It then uses those readings to choose an action by going down a list of condition-action rules and choosing the first rule whose condition is true. For example, the first condition-action rule in Amalia and Ariana’s final program is:

if (IR_left > IR_right + 2) then leftWheel = -200, rightWheel = 200

This means that, if the IR signal at the left sensor is more than 2 greater than the IR signal at the right sensor, the robot will set the velocity of its left wheel to -200 mm/s and the velocity of its right wheel to 200 mm/s. If the condition is false, the robot will check the next condition-action rule in its list. The robot chooses and executes a new action every 0.01 seconds.
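Here is a minimal sketch of that loop in Python, just to put the mental model in code (read_sensors and set_wheels are hypothetical placeholders, not the robot’s actual firmware): every cycle, the robot scans the rule list in priority order, and the first rule whose condition is true sets the wheel velocities.

    import time

    # Each rule pairs a condition (a test on the four IR readings) with an action
    # (left and right wheel velocities in mm/s). Rules are listed in priority order.
    rules = [
        (lambda ir: ir["left"] > ir["right"] + 2, (-200, 200)),  # the rule quoted above
        # ... more rules, in descending priority ...
    ]

    def control_loop(read_sensors, set_wheels, hz=100):
        """One pass per cycle: gather readings, then fire the first matching rule."""
        while True:
            readings = read_sensors()    # e.g. {"front": 5, "back": 0, "left": 7, "right": 2}
            for condition, action in rules:
                if condition(readings):
                    set_wheels(*action)  # execute the chosen action
                    break                # lower-priority rules are skipped this cycle
            # If no condition was true, the wheels keep their previous velocities.
            time.sleep(1.0 / hz)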

Understanding the behavior system is crucial. Amalia and Ariana were only willing to take risks and test hypotheses because they were confident in their ability to understand the robot and predict what it would do. And—if the robot did something unexpected, which happened in every trial until the last one—they were even more confident they could diagnose and troubleshoot the problem.

The two simulations below will help you understand the behavior system. The first simulation places you in the role of the robot. Seeing only what the robot sees and moving only as the robot moves, can you drive your way to the beacon at its unknown location? The second simulation shows how complex behaviors can emerge from a set of simple condition-action rules. The rules in this last simulation are similar to the rules programmed into the robot by Amalia and Ariana. I changed some of the numbers and added one rule because the simulated world is slightly different from the real world.

To run the simulation on its own page, open it in CodePen and then change the view to full page
To run the simulation on its own page, open it in CodePen and then change the view to full page

Analyzing our test results

We recorded our 90-minute session with Amalia and Ariana and posted a series of videos on YouTube. You can find a detailed analysis of the session at Computing Explorations.

If you are an educator interested in computational thinking, you should definitely watch the entire series. But for this article, I’m going to highlight a few clips. Amalia and Ariana are both strong academic students and above average problem-solvers. Because of that, some people wonder how weaker students would fare at the same challenge. But if you watch these videos, you will see their demeanor and engagement visibly change from the start of the session to the end. Amalia and Ariana become better problem-solvers as they ground their understanding and feel increasingly secure. That’s how all learners respond in an appropriate learning environment. The clips below are arranged in chronological order.

Figuring out that IR signal strength can be used as an indirect measure of distance (2:40)
Figuring out how to use priority order when the beacon is behind the robot (3:24)
Using real-time instrumentation to diagnose emergent behaviors (4:22)
Success and celebration! (1:57)

At the start of the session, Amalia and Ariana are hesitant. They hunch over, speak in hushed tones, and look to Daniel for approval before continuing on, and most of their statements sound more like questions. But once they begin to make sense of the behavior system, their confidence grows. Instead of always looking to Daniel, they start looking to each other, reasoning through problems on their own. And by the end, they are smiling, laughing, relaxed, and animated. When the robot behaves unexpectedly, they take it as a fun challenge and try to figure out what happened, ignoring and even overriding the adults in the room. They’ve clearly taken the lead.

Real-time instrumentation

Identifying a small, well-grounded mental model that can be used to solve an interesting challenge is only the first step. Enabling students to actively construct and ground their own mental models through direct experiences is only the second step. Any mental model that we initially construct is going to be naive. It’s only after we’ve applied and tested our model, and revised it in the face of new data over time, that it becomes sophisticated.

While 90 minutes is not a lot of time to test and revise a mental model, Amalia and Ariana did revise and test their program eight times. And along the way, if you watch the videos, you can see where they deepen their understanding of the sensors (e.g., signal strength can be used as an indirect measure of distance), the drive wheels (e.g., the robot can drive backwards by setting its wheel velocities to negative numbers), and the behavior system (e.g., if no condition-action rule is triggered, the drive wheels remain set at their previous velocities; we don’t have to test whether a condition is false if that condition must be false to reach a lower-priority rule). I’m also willing to bet they figured out: if you establish a deadband, then you might need some additional condition-testing to catch the cases falling through that deadband!
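To make those lessons concrete, here is one illustrative rule list in the same style as the sketch above (my own reconstruction of beacon-following, not Amalia and Ariana’s actual program): the “+ 2” deadband keeps the robot from twitching left and right when the side readings are nearly equal, each rule can assume the rules above it failed, and a final catch-all covers the case where the readings fall through the deadband without the beacon being visible in front.

    # Illustrative beacon-following rules, in priority order, for the control loop
    # sketched earlier (a reconstruction, not the students' actual program).
    rules = [
        # 1. Beacon clearly to the left: spin left. The "+ 2" is the deadband.
        (lambda ir: ir["left"] > ir["right"] + 2, (-200, 200)),
        # 2. Beacon clearly to the right: spin right. (Rule 1 already failed,
        #    so there is no need to re-test whether the left signal is stronger.)
        (lambda ir: ir["right"] > ir["left"] + 2, (200, -200)),
        # 3. Side readings inside the deadband and the beacon visible ahead: drive at it.
        (lambda ir: ir["front"] > 0, (200, 200)),
        # 4. Inside the deadband but nothing in front (e.g. beacon behind): spin to search.
        (lambda ir: True, (-200, 200)),
    ]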

Enabling students to construct a mental model is one thing. Encouraging them to use it is another. To encourage Amalia and Ariana, we made sure they had enough data at their fingertips to understand what the robot was doing at all times. That included LEDs on the robot to indicate signal strength at the four IR sensors, and telemetry data on a nearby computer monitor, including the current condition-action rule being executed by the robot. The real-time instrumentation made it easier for them to apply their mental models when reasoning about the robot or analyzing emergent behaviors. It also gave them the data and feedback needed to revise those models and engage in inquiry.
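In code, that instrumentation might look something like the variant below (again a sketch with hypothetical report, read_sensors, and set_wheels functions, not the actual telemetry system): each cycle, the loop reports the four readings and the index of the rule that fired, which is exactly the data needed to check a prediction against what the robot actually did.

    import time

    def control_loop_with_telemetry(rules, read_sensors, set_wheels, report, hz=100):
        """Same behavior loop as before, but it also reports what the robot saw and did."""
        while True:
            readings = read_sensors()
            fired = None                 # index of the rule that fired, if any
            for index, (condition, action) in enumerate(rules):
                if condition(readings):
                    set_wheels(*action)
                    fired = index
                    break
            report(readings, fired)      # e.g. drive the LEDs or stream to a monitor
            time.sleep(1.0 / hz)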

Navigating abstraction layers: drilling down

In the design of Teaching a Robot How to Dance, we take mental model revision much further. After all, the course wouldn’t be very powerful if our mental models stopped developing after the first lesson. We encourage students to revise their mental models in two directions: drilling down and building up.

In drilling down, we are basically making our mental models more robust and concrete by opening up and looking inside of any black boxes. While the mental models constructed by Amalia and Ariana may feel concrete enough already, they actually contain a number of black boxes that an expert would feel compelled to delve into.

For example, when we say the robot sets the left and right wheel velocities, what does that mean exactly? For many robots, it actually means the robot varies the amount of power being delivered to each wheel’s motor, and as the robot’s battery runs down, the wheels actually turn slower. That may not matter in most scenarios, but it could be a serious problem for a robot trying to execute precise dance moves synchronized to music.
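A toy model of the difference, purely illustrative (the numbers and functions are made up, and this is not how the Create actually drives): an open-loop controller converts the requested velocity directly into motor power, so the actual speed sags as the battery drains, while a closed-loop controller uses the measured wheel speed to keep trimming the power until the request is met.

    def open_loop_speed(requested_mm_s, battery_volts, nominal_volts=14.4):
        """Toy open-loop model: power is scaled for a full battery, so speed sags as it drains."""
        return requested_mm_s * (battery_volts / nominal_volts)

    def closed_loop_power(requested_mm_s, measured_mm_s, power, gain=0.01):
        """Toy feedback step: nudge the power command until the measured speed matches."""
        return power + gain * (requested_mm_s - measured_mm_s)

    print(open_loop_speed(200, 14.4))  # 200.0 mm/s on a full battery
    print(open_loop_speed(200, 12.0))  # ~166.7 mm/s on a drained one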

Amalia and Ariana also never asked how frequently the robot executes a new action, or how quickly the robot’s wheels can physically transition from turning at -200 mm/s to turning at +200 mm/s. Again, they never bumped up against those constraints, but we designed the course so students would run into them. In fact, in the simulations, the robot only selects a new behavior twenty times a second. Because of that, it requires a wider deadband when turning to avoid oscillating behaviors (getting stuck turning left and right), which in turn causes the simulated robot to take a slightly curved path to the beacon.
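A rough back-of-the-envelope check shows why the slower loop needs a wider deadband. Assuming a wheel spacing of roughly 235 mm (the exact number doesn’t matter), a robot spinning in place at ±200 mm/s turns through only about 1 degree per cycle at 100 Hz, but nearly 5 degrees per cycle at 20 Hz; if the deadband is narrower than that, the robot overshoots every cycle and oscillates left and right.

    import math

    WHEEL_BASE_MM = 235.0  # assumed spacing between the drive wheels
    WHEEL_SPEED = 200.0    # mm/s, wheels turning in opposite directions

    spin_rate_deg_s = math.degrees(2 * WHEEL_SPEED / WHEEL_BASE_MM)  # ~97.5 degrees per second

    for hz in (100, 20):
        print(f"{hz} Hz loop: ~{spin_rate_deg_s / hz:.1f} degrees of rotation per cycle")
    # 100 Hz loop: ~1.0 degrees of rotation per cycle
    # 20 Hz loop: ~4.9 degrees of rotation per cycle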

Navigating abstraction layers: building up

Driving the robot by setting wheel velocities worked well for Amalia and Ariana because they only needed the robot to drive straight or turn in place. On the dance floor, the robot needs to perform aesthetically pleasing dance moves, and driving it through a dance move by setting raw wheel velocities is going to get cumbersome fast.

Students will quickly discover the utility of adding an abstraction layer: a basic driving API on top of the drive wheels, such as forward(), back(), turn(), and arc(). Then, on top of the basic driving API, they may want to add a basic dance step API, like slide(), wiggle(), shuffle(), twirl(), spin(), and bump(); and then a dance move API combining those dance steps into artistic combos.
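A sketch of what those layers might look like (the function names are hypothetical, and set_wheels() stands in for the wheel-velocity command the students were already using): each layer is written only in terms of the layer below it.

    import time

    def set_wheels(left_mm_s, right_mm_s):
        """Lowest layer: the wheel-velocity command (stubbed out here for illustration)."""
        print(f"wheels: left={left_mm_s} right={right_mm_s}")

    # Basic driving API, built only on set_wheels()
    def forward(speed=200):
        set_wheels(speed, speed)

    def back(speed=200):
        set_wheels(-speed, -speed)

    def turn_left(speed=200):
        set_wheels(-speed, speed)

    # Basic dance step API, built only on the driving API
    def shuffle(seconds=1.0):
        """Hypothetical step: rock forward and back."""
        forward()
        time.sleep(seconds / 2)
        back()
        time.sleep(seconds / 2)

    def twirl(seconds=1.0):
        """Hypothetical step: spin in place."""
        turn_left()
        time.sleep(seconds)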

But programming the robot to execute artistic dance moves is only a small part of the challenge. Those moves need to be parameterized so they sync with the music. For example, a standard wiggle() might be two beats long. The other issue is dealing with the robot dance partner. Our robot needs to respond if the other robot gets too close or drifts too far away. We could have the robot break out of a dance move if necessary, but that should only happen in an emergency; ideally, the robot adjusts the move it’s in based on the movement of the other robot. Where to make those adjustments (in the basic driving API, the basic dance step API, or the dance move API) is a design question students will have to figure out and implement themselves.
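For instance (still with hypothetical names), a dance step could take its duration from the music’s tempo, and a move could accept a small correction derived from the partner’s position, so the adjustment lives above the driving layer without ever breaking out of the move. Here the move is sketched as a list of wheel-velocity segments rather than direct motor commands:

    def beat_seconds(bpm):
        """Length of one beat at the music's tempo."""
        return 60.0 / bpm

    def wiggle(beats=2, bpm=120, drift=0.0):
        """Hypothetical wiggle: alternate left and right arcs, one per beat.
        Returns (left_mm_s, right_mm_s, seconds) segments; drift in [-1, 1] shades
        the whole move gently toward (+) or away from (-) the dance partner."""
        bias = int(100 * drift)  # small, bounded asymmetry; never interrupts the move
        segments = []
        for beat in range(beats):
            if beat % 2 == 0:
                segments.append((150 + bias, 250 + bias, beat_seconds(bpm)))  # arc left
            else:
                segments.append((250 + bias, 150 + bias, beat_seconds(bpm)))  # arc right
        return segments

    print(wiggle(beats=2, bpm=120, drift=0.25))  # two half-second arcs, shaded toward the partner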

In making those adjustments, it might also be helpful to convert the four IR sensor readings into an estimated distance and heading for the other robot by creating a virtual sensor sitting between the real sensors and the behavior system. Then, when programming condition-action rules, the students could think in terms of distance and heading instead of IR_left and IR_right.
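One simple way to build such a virtual sensor (a sketch; the vector-sum weighting is my own choice, not the course’s): treat the four readings as components along the robot’s forward and left axes, then read a heading off the resulting angle and a rough proximity off the combined signal strength.

    import math

    def beacon_estimate(ir):
        """Combine front/back/left/right IR readings into (heading_degrees, strength).
        Heading is 0 straight ahead and positive to the left; strength grows as the
        beacon gets closer, so it serves as an indirect measure of distance."""
        x = ir["front"] - ir["back"]  # signal along the robot's forward axis
        y = ir["left"] - ir["right"]  # signal along the robot's left axis
        heading_deg = math.degrees(math.atan2(y, x))
        strength = math.hypot(x, y)
        return heading_deg, strength

    # A rule can then read like: "if heading > 10 degrees, turn left"
    print(beacon_estimate({"front": 5, "back": 0, "left": 7, "right": 2}))  # (45.0, ~7.07)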

Building up is the best way to test a mental model. If we can leverage and build on top of a mental model, then it is likely well-grounded and robust. If not, then we are exposing faults and misunderstandings that we should drill down into. Drilling down and building up work in tandem.

Vertical learning

One of the questions I’m asked all the time is: “Shouldn’t you personalize your curriculum?” In general, I don’t personalize for student interest unless I’m working with a student one-on-one. But I do personalize for student cognition—how different students think about and process the world around them. I try to get to know how individual students think and learn, so I can design learning experiences to fit their needs.

When we were designing Teaching a Robot How to Dance, we weren’t targeting the course at students interested in dance. I have no idea whether Amalia or Ariana had any interest in dance because we didn’t ask them and it wasn’t relevant to us. Our goal was to create a powerful and authentic experience that would enable a student to experience the world as an expert computational thinker. Some students will find that experience engaging; others, not so much.

My goal as an educator is to enable students to learn vertically, no matter the domain. In Teaching a Robot How to Dance, that means: (1) recognizing the black boxes in our understanding, and having the confidence to open them up and make sense of what we find inside; and (2) actively testing mental models and using data to revise and improve them. These skills are useful and relevant for everyone, even if you aren’t a roboticist or computer scientist. It’s how we cultivate curiosity and become lifelong learners.

Special thanks to Tim McNerney, Lee Mondshein, and Daniel Ozick for our collaboration; and Amalia and Ariana for being such wonderful guinea pigs.
