When Robots Do Surprising Things

Emergent Behavior in Robotics

Michael Gielniak
Misty Robotics
9 min read · Jun 19, 2018


I’m sure you’ve seen movies based on the premise of robots suddenly doing weird things they weren’t programmed to do. Chaos ensues and everybody leaves the theater worried about the future. Well, robots suddenly behaving in unexpected ways isn’t actually a possibility, right?

It turns out, it kind of is, but in a good way.

We’ve all heard the phrase “the whole is greater than the sum of its parts.” That applies to robots, too. In robotics, a set of simple behavioral rules can result in more complex, unintended behaviors. Or a group of robots can interact to create unexpected results. We call these unanticipated effects emergent.

The natural world is filled with examples of emergence. The symmetry of snowflakes and the shapes of flocks of birds can be impressive and even inspiring. Emergent behavior in robots can be equally intriguing.

Organization Emerges from Randomness

One example of robots showing emergent behavior was a project by O. Holland and C. Melhuish from the University of the West of England. Researchers equipped robots with a bracket to push items along the floor in front of them as they drove along. When the force pushing on the bracket reached a certain threshold amount, the robot was designed to back up a small distance, spin to a random new bearing, and wander off in a new direction.

When these robots encountered an area filled with items, the emergent behavior was that they collected all the items into piles! Even cooler, the size of the piles correlated with the force threshold on the front bracket: as the researchers increased the threshold, the robots collected the items into fewer, larger piles.
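For a sense of how simple the underlying rule is, here’s a minimal sketch of the per-robot control loop in Python. It is a rough stand-in for the original controllers: the robot interface, the specific threshold value, and the back-up distance are illustrative assumptions, not details from the study.

```python
import random

FORCE_THRESHOLD = 2.0  # illustrative value; raising it yields fewer, larger piles

def control_step(robot):
    """One pass of the pushing rule (the robot's methods are hypothetical)."""
    if robot.read_bracket_force() >= FORCE_THRESHOLD:
        robot.reverse(distance=0.1)                # back up a small distance
        robot.turn_to(random.uniform(0.0, 360.0))  # spin to a random new bearing
        robot.drive_forward()                      # wander off in the new direction
    else:
        robot.drive_forward()                      # keep pushing whatever the bracket has gathered
```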

It doesn’t take too much imagination to see that there’s a lot of potential use for this type of emergent behavior, such as clearing and keeping an area free of items or debris. And the researchers noted that ants exhibit similar behavior when they push dirt or rocks into piles. Thus, robotics can be inspired by nature, and in turn, studying robotics can provide insight into biological systems.

Images of the emergent robot behavior described above—collecting objects into piles. From the Holland and Melhuish study: “Stigmergy, self-organisation, and sorting in collective robotics”.

How did some basic rules and random movement turn into a piling behavior in the Holland and Melhuish study? Well, sometimes robots surprise us because they incorporate strategies for problem solving that include exploration. These are strategies that allow a robot the freedom to explore possible solutions to a problem, not (necessarily) explore a physical space. Since the real world is so complicated, it’s impossible to design robots to handle all the situations they may encounter. So, instead of trying to perfectly pre-define all problems and all solutions, roboticists make use of exploration strategies such as:

  • Randomness, to help robots explore more possible solutions by encouraging change and unpredictability in problem-solving. For example, the research that created the piling behavior above incorporated randomness into its design. (A small sketch combining randomness with trial-and-error appears after this list.)
  • Trial-and-error, which helps robots solve problems by making small changes and measuring how the quality of the solution responds to those changes.
  • Data-driven learning, which uses data to help a robot “learn” the answer to a problem on its own (this is the problem-solving force behind neural networks). Sometimes, however, that data is imperfect or doesn’t capture the entire problem.
  • Searching, a core part of many optimizations, uses specified criteria to explore possible solutions for a problem. The search may conclude when a “good enough” solution is found, but “good enough” may leave some room for interesting things to happen.
  • Generalization, which applies what a robot has learned from a single case to other situations. When this single solution is applied to different problems, unexpected things may result.
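To make the first two strategies concrete, here is a small, self-contained sketch (not taken from any of the studies mentioned here) that combines randomness with trial-and-error: it repeatedly makes a random change to a candidate solution, measures the result, and keeps the change only when quality improves.

```python
import random

def quality(solution):
    """Toy scoring function standing in for a real problem: best at all zeros."""
    return -sum(x * x for x in solution)

def explore(solution, steps=1000, step_size=0.1):
    """Trial-and-error with randomness: perturb, measure, keep improvements."""
    best = quality(solution)
    for _ in range(steps):
        candidate = [x + random.uniform(-step_size, step_size) for x in solution]
        score = quality(candidate)
        if score > best:                      # keep the change only if it helped
            solution, best = candidate, score
    return solution

print(explore([3.0, -2.0, 5.0]))  # drifts toward the toy optimum [0, 0, 0]
```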

In all these cases, we seek answers to problems that can’t be perfectly defined, so the solutions can’t be, either. When the exact relationship between problem and solution isn’t perfectly known, any solution we do find can have side effects that change the way a robot operates. In robotics, this is one key source of emergent behaviors.

Interestingly, another way to get emergent behaviors from robots is from situations where the relationship between the problem and solution is defined, but only by sets of simple rules…

Synchronization Emerges from Perception

Exploring emergent behaviors based on nature is something researchers have been doing for a while. This example is inspired by synchronously flashing swarms of fireflies, based on work by A. Tyrrell, G. Auer, and C. Bettstetter.

Image: Junichiro Tokiyoshi / EyeEm

With only two simple rules, robots can exhibit firefly-like emergent behavior. Imagine a group of robots, each with a single LED and only two abilities:

  • Pulse its own LED.
  • Perceive when other nearby robots pulse their LED.

Each of these robots is given a variable called “desire to pulse LED.” When the variable “desire to pulse LED” reaches a threshold for a particular robot, that robot pulses its LED and resets its “desire to pulse LED” variable to zero. All robots in the group have the same threshold for when they choose to pulse their LED. And, all robots in the group follow the same rule set:

  • Rule 1: The variable “desire to pulse LED” increases steadily with time until it reaches a threshold, called the “desire to pulse LED” threshold.
  • Rule 2: When a robot perceives the LED pulse of another robot, it increases its own “desire to pulse LED” variable value by 10% once, at the time the perception occurs.

To underscore how simple these rules are, consider that the entire program (written in modified MATLAB .m script) running on each robot is only 8 lines long, involving instructions no more complicated than if-else statements, basic arithmetic, and variable assignments.
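Here’s a rough Python sketch of the same idea. It is not the original modified MATLAB program; the ramp rate, threshold, step count, and random starting values are illustrative assumptions, but the two rules above are applied directly.

```python
import random

THRESHOLD = 1.0   # the shared "desire to pulse LED" threshold
RAMP_RATE = 0.05  # how much desire grows per time step (illustrative)

# Three robots start with random "desire to pulse LED" values.
desires = [random.uniform(0.0, THRESHOLD) for _ in range(3)]

for step in range(300):
    # Rule 1: every robot's desire increases with time.
    desires = [d + RAMP_RATE for d in desires]
    # A robot at or above the threshold pulses its LED and resets its desire to zero.
    pulsed = [d >= THRESHOLD for d in desires]
    desires = [0.0 if p else d for d, p in zip(desires, pulsed)]
    # Rule 2: each robot that perceives a pulse boosts its own desire by 10% per pulse seen.
    pulses_seen = sum(pulsed)
    desires = [d if p else d * 1.10 ** pulses_seen for d, p in zip(desires, pulsed)]
    if pulses_seen:
        print(f"step {step:3d}: pulses from robots {[i for i, p in enumerate(pulsed) if p]}")
```

Run for enough steps, robots that start at different values gradually drift toward pulsing on the same time step.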

Figure 1 below illustrates this. Each colored line represents one of three robots, each with a unique LED color. The robots with the blue and green LEDs begin at very similar values for “desire to pulse LED,” so within one cycle they are pulsing their LEDs in unison. The robot with the red LED begins with an initial value of “desire to pulse LED” very different from the other two, so it takes longer to synchronize. On each cycle, the robot with the red LED reduces the gap between when it pulses its LED and when the other robots pulse theirs. By the twelfth cycle, it is pulsing in unison with the other two robots.

Figure 1: Desire to pulse LED vs. time for three robots: one with a red LED, one with a green LED, and one with a blue LED. Synchronization occurs by the 12th cycle.

No matter where these robots start, as long as they’re close enough to perceive each other, eventually they’ll pulse their LEDs in unison. The behavior is emergent, because at no point did anyone explicitly program these robots to pulse light in unison. Synchronization is simply a by-product of the simple behavioral rules they were programmed to follow.

By the way, there’s a larger principle at work here — the convergence of a stable system — and that’s part of why emergent behavior is so interesting. Investigating and understanding why such behaviors occur can lead you down the path of discovery and exploration of much larger and more complex topics in disciplines ranging from math to physics to biology.

To see a simulation of this algorithm in action, watch Video 1:

Video 1: Example of synchronized robot LED behavior emerging from two simple rules.

Wall Following Emerges from Avoidance

The next example of emergent behavior is well-known in robotics. Assume you have a robot with obstacle detection sensors (e.g. ultrasonic or time-of-flight sensors) at each of the four corners of its base, closest to the ground, and that the robot rotates about the centroid of its base. With only a simple obstacle avoidance algorithm, the robot can engage in wall-following behavior, using the following rule set from M. J. Matarić’s research on designing emergent behaviors:

  1. If the left-rear sensor detects an obstacle, turn left.
  2. If the left-front sensor detects an obstacle, turn right.
  3. If the right-rear sensor detects an obstacle, turn right.
  4. If the right-front sensor detects an obstacle, turn left.
  5. Otherwise, drive straight forward.

Figure 2: A robot with obstacle detection sensors (black) at the four corners of its base. Each sensor has a finite distance-sensing range (yellow).

The first four rules define an obstacle avoidance algorithm, because each rule turns the robot so that the triggering corner rotates away from the detected obstacle.

The robot can begin at any position in a walled space. Given the set of rules, it drives forward as long as none of the sensors detect an obstacle. When the first obstacle is detected, the robot turns that corner away from the obstacle until the obstacle is no longer detected. (At some point the obstacle must drop out of detection, because real-world sensors have a finite range.) When it stops turning, the robot is aligned approximately parallel to the wall, and it begins to drive forward again. That forward motion continues until the next obstacle, or the next corner of the space, is detected. The end result is that the robot appears to be deliberately driving parallel to the walls, even though that was never part of its programming.
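Translated into code, the rule set is about as short as the description suggests. Here’s a minimal Python sketch; the robot’s sensor and drive methods are hypothetical, and the rules are simply checked in the order listed above.

```python
def avoidance_step(robot):
    """One control step of the five-rule obstacle avoidance behavior."""
    if robot.left_rear_obstacle():       # Rule 1
        robot.turn_left()
    elif robot.left_front_obstacle():    # Rule 2
        robot.turn_right()
    elif robot.right_rear_obstacle():    # Rule 3
        robot.turn_right()
    elif robot.right_front_obstacle():   # Rule 4
        robot.turn_left()
    else:                                # Rule 5
        robot.drive_forward()
```

Looping this step near a wall produces the turn-until-parallel, then drive-forward pattern described above, even though nothing in the code mentions walls at all.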

Now, there are many conditions where this obstacle avoidance algorithm fails because it’s too simple. One example is when the robot begins in a location that is too tightly constrained by obstacles. When that happens, the robot appears either to vibrate in place or to freeze: it switches between turning left and turning right on alternating time steps, but it never actually makes progress in either direction.

The behavior is emergent, because at no point are we explicitly programming these robots to follow walls. Wall following is just a consequence of the behavioral rules. To see a simulation of the obstacle avoidance algorithm emerging as wall following, watch Video 2:

Video 2: Example of robot wall-following behavior that emerges from five simple rules in an obstacle avoidance algorithm.

Group Motion Emerges from Communication

Let’s look at one final example of emergent behavior in robotics. Inspired by animals that move in groups, robots with simple locomotion rules can also engage in emergent group motion. This behavior was notably described by C. W. Reynolds in a seminal article published 30 years ago, but it’s still relevant today.

Image: D. Thomas

A robot similar to the one shown below in Figure 3 is surrounded by other robots like itself. These robots can communicate with each other, but only within a finite radius (a “neighborhood”). Only one piece of information is communicated between the robots: bearing (the direction each robot is traveling).

Figure 3: A robot (green) that can communicate its bearing to other robots that are within its “neighborhood” communication radius (yellow).

With this information, each robot can independently follow two simple locomotion rules:

  • Alignment: Turn in the direction of the average bearing of all robots in the neighborhood.
  • Keep Communication: If the robot loses communication with the last robot in its neighborhood, turn in the direction opposite to the last move and hold that bearing until communication with another robot is regained.

The result of each robot independently executing these rules is that the robots’ movements resemble those of a group of animals.
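Here’s a sketch of those two rules as one robot might run them in Python. The neighborhood query, the turn commands, and the use of a circular mean for averaging bearings are implementation assumptions on my part, not details from the original article.

```python
import math

def average_bearing(bearings):
    """Circular mean of bearings in degrees (plain averaging breaks at the 0/360 wrap)."""
    x = sum(math.cos(math.radians(b)) for b in bearings)
    y = sum(math.sin(math.radians(b)) for b in bearings)
    return math.degrees(math.atan2(y, x)) % 360.0

def locomotion_step(robot, last_turn):
    """One control step; returns the turn to remember for the next step."""
    neighbor_bearings = robot.neighborhood_bearings()  # bearings heard within the radius
    if neighbor_bearings:
        # Alignment: turn toward the average bearing of the neighborhood.
        target = average_bearing(neighbor_bearings)
        turn = (target - robot.bearing() + 180.0) % 360.0 - 180.0  # shortest signed turn
    else:
        # Keep Communication: the neighborhood is empty, so undo the last move once,
        # then hold that bearing (a turn of 0) until another robot is heard.
        turn = -last_turn
    robot.turn_by(turn)
    robot.drive_forward()
    return turn if neighbor_bearings else 0.0  # remember 0 so the reversed bearing is held
```

Each robot only needs to remember one number between control steps, its previous turn, which it feeds back into the next call.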

There are alternative implementations of these rules that create similar effects. For example, in some cases the “Keep Communication” rule is implemented as two separate rules, a workaround that spares the robot from having to remember its previous decision.

And there are a number of conditions that can cause this algorithm to fail. A primary one is if a robot starts outside the communication range of all other robots (i.e. with an empty neighborhood). However, as long as every robot in the group communicates with another robot in its neighborhood at some point during the group motion, it will begin to move towards, and eventually with, the group as a whole.

To see a simulation of this algorithm in action, watch Video 3:

Video 3: This group of robots takes some time to correct for their randomly assigned initial positions, so from the start of the simulation you can watch the “flocking” behavior emerge.

Surprise me, robot!

Emergent behaviors aren’t so scary now, are they? If you’re lucky enough to encounter a robot doing surprising things in real life, consider it a privilege. You may be witnessing something that can teach us a little more about the world as a whole.

And remember that it doesn’t take much to program a robot to do cool things. Anyone can write programs like the simple ones presented here, and we at Misty Robotics hope you do.
