Wheel Odometry Model for Differential Drive Robotics

Ahmed
19 min read · Mar 1, 2023


Wheel odometry refers to odometry (i.e., estimating motion and position) using rotary encoders (i.e., sensors that attach to the motors of the wheels to measure rotation). It’s a useful technique for localization with wheeled robots and autonomous vehicles.

In this article, we’ll dive deep into wheel odometry by exploring a wheel odometry model for differential drive robots.

Defining our Scenario

The world of wheeled robotics is complex and expansive. Similarly, there are various options when it comes to rotary encoder sensors. Thus, before we start making our wheel odometry model, let’s define the scenario we will model.

For this article, we’ll focus on a two-wheeled robot. The shape and silhouette of the robot will not matter, as we’ll treat the robot as a point in space; this greatly simplifies our model because we won’t have to worry about the nuances of the robot’s physics. In diagrams, we’ll simply represent the robot as a rectangle. The only physical constraint we’ll impose on our robot is that the two wheels are parallel. We’ll define a reference point located equidistant between the two wheels, which will simplify our model dynamics. Furthermore, for visualization purposes, we’ll denote the front, or forward direction, of the robot with corner triangles. Thus, the robot in our diagrams will look like this:

For this article, we’ll focus on differential drive robots. Differential drive means that each wheel has its own motor and can be driven independently of the other: each motor can spin its wheel at a different speed and in a different direction (i.e., forward or backward). Based on the speeds of the two wheels, we’ll see different robot motions. If the two wheels spin at the same speed in the same direction, the robot moves straight in that direction (e.g., if both motors spin forward at the same speed, the robot moves forward in a straight path). If one wheel spins faster than the other, the robot turns away from the faster wheel, toward the side of the slower wheel (e.g., if the right motor spins forward faster than the left, the robot curves to the left; the same logic applies when the wheels spin backward).

Since our robot can move forward and backward, and left and right flip depending on how you view the robot, it can be confusing to discuss the directional motion of the robot. To remedy this potential confusion, we’ll fix a convention, similar to how a compass has one true north (i.e., a fixed frame of reference): forward, backward, left, and right will always be defined relative to the front of the robot. In this setup, regardless of the robot’s orientation or motion, these directions stay consistent with respect to the robot itself. Notice in the above diagram that regardless of which orientation the robot is in, the directions are adjusted based on where the forward direction is pointed.

The measurements for our odometry model will come from rotary encoders. Generally speaking, rotary encoders attach to the motors and collect data on rotation; for this situation, we will have two rotary encoders, one attached to the left wheel’s motor and another attached to the right wheel’s motor. Then, using the properties of the rotary encoder, we can determine information such as the distance traveled by the wheel. To demonstrate how rotary encoders work, we’ll focus on an example.

The rotary encoder we’ll focus on for this article is an incremental optical encoder. Incremental optical encoders leverage a light-emitting diode (LED), a disk with slits, and a circuit with a photosensor. The disk sits between the LED and the photosensor circuit. As the motor spins, the disk rotates, letting light from the LED pass through the slits to the photosensor and changing the voltage of the circuit. The number of voltage changes corresponds to the number of slits passed, which provides information on the angle of rotation (as each slit corresponds to some amount of rotation). It is an incremental encoder, meaning that at each measurement we only learn how much the motor rotated since the previous time step; this contrasts with an absolute encoder, where the exact orientation of the motor is determined at each measurement.

With the rotation data, alongside information about the wheel, such as its radius or circumference, we can estimate the distance traveled by the wheel. Since each slit represents some angle of rotation, knowing the number of slits passed tells us the amount of rotation between time steps. For an optical encoder, where all the slits are equally spaced, we can get the total angle of rotation between time steps by multiplying the number of slits passed by the angle represented by a single slit. After we determine the angle of rotation in radians, we can multiply it by the radius of the wheel to get the distance traveled by the wheel (equivalently, multiply the fraction of a full revolution by the wheel’s circumference).
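To make this concrete, here is a minimal sketch in Python of converting encoder ticks to wheel distance (the names ticks_to_distance, ticks_per_rev, and wheel_radius are illustrative choices, not from any specific encoder’s datasheet):

```python
import math

def ticks_to_distance(delta_ticks: int, ticks_per_rev: int, wheel_radius: float) -> float:
    """Convert an incremental encoder tick count into wheel travel distance.

    delta_ticks: slits (ticks) counted since the previous measurement
    ticks_per_rev: total number of slits on the encoder disk (one revolution)
    wheel_radius: wheel radius in meters
    """
    # Each tick is a fixed fraction of a full revolution (2*pi radians).
    angle = (delta_ticks / ticks_per_rev) * 2.0 * math.pi
    # Arc length: distance rolled = angle (in radians) * wheel radius.
    return angle * wheel_radius

# Example: 120 of 360 ticks on a 5 cm wheel covers a third of the circumference.
print(ticks_to_distance(120, 360, 0.05))  # ~0.105 m
```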

Our odometry model is not reliant on the specific encoder we use. In fact, any type of encoder will work as long as we are able to determine the distance traveled by each wheel. In the case of an incremental optical encoder, we can use the rotation data collected alongside the properties of the encoder and wheel (i.e., their dimensions) to translate our encoder measurements into distance. The incremental optical encoder was chosen for this article due to its intuitive nature. However, distances can be extracted from other types of encoders, although the process might look different, and those encoders would suffice for the odometry model in this article.

To summarize, the wheel odometry model in this article is for a two-wheeled, differential drive robot whose wheels are parallel. We will treat the robot as a single reference point located equidistant between the two wheels, ignore the physics of the robot’s body, and define directions with respect to the front of the robot. In terms of encoders, any type would work as long as we can extract the distance traveled by each wheel at each time step; to build intuition for encoders, we looked at incremental optical encoders as an example.

Wheel Odometry Model

The goal of our odometry model is to estimate the position and orientation of the robot. To achieve this, we’ll leverage data from the rotary encoders, the dimensions of our robot, and geometry. As discussed previously, the encoder will inform us of the distance traveled by each wheel at each time step. In terms of using the dimensions of our robot, since we’re representing our robot as a point, not much is needed. The only dimension we need is the distance of the point from the left and right wheels. Since we defined our reference point to be located equidistant between the two wheels, we only need to keep track of one number.

Now let’s define some variables to keep track of these ideas:
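To keep the notation concrete, we’ll use the following symbols (the particular letters are our own choice):

  • $d_L$: the distance traveled by the left wheel during the time step
  • $d_R$: the distance traveled by the right wheel during the time step
  • $l$: the distance from the reference point to each wheel (half the distance between the two wheels)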

The first two variables correspond to the distance traveled by the wheel at a certain time step. This information will come from our rotary encoder. The third variable can be derived by measuring the distance between the two wheels and dividing it in half since the point is equidistant from the two wheels.

Let’s define the model of motion for our robot. We’ll model the motion by saying the robot always travels along some arc. Mathematically, that means sweeping some angle along a circle of some radius.

The motivation for modeling motion as a curve along a circle is that it lets us use various geometric properties to solve for distance traveled and orientation angle.

In the above diagram, the model shows forward motion to the left. What about straight, rightward, and backward motion? Fortunately, our model would still hold — let’s explore why.

In our model, straight motion will correspond with a curve that has a very small angle and / or a very large radius. As the angle gets smaller or as the radius increases, the curvature of the curve decreases and the curve becomes flatter. At very small angles and / or large radii, the curve will look like a straight line. Thus, we’re able to encapsulate straight motion with this model.

For rightward motion, notice we can model that by flipping the above diagram horizontally. This reveals a symmetry: if you were to derive the flipped version, you’d get the same equations, except the variables associated with the left wheel would swap with the variables associated with the right wheel. In effect, the orientation estimate from the model would flip in sign (i.e., positive to negative, negative to positive), while the distance estimate would remain the same. Since angles are defined in both positive and negative directions, this sign flip is handled naturally, and rightward motion is captured by the model.

A similar argument covers backward motion, which is simply captured through negative distance values.

Now that we’ve established that using curves / arcs in this fashion is reasonable for capturing robot motion, let’s dive into the geometry behind our model.

The first geometry idea we’ll cover is the units for angles. Angles are commonly measured in degrees or radians. For degrees, we split a circle into 360 equal parts, and each slice spans a single degree. Radians are defined in terms of arc length on the unit circle (i.e., a circle with radius 1); essentially, they relate the length of an arc to an angle, so 1 radian is the angle corresponding to an arc of length 1 on the unit circle.


There are formulas and tables to convert between radians and degrees.
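For example, to convert, multiply by the appropriate ratio of $\pi$ and $180$:

$$\text{radians} = \text{degrees} \times \frac{\pi}{180}, \qquad \text{degrees} = \text{radians} \times \frac{180}{\pi}$$

so $90°$ is $\pi/2$ radians and $\pi$ radians is $180°$.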


Either unit for angles can be used in trigonometry functions and for defining geometric properties. In this article, we’ll leverage both radians and degrees — by default, the unit for angles will be radians, unless specified otherwise.

Now that we’ve established context on our units for angles, a key formula for our wheel odometry model will be the arc length formula (using radians):
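$$s = r\,\theta$$

where $s$ is the arc length, $r$ is the radius of the circle, and $\theta$ is the angle (in radians) subtended by the arc.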


The last few geometric ideas we’ll state are:

  • The sum of all angles on a straight line is 180°
  • The sum of all angles in a triangle is 180°
  • A tangent line to a circle is perpendicular (i.e., at a 90° angle) to the radius at the point of contact; furthermore, the angle between a tangent line and a chord is half of the angle of the arc produced by the chord

Now let’s start annotating our model with known variables and variables of interest. To avoid clutter, we’ll drop the time subscript for now as we work toward the core relationships in our odometry model. Later on, we’ll bring it back as it becomes crucial for downstream analysis.

The first three variables can be directly measured (using the encoder for the first two variables and a ruler for the third variable). The last three variables are not directly measurable — instead, we need to use geometry to relate these variables to the measurable quantities.

We can start by using the arc length formula. The path for the left wheel, right wheel, and reference points are arcs. They all share the same angle and the radius for each can be expressed in terms of the radius of the curve containing the reference point and the distance between the reference point and the wheels.
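Using the notation above, with $R$ as the radius of the curve containing the reference point, $\Delta\theta$ as the shared angle, and $d$ as the distance traveled by the reference point (and with the left wheel as the inner wheel, matching the leftward turn in the diagram), the arc length formula gives:

$$d_L = (R - l)\,\Delta\theta, \qquad d_R = (R + l)\,\Delta\theta, \qquad d = R\,\Delta\theta$$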

Now let’s solve for the change in the angle of rotation by solving the system of equations with elimination; the algebra is worked out after the steps below.

  • Distribute the multiplication
  • Multiply both sides of the left wheel distance equation by negative 1
  • Variable elimination and algebra
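$$d_L = R\,\Delta\theta - l\,\Delta\theta, \qquad d_R = R\,\Delta\theta + l\,\Delta\theta$$

Multiplying the left wheel equation by $-1$ and adding it to the right wheel equation eliminates the $R\,\Delta\theta$ term:

$$d_R - d_L = 2l\,\Delta\theta$$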

Thus, we were able to solve for the change in the angle of rotation in terms of measurable quantities, getting the following relationship:
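$$\Delta\theta = \frac{d_R - d_L}{2l}$$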

Now let’s solve for the radius of the curve containing the reference point by rearranging equations and plugging in what we know; the algebra is shown after the steps below.

  • Rearrange the equation so that the radius of the curve containing the reference point is on one side
  • We’ll flip the equation so that the quantity of interest we want to solve for will be on the right side (as it’s more natural to read)
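Starting from $d_L = (R - l)\,\Delta\theta$ and substituting our expression for $\Delta\theta$:

$$R = \frac{d_L}{\Delta\theta} + l = \frac{2l\,d_L}{d_R - d_L} + l = \frac{l\,(d_L + d_R)}{d_R - d_L}$$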

We solved for the radius of the curve in terms of measurable quantities. Now let’s move on to the distance traveled by the reference point. We just need to plug in our results and simplify.
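$$d = R\,\Delta\theta = \frac{l\,(d_L + d_R)}{d_R - d_L} \cdot \frac{d_R - d_L}{2l} = \frac{d_L + d_R}{2}$$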

We solved for all variables in terms of measurable quantities. Since we’re interested in the robot’s position and orientation, the key variables of interest would be the distance traveled by the reference point and the change in the angle of rotation. The distance traveled by the reference point informs us of the position and the change in the angle of rotation informs us of the orientation. The radius of the curve containing the reference point, while useful for derivation, is not really needed anymore. Thus, the key results from our model so far are:
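$$\Delta\theta = \frac{d_R - d_L}{2l}, \qquad d = \frac{d_L + d_R}{2}$$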

With the results so far, we can determine the distance traveled and the change in orientation from one time step to the next. The results describe the relative motion between time steps.

However, if we want to know the direction or new orientation of the robot, that information is missing. We know the distance traveled, but not the direction. We know how much the orientation angle changed, but not the new orientation angle. This will motivate the next part of our odometry model.

Let’s start by trying to determine the direction of the distance. To simplify our model, we will represent the distance traveled by the reference point as a line instead of a curve.

We can make this simplification because, typically, wheel odometry with encoders has a very high sampling rate. Our encoders collect data very frequently, so the time window between measurements is very small. Since the time window is very small, the amount of motion captured in each time step will also be small. For our model, that means the curvature of the arc will be very small and will resemble a straight line. Thus, it’s a safe assumption and simplification to represent our distance as a straight line.

We are interested in the angle that this distance is going in.

We can solve for this angle using the properties of triangles, namely that the angles in a triangle sum to 180°, and the properties of tangent lines to a circle, namely that the tangent line is perpendicular (i.e., at 90°) to the radius at the point of tangency. Note: the diagram removes the robot bodies to reduce clutter.

We can set up an equation and solve. Note: these angles are in degrees (because a general audience is more familiar with geometry in degrees), but we could have solved the problem in radians as well; the answer wouldn’t change.
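Calling the angle between the chord and the robot’s previous heading $\alpha$ (a symbol we introduce here for convenience), the chord and the two radii form an isosceles triangle whose equal base angles we’ll call $\beta$. The angles of the triangle sum to 180°:

$$\Delta\theta + 2\beta = 180^\circ \;\Rightarrow\; \beta = 90^\circ - \frac{\Delta\theta}{2}$$

Since the tangent line (the previous heading) is perpendicular to the radius, $\alpha$ and $\beta$ together make up that 90° angle:

$$\alpha = 90^\circ - \beta = \frac{\Delta\theta}{2}$$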

Great — we’ve solved for the angle of the distance in terms of a previously solved variable.

Now let’s shift focus to solving for the new orientation of the robot.

Similar to before, we’ll use geometric principles to solve the angle of the new orientation. Here’s our diagram (with robot bodies removed to reduce clutter):

The key to solving this is realizing that the top dotted line for the orientation angle of the robot is tangential to the curve because, in our model, the reference point travels along the curve. Since our orientation is based on the reference point, which is a tangent point of the arc, we know the top dotted line for the orientation angle makes a right angle (i.e., 90 degrees) with the arc’s radius. Then we can extend the bottom dotted line for the orientation angle to create another right angle with the arc’s radius.

Using the fact that the sum of all angles in a triangle is 180°, we get:
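Calling the remaining angle of the triangle $\gamma$ (another convenience symbol):

$$\Delta\theta + 90^\circ + \gamma = 180^\circ \;\Rightarrow\; \gamma = 90^\circ - \Delta\theta$$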

Now we can solve using the fact that angles along a straight line should sum up to 180°.
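With $\Delta\phi$ denoting the change in the robot’s orientation:

$$\gamma + 90^\circ + \Delta\phi = 180^\circ \;\Rightarrow\; \Delta\phi = 90^\circ - \gamma = \Delta\theta$$

In other words, the robot’s orientation changes by exactly the angle swept along the arc.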

Thus, our current odometry model looks like this (filtering away the less relevant / intermediary variables):
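$$\Delta\theta = \frac{d_R - d_L}{2l}, \qquad d = \frac{d_L + d_R}{2}, \qquad \alpha = \frac{\Delta\theta}{2}$$

where $\Delta\theta$ is both the angle swept along the arc and the change in the robot’s orientation, $d$ is the distance traveled by the reference point, and $\alpha$ is the direction of that travel relative to the previous heading.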

Now we know how far the robot traveled, the angle at which it traveled, the change in the angle of rotation, and the angle of orientation between different time steps.

Wheel Odometry Absolute Motion

From the results in the previous section, we are able to estimate relative motion (i.e., moving from one time step to the next). However, we can extend our odometry model to describe absolute motion. With absolute motion, we define the environment that the robot navigates in with a coordinate system (typically with x and y axes). In this coordinate system, the robot’s motion can be captured by coordinates, which tell us the absolute position of the robot.

As for orientation, we can define it as the angle from the positive x-axis. When the robot is facing the direction of the positive x-axis, it has an orientation angle of 0°. When the robot turns and faces somewhere in quadrant I, it has an orientation angle between 0° and 90°. When the robot faces somewhere in quadrant II, it has an orientation angle between 90° and 180°. When the robot faces somewhere in quadrant III, it has an orientation angle between 180° and 270°. And when the robot faces somewhere in quadrant IV, it has an orientation angle between 270° and 360°. This can be visualized with a standard unit circle graphic (with degrees and radians around the perimeter and quadrant labels near the center):


The major distinction from relative orientation is that absolute orientation is always from the same frame of reference — the angle from the positive x-axis on a fixed coordinate plane. In contrast, relative orientation can vary depending on perspective / frame of reference.

So far, our odometry model has been drawn so that the robot’s initial heading points along the positive x-axis. This means that, in our odometry model so far, the robot always started with an absolute orientation of zero radians (we never had to account for an existing orientation).

What happens when the robot starts from a different orientation? All the relative motion work remains, but now we need to account for this initial orientation to ensure we properly calculate the absolute position and orientation.

Why does all the relative motion still hold even though the initial orientation is changing? The reason has to do with perspective. Suppose I took the above example and rotated my existing coordinate system such that the initial orientation was now zero radians (i.e., made my coordinate system such that the x-axis is parallel with the robot’s current orientation). All the work for relative motion previously discussed would be applicable, as we’ve recreated the same scenario. Rotating the coordinate system doesn’t change anything fundamentally about it — it simply changes the perspective in which we would see it.

In fact, one strategy for building an absolute motion model is to keep creating new coordinate systems by rotating, and then transforming them back to the original coordinate system (based on the rotation needed to fit each new coordinate system). That said, the coordinate transformation method is more advanced and involves using (rotation) matrices.

However, it gives a great perspective on the geometry. Our robot starts off with some absolute orientation. If we decide to modify the coordinate system such that our robot has an orientation of zero radians, we need to rotate the existing coordinate plane by the absolute orientation angle. That’s based on how we defined absolute orientation as an angle off the positive x-axis. Basically, we’re adjusting the coordinate plane by the absolute orientation angle. Instead of shifting the entire coordinate plane, we can just add it to the relative orientation calculation, as demonstrated below:

In this diagram, the odometry model at time t will add the absolute orientation angle from the previous time step. Notice that adding the orientation from the previous time step won’t change the distance traveled by the reference point or the change in the angle of rotation, as the formulas we derived earlier don’t rely on the orientation angle (only on the traveled wheel distances). Instead, what does change is the orientation of the robot, from being relative between time steps to being absolute on the coordinate plane. Thus, the absolute orientation angle at any time step can be defined by:
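$$\theta_t = \theta_{t-1} + \Delta\theta_t$$

where $\theta_t$ is the absolute orientation at time step $t$ and $\Delta\theta_t$ is the change in the angle of rotation computed from the wheel distances at that step.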

When working with absolute motion, our robot will have a coordinate point at each time step. The coordinate position is updated with trigonometric properties, namely that the cosine of an angle is the adjacent side over the hypotenuse and the sine of an angle is the opposite side over the hypotenuse. Using the distance traveled by the reference point as the hypotenuse, and the angle of orientation from the previous time step plus the angle that results from motion, the distance traveled along the x and y directions can be calculated.

Adding the x and y distance to the previous time step’s coordinate, we can determine the new coordinate position of the robot. We can describe the dynamics with these equations:
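$$x_t = x_{t-1} + d_t \cos\!\left(\theta_{t-1} + \frac{\Delta\theta_t}{2}\right), \qquad y_t = y_{t-1} + d_t \sin\!\left(\theta_{t-1} + \frac{\Delta\theta_t}{2}\right)$$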

Notice that the position for the current time is based on the position and angle from the previous time step and the change in the angle of rotation in the current time step. Since we’re using quantities from two different time steps in one equation, it makes sense to bring back the time subscript to delineate the differences.
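To tie everything together, here is a minimal sketch of one odometry update in Python (the function name update_pose and its parameter names are illustrative choices, not from any particular library):

```python
import math

def update_pose(x, y, theta, d_left, d_right, half_track):
    """One wheel-odometry update step for a differential drive robot.

    x, y, theta: absolute pose from the previous time step (m, m, radians)
    d_left, d_right: wheel distances this time step, from the encoders (m)
    half_track: distance from the reference point to each wheel (m)
    """
    # Change in the angle of rotation from the wheel distances.
    delta_theta = (d_right - d_left) / (2.0 * half_track)
    # Distance traveled by the reference point.
    d = (d_left + d_right) / 2.0
    # The chord of the arc points at the old heading plus half the turn.
    direction = theta + delta_theta / 2.0
    return (x + d * math.cos(direction),
            y + d * math.sin(direction),
            theta + delta_theta)

# Example: equal wheel distances give straight motion with no heading change.
print(update_pose(0.0, 0.0, 0.0, 0.10, 0.10, 0.15))  # (0.1, 0.0, 0.0)
```

Calling this once per encoder measurement accumulates the absolute pose over time.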

Conclusion

Our encoder collects data on how much each wheel travels, and we can measure the distance between each wheel and the reference point. Using the arc length formula, we got a system of equations that we solved to get the distance traveled and the change in the angle of rotation (in radians) between time steps.

We realized that our distance traveled can be represented with a line instead of an arc because of how frequently we collect data with wheel encoders. The high data collection frequency makes the curve behave / look more like a straight line. Then, using the geometry of angles, we found the angle of orientation (in radians) caused by moving and the final, relative orientation.

Afterward, we extended the model for absolute position and orientation where we have a defined coordinate plane. In the absolute system, we need to adjust our odometry model by factoring in the absolute orientation angle from the previous time step. Then, we can use trigonometric relationships to determine the new absolute orientation angle (in radians) and coordinate position.

For absolute motion, we have the coordinate position, comprised of an x and y component, and the absolute orientation angle (in radians). The three equations to define the odometry model for absolute motion are:
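$$x_t = x_{t-1} + d_t \cos\!\left(\theta_{t-1} + \frac{\Delta\theta_t}{2}\right)$$

$$y_t = y_{t-1} + d_t \sin\!\left(\theta_{t-1} + \frac{\Delta\theta_t}{2}\right)$$

$$\theta_t = \theta_{t-1} + \Delta\theta_t$$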

It is also common to express the equation in vector form:
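$$\begin{bmatrix} x_t \\ y_t \\ \theta_t \end{bmatrix} = \begin{bmatrix} x_{t-1} \\ y_{t-1} \\ \theta_{t-1} \end{bmatrix} + \begin{bmatrix} d_t \cos(\theta_{t-1} + \Delta\theta_t / 2) \\ d_t \sin(\theta_{t-1} + \Delta\theta_t / 2) \\ \Delta\theta_t \end{bmatrix}$$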

In this article, we developed an odometry model for a two-wheeled differential drive robot. This model enables us to track position and orientation using data from rotary encoders. Since rotary encoders are very affordable and support high sampling rates, they are often a go-to sensor for wheeled robotics. However, one of the challenges of using encoders is noise and measurement error. If we wanted to continue developing our model, a good next step would be to factor in the statistical uncertainties and errors in our measurements.
