# Motion Planning with F1/10

Now that we have a race car and a simulator, what do we do next?

# What is Motion Planning?

The motion planning problem, sometimes known as the navigation problem, is to find a sequence of valid configurations that moves the robot from a source to a destination.

In the case of our **F1/10** race cars, the motion planning problem comes down to finding sequences of configurations of our race car so that it traverses the race track without colliding with the track boundaries or the opponents.

# How do we do Motion Planning?

We can start by putting the planning problem into two categories for our car-like robot: planning on graphs and planning in the continuous space.

## Planning on Graphs

Planning on graphs is usually used to plan high-level tasks. The goal of planning on graphs is to find a path, ideally the shortest path, from one node on the graph to another. For example, the following figure shows an example of real-world planning.

If we want to find the shortest path from Pennovation to VanPelt, we can first convert the map into a simple graph shown below:

```
function Dijkstra(Graph, source):
    create vertex set Q                  // Unvisited set
    for each vertex v in Graph:
        g[v] ← INFINITY
        succ[v] ← UNDEFINED
        add v to Q
    g[source] ← 0
    while Q is not empty:
        u ← vertex in Q with min g[u]
        remove u from Q
        for each neighbor v of u:        // only v that are still in Q
            alt ← g[u] + cost(u, v)
            if alt < g[v]:               // shorter path is found
                g[v] ← alt
                succ[v] ← u
    return g[], succ[]
```

Dijkstra’s algorithm is one of the best-known algorithms for finding paths on a graph. The basic idea behind it is to keep a running cost for each node and repeatedly expand the lowest-cost node on the frontier until you reach the goal.
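Here is a runnable Python sketch of the algorithm above, using a priority queue instead of scanning the unvisited set. The campus graph and its edge costs are made up purely for illustration:

```python
import heapq

def dijkstra(graph, source):
    """Dijkstra's shortest paths.

    graph: dict mapping node -> list of (neighbor, edge_cost) pairs.
    Returns (g, succ): running costs and the predecessor of each node.
    """
    g = {v: float("inf") for v in graph}
    succ = {v: None for v in graph}
    g[source] = 0
    frontier = [(0, source)]
    while frontier:
        cost_u, u = heapq.heappop(frontier)
        if cost_u > g[u]:          # stale queue entry, already improved
            continue
        for v, cost_uv in graph[u]:
            alt = g[u] + cost_uv
            if alt < g[v]:         # shorter path found
                g[v] = alt
                succ[v] = u
                heapq.heappush(frontier, (alt, v))
    return g, succ

# Toy graph with invented edge lengths (two bridges between campuses).
campus = {
    "Pennovation": [("BridgeA", 2), ("BridgeB", 5)],
    "BridgeA": [("Pennovation", 2), ("VanPelt", 6)],
    "BridgeB": [("Pennovation", 5), ("VanPelt", 1)],
    "VanPelt": [("BridgeA", 6), ("BridgeB", 1)],
}
g, succ = dijkstra(campus, "Pennovation")
# g["VanPelt"] == 6, reached via BridgeB
```

Rather than removing the minimum node from an explicit set each round, the heap may hold stale duplicate entries; the `cost_u > g[u]` check simply skips them, which is the usual idiom with `heapq`.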

One problem with Dijkstra’s is that it’s uninformed, meaning that it doesn’t know where the goal is and explores in all directions equally. To solve this problem, we need a way to prioritize which nodes to explore.

For each node in the graph, we want to track the running cost to reach it from the start and prioritize nodes with a lower running cost. We also want to (under)estimate the cost to reach the goal (a heuristic) and prioritize nodes with a lower estimated cost-to-goal.

Heuristic functions are problem-specific. They need to be admissible, which means the heuristic never overestimates the actual cost to get to the goal. They also need to be consistent, which means that the heuristic of a node should be no more than the cost between the node and its successor plus the heuristic of the successor (basically the triangle inequality). Note that consistency implies admissibility, but not necessarily the other way around.
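As a quick numerical sanity check (not a proof), we can test the consistency condition for two common grid heuristics on a 4-connected grid with unit edge costs; the grid size and goal below are arbitrary:

```python
def manhattan(a, b):
    """|dx| + |dy|: exact for 4-connected grids with unit costs."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def euclidean(a, b):
    """Straight-line distance: an underestimate on any grid."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

goal = (4, 3)
# Consistency: h(n) <= cost(n, n') + h(n') for every neighbor n' of n.
# On a 4-connected grid every move costs 1, so we check h(n) <= 1 + h(n').
for x in range(6):
    for y in range(6):
        n = (x, y)
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nbr = (x + dx, y + dy)
            assert manhattan(n, goal) <= 1 + manhattan(nbr, goal)
            assert euclidean(n, goal) <= 1 + euclidean(nbr, goal)
```

Note that Manhattan distance stops being admissible on an 8-connected grid where a diagonal move also costs 1, since a diagonal step reduces the Manhattan distance by 2.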

A variant of Dijkstra’s algorithm is the A* algorithm. Instead of using only the running cost, it adds a heuristic value to each node’s priority when picking the next node to explore. We have created a **toy example** in our simulator to explore the difference between Dijkstra and A*. The toy example is a small grid world with obstacles. For a grid world example, there are two popular definitions of the neighborhood of a node: 4-connected and 8-connected.

There are also several options for heuristic functions in a grid world example. Some popular ones are Euclidean distance, Manhattan distance, etc. In our planning example in the simulator, we used Euclidean distance.
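The pieces above combine into a compact grid A*. This is an illustrative sketch, not the simulator’s actual code: the grid encoding (0 = free, 1 = occupied) and the `connect8` flag are assumptions, and the heuristic is Euclidean distance as in our simulator example:

```python
import heapq
import math

def astar_grid(grid, start, goal, connect8=False):
    """A* on a small grid (0 = free, 1 = occupied), Euclidean heuristic."""
    rows, cols = len(grid), len(grid[0])
    steps4 = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    steps8 = steps4 + [(1, 1), (1, -1), (-1, 1), (-1, -1)]
    steps = steps8 if connect8 else steps4

    def h(n):
        return math.hypot(n[0] - goal[0], n[1] - goal[1])

    g = {start: 0.0}
    parent = {start: None}
    frontier = [(h(start), start)]   # priority = g + h
    while frontier:
        _, u = heapq.heappop(frontier)
        if u == goal:                # reconstruct path by walking parents
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for dx, dy in steps:
            v = (u[0] + dx, u[1] + dy)
            if not (0 <= v[0] < rows and 0 <= v[1] < cols):
                continue             # off the grid
            if grid[v[0]][v[1]]:
                continue             # occupied cell
            alt = g[u] + math.hypot(dx, dy)   # 1 for straight, sqrt(2) diagonal
            if alt < g.get(v, float("inf")):
                g[v] = alt
                parent[v] = u
                heapq.heappush(frontier, (alt + h(v), v))
    return None  # no path exists

grid = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
]
path = astar_grid(grid, (0, 0), (3, 3))
```

Setting `connect8=True` switches the same code to 8-connected neighborhoods; only the step set and the per-step cost change.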

Before we start showing cool animations, we still need to define how to create a discrete approximation of the continuous space. Why? The continuous space simply has far too many configurations to consider for real-time planning. What is often used in the world of robotics is an occupancy grid. In the case of our **F1/10** cars, we’re doing motion planning in 2D space. Our occupancy grid is a 2D grid whose cell values represent whether each cell is occupied. The value can either be a boolean or a number between 0 and 100 representing the probability of occupancy.
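A minimal occupancy grid can be sketched in a few lines. The probability values here follow the same convention as ROS’s `nav_msgs/OccupancyGrid` message (0–100, with -1 for unknown); the grid size, sensor values, and the 50% threshold are illustrative choices:

```python
UNKNOWN = -1            # cell never observed
OCCUPIED_THRESHOLD = 50 # treat >= 50% probability as occupied

rows, cols = 4, 5
grid = [[UNKNOWN] * cols for _ in range(rows)]

# Fill in a few cells from hypothetical sensor returns.
grid[1][2] = 90   # probably a wall
grid[2][2] = 10   # probably free space
grid[0][0] = 0    # definitely free

def is_occupied(grid, r, c):
    """Conservative check: unknown cells count as occupied for planning."""
    return grid[r][c] == UNKNOWN or grid[r][c] >= OCCUPIED_THRESHOLD
```

Treating unknown cells as occupied is a conservative design choice; a planner that instead treats them as free will happily route the car through unexplored space.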

Now that we have an occupancy grid, we can start planning!

As we can see, A* clearly knows the direction to explore and it finds a better path faster.

## Planning in Continuous Space

Now that we have a discrete approximation of the continuous space, we can start planning in it. There are a few more concepts we need to understand before we go further. An important one is the configuration space versus the workspace of the robot. The configuration space is the space of complete specifications of the position of every point in the system. In our case, depending on how detailed you want to be, the configuration space of our **F1/10** race car could be as simple as (position_x, position_y, heading_angle); it could also be as complicated as (position_x, position_y, velocity_x, velocity_y, acceleration_x, acceleration_y, jerk_x, jerk_y, heading_angle, yaw_rate, and so on…). In contrast, the workspace is just the coordinates of the robot specified in the same coordinate system as the world it operates in. In our race car case, the workspace is just the x, y coordinates of the car in the world.
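The distinction is easy to make concrete in code. This sketch uses the simplest configuration mentioned above; the names are illustrative:

```python
from collections import namedtuple

# The simplest configuration for our planar car: pose on the plane.
Config = namedtuple("Config", ["x", "y", "theta"])

def workspace_point(q):
    """Project a configuration down to the workspace: just (x, y)."""
    return (q.x, q.y)

q = Config(x=2.0, y=1.5, theta=0.3)
# The heading angle theta exists only in configuration space;
# the workspace projection drops it.
```

A richer configuration (velocities, yaw rate, …) would simply add fields to the tuple; the workspace projection stays the same two numbers.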

There are a lot of challenges that come with motion planning for the race car. First is dimensionality: the more information the robot’s state encodes, the higher the dimensionality of the planning space. Second is responsiveness: the planning algorithm needs to react quickly when the race car is traveling at high speed. Third is obstacle avoidance: there are potentially dynamic, moving obstacles in a head-to-head racing scenario. Fourth is motion constraints: the car has kinematic and dynamic constraints on its motion. And the list goes on and on.

There are various approaches to motion planning that try to address some of these issues. But the one we’ll talk about today, and the one that the students of the Autonomous Racing course will implement as a lab, is a sampling-based approach: Rapidly-exploring Random Trees (RRT). The idea behind RRT is to expand a tree that spans the free space. At each iteration, you sample the free space, find the node on the tree closest to the sampled point, and expand the tree in that direction to create a new node. After sampling for a while, you’ll eventually have a tree that spans the whole free space, and you can simply traverse the tree to find a path for the robot from the start to the goal. The pseudocode of the base version of RRT is shown below.

```
function rrt(xinit, max_iter):
    V ← {xinit}                              // vertices
    E ← ∅                                    // edges
    for i = 1:max_iter:                      // maximum number of expansions
        xrand ← SampleFree()                 // collision-free random configuration
        xnearest ← Nearest(G=(V,E), xrand)   // closest neighbor in the tree
        xnew ← Steer(xnearest, xrand)        // expand tree toward the sample
                                             // (up to a distance)
        if ObstacleFree(xnearest, xnew):     // the edge is collision-free
            V ← V ∪ {xnew}
            E ← E ∪ {(xnearest, xnew)}
    return G=(V,E)
```
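The pseudocode above translates almost line for line into runnable Python. Everything concrete here is an illustrative assumption: disc-shaped obstacles, a 10×10 workspace, a fixed steering step, brute-force nearest-neighbor search, and a naive edge check that samples points along the segment:

```python
import math
import random

def point_free(p, obstacles):
    """obstacles: list of (cx, cy, radius) discs."""
    return all(math.hypot(p[0] - cx, p[1] - cy) > r for cx, cy, r in obstacles)

def sample_free(obstacles, xmax=10.0, ymax=10.0):
    """SampleFree(): rejection-sample a collision-free point."""
    while True:
        p = (random.uniform(0, xmax), random.uniform(0, ymax))
        if point_free(p, obstacles):
            return p

def steer(x_nearest, x_rand, step=0.5):
    """Steer(): move from x_nearest toward x_rand, at most `step` far."""
    d = math.hypot(x_rand[0] - x_nearest[0], x_rand[1] - x_nearest[1])
    if d <= step:
        return x_rand
    t = step / d
    return (x_nearest[0] + t * (x_rand[0] - x_nearest[0]),
            x_nearest[1] + t * (x_rand[1] - x_nearest[1]))

def edge_free(a, b, obstacles, n=10):
    """ObstacleFree(): sample n+1 points along the segment a-b."""
    return all(
        point_free((a[0] + i / n * (b[0] - a[0]),
                    a[1] + i / n * (b[1] - a[1])), obstacles)
        for i in range(n + 1))

def rrt(x_init, obstacles, max_iter=2000):
    V = [x_init]
    E = {}                                   # child -> parent, encodes the tree
    for _ in range(max_iter):
        x_rand = sample_free(obstacles)
        x_nearest = min(V, key=lambda v: math.hypot(v[0] - x_rand[0],
                                                    v[1] - x_rand[1]))
        x_new = steer(x_nearest, x_rand)
        if edge_free(x_nearest, x_new, obstacles):
            V.append(x_new)
            E[x_new] = x_nearest
    return V, E

random.seed(0)
V, E = rrt((1.0, 1.0), obstacles=[(5.0, 5.0, 1.5)])
```

Storing edges as a child-to-parent dictionary makes path extraction trivial: once a node near the goal is added, follow parents back to `x_init`. A real implementation would replace the brute-force `min` with a k-d tree, since nearest-neighbor lookup dominates the running time.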

There are also various improved versions of RRT: some consider the robot’s motion constraints, some try to use costs of nodes so that the path found is close to the optimal solution, and some try to parallelize the computations.

The variant shown below is RRT*, a version of RRT where a cost for each node is kept and the tree is rewired every iteration to find a solution close to the optimal one.

# Conclusions

With this intro, you should be able to start your own adventure in motion planning with the **F1/10** race car simulator.