Udacity Self Driving Car — Project 11: Path Planning

Here is a 30,000 ft view of a fully autonomous vehicle system:

Source: Udacity Self Driving Car Nano Degree Program — Autonomous System overview

Sensors sense the environment using cameras, radar, lidar, GPS, IMU, ultrasound, etc. Data from these sensors is fed to the Perception module, where multiple algorithms work hard to perceive the environment around the car: detecting other vehicles, pedestrians, traffic signs, free space, lanes, traffic lights, and more.

Information from the sensors is collated using Sensor Fusion techniques, which provide critical information about the position, velocity, and heading of both static and moving objects. Once the environment around the car is well understood, it’s time to move the car! But before moving the car we need to know “where is the car?”. This is what the Localization module does. Using GPS and lidar data, the car should localize itself to an accuracy of less than 10 cm, which is crucial for driving in urban scenarios such as traffic jams.

While Perception is the muscle, the Planning module is the true brain of the car. In Term 3 of Udacity’s Self Driving Car Nano Degree Program, a team of engineers from Mercedes-Benz does an excellent job of breaking down this difficult concept into simple and understandable terms.

Planning can be broken down into four steps:

  1. Route plan — A bird’s-eye-view plan of how to go from A to B: where to take a right or left turn, which road or highway to choose, etc.
  2. Prediction — Predicting the paths of all the cars around you for the next 5 to 10 seconds by looking at their current position, speed, and heading.
  3. Behavior plan — How to navigate through traffic, which lanes to choose, and what decisions to make at intersections.
  4. Trajectory generation — How to generate a trajectory which is feasible, safe, jerk-free, and within speed limits.
Source — Udacity Self Driving Car Nano Degree Program — Interaction of the various modules, where Sensor Fusion and Localization serve as inputs and Motion Control is the output of the Path Planning module, indicated by the green bounding box.

While the route plan comes from the map, Prediction, Behavior planning, and Trajectory generation are things the car must compute at regular intervals while sensing the environment around it. Information from the Sensor Fusion and Localization modules is the input to the Path Planning module, and the output goes to the Control stage, which actuates the vehicle using a PID (Proportional, Integral, Derivative) controller or MPC (Model Predictive Control).
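As a rough illustration of that last step, here is a minimal PID controller sketch in C++. None of this is project code; the gains and the cross-track-error input `cte` are hypothetical values chosen purely for illustration.

```cpp
#include <iostream>

// Minimal PID sketch: steering = -(Kp*cte + Ki*sum(cte) + Kd*d(cte)/dt),
// where cte is the cross-track error. Gains here are illustrative
// placeholders, not tuned values from any project.
struct PID {
  double Kp, Ki, Kd;
  double prev_cte = 0.0, int_cte = 0.0;

  double control(double cte, double dt) {
    int_cte += cte * dt;                      // integral term accumulates error
    double diff_cte = (cte - prev_cte) / dt;  // derivative term damps oscillation
    prev_cte = cte;
    return -(Kp * cte + Ki * int_cte + Kd * diff_cte);
  }
};

int main() {
  PID pid{0.2, 0.004, 3.0};                  // hypothetical gains
  double steer = pid.control(0.5, 0.02);     // cte of 0.5 m, 20 ms timestep
  std::cout << "steering: " << steer << "\n";
}
```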


In the final project of the Path Planning chapter, we are expected to drive a car (the Ego vehicle) around a 4.32-mile simulated highway, safely and swiftly maneuvering around traffic without any “incidents”. Incidents include collisions, exceeding the acceleration limit of 10 m/s², the jerk limit of 10 m/s³, or the speed limit of 50 mph, driving on the wrong side of the road, and so on.

Snapshot of a successful run where the Ego vehicle makes a lane change

The Term 3 Udacity simulator for Path Planning provides information about the highway as waypoints in global (X, Y) and also in Frenet (s, d) coordinates. It provides the position, velocity, and heading of the Ego vehicle, along with Sensor Fusion data for the other vehicles. The simulator renders a 3-lane highway (per side) with lanes that are 4 m wide and vehicles that are about 2 m wide. It accepts a vector list of global (X, Y) coordinates which acts as the trajectory of the Ego vehicle. The simulator renders frames at a rate of 50 fps and hence consumes one trajectory point every 20 ms. It also returns the pruned list of trajectory points that were not yet consumed.
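One useful consequence of the 20 ms consumption rate is that the spacing between consecutive trajectory points fully determines the car’s speed. A quick worked example (the variable names are illustrative, not from the project starter code):

```cpp
#include <cstdio>

int main() {
  // One trajectory point is consumed per 20 ms frame (50 fps), so point
  // spacing determines speed: spacing = v * dt.
  const double dt = 0.02;                  // seconds per simulator frame
  const double speed_limit_mph = 50.0;
  const double mph_to_ms = 0.44704;        // metres per second per mph
  double v = speed_limit_mph * mph_to_ms;  // ~22.35 m/s
  double spacing = v * dt;                 // ~0.45 m between consecutive points
  std::printf("spacing at 50 mph: %.3f m\n", spacing);
}
```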

Trajectory Generation

The trajectory generation part, which is the most difficult, is covered as part of the project walkthrough by Aaron Brown and David Silver. LINK. They recommend using the open-source C++ tk::spline library to fit a smooth (cubic) spline through a set of anchor points, which helps minimize jerk while accelerating, decelerating, and changing lanes. The summary of operations is below; a condensed code sketch follows the list.

  1. Take the last two points from the previous trajectory in global X, Y coordinates.
  2. Project three more points 30 m, 60 m, and 90 m ahead of the car’s current position, converting from Frenet (car_s, car_d) to global XY.
  3. Convert the points from global XY to the car’s local XY coordinates. This simplifies the math a lot.
  4. This gives 5 reference points which can be supplied to the tk::spline function to fit a smooth spline.
  5. With this spline, generate new points in local XY coordinates. The Y values on the spline can simply be read from the corresponding X values on the X-axis as shown.
  6. Append the points to the previous trajectory after converting back to global XY coordinates.
Source — Term 3 — Path Planning project walkthrough. Aaron’s diagram explaining the benefit of converting global XY to local XY coordinates. The y values can simply be read off the curve for the corresponding x values.
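Below is a condensed C++ sketch of those steps, modeled on the walkthrough code. It assumes `ptsx`/`ptsy` already hold the five anchor points in global XY, `ref_x`/`ref_y`/`ref_yaw` describe the car’s reference pose, `ref_vel` is the target speed in mph, and `prev_size` (at most 50) is the number of points left over from the previous path; treat it as an illustrative sketch rather than the exact project code.

```cpp
#include <cmath>
#include <vector>
#include "spline.h"  // Tino Kluge's open-source cubic spline header (tk::spline)

// Fit a spline through the anchor points and sample it into a trajectory.
std::vector<std::vector<double>> build_trajectory(
    std::vector<double> ptsx, std::vector<double> ptsy,
    double ref_x, double ref_y, double ref_yaw,
    double ref_vel, std::size_t prev_size) {
  // Step 3: shift/rotate anchors into the car's local frame so the spline
  // is a single-valued function y = f(x) along the car's heading.
  for (std::size_t i = 0; i < ptsx.size(); ++i) {
    double dx = ptsx[i] - ref_x, dy = ptsy[i] - ref_y;
    ptsx[i] = dx * std::cos(-ref_yaw) - dy * std::sin(-ref_yaw);
    ptsy[i] = dx * std::sin(-ref_yaw) + dy * std::cos(-ref_yaw);
  }

  tk::spline s;
  s.set_points(ptsx, ptsy);  // Step 4: fit the spline through the 5 anchors

  std::vector<double> next_x, next_y;
  // Step 5: choose x spacing so the car covers one point per 20 ms at ref_vel.
  double target_x = 30.0;
  double target_y = s(target_x);
  double target_dist = std::sqrt(target_x * target_x + target_y * target_y);
  double n = target_dist / (0.02 * ref_vel / 2.24);  // 2.24 ~ mph to m/s divisor
  double x_local = 0.0;
  for (std::size_t i = 0; i < 50 - prev_size; ++i) {
    x_local += target_x / n;
    double y_local = s(x_local);
    // Step 6: rotate/shift back to global XY before sending to the simulator.
    double x_g = x_local * std::cos(ref_yaw) - y_local * std::sin(ref_yaw) + ref_x;
    double y_g = x_local * std::sin(ref_yaw) + y_local * std::cos(ref_yaw) + ref_y;
    next_x.push_back(x_g);
    next_y.push_back(y_g);
  }
  return {next_x, next_y};
}
```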

Behavior Planning

The behavior planning module makes the “lane-changing” or “lane-keeping” decisions. This is a critical module which keeps the car moving around the highway without getting stuck behind slower-moving vehicles.

My solution is focused on the problem at hand and does not address all the complexities of real highway driving for an autonomous car.

There is no single right or wrong way to choose a lane; the steps taken here are:

  1. Identify possible lane change maneuvers
    a) Side lanes to center lane (Left to center, Right to center)
    b) Center lane to side lanes (Center to Left or Center to Right)
Two possible ways to change lanes: sides to center and center to sides

2. Classify surrounding vehicles as ahead/behind in their respective lanes.

The sensor fusion data can also be pruned to vehicles within 60 m to 120 m ahead/behind instead of the full range. Considering the full range can sometimes adversely affect the behavior of the Ego vehicle.
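Here is a minimal sketch of this classification step, assuming each sensor fusion entry provides the other car’s Frenet s and d plus its speed (the `Car` struct and the 60 m pruning range are illustrative assumptions, not the project’s exact types):

```cpp
#include <array>
#include <vector>

// Illustrative sensor fusion record: Frenet position plus speed.
struct Car { double s, d, speed; };

constexpr double kLaneWidth = 4.0;  // lanes are 4 m wide
constexpr double kRange = 60.0;     // prune to +/- 60 m around the Ego car

// Sort surrounding cars into ahead/behind lists per lane (0 = left lane).
void classify(const std::vector<Car>& others, double ego_s,
              std::array<std::vector<Car>, 3>& ahead,
              std::array<std::vector<Car>, 3>& behind) {
  for (const Car& c : others) {
    if (c.d < 0.0) continue;                   // oncoming side of the road
    int lane = static_cast<int>(c.d / kLaneWidth);
    if (lane > 2) continue;                    // off the right edge
    double gap = c.s - ego_s;
    if (gap >= 0.0 && gap < kRange)  ahead[lane].push_back(c);
    if (gap <  0.0 && gap > -kRange) behind[lane].push_back(c);
  }
}
```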

3. Prefer lanes with lower occupancy

Once vehicles are sorted into their respective lists, prefer the least-occupied lane over more crowded ones, as shown. This is just a preference, not a final decision. The Ego vehicle can still prefer the high-occupancy lane if the lower-occupancy lane is not available for a lane change, e.g. another car is too close to the Ego vehicle, making the lane change unsafe.

Computing average lane speed instead of lane occupancy was also attempted, but in a lane with many cars, not all cars move at the same speed. This affected the behavior of the Ego vehicle: several times it got stuck behind a slow-moving vehicle even when the average lane speed was higher, because of faster cars ahead of the slow one.

For the scenario shown, the Ego vehicle prefers a lane change to the right instead of the left, purely due to lane occupancy. But this is not the final decision; it’s just a preference, as sketched below.
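A small sketch of this occupancy-based preference, reusing per-lane occupancy counts (e.g. `ahead[lane].size()` from the classification sketch above); again an assumption-laden illustration, not the project’s exact logic:

```cpp
#include <array>
#include <cstddef>

// Pick the adjacent lane with fewer cars ahead. From the center lane (1)
// both side lanes are candidates; from a side lane the only candidate is
// the center lane. occupancy[lane] is the count of cars ahead in that lane.
int preferred_lane(int current_lane,
                   const std::array<std::size_t, 3>& occupancy) {
  if (current_lane == 1) {  // center: compare left (0) and right (2)
    return occupancy[0] <= occupancy[2] ? 0 : 2;
  }
  return 1;  // sides to center
}
```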

4. Safe passage into the target lane

Before initiating a lane change, it is important to check the distance to the cars around. In this implementation, a minimum safe gap of 20 m is enforced in both directions. For both scenarios, side lanes to center lane and center lane to side lanes, four cases have to be handled as shown.
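A minimal sketch of that safety check, assuming the per-lane ahead/behind lists from earlier; the 20 m gap comes from this write-up, everything else is illustrative:

```cpp
#include <vector>

struct Car { double s, d, speed; };  // same illustrative record as above

constexpr double kSafeGap = 20.0;    // minimum gap in metres, both directions

// A lane change is considered safe only if every car in the target lane
// is at least kSafeGap away from the Ego vehicle's s position.
bool lane_change_safe(const std::vector<Car>& ahead,
                      const std::vector<Car>& behind, double ego_s) {
  for (const Car& c : ahead)
    if (c.s - ego_s < kSafeGap) return false;
  for (const Car& c : behind)
    if (ego_s - c.s < kSafeGap) return false;
  return true;
}
```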

With this simple approach, the Ego vehicle was able to drive 5+ miles without any incident.

The Prediction stage of computing trajectories of surrounding vehicles was not implemented; since this is a highway scenario, I assumed the cars to be well behaved, keeping their lanes and maintaining their speed. There are cases, though, where they abruptly change lanes, occasionally leading to an incident. Implementing the prediction stage would further improve the drive around the highway.

I had a blast playing with the Term 3 simulator, and with enough time spent there could be even more elegant solutions, as done by other Udacians.

Thanks a ton Udacity and Mercedes-Benz team for introducing me to Path Planning. Hands down, this was the best chapter and project in the entire course. :)
