Real-time head-to-head racing using vehicle tracking

Let’s race!

As part of the F1/10 racing course to be offered in the Fall 2019 semester, we have developed a vehicle-tracking vision pipeline that lets us race head-to-head much more crisply than before. Using simple concepts from visual geometry and open-source vision tools, we can now keep track of the vehicle in front and make better maneuvers.

ROS: Robot Operating System
For our vehicle-tracking pipeline we make use of the open-source Robot Operating System (ROS), which provides the ability to communicate between, and perform computation on, different agents or entities, popularly known in ROS as nodes.

We make use of the AprilTag visual fiducial repository to run our algorithms for tracking the vehicle.

The repository uses some well-known algorithms from single-view geometry to estimate the position and orientation of the AprilTag it sees. And since the AprilTag is attached to the vehicle, voilà! We have the position and orientation of the vehicle too. But that alone is not enough; we need more (see the theory below).

Nodelets are a mechanism introduced in ROS to provide faster data transfer between nodes.
In essence: a nodelet manager is a node which runs other nodes (nodelets) inside itself.
When nodes run inside a nodelet manager, they do not transfer serialized data the way ordinary message passing over topics does. They only transfer message pointers, each just a few bytes holding the address of a memory location, so the overhead of data transfer is considerably reduced.
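
As a toy illustration (plain Python, not ROS code), compare serializing a camera-sized array, which is roughly what topic transport must do, with simply handing over a reference:

```python
import pickle
import time

import numpy as np

# Toy stand-in for one camera frame (480x640 RGB).
img = np.zeros((480, 640, 3), dtype=np.uint8)

t0 = time.perf_counter()
blob = pickle.dumps(img)   # analogous to serializing the message onto a topic
t1 = time.perf_counter()
ref = img                  # analogous to a nodelet-style pointer hand-off
t2 = time.perf_counter()

print(f"serialize: {len(blob)} bytes in {t1 - t0:.6f} s")
print(f"reference: pointer-sized hand-off in {t2 - t1:.9f} s")
```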

Single View Geometry

The fundamental idea is:

If you know certain points in world coordinates and the corresponding points in the image frame of a single image, you can work out the pose (orientation and position) of the world frame with respect to the camera from those correspondences.

Or, even better, we can find the pose of the camera with respect to the world frame (this is what we are really interested in).

The question then is: how do we obtain these correspondences? How do we know the world coordinate points?

We make use of the fact that we are free to place the world coordinate frame anywhere in the world. The other helpful property of AprilTags is that, based on a tag's ID, the relative distances between the corners of the tag are known.

We simply place the world coordinate frame at a corner of the AprilTag itself, which makes Z_w = 0 for every point on the tag.
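
With that choice, Z_w = 0 drops the third column of the rotation, and the projection of a tag corner takes the standard planar form, up to scale:

```latex
\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
\simeq
\underbrace{\begin{bmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}}_{K \text{ (intrinsics)}}
\underbrace{\begin{bmatrix} r_{11} & r_{12} & t_x \\ r_{21} & r_{22} & t_y \\ r_{31} & r_{32} & t_z \end{bmatrix}}_{[\, r_1 \; r_2 \; t \,]}
\begin{bmatrix} X_w \\ Y_w \\ 1 \end{bmatrix},
\qquad
H = K \, [\, r_1 \; r_2 \; t \,]
```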

The intrinsic parameter matrix (the first matrix after the equivalence tilde in the above formulation) and the rotation-plus-translation matrix (the second matrix after the tilde) together define the projective transformation and form the homography matrix.

In this projective transformation, each point correspondence gives three equations but introduces one unknown scale, so it contributes two independent linear constraints. The homography itself has 9 parameters, one of which is lost to the overall scale ambiguity, leaving 8 degrees of freedom.

After a few algebraic manipulations we arrive at the following formulation.
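
In standard DLT (Direct Linear Transform) form, each correspondence (X, Y) ↔ (u, v) contributes two rows of a homogeneous linear system in the stacked homography entries:

```latex
\begin{bmatrix}
X & Y & 1 & 0 & 0 & 0 & -uX & -uY & -u \\
0 & 0 & 0 & X & Y & 1 & -vX & -vY & -v
\end{bmatrix}
\begin{bmatrix} h_1 \\ h_2 \\ \vdots \\ h_9 \end{bmatrix} = 0
```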

We need four point correspondences: four points times two constraints each gives the 8 equations needed to pin down the homography's 8 degrees of freedom (9 parameters minus the scale ambiguity).

Thus, given corresponding points in the world and the image plane, we can find the homography. From the homography we then need to recover the rotation and translation between the two frames.
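
A minimal numpy sketch of this estimation step, solving the DLT system above via SVD (the names and structure here are ours, not the repository's):

```python
import numpy as np

def estimate_homography(world_pts, image_pts):
    """Direct Linear Transform: two linear constraints per
    correspondence; h is the null vector of the stacked system."""
    rows = []
    for (X, Y), (u, v) in zip(world_pts, image_pts):
        rows.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        rows.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    # The right singular vector with the smallest singular value
    # solves A h = 0 in the least-squares sense.
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the overall scale ambiguity
```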

Theoretically this should work, but in practice the point correspondences are noisy, and the matrix we recover is not exactly a valid rotation.

Thus, we use Singular Value Decomposition (SVD) to solve a least-squares (LMS) optimization problem. This essentially enforces the constraint that the rotation matrix lies in the SO(3) Lie group.
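
A sketch of this recovery under the same assumptions, taking the DLT homography and the intrinsics K from above (a simplified stand-in for what the repository actually does):

```python
import numpy as np

def pose_from_homography(H, K):
    """Recover the tag pose from H = K [r1 r2 t] (tag plane Z_w = 0)."""
    A = np.linalg.inv(K) @ H             # A ~ [r1 r2 t] up to scale
    s = 1.0 / np.linalg.norm(A[:, 0])    # rotation columns have unit norm
    if s * A[2, 2] < 0:                  # keep the tag in front of the camera
        s = -s
    r1, r2, t = s * A[:, 0], s * A[:, 1], s * A[:, 2]
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    # Noise means R is not exactly orthonormal: project onto SO(3),
    # the nearest rotation in the least-squares sense, via SVD.
    U, _, Vt = np.linalg.svd(R)
    R = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
    return R, t
```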

Advantages of AprilTags

The good things about AprilTags are:

  1. Tags have unique IDs, which helps in disambiguation.
  2. The dimensions of the blocks are known (we have the relative coordinates of the corners in the world frame).
  3. The world coordinate frame can be placed at one of the corners, so all tag points have Z_w = 0 (this takes care of the unknown depth).
  4. Each tag offers a bunch of corner points to track, so we have enough information to remove outliers.

The AprilTag repository does everything mentioned above, and a few more things:

  1. Detecting the AprilTag. The very first task of the repository is to find the AprilTag itself in the image frame.

It uses the union-find algorithm, along with a number of refinement steps, for robustness and speed.

See https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7759617

  2. Corner (feature) detection

a. The first task here is to find the corners of the AprilTag to be used as correspondences.

b. One of the most widely used and dependable corner detection algorithms is Harris corner detection.

  3. Homography and pose estimation

a. Once we have the world corners and the corresponding image points, we perform pose estimation by the methods described above.

  4. Outlier rejection

a. There may be noise in the measurements, so methods such as the Hough transform or RANSAC can be used for outlier rejection (see the sketch after this list).
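
Here is a minimal OpenCV sketch of steps 2 and 4, using a synthetic frame in place of real camera data (the thresholds and point values are illustrative, not the repository's):

```python
import cv2
import numpy as np

# Synthetic stand-in for a camera frame: a bright square plays the tag.
gray = np.zeros((480, 640), dtype=np.float32)
cv2.rectangle(gray, (200, 150), (440, 330), 255, -1)

# Step 2: Harris corner response (neighborhood 2, Sobel aperture 3, k = 0.04).
response = cv2.cornerHarris(gray, 2, 3, 0.04)
corners = np.argwhere(response > 0.01 * response.max())  # strong peaks only

# Step 4: given matched world/image points, RANSAC fits the homography
# while rejecting outlier correspondences (3.0 px reprojection threshold).
world_pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=np.float32)
image_pts = np.array([[200, 150], [440, 150], [440, 330], [200, 330]],
                     dtype=np.float32)
H, inlier_mask = cv2.findHomography(world_pts, image_pts, cv2.RANSAC, 3.0)
print(H)
```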

The Weighted Prediction Algorithm

Once you have the current pose of the vehicle, you want to estimate its poses over a chosen time horizon into the future.

This ensures that you can overtake safely: since the obstacle vehicle is moving, your commanded velocities and steering angles (which now also depend on the future poses of the obstacle) must compensate for the lag in the system itself.

Assumption:

The obstacle vehicle will try to follow the centerline path. This path is defined by the pure-pursuit trajectory.

This mirrors the fact that cars in the real world tend to follow the centerline between lane markings.
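
For reference, the standard pure-pursuit steering law looks like this in a generic sketch (parameter names are ours):

```python
import numpy as np

def pure_pursuit_steering(pose, goal, wheelbase, lookahead):
    """Classic pure-pursuit law: steer along the circular arc that
    passes through a lookahead point on the centerline.
    pose: (x, y, theta) of the car; goal: the lookahead waypoint."""
    x, y, theta = pose
    alpha = np.arctan2(goal[1] - y, goal[0] - x) - theta  # bearing to goal
    curvature = 2.0 * np.sin(alpha) / lookahead           # arc curvature
    return np.arctan(wheelbase * curvature)               # bicycle-model steer
```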

The equation above is used to predict the motion of the obstacle vehicle over a fixed time interval.

In the image above, the red markers are the predicted and tracked paths for the vehicle in front, and the yellow markers are the pure-pursuit waypoints used in the weighted algorithm.

We need the path prediction to be especially robust near the turns, shown here by the green markers, as compared to the pure-pursuit path in yellow.
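
One plausible shape for the weighted prediction, assuming a constant-velocity rollout blended toward the pure-pursuit centerline waypoints; the exact model and weights used on the car may differ:

```python
import numpy as np

def predict_obstacle(pose, speed, dt, steps, waypoints, w=0.7):
    """Hedged sketch of a weighted prediction: roll the tracked pose
    forward with a constant-velocity model, then pull each predicted
    point toward the nearest centerline (pure-pursuit) waypoint.
    pose: (x, y, theta); waypoints: Nx2 array of centerline points."""
    x, y, theta = pose
    predictions = []
    for _ in range(steps):
        # Constant-velocity, constant-heading rollout for one step.
        x += speed * np.cos(theta) * dt
        y += speed * np.sin(theta) * dt
        # Nearest centerline waypoint acts as the attractor.
        nearest = waypoints[np.argmin(
            np.linalg.norm(waypoints - np.array([x, y]), axis=1))]
        # Weighted blend of the dynamic rollout (w) and the centerline (1 - w).
        x = w * x + (1 - w) * nearest[0]
        y = w * y + (1 - w) * nearest[1]
        predictions.append((x, y))
    return predictions
```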

The experiments were carried out around our lab loop, which we fondly call the Levine doughnut.

Stay tuned for more updates!
