Harris Corner Detection and Shi-Tomasi Corner Detection

Hi! Welcome to pixel-wise. This post provides a good starting point for anyone wanting to begin with Computer Vision. We will understand the Harris & Shi-Tomasi corner detection algorithms & see how to implement them in Python 3 with OpenCV. But, before that, we need to know what image features are:

Features in images are points of interest that provide rich information about the image content. They basically consist of two things:

  1. Interest points : Points in the image whose detection is robust to rotation, translation, intensity, and scale changes (basically, robust and reliable). There are different kinds of interest points, such as corners, edges, and blobs.
  2. Feature Descriptors : These describe the image patch around an interest point as a vector. They can be as simple as raw pixel values or more complex, like the Histogram of Oriented Gradients (HOG).

So corner detection is basically detecting (one type of) interest points in an image.

Corner Detection : Corners are locations in an image where a slight shift of a window in any direction leads to a large change in intensity along both the horizontal (X) and vertical (Y) axes.

Harris Corner Detector

The Harris Corner Detector algorithm in simple words is as follows:

STEP 1. It determines which windows (small image patches) produce very large variations in intensity when moved in both the X and Y directions (i.e. have large gradients).
STEP 2. With each such window found, a score R is computed.
STEP 3. After applying a threshold to this score, important corners are selected & marked.

For those of you not interested in the mathematical overview, feel free to skip directly to the code section :)

Mathematical overview

STEP 1 : How do we determine which windows produce large variations?

Let a window (its center) be located at position (x, y), and let the intensity of the pixel at this location be I(x, y). If the window shifts slightly by a displacement (u, v), the intensity of the pixel at the new location will be I(x+u, y+v). Hence [I(x+u, y+v) − I(x, y)] is the change in intensity caused by the shift. For a corner, this difference will be large for shifts in every direction, so we look for windows that maximize the sum of these squared differences. Let w(x, y) be the weight of each pixel over the window (rectangular or Gaussian). Then E(u, v) is defined as:
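As a concrete (and deliberately naive) illustration, E(u, v) can be computed directly for a single window and a single shift. A minimal NumPy sketch, assuming uniform weights w(x, y) = 1 and a toy image of my own choosing:

```python
import numpy as np

def window_difference(img, x, y, u, v, half=1):
    """Naively compute E(u, v) for one window centred at (x, y),
    using uniform weights w(x, y) = 1 over a (2*half+1)^2 window."""
    e = 0.0
    for dy in range(-half, half + 1):
        for dx in range(-half, half + 1):
            # [I(x+u, y+v) - I(x, y)]^2, summed over the window
            diff = float(img[y + dy + v, x + dx + u]) - float(img[y + dy, x + dx])
            e += diff ** 2
    return e

# Toy image: a bright square on a dark background
img = np.zeros((10, 10), dtype=np.uint8)
img[3:8, 3:8] = 255

print(window_difference(img, 5, 5, 1, 1))  # flat interior -> prints 0.0
print(window_difference(img, 3, 3, 1, 1))  # corner of the square -> large value
```

Shifting a window sitting on the flat interior changes nothing, while shifting one sitting on the square's corner changes many pixel values at once, which is exactly the behaviour the formula measures.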

E(u, v) = Σ over (x, y) of w(x, y) · [I(x+u, y+v) − I(x, y)]²

i.e., the window weights multiplied by the squared intensity difference, summed over all pixels in the window [1]

Now, computing E(u, v) by the above formula for every shift would be really, really slow. Hence, we use a first-order Taylor series expansion, which gives E(u, v) ≈ [u v] M [u v]ᵀ, where M = Σ w(x, y) · [Ix² IxIy; IxIy Iy²] is a 2×2 matrix built from the image gradients Ix and Iy.

STEP 2: Now that we know how to find windows with large variations, how do we select the ones with suitable corners? It turns out that the eigenvalues λ1 and λ2 of the matrix M tell us this. Thus, we calculate a score R associated with each such window:

R = det(M) − k · (trace(M))² = λ1λ2 − k · (λ1 + λ2)², where k is an empirically chosen constant (typically 0.04–0.06).

Score for classifying a window as flat region / edge / corner. [1]

STEP 3: Depending on the value of R, the window is classified as a flat region, an edge, or a corner. (Finally 😛) A large positive value of R indicates a corner, a large negative value indicates an edge, and a small |R| indicates a flat region. Also, to keep only the strongest corners, we can apply non-maximum suppression.
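The three steps above can be sketched by hand with NumPy. This is a minimal, unoptimized version of my own, assuming central-difference gradients, a 3×3 box window for w(x, y), and the usual empirical k = 0.04:

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris score R for every pixel of a float grayscale image.
    STEP 1: gradients, STEP 2: structure matrix M and score R."""
    Iy, Ix = np.gradient(img.astype(np.float64))

    def box_sum(a):  # sum over a 3x3 window around each pixel
        p = np.pad(a, 1)
        h, w = a.shape
        return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

    # Entries of the structure matrix M, summed over the window
    Sxx, Syy, Sxy = box_sum(Ix * Ix), box_sum(Iy * Iy), box_sum(Ix * Iy)
    # R = det(M) - k * trace(M)^2
    return (Sxx * Syy - Sxy ** 2) - k * (Sxx + Syy) ** 2

# Toy image: bright square on a dark background
img = np.zeros((10, 10))
img[3:8, 3:8] = 1.0
R = harris_response(img)
print(R[3, 3] > 0)   # corner of the square -> large positive R
print(R[3, 5] < 0)   # edge of the square   -> negative R
print(R[5, 5] == 0)  # flat interior        -> R near zero
```

STEP 3 (thresholding R and non-maximum suppression) would then run on the returned score map.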

(Note: the Harris detector is not scale-invariant.) Now, let's dive into the code. ⬇️

Code Overview

Harris Corner Detection is implemented in OpenCV as cv2.cornerHarris. Let's see the code below:

Corner Detection using Harris Corner Detector

Shi-Tomasi Corner Detector

Shi-Tomasi is very similar to the Harris Corner Detector, apart from the way the score (R) is calculated, and this change often gives better results. Moreover, with this method we can ask for only the top N corners, which is useful when we don't want to detect each and every corner. (Trust me, the mathematics overview is very small here! 😃)

Mathematical Overview

In Shi-Tomasi, R is calculated in the following way:

R = min(λ1, λ2)

Shi-Tomasi R score [4]

If R is greater than a threshold, the window is classified as a corner.
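As a quick check of the formula, the eigenvalues of a 2×2 structure matrix can be computed directly. The matrix values below are a made-up example of a corner-like window with strong gradients in both directions:

```python
import numpy as np

# Hypothetical structure matrix M for a corner-like window
M = np.array([[10.0, 2.0],
              [2.0, 8.0]])

lam1, lam2 = np.linalg.eigvalsh(M)  # eigenvalues in ascending order
R = min(lam1, lam2)                 # Shi-Tomasi score
print(R)                            # both eigenvalues large -> R is large -> corner
```

When the window sits on an edge instead, one eigenvalue collapses toward zero and R drops with it, which is why taking the minimum works as a corner score.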

Heading over to the code. ⬇️

Code Overview

Corner Detection using Shi Tomasi Detector

You can check the full code here. The code can be used to detect corners using Harris and Shi-Tomasi detection methods in an image, a folder of images, or from a live webcam. You can also play with some other parameters to get different outputs.

Now to the fun part 💃! Let’s see some of this code in action. As we can see, Shi-Tomasi detects corners better than Harris Detector :

Top Left: Harris, Top Right: Shi-Tomasi, Bottom Left: Original

Conclusion

To conclude, Harris & Shi-Tomasi corner detection are really cool and simple algorithms for detecting corners using the basic concept of intensity gradients, and Shi-Tomasi is a slightly better version obtained by just changing the score formula. We detect corners for several applications: image alignment, image stitching (remember the panorama feature on your phone camera?), object recognition, 3D reconstruction, motion tracking, and so on.


That’s it for this post! I have tried to include all the necessary details without making it into a long boring essay 😛. All questions/suggestions are most welcome, please drop a comment below or get in touch with me.

LinkedIn Profile : Know more about me!
GitHub Profile : Check out my other projects!

Stay tuned for further posts. Thank you :)

References

[1] https://docs.opencv.org/3.0-alpha/doc/py_tutorials/py_feature2d/py_features_harris/py_features_harris.html

[2] http://www.aishack.in/tutorials/harris-corner-detector/

[3] https://en.wikipedia.org/wiki/Harris_Corner_Detector

[4] https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_feature2d/py_shi_tomasi/py_shi_tomasi.html