Line, Circle and Blob Detection

Riwaj Neupane
5 min read · Jan 9, 2024


Hough Line Transform:

  1. The Hough Line Transform is a transform used to detect straight lines.
  2. Before applying the Transform, it is desirable to run an edge-detection pre-processing step (for example, the Canny detector).

How does it work?

  1. As you know, a line in the image space can be expressed with two parameters. For example:
  2. In the Cartesian coordinate system: parameters (m, b), from the equation y = mx + b.
  3. In the Polar coordinate system: parameters (r, θ), from the equation r = x·cos θ + y·sin θ (see the voting sketch after this list).
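
To make the voting idea behind the Transform concrete, here is a small standalone sketch (not OpenCV code, just an illustration): every edge point votes for all the (r, θ) pairs it could belong to, and the accumulator cell that collects the most votes is reported as a detected line.

import numpy as np

# Toy edge points that all lie on the vertical line x = 2
points = [(2, 0), (2, 1), (2, 2), (2, 3)]

thetas = np.deg2rad(np.arange(0, 180))  # candidate angles at 1-degree resolution
votes = {}                              # accumulator: (r, theta index) -> vote count

for x, y in points:
    for i, theta in enumerate(thetas):
        # Every (r, theta) consistent with this point gets one vote
        r = int(round(x * np.cos(theta) + y * np.sin(theta)))
        votes[(r, i)] = votes.get((r, i), 0) + 1

# The peak of the accumulator corresponds to the shared line x = 2, i.e. r = 2, theta = 0
(best_r, best_i), count = max(votes.items(), key=lambda kv: kv[1])
print(best_r, np.rad2deg(thetas[best_i]), count)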

Standard and Probabilistic Hough Line Transform

OpenCV implements two kinds of Hough Line Transforms:

a. The Standard Hough Transform

  • It consists of pretty much what we just explained in the previous section. It gives you as a result a vector of couples (r, θ)
  • In OpenCV it is implemented with the function HoughLines()

b. The Probabilistic Hough Line Transform

  • A more efficient implementation of the Hough Line Transform. It gives as output the extremes of the detected lines (x0,y0,x1,y1)
  • In OpenCV it is implemented with the function HoughLinesP()

Code Implementation:

Standard Hough Line Transform:

First, you apply the Transform:

# Standard Hough Line Transform

lines = cv.HoughLines(dst, 1, np.pi / 180, 150, None, 0, 0)

with the following arguments:

  • dst: Output of the edge detector. It should be a grayscale image (although in fact it is a binary one); see the setup sketch after this list.
  • lines: A vector that will store the parameters (r, θ) of the detected lines
  • rho: The resolution of the parameter r in pixels. We use 1 pixel.
  • theta: The resolution of the parameter θ in radians. We use 1 degree (CV_PI/180).
  • threshold: The minimum number of intersections to "detect" a line
  • srn and stn: Default parameters, set to zero.
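
The snippet above assumes that the source image has already been loaded and that dst holds an edge map, with cdst (and cdstP, used by the probabilistic variant below) as color copies to draw on. A minimal setup sketch, using a placeholder filename sudoku.png:

import math
import cv2 as cv
import numpy as np

# Load the source image in grayscale (sudoku.png is a placeholder filename)
src = cv.imread('sudoku.png', cv.IMREAD_GRAYSCALE)

# Canny edge detection produces the binary edge map that HoughLines expects
dst = cv.Canny(src, 50, 200, None, 3)

# Color copies of the edge map so the detected lines can be drawn in red
cdst = cv.cvtColor(dst, cv.COLOR_GRAY2BGR)
cdstP = np.copy(cdst)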

And then you display the result by drawing the lines.

# Draw the lines
if lines is not None:
    for i in range(0, len(lines)):
        rho = lines[i][0][0]
        theta = lines[i][0][1]
        a = math.cos(theta)
        b = math.sin(theta)
        # (x0, y0) is the point on the line closest to the origin
        x0 = a * rho
        y0 = b * rho
        # Extend the line 1000 pixels in both directions along its direction vector
        pt1 = (int(x0 + 1000*(-b)), int(y0 + 1000*(a)))
        pt2 = (int(x0 - 1000*(-b)), int(y0 - 1000*(a)))
        cv.line(cdst, pt1, pt2, (0,0,255), 3, cv.LINE_AA)


Probabilistic Hough Line Transform:

# Probabilistic Line Transform
linesP = cv.HoughLinesP(dst, 1, np.pi / 180, 50, None, 50, 10)

with the following arguments:

  • dst: Output of the edge detector. It should be a grayscale image (although in fact it is a binary one)
  • lines: A vector that will store the parameters (xstart, ystart, xend, yend) of the detected lines
  • rho: The resolution of the parameter r in pixels. We use 1 pixel.
  • theta: The resolution of the parameter θ in radians. We use 1 degree (CV_PI/180).
  • threshold: The minimum number of intersections to "detect" a line
  • minLineLength: The minimum number of points that can form a line. Lines with fewer points than this are disregarded.
  • maxLineGap: The maximum gap between two points to be considered part of the same line.

And then you display the result by drawing the lines.

# Draw the lines
if linesP is not None:
    for i in range(0, len(linesP)):
        # Each detected segment is given directly as its two endpoints
        l = linesP[i][0]
        cv.line(cdstP, (l[0], l[1]), (l[2], l[3]), (0,0,255), 3, cv.LINE_AA)

Display the original image and the detected lines:

# Show results
cv.imshow("Source", src)
cv.imshow("Detected Lines (in red) - Standard Hough Line Transform", cdst)
cv.imshow("Detected Lines (in red) - Probabilistic Line Transform", cdstP)
# Keep the windows open until a key is pressed
cv.waitKey()

Using an input image such as a sudoku image, we get the following result with the Standard Hough Line Transform:

And by using the Probabilistic Hough Line Transform:

You may observe that the number of detected lines varies as you change the threshold. The explanation is fairly evident: if you set a higher threshold, fewer lines will be detected (since more intersections are needed to declare a line detected).
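
A quick way to see this effect, assuming dst is the edge map used above, is to rerun the transform with a few different (hypothetical) threshold values and count the detections:

# Count detections at several thresholds
for threshold in (100, 150, 200, 250):
    lines = cv.HoughLines(dst, 1, np.pi / 180, threshold)
    count = 0 if lines is None else len(lines)
    print(f"threshold={threshold}: {count} lines detected")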

Circle Detection:

cv2.HoughCircles(image, method, dp, minDist, param1, param2, minRadius, maxRadius)

  • method: The detection method; cv2.HOUGH_GRADIENT is the usual choice
  • dp: Inverse ratio of the accumulator resolution to the image resolution
  • minDist: The minimum distance between the centers of detected circles
  • param1: The higher threshold of the internal Canny edge detector
  • param2: The accumulator threshold for the HOUGH_GRADIENT method (a lower value allows more circles to be detected, including false positives)
  • minRadius: The minimum circle radius to detect
  • maxRadius: The maximum circle radius to detect

Code Implementation:

rows = gray.shape[0]
circles = cv.HoughCircles(gray, cv.HOUGH_GRADIENT, 1, rows / 8,
                          param1=100, param2=30,
                          minRadius=1, maxRadius=30)
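
The call above only finds the circles; to visualize them you still need to draw them. A minimal sketch, assuming the grayscale input gray came from a color image img (a placeholder name), for example via cv.cvtColor(img, cv.COLOR_BGR2GRAY) followed by cv.medianBlur(gray, 5) to reduce false detections:

# Draw the detected circles on the color image (img is a placeholder)
if circles is not None:
    circles = np.uint16(np.around(circles))
    for x, y, r in circles[0, :]:
        center = (int(x), int(y))
        cv.circle(img, center, int(r), (255, 0, 255), 3)  # circle outline
        cv.circle(img, center, 2, (0, 100, 100), 3)       # circle center
cv.imshow("Detected Circles", img)
cv.waitKey()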

Blob Detection:

Blob detection is an important technique in computer vision and image processing. A blob is a region of an image that differs in properties, such as brightness or color, from its surroundings; blob detection is the process of identifying and localizing these regions in an image.

Blobs can be useful in a variety of applications, such as object detection, tracking, and recognition. For example, in object recognition, a blob can represent a particular feature of an object, such as an edge or a corner.

Syntax

params = cv2.SimpleBlobDetector_Params()          # holds minThreshold, maxThreshold, thresholdStep, ...
detector = cv2.SimpleBlobDetector_create(params)
keypoints = detector.detect(src)

Parameters

  • src: The input image passed to detector.detect()
  • minThreshold: The lowest threshold at which the image is binarized
  • maxThreshold: The highest threshold at which the image is binarized
  • thresholdStep: The step between successive thresholds; the detector thresholds the image at every value from minThreshold up to maxThreshold in increments of thresholdStep (a full sketch follows this list).
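
A minimal end-to-end sketch, assuming a grayscale image loaded from a placeholder file blobs.png:

import cv2

# Load the image in grayscale (blobs.png is a placeholder filename)
src = cv2.imread('blobs.png', cv2.IMREAD_GRAYSCALE)

# Configure the thresholding stage of the detector
params = cv2.SimpleBlobDetector_Params()
params.minThreshold = 10
params.maxThreshold = 200
params.thresholdStep = 10

detector = cv2.SimpleBlobDetector_create(params)
keypoints = detector.detect(src)

# Draw each detected blob as a circle whose size reflects the blob's size
output = cv2.drawKeypoints(src, keypoints, None, (0, 0, 255),
                           cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imshow("Blobs", output)
cv2.waitKey()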

Template Matching:

Template Matching is a method for searching and finding the location of a template image in a larger image. OpenCV comes with a function cv.matchTemplate() for this purpose. It simply slides the template image over the input image (as in 2D convolution) and compares the template with the patch of the input image under it. Several comparison methods are implemented in OpenCV (you can check the docs for more details). It returns a grayscale image, where each pixel denotes how well the neighbourhood of that pixel matches the template.

If the input image is of size (W×H) and the template image is of size (w×h), the output image will have a size of (W-w+1, H-h+1). Once you have the result, you can use the cv.minMaxLoc() function to find where the maximum/minimum value is. Take that point as the top-left corner of a rectangle and take (w, h) as the rectangle's width and height. That rectangle is the region where the template was found.

Code Implementation:

import cv2
from matplotlib import pyplot as plt
# Read the main image
image = cv2.imread('/content/messiki.jpg')
# Display the original image
plt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
plt.title('Where is Waldo? - Original')
plt.show()
# Convert the image to grayscale
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Load the template image
template = cv2.imread('/content/messi.jpg',0)
# Resize the template image to a smaller size if needed
# You can adjust the dimensions based on your requirements
template = cv2.resize(template, (50, 50))
# Match the template with the main image
result = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(result)
# Create a bounding box around the detected region
top_left = max_loc
bottom_right = (top_left[0] + template.shape[1], top_left[1] + template.shape[0])
cv2.rectangle(image, top_left, bottom_right, (0, 0, 255), 2)
# Display the result with the bounding box
plt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
plt.title('Where is Waldo? - Detected')
plt.show()
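
One caveat: with cv2.TM_SQDIFF or cv2.TM_SQDIFF_NORMED the best match is the minimum of the result map, not the maximum, so the rectangle should start at min_loc instead of max_loc. A sketch of that variant, reusing the variables from the code above:

# With a squared-difference method, the *lowest* value marks the best match
result = cv2.matchTemplate(gray, template, cv2.TM_SQDIFF_NORMED)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(result)
top_left = min_loc
bottom_right = (top_left[0] + template.shape[1], top_left[1] + template.shape[0])
cv2.rectangle(image, top_left, bottom_right, (0, 255, 0), 2)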
