Calibration in Image Processing

Chinmay Bhalerao
6 min read · Mar 21, 2022


Photo by Vadim Bogulov for pixel representation

Many times in image processing and object detection problems, we have to measure the sizes of objects from images. Quality control departments, tumor-growth monitoring, measuring distances between craters on the Moon or on comets, classification based on size, and many other applications rely on measuring objects from images. Each image or extracted bounding box can have different specifications and metadata.

For example:

1) 250 mm in JPG format can differ from 250 mm in PNG format

2) 250 mm in JPG at 140 DPI can differ from 250 mm in JPG at 144 DPI

3) Different image resolutions affect measurements differently

4) Many more ….

Let’s first understand some basic terms related to images.

Resolution:

In layman’s terms, the higher the resolution, the more detail and depth an image holds. Resolution describes how much detail an image contains. Edges and features are blurred in low-resolution images, which is why they lose clarity.

PPI:

Screen resolution is measured in pixels per inch (PPI). A pixel is a tiny square of color. A monitor uses tiny pixels to assemble text and images on screen.

DPI vs PPI

DPI:

DPI stands for dots per inch: it describes how many printed dots fit into one inch. 300 DPI means that a printer will output 300 tiny dots of ink to fill every inch of the print. For example, a 1400 × 1400 px image printed at 140 DPI covers 10 × 10 inches of paper.

JPG:

JPG is an image format whose name comes from “Joint Photographic Experts Group”. Most sources treat the JPG and JPEG formats as the same thing.

PNG:

“Portable Network Graphics” (PNG) is a raster-graphics file format that supports lossless data compression.

There are many differences between the JPG and PNG formats, but the important one is the compression algorithm each uses. JPEG compression discards some image data during processing or conversion, while PNG does not; that is why PNG is known as a lossless format.
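As a quick, hypothetical way to see this difference yourself, you can round-trip the same image through both encoders with OpenCV and compare the pixels (the image path is a placeholder):

import cv2
import numpy as np

img = cv2.imread(r"Your image path")

# JPEG round trip (lossy): pixel values usually change slightly
ok, jpg_bytes = cv2.imencode(".jpg", img)
jpg_img = cv2.imdecode(jpg_bytes, cv2.IMREAD_COLOR)
print("JPG max pixel difference:", int(np.abs(img.astype(int) - jpg_img.astype(int)).max()))

# PNG round trip (lossless): pixels come back identical
ok, png_bytes = cv2.imencode(".png", img)
png_img = cv2.imdecode(png_bytes, cv2.IMREAD_COLOR)
print("PNG max pixel difference:", int(np.abs(img.astype(int) - png_img.astype(int)).max()))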

Calibration starts with checking the DPI of the image.

How to check image’s DPI:

1. Right click on your image

2. Go to properties

3. In Properties, go to the Details tab

4. The size in pixels and the DPI of the image are listed there

From Image properties

If you can’t see the DPI with the first method, there is a second option:

1. Open the Paint application from the Windows Start button

2. Open your image in Paint

3. Press CTRL + E

4. You will see the DPI of the image there.

From MS-Paint
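Both methods above are manual. If you would rather check the DPI programmatically, a minimal sketch using the Pillow library (assuming it is installed; the path is a placeholder) could look like this:

from PIL import Image

img = Image.open(r"Your image path")
# Many formats store the DPI in their metadata; some files may not have it
print("Size in pixels:", img.size)
print("DPI:", img.info.get("dpi", "not stored in this file"))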

After finding the DPI of your image, the next step is to convert the image to one specific DPI for uniformity. There are many online DPI changers for this conversion, but I preferred to write code for it so that it can easily be added to any application’s code pipeline.

DPI Conversion
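The original code embed is not reproduced here, but a minimal sketch of such a conversion, assuming the Pillow library and placeholder values for the path and target DPI, could look like this:

from PIL import Image

target_dpi = (144, 144)  # the DPI every image should share (assumed value)

img = Image.open(r"Your image path")
# Re-saving with a dpi tag updates the metadata; the pixel data itself is unchanged,
# so also resize the image if you need a specific pixel size as well
img.save("converted.png", dpi=target_dpi)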

After converting the image to the appropriate shape and specification, we can start the calibration part. Calibration of an image means matching the dimensions in the image to your real-world units.

For example, the size of a wall is written as 250 mm in a floor plan. But in the drawing itself it is not actually 250 mm long; it is drawn at a reduced scale and then tagged as 250 mm because of the limited size of the paper or drawing space.

So it is important to understand how many millimetres on paper correspond to the dimensions written in the drawing.

For example:

A line labeled 250 mm may correspond to an actual drawn length of only 21 mm if we measure it with a ruler.

Actual vs real dimension

Looking at the image above, we can see that there must be some factor that converts the represented unit to the actual unit. That factor is known as the calibration factor. It is constant throughout the drawing, because the drawing is built on that assumption; otherwise the whole drawing would be out of scale and we could not take any measurements from it.

OpenCV-based dimension-measurement methods typically use a reference object to find the calibration factor.

Reference object concept

In this technique, we know the dimensions of a reference object, as in the picture above. That known dimension gives us the scale, and with respect to that scale we can measure other objects. But it is very hard to place one particular object in every kind of photo: some scenes are hostile, and placing a reference object can be impossible. It also requires physical work, which we want to eliminate.
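As a rough sketch of the idea (the reference width and the measured pixel widths here are made-up numbers), the known size of the reference object gives a pixels-per-unit ratio that converts every other pixel measurement:

# Hypothetical pixels-per-metric calculation using a reference object
reference_real_width_mm = 20.0   # known real width of the reference object (assumed)
reference_pixel_width = 85.0     # its measured width in the image, in pixels (assumed)

pixels_per_mm = reference_pixel_width / reference_real_width_mm

# Any other pixel length in the same image can now be converted to millimetres
object_pixel_width = 212.0
object_real_width_mm = object_pixel_width / pixels_per_mm
print(round(object_real_width_mm, 2), "mm")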

To get around this problem, we can do one interesting thing: what if we use OpenCV’s built-in mouse-click events for calibration?

Before going into the actual code, let’s build its basic structure.

First, we take an image as input from the user and convert it to the required DPI or specification. Then a window containing the user’s image pops up and asks the user to mark a portion of the image whose dimension is already known. We fetch the coordinates of the first mouse click [X1, Y1] and then of the second click [X2, Y2]. Now we know the pixel coordinates, from which we can find the distance. The marked line may not be perfectly horizontal or vertical, so in such cases the Euclidean distance is the right measure.

Euclidean distance calculation

Using the Euclidean distance formula, d = √((x2 − x1)² + (y2 − y1)²), we can find the exact length in pixels of the line drawn by the user. Next we ask the user to enter the unit and the real dimension of the previously marked line, which the user already knows.

After entering these, we have a simple mapping that gives us the calibration factor: the known dimension divided by the pixel distance. For example, if the marked line is 250 mm long and spans 500 pixels, the calibration factor is 0.5 mm per pixel. We can then multiply extracted pixel dimensions by this factor to get real dimensions.

If we turn the above algorithm into code, it looks like this:

import cv2
from scipy.spatial.distance import euclidean

points = []

def select_point(event, x, y, flags, param):
    # Record the starting (x, y) coordinates on left mouse button press
    if event == cv2.EVENT_LBUTTONDOWN:
        print(x, y)
        points.append((x, y))

    # Record the ending (x, y) coordinates on left mouse button release
    elif event == cv2.EVENT_LBUTTONUP:
        points.append((x, y))
        print('Starting: {}, Ending: {}'.format(points[0], points[1]))

        # Draw the marked line on the image
        cv2.line(img, points[0], points[1], (36, 255, 12), 2)
        cv2.imshow("image", img)

img = cv2.imread(r"Your image path")
img = cv2.resize(img, (1960, 1000))
cv2.namedWindow('image')
# Bind select_point to the window so that it captures the mouse clicks
cv2.setMouseCallback('image', select_point)
cv2.imshow('image', img)
cv2.waitKey(0)
cv2.destroyAllWindows()

# Pixel length of the marked line
px = int(euclidean(points[0], points[1]))
print("Number of pixels:", px)

Actual_unit = input("Enter your unit (m, mm, cm, inch): ")
Actual_dimensions = float(input("Enter dimensions: "))

# Real-world length represented by a single pixel
detected_px_len = Actual_dimensions / px
print(detected_px_len, Actual_unit + "/px")

calibration_factor = round(detected_px_len, 2)
print("The calibration factor is:", calibration_factor)

If you run this code on your system, a pop-up window will appear and ask you to mark a simple line on the image.

pop-up window

After you mark the line, the mouse coordinates are recorded in the background and the number of pixels is printed. The code then asks you for your unit,

Number of pixels

After entering the unit, it asks for the marked line’s real dimension and then calculates the calibration factor.

The final calibration factor

Now, by any method, such as segmentation or bounding boxes from object detection, you can directly extract a size in pixels and convert it into mm: multiply it by the calibration factor and you get exactly the dimension you want (for areas, multiply by the square of the factor). All of this works without any reference object or other extra arrangement.

Area calculation
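As an illustrative sketch of that last step (the calibration factor value, the mask path, and the idea of reading contours from a binary mask are assumptions, not the exact code behind the image above), the factor converts pixel lengths, and its square converts pixel areas:

import cv2

calibration_factor = 0.5  # mm per pixel, as computed above (assumed value)

# Assume a binary mask from segmentation or detection; the path is a placeholder
mask = cv2.imread(r"Your mask path", cv2.IMREAD_GRAYSCALE)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    area_px = cv2.contourArea(c)
    # Lengths scale by the factor; areas scale by the factor squared
    print("width :", w * calibration_factor, "mm")
    print("height:", h * calibration_factor, "mm")
    print("area  :", area_px * calibration_factor ** 2, "mm^2")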

THANK YOU !
