Camera Calibration With a Checkerboard

Chaitali Bhattacharyya
4 min read · Jan 26, 2023


Camera calibration is the most important step when you want to detect an AR marker (or any object) and calculate its distance from the camera. In this blog, I am going to share my project along with the study material that was important for it.

Keywords: Focal length, Camera matrix, Optical centre, Rotational vector, Translation vector, Distortion, Pinhole camera, Fisheye camera.

Camera Calibration

Camera Parameters

Why do we need camera calibration? Well! If you look at Figure-1 [please don't judge my drawing skills], the camera captures 3-dimensional data and transforms it into 2-dimensional data. Because of this transformation, it is obviously hard to recover real-world distances from the image alone. And this is one of the reasons why we need camera calibration. The main aim is to get the camera's geometrical parameters.

Figure-1

Let’s see what parameters we need for camera calibration. There are basically two types: intrinsic parameters and extrinsic parameters.

In this case,

Intrinsic parameters: focal length [fx, fy] and optical center [cx, cy]. Extrinsic parameters: rotation [rx, ry, rz] and translation vectors [tx, ty, tz]. The extrinsic parameters are there to help you understand the transformation from the 3D real-world object to the 2D image.

In the following equation, x is the homogeneous form of the 2D image points, P is the camera matrix (containing the intrinsic parameters), and X is the 3D homogeneous point:

x = P X
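For reference, this is the standard pinhole factorization of P (my own expansion, not a figure from the original post): P combines the intrinsic matrix K, built from the focal length and optical center listed above, with the extrinsic rotation R and translation t.

P = K\,[R \mid t], \qquad
K = \begin{pmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix}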

So, that was a short discussion of the primary parameters that matter here.

Distortion

But there is another parameter we have to take into consideration… and that is distortion. Distortion can be present in different forms. In Figure-2 you can see two of them: barrel distortion and pincushion distortion. Other kinds of distortion can also appear in your pictures depending on which lens you are using, and you can remove the distortion based on that lens type.

Figure-2

The two popular models are the pinhole camera model and the fisheye camera model. If you are using a camera with more than a 135-degree field of view (FOV), you have to use the fisheye model. In this small project, I have used the pinhole model, as my camera has a 120-degree FOV. In OpenCV, cv2.calibrateCamera returns the distortion coefficients: k1 through k6 represent radial distortion (k4–k6 only appear when the rational model is enabled), while p1 and p2 represent tangential distortion.
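For completeness, here is the standard OpenCV distortion model that those coefficients come from (my own sketch of the documented formulas, not from the original post). Here (x, y) are normalized image coordinates and r² = x² + y²; the k4–k6 denominator is only active with the rational model flag.

x_{dist} = x\,\frac{1 + k_1 r^2 + k_2 r^4 + k_3 r^6}{1 + k_4 r^2 + k_5 r^4 + k_6 r^6} + 2 p_1 x y + p_2\,(r^2 + 2x^2)

y_{dist} = y\,\frac{1 + k_1 r^2 + k_2 r^4 + k_3 r^6}{1 + k_4 r^2 + k_5 r^4 + k_6 r^6} + p_1\,(r^2 + 2y^2) + 2 p_2 x y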

Let’s get into the method of camera calibration….

I have used an (8x6) checkerboard; you can download yours from this site. Capture 17–25 pictures (from different angles) with the camera you are going to use for detection. If your project needs high accuracy, please capture more, with various angles.

import cv2
import numpy as np

# number of inner corners on the checkerboard (horizontal, vertical)
Ch_Dim = (8, 6)
Sq_size = 24  # square edge length in millimeters
# termination criteria for the sub-pixel corner refinement below
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

After setting the values of my checkerboard, I add the 3D object points and prepare the lists for the 2D image points, as mentioned in the discussion above.

obj_3D = np.zeros((Ch_Dim[0] * Ch_Dim[1], 3), np.float32)
index = 0
for i in range(Ch_Dim[0]):
    for j in range(Ch_Dim[1]):
        obj_3D[index][0] = i * Sq_size
        obj_3D[index][1] = j * Sq_size
        index += 1
# print(obj_3D)

obj_points_3D = []  # 3D points in real-world space
img_points_2D = []  # 2D points in the image plane
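As a side note (my own sketch, not from the original post), the same obj_3D grid can be built without the explicit loop, using NumPy indexing; this produces exactly the same array and ordering as the loop above.

# vectorized construction of the same object-point grid
grid = np.mgrid[0:Ch_Dim[0], 0:Ch_Dim[1]]         # two (8, 6) index arrays
obj_3D[:, :2] = grid.reshape(2, -1).T * Sq_size   # rows ordered i-outer, j-inner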

After this, I loop over all of my images from the image folder so that every file can be read, and I draw the detected corners using drawChessboardCorners. Before the line

image = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)

check whether your images are in RGB or BGR and change the conversion accordingly.

import glob

# read every calibration image from the folder
image_files = glob.glob("location of folder/*.jpg")

for image in image_files:
    # print(image)
    img = cv2.imread(image)
    image = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)

    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    ret, corners = cv2.findChessboardCorners(image, Ch_Dim, None)
    if ret == True:
        obj_points_3D.append(obj_3D)
        # refine the detected corners to sub-pixel accuracy
        corners2 = cv2.cornerSubPix(gray, corners, (3, 3), (-1, -1), criteria)
        img_points_2D.append(corners2)

        img = cv2.drawChessboardCorners(image, Ch_Dim, corners2, ret)
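As an optional sanity check (my addition, assuming you are running locally with a display attached), you can preview each annotated image; the first two lines belong inside the if ret == True: block above.

        # preview the detected corners for a quick visual sanity check
        cv2.imshow("Detected corners", img)
        cv2.waitKey(500)  # show each image for 500 ms

# close the preview window after the loop finishes
cv2.destroyAllWindows()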

Now I have used cv2.calibrateCamera to get the distortion coefficients and the intrinsic and extrinsic matrices. As you can see in the code snippet, R_vecs and T_vecs are the rotation and translation vectors respectively, and dist_coeff holds the distortion coefficients, which contain five values by default (k1, k2, p1, p2, k3) or more depending on your camera, lens, and the calibration flags you use.

ret, mtx, dist_coeff, R_vecs, T_vecs = cv2.calibrateCamera(
    obj_points_3D, img_points_2D, gray.shape[::-1], None, None
)
print("calibrated")

calib_data_path = "calib_data"  # output folder; adjust to your own path and make sure it exists
np.savez(
    f"{calib_data_path}/CalibrationMatrix_college_cpt",
    Camera_matrix=mtx,
    distCoeff=dist_coeff,
    RotationalV=R_vecs,
    TranslationV=T_vecs,
)
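To show how the saved calibration can actually be used, here is a minimal sketch (my addition; the .npz name follows the save call above, and sample.jpg is a placeholder) that reloads the file and undistorts a test image.

# reload the calibration data saved above (np.savez appends .npz)
data = np.load(f"{calib_data_path}/CalibrationMatrix_college_cpt.npz")
mtx, dist_coeff = data["Camera_matrix"], data["distCoeff"]

# undistort a test image taken with the same camera
img = cv2.imread("sample.jpg")  # placeholder: any image from your camera
h, w = img.shape[:2]
new_mtx, roi = cv2.getOptimalNewCameraMatrix(mtx, dist_coeff, (w, h), 1, (w, h))
undistorted = cv2.undistort(img, mtx, dist_coeff, None, new_mtx)
cv2.imwrite("undistorted.jpg", undistorted)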

REFERENCES

  1. These three videos by Prof. Shree Nayar, Columbia University, helped me a lot; they are a good resource for gaining deeper knowledge of the extrinsic and intrinsic parameters.
  2. https://docs.opencv.org/4.x/da/d13/tutorial_aruco_calibration.html
  3. http://www.cs.cmu.edu/~16385/s17/Slides/11.1_Camera_matrix.pdf
  4. https://learnopencv.com/understanding-lens-distortion
  5. https://in.mathworks.com/help/vision/ug/camera-calibration.html
