Visual Testing With OpenCV
What is the importance of user interface testing?
Usually, after each release, testers have to execute the regression suite. This takes far longer when it is done purely manually, which is why we maintain automated regression suites. On top of that, the user interface has to be checked after every release, and doing that by hand is a real headache: testers have to verify every button, text, and heading. As a result, many UI issues slip past testers, which becomes a big problem at the end of the day.
Are there any tools or technologies to automate UI testing and identify differences?
The answer is yes. The most popular tool is Applitools. It is a very smart tool that lets testers easily see which places changed after a release. Applitools identifies and manages differences through its management platform, where you can also mark baseline variations. It can be integrated into an existing framework and supports Java, C#, JavaScript, Appium, web, and more.
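For context, a check with the commercial tool typically looks something like this using the Applitools Eyes Python SDK (a minimal sketch; the app name, test name, and URL are placeholders, and the exact API can vary between SDK versions):

from selenium import webdriver
from applitools.selenium import Eyes, Target

driver = webdriver.Chrome()
eyes = Eyes()
eyes.api_key = "YOUR_API_KEY"  # placeholder: comes from your Applitools account
try:
    # start a visual test session (app/test names are placeholders)
    driver = eyes.open(driver, "Demo App", "Login page visual check")
    driver.get("https://example.com/login")  # placeholder URL
    # capture the window and compare it against the stored baseline
    eyes.check("Login page", Target.window())
    eyes.close()  # raises if there are unresolved visual differences
finally:
    eyes.abort()  # ends the session cleanly if the test did not complete
    driver.quit()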
Why consider open-source solutions for UI testing?
Applitools is a commercial tool, although a trial version is available with limitations. If we can invest in Applitools at the testing level, there is no issue; if not, we have to find another solution that meets the requirements above. One such solution is OpenCV with Python. OpenCV is an open-source computer-vision library that can be used to process and compare images and to detect objects.
Our Solution
There are two images (the actual one is typically captured at test time; see the sketch after this list):
- Expected Image
- Actual Image
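If your framework drives a browser with Selenium, the actual image can be captured at test time like this (a sketch; the browser choice, URL, and file name are assumptions), while the expected image is simply the baseline screenshot approved in a previous release:

from selenium import webdriver

# assumption: Chrome and the URL/file name are placeholders for your setup
driver = webdriver.Chrome()
driver.get("https://example.com")
driver.save_screenshot("actual.png")  # this becomes the "Actual Image"
driver.quit()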
Python code
Basically, we use a Python script with OpenCV to check the differences between the two images and then call it from the Java framework (or whatever framework you use). The major tasks are handled by the Python script.
# import the necessary packages
from skimage.metrics import structural_similarity
import argparse
import imutils
import cv2
# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-f", "--first", required=True,help="first input image")
ap.add_argument("-s", "--second", required=True,help="second")
args = vars(ap.parse_args())
# load the two input images
imageA = cv2.imread(args["first"])
imageB = cv2.imread(args["second"])
# convert the images to grayscale
grayA = cv2.cvtColor(imageA, cv2.COLOR_BGR2GRAY)
grayB = cv2.cvtColor(imageB, cv2.COLOR_BGR2GRAY)
# compute the Structural Similarity Index (SSIM) between the two
# grayscale images, returning the score and the full difference image
(score, diff) = structural_similarity(grayA, grayB, full=True)
# the diff image is returned in the range [0, 1], so scale it to 8-bit
diff = (diff * 255).astype("uint8")
print("SSIM: {}".format(score))
# threshold the diff image to isolate the regions that changed
thresh = cv2.threshold(diff, 0, 255,
    cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
# find the contours of those regions
cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
    cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
# loop over the contours and draw a red box around each region
# where the two images differ
for c in cnts:
    (x, y, w, h) = cv2.boundingRect(c)
    cv2.rectangle(imageA, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.rectangle(imageB, (x, y), (x + w, y + h), (0, 0, 255), 2)
# show the output images
cv2.imshow("Original", imageA)
cv2.imshow("Modified", imageB)
cv2.imshow("Diff", diff)
cv2.waitKey(0)
As arguments, we pass the expected image path and the actual image path. OpenCV converts both images to grayscale, and the script then compares them pixel by pixel: structural_similarity produces an SSIM score (1.0 means the images are identical) along with a difference image, which is thresholded to locate the regions that changed.
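If you want a hard pass/fail decision rather than a raw score, you could add a check like the following after the SSIM computation (the 0.98 threshold is an assumption; tune it for your application):

# assumption: 0.98 is an illustrative threshold, tune it per application
SSIM_PASS_THRESHOLD = 0.98

if score >= SSIM_PASS_THRESHOLD:
    print("PASS: no significant visual differences")
else:
    print("FAIL: visual differences detected in {} regions".format(len(cnts)))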
Then navigate to the file location and run the command below with the two image paths as arguments.
python image_diff.py --first original.png --second mod.png
After that, you will see the three windows below and can identify exactly what changed in the release. This is a simple concept, but you can improve it however you wish.
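For example, to call the script from a Java framework or a headless CI pipeline, one option (a sketch, with assumed output file names and the same assumed 0.98 threshold) is to save the annotated images to disk instead of opening windows and report the result through the exit code:

import sys

# save artifacts instead of opening windows (works on headless CI agents)
cv2.imwrite("expected_annotated.png", imageA)
cv2.imwrite("actual_annotated.png", imageB)
cv2.imwrite("diff.png", diff)

# a non-zero exit code lets the calling framework fail the visual check
sys.exit(0 if score >= 0.98 else 1)

The Java side can then launch the script with ProcessBuilder (or your runner's equivalent) and treat a non-zero exit code as a failed visual test.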