Applying Edge Detection To Feature Extraction And Pixel Integrity

Vincent T.
High-Definition Pro
5 min read · Jan 7, 2019

--

In digital image processing, edge detection is a computer vision technique used to find the boundaries of objects within a photograph. It relies on algorithms that search for discontinuities in pixel brightness, typically after the image has been converted to grayscale. This allows software to detect features, objects and even landmarks in a photograph using segmentation and extraction techniques. The points in an image where the brightness changes sharply are grouped into sets of curved line segments called edges. (A minimal code sketch follows the list below.)

There are four things to look for in edge detection:

  • Discontinuities in depth
  • Discontinuities in surface orientation
  • Changes in material properties
  • Variations in scene illumination
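To make the idea of brightness discontinuities concrete, here is a minimal sketch using NumPy and Pillow. This is my own illustration, not any particular production algorithm; the file name photo.jpg and the threshold are placeholder choices.

```python
import numpy as np
from PIL import Image

# Load a photo and convert it to grayscale ("L" mode),
# since edge detection works on pixel brightness.
img = Image.open("photo.jpg").convert("L")
pixels = np.asarray(img, dtype=float)

# Approximate the brightness gradient along each axis.
gy, gx = np.gradient(pixels)
magnitude = np.hypot(gx, gy)

# Pixels where brightness changes sharply (large gradient)
# are edge candidates; this threshold is arbitrary.
edges = (magnitude > 4 * magnitude.mean()).astype(np.uint8) * 255
Image.fromarray(edges).save("edges.png")
```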

The edges allow us to see the boundaries of the objects in an image. This is needed in software that must identify or detect, say, people's faces. Detecting other objects, like cars, street signs, traffic lights and crosswalks, is what self-driving cars rely on. AI software like Google's Cloud Vision uses these techniques for image content analysis, providing ways to extract features in an image such as faces, logos and landmarks, and even to perform optical character recognition (OCR). These are used for image recognition, which I will explain with examples.

I will present three examples that use edge detection, beginning with feature extraction, then landmark detection and finally pixel integrity.

Feature Extraction

The first example I will discuss uses feature extraction to identify objects.

So what is a car?

We know from empirical evidence and experience that it is a machine we use for transportation, and that someone has to drive it. Our vision easily identifies it as an object with wheels, a windshield, headlights, bumpers, etc. The question is rather trivial to put to another person.

Machines do not know what a car is. We have to teach them using computer vision. Let’s take a look at this photo of a car (below).

Feature extraction can be used to identify objects. A system can be trained to identify a car by its features — headlights, wheels, front bumper, side mirrors, license plate, etc.

Using edge detection, we can isolate or extract the features of an object. Once the boundaries have been identified, software can analyze the image and identify the object. Machines can be taught to examine the outline of an image’s boundary and then extract the contents within it to identify the object. There will be false positives, or identification errors, so the algorithm must be refined until the level of accuracy increases.
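As a rough sketch of this step (my own illustration, assuming OpenCV 4; the file name car.jpg and the size threshold are placeholders), one could detect edges and trace them into outlines to crop out candidate features:

```python
import cv2

# Detect edges in the photo with the Canny algorithm.
img = cv2.imread("car.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 100, 200)

# Trace the edge map into contours: the closed outlines
# of candidate features (wheels, mirrors, plates, ...).
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

# Crop each sufficiently large outline so a classifier can
# later decide what the extracted feature actually is.
for i, contour in enumerate(contours):
    x, y, w, h = cv2.boundingRect(contour)
    if w * h > 500:  # skip tiny fragments; threshold is arbitrary
        cv2.imwrite("feature_%d.png" % i, img[y:y + h, x:x + w])
```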

The actual process of image recognition (i.e. identifying a car as a car) involves more complex computational techniques that use neural networks, e.g. convolutional neural networks (CNNs). I will not cover that in this article.

Landmark Detection

Software that recognizes objects such as landmarks is already in use, e.g. Google Lens. Once again, extracting the features leads to detection once we have the boundaries.

Landmark detection can be applied once the edges have been created. In this example one can see how a system can identify a landmark from its outline.

Landmarks, in image processing, actually refer to points of interest in an image that allow it to be recognized. They can range from actual landmarks, like buildings and public places, to common objects we are familiar with in our daily lives. For software to recognize what something in an image is, it needs the coordinates of these points, which it then feeds into a neural network. With machine learning, software can identify certain patterns based on the landmarks; detecting them helps the software differentiate, say, a horse from a car. Once the edges isolate the object in an image, the next step is to find its landmarks and then identify the object.
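To illustrate how such point coordinates can be obtained (my own example, not the author's method), Shi-Tomasi corner detection in OpenCV is one common choice; the file name and parameter values below are placeholders:

```python
import cv2

gray = cv2.imread("landmark.jpg", cv2.IMREAD_GRAYSCALE)

# Shi-Tomasi corner detection returns the (x, y) coordinates
# of up to 50 points of interest -- the kind of landmark
# points that can be fed into a neural network.
corners = cv2.goodFeaturesToTrack(gray, maxCorners=50,
                                  qualityLevel=0.01, minDistance=10)

if corners is not None:
    for x, y in corners.reshape(-1, 2):
        print("landmark at x=%.0f, y=%.0f" % (x, y))
```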

Let us take a closer look at an edge detected image.

Pattern analysis can help identify objects in an image from landmark detection.

I have isolated five objects as an example. Each object has landmarks that software can use to recognize what it is. I won’t delve further into that here, but once a recognizable pattern emerges from an object, the software will be able to identify it.

Are they always accurate?

In most cases they are, but the required level of accuracy is a matter of strict compliance and regulation. Systems where lives are at stake, as in medical equipment, must meet a higher level of accuracy than, say, an image filter in a social media app. This is why thorough and rigorous testing takes place before image recognition software is released. In vision systems like those used in self-driving cars, this is crucial.

Pixel Integrity

I created code to validate an image’s integrity pixel by pixel using the ImageChops module from the PIL library. It allows a pixel-by-pixel comparison of two images: one is the original, and the other is the image that needs to be checked. Sometimes there is a need to verify whether the original image has been modified, especially in multi-user environments. Without version control, a retoucher may not know whether the image was changed. Original content creators may also be curious whether an image another person uploaded to the Internet is the same as the one they created.
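The full code is linked at the end of this article; the core of a pixel-by-pixel check with ImageChops looks roughly like this (file names are placeholders, and the two images must share the same mode and dimensions):

```python
from PIL import Image, ImageChops

original = Image.open("original.jpg")
suspect = Image.open("uploaded.jpg")

# difference() subtracts the images pixel by pixel; getbbox()
# returns the bounding box of all non-zero (changed) pixels,
# or None when the two images are identical.
diff = ImageChops.difference(original, suspect)
if diff.getbbox() is None:
    print("Images match pixel for pixel.")
else:
    print("Images differ inside region:", diff.getbbox())
```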

Now in order to do this, it is best to set the same pixel size on both the original image (Image 1) and the non-original image (Image 2). I usually take the pixel size of the non-original image so as to preserve its dimensions, since I can easily downscale or upscale the original. In my example, Image 2 is 600 x 906 pixels, a 0.662 aspect ratio. I resize Image 1 to the same pixel size and then run my code to compare the two using edge detection.
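The resize step, sketched with Pillow (file names are placeholders):

```python
from PIL import Image

image2 = Image.open("image2.jpg")   # non-original, 600 x 906 here
image1 = Image.open("image1.jpg")   # original

# Rescale the original to Image 2's dimensions so the
# pixel-by-pixel comparison lines up.
image1 = image1.resize(image2.size, Image.LANCZOS)
```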

My code runs in a Python 2.7 or 3.x environment. I take Image 1 and Image 2 and extract their edges to define the boundaries. Then I overlay them using ImageChops, and we can see whether the original matches the non-original image.
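A condensed sketch of that flow, folding in the resize from above (again my own approximation; the GitHub repository linked below contains the actual code):

```python
from PIL import Image, ImageChops, ImageFilter

image2 = Image.open("image2.jpg").convert("L")
image1 = Image.open("image1.jpg").convert("L").resize(image2.size)

# Extract the edges of both images to define their boundaries.
edges1 = image1.filter(ImageFilter.FIND_EDGES)
edges2 = image2.filter(ImageFilter.FIND_EDGES)

# Overlay the edge maps; any offset between the boundaries
# shows up as non-zero pixels in the difference image.
overlay = ImageChops.difference(edges1, edges2)
overlay.save("overlay.png")
print("Match" if overlay.getbbox() is None else "Modified")
```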

As we can see from my example, if Image 2 had not been modified, there would be no offset between the edge boundaries. Image 2 was clearly modified in this example to highlight that.

Summary

Edge detection is one of the fundamental steps in image processing. It can be used both for feature extraction, to detect objects, and for verifying the pixel integrity of two images.

Here is a link to the code used in my pixel integrity example, with an explanation, on GitHub:

Use Git -> https://github.com/Play3rZer0/EdgeDetect.git

From Web -> https://github.com/Play3rZer0/EdgeDetect
