Image Processing: Filters for Noise Reduction and Edge Detection

Maxence Boels
4 min read · Oct 24, 2019


This story aims to introduce basic computer vision and image processing concepts, namely smoothing and sharpening filters.

Smoothing Filters are used for blurring and for noise reduction. Two types of blurring filters will be discussed:

  1. Smoothing Linear Filters
  2. Smoothing Non-Linear Filters

Then I will present the Sharpening Filters, which highlight transitions in intensity:

  1. The Laplacian — Using the Second Derivative
  2. The Gradient — Using First Order Derivative
  3. Sobel Operators

Coffee Filtering

Smoothing Filters

  1. Smoothing Linear Filters

Average filters take the mean value of the pixels in a neighborhood, which is defined by the size of a mask (m columns and n rows). It is important to divide by the sum of the mask coefficients (m × n for a box filter) so that the output values stay in the same intensity range as the input.

Box filter (Einstein image)
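
As an illustration, here is a minimal sketch of a box (averaging) filter in Python, assuming the image is available as a 2D NumPy array of grayscale intensities and using scipy.ndimage for the convolution (these library choices are mine, not the article's):

```python
import numpy as np
from scipy.ndimage import convolve

def box_filter(image, m=3, n=3):
    """Smooth a grayscale image with an m x n averaging (box) mask."""
    # Every coefficient is 1; dividing by m*n normalizes the mask so the
    # output stays in the same intensity range as the input.
    mask = np.ones((m, n), dtype=float) / (m * n)
    return convolve(image.astype(float), mask, mode="reflect")

# Example on a small random "image"
img = np.random.randint(0, 256, size=(8, 8))
smoothed = box_filter(img, m=3, n=3)
```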

Another linear filter is the weighted average filter, which multiplies the pixels by different coefficients, giving more importance (weight) to some pixels than to others. Here is an example:

a weighted average linear filter

The mask size controls the blurring strength: the larger the mask, the stronger the blur, because each convolution averages over more pixel values. For example, a mask with m = 15 blends small objects into the background.

Finally, the Gaussian filter blurs an image with a bell-shaped mask whose coefficients follow a 2D normal distribution.
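
To make both ideas concrete, here is a hedged sketch of a commonly used 3×3 weighted-average mask and a Gaussian blur, again assuming a grayscale NumPy array and using scipy.ndimage (the specific mask and sigma value are illustrative, not taken from the article):

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

# A common 3x3 weighted-average mask: pixels closer to the center get
# larger coefficients; dividing by 16 (the sum of the weights) normalizes it.
weighted_mask = np.array([[1, 2, 1],
                          [2, 4, 2],
                          [1, 2, 1]], dtype=float) / 16.0

def weighted_average(image):
    return convolve(image.astype(float), weighted_mask, mode="reflect")

def gaussian_blur(image, sigma=2.0):
    # The Gaussian filter weights the neighborhood with a bell-shaped
    # (normal) distribution; a larger sigma gives a stronger blur.
    return gaussian_filter(image.astype(float), sigma=sigma)
```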

2. Smoothing Non-Linear Filters

Median filters are the most popular non-linear filters because of their ability to reduce impulse noise, also known as salt-and-pepper noise. To perform median filtering at a point in an image, we first sort the values of the pixels in its neighborhood, determine the median, and then assign that value to the corresponding pixel in the filtered image.

median filtering algorithm
Median filtering on salt and pepper
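
A possible implementation sketch, assuming a grayscale NumPy image and using scipy.ndimage.median_filter; the noise-adding helper below is made up purely for the demo:

```python
import numpy as np
from scipy.ndimage import median_filter

def add_salt_and_pepper(image, amount=0.05, rng=None):
    """Corrupt a grayscale image with impulse (salt-and-pepper) noise."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = image.copy()
    mask = rng.random(image.shape)
    noisy[mask < amount / 2] = 0          # pepper
    noisy[mask > 1 - amount / 2] = 255    # salt
    return noisy

img = np.random.randint(0, 256, size=(64, 64)).astype(np.uint8)
noisy = add_salt_and_pepper(img)
# For each pixel, the 3x3 neighborhood is sorted and its median replaces
# the pixel, which removes isolated black/white outliers.
denoised = median_filter(noisy, size=3)
```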

Sharpening Filters

This type of filter is used to highlight transitions in intensity at edges.

The first derivative measures the variation in intensity from one pixel to the next. It can be computed along the rows and columns of the pixel matrix.

The second derivative responds where the rate of change itself changes, for example at the onset and end of a ramp or at a step, but not along a constant increase or decrease in pixel value.
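
A tiny NumPy sketch on a made-up 1D intensity profile illustrates the difference between the two derivatives:

```python
import numpy as np

# A 1D intensity profile: a flat region, a descending ramp, then a step.
profile = np.array([5, 5, 5, 4, 3, 2, 1, 1, 1, 6, 6, 6], dtype=float)

first  = np.diff(profile)        # f(x+1) - f(x)
second = np.diff(profile, n=2)   # f(x+1) + f(x-1) - 2*f(x)

# The first derivative is nonzero all along the ramp, while the second
# derivative responds only at the onset and end of the ramp and at the step.
print(first)
print(second)
```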

  1. The Laplacian — Second Derivative

This method uses the second derivative and is defined mathematically as:

Second Derivative of x and y

These masks enhance fine details and are called isotropic masks since they produce the same results on images rotated by multiples of 90°. Note on the following image that the sum of the elements in each mask is equal to zero. As a consequence, the filter's response is zero over areas of constant intensity: the mask reacts only to changes in intensity and does not alter the overall brightness of flat regions. To obtain a sharpened image, the Laplacian is then added back to (or subtracted from) the original.
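
As a sketch (the article's exact mask may differ), the standard 4-neighbor Laplacian and a simple sharpening step could look like this in Python:

```python
import numpy as np
from scipy.ndimage import convolve

# Discrete Laplacian: f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1) - 4*f(x,y).
# The coefficients sum to zero, so the response is zero in constant regions.
laplacian_mask = np.array([[0,  1, 0],
                           [1, -4, 1],
                           [0,  1, 0]], dtype=float)

def laplacian_sharpen(image, c=-1.0):
    """Sharpen by adding the (scaled) Laplacian back to the original image."""
    img = image.astype(float)
    lap = convolve(img, laplacian_mask, mode="reflect")
    # Because this mask has a negative center, c = -1 subtracts the Laplacian,
    # which enhances fine detail while preserving the background.
    return np.clip(img + c * lap, 0, 255)
```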

2. The Gradient — First Derivative

The gradient is very useful in preprocessing, for example to detect defects in an image.

The first derivatives in image processing are implemented using the magnitude of the gradient. This magnitude expresses the rate of change of intensity in the direction of the gradient. Note that the isotropic property of the Laplacian is in general lost with this filter.
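
A rough sketch of the gradient magnitude using simple first differences (one of several possible approximations, not necessarily the one used in the article):

```python
import numpy as np

def gradient_magnitude(image):
    """Approximate the gradient magnitude with first differences."""
    img = image.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, :-1] = np.diff(img, axis=1)   # horizontal first derivative
    gy[:-1, :] = np.diff(img, axis=0)   # vertical first derivative
    # Magnitude = sqrt(gx^2 + gy^2); often approximated by |gx| + |gy|.
    return np.sqrt(gx**2 + gy**2)
```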

3. Sobel operator — Using the Gradient

This sharpening filter uses a weight of 2 in the center of its masks, which slightly smooths the output image while enhancing edges. Note that in all the masks shown, the coefficients sum to zero, as expected of a derivative operator.

Sobel filter for vertical and horizontal edges
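
A minimal Sobel sketch in Python, assuming a grayscale NumPy array; the masks below are the standard 3×3 Sobel kernels:

```python
import numpy as np
from scipy.ndimage import convolve

# Sobel masks: the weight of 2 in the central row/column adds a little
# smoothing; each mask sums to zero, as expected of a derivative operator.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)   # responds to vertical edges
sobel_y = sobel_x.T                              # responds to horizontal edges

def sobel_edges(image):
    img = image.astype(float)
    gx = convolve(img, sobel_x, mode="reflect")
    gy = convolve(img, sobel_y, mode="reflect")
    return np.sqrt(gx**2 + gy**2)
```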

In conclusion, it is very common in image processing to combine several filters during preprocessing, for example to enhance a training dataset for computer vision and machine learning. As mentioned, smoothing filters are often used to remove noise before applying sharpening masks to extract a particular feature from an input image.

My next article will take a deep dive into a ship detection competition on Kaggle. Follow me for more Computer Vision content.


Maxence Boels

MSc in Computer Vision, Machine Learning and Robotics at University of Surrey.