Published in The Startup

What Is a Kuwahara Filter?

Photo by Joanna Nix-Walkup on Unsplash

The Kuwahara filter is a non-linear smoothing filter that preserves the sharpness of the image and the positions of its edges, and it is best known for this second property. Most filters used for image smoothing are low-pass filters that effectively reduce noise but also blur the edges; the Kuwahara filter preserves them, and this is its main differentiator.

It is one of the pioneering edge-preserving image filtering techniques. Proposed in 1976, its original purpose was to assist in processing RI-angiocardiography images of the cardiovascular system, where edge preservation is valuable for feature extraction and segmentation. It is frequently used in biomedical applications as a first step in identifying anomalies in noisy images, such as the detection of brain tumors in magnetic resonance imaging, and it is now also used in artistic imaging and photography for its ability to remove texture, sharpen edges, and create a painting-like effect with a desirable level of abstraction.

Figure 1: example of RI-angiocardiography image. (Source: Author, 2017).

The Kuwahara filtering process is basically the division of a pixel grid into four overlapping sub-grids, computing a mean and a variance for each one. The output value is the mean of the sub-grid with the smallest variance, and this value is assigned to the central pixel of each region the algorithm analyzes. The grid can be divided in a wide variety of ways; a common form is a square grid of size J = K = 4L + 1, where L is an integer, J is the number of rows, K is the number of columns, and 4 is the number of regions. The example below demonstrates this operation.

Figure 2: example of the process. (Source: Author, 2017).
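To make the process above concrete, here is a minimal NumPy sketch (my own illustration, not the implementation used in the original study) of the computation for a single (4L + 1) × (4L + 1) window: the window is split into four overlapping quadrants, and the output is the mean of the quadrant with the smallest variance.

```python
import numpy as np

def kuwahara_pixel(window):
    """Kuwahara output for one square (4L+1)x(4L+1) window:
    split it into four overlapping quadrants and return the
    mean of the quadrant with the smallest variance."""
    n = window.shape[0]          # n = 4L + 1
    h = n // 2                   # quadrants are (h+1)x(h+1), sharing the center row/column
    quads = [
        window[:h + 1, :h + 1],  # top-left
        window[:h + 1, h:],      # top-right
        window[h:, :h + 1],      # bottom-left
        window[h:, h:],          # bottom-right
    ]
    variances = [q.var() for q in quads]
    return quads[int(np.argmin(variances))].mean()

# A 5x5 window (L = 1) straddling a vertical edge: dark left, bright right.
w = np.array([[10, 10, 10, 200, 200]] * 5, dtype=float)
print(kuwahara_pixel(w))  # → 10.0
```

The central pixel sits on the dark side of the edge, so the all-dark left quadrants have zero variance and the output is their mean (10.0), not a blurred mixture of dark and bright values.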

Math under the hood

Consider a grayscale image and a square window centered on a point of the image. This square can be divided into four quadrants, defined by:

Figure 3: the Kuwahara operator (Author: Wikipedia, 2012).

The pixels located on the border between two regions belong to both, resulting in an overlap between the sub-regions.

The arithmetic mean and standard deviation of the four regions centered on an image element are calculated and used to determine the value of the central pixel. The filter output at any point (x, y) is given by

Φ(x, y) = mᵢ(x, y), where i = arg minᵢ σᵢ(x, y),

with mᵢ and σᵢ the mean and standard deviation of sub-region Qᵢ. That is, the central pixel takes the mean value of the sub-region with the smallest standard deviation; the standard deviation of each region is what decides which side of an edge the output comes from.

If the image element lies on the dark side of an edge, it will take the mean value of the darker region; if it lies on the bright side, it will take the mean value of the brighter region. If the pixel lies exactly on the edge, it will take the mean value of the more homogeneous, less textured side.

The filter considers the homogeneity of the regions, ensuring that the edges are preserved.

The window size is chosen in advance and may vary with the desired level of abstraction in the final result. Larger windows usually produce more abstract images, while smaller windows produce images that better preserve detail. Windows are usually chosen with an odd number of pixels per side, for symmetry, but this is not a rule: there are filter variations that use rectangular windows. The sub-regions also do not need to overlap or be the same size, as long as they cover the whole window.
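Putting the pieces together, a minimal (and deliberately unoptimized) sketch of the whole filter might look like the following; the window size is controlled by the parameter L from the J = K = 4L + 1 formula, and borders are handled here by edge padding, which is one of several reasonable choices.

```python
import numpy as np

def kuwahara(img, L=1):
    """Minimal Kuwahara filter sketch for a 2-D grayscale array.
    The window is (4L+1)x(4L+1); image borders are edge-padded."""
    h = 2 * L                                  # half window
    padded = np.pad(img.astype(float), h, mode="edge")
    out = np.empty(img.shape, dtype=float)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            w = padded[y:y + 2 * h + 1, x:x + 2 * h + 1]
            quads = [w[:h + 1, :h + 1], w[:h + 1, h:],
                     w[h:, :h + 1], w[h:, h:]]
            variances = [q.var() for q in quads]
            out[y, x] = quads[int(np.argmin(variances))].mean()
    return out

# A perfect vertical step edge is reproduced exactly, with no blurring:
img = np.hstack([np.zeros((8, 4)), np.full((8, 4), 255.0)])
assert np.array_equal(kuwahara(img, L=1), img)
```

On each side of the step there is always at least one fully homogeneous quadrant with zero variance, so the output equals the input on both sides of the edge — the edge-preservation property described above.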


ImageJ was used for these examples.

Figure 4: Original image. (Source: Author, 2017).
Figure 5: Image with the filter, using a 7×7 kernel. (Source: Author, 2017).
Figure 6: Original image. (Source: Unknown, 2017).
Figure 7: Image with the filter, using a 10×10 kernel. (Source: Unknown, 2017).
Figure 8: Original image. (Source: Unknown, 2017).
Figure 9: Image with the filter, using a 13×13 kernel. (Source: Unknown, 2017).

Possible filter changes

The filter can be varied both in its intensity and by combining it with other operations, such as resizing, and it can be applied to enhance low-sharpness images and video; metrics such as PSNR and MSE can then be used to quantify the improvement in image smoothing.

MSE = (1 / M·N) · Σᵢ Σⱼ [I(i, j) − K(i, j)]²

PSNR = 10 · log₁₀(MAX² / MSE)

where I is the reference image, K is the image being compared, M × N are the image dimensions, and MAX is the maximum possible pixel value (255 for 8-bit images).
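These two metrics are straightforward to compute; a small sketch (standard textbook definitions, not code from the original study):

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two same-shaped images."""
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10 * np.log10(max_val ** 2 / m)

a = np.zeros((4, 4))
b = np.full((4, 4), 16.0)
print(mse(a, b))   # → 256.0
print(psnr(a, b))  # ≈ 24.05 dB
```

A higher PSNR of the filtered image against a clean reference, compared with that of the noisy input, indicates that the filter improved the image.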

This example shows an image with a noise density of 0.3:

Figure 10: Original image, with noise and filter, respectively. (Source: Unknown, 2017).
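"Noise density 0.3" here means that roughly 30% of the pixels are corrupted. A small sketch of how such salt-and-pepper noise can be generated (the helper name and the uniform test image are my own, for illustration):

```python
import numpy as np

def salt_and_pepper(img, density=0.3, seed=0):
    """Corrupt a grayscale uint8 image with salt-and-pepper noise:
    a `density` fraction of pixels is replaced by 0 or 255."""
    rng = np.random.default_rng(seed)
    noisy = img.copy()
    mask = rng.random(img.shape) < density      # pixels to corrupt
    noisy[mask] = rng.choice([0, 255], size=int(mask.sum()))
    return noisy

img = np.full((64, 64), 128, dtype=np.uint8)    # flat mid-gray test image
noisy = salt_and_pepper(img, density=0.3)
print(np.mean(noisy != img))  # close to 0.3
```

Applying the Kuwahara filter to such a noisy image, as in Figure 10, removes most of the impulse noise while keeping the edges in place.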


  1. Giuseppe Papari, Nicolai Petkov, and Patrizio Campisi, "Artistic Edge and Corner Enhancing Smoothing," IEEE Transactions on Image Processing, vol. 16, no. 10, October 2007, pp. 2449–2461.

I also thank Samuel Licorio Leiva and Carlos Eduardo Benedetti Lopes Junior for being part of this study, which was conducted in 2017.




Lucas de Brito Silva