IMAGE FUSION IN IMAGE PROCESSING

Ahemad Kazi
Published in Designway
Jul 11, 2019

In this article, you will learn everything about a process in image processing called image fusion. Without further ado, let’s get into it.

WHAT IS IMAGE FUSION:

Image fusion is a process in image processing that combines the information or features from multiple images into a single image, producing a result that is more accurate and more informative than any of the inputs.

Let me explain. What this basically means is that several images of the same scene are taken, and the information from each of them is gathered into one image. Hence the name fusion (combining many into one).

This is done to obtain a single image that is more accurate and informative than any of the input images on its own. It also reduces storage: when you combine only the useful features of many images into one, the resulting image carries the important characteristics of all of them.

We don’t have to waste space storing all of them; this one image will do the trick. Hence, both the image and the database where it is stored remain compact.

For instance, say there are two images of the same scene, one in color and the other in black & white. The only difference between them is the way the scene is presented. Some details may be unclear or missing in the color image, while the black & white image captures those details but lacks some other information or contains its own distortions.

Each image on its own is incomplete. To get the best and most accurate result, image fusion is performed: the best characteristics of the color and black & white images are taken and fused together into one improved image.

This resulting image won’t have the faults or missing areas of the two originals. That is the whole purpose of image fusion: to get one improved image from two imperfect ones.

Now that you know what image fusion is all about, let’s see how it is done. Image fusion can take place at three levels:

· At pixel level

· At feature level

· At decision level

As we all know, a pixel is the smallest unit of an image. So, the first level of fusion is the pixel level.

At the pixel level, the source images are first registered, so that each pixel in the first image corresponds to the same point in the second image, and the corresponding pixel values are then combined directly. It is done at a very fine-grained level.
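
To make this concrete, here is a minimal sketch of pixel-level fusion using a simple weighted average of two already-registered grayscale images. The file names and the weight value are placeholders, not part of any specific method described here.

```python
import numpy as np
from PIL import Image

# Load two images of the same scene; they must already be registered
# (aligned) so that corresponding pixels refer to the same point.
# The file names below are placeholders.
img_a = np.asarray(Image.open("image_a.png").convert("L"), dtype=np.float64)
img_b = np.asarray(Image.open("image_b.png").convert("L"), dtype=np.float64)

# Pixel-level fusion: combine corresponding pixel values directly.
# A weighted average is one of the simplest possible fusion rules.
alpha = 0.5
fused = alpha * img_a + (1 - alpha) * img_b

Image.fromarray(np.clip(fused, 0, 255).astype(np.uint8)).save("fused.png")
```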

At the feature level, the features of an object in one image (say, a person) are matched with the features of the same object in another image, and the matched features are then fused to produce a new, better image.

At the decision level, both images are analyzed separately, the information about each image (its features and characteristics) is extracted and stored, and that collected information is then fused to produce a new, complete image.

Methods Used for Image Fusion:

There are a number of methods used for image fusion, such as:

· Multiplicative algorithm

· Subtractive method

· PCA (Principal Component Analysis)

· IHS (Intensity Hue Saturation) method

· High pass filter method

· Brovey Transform method

· Wavelet method

Now, I know those scientific terms can be overwhelming. But let me explain to you in easy terms which you can grasp quickly.

The multiplicative algorithm is a method where a high-resolution image and a lower-resolution image are taken and fused by multiplying their pixel values, producing a single image that is more informative than either of the two.
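
As a rough illustration, here is a small sketch of multiplicative fusion between a multispectral (color) image and a panchromatic (black & white) image. The square-root rescaling is one common variant, not the only possible formulation, and the inputs are assumed to be registered and the same size.

```python
import numpy as np

def multiplicative_fusion(ms, pan):
    """Fuse a multispectral image (H, W, bands) with a panchromatic
    image (H, W) by multiplying each band with the pan image."""
    ms = ms.astype(np.float64)
    pan = pan.astype(np.float64)
    fused = np.empty_like(ms)
    for b in range(ms.shape[2]):
        product = ms[:, :, b] * pan
        # Taking the square root keeps the result in a value range
        # similar to the inputs (a common variant of the method).
        fused[:, :, b] = np.sqrt(product)
    return fused
```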

The subtractive method overlaps the two images using a subtraction-based algorithm; the resulting image has the color characteristics of the colored image but the detail of the black & white image. It has good quality and a crisp appearance.

The PCA method is a mathematical method where the pixel values are transformed and modified to build the final image. Redundant information shared between the images is reduced as much as possible without reducing the quality of the result.
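
Below is a hedged sketch of one common PCA-based fusion scheme: the multispectral bands are projected onto their principal components, the first component is swapped for the panchromatic image, and the transform is inverted. The exact scheme used by any particular tool may differ; the inputs are assumed to be registered and the same size.

```python
import numpy as np

def pca_fusion(ms, pan):
    """PCA-based fusion sketch: replace the first principal component
    of the multispectral image (H, W, bands) with the panchromatic
    image (H, W), then invert the transform."""
    h, w, bands = ms.shape
    flat = ms.reshape(-1, bands).astype(np.float64)

    mean = flat.mean(axis=0)
    centered = flat - mean
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]          # largest variance first
    eigvecs = eigvecs[:, order]

    pcs = centered @ eigvecs                   # project onto components

    # Match the pan image to the mean/std of the first component,
    # then substitute it for that component.
    p = pan.astype(np.float64).reshape(-1)
    p = (p - p.mean()) / (p.std() + 1e-12) * pcs[:, 0].std() + pcs[:, 0].mean()
    pcs[:, 0] = p

    fused = pcs @ eigvecs.T + mean             # invert the transform
    return fused.reshape(h, w, bands)
```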

The IHS method is mainly used for sharpening a color image. It works on the three basic color channels, red, green and blue, converting the RGB image into IHS (intensity, hue, saturation) space. The image is then enhanced in the intensity, hue and saturation components.
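
Here is a simplified sketch of IHS-style fusion. It approximates the intensity component as the mean of the RGB bands and substitutes the panchromatic image for it, which is a common fast approximation rather than a full IHS-space conversion. Inputs are assumed to be registered, the same size, and in the same value range.

```python
import numpy as np

def ihs_fusion(rgb, pan):
    """Simple IHS-style fusion sketch: the intensity of the RGB image
    (H, W, 3) is approximated as the mean of its bands and replaced
    by the panchromatic image (H, W)."""
    rgb = rgb.astype(np.float64)
    pan = pan.astype(np.float64)

    intensity = rgb.mean(axis=2)               # crude intensity component
    # Adding (pan - intensity) to every band swaps the old intensity
    # for the new one while leaving hue/saturation roughly intact.
    detail = (pan - intensity)[:, :, np.newaxis]
    return rgb + detail
```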

The high pass filter method combines the high-resolution detail from the black & white image with the lower-resolution information from the colored image. The final image looks natural and smooth, but some sharpness is sacrificed, so objects in the image may not appear as crisp.
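
A minimal sketch of this idea: extract the high-frequency detail of the panchromatic image by subtracting a blurred copy, then add that detail to every band of the color image. The sigma value is an arbitrary choice for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hpf_fusion(ms, pan, sigma=2.0):
    """High-pass-filter fusion sketch: the spatial detail of the
    panchromatic image (H, W) is added to every band of the
    multispectral image (H, W, bands)."""
    ms = ms.astype(np.float64)
    pan = pan.astype(np.float64)

    high_freq = pan - gaussian_filter(pan, sigma)   # spatial detail only
    return ms + high_freq[:, :, np.newaxis]
```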

The Brovey transform method combines data from different sensors while trying to preserve the pixel information of each. The pixels of each image are combined, but the resulting image can appear brighter than either input. It is a simple method for combining data from different image sensors.
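
A short sketch of the standard Brovey formulation: each band is normalized by the sum of all bands and multiplied by the panchromatic image. Inputs are assumed to be registered and the same size.

```python
import numpy as np

def brovey_fusion(ms, pan):
    """Brovey transform sketch: each band of the multispectral image
    (H, W, bands) is normalized by the sum of all bands and scaled
    by the panchromatic image (H, W)."""
    ms = ms.astype(np.float64)
    pan = pan.astype(np.float64)

    band_sum = ms.sum(axis=2) + 1e-12           # avoid division by zero
    return ms / band_sum[:, :, np.newaxis] * pan[:, :, np.newaxis]
```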

The wavelet method, loosely speaking, takes the color from the colored image and pours it onto the black & white image. The resulting image may not be perfectly formed and edges or corners can appear unclear, so the result is not always ideal.
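
Here is a hedged sketch of one simple wavelet fusion rule for two registered grayscale images: the approximation coefficients are averaged and, for each detail coefficient, the larger magnitude is kept. Real wavelet fusion schemes vary in the decomposition depth and fusion rules they use.

```python
import numpy as np
import pywt

def wavelet_fusion(img_a, img_b, wavelet="haar"):
    """Single-level wavelet fusion sketch for two registered grayscale
    images of the same size."""
    a_lo, (a_h, a_v, a_d) = pywt.dwt2(img_a.astype(np.float64), wavelet)
    b_lo, (b_h, b_v, b_d) = pywt.dwt2(img_b.astype(np.float64), wavelet)

    fused_lo = (a_lo + b_lo) / 2.0                       # average approximation
    pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
    fused_details = (pick(a_h, b_h), pick(a_v, b_v), pick(a_d, b_d))

    return pywt.idwt2((fused_lo, fused_details), wavelet)
```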

See, we made it. That wasn’t too difficult, considering how complex the names of the processes sounded, was it?

Finally, let’s see the applications of this fusion process.

Applications of Image Fusion:

Image fusion is used for the following purposes:

1) Object identification: Suppose the resolution of the colored image is low and the person or object in it is small; it can then be fused with a black & white image of higher resolution to get a high-resolution result.

The resulting image will have high resolution, and the object in it will be resolved in more detail. Hence it becomes easier to identify.

2) Classification: The second application of image fusion is image classification. When a colored image is fused with a black & white image, the final image has higher resolution, which makes it easier to classify its contents into different categories of data.

3) Change detection: The third and final application of image fusion is detecting the difference between two images. Say an image has been kept for a long duration of time; it may have some distortion, or a portion may be faded or missing. Image fusion is used to check for that kind of change.
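
As a toy illustration of detecting change between two versions of an image, here is a simple difference-and-threshold sketch. The threshold value is an arbitrary choice and the two images are assumed to be registered grayscale arrays of the same size.

```python
import numpy as np

def change_map(img_old, img_new, threshold=30):
    """Change-detection sketch: subtract two registered grayscale
    images of the same scene and flag pixels whose difference
    exceeds a threshold."""
    diff = np.abs(img_new.astype(np.float64) - img_old.astype(np.float64))
    return diff > threshold                     # True where pixels changed
```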

That was it, everything you needed to know about image fusion. Make sure to check out my previous post, where you can learn in detail about the steps involved in image processing. Until next time :)
