Blending two images to produce the desired image output using PorterDuff

Elye
Aug 22 · 8 min read

PorterDuff Mode is a software image-blending method available on many platforms, including Android. It is based on a mathematical model that operates on the pixels of two images to produce a neat output.

Many people have used it to make a circular image as below…

But aside from making circular images, there are so many interesting things that could be achieved using PorterDuff.

For illustration, the blending of the cloud and dragon animations is accomplished by looping through the various PorterDuff.Mode values. In this piece, we will explain this in more detail and give more practical examples of its usage.


The API Usage

To start off, let’s look at how to use it on Android, in Kotlin.

The examples provided in this article (app code below) are implemented in a custom view, by drawing two bitmaps together and blending them using paint.xfermode. That’s it! Simple code.

private val paint = Paint()
// Draw the destination bitmap first, with no transfer mode applied
canvas.drawBitmap(bitmapDestination, null, fullRect, paint)
// Apply the PorterDuff mode, then draw the source so the two are blended
paint.xfermode = PorterDuffXfermode(mode)
canvas.drawBitmap(bitmapSource, null, fullRect, paint)
// Reset the xfermode so later draws with this paint are unaffected
paint.xfermode = null

The Overall Picture

The API usage is simple, but there are 18 modes altogether one could apply, which makes it complicated. The illustration below (from the Android documentation) does help, but it is still not easy to see how each mode could be used further.

The Android documentation also gives the mathematical model behind each mode, which is not easily understood or visualized.

Note: There’s a comprehensive tutorial that explains the math behind it, so I’m not getting into those technicalities here. Instead, I’ll show some practical usages in hopes that it makes learning it more fun.

For simplicity and to make it more relatable, I used the images below:

With that, we could place them as such:

Nevertheless, while some are clear, there are still several whose application is not obvious. I’ll illustrate these below.


The Cropping Effect: Source In & Destination In

This allows you to crop away the unwanted edges of an image by using another image that is transparent over the unwanted area.

In the above example, we could also use Source_Atop/Destination_Atop, but Atop is put to better use below.

To demonstrate what can be achieved by Source_In/Destination_In but not Source_Atop/Destination_Atop, below is an example where it only shows the area where both the source and destination are not transparent.
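To see why In crops, it helps to write out the per-pixel rule. Below is a pure-Kotlin sketch of the SRC_IN formula on premultiplied (alpha, color) values in the 0.0..1.0 range; the Pixel type and function name are my own, not the Android API:

```kotlin
// One premultiplied pixel: alpha plus a single color channel, both 0.0..1.0.
// (Real pixels carry three color channels; one is enough to show the idea.)
data class Pixel(val a: Double, val c: Double)

// SRC_IN: the source is kept, scaled by the destination's alpha.
// Wherever the destination is transparent, the source is cropped away.
fun srcIn(src: Pixel, dst: Pixel): Pixel =
    Pixel(a = src.a * dst.a, c = src.c * dst.a)
```

For example, `srcIn(Pixel(1.0, 0.5), Pixel(0.0, 0.0))` yields a fully transparent pixel: an opaque source pixel over a transparent destination is cropped to nothing.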


The Patching Effect: Source Atop & Destination Atop

When trying to patch a part of an image onto another image, Atop comes in handy. Unlike Source In/Destination In, it doesn’t crop away the intended image (in the case below, the source), but rather patches whatever overlaps on top of it.

For a different effect, check out the image below, which looks like some sort of color blindness test image.

There’s an interesting article on using these patching effects, i.e., Destination Atop, to patch white onto text, making it look like light over the letters.
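The “patch rather than crop” behavior is visible in the per-pixel rule: the result takes the destination’s alpha, so the destination keeps its full shape. Here is a pure-Kotlin sketch of SRC_ATOP on premultiplied (alpha, color) values (the Pixel type is hypothetical, not the Android API):

```kotlin
// One premultiplied pixel: alpha plus a single color channel, both 0.0..1.0.
data class Pixel(val a: Double, val c: Double)

// SRC_ATOP: the result keeps the destination's alpha (its shape survives);
// the source contributes color only where it is itself opaque.
fun srcAtop(src: Pixel, dst: Pixel): Pixel =
    Pixel(a = dst.a, c = src.c * dst.a + dst.c * (1 - src.a))
```

Where the source is transparent, the destination pixel passes through untouched, which is exactly what makes Atop a patching operation instead of a crop.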


The Punch Hole Effect: Source Out, Destination Out, & Exclusive Or

If you want to cut the shape of one image out of another, rather than drawing the entire image on top, the punch hole effect is what you are looking for.

Using Source Out and Destination Out, you can literally cut out the space of a provided picture:

There are certain cases where we would like to remove only the intersecting area, not the entire picture. Here, Exclusive Or comes in handy:
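Both punch-hole rules can be sketched per pixel in pure Kotlin (premultiplied alpha/color values in 0.0..1.0; the Pixel type and function names are mine, not the Android API):

```kotlin
// One premultiplied pixel: alpha plus a single color channel, both 0.0..1.0.
data class Pixel(val a: Double, val c: Double)

// DST_OUT: keep the destination only where the source is transparent;
// the source's shape is punched out of the destination.
fun dstOut(src: Pixel, dst: Pixel): Pixel =
    Pixel(a = dst.a * (1 - src.a), c = dst.c * (1 - src.a))

// XOR: keep each image only where the other is transparent,
// so the intersecting area disappears while both remainders survive.
fun xor(src: Pixel, dst: Pixel): Pixel =
    Pixel(a = src.a + dst.a - 2 * src.a * dst.a,
          c = src.c * (1 - dst.a) + dst.c * (1 - src.a))
```

Feeding two opaque pixels into either function returns a transparent pixel, which is the hole; where only one image has content, that image survives.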


The In-Between Image Effect: Lighten & Darken

Sometimes, we want to blend an image both in front of and behind another image. This sounds almost impossible with only two images. However, with Darken and Lighten it becomes possible, because these modes compare the two pixels and keep the darker or lighter color.

An example of a Source and Destination can be seen below:

If we use a plain Source Over or Destination Over, it will look as if one image is entirely in front of the other:

But if we use Lighten or Darken, we get a blend.

  • For Lighten, the brighter part of the image will be on top. In our example, this means the whiter part of the cloud will be in front of the pigeon, making it both in front and behind when blended.
  • For Darken, the darker part of the image will be on top, meaning the darker part of the cloud will be in front of the pigeon.
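For fully opaque pixels, these two modes reduce to a per-channel comparison, which is easy to sketch in pure Kotlin (my simplification; the full Android formulas include extra alpha terms):

```kotlin
import kotlin.math.max
import kotlin.math.min

// Per color channel, values 0.0..1.0, both pixels assumed fully opaque:
fun lighten(src: Double, dst: Double): Double = max(src, dst) // brighter pixel wins
fun darken(src: Double, dst: Double): Double = min(src, dst)  // darker pixel wins
```

So for a bright cloud pixel of 0.9 against a darker pigeon pixel of 0.3, Lighten keeps 0.9 (the cloud shows in front) while Darken keeps 0.3 (the pigeon shows in front).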

Another good usage is blending with a pure black-and-white pattern image. You’ll get a good blend of the image onto the black and white spots using this mode:


The Brighten or Darken Effect

Some of the modes produce an effect even when applied to two copies of the same image.

When we use the same source and destination, most of the modes will have no impact on the blending, as shown below:

However, there are some modes that will have an effect.

Screen: Making it less clear through a screen

Makes the overall picture whiter and brighter. The picture becomes less clear, as if a screen were placed over it.

Add: Very bright light striking the image (until white exposed)

Makes the picture really bright by adding the source and destination images together. It looks as if a bright light shines on it, until parts of the picture are overexposed to white.

Multiply: Darken the image (reducing the light on it)

Multiplying the two images’ color pixels together makes the result darker, as each color value is considered to be between zero and one. Multiplying two such fractions returns a smaller number, hence making the image overall darker.
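The Screen, Add, and Multiply effects can be checked with a few lines of per-channel arithmetic (a pure-Kotlin sketch for fully opaque pixels with values in 0.0..1.0; not the Android API itself):

```kotlin
import kotlin.math.min

// Per color channel, both pixels assumed fully opaque:
fun screen(s: Double, d: Double): Double = s + d - s * d // >= both inputs: brighter
fun add(s: Double, d: Double): Double = min(1.0, s + d)  // saturates at pure white
fun multiply(s: Double, d: Double): Double = s * d       // <= both inputs: darker
```

Blending a mid-gray 0.5 with itself gives 0.75 under Screen and 0.25 under Multiply, which is exactly the brightening and darkening seen in the screenshots.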

Overlay: Sharpen the image with better contrast

This is an interesting mode, as it darkens the darker pixels and brightens the lighter pixels. It stretches the contrast, making the image look sharper.

Look at it overall…

Just for comparison side by side…


The Blending, Brighten, and Darken Effect

Above, we looked at the effect of blending two copies of the same image. However, we could also apply the modes to two different images. How much the destination covers the source image changes from mode to mode.

Below are the two images used for our experiment:

I arranged the seven modes below based on how much the cloud is covering the dragon, as well as the brightness of the cloud.

Putting them in a loop, we could animate it and produce the image sequence below:
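The loop itself can be as simple as stepping an index through a list of modes on every animation tick and redrawing. Below is a pure-Kotlin sketch of the frame-to-mode mapping (the list is a hypothetical ordering for illustration; in the app you would hold actual PorterDuff.Mode values and call invalidate() each tick):

```kotlin
// Hypothetical ordering, roughly from "cloud covering most" to "cloud faintest":
val modes = listOf("SRC_OVER", "SCREEN", "LIGHTEN", "ADD", "DARKEN", "OVERLAY", "DST_OVER")

// The mode to draw with on a given animation frame; wraps around forever.
fun modeForFrame(frame: Int): String = modes[frame % modes.size]
```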

Why isn’t the multiply mode used above?

If you noticed above, I didn’t add the Multiply mode. This is not because it is too dark to fit in.

The reason is that the Multiply mode only shows pixels where both the source AND the destination are not transparent. The blending doesn’t produce the entire source image, because the destination image has transparent sides as well.
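This follows directly from Multiply’s per-pixel rule: the alphas are multiplied as well, so a transparent pixel on either side forces a transparent result. A one-liner makes the point (pure Kotlin; the function name is mine):

```kotlin
// Resulting alpha under MULTIPLY: zero whenever either input alpha is zero.
fun multipliedAlpha(srcAlpha: Double, dstAlpha: Double): Double = srcAlpha * dstAlpha
```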

Nonetheless, it does have other uses.

It can be used for dark-color patching, with a destination image that uses white instead of transparency.

Check out below where the dragon has been patched with the black color:


I hope you found reading this fun and beneficial. I believe there are more creative usages of PorterDuff. If you find an interesting usage of it, please respond and share, so that others can benefit from it.

Here is the code that you can use to experiment:

If you would like to perform image processing on just a single image, you could check out the blog below.


I hope this piece was helpful to you. Please check out my other topics here.

Better Programming

Advice for programmers.

Written by Elye
Learning and Sharing Android and iOS Development
