# Activity 8: Morphological Operations

This new activity is an exciting one, as we will see an application in biology, a subject I actually liked, though I can’t say I was good at it. To begin, let us have a little starter on what morphological operations are.

By now, we can easily relate the word **morphology** to shape or structure, since the root word easily gives it away. Determining structure, however, is not a simple task in image processing, and even less so in computer vision.

For us humans, we can determine shapes by looking at something, and determining which portions are parts of which. Similarly, in binary images, we determine shapes and boundaries by determining which is which, but this time, by assigning values to certain portions, such as 1’s to the included parts and 0’s to the others.

It would be an arduous task to discuss all the morphological operations available and how they work, so I’ll just give some details as I demonstrate them.

**HAND DRAWN DEMONSTRATION**

Here, we demonstrate the effects of two morphological operations: EROSION and DILATION.

Let me tell you ahead that I lost some of my scans, and without a scanner at home, I was forced to take pictures with my smartphone. The lighting was poor, and the alignment was not preserved well. However, we have recently learned in our Photonics class how to manipulate the histogram of the value plane of the image in GIMP. Thus, I was able to “save” the important details of the images. Even so, I am still sorry if the images are quite grainy.

Let us draw some basic shapes:

Here we have a **5x5 square**. Note that here, we are considering each box as one pixel. We know that for our computer, nothing is smaller than one pixel.

Here is a **2x2 square** as our structure element. It is what we use to apply morphological operations on a given shape. The dot represents what part of the structure element is taken as the origin.

In **dilation**, we expand or thicken a shape in each direction by an amount equal to the dimension of the structure element along that direction. **Erosion**, true to its name, does the opposite: it thins out the shape.

To explain how I came up with the upcoming results, let us first establish some “rules”.

Each square center is taken as a pixel location. Pixels must be “dots”, thus they are dimensionless. Thus, in the case of our 5x5 square, it will be interpreted as a matrix of 25 pixels. If our unit distance is the distance between two pixels, then our 5x5 square is actually a 4x4 square in the strictest sense. Our square structure element, therefore, is a 1x1 square.

To better demonstrate it, let us apply the said operations to our 5x5 square.

In our image to the left, the dotted lines determine the boundaries of the eroded image, while the broken lines determine those of the dilated image. Since broken and dotted lines are occasionally hard to discern from each other, I also took the liberty to **shade the final area of the eroded image with red lines**. **The x marks are the eroded parts**. **The red outline traces the boundaries of the dilated image**. This will be the convention for the rest of this demonstration.

Notice how the dilated image is one pixel greater in each dimension. The boundaries of the dilated image cover a 5x5 pixel area, 1 unit length greater than the original image in both dimensions. This increase corresponds to the 1x1 size of our structure element.

Imagine that for each pixel in the big square, I “**sticker-on**” the structure element, aligned by the dot-marked origin. We will expect that at the bottom and right edges, we will have an extra unit length of “stickered-on” areas. **That is how dilation works.**

Moving on to erosion, we see the opposite effect. Each dimension is reduced by 1. Let us again use the “sticker-on” analogy to better see how the process occurred.

Imagine now that before “stickering”, I first check if the sticker will still be within the bounds of the original shape. Since that will not be the case for the right and bottom sides, we have to skip stickering there. The total area is now smaller, missing 1 unit length on each side. **This is how erosion works.**
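The “stickering” arithmetic can be checked numerically. The activity itself uses Scilab’s IPD module; what follows is my own equivalent sketch in Python with `scipy.ndimage` (a stand-in, not the original code), applying the 2x2 SE to the 5x5 square:

```python
import numpy as np
from scipy import ndimage

# 5x5 square of ones, padded with a zero border (each entry is one "box")
square = np.zeros((9, 9), dtype=bool)
square[2:7, 2:7] = True

se = np.ones((2, 2), dtype=bool)  # 2x2 structure element

dilated = ndimage.binary_dilation(square, structure=se)
eroded = ndimage.binary_erosion(square, structure=se)

# Dilation grows the square to a 6x6 block of boxes, erosion shrinks it
# to 4x4 -- the +1 / -1 unit-length change predicted by hand.
print(dilated.sum(), eroded.sum())  # 36 16
```

Counting boxes rather than unit lengths: a 6x6 box region is exactly the 5x5 square grown by one unit length per dimension, matching the prediction above.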

Let us try a different structure element.

Here is a **2x1 structure element**. This is interpreted as being of length 1 vertically, and of length 0 horizontally.

We see here the results. With the origin on top, I expected that the bottom of our shape would extend by 1 more unit length after dilation, and decrease by the same amount after erosion.

Now let us test the horizontal dimension with a **1x2 structure element**.

In a lazy man’s way, we can essentially just rotate the previous images. Even so, strictly following the said method results in the right side extending or shortening with dilation or erosion, respectively, as shown in the image to the left.

A curious case is a **3x3 cross** as a structuring element. The origin is at the intersection of the “lines”. Note that 3 boxes correspond to 2 unit lengths.

Using the “stickering” rules I’ve laid out, I predicted that in dilation, the top and bottom should extend, as well as the left and right. Note that the original corners remain where they are, since the extensions are within the horizontal and vertical exclusively.

The extension for each is a total of two unit lengths, since the structuring element is extending 1 unit length from the origin for both directions.

We then expect that the erosion must decrease the size from all directions, 1 unit length for each.
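The corner-preserving behavior of the cross SE can be verified the same way; again a `scipy.ndimage` sketch of my own, not the original Scilab:

```python
import numpy as np
from scipy import ndimage

square = np.zeros((9, 9), dtype=bool)
square[2:7, 2:7] = True

cross = ndimage.generate_binary_structure(2, 1)  # the 3x3 cross SE

dilated = ndimage.binary_dilation(square, structure=cross)
eroded = ndimage.binary_erosion(square, structure=cross)

# The dilation is a 7x7 square minus its four corner boxes (49 - 4 = 45):
# the cross has no diagonal arms, so the original corners stay put.
# The erosion shrinks the square by 1 box on every side, leaving 3x3 = 9.
print(dilated.sum(), eroded.sum())  # 45 9
```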

Let us try a diagonal structure element. In light of the “laddering” effect I’ve discussed in Activity 4, our diagonal is a 2x2 structure of the form:

(0 1;1 0)

Now we see the interesting case where the displacements of the structuring element’s pixels from the origin have both vertical and horizontal components.

Again, using “stickering”, we see that the top row and right column are the vital pixel locations.

We see a diagonal extension for the dilation, and a diagonal shortening for the erosion.

I realized that there can be a **“reverse-stickering”** for dilation. Here, we pick an origin point on the base shape, and then “sticker” the shape on top of every box of the structure element. The total area covered is then the final result. The previous pictures can easily account for that.

With a similar approach, the common overlap of the said stickers will be the erosion. The position, however, will depend on the origin taken.

To put this in simple words:

#### Dilation is the union of the shape stickers, while erosion is the intersection of the stickers.

#### -Ralph Aguinaldo, 2015

If you are into the mathematical terms, here are some set theory equations to help you out.
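For the record (the scans of my equations were among the lost ones), the standard set-theoretic definitions, with $A$ the base shape, $B$ the structure element, and $A_b$ the translate of $A$ by vector $b$, are:

```latex
A \oplus B = \bigcup_{b \in B} A_b = \{\, a + b : a \in A,\ b \in B \,\}
\qquad \text{(dilation)}

A \ominus B = \bigcap_{b \in B} A_{-b} = \{\, p : B_p \subseteq A \,\}
\qquad \text{(erosion)}
```

The first line is exactly the “union of the shape stickers”, and the second the “intersection of the stickers”, up to the choice of origin.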

Of course, we have to try other shapes.

Here, we have a **3x4 right triangle** as our base shape to morph.

Note how it does not look like a triangle (I’m sorry, I have no other way of saying this). This is, once again, due to the pixelated nature of our images. I can’t exactly make a diagonal line, so the “staircase” was the closest thing we could have.

Using the 2x2 SE (structure element), we predict that the right and bottom will extend outwards in dilation, and will recede inward with the erosion. 1 unit distance for each change.

For the 2x1 SE, I predicted that we will observe the same extension of the bottom edge by 1 unit distance for dilation and shrinkage of the same distance for erosion.

We can see now that the 2x1 and 1x2 SEs are just components of the 2x2 square SE. Here, the right edges are the ones that move out and back. Note how the top row disappears with the erosion.

For the cross SE, we’ve seen that all the edges extend outwards by 1 unit distance. Thus, we predict the same result for each edge of the triangle. Note that since no point in the triangle accommodates the cross SE, the entire shape disappears in the erosion.

For the diagonal, we can see that each box replicates toward the upper right. The dilation is essentially the union of the shape and its translation to the upper right. The erosion result is very small, as the majority of locations do not accommodate the diagonal.

Here, we have a 5x5 cross shape, 1 box thick.

Using the 2x2 square SE, I predict that the dilation will increase the thickness of the cross, as well as extend the lengths by 1. As expected, the cross is too thin for anything to be left after erosion.

With the 2x1 SE, the dilation is predicted to stretch the cross by 1 unit length vertically. The erosion must remove the horizontal bar and shorten the vertical bar.

With the 1x2 SE, we see the same effects but along the horizontal.

Interestingly, the use of the cross SE resulted in a dilated shape that resembles a diamond, or a rotated square. The erosion resulted in the cross being reduced to its central box, which is the only place where a cross can fit.
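This diamond-and-central-box result is easy to confirm numerically; once more a `scipy.ndimage` sketch of mine rather than the hand-drawn original:

```python
import numpy as np
from scipy import ndimage

# The 5x5 plus sign, one box thick, padded with zeros
plus = np.zeros((11, 11), dtype=bool)
plus[5, 3:8] = True
plus[3:8, 5] = True

cross = ndimage.generate_binary_structure(2, 1)  # 3x3 cross SE

dilated = ndimage.binary_dilation(plus, structure=cross)
eroded = ndimage.binary_erosion(plus, structure=cross)

# Erosion leaves only the central box: it is the only pixel whose four
# neighbours all belong to the plus.  Dilation fattens the plus into the
# diamond-like shape described above.
print(eroded.sum(), dilated.sum())  # 1 25
```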

The predicted result for the diagonal SE is easily conceivable for dilation. Note how I made an initial mistake with the erosion. A left leaning diagonal is all that was left.

A special case that should be investigated is when the base shape has a hole. The image on the left is a 10x10 hollow square 2 boxes thick.

In the previous cases, we saw that the 2x2 SE results in an addition or reduction of 1 unit length in each dimension. Here, the predicted dilation is an 11x11 hollow square 3 boxes thick, while the erosion is a 9x9 hollow square 1 box thick. The same behaviors are still observed.

Dilating with the 2x1 SE, the hollow square became thicker along the top and bottom, while the erosion thinned it out.

We see a horizontal version with the 1x2 SE, with the left and right columns being the ones changing in thickness.

The 3x3 cross SE predictions are very interesting. While the dilation is easily imaginable, the erosion is a curious case. Only the inner corner boxes “survived” after the erosion.

The diagonal SE is once again easily done for the dilation, as I basically translate a duplicate of the shape. The erosion thinned out the shape diagonally, preserving the upper left and lower right corner thicknesses.

### CHECKING BY SIMULATION

The code in Fig. 30 is the Scilab code used to simulate the performed dilations and erosions. The first 14 lines are just the generation of the shapes as matrices. The function **makeshape()** is used to add black spacing around the shapes.

**Lines 24–27** generate the shapes, while **lines 28–32** create the structure elements using the **CreateStructureElement()** function of the **Image Processing Design (IPD)** module of Scilab. **Lines 34–40** concatenate the operations into one image matrix.

We can see on the images on the left the result of the dilation and erosion for different SEs.

Comparing to the predictions images above, we see that we got the correct final shapes.

However, it is noticeable that the positions are shifted compared to some of the predictions. This is due to our choice of origin for the SEs.

We can then conclude that we get the same shape regardless of the choice of origin; the main difference is the apparent translation compared to the predictions.

### APPLICATION: IDENTIFICATION OF CANCER CELLS AND NORMAL CELLS (HISTOLOGY)

An application of morphological operations is sorting a group of smaller shapes in an image according to their shape or size. Note that morphological operations, as explained above, are suited for binarized images (much like our results in Activity 7: Image Segmentation).

To further expound on the said application, consider the case of an image with several ROIs (regions of interest). If the value distributions overlap, our segmentation is bound to detect artifacts and errors. Morphological operations are the ones that can take care of cleaning up these artifacts.

Our test image is an image of circles as shown below. Let them, for now, represent healthy, normal cells (they are actually punched-out paper).

How do we know when a cell is unhealthy? True to the way the human mind works, we need a comparison. We must know what a healthy cell looks like first.

Our first goal is to determine the shape and size of a healthy cell.

We first crop the image above into sub-images randomly. The sub-images can have overlaps. I used the code below.

Here, I took ten 256 pixel x 256 pixel portions of the image at random, and saved them into JPEG formats. Three of the images are shown below in Fig. 37. Notice the overlaps in the images.

The next step is to segment the sub-images, separating the background from the circles. The code above goes as follows within the for loop:

- Read the sub-image
- Find the value with the highest distribution
- Find the first value after the maximum that has count less than 60
- Take the value from the previous step, add 5 and set as threshold
- Segment the sub-image by the threshold and save as JPEG
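The steps above can be sketched in code. The original is Scilab; below is my Python/NumPy rendition, where the function names (`find_threshold`, `segment`) and the synthetic test data are mine, while the count cutoff of 60 and the offset of 5 come from the steps listed:

```python
import numpy as np

def find_threshold(gray, count_cutoff=60, offset=5):
    """Pick a threshold just past the background peak of the histogram.

    Following the listed steps: locate the most frequent gray value,
    scan upward for the first value whose count drops below the cutoff,
    then add a small offset.
    """
    counts = np.bincount(gray.ravel(), minlength=256)
    peak = int(np.argmax(counts))      # value with the highest distribution
    for v in range(peak + 1, 256):
        if counts[v] < count_cutoff:
            return v + offset          # first sparse value past the peak
    return 255                         # fallback: histogram never thins out

def segment(gray, threshold):
    """Binarize: 1 where the pixel exceeds the threshold, else 0."""
    return (gray > threshold).astype(np.uint8)
```

For example, an image that is mostly gray level 50 with a thin shoulder at 51–53 yields a threshold of 58, and `segment()` then keeps only the bright circles.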

We see above the thresholded version of the sub-images from Fig. 37. Note that the function **SegmentByThreshold()** splits the images into 1’s and 0’s depending on whether they are above the threshold or not, respectively.

Now, we investigate which morphological operations we need to apply. Let us look into 4 operations, namely **Opening, Closing, Top Hat, and Bottom Hat**. For additional information, here is a useful link.

The first operation is **Opening**. It is an erosion followed by a dilation. Since the erosion happens first, it will eliminate the very small objects that cannot accommodate the structure element. The dilation then follows to restore the eroded images to their original size.

The second operation is **Closing.** You guessed it right, it is a dilation followed by an erosion. Since the dilation occurs first, small holes are removed as the inner lining of the shapes fills inward. The restoring erosion will skip the closed holes, thus making this operation good for closing up holes.

The third operation is **Top Hat**. It is the difference between the original image and its opening.

The last operation is **Bottom Hat**. It is the difference between the closing and the original image.
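To make the four operations concrete, here is a small `scipy.ndimage` sketch (my Python stand-ins for the IPD calls, not the activity’s code) on a toy image with one isolated speck artifact and one “cell” containing a hole:

```python
import numpy as np
from scipy import ndimage

se = np.ones((3, 3), dtype=bool)

img = np.zeros((20, 20), dtype=bool)
img[5:13, 5:13] = True   # a "cell"
img[8, 8] = False        # ... with a one-pixel hole
img[2, 2] = True         # an isolated one-pixel speck (an artifact)

opened = ndimage.binary_opening(img, structure=se)
closed = ndimage.binary_closing(img, structure=se)
top_hat = img & ~opened      # what the opening removed
bottom_hat = closed & ~img   # what the closing added

print(opened[2, 2], closed[8, 8])  # False True
```

Opening deletes the speck (too small for the SE) but cannot fill the hole; closing fills the hole but keeps the speck. The top hat is then left holding the speck, and the bottom hat the hole.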

To better show this, let us try it on our binarized sub-image segmentation.

The Scilab code used is shown on the left.

I used the IPD functions **CloseImage()**, **OpenImage()**, **TopHat()**, and **BottomHat()** to do the operations with a circle of radius 10.

We see the results of the operations above. The best operation for removing the artifacts and unions is **Opening**.

I then tried to determine what shape should be used. I assumed that I could narrow it down to two general shapes: a square to represent the straight edges, and a circle for the curved edges. My money was on the circle, but I still tried both.

In the code to the left, I tried using a square of side 10 pixels, and a circle of radius 10 pixels as my SEs for opening.

I’ll spoil you now: the circle was much better, so I proceeded to test the radii for the values 4, 6, 8, and 10.

We can see above that the circle made the better cleaning of the artifacts, while the square made crude cuts around the ROIs.

Above, we see that increasing the radii improves the cleaning of artifacts. I eventually settled at a radius of 12, as a radius of 13 already erodes most of the circles.

Here, I wrote code to apply opening on all sub-images using a circle of radius 12 as the SE. I also concatenated them together, shown in Fig. 45, spaced apart by a matrix of zeroes 10 pixels thick. This would make it easier to process the area distribution.

Now, we need a way to identify the circles in the image above. Shown below is the code I used on one of the sub-images.

In the code on the left, I opened a clean sub-image and used the function **SearchBlobs()** on it. The said function identifies the blobs in the image and replaces their 1’s with integer values corresponding to their order.
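For readers without IPD, `scipy.ndimage.label` plays the same role as **SearchBlobs()** (a rough equivalent I am supplying, not the function the activity used):

```python
import numpy as np
from scipy import ndimage

# Each connected region of 1's gets its own integer label;
# 0 remains the background.  Default connectivity is 4-connected.
binary = np.array([[1, 1, 0, 0],
                   [1, 1, 0, 1],
                   [0, 0, 0, 1]], dtype=np.uint8)

labels, nblobs = ndimage.label(binary)
print(nblobs)  # 2
```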

The image above is the result of the concatenation in the for loop of the recent code. Put your attention on **blob #10**. There is no blob, yet it was identified. Gio Jubilo also experienced this glitch. We’ve tried to color in whatever tiny pixel is being detected, but to no avail. The size itself is 0. Ma’am Jing is encouraging me to e-mail the Scilab or IPD author/s about this.

Now in this code, I find the blobs of the image in Fig. 45 and take the distribution of the area values in the blob.

If you are curious about the lines in the second and third cluster, here is what happened.

The peaks of the histogram are very varied. Not shown here is the peak at the area value of 655698. The bin preceding it is at 1647. This will just blow up any statistical step I attempt. It is the reason my **bar()** function, which plots the histogram, caps at **Nbin-1**.

It took quite a while to understand why this happened, as it also occurred when processing the individual cleaned sub-images.

Eventually, I was struck by the relative magnitude of the difference with the other bin values. Upon adding the individual blob areas, I observed that…

`sum(bins .* sizes) == TOTAL IMAGE AREA`

This can only mean that the black background in itself is being identified as a blob. This is not supposed to happen. Another e-mail incoming.
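Whatever the cause of the IPD behavior, a similar pitfall exists in other tools if one is not careful: a histogram over *all* label values includes label 0, which is the background. A `scipy` sketch of the workaround I would use (my own, not the activity’s code) is to drop bin 0:

```python
import numpy as np
from scipy import ndimage

binary = np.zeros((50, 50), dtype=np.uint8)
binary[10:20, 10:20] = 1   # blob of area 100
binary[30:35, 30:35] = 1   # blob of area 25

labels, n = ndimage.label(binary)
areas = np.bincount(labels.ravel())

# areas[0] is the background pixel count -- the huge fake "blob" that
# blows up the statistics.  Dropping it leaves the real blob areas.
blob_areas = areas[1:]
print(areas[0])                    # 2375 background pixels
print(sorted(blob_areas.tolist()))  # [25, 100]
```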

I limited what part of the histogram I will use by zooming in on the high histogram peaks.

Thus, I worked around these peaks to determine key values.

**devsize** is the standard deviation of the areas.

**meansize** is the average area.

**minsize** is the average less the deviation.

**maxsize** is the average plus the deviation.

**minradius** is the corresponding radius for a circle of size *minsize*.

**maxradius** is the corresponding radius for a circle of size *maxsize*.
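Computed in code, the key values look like this. The area values below are made-up placeholders; only the formulas matter, including the circle relation r = sqrt(A / pi):

```python
import numpy as np

# Hypothetical blob areas (pixel counts) taken near the histogram peaks
areas = np.array([480.0, 500.0, 520.0])

meansize = areas.mean()      # average blob area
devsize = areas.std()        # standard deviation of the areas
minsize = meansize - devsize # average less the deviation
maxsize = meansize + devsize # average plus the deviation

# For a circular blob, area = pi * r^2, so r = sqrt(area / pi)
minradius = np.sqrt(minsize / np.pi)
maxradius = np.sqrt(maxsize / np.pi)
```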

Using these data, I can now process the image below.

Observe that here, there are some very large circles, which will correspond to the unhealthy cancer cells.

On the code to the left, I repeated all the previous steps to identify the blobs in the image.

Here is the result of binarizing the image by thresholding. Note the tiny pixels and the incomplete circles.

Here, we have the artifacts cleaned out. Note some of the clipped cells have disappeared.

This code is an extension of the code in Fig. 48 to process the binarized and cleaned image.

I used the function **FilterBySize()** which takes a blob-labeled image, and retains all the blobs that are within the threshold parameters, which here are the **minsize** and **maxsize**.

I then multiply the original binarized image with the negative of the filtered image. This will result in the normal cells being replaced with negative numbers, thus being interpreted as black.
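This filtering step can be sketched with `scipy` as well (hypothetical blob sizes and thresholds; **FilterBySize()** is the actual IPD call, and `np.isin` is my stand-in for it):

```python
import numpy as np
from scipy import ndimage

binary = np.zeros((40, 40), dtype=np.uint8)
binary[2:7, 2:7] = 1      # area 25: a "normal cell"
binary[10:30, 10:30] = 1  # area 400: a "cancer cell"

labels, _ = ndimage.label(binary)
areas = np.bincount(labels.ravel())
areas[0] = 0  # ignore the background label

minsize, maxsize = 10, 100  # hypothetical size thresholds
normal_ids = np.where((areas >= minsize) & (areas <= maxsize))[0]
normal = np.isin(labels, normal_ids)   # analogue of FilterBySize()

# Black out the normal-sized cells; only the oversized blobs remain
cancer = binary.astype(bool) & ~normal
print(cancer.sum())  # 400
```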

Here, we can notice that a lot of the small circles are still found. I observed that the smaller circles are those which didn’t make it into the interval specified in **FilterBySize()**.

Also, we can observe touching circles that form very large blob areas. Thus, I decided to increase the size of our SE circle.

I used the exact value of 12.81626, which is our maxradius.

Here, we can see that we got rid of the touching circles, though we also lost much of the smaller cells.

Now we have our long awaited result. Above is the result of my second attempt. Notice how the remaining circles correspond to the big circles of the original image. I also noticed that by using only a circle of size 13 as my SE, I can easily isolate the large circles, which is a much easier approach.

### BONUS

Of course, we can’t just settle for that.

Here we have a medical scan image. Say, we want to separate the solid tumors and dense organs from the rest of the tissue.

To obtain the image in Fig. 60, I first inspected the histogram of the original image and thresholded it both above and below. I clipped the bottom part to zero to avoid detecting non-organs.

For Fig. 61, I used a combination of closing and opening using two circular SEs of different radii. This took a bit of trial and error.

Finally, we get to Fig. 62 by using **FilterBySize() **to get rid of the smaller artifacts. And Voila! We have now separated the dense organ areas.

The entire entry has been a long one, I have to say. The activities just keep getting more and more interesting, and the amount of time I spent discussing this is proof of that.

First of all, I would like to thank **Miss Eloisa Ventura** for answering my questions about the morphological operations. Also to **Ma’am Jing** for helping me confirm my glitch suspicions. To **Mario Onglao** for brainstorming with me my concerns with dilation and erosion. And as usual, to **Gio Jubilo**, for the obligatory discussions we do. Also, I would like to cite **Computer Vision by Shapiro and Stockman** as my reference for the technical parts of this entry.

The discussion has been informative and detailed, and I believe my analogies for the layman show my deep understanding of the topics. The figures are separated well, labeled independently, and presented in order, with clear visual cues for identifying the parts. Of course, I was once again able to incorporate other lessons to solve my problems. My discussions go beyond the requirements, and I put focus on the errors, limitations, and possibilities of the methods. I have earned my good ol’ **12 points**.