Traditional Image Semantic Segmentation for Core Samples
This is my first time writing a blog post, so today we are going to talk about how you can segment an image using traditional methods.
In this blog we will use a core sample as an example. If you don't know what a core sample is, it's a cylindrical sample taken from an oil well or a mining area, and we use it to see what the geology down there looks like.
So our mission here is to write code that can differentiate between the dark texture in the core sample and the brighter part. This might help a geologist by saving some time and introducing some automation into the workflow.
The first step:
We need to find an algorithm that can tell the two textures above apart. If you search for semantic segmentation using traditional methods, you will find a lot of ready-made filters in libraries like Skimage, Scipy, and OpenCV. In our case we will use Skimage, because I like it.
Skimage offers a lot of filters for this task, but the one that gives us a good end result is the entropy filter.
First, we need to import the essential libraries for this mission.
We need Matplotlib to show our final results, Skimage for the entropy filter and the thresholding (the 2nd step), and Numpy just to complete our importing cocktail.
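Assuming the post used Skimage's rank-entropy filter together with Otsu thresholding (as the later steps suggest), the imports probably looked something like this:

```python
import matplotlib.pyplot as plt             # plotting the results
import numpy as np                          # array handling
from skimage import io                      # reading the image
from skimage.filters.rank import entropy    # local entropy filter
from skimage.filters import threshold_otsu  # Otsu thresholding (2nd step)
from skimage.morphology import disk         # disk-shaped neighborhood for the entropy filter
```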
To open our image in Python, we can use the following line of code.
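A minimal sketch of this step, assuming `skimage.io` is used for reading. Since I don't have the original photo, the snippet first writes a small synthetic grayscale image to disk and then reads it back the same way you would read your own file:

```python
import numpy as np
from skimage import io

# Stand-in for the real photo: write a small random grayscale image to disk.
# With your own data you would skip this and point imread at your file.
rng = np.random.default_rng(0)
io.imsave("core_sample.png", (rng.random((64, 64)) * 255).astype(np.uint8))

# as_gray=True guarantees a single-channel 2-D float array in [0, 1],
# which is what the filters below expect
img = io.imread("core_sample.png", as_gray=True)
print(img.shape)  # (64, 64)
```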
Now let's talk a little about the entropy filter. It can detect subtle variations in the local gray-level distribution; the next picture will show you how it works.
To apply this filter to our core sample image, we just use the following line of code.
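A sketch of that step. The `disk(5)` neighborhood radius is my own choice for illustration (tune it to your image resolution), and the synthetic image below stands in for the core-sample photo: a bright uniform matrix with a darker, noisier band, mimicking the two textures:

```python
import numpy as np
from skimage.filters.rank import entropy
from skimage.morphology import disk
from skimage.util import img_as_ubyte

# Synthetic stand-in: bright uniform texture with a darker, noisier band
rng = np.random.default_rng(0)
img = np.full((64, 64), 0.8)
img[20:44, :] = rng.uniform(0.05, 0.45, (24, 64))

# Slide a disk-shaped window over the image and compute the Shannon entropy
# of the local gray-level histogram; rank filters want integer images,
# hence img_as_ubyte
entropy_img = entropy(img_as_ubyte(img), disk(5))
print(entropy_img[5, 32] < entropy_img[32, 32])  # True: uniform area scores lower than noisy area
```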
And to show the result of this filter, we will use Matplotlib to plot these new values, using the following line of code.
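Assuming a plain `plt.imshow` call (Matplotlib's default viridis colormap is what produces the green-to-yellow look described below), the plotting step might be:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend; drop this line when working locally
import matplotlib.pyplot as plt
from skimage.filters.rank import entropy
from skimage.morphology import disk
from skimage.util import img_as_ubyte

# Same synthetic stand-in as before
rng = np.random.default_rng(0)
img = np.full((64, 64), 0.8)
img[20:44, :] = rng.uniform(0.05, 0.45, (24, 64))
entropy_img = entropy(img_as_ubyte(img), disk(5))

plt.imshow(entropy_img)            # default viridis colormap
plt.colorbar(label="local entropy")
plt.savefig("entropy_result.png")  # or plt.show() in an interactive session
```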
And here we can see the result of the entropy filter on our core sample.
Now we can see that the entropy filter produced a yellowish-green color for the darker texture in the original picture and a normal green for the other texture. But from this result alone we can't precisely differentiate between the two textures; we need a border line between the darker and brighter ones. To draw it, we will use the Otsu threshold, which estimates the value that separates the darker texture's pixel values from the other texture's pixel values. We can do so using the following line of code.
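A sketch of the Otsu step on the same synthetic entropy image; `threshold_otsu` returns a single scalar that best splits the histogram into two classes:

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.filters.rank import entropy
from skimage.morphology import disk
from skimage.util import img_as_ubyte

# Same synthetic stand-in as before
rng = np.random.default_rng(0)
img = np.full((64, 64), 0.8)
img[20:44, :] = rng.uniform(0.05, 0.45, (24, 64))
entropy_img = entropy(img_as_ubyte(img), disk(5))

# Otsu picks the cut that minimizes the intra-class variance of the two
# resulting groups of entropy values
thresh = threshold_otsu(entropy_img)
print(0 < thresh < entropy_img.max())  # True: the border line sits inside the value range
```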
Using the previous code we drew a line between the two textures; now we just need to put this line into action. That's why we will build a boolean mask from this threshold, using the following line of code.
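The mask itself is a single NumPy comparison; the setup from the earlier steps is repeated here so the snippet runs on its own:

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.filters.rank import entropy
from skimage.morphology import disk
from skimage.util import img_as_ubyte

# Same synthetic stand-in as before
rng = np.random.default_rng(0)
img = np.full((64, 64), 0.8)
img[20:44, :] = rng.uniform(0.05, 0.45, (24, 64))
entropy_img = entropy(img_as_ubyte(img), disk(5))
thresh = threshold_otsu(entropy_img)

# True where the local entropy is at or below the Otsu border line
# (the smoother, brighter texture); False for the noisier, darker texture
binary = entropy_img <= thresh
print(binary.dtype)  # bool
```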
The previous line of code constructed an image consisting of true and false values (ones and zeros). The true values are the pixels whose entropy is less than or equal to the threshold, and the false values are the rest of the pixels.
Now we have only two different textures in the output image, and we can easily calculate the area ratio between them using the boolean mask.
To calculate the area ratio, you need the following lines of code.
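A sketch of the ratio calculation and the pie chart, again on the synthetic stand-in image. The labels are my own; the yellow and purple in the original post's chart come from Matplotlib's default colors:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend; drop this line when working locally
import matplotlib.pyplot as plt
from skimage.filters import threshold_otsu
from skimage.filters.rank import entropy
from skimage.morphology import disk
from skimage.util import img_as_ubyte

# Same synthetic stand-in as before
rng = np.random.default_rng(0)
img = np.full((64, 64), 0.8)
img[20:44, :] = rng.uniform(0.05, 0.45, (24, 64))
entropy_img = entropy(img_as_ubyte(img), disk(5))
binary = entropy_img <= threshold_otsu(entropy_img)

# On a boolean mask, mean() is exactly the fraction of True pixels
smooth_frac = binary.mean()      # brighter / smoother texture
rough_frac = 1.0 - smooth_frac   # darker / noisier texture

plt.pie([rough_frac, smooth_frac],
        labels=["darker texture", "brighter texture"],
        autopct="%1.1f%%")
plt.savefig("area_ratio.png")  # or plt.show() in an interactive session
```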
And the final result is this pie chart, which shows the ratio of the yellow area to the purple area.
This segmentation tutorial was inspired by Professor Jörg Benndorf and by the DigitalSreeni YouTube channel.