Cartoon Images → ASCII Art
Harnessing the power of computer vision to generate ASCII art
In this blog, we are going to see how computer vision can be used to convert cartoon images to ASCII art. If you carefully observe the image shown below, you will quickly deduce that this is fundamentally a problem of edge detection followed by estimating a set of characters that look similar to the edges.
Edges can be detected in several ways: dilation followed by subtraction with the original image, Canny edge detection, a deep neural network trained for edge detection (DexiNed), and many others.
We will use dilation followed by subtraction with the original image, since it yields perfectly fine edges from cartoon images with minimal computation. This method, however, generally outputs edges that are more than one pixel wide, which reduces the effectiveness of the character-matching stage described next. We therefore apply Guo-Hall thinning to reduce the edges to single-pixel width.
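To make the dilation-and-subtraction step concrete, here is a minimal, dependency-free sketch in NumPy. In practice you would reach for `cv2.dilate` and `cv2.ximgproc.thinning` (which implements Guo-Hall), but the core idea fits in a few lines: dilate the image, subtract the original, and only boundary pixels survive.

```python
import numpy as np

def dilate3x3(img):
    """Morphological dilation with a 3x3 square kernel (a per-pixel max filter)."""
    h, w = img.shape
    padded = np.pad(img, 1, mode="edge")
    # Stack the nine shifted views and take the per-pixel maximum.
    views = [padded[r:r + h, c:c + w] for r in range(3) for c in range(3)]
    return np.max(views, axis=0)

def edges_by_dilation(img):
    """Edges = dilated image minus original; only boundary pixels light up."""
    return dilate3x3(img) - img

# A 5x5 binary "blob": after dilation minus original, the interior cancels
# out and only the one-pixel ring around the blob remains.
blob = np.zeros((5, 5), dtype=np.uint8)
blob[1:4, 1:4] = 1
edges = edges_by_dilation(blob)
```

Note that this variant marks the pixels just *outside* the shape; subtracting an eroded image from the original instead would mark the shape's own boundary pixels. Either works for cartoon line art, where regions are flat-coloured.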
Next, sub-images are extracted using the sliding window technique, and for each sub-image we identify the character that best represents it. This character estimation can be done by comparing perceptual hashes or by using a Convolutional Neural Network (CNN).
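The sliding window step can be sketched as a reshape trick in NumPy: split the edge map into non-overlapping tiles, one per output character. The function name and window sizes below are illustrative, not taken from the post's implementation.

```python
import numpy as np

def sliding_windows(img, win_h, win_w):
    """Split an edge map into non-overlapping sub-images of size win_h x win_w.

    Returns a 4-D array of shape (rows, cols, win_h, win_w); each [i, j]
    slot holds the sub-image that one output character will replace.
    """
    h, w = img.shape
    rows, cols = h // win_h, w // win_w
    cropped = img[:rows * win_h, :cols * win_w]   # drop ragged borders
    return (cropped
            .reshape(rows, win_h, cols, win_w)
            .swapaxes(1, 2))                      # -> (rows, cols, win_h, win_w)

# An 8x8 map split into four 4x4 tiles.
edge_map = np.arange(64).reshape(8, 8)
tiles = sliding_windows(edge_map, 4, 4)
```

Non-overlapping windows keep the output grid the same shape as the final character array; an overlapping stride would need extra bookkeeping to resolve which character each pixel votes for.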
We use a CNN to determine the character that best represents the thinned edges in each sub-image, since it estimates characters more accurately than comparing perceptual hashes. Passing the sliding window over the image then yields a 2-dimensional array of characters that represents the thinned edges.
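Since the trained CNN itself is not reproduced here, the sketch below shows the simpler template-matching alternative the post mentions: each tile is compared against a small set of glyph templates and the closest one (by Hamming distance) wins. The 3x3 glyphs are hand-made and purely illustrative; a real system would rasterise the actual font's characters at the window size.

```python
import numpy as np

# Tiny hand-made 3x3 "glyph" templates (hypothetical; a real system would
# render the font's characters at the same size as the sliding window).
GLYPHS = {
    "|": np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]], dtype=np.uint8),
    "-": np.array([[0, 0, 0], [1, 1, 1], [0, 0, 0]], dtype=np.uint8),
    "/": np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]], dtype=np.uint8),
    " ": np.zeros((3, 3), dtype=np.uint8),
}

def best_char(tile):
    """Pick the glyph whose pixels differ least from the tile (Hamming distance)."""
    return min(GLYPHS, key=lambda ch: int(np.sum(GLYPHS[ch] != tile)))

def to_ascii(edge_map, win=3):
    """Slide a win x win window over the edge map, mapping each tile to a char."""
    h, w = edge_map.shape
    return ["".join(best_char(edge_map[r:r + win, c:c + win])
                    for c in range(0, w - w % win, win))
            for r in range(0, h - h % win, win)]

# A 3x6 edge map holding a vertical stroke and a diagonal stroke.
demo = np.array([[0, 1, 0, 0, 0, 1],
                 [0, 1, 0, 0, 1, 0],
                 [0, 1, 0, 1, 0, 0]], dtype=np.uint8)
art = to_ascii(demo)
```

A CNN classifier would replace `best_char` with a forward pass over the tile, outputting a probability over the character set; the surrounding sliding-window loop stays the same.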
Want to convert a cartoon image to ASCII art? Use the link below for a website that implements the above-mentioned process.
Feel free to access the source code using the link below:
I hope you enjoyed reading this blog. Have a great day!