Warhol II: Compose your Overlay on Top of the Detected Face Hassle-Free on iOS
Add your Filter Overlay without having to deal with coordinates in Warhol
Some time ago I talked about Warhol, a library of my own that detects a face from a camera or image input and passes the related information to the client. That way, the client can draw on top of the face or process the face data to accomplish its requirements.
I find it very convenient because the application using Warhol can forget about the cumbersome process of dealing with Apple's Vision and AVFoundation frameworks, focusing on what really makes an application different.
With the same goal in mind, making the developer's life easier, I proposed that we could go one step further down this path and include in the library one of the most common uses of face detection: drawing (silly) images on top of each feature. How cool would it be if the client could just pass an image for each feature and forget about drawing and coordinates! That way they could compose a face overlay with a minimum of hassle.
With that ambition I started the ball rolling.
To draw the images we will follow the same approach we implemented when first developing Warhol: the drawing, in this case the image drawing, is performed in a transparent view that is added on top of the camera. The face detection engine updates the view model containing the face data and asks the view to redraw; draw(_ rect: CGRect) is then called on our view, where the drawing magic happens.
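A minimal sketch of that setup could look like the following. The type and property names here (FaceOverlayView, landmarks) are assumptions for illustration, not Warhol's actual API:

```swift
import UIKit

// Hypothetical sketch of a transparent overlay view that sits on top of
// the camera preview. The detection engine pushes new face data into it.
final class FaceOverlayView: UIView {

    // Updated by the face detection engine; triggers a redraw on change.
    var landmarks: [CGPoint] = [] {
        didSet { setNeedsDisplay() } // schedules draw(_:) on the main thread
    }

    override init(frame: CGRect) {
        super.init(frame: frame)
        backgroundColor = .clear // transparent, so the camera shows through
        isOpaque = false
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) not supported") }

    override func draw(_ rect: CGRect) {
        // The drawing magic happens here: images are rendered on top of
        // the detected landmarks each time the view model changes.
    }
}
```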
As I pointed out earlier, the client should only be required to provide an image for each face landmark. They can forget about coordinates or other extra information and focus on how the design looks.
It is therefore Warhol’s task to deal with the coordinates, so we can draw the provided image in the right place. Given that we obtain the landmark area as an array of points, we should first convert it into the rect where the image will be drawn:
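Conceptually, the conversion can be sketched like this; `boundingRect(for:)` is a hypothetical helper for illustration, not Warhol's actual implementation:

```swift
import Foundation

// Computes the smallest rect containing all the landmark points.
func boundingRect(for points: [CGPoint]) -> CGRect {
    guard let first = points.first else { return .zero }
    var minX = first.x, maxX = first.x
    var minY = first.y, maxY = first.y
    for point in points.dropFirst() {
        minX = min(minX, point.x); maxX = max(maxX, point.x)
        minY = min(minY, point.y); maxY = max(maxY, point.y)
    }
    return CGRect(x: minX, y: minY, width: maxX - minX, height: maxY - minY)
}
```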
Once we have the target rectangle, we can add the image with layout.image.draw(in: rect). The outcome is:
Oh, it looks OK, but that is probably not what the client wants. Given that the detected eye areas are actually quite small, we should provide a way to resize the image, so it can be bigger (or smaller) according to the app input. This should also be customizable per landmark: some items may need to be bigger while others do not. To tackle that, I created a SizeRatio struct with which the developer can express how big the image should be in relation to the original feature area:
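A struct along these lines could express that ratio; the exact field names are assumptions on my part, not necessarily Warhol's declaration:

```swift
import Foundation

// Expresses how big the image should be relative to the detected
// landmark area. width/height are independent multipliers.
struct SizeRatio {
    let width: CGFloat
    let height: CGFloat

    // 1:1 with the detected area, i.e. no resizing.
    static let identity = SizeRatio(width: 1, height: 1)
}

// Example: keep the eye image as wide as the detected area,
// but make it two and a half times taller.
let eyeRatio = SizeRatio(width: 1.0, height: 2.5)
```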
Notice how we can specify a different width and height ratio. This is especially relevant for the eyes case, where the width should be the same as the original but the height much bigger.
Once we have the SizeRatio, we resize the rect, increasing its area while keeping its center point the same as the original's:
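One way to implement that resize is to scale the width and height and shift the origin so the center stays put. This is a sketch under my own naming assumptions, not Warhol's exact code:

```swift
import Foundation

// Mirrors the SizeRatio struct described above; names are assumptions.
struct SizeRatio {
    let width: CGFloat
    let height: CGFloat
}

extension CGRect {
    // Returns a rect scaled by the given ratio, centered on the original.
    func resized(by ratio: SizeRatio) -> CGRect {
        let newWidth = width * ratio.width
        let newHeight = height * ratio.height
        // Recompute the origin so midX/midY remain unchanged.
        return CGRect(x: midX - newWidth / 2,
                      y: midY - newHeight / 2,
                      width: newWidth,
                      height: newHeight)
    }
}
```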
Now it looks much better: we made the eyes bigger while keeping the nose the same size:
Yeah! That’s acceptable. We found the perfect size for the eyes, and the nose did not need any processing.
Real time updates
At this point the outcome was good, but there was one further concern. Since, for simplicity’s sake, I was using UIKit to draw the image instead of CoreGraphics, the image refresh might be slow when the feature areas change, as when the eyes blink or the mouth opens. UIKit is built on top of CoreGraphics, which is a lower-level framework, providing more flexibility at the cost of more complexity. For instance, with CoreGraphics we can draw on a background thread because it is thread safe, which is not possible with UIKit. Happily, UIKit proved itself to be quite reliable for this case:
That is nice! The layout updates in real time without any perceptible delay.
We are almost done, but there is one more functionality widely used in Instagram Filters that I could add to my project:
In this case, the filter images for the eyes are not placed exactly on top of them, but a little bit below. To handle this case, I added an offset property to the ImageLayout struct. Using it, we can draw the images wherever we wish:
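Applying such an offset could be sketched as follows; the shape of ImageLayout here is an assumption for illustration, not Warhol's actual declaration:

```swift
import Foundation

// Hypothetical minimal ImageLayout carrying only the offset;
// the real struct also holds the image and size ratio.
struct ImageLayout {
    let offset: CGPoint
}

// Shifts the landmark rect by the layout's offset before drawing.
func targetRect(for landmarkRect: CGRect, layout: ImageLayout) -> CGRect {
    landmarkRect.offsetBy(dx: layout.offset.x, dy: layout.offset.y)
}
```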
Wrapping it up
So this is it! In this story we have seen how you can now use Warhol to:
- Create a Face overlay by adding filter images just by specifying the face feature where they should be placed, without having to deal with their coordinates.
- Change the size of each of them at will to create more visual effects.
- Ensure that the overlay is refreshed in the camera in real time, without any visible delay or poor performance.
- Place the image layouts with an offset relative to the face feature, to add more flexibility to your filters.
As I mentioned in the previous story, there is plenty of room for improvements and new features in the field of face detection. In the near future I will be focusing on ARKit to animate facial expressions in real time. Only devices with a TrueDepth camera (iPhone X onwards) support it, so for the rest we will keep using the Vision framework as described here.
As always, comments and proposals are more than welcome, just drop me a comment or message here. And of course, I would love your contributions to Warhol: PRs with new ideas, improvements, fixes, and suggestions are appreciated. This project is under the MIT license, and in case of issues please use the dedicated section on GitHub.