How do Snapchat’s filters work?

Anira Darouichi
4 min readJul 27, 2017
Left image: point-mask detection. Right image: man using the Rainbow filter (source: http://weplay.co/should-you-incorporate-snapchat-within-your-marketing-strategy/)

On Snapchat you can transform your face into a dog or even swap faces with a celebrity. This is all made possible by Snapchat’s facial recognition technology, which powers the selfie filters we use every day. Although the lenses are easy for us to use, there’s a lot of work that goes on behind the scenes.

Snapchat’s engineers are constantly testing and refining their facial recognition technology and creating new filters to keep users entertained. The lenses we use today originated with Looksery, a Ukrainian company whose face-tracking app Snapchat acquired in 2015. Since then, Snapchat has continued to build new filters and improve its facial recognition technology, so that users can even swap faces with the person sitting next to them. Many complex pieces of computer vision come together to make our filter-using experience feel effortless.

How do humans detect faces?

For thousands of years humans have had the exceptional ability to recognize people by sight, and computers still haven’t fully caught up. According to the BBC Earth Lab, even a state-of-the-art system matches a face to the correct person only about 97.35% of the time, and humans today remain more accurate than computers at recognizing faces. Our exceptional ability comes from designated areas of the brain dedicated to recognizing faces and their configurations. Each area performs a different function, and together they process facial recognition:

  • Fusiform face area: determines whether we are merely observing facial features or whether those features form a face.
  • Occipital face area: determines which parts of a face are the nose, mouth, eyes, etc.
  • Superior temporal sulcus: observes facial expressions to determine others’ emotions.

So, if humans have a whole region of the brain for detecting faces, how does a computer’s facial recognition technology work?

Computer vision analyzes faces in several steps as well. When the computer begins analyzing a person, it looks for nodal points: the well-defined landmarks of your face. While analyzing the nodal points, the computer takes into account the depth, width, and length of each of your facial features.
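As a rough illustration of the idea, here is a sketch in Python. The nodal-point names and coordinates below are made-up stand-ins, not Snapchat’s actual landmarks; the point is that distances between features, and especially their ratios, give the computer simple measurements of a face.

```python
import math

# Hypothetical nodal points as (x, y) pixel coordinates; a real detector
# would supply these. The names and values are illustrative assumptions.
nodal_points = {
    "left_eye":  (120, 150),
    "right_eye": (200, 150),
    "nose_tip":  (160, 200),
    "chin":      (160, 260),
}

def distance(a, b):
    """Straight-line distance between two (x, y) points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

eye_span = distance(nodal_points["left_eye"], nodal_points["right_eye"])  # width between the eyes
nose_to_chin = distance(nodal_points["nose_tip"], nodal_points["chin"])   # lower-face length

# Ratios like this stay roughly constant as the face moves toward or away
# from the camera, which makes them useful measurements of facial structure.
print(round(eye_span / nose_to_chin, 2))
```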

The computer also detects faces by observing the contrasts of shading on your face to determine the placement of features. The camera picks out dark and light areas of the image, which tells the computer where the shadows fall on your face. Without accurate computer vision, the alignment of the filter would be off and the filters we use would not work properly. To make face detection accurate and fast, an algorithm called the Viola-Jones algorithm is used. It runs face detection by repeatedly scanning the image and calculating the brightness of the pixels in rectangular regions. Knowing where the highlights and shadows of a face fall helps the algorithm decide whether the picture actually contains a face.
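The heart of Viola-Jones is comparing sums of pixel brightness in adjacent rectangles (so-called Haar-like features), made fast by precomputing an integral image so any rectangle sum takes only a few lookups. This is a minimal sketch of that idea on a toy image, not a full detector:

```python
import numpy as np

def integral_image(img):
    """Running 2-D cumulative sum: lets us get any rectangle's sum in O(1)."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, h, w):
    """Sum of pixels in a rectangle, read off the integral image."""
    total = ii[top + h - 1, left + w - 1]
    if top > 0:
        total -= ii[top - 1, left + w - 1]
    if left > 0:
        total -= ii[top + h - 1, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

def haar_two_rect(ii, top, left, h, w):
    """Two-rectangle Haar-like feature: bright upper half minus darker lower
    half (e.g. a bright forehead above a shadowed eye region)."""
    half = h // 2
    return rect_sum(ii, top, left, half, w) - rect_sum(ii, top + half, left, half, w)

# Toy 4x4 "image": bright top rows over dark bottom rows
img = np.array([[9, 9, 9, 9],
                [9, 9, 9, 9],
                [1, 1, 1, 1],
                [1, 1, 1, 1]], dtype=np.int64)
ii = integral_image(img)
print(haar_two_rect(ii, 0, 0, 4, 4))  # 72 - 8 = 64: strong light/dark contrast
```

A real detector evaluates thousands of such features at every position and scale, so the constant-time rectangle sums are what make it fast enough to run on a phone camera.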

The Science Behind Facial Recognition

In order for its facial recognition technology to work properly, Snapchat constantly tests the system against many different faces to make sure it can detect anyone. Once a face is detected, it is marked with points that show where each facial feature is and its borders. PetaPixel explains how the application works:

“The trained application can then take that point-mask and shift it to match your individual face based on the data it’s getting from your camera at 24 frames per second. The final step is to create a mesh from that point-mask; a mesh that can move with you or trigger an animation when you open your mouth or raise your eyebrows.”

Once the point-mask has been meshed by connecting all the points on your face, the app has full recognition of your face. That is what lets you engage with the filters: the mesh recognizes your movements based on the points it is tracking.
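A minimal way to see why the mesh moves with you: store each triangle as indices into the landmark array rather than as fixed coordinates, so when the tracked points shift from one frame to the next, every triangle shifts with them. The four points and three triangles below are illustrative stand-ins for a real point-mask, not actual Snapchat data:

```python
import numpy as np

# A tiny stand-in for a point-mask: four landmark points (illustrative values)
points = np.array([[0.0, 0.0],   # left eye
                   [2.0, 0.0],   # right eye
                   [1.0, 1.5],   # nose tip
                   [1.0, 3.0]])  # chin

# The mesh stores *indices* into the point array, not coordinates,
# so when the points move frame to frame, every triangle moves with them.
triangles = [(0, 1, 2), (0, 2, 3), (1, 2, 3)]

def triangle_coords(pts, tri):
    """Look up a triangle's current vertex coordinates."""
    return pts[list(tri)]

# Simulate the face shifting right between two camera frames
next_frame = points + np.array([0.5, 0.0])

before = triangle_coords(points, triangles[0])
after = triangle_coords(next_frame, triangles[0])
print(after - before)  # every vertex moved by the same (0.5, 0.0) offset
```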

When you open up a filter in your camera, the points align with the corresponding points on your face, and the filter adjusts accordingly. The points line up with the borders of your eyebrows, lips, nose, and so on. This, for example, is how the interactive dog filter works: when it tells you to raise your eyebrows and you do, a tongue appears. You can keep moving your face during this process and the filter will still recognize it, because the point-mask readjusts with every frame the camera captures.
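A trigger like “open your mouth” can be sketched as a simple threshold on landmark distances, normalized by face size so the same rule works whether you are close to or far from the camera. The coordinates and the threshold value here are assumptions for illustration, not Snapchat’s actual numbers:

```python
# Illustrative trigger check: is the gap between the lips a large enough
# fraction of the face height? Threshold is an assumed value.
def mouth_open(upper_lip_y, lower_lip_y, face_height, threshold=0.08):
    """Return True when the lip gap exceeds the threshold fraction of face height."""
    gap = lower_lip_y - upper_lip_y
    return (gap / face_height) > threshold

print(mouth_open(upper_lip_y=210, lower_lip_y=218, face_height=160))  # small gap: mouth closed
print(mouth_open(upper_lip_y=210, lower_lip_y=235, face_height=160))  # big gap: trigger the tongue animation
```

Re-evaluating this check on every frame is what makes the animation appear the instant you open your mouth and disappear when you close it.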

The interactive animations cannot work on Snapchat unless the facial recognition technology is up to par. Computer vision is constantly improving and is becoming essential in other forms of social media as well. Advancements in computer vision are being made not only in facial recognition but also for online account security. Thanks to computer vision technology, we can now snap a photo of a credit card to link it to our Amazon account, have Facebook tag our friends in pictures for us, and stay entertained with the newest Snapchat filters.
