It does more than it looks like it can.

I Made a Thing!

Facial Recognition Edition III!

So this is the final write-up for the broader facial recognition project. You can read part 1 here and part 2 here. This has been a really fun project, and I’ve learned a ton about facial recognition, image capture in the browser, a little Python, and, in this final part, physical computing.


The first part of this project, linked above, details building a service that lets users register a facial profile and manage devices or apps, along with the users that can use those devices via facial recognition. Essentially it’s very much like the OAuth ‘login with Facebook/Gmail/Twitter’ links that you see on a lot of sites now.

Having made the service, I wanted a device that could use it to recognize users. The idea originated with using facial recognition for access control (doors, the beer fridge, etc.), so my device was made with that general purpose in mind.

In Action

After registering my device with the web service (the workflow for that is detailed here), it has a unique app key associated with it for authentication calls.

The actual device is contained in a box designed to be mounted on a wall. It senses when a person moves within 90 cm and takes a photo of them. It sends that image to the web service, which responds with whether it recognizes the face and whether that face is authorized. The device then allows use of whatever is connected via an electrical relay.

So with the facial recognition device hooked up to a door lock, the process for opening the door would work like this: a person walks up to the box, and their photo is automatically taken once they are close enough. If they are listed as an authorized user of that particular door on the web service, the door receives the signal to unlock. No keycard, and no need to be buzzed in.

Technical Details


The actual brains of the device is a Raspberry Pi Zero W, basically the smallest, cheapest computer you can buy. That’s fine, since what we’re asking it to do doesn’t take a lot of horsepower, and realistically this computer could do much heavier lifting if we needed it to.

There are four other physical devices wired into the RPi: an ultrasonic distance sensor, a camera module, a single button, and an electrical relay. The wiring diagram that I drew up looks like this:

Did I mention that I have no training in drawing electronic schematics?

The actual device looks like this with the covering off:

Raspberry Pi at the top. Button to the left. Prototyping breadboard in the middle. Relay at the bottom. The camera and distance sensor are attached to the top and not visible in the photo, although you can see their wiring heading off to the left.


Repo here

The Raspberry Pi is a really great choice for hardware prototyping. The amount of support for just about any physical interface is better than any other option. That said, the language of choice is Python, which is great, but not something that I have a ton of experience in.

I wrote two distinct classes: one that handles taking photos, and another that handles getting a reading from the distance sensor.

The camera class uses the RPi’s Python camera library. When the snap method is called, the RPi turns on the camera, waits for the sensor to adjust to the lighting conditions, and then saves the image as a 400 × 300 px JPEG.
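For reference, here’s a minimal sketch of how a class like that could look using the picamera library. Only the snap method name and the 400 × 300 output come from the description above; the Camera class name, the output path, and the two-second warm-up delay are my own assumptions, not the repo’s actual code.

```python
import time


class Camera:
    """Takes a snapshot with the Pi camera module (hypothetical sketch)."""

    RESOLUTION = (400, 300)  # output size described in the post

    def snap(self, path="capture.jpg"):
        # imported here so the class can be defined off-device
        from picamera import PiCamera

        with PiCamera() as camera:
            camera.resolution = self.RESOLUTION
            camera.start_preview()
            time.sleep(2)  # give the sensor time to adjust to the light (assumed delay)
            camera.capture(path)
        return path
```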

The distance sensor class measures the time it takes for a sound pulse sent by the sensor to return. It multiplies that time by the speed of sound to determine how far away whatever echoed the pulse back is, and returns that number in centimeters. There is some additional logic to handle a timeout for when the sensor doesn’t properly return an echo.
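A rough sketch of that logic, assuming an HC-SR04-style sensor read through RPi.GPIO. The class name, pin handling, and timeout value here are my guesses for illustration; only the time-of-flight math and the timeout idea come from the post.

```python
SPEED_OF_SOUND_CM_S = 34300  # speed of sound at roughly room temperature


class DistanceSensor:
    """Hypothetical sketch of an ultrasonic distance sensor class."""

    TIMEOUT_S = 0.04  # generous timeout beyond the sensor's max range (assumed value)

    def __init__(self, trigger_pin, echo_pin):
        # assumes the runner has already done GPIO.setup() on both pins
        self.trigger_pin = trigger_pin
        self.echo_pin = echo_pin

    @staticmethod
    def echo_time_to_cm(elapsed_s):
        # the pulse travels out and back, so halve the round-trip distance
        return elapsed_s * SPEED_OF_SOUND_CM_S / 2

    def read_cm(self):
        # imported here so the class can be defined off-device
        import time
        import RPi.GPIO as GPIO

        # fire a short trigger pulse to start a measurement
        GPIO.output(self.trigger_pin, True)
        time.sleep(0.00001)  # 10 microsecond pulse
        GPIO.output(self.trigger_pin, False)

        deadline = time.time() + self.TIMEOUT_S
        start = time.time()
        while GPIO.input(self.echo_pin) == 0:
            start = time.time()
            if start > deadline:
                return None  # echo never started: signal a timeout
        stop = start
        while GPIO.input(self.echo_pin) == 1:
            stop = time.time()
            if stop > deadline:
                return None  # echo never finished: signal a timeout
        return self.echo_time_to_cm(stop - start)
```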

There is a third file that acts as the runner. It calls the classes as needed and sends the HTTP requests to the service. The runner is set up to initialize the GPIO pins controlling the hardware, and then starts an infinite loop. Inside the loop, the computer continuously runs distance checks using the sensor class. When it senses that someone has moved within 90 cm, it calls the camera class. The resulting photo is encoded as a base64 JPEG and sent using Python’s requests library. The server’s authentication response determines whether the relay is activated or the distance sensing loop is restarted. If the relay is activated, the runner waits in another loop until it senses that the button is pushed. When the button is pushed, the relay is turned off and the authentication is revoked. The initial distance sensing loop then restarts for the next user to walk up.
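The runner’s flow could be sketched roughly like this. The endpoint URL, JSON field names, and the object interfaces (a sensor with a read_cm() method, a camera with a snap() method, and relay/button callables) are all placeholders I made up for illustration, not the actual API of the service or the repo.

```python
import base64
import time

AUTH_URL = "https://example.com/api/authenticate"  # placeholder, not the real endpoint
THRESHOLD_CM = 90  # distance threshold described in the post


def build_payload(jpeg_bytes, app_key):
    # the photo is sent as a base64-encoded JPEG alongside the device's app key
    return {
        "app_key": app_key,
        "image": base64.b64encode(jpeg_bytes).decode("ascii"),
    }


def authenticate(jpeg_bytes, app_key):
    import requests  # third-party; only needed on the device

    resp = requests.post(AUTH_URL, json=build_payload(jpeg_bytes, app_key))
    return resp.ok and resp.json().get("authorized", False)


def run(sensor, camera, relay_on, relay_off, button_pressed, app_key):
    while True:
        # keep polling the distance sensor until someone is close enough
        distance = sensor.read_cm()
        if distance is None or distance > THRESHOLD_CM:
            time.sleep(0.1)
            continue

        # someone is within range: take a photo and ask the service about it
        with open(camera.snap(), "rb") as f:
            jpeg = f.read()

        if authenticate(jpeg, app_key):
            relay_on()
            # hold the relay on until the button is pushed, then revoke
            while not button_pressed():
                time.sleep(0.05)
            relay_off()
```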


Hardware is hard! Even with all of the support available, troubleshooting can be really tricky. The biggest issues I had were in trying to isolate bugs to either the software or the hardware; it can be really hard to tell whether a problem is in the code or in the components. It adds a layer of complication to developing for sure. Packaging and design are also important. I used a project box that takes up WAY more space than necessary, but I wanted it to be easy to work with first; making it elegant can come later.

I learned a lot about Python in this process as well. I have a lot more to learn (I’m guessing I need to start with a style guide, but some guidance in best practices with Python OOP wouldn’t hurt). If I had my choice of languages, the code could have been a lot cleaner. However, it does work well, so the code can’t be that bad.


It’s really hard to describe in words exactly how something like this works, but I think it’s enough to say that it is a little magical to walk up to something and have it recognize your face and do work for you. The actual hardware can be configured for a number of different scenarios. Instead of logging out when the button is pushed, it could be set up to log out after a set period of time, or even when it senses that the user has moved away (much more useful for a door lock). Right now I have it hooked up to a coffee machine, which is why it is set up to only log off when I push the button.

I had a ton of fun with this project. Just like playing with the initial facial recognition tools, building one thing leads to ideas for more uses. I definitely plan on working more with physical computing.

Call to action

I love developing and writing about what I’m doing.

  • If you have feedback, I would love to hear it; that’s what the comment section is for. Or send me an email:
  • If you liked this project, please click that little green heart right below this.
  • If you want to share it with the world: please do. Twitter, Linkedin, Facebook, Myspace, Friendster, Reddit are all options. Not Yo, however.
  • I’m moving up to Seattle, and looking for a new position. I’d love to talk development over coffee, if you’re in the area (even if you aren’t looking to hire). If you’re not near Seattle, I carry a phone with me at -literally- all times. Get in touch!