Discriminating Mirror

Brent Bailey
Published in Critical Objects · May 15, 2019

The final code lives here.

For our final, Adi and I were inspired by the AI Now Discriminating Systems report to make an object that communicated some of the problems it touched on. We were specifically drawn to the idea of a “black box”, which is often used as an excuse for the misuse of artificial intelligence. The people who weaponize these technologies often claim they have no control over them, as the inner workings of, say, a machine learning algorithm are opaque. This is obviously untrue — while the inside of an algorithm may be a black box, the people who make them control the data that’s put into them and the use of the algorithms themselves.

To touch on these ideas of control and power, we made a mirror that only shows ourselves. A live facial recognition network, trained on our own faces and running on a Raspberry Pi, drives a Processing sketch drawn on a screen behind a two-way mirror. When one of us looks at the mirror, the backlight goes dark and it works as an ordinary mirror. When anyone else looks at it, the LCD screen behind the mirror shows a distorted version of them, pixelated more and more heavily (to the point of total unrecognizability) the less the algorithm thinks they look like us.
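
To give a rough idea of the mapping we mean, here's a minimal Python/OpenCV sketch rather than our actual Processing code; the score range and block sizes are hypothetical stand-ins for the numbers our model actually produces.

```python
import cv2

def pixelate(frame, score, min_block=1, max_block=48):
    """Pixelate a BGR frame more heavily the lower the recognition score.

    score is assumed to be in [0, 1]: 1.0 means "definitely one of us"
    (no distortion), 0.0 means "not us at all" (maximum distortion).
    """
    block = int(round(min_block + (1.0 - score) * (max_block - min_block)))
    if block <= 1:
        return frame  # recognized: leave the image untouched
    h, w = frame.shape[:2]
    # Shrink the image, then blow it back up with nearest-neighbour
    # interpolation so the pixel blocks stay hard-edged.
    small = cv2.resize(frame, (max(1, w // block), max(1, h // block)),
                       interpolation=cv2.INTER_LINEAR)
    return cv2.resize(small, (w, h), interpolation=cv2.INTER_NEAREST)
```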

How Did We Get Here?

Initial ideation.

We went through a long ideation process: we had ideas for a sort of Magic 8 Ball or crystal ball, an AI voice assistant, and a lot of other things.

When we ultimately settled on the mirror, we went through several prototypes.

This didn’t work that well.

We started out with OpenCV, as I'd worked with it on the Pi before and had some success. Adrian Rosebrock at PyImageSearch has some super helpful tutorials on the topic, but ultimately we couldn't get it to run at more than a couple of frames per second, which was suboptimal at best.
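
For reference, this is roughly the kind of detection loop involved — a sketch, not the tutorial's code or ours, and it assumes the Pi camera shows up as a normal webcam for cv2.VideoCapture:

```python
import cv2

# Haar-cascade face detection on frames from the camera.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

Detection like this was fast enough on a laptop; on the Pi, the per-frame cost is what killed the frame rate for us.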

We also built a prototype using face-api.js, while toying with the idea of making it respond to voice — we were imagining something like the classic Magic Mirror from Snow White. A browser-based solution wouldn’t be performant on the Pi, though, so we were back to square one.

Ultimately, we ended up going with TensorFlow and a Google Coral on the Pi. Working off of Dan Oved's Edge TPU Processing Demo, we're sending the video stream from Processing, isolating and cropping the faces in it, and then piping those crops to the Coral to run a facial recognition model we trained on ourselves.
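
A sketch of what the Coral side of that looks like in Python: classifying one cropped face with a quantized TFLite model compiled for the Edge TPU. The model path, input size, and label order here are placeholders, not our exact files, and the actual demo handles the Processing-to-Python plumbing separately.

```python
import numpy as np
import cv2
from tflite_runtime.interpreter import Interpreter, load_delegate

# Load a TFLite model and hand it to the Edge TPU delegate.
interpreter = Interpreter(
    model_path="face_model_edgetpu.tflite",
    experimental_delegates=[load_delegate("libedgetpu.so.1")])
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

def classify_face(face_bgr):
    """Return (label_index, score) for a cropped face image."""
    _, height, width, _ = input_details[0]["shape"]
    resized = cv2.resize(face_bgr, (width, height))
    batch = np.expand_dims(resized, axis=0).astype(input_details[0]["dtype"])
    interpreter.set_tensor(input_details[0]["index"], batch)
    interpreter.invoke()
    scores = interpreter.get_tensor(output_details[0]["index"])[0]
    idx = int(np.argmax(scores))
    return idx, float(scores[idx])
```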

Testing an early version on the Pi.

The two steps that ended up being the most difficult were training the model and actually doing something with the video in Processing. We retrained the model three or four times before we were semi-satisfied with its output, and it's still not great; given more time, we'd spend it figuring out the best way to train it. It's still prone to misrecognizing people, and seems to work best when whoever's using it is completely still. We tried training it on just our two faces, as well as with a third category of random faces we called "unknown." We ultimately stuck with the two-face model, but are still a bit confused about how to optimize its performance for this case.
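
For a rough sense of the kind of retraining involved (this is a sketch, not our exact pipeline), the standard approach is transfer learning: freeze a MobileNetV2 base and train a small classification head on a folder of cropped face images, with one subfolder per class (e.g. the two of us, plus an optional "unknown" folder). The directory name and hyperparameters below are assumptions.

```python
import tensorflow as tf

IMG_SIZE = (224, 224)

# Expects faces/brent/, faces/adi/, and optionally faces/unknown/.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "faces/", image_size=IMG_SIZE, batch_size=16)
num_classes = len(train_ds.class_names)

# Frozen ImageNet-pretrained base; only the new head gets trained.
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 preprocessing
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```

(For the Coral, the trained model still has to be quantized, converted to TFLite, and compiled for the Edge TPU.)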

Processing became a bit of a nightmare: we had a lot of trouble getting it to draw over live video on the Pi. We first tried the built-in filters, which were too slow, and then GLSL shaders, which also failed at first. We spent a long time warping, rotating, and generally messing around with the image, and ultimately settled on an adaptation of one of Gene Kogan's shaders, which for some reason now performs quite well after not working at all when we started testing.

This didn’t fit the screen! Always measure twice, I guess.

Once we had something semi-successful working, we had to build a frame that would hold the screen and have a large enough back to hide the Pi, Coral, and an LCD screen. This shouldn’t have been difficult, but I still managed to screw it up the first time by mis-measuring and had to make a second frame.

I made extra sure it fit this time.

That takes us to where we are now. We have a working prototype, but I think neither of us is satisfied. It’s technically interesting, but as an art piece it needs more work: the metaphor is a bit on the nose, and we’re also not certain how well it’ll perform in gallery conditions. We’ve built a super useful technical pipeline for projects along these lines, however, and are hoping to continue with this going forward.
