CineFrame: Making Pictures Come to Life

Charlene Atlas
Jun 5, 2018

Any Harry Potter fans out there? A well-known aspect of the Harry Potter books and films is the talking portraits adorning Hogwarts’ walls. Pure magic. Not possible, right? Well, it turns out you can create magic with a dash of coding, a sprinkle of circuits and hardware, and a standard picture frame!

Goal

Create the feeling of bringing a photo to life by holding it in your hands.

CineFrame Demo. Cinemagraph Credit: Kevin Burg and Jamie Beck, cinemagraphs.com

How It Works

The frame holds a Raspberry Pi connected to a 7" touchscreen. The Pi runs Processing, an IDE and programming language based on Java; Processing programs are called sketches. The running sketch controls playback of a GIF, using a tilt sensor connected to the Raspberry Pi as input. When a person picks up the frame, playback turns on and the image appears to come to life.

Background

My monthly maker meetup has a theme each month, and this time it was “motion”. As inspiration, we were taught about a variety of mediums and technologies that involve motion. One of those mediums is the cinemagraph: a GIF or video made to look like a photo, with only part of the image repeating an animation. This creates a kind of “magic photo” effect.

Examples of cinemagraphs from Flixel.

I decided to add interaction to this concept. I figured that when the animated portion of a cinemagraph is not playing, it would look like a normal photo. How cool would it be to have a photo of a loved one, and be able to pick it up at any time to re-experience that moment in time? I wanted to create the feeling that a person’s presence has power to bring memories to life. That they hold an innate ability to reach across time and space to a certain moment, at a certain place, with a certain someone.

Process

Cinemagraph Playback Control

I used Processing 3 and the gifAnimation library to play back and pause a GIF using keystrokes.
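
A stripped-down sketch of that stage looks something like this (the GIF filename is a placeholder; my actual script is in the GitHub repo linked at the end):

import gifAnimation.*;

Gif cinemagraph;
boolean playing = false;

void setup() {
  size(800, 480);
  // "beach.gif" is a placeholder; the file sits in the sketch's data folder
  cinemagraph = new Gif(this, "beach.gif");
}

void draw() {
  background(0);
  image(cinemagraph, 0, 0, width, height);
}

void keyPressed() {
  // Any keystroke toggles between a still "photo" and a living one
  if (playing) {
    cinemagraph.pause();
  } else {
    cinemagraph.loop();
  }
  playing = !playing;
}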

Display Hardware Setup

Next I needed to prepare the hardware the Processing script would ultimately run on. Since I wanted to emulate a photo frame, I would need a small display, so I decided to get a Raspberry Pi screen. I got this 7" touchscreen from my local hardware shop. However, I explicitly did not intend to use the touch functionality for this project; I didn’t want it to feel like a digital experience.

I connected the Pi and the screen together and set up the Raspbian Jessie OS. Then I installed the latest Raspberry Pi version of Processing by downloading and unpacking it in the Raspberry Pi terminal:

wget http://download.processing.org/processing-3.3.7-linux-armv6hf.tgz
tar xvzf processing-3.3.7-linux-armv6hf.tgz

Finally, I used an FTP program to copy over my Processing library and script to the Pi.

Tilt Sensor Setup

I connected a tilt sensor to the Pi using a breadboard. In the Processing script, I then used the GPIO library to read from an input pin on the Pi, so the sensor controls playback instead of a keystroke.
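
The change amounts to swapping the keyboard handler for a GPIO read. A stripped-down sketch of the idea, assuming the tilt switch is wired between GPIO 4 and ground with the Pi’s internal pull-up enabled (your pin number and wiring may differ):

import processing.io.*;
import gifAnimation.*;

Gif cinemagraph;
boolean playing = false;
int tiltPin = 4;  // assumed wiring: tilt switch between GPIO 4 and ground

void setup() {
  size(800, 480);  // matches the 7" touchscreen's resolution
  GPIO.pinMode(tiltPin, GPIO.INPUT_PULLUP);
  cinemagraph = new Gif(this, "beach.gif");  // placeholder filename again
}

void draw() {
  // Picking up the frame flips the tilt switch, which starts playback;
  // setting it back down pauses on the current frame
  boolean tilted = (GPIO.digitalRead(tiltPin) == GPIO.HIGH);
  if (tilted && !playing) {
    cinemagraph.loop();
    playing = true;
  } else if (!tilted && playing) {
    cinemagraph.pause();
    playing = false;
  }
  background(0);
  image(cinemagraph, 0, 0, width, height);
}

A tilt switch only gives a binary open/closed signal, but that is exactly enough for this on/off interaction.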

Frame Setup

Now that everything was working, it was time to create the photo-frame look and feel. I combined two different frames into a setup that could hold everything and allow the structure to stand on its own. It took a lot of trial and error, construction paper, and various adhesive products, but no sawing or glue!

Framing Process

Result

I was very happy with the result and was able to show it off at the next meetup and to friends.

Problems Encountered and Overcome

1. First display method did not work.

Originally I planned to use an old digital photo screen I bought at Goodwill.

This approach had a variety of issues:

  • I wanted to connect a Raspberry Pi to the screen inside the photo frame. Talking with my friend Andy Muehlhausen, he warned that it would take a very long time to figure out how to drive the screen, since I would need to identify the screen type, look up its datasheet, figure out how to connect it to the Pi, and so on.
  • So then I tried using the port on it labeled “AV in/out.” I bought an HDMI-to-composite converter, since the Pi outputs HDMI and the frame’s port takes composite video. This ended up not working: when I tried to output to the frame, nothing displayed and all I got was static sound.
  • So I decided to go with a dedicated Raspberry Pi display.

2. Raspberry Pi + screen setup was underpowered.

I followed a tutorial that explained how to route power through the Pi and the screen so that you would only need one 5V power supply. However, after a while I noticed a yellow lightning bolt symbol that would periodically appear. I looked it up, and it turns out it means the Pi is not getting enough power, known as a “brownout”. So I ended up powering the board and the screen separately with two 5V power supplies.

3. Linux is case sensitive.

I had to copy the Processing library I used on my PC over to the Pi, because the version available by installing through Processing on the Pi was out of date for that platform. However, I then hit another problem: since Linux is case sensitive, the sketch could not find the “GifAnimation.jar” file in the library. The error message I got was not informative, so it took me a while to figure out I needed to rename GifAnimation.jar to gifAnimation.jar so that the reference in my script would work.

4. Raspberry Pi 3 has limited memory!

When I was running my Processing sketch on my PC, it had four cinemagraphs that you could cycle through. After copying the sketch over to the Pi and trying to run it, it just stayed on a loading screen indefinitely. I learned that I needed to go into Processing’s Preferences to increase the memory given to the running sketch, and also to reduce the number of cinemagraphs loaded, since the Pi 3 only has 1GB of RAM. This got playback working again, and it taught me to think more about the memory limitations of small single-board computers like the Raspberry Pi.
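
One way to keep the memory footprint down is to hold only one cinemagraph in memory at a time and swap on demand, instead of preloading all of them. A rough sketch of that idea, with placeholder filenames:

import gifAnimation.*;

// Placeholder filenames; the point is to hold only one Gif in memory at a time
String[] files = { "beach.gif", "coffee.gif", "rain.gif", "candle.gif" };
int current = 0;
Gif cinemagraph;

void setup() {
  size(800, 480);
  loadCurrent();
}

void loadCurrent() {
  if (cinemagraph != null) {
    cinemagraph.stop();   // stop the old animation before dropping it
    cinemagraph = null;
    System.gc();          // a hint to the JVM; helpful on a 1GB Pi, not guaranteed
  }
  cinemagraph = new Gif(this, files[current]);
  cinemagraph.loop();
}

void draw() {
  background(0);
  image(cinemagraph, 0, 0, width, height);
}

void keyPressed() {
  // Cycle to the next cinemagraph, loading it only when asked for
  current = (current + 1) % files.length;
  loadCurrent();
}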

5. Adhering/securing things to other things is hard.

I had quite a time iterating on ways to hold everything securely in a structure that could stand on its own. Due to time constraints, I did not want to do any custom frame fabrication or wood cutting, drilling, etc.

At a certain point I was just staring at scattered frame pieces on my desk.

Eventually I figured something out using 3M strips and tape, but this experience made me realize I want to learn more about planning for structural integrity, and about techniques such as 3D printing and woodworking, for future projects.

Ideas for the Future

I have some ideas for how I could use this same setup for other interesting experiences.

  • My brother pointed me to this Processing tutorial about pixel sorting images, so I am thinking about writing a sketch that uses the tilt sensor as a vibration detector to let you “shake up” a photo and have it turn into “sand”. Basically turning photos into sand art! (A bare-bones pixel-sorting sketch appears below.)
  • Originally, before I learned about tilt sensors from Andy Muehlhausen, I planned to control playback using an IMU (inertial measurement unit). I thought about using the IMU for more analog control, such as using physical rotation to rotate the image or to make hidden elements in the image appear.

For example, imagine you are holding a picture of a person, and then by rotating the frame left another person leans out from behind. It would be fun to try this someday with my existing setup plus an IMU!

Sketch about using rotation of the photo to reveal new elements.
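
To get a feel for the pixel-sorting idea, here is a bare-bones sketch that sorts each column of a photo by brightness (the image name is a placeholder, and it isn’t hooked up to the tilt sensor yet):

PImage img;

void setup() {
  size(800, 480);
  img = loadImage("photo.jpg");  // placeholder image in the sketch's data folder
  img.resize(width, height);
  img.loadPixels();
  for (int x = 0; x < img.width; x++) {
    // pull out one column of pixels, sort it, and write it back
    color[] column = new color[img.height];
    for (int y = 0; y < img.height; y++) {
      column[y] = img.pixels[y * img.width + x];
    }
    sortByBrightness(column);
    for (int y = 0; y < img.height; y++) {
      img.pixels[y * img.width + x] = column[y];
    }
  }
  img.updatePixels();
}

// Simple insertion sort on brightness; fine for a few hundred pixels per column
void sortByBrightness(color[] column) {
  for (int i = 1; i < column.length; i++) {
    color key = column[i];
    int j = i - 1;
    while (j >= 0 && brightness(column[j]) > brightness(key)) {
      column[j + 1] = column[j];
      j--;
    }
    column[j + 1] = key;
  }
}

void draw() {
  image(img, 0, 0);
}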

GitHub for Processing Script: https://github.com/drummershoujo/CineFrame

Comments or questions on this article?

Leave a comment or tweet at me — @CharleneJeune

Thanks!
