A Slice Of Time
What are timeslices?
If a photograph is a two dimensional representation of three dimensional reality, then a timeslice is a two dimensional representation of four dimensional space-time. Or, more prosaically, it’s a composite of multiple exposures shot over time into a single image — a static alternative to a time-lapse movie.
They aren’t a new idea. The first timeslice that I became aware of is by Eirik Solheim, who photographed a tree over a year and in 2011 composed 3,888 selected photos into a single image. Sam Javanrouh’s Daily Dose of Imagery had a great example of Toronto in February 2013, while later that year Fong Qui Wei’s Glassy Sunset animates the slices themselves.
Dan Marker-Moore and Richard Silver have each produced dozens, and sell a selection. Wei Yao even used the technique to highlight Chinese air pollution. There are some overlaps with slit-scan photography, as collected by Golan Levin, such as Miska Knapek’s 24hr images.
Why make them?
I started thinking about making my own a couple of years ago. I’d begun experimenting with time-lapse movies, shooting an image every few seconds over the course of minutes or hours to generate a short film. However, it’s technically challenging to make a perfect time-lapse; missing a frame, or failing to match lighting changes (especially for a sunrise or sunset), can make the footage unusable.
I realised that making timeslice images was a good use of the footage from imperfect shoots, since they’re more forgiving of a missing frame or two, or even slight tripod motion during the course of a sequence.
They’re also useful as summaries of the better videos that I want to use later. A single image sums up a sequence well, and a timeslice makes a better thumbnail than the first or last frame that a file or movie browser typically shows.
How are they made?
I’m a programmer, so when I’m confronted with a repetitive task my instinct is to automate it. I wrote a script (using Python) which opened every JPG in a directory, cropped a column from it, and then pasted that into a new image. Once I was past the proof of concept, I removed all the hardcoded directories, image size information, and details about the number of images, and made the script usable on other image sequences I had on my hard drive.
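The core of that first script can be sketched like this, using the Pillow imaging library. (This is an illustrative reconstruction, not my original code; the function name and defaults are arbitrary.)

```python
from pathlib import Path

from PIL import Image


def timeslice(directory, output="timeslice.jpg"):
    """Build a composite by taking one vertical strip from each source JPG."""
    paths = sorted(Path(directory).glob("*.jpg"))
    if not paths:
        raise ValueError(f"no JPGs found in {directory}")

    # Assume every frame has the same dimensions as the first.
    with Image.open(paths[0]) as first:
        width, height = first.size

    strip_width = width // len(paths)
    composite = Image.new("RGB", (strip_width * len(paths), height))

    for i, path in enumerate(paths):
        with Image.open(path) as img:
            # Crop the i-th vertical column from the i-th image, so the
            # composite sweeps left-to-right through time.
            box = (i * strip_width, 0, (i + 1) * strip_width, height)
            composite.paste(img.crop(box), (i * strip_width, 0))

    composite.save(output)
    return composite
```

Because the strip width is derived from the number of images found, the same function works on any sequence without hardcoded sizes or counts.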
Over time this small program grew more complex. I added the option to generate an image from a completed (time-lapse) movie rather than a directory of source images. The program can also reverse the order (so the earliest photo is on the right, rather than the left) and add labels on each slice, showing the time the photograph was taken (as seen in Richard Silver’s earlier work).
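Reversing the order is just a matter of sorting the file list in descending order before slicing. Labelling is only slightly more work; a sketch of the idea (again illustrative — the function, its signature, and the plain white default font are assumptions, not my actual implementation) might look like:

```python
from PIL import Image, ImageDraw


def add_time_labels(composite, timestamps):
    """Stamp each slice of a finished composite with the time it was shot.

    `timestamps` holds one short string (e.g. "06:30") per slice, in the
    same left-to-right order as the slices themselves.
    """
    draw = ImageDraw.Draw(composite)
    strip_width = composite.width // len(timestamps)
    for i, stamp in enumerate(timestamps):
        # Pillow's default bitmap font; loading a TTF via
        # ImageFont.truetype() gives nicer results.
        draw.text((i * strip_width + 4, composite.height - 16), stamp, fill="white")
    return composite
```

The timestamps themselves can come from each frame's EXIF data or simply from the capture interval and start time.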
The most complex change to my program was adding the ability to choose slices by luminosity rather than time. Each of the images (or frames) is processed to calculate its luminance (that is, roughly, its brightness), and each slice is chosen to be an equal step brighter (or darker, for a sunset) than the last. Generally, I prefer the output this way, since the change of colour flows better across the image, but it’s easy to produce both and compare them.
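The selection step can be sketched as follows: measure each frame’s mean luminance, then for each of a set of equally spaced brightness targets, pick the frame that comes closest. (A sketch under assumptions — it presumes brightness changes roughly monotonically through the sequence, as it does for a sunrise or sunset, and nearby targets can pick the same frame twice.)

```python
from pathlib import Path

from PIL import Image, ImageStat


def mean_luminance(path):
    """Average pixel brightness of an image, via its greyscale conversion."""
    with Image.open(path) as img:
        return ImageStat.Stat(img.convert("L")).mean[0]


def pick_by_luminosity(directory, n_slices):
    """Pick n_slices frames at roughly equal steps of brightness."""
    paths = sorted(Path(directory).glob("*.jpg"))
    lums = [mean_luminance(p) for p in paths]

    lo, hi = min(lums), max(lums)
    step = (hi - lo) / (n_slices - 1)
    targets = [lo + i * step for i in range(n_slices)]

    # For each brightness target, take the frame whose mean luminance
    # is closest to it.
    return [
        min(zip(paths, lums), key=lambda pl: abs(pl[1] - target))[0]
        for target in targets
    ]
```

The chosen frames are then cropped and pasted exactly as in the time-based version; only the selection changes.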
Even in the first script, the most important setting was the width of each slice (or, equivalently, the number of slices in each image). Varying this setting alters the look of the resulting timeslice composite a great deal, as the examples below, from the first set of images I processed, show.
Initially I’d thought that using many thin slices would be best, but my feeling was that the images produced were lying. There was something almost realistic about them, as if there really were a sky with such an obvious gradient from darkness to light (as in the first picture above). When I changed to using far fewer slices, the divisions over time became more obvious, exposing the artifice, which I believe makes what’s being shown (that is, the change of light over time) clearer.
Other photographers choose to process their images in different ways. I’m pretty sure that most of the examples I listed above are produced in Photoshop or another image editing program, rather than directly in code. Eirik Solheim is an exception.
Most of the examples above also have the distinct slices that I decided I prefer, but Stephen Wilkes produces composite images that seamlessly change from night to day and back again. That said, photography has always seen many people using similar or even identical tools in different ways; perhaps it’s no surprise that even a niche technique such as this manages to demonstrate a variety of approaches.