“What am I looking at?” I asked.


It was a Wednesday morning and Peter Nitsch, the Director of our Labs group at Teehan+Lax, was standing in my office. We meet every Wednesday to discuss what he is working on. We started T+L Labs two years ago as a way to explore the possibilities of technology, because while clients are great, they rarely ask you to really go out to the edges. That’s Peter’s job: to come up with experiments out at the edges and see what happens.

He has a job most geeks would kill for. The deal is this: He doesn’t work on client projects. Nothing he does needs to make money. His work does not need to be applied to any real world problems. He can make whatever he wants, as long as the company learns something new. We give him tools and resources; he creates and executes experiments.

We call them experiments because they are meant to prove or disprove some hypothesis or notion we have. Unlike some Labs groups, which have a mandate to create products that can be marketed, ours is not burdened with this requirement.

To come up with ideas for these experiments, he talks to the people at Teehan+Lax. He tries to find out what they are interested in. Those interests get turned into experiments that Labs then works on.


“It’s an awesome idea,” Pete says.

I’m looking at a page on Vimeo. It’s one of those videos you see there all the time.

Exceptional camera work? Check.

Hi def? Check.

Exotic location? Check.

The perfect ambient soundtrack? Check.

The video is in a style I hadn’t seen before. Every once in a while a new trend emerges on Vimeo. First it was depth of field, then tilt-shift videos. I knew I was looking at a trend right before it became one. It’s still untouched: tons of possibility.

The video had a hypnotizing effect, moving fast and slow at the same time. It was somewhere in Europe, maybe Prague.


“It’s called a Hyperlapse,” Pete said.

“This is awesome,” I said.


Hyperlapse photography is a technique combining time-lapse and sweeping camera movements. Typically, the camera focuses on a point of interest and is then moved while the focal point remains constant.

Creating them requires precision and many hours spent stitching together photos taken from carefully mapped locations.

Jonas Naimark, a motion graphics artist at Teehan+Lax, wanted to make a Hyperlapse movie, only he wanted to use locations from all over the world. It wasn’t realistic to go and shoot this footage so Jonas had the ingenious idea of using Google Street View images to create his video.

He started experimenting by putting a piece of scotch tape on his monitor and marking a tiny dot on the tape. He then lined up the tip of the CN Tower and moved down the Gardiner Expressway in Google Maps step by step. He would line up the dot every time to get the right camera angle. The result was this:

After realizing it was possible, he asked Peter if there was any way the Labs group could build him software to help create Hyperlapse videos from Street View.

Peter looked at the movies and saw that if you fed a series of Street View images into software, it could do the stitching, calculating horizon lines, camera speed and frame rates. This is how Peter’s mind works.
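The heart of that idea is a bearing calculation: given a camera position and a fixed point of interest, find the compass heading that keeps the focal point centred in the frame. A minimal sketch, with the function name and coordinates invented for illustration (this is not the actual Labs code):

```javascript
// Compute the compass heading (0-360 degrees) from a Street View camera
// position to a fixed point of interest, using the standard great-circle
// initial-bearing formula. Holding this heading frame after frame keeps
// the focal point centred, like Jonas's dot on the scotch tape.
function headingToPoint(cameraLat, cameraLng, targetLat, targetLng) {
  const toRad = (d) => (d * Math.PI) / 180;
  const toDeg = (r) => (r * 180) / Math.PI;

  const phi1 = toRad(cameraLat);
  const phi2 = toRad(targetLat);
  const dLng = toRad(targetLng - cameraLng);

  const y = Math.sin(dLng) * Math.cos(phi2);
  const x =
    Math.cos(phi1) * Math.sin(phi2) -
    Math.sin(phi1) * Math.cos(phi2) * Math.cos(dLng);

  // Normalize into 0-360, matching Street View's heading parameter.
  return (toDeg(Math.atan2(y, x)) + 360) % 360;
}

// A camera due south of the CN Tower should face due north (heading 0).
console.log(headingToPoint(43.6326, -79.3871, 43.6426, -79.3871)); // 0
```

Run each position along the route through a function like this and the camera appears to sweep around the point of interest as it travels.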

So now Peter is in my office, telling me that this is the next experiment he wants Labs to work on.


Google Maps is not just an amazing tool; it’s an amazing platform. Google had just given API access to Street View images, providing us with an unlimited database of source material.

Peter started writing the engine so Jonas could make his movies. Jonas started going to the ends of the earth in Google Maps, location scouting. He found waterfalls in Hawaii, windmills in California and bridges in the Florida Keys that were all perfect to put in his movie.

Soon after starting, it became clear that the tool we were building for Jonas could be something bigger. This often happens on our experiments. Only once we are inside the problem do we see the possibilities. Serendipity is part of the discovery.

The tool we were building for Jonas could be built for everyone. All it needed was a usable UI and a web-based engine so that anyone could make a Hyperlapse.

The initial versions of the tool had a crude UI. It wasn’t really intuitive unless you knew what you were doing, but it was sophisticated enough that Jonas could control all the parameters available to us in the Google API. Here is the software he used to begin getting footage for his video:

Over a period of about six weeks, Peter refined the engine and prepped the code for launch. We were going to open source the code and that meant it needed to be extra clean.

The technology in Hyperlapse is put together primarily in JavaScript. It uses WebGL and leans on Three.js, a modified version of GSVPano.js, and the Google Maps API v3. It runs on an AWS infrastructure, with Node.js handling some lightweight caching of routes for performance.
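One of the jobs an engine like this has to do is turn a route into evenly spaced camera positions, one per output frame, so the apparent camera speed stays constant. Here is a rough sketch of that idea using simple planar points rather than real Street View panorama links; the function name and point format are ours, not the Labs code:

```javascript
// Hypothetical sketch: resample a route polyline into evenly spaced
// camera positions, one per output frame, so the camera advances the
// same distance every frame. Points are {x, y} in an arbitrary planar
// projection for simplicity.
function resampleRoute(points, frameCount) {
  const dist = (a, b) => Math.hypot(b.x - a.x, b.y - a.y);

  // Cumulative distance along the polyline.
  const cum = [0];
  for (let i = 1; i < points.length; i++) {
    cum.push(cum[i - 1] + dist(points[i - 1], points[i]));
  }
  const total = cum[cum.length - 1];

  const frames = [];
  for (let f = 0; f < frameCount; f++) {
    const target = (total * f) / (frameCount - 1);
    // Find the segment containing this distance, then interpolate within it.
    let i = 1;
    while (i < cum.length - 1 && cum[i] < target) i++;
    const t = (target - cum[i - 1]) / (cum[i] - cum[i - 1] || 1);
    const a = points[i - 1];
    const b = points[i];
    frames.push({ x: a.x + (b.x - a.x) * t, y: a.y + (b.y - a.y) * t });
  }
  return frames;
}

// A two-segment route resampled into five equally spaced frames.
const frames = resampleRoute(
  [{ x: 0, y: 0 }, { x: 2, y: 0 }, { x: 4, y: 0 }],
  5
);
console.log(frames.map((p) => p.x)); // [ 0, 1, 2, 3, 4 ]
```

In practice the camera can only sit where Street View panoramas exist, so the real engine snaps to the nearest available panorama rather than interpolating freely.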

Chris Tanner, a designer, and Nery Orellana, a developer, started creating a version of the software that could be made public. There was a lot of Google Maps API work needed so that we could make it easy for anyone to create a Hyperlapse.


“We’ve got some problems,” Pete says.

A group of us are standing around Peter’s computer. He opens the Hyperlapse UI.

“When I set the camera position, it makes a data call to Google,” Pete explains.

“Ok, so what’s the problem?” I ask.

“We only get 2,500 of those a day. We’ll hit that in minutes if we go live. Oh, and I’m caching a lot of data for performance reasons. I think that violates their terms of service.”
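The kind of route caching Pete describes might look like this in Node.js, sketched here with an invented `fetchRoute` function and key format; the real engine's caching is its own code:

```javascript
// Hypothetical sketch of lightweight route caching: cache the promise
// for each route request, keyed by start and end, so repeated or
// concurrent requests for the same route cost a single API call
// against the daily quota.
const routeCache = new Map();

function getRoute(start, end, fetchRoute) {
  const key = `${start}|${end}`;
  if (!routeCache.has(key)) {
    // Store the in-flight promise so concurrent callers share one call.
    routeCache.set(key, fetchRoute(start, end));
  }
  return routeCache.get(key);
}

// Example with a stubbed fetcher: two requests, one API call.
let apiCalls = 0;
const stubFetch = async () => {
  apiCalls++;
  return { points: [] };
};
getRoute("Toronto", "Prague", stubFetch);
getRoute("Toronto", "Prague", stubFetch);
console.log(apiCalls); // 1
```

Caching like this stretches the quota, but as the next paragraphs explain, it also risked running afoul of Google's terms of service.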

We have a history with Google. A previous Labs experiment involved rendering Street View in ASCII. That experiment was featured by Google in their Creative Sandbox and led us to do some work with Google.

We didn’t want to violate any terms of service or alienate Google. So we decided to send them a link to it and see what they thought.

On the day before Easter Friday we sent our contact at Google an email explaining the Hyperlapse experiment. It was a trial balloon.

They asked for two small changes (we needed to add some attribution and clarification on our caching methods). But otherwise, we had the green light.


After Easter we continued to refine the UI. Jonas put the final touches on his video, which now included original music from Paul Reiss, another motion designer at Teehan+Lax. You can watch the final cut here.

Chris, Nery and Peter, along with several others, got the site ready for launch, which included building up a robust AWS infrastructure to support the expected traffic.

At 11:23 a.m. on April 9 we made it available to the world with a tweet.


The Hyperlapse site lets you create your own Hyperlapse movies. The web version is stripped down so that everyone can enjoy it, which meant restricting frame rates and controls like speed. So if you want to make something like Jonas’ video, you need to download the source code.

We want you to make cool things with this code.

We want you to travel the world in Hyperlapse.