An Interview With Daniel Lawrence Lu

Paul Mison
9 min read · May 14, 2018


I first became aware of Daniel Lu’s slit-scan photography when his Wikipedia image of a Japanese N700A series Shinkansen appeared on a Slack that I read.

After finding his other Wikimedia images and his personal site, I thought I’d ask whether he was willing to answer some questions about his process, and he was. Here are my questions with his answers.

How did you first become aware of this sort of imaging, and what led you to take it up as part of your photography?

I found out about it when reading about Adam Magyar. I think I saw it on Hacker News or Reddit or something. I also really like trains so it seemed fun to get a line scan camera to scan trains.

Did anything you’d previously photographed help when working with line scan cameras?

Nope. It’s pretty different from other types of photography. In terms of composition with a line scan camera, you can’t really screw it up, because no matter how busy the background is, it will appear as horizontal stripes. I suppose a cleaner background is still better, though. Other than that, just set up the camera on a tripod and wait for interesting vehicles to roll by.

You use industrial line scan cameras. How difficult are they to get and work with?

You can find some monochrome line scan cameras on eBay. I think that’s how Adam Magyar got his monochrome line scan camera. I bought a NED XCM6040SAT2 this way, but it is very difficult to get set up. It uses the CameraLink interface, which requires you to get a PCI Express CameraLink capture card. And then the capture cards you find on eBay usually don’t come with the arcane proprietary drivers or software. I got an NVIDIA Jetson TX2 with an EPIX PIXCI CameraLink capture card.

NED XCM6040SAT2 camera (left) and EPIX PIXCI CameraLink capture card (right)

This setup works fine; however, it is very difficult to use outdoors. One would need a 12 V DC input for the camera, and then the NVIDIA Jetson TX2 also requires DC input (19 V, I think). Also, the Hirose HR10A four-pin connector for the 12 V DC input is hard to find online and I had to solder the cable myself. Ultimately, the setup requires a big backpack like the one Adam Magyar used, which is super clunky and difficult to travel with.

Also, you can’t find any color line scan cameras on eBay, and the USB3 to CameraLink adapters cost $2000 by themselves.

So, eventually, I just gave up going down that route and shelled out $2000 for a new Alkeria camera, which is USB3, and plugs into any laptop. Since I bought it from the manufacturer directly, I get all the software support I need. They provide a C++ API, which I use.

I should note that Alkeria, like most industrial equipment manufacturers, only sells to businesses. But it’s fairly straightforward to buy one if you email them and provide a company name.

Roughly, what’s the capture rate and resolution of the camera? Is it possible to economise with cheaper or older gear?

The Alkeria camera is 4096 × 2 pixels. One of the lines is RGRGRG… and the other is GBGBGB… The line scan rate can be up to 95 kHz. The length of the line is 28 mm (roughly the same as the diagonal of an APS-C sensor).

The NED XCM6040SAT2 I mentioned earlier has 6144 pixels, monochrome. The length of the sensor is 43 mm, roughly the same as the diagonal of 35mm full frame. It was only $300 on eBay, but then the CameraLink capture card was another $300, the CameraLink cable was around $80, and so on. If you add in the cost of the NVIDIA Jetson TX2 or a similar embedded computer with a PCI Express slot, the cost quickly increases.

If you want to economise with cheaper gear, you can use a regular video camera, or even a cellphone. There are apps to do line scan photography with your phone, using its video recording capability. However, even “slow motion” cell phone cameras only do up to a hundred or so frames per second, far from the tens of thousands that a true line scan camera is capable of.
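
For a sense of how the video approach works, here is a minimal sketch using OpenCV: read a video, take the centre column of every frame, and stack those columns side by side. This illustrates the general technique rather than any particular app, and the file names are placeholders.

```
// Slit-scan from an ordinary video: every frame contributes its centre
// column, and the columns are concatenated left to right, which is what
// a line scan camera does natively, only much faster.
#include <opencv2/opencv.hpp>
#include <vector>

int main(int argc, char** argv) {
    if (argc < 3) return 1;          // usage: slitscan <input video> <output image>
    cv::VideoCapture cap(argv[1]);
    if (!cap.isOpened()) return 1;

    std::vector<cv::Mat> columns;
    cv::Mat frame;
    while (cap.read(frame))
        columns.push_back(frame.col(frame.cols / 2).clone());

    if (columns.empty()) return 1;
    cv::Mat scan;
    cv::hconcat(columns, scan);      // stack all the columns into one image
    return cv::imwrite(argv[2], scan) ? 0 : 1;
}
```

At 30 or 60 frames per second, the subject has to move very slowly to avoid looking extremely compressed, which is exactly the limitation described above.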

You can also economise by building your own line scan camera.

The capture rate is also limited by the shutter speed. After all, it’s essentially a video camera, and you need time to properly expose each line. The fastest I’ve gone is 40,000 lines per second (the Shinkansen photo). With the Alkeria you can adjust the shutter speed in intervals of 100 ns.
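
To put those numbers in perspective: 40,000 lines per second leaves at most 25 µs to expose each line, and a train passing at 300 km/h moves at about 83 m/s, so each of those lines sees only around 2 mm of the train.

To help with getting a fast shutter speed, you need a fast lens. Speaking of which…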

Both cameras I mentioned have a standard Nikon F mount. You can put any Nikon lens on it. Since there are no electronic contacts for autofocus or aperture control, I’d recommend using manual or industrial lenses.

Most line scan cameras are meant for industrial use, so the ones on eBay sometimes come with macro lenses with a fixed focusing distance. (Close-Up Photography has some cool reviews of some of those lenses; they are often very good and superior to consumer macro lenses.) However, for taking pictures of trains, we want a regular photographic lens or an infinity-focus industrial lens. I have two F-mount lenses, the photographic Voigtlander Nokton 58mm f/1.4 and the industrial Myutron 5026. The latter is sharper and has much less vignetting, but it is prone to flare. Industrial lenses are nice because they have set screws to keep the aperture and focus rings in place. It’s very difficult to get things in focus, and you wouldn’t want to accidentally bump your focus ring.

Also note that you may not be able to put Nikon F-mount industrial lenses on a Nikon DSLR. Industrial lenses sometimes have elements protruding behind the mount, which can damage the mirror. I’ve put the Myutron lens on my mirrorless Sony a7R and a7R II and it works fine.

Your GitHub account has some C++ code for communicating with the camera. Was this hard to write or work with?

The communication with the camera is done by the Alkeria C++ API which was provided to me by Alkeria. The GitHub code I wrote is used for:

1) human interaction to adjust parameters and preview things, i.e. a “live view”, useful for aiming and focusing the camera

2) denoising and demosaicing the image

Alkeria provides a Windows program to do step 1, but its output contains many artifacts, like fine lines/stripes, and it doesn’t let you save raw 12-bit images. Besides, I use Linux. So I made my own program to do that, as well as to get rid of those artifacts. See my later answer about post-processing.

The Alkeria C++ API is fairly easy to work with. As for the rest of the code, it’s a lot of experimentation, but I’m pretty comfortable with writing C++ so I’m okay with that.
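
To make the demosaicing step concrete: with the dual-line layout mentioned earlier (RGRGRG… on one line, GBGBGB… on the other), every captured pair of lines can be combined into one full-colour line. Below is a minimal sketch of that idea, assuming that exact pixel order and 12-bit samples stored in 16-bit integers; it uses simple neighbour averaging, not the repository’s actual algorithm.

```
// Demosaicing one scan from the dual-line sensor. Line A is assumed to
// hold R,G,R,G,... and line B to hold G,B,G,B,...; the missing colour at
// each position is averaged from its horizontal neighbours.
#include <cstdint>
#include <vector>

struct RGB16 { uint16_t r, g, b; };

std::vector<RGB16> demosaicLine(const std::vector<uint16_t>& lineA,
                                const std::vector<uint16_t>& lineB) {
    const size_t w = lineA.size();
    std::vector<RGB16> out(w);
    for (size_t x = 0; x < w; ++x) {
        uint32_t sum = 0, n = 0;
        if (x % 2 == 0) {            // even column: R on line A, G on line B
            out[x].r = lineA[x];
            out[x].g = lineB[x];
            if (x > 0)     { sum += lineB[x - 1]; ++n; }  // B from neighbours
            if (x + 1 < w) { sum += lineB[x + 1]; ++n; }
            out[x].b = n ? uint16_t(sum / n) : 0;
        } else {                     // odd column: G on line A, B on line B
            out[x].g = lineA[x];
            out[x].b = lineB[x];
            if (x > 0)     { sum += lineA[x - 1]; ++n; }  // R from neighbours
            if (x + 1 < w) { sum += lineA[x + 1]; ++n; }
            out[x].r = n ? uint16_t(sum / n) : 0;
        }
    }
    return out;
}
```

Each capture yields one such colour line, and stacking them in time order builds up the final image. Note that this ignores the temporal offset between the two sensor lines, which is the fringing issue discussed in the post-processing answer below.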

Do you spend much time finding locations?

Yes, I usually look on Google Street View beforehand. A good location needs to show the moving thing from top to bottom. Most trains, especially their wheels, are hidden from direct view by fences, platforms, and so on. As for the Shinkansen photo, Himeji station is a well-known trainspotting hotspot on the internet.

I heard that Adam Magyar got all of his friends to visit every station in the New York Subway to find suitable locations. He even made a light meter to detect the presence of flickering lights, as those can really ruin line scan photos. I don’t have as many friends so I just use outdoor locations on Street View.

How many attempts do you need to create a photograph like the cable car or BART train on your website?

For both of those images I stood there for 2 hours and scanned every cable car and BART train that moved past until I got one I was happy with. It’s pretty hard to nail focus with a manual focus lens when all you can see is a 1 px tall image.

As for the Shinkansen photo, that was done in 1 hour while waiting for my train. Five or so trains blasted through the station at 300 km/h during that time, and luckily one of the pictures turned out pretty well.

Is there much post-processing involved? Is making the source image consistent difficult?

The post-processing to get rid of noise and artifacts is the main motivation for writing my C++ code on GitHub. For an example of the difference that my denoising makes, see this at full size.

The first image is the raw output from the camera. The second gets rid of fine alternating horizontal lines (vertical in that image). The third equalizes the random jitter in shutter speed, getting rid of fine vertical lines (horizontal in that image). The final image subtracts the minimum value of the image, setting the black point accurately. That image is a bit old, though; I’ve since improved my algorithms further.
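
As a rough illustration, here is one plausible way to implement the last two of those steps with OpenCV. It assumes the raw capture is a 16-bit single-channel image with one scan line per row; the repository’s actual algorithms may differ.

```
// Sketch of two clean-up steps: equalising per-line shutter jitter and
// setting the black point. One plausible implementation, for illustration.
#include <opencv2/opencv.hpp>

cv::Mat cleanup(const cv::Mat& raw16) {   // CV_16UC1, one scan line per row
    cv::Mat img;
    raw16.convertTo(img, CV_32F);

    // Jitter equalisation: scale each scan line so its mean matches the
    // global mean, flattening the fine stripes caused by exposure jitter.
    const double globalMean = cv::mean(img)[0];
    for (int y = 0; y < img.rows; ++y) {
        cv::Mat line = img.row(y);
        const double lineMean = cv::mean(line)[0];
        if (lineMean > 0) line *= globalMean / lineMean;
    }

    // Black point: subtract the image minimum so true black sits at zero.
    double minVal, maxVal;
    cv::minMaxLoc(img, &minVal, &maxVal);
    img -= minVal;

    cv::Mat out;
    img.convertTo(out, CV_16U);
    return out;
}
```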

Another point of consideration is that the Alkeria camera has two lines, as mentioned. The red and blue channels are on different lines, meaning that if you’re not careful, you’ll get colour fringing that looks like chromatic aberrations. If you know the speed of the subject, you can align those two lines.

A third point of consideration is that the speed of the subject affects how stretched out the image is. If it’s moving quickly, the image becomes compressed horizontally. If it’s moving slowly, the image becomes stretched out. So far most of my photos were of things moving at a constant speed, so I can just scale the image horizontally. If it’s not moving at a constant speed, you’d need to warp the image with a nonlinear function like cubic B-splines. I have so far been too lazy to implement this, but I’ll do it at some point.

I have some ideas to automatically detect the speed of the subject. For example, we could try to align the red and blue channels, taking advantage of the two-line sensor. We could also use a generalized Hough transform to detect axis-aligned ellipses. Since most ellipses turn out to be stretched-out versions of perfect circles (such as wheels), we can automatically figure out how stretched out those ellipses are, and in turn infer the speed.
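
The channel-alignment idea can be prototyped fairly directly: slide the blue plane horizontally against the red plane and keep the shift that minimises their difference. This is a toy sketch of the idea, not an implementation from the repository, and it assumes the two colour planes are single-channel float Mats.

```
// Estimating subject speed from the two-line sensor: the red and blue
// channels see the subject at slightly different moments, so the shift
// that best aligns them tells you how far the subject moved per scan.
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cstdlib>
#include <limits>

int bestRbShift(const cv::Mat& red, const cv::Mat& blue, int maxShift) {
    int best = 0;
    double bestScore = std::numeric_limits<double>::max();
    for (int s = -maxShift; s <= maxShift; ++s) {
        // Compare the overlapping region when blue is shifted by s columns.
        const int w = red.cols - std::abs(s);
        cv::Rect redRoi(std::max(0, s), 0, w, red.rows);
        cv::Rect blueRoi(std::max(0, -s), 0, w, red.rows);
        const double score =
            cv::norm(red(redRoi), blue(blueRoi), cv::NORM_L1) / (double(w) * red.rows);
        if (score < bestScore) { bestScore = score; best = s; }
    }
    return best;  // columns of offset between the red and blue planes
}
```

Combined with the line rate and the physical spacing between the two sensor lines, that offset translates into the subject’s speed, and from there into the horizontal scale factor.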

A fourth point of consideration is that if the line scan camera is tilted, the resulting image is sheared. All horizontal lines remain horizontal, but the vertical lines are slanted. Both the shearing and scaling can be fixed with, say, ImageMagick. Detecting shear automatically is fairly easy since most trains are full of vertical lines, so an edge detection algorithm can detect how slanted the image is.
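
With made-up correction values, the ImageMagick fixes might look like this:

```
# Hypothetical numbers: stretch the width to 150% to undo the horizontal
# compression of a fast subject, then take out a 5 degree shear from a
# tilted camera. The sign and size depend on the particular capture.
convert scan.png -resize 150%x100% -shear 5x0 corrected.png
```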

In his TEDx talk, Adam Magyar describes these distortions and how he fixed them. Too bad his code isn’t open source, as far as I know.

As a final note, the full size image can easily be a few gigapixels, so you need a pretty beefy computer. My computer has 32 GB of RAM and it struggles a lot. That’s why I try to do as much of the processing in my C++ code as possible, where I can do things in chunks or otherwise optimize certain operations. Sometimes, though, I use GIMP, which seems to handle large images fine (albeit very slowly).
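
As an illustration of the chunked approach, here is a sketch that streams a raw capture through a fixed-size strip buffer instead of loading it whole. The file names, geometry, and the per-pixel operation (a simple black-point subtraction) are all placeholders; it assumes a headerless file of 16-bit pixels of known width.

```
// Processing a multi-gigapixel capture in strips rather than in one go.
#include <cstdint>
#include <fstream>
#include <vector>

int main() {
    const size_t width = 4096;           // pixels per scan line
    const size_t linesPerStrip = 4096;   // ~32 MB per strip at 16 bits/pixel
    const uint16_t black = 128;          // example black point to subtract

    std::ifstream in("capture.raw", std::ios::binary);
    std::ofstream out("processed.raw", std::ios::binary);
    std::vector<uint16_t> strip(width * linesPerStrip);

    while (in.read(reinterpret_cast<char*>(strip.data()),
                   strip.size() * sizeof(uint16_t)) || in.gcount() > 0) {
        const size_t n = static_cast<size_t>(in.gcount()) / sizeof(uint16_t);
        for (size_t i = 0; i < n; ++i)   // any per-strip operation goes here
            strip[i] = strip[i] > black ? uint16_t(strip[i] - black) : uint16_t(0);
        out.write(reinterpret_cast<const char*>(strip.data()),
                  n * sizeof(uint16_t));
    }
    return 0;
}
```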

Would you describe this as an artistic as well as technological project?

Several people have mentioned that there’s lots of artistic merit in line scan photos. I’ve already mentioned Adam Magyar, who sells his line scan photos of subway trains for thousands of dollars apiece at fine art galleries. Jay Mark Johnson makes line scan photos too, and his are more abstract. For now the main goal of my line scan photography is documentary: to scan trains and such, which can’t be captured in their entire length with normal cameras. I really enjoy the aesthetics of such photos, though.

Is there something you’d particularly like to image with a line scan camera?

Bicycle races, car races, and horse races sound fun. Did you know that there exists a Canon 300mm f/1.8 lens specifically for the purpose of line scan photos of horse races?

Photo finishes for races are one of the main reasons why this photographic technique was invented in the 1930s. Maybe I should find some races and camp near the finish line to take unofficial photo finishes.

Thanks to Daniel for taking the time to answer these, and providing some additional images.
