Measuring a lensless camera PSF and raw data

Eric Bezzam
Dec 4, 2021 · 7 min read


In a previous tutorial, we built a lensless camera with a Raspberry Pi HQ camera sensor and a piece of tape as a diffuser. Now we will measure the camera’s point spread function (PSF) and capture raw data using this library. With these measurements, we can reconstruct pictures with our lensless camera!

LenslessPiCam point spread function, raw data, and reconstruction.

Note that we have denoted exercises for students as such:

This is an exercise.

If you are an instructor / trying to replicate this tutorial, feel free to send an email to eric[dot]bezzam[at]epfl[dot]ch.

In measuring the PSF for our lensless camera, we assume that we have a linear shift-invariant (LSI) system, such that we only need to capture a single PSF. Namely, the PSF captures the response between a single point (where we put our light source) and our sensor. With this assumption, we can simply shift the PSF to obtain the response between any other point and our sensor. More on the LSI assumption can be read in the following tutorial and/or in Section 2.1 of the modeling/algorithms guide.
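To make the LSI assumption concrete, here is a minimal NumPy sketch (with synthetic arrays, not real measurements) showing that, under a circular-convolution model, shifting the point source simply shifts the measurement:

```python
import numpy as np

def lsi_forward(scene, psf):
    # circular convolution via the FFT: the measurement model under the LSI assumption
    return np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf)))

rng = np.random.default_rng(0)
psf = rng.random((64, 64))     # stand-in for a measured PSF
scene = np.zeros((64, 64))
scene[20, 30] = 1.0            # a single point source

meas = lsi_forward(scene, psf)

# shifting the point source simply shifts the measurement
shifted = lsi_forward(np.roll(scene, (5, 5), axis=(0, 1)), psf)
print(np.allclose(shifted, np.roll(meas, (5, 5), axis=(0, 1))))  # True
```

This is exactly why one PSF measurement suffices: the response to any point in the scene is a shifted copy of the response we measure on-axis.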

1) Installing the Python library

We have prepared a library with scripts and utility functions in order to interact with the LenslessPiCam, analyze the measured data, and perform the reconstruction. The gradient descent and ADMM reconstruction code is inspired by the original DiffuserCam repository. We have reformatted it into an object-oriented structure and optimized the code quite a bit.

Exercise: install the LenslessPiCam library if you haven’t done so already (on both the Raspberry Pi and your local machine). You can use the following commands from a Terminal to download from GitHub and install the library in a virtual environment (see here for the latest setup instructions).

# clone
git clone git@github.com:LCAV/LenslessPiCam.git
# install in virtual environment
cd LenslessPiCam
python3.9 -m venv lensless_env
source lensless_env/bin/activate
pip install -e .
# on the Raspberry Pi you may also need
sudo apt-get install libimage-exiftool-perl
sudo apt-get install libatlas-base-dev

Here’s how you can generate an SSH key pair and add it to your GitHub account (which you may need for cloning).

Below, we will run scripts from the root of the repository on your local machine, and in a dark room!

2) Measuring the point spread function (PSF)

We will be using the same setup from this tutorial on measuring optical PSFs. Place the point source about 40cm from your LenslessPiCam, as shown below.

As discussed in the PSF measurement tutorial, we want PSFs that do not saturate and that use the full dynamic range. To this end, the camera’s exposure setting will be the most useful parameter to play around with until you get a good histogram.

python scripts/remote_capture.py --exp 0.1 --iso 100 --bayer --fn <FN> --hostname <HOSTNAME>

where <HOSTNAME> is the hostname / IP address of the Raspberry Pi and <FN> is the name of the file in which to save the PSF, e.g. --fn psf will create a file called psf.png. If omitted, it will be saved as test.png. Note that if using the Legacy Camera on the Bullseye OS, you should include the --legacy flag as well!

The lowest exposure value we could get to was 0.02. If you find that you are still not able to use the full dynamic range, you may have to adjust the potentiometer of the point source or use a smaller resistor in series with the LED.
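As a numerical sanity check on your captures, here is a small, hypothetical helper (not part of the library) that flags clipped pixels and reports how much of the dynamic range is used, demonstrated on simulated 12-bit data:

```python
import numpy as np

def check_exposure(img, bit_depth=12):
    """Hypothetical helper: report the fraction of saturated pixels
    and the fraction of the dynamic range actually used."""
    max_val = 2 ** bit_depth - 1
    saturated = np.mean(img >= max_val)            # fraction of clipped pixels
    used = (img.max() - img.min()) / max_val       # spread relative to full range
    return saturated, used

# simulated 12-bit measurement with a healthy histogram (values below 4095)
rng = np.random.default_rng(1)
img = rng.integers(0, 4000, size=(128, 128))
sat, used = check_exposure(img)
print(f"saturated: {sat:.3%}, dynamic range used: {used:.1%}")
```

A good PSF capture should show zero (or near-zero) saturated pixels while still spanning most of the range.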

Note that we use the --bayer parameter in order to save the measurement as raw Bayer data to do color correction afterwards. Omitting this parameter will use the blue and red gains estimated by the HQ camera’s software (see Section 5.7 of the Raspberry Pi Camera Algorithm and Tuning Guide for their algorithm). However, we have noticed these gains lead to unsatisfactory results for the LenslessPiCam PSF.

Below is a good-looking histogram without color correction.

python scripts/analyze_image.py --fp data/psf/tape_bayer.png --bayer

As expected by the histogram, the PSF (below) is greenish.

Exercise: measure a PSF with your LenslessPiCam. Make sure it doesn’t saturate and try to use the full dynamic range.

Using the analyze_image.py script, we can determine the appropriate red and blue gains (by trial-and-error to line up the different color histograms), and use the --save parameter to save the color-corrected RGB image.

python scripts/analyze_image.py --fp data/psf/tape_bayer.png --bayer --gamma 2.2 --rg 2.1 --bg 1.3 --save data/psf/tape_rgb.png

Exercise: using the analyze_image.py script, determine the appropriate red and blue gains.

Come up with an algorithm to automatically determine the red and blue gains, given that we know the resulting image should be a white caustic pattern. This type of algorithm is known as automatic white balancing (AWB). In Section 5.7 of this guide, you can get an idea of the Raspberry Pi camera’s approach.
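As a starting point for the exercise, here is a sketch of the classic gray-world heuristic (a simple baseline of my own, not the Raspberry Pi’s algorithm): if the caustic pattern should be white, each channel should have roughly the same mean, so the gains can be estimated relative to green. Shown on synthetic data with known gains:

```python
import numpy as np

def gray_world_gains(rgb):
    """Gray-world AWB sketch: estimate red/blue gains so that
    all channels have the same mean as the green channel."""
    r = rgb[..., 0].mean()
    g = rgb[..., 1].mean()
    b = rgb[..., 2].mean()
    return g / r, g / b   # (red gain, blue gain)

# synthetic "greenish" image whose true color is gray:
# red attenuated by 2.1 and blue by 1.3
rng = np.random.default_rng(2)
gray = rng.random((64, 64))
img = np.stack([gray / 2.1, gray, gray / 1.3], axis=-1)

rg, bg = gray_world_gains(img)
print(round(rg, 2), round(bg, 2))   # recovers the simulated gains (about 2.1 and 1.3)
```

Gray-world works here because the target really is achromatic; for general scenes it can fail, which is why real AWB pipelines are more elaborate.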

3) Analyzing the PSF

So how do we know if we have a good PSF? Unlike the PSF of a lens, the LenslessPiCam PSF has a very large support. While this is normally undesired, for LenslessPiCam we want a PSF with large support and many directional filters.

When our diffuser is “focused” we will have thin lines (high-contrast contours), which means that small shifts will be uncorrelated with each other. By measuring the width of the PSF’s autocorrelation peak, we have a quantitative metric to compare different PSF candidates. Moreover, by computing the width of the autocorrelation along different directions, we can determine if the PSF has a sufficient number of directional filters.

Below are plots of the autocorrelations of the grayscale PSF and the red, green, and blue channels. We also plot a cross-section of the autocorrelation to analyze the width of the peak.

python scripts/analyze_image.py --fp data/psf/tape_rgb.png --lensless
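To illustrate the metric, here is a simplified sketch (not the library’s exact implementation) that computes the autocorrelation via the FFT and measures the width of the central peak at half maximum. A sharp random pattern decorrelates immediately, giving a narrow peak, while a horizontally smeared version gives a much wider peak along that direction:

```python
import numpy as np

def autocorr_peak_width(psf, thresh=0.5):
    """Simplified width metric: autocorrelation via the FFT, then the
    width of the central peak at `thresh` times its maximum, measured
    along a horizontal cross-section through the peak."""
    p = psf - psf.mean()
    ac = np.real(np.fft.ifft2(np.abs(np.fft.fft2(p)) ** 2))  # Wiener-Khinchin
    ac = np.fft.fftshift(ac) / ac.max()                      # peak at center, normalized
    row = ac[ac.shape[0] // 2]
    return int(np.count_nonzero(row >= thresh))

rng = np.random.default_rng(3)
sharp = rng.random((128, 128))                               # high-contrast pattern
# horizontally smeared version, mimicking a "defocused" diffuser
blurred = sum(np.roll(sharp, s, axis=1) for s in range(-5, 6))

print(autocorr_peak_width(sharp), autocorr_peak_width(blurred))
```

A narrower peak means better "focus": nearby scene points map to more distinguishable patterns on the sensor.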

4) Measuring raw data

Now that we have measured a good PSF, we can proceed to capturing some raw data. We will measure raw data at the same distance at which we measured the PSF. We recommend using a backlit source, such as a phone or computer monitor.

If we would like to measure / reconstruct raw data at different distances, we also have to measure PSFs at those distances. In that way, we can actually perform computational refocusing.

We will use the same script as for the PSFs to capture raw Bayer data:

python scripts/remote_capture.py --exp 0.1 --iso 100 --bayer --fn <FN> --hostname <HOSTNAME>

For the raw data, it is not necessary to use the full dynamic range. Below is the histogram without color correction.

python scripts/analyze_image.py --fp data/raw_data/thumbs_up_bayer.png --bayer

From the above histogram, we can again expect a very green raw measurement.

Exercise: place your phone (or any backlit source) in front of your camera and at the same distance you measured the PSF. Measure raw data and make sure it doesn’t saturate.

Just like with the raw PSF data, we can use the analyze_image.py script to color correct the Bayer data, using the same gains determined for the PSF. Moreover, we can use the --save parameter to save the color-corrected RGB image.

python scripts/analyze_image.py --fp data/raw_data/thumbs_up_bayer.png --bayer --rg 1.45 --bg 1.55 --save data/raw_data/thumbs_up_rgb.png

In general we cannot just line up the red, green, blue histograms as the corresponding object may indeed be more red, green, or blue.

Exercise: color-correct your raw data using the analyze_image.py script.

5) Reconstruction

In another tutorial, we will talk more about reconstruction. But here we give a sneak peek of what we can expect.

Running the following command will apply a few iterations of a reconstruction technique known as ADMM (Alternating Direction Method of Multipliers) on the data we measured and processed above, and save the intermediate reconstructions. File paths and default parameters are set in configs/admm_thumbs_up.yaml.

python scripts/recon/admm.py save=True

Not bad 👍
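ADMM itself deserves its own tutorial, but the core task of undoing the convolution with the PSF can be illustrated on simulated, noise-free data with a Tikhonov-regularized inverse filter (a toy baseline of my own; ADMM adds priors and handles noise far better):

```python
import numpy as np

rng = np.random.default_rng(4)
psf = rng.random((64, 64))
psf /= psf.sum()                                   # normalized stand-in PSF
scene = np.zeros((64, 64))
scene[24:40, 24:40] = 1.0                          # hypothetical object

# simulate a noiseless measurement under the LSI (circular convolution) model
H = np.fft.fft2(psf)
meas = np.real(np.fft.ifft2(np.fft.fft2(scene) * H))

# Tikhonov-regularized inverse filter: conj(H) / (|H|^2 + eps)
eps = 1e-8
est = np.real(np.fft.ifft2(np.fft.fft2(meas) * np.conj(H) / (np.abs(H) ** 2 + eps)))

err = np.linalg.norm(est - scene) / np.linalg.norm(scene)
print(f"relative error: {err:.4f}")
```

On real data this simple filter amplifies noise at frequencies where the PSF spectrum is small, which is precisely where iterative methods with priors, like the ADMM solver above, earn their keep.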

Useful links

The original building guide gives some examples / insight on good-looking PSFs: https://waller-lab.github.io/DiffuserCam/tutorial/building_guide.pdf


Eric Bezzam

PhD student at EPFL. Previously at Snips/Sonos, DSP Concepts, Fraunhofer IDMT, and Jacobs University. Most of past work in audio and now breaking into optics!