Building a Display

Simon
9 min read · Jul 29, 2018


Using a Raspberry Pi to drive an LED matrix display from Python

The display is built from RGB LED strips, a Raspberry Pi Zero W, and a 50 W power supply. In the final assembly, a matte acrylic glass plate sits in front of the LEDs as a diffuser.

This is the start of a series about a project to build a 17x17 pixel RGB matrix display. This part covers interfacing with the display from a Raspberry Pi and building a Python library to display graphics on it. We will start with the basic interface to the WS2812B ICs and continue with coordinate and color transformations. By the end of this article, we will be able to display image files on the display.

In a future part of the series, we will move on to applications for this display. I’m using it as a wall clock that shows selfies others can send in via a website. As a screensaver, it can run Conway’s Game of Life.

Turning a chain of LEDs into a display

I used the WS2812B ICs in the form of strips with 60 LEDs per meter. Each IC drives three LEDs: red, green, and blue. The IC controls their brightness by pulse width modulation (PWM), from off to fully on in 256 steps. That means one byte per color per pixel, for a total of more than 16 million possible combinations. PWM means the IC turns the LED on and off periodically, so that the fraction of time the light is on corresponds to the set color value. The figure illustrates the concept:

Pulse Width Modulation (PWM)

When the period T is shorter than your eyes can react to, this gives the illusion of a light source of constant brightness. If you took a picture with a camera whose shutter speed is much faster than T, say 10x shorter, this illusion would go away and you would see a picture of pixels that are either on or off. A very interesting question is what the “shutter speed” of the human eye would be. One can learn a lot looking for an answer… A simpler question is: what T do you need to get a flicker-free display? This article gives some background. The short answer is that with a 100 Hz refresh rate, or T = 10 ms, you should be safe.

I’m not entirely sure what the PWM period on the WS2812B is and how exactly the timing of the update works. The datasheet lists a scan frequency of no less than 400 Hz. This seems to be the PWM frequency and would mean T = 2.5 ms. See also this and this discussion. The other factor we have to take into account is the update frequency of the RGB values in a chain of pixels. At 17x17 pixel resolution, we have 289 pixels in one chain. The datasheet lists a data rate of 800 kbps (kilobits per second). With 24 bits per pixel, a refresh is 289 x 24 bits = 6936 bits. That would take about 8.7 ms to update all pixels, for a frame rate of 115 Hz. That should be plenty even for moving scenes.
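As a sanity check, the refresh arithmetic can be reproduced in a few lines (the constants are taken straight from the text):

```python
# Back-of-the-envelope check of the refresh-rate calculation above.
PIXELS = 17 * 17              # 289 pixels in one chain
BITS_PER_PIXEL = 24           # one byte each for red, green, blue
DATA_RATE = 800_000           # 800 kbps from the WS2812B datasheet

bits_per_frame = PIXELS * BITS_PER_PIXEL
frame_time_ms = bits_per_frame / DATA_RATE * 1000
frame_rate = DATA_RATE / bits_per_frame

print(bits_per_frame)             # 6936
print(round(frame_time_ms, 1))    # 8.7
print(round(frame_rate))          # 115
```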

The WS2812B uses a timing-based code on a single data wire to update the pixel values. You can look at the datasheet for details. The idea is to transmit three symbols (0, 1, and RESET) over a single wire by using different ratios of high and low voltage levels. All ICs sit in a chain, each receiving an input data signal and sending an output data signal. After an IC has received 24 bits, it forwards everything else to its output until it sees a reset code. The reset is immediately forwarded to the downstream chips.

This means we need to send 6936 bits using the 0 and 1 symbols, followed by a single RESET before the next frame. Because the symbols require precise timing, the standard GPIO control interface won’t suffice. Instead, we will use a specialized library that can use the PWM output of the Pi to talk to the LEDs. This output is normally used for sound on the analog output, so that needs to be disabled by blacklisting the kernel module and disabling sound:

/etc/modprobe.d/snd-blacklist.conf:
blacklist snd_bcm2835
/boot/config.txt:
#dtparam=audio=on

To install the Python module, follow the instructions here. One reboot later you are ready to go and can test your LEDs using one of the example and test programs in the repository.

In your own programs, you can now use the library to set the color value of individual pixels:
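A minimal sketch of that, assuming the rpi_ws281x Python library; the pin and pixel count are assumptions matching my build (one off-screen pixel plus the 17x17 matrix, driven from GPIO 18, the Pi’s PWM output), not fixed requirements:

```python
# Sketch assuming the rpi_ws281x library; LED_COUNT and LED_PIN are
# assumptions, not requirements of the library.
from rpi_ws281x import PixelStrip, Color

LED_COUNT = 1 + 17 * 17   # one off-screen pixel + the matrix
LED_PIN = 18              # GPIO 18 carries the PWM signal

strip = PixelStrip(LED_COUNT, LED_PIN)
strip.begin()
strip.setPixelColor(1, Color(255, 0, 0))  # first on-screen pixel: red
strip.show()
```

This only needs to run on the Pi itself; on any other machine the library has no hardware to talk to.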

2D coordinates

To draw graphics on the display it is more convenient to address pixels in a two dimensional coordinate system. So we need a function to convert from two coordinates to one pixel index:
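The mapping I ended up with looks roughly like this. This is a sketch reconstructed from the description that follows; the function name and the exact row-direction convention are my own choices:

```python
WIDTH, HEIGHT = 17, 17
OFFSET = 1  # one off-screen pixel sits before the matrix in the chain

def xy_to_index(x, y):
    """Map 2D screen coordinates to a position in the LED chain.

    The strips are wired as a serpentine, so even and odd rows run in
    opposing directions. Strip 14 was glued on the wrong way, which
    flips the direction of every row from 14 on (hence the y < 14).
    """
    if not (0 <= x < WIDTH and 0 <= y < HEIGHT):
        raise ValueError(f"({x}, {y}) is off screen")
    if (y % 2 == 0) == (y < 14):
        index = y * WIDTH + x                  # row runs left to right
    else:
        index = y * WIDTH + (WIDTH - 1 - x)    # row runs right to left
    index += OFFSET                            # skip the off-screen pixel
    # Redundant if the computation is right, but handy to catch bugs.
    if not (OFFSET <= index < WIDTH * HEIGHT + OFFSET):
        raise AssertionError("index computation is broken")
    return index
```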

The code first checks that the given coordinates are on the screen. Then it computes the pixel position in the chain. And finally it checks again that the computed position is within the bounds of the screen. That is not strictly necessary if the computation of the position is correct, but it’s handy for catching errors in the code. How to compute the position in the chain depends, of course, on how the strips are wired up. In my case, I connected the output signal from one strip to the input of the next strip above on the short edge. That means even and odd strips run in opposing directions. Unfortunately, I made a mistake when gluing strip 14 and put it on the wrong way. That explains the (y < 14) in the if statement. Also, I added one LED that does not belong to the screen matrix directly after the output from the Pi. So there is one off-screen pixel position that is simply added to the computed index. Apart from that, the computation is straightforward: the y coordinate is multiplied by the screen width and the x coordinate is added to give the offset within the row (running either from left to right or vice versa).

Abstractions

Usually, it is a good idea to abstract your code from the concrete implementation details of the library you are using. That might allow you to re-use your graphics code on another display or more easily adapt to changes in future versions of the library. To do this I defined a class that provides all functionality to draw something on the display. The main workhorse of this class is the image function:
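A sketch of such a class; the names and the backend interface are mine, and the class in the repository may differ:

```python
class Display:
    """Abstraction over the LED library. The backend only needs to
    offer set_pixel(index, rgb) and show()."""

    def __init__(self, backend, width=17, height=17):
        self.backend = backend
        self.width = width
        self.height = height

    def image(self, data, coord, color):
        """Write a 2D sequence of pixels to the display.

        coord maps (x, y) to a chain index; color maps a source pixel
        value to an (r, g, b) byte triple.
        """
        for y, row in enumerate(data):
            for x, pixel in enumerate(row):
                self.backend.set_pixel(coord(x, y), color(pixel))
        self.backend.show()

# Example with an in-memory backend: grayscale floats in [0, 1],
# origin in the top left of a hypothetical 2x2 display.
class ListBackend:
    def __init__(self, n):
        self.pixels = [(0, 0, 0)] * n
    def set_pixel(self, i, rgb):
        self.pixels[i] = rgb
    def show(self):
        pass

backend = ListBackend(4)
display = Display(backend, width=2, height=2)
display.image(
    [[0.0, 0.5], [1.0, 0.0]],
    coord=lambda x, y: y * 2 + x,
    color=lambda v: (int(v * 255),) * 3,
)
print(backend.pixels)  # [(0, 0, 0), (127, 127, 127), (255, 255, 255), (0, 0, 0)]
```

Swapping the backend for the real LED library, or swapping the two lambdas, changes where and how the image appears without touching the graphics code.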

It accepts any two-dimensional Python sequence and two transformations: one for coordinates and one for color. We just saw one possible coordinate transformation using x and y coordinates. Now, in computer graphics it is quite common to place the origin in the top left corner with axes pointing right and down. In mathematics, the origin would be in the bottom left corner with axes pointing right and up. Or maybe I want to turn the display on its side for some reason. By making the coordinate system a parameter, it gets abstracted from the actual graphics code and can be easily changed at any point in time. The same argument applies to the color transformation. A PNG loader might return an array of bytes. Some other code might want to use floats from 0 to 1 to represent the color values.

Blitting, anti-aliasing, gamma correction…

So far, we have basically constructed a display from scratch and written a library to display a chunk of data on it. What we want to do is draw an analog clock with a nice background image. Before we get to that, let’s discuss some basics from computer graphics:

Bit block transfer

Our image function from above transfers a two-dimensional matrix of colored points to the individual LED ICs. You can think of this matrix as the frame buffer of the screen: it contains the complete content of what is shown. What we need are functions to modify this frame buffer to draw the desired picture. A typical operation that we will also need is to composite multiple images into the frame buffer. That operation is known as bit blitting, dating back to the Seventies. Bit blitting in general transfers blocks of memory while applying a per-pixel operation to form the result image. It’s basically a fancy copy operation. In the early days, the operations were Boolean, like AND or OR. That’s enough to select whether a pixel T should come from image A or B, given a binary mask M:

T = (NOT M AND A) OR (M AND B)

This would place a pixel from A in the target buffer if the mask is zero at that coordinate and from B otherwise. Today you would also expect to do alpha blending: Combine the color values from both sources by a weighted sum, where the weight determines the level of transparency. Also, you would want to specify destination coordinates and a source rectangle to control what part of the images are used.
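Both the Boolean mask blit and its alpha-blending generalization fit in a few lines; this is a sketch, and the function names are mine:

```python
def blit_mask(a, b, mask):
    """Per-pixel select: T = (NOT M AND A) OR (M AND B).
    Takes from image b where the mask is set, else from image a."""
    return [[bp if m else ap for ap, bp, m in zip(ra, rb, rm)]
            for ra, rb, rm in zip(a, b, mask)]

def blend(a, b, alpha):
    """Alpha blend two RGB images: alpha = 0 gives a, alpha = 1 gives b."""
    return [[tuple(int((1 - alpha) * ca + alpha * cb)
                   for ca, cb in zip(pa, pb))
             for pa, pb in zip(ra, rb)]
            for ra, rb in zip(a, b)]

a = [[(0, 0, 0), (0, 0, 0)]]
b = [[(255, 0, 0), (0, 255, 0)]]
print(blit_mask(a, b, [[0, 1]]))   # [[(0, 0, 0), (0, 255, 0)]]
print(blend(a, b, 0.5))            # [[(127, 0, 0), (0, 127, 0)]]
```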

Spatial anti-aliasing

Anti-aliasing tries to reduce “jaggies” in an image that is displayed on a screen with less than infinite resolution. Imagine a landscape you are looking at from a hill. Now you reduce this view to a finite number of pixel color values. For each pixel, you have to decide on a single color that represents that part of the real view. A patch of grass in the distance will be reduced to a single average green pixel. This process of coming up with pixel values is called sampling. When you later look at this image on a screen, you have a similar problem: your screen resolution is likely different from that of the image. So your screen again has to sample from the image to define the individual pixel values. How this sampling is done can have a huge impact on how a human perceives the resulting image, especially how closely it matches the original view. The name aliasing comes from the field of signal processing: spatial frequencies above the Nyquist limit are cut off by sampling and so appear as lower frequencies in the sampled image. Sounds complicated? Ever noticed how the rotor of a helicopter seems to switch directions while spinning up? Same effect, but now it’s a temporal alias. Aliasing effects pop up in a lot of places. A really weird one is the Umklapp effect.

In our case, we want to draw angled lines for the hands of a clock. We can compute these lines as arbitrarily precise vector graphics and then sample them onto the 289 pixels we have. In such a case, a common way to deal with aliasing is supersampling: for each real pixel we compute colors for multiple sub-pixels and take the average. A regular rectangular grid of sub-pixels is not necessarily the best approach; their positions might even be random to avoid a sharp Nyquist cutoff. Since we only want to draw single lines, we can place the sub-pixels exactly on the line and mix the color of the line with the background, weighted by the distance from sub-pixel to pixel center.

Here is the code:
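A minimal sketch of this sub-pixel approach; the sample count and the linear weight falloff are my assumptions, not necessarily what the repository does:

```python
import math

def draw_line_aa(buffer, x0, y0, x1, y1, color, samples=64):
    """Blend `color` into `buffer` along the line from (x0, y0) to
    (x1, y1). Sample points sit exactly on the line; each one is mixed
    into its nearest pixel, weighted by the distance from the sample
    to the pixel center."""
    for i in range(samples + 1):
        t = i / samples
        sx = x0 + t * (x1 - x0)          # sub-pixel position on the line
        sy = y0 + t * (y1 - y0)
        px, py = round(sx), round(sy)    # nearest pixel
        if not (0 <= px < len(buffer[0]) and 0 <= py < len(buffer)):
            continue
        # Weight falls off linearly with distance from the pixel center.
        w = max(0.0, 1.0 - math.hypot(sx - px, sy - py))
        buffer[py][px] = tuple(int((1 - w) * bg + w * fg)
                               for bg, fg in zip(buffer[py][px], color))

# Draw a slanted white line into a small black buffer.
screen = [[(0, 0, 0)] * 5 for _ in range(5)]
draw_line_aa(screen, 0, 0, 4, 2, (255, 255, 255))
```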

Gamma correction

In a previous section, I described how the LEDs in our display use PWM to control what color is shown. The color value between 0 and 255 determines for how long the LED is on, leading to a linear relationship between intensity and the programmed color value. Unfortunately, that doesn’t fit very well with how our eyes perceive light. Human sensitivity is closer to a power law, meaning that you can perceive smaller absolute changes in brightness in dark areas than in bright ones. By the way, humans can see as little light as a single photon. Because of this power law dependency, it’s common to do a gamma correction from the measured intensity I to the stored pixel value E (encoding gamma) and back to the displayed intensity D (decoding gamma). The simplest form uses a single exponent gamma > 1:

E = I ^ (1/gamma)
D = E ^ (gamma)

The commonly used sRGB color space defines a slightly more complicated gamma correction. If we want to display images encoded in the sRGB color space, we have to do this correction to compute the PWM values for our LEDs.

If you continue along this line of thinking, you might ask if we also have to correct for the relative sensitivity of the eye to red, green, and blue. We absolutely have to! Humans are much more sensitive to green than to blue. The sRGB color space takes this into account to avoid wasting bits. Fortunately, the LEDs in the WS2812B already have relative intensities that fit the eye’s response. We don’t have to do any additional correction to get good enough results.

Here is the code to do sRGB gamma correction when loading a PNG image:
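A sketch of the decoding step: the piecewise transfer function is the standard sRGB one, while the lookup-table approach and the names are my own:

```python
def srgb_to_linear(value):
    """Decode one sRGB-encoded byte (0-255) into a linear intensity
    byte (0-255) suitable as a PWM value, using the piecewise sRGB
    transfer function."""
    s = value / 255.0
    if s <= 0.04045:
        linear = s / 12.92                  # linear segment near black
    else:
        linear = ((s + 0.055) / 1.055) ** 2.4
    return round(linear * 255)

# Precompute a lookup table once and apply it per channel when
# loading a PNG.
SRGB_TO_PWM = [srgb_to_linear(v) for v in range(256)]

print(SRGB_TO_PWM[0], SRGB_TO_PWM[128], SRGB_TO_PWM[255])  # 0 55 255
```

Note how a mid-gray of 128 maps to a PWM value of only 55: without this correction, sRGB images would look washed out on the display.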

The end of part one

Now we have everything we need to draw a clock and display images loaded from a file. You can find the full code at https://github.com/five-elephants/hello-ursula. If you have questions or suggestions, you can reach me via Twitter.

In the next part of the series we will build a web application to interact with the display.

The display showing “Starry Night” by van Gogh.
