The making of a LED TV

Jonas Oscarsson
Tictail  -  Behind the Scenes

--

Or, alternatively, “an embarrassing attempt at embedded systems by a bunch of high-level application programmers”. You’ll be the judge.

The last week before Christmas was dubbed “demo week” at Tictail. Being a fairly new employee, I was unfamiliar with the concept, but I think it’s best described as “during one week, build whatever you want that’s related to Tictail in some way”. Then, at the end of the week, each self-formed group gets to do a demo and a winner is chosen by vote.

Alex, who normally works with the checkout and order systems at Tictail, had an idea about visualizing Tictail orders on a world map shown on an LED wall. This LED wall would then be put up in the office and serve as a real-time reminder of Tictail’s main goal: getting our merchants more orders. He had no problem convincing people to join this exciting project, and a diverse five-person group was quickly put together.

Now, the observant reader might already have noticed that the project idea was expressed as an LED wall, while the post title says LED TV. There’s a reason for that. When we set out on this project, I believe we all envisioned something like this:

The vision in our heads.

That’s not quite what we ended up with. When we started to look for what we needed to purchase, we realised three things:

  1. The equipment is not cheap.
  2. The equipment is expensive.
  3. We’re good at high-level programming, since that’s what we need and do at Tictail, but unfortunately we’re not experienced enough in low-level programming of FPGAs and the like.

Although we like learning new things we only had a week to complete the project, so we decided to get this into Python as quickly as possible. In short, we wanted the following interface:

ledtv.set_pixel(x, y, r, g, b)

This should set the pixel at position x, y to the color r, g, b on the LED TV. If we had that, we figured it would be comparatively easy to fetch order location data, draw a world map and project orders onto the map in a nice way.
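To give an idea of what “project orders on the map” involves, here is a sketch of how an order’s location could be mapped to a pixel on a 128x64 display, assuming a simple equirectangular world map. The function name and approach are our illustration, not necessarily what the finished channel did.

```python
WIDTH, HEIGHT = 128, 64

def latlon_to_pixel(lat, lon):
    # Map latitude [-90, 90] and longitude [-180, 180] onto display
    # coordinates, (0, 0) being the top-left (north-west) pixel.
    x = int((lon + 180.0) / 360.0 * (WIDTH - 1))
    y = int((90.0 - lat) / 180.0 * (HEIGHT - 1))
    return x, y

# An order from (0, 0), off the coast of West Africa, lands mid-display:
latlon_to_pixel(0, 0)  # (63, 31)
```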

Hardware

At first, we considered buying LED strips by the meter and forming them into a wall of LEDs. However, given the spacing of the LEDs on the strips, we quickly realised we would need an LED skyscraper to get a resolution good enough to display order locations with some granularity.

The 32x32 LED matrix.

Next approach: LED matrices. There are plenty to choose from, but Adafruit has collected some reliable alternatives and we finally decided to go with 32x32 chainable LED matrices, largely because we found a great guide and API on GitHub with lots of useful information for exactly those matrices. Conveniently for us, the guide also didn’t use an FPGA or real-time system to drive the matrices; it only used a Raspberry Pi, which we already had in the office.

We decided to buy eight 32x32 matrices because (1) we didn’t know if we would be able to drive more from one Pi, and (2) budget. This gave us a 128x64 display which fits well with the ~2:1 ratio of a world map (and also lowered our expectations from LED wall to LED TV, which was probably good for the office working environment anyway).

The LED matrices came with magnets for mounting, so we cut an old product photographing backdrop to the correct size, stencil-painted a Tictail logo on it and mounted metal strips.

If you remember one of the first Tictail landing pages, that page had a picture with products photographed in front of this board. And yes, that’s a puppy in the background (aww!).

To communicate with the LED matrices from the Raspberry Pi, the GPIO pins were wired up to the first LED matrix, which in turn was connected to the next, and so on. We had seen examples online where four matrices had been chained, but none with eight. This is what happened when we tried:

Eight LED matrices daisy-chained and driven from one Pi.

It worked, but as can be seen in the video, the last four displays had artifacts. The proper thing to do at this point would probably have been to acknowledge that a Pi isn’t a real-time system and that, since the LED matrices require exact timings to show the correct color and brightness, an FPGA would be better suited to drive them.

What did we do instead? We got another Pi, and drove four matrices on each Pi, which gave us some interesting application-level sync problems instead. Hey, those we can handle!

Software

So, we now had two Pis and a C++ lib to control two independent rows of four-matrix displays. We still didn’t have it in Python and we had no way of drawing a frame on the full display. Also, as the timings were very important to avoid flickering and artifacts, we didn’t want to run the Python process rendering the orders map on the actual Pis. To solve these problems, we came up with the following architecture:

The server and channels are written in Python and run on a server. The master and renderer are written in C++ and run on the Pis. “What’s a channel?” you ask. Well, we figured we might want to display something other than the orders map, so we prepared for it early on. A channel is just a script that implements a method which draws something on a frame. We ran each one in its own process and connected them to the server via ZeroMQ.

We decided to use WebSockets for all communication, as the APIs for them are straightforward and easy to use in many languages. If you don’t know what WebSockets are, think of them as TCP sockets with an HTTP handshake and a simple message protocol on top. They also have the great benefit of being available in modern web browsers, so we could write a JavaScript simulator for the LED TV to use when developing new channels.

The flow is that the server creates a frame, and passes it to the channel which populates the pixels in the frame.

class SomeChannel(Channel):
    def update_frame(self, frame):
        for x in xrange(self.MAP_WIDTH):
            for y in xrange(self.MAP_HEIGHT):
                frame[x, y] = self.get_color(x, y)
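
Note that the channel indexes the frame with a tuple, frame[x, y]. The post doesn’t show the real Frame class, but a minimal version supporting that syntax could look like this:

```python
class Frame(object):
    def __init__(self, width, height):
        self.width, self.height = width, height
        # Row-major buffer of (r, g, b) tuples, initially all black.
        self.pixels = [(0, 0, 0)] * (width * height)

    def __setitem__(self, xy, rgb):
        # Called for frame[x, y] = (r, g, b).
        x, y = xy
        self.pixels[y * self.width + x] = rgb

    def __getitem__(self, xy):
        x, y = xy
        return self.pixels[y * self.width + x]
```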

This frame is then sent over a WebSocket down to the master on the first Pi. The master timestamps the frame and splits it into a lower and an upper half; the lower half is forwarded to one renderer, the upper half to the other.
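Sketched in Python (the real master is written in C++), and assuming the frame arrives as a row-major list of pixel rows, the timestamp-and-split step might look like this; the 50 ms render delay is our assumption:

```python
import time

def split_frame(rows, delay_ms=50):
    # Stamp the frame with a render deadline slightly in the future, so
    # both Pis have time to receive their half before it is due, then
    # split it into an upper and a lower half, one per renderer.
    deadline = time.time() + delay_ms / 1000.0
    half = len(rows) // 2
    upper = (deadline, rows[:half])  # forwarded to the first renderer
    lower = (deadline, rows[half:])  # forwarded to the second renderer
    return upper, lower
```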

The renderer runs two threads. The first listens for incoming frames and puts them in a queue. The second thread (shown below) pulls frames from the queue, sleeps until the current time matches the timestamp and then renders the frame. As the Pis’ clocks are synchronized using NTP, this makes them render each frame at the exact same time and avoids tearing.

void run_renderer(Canvas* canvas) {
    while (true) {
        // blocks if no frames available
        Frame frame = queue.dequeue();
        // skip frame if too late
        int64_t diff = frame.timestamp - timestamp();
        if (diff < 0) continue;
        // usleep(0) just yields to the next thread
        // (almost busy-sleep, better precision)
        while (diff > 0) {
            usleep(0);
            diff = frame.timestamp - timestamp();
        }
        render_frame(canvas, frame);
    }
}

This seems like something that could have ended up like this:

A programmer had a problem. He thought to himself, “I know, I’ll solve it with threads!”. has Now problems. two he

It actually didn’t — the whole thing worked surprisingly well!

Result

This is the result, put up on a wall in the office for everyone to see. For me, watching orders happen worldwide in real time like this makes two things very clear:

  1. Wow, lots of people use and depend on Tictail on a day-to-day basis.
  2. It’s true what we’re always saying — we’re just getting started.

--