Generating Images in JavaScript Without Using the Canvas API

And putting them into a web notification

Alastair Coote
The Guardian Mobile Innovation Lab
12 min read · Jul 12, 2017


At the lab, we’ve experimented a lot with the web Notification API. But mobile capabilities are always improving, and since our last experiment Google expanded their Notification API capabilities to add an image attribute, letting you use Android’s BigPictureStyle notification on phones. This is particularly interesting for us, since in previous experiments we had to cram data visualizations into the icon of the notification. Now, with a larger canvas to play with, we wondered what we could achieve.

We weren’t sure we were going to have an opportunity to find out. But in a perfect example of why being a newsroom developer is so interesting, the UK prime minister surprised everyone by calling a snap election in June. Not everyone was happy, but we saw our chance: With six weeks’ notice, we had an opportunity to develop a new test. What could we put together? The answer turned out to be more complicated than we originally imagined.

We ended up sending a web notification (to Android users only) that contained live updating results from the major UK parties as they came in throughout the night of June 8–9. Users also had the option to be alerted on results from one or more of the local elections. The vote totals were presented as text in the collapsed view of the notification, and as a data visualization in the expanded view, which used the big image spot to show the totals.

With great image comes great (bandwidth) responsibility

Although new, larger image notifications give a much better visual experience than icons, they’re worse in one important area: bandwidth. Our previous experiments downloaded remote images for each notification, which worked well for small icon images. But the image notifications are much larger, even more so when multiplied by the device’s pixel density. For instance, the Samsung Galaxy S7’s 4x pixel density means a notification image will typically be over 150KB. To add to that, the UK has 650 electoral constituencies, so a user might have received 650 separate images, downloading over 97MB over the course of the evening. Clearly, that’s not acceptable.

Creating PNG images locally

Normally, the answer to this is simple: the HTML Canvas API lets us draw images locally and read them out as PNG data URLs by calling canvas.toDataURL('image/png'). But service workers don’t have access to the Canvas API. The OffscreenCanvas API is on the way, and in any other situation we’d wait for it. But we couldn’t.
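For comparison, the usual page-side approach looks something like this (a minimal sketch, not our actual code):

    // The usual approach on a page: draw to a canvas, then export a PNG data URL.
    const canvas = document.createElement("canvas");
    canvas.width = 300;
    canvas.height = 150;

    const ctx = canvas.getContext("2d");
    ctx.fillStyle = "#c70000";
    ctx.fillRect(0, 0, canvas.width, canvas.height);

    const dataUrl = canvas.toDataURL("image/png");
    // None of this is available inside a service worker.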

I remembered that the Canvas 2D context has a getImageData() function, which returns an array of all the RGBA values in an image. Could we somehow replicate this low-level functionality inside a worker to create images? It turns out the answer is “yes, but with a whole lot of caveats”. So let’s begin.

Memory limits

One important thing you need to remember: the download size of an image is rarely its actual size. Both JPEG and PNG images use compression to reduce the size of a downloaded file dramatically, but the OS can’t use a compressed image directly — it has to be decompressed into memory first. That’s a concern on mobile devices, especially on Android phones, given how varied they are. We don’t know how much memory a device has, let alone how much is being used by other apps, different OS versions, and so on. When we start dealing with devices with very high pixel density, the compressed image sizes are scary, but the uncompressed sizes are terrifying.

Some research into different image file formats led me down the rabbit hole of the PNG file specification, where I found something useful. While PNG files can contain RGBA arrays, they can also use other pixel formats, including palette-based ones. These let us specify an RGBA palette at the start of the file, then store one palette index value per pixel instead of four separate red, green, blue and alpha values. This saves a lot of space, as this simplified example of a 10x1 single-color image demonstrates:
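The original example isn’t reproduced here, but the idea looks roughly like this for a 10x1 image filled with a single red color (in a real PNG the RGB palette and the alpha values live in separate chunks, but the principle is the same):

    // RGBA: four bytes per pixel, so a 10x1 image needs 40 bytes of pixel data
    const rgba = new Uint8Array([
      255, 0, 0, 255,  255, 0, 0, 255,  255, 0, 0, 255,  255, 0, 0, 255,  255, 0, 0, 255,
      255, 0, 0, 255,  255, 0, 0, 255,  255, 0, 0, 255,  255, 0, 0, 255,  255, 0, 0, 255,
    ]);

    // Palette-based: declare the color once, then store one index per pixel
    const palette = new Uint8Array([255, 0, 0, 255]); // entry #0: red, fully opaque
    const pixels = new Uint8Array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0]); // 10 bytes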

Suddenly our data is around a quarter of its original size, and is hopefully a lot more manageable across different devices. But how do we actually edit an image?

The PNG File Format

As someone who hadn’t worked much with raw file formats before, I found the idea of parsing a PNG file intimidating, but it turns out that PNGs are actually quite simple. The file is split up into a series of “chunks” that signify different components of the file (a header, an RGB palette, an alpha palette, data). Each chunk specifies a length and a string identifier, followed by data, then a CRC check to ensure data consistency. A simplified representation of a 10px x 1px dotted line:
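The original diagram isn’t reproduced here, but the rough structure is: an eight-byte PNG signature, then an IHDR chunk holding the width, height, bit depth and color type, a PLTE chunk holding the RGB palette, a tRNS chunk holding the alpha value for each palette entry, an IDAT chunk holding the (zlib-wrapped) pixel data, and a closing IEND chunk, with each chunk carrying its own length and CRC.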

(if you want to read more about the PNG format, the W3C PNG specification has the full details)

If we want to manipulate the image we just need to find the offsets for these chunks and write bytes to them manually. But how do we manually write bytes in JavaScript?

Typed Arrays

JavaScript is a wonderfully flexible language. I can make an array, add whatever I want to it, and change the size of it at will:
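Something like this, for instance:

    const anything = [];
    anything.push(1);
    anything.push("a string");
    anything.push({ or: "an object" });
    anything.length = 1000; // and it happily resizes itself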

But that flexibility comes at a cost. Being this dynamic takes up a lot of memory, and that’s something we don’t want to do. Luckily, these days browsers have typed arrays — fixed length, integer arrays that are far more memory efficient. They have no push or pop operations — just the ability to set a number at an index:
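For example:

    // A Uint8Array has a fixed length and only holds integers from 0 to 255
    const bytes = new Uint8Array(10);
    bytes[0] = 255;  // fine
    bytes[1] = 256;  // out-of-range values wrap around: this stores 0
    bytes.push(1);   // TypeError: bytes.push is not a function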

We’ll use these to create and edit our image data. But manually writing UInt8 bytes is a little too low-level — PNG files also use strings, UInt16 and UInt32 values all over the place. Browsers have built-in functionality to read and write all these data types in the form of a DataView, but some initial tests showed that they are surprisingly slow. Instead, inspired by some code in Hein Rutjes’ libpng-es6, I put together an “ArrayBuffer Walker” that makes its way through an array, writing values of different types:
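The lab’s walker isn’t shown here, but a minimal sketch of the idea might look like the following. The class and method names are illustrative rather than the actual library code, and the multi-byte reads and writes are big-endian because that’s what PNG uses:

    // A minimal sketch of an "ArrayBuffer walker": keep an offset into a
    // Uint8Array and read/write values of different types as we go.
    class ArrayBufferWalker {
      constructor(bufferOrLength) {
        // Accepts either an existing ArrayBuffer or a length for a new one
        this.bytes = new Uint8Array(bufferOrLength);
        this.offset = 0;
      }
      writeUint8(value) {
        this.bytes[this.offset++] = value & 0xff;
      }
      writeUint16(value) {
        // PNG stores multi-byte integers big-endian (most significant byte first)
        this.writeUint8(value >>> 8);
        this.writeUint8(value);
      }
      writeUint32(value) {
        this.writeUint16(value >>> 16);
        this.writeUint16(value);
      }
      writeString(text) {
        for (let i = 0; i < text.length; i++) {
          this.writeUint8(text.charCodeAt(i));
        }
      }
      readUint8() {
        return this.bytes[this.offset++];
      }
      readUint16() {
        return (this.readUint8() << 8) | this.readUint8();
      }
      readUint32() {
        // >>> 0 keeps the result unsigned
        return ((this.readUint16() << 16) | this.readUint16()) >>> 0;
      }
      readString(length) {
        let result = "";
        for (let i = 0; i < length; i++) {
          result += String.fromCharCode(this.readUint8());
        }
        return result;
      }
      skip(length) {
        this.offset += length;
      }
    }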

Creating the walker was a good reminder of all the various data type rules I haven’t thought about in years: UInt8 values range from 0 to 255 and take up one byte, UInt16 values range from 0 to 65,535 and take up two bytes, and UInt32 values go up to 4,294,967,295 and take up four bytes.

So we can now use the walker to read and write a PNG chunk. For example, reading the IHDR:
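The original gist isn’t included here; a sketch of what that might look like with the walker above (the IHDR chunk sits right after the eight-byte PNG signature and carries 13 bytes of data):

    // Assumes the ArrayBufferWalker sketch above.
    function readIHDR(pngArrayBuffer) {
      const walker = new ArrayBufferWalker(pngArrayBuffer);
      walker.skip(8);                      // PNG signature: \x89PNG\r\n\x1a\n
      const length = walker.readUint32();  // always 13 for IHDR
      const type = walker.readString(4);
      if (type !== "IHDR") {
        throw new Error("Expected IHDR as the first chunk, got " + type);
      }
      return {
        width: walker.readUint32(),
        height: walker.readUint32(),
        bitDepth: walker.readUint8(),      // 8 for our palette images
        colorType: walker.readUint8(),     // 3 means palette-based
        compression: walker.readUint8(),   // always 0: zlib/deflate
        filter: walker.readUint8(),        // always 0
        interlace: walker.readUint8(),     // 0 = no interlacing
      };
    }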

And so on. Let’s say we want to add red and blue to our color palette. The PLTE chunk is one long array of RGB values, so we’d do:
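Again as a sketch: plteDataOffset is assumed to be the offset of the PLTE chunk’s data, found by walking the chunks as above, and the chunk’s CRC would need recalculating afterwards.

    // Write two RGB palette entries: #0 red, #1 blue.
    function writeRedBluePalette(walker, plteDataOffset) {
      walker.offset = plteDataOffset;
      walker.writeUint8(255); walker.writeUint8(0); walker.writeUint8(0);   // #0: red
      walker.writeUint8(0);   walker.writeUint8(0); walker.writeUint8(255); // #1: blue
      // In the real file the PLTE chunk's CRC also has to be rewritten.
    }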

With red at palette index #0 and blue at #1, all we need to do in order to draw a red/blue dotted line is write 0s and 1s in our data chunk, right? Well, not exactly.

Caveat: compression

The PNG file format uses zlib for compression in the data (IDAT) chunk. Normally that’s a very wise idea, but it makes it impossible for us to directly manipulate byte data because we have no idea what the compression algorithm has done to it. There is a JS implementation of zlib available, but memory usage rears its head again — if we want to decompress an image, we’ll need to load both the compressed and decompressed data into our service worker’s memory space, so it’ll be more wasteful (and CPU intensive) than just loading the decompressed version in the first place. Serving uncompressed data would have awful implications for download size, but I realized that we can shunt the decompression overhead down the browser pipeline by serving the entire PNG file with Content-Encoding: gzip. That way, the image download size remains small, but our service worker receives the decompressed body: the best of both worlds.
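The lab’s actual server setup isn’t shown in this post, but the trick can be sketched with a small Node handler; the file name and port here are hypothetical:

    // Serve a PNG whose IDAT data is stored uncompressed, but gzip the whole
    // file over the wire. The browser inflates it before our fetch() sees it.
    const http = require("http");
    const fs = require("fs");
    const zlib = require("zlib");

    http.createServer((request, response) => {
      const png = fs.readFileSync("./results-template.png"); // hypothetical file
      response.writeHead(200, {
        "Content-Type": "image/png",
        "Content-Encoding": "gzip",
      });
      response.end(zlib.gzipSync(png));
    }).listen(8080);

In the service worker, fetch() then hands us the already-decompressed bytes when we call response.arrayBuffer().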

But that doesn’t mean we can ignore zlib entirely: the PNG parser still expects data to be in zlib format, even if it’s uncompressed (i.e. using a compression factor of zero). I’ll skip the full story (the zlib and DEFLATE specifications, RFC 1950 and RFC 1951, have the details should you be curious), but the short version is that zlib has its own chunks and checksums, so we’re basically reading and writing chunks within chunks. Accordingly, I created a quick ZlibWriter that sits on top of our ArrayBufferWalker for doing exactly this.
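The ZlibWriter itself isn’t reproduced here, but the format it has to produce can be sketched directly: a two-byte zlib header, one or more “stored” (uncompressed) DEFLATE blocks of at most 65,535 bytes each, and an Adler-32 checksum of the uncompressed data at the end.

    // A sketch of wrapping raw bytes in a zlib stream of stored blocks.
    function adler32(bytes) {
      let a = 1, b = 0;
      for (let i = 0; i < bytes.length; i++) {
        a = (a + bytes[i]) % 65521;
        b = (b + a) % 65521;
      }
      return ((b << 16) | a) >>> 0;
    }

    function zlibStore(data) {
      const MAX_BLOCK = 65535;                  // stored blocks hold at most 64KB - 1
      const numBlocks = Math.max(1, Math.ceil(data.length / MAX_BLOCK));
      // 2-byte header + 5 bytes per block header + the data + 4-byte Adler-32
      const out = new Uint8Array(2 + numBlocks * 5 + data.length + 4);
      let o = 0;
      out[o++] = 0x78; out[o++] = 0x01;         // zlib header: deflate, 32K window
      for (let i = 0; i < numBlocks; i++) {
        const block = data.subarray(i * MAX_BLOCK, (i + 1) * MAX_BLOCK);
        out[o++] = i === numBlocks - 1 ? 1 : 0; // BFINAL bit; BTYPE 00 = stored
        out[o++] = block.length & 0xff;         // LEN, little-endian
        out[o++] = block.length >>> 8;
        out[o++] = ~block.length & 0xff;        // NLEN: one's complement of LEN
        out[o++] = (~block.length >>> 8) & 0xff;
        out.set(block, o);
        o += block.length;
      }
      const checksum = adler32(data);           // checksum of the uncompressed data
      out[o++] = (checksum >>> 24) & 0xff;      // stored big-endian
      out[o++] = (checksum >>> 16) & 0xff;
      out[o++] = (checksum >>> 8) & 0xff;
      out[o++] = checksum & 0xff;
      return out;
    }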

Making the actual image edits we wanted to do all along

With the zlib issue solved, we can finally write our palette indexes to the IDAT chunk of the file (remembering to add “row filter” bytes along the way — that confused me for a long time) and manipulate the content of the image.
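A sketch of that layout: each row of an 8-bit palette image is a filter-type byte (0 means no filtering) followed by one palette index per pixel, and the whole thing is then wrapped in the zlib stream shown above and written as the IDAT chunk’s data.

    // Build the raw scanline data for an 8-bit palette image.
    function buildScanlines(width, height, indexForPixel) {
      const out = new Uint8Array(height * (width + 1));
      let o = 0;
      for (let y = 0; y < height; y++) {
        out[o++] = 0;                       // row filter byte: 0 = "None"
        for (let x = 0; x < width; x++) {
          out[o++] = indexForPixel(x, y);   // palette index for this pixel
        }
      }
      return out;
    }

    // The 10x1 red/blue dotted line, using palette entries #0 (red) and #1 (blue)
    const dottedLine = buildScanlines(10, 1, (x) => x % 2);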

In case, for some reason, you don’t feel like implementing all of this yourself, the lab has wrapped up all of our byte-editing code into a JS library, which is available here:

https://github.com/gdnmobilelab/png-pong

It has a very basic set of functionality — creating images, drawing rectangles/lines, and drawing sections of images onto other images. There is also an add-on, PngPongFont, which uses that same functionality to read in bitmap font files and write text.

Tailoring our image for Android notifications

We can edit an image, but we still need to work out what edits we actually want to do. There were three factors to take into account:

Device pixel ratio

As we discussed before, different devices have different “pixel ratios,” or display densities. If we want our image to appear crisp and clear on all devices, we need to tailor the size of the image according to this ratio. This meant multiplying our image width and height by window.devicePixelRatio (which isn’t available in the service worker, so we had to send it from the page when users signed up; a sketch of this is below) and creating a separate, resized copy of our sprite image:

The “lozenge” in the top right was repeated in different colors for the minor parties

for each ratio (PngPong does not resize images). These pixel ratios aren’t necessarily whole numbers, so I resized for each decimal place (creating sprites@1.0.png, sprites@1.1.png, and so on), with each device only downloading the version it needed. I applied the same logic to each of the bitmap fonts created by PngPongFont.
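A sketch of what capturing the ratio at signup time might look like; the endpoint and payload shape are made up for illustration:

    // Runs on the signup page, where window.devicePixelRatio is available.
    async function subscribeWithPixelRatio() {
      const registration = await navigator.serviceWorker.ready;
      const subscription = await registration.pushManager.subscribe({
        userVisibleOnly: true,
        applicationServerKey: VAPID_PUBLIC_KEY, // assumed to be defined elsewhere
      });

      await fetch("/api/subscribe", {           // hypothetical endpoint
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          subscription,
          // Rounded to one decimal place so it maps onto sprites@1.0.png,
          // sprites@1.1.png and so on
          devicePixelRatio: Math.round(window.devicePixelRatio * 10) / 10,
        }),
      });
    }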

Android screen size

Android devices are a varied bunch, and even devices with the same pixel ratio don’t have the same width. The notification image width is typically the width of the screen, minus 32 dip (device-independent pixels; multiply by window.devicePixelRatio to get physical pixels) on either side. But not always: once the screen gets to a certain width, the notification UI “snaps back” to a more conventional phone-sized notification. Like so:

After an annoying period of experimentation that involved spinning up Android emulators with different screen widths over and over again, I discovered that the breakpoint is 600dip. At 600dip or above, the OS flips into “tablet mode” (you can just about see the Chrome tablet UI, complete with tab selector, on the right-hand side above) and changes the notification size. Below 600dip, the notification takes up the whole width of the screen.
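In rough code, and with one loudly-flagged assumption: the 600dip breakpoint is the measured value from above, but the width the notification falls back to in “tablet mode” is a placeholder rather than something we measured here.

    // Rough sizing logic in device-independent pixels (dips).
    const TABLET_MODE_BREAKPOINT_DIP = 600;   // measured breakpoint
    const TABLET_MODE_WIDTH_DIP = 360;        // placeholder assumption, not measured
    const SIDE_MARGIN_DIP = 32;               // margin on either side of the image

    function notificationImageWidth(screenWidthDip, devicePixelRatio) {
      const widthDip = screenWidthDip >= TABLET_MODE_BREAKPOINT_DIP
        ? TABLET_MODE_WIDTH_DIP
        : screenWidthDip - SIDE_MARGIN_DIP * 2;
      // Multiply by the pixel ratio to get the physical pixels we draw at
      return Math.round(widthDip * devicePixelRatio);
    }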

Android version

Just to add even more variety to the mix, BigPictureStyle notifications are handled very differently in Android M and below. Rather than taking up a specific box in the center of the notification, they serve as a background that draws underneath the action buttons. Compare and contrast:

M on the left, N on the right
Argh

After yet more trial and error involving making a lot of test images in PngPong and measuring how they ended up being drawn on the screen, I found out that Marshmallow and below require an image that is 1.15x the size you want to display, with the extra being cropped equally from the top, left, bottom and right. By using a white background and adjusting our starting X and Y draw points, we could, at last, draw a notification that looked consistent across pixel ratios, screen size and Android version.
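In code form, that adjustment looks roughly like this, with 1.15 being the factor found by trial and error above:

    // Android M and below crop roughly 1.15x the displayed size, trimming
    // equally from every edge, so we render larger and offset the drawing.
    function marshmallowCanvasFor(displayWidth, displayHeight) {
      const scale = 1.15;
      const canvasWidth = Math.round(displayWidth * scale);
      const canvasHeight = Math.round(displayHeight * scale);
      return {
        canvasWidth,
        canvasHeight,
        // Start drawing here so the visible area lines up after the crop
        offsetX: Math.round((canvasWidth - displayWidth) / 2),
        offsetY: Math.round((canvasHeight - displayHeight) / 2),
      };
    }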

To save re-rendering the entire image every time a new result came in, we created a reusable template when the user signed up for notifications. It looked like this:

and stored it on the device via the Cache API. We then used PngPong to draw text and the constituency squares on top of the template.

Actually using the image in a notification

We can store and retrieve an image by grabbing its underlying ArrayBuffer and putting it in the cache, but getting it into a notification is more complex. URLs specified in a showNotification() call do not pass through the worker’s fetch event (though hopefully they will eventually), but we can use Blob URLs and Data URLs. Blob URLs would be ideal, since they are just a slightly different representation of the same underlying data, but URL.createObjectURL() has been removed from service worker environments and I’m not aware of any other way to create a Blob URL. Instead, we can create a Data URL via a FileReader:
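Something like this (a minimal sketch):

    // Convert the PNG's ArrayBuffer into a data URL inside the service worker.
    function toDataURL(arrayBuffer) {
      return new Promise((resolve, reject) => {
        const reader = new FileReader();
        reader.onloadend = () => resolve(reader.result);
        reader.onerror = reject;
        reader.readAsDataURL(new Blob([arrayBuffer], { type: "image/png" }));
      });
    }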

then pass that to our showNotification() call. This is a real shame, though, because after all the work to reduce memory usage as much as possible, in our last step we’re creating a separate, base-64 encoded string representation of our image that takes up a huge amount of space. Luckily, the code still worked on every device I tried, so I just had to push ahead with it.

So, (a very simplified version of) our final code looked like this:
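The original gist isn’t reproduced here, but a sketch of the overall flow might look something like the following; the cache name, template URL and drawing helper are stand-ins rather than the lab’s actual code:

    // A simplified sketch of the push handler.
    self.addEventListener("push", (event) => {
      event.waitUntil((async () => {
        const result = event.data.json();

        // 1. Grab the pre-rendered template we stored at signup time
        const cache = await caches.open("notification-images");
        const cached = await cache.match("/templates/results-template.png");
        const template = await cached.arrayBuffer();

        // 2. Draw the latest totals onto the template (the PngPong and
        //    PngPongFont editing steps are omitted here)
        const edited = drawResultsOntoTemplate(template, result); // hypothetical helper

        // 3. Convert to a data URL, using the FileReader helper above, since
        //    we can't create Blob URLs in a service worker
        const imageUrl = await toDataURL(edited);

        // 4. Show the notification with the big image attached
        await self.registration.showNotification("Election results", {
          body: result.summaryText,
          image: imageUrl,
          icon: "/icons/icon-192.png",
          tag: "election-results", // replaces the previous notification
        });
      })());
    });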

And produced our final product:

Next steps

As is always the case with projects like this, there were some things I didn’t get to do. PngPong is structured in a slightly weird way, emitting events as it walks through the PNG file rather than allowing access to the whole file at once. That’s because I wanted to integrate it with the new ReadableStream API: reading and editing small segments of the file as it streams would be a lot less memory intensive than loading it all at once. It’s not simple to implement (e.g. what if a stream is sliced in the middle of a chunk?), but it should be possible. If we could combine that with the ability to create a Blob URL and/or fire fetch events for notifications, we could end up with a very memory-efficient system for editing images in service workers.

We had assumed that this would be the last election that the lab would cover. But like I said before, news development can keep you on your toes. The final result of the snap election was so inconclusive that there is speculation of another election soon. If that happens, keep an eye out for another chance to experience these notifications in real time!

Github projects mentioned:

png-pong: https://github.com/gdnmobilelab/png-pong
libpng-es6, by Hein Rutjes

The Guardian Mobile Innovation Lab operates with the generous support of the John S. and James L. Knight Foundation.
