How to Reduce GIF Size 10x by Playing to the Spec’s Hidden Strengths

by Efraim Globus

Lightricks
Lightricks Tech Blog
8 min read · Jul 26, 2022


If you’re looking to reduce GIF size, this article will help. In this post, I give an overview of how GIF encoding works and show how an improved encoder can produce more compact files by reading the spec carefully and encoding only the differences between frames.

My own experiments with reducing GIF size started when my first daughter was born. I wanted to share the good news with my family, and used our Motionleap app to make a vibrant animated poster with sparkles and a playful video layer.

I wanted to export the announcement poster as an animated GIF for two reasons.

First, because Gmail (and many other web apps) automatically plays an animated GIF attached to a message, ensuring every recipient sees the full effect.

Second, because GIFs are rendered in a loop by default. This makes them a natural choice for short animations that don’t have a clear start or end point.

A weighty problem

When I exported the animation as a GIF I got a ~35MB file, which made it impractical to send as an attachment. I found I could significantly reduce the size with some GIF utilities, but it made me realize that Motionleap should have produced a compact file in the first place, so that all of our users could get the most out of their creations.

I decided to try to understand the GIF spec better, hoping to improve our current GIF encoder, and was surprised (and relieved) to find out that it is not as complicated as one might expect. In fact, in comparison to modern video encoders, GIF encoding is very simple.

In this article, I’ll give a top-level view to enable a solid understanding of the process, but if you’d like to dive into the finer details, then this is a great educational blog post.

How GIF encoding works

It is worth mentioning that the standard for GIF encoding was developed back in 1987, making it one of the oldest image formats still in wide use today. It was optimized for the capacity of the hardware available at the time, when most screens could not display more than 16 distinct colors. The GIF format evolved, and in 1989 it was extended to support animations and transparency. This extended format is known as GIF89a and is now the de facto standard for GIFs.

The GIF format only supports up to 2⁸ distinct colors in a frame, whereas for a standard RGB image there are 2²⁴ possible distinct colors. As a result, when encoding a frame, the color space needs to be reduced to a color palette with 2⁸ colors. (The process of choosing the best color palette is a non-trivial optimization issue and this arguably deserves its own blog post!)
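To make the mapping concrete, here is a minimal sketch of how a single pixel could be matched to a reduced palette. The function name and the distance metric are illustrative assumptions; real quantizers such as median-cut or octree also have to choose the palette itself, which is the hard part.

```python
def nearest_palette_index(rgb, palette):
    """Map a 24-bit RGB color to the index of the closest palette entry.

    Uses squared Euclidean distance in RGB space -- a simple heuristic
    for illustration only; production quantizers also pick the palette.
    """
    r, g, b = rgb
    return min(
        range(len(palette)),
        key=lambda i: (palette[i][0] - r) ** 2
                    + (palette[i][1] - g) ** 2
                    + (palette[i][2] - b) ** 2,
    )


# A tiny 4-color palette instead of a full 256-color table.
palette = [(0, 0, 0), (255, 0, 0), (0, 255, 0), (255, 255, 255)]
shirt_pixel = nearest_palette_index((200, 30, 20), palette)  # a reddish pixel
```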

A ‘color table’ defines this palette. It can be defined globally as the default color palette for all the frames, or locally for a specific frame, overriding the global table. In the case of Motionleap, we decided to use only a global table, because the animation in most cases is based on a still image to which we add movement.

As a result, the color space is very much alike across frames. When we experimented with a different color table for each frame, the same pixels got different colors in consecutive frames which resulted in a flickering effect.

For example, a realistic image of a person wearing a red shirt will, of course, include many similar shades of red. As a result of choosing the reduced color table, we end up with very few shades of red. Each reddish pixel from the original image gets mapped into one of the red colors in the reduced color table.

On the other hand, if we create a new color table for each frame, we are likely to end up with different shades of red in each and this can result in the same color from the original image getting mapped onto different shades of red in consecutive frames. Again, this results in an unpleasant flickering effect.

A figure showing how encoding a gif with a different color table for each frame can result in flickering.
Using a new color table for each frame results in flickering as seen on the figure’s left leg.
A figure showing how using a global color table can solve flickering.
Using a global color table ensures that stationary pixels receive the same color.

Encoder Opportunities

The color table is represented as an array of 2⁸ colors in RGB format with 3 bytes per color. When encoding a frame, the actual color of each pixel is represented as an index into the color table, instead of a complete color, which is only 1 byte instead of 3.
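A back-of-the-envelope calculation, using a hypothetical 480×480 frame, shows the saving:

```python
# Hypothetical 480x480 frame, purely to illustrate the arithmetic.
WIDTH, HEIGHT = 480, 480

raw_rgb = WIDTH * HEIGHT * 3      # 3 bytes per pixel, no palette
color_table = 256 * 3             # global color table, stored once per file
indexed = WIDTH * HEIGHT * 1      # 1 byte (a palette index) per pixel

# indexed + color_table comes to roughly a third of raw_rgb.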

Although this does compress each frame somewhat (1 byte per pixel instead of 3), on its own it is not enough. The GIF format uses the LZW lossless encoding algorithm, which in essence attempts to map long sequences of bytes to shorthand codes. Instead of writing out the actual pixel color indices, the encoder produces shorter codes that represent longer sequences of pixel color indices.

The shortest possible sequence is one byte, which neatly maps to a single pixel’s color index. Of course, we want to have codes that represent longer sequences. The problem is that if we map longer sequences to codes, how will the decoder know which sequence the codes represent?

One possible solution is to transmit the mapping table as well, but LZW uses a different approach, where the codes are built incrementally during the encoding process. The decoder uses a similar process to incrementally rebuild the same mapping table. The core idea behind the algorithm is that when the encoder first sees a new sequence, it encodes it in full. A new shorthand code is then added to the table to be used to encode subsequent instances of that sequence. This compaction method works well when the data contains many long sequences that are repeated.
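To make the idea concrete, here is a minimal LZW sketch in Python. It captures the incremental dictionary building described above, but leaves out details the real GIF format adds on top (variable-width codes, clear and end-of-information codes, and sub-block packing):

```python
def lzw_encode(data: bytes) -> list[int]:
    # Start with one code per possible byte value; longer sequences
    # earn codes as they are first seen.
    table = {bytes([i]): i for i in range(256)}
    codes, seq = [], b""
    for byte in data:
        candidate = seq + bytes([byte])
        if candidate in table:
            seq = candidate                # keep extending the current match
        else:
            codes.append(table[seq])       # emit code for the longest match
            table[candidate] = len(table)  # register the new sequence
            seq = bytes([byte])
    if seq:
        codes.append(table[seq])
    return codes


def lzw_decode(codes: list[int]) -> bytes:
    # Rebuild the same table incrementally, one step behind the encoder.
    table = {i: bytes([i]) for i in range(256)}
    prev = table[codes[0]]
    out = bytearray(prev)
    for code in codes[1:]:
        # A code not yet in the table can only be the sequence the
        # encoder just registered: prev + its own first byte.
        entry = table[code] if code in table else prev + prev[:1]
        out += entry
        table[len(table)] = prev + entry[:1]
        prev = entry
    return bytes(out)
```

Note that no mapping table is ever transmitted: the decoder derives it from the codes themselves.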

The LZW encoding format is well defined and cannot be changed. The question then is, what can be done in order to make the file more compact?

Looking backward to move forward

Imagine you are talking to a friend over the phone and describing what you see. You describe the view, the color of the sky and all the objects in the scene. When a bird flies by, you don’t have to repeat everything; you can just describe the bird, and your friend’s mental image of the scene will be updated to include this new element.

Video works in the same way. There’s no reason to redefine the entire scene every frame; we just need to define what has changed. A one-minute video at 60 frames per second can be much smaller than a naïve concatenation of 3,600 JPEGs because the codec can leverage the similarities between the frames and encode less information explicitly.

Recycling pixels

As I read the GIF spec, I discovered there is a way to leverage previous frames by using an option named ‘disposal methods.’ This option gives instructions on what to do with the previous frame once a new frame is displayed.

One of these options is ‘do not dispose’ which means ‘keep the currently displayed pixels’. Pixels that are not covered by the next frame will continue to display. An easy way to think about it is to imagine you have two images printed on transparent sheets. When you place one image on top of the other, every pixel that is not covered by the second image is still visible.

There are two ways to determine which pixels from the previous frame should not get covered by the current frame:

  1. Partial frames: in this instance, the current frame can be smaller than the previous frame. This means that only the part of the frame that changes will be defined, while the rest of the frame remains unchanged. This option comes with limitations: the smaller frame must be a rectangle and only one rectangle can be defined.
  2. Transparent pixels: it is possible to use ‘transparent’ as a pixel color value. Out of the 256-color table, one color can be defined as transparent. When the transparent color is used, the pixel from the previous frame can still be displayed if the ‘do not dispose’ option is configured.

In our use case, Option 1 cannot be used, as any part of the image can be animated and thus we must define the complete image for each frame. We cannot assume that changes occur only in one sub-rectangle.

So, we are left with Option 2, the ‘transparent pixels’ option. We still need to define every pixel color value, but we can use the transparent color value for every pixel that is identical to the previous frame.
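A sketch of this substitution, assuming palette index 255 has been reserved as the transparent color (the index choice is arbitrary):

```python
TRANSPARENT = 255  # assume the last palette slot was given up for transparency

def diff_frame(prev_indices: bytes, cur_indices: bytes) -> bytes:
    """Replace every pixel that is unchanged since the previous frame
    with the transparent index. With the 'do not dispose' disposal
    method, the decoder keeps showing the old pixel underneath."""
    return bytes(
        TRANSPARENT if cur == prev else cur
        for prev, cur in zip(prev_indices, cur_indices)
    )
```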

An infographic showing how we encode only the difference between two sequential frames.
The decoder keeps the pixels from frame 1 and only changes the non-transparent pixels, resulting in frame 2.

However, if we still need to define a color value for every pixel, and the only difference is that we assign the transparent color instead of the actual color, how does this help us? We still need to define the color value for every pixel…

The answer is that since many pixels are identical to their value in the previous frame, many pixels can have the same color, i.e. transparent. If we have long sequences with the same value, this means that the LZW compaction works very well.
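A toy measurement illustrates the effect. Random noise stands in for a detailed photo, and the minimal LZW encoder below counts codes rather than producing a real GIF stream, so the numbers are only indicative:

```python
import random

def lzw_encode(data: bytes) -> list[int]:
    # Minimal LZW: new sequences earn shorthand codes incrementally.
    table = {bytes([i]): i for i in range(256)}
    codes, seq = [], b""
    for byte in data:
        candidate = seq + bytes([byte])
        if candidate in table:
            seq = candidate
        else:
            codes.append(table[seq])
            table[candidate] = len(table)
            seq = bytes([byte])
    if seq:
        codes.append(table[seq])
    return codes


TRANSPARENT = 255
random.seed(0)

# A busy 100x100 frame of palette indices (random noise stands in for a
# detailed photo), followed by a frame where only 100 pixels change.
frame1 = bytes(random.randrange(255) for _ in range(10_000))
frame2 = bytearray(frame1)
for i in range(100):
    frame2[i] = (frame2[i] + 1) % 255   # guaranteed to differ from frame1
frame2 = bytes(frame2)

# Encode the full second frame vs. only its differences from the first.
full = lzw_encode(frame2)
delta = lzw_encode(bytes(
    TRANSPARENT if c == p else c for p, c in zip(frame1, frame2)
))
assert len(delta) < len(full)  # long transparent runs compress far better
```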

Above, I mentioned that we use a global color table, since switching color tables between frames can cause flickering effects. Because we quantize to only 2⁸ colors (instead of 2²⁴), the likelihood is that, for many pixels, the value will be identical to the value in the previous frame. The only sacrifice we had to make was to give up one color in favor of the transparent color, so we went down from 256 colors to 255.

Applying this technique allows us to reduce the file size by an order of magnitude, which enables us to create GIFs with much higher resolution (larger frames) while still producing files of reasonable size.

To sum up…

  • Use a global color table.
  • Set the disposal method to ‘do not dispose’.
  • Allocate a transparent color.
  • For every pixel that has the same value as the previous frame, set the value to transparent.

Lessons I learned during this process

Don’t be afraid to peer into the way that things are done in order to improve them — a little time spent investigating the possibilities can lead to a really useful and lasting improvement to your own process.

Being a developer gives you a unique perspective on the inner workings of your product — but being a user of your product is also very valuable. In the everyday use of your product, you’ll quickly come to recognize any shortfalls. This unique perspective can help drive improvements that might not be flagged by other users or recognized by the product team.

The animated announcement poster I made that set me off on this journey.

Create magic with us

We’re always on the lookout for promising new talent. If you’re excited about developing groundbreaking new tools for creators, we want to hear from you. From writing code to researching new features, you’ll be surrounded by a supportive team who lives and breathes technology.

Sound like you? Apply here.
