Rendering Pixel Art

Matthew Michelotti
Feb 7, 2019 · 5 min read

--

If you’ve ever tried to make a game with pixel art, perhaps you’ve run into this problem: how can you make your low-resolution pixel art appear crisp and clean when displayed at higher resolutions? The graphics need to be scaled up, so some sort of interpolation is necessary. The most common types of interpolation are “nearest” and “linear”, but these will cause your beautiful pixel art to look distorted or blurry, respectively, assuming a non-integer scale factor. Rendering a rotated sprite has the same problem. There are several approaches to address this. In this post, I’ll describe my personal experience with this problem and how my approach to solving it changed over time.

In my early games, I took the easiest way out and forced the game window to be a specific predetermined resolution, and scaled all images by a predetermined integer scale factor. This is, of course, very limiting. It does not make use of the full device display, and attempting arbitrary scaling and rotation of sprites leads to the graphical artifacts described above.

When I started work on Weaponless in 2014, I knew that I needed a more flexible solution. My first thought was that I could use a shader of some kind, but I was a novice in OpenGL, completely unfamiliar with other graphics libraries, and intimidated by the OpenGL shader language. To complicate matters, I was using Java and libGDX, but most of the tutorials for learning OpenGL were in C/C++. So I came up with another solution that did not involve writing a shader. First, I would scale all of my image files by 200% using nearest interpolation, overwriting the old image files. Then, when rendering these images, I would use linear interpolation. The combined effect was something halfway between nearest and linear interpolation, so I dubbed it “hybrid” interpolation. It looked acceptable, although still a little fuzzy. The images also occupied 4 times as much memory as they needed to, although in practice that was not a big deal.

hybrid scaling I used in Weaponless, 2014

Later, my tools of choice shifted from Java and libGDX to Rust and SDL2. At this point, I decided it was high time for me to learn how to write OpenGL shaders properly. SDL2 interoperates well with OpenGL, giving the user the option of invoking OpenGL functions directly. I went through tutorials on making simple shaders to render textures, then looked around to see if I could find a shader to render pixel art cleanly. I stumbled across Wikipedia’s page on pixel-art scaling algorithms, but the techniques described there were designed to make pixel art look less like pixel art, which was not the kind of thing I was looking for.

Having grasped the basics of the shader language, I decided to see what I could come up with on my own. The first step in solving a problem like this is to define the desired output. If we want to scale a sprite by, say, a factor of 3.5, what color should ideally be assigned to each screen pixel? I decided the image should take on the same colors as it would if it were super-sampled using nearest interpolation. The basic idea is that instead of sampling one color for each screen pixel, you sample several colors at different locations within each screen pixel and average them together. This is very expensive, but the results look good. If you want to try it without writing a shader, open some pixel art in a painting program, scale it by 1600% using nearest (no) interpolation, and then scale that down to 22% using something like linear interpolation, for roughly a 3.5× scale overall.
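
To make that concrete, here is a minimal sketch of the brute-force version as a fragment shader. This is only a reference for the effect we are after, not anything from Gate, and the names (tex, uv) are made up for illustration; it assumes the texture’s filter is set to nearest.

```glsl
// Reference only: approximate the super-sampled ideal by averaging a 4x4 grid
// of samples spread across this screen pixel's footprint. Assumes GL_NEAREST
// filtering on the texture, so each sample behaves like nearest interpolation.
#version 120

uniform sampler2D tex;   // the pixel art texture
varying vec2 uv;         // texture coordinate from the vertex shader

void main() {
    vec2 footprint = fwidth(uv);   // how far uv moves across one screen pixel
    vec4 sum = vec4(0.0);
    for (int i = 0; i < 4; i++) {
        for (int j = 0; j < 4; j++) {
            // sub-sample positions centered within the pixel, offsets in (-0.5, 0.5)
            vec2 offset = (vec2(float(i), float(j)) + 0.5) / 4.0 - 0.5;
            sum += texture2D(tex, uv + offset * footprint);
        }
    }
    gl_FragColor = sum / 16.0;
}
```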

In our case, though, that much super-sampling is overkill. Think of this problem as two grids overlaid on each other: the texels (texture pixels) overlaid on the pixels (screen pixels). In general, we expect a texel to be much larger than a pixel. If we ignore the possibility of rotation for the moment and assume a scale factor greater than 1.0, then each pixel will overlap at most 4 texels, and only where it straddles a corner where 4 texels meet. Thus, we should only need to sample the texture 4 times per pixel and compute a weighted average of these colors, weighting each texel by how much of the pixel’s area it covers.

weighted average of neighboring texels
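
For illustration, a sketch of that four-tap version might look something like the following. Again, this is not Gate’s shader; tex_dims is an assumed uniform holding the texture size in texels, and the texture filter is assumed to be nearest since we do the averaging ourselves.

```glsl
// Sketch of the four-tap weighted average (not Gate's shader). The weights come
// from how much of the pixel's footprint lies on each side of the nearest texel
// boundary along each axis. Assumes GL_NEAREST filtering and a scale factor of
// at least 1.0, so the footprint never spans more than one boundary per axis.
#version 120

uniform sampler2D tex;   // the pixel art texture
uniform vec2 tex_dims;   // texture size in texels
varying vec2 uv;         // texture coordinate from the vertex shader

void main() {
    vec2 texel = uv * tex_dims;                  // position in texel units
    vec2 footprint = max(fwidth(texel), 1e-6);   // pixel footprint in texel units
    vec2 hi = texel + 0.5 * footprint;           // upper corner of the footprint
    vec2 edge = floor(hi);                       // the texel boundary we may straddle
    // fraction of the footprint past the boundary, per axis (clamps to 1.0 when
    // the footprint sits entirely inside one texel)
    vec2 f = clamp((hi - edge) / footprint, 0.0, 1.0);
    // the four texels surrounding that boundary, sampled at their centers
    vec4 c00 = texture2D(tex, (edge + vec2(-0.5, -0.5)) / tex_dims);
    vec4 c10 = texture2D(tex, (edge + vec2( 0.5, -0.5)) / tex_dims);
    vec4 c01 = texture2D(tex, (edge + vec2(-0.5,  0.5)) / tex_dims);
    vec4 c11 = texture2D(tex, (edge + vec2( 0.5,  0.5)) / tex_dims);
    gl_FragColor = mix(mix(c00, c10, f.x), mix(c01, c11, f.x), f.y);
}
```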

So, this was my initial approach, but even 4 samples per pixel seemed like a higher cost than necessary. I wondered if it could be improved, but after thinking about it some more, I was convinced that taking a weighted average of 4 neighboring texels was definitely necessary. Then I had an epiphany: there is a built-in mechanism in OpenGL for taking a weighted average of 4 neighboring texels, and it is called linear interpolation! This seems a little counter-intuitive at first, but it is possible to use linear interpolation while carefully adjusting the sampling position to yield a result that looks like nearest interpolation with super-sampling.
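
To show the idea, here is a minimal sketch of the trick (not Gate’s exact shader, which arranges the arithmetic differently; the names are the same illustrative ones as above). This time the texture’s filter must be set to linear:

```glsl
// Minimal sketch of the trick (not Gate's exact shader). The texture filter is
// GL_LINEAR; we simply move the sampling position so that the blending linear
// interpolation performs matches the area-weighted average we want.
#version 120

uniform sampler2D tex;   // the pixel art texture, with GL_LINEAR filtering
uniform vec2 tex_dims;   // texture size in texels
varying vec2 uv;         // texture coordinate from the vertex shader

void main() {
    vec2 texel = uv * tex_dims;                  // position in texel units
    vec2 footprint = max(fwidth(texel), 1e-6);   // pixel footprint in texel units
    vec2 boundary = floor(texel + 0.5);          // nearest texel boundary
    // Rescale the offset from that boundary by the footprint and clamp it to
    // half a texel. Away from boundaries this lands on a texel center (pure
    // nearest); straddling a boundary it blends the two texels by the fraction
    // of the pixel on each side.
    vec2 adjusted = boundary + clamp((texel - boundary) / footprint, -0.5, 0.5);
    gl_FragColor = texture2D(tex, adjusted / tex_dims);
}
```

The division by the footprint is what ties the blend width to the on-screen size of a texel, so texel edges stay about one screen pixel soft no matter what the scale factor is.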

This works for arbitrary scaling, but what about arbitrary rotation? Well, although it is no longer an exact replication of the super-sampling effect when applied to rotation, it is a close approximation. The same shader code can be used for rotations, and it is hard to notice any deficiency.

The code I wrote to do this is part of my open-source Gate library. The current version of the fragment shader is located here. However, one thing I learned about OpenGL is that a shader should not be considered in isolation; it is important to know what it expects of the OpenGL settings, the textures, and how the vertex buffers are filled. This blog post is not meant to be a tutorial, and given all of the subtleties, I’m not going to try to explain the shader code here. If you’re interested in the specifics, you can browse my code. But concerning the surrounding expectations, I will say that OpenGL is set to use linear interpolation, the alpha values are pre-multiplied and the appropriate blend function is used, the texture is a sprite atlas with 2 pixels of padding between sprites, and each sprite is drawn with 1 pixel of padding to account for the tweaking of sampled coordinates.

Gate’s pixel art shader

After I wrote this shader and noticed how short the code ended up being, I decided that other people must have done something similar. So I searched around some more, and came upon a blog post describing a shader that achieves exactly the same effect as mine, although the arithmetic is a little different. I find it interesting that I arrived at the same result from a different perspective: trying to achieve a super-sampling effect without the cost of super-sampling.

That’s pretty much all I have to say about rendering pixel art. I also wanted to touch on how I dealt with the problem of tiling. Graphical artifacts can crop up when drawing a mesh of tiles if you’re not careful. But I think I’ll save this discussion for a future blog post.
