Go, Colourspace, and All the RGB Colours in One Image

RGBenge of the nerd

max golden
7 min read · Jun 22, 2014


I started picking up Go recently because when you’re a recent college graduate without a job, you’ve got plenty of extra time. Go always looked flippin sweet, but almost anything I write is easy and fast enough in Python that it never made a whole lot of sense to learn another language to solve my problem.

The Compressed Reduced Jaypeg Version of a run of the program

But in every coder’s life there comes a problem that would take Real Amounts Of Time to solve in a “fast” language, and using Python would be out of the question. That problem for me was filling in every RGB colour in one image. I originally encountered the problem through a Hacker News post linking to Joco’s blog (and he in turn found the problem on the StackExchange code puzzles site), where he describes the problem and his solution in detail. His post is certainly worth reading, but I’m going to talk about my own solution (I like my pictures better ☺). If you’re not interested in the technical details and just want to look at the pictures, there is a gallery of a few of the generated images at the bottom of the article.

The easy way to paint every RGB colour is just to iterate over them and paint them in an image one by one. But, like Joco before me, I wanted to fill them in based on nearby colours, so the final product would have a semblance of beauty. For my solution, I used 24-bit colour representation (8 bits per channel) for a grand total of 256 * 256 * 256 = 16,777,216 colours, which means 16,777,216 pixels (a 4096 * 4096 PNG image) that need to be filled in with unique colours. Long story short, I did it, and Go made it pretty easy.

The Algorithm

Basically, my algorithm for filling the square is as follows:

  1. Enqueue the initial point (that is, some x, y point on the image)
  2. Pop a point off of the queue
  3. If it’s the first point, colour it the starting colour and note that that colour has been consumed (then goto step 7)
  4. If it’s not the first point, calculate its “target colour” by averaging the colours of the 8 adjacent points (they only contribute to the average if they themselves have been coloured)
  5. Using that target colour, search the space of unused colours for the nearest match (this is done by treating the RGB space as a voxel cube [see image below] which is searched in expanding spheres about the target colour point)
  6. Colour the popped point with the matched colour and note that that colour has been consumed
  7. In either case, enqueue the uncoloured, unqueued points adjacent to the point popped off the queue (this is done using a goroutine so as not to suspend the main thread’s search for viable colours)
  8. Continue popping and filling points until done

More or less how I represented the colour space when searching for colours to add. Photo from Wikipedia.

Easy enough, right?
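To make the steps concrete, here is a self-contained toy version of the loop in Go on a tiny 4x4 image with a 64-colour space (4 levels per channel). All the names are mine, not from the original code; a plain linear scan stands in for the spherical search, and neighbours are enqueued synchronously rather than in a goroutine.

```go
package main

import "fmt"

type point struct{ x, y int }
type rgb struct{ r, g, b int }

const side = 4 // 4x4 image; 4 levels per channel -> 64 colours, 16 pixels

// fillAll runs the queue-based fill: pop a point, average its coloured
// neighbours, find the nearest unused colour, paint, enqueue neighbours.
func fillAll() map[point]rgb {
	img := map[point]rgb{}
	used := map[rgb]bool{}
	start := point{0, 0}
	queue := []point{start}
	queued := map[point]bool{start: true}

	for len(queue) > 0 {
		p := queue[0]
		queue = queue[1:]

		var c rgb
		if len(img) == 0 {
			c = rgb{3, 3, 3} // step 3: the first point gets the seed colour
		} else {
			c = nearestUnused(average(p, img), used) // steps 4-5
		}
		img[p] = c      // step 6: paint the point...
		used[c] = true  // ...and mark the colour consumed

		// step 7: enqueue uncoloured, unqueued adjacent points
		for _, n := range neighbours(p) {
			if _, done := img[n]; !done && !queued[n] {
				queued[n] = true
				queue = append(queue, n)
			}
		}
	}
	return img
}

// average returns the mean colour of p's already-coloured neighbours.
func average(p point, img map[point]rgb) rgb {
	var sum rgb
	n := 0
	for _, q := range neighbours(p) {
		if c, ok := img[q]; ok {
			sum.r, sum.g, sum.b = sum.r+c.r, sum.g+c.g, sum.b+c.b
			n++
		}
	}
	if n == 0 {
		return rgb{}
	}
	return rgb{sum.r / n, sum.g / n, sum.b / n}
}

// nearestUnused linearly scans the whole colour cube for the unused
// colour with the smallest squared distance to target. The real code
// replaces this with the expanding spherical search described below.
func nearestUnused(target rgb, used map[rgb]bool) rgb {
	best, bestD := rgb{}, 1<<30
	for r := 0; r < 4; r++ {
		for g := 0; g < 4; g++ {
			for b := 0; b < 4; b++ {
				c := rgb{r, g, b}
				if used[c] {
					continue
				}
				d := sq(c.r-target.r) + sq(c.g-target.g) + sq(c.b-target.b)
				if d < bestD {
					best, bestD = c, d
				}
			}
		}
	}
	return best
}

func sq(n int) int { return n * n }

// neighbours returns the up-to-8 adjacent in-bounds points.
func neighbours(p point) []point {
	var out []point
	for dx := -1; dx <= 1; dx++ {
		for dy := -1; dy <= 1; dy++ {
			if dx == 0 && dy == 0 {
				continue
			}
			q := point{p.x + dx, p.y + dy}
			if q.x >= 0 && q.x < side && q.y >= 0 && q.y < side {
				out = append(out, q)
			}
		}
	}
	return out
}

func main() {
	fmt.Println("pixels filled:", len(fillAll()))
}
```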

Big Problems

This algorithm was not super easy to implement, and while it accomplishes the goal of filling in all of the RGB colours, there are still certain issues which I would like to correct if it’s possible and reasonable to do so.

The first major problem: how do you quickly search the 16.777 million available colours for a colour close to the one you’re looking for? I had lots of ideas for simulating locality and then searching smaller data structures (what if we searched by the sum of the vector components?), but I was never able to convince myself that they would be better than searching the RGB colour space in spheres of increasing radius about the target colour. I did this by dividing the RGB search sphere up along the R axis, so for n distinct possible R values, I’d have n GB circles of varying sizes to search. Similarly, you can divide those circles up along the G axis, allowing you to iterate along just B at the lowest level. This was the absolute best solution I could come up with, and it still requires many maths for each expanding iteration.

Artists depiction of a circle in a square with some trapezoids just standing around and watching

Luckily, there are some nice optimisations that can be used here and there. Some square root calculations are necessary for establishing the RGB coordinates to start searching, but once you start down a row that you’re fairly certain is in the outer shell, you can use squared-distance metrics to verify that a given RGB point wasn’t searched in the previous spherical shell. When you encounter voxels that would have been searched in the previous shell (that is to say, distance from target < search radius − 1), you can simply reflect over to the other end of the row and search only those points you know to lie between search radius − 1 and search radius.
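The shell test itself needs no square roots at all, since it can compare squared distances against squared radii. Here is a simplified illustration in Go (function names are mine; the slicing along R and the row reflection from the real code are omitted):

```go
package main

import "fmt"

// inShell reports whether the voxel at offset (dr, dg, db) from the
// target lies in the spherical shell between radius-1 (exclusive) and
// radius (inclusive) -- squared distances only, no square roots.
func inShell(dr, dg, db, radius int) bool {
	d2 := dr*dr + dg*dg + db*db
	inner := (radius - 1) * (radius - 1)
	return d2 > inner && d2 <= radius*radius
}

// shellVoxels counts the voxels a search at the given radius visits;
// because consecutive shells are disjoint, no voxel is ever searched
// twice as the radius expands.
func shellVoxels(radius int) int {
	n := 0
	for dr := -radius; dr <= radius; dr++ {
		for dg := -radius; dg <= radius; dg++ {
			for db := -radius; db <= radius; db++ {
				if inShell(dr, dg, db, radius) {
					n++
				}
			}
		}
	}
	return n
}

func main() {
	for r := 1; r <= 3; r++ {
		fmt.Printf("radius %d: %d voxels in shell\n", r, shellVoxels(r))
	}
}
```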

Another problem (that still exists) is darkness bias. Several of my algorithmic choices tend, all other things being equal, to select colours closer to black (#000000). Specifically, when finding the average colour of the neighbours of the point to be coloured, integer averaging trends down. For example, the average colour of the three points #010100, #000101, and #010001 would be #000000 (for each of the red, green, and blue channels, 2/3 = 0 under integer division). This is probably the main source of darkness bias, and it could potentially be eliminated by changing how the average is calculated, but I don’t want to sacrifice a lot of speed in a routine that must run for (almost) 256^3 pixels.

Darkness Bias clearly visible in this image, which was seeded with white (#FFFFFF) in the centre, but quickly gravitated towards the blackness.

The other source of darkness bias comes from the spherical search itself. The algorithm records only the RGB voxel closest to the target colour, not the set of all valid voxels at that distance. If a searched voxel has the same distance from the centre as the current minimum, it is completely disregarded. The spherical search always starts with negative offsets relative to the target point, which again biases results towards #000000. This could be eliminated by maintaining the set of voxels at the closest distance to the target and then choosing randomly amongst them, but that voxel set would in all likelihood need to be dumped and regenerated at least a few times for non-trivial search radii.

Update (23 June): After going and thinking about the problem some more, I’ve made some progress against the forces of darkness. I went ahead and changed the pixel averaging to float values, which are then rounded to get their int approximations. The performance hit doesn’t seem to be so bad; in fact, execution seems to speed up in certain cases. This can be attributed to the algorithm choosing colour values more evenly, resulting in less time spent searching for available colours in the endgame, when the colour space is more sparsely populated. The predicted bias from the colour-searching algorithm is still in effect, and darkness bias is still visible, but the resulting images now look much better.
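The rounding change might look like this sketch (my own function name, using the standard library’s math.Round, which rounds half away from zero):

```go
package main

import (
	"fmt"
	"math"
)

// avgChannelRounded averages one colour channel in floating point and
// rounds to the nearest integer, removing the systematic downward
// pull of integer truncation.
func avgChannelRounded(vals []int) int {
	sum := 0.0
	for _, v := range vals {
		sum += float64(v)
	}
	return int(math.Round(sum / float64(len(vals))))
}

func main() {
	fmt.Println(avgChannelRounded([]int{1, 0, 1})) // 2/3 rounds to 1
}
```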

Reduction of darkness bias. Left is with the change in average evaluations, right is without. Both images were seeded at the midpoint with white and given a channel size of 8.

Future Plans

I’ll probably keep tweaking my code and modifying initial conditions to generate new images. They’re beautiful and I made them and it’s as close as I’ll ever get to painting rainbows. But I really think that more coders who like pretty things should attempt this project. Pick your favourite language and implement your own version of an RGB filling algorithm. Joco’s source code is linked in his post and mine is on GitHub. But there are a lot of different ways to approach this problem, and I would love to see the sorts of art that other nerds can generate.


All pictures were generated with some version of my code.

My first generated image: queue issues led to a premature finish, but this is way better than anything else I had up to this point. The crystalline colour patterns are an artefact of the unbounded queue size & interrupts caused by system calls, and they do not show up in all images. Full size available here.
My second generated image. The centre point was moved from (0, 0) to (2048, 2048).
This image was generated with an off-centre initial point and an initial point colour of #FFFF00 [yellow].
This image was generated with the most recent version of the code. The more blurred, less structured colour pattern is caused by using the smallest possible pixel queue.
This image was generated by limiting the size of the output to 1024 by 1024 and modifying the draw portion of the algorithm to ensure that blue was always the dominant colour channel.
This image was generated with 1/2 the normal render height, seeded with red, and using an unbounded queue. All of the geometric patterns are a result of the algorithm running normally under those conditions.

Questions? Comments? Ideas?

Hit me up on twitter @KapuraMax
