Yet Another Neural Network Generated Color Names

TLDR: As part of a series of experiments on generating text with neural networks, at some point I came up with the not-so-fresh idea of generating color names.

Aleksey Tikhonov
Altsoph’s blog
Nov 29, 2018


You can try the interactive color namespace exploration here (best viewed on desktop).

And now for some more details:

At first, I checked whether someone had done something like this before me, and of course I found a similar project: Janelle Shane, a researcher from Colorado, used a base of 7,700 color names from the Sherwin-Williams Company, the world’s largest paint manufacturer. She then trained a char-based RNN to generate names from the RGB components. She also wrote a follow-up later in which she tested a few more ideas.

I wasn’t happy with the quality of the results of that work, even allowing for some obvious cherry-picking, but I had some ideas of my own that I decided to check myself. In this post, I release the first part of my results; I will post a follow-up later if everything works out.

First of all, I decided to build a good dataset, not of paint color names (where marketers come up with abstract selling words), but of something closer to the “real” color perception of a human. By the way, I remembered that I had already come across such a dataset six years ago and had even written about it in my blog. Folks from CrowdFlower, a crowdsourced-survey company similar to Amazon’s Mechanical Turk, published a hand-cleaned base of names for 4,000 colors. Moreover, for each of them, in addition to the standard English name, they collected its name in 8 other languages and translations of those names back into English. As a result, there were up to 9 different English names for each color.

Since then, CrowdFlower has been rebranded as Figure Eight Inc., changed its site address, and shifted its focus to AI & ML, but the same dataset can still be found online (with broken encoding, though the English names are still intact).
I also added several smaller hand-cleaned datasets, manually harvested from different corners of the Internet, so in the end I had approximately 15K unique RGB + name pairs.

One of the ideas I wanted to test was the use of an extended numeric color representation. Instead of using just one of the RGB, HSL, YIQ, … spaces, I decided to generate several alternative representations from RGB (with Python’s colorsys module) and then condition the name generation on the concatenated vector of the different representations. The point is that individual components of different representations can affect different words of the name. For example, in the name “dark red” the “darkness” is probably best captured by the L component of HSL, while the “redness” is easiest to determine from the R component of RGB. Technically, the network could learn this by itself, but on a small dataset and without direct targeting there is not much chance of that. As the architecture, I used a multilevel char-based LSTM with layer normalization and a couple of crutches.
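A minimal sketch of what such an extended representation could look like, assuming colorsys’ HLS, HSV and YIQ conversions are the alternative spaces (the exact feature set and ordering in my experiment may differ):

```python
import colorsys

def extended_color_features(r, g, b):
    """Concatenate several color-space representations of one RGB color."""
    # colorsys expects channel values in [0, 1]
    r, g, b = r / 255.0, g / 255.0, b / 255.0
    h, l, s = colorsys.rgb_to_hls(r, g, b)    # hue, lightness, saturation
    hv, sv, v = colorsys.rgb_to_hsv(r, g, b)  # hue, saturation, value
    y, i, q = colorsys.rgb_to_yiq(r, g, b)    # luma + chrominance
    return [r, g, b, h, l, s, hv, sv, v, y, i, q]

# e.g. "dark red": the HLS lightness (l) is low, while the RGB red (r) is high
print(extended_color_features(139, 0, 0))
```

And one common way to condition a char-level LSTM on such a vector is to concatenate it to every character embedding, so each step “sees” the color it is naming. This is only an illustration of the conditioning idea (my actual model was a multilevel LSTM with layer normalization and a couple of hacks; the hyperparameters below are made-up):

```python
import torch
import torch.nn as nn

class ColorNameGenerator(nn.Module):
    def __init__(self, vocab_size, color_dim=12, emb_dim=32, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # the color vector is appended to every character embedding
        self.lstm = nn.LSTM(emb_dim + color_dim, hidden_dim,
                            num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, chars, color_vec):
        # chars: (batch, seq_len) int tokens; color_vec: (batch, color_dim)
        emb = self.embed(chars)
        color = color_vec.unsqueeze(1).expand(-1, chars.size(1), -1)
        hidden, _ = self.lstm(torch.cat([emb, color], dim=-1))
        return self.out(hidden)  # logits over the next character
```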

For visualization, I took the D3.js-based client code by Dave Oleson and Dawn Ho, originally written for that CrowdFlower post. I slightly rewrote it and used the set of generated names for the same 4,000 colors as the input data.

All of these names were generated automatically, without any kind of cherry-picking, but with some automatic filtering: names that appeared in the original dataset were suppressed from the resulting list, so all the names here should be new.
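The filtering step itself is just a set lookup; roughly something like this sketch (the names below are made up for illustration, not taken from the dataset):

```python
# Drop any generated name that already occurs in the training data.
known_names = {"dark red", "navy blue", "olive"}            # names from the dataset
generated = [("#8b0000", "dark red"), ("#8b0000", "ember dust")]

novel = [(rgb, name) for rgb, name in generated
         if name.lower() not in known_names]
print(novel)  # only "ember dust" survives
```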
