Generative Impressionism

a simple algorithm for Impressionist-style paintings

Matt DesLauriers


A small experiment I developed yesterday has been getting a bit of buzz on Twitter and Reddit, so I figured I’d briefly discuss how it works. The demo shows a generative rendering technique that, starting from a source photograph, produces an Impressionist-style painting over several seconds.

An inkblot re-mixed with the generative algorithm
the same source image, with different settings

The project began as I was experimenting with a visual effect for my interactive portfolio website. What started out as generative fur quickly turned into a more flexible system, capable of producing Impressionist-style brush strokes like those in Vincent van Gogh’s Starry Night.

The technique was originally inspired by an article on Perlin noise flow fields. First, 2D Perlin noise is generated and stored in a float array, for example 256×256 in size. Then, for each particle in our scene, we sample the noise texture at that particle’s current position. The noise value is mapped to an angle, from which we derive a unit vector. We add this vector to the particle’s velocity and normalize the result to give the particle its new direction. This produces the swirling and “flowing” motion.

The colour of each particle is sampled from the colour map (the source image). The “strokes” emerge because the canvas is never cleared, so they accumulate frame after frame. Each particle has a short lifespan, after which it is reset to another random position on the canvas. By default, I’m only drawing about 500 particles per frame, which keeps the effect suitable for mobile.

Here is some pseudo-code:
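A minimal JavaScript sketch of one simulation frame is below; here `noise2D`, `sampleColor` and `drawStroke` are hypothetical stand-ins for the Perlin noise lookup, source-image sampling and canvas drawing, and the constants are illustrative rather than the demo’s actual values.

```javascript
const PARTICLE_COUNT = 500; // roughly what the demo draws per frame
const MAX_AGE = 100;        // short lifespan before a particle respawns

function createParticle(width, height) {
  return {
    x: Math.random() * width,
    y: Math.random() * height,
    vx: 0,
    vy: 0,
    age: Math.floor(Math.random() * MAX_AGE),
  };
}

function step(particles, width, height, noise2D, sampleColor, drawStroke) {
  for (const p of particles) {
    // Sample the noise at the particle's position and map it to an angle.
    const n = noise2D(p.x / width, p.y / height); // assumed in [-1, 1]
    const angle = (n * 0.5 + 0.5) * Math.PI * 2;

    // Steer: add the noise-derived unit vector to the velocity, then
    // normalize to get the new direction.
    const vx = p.vx + Math.cos(angle);
    const vy = p.vy + Math.sin(angle);
    const len = Math.hypot(vx, vy) || 1;
    p.vx = vx / len;
    p.vy = vy / len;

    // Move, and draw a short stroke in the colour of the source image.
    const prevX = p.x;
    const prevY = p.y;
    p.x += p.vx;
    p.y += p.vy;
    drawStroke(prevX, prevY, p.x, p.y, sampleColor(p.x, p.y));

    // Reset to a random position after the lifespan (or off-canvas).
    p.age++;
    if (p.age > MAX_AGE || p.x < 0 || p.x >= width || p.y < 0 || p.y >= height) {
      Object.assign(p, createParticle(width, height));
      p.age = 0;
    }
  }
}
```

Because the canvas is never cleared, calling `step()` once per frame lets these short strokes accumulate into the painting.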

The real beauty comes from adjusting multiple parameters as the simulation is running. The demo includes an example animation sequence, but there are lots of other sequences that would produce drastically different results. This would be a good candidate for recording multi-touch events or MIDI knob input (like those in DAWs and keyboards), allowing the user to “orchestrate” the painting and fine-tune the brush strokes and colours as it generates.
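One way such a sequence could be represented is as keyframed settings, linearly interpolated while the simulation runs; a small sketch, where the parameter names are purely illustrative and not the demo’s actual settings:

```javascript
// Keyframed settings over time (seconds). Parameter names are made up
// for illustration.
const sequence = [
  { time: 0,  settings: { noiseScale: 1.0, strokeWidth: 4 } },
  { time: 5,  settings: { noiseScale: 2.5, strokeWidth: 2 } },
  { time: 10, settings: { noiseScale: 0.5, strokeWidth: 8 } },
];

function settingsAt(sequence, time) {
  // Clamp to the first and last keyframes.
  if (time <= sequence[0].time) return { ...sequence[0].settings };
  const last = sequence[sequence.length - 1];
  if (time >= last.time) return { ...last.settings };

  // Find the surrounding keyframes and interpolate each parameter.
  for (let i = 0; i < sequence.length - 1; i++) {
    const a = sequence[i];
    const b = sequence[i + 1];
    if (time >= a.time && time <= b.time) {
      const t = (time - a.time) / (b.time - a.time);
      const out = {};
      for (const key of Object.keys(a.settings)) {
        out[key] = a.settings[key] + (b.settings[key] - a.settings[key]) * t;
      }
      return out;
    }
  }
}
```

Recorded touch or MIDI input could feed the same structure: each incoming event becomes a keyframe, and playback re-interpolates it.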

Update Dec 15, 2014: You can test the effect with a webcam here (Chrome only):

using a webcam feed as source

Further Explorations

There’s lots more I’d like to explore. It would be a great project for an interactive exhibit, or even just for print. I’d also like to see whether I can create a similar technique in WebGL, as a post-processing effect for games. It would be great to play a bit more with colour, user-uploaded content, brush textures, and mouse/touch interactions. Adding multiple octaves of noise could also introduce a nicer “scribble” for the stroke.
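The octave idea could be sketched as fractional Brownian motion: summing several octaves of a base noise function, each at double the frequency and half the amplitude of the last. Here `noise2D` is assumed to be any 2D Perlin-style noise function returning values in [-1, 1].

```javascript
// Fractal (fBm) noise: sum octaves of a base noise function, doubling
// frequency and halving amplitude each octave, then normalize so the
// result stays in the base noise's range.
function fbm(noise2D, x, y, octaves = 4) {
  let sum = 0;
  let amplitude = 1;
  let frequency = 1;
  let norm = 0;
  for (let i = 0; i < octaves; i++) {
    sum += amplitude * noise2D(x * frequency, y * frequency);
    norm += amplitude;
    amplitude *= 0.5;
    frequency *= 2;
  }
  return sum / norm;
}
```

Feeding `fbm()` instead of raw noise into the angle mapping would layer finer wiggles onto the broad flow, which is where the “scribble” quality would come from.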

Update Dec 15, 2014: I’ve since prototyped the concept as a WebGL post-processing effect. You can check it out here:

Source Code

The source code is pretty messy and wasn’t really written for the public eye. At some point, I hope to clean it up and package some of the more useful features (noise generation, bilinear sampling, etc.) as npm modules. In the meantime, you can poke around in the GitHub repo.

re-mixing a popular scene from the game Limbo
generated from a Game of Thrones poster