CPPN mini-project

Troels Rasmussen
Aug 9, 2019


For the mini-project of the summer course, Advanced Topics in Procedural Content Generation, I have chosen to focus on compositional pattern producing networks (CPPNs) and mixed-initiative PCG. The tools I have used during development include JavaScript (ES6–8), p5.js, three.js and tensorflow.js. While JavaScript is perhaps not the most performant choice, the ability to share my mini-project online is just too appealing to pass over. My mini-project consists of four web applications each making use of CPPNs at their core:

- An art piece generator
- A first attempt at a flower generator
- A second attempt at a flower generator
- A t-shirt generator, which makes use of the second flower generator

In the following I go through my small journey with CPPNs. Compositional pattern producing networks are a kind of artificial neural network (ANN), but they differ in how they are used. Artificial neural networks, such as convolutional neural networks, are used for classification or, more generally, for making predictions from data. Compositional pattern producing networks are used as generators of life-like patterns, hence their name: you give a CPPN an input, and out comes a (hopefully) desirable pattern. Also, an ANN typically consists of only one or a few types of activation functions, whereas a CPPN can make use of a variety of activation functions, and the choice of activation functions affects the geometry of the pattern the CPPN produces. In PCG it is important to represent (game) content in a way that allows a PCG method to search the content space for desirable artifacts (playable levels, balanced weapons, etc.). The weights, activation functions, and topology of a CPPN are the representation of the content.
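To make the geometry point concrete, here is a minimal, hand-wired sketch (not code from my project) of a tiny CPPN in plain JavaScript. The weights and the particular mix of activations are illustrative: a Gaussian node contributes symmetry, a sine node contributes repetition, and the sigmoid output squashes the composition into an intensity value.

```javascript
// A tiny hand-wired CPPN: each hidden node uses a different activation,
// and the choice of activations shapes the geometry of the output pattern.
const gaussian = (x) => Math.exp(-x * x); // symmetric bump
const sigmoid = (x) => 1 / (1 + Math.exp(-x));

// Compose two hidden nodes into one output value in (0, 1).
function cppn(x, y) {
  const h1 = gaussian(2 * x + 1.5 * y); // symmetry about a line
  const h2 = Math.sin(4 * x - 2 * y);   // periodic repetition
  return sigmoid(3 * h1 - 2 * h2);      // interpreted as intensity
}

// Sample a coarse 5x5 grid over [-1, 1] x [-1, 1] and print it.
for (let j = 0; j < 5; j++) {
  const row = [];
  for (let i = 0; i < 5; i++) {
    row.push(cppn(-1 + (2 * i) / 4, -1 + (2 * j) / 4).toFixed(2));
  }
  console.log(row.join(" "));
}
```

Swapping the sine for another Gaussian, or rescaling a weight, visibly changes the pattern — that sensitivity is exactly what a search algorithm exploits when exploring the content space.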

Neural networks and CPPNs are slightly different: CPPNs are used as generators of content, whereas neural networks are used for making predictions from data. Furthermore, there is a point to using various activation functions in the nodes of a CPPN, because the activation functions affect the geometry of the patterns generated. Image from the book Procedural Content Generation in Games — A textbook and an overview of current research (Noor Shaker, Julian Togelius, and Mark J. Nelson).

I use tensorflow.js for creating CPPNs; however, tensorflow.js is designed for building ANNs for machine learning. This means I am not able to build CPPNs that mix many different activation functions, and I am restricted in how I can change their topology. Thus my CPPNs contain hidden layers with the tanh activation function and an output layer with the sigmoid activation function (see the illustration of a CPPN network in tensorflow below).

Image from https://kwj2104.github.io/2018/cppngan/

For my first attempt a few days ago, I created a CPPN that takes as input the (x, y) coordinates of an image, the distance r of coordinate (x, y) from the origin (0, 0), and a latent vector k with 8 dimensions. The CPPN outputs one value per input, which I interpret as the grayscale value at image coordinate (x, y). See the image below for an illustration of how the CPPN works.

The CPPN can be regarded as a function that takes an (x,y) image coordinate, distance r between (x,y) and (0,0), and a latent vector k as input and returns a grayscale value. The function is called for each (x, y) coordinate in the intended output image. Image from https://kwj2104.github.io/2018/cppngan/
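The per-pixel loop can be sketched as follows. This is a plain-JavaScript stand-in for the tensorflow.js model (so it stays self-contained): the layer sizes, the random initialization, and the helper names are illustrative, but the structure matches the description above — tanh hidden layer, sigmoid output, and an input of [x, y, r, ...k] per pixel.

```javascript
// Plain-JS stand-in for the tensorflow.js CPPN: tanh hidden layer,
// sigmoid output, input = (x, y, r) plus an 8-dimensional latent vector k.
const sigmoid = (v) => 1 / (1 + Math.exp(-v));

// Dense layer without biases (kept minimal): out[j] = act(W[j] . input).
function dense(input, weights, act) {
  return weights.map((row) =>
    act(row.reduce((sum, w, i) => sum + w * input[i], 0))
  );
}

const randomMatrix = (rows, cols) =>
  Array.from({ length: rows }, () =>
    Array.from({ length: cols }, () => Math.random() * 2 - 1)
  );

const K = 8;                          // latent dimensions, as in the post
const k = Array.from({ length: K }, () => Math.random() * 2 - 1);
const W1 = randomMatrix(16, 3 + K);   // (x, y, r) + latent vector
const W2 = randomMatrix(1, 16);       // single grayscale output

function grayscaleAt(x, y) {
  const r = Math.sqrt(x * x + y * y);            // distance from (0, 0)
  const hidden = dense([x, y, r, ...k], W1, Math.tanh);
  return dense(hidden, W2, sigmoid)[0];          // grayscale in (0, 1)
}

// Call the network once per pixel, mapping pixel coords to [-1, 1].
function renderImage(width, height) {
  const pixels = [];
  for (let py = 0; py < height; py++) {
    for (let px = 0; px < width; px++) {
      const x = (px / (width - 1)) * 2 - 1;
      const y = (py / (height - 1)) * 2 - 1;
      pixels.push(grayscaleAt(x, y));
    }
  }
  return pixels;
}
```

Because the latent vector k is fixed while (x, y, r) vary per pixel, each choice of k yields one coherent image; sampling a new k yields a new "art piece".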

Interpreting the output of the CPPN this way yields some “smoky”, abstract-looking “art pieces”, which I am sure will sell for great value on eBay. See one of the “art pieces” below and try out the pattern generator here! Give it some time to load.

Art made by my very first CPPN network.

The CPPNs I have shown so far produce nice-looking art — or ugly trash, depending on your subjective taste, of course. My goal from here on was to use CPPNs to produce familiar content (cars, spaceships, flowers, etc.). Therefore, I decided to build a CPPN that produces flowers, as described in chapter 9 of the PCG book and in this research article. The CPPN takes as input polar coordinates (q, r) and an L value, which represents the layer in the flower, and it outputs RGB color values and an r_max per input. First, each value of q is fed to the CPPN while keeping r = 0, i.e. (sin(q), 0). The output is a list of r_max values, which make up the outline of the flower. Second, values between 0 and r_max are input for each polar coordinate q, i.e. (sin(q), [0, r_max]). The output is a list of RGB values, which make up the internal colors of the flower. This procedure is run once per layer L, and the layers are scaled according to their depth in the flower (smaller layers in the front, larger layers in the back). See the image below for an illustration of the inputs and outputs of the CPPN.

My first flower generator, which can be found here, behaved in an unexpected manner: while the generated patterns were symmetric and, dare I say, beautiful, they did not quite look like flowers.

Image from https://www.aaai.org/ocs/index.php/AIIDE/AIIDE12/paper/view/5449
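The two-pass procedure per layer can be sketched like this. The network here is a toy stand-in (the real generator queries a tensorflow.js CPPN), and the function and parameter names are illustrative; what matters is the shape of the procedure: pass 1 reads r_max at r = 0 for every angle q to get the outline, pass 2 samples r in [0, r_max] to color the interior.

```javascript
// Sketch of the two-pass flower procedure. cppn() is a toy stand-in:
// any function mapping (q, r, layer) to four values in (0, 1) has the
// right shape, i.e. [R, G, B, rMax] per input.
function cppn(q, r, layer) {
  const s = Math.sin(q); // angle enters as sin(q), as described above
  const f = (a) => 1 / (1 + Math.exp(-(a * s + r - layer)));
  return [f(1), f(2), f(3), f(4)]; // [R, G, B, rMax]
}

function generateFlowerLayer(layer, steps = 64, radialSamples = 8) {
  // Pass 1: keep r = 0 and read off rMax for each angle q -> the outline.
  const outline = [];
  for (let i = 0; i < steps; i++) {
    const q = (i / steps) * 2 * Math.PI;
    outline.push(cppn(q, 0, layer)[3]);
  }
  // Pass 2: sample r in [0, rMax] per angle to color the interior.
  const colors = [];
  for (let i = 0; i < steps; i++) {
    const q = (i / steps) * 2 * Math.PI;
    for (let j = 0; j < radialSamples; j++) {
      const r = (j / (radialSamples - 1)) * outline[i];
      const [R, G, B] = cppn(q, r, layer);
      colors.push({ q, r, rgb: [R, G, B] });
    }
  }
  return { outline, colors };
}
```

Running generateFlowerLayer once per L, and scaling each result by its depth, stacks the layers into the front-to-back composition described above.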

My second and improved flower generator can be found here! I have implemented the ability to mutate individual flowers and to breed two flowers, resulting in a third flower with traits from both parents. It is visually clear how a mutated flower relates to its single parent. However, it is less visually clear how a flower created through breeding relates to its two parents. See the image from the generator below.

Screenshot of my flower generator from http://pcg-mini-project.herokuapp.com/cppnflowers2. From left to right: flower 1, flower 2, and flower 3, which results from breeding flower 1 and 2.

Mutation of a flower works by adding a small value to each weight of the CPPN by chance: each weight has a 10% chance of being modified. Breeding works by mixing the weights of the two parent CPPNs (each parent flower contains a CPPN) and applying them to a new CPPN, which then produces the child flower.
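The two operators above can be sketched over a flat array of weights (the generator stores them in tensorflow.js tensors; a plain array keeps the sketch self-contained). The 10% mutation rate is from the description above; the mutation step size and the uniform per-weight crossover are illustrative assumptions.

```javascript
// Mutation and breeding operators over a CPPN's weights as a flat array.
const MUTATION_RATE = 0.1;  // 10% chance per weight, as described above
const MUTATION_STEP = 0.05; // illustrative magnitude, not from the post

// Each weight is independently nudged by a small random value with
// probability MUTATION_RATE; all other weights are copied unchanged.
function mutate(weights) {
  return weights.map((w) =>
    Math.random() < MUTATION_RATE
      ? w + (Math.random() * 2 - 1) * MUTATION_STEP
      : w
  );
}

// Breeding as uniform crossover: each position in the child takes its
// value from one of the two parents, chosen at random.
function breed(parentA, parentB) {
  return parentA.map((w, i) => (Math.random() < 0.5 ? w : parentB[i]));
}
```

Loading the child's weight array into a fresh CPPN with the same architecture then produces the child flower.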

Now, for the final part of my mini-project, I wanted to give the flower generator a purpose. Therefore I created a t-shirt generator, which can be found here! Using the t-shirt generator, a user can 1) generate a flower pattern, which is then placed on a t-shirt, 2) select the overall color of the t-shirt, 3) mutate the t-shirt, effectively mutating both the color and the flower pattern, and 4) breed two t-shirts. See the screenshot of the t-shirt generator interface below.

Screenshot of my tshirt-flowa-powa-generator from http://pcg-mini-project.herokuapp.com/cppnflowers3

It is an odd yet fun concept that makes use of mixed-initiative PCG as described in chapter 11 of the PCG book. The user can select the color of the t-shirt and set a few parameters that control the appearance of the flower pattern; however, the flower pattern generated by the CPPN is still unpredictable and surprising. I argue that the unpredictability of the inner workings of the CPPN actually helps a casually creative person overcome some of the barriers to creativity, such as the “blank canvas” and the “fear of failure”, because she is not in full control of what the CPPN will produce, yet she can trust the system to make a sensible, perhaps even beautiful t-shirt. Considering the kind of artifacts my CPPNs produce (flower patterns), it makes sense to have users rate the quality (or rather the aesthetics) of the flowers and interactively evolve them, instead of creating an objective fitness function that measures the quality of flowers with respect to some agreed-upon metrics and autonomously evolving flowers with a good fit.

As future work, I am considering the ability to change the topology and activation functions of the CPPN more flexibly. As far as I know, tensorflow does not support this flexibility, because it has been developed with other use cases in mind. Thus, I may have to write my own CPPN library for JavaScript, akin to the work by Daniel Shiffman, who built his own small neural network library from scratch. The ability to change topology and activation functions would allow me to explore NeuroEvolution of Augmenting Topologies (NEAT). NEAT is a genetic algorithm for evolving neural networks, including CPPNs. With NEAT I could explore more advanced interactive evolution and mixed-initiative concepts, for instance by giving users the ability to control the shape of the flower using UI handles and swapping out the activation functions responsible for certain geometric shapes behind the curtain. Furthermore, I would like to extend my work to 3D, for instance by using CPPNs to generate spaceships in 3D.

If you’re interested in checking out the code for the mini-project, here is a link to my GitHub repository.