# Chaos, Prediction and Golang: Using AWS Machine Learning to Mispredict The Mandelbrot Set

When I was a CS student about 13 years ago (damn it, I’m getting old), I was very much fascinated by fractals. After some coding, debugging and fixing things, I had my first Mandelbrot explorer up and running, with zoom capabilities.

Then, in the closest thing I have ever had to a religious experience, I witnessed how the simplest, most random-looking way of generating the set produces something that is infinitely self-similar and amazingly complex.

It was the first time I truly understood how order can spawn out of chaos. It’s one of the best examples of how chaotic systems are sensitive to initial conditions. In my mind, it’s also a great illustration of evolutionary theory, and of how the simplest generational behavior in nature can end up creating humans from space dust.

If you don’t know what I’m talking about, I highly suggest you check out the video “Mandelbrot Zoom 10²²⁷ [1080x1920]”.

### Mandelbrot Generator in Go: fun with builtin complex128 and a buffered channel semaphore

The mere definition of this process as chaotic suggests that it would be impossible to predict the system’s behavior in the long term. Consider the formal definition of the Mandelbrot Set:

For a given complex number c, it’s basically only possible to tell whether it belongs to the set by iterating the sequence z(n+1) = z(n)² + c, starting with z(0) = 0, and checking whether the sequence stays bounded.

Writing the code in Go to generate the set is fairly straightforward, and there are a few cool features in Go that make this fun.

The first is that Go has a built-in type for complex numbers, along with native packages that make working with them quite pleasant.

This lets us pass a pixel’s (x, y) coordinates as a single complex number. For example, in the belongsToSet function:

```go
func belongsToSet(c complex128) bool {
	current := complex128(0)
```

Furthermore, we can easily work with the real and imaginary components of the complex number. For example, when checking whether c’s modulus is smaller than 2, we square both sides of the inequality and check that the square of the modulus is larger than 4, like so:

```go
math.Pow(real(current), 2) + math.Pow(imag(current), 2)
```

Second, Go’s concurrency primitives once again prove their usefulness when setting the concurrency level of the computation. In the mlbrot program we wish to test whether a complex number c (representing a pixel) belongs to the set for many values of c concurrently. However, we don’t want to fire off a goroutine for every pixel of the image at once (especially in the AWS Machine Learning example, which sends off an HTTP request for every prediction; we might run out of sockets if we attempt to do this). Enter the buffered channel semaphore!

We first initialize the channel with a capacity of concurrencyLevel:

```go
var sem chan bool
var concurrencyLevel = flag.Int("c", 20, "concurrency level")

sem = make(chan bool, *concurrencyLevel)
```

Then, on each iteration of the computation, we try to push a boolean value into the channel:

```go
sem <- true
```

However, we only take a value back off the channel once the computation is done:

```go
go func(c complex128) {
	res := getColorFromClassicMandelbrot(c)
	<-sem
	plot(res.C, res.Color)
}(c)
```

The result is that the sem channel quickly fills with concurrencyLevel values of “true” while the computation proceeds. Once we try to push value number concurrencyLevel + 1, we block until one of the computing goroutines takes a value off the channel (<-sem). Blocked senders effectively form a queue, and each one is released as soon as a slot frees up, capping the computation at the desired concurrency level.

Here’s a more complete snippet:

```go
func main() {
	flag.Parse()

	log.Printf("coloring method: %s", *coloringMethod)
	log.Printf("concurrency level: %d", *concurrencyLevel)

	sem = make(chan bool, *concurrencyLevel)
	......
}

func renderImage() {
	for i := 0; i < imHeight; i++ {
		for j := 0; j < imWidth; j++ {
			plotPixel(complex(float64(i), float64(j)))
			currentPixel++
		}
	}
	for i := 0; i < cap(sem); i++ {
		sem <- true
	}
}

// check if c belongs to set, assign color accordingly and plot
func plotPixel(c complex128) {
	sem <- true
	switch *coloringMethod {
	case "ml":
		go func(c complex128) {
			res := getColorFromMLMandelbrot(c)
			<-sem
			plot(res.C, res.Color)
		}(c)
	case "classic":
		go func(c complex128) {
			res := getColorFromClassicMandelbrot(c)
			<-sem
			plot(res.C, res.Color)
		}(c)
	}
}
```

### Using Machine Learning For Prediction

As suggested above, one cannot really predict whether a given complex number c is a member of the Mandelbrot set without iterating the sequence. This is part of what defines the system as chaotic.

However, an attempt could be made to predict an extremely rough estimate of the set based on its geometric properties. Consider the output of running the generator in “classic” mode. We color black those pixels that correspond to complex numbers belonging to the set; otherwise we use white:

It’s fair to say that the shape, at least at this resolution, has some properties that are fairly predictable to the human eye: symmetry, the fact that it can be bounded inside a single rectangle, and more. A human could make a reasonable guess at the shape even if part of it were left unknown.

With this we set out to use a machine learning approach to try and predict Mandelbrot set values. Let’s define our ML procedure.

We are going to gather our training data from the classic Mandelbrot computation. Since we have an algorithm that computes Mandelbrot set membership with 100% accuracy, we simply use it to create a data file. Our data file will look like so (this is tail -n 10 of the file):

So our target attribute is “result”, conveniently a boolean value telling us, for the real and imag attributes, whether the complex number they comprise is a member of the Mandelbrot set (in the table above we happen to have all true values, but those are just the last 10 rows of the file).

For the given attributes real and imag, the result is either true or false. This training data looks like a good candidate for classic binary classification ML algorithms. For our model, let’s use logistic regression with a classification loss function, trained with SGD (stochastic gradient descent). The choice of algorithm in this case was made simply because that’s Amazon Machine Learning’s learning algorithm for binary classification.

### Using AWS Machine Learning

AWS Machine Learning is an end-to-end platform for managing an ML application. It’s really an awesome product by Amazon, for quite a few reasons. Let’s go over what it takes to use AWS ML for our job:

1. After generating our data file, we upload the training data to AWS S3, and then create a new ML Datasource over that file (note: you can also get the data from a Redshift DB). AWS ML will attempt to guess the schema on its own, but you can manually change that.

Using S3 or Redshift is super useful for real life applications as your app is probably already using one of those if you are running in the public cloud.

2. We create a new ML model using that data source. By default, AWS will set aside 30% of our training data to evaluate the trained model. This serves to tell us how accurate the model’s predictions are. Next we can let AWS determine the recommended setup for our data (in this case, the logistic regression’s parameters) and use that, or choose to customize a bunch of stuff to our needs.

3. Once the model is ready, we can start generating predictions. The UI is quite useful for testing things out, but we are interested in using the API. So we enable the AWS ML endpoint for this model under Enable real-time predictions. This gives us a URL of the form https://realtime.machinelearning.<AWS ZONE>.amazonaws.com

Now we can use this endpoint from our code, instead of the classic Mandelbrot computation, to generate predictions based on our trained model!

Let’s analyze the Predict function of ml/aws.go:

Predict takes an AWS ML service client (see the repo for how to create one) and a complex number c. It returns true if our ML model predicts that the real and imaginary components of c form a number that would reach a target result of “true” in our training file.

The POST request to the AWS ML endpoint is signed with your AWS credentials (the secret and access keys from your ~/.aws/credentials file) and uses a body that includes the model ID and a record corresponding to the schema used in the training phase.

It returns a PredictOutput struct that gives us the PredictedLabel (for binary or multiclass ML models) or PredictedValue (for a regression ML model), plus some more useful stuff (such as PredictedScores, the raw classification score corresponding to each label).

The AWS ML endpoint can handle up to 200 QPS without requiring you to open a ticket with AWS support. In our code, we have a flag for the concurrency level; this is the maximum number of concurrent goroutines that will hit AWS ML to get a prediction.

On a MacBook Pro with a standard broadband connection, you should be able to use about 50 concurrent goroutines without losing too much data.

### Results

As we can see, AWS’s benchmark considers the prediction to be quite good: “On your most recent evaluation, ev-vcyMrjolll1 , the ML model’s quality score is considered very good for most machine learning applications.”

OK. So numerically speaking we seem to be fine. But it’s quite hard to tell whether the prediction is actually good without visualizing the set. And, well, when visualized, we realize the prediction is interesting but, expectedly, not very good. Using ImageMagick’s compare command we can see the difference in red:

In red, we see the difference between the classic coloring method (using the iteration computation) and the ML method (using AWS ML). The white shape represents the ML result. We can see that the prediction captured quite a lot of area that is in fact a part of the set, but it was still far from accurate for the complete image.

I bet it’s possible to achieve much higher prediction accuracy for this problem using more advanced ML techniques and better data selection. Perhaps sometime soon I’ll follow up on this post with such improvements.

Till then, you are welcome to view and fork mlbrot on Github!
