# CMAES algorithm in C++ using MLPack

Hi there! Today we'll be discussing how the CMAES algorithm works in layman's terms and how you can use it as a self-learning evolutionary algorithm. Along the way we will also write some simple C++ code using the open-source library MLPack and see the algorithm in action.

I've been fortunate enough to be selected for Google Summer of Code 2017 in this organization, and I am contributing to its class of evolution algorithms, of which CMAES is a part :). So let's get started!

### Let's dig deeper into CMAES

CMAES stands for Covariance Matrix Adaptation Evolution Strategy. Though the fancy name sounds a bit complex, the nuts and bolts of how it works aren't too hard. Below we won't talk about the equations behind the algorithm, but about how it achieves its greatness (the search for the global minimum).

CMAES is a derivative-free method, meaning that unlike other methods we don't take the gradient of the function and move in the direction where our objective function decreases. No derivative is computed, so probability comes to the rescue!

The above GIF shows the working of CMAES very well. Let's break it down. The concentric ellipses are a visual representation of our function: the axes are not shown, and the 3D surface is projected into 2D. The goal here is to reach the center, which is the global minimum. Evolution time!

As we are working with evolutionary algorithms, the plan is to have a population, which is 50 in the GIF. How are these points selected? Using a multivariate normal (Gaussian) distribution. Every point gives a function value, which we call its fitness: the lower the objective function value, the better the point and its fitness. So from lambda = 50 individuals (here) we select the mu best points, or children, which now get a chance to become parents (the lucky ones), while the rest die (what can we do, some algorithms are simply too cruel). The parents then give rise to the next generation and move the mean of the whole population to a new place, which of course is better than the previous position and much closer to the minimum. We repeat this again and again, thousands of times, where every new generation is better than the previous one and comes closer to the minimum. Slowly but surely, we will finally reach the minimum.

The algorithm also takes care of step-size control, which is a smart measure to decrease the time to convergence, along with regular updates of the covariance matrix from the population.

### Coding time!

Now that we know the basic working of the algorithm, it's time we take an equation and watch it converge.

As I mentioned before, we will need the MLPack library to implement this. There is a beautifully written blog on the site itself for its installation → link

As of now (12 July 2017) my code is not present in the latest release. You can download my GitHub repo (link), open a terminal inside the folder, and type:

```
mkdir build
cd build
```

We will use CMake to build MLPack:

```
cmake ..
make
```

This will take a while, so just sit back and relax. After the build finishes, don't forget to install the headers into the include folder. You can simply type:

`sudo make install`

and voila! You are all set to go!

### Let's optimize!

Let's take the above function to optimize. Of course, this function would be much easier to optimize using its derivative and the optimizers that use one, and if you look carefully, x = y = z = 0 gives the function its least value of -1. But for better understanding we will stick to this function. The algorithm will find it all by probability and the method described above.

At the start we have simply included the MLPack headers and namespaces that we will be using in the code.

We have to wrap our objective in a class for CMAES. This class will tell our optimizer:

• NumFunctions(): the number of separable parts of the function to optimize. In our case it's 3.
• Evaluate(): the function that provides the equation to optimize. I request you to stop for a while and look at how the function to be optimized is written inside Evaluate().

After the class, in our main function we create an object of the class we just wrote.

arma::mat is used to make an Armadillo matrix of the given dimensions. Yes, you read that right: the inputs given to the function are expected to be Armadillo matrices/vectors.

The fill() function assigns the given value to every entry of the matrix.

The CMAES class then takes the following parameters:

1. the function itself
2. the initial start point of the algorithm
3. the initial standard deviation of the normal Gaussian distribution.

(Don't worry if you aren't able to judge good values for parameters 2 and 3; 0.5 and 1.5 will usually do the job.)

Default parameters (4 and 5 below can also be skipped):

4. The maximum number of iterations allowed. (The algorithm may terminate earlier if a sufficient value is reached.)

5. The tolerance: if the change in function value from the last generation to the present one is less than this value, termination occurs. Usually this means the function is already at the minimum, so it hardly changes any more. Decreasing this value imposes a very tight accuracy constraint on the result (like we have done here).

At the end we make a matrix, arma::mat coordinates(N, 1), which will hold the answer.

Pass it to the CMAES Optimize() function and we will get the result!
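Putting the steps above together gives a main function along these lines. Be warned that this is a sketch: the header path, template parameter, and exact constructor signature are assumptions based on the parameter list described above, and they may differ in the GSoC branch; `TestFunction` stands in for the post's objective class.

```cpp
#include <mlpack/core.hpp>
// Header path is an assumption; check the GSoC branch for the real one.
#include <mlpack/core/optimizers/cmaes/cmaes.hpp>

using namespace mlpack::optimization;

int main()
{
  TestFunction f;  // the objective class defined earlier

  const size_t N = 3;
  arma::mat start(N, 1);
  start.fill(0.5);       // parameter 2: initial start point
  arma::mat stdDev(N, 1);
  stdDev.fill(1.5);      // parameter 3: initial standard deviation

  // Parameters 4 and 5: max iterations and the termination tolerance.
  CMAES<TestFunction> cmaes(f, start, stdDev, 10000, 1e-13);

  // The answer is written into this matrix by Optimize().
  arma::mat coordinates(N, 1);
  const double result = cmaes.Optimize(coordinates);

  std::cout << "minimum found: " << result << "\n";
  coordinates.print("coordinates:");
  return 0;
}
```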

Save the file with the extension .cpp, then compile (assuming the filename is cmaes.cpp):

`g++ -std=c++11 cmaes.cpp -larmadillo -lmlpack`

After the code compiles, run it using:

`./a.out`

and if everything is all right we will end up with the answer:

```
-1
2.63111e-16
1.30166e-08
-3.96967e-08
```

As expected, the result is -1 and the values of x, y, and z are zero (believe me, those very small values are what people call zero these days).