# Portfolio optimization in R using a Genetic Algorithm

Portfolio optimization is one of the most interesting fields of study of **financial mathematics**. Since the birth of Modern Portfolio Theory (MPT) by Harry Markowitz, many scientists have developed analytical and numerical methods to build the best investment portfolio from a given set of assets.

In this article, I’ll show a method to optimize a portfolio by using a numerical technique based on **genetic algorithms**. The power of genetic algorithms makes it possible to find the optimal portfolio even when the objective function isn’t smooth or differentiable.

### Portfolio composition

Let’s say we have selected *N* financial assets we want to invest in. They can be stocks, funds, bonds, ETFs, etc. Each one of them has a series of **historical returns**, that is, the relative price changes from one period to the next. Periods can be days, weeks, months and so on.

The **return** of the *i*-th asset between period *t-1* and period *t* is defined as:

r_i(t) = ( P_i(t) − P_i(t−1) ) / P_i(t−1)

where *P_i(t)* is the price of the *i*-th asset at period *t*.

Now, when we want to build an investment portfolio, we want to **mix many assets** together, allocating a fraction *x* of our capital to each one of them. Each fraction is called a **weight**.

The portfolio return at time *t* is then:

R(t) = x_1·r_1(t) + x_2·r_2(t) + … + x_N·r_N(t)

The goal of portfolio optimization is to find the values of the weights that **maximize some performance metric** of our portfolio according to the weight constraints, which are:

0 ≤ x_i ≤ 1 for every *i*, and x_1 + x_2 + … + x_N = 1

So, we are talking about a **constrained optimization** problem.

### Objective function

In his famous essay, **Harry Markowitz** explains a complete theory about portfolio composition. Further studies have identified a **useful objective function** for portfolio optimization, which is called the **Sharpe ratio**, defined as:

S = E[P] / √Var[P]

*E[P]* is the **expected return** of the portfolio and *Var[P]* is its **variance**. For simplicity, we are not considering the risk-free return that is included in the original Sharpe ratio formula.

There are many interpretations of the Sharpe ratio. I’ve always considered it a **stability** metric. If it’s high, it means that the average return is high compared to the volatility, so the returns are quite stable around the mean value. Remember that our goal as investors is to reach the **highest stability of the portfolio**, not the highest profit.

There are many objective functions we could use for our optimization, but in this article, I’ll use Sharpe ratio maximization for its simplicity. The examples I’m going to provide can be generalized to many other performance metrics.

### Constraints

Constraints are the real hard part of the problem as they make it much more difficult to solve.

Remember that in portfolio optimization the basic constraints are the following:

0 ≤ x_i ≤ 1 for every *i*, and x_1 + x_2 + … + x_N = 1

In fact, our weights must be **positive** (let’s suppose we don’t want to or can’t go short) and must be **no greater than 1**. Their **sum** must be exactly equal to 1 in order to cover the **entire capital** we want to invest.

Constrained optimization is a tough analytical challenge that can be fought in many ways. In the following part of the article, I’ll show you the method of the **penalty function**.

#### The penalty function method

In science, it’s often useful to transform a complex problem into a simpler one. The great issue of a constrained problem is the constraints themselves, so it can be useful to **transform** our problem into an **unconstrained** optimization problem.

Let’s suppose we want to optimize (i.e. **minimize**) a generic *f(x)* function subject to some constraints. In order to make the constraints disappear explicitly from our problem, we can create a new function that **penalizes** all the points in the function domain that don’t satisfy the constraints.

Since we are facing a minimization problem, the simplest way to do it is **adding a positive penalty** to the original *f(x)* for all the points outside the constraints. With this trick, the only way to reach the global minimum is to satisfy the constraints.
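As a minimal sketch of the idea on a toy problem (not the portfolio case): minimize *f(x) = x²* subject to *x ≥ 1*, which we rewrite as the penalized, unconstrained problem *x² + 100·max(0, 1−x)²*.

```r
# Toy example of the penalty method:
# minimize f(x) = x^2 subject to x >= 1
f = function(x) x^2

# Penalized, unconstrained version: the constraint 1 - x <= 0
# becomes the penalty term max(0, 1 - x)^2, multiplied by 100
penalized = function(x) f(x) + 100 * max(0, 1 - x)^2

# Plain unconstrained minimization of the penalized function
res = optimize(penalized, interval = c(-10, 10))
print(res$minimum)   # close to 1, the constrained optimum
```

Note that the minimizer lands slightly below 1 (about 0.990): quadratic penalties allow a tiny constraint violation, which is exactly the kind of small inaccuracy we will observe later in the sum of the portfolio weights.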

### Building the penalty function according to the constraints

When our constraint is an **inequality** in the form *g(x) < 0*, we can build a penalty function in the form *max(0, g(x))*. This way, if *g(x)* is negative, the *max* function returns 0, otherwise it returns the value of *g(x)* itself, **increasing** the value of the penalty function and **discouraging** the optimizer from staying in that region: the larger the violation, the larger the penalty.

In order to keep our function reasonably **differentiable**, it’s useful to square it. So, our **inequality constraints** give us the following penalty functions:

max(0, x_i − 1)² for the constraint x_i ≤ 1, and max(0, −x_i)² for the constraint x_i ≥ 0

What about the sum constraint? It’s an **equality **constraint, which is simple to describe as a penalty function.

If we have a generic constraint in the form *f(x)-d=0*, we can easily build a quadratic form like this:

( f(x) − d )²

This function has an obvious global minimum when *f(x)=d*, which is exactly what we want.

So, the **equality constraint** becomes:

( x_1 + x_2 + … + x_N − 1 )²

For the final function, we want to emphasize the constraints in the optimization procedure. Remember that we are **transforming** a constrained problem into an unconstrained one, but our algorithm **doesn’t know** that, so it’s useful to **force** the global minimum search by **multiplying** the penalty function by a factor. Usually it’s multiplied by 10, but I suggest you multiply it by 100, in order to **strengthen** the effect of the constraints.

Finally, the function we want to optimize is the **Sharpe ratio with a minus sign**. We want to maximize the Sharpe ratio, but the penalty function formalism seen so far works only for functions to be **minimized**, so we can multiply the Sharpe ratio by -1 in order to transform a maximization problem into a minimization problem.

The **final function** we have to minimize is then:

obj(x) = −S(x) + 100 · [ ( x_1 + … + x_N − 1 )² + Σ_i ( max(0, x_i − 1)² + max(0, −x_i)² ) ]

### Genetic algorithms

Genetic algorithms follow the **natural selection law**, according to which only the **best individuals** survive evolution. These algorithms are nearly a science by themselves and writing too much about them goes beyond this article. What is worth saying is that genetic algorithms are very useful because the **random part** of their process makes them work even with **non-continuous** or **non-differentiable** functions. Moreover, they **can escape** local minima quite well and move toward the **global minimum**, unlike deterministic methods such as gradient descent.
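As a quick illustration of that robustness (a toy example, using the same GA library that appears later in this article), a genetic algorithm has no trouble maximizing a non-differentiable function such as *−|x − 3|*:

```r
library("GA")

# A non-differentiable function with its maximum at x = 3;
# gradient-based methods can't use derivative information here
f = function(x) -abs(x - 3)

res = ga(type = "real-valued", fitness = f,
         lower = -10, upper = 10,
         maxiter = 200, seed = 1, monitor = FALSE)

print(summary(res)$solution)   # close to 3
```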

### Optimization framework in R

After the theory there comes the practice. In this section, I’ll show you some example code written in R.

#### Collect data of assets returns

First of all, we must choose the assets we want to invest in and collect their historical returns.

For this example, I’ve collected historical data for these assets:

- Apple
- Amazon
- Facebook
- Nasdaq index
- S&P 500 index
- WWE

I’ve used Yahoo Finance in order to get daily data in CSV format from November 11, 2013 to November 9, 2018.

Each asset data is stored in a separate CSV file.

Now we can open **RStudio** and change the working directory to the one that contains the CSV files.

#### Data preparation

First of all, we have to merge all data into a single structure, joining them by date.

```r
f = NULL
files = c("AAPL.csv","SP.csv","AMZN.csv","NDAQ.csv","WWE.csv","FB.csv")

for (i in 1:length(files)) {
  csv = read.csv(files[i])

  # Keep only the date and the closing price
  csv = csv[,c("Date","Close")]
  names(csv) = c("Date",files[i])

  # Join all the assets by date
  if (i == 1) f = csv
  else f = merge(f,csv)
}
```

The resulting *f* data frame contains the *Date* column followed by one closing-price column per asset.

Now we have to **calculate the returns**. Since the date field is the first column, we can use this simple R code:

```r
for (i in 2:ncol(f)) {
  # Price time series of the i-th asset
  prices = f[,i]

  # Prices lagged by 1 period
  prices_prev = c(NA,prices[1:(length(prices)-1)])

  # Returns time series
  returns = (prices-prices_prev)/prices_prev

  # Replace the i-th column with returns
  f[,i] = returns
}

# Remove the first row with NAs and the Date column
asset_returns = f[2:nrow(f),2:ncol(f)]
```

The data frame *asset_returns* now contains one column of daily returns per asset.

#### Portfolio returns

Portfolio historical returns are a function of the weights. Remember that the historical data stays the same: what we want to do is find an optimal portfolio by **changing the weight values**.

We can define the portfolio returns time series as a function of the weights by **multiplying** each asset’s return series by the **weight** related to it, then summing all the weighted time series element by element.

We can define a *portfolio_returns* function that does this job. It’s a function of an array *x* of weights.

```r
portfolio_returns = function(x) {
  port.returns = 0

  # Multiplication of the i-th asset by the i-th weight in "x"
  for (i in 1:length(x)) {
    port.returns = port.returns + asset_returns[,i] * x[i]
  }

  return (port.returns)
}
```

#### Objective function with penalty

First of all, we have to calculate the Sharpe ratio on the historical weighted portfolio returns.

```r
sharpe = function(x) {
  port.returns = portfolio_returns(x)
  return (mean(port.returns)/sqrt(var(port.returns)))
}
```

Now we have to write the code for the **penalty function**. We can write a *constraint* function that implements all the constraints.

```r
constraint = function(x) {
  boundary_constr = (sum(x)-1)**2       # "sum x = 1" constraint

  for (i in 1:length(x)) {
    boundary_constr = boundary_constr +
      max(c(0,x[i]-1))**2 +             # "x <= 1" constraint
      max(c(0,-x[i]))**2                # "x >= 0" constraint
  }

  return (boundary_constr)
}
```

Finally, we write our objective function to be **optimized**.

```r
obj = function(x) {
  # We want to maximize the Sharpe ratio, so we minimize
  # its negative value, plus the weighted penalty term
  return (-sharpe(x)+100*constraint(x))
}
```

#### Optimization via Genetic Algorithm

Now comes the **optimization **procedure. R has a wonderful general purpose **Genetic Algorithm** library called “GA”, which can be used for many optimization problems.

Remember: the goal is to find the values of the weights that **maximize **the Sharpe ratio according to the **constraints**.

The *ga* function in R is a **simple but effective** genetic algorithm implementation for solving maximization problems.

For this article, I configured the genetic optimization to perform at most 50,000 iterations, stopping early if the maximum fitness doesn’t change for 50 consecutive iterations. This is our **stopping** criterion for the genetic algorithm.

Let’s take a look at the code:

```r
library("GA")

ga_res = ga(
  # Tell the genetic algorithm that the
  # weights are real variables
  type = "real-valued",

  # "ga" performs maximization, so we must
  # multiply the objective function by -1
  fitness = function(x) {-obj(x)},

  # x_i >= 0
  lower = rep(0,ncol(asset_returns)),

  # x_i <= 1
  upper = rep(1,ncol(asset_returns)),

  # Maximum number of iterations
  maxiter = 50000,

  # If the maximum fitness remains the same for 50
  # consecutive iterations, stop the algorithm
  run = 50,

  # Exploit the multi-core properties of your CPU
  parallel = TRUE,

  # Print partial results while the process runs
  monitor = TRUE,

  # Seed, useful for replicating the results
  seed = 1
)
```

The optimization procedure may take a **few minutes**, depending on the speed of your CPU.

For this example and the settings I’ve shown you, the optimization process took 310 iterations.

Let’s store the resulting weights in a vector.

```r
# Store the resulting weights in a vector and label
# each weight with the name of its asset
sol = as.vector(summary(ga_res)$solution)
names(sol) = names(asset_returns)
```

And, finally, we can **read their values** near the names of our assets:

According to these weights, we should allocate 11.9% of our capital on Apple stocks, 6.3% on S&P 500, 25.9% on Amazon and so on.

As you can see, each weight is positive and less than 1, and their sum is 0.99903, which is pretty close to 1, as we wanted. Don’t forget that numerical methods have an **inherent inaccuracy** that can be tuned to fit your problem’s needs.
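A quick sanity check of the constraints can also be scripted. The weight vector *w* below is a hypothetical placeholder summing to roughly 0.999; in practice you would run these checks on the `sol` vector obtained from `summary(ga_res)$solution`.

```r
# Hypothetical weights ("w"), used only to illustrate the checks;
# in practice, run them on the "sol" vector from the GA
w = c(0.119, 0.063, 0.259, 0.300, 0.150, 0.108)

# Each weight must lie in [0, 1]
print(all(w >= 0 & w <= 1))       # TRUE

# The sum must be close to 1 (the penalty allows a tiny violation)
print(abs(sum(w) - 1) < 0.01)     # TRUE
```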

Now we can **plot **the historical returns of the weighted portfolio versus the historical returns of the single assets.

```r
optimal_returns = portfolio_returns(sol)

plot(cumsum(optimal_returns),type="l",lwd=5)
lines(cumsum(asset_returns[,1]),col="blue")
lines(cumsum(asset_returns[,2]),col="red")
lines(cumsum(asset_returns[,3]),col="green")
lines(cumsum(asset_returns[,4]),col="violet")
lines(cumsum(asset_returns[,5]),col="peru")
lines(cumsum(asset_returns[,6]),col="pink")
legend(0,1.5,legend=c("Weighted portfolio",names(asset_returns)),
       col = c("black","blue","red","green","violet","peru","pink"),lty=1)
```

As you can see, the portfolio return (black thick line) is the **most stable** curve, although it’s not the best performing one (Amazon performs better). It’s nice to see that our portfolio outperforms S&P 500.

### Conclusions

In this article, I’ve covered the **penalty function** method in order to perform **portfolio optimization**. I’ve shown you how to perform it in **R** using **genetic algorithms** and I’ve plotted the results of the weighted portfolio versus the single assets.

Everything I’ve written is just an **example** of how we can do portfolio optimization using free software like R and free data sources like Yahoo Finance.

The method used here is **flexible** enough to work with other objective functions (e.g. the Sortino ratio). It can also be improved by running a **deterministic optimization** algorithm after the genetic algorithm, in order to reach a higher precision (e.g. gradient descent or any algorithm included in R’s *optim* function).
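As a sketch of that refinement step, here is the pattern on a toy objective (a simple penalized quadratic, not the portfolio objective; in practice you would pass `obj` as the function and the GA weights `sol` as the starting point):

```r
# Toy penalized objective with its minimum at x = c(0.3, 0.7),
# where the equality penalty (sum = 1) is also satisfied
obj_toy = function(x) (x[1]-0.3)^2 + (x[2]-0.7)^2 + 100*(sum(x)-1)^2

# A rough starting point, playing the role of the GA solution
start = c(0.25, 0.70)

# Local deterministic refinement with Nelder-Mead
refined = optim(par = start, fn = obj_toy, method = "Nelder-Mead")
print(round(refined$par, 3))   # close to c(0.3, 0.7)
```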

Also, we have performed the optimization without any out-of-sample cross-validation, which is fine for this example but **should not be done** in live trading. Remember: never perform an optimization without cross-validating its results!

I hope this article helped you understand the power of genetic algorithms in portfolio optimization.