What the Father of Fractals Can Teach Us About Finance

Jordan Shimabuku
12 min read · Jul 22, 2020


The Mandelbrot Set

Efforts to predict stock market swings have long been documented, with investors and market enthusiasts devising countless ways to model the movement of stock prices. In The (Mis)Behavior of Markets, Benoit Mandelbrot, best known for creating fractal geometry and finding examples of it throughout nature, applies that same thinking to modeling the motion of financial securities, including currencies, commodities, and stocks. He calls this the Multifractal Model of Asset Returns, and it provides a more realistic picture of the unpredictable nature of a human-driven market.

Before we get there, let’s examine some classic examples of stock price simulations.

Random Walk Hypothesis

The Efficient Market Hypothesis claims that all securities are efficiently priced (with varying degrees of efficiency) and that any outperformance can be attributed to chance.

A related theory, the Random Walk Hypothesis, assumes that a stock’s movements are random. We can simulate such a stock by giving it an arbitrary starting point and then flipping a coin every day: heads it goes up, tails it goes down, let’s say by +/- 1%. Repeating this for 1,000 days gives us a picture of a “random walk” of a stock. Below is this simulation run 10 times.
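For the curious, here is a minimal sketch of that coin-flip walk in Python (the starting price of 100 is an arbitrary stand-in for the one used in the charts):

import random
import matplotlib.pyplot as plt

# Coin-flip random walk: each day the price moves up or down by exactly 1%
def coin_flip_walk(start=100.0, days=1000, step=0.01):
    prices = [start]
    for _ in range(days):
        move = step if random.randint(0, 1) else -step  # heads: +1%, tails: -1%
        prices.append(prices[-1] * (1 + move))
    return prices

# Run the simulation 10 times and plot the resulting paths
for _ in range(10):
    plt.plot(coin_flip_walk())
plt.show()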

Each line is unpredictable as it unfolds, since each day is randomly decided; however, the Law of Large Numbers dictates that the stock should end up roughly where it started, or at least that the average ending value across a large sample of runs should be the starting point.

Much like with an actual stock, whether a given day’s closing price is higher or lower appears random, and there is little use in trying to predict it. But is this model realistic? Not really. Zoom in on the granular detail and it’s not very convincing: we see predictable zig-zags, “teeth” that form from consecutive up-down-up-down streaks.

But without zooming in, there are other ways to see how unrealistic this model can be. Let’s look at the daily changes in one of these paths:

As we can see, the daily change is always either -1% or +1% (because we designed it that way).

We can take a baby step toward realism by allowing a daily change to be anything between -1% and +1%, since on some days stocks move very little. This removes those predictable zig-zag teeth from the chart. The distribution of ending values is narrower now that we’re allowing changes of smaller magnitude while still capping them at +/- 1%:

However, the daily variances look more realistic, as shown below:

Now instead, let’s look at the distribution of daily changes:

The rectangular shape occurs because the daily change is uniformly distributed between -1% and +1%. With more simulations, the bars would even out.

Even more realistic is to let the changes follow a normal distribution with a mean of 0 and a standard deviation of 1%. Below are the resulting paths, daily changes, and distribution of changes:

Resulting stock prices
Daily changes for one path
Distribution of daily changes for one path

This is more like it: daily changes are variable, with calm days between -1% and +1% (roughly 68% of the time) more common than larger moves. In this model the mean daily change is still 0%, so if we run the simulation multiple times the end results still fan out, but the average ending point of the paths will be where we started. This random behavior may be a good model over short periods, but over longer periods we know that stocks tend to drift higher.
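Here is a quick sketch of this normal-shock version (swapping np.random.normal for np.random.uniform(-0.01, 0.01) recovers the earlier capped variant; the starting price is again an arbitrary stand-in):

import numpy as np
import matplotlib.pyplot as plt

# Random walk with normally distributed daily changes (mean 0, standard deviation 1%)
def normal_walk(start=100.0, days=1000, sigma=0.01):
    changes = np.random.normal(0, sigma, days)   # daily percentage changes
    prices = start * np.cumprod(1 + changes)     # compound the changes
    return np.insert(prices, 0, start)

for _ in range(10):
    plt.plot(normal_walk())
plt.show()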

Geometric Brownian Motion

Geometric Brownian Motion (GBM) has become the standard for modeling stock movements and is used in pricing options via the Black-Scholes Model. The idea is similar to the last iteration of the Random Walk we implemented, but with a drift factor.

Image from Investopedia

The drift factor μ becomes the risk-free rate (continuously compounded), σ is the standard deviation, and Wt represents the Wiener process, the stochastic process that supplies our random shock.

Using the equation above, with the same starting point as the previous simulations, a risk-free rate of 5%, a standard deviation of 20% (both annual figures), and 1,000 periods of 1/252 years (1,000 trading days), we get the iterations below:
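For reference, here is a sketch of the discretized update behind runs like these, S(t+dt) = S(t) * exp((μ - σ²/2)dt + σ√dt * Z) with Z a standard normal draw; the starting price is again an illustrative stand-in:

import numpy as np
import matplotlib.pyplot as plt

# Geometric Brownian Motion via the exact discretized solution
def gbm(start=100.0, mu=0.05, sigma=0.20, days=1000, dt=1/252):
    prices = [start]
    for _ in range(days):
        z = np.random.normal()   # random shock from the Wiener process increment
        prices.append(prices[-1] * np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z))
    return prices

for _ in range(10):
    plt.plot(gbm())
plt.show()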

The daily changes still look variable…

and the distribution still looks normal (Gaussian):

The drift causes the distribution to shift to the right, with the daily mean equal to the daily equivalent of the risk-free rate (about 0.05/252 ≈ 0.02%) and the daily standard deviation the equivalent of the annual volatility (0.20/√252 ≈ 1.26%).

Let’s compare it to an actual stock.

Here is GM’s actual movement over a 1,000-day period (you can see where the starting point of the prior simulations came from).

Actual stock price of GM from January 1st, 2013
Daily changes of GM closing prices
Distribution of daily changes

The GBM does a pretty good job. Even the daily variances and their distribution could fool someone; if not with this specific example, then with enough simulations (and the benefit of picking our example stock) we could find one that does. I kept the input parameters fairly basic, but we could derive them from GM’s price history and have the model spit out even more faithful simulations of its stock.
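As a rough sketch of that calibration step (the file name and the use of log returns here are my own assumptions, not something from the book or the charts above):

import numpy as np

# gm_closes.csv is a hypothetical one-column file of GM daily closing prices
closes = np.loadtxt("gm_closes.csv")

log_returns = np.diff(np.log(closes))
sigma_annual = log_returns.std() * np.sqrt(252)               # annualized volatility
mu_annual = log_returns.mean() * 252 + 0.5 * sigma_annual**2  # annualized drift implied by GBM

print(f"mu = {mu_annual:.2%}, sigma = {sigma_annual:.2%}")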

What if we stretch it out even further? Do we get a picture that resembles the market over longer periods of time? Here is another GBM simulation for 5,000 trading days, roughly 20 years:

Because of the (positive) drift factor, the longer we stretch this out, the higher the expected ending price will be. Looking at one path’s daily volatility over that same period:

Nothing really exciting to see here: we have more occurrences of +/- 4% days, but that’s only because we have more days; the overall distribution (and therefore the likelihood of such a move on any given day) has not changed:

Now let’s look at 20 years of actual stock market data:

This is SPY, an ETF that tracks the S&P 500, over the last 5,000 trading days. Judging by price alone, GBM could be a convincing model; in fact, the purple line above follows a similar trajectory. But looking at the variances, we see a different picture:

It’s plain to see that the variances greatly deviate from the mean in certain cases, with multiple instances of +/- 10% moves in a single day, something the Brownian motion in our sample set never produced.

The distribution of daily changes has some noticeable differences as well:

Even though the real-world example seems more volatile than the theoretical model, its distribution shows that a “quiet” day is much more likely, while still allowing for far larger deviations on occasion.

One critique you may have is that in an index the zigs counteract the zags, producing a clustering around the mean; however, the same shape of distribution is reproducible with single stocks. The image below shows the price, daily changes, and distribution of changes for AAPL, BA, and KO (from top to bottom, left to right; and yes, AAPL fell over 51% in a single day).

We can see that in the real world, the tails extend further out and a “quiet” day is much more common than in a simple Geometric Brownian Motion, implying the distribution has a higher kurtosis than the normal distribution.
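A quick way to sanity-check that claim is to compare excess kurtosis directly. The sketch below assumes a hypothetical one-column file of SPY daily changes (scipy reports excess kurtosis, so a normal sample sits near 0):

import numpy as np
from scipy.stats import kurtosis

# spy_daily_changes.csv is a hypothetical one-column file of SPY daily percentage changes
real_changes = np.loadtxt("spy_daily_changes.csv")
normal_changes = np.random.normal(0, real_changes.std(), len(real_changes))

print(kurtosis(real_changes))    # leptokurtic: well above 0 (fat tails, tall peak)
print(kurtosis(normal_changes))  # roughly 0, the Gaussian baseline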

So where do we go from here?

One way we can adjust the model is to have our shock/noise factor randomly draw from a distribution more similar to the one we observe above (leptokurtic), which would allow for random days of extreme deviation while increasing the probability of having tame days.
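One common choice for such a distribution, purely my own illustration rather than anything from the book, is a Student’s t with few degrees of freedom, which gives the fat tails and tall peak we observed:

import numpy as np
import matplotlib.pyplot as plt

# GBM-style walk whose shocks come from a fat-tailed Student's t distribution
# instead of a normal; df=3 is an illustrative choice (smaller df = fatter tails)
def fat_tailed_walk(start=100.0, mu=0.05, sigma=0.20, days=1000, dt=1/252, df=3):
    scale = np.sqrt((df - 2) / df)   # rescale so the shocks have unit variance
    prices = [start]
    for _ in range(days):
        z = np.random.standard_t(df) * scale
        prices.append(prices[-1] * np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z))
    return prices

plt.plot(fat_tailed_walk())
plt.show()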

But an important distinction here is that I said random days of extreme deviation. Look again at the real-world examples: the extreme days aren’t sprinkled randomly throughout; they’re clustered. This clustering makes sense, as we can point to those three main clusters and say “dot-com bubble boom/bust,” “global financial crisis,” and “COVID-19.” Pick a stock and you can explain its periods of volatility on a smaller scale as well.

You might be saying, “Those events are unpredictable, though; how and why are we supposed to model unpredictable market shocks?” Well, isn’t that what we’ve been doing all along? The point is to find a better model that incorporates these shocks, since we use these models to calculate risk, price derivatives, and so on.

How then do we deal with this clustering of volatility? Enter Benoit B. Mandelbrot…

The Multifractal Model of Asset Returns

Some of Mandelbrot’s most notable work was in the field of fractal geometry and its presence in nature: in coastlines, clouds, plants, and fluid turbulence, just to name a few. He had an eye for repeating patterns, and his work earned him the title “The Father of Fractals.” (Legend has it he gave himself the middle initial in Benoit B. Mandelbrot; when asked what the ‘B’ stood for, he would reply “Benoit B. Mandelbrot.” What a guy.)

Romanesco Broccoli

He turned his focus to financial markets and observed what we observed above: volatility is clustered.

Mandelbrot explains that traders intuitively understand that time in financial markets is relative: the first 15 minutes or the last 15 minutes of a given day’s market hours are generally more volatile than the rest, and on larger scales certain days and weeks behave the same way. In fact, professional traders often speak of “fast” or “slow” markets. From this, he postulated the existence of “trading time,” distinct from clock/calendar time.

Feeding this new version of time, stretched and squished to better reflect how markets actually experience it, into the prior model (which treats time as linear) should yield more realistic results. Mandelbrot calls the result a “baby” model whose mother is Brownian motion and whose father is the process that transforms clock time into “trading time.”

Image from The (Mis)Behavior of Markets, Chapter 11: The Multifractal Nature of Trading Time

This looks interesting, but how do we transform clock time into trading time?

Time Warp

One method of deforming time Mandelbrot explains in his book is through a process called a multiplicative cascade.

Imagine the distribution of a resource like gold across a country’s population of 100 people. We can split the population in half and give one half 60% of the gold, and the other half 40%. The distribution would look like this:

The blue area shows everyone holding an equal share of the gold, 1 oz each for example, and the orange shows the new distribution: the 50 people in the lucky half hold 1.2 oz each (60 oz total) while the other 50 hold 0.8 oz each (40 oz total). In total we still have 100 oz of gold in the population. We can split each half in half a couple more times and repeat the process, randomly selecting which side gets the 60%.

We can see that there are now higher peaks in the segments of the population that landed in the lucky 60% multiple times. If we now treat the resource being distributed as time, and go back to our 1,000-day period, each calendar day can receive more or less trading time.

The total area is still the same as if every day were an ordinary calendar day. Another thing to note: if the left half started with X% of the resource, it will always hold X% no matter how many times we repeat the process within it. We can continue splitting and reallocating into smaller and smaller pieces, or adjust the base allocation, and get something that looks like this:

Because we include an element of randomness, we get a different image each time (the one above resembles the VIX to me). We can also see that because we split so many times, the peaks deviate further from the average, with most of the time spent below the average. Sound familiar?

For those interested, below is the algorithm I wrote for this recursive process in Python, where timeList is a list of linear/calendar time, min is the minimum segment length we want to split down to, and alpha is the larger share. It returns a new list in which each element (each calendar day) has a new trading time.

import random

def redistribute(timeList, min, alpha):
    # Recursively reallocate the total "time" between the two halves of the list
    values = timeList.copy()
    if len(values) <= min:
        return values
    middle = round(len(values) / 2)
    total = sum(values)
    left = values[:middle]
    right = values[middle:]
    # Coin flip decides which half receives the larger share (alpha) of the total
    if random.randint(0, 1):
        for i in range(len(left)):
            left[i] = total * alpha / len(left)
        for i in range(len(right)):
            right[i] = total * (1 - alpha) / len(right)
    else:
        for i in range(len(left)):
            left[i] = total * (1 - alpha) / len(left)
        for i in range(len(right)):
            right[i] = total * alpha / len(right)
    return redistribute(left, min, alpha) + redistribute(right, min, alpha)

Putting this back into the model for Geometric Brownian Motion with the same inputs:
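One plausible way to wire this together, sketched under my own assumptions (the redistribute function above, an illustrative starting price, and a 60/40 allocation), is to let the warped trading time replace the constant 1/252 step in each GBM update:

import numpy as np
import matplotlib.pyplot as plt

start, mu, sigma, days = 100.0, 0.05, 0.20, 1000

calendar_time = [1 / 252] * days                    # equal calendar-day steps (in years)
trading_time = redistribute(calendar_time, 1, 0.6)  # same total time, unevenly allocated

prices = [start]
for dt in trading_time:
    z = np.random.normal()
    # GBM step evaluated in "trading time" instead of clock time
    prices.append(prices[-1] * np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z))

plt.plot(prices)
plt.show()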

Judging by the prices alone, it looks about as good as GBM at simulating what real prices look like. We can see from these 10 runs that the “stocks” tend to stay lower. This is because we are now allowing for larger drawdowns, which are harder to overcome with equal percentage gains: you need a 50% gain to get back to even after falling 33%, a 100% gain after falling 50%, and so on. If you’d like, you can adjust for this by increasing the drift factor μ (which was left at 5% per annum).

Where we can see the major difference is now in the daily changes and their distributions:

Stretched out to 5,000 days to replicate the ~20-year period:

So there we have it: a model of stock movements that is more realistic than Geometric Brownian Motion, accounts for the clustering of volatility, and is flexible across different lengths of time. Again, this is just a model, so the outputs are only as good as the inputs.

What can we do with this knowledge? How is it helpful?

As mentioned, we can use these models to calculate risk when managing portfolios, to price derivatives like options, and to better understand how markets function, since they offer insight into human behavior.

This article is long enough so I will post another article with the implications of Mandelbrot’s Multifractal Model on options prices and add the link here when finished: Implications of a Multifractal Model of Stocks on Options Pricing.
