The Volatility of Volatility for Stocks and Crypto, Part 6
Speed up
The first thing we do in this edition of the series is to add another (though lesser) speed boost to our Monte Carlo macro. It turns out that arrays-of-arrays in Excel Visual Basic are a bit faster than a two-dimensional array (matrix) for this particular use case. However, the syntax is a little confusing if you are familiar with other programming languages.
The example sheet for this article is here:
https://github.com/diffent/excel/blob/main/volofvol6Q.xlsm
We got the hint for how to do arrays-of-arrays in VBA macros from here:
https://stackoverflow.com/questions/9435608/how-do-i-set-up-a-jagged-array-in-vba
This new update was only about 20% (maybe less) faster than our prior mcFast macro, but we will take what we can get.
This macro is called, unimaginatively, mcFaster. In it, we set up the array-of-arrays to match our previously employed 2D array, with the 1st index being the time step (day count) and the 2nd index being the Monte Carlo run index. The important change to allocate the array of arrays is here:
' excerpt from mcFaster macro
' old fast way (2D array) commented out
'Dim mcBlock(ntimesteps + 2, nmonte) As Double
' set up array of arrays for the new, faster way
Dim mcBlockOuter(ntimesteps + 2) As Variant ' Variant so each element can hold an array
Dim oneRow() As Double ' template for one row; use Double for speed
For i = 1 To ntimesteps + 2
    ReDim oneRow(1 To nmonte) ' strange -- see discussion below
    mcBlockOuter(i) = oneRow
Next
Now we can reference mcBlockOuter(i)(j) later in the macro, just as we did the 2D array mcBlock(i, j).
The thing that looks strange to non-VBA programmers is the ReDim statement inside the loop. Not only does it re-dimension oneRow (allocating a fresh array each time through the loop), but the subsequent assignment to an element of mcBlockOuter copies that array into the Variant element: VBA array assignment has copy semantics, not the reference semantics a C or Javascript programmer would expect. Once oneRow has been copied into an mcBlockOuter element, it can safely be reused in the next loop iteration for another (separate) row array. This syntax is kind of strange to C and Javascript programmers, but it seems to work, and runs faster for us, so…
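For readers more comfortable outside VBA, the same jagged ("array of arrays") structure can be sketched in Python. This is an illustrative analogue only, not the macro itself; the `ntimesteps` and `nmonte` names follow the macro, and the small values here are just for demonstration:

```python
# Build a jagged Monte Carlo block: one row of values per time step.
ntimesteps = 5   # small values for illustration; the macro uses 1000 days
nmonte = 4       # the macro defaults to 5000 runs

mc_block_outer = []
for i in range(ntimesteps + 2):
    one_row = [0.0] * nmonte        # a fresh row each iteration, like ReDim in the loop
    mc_block_outer.append(one_row)  # each element holds its own independent array

# Reference an element like mcBlockOuter(i)(j) in the macro:
mc_block_outer[0][0] = 1.0
print(mc_block_outer[0])  # only row 0 changed; the rows are independent
print(mc_block_outer[1])
```

Unlike the VBA version, Python lists hold references rather than copies, which is why a fresh `one_row` is created inside the loop here instead of reusing one variable.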
Price roll-up correction
The next thing we do is correct our price roll-up over time (converting returns to prices) to match the theoretical stochastic diffeq solutions.
Instead of:
finalPriceA = P0*return1*return2*return3*return4*…
[the way we had it before, which was modeled after our MCR3D apps which just reshuffle historical returns and multiply up the reshuffles]
We use:
finalPriceB = P0*exp(r1)*exp(r2)*exp(r3)*exp(r4)*…
[part of the analytic solution to the related stochastic diffeq]
where returnN is a 1-centered return, e.g. 1.02 or 0.98, and rN is a 0-centered return, e.g. 0.02 or -0.02.
This required a small change to the formula pattern in Sheet1 column B:
As we noted in an update to Part 4 of this article series, these two formulas yield slightly different results: the difference grows the farther the return is from 0 (or from 1 in 1-centered-return space), and there are other differences such as asymmetric behavior of bullish versus bearish returns.
The 0-centered null return and the 1-centered null return give the same price in both cases, because exp(0) = 1.
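To make the difference between the two roll-ups concrete, here is a small Python check (illustrative only; the article's sheet does this with Excel formulas). Since exp(r) > 1 + r for any nonzero r, case (B) always ends slightly higher than case (A) for the same return series:

```python
import math

P0 = 1.0
rs = [0.02, -0.01, 0.03, 0.02]  # 0-centered daily returns

# Case (A): multiply up 1-centered returns
finalPriceA = P0
for r in rs:
    finalPriceA *= (1.0 + r)

# Case (B): exponentiate the 0-centered returns, then multiply up;
# the product of exponentials is exp of the sum
finalPriceB = P0 * math.exp(sum(rs))

print(finalPriceA, finalPriceB)  # B is slightly larger for the same inputs
```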
A little more digging
Using log = natural log, taking the log of both sides of finalPriceA and finalPriceB, and using the rule that the log of a product is the sum of the logs, we get:
case (A): [1-centered returns multiplied up]
log(finalPriceA) = log(P0) + log(return1) + log(return2) + …
case (B): [0-centered returns exponentiated then multiplied up], and since log(e^x) = x:
log(finalPriceB) = log(P0) + r1 + r2 + …
Now let’s just check the 2% constant daily return case for the two above cases.
In the 1-centered return case (A), a 2% daily return contributes log(1.02) = 0.0198026 per day.
In the 0-centered return case (B), the terms summed up are each 0.02: slightly higher than the 0.0198026 of case (A).
So when we perform the sums for finalPriceA and finalPriceB, case (B) becomes slightly more extreme over time (wider potential range) vs. case (A), as shown by the plot in the Caution light section of Part 4 of this series:
https://medium.com/@nttp/the-volatility-of-volatility-part-4-fb168534c89c
(Search for Caution light in the above article near the end… can we directly link to sections of a Medium article? I don’t know!)
Multiplying 1-centered returns together seems like the proper way to compute a final price if all we have is raw returns, but it is not quite what the doctor ordered as a solution to the Heston stochastic differential equation, or even to the ordinary geometric Brownian motion model with constant drift and constant diffusion.
From the Geometric Brownian Motion Wiki (upon which the Black-Scholes methods are based):
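The key result there (reconstructed here in its standard textbook form, not quoted from the sheet) is the GBM equation and its exact solution, which is where the exp() roll-up above comes from:

```latex
dS_t = \mu S_t \, dt + \sigma S_t \, dW_t
\qquad\Longrightarrow\qquad
S_t = S_0 \exp\!\left( \left(\mu - \frac{\sigma^2}{2}\right) t + \sigma W_t \right)
```

Note the exponential of a sum of increments on the right-hand side, matching the finalPriceB form rather than the finalPriceA form.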
A pretty cool table in this article compares the Heston stochvol model to the Black-Scholes constant vol model:
https://corporatefinanceinstitute.com/resources/derivatives/heston-model
As we discuss in our other articles, models like this are useful for more than just option pricing. We can also use them for risk analysis (e.g., how likely are we to lose 10% in the next 3 months?).
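That kind of risk question can be sketched directly from a constant-vol GBM Monte Carlo. This is an illustrative Python sketch under assumed parameters (roughly the daily vol scale we measure for GE below, zero drift as in our baseline runs), not the article's macro:

```python
import math
import random

random.seed(42)

# Estimate the chance of a >= 10% loss over ~3 months (63 trading days)
# under a constant-volatility GBM with no drift.
S0 = 100.0
sigma_daily = 0.015   # assumed ~1.5% daily vol
mu_daily = 0.0        # no drift, matching the baseline model
ndays = 63
nmonte = 20000

losses = 0
for _ in range(nmonte):
    # sum of daily log returns: (mu - sigma^2/2) dt + sigma * Z per day
    log_ret = sum((mu_daily - 0.5 * sigma_daily**2) + sigma_daily * random.gauss(0, 1)
                  for _ in range(ndays))
    if S0 * math.exp(log_ret) <= 0.9 * S0:
        losses += 1

print(f"P(lose >= 10% in {ndays} days) ~ {losses / nmonte:.3f}")
```

With these assumed numbers the estimate lands around a one-in-five chance, which is the kind of answer this class of model is built to give.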
Introducing real price data: General Electric
GE has been on a run-up lately, so it will be interesting to see what our Excel model might say about it.
We will pick stock market symbol GE (General Electric) and use our MCarloRisk3D app to compute its recent daily volatility. We will use some typical settings that we start with in that app: 100 days withheld (for a quick bulk backtest, as we call this process in the app), then looking backwards another 252 days (1 trading year) from that 100-day mark to compute the volatility, so we only use data prior to the 100 day forecast window: we don't want to include "future" data in our backtest!
The MCR3D app computes daily volatility of 0.015134 for this 252 day window, and a last known price (before we start forecasting into the withheld data window) of $100.29. This price was from the trading day before Jan 3 2024, which was Jan 2 2024. We use this volatility and starting price as our base data instead of $1 at 1% vol as we have been doing for a theoretical asset. Starting price goes into Sheet 1, B2.
Our historical data is from the IEX Cloud historical data service.
Putting this volatility into our Excel at D1 / sheet HE2 "theta" (using 1.5134 percent, per our sheet set up) and setting the stochastic vol parameters to their neutral settings (no stochvol effect): kappa = 1, xi = 0, target correl = 0, we can run our new mcFaster macro at nmonte = 5000 (the default run count for our MCR3D app).
For now, you have to edit the macro to change the run count (the nmonte parameter). We should probably make this a sheet parameter.
A 5000 run study with 1000 days (default in the mc macros) takes about 17 minutes to run on our system. Yikes. Since we are only going to be looking 100 days ahead, we can in the future speed this up considerably for repeated trials by only running the macro 100 days forward. The days fwd is another variable that we can bring out to the sheet.
A first try with baseline settings (only normal distribution diffusion, no stochastic volatility, no drift) shows that General Electric stock price reality is going far outside the bounds of our model, quickly. It almost seems like TSLA stock at some point in the recent past, right?
Has Mr. Musk been moonlighting at GE?
Even if we add the 99.5% curve (the top dark grey curve) to our graph, reality goes far beyond that. The missing model drift term of, say, 4% per annum, would not likely account for this large of a model bounds violation.
We have re-ordered the plots in the legend here versus our last article, for easier reading.
As you can see from the constant probability curves, we are still computing the model out 1000 days when we are only looking forward 100 days (since we withheld 100 points of GE for this test case), which is inefficient.
Introducing stochvol parameters
A rallying cry among modelers is often “Increase the DOF!” when they can’t find a solution with a simpler model. Meaning, increase the degrees of freedom. Or maybe this is just the modelers we know.
Next, we can arbitrarily crank up the stochastic volatility parameters and see if this new model contains reality better.
Setting the stochvol parameters to {kappa = 0.02, xi = 0.3, target correl or rho = 0.8} and leaving the volatility as measured historically, then re-running the monte carlo, we get:
Now the top 99.5th percentile curve is mostly enveloping the behavior of our GE real data 100 days out in this one snapshot backtest. That curve is coarse because 5000 MC runs isn’t enough to get it to converge to a smooth shape out to that extreme percentile. The general idea is that if we add more MC runs, this rare event bullish curve (99.5% = 1 in 200) will get pushed up a bit more and become smoother. If we add more MC runs, we may show that this model captures or envelopes that blue reality GE curve entirely without any further adjustments to the model. In fact, these stochvol parameters might turn out to be too extreme.
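For reference, the variance process these parameters drive can be sketched as a simplified Euler step of a Heston-style model. This is an illustrative Python sketch, not the macro: the parameter names and values follow the sheet (kappa, xi, rho, and theta set to the measured GE variance), but the macro's exact discretization may differ, and this naive scheme simply clamps variance at zero when a shock pushes it negative:

```python
import math
import random

random.seed(7)

# Heston-style parameters, roughly matching the sheet settings:
# kappa: mean-reversion speed, theta: long-run variance,
# xi: vol of vol, rho: price/variance shock correlation.
kappa, xi, rho = 0.02, 0.3, 0.8
theta = 0.015134**2   # long-run variance from the measured GE daily vol
dt = 1.0              # one trading day

v = theta    # start variance at its long-run level
S = 100.29   # last known GE price from the article

for _ in range(100):  # 100 days forward, one sample path
    z1 = random.gauss(0, 1)
    z2 = rho * z1 + math.sqrt(1 - rho**2) * random.gauss(0, 1)  # correlated shock
    # variance update (clamped at zero), then log-price update using that variance
    v = max(v + kappa * (theta - v) * dt
              + xi * math.sqrt(v * dt) * z2, 0.0)
    S *= math.exp(-0.5 * v * dt + math.sqrt(v * dt) * z1)

print(f"one sample path ends at {S:.2f}")
```

Even in this toy form you can see why xi = 0.3 is aggressive relative to a daily variance of ~0.0002: a single variance shock can dwarf the starting variance, which is consistent with our suspicion above that these stochvol parameters might be too extreme.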
If you read our other articles on this drift-diffusion forecasting topic, you will know that just a single snapshot in time backtest is typically not a robust enough test. We think that you should do rolling window backtests — which we call “exhaustive validation backtests” in our MCR3D apps — and see how well the model contains reality in those rolling windows. Just because GE has been behaving bullishly in this recent 100 day window doesn’t mean that is the case for all 100 day windows in history.
This one test case example is just to show that a stochvol model is more flexible than an ordinary "volatility scales as root(time)" type of model. Without doing something drastic like assuming that volatility is 2x or 3x what it has been in the recent past, we are able to capture, at least approximately, this extreme GE bull run within the probability envelope of the model. Further tuning would be required if we wanted to really use this model. As we noted in our previous articles, a recommended way to tune these stochvol parameters (see the Wikipedia article on the Heston model, Calibration section) is to calibrate them to listed options prices in the market. But that is too much for our slow-running macro at the moment, so we pause here and let you mull all of this over.
As a contrast, we can do a similar run on one of our several MCR3D apps. Since those apps re-sample the non-normal actual return data to build the forward forecasts, the base model captures the extreme GE run-up within the 99th percentile band of its envelope without having to unleash the stochastic volatility degrees of freedom yet:
An asset performing far above expectations is not usually a concern… unless you have shorted that asset (or are purchasing put options or similar “bets on the bears” derivatives or ETFs)… which no doubt many traders do.
Further reading
Next up, Part 7: macro controls and drift!
https://medium.com/@nttp/the-volatility-of-volatility-part-7-70b031a60998