The Pitfalls of Relying on the Central Limit Theorem in Portfolio Return Analysis

QuantPy
17 min read · Sep 29, 2023


In the world of finance, individuals and investment professionals alike strive to make sound investment decisions while accounting for risk. Usually this process involves understanding and analyzing portfolio returns. Central to this methodology is the Central Limit Theorem (CLT), a statistical concept widely employed to assess the behavior of financial data. However, beneath its apparent utility lies a series of assumptions that can mislead rather than illuminate. This article delves into why a critical examination of the CLT's applicability in portfolio return analysis is imperative, shedding light on its limitations and on alternatives that offer a more accurate view of financial reality.

In this article you will:

  • Understand why the Central Limit Theorem (CLT) is a commonly used tool for analyzing financial data.
  • Uncover the hidden complexities and limitations of the CLT.
  • Explore alternative approaches that offer more accurate insights into portfolio performance.
  • Embark on a journey to make your financial decisions wiser and more informed.

1. The Normal Distribution and its Limitations in Finance

The question that inevitably arises among finance beginners is whether it is safe to assume that stock returns follow a normal distribution. In short, the answer is almost always NO. The distribution of asset prices is a product of intricate micromarket dynamics involving trade flow, trade price distributions, and volumes. However, it’s essential to recognize that this assumption of normality can yield relatively accurate results for some inquiries while leading to disastrous outcomes for others.

The primary issues to consider are as follows:

  • Asset returns exhibit skewness and fat tails (see the quick check after this list).
  • Returns may display serial correlation, meaning successive returns are not independent.
  • Volatility is not constant over time; it exhibits heteroscedasticity, with volatility clustering.
  • Relationships exist between volatility and asset returns, such as the Leverage Effect.
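
As a quick empirical check of the first point, here is a sketch of my own (using simulated data, since no real price series ships with this article) showing how a Jarque-Bera test, which measures deviations from normal skewness and kurtosis, will typically reject normality for fat-tailed returns:

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# fat-tailed "returns" from a Student-t with 3 degrees of freedom,
# scaled to roughly 1% daily volatility, versus a Gaussian benchmark
t_returns = 0.01 * rng.standard_t(df=3, size=2000)
normal_returns = rng.normal(loc=0.0, scale=0.01, size=2000)

for name, r in [("student-t", t_returns), ("gaussian", normal_returns)]:
    stat, pvalue = stats.jarque_bera(r)
    print(f"{name:>9}: JB stat = {stat:8.1f}, p-value = {pvalue:.3f}")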

While we won’t delve into the correlation structure between assets here, it’s crucial to acknowledge that assuming normality in portfolio modeling poses significant risks. “One of the lessons learned from past financial crises is that correlations tend to increase in stressed market conditions” [1]. However, ongoing research into improved methods for incorporating non-stationary asset correlations into credit and portfolio modeling is addressing this concern [2].

In this article, we will explore when it’s reasonable to assume a normal distribution for asset returns, such as when determining likely investment returns over a 25-year period. We will also discuss situations where modeling using the normal distribution is inappropriate, such as assessing the probability or risk that your investment manager might lose all your money in the coming year.

2. The Central Limit Theorem (CLT) Explained

The Central Limit Theorem (CLT) is a fundamental statistical concept that states that the distribution of the sample mean of a sufficiently large number of independent and identically distributed (i.i.d.) random variables approaches a normal distribution, regardless of the original distribution of those variables. In simpler terms, it suggests that when we take many random samples and calculate their means, those means will tend to follow a bell-shaped, Gaussian (normal) distribution.
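
As a minimal illustration (a sketch added here, not part of the original derivation), we can watch the theorem in action with NumPy: averages of draws from a heavily skewed exponential distribution lose their skewness as the sample size grows.

import numpy as np

rng = np.random.default_rng(42)

# average n draws from a skewed exponential distribution;
# by the CLT the sample means become increasingly Gaussian as n grows
for n in [1, 5, 50, 500]:
    sample_means = rng.exponential(scale=1.0, size=(10000, n)).mean(axis=1)
    centred = sample_means - sample_means.mean()
    skew = (centred**3).mean() / sample_means.std()**3
    print(f"n={n:3d}: mean={sample_means.mean():.3f}, "
          f"std={sample_means.std():.3f}, skew={skew:.3f}")

For the exponential distribution the skewness of the sample mean shrinks like 2/√n, so the printed skew should fall from about 2 toward 0 as n increases.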

My favourite video on the Central Limit Theorem is from the 3Blue1Brown YouTube channel. I recommend giving it a watch.

Application of CLT in Finance

In the context of finance, the CLT is often employed to analyze portfolio returns. It assumes that the returns from individual assets within a portfolio are i.i.d., and by extension, the portfolio returns themselves are approximately normally distributed. This assumption simplifies risk assessment and return prediction by leveraging the properties of the normal distribution.

An Example of CLT: Using Sharpe Ratios for Investment Decisions

Millions of individuals worldwide grapple with a common financial question throughout their lives:

‘Where should I invest my hard-earned savings?’

To shed light on how most people approach this challenge, consider a scenario involving 12 portfolio management funds. Each of these funds features randomly assigned distributions based on average returns, volatility, and skew. Additionally, these portfolio managers have been assigned varying lengths of operational history, ranging from as little as 3 years (equating to just 3 data points) to an impressive 31-year track record.

import warnings
import numpy as np
import pandas as pd
import seaborn as sns
from scipy import stats
import matplotlib.pyplot as plt

sns.set_theme()
colors=sns.color_palette()
colors+=colors
warnings.filterwarnings("ignore", module=r"seaborn\..*")

no_portfolio_mgrs = 12

# years of track record per manager, clipped so the sample size is
# always valid and matches the 3-31 year range described above
no_returns = np.clip(np.random.normal(loc=20, scale=8,
                                      size=no_portfolio_mgrs), 3, 31)
# randomly assigned return distribution parameters per manager
means = np.random.normal(loc=0.10, scale=0.05, size=no_portfolio_mgrs)
stds = np.random.normal(loc=0.3, scale=0.05, size=no_portfolio_mgrs)
skews = np.random.randn(no_portfolio_mgrs) * 0.5

# a skewed (Pearson type III) return distribution for each manager
portfolios = [stats.pearson3(loc=mean, scale=std, skew=sk)
              for mean, std, sk in zip(means, stds, skews)]

portfolio_mgr = np.arange(no_portfolio_mgrs)
shape_dim = [3, 4]
portfolio_mgr = portfolio_mgr.reshape(shape_dim)
fig, axs = plt.subplots(nrows=3, ncols=4, figsize=(12,11))

portfolio_returns = {}
for (x, y), portfolio in np.ndenumerate(portfolio_mgr):
    # draw this manager's simulated annual returns
    years = int(no_returns[portfolio])
    portfolio_returns[portfolio] = portfolios[portfolio].rvs(size=years)

    sns.histplot(portfolio_returns[portfolio],
                 bins=10, kde=True, ax=axs[x, y],
                 color=colors[portfolio_mgr[x][y]])
    axs[x, y].set_title(f"Mgr {portfolio_mgr[x][y]}, {years} yrs")
    axs[x, y].set_xlim(-1, 1)

fig.suptitle('Comparison of Portfolio Managers Returns')

[Figure: Comparison of portfolio managers' returns]

Naturally, we would like to compare the Sharpe Ratio among several different investment manager portfolios. So, how do we estimate the Sharpe Ratio?

We can use the following formula to calculate the Sharpe Ratio:

$$\text{Sharpe Ratio} = \frac{\bar{R} - R_f}{\sigma}$$

Here, $\bar{R}$ represents the mean return of the asset (in this case, a Portfolio Management Fund), $\sigma$ is the standard deviation of the returns, and $R_f$ is the risk-free rate. In this example, let’s make the simple assumption that the risk-free rate is 4%. We’ll treat this rate as a barrier or hurdle rate, representing the opportunity cost.
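
As a small aside, here is a minimal, hypothetical helper implementing this formula (the function name and the example returns are my own, purely illustrative):

import numpy as np

def sharpe_ratio(returns, risk_free_rate=0.04):
    # point-estimate Sharpe Ratio from a series of annual returns,
    # using the sample mean and standard deviation as plug-in estimates
    returns = np.asarray(returns)
    return (np.mean(returns) - risk_free_rate) / np.std(returns, ddof=1)

# ten hypothetical annual returns, expressed as decimals
print(sharpe_ratio([0.12, -0.05, 0.20, 0.08, 0.15,
                    -0.10, 0.07, 0.18, 0.03, 0.11]))

With ddof=1 this sketch uses the unbiased sample standard deviation; the table code below uses np.std's default, which divides by n.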

Now, let’s illustrate this with a table of portfolio returns and corresponding Sharpe Ratios using Python code:

risk_free_rate = 0.04
data = {mgr: [np.mean(portfolio),
              np.std(portfolio),
              (np.mean(portfolio) - risk_free_rate) / np.std(portfolio)]
        for mgr, portfolio in portfolio_returns.items()}

df = pd.DataFrame.from_dict(data,
                            orient='index',
                            columns=['mean', 'std', 'sharpe_ratio'])
df.sort_values(by='sharpe_ratio')

[Table: Sharpe Ratio results for each portfolio manager]

At this point, you might think that ordering the portfolio managers from lowest to highest Sharpe Ratio and selecting the portfolio management fund with the highest Sharpe Ratio is the way to go. However, it’s essential to dive deeper into this analysis.

3. The Problematic Assumptions of the Central Limit Theorem in Finance

Relying solely on the Sharpe Ratio as a single point estimate can be misleading, especially when we haven’t taken into account the varying durations of these portfolio management funds. Some funds have only been in operation for 3 years, while others boast a 31-year track record. The question that arises is: What are we assuming when we calculate return means and standard deviations to compute a point estimate Sharpe Ratio?

The crucial assumption underlying our estimation method is the use of Maximum Likelihood Estimation (MLE) under the Normal Distribution, which implicitly invokes the Central Limit Theorem (CLT) in the limit as the sample size ’n’ approaches infinity.

Regular Monte Carlo Experiment

To gain a deeper understanding of the Central Limit Theorem (CLT) and its application to our portfolio example and Sharpe Ratios, let’s consider the hypothetical case of an Investment Manager whose portfolio returns exhibit the distribution depicted below.

true_mean = 0.08
true_std = 0.15
true_skew = -1

dist_skew = stats.pearson3.rvs(loc=true_mean, scale=true_std,
                               skew=true_skew, size=int(1e6))
sns.displot(dist_skew, kind='kde', fill=True)
plt.title('Returns Distribution:\n$X \\sim r_P$')
plt.xlabel('Annual returns (%)')
plt.show()

[Figure: Returns distribution of the hypothetical portfolio manager]

In this scenario, I am currently 25 years old and intend to invest my money for a 25-year period. My goal is to comprehend the most likely average returns and the potential variability in returns that I might achieve over this investment horizon.

We have access to the returns distribution of the Portfolio Manager (represented as $r_P$), and we can simulate independent and identically distributed (i.i.d.) random variables, X₁, X₂, and so forth, representing the returns in each year of the investment fund. Our objective is to examine portfolio characteristics expressed as Y = g(X), where ‘g’ is a real-valued function. Specifically, we aim to make inferences about the expectation E[g(X)]. To do this, let’s explore the sampling distribution of the mean by conducting a Monte Carlo simulation.

We can represent the Monte Carlo estimate of the mean as:

$$\bar{Y}_M = \frac{1}{M}\sum_{i=1}^{M} Y_i$$

The samples $Y_i$ are i.i.d. with mean $\mu$ and variance $\sigma^2$. Applying the Central Limit Theorem, if we were to repeat this Monte Carlo simulation many times, each time calculating the average of our simulated samples, the sample mean would converge in distribution to:

$$\bar{Y}_M \xrightarrow{d} \mathcal{N}\!\left(\mu, \frac{\sigma^2}{M}\right)$$

Therefore the empirical variance can be estimated by:

$$\hat{\sigma}^2 = \frac{1}{M-1}\sum_{i=1}^{M}\left(Y_i - \bar{Y}_M\right)^2$$

The more common metric for Monte Carlo variance is the Standard Error. Because we are averaging randomly generated samples, the estimate is itself random, and the Standard Error quantifies that sampling error:

$$SE(\bar{Y}_M) = \frac{\hat{\sigma}}{\sqrt{M}}$$

Now that we’ve covered the mathematical foundations, let’s implement these concepts in Python. We’ll begin by simulating 10,000 different 25-year investment paths, resembling 10,000 potential portfolio outcomes for an individual who starts investing at the age of 25 with a Portfolio Manager whose portfolio exhibits an 8% mean return, 15% volatility, and a skew of -1.

# define holding period
years = 25
# number of theoretical simulations
M = 10000
# sample from the distribution of returns
r_sample = stats.pearson3.rvs(loc=true_mean,
                              scale=true_std,
                              skew=true_skew,
                              size=(years, M))
# check for cases where returns were less than -100%,
# i.e. you lost all your money in a single year!
bankrupt = (r_sample < -1)
print("Number of times the portfolio",
      "manager lost your money",
      f"in a single year: {len(r_sample[bankrupt])}")
print(f"max return in single year {np.max(r_sample)*100:2.1f}%")
print(f"min return in single year {np.min(r_sample)*100:2.1f}%")

After conducting 10,000 simulations over a 25-year investment period, we observed that in just 2 instances (please note that this number may vary in your own executions), the Portfolio Manager’s returns led to a complete loss of our investment in a single year. This outcome aligns with our expectations, as evident from the maximum and minimum returns, which can reach as low as -115.4%. Such extreme negative returns in a single year result in the total loss of invested capital.

However, in the remaining 9,998 simulations, we successfully reached the end of the 25-year investment horizon. During these simulations, we employed log returns for the calculations, as they allow us to sum across years to obtain the aggregate return over the 25-year period. Now, let’s visualize the distribution of aggregate returns achieved at age 50 by plotting a histogram.

# calculate log returns (NaN wherever a year's return was below -100%)
log_returns = np.log(1 + r_sample)
X = np.cumsum(log_returns, axis=0)
X_T = X[-1, :]
plt.hist(X_T, bins=100)
plt.ylabel('Frequency')
plt.xlabel('Aggregate Log Returns')
plt.show()
print(f"The monte carlo mean estimate is: {np.nanmean(X_T):.2f}")
print(f"The monte carlo 95% CI half-width is: {1.96*np.nanstd(X_T)/np.sqrt(M):.2f}")

We observe a well-defined histogram of aggregate returns that is approximately normal. Our estimate of the mean aggregate log return is 162%, with a 95% confidence half-width of 2% (roughly 2 standard errors). This statistical measure provides a strong basis for understanding the expected returns.

A crucial insight emerges when we compare this approach to using the Portfolio Manager’s Sharpe ratio alone. Relying solely on the Sharpe ratio would give us a single point estimate sitting at the centre of this histogram, capturing neither the distribution of possible outcomes nor the negative returns that occurred in certain years.

Changing Timeframes (when we cannot use the CLT)

What happens when we consider varying holding periods and examine total returns over different timeframes: 1, 2, 5, 10, 25, and 10,000 years? The shorter horizons matter when we need to access our investment unexpectedly and close our account early. The 10,000-year holding period is an extreme example that allows us to envision the potential distribution of returns over a very long investment horizon.

We can conduct Monte Carlo simulations, assuming an individual has invested in the same portfolio management fund with the returns profile described at the beginning of this section (denoted $r_P$).

samples = np.array([1, 2, 5, 10, 25, 10000]).reshape([2, 3])
M = int(1e4)
fig, axs = plt.subplots(nrows=2, ncols=3, figsize=(8, 6))
count = 0
for (x, y), sample_size in np.ndenumerate(samples):
    means = []
    for _ in range(M):
        dist_skew_sample = stats.pearson3.rvs(loc=true_mean,
                                              scale=true_std,
                                              skew=true_skew,
                                              size=sample_size)
        means.append(np.mean(dist_skew_sample))

    sns.histplot(means, bins=20, kde=True, ax=axs[x, y], color=colors[count])
    mu_est = np.nanmean(means)
    std_est = 1.96 * np.nanstd(means)  # 95% interval half-width
    axs[x, y].set_title(f"{sample_size} yr, MC E[R] {mu_est:.3},"
                        f"\nMC +/-1.96 Std {std_est:.3}")
    axs[x, y].plot([mu_est - std_est, mu_est - std_est], [0, 1000], 'k--')
    axs[x, y].plot([mu_est + std_est, mu_est + std_est], [0, 1000], 'k--')
    count += 1

plt.suptitle("Annualised Return and 95% interval vs years of Fund")
fig.tight_layout()
plt.show()

In the analysis, we notice a significant characteristic: a long, fat left-hand tail in the distribution of returns for the 1, 2, and 5-year fund scenarios. This pattern appears consistent with the ‘real’ distribution of the Portfolio Manager’s true returns.

However, as we extend the holding period, the influence of the Central Limit Theorem (CLT) becomes evident: the distributions gradually shift toward a closer approximation of normality, and the potential variance in average investment returns over the holding period decreases.

Changing the Number of “Simulations”

In the real world, we face inherent limitations. We cannot live for 10,000 years, nor can we relive 10,000 lives over a 25-year timeframe to compute the average return that might have been, as our Monte Carlo analysis allows us to do.

Instead, we find ourselves in a single realized outcome, represented by just one bar on a histogram at a specific point in time. Given this limitation, what conclusions can we draw about the returns based on the available data? One option is to apply the Maximum Likelihood Estimator and place our trust in the Central Limit Theorem.

Maximum Likelihood Estimator (MLE)

The goal of maximum likelihood estimation (MLE) is to determine the parameters for which the observed data have the highest joint probability.

We write the parameters governing the joint distribution as a vector:

$$\theta = [\theta_1, \theta_2, \ldots, \theta_k]^{T}$$

The parameter vector lives in a parameter space $\Omega$, and the distribution falls within a parametric family:

$$\{ f_n(\,\cdot\,; \theta) \mid \theta \in \Omega \}$$

where the joint probability density function $f_n$ must be specified.

The likelihood function is the joint probability of the observed data, viewed as a function of the parameters of the statistical model. If $X$ generates $x_1, x_2, \ldots, x_n$ as an i.i.d. process, then the likelihood function can be written as the product of the individual densities:

$$L(\theta; x_1, \ldots, x_n) = \prod_{i=1}^{n} f(x_i; \theta)$$

Although maximizing the likelihood function directly can provide the correct result, it is often computationally challenging due to arithmetic underflow. To make the calculation more tractable, we take the logarithm of the likelihood. This turns the product into a summation of larger values, and because the logarithm is monotonic, maximizing it is an equivalent optimization problem:

$$\ell(\theta) = \log L(\theta; x_1, \ldots, x_n) = \sum_{i=1}^{n} \log f(x_i; \theta)$$
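
To make this concrete, here is a short sketch of my own (with hypothetical data) that maximizes the normal log-likelihood numerically with scipy.optimize and confirms it matches the closed-form estimates:

import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)
x = rng.normal(loc=0.08, scale=0.15, size=25)  # 25 hypothetical annual returns

def neg_log_likelihood(params, data):
    mu, sigma = params
    # negative sum of log densities; minimising this maximises the likelihood
    return -np.sum(stats.norm.logpdf(data, loc=mu, scale=sigma))

result = optimize.minimize(neg_log_likelihood, x0=[0.0, 0.1], args=(x,),
                           bounds=[(None, None), (1e-6, None)])
mu_hat, sigma_hat = result.x
print(mu_hat, np.mean(x))    # numerical MLE vs closed-form sample mean
print(sigma_hat, np.std(x))  # numerical MLE vs closed-form (biased) std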

For those interested in understanding how to derive MLE from the optimisation problem of the log-likelihood function of the normal distribution, a helpful reference can be found here [3].

Below is the maximum likelihood estimate of the mean and variance for normally distributed variables:

$$\hat{\mu} = \frac{1}{n}\sum_{i=1}^{n} x_i, \qquad \hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}\left(x_i - \hat{\mu}\right)^2$$

It’s important to note that the variance estimate is biased: it underestimates the variance of the larger population. The bias can be corrected by dividing by the degrees of freedom, $n - 1$, instead of $n$.
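
A quick simulation (again my own sketch, not from the original) makes the bias visible: averaging the MLE variance over many small samples systematically undershoots the true variance, while the n-1 correction does not.

import numpy as np

rng = np.random.default_rng(1)
true_var = 0.15**2

# 100,000 small samples of n=10 observations each
samples = rng.normal(loc=0.08, scale=0.15, size=(100000, 10))
print("true variance:      ", true_var)
print("mean MLE variance:  ", samples.var(axis=1, ddof=0).mean())  # ~0.9 x true
print("mean unbiased var.: ", samples.var(axis=1, ddof=1).mean())  # ~ true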

The key assumption underlying this approach is that all sample returns are independent and identically distributed (i.i.d.) and originate from an underlying normal distribution. As we’ve established earlier, this approximation for the sample mean becomes reasonable as the sample size ’n’ grows large.

Now, let’s delve into our example where we have a single lifetime and a 25-year investment window with our portfolio manager.

years = 25

fig, axs = plt.subplots(nrows=1, ncols=1, figsize=(6, 4))

dist_skew_sample = stats.pearson3.rvs(loc=true_mean, scale=true_std,
                                      skew=true_skew, size=years)

sns.histplot(dist_skew_sample, bins=20, kde=True, ax=axs)
mu_est = np.nanmean(dist_skew_sample)
std_est = np.nanstd(dist_skew_sample, ddof=1)  # unbiased estimate
# multiply by the 1.96 critical value (normal distribution, 97.5th percentile)
se3 = 1.96 * np.nanstd(dist_skew_sample) / np.sqrt(years)
axs.set_title(f"{years} yr \n SR {mu_est/std_est:.3}"
              f"\n E[R] {mu_est:.3} +/- {se3:.2f} (95% CI) \n Unbiased Std {std_est:.3f}")
axs.plot([mu_est - se3, mu_est - se3], [0, int(years/5)], 'k--')
axs.plot([mu_est + se3, mu_est + se3], [0, int(years/5)], 'k--')

plt.suptitle("Annualised Return and 95% interval from a single 25-year sample")
fig.tight_layout()
plt.show()

What we’ve come to realize is that this approach yields only a point estimate, with at best some confidence intervals (CIs) around it, given the available data. This limitation is inherent to relying on the MLE method and the CLT, and it stands to reason that with less data our confidence in the results diminishes.

A summary of the issues with the CLT & MLE approach assuming a normal distribution

Using the Central Limit Theorem and Maximum Likelihood Estimation assuming normal returns to estimate the Sharpe ratio of investment portfolios can lead to several issues, especially when making investment decisions with personal money. Here are the key problems associated with this approach:

1. Assumption of Normality:

  • The CLT relies on the assumption that individual returns are normally distributed. In reality, financial data often exhibits non-normal characteristics such as skewness, kurtosis, and fat tails. This can lead to inaccurate estimates when assuming normality.

2. Sensitivity to Outliers:

  • Normal distributions are sensitive to outliers. Even a few extreme returns can significantly impact the estimated mean and standard deviation, leading to a distorted Sharpe ratio. Personal investments may be particularly vulnerable to such outliers.

3. Volatility Clustering:

  • Financial markets exhibit volatility clustering, where periods of high volatility tend to be followed by further high volatility, and calm periods by further calm. Assuming constant volatility, as MLE often does, can lead to misrepresentations of risk and return expectations.

4. Autocorrelation:

  • Asset returns can be autocorrelated, meaning they are dependent on past returns. MLE and CLT assume independence of observations, which may not hold in practice. Ignoring autocorrelation can lead to inaccurate parameter estimates.

5. Non-Stationarity:

  • Financial markets are not always in a stationary state, which means that return characteristics can change over time. Assuming constant parameters using MLE can result in inaccurate estimates, especially if market conditions change.

6. Loss of Information:

  • Using the Sharpe ratio as a single metric to make investment decisions may oversimplify the complex dynamics of a portfolio. It reduces the information in your data to a single number, potentially leading to suboptimal investment choices.

7. Diversification Benefits:

  • The Sharpe ratio does not explicitly account for the benefits of diversification. It assumes that returns follow a joint normal distribution, which may not accurately represent the interactions among different assets in your portfolio.

8. Risk of Overconfidence:

  • Relying on MLE and assuming normal returns can give a false sense of precision in your Sharpe ratio estimates. Overconfidence in these estimates can lead to inappropriate investment decisions.

9. Missed Opportunities:

  • Focusing solely on the Sharpe ratio and assuming normality may cause you to miss investment opportunities in assets or strategies that do not conform to these assumptions but still offer attractive risk-adjusted returns.

In the context of personal investment decisions, it’s crucial to be aware of these limitations. It’s often advisable to incorporate alternative risk assessment techniques that account for the complexities and uncertainties of financial markets, such as Monte Carlo simulations, robust portfolio optimization, or non-parametric methods. Additionally, considering a broader range of risk and return metrics beyond just the Sharpe ratio can provide a more comprehensive view of your investment choices. Diversification and periodic portfolio rebalancing are also essential strategies for managing risk and maximizing returns in personal investment portfolios.

4. Real-World Examples of CLT Misapplications

Relying on the Central Limit Theorem (CLT) in portfolio return analysis has led to flawed conclusions in several real-world cases.

These examples underscore the importance of critically evaluating the suitability of the CLT in specific financial scenarios.

  1. The 2008 Financial Crisis: During the 2008 financial crisis, many investment models based on CLT assumptions failed to predict the extreme market events and losses. The crisis highlighted the inadequacy of assuming normality in highly volatile markets, leading to severe financial repercussions.
  2. Flash Crashes: Instances of flash crashes, like the “Flash Crash” of May 6, 2010, showed that market returns can exhibit rapid and extreme fluctuations that defy the assumptions of CLT. Models relying on these assumptions often underestimated the risks associated with such events.
  3. Cryptocurrency Markets: The cryptocurrency market’s extraordinary volatility challenges the CLT’s applicability. Sharp price swings and the lack of fundamental drivers often result in non-normal return distributions. Investors who relied on CLT-based models may have misjudged their portfolio risk.
  4. Black Swan Events: The term “black swan” describes highly improbable, unforeseen events that have a profound impact. CLT-based models tend to underestimate the likelihood and consequences of such events. For example, the collapse of Lehman Brothers in 2008 was considered a black swan event that significantly disrupted financial markets.
  5. Long-Term Capital Management (LTCM): LTCM, a hedge fund founded by Nobel laureates and renowned quants, relied on CLT-based risk models. In 1998, LTCM suffered a near-collapse due to extreme market moves that their models didn’t anticipate, highlighting the limitations of CLT assumptions in complex financial environments.

These real-world examples demonstrate that blindly applying the CLT to portfolio return analysis can lead to severe misjudgments of risk and can have detrimental consequences for investors and financial institutions. Alternative approaches and a critical evaluation of data distribution characteristics are essential for a more accurate assessment of financial risks.

5. Alternative Approaches to Portfolio Return Analysis

Recognizing the limitations of the Central Limit Theorem (CLT) in finance, alternative statistical methods offer more robust options for portfolio return analysis.

These methods better accommodate non-normal data distributions and complex financial scenarios.

In response to the shortcomings of CLT-based analysis, alternative approaches provide valuable insights:

  1. Bootstrapping: Bootstrapping is a resampling technique that generates multiple simulated datasets from the observed data. It allows for the estimation of a portfolio’s statistical properties, including risk and return, without assuming a specific underlying distribution. This method is particularly useful when dealing with non-normally distributed returns or limited data (see the sketch after this list).
  2. Monte Carlo Simulations: Monte Carlo simulations involve generating thousands of random scenarios to model various market conditions and portfolio outcomes. This approach is flexible and can incorporate complex factors such as correlations and fat tails. It offers a comprehensive view of potential performance under diverse market conditions.
  3. Non-Parametric Statistics: Non-parametric methods, like the Kolmogorov-Smirnov test or kernel density estimation, make minimal assumptions about the data distribution. They are well-suited for analyzing data with unknown or non-standard distributions, providing more accurate insights into portfolio characteristics.
  4. Bayesian Methods: Bayesian techniques, such as Bayesian inference and Bayesian networks, offer a probabilistic framework for modeling and analyzing financial data. These methods allow for the incorporation of prior beliefs and updating of probabilities as new information becomes available, making them valuable for decision-making in uncertain financial environments.
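
As a concrete illustration of the first approach above, here is a minimal bootstrap sketch (with hypothetical, simulated returns) that produces a confidence interval for the Sharpe ratio without assuming any particular return distribution:

import numpy as np

rng = np.random.default_rng(7)
risk_free_rate = 0.04

# 25 hypothetical annual returns with a negatively skewed component
returns = rng.normal(0.08, 0.15, size=25) - 0.05 * rng.exponential(size=25)

def sharpe(r):
    return (np.mean(r) - risk_free_rate) / np.std(r, ddof=1)

# resample the observed returns with replacement B times,
# recomputing the Sharpe ratio on each resample
B = 10000
boot = np.array([sharpe(rng.choice(returns, size=returns.size, replace=True))
                 for _ in range(B)])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"point estimate: {sharpe(returns):.2f}, "
      f"95% bootstrap CI: [{lo:.2f}, {hi:.2f}]")

Notice that when the underlying returns are skewed, the bootstrap interval is typically asymmetric around the point estimate, something a normal-theory confidence interval cannot capture.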

By embracing these alternative methods, including Bayesian approaches, investors and analysts can better navigate the challenges posed by non-normal financial data and enhance their decision-making processes in an ever-evolving financial landscape.

6. Conclusion

Key Points:

  • The central limit theorem (CLT) is commonly used in finance to analyze portfolio returns, under an assumption of normally distributed returns.
  • However, financial data often deviates from a normal distribution due to fat tails and outliers.
  • Violations of CLT assumptions, such as non-i.i.d. data, can lead to misleading results and poor investment decisions.
  • Real-world examples highlight the dangers of blindly applying the CLT in portfolio return analysis.
  • Alternative approaches like bootstrapping, Bayesian Inference and Monte Carlo simulations offer more robust ways to analyze non-normally distributed data.
  • Investors should critically assess the appropriateness of using the CLT and consider alternative methods when analyzing portfolio returns.

7. Additional Resources

To delve deeper into portfolio return analysis, consider these resources:

  1. “Monte Carlo Methods in Financial Engineering” by Paul Glasserman.
  2. “The Bootstrap and Edgeworth Expansion” by Peter Hall.
  3. “Statistics and Data Analysis for Financial Engineering” by David Ruppert.
  4. “Nonparametric Statistics for Non-Statisticians: A Step-by-Step Approach” by Gregory W. Corder.
  5. Online courses and tutorials on bootstrapping, Monte Carlo simulations, and non-parametric statistics offered by educational platforms and universities.

These resources can provide valuable insights and practical guidance for investors and analysts seeking to move beyond the limitations of the central limit theorem in portfolio return analysis.

References

[1] Hull, John C. 2009. The credit crunch of 2007: What went wrong? Why? What lessons can be learned? J. Credit Risk 5(2): 3–18.

[2] Mühlbacher, A., and T. Guhr. 2018. Credit Risk Meets Random Matrices: Coping with Non-Stationary Asset Correlations. Risks 6(2): 42. https://doi.org/10.3390/risks6020042

[3] Taboga, Marco. 2021. “Maximum likelihood estimation”, Lectures on probability theory and mathematical statistics. Kindle Direct Publishing. Online appendix.
