Black–Litterman Model for Asset Allocation for Top 20 Indian Companies by Market Capitalization and Backtesting — Part 1

Sabir Jana, CFA
Published in Analytics Vidhya · 9 min read · Jun 19, 2020

Asset allocation is the most important step in the investment management process. The investor's ability and willingness to take risk, return objectives, and constraints, along with capital market expectations, are the key inputs for asset allocation. Harry Markowitz introduced mean-variance optimization (MVO), perhaps the most common approach used in practice to develop an asset allocation policy. However, MVO has drawn criticism, in particular for producing unstable, concentrated, and underperforming portfolios.

One way to address these problems is the Black–Litterman (BL) model, created by Fischer Black and Robert Litterman. The model starts with reverse optimization, where the starting weights and implied returns are derived from the market capitalization of each asset, and then allows the analyst to blend in alternative return forecasts. The mathematical details of the model might be heavy going for a non-mathematician and are beyond the scope of this article. Combined with mean-variance optimization, the BL model often leads to well-diversified asset allocations by improving the consistency between each asset class's expected return and its contribution to systematic risk.

Here, we are going to apply this model to derive the optimal weights for the top 20 Indian companies by market capitalization. I have taken equities as an example; however, the approach works for any asset class. In the second part of this article, we will backtest the weights derived from the model using Quantopian Zipline or Backtrader. This is not a magic formula, so don't expect that by the end of this article you will have magical weights for these 20 stocks that make you super-rich in no time. The objective is to help you understand the process and code along. I have used the open-source Python library PyPortfolioOpt, which provides an implementation of this model and much more, and I highly recommend exploring it. I must thank Robert Martin for his amazing work in developing PyPortfolioOpt. The code and data for this article can be found in my GitHub repository.

The overall approach is as follows:

  1. Data gathering for the top 20 Indian Companies.
  2. Prior calculation and analyst views on returns.
  3. BL Posterior estimates of the returns and covariance matrix.
  4. Confidence matrix on analyst views and Efficient Frontier.

Data gathering for the top 20 Indian Companies

To start with, we will need the list of the top 20 Indian companies by market capitalization. This information is available in the public domain and shouldn't be difficult to obtain. I have taken these details from https://www.moneyworks4me.com/best-index/nse-stocks/top-nifty50-companies-list/ and saved them in the mcap.csv file.

Next, we will need daily historical closing price details for these companies. I have fetched these details from the database that I maintain on my local machine. However, for your quick reference, a copy of the pricing dataset is provided in prices.csv.

Prior calculation and analyst views on returns

To construct the priors, which are the market-implied returns embedded in the market capitalization of each asset, we need the market-implied risk premium. This is called delta and is calculated by dividing the market's excess return by its variance. Once we have delta, the priors are obtained by multiplying delta by the covariance matrix and the market-cap weights (pi = delta * Sigma * w). PyPortfolioOpt makes all these calculations very easy.
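Before the real code, here is a self-contained toy sketch (synthetic numbers, not the article's data) of those two calculations; the actual computation with PyPortfolioOpt follows below.

import numpy as np

np.random.seed(0)
# toy daily market excess returns (market return minus the risk-free rate)
mkt_excess = np.random.normal(0.0004, 0.01, 2500)
# delta = market excess return / market variance
# (the linear annualization factors cancel out for daily data)
delta = mkt_excess.mean() / mkt_excess.var()

Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])   # toy annualized covariance matrix
w_mkt = np.array([0.6, 0.4])       # toy market-cap weights
pi = delta * Sigma @ w_mkt         # market-implied prior returns (the priors)
print(delta, pi)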

Let’s look at the code which performs the following tasks:

  1. Read mcap.csv as pandas dataframe.
  2. Create the tickers list from the dataframe created in step 1.
  3. Read daily closing prices from prices.csv.
  4. Create ticker’s market cap dictionary from the dataframe created in step 1.
  5. Download BSE-500 closing prices to represent the market. We need these to calculate market excess returns. I have used the yfinance Python library to download daily price data from Yahoo Finance.
  6. Calculate the asset covariance matrix and delta. For the covariance matrix, I have used the shrinkage estimator implementation provided in PyPortfolioOpt (a one-formula sketch of the idea follows this list); check out the PyPortfolioOpt documentation if you want to know more. I have used the 10-year India government bond yield of 5.796% as the risk-free rate.
  7. Plot Spearman correlation matrix of daily returns.
  8. Calculate market-implied returns using market cap dictionary, delta, and asset covariance matrix.
  9. Incorporate the analyst's views on returns. We have a views.csv file created for this; read the file and create a dictionary of views.
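For context on step 6, a shrinkage estimator such as Ledoit–Wolf blends the sample covariance matrix with a structured target (this is the general idea, not PyPortfolioOpt-specific notation):

Σ̂ = α F + (1 − α) S

where S is the sample covariance matrix, F is a structured target (for example, constant correlation), and α, between 0 and 1, is a shrinkage intensity estimated from the data. Shrinkage reduces the estimation error that makes raw sample covariance matrices unstable inside an optimizer.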
# necessary imports
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import pypfopt as pyp
import seaborn as sns
import datetime
import yfinance as yf
idx = pd.IndexSlice


%matplotlib inline
%config InlineBackend.figure_format = 'retina'

# read market capitalization and ticker details
mcap = pd.read_csv('data/mcap.csv')
mcap.head(2)

# create the tickers list (strip the 3-character exchange suffix from each symbol)
tickers = [ticker[:-3] for ticker in mcap.Tickers]
print(tickers)

# Read daily prices from csv
prices = pd.read_csv('data/prices.csv', index_col=[0], parse_dates=[0])
prices.head(2)

# create market cap dict
mcap_dict = {ticker[:-3] : cap for ticker, cap in zip(mcap['Tickers'].values, mcap['Market Cap'].values)}
print(mcap_dict)

# get market daily prices - BSE-500
market_prices = yf.download("BSE-500.BO", period="max")["Adj Close"]
market_prices.head(2)

# calculate asset covariance and delta
# market-implied risk premium, which is the market’s excess return divided by its variance
S = pyp.risk_models.CovarianceShrinkage(prices).ledoit_wolf()
delta = pyp.black_litterman.market_implied_risk_aversion(market_prices, risk_free_rate=0.05796)
print(delta)

fig, ax = plt.subplots(figsize=(10,8))
sns.heatmap(prices.pct_change().corr(method ='spearman'), ax=ax, cmap='coolwarm', annot=True, fmt=".2f")
ax.set_title('Assets Correlation Matrix')
plt.savefig('images/chart1', dpi=300)

# calculate prior - market implied returns
market_prior = pyp.black_litterman.market_implied_prior_returns(mcap_dict, delta, S)
market_prior.name = 'Prior'
print(market_prior)

# plot prior
market_prior.plot.barh(figsize=(12,6), title='Market Implied Returns', grid=True);
plt.savefig('images/chart2', dpi=300)

# provide absolute views - read csv
view_confidence = pd.read_csv('data/views.csv', index_col=[0])
view_confidence.head(20)

# create view dict
views_dict = {ind : view_confidence['View'][ind] for ind in view_confidence.index}
print(views_dict)
[Figure 1: Assets Correlation Matrix]
[Figure 2: Priors — Market Implied Returns]

The asset correlation matrix shows the pairwise correlations between these 20 stocks, and the second chart shows a bar plot of the market-implied returns. A correlation coefficient ranges from +1 to −1, where +1 indicates a strong positive relationship and −1 a strong negative one. It is no surprise that companies in similar businesses are more highly correlated; for example, ICICI Bank and Axis Bank show a correlation of 0.61. Returns, risk, and the correlations among assets are key to portfolio construction: the lower the correlation, the higher the diversification benefit.
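As a minimal sketch (reusing the prices dataframe loaded earlier), you can rank every stock pair by correlation to spot the best diversifiers:

# rank every stock pair by Spearman correlation of daily returns
corr = prices.pct_change().corr(method='spearman')
mask = np.triu(np.ones(corr.shape, dtype=bool), k=1)  # upper triangle, excluding the diagonal
pairs = corr.where(mask).stack().sort_values()
print(pairs.head(3))  # least-correlated pairs - the strongest diversification benefit
print(pairs.tail(3))  # most-correlated pairs, e.g. banks moving together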

BL Posterior estimates of the returns and covariance matrix

In this section, we run the Black–Litterman model with the historical covariance matrix, the market prior, and the analyst views as inputs. We also have the option to specify a confidence matrix on the views, which is covered in the next section. In the absence of a confidence matrix, PyPortfolioOpt sets the uncertainty of each view in proportion to the variance of the market prior. The BL model outputs posterior estimates of the returns and the covariance matrix.
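For reference (the derivation remains beyond our scope), the standard BL posterior-return formula combines the prior and the views as:

E[R] = [(τΣ)⁻¹ + Pᵀ Ω⁻¹ P]⁻¹ [(τΣ)⁻¹ π + Pᵀ Ω⁻¹ Q]

where π is the prior return vector, Σ the covariance matrix, P the matrix mapping views to assets, Q the view returns, Ω the (diagonal) view-uncertainty matrix, and τ a small scaling constant. PyPortfolioOpt evaluates this internally, so we only supply the inputs.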

Let’s go hands-on to perform the following tasks:

  1. Run the BL model with historical covariance, market prior, and analyst views as the input.
  2. Extract the posterior returns and create a dataframe of 'Prior', 'Historical', 'Posterior', and 'Views' returns, and plot them for a quick comparison.
  3. Save the returns dataframe as a csv file for future reference.
  4. Extract the covariance matrix and save it as a csv file for future reference.
# run the BL model
bl = pyp.BlackLittermanModel(S, pi=market_prior, absolute_views=views_dict)

# Posterior estimate of returns
bl_return = bl.bl_returns()
bl_return.name = 'Posterior'

# get historical returns
mu = pyp.expected_returns.mean_historical_return(prices)
mu.name = 'Historical'
# print(mu)

# create returns dataframe
returns_df = pd.DataFrame([market_prior, mu, bl_return, pd.Series(views_dict)],
                          index=['Prior', 'Historical', 'Posterior', 'Views']).T
print(returns_df)

# write it for future reference
returns_df.to_csv('data/returns.csv', header=True)
returns_df = pd.read_csv('data/returns.csv', index_col=[0])

# plot the returns
returns_df.plot.bar(figsize=(14,6), title = 'Returns Estimates - Prior, Historical, Posterior, Views', grid=True);
plt.savefig('images/chart3', dpi=300)

# get the covariance matrix
S_bl = bl.bl_cov()
S_bl.to_csv('data/S_bl.csv')
S_bl = pd.read_csv('data/S_bl.csv', index_col=[0])
[Figure 3: Returns Estimates — Prior, Historical, Posterior, Views]

The chart of the various return estimates is interesting, as there are wide variations among them. The historical returns are quite high, and we don't think similar performance is feasible over the forecasting horizon; our more conservative estimates are reflected in 'Views'.

Confidence matrix on analyst views and Efficient Frontier

Now let's add one more twist to the model: we want to specify how confident we are in our views on returns. The implementation of Idzorek's method in PyPortfolioOpt allows us to specify the uncertainty of each view as a percentage, where 1 means 100% confidence and 0 means no confidence at all. We select this feature with the input parameter omega="idzorek" and pass a list of confidences (each between 0 and 1) to the view_confidences parameter when initializing the BlackLittermanModel class. To learn more about Idzorek's method, please refer to the PyPortfolioOpt documentation.

Let’s code for the following:

  1. Create a list of confidence inputs for all 20 stocks.
  2. Initialize the BlackLittermanModel class with the covariance matrix, market prior, views dictionary, and confidence list, with omega set to "idzorek".
  3. Extract the returns and create a pandas dataframe with all the estimated returns so far ('Prior', 'Historical', 'Posterior', 'Views', 'Posterior_confidence') and plot them for a quick comparison.
  4. Extract the covariance matrix and save it as a csv file for future reference.
  5. Run the efficient frontier optimizer with the posterior returns and posterior covariance matrix from the BL model as inputs. I have constrained the maximum weight of any single stock to 10% and opted for minimum-volatility optimization.
# create confidences vector
confidences = list(view_confidence.Confidences)

print(confidences)

# use Idzorek's method and run the model
bl_confi = pyp.BlackLittermanModel(S, pi=market_prior,
                                   absolute_views=views_dict,
                                   omega="idzorek", view_confidences=confidences)

# Posterior estimate of returns
bl_return_confi = bl_confi.bl_returns()
bl_return_confi.name = 'Posterior_confidence'

returns_df = pd.DataFrame([market_prior, mu, bl_return, pd.Series(views_dict), bl_return_confi],
                          index=['Prior', 'Historical', 'Posterior', 'Views', 'Posterior_confidence']).T
print(returns_df)

# write it for future reference
returns_df.to_csv('data/returns.csv', header=True)
returns_df = pd.read_csv('data/returns.csv', index_col=[0])

returns_df.plot.bar(figsize=(14,6),
                    title='Returns Estimates - Prior, Historical, Posterior, Views, Posterior-confidence', grid=True);
plt.savefig('images/chart4', dpi=300)

# get the covariance matrix
S_bl_confi = bl_confi.bl_cov()
S_bl_confi.to_csv('data/S_bl_confi.csv')
S_bl_confi = pd.read_csv('data/S_bl_confi.csv', index_col=[0])
S_bl_confi

# Long-only minimum-volatility portfolio, with a weight cap of 10% and L2 regularisation
# (regularisation is added via add_objective; the EfficientFrontier constructor takes no gamma)
ef = pyp.EfficientFrontier(bl_return_confi, S_bl_confi, weight_bounds=(0, 0.1))
ef.add_objective(pyp.objective_functions.L2_reg, gamma=0.1)
weights = ef.min_volatility()
ef.portfolio_performance(verbose=True)
wt_min_vola = pd.DataFrame([weights], columns=weights.keys()).T * 100


# write it to csv for part 2
wt_min_vola.to_csv('data/wt_min_vola_wts.csv')
wt_min_vola = pd.read_csv('data/wt_min_vola_wts.csv', index_col=[0])


print('Weights in Percentage ********************')
print(wt_min_vola.round(4))

# plot the weights
wt_min_vola.plot.bar(figsize=(14,6),
                     title='Asset Allocation Based on BL with Confidence Matrix', grid=True, legend=False);
plt.ylabel('Percentage')
plt.savefig('images/chart5', dpi=300)
[Figure 4: Returns Estimates — Prior, Historical, Posterior, Views, Posterior-confidence]
[Figure 5: Asset Allocation Based on BL with Confidence Matrix]

Again, it is interesting to compare the various return estimates after adding the confidence list as an input. We can see that both 'Posterior' and 'Posterior_confidence' are lower than 'Views' because of the uncertainties we have accounted for. And, of course, 'Views' are lower than the historical returns, as per our own forecasts.

When we run the efficient frontier optimizer with the posterior returns and posterior covariance matrix from the BL model, the 10% weight cap, and the minimum-volatility objective, we get a well-diversified portfolio of the top 20 Indian companies by market cap. The expected annual return comes to around 8% with an annual volatility of 9.8%, and the Sharpe ratio is 0.61. Please refer to the Jupyter Notebook on GitHub for details.
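As a quick sanity check (a minimal sketch, assuming the objects from the code above are still in scope), you can reproduce these figures directly from the weights and the posterior estimates:

# sigma_p = sqrt(w' * Sigma * w), using the annualized posterior covariance
w = pd.Series(weights).reindex(S_bl_confi.index)
port_ret = w @ bl_return_confi.reindex(w.index)
port_vol = np.sqrt(w.values @ S_bl_confi.values @ w.values)
print(f'Expected annual return: {port_ret:.1%}, annual volatility: {port_vol:.1%}')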

Here, I would like to remind you that we opted for minimum volatility, so volatility will be a key parameter to compare against any benchmark we choose. In Part 2 of this article, we will take these weights and backtest the portfolio with regular rebalancing over more than 10 years of historical data.

Until then, enjoy investing, and do leave your comments on the article.

Thanks!

Please Note: This analysis is only for educational purposes and the author is not liable for any of your investment decisions.
