Long-Short Equity Strategy using Ranking: Simple Trading Strategies Part 4

Auquan
Jan 11, 2018

In the last post, we covered the pairs trading strategy and demonstrated how to leverage data and mathematical analysis to create and automate a trading strategy.

A long-short equity strategy is a natural extension of pairs trading, applied to a basket of stocks.

Download Ipython Notebook here.

Underlying Principle

A long-short equity strategy is simultaneously long and short stocks in the market. Just as pairs trading identifies which stock in a pair is cheap and which is expensive, a long-short strategy ranks all the stocks in a basket to identify which are relatively cheap and which are relatively expensive. It then goes long (buys) the top n equities in the ranking and shorts (sells) the bottom n, for equal amounts of money (total value of long positions = total value of short positions).

Remember how we said that pairs trading is a market-neutral strategy? So is a long-short strategy: the equal-dollar long and short positions ensure that the strategy remains market neutral (immune to market movements). The strategy is also statistically robust: by ranking stocks and entering many positions, you are making many bets on your ranking model rather than a few risky ones. You are betting purely on the quality of your ranking scheme.
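To make the mechanics concrete, here is a minimal sketch of the position construction described above. The ticker names and factor scores are made up for illustration; only the logic matters: rank, go long the top n, short the bottom n, in equal dollar amounts.

import pandas as pd

# Hypothetical ranking scores: higher = expected to perform better
scores = pd.Series({'A': 0.9, 'B': 0.4, 'C': 0.1, 'D': -0.3, 'E': -0.8, 'F': -1.2})
capital = 100000   # total dollars to deploy
n = 2              # number of names to hold on each side

ranked = scores.sort_values(ascending=False)
longs = ranked.index[:n]     # highest-ranked names
shorts = ranked.index[-n:]   # lowest-ranked names

# Equal dollar amount per position: capital / (2 * n)
positions = pd.Series(0.0, index=scores.index)
positions[longs] = capital / (2 * n)
positions[shorts] = -capital / (2 * n)

print(positions)
print('Net exposure:', positions.sum())   # 0.0 -> dollar neutral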

What is a Ranking Scheme?

A ranking scheme is any model that assigns each stock a number based on how it is expected to perform, where higher is better. Examples could be value factors, technical indicators, pricing models, or a combination of all of the above. For example, you could use a momentum indicator to rank a basket of trend-following stocks: stocks with the highest momentum are expected to continue to do well and get the highest ranks; stocks with the lowest momentum are expected to perform the worst and get the lowest ranks.

The success of this strategy lies almost entirely in the ranking scheme used — the better your ranking scheme can separate high-performing stocks from low-performing stocks, the better the returns of a long-short equity strategy. It follows that developing a good ranking scheme is nontrivial.

What happens once you have a Ranking Scheme?

Once we have determined a ranking scheme, we would obviously like to be able to profit from it. We do this by investing an equal amount of money into buying stocks at the top of the ranking, and selling stocks at the bottom. This ensures that the strategy will make money proportionally to the quality of the ranking only, and will be market neutral.

Let’s say you are ranking m equities, have n dollars to invest, and want to hold a total of 2p positions (where m > 2p). If the stock at rank 1 is expected to perform the worst and the stock at rank m is expected to perform the best:

  • Take the stocks at positions 1, …, p in the ranking and sell n/2p dollars worth of each
  • Take the stocks at positions m−p+1, …, m in the ranking and buy n/2p dollars worth of each

Note: friction because of prices. Since stock prices will not always divide n/2p evenly, and stocks must be bought in integer amounts, there will be some imprecision; the algorithm should get as close to this number as it can. For a strategy running with n = 100,000 and p = 500, we see that

n/2p = 100,000/1,000 = 100

This causes problems for stocks priced above $100, since you can't buy a fractional share and even a single share exceeds the allocation. This is alleviated by trading fewer equities or increasing the capital.
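Here is a rough sketch of that rounding friction, using a few hypothetical prices (not from any real data): with $100 allocated per name, anything priced above $100 gets no shares at all, and cheaper names end up below their target allocation.

import numpy as np

n, p = 100000, 500
dollars_per_position = n / (2 * p)   # = 100 dollars per name

prices = np.array([35.0, 99.0, 142.0, 310.0])       # hypothetical stock prices
shares = np.floor(dollars_per_position / prices)     # whole shares only
actual_dollars = shares * prices

print(shares)          # [2. 1. 0. 0.] -- names above $100 get no allocation
print(actual_dollars)  # [70. 99.  0.  0.] -- deviation from the $100 target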

Let’s run through a hypothetical example

We generate random stock names and a random factor on which to rank them. Let’s also assume our future returns are actually dependent on these factor values.

import numpy as np
import statsmodels.api as sm
import scipy.stats as stats
import scipy
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
## PROBLEM SETUP ##
# Generate stocks and a random factor value for them
stock_names = ['stock ' + str(x) for x in range(10000)]
current_factor_values = np.random.normal(0, 1, 10000)
# Generate future returns that are dependent on our factor values
future_returns = current_factor_values + np.random.normal(0, 1, 10000)
# Put both the factor values and returns into one dataframe
data = pd.DataFrame(index = stock_names, columns=['Factor Value','Returns'])
data['Factor Value'] = current_factor_values
data['Returns'] = future_returns
# Take a look
data.head(10)

Now that we have factor values and returns, we can see what would happen if we ranked our equities based on factor values, and then entered the long and short positions.

# Rank stocks
ranked_data = data.sort_values('Factor Value')
# Compute the returns of each basket, using a basket size of 500 (10000/500 = 20 baskets)
number_of_baskets = int(10000/500)
basket_returns = np.zeros(number_of_baskets)
for i in range(number_of_baskets):
    start = i * 500
    end = i * 500 + 500
    basket_returns[i] = ranked_data[start:end]['Returns'].mean()
# Plot the returns of each basket
plt.figure(figsize=(15,7))
plt.bar(range(number_of_baskets), basket_returns)
plt.ylabel('Returns')
plt.xlabel('Basket')
plt.legend(['Returns of Each Basket'])
plt.show()

Our strategy is to sell the lowest-ranked basket (basket 1) and buy the highest-ranked basket (basket 20). The returns of this strategy are:

basket_returns[number_of_baskets-1] - basket_returns[0]

4.172

We’re basically putting our money on our ranking model being able to separate high-performing stocks from low-performing stocks.

For the rest of this post, we’ll talk about how to evaluate a ranking scheme. The nice thing about making money based on the spread of the ranking is that it is unaffected by what the market does.
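As a quick sanity check of that market-neutrality claim, we can add the same market-wide shock to every stock's return and verify that the top-minus-bottom spread does not change. This is an illustrative sketch using freshly simulated data like the example above.

import numpy as np
import pandas as pd

np.random.seed(0)
factor = np.random.normal(0, 1, 10000)
returns = factor + np.random.normal(0, 1, 10000)
df = pd.DataFrame({'Factor Value': factor, 'Returns': returns})

def top_minus_bottom_spread(frame, baskets=10):
    # Mean return of the top basket minus mean return of the bottom basket
    ranked = frame.sort_values('Factor Value')['Returns'].values
    size = len(ranked) // baskets
    return ranked[-size:].mean() - ranked[:size].mean()

market_shock = -0.05   # pretend the whole market falls 5%
print(top_minus_bottom_spread(df))
print(top_minus_bottom_spread(df.assign(Returns=df['Returns'] + market_shock)))
# Both numbers are identical: a common shift to all returns cancels out of the spread.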

Let’s consider a real world example.

We load data for 32 stocks from different sectors of the S&P 500 and try to rank them.

from backtester.dataSource.yahoo_data_source import YahooStockDataSource
from datetime import datetime
startDateStr = '2010/01/01'
endDateStr = '2017/12/31'
cachedFolderName = '/Users/chandinijain/Auquan/yahooData/'
dataSetId = 'testLongShortTrading'
instrumentIds = ['ABT','AKS','AMGN','AMD','AXP','BK','BSX',
                 'CMCSA','CVS','DIS','EA','EOG','GLW','HAL',
                 'HD','LOW','KO','LLY','MCD','MET','NEM',
                 'PEP','PG','M','SWN','T','TGT',
                 'TWX','TXN','USB','VZ','WFC']
ds = YahooStockDataSource(cachedFolderName=cachedFolderName,
                          dataSetId=dataSetId,
                          instrumentIds=instrumentIds,
                          startDateStr=startDateStr,
                          endDateStr=endDateStr,
                          event='history')
price = 'adjClose'

Let’s start by using one-month normalized momentum as the ranking indicator.

## Define normalized momentum
def momentum(dataDf, period):
    return dataDf.sub(dataDf.shift(period), fill_value=0) / dataDf.iloc[-1]
## Load relevant prices in a dataframe
data = ds.getBookDataByFeature()['Adj Close']
# Load momentum scores and returns into separate dataframes
index = data.index
mscores = pd.DataFrame(index=index, columns=data.columns)
mscores = momentum(data, 30)
returns = pd.DataFrame(index=index, columns=data.columns)
day = 30

Now we’re going to analyze how our universe of stocks behaves with respect to our chosen ranking factor.

Analyzing data

Stock behavior

We look at how our chosen basket of stocks behaves with respect to our ranking model. To do this, let’s calculate the one-week forward return for all stocks. Then we can look at the correlation of the 1-week forward return with the previous 30-day momentum for every stock. Stocks that exhibit positive correlation are trend-following, and stocks that exhibit negative correlation are mean-reverting.

# Calculate Forward returns
forward_return_day = 5
returns = data.shift(-forward_return_day)/data -1
returns.dropna(inplace = True)
# Calculate correlations between momentum and returns
correlations = pd.DataFrame(index=returns.columns, columns=['Scores', 'pvalues'])
mscores = mscores[mscores.index.isin(returns.index)]
# Spearman rank correlation between momentum scores and forward returns, per stock
for i in correlations.index:
    score, pvalue = stats.spearmanr(mscores[i], returns[i])
    correlations.loc[i, 'pvalues'] = pvalue
    correlations.loc[i, 'Scores'] = score
correlations.dropna(inplace=True)
correlations.sort_values('Scores', inplace=True)
l = correlations.index.size
plt.figure(figsize=(15,7))
plt.bar(range(1,1+l), correlations['Scores'])
plt.xlabel('Stocks')
plt.xlim((1, l+1))
plt.xticks(range(1,1+l), correlations.index)
plt.legend(['Correlation over All Data'])
plt.ylabel('Correlation between %s day Momentum Scores and %s-day forward returns by Stock'%(day, forward_return_day));
plt.show()

All our stocks are mean-reverting to some degree! (Obviously we chose the universe to be this way :) ) This tells us that if a stock ranks high on momentum score, we should expect it to perform poorly next week.

Correlation between Ranking due to Momentum Score and Returns

Next, we need to look at the correlation between our ranking score and the forward returns of our universe, i.e. how predictive of forward returns is our ranking factor? Does a high relative rank predict poor relative returns, or vice versa?

To do this, we calculate the daily cross-sectional correlation between 30-day momentum and 1-week forward returns across all stocks.

correl_scores = pd.DataFrame(index=returns.index.intersection(mscores.index), columns=['Scores', 'pvalues'])
# Daily cross-sectional rank correlation between scores and forward returns
for i in correl_scores.index:
    score, pvalue = stats.spearmanr(mscores.loc[i], returns.loc[i])
    correl_scores.loc[i, 'pvalues'] = pvalue
    correl_scores.loc[i, 'Scores'] = score
correl_scores.dropna(inplace=True)
l = correl_scores.index.size
plt.figure(figsize=(15,7))
plt.bar(range(1,1+l), correl_scores['Scores'])
plt.hlines(np.mean(correl_scores['Scores']), 1, l+1, colors='r', linestyles='dashed')
plt.xlabel('Day')
plt.xlim((1, l+1))
plt.legend(['Mean Correlation over All Data', 'Daily Rank Correlation'])
plt.ylabel('Rank correlation between %s day Momentum Scores and %s-day forward returns'%(day, forward_return_day));
plt.show()

The daily correlation is quite noisy, but very slightly negative (this is expected, since we said all the stocks are mean-reverting). Let’s also look at the monthly average of these daily correlations.

monthly_mean_correl = correl_scores['Scores'].astype(float).resample('M').mean()
plt.figure(figsize=(15,7))
plt.bar(range(1,len(monthly_mean_correl)+1), monthly_mean_correl)
plt.hlines(np.mean(monthly_mean_correl), 1,len(monthly_mean_correl)+1, colors='r', linestyles='dashed')
plt.xlabel('Month')
plt.xlim((1, len(monthly_mean_correl)+1))
plt.legend(['Mean Correlation over All Data', 'Monthly Rank Correlation'])
plt.ylabel('Rank correlation between %s day Momentum Scores and %s-day forward returns'%(day,forward_return_day));
plt.show()

We can see that the average correlation is again slightly negative, but it also varies a lot from month to month.

Average Basket Return

Now we compute the returns of baskets taken out of our ranking. If we rank all equities and then split them into n groups, what would the mean return of each group be?

The first step is to create a function that gives us the mean return of each basket for a given date and ranking factor.

def compute_basket_returns(factor, forward_returns, number_of_baskets, index):
    data = pd.concat([factor.loc[index], forward_returns.loc[index]], axis=1)
    # Rank the equities on the factor values
    data.columns = ['Factor Value', 'Forward Returns']
    data.sort_values('Factor Value', inplace=True)
    # How many equities per basket
    equities_per_basket = np.floor(len(data.index) / number_of_baskets)
    basket_returns = np.zeros(number_of_baskets)
    # Compute the returns of each basket
    for i in range(number_of_baskets):
        start = i * equities_per_basket
        if i == number_of_baskets - 1:
            # Handle having a few extra in the last basket when the number of equities doesn't divide evenly
            end = len(data.index) - 1
        else:
            end = i * equities_per_basket + equities_per_basket
        # Mean forward return of this basket
        basket_returns[i] = data.iloc[int(start):int(end)]['Forward Returns'].mean()

    return basket_returns

We calculate the average return of each basket when equities are ranked based on this score. This should give us a sense of the relationship over a long timeframe.

number_of_baskets = 8
mean_basket_returns = np.zeros(number_of_baskets)
resampled_scores = mscores.astype(float).resample('2D').last()
resampled_prices = data.astype(float).resample('2D').last()
resampled_scores.dropna(inplace=True)
resampled_prices.dropna(inplace=True)
forward_returns = resampled_prices.shift(-1)/resampled_prices -1
forward_returns.dropna(inplace = True)
common_index = forward_returns.index.intersection(resampled_scores.index)
for m in common_index:
    basket_returns = compute_basket_returns(resampled_scores, forward_returns, number_of_baskets, m)
    mean_basket_returns += basket_returns
# Average over the number of dates we summed over
mean_basket_returns /= common_index.size
print(mean_basket_returns)
# Plot the returns of each basket
plt.figure(figsize=(15,7))
plt.bar(range(number_of_baskets), mean_basket_returns)
plt.ylabel('Returns')
plt.xlabel('Basket')
plt.legend(['Returns of Each Basket'])
plt.show()

It seems we are able to separate high performers from low performers, but only with very limited success.

Spread Consistency

Of course, that’s just the average relationship. To get a sense of how consistent this is, and whether or not we would want to trade on it, we should look at it over time. Here we’ll look at the monthly spreads for the first two years. We can see a lot of variation, and further analysis should be done to determine whether this momentum score is tradeable.

total_months = mscores.resample('M').last().index
months_to_plot = 24
monthly_index = total_months[:months_to_plot+1]
mean_basket_returns = np.zeros(number_of_baskets)
strategy_returns = pd.Series(index = monthly_index)
f, axarr = plt.subplots(1+int(monthly_index.size/6), 6,figsize=(18, 15))
for month in range(1, monthly_index.size):
    temp_returns = forward_returns.loc[monthly_index[month-1]:monthly_index[month]]
    temp_scores = resampled_scores.loc[monthly_index[month-1]:monthly_index[month]]
    common_dates = temp_returns.index.intersection(temp_scores.index)
    mean_basket_returns = np.zeros(number_of_baskets)   # reset the accumulator for each month
    for m in common_dates:
        basket_returns = compute_basket_returns(temp_scores, temp_returns, number_of_baskets, m)
        mean_basket_returns += basket_returns
    mean_basket_returns /= common_dates.size

    strategy_returns[monthly_index[month-1]] = mean_basket_returns[number_of_baskets-1] - mean_basket_returns[0]

    r = int(np.floor((month-1) / 6))
    c = (month-1) % 6
    axarr[r, c].bar(range(number_of_baskets), mean_basket_returns)
    axarr[r, c].xaxis.set_visible(False)
    axarr[r, c].set_title('Month ' + str(month))
plt.show()
plt.figure(figsize=(15,7))
plt.plot(strategy_returns)
plt.ylabel('Returns')
plt.xlabel('Month')
plt.plot(strategy_returns.cumsum())
plt.legend(['Monthly Strategy Returns', 'Cumulative Strategy Returns'])
plt.show()

Finally, let’s look at the returns if we had bought the last basket and sold the first basket every month (assuming equal capital allocation to each security).

total_return = strategy_returns.sum()
ann_return = 100*((1 + total_return)**(12.0 /float(strategy_returns.index.size))-1)
print('Annual Returns: %.2f%%'%ann_return)

Annual Returns: 5.03%

We see that we have a fairly weak ranking scheme that only mildly separates high-performing stocks from low-performing stocks. Moreover, this ranking scheme is not consistent and varies a lot from month to month.

Finding the correct ranking scheme

To execute a long-short equity strategy, you effectively only have to determine the ranking scheme. Everything after that is mechanical. Once you have one long-short equity strategy, you can swap in different ranking schemes and leave everything else in place. It’s a very convenient way to quickly iterate over ideas without having to worry about tweaking code every time.

The ranking scheme can come from pretty much any model. It doesn’t have to be a value-based factor model; it could be a machine learning technique that predicts returns one month ahead and ranks stocks based on that.
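As an illustration of how pluggable this is, here is a minimal sketch (the function names are hypothetical, not part of the backtester used above) where the ranking machinery takes any scoring function and only the factor changes:

def momentum_factor(prices, period=30):
    # Simple price momentum over `period` days (prices: dates x stocks DataFrame)
    return prices / prices.shift(period) - 1

def low_volatility_factor(prices, period=30):
    # Prefer stocks with low recent volatility
    return -prices.pct_change().rolling(period).std()

def rank_universe(prices, factor_fn):
    # Score every stock with factor_fn, then rank cross-sectionally per date
    scores = factor_fn(prices)
    return scores.rank(axis=1)   # 1 = lowest score, N = highest

# Swapping schemes is a one-line change:
# ranks = rank_universe(price_df, momentum_factor)
# ranks = rank_universe(price_df, low_volatility_factor)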

Choice and Evaluation of a Ranking Scheme

The ranking scheme is where a long-short equity strategy gets its edge, and is the most crucial component. Choosing a good ranking scheme is the entire trick, and there is no easy answer.

A good starting point is to pick existing known techniques, and see if you can modify them slightly to get increased returns. We’ll discuss a few starting points here:

  • Clone and Tweak: Choose a technique that is commonly discussed and see if you can modify it slightly to regain an edge. Often, factors that are public have no signal left, as they have been completely arbitraged out of the market. However, sometimes they point you in the right direction of where to go.
  • Pricing Models: Any model that predicts future returns can be a factor. The future return predicted is now that factor, and can be used to rank your universe. You can take any complicated pricing model and transform it into a ranking.
  • Price Based Factors (Technical Indicators): Price-based factors, like the one we discussed today, take information about the historical price of each equity and use it to generate the factor value. Examples could be moving average measures, momentum ribbons, or volatility measures.
  • Reversion vs. Momentum: It’s important to note that some factors bet that prices, once moving in a direction, will continue to do so. Some factors bet the opposite. Both are valid models on different time horizons and assets, and it’s important to investigate whether the underlying behavior is momentum or reversion based.
  • Fundamental Factors (Value Based): These use combinations of fundamental values like the P/E ratio, dividend yield, etc. Fundamental values contain information that is tied to real-world facts about a company, so in many ways they can be more robust than prices. A rough sketch of combining factor types into a composite rank follows this list.
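As promised above, here is a rough sketch of combining a price-based factor with a fundamental one into a composite rank. The inputs (a momentum score DataFrame and a P/E DataFrame, both dates x stocks) are hypothetical; the point is the z-score-and-average pattern, not the specific factors.

def zscore(df):
    # Standardize each factor cross-sectionally (per date) so factors are comparable
    return df.sub(df.mean(axis=1), axis=0).div(df.std(axis=1), axis=0)

def composite_rank(momentum_score, pe_ratio):
    # Higher momentum and lower P/E both push the composite score up here
    combined = zscore(momentum_score) + zscore(-pe_ratio)
    return combined.rank(axis=1)   # 1 = worst composite score, N = best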

Ultimately, developing predictive factors is an arms race in which you are trying to stay one step ahead. Factors get arbitraged out of markets and have a lifespan, so it’s important that you are constantly doing work to determine how much decay your factors are experiencing, and what new factors might be used to take their place.

Additional Considerations

  • Rebalancing Frequency

Every ranking system will be predictive of returns over a slightly different timeframe. A price-based mean reversion factor may be predictive over a few days, while a value-based factor model may be predictive over many months. It is important to determine the timeframe over which your model should be predictive, and to statistically verify that before executing your strategy. You don’t want to overfit by trying to optimize the rebalancing frequency — you will inevitably find one that is randomly better than others, but not necessarily because of anything in your model. Once you have determined the timeframe over which your ranking scheme is predictive, try to rebalance at about that frequency so you’re taking full advantage of your model.
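One rough way to do that check, continuing with the momentum scores (mscores) and prices (data) built earlier: compute the mean daily rank correlation between the score and forward returns for several horizons, and see where it is strongest. This is only a sketch of the idea, not a rigorous statistical test.

import numpy as np
import scipy.stats as stats

for horizon in [5, 10, 21, 63]:   # roughly 1 week, 2 weeks, 1 month, 3 months
    fwd = data.shift(-horizon) / data - 1
    common = mscores.index.intersection(fwd.dropna().index)
    # Daily cross-sectional rank correlation between scores and forward returns
    daily_ic = [stats.spearmanr(mscores.loc[d], fwd.loc[d])[0] for d in common]
    print(horizon, np.nanmean(daily_ic))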

  • Capital Capacity and Transaction Costs

Every strategy has a minimum and maximum amount of capital it can trade before it stops being profitable. The minimum threshold is usually set by transaction costs.

Trading many equities results in high transaction costs. Say you want to purchase 1000 equities; you will incur a few thousand dollars in costs per rebalance. Your capital base must be high enough that transaction costs are a small percentage of the returns generated by your strategy. For example, if your capital is $100,000 and your strategy makes 1% per month ($1,000), then all of these returns will be eaten up by transaction costs. You would need to be running the strategy on millions of dollars for it to be profitable over 1000 equities.

The minimum capacity is therefore quite high, and depends largely on the number of equities traded. However, the maximum capacity is also incredibly high, with long-short equity strategies capable of trading hundreds of millions of dollars without losing their edge. This is because the strategy rebalances relatively infrequently, and the total dollar volume is divided by the number of equities traded. Therefore the dollar volume per equity is quite low and you don’t have to worry about moving the market with your trades. Let’s say you’re trading 1000 equities with $100,000,000. If you rebalance your entire portfolio every month, you are only trading $100,000 of volume per month for each equity, which isn’t enough to be a significant market share for most securities.
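As a back-of-the-envelope sketch of that arithmetic (all numbers, including the flat per-order commission, are purely illustrative):

capital = 100000000          # $100M deployed
num_equities = 1000
rebalances_per_year = 12
cost_per_order = 5.0         # hypothetical flat commission per order

dollar_volume_per_equity = capital / num_equities     # $100,000 traded per name per rebalance
orders_per_year = num_equities * rebalances_per_year
annual_costs = orders_per_year * cost_per_order
print(dollar_volume_per_equity, annual_costs, annual_costs / capital)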
