# How Some Academics Misguide Traders And Hedge Funds

Some academics misguide traders and hedge funds by focusing on combating data-mining bias when, in most cases, this is an exercise in futility: it increases Type II error (missed discoveries) beyond reasonable bounds and diverts attention from the real problem, which is that most strategies fail due to changes in market conditions.

In fact, the reality that some academics refuse to face (and it is not the only one) is that some data-mined and over-optimized strategies have worked for extended periods of time, delivering large alpha, and in the meantime have generated performance records that some funds are still using for marketing purposes. An example is the trend-following CTAs that used “pedestrian” data-mined strategies developed in the late 1980s to early 1990s, which delivered high absolute returns but stopped performing well when market conditions changed.

Below are the three conditions that changed after the 1990s:

- Returns became mean-reverting in most markets
- Certain strategies, such as trend-following, became too crowded
- Dumb money inflow to markets decreased, especially after 2008

Although data-mining bias is a reality and academics have raised awareness about its impact on strategy development, the persistent focus on correcting for its effects has shifted the attention of strategy developers away from the more important problem of changing market conditions. One could claim that correcting for data-mining bias is more popular among academics than the problem of changing market conditions because it offers a range of potential solutions, such as adjusting Sharpe ratios or t-statistics to compensate for multiple comparisons. However, there is a high price to be paid for such corrections: genuine strategies may be falsely rejected (Type II error) while at the same time some random strategies may pass all tests and then fail (Type I error).
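To illustrate the price of such corrections, below is a minimal standard-library Python sketch of how a Bonferroni adjustment for multiple comparisons raises the t-statistic hurdle. The numbers and the independence assumption behind Bonferroni are illustrative, not taken from the article:

```python
import math

def normal_sf(z):
    """One-tailed p-value (survival function) of a standard normal."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def required_t(alpha, n_trials):
    """Smallest t-statistic that remains significant after a Bonferroni
    correction for n_trials tests, found by bisection on the tail."""
    target = alpha / n_trials
    lo, hi = 0.0, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if normal_sf(mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# A single test at the 5% level needs t of about 1.64; after trying,
# say, 48 strategy variants, the hurdle rises to roughly 3.1, so many
# genuine but moderate edges are rejected (higher Type II error).
print(round(required_t(0.05, 1), 2))
print(round(required_t(0.05, 48), 2))
```

The point of the sketch is only that the hurdle grows with the number of trials; it says nothing about whether a surviving strategy will keep working when market conditions change.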

**An example**

Due to the high serial correlation of equity market returns in the past, one could simply implement the strategy of buying when price was above a fast moving average and selling when it fell below it. As it turns out, from 1960 up to and including 1999, a period of 40 years, this strategy worked well and delivered high alpha.

Let us suppose that a strategy developer at the end of 1970 decided to find the best moving average for trading the S&P 500 (not directly tradable at the time). The strategy works as follows:

*Buy next open if closing price > MA(n) — sell next open if closing price < MA(n)*

We vary n from 2 to 49 in increments of 1 and record the results. As it turns out, n=2 maximizes CAGR, as shown below:
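The scan over n can be sketched in Python as follows. This is a hypothetical reconstruction: it uses a synthetic, positively autocorrelated price series in place of the actual S&P 500 data, and close-to-close returns as a proxy for next-open fills, so its output will not match the figures in the article:

```python
import random

def ma(series, n, i):
    """Simple moving average of the last n closes ending at index i."""
    return sum(series[i - n + 1:i + 1]) / n

def cagr_ma_cross(closes, n, bars_per_year=252):
    """CAGR of the long/flat rule: hold the next bar when close > MA(n),
    stand aside when close < MA(n)."""
    equity = 1.0
    for i in range(n, len(closes) - 1):
        if closes[i] > ma(closes, n, i):
            equity *= closes[i + 1] / closes[i]
    years = (len(closes) - n) / bars_per_year
    return equity ** (1 / years) - 1

# Synthetic AR(1) daily returns with positive serial correlation
# (an assumption standing in for 1960-1970 S&P 500 price action).
random.seed(1)
closes, r = [100.0], 0.0
for _ in range(252 * 11):
    r = 0.3 * r + random.gauss(0.0002, 0.01)
    closes.append(closes[-1] * (1 + r))

# Scan n = 2..49 and report the CAGR-maximizing lookback.
best_n = max(range(2, 50), key=lambda n: cagr_ma_cross(closes, n))
print(best_n)
```

With positive serial correlation in returns, short lookbacks tend to dominate the scan, which is consistent with the small optimum reported in the text.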

CAGR for the strategy is 15.92% versus 3.99% for buy and hold, and that with only a trivial entry and exit rule. Therefore, we should expect this strategy, over-optimized on the 11-year period from 1960 to 1970, to fail or at least underperform in the next 29 years, from 1971 to 1999. But apparently it does not, as shown below.

Not only did the strategy not stop working for the next 29 years, which is enough to generate a stellar performance record for a fund and wealth for its management, but it outperformed buy and hold with a 16.32% CAGR versus 10.06%. The risk-adjusted outperformance of the strategy is even higher: MAR (CAGR/Max. DD) is 1.38 for the strategy versus only 0.21 for buy and hold. One could have leveraged the strategy 2x and retired after a few years.

What is more important here is that the n=2 optimum persists in the forward sample, i.e., the optimum for the 11 years from 1960 to 1970 remains the optimum for the next 29 years. This is a remarkable fact.

Now suppose that another developer sits at the end of 1999 in front of a computer and, after in-sample and out-of-sample tests, determines that the n=2 moving average cross is a good strategy to trade. The t-statistic for the out-of-sample period from 1971 to 1999 is estimated at about 8.7 (roughly equal to the annualized Sharpe ratio times the square root of the number of years in the out-of-sample). This is a high t-statistic and indicates that it is very unlikely that the simulated out-of-sample Sharpe comes from a random system, given the null hypothesis that the true Sharpe is 0. In fact, the one-tailed p-value for the out-of-sample t-statistic of 8.7 is close to 0. However, this is what happens after 1999.
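The back-of-the-envelope t-statistic can be reproduced with the rule of thumb quoted above. The annualized Sharpe of about 1.6 used below is inferred from the reported t-statistic of 8.7 over 29 years, not a figure from the article:

```python
import math

def t_stat_from_sharpe(annual_sharpe, years):
    """t-statistic against a true Sharpe of zero: annualized Sharpe
    times the square root of the number of years."""
    return annual_sharpe * math.sqrt(years)

def one_tailed_p(t):
    """One-tailed p-value of the t-statistic, normal approximation."""
    return 0.5 * math.erfc(t / math.sqrt(2.0))

# An annualized Sharpe near 1.6 over 29 out-of-sample years yields a
# t-statistic of about 8.7, with a one-tailed p-value near zero.
t = t_stat_from_sharpe(1.62, 29)
print(round(t, 1))
print(one_tailed_p(t))
```

The p-value is vanishingly small, which is exactly why a developer in 1999 would have accepted the strategy, and exactly why the subsequent failure is instructive.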

Strategy performance deteriorated quickly, with CAGR falling to -4.54% and maximum drawdown rising to -78%. Obviously, something that had worked for many years no longer worked after 1999. We claim the cause was a structural change from high positive serial correlation to negative serial correlation, as shown in the chart below, which combines the performance of this strategy for all periods.

The vertical red line shows the change in market conditions related to serial correlation. The 252-day, 1-lag serial correlation was positive and high from the 1960s through the late 1990s but then turned negative. As we have written before in this blog, we consider this a structural change in price action that rendered a good part of technical analysis ineffective, including chart patterns and indicators.
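The 252-day, 1-lag serial correlation measure described above can be sketched in standard-library Python as follows. The toy return series in the usage lines are illustrations of the two regimes, not market data:

```python
import math

def lag1_autocorr(returns):
    """Lag-1 Pearson autocorrelation of a return series."""
    x, y = returns[:-1], returns[1:]
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def rolling_autocorr(returns, window=252):
    """Rolling 252-day, 1-lag serial correlation over a return series."""
    return [lag1_autocorr(returns[i - window:i])
            for i in range(window, len(returns) + 1)]

# A perfectly alternating (mean-reverting) series has lag-1
# autocorrelation of -1; a persistent series is strongly positive.
print(round(lag1_autocorr([0.01, -0.01] * 100), 2))   # -1.0
print(lag1_autocorr([0.01] * 50 + [-0.01] * 50) > 0.9)  # True
```

Plotting `rolling_autocorr` over daily S&P 500 returns is the computation behind the regime-change chart described in the text.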

**Conclusions**

(1) Optimized strategies can continue to generate alpha for extended periods of time if they are properly aligned with price action dynamics. Therefore, trashing all optimized strategies may result in missing out on very profitable opportunities and may actually lead to the **dark path** of trying to find the Holy Grail instead of something that works.

(2) Corrections for data-mining bias **cannot prevent failure** caused by a change in market conditions, and they also limit sound possibilities. This is a naive approach to trading strategy development, unfortunately followed by some academics.

(3) Knowing when market conditions change and when strategies stop performing as expected and must be terminated is far more important than any claims about data-mining bias or optimization. However, these questions do not provide grounds for the rigorous analysis required in academic publications, and for this reason the focus is on the mathematical aspects of data-mining bias. But funds that get the wrong advice will see that reflected in their performance.

(4) Practitioners are ahead of academics in trading strategy development because of skin in the game. Although trading strategy analysis has recently provided a new source of publications, most are about “pedestrian” strategies and anomalies, and the bulk is about momentum. Anyone who discloses a real edge in a paper is either naive or irrational. There is little to be gained from reading academic publications on trading strategy analysis and development, as the focus of practitioners at this point is on low-capacity idiosyncratic alpha that is hard to replicate, i.e., practitioners are again ahead of academics by about 10 to 20 years.

This article was originally published in Price Action Lab Blog.

*None of the information contained in this article constitutes a recommendation that any particular security, portfolio of securities, transaction, or investment strategy is suitable for any specific person.* **Disclaimer.**

If you have any questions or comments, happy to connect on Twitter: @mikeharrisNY

**About the author:** Michael Harris is a trader and best-selling author. He is also the developer of the first commercial software for identifying parameter-less patterns in price action 17 years ago. In the last seven years he has worked on the development of DLPAL, a software program that can be used to identify short-term anomalies in market data for use with fixed and machine learning models. Click here for more.