Time Series Nested Cross-Validation
Courtney Cochrane

Very interesting post. Day Forward-Chaining cross-validation also goes under the name of walk-forward cross-validation.

Using future information in the training set is not necessarily a problem; it really depends on the extent to which the future data is correlated with the present data, which is very domain specific. There may be a typical correlation time. If you exclude from the training set any data that follows a test set within that correlation time, data leakage is unlikely. This procedure is known as embargoing.
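
To make the idea concrete, here is a rough sketch (function name and parameters are my own, assuming the correlation time is expressed as a number of samples): everything outside the test block is a training candidate, except the embargoed samples immediately after it.

```python
import numpy as np

def train_indices_with_embargo(n_samples, test_start, test_end, embargo):
    """Return training indices when [test_start, test_end) is the test block.

    Data before the test block is always usable; data after it is usable
    only once `embargo` samples have passed, so observations still
    correlated with the test block never leak into training.
    """
    before = np.arange(0, test_start)
    after = np.arange(min(test_end + embargo, n_samples), n_samples)
    return np.concatenate([before, after])

# Example: 100 samples, test block is samples 40..59, embargo of 5 samples
# -> training uses samples 0..39 and 65..99.
train_idx = train_indices_with_embargo(100, 40, 60, 5)
```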

Once you allow yourself to include future data (with a suitable embargo), there is a very powerful cross-validation technique called combinatorial cross-validation. You split your data set into k contiguous segments along the time axis, use p < k of them as the test set, and the rest as the training set. With p > 1 you easily get a lot of cross-validation folds (k choose p of them), providing a better idea of the variance of the model. A rough sketch follows below.
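
Something along these lines (the names and parameters here are mine, not from any particular library; this is just to illustrate the combinatorics plus the embargo, not a polished implementation):

```python
from itertools import combinations
import numpy as np

def combinatorial_splits(n_samples, k=6, p=2, embargo=0):
    """Yield (train_idx, test_idx) for every choice of p out of k segments."""
    bounds = np.linspace(0, n_samples, k + 1, dtype=int)
    segments = [np.arange(bounds[i], bounds[i + 1]) for i in range(k)]
    for test_segs in combinations(range(k), p):
        test_idx = np.concatenate([segments[i] for i in test_segs])
        train_mask = np.ones(n_samples, dtype=bool)
        for i in test_segs:
            start, end = bounds[i], bounds[i + 1]
            # Drop the test segment itself plus the embargoed samples
            # immediately following it from the training set.
            train_mask[start:min(end + embargo, n_samples)] = False
        yield np.where(train_mask)[0], test_idx

# With k = 6 and p = 2 this gives C(6, 2) = 15 folds.
```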

As some people in the comments were asking about implementations of walk-forward cross-validation, I wrote a Python package for walk-forward cross-validation and combinatorial cross-validation with embargo, with an interface close to scikit-learn. More information can be found in this Medium post.
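
(To be clear, the snippet below is not the actual interface of that package; it is only a sketch of what "close to scikit-learn" can mean in practice: a splitter exposing split() and get_n_splits() so it can be passed straight to cross_val_score.)

```python
import numpy as np

class WalkForwardCV:
    """Minimal walk-forward splitter with a scikit-learn-style interface."""

    def __init__(self, n_splits=5):
        self.n_splits = n_splits

    def get_n_splits(self, X=None, y=None, groups=None):
        return self.n_splits

    def split(self, X, y=None, groups=None):
        n = len(X)
        fold = n // (self.n_splits + 1)
        for i in range(1, self.n_splits + 1):
            # Expanding training window, followed by the next block as test.
            yield np.arange(0, i * fold), np.arange(i * fold, (i + 1) * fold)

# Usage with scikit-learn, assuming `model`, `X`, `y` are defined:
# from sklearn.model_selection import cross_val_score
# scores = cross_val_score(model, X, y, cv=WalkForwardCV(n_splits=5))
```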

But again, great post.