Over the last few months, I have been working on the Financial Forecasting Challenge organised by G-Research. It is a Kaggle-style competition; in fact, it was very similar to the one Two Sigma ran on Kaggle a year and a half ago.
I finished in 29th place on the private leaderboard, out of about 400 participants. I have also shared my code in a GitHub repository, including models, preprocessing scripts and some useful exploratory data analysis.
As in a typical machine learning problem, we were given train and test datasets with variables such as ‘Market’, ‘Day’ and ‘Stock’, 11 different features, and a ‘y’ column present only in the training dataset (our target). The goal was to predict this column for the test dataset.
The image shows example ‘y’ values for 100 random stocks (each curve corresponds to one stock), and we can see that the time period spanned 729 steps (possibly days).
In total there were 3,022 different stocks, divided into 4 markets. These markets differed considerably from one another.
The train and test datasets came from a split of the original dataset, so we needed to predict the gaps left in the training data, which I found really strange.
The metric used was the weighted mean squared error (wMSE), and the prize for the winner was $30,000.
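A minimal sketch of the metric, assuming per-row sample weights like those supplied with the competition data (the function name and signature here are my own, not the organisers’):

```python
import numpy as np

# Sketch of the competition metric, assuming per-row sample weights;
# the exact weighting scheme was defined by the organisers.
def weighted_mse(y_true, y_pred, weights):
    """Weighted mean squared error: each squared residual is scaled
    by its sample weight, then averaged over the total weight."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return np.sum(weights * (y_true - y_pred) ** 2) / np.sum(weights)
```

A perfect prediction scores 0, and a larger weight amplifies the penalty on the corresponding row.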
- Date features: including ‘year’, ‘month’, ‘week’, ‘weekday’ and ‘daymonth’. I assumed that every row corresponded to a calendar day, so it was easy to find the weekends and some bank holidays. Cross-checking this idea against a financial calendar, I guessed that the dataset matched some Asian markets and started on 2015–10–01.
- Black magic: I added rolling_mean, inverse_rolling_mean, diffs, cumsum and shift features for all columns, grouped by ‘Stock’, with different window lengths. Including them improved my results greatly.
- Technical analysis: I used some indicators from my own technical analysis library. I was mainly interested in volume, volatility and trend indicators, because we did not know the meaning of the different features. For this, I assumed that ‘x2’ was derived from volume, and ‘x4’ and ‘x3E’ from the close price.
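The date features can be sketched by mapping the integer ‘Day’ column to calendar dates, under my (unverified) guess that day 0 corresponds to 2015-10-01; the function itself is illustrative:

```python
import pandas as pd

# Hypothetical sketch: derive calendar features from the integer 'Day'
# column, assuming (as I guessed) that day 0 maps to 2015-10-01.
def add_date_features(df, start="2015-10-01"):
    dates = pd.to_datetime(start) + pd.to_timedelta(df["Day"], unit="D")
    df = df.copy()
    df["year"] = dates.dt.year
    df["month"] = dates.dt.month
    df["week"] = dates.dt.isocalendar().week.astype(int)
    df["weekday"] = dates.dt.weekday      # 5 and 6 mark the weekend
    df["daymonth"] = dates.dt.day
    return df
```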
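The per-stock “black magic” features can be sketched with pandas, assuming a long-format frame with ‘Stock’ and ‘Day’ columns; the window lengths and feature names below are illustrative, not the exact ones I used:

```python
import pandas as pd

# Illustrative sketch of per-stock lag/rolling features; the real
# feature set and windows differed, and column names are assumptions.
def add_stock_features(df, cols, windows=(5, 20)):
    df = df.sort_values(["Stock", "Day"]).copy()
    g = df.groupby("Stock")
    for c in cols:
        for w in windows:
            df[f"{c}_rmean_{w}"] = g[c].transform(
                lambda s: s.rolling(w, min_periods=1).mean())
        df[f"{c}_diff"] = g[c].diff()      # day-over-day change per stock
        df[f"{c}_cumsum"] = g[c].cumsum()  # running total per stock
        df[f"{c}_shift"] = g[c].shift(1)   # previous value per stock
    return df
```

Grouping by ‘Stock’ is the important part: it keeps every rolling or lagged value from leaking across different stocks.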
Basically, I worked on 3 ideas:
- I played with different input datasets: on the one hand, I split the dataset by the 4 markets; on the other hand, I used the full dataset.
- I used and tuned different boosting models such as XGBoost and LightGBM; incidentally, XGBoost always gave better results than LightGBM.
- I stacked the best models using different weights.
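The stacking step was essentially a weighted average of model predictions; a minimal sketch (the weights and prediction values are illustrative, and in practice I tuned the weights against validation scores):

```python
import numpy as np

# Minimal sketch of weighted model blending; weights are illustrative.
def blend(preds, weights):
    preds = np.asarray(preds, dtype=float)    # shape (n_models, n_samples)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()         # normalise to sum to 1
    return weights @ preds                    # weighted average per sample

xgb_pred = [0.10, 0.20]   # hypothetical XGBoost predictions
lgb_pred = [0.30, 0.00]   # hypothetical LightGBM predictions
blended = blend([xgb_pred, lgb_pred], weights=[3, 1])  # favour XGBoost
```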
Other ideas that I tried with bad results:
I also worked on some other ideas, but I did not include them in my final submissions.
- I attempted, unsuccessfully, to improve my results with Recurrent Neural Networks using the Keras library; specifically, I focused on LSTMs with different approaches.
- I tried to delete and/or clip outlier values, based on some observations and using statistical methods such as the standard deviation.
- I did feature engineering using decomposition methods such as PCA, SVD and t-SNE on different groups of features from the original dataset.
- Data normalization.
- I used some models implemented in the scikit-learn library, such as ExtraTreesRegressor, RandomForestRegressor and different linear regressions.
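For reference, the standard-deviation clipping mentioned above can be sketched as capping values beyond k standard deviations from the mean; the threshold k = 3 here is an illustrative default, not the value I actually used:

```python
import numpy as np

# Hypothetical sketch of outlier clipping: cap values beyond
# k standard deviations from the mean (k = 3 is illustrative).
def clip_outliers(x, k=3.0):
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std()
    return np.clip(x, mu - k * sigma, mu + k * sigma)
```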
Thank you so much to the competition organisers. I hope this challenge was the first of many more.
See you in future challenges.