Thanks! There are many ways to reduce the dimensionality of your data, ranging from removing highly correlated features (e.g. see https://chrisalbon.com/machine_learning/feature_selection/drop_highly_correlated_features/) and using regularisation in the model (for example, Lasso regression, as discussed in the article) to feature-space transformations such as PCA (https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html) or random projections (https://scikit-learn.org/stable/modules/random_projection.html).
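To make the first two options concrete, here is a minimal sketch of both the correlation-based filter and PCA on synthetic data (the 0.95 correlation threshold, the feature names, and the number of PCA components are illustrative choices, not fixed rules):

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

# Synthetic data: 100 samples, 5 features, with f4 nearly collinear with f0
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(100, 5)),
                 columns=["f0", "f1", "f2", "f3", "f4"])
X["f4"] = X["f0"] * 0.98 + rng.normal(scale=0.05, size=100)

# Drop one feature from every pair with |correlation| above a threshold.
# Looking only at the upper triangle avoids flagging both members of a pair.
corr = X.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
to_drop = [col for col in upper.columns if (upper[col] > 0.95).any()]
X_reduced = X.drop(columns=to_drop)  # f4 is removed

# Alternatively, project everything onto a smaller orthogonal basis with PCA
pca = PCA(n_components=3)
X_pca = pca.fit_transform(X)
print(X_reduced.shape, X_pca.shape)
```

Note the difference: the filter keeps a subset of the original, interpretable columns, while PCA produces new features that are linear combinations of all inputs.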
It’s also possible to use model-based feature selection, where you drop the variables that contribute little to the model’s predictions (https://scikit-learn.org/stable/modules/feature_selection.html).
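As a sketch of that approach, scikit-learn's SelectFromModel can wrap a Lasso estimator and keep only the features whose learned coefficients are non-negligible (the dataset here is synthetic, and alpha and the threshold are assumed values you would tune for your own data):

```python
from sklearn.datasets import make_regression
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso

# Synthetic regression problem where only 5 of 20 features are informative
X, y = make_regression(n_samples=200, n_features=20, n_informative=5,
                       noise=0.1, random_state=0)

# Lasso shrinks the coefficients of uninformative features towards zero;
# SelectFromModel then keeps only features above the coefficient threshold
selector = SelectFromModel(Lasso(alpha=1.0), threshold=1e-5)
X_selected = selector.fit_transform(X, y)
print(X_selected.shape)  # fewer than the original 20 columns remain
```

The same pattern works with tree-based estimators (using feature importances instead of coefficients) if your forecasting model is non-linear.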
It’s hard to tell beforehand which will work best, so I suggest trying several approaches and keeping the one that gives you the best forecast quality. Hope that helps!