Simon
Aug 31, 2018 · 1 min read

In the article, you write that the models trained during cross-validation should be averaged. How is this possible? Depending on the model, that isn't possible at all, right?
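For context, here is a minimal sketch of the only reading I can make sense of: averaging each fold model's *predictions* rather than its weights (scikit-learn, with made-up data just for illustration):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold

# Hypothetical data, only to make the sketch runnable
X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test = X[:800], X[800:]
y_train, y_test = y[:800], y[800:]

fold_probas = []
for train_idx, _ in KFold(n_splits=5, shuffle=True, random_state=0).split(X_train):
    model = RandomForestClassifier(random_state=0).fit(X_train[train_idx], y_train[train_idx])
    # Keep each fold model's predicted probabilities on the test set
    fold_probas.append(model.predict_proba(X_test)[:, 1])

# "Averaging the models" here means averaging their predictions, not their parameters
avg_proba = np.mean(fold_probas, axis=0)
```

Is that what you mean, or do you really mean averaging the model parameters themselves?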

Is it good practice to do cross-validation by training on the 'train' set and evaluating each fold on the 'validation' set, and then, once the best model parameters are found, train a new model with those parameters, this time using both 'train' and 'val' as the training set and 'test' as the test set? Something like the sketch below.
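A rough sketch of the workflow I have in mind (scikit-learn, with hypothetical splits and a hypothetical hyper-parameter grid):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical splits matching the names in my question: train / val / test
X, y = make_classification(n_samples=1500, random_state=0)
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

# Pick hyper-parameters by fitting on 'train' and scoring on 'val'
best_C, best_score = None, -np.inf
for C in [0.01, 0.1, 1.0, 10.0]:
    score = LogisticRegression(C=C, max_iter=1000).fit(X_train, y_train).score(X_val, y_val)
    if score > best_score:
        best_C, best_score = C, score

# Refit on train + val with the chosen parameters, then report once on 'test'
final_model = LogisticRegression(C=best_C, max_iter=1000).fit(
    np.vstack([X_train, X_val]), np.concatenate([y_train, y_val]))
test_score = final_model.score(X_test, y_test)
```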
