Grid Search vs Random Search

Deepak Senapati
3 min read · Aug 29, 2018

In this article, we will focus on two methods for hyperparameter tuning, Grid Search and Random Search, and compare them. First, let's find out: what are hyperparameters?

Hyperparameters are model-specific properties that are fixed before the model is trained or tested on the data. For example, in a random forest the hyperparameters include the number of decision trees in the forest; for a neural network, they include the learning rate, the number of hidden layers, the number of units in each layer, and several others.
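As a minimal sketch with scikit-learn (the library choice and the specific values here are illustrative, not recommendations), hyperparameters are fixed at construction time, before training begins:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Hyperparameters are fixed at construction time, before fit() sees any data.
# The values (100 trees, depth 8) are illustrative choices.
model = RandomForestClassifier(n_estimators=100, max_depth=8)

X, y = make_classification(n_samples=200, random_state=0)  # toy data
model.fit(X, y)  # ordinary parameters (split thresholds, etc.) are learned here
```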

Hyperparameter tuning is simply searching for the right set of hyperparameters to achieve high precision and accuracy. Optimising hyperparameters is one of the trickiest parts of building a machine learning model. The primary aim of tuning is to find the sweet spot in the hyperparameter space where the model performs best.

There are several hyperparameter tuning techniques, but in this article we shall focus on two of the most widely used:

  • Grid Search
  • Random Search

Grid Search

In grid search, we try every combination of a preset list of values for each hyperparameter and evaluate the model on each combination. The values are laid out like a grid, in the form of a matrix; each combination of parameters is trained and its accuracy noted. Once all combinations are evaluated, the set of parameters that gives the best accuracy is taken as the winner.

Visual Representation of grid search
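As a minimal sketch with scikit-learn's GridSearchCV (the toy dataset and the candidate values are illustrative assumptions, not from the article), a grid search over two random forest hyperparameters looks like this:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=500, random_state=0)  # toy data

# Preset lists of values; every combination (3 x 3 = 9 here) is evaluated.
param_grid = {
    "n_estimators": [50, 100, 200],
    "max_depth": [4, 8, None],
}

search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```

Note that with 5-fold cross-validation, those 9 combinations already cost 45 model fits.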

The major drawback of grid search is the curse of dimensionality: the number of evaluations grows exponentially with the number of hyperparameters, since each additional parameter multiplies the number of combinations to try. With as few as four parameters, this strategy can already become impractical, as the quick count below shows.
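A quick back-of-the-envelope count makes this concrete (the parameter names are placeholders):

```python
from itertools import product

# Four hyperparameters, five candidate values each: 5**4 = 625 combinations,
# and each combination costs a full model fit (times the number of CV folds).
grid = {f"param_{i}": range(5) for i in range(4)}
print(len(list(product(*grid.values()))))  # 625
```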

Random Search

Random search is a technique where random combinations of the hyperparameters are tried to find the best model. Instead of enumerating a grid, we specify a range or distribution for each hyperparameter, and the model is evaluated at some fixed number of random configurations sampled from that space.

Visual Representation of Random search
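Here is a minimal sketch with scikit-learn's RandomizedSearchCV (again, the toy data and the distributions are illustrative assumptions):

```python
from scipy.stats import randint, uniform
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=500, random_state=0)  # toy data

# Distributions (or lists) to sample from, rather than a fixed grid.
param_distributions = {
    "n_estimators": randint(50, 300),       # integers in [50, 300)
    "max_features": uniform(0.1, 0.9),      # floats in [0.1, 1.0]
}

# n_iter caps the budget: only 20 random configurations are evaluated,
# no matter how large the search space is.
search = RandomizedSearchCV(RandomForestClassifier(random_state=0),
                            param_distributions, n_iter=20,
                            cv=5, random_state=0)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```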

The chances of finding good parameter values are often higher with random search because the sampled configurations are not locked to a fixed grid: each trial probes a fresh value along every dimension, so the important hyperparameters are explored at many more distinct values. Random search also works well when the number of dimensions is small, since a good configuration can often be found in relatively few iterations, and the budget is fixed in advance rather than growing with the grid. In the paper Random Search for Hyper-Parameter Optimization, Bergstra and Bengio show empirically and theoretically that random search is more efficient for hyperparameter optimisation than grid search.

There are many theoretical and practical considerations when evaluating optimisation strategies. The best strategy for your problem is the one that finds the best value fastest, with the fewest function evaluations, and this can vary from problem to problem. While less common in machine learning practice than grid search, random search has been shown to find equal or better values than grid search within fewer function evaluations for certain types of problems.
