# Hyperparameter Tuning: Grid Search and Random Search

*Hyperparameters* are model-specific properties that are ‘fixed’ before the model is trained or tested on the data. For example, in the case of a random forest, the hyperparameters include the number of decision trees in the forest; for a neural network, they include the learning rate, the number of hidden layers, the number of units in each layer, and several other parameters.

**Hyperparameter tuning** is the process of searching for the set of hyperparameters that achieves high precision and accuracy. *Optimising hyperparameters* is one of the trickiest parts of building a machine learning model. The primary aim of hyperparameter tuning is to find the *sweet spot* in the model’s hyperparameter space so that better performance is obtained.

There are several hyperparameter tuning techniques; here we shall look at two of the most widely used:

- **Grid search**
- **Random search**

**Grid search:**

Grid search is the simplest algorithm for hyperparameter tuning. We simply build a model for every combination of the candidate hyperparameter values and evaluate each one; the model that gives the highest accuracy is considered the best. The pattern followed here resembles a grid, where all the candidate values are laid out in the form of a matrix.
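As an illustration, here is a minimal sketch of grid search using scikit-learn's `GridSearchCV` on a random forest; the dataset and the candidate values in `param_grid` are chosen for the example, not taken from the text:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

# Candidate values for each hyperparameter; grid search will
# train and evaluate a model for every combination (3 x 3 = 9 here).
param_grid = {
    "n_estimators": [50, 100, 200],
    "max_depth": [None, 3, 5],
}

search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid,
    cv=3,  # 3-fold cross-validation for each combination
)
search.fit(X, y)

print(search.best_params_)  # combination with the highest mean CV accuracy
print(search.best_score_)
```

Every combination is evaluated with cross-validation, so the cost is (number of combinations) × (number of folds) model fits.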

One of the major drawbacks of grid search is the curse of dimensionality: the number of evaluations required grows exponentially with each additional hyperparameter. For example, with four hyperparameters and five candidate values each, the grid already contains 5⁴ = 625 combinations, each of which must be trained and evaluated. With as few as four or five hyperparameters, the strategy can become impractical.
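The exponential blow-up can be sketched in a couple of lines; the choice of five candidate values per hyperparameter is just an assumption for the illustration:

```python
# Each new hyperparameter multiplies the grid size by the number of
# candidate values, so the cost grows exponentially.
values_per_param = 5
for n_params in range(1, 7):
    print(f"{n_params} hyperparameters -> {values_per_param ** n_params} evaluations")
```

At six hyperparameters the grid already requires 15,625 model fits per cross-validation fold.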

**Random search:**

Random search is a technique where random combinations of hyperparameter values are tried in order to find the best-performing model. It is similar to grid search, yet it has been shown to often yield better results for the same computational budget.
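A minimal sketch of random search using scikit-learn's `RandomizedSearchCV`; the sampling distributions and the budget of ten iterations are assumptions for the example:

```python
from scipy.stats import randint
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_iris(return_X_y=True)

# Instead of a fixed grid, each hyperparameter gets a distribution
# from which candidate values are sampled at random.
param_distributions = {
    "n_estimators": randint(50, 300),
    "max_depth": randint(2, 10),
}

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions,
    n_iter=10,       # fixed budget: only 10 random combinations are tried
    cv=3,
    random_state=0,  # makes the random sampling reproducible
)
search.fit(X, y)

print(search.best_params_)
print(search.best_score_)
```

Unlike grid search, the cost here is controlled directly by `n_iter` rather than by the size of the hyperparameter space, which is why random search scales better as dimensions are added.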

The drawback of random search is its high variance: the selection of values is completely random, and since no intelligence is used to guide the sampling, luck plays its part in whether good combinations are found.

On the other hand, the chances of finding near-optimal values can be higher with random search, because its random sampling may land on good values that a fixed grid would skip over. Random search works best for lower-dimensional problems, since fewer iterations are needed to cover the space, making it a strong choice when the number of hyperparameters is small.

# Conclusion

There are other optimisation techniques that might yield better results than these two, depending on the model and the data. When it comes to science there is no luck, but when it comes to randomness we hope to find the best sample in the least possible time.