What is the difference between hyperparameters and parameters?
A learning algorithm estimates model parameters from the given dataset and continually updates these values during training. Once learning is complete, the parameters become part of the model.
Hyperparameters, on the other hand, are specific to the algorithm itself and cannot be calculated or estimated from the data. They influence the learning process and are always configured before training starts.
Hyperparameters are used to calculate the model parameters, and different hyperparameter values produce different model parameter values for the same dataset.
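A minimal sketch can make the distinction concrete. In this assumed example (not from the article), a line y = 2x + 1 is fitted by gradient descent: the learning rate and number of epochs are hyperparameters chosen before training, while the slope and intercept are model parameters estimated from the data.

```python
def fit_line(xs, ys, learning_rate=0.01, epochs=1000):
    """Estimate slope and intercept by gradient descent on mean squared error."""
    slope, intercept = 0.0, 0.0  # model parameters: learned from the data
    n = len(xs)
    for _ in range(epochs):      # epochs: hyperparameter, fixed before training
        grad_slope = sum(2 * (slope * x + intercept - y) * x
                         for x, y in zip(xs, ys)) / n
        grad_intercept = sum(2 * (slope * x + intercept - y)
                             for x, y in zip(xs, ys)) / n
        # learning_rate: hyperparameter controlling the update step size
        slope -= learning_rate * grad_slope
        intercept -= learning_rate * grad_intercept
    return slope, intercept

xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]  # generated by y = 2x + 1
slope, intercept = fit_line(xs, ys, learning_rate=0.05, epochs=2000)
print(round(slope, 2), round(intercept, 2))  # converges near slope 2, intercept 1
```

Rerunning with a different learning rate or epoch count would yield different (possibly worse) slope and intercept estimates for the same data, which is exactly why hyperparameter choice matters.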
Hyperparameter tuning methods
We can find optimal hyperparameter values using manual or automated methods. In manual hyperparameter tuning, we start with the default or recommended values and then search through a range of values by trial and error. You can already tell that this is tedious and time-consuming, especially when there are many hyperparameters.
In the automated approach, we use an algorithm to search for the optimal hyperparameters. Here are some of the automated methods:
Random Search - This method tries a random combination of hyperparameter values in each iteration and records the model's performance. After a fixed number of iterations, it returns the combination that produced the best result.
Grid Search - This method creates a grid of possible hyperparameter values and fits the model for every combination in the grid. It records the performance of each combination and then picks the one that produced the best result.
Bayesian optimization - This method treats the search for the best hyperparameters as an optimization problem. When iterating through combinations, it takes the previous evaluation results into account when choosing the next hyperparameter combination to try.
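The two simpler methods above can be sketched in plain Python. The toy `score` function and the hyperparameter names here are assumptions for illustration; in practice the score would come from training and validating a real model, and Bayesian optimization would typically use a dedicated library with a surrogate model rather than a few lines of code.

```python
import itertools
import random

def score(learning_rate, max_depth):
    """Stand-in for training a model and measuring validation accuracy.
    This toy function peaks at learning_rate=0.1, max_depth=5."""
    return 1.0 - abs(learning_rate - 0.1) - 0.01 * abs(max_depth - 5)

learning_rates = [0.01, 0.05, 0.1, 0.5]
max_depths = [3, 5, 7, 9]

# Grid search: evaluate every combination in the grid.
grid_best = max(itertools.product(learning_rates, max_depths),
                key=lambda combo: score(*combo))

# Random search: evaluate only a fixed budget of random combinations.
random.seed(0)
candidates = [(random.choice(learning_rates), random.choice(max_depths))
              for _ in range(8)]
random_best = max(candidates, key=lambda combo: score(*combo))

print("grid search best:", grid_best)      # exhaustive, so finds (0.1, 5)
print("random search best:", random_best)  # depends on the sampled candidates
```

Grid search is exhaustive but its cost grows multiplicatively with each added hyperparameter, while random search caps the number of model fits at a budget you choose, which is why it often scales better.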
Use the R furrr package to parallelize hyperparameter tuning