Sampling the parameter space
Sat Apr 09 2022 14:25:20 GMT+0000 (Coordinated Universal Time)
Saved by @wessim
Random sampling supports discrete and continuous hyperparameters, as well as early termination of low-performance runs. Some users run an initial search with random sampling and then refine the search space to improve results.

Grid sampling supports discrete hyperparameters and early termination of low-performance runs. It performs a simple exhaustive search over all possible values, so use it only if you have the budget to search the entire space. Grid sampling can only be used with choice hyperparameters.

Bayesian sampling is based on the Bayesian optimization algorithm. It picks each new sample based on how previous samples performed, so that new samples improve the primary metric. Bayesian sampling is recommended if you have enough budget to explore the hyperparameter space; for best results, set the maximum number of runs to at least 20 times the number of hyperparameters being tuned. The number of concurrent runs also affects the effectiveness of the tuning process: fewer concurrent runs can lead to better sampling convergence, because the lower degree of parallelism increases the number of runs that benefit from previously completed runs. Bayesian sampling only supports choice, uniform, and quniform distributions over the search space.
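To make the contrast concrete, here is a minimal plain-Python sketch (deliberately not the Azure ML SDK; the search space, value ranges, and run budget are made up for illustration) of how grid sampling exhaustively enumerates every combination of choice values, while random sampling draws a fixed budget of configurations and can also cover continuous ranges:

```python
import itertools
import random

# Hypothetical search space: grid sampling requires discrete (choice) values.
choice_space = {
    "batch_size": [16, 32, 64],
    "optimizer": ["sgd", "adam"],
}

# Grid sampling: enumerate every combination of choice values.
grid_samples = [
    dict(zip(choice_space, values))
    for values in itertools.product(*choice_space.values())
]
print(len(grid_samples))  # 3 batch sizes x 2 optimizers = 6 combinations

# Random sampling: draw a fixed budget of configurations; continuous
# hyperparameters (here, a hypothetical learning_rate) are drawn uniformly.
rng = random.Random(0)  # seeded for repeatability
random_samples = [
    {
        "batch_size": rng.choice(choice_space["batch_size"]),
        "optimizer": rng.choice(choice_space["optimizer"]),
        "learning_rate": rng.uniform(1e-4, 1e-1),  # continuous range
    }
    for _ in range(10)  # budget of 10 runs
]
```

Note how the grid's cost is the product of the sizes of all choice lists, which is why an exhaustive search needs an explicit budget, whereas the random sampler's cost is whatever run count you choose.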
https://docs.microsoft.com/en-us/azure/machine-learning/how-to-tune-hyperparameters