Sample Average Approximation for Black-Box Variational Inference
#variational #optimization #hyperparameter
Machine learning with #XGBoost or #LightGBM gets better with #hyperparameter optimisation, and a tool like #Optuna is there to help. You can also integrate the results with #KNIME - a walkthrough with some code (https://medium.com/p/dcf0efdc8ddf)
#xgboost #lightgbm #hyperparameter #optuna #KNIME
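Not from the linked walkthrough, but a minimal sketch of the usual Optuna + XGBoost loop it describes; the breast-cancer dataset, cross-validation setup, and search-space ranges below are placeholders, not the article's actual configuration (assumes recent optuna, xgboost, and scikit-learn versions):

```python
import optuna
import xgboost as xgb
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score

# Placeholder dataset standing in for whatever data the walkthrough uses.
X, y = load_breast_cancer(return_X_y=True)

def objective(trial):
    # Illustrative search space; adjust the ranges to your own problem.
    params = {
        "n_estimators": trial.suggest_int("n_estimators", 100, 1000),
        "max_depth": trial.suggest_int("max_depth", 2, 10),
        "learning_rate": trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
        "subsample": trial.suggest_float("subsample", 0.5, 1.0),
    }
    model = xgb.XGBClassifier(**params, eval_metric="logloss")
    # Mean CV score is the value Optuna tries to maximise.
    return cross_val_score(model, X, y, cv=3, scoring="roc_auc").mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
print(study.best_params)
```

The resulting `study.best_params` dict is what you would then hand over to a KNIME workflow (or refit a final model with).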
No More Pesky Hyperparameters: Offline Hyperparameter Tuning for RL
Han Wang, Archit Sakhadeo, Adam M White et al.
#hyperparameters #hyperparameter #learns
2/10) #SSL models show great promise and can learn #representations from large-scale unlabelled data. But identifying the best model across different #hyperparameter configs requires measuring downstream task performance, which requires #labels and adds to the #compute time and resources. 😕
#deeplearning #ml #ai #compute #labels #hyperparameter #representations #ssl
I wonder if anyone has written about #ASHA #hyperparameter optimization vs restart strategies.
I've been playing around with ASHA in #ray on my CMA-ES optimization task. It turns out my hyperparameters (population_size, sigma_0) are almost uncorrelated with performance, but the final performance varies a lot from run to run.
So what ASHA actually does for me is just pick the lucky runs, which helped a lot. Maybe I should try the BIPOP restart strategy for CMA-ES instead.
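Roughly the shape of the setup, as a minimal sketch: pycma inside a Ray Tune trainable, with an ASHAScheduler cutting weak runs early. It assumes Ray Tune's tune.run / tune.report API; the Rastrigin objective, dimension, and search ranges are placeholders for the real task.

```python
import numpy as np
import cma
from ray import tune
from ray.tune.schedulers import ASHAScheduler

def rastrigin(x):
    # Placeholder benchmark objective standing in for the actual task.
    return 10 * len(x) + sum(xi**2 - 10 * np.cos(2 * np.pi * xi) for xi in x)

def run_cmaes(config):
    # One CMA-ES run with the hyperparameters ASHA is sampling.
    es = cma.CMAEvolutionStrategy(
        8 * [0.5],
        config["sigma_0"],
        {"popsize": config["population_size"], "verbose": -9},
    )
    iteration = 0
    while not es.stop() and iteration < 200:
        solutions = es.ask()
        es.tell(solutions, [rastrigin(s) for s in solutions])
        iteration += 1
        # Report the running best so ASHA can stop unlucky runs early.
        tune.report(best_f=es.result.fbest)

analysis = tune.run(
    run_cmaes,
    config={
        "population_size": tune.choice([8, 16, 32, 64]),
        "sigma_0": tune.loguniform(0.05, 2.0),
    },
    num_samples=20,
    scheduler=ASHAScheduler(metric="best_f", mode="min", max_t=200, grace_period=20),
)
print(analysis.get_best_config(metric="best_f", mode="min"))
```

ASHA only ever sees the intermediate best_f values, so when the hyperparameters themselves barely matter, the rungs end up selecting on run-to-run luck rather than on the config, which is exactly the effect described above.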