Massively Parallel Hyperparameter Optimization on AWS Lambda

Michael Hart
Feb 7
[Image caption: And that was before I stumbled on the ASHA paper]

Background

Enter Lambda

[Figure: Dashed lines represent min and max scores of the trials]
[Figure: Default limits for Lambda on the left, SageMaker on the right. Not exactly a fair fight]
[Figure: Not getting much better…]

ASHA

[Figure credit: https://blog.ml.cmu.edu/2018/12/12/massively-parallel-hyperparameter-optimization/]

"We argue that tuning computationally heavy models using massive parallelism is the new paradigm for hyperparameter optimization."
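
To make the promotion rule concrete, here is a minimal Python sketch of asynchronous successive halving. It is my own illustration based on the ASHA paper, not the code from this post, and the names (ASHA, get_job, report) are invented for the example. The core idea: rung k runs configurations for min_resource * eta^k units of resource, and a configuration is promoted to the next rung once it scores in the top 1/eta of its rung; if nothing is promotable, a free worker simply starts a fresh random configuration at the bottom rung, so no worker ever waits on a synchronization barrier.

import math
import random

class ASHA:
    # Minimal sketch of the Asynchronous Successive Halving scheduler.
    # sample_config is a callable that returns a fresh random configuration.
    def __init__(self, sample_config, min_resource=1, max_resource=81, eta=3):
        self.sample_config = sample_config
        self.min_resource = min_resource
        self.eta = eta
        self.max_rung = int(math.log(max_resource / min_resource, eta))
        self.rungs = [[] for _ in range(self.max_rung + 1)]     # (score, config) per rung
        self.promoted = [set() for _ in range(self.max_rung + 1)]

    def get_job(self):
        # Return (config, rung, resource) for a free worker.
        # Scan from the highest promotable rung down, promoting the first
        # config that sits in the top 1/eta of its rung and hasn't moved up.
        for k in reversed(range(self.max_rung)):
            results = sorted(self.rungs[k], key=lambda sc: sc[0], reverse=True)
            top = results[: len(results) // self.eta]           # top 1/eta of rung k
            for score, config in top:
                if id(config) not in self.promoted[k]:          # track by object id for brevity
                    self.promoted[k].add(id(config))
                    return config, k + 1, self.min_resource * self.eta ** (k + 1)
        # Nothing promotable: grow the bottom rung instead of idling.
        return self.sample_config(), 0, self.min_resource

    def report(self, config, rung, score):
        # Record a finished trial's score at the rung it ran on.
        self.rungs[rung].append((score, config))

A worker loop then looks like: ask for a job, train that configuration for the given resource (e.g. epochs), and report the score back.

asha = ASHA(lambda: {"lr": 10 ** random.uniform(-4, -1)})
config, rung, resource = asha.get_job()
# ... train `config` for `resource` epochs, measure `score`, then:
asha.report(config, rung, score=0.87)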

Our implementation
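
As a rough sketch of the fan-out pattern, assuming each trial runs as its own Lambda invocation driven synchronously from a coordinator: the snippet below is illustrative only, and the function name hpo-trial-worker and the payload/response shapes are assumptions for the example, not the exact setup from this post.

import json
from concurrent.futures import ThreadPoolExecutor

import boto3

lambda_client = boto3.client("lambda")

def run_trial(config, resource):
    # Invoke one Lambda worker per trial and block until it returns a score.
    # "hpo-trial-worker" and the payload shape are hypothetical.
    response = lambda_client.invoke(
        FunctionName="hpo-trial-worker",
        InvocationType="RequestResponse",   # wait for the trial to finish
        Payload=json.dumps({"config": config, "resource": resource}),
    )
    return json.loads(response["Payload"].read())["score"]

# Placeholder jobs; in practice they would come from the ASHA scheduler above.
jobs = [({"lr": 10 ** -i}, 1) for i in range(1, 6)]

# One invocation per trial, all in flight at once. Lambda's default account
# limit of 1,000 concurrent executions is the main ceiling on parallelism.
with ThreadPoolExecutor(max_workers=len(jobs)) as pool:
    scores = list(pool.map(lambda job: run_trial(*job), jobs))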

Future Extensions

Conclusion

Written by Michael Hart
Director of Research Engineering at Bustle, AWS Serverless Hero, creator of LambCI. github.com/mhart
