MosaicML

- "BioMedLM: a Domain-Specific Large Language Model for Biomedical Text" (Jan 27, 2023). The Stanford Center for Research on Foundation Models (CRFM) and MosaicML announce the release of BioMedLM, a purpose-built AI model…

- "Training Stable Diffusion from Scratch Costs <$160k" (Jan 25, 2023). We wanted to know how much time (and money) it would cost to train a Stable Diffusion model from scratch using our Streaming datasets…

- "Why Enterprises Should Treat AI Models Like Critical IP (Part 2)" (Jan 25, 2023). In 2022, the potential of Large Language Models (LLMs) and Generative AI entered the mainstream, while organizations began to recognize the…

- "Why Enterprises Should Treat AI Models Like Critical IP (Part 1)" (Jan 23, 2023). Five years ago, The Economist proclaimed that data was the new oil. Since then, the power of amassed data to impact the world has become…

- "New in Composer 0.12" (Jan 21, 2023). We are excited to announce the release of Composer 0.12 (release notes)! This release includes several new features, plus improvements to…

- "Efficiently Estimating Pareto Frontiers with Cyclic Learning Rate Schedules" (Aug 19, 2022). Benchmarking the tradeoff between model accuracy and training time is computationally expensive. Cyclic learning rate schedules can…

- "5 Best Practices for Efficient Model Training" (Aug 19, 2022). In the course of our research and product development we've codified a number of best practices for efficient CNN training, and we'd like…