Best Alternatives to MLflow Model Registry

Patrycja Jenkner · Published in neptune-ai · May 21, 2021 · 9 min read

This article was originally posted on the Neptune blog.

MLflow Model Registry is one of the four components of the MLflow platform; the other three are Tracking, Projects, and Models. They are designed so that each component can be used on its own, but they also work well together.

“The MLflow Model Registry component is a centralized model store, set of APIs, and UI, to collaboratively manage the full lifecycle of an MLflow Model. It provides model lineage (which MLflow experiment and run produced the model), model versioning, stage transitions (for example from staging to production), and annotations.” — MLflow documentation

The goal of MLflow Model Registry, like that of any machine learning model registry, is to let ML teams easily find all model-related metadata whenever they need it. Indirectly, the model registry also eases the process of moving models from training to production and supports model governance.
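For context, registering a trained model with MLflow takes just a couple of calls. A minimal sketch, assuming a scikit-learn model and a tracking server with a database-backed store (which the registry requires); the model name here is made up:

```python
import mlflow
import mlflow.sklearn
from sklearn.linear_model import LogisticRegression

# Train a toy model and log it as an artifact of a run.
model = LogisticRegression().fit([[0.0], [1.0]], [0, 1])
with mlflow.start_run() as run:
    mlflow.sklearn.log_model(model, artifact_path="model")

# Register the logged model under a name; this creates version 1
# (or the next version). "ChurnClassifier" is an illustrative name.
mlflow.register_model(f"runs:/{run.info.run_id}/model", "ChurnClassifier")
```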

Both the MLflow Model Registry component and the MLflow platform as a whole are great tools and can be a huge help in any ML project. But they certainly don't check all the boxes for everyone.

CHECK ALSO

The Best MLflow Alternatives for Experiment Tracking

Some of the things that you may see as challenging:

  • No model lineage or evaluation history features, such as tracking models created downstream or the history of test runs.
  • Code versioning and dataset versioning are missing from the MLflow Model Registry, which makes reproducibility harder.
  • Team collaboration and access management features are not available, so if you work in a team, you have to figure out time-consuming workarounds.
  • MLflow is an open-source tool, so unless you want to use the managed Databricks platform, you need to set up and maintain the MLflow server yourself.
  • In that setting, you're often on your own when debugging issues. There's no dedicated user support to tell you what to do step by step. And even though the open-source community is quite active, it may not have all the answers, or it may take some time to get them.

MLflow Model Registry is used by many teams, and they definitely see its value. But if the points above are important to you or your team, and you'd like your model registry tool to cover them, here are a few alternatives you should consider.

MLflow Model Registry alternatives

1. Neptune

Neptune is a metadata store for MLOps. Its main focus is helping Data Scientists and ML Engineers with experiment tracking and model registry.

What does it offer in the model registry area?

First of all, you can log all kinds of model building metadata to Neptune, including code, git information, files, Jupyter notebooks, datasets, and more. This way you have the model versioned in one central registry and you can easily analyze, compare or retrieve the data.
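As a rough sketch, here is what logging this kind of metadata looks like with Neptune's Python client (the neptune.new API current at the time of writing); the project name, metric values, and file paths below are made up:

```python
# pip install neptune-client
import neptune.new as neptune

# Connect to a project (illustrative name); the API token can also be
# picked up from the NEPTUNE_API_TOKEN environment variable.
run = neptune.init(project="my-workspace/my-project")

run["parameters"] = {"lr": 0.001, "epochs": 10}      # hyperparameters
run["train/accuracy"].log(0.92)                      # metric series
run["model/weights"].upload("model.pt")              # model file
run["dataset/train"].track_files("data/train.csv")  # dataset version (hash)

run.stop()
```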


No matter where you or your teammates run the training (in the cloud, locally, in notebooks, or somewhere else), Neptune can be the single source of truth and the database of your past runs.

With that in place, for any model you can see who created it and how, and you can also check what data the model was trained on and compare datasets between runs.


What Neptune doesn't have yet is an approval mechanism for models. But as an ML metadata store, it gives you a lot of flexibility, so you can set up protocols for promoting models yourself.


If you want to see Neptune in action, check this live Notebook or this example project (no registration is needed) and just play with it. You can also take a look at this in-depth comparison between MLflow and Neptune.

MAY INTEREST YOU

Setting up CI/CD for the infrastructure design optimization engine [Continuum Industries Case Study]

2. Amazon SageMaker

Amazon SageMaker model registry | Source

Amazon SageMaker is a fully managed service that developers can use for every step of ML development, including model registry. With the SageMaker model registry, you can catalog models for production, manage model versions, associate metadata (such as training metrics) with a model, and manage the approval status of a model.

To register a model in Amazon SageMaker, you create a model version and specify the model group it belongs to. You can also register it with an inference pipeline by specifying its containers and associated variables. New model versions are created through the AWS SDK for Python (Boto3).
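A hedged sketch of that flow with Boto3; the model group name, container image, and S3 path below are placeholders:

```python
import boto3

sm = boto3.client("sagemaker")

# Create a model package group once; model versions are collected under it.
sm.create_model_package_group(
    ModelPackageGroupName="churn-models",
    ModelPackageGroupDescription="Churn prediction models",
)

# Register one model version along with its inference container.
sm.create_model_package(
    ModelPackageGroupName="churn-models",
    ModelPackageDescription="XGBoost churn model, v1",
    InferenceSpecification={
        "Containers": [{
            "Image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-image:latest",
            "ModelDataUrl": "s3://my-bucket/models/model.tar.gz",
        }],
        "SupportedContentTypes": ["text/csv"],
        "SupportedResponseMIMETypes": ["text/csv"],
    },
    ModelApprovalStatus="PendingManualApproval",
)
```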

What's convenient about AWS is that you can deploy a model straight out of the registry. After a machine learning model is trained, it can be deployed to SageMaker endpoints that serve real-time inferences with low latency.

After deploying your model, you can use Amazon SageMaker Model Monitor to continuously monitor its quality in real time.

LEARN MORE

See an in-depth comparison between Neptune and SageMaker.

3. Verta AI

VertaAI model registry | Source

Verta AI is a model management and operations tool with model registry functionality, where you can manage and deploy your machine learning models in one unified space.

In a friendly UI, you register your models and publish all model metadata, documentation, and artifacts. Then, you can connect your model to an experiment tracking system where you will be able to manage the experiment end-to-end. Verta provides a unified view of all model information you have for better discoverability.

The Verta AI system also provides tools for version control of ML projects and lets you track changes in code, data, config, and environment separately. You can access the audit log at any moment to check the compliance and robustness of a model. The platform can be used at any stage of the model's life cycle.

Verta AI lets you reduce time to release without compromising on quality: models are released only once they pass basic security and privacy checks. You can build a custom approval workflow that suits your project and integrate it with the ticketing system of your choice.

The main features of Verta AI include:

  • Dashboards for reporting and performance evaluation that you can customize based on your needs.
  • Integrations — the tool runs on Docker and Kubernetes and integrates with most machine learning tools, such as TensorFlow and PyTorch. It also fits well into CI/CD, thanks to integrations with pipelines like Jenkins and GitOps workflows.
  • Git-like environment — if you have experience using Git (and most developers do), then you will find this system intuitive and easy to use.

4. Azure Machine Learning

AzureML dashboard | Source

Azure Machine Learning is a cloud MLOps platform that lets you manage and automate the whole ML lifecycle, including model management, deployment, and monitoring. The following MLOps capabilities are included in Azure:

  • Create reproducible ML pipelines.
  • Create reusable software environments for training and deploying models.
  • Register, package, and deploy models from anywhere.
  • Handle data governance for the end-to-end ML lifecycle.
  • Notify and alert on events in the ML lifecycle.
  • Monitor ML applications for operational and ML-related issues.
  • Automate the end-to-end ML lifecycle with Azure Machine Learning and Azure Pipelines.

Azure ML provides features in the model registry and audit trail area. You can use the central registry to store and track data, models, and metadata, as well as automatically capture lineage and governance data with an audit trail.
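As an illustration, registering a model into that central registry with the azureml-core Python SDK looks roughly like this (the model name, path, and tags are made up):

```python
from azureml.core import Workspace
from azureml.core.model import Model

# Loads connection details from a local config.json downloaded from the portal.
ws = Workspace.from_config()

model = Model.register(
    workspace=ws,
    model_path="outputs/model.pkl",   # local file or folder to upload
    model_name="churn-model",         # registry name; versions auto-increment
    tags={"stage": "staging", "framework": "sklearn"},
    description="Churn classifier trained on the latest data snapshot",
)
print(model.name, model.version)
```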

Azure is helpful if you would like to make your whole ML infrastructure cloud-based, or if it already is.

5. Comet

Comet model registry | Source

Comet is a machine learning experiment management platform. It's a feature-rich system that helps you log models from experiments via its Python SDK's Experiment class, as well as register, version, and deploy them.

In the Registered models tab, you will see all the versions of your model, with detailed information about each of them. Comet makes it simple to keep track of the history of experiments and model versions. Maintaining the ML workflow also becomes more efficient thanks to model reproduction and model optimization via Bayesian hyperparameter optimization. Here you can read more about the model registry with Comet.
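A minimal sketch with the comet_ml SDK (the API key, project name, and file name are placeholders); the logged model then appears with the experiment and can be added to the model registry from the Comet UI:

```python
from comet_ml import Experiment

experiment = Experiment(api_key="YOUR_API_KEY", project_name="churn-demo")

experiment.log_parameters({"lr": 0.001, "epochs": 10})
experiment.log_metric("accuracy", 0.92)

# Attach a trained model file (or folder) to the experiment under a name;
# from there it can be registered and versioned in the model registry.
experiment.log_model("churn-model", "model.pkl")

experiment.end()
```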

In general, Comet has powerful functionality that allows both individual developers and professional teams to run and track experiments:

  • Integrate fast. It’s easy to integrate the solution with other tools you use in just a couple of lines of code. Comet is compatible with the majority of platforms and machine learning libraries.
  • Compare Experiments. You can access code, hyperparameters, metrics, and dependencies in one user interface, which makes it convenient to compare experiments.
  • Monitor constantly. If the model’s performance is degrading, you will get an alert. Automated monitoring from training to production boosts the quality of your project.
  • Improve reporting. Built-in features for visualization and reporting facilitate communication with stakeholders and other members of the team.

MAY INTEREST YOU

See an in-depth comparison between Neptune and Comet.

6. Weights & Biases

Dataset versioning in Weights & Biases | Source

Weights & Biases is a platform for experiment tracking, dataset versioning, and model management. A model registry is not its main focus, but one of WandB's components, Artifacts, lets you version datasets and models, which helps with the lineage of ML models and supports their reproducibility.
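A short sketch of versioning a model with Artifacts (the project and file names are made up):

```python
import wandb

run = wandb.init(project="churn-demo")

# Log a model file as a new version of the "model" artifact;
# W&B assigns v0, v1, ... automatically as the contents change.
artifact = wandb.Artifact("model", type="model")
artifact.add_file("model.pt")
run.log_artifact(artifact)

# Later, another run can consume a pinned version for full lineage:
# trained_on = run.use_artifact("model:v3")

run.finish()
```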

WandB also has excellent visualization tools that help with visualizing model performance, training metrics, and model predictions. You can use Weights & Biases together with your favorite libraries and frameworks.

Apart from that, the tool lets engineers train models with various combinations of hyperparameters. This makes the process much easier: all a data scientist has to do is prepare the training code and define the hyperparameters to search over, as sketched below.
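That workflow is covered by W&B Sweeps. A rough sketch, with a stand-in training function and a made-up search space:

```python
import wandb

sweep_config = {
    "method": "bayes",  # Bayesian search over the space below
    "metric": {"name": "val_loss", "goal": "minimize"},
    "parameters": {
        "lr": {"min": 0.0001, "max": 0.1},
        "batch_size": {"values": [16, 32, 64]},
    },
}

def train():
    run = wandb.init()
    # Stand-in for a real training loop: pretend that a learning rate
    # closer to 0.01 yields a lower validation loss.
    val_loss = (run.config.lr - 0.01) ** 2
    wandb.log({"val_loss": val_loss})

sweep_id = wandb.sweep(sweep_config, project="churn-demo")
wandb.agent(sweep_id, function=train, count=10)
```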

The main benefits of WandB include:

  • Easy experiment tracking
  • Automated hyperparameter tuning
  • Customizable visualizations
  • Integrations with popular frameworks
  • Collaboration features

CHECK ALSO

See an in-depth comparison between Neptune and Weights & Biases.

Summary

MLflow is a super useful tool that offers not only a model registry but also experiment tracking, code and model packaging, model deployment, and more. It's one of the most popular open-source tools among ML practitioners. But it lacks some functionality, such as model lineage, code and dataset versioning, access management, and project sharing (features that can be especially beneficial for ML teams). It also has to be hosted on your own servers.

If those points are crucial for you, it’s always good to look around the market and check what alternative tools are available out there.

What should you choose? If you're looking for a tool that's focused on the model registry and is expanding its functionality in this area, Neptune is the best choice (its main focuses are experiment tracking and model registry). If you're interested in tools that can help you with the whole ML model lifecycle, check Amazon SageMaker or Verta AI. Azure ML and Comet also cover a wider scope of tasks.

Analyze your needs and your use case, and test the tool that matches them best. Hopefully, this list helped you find some options!

This article was originally posted on the Neptune blog. You can find more in-depth articles for machine learning practitioners there.
