Hydra — A fresh look at configuration for machine learning projects

Published in PyTorch · Feb 3, 2020 · 9 min read

This post is authored by Omry Yadan, Software Engineer at Facebook AI who created Hydra.

Hydra is a recently released open-source Python framework developed at Facebook AI that simplifies the development of research and other complex applications. This new framework provides a powerful ability to compose and override configuration from the command line and configuration files. As a part of the PyTorch ecosystem, Hydra helps PyTorch researchers and developers to more easily manage complex machine learning projects. Hydra is general-purpose and can be applied in domains beyond machine learning.

This post is divided into two parts. The first part describes common problems that arise when developing machine learning software, and the second describes how Hydra can address those problems.

Part 1 — Your code is more complicated than you think.

One of the first things every software developer learns about is the command-line. At its core, the command-line is a list of strings that are typically broken down into flags (e.g., --verbose) and arguments (e.g., --port=80). This is enough for many simple applications. You can define 2 to 3 command-line arguments in a command-line interface (CLI) parsing library, and you are done.

Snippet from PyTorch ImageNet Training Example
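For reference, here is an abridged sketch of the kind of argparse setup the example uses (the flag list is shortened and the default values are illustrative):

import argparse

parser = argparse.ArgumentParser(description='PyTorch ImageNet Training')
parser.add_argument('data', metavar='DIR', help='path to dataset')
parser.add_argument('--arch', default='resnet18', help='model architecture')
parser.add_argument('--epochs', default=90, type=int, help='number of total epochs to run')
parser.add_argument('--batch-size', default=256, type=int, help='mini-batch size')
parser.add_argument('--lr', default=0.1, type=float, help='initial learning rate')
parser.add_argument('--momentum', default=0.9, type=float, help='momentum')
parser.add_argument('--weight-decay', default=1e-4, type=float, help='weight decay')
parser.add_argument('--world-size', default=-1, type=int, help='number of nodes for distributed training')
parser.add_argument('--rank', default=-1, type=int, help='node rank for distributed training')
parser.add_argument('--dist-url', default='tcp://127.0.0.1:23456', help='url used to set up distributed training')
parser.add_argument('--dist-backend', default='nccl', help='distributed backend')
# ... roughly a dozen more flags in the real example
args = parser.parse_args()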

As people start using your application, they will inevitably discover missing functionality. Soon enough, you will add more features, resulting in command-line flag creep. This is especially common in machine learning.

The snippet above is from the PyTorch ImageNet training example. Despite being a minimal example, the number of command-line flags is already high. Some of these flags logically describe the same component and should ideally be grouped (for example, flags related to distributed training), but there is no easy way to group those flags together and consume them as a group.

Building on this example, you might want to add new functionality: supporting a new model, dataset, or optimizer, each of which will require additional command-line flags. You can imagine how this example will grow in complexity as you extend it to support new ideas.

Another subtle problem with this style is that everything tends to need the parsed args object. This encourages coupling and makes individual components harder to reuse in a different project.

Config file

A common solution to keep the growing complexity in check is to switch to configuration files. Configuration files can be hierarchical and can help reduce the complexity of the code defining command-line arguments. Unfortunately, config files also have challenges as you’ll see in the next section.

Config files are hard to change

While experimenting, you will want to run your application with different configuration options. At first, you might just change the configuration file in place before each run — but you will soon realize that it is hard to keep track of the changes associated with each run.

An attempt to fix that issue might be to copy the configuration file, name it after the experiment, and make changes to the new file. This is not great either, as it creates a long trail of config files that quickly fall out of sync with the code and become useless. In addition, it is difficult to tell what you were trying to do by looking at an experiment config file because it is 99% the same as the other config files.

Finally, you might fall back to command-line flags for the things you are frequently changing to allow them to be changed from the command-line. This is tedious and makes the command-line code complex again. Ideally, you would be able to override everything in your config from the command-line, without having to write code for every single case.

Config files become monolithic

When developers write code, they like to break things down into small-sized chunks (modules, functions). It helps them hold a mental model of their code, and it makes the code easier to maintain. It also enables functional reuse — calling a function is easier than copying it.

Configuration files do not offer similar facilities. If you want your application to use different config options, say one for the ImageNet dataset and one for CIFAR-10, you have two choices:

  1. Maintain two config files
  2. Put both options in one config file, and somehow use just what you need at runtime

The first option seems great until you realize that as you add more alternatives, things fall apart quickly. For example, you may want to try out three different model architectures (AlexNet, ResNet50, and something new and exciting you call BestNet) in addition to the two dataset choices. You may also want to have a choice between two loss functions. This brings the total number of combinations to twelve! You really want to avoid maintaining twelve similar configuration files.

The second approach works better initially. You just end up with a big configuration file that knows about the two datasets, three architectures, and two loss functions. But wait, it turns out that your learning rate needs to be different when training on AlexNet versus ResNet50, and you somehow need to express this in the monolithic config file.

This complexity also leaks into your code, which now needs to figure out which learning rate to use at runtime! Large configs that are mostly unused create a significant cognitive load when designing, running, and debugging experiments. With 90% of the config being unused, it is hard to tell the important 10% for each run.

Combining the capability to compose the configuration with the capability to override everything in it from the command-line provides a powerful solution to those problems. For this reason, many projects dealing with rising complexity eventually get to the point where it becomes necessary to develop a subset of the functionality that Hydra offers. This functionality tends to be closely aligned with the needs of the individual project and is therefore difficult to reuse, forcing developers to reinvent the wheel with every new project.

Unfortunately, by the time many developers realize this — they already have a complex and inflexible codebase, with high coupling and hard-coded configurations. Ideally, you want to compose your configuration just like composing code. This enables you to scale up the complexity of your project.

Part 2 — Compose your configuration like composing code with Hydra

If you got this far, you are probably wondering what this marvelous solution to the software engineering ailments described in Part 1 is. You guessed it: Hydra.

Hydra is an open-source Python framework developed at Facebook AI Research that solves the problems outlined in Part 1 (and a few others), by allowing you to compose the configuration passed to your application. The composition can happen via your config file or the command-line, and everything in the composed config can also be overridden through the command-line.

Basic example

The source code for the examples below is available here.

Say you have this config for a dataset:

config.yaml
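A minimal dataset config along these lines (field names and values are illustrative):

dataset:
  name: imagenet
  path: /datasets/imagenet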

Here is a simple Hydra application that loads this config:

my_app.py
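A minimal sketch of such an application. The API shown matches Hydra as it was around the time of this post; newer releases split config_path/config_name and use OmegaConf.to_yaml(cfg) instead of cfg.pretty():

import hydra
from omegaconf import DictConfig


@hydra.main(config_path="config.yaml")
def my_app(cfg: DictConfig) -> None:
    # Print the composed config as YAML
    print(cfg.pretty())


if __name__ == "__main__":
    my_app()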

The most interesting line here is the @hydra.main() decorator. It takes a config_path argument that points to the config.yaml file above.

The program pretty-prints the config object it gets. It would come as no surprise that the config object contains the ImageNet dataset configuration:

Regular output from my_app
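Assuming the illustrative config above, the output looks something like this:

$ python my_app.py
dataset:
  name: imagenet
  path: /datasets/imagenet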

We can now override anything inside this config file from the command-line:

Output when overriding dataset.path
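For example, overriding dataset.path (the new value is illustrative):

$ python my_app.py dataset.path=/datasets/imagenet-small
dataset:
  name: imagenet
  path: /datasets/imagenet-small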

Composition example

At some point, you may want to alternate between two different datasets, each with its own configuration. To support that, introduce a config group for dataset, and place individual config files in it, one per option:
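One possible layout for this (the directory tree is a sketch):

conf/
├── config.yaml
└── dataset/
    ├── cifar10.yaml
    └── imagenet.yaml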

You then add a ‘defaults’ section to your config.yaml, telling Hydra how to compose the config. In this case, we just want to load the config for cifar10 by default because it is faster to train on it:

config.yaml
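With the layout sketched above, the defaults section can be as small as:

defaults:
  - dataset: cifar10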

The app looks almost the same, the only difference being that the config path now points to conf/config.yaml. Running the app, we get the expected cifar10 configuration loaded:
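Assuming cifar10.yaml mirrors the illustrative imagenet config, the output is along these lines:

$ python my_app.py
dataset:
  name: cifar10
  path: /datasets/cifar10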

But we can also easily choose to use imagenet:
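Selecting the imagenet option of the dataset config group from the command line:

$ python my_app.py dataset=imagenet
dataset:
  name: imagenet
  path: /datasets/imagenet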

You can have as many config groups as you like. Let’s add another one for the optimizer:
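For example (nesterov is just an illustrative second option):

conf/
├── config.yaml
├── dataset/
│   ├── cifar10.yaml
│   └── imagenet.yaml
└── optimizer/
    ├── adam.yaml
    └── nesterov.yaml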

config.yaml can be updated to load adam by default as well:

config.yaml
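The defaults section now picks one option from each group:

defaults:
  - dataset: cifar10
  - optimizer: adam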

Running the app, we get a single config containing a union of cifar10 and adam:
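Assuming an illustrative adam.yaml with a couple of fields, the composed config would look something like:

$ python my_app.py
dataset:
  name: cifar10
  path: /datasets/cifar10
optimizer:
  type: adam
  lr: 0.001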

There is much more to say about composition, but for now, let’s move on to the next exciting feature.

Multirun

Multirun is the capability of Hydra to run your function multiple times, composing a different config object every time. This is a natural extension of the ability to compose a complex config with ease and is very handy for doing parameter sweeps without writing tedious scripts.

For example, we can sweep over all 4 combinations (2 datasets × 2 optimizers):
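A sketch of such a sweep using Hydra's --multirun flag and comma-separated values (log lines abridged and illustrative):

$ python my_app.py --multirun dataset=imagenet,cifar10 optimizer=adam,nesterov
[HYDRA] Sweep output dir : multirun/2020-02-03/10-00-00
[HYDRA] Launching 4 jobs locally
[HYDRA]     #0 : dataset=imagenet optimizer=adam
[HYDRA]     #1 : dataset=imagenet optimizer=nesterov
[HYDRA]     #2 : dataset=cifar10 optimizer=adam
[HYDRA]     #3 : dataset=cifar10 optimizer=nesterov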

The basic built-in launcher runs the jobs serially, but alternative launcher plugins can run the code in parallel or even remotely. Such plugins are not yet publicly available, but with help from the community, I hope to see some soon.

Automatic working directory

If you look closely at the output above, you will notice that the sweep output directory was generated based on the time I ran the command. One of the common problems people deal with when doing research is where to save their output. The typical solution is to pass in a command-line flag specifying the output directory, but this gets tedious quickly. It is especially annoying when you want to run multiple jobs at once and have to pass in a different output directory for each.

Hydra solves this problem by generating an output directory for each run and changing the current working directory before running your code. When performing sweeps with --multirun, an additional subdirectory is generated for each individual job.

This works well to group jobs from the same sweep together, while keeping the output of each one separated from the others.

You can still access your original working directory through an API in Hydra.

original_cwd.
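A minimal sketch using hydra.utils.get_original_cwd() (the config path is reused from the earlier examples):

import os

import hydra
from hydra.utils import get_original_cwd
from omegaconf import DictConfig


@hydra.main(config_path="conf/config.yaml")
def my_app(cfg: DictConfig) -> None:
    # Hydra has already switched into the per-run output directory at this point
    print("Current working directory  : {}".format(os.getcwd()))
    print("Original working directory : {}".format(get_original_cwd()))


if __name__ == "__main__":
    my_app()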

Output when running this from /home/omry/dev/hydra:
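This prints something along these lines (the date/time portion of the run directory will differ):

Current working directory  : /home/omry/dev/hydra/outputs/2020-02-03/10-00-00
Original working directory : /home/omry/dev/hydra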

The generated working directory can be fully customized; for example, it can include your command-line parameters or anything else from your configuration as part of the path.

Final words

This article covers just a subset of the features Hydra offers. Additional capabilities include dynamic tab completion, automatic configuration of the Python logging subsystem, support for packaging configuration with libraries and applications, and more.

At Facebook AI, we use Hydra to launch code directly to our internal cluster from the command-line. With help from the community — I hope Hydra can grow to support launching to AWS and GCP as well to provide similar capabilities for researchers outside of Facebook AI.

Another area of interest is command-line driven hyperparameter optimization. The first such plugin, utilizing Ax, is in development.

Hydra is still new, and we are just starting to scratch the surface of how it can change things.
I am looking forward to seeing how the community uses Hydra in the years to come.

To learn more about Hydra, see the tutorial and documentation on the Hydra website.
