Neptune.ai

Need a tool to manage experiments? Neptune can help you plan-et out.

What is Neptune.ai?

Neptune.ai is a tool that can be integrated into a machine learning workflow to facilitate managing experiments and storing ML metadata. ML metadata is all the additional information tracked while an ML experiment runs, typically used for bookkeeping and for recording important details about the experiment.

Some examples of this metadata are:

  • Metrics
  • Train/Validation Learning Curves
  • Hyperparameters
  • Console logs
  • Paths to the datasets used
  • Feature names and Data Types

It is vital, especially in large-scale ML systems and in production, to keep regular track of this information. Even in academic research projects, having a robust method to store this data is essential for optimizing your workflow and keeping all the metadata under one roof.

As with most products in this niche, the platform allows comprehensive model tracking, which is imperative for all ML experiments. Their website states that using Neptune.ai enables convenient model tracking, grouping and comparing experiments, sharing results across teams, and more.

Why is this needed?

Typically when working with software systems, and ML experiments in particular, there are multiple auxiliary factors around the fundamental model that go into building a successful system. In both research and industry, being able to conveniently log the relevant details of the system, especially when trying out multiple experiments, is vital for efficiently analyzing and post-processing experiment data. This can involve multiple facets such as time taken to run, type of model, size of model, accuracy, and number of parameters. Platforms like Neptune make this easy by providing a direct link from your code and model results to a unified database that continuously logs this information.

The value of such systems typically becomes evident after one finishes experimentation and needs to compare multiple runs, experiment configurations, accuracies, and so on. Consistent use of tools like Neptune.ai, which fall under the umbrella of model logging, saves many hours of frustration from losing data or forgetting to record results manually.

Breaking Down What They Do

Neptune.ai’s website describes their service as consisting of three key components:

  1. Client Services
  2. Metadata Database
  3. Dashboard
Neptune.ai: how it works.
  1. The Client Services

The client services consist of the suite of functions and API calls the user needs in order to interact with the ML metadata database (described in the next section).

A code snippet from Neptune.ai’s website showing use of the run API.

The set of API calls and functions provided by the Neptune client lets you log metadata during model building and also download that data back from Neptune.

This enables seamless two-way communication between the client’s side and the Neptune database, which can be used to push model metadata as well as retrieve it.

Code snippet highlighting creation and use of a Neptune run object.

As the snippet shows, it is easy to integrate this into your existing Python code by creating a Neptune run object, which can store a multitude of relevant parameters pertaining to your specific run.

As with most MLOps software, such as TensorBoard, the client services are intimately linked with a complete UI suite used for visualizing and deriving insights from the metadata being collected.

From this UI, the user can obtain the relevant project name and api_token used by the client, which must be copy-pasted into the code when instantiating the Neptune object. And that’s it! After instantiating the object, the relevant metrics can be captured by defining the requisite variables.
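To make this concrete, here is a minimal sketch of what such a snippet might look like, assuming the neptune.new Python client and placeholder workspace, project, and token values:

# Minimal sketch with placeholder names; substitute your own project and token.
import neptune.new as neptune

run = neptune.init(
    project="my-workspace/my-project",  # shown on the project page in the UI
    api_token="YOUR_API_TOKEN",         # found under your user menu in the UI
)

run["parameters/learning_rate"] = 0.001  # single values are plain assignments
run["train/loss"].log(0.42)              # series values are appended with .log()
run.stop()                               # close the run when the script finishes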

2. Metadata Database

The metadata database is where the experiment, model, and dataset metadata are stored.

A visual abstraction of the metadata being stored

The data being collected must be specified when instantiating the client for it to be successfully logged to the database. Note that the database is merely a storage module for holding the metadata; a dashboard-type abstraction is required to actually interact with it.

3. Dashboard

The dashboard is the actual interface through which you can visually explore the metadata collected in the database.

Training curves displayed on the dashboard

The dashboard is accessed through a uniquely generated URL opened in a browser window. Those familiar with TensorBoard will find the overall concept quite similar. It provides an all-in-one platform for viewing your model’s metadata. Especially useful here is having a permanent log of training runs, loss curves, validation error plots, and so on.

A Brief Tutorial To Using Neptune

This brief overview of using Neptune is designed for the Python interface; however, Neptune is compatible with a host of programming languages and platforms (refer to https://www.neptune.ai for more info).

  • Creating an account

To use Neptune, you must first have a registered account. A free account suffices for most purposes, with some limitations in functionality. You can easily sign up on their website directly with a Google account.

  • Installing Neptune

This is a one-liner :)
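Assuming pip, and the client package name in use at the time of writing, that one-liner would be:

pip install neptune-client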

  • Creating a Project

After creating your account, you will find yourself on Neptune’s main dashboard, which looks something like this:

Here, click on the New Project button.

We will build this tutorial around a movie recommendation system from our course project in Machine Learning in Production.

After choosing the name, and entering a description (optional), click create.

The next screen gives a brief list of the steps required to start using Neptune.

Some Important Definitions Before We Proceed

  • Workspace

A space inside Neptune.ai where you can manage multiple projects.

  • Project

A collection of runs, typically grouped around a single ML task or series of experiments.

  • Run

An instance inside a project where you log model metadata. Typically, a run is created every time you execute a script and is terminated at the end of the script.

  • Field

A field of a run is a namespace in which various metadata can be logged.

Some example fields are shown here:

You can assign almost anything to a field!

Fields follow a “directory-like” structure and can store pretty much anything.

Fields have multiple nuances of their own; here we only cover the two main functionalities required for most tasks.
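As a hedged illustration of the directory-like structure, here is a short sketch; the field paths are made up for the example and assume a run object created with neptune.init as described below:

# Hypothetical field paths illustrating the nested, directory-like namespaces.
run["data/path"] = "datasets/movies_v2.csv"    # a simple value
run["params"] = {"lr": 0.005, "reg": 0.4}      # a dictionary of values
run["train/loss"].log(0.73)                    # a series, appended to over time
run["artifacts/report"].upload("report.html")  # an uploaded file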

Neptune’s General Pipeline

  1. Import and initialize Neptune
  2. Store metrics
  3. Stop the run
  • Importing and initializing Neptune

import neptune.new as neptune

run = neptune.init(
    project="workspaceName/projectName",
    api_token="YourAPItoken",
)

Here, I am taking the project name and api_token directly from the auto-generated values that are available in the main project screen of the UI.

Storing metrics

  1. Storing a single metric

For example, if you wish to record that your model is a 20-layer ResNet, you can do it like so:

run["model_details"] = "20 layer ResNet"

That’s it!

2. Storing a series of values

The canonical example here is storing training accuracy over epochs, something everyone who has worked in ML is used to dealing with.

This can be done using the .log() function in Neptune’s API.

for i in range(NUM_EPOCHS):
    # ... training step for epoch i that produces train_accuracy ...
    run["training/train_acc"].log(train_accuracy)

There are multiple other useful function calls, such as .upload(), .assign(), etc., which are covered in further detail in the docs.
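For instance, here is a hedged sketch of those two calls; the field paths and file name are made up for illustration, and run is the object created earlier:

# Hypothetical examples of other field operations on an existing run.
run["parameters"].assign({"lr": 0.005, "batch_size": 64})  # explicit assignment of values
run["config_file"].upload("config.yaml")                   # upload an arbitrary file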

3. Closing Neptune

To end your run on Neptune:

run.stop()

That’s it.

Neptune Demonstration

Armed with the knowledge of how projects, runs, and fields work, we can now customize how we want to log our data. Before model training starts, you can store any metadata you wish. For example:

run["task"] = "movie_recommendation"
run["model_type"] = "collaborative_filtering"
run["max_epochs"] = 50

Another way is to use Neptune’s integrations. These are essentially Neptune’s APIs built into the training pipelines of standard ML libraries.

For example, if using TensorFlow/Keras, we can use Neptune’s integration API to easily store the requisite metadata, as sketched below.
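A hedged sketch of what that could look like, assuming the neptune-tensorflow-keras integration package is installed; the model and training data names are placeholders:

# Hypothetical Keras example; assumes: pip install neptune-tensorflow-keras
from neptune.new.integrations.tensorflow_keras import NeptuneCallback

neptune_cbk = NeptuneCallback(run=run, base_namespace="training")

model.fit(
    x_train, y_train,
    epochs=10,
    callbacks=[neptune_cbk],  # Keras metrics get logged to the Neptune run
)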

Integrating Neptune Into ML Libraries

Note: If you forget to add some details to a previous run or wish to continue a previous run, you can reopen it by specifying the run ID when initializing Neptune.

Logging into an old run
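A hedged sketch of reconnecting to an existing run; the run ID "MOV-3" is just an example, so use the ID shown in your own dashboard:

# Reopen an existing run by passing its ID; subsequent logging appends to it.
run = neptune.init(
    project="utsavdutta98/movie-recommendation",
    api_token="ENTER_YOUR_TOKEN",
    run="MOV-3",  # example run ID taken from the dashboard
)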

Now let’s get back to our original discussion of implementing Neptune for our movie recommendation system.

The first step involves importing the requisite libraries:

# Import surprise library for model
import surprise
from surprise import SVD, Reader, Dataset
from surprise.model_selection import GridSearchCV
import pickle
# Standard imports
import os
import numpy as np
import pandas as pd
import neptune.new as neptune

You can import whichever libraries are required; make sure to import neptune here as well.

Now, we create the Neptune run with the project name and token.

run = neptune.init(
    project="utsavdutta98/movie-recommendation",
    api_token="ENTER_YOUR_TOKEN",
)

In this run we are using a collaborative-filtering approach based on an SVD model. The model details are wholly unimportant here, but let us use Neptune to keep track of them.

run["model"] = "Collaborative Filtering : SVD"

Now, we load our data. We push the dataset file to our Neptune run as well, using the .upload() command.

df = pd.read_csv("model1_data.csv")
run["dataset"].upload("model1_data.csv")

The library we are using requires some preprocessing prior to use, which we show here for completeness; Neptune has nothing to do with this step.

# Read into surprise reader
reader = Reader(rating_scale=(1, 5))
data = Dataset.load_from_df(df[["user", "movie", "rating"]], reader)
trainingSet = data.build_full_trainset()

Now, we define the hyperparameters over which we have to perform our search. As this is important metadata, we immediately push it to our database.

param_grid = {
    "n_epochs": [50, 100, 200],
    "lr_all": [0.002, 0.005],
    "reg_all": [0.4, 0.6],
}
run["params"] = param_grid

Next, we use the library’s GridSearchCV method (a common option for evaluating runs over different hyperparameter settings) to evaluate the model for each combination.

gs = GridSearchCV(SVD, param_grid, measures=["rmse"], cv=3, joblib_verbose=3)
gs.fit(data)
best_params = gs.best_params["rmse"]

What is contained in this gs object?

Let’s view it with a dataframe.

results_df = pd.DataFrame.from_dict(gs.cv_results)

Without analyzing the numbers in too much detail, it suffices to say that each row corresponds to a particular hyperparameter setting and the accuracy/error obtained for it.

This single table captures a lot of key information, so we push it to our Neptune server as a .csv file.

results_df.to_csv("results.csv")
run["grid search results"].upload("results.csv")

The last step is training the final model and uploading it to the Neptune interface as well.

svd_algo = SVD(n_epochs=best_params["n_epochs"],
               lr_all=best_params["lr_all"],
               reg_all=best_params["reg_all"])
svd_algo.fit(trainingSet)

# Save the trained model as a pickle file and upload it to the run
with open("SVDmodel.pkl", "wb") as f:
    pickle.dump(svd_algo, f)
run["model_final"].upload("SVDmodel.pkl")

We retrain the model and save it as a pickle file, which is now uploaded to Neptune. And we are done!

Viewing Neptune’s Databases

Let us see how we are doing with regard to the data we have pushed so far. Logging into the Neptune.ai website lands us on the main page, which shows our projects.

Going into the movie recommendation project, we find our run with the automatically assigned (and easily changeable) ID of MOV-3.

As we have only performed one run, our results table has just one row; let us open it and see what has been logged.

Voila! As we can see, all the metadata that we logged is present. But hang on, there are some additional folders at the bottom.

Let us see what information is present in monitoring.

The information here primarily concerns CPU and memory usage, and is stored by default.

Similarly, the sys folder contains additional system metadata.

Now moving to the good stuff: we see our dataset is stored as a .csv file. Unfortunately it is too large to view inside the Neptune dashboard, but it can be downloaded whenever required in the future.
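Retrieving it later might look like this hedged one-liner (the destination folder name is illustrative):

# Download the uploaded dataset file from the run to a local folder.
run["dataset"].download(destination="downloaded_data")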

We see that our grid search results, stored as a CSV file, are readily viewable directly in the Neptune interface.

Image of Neptune’s grid search results

Benefits and Limitations

Finally, let’s discuss some of the benefits and limitations of Neptune.

Some of the obvious benefits: Neptune lets you manage and maintain the various experiments you perform and keep track of how each of them compares to the others, in a simple manner that can be fully automated for the user’s convenience. This live model tracking encourages teamwork, allows you to collaborate more efficiently, and lets you follow your model as it trains. The UI is easy to understand, the learning curve is small, and it is well worth the effort!

However, there are a couple of limitations to go over too: the platform isn’t as widely adopted as some other model monitoring platforms, so issues you face may take longer to resolve. Some users also find that Neptune’s visualizations could be improved, and others find that it might not be sufficient for full-time, long-term projects.

Conclusion

To summarize, Neptune.ai is an easily integrable tool that is becoming increasingly important in the machine learning pipeline as a reliable all-in-one suite for capturing all the relevant information pertaining to an experiment’s run. Hopefully this blog encourages people starting out in this field to integrate platforms like Neptune into their work, making the process of logging and storing vital metadata that much easier.

Thanks for reading!
