PyMLPipe: MLOps Python Package with PyTorch Support

Indresh Bhattacharyya
Coinmonks
Jul 20, 2022


In the last article, we covered the basics of using PyMLPipe.

In this one, let's see how we can use PyMLPipe with PyTorch.

PyMLPipe helps with:

  1. Model Monitoring
  2. Model Version Control
  3. Data Version Control
  4. Model Parameter Tracking
  5. Data Schema Tracking
  6. Model Performance Comparison
  7. One-Click API Deployment

Installation (via pip):

pip install pymlpipe

Usage of PyMLPipe:

To follow along, you can download the dataset from the link below.

Dataset: link

It is a churn-prediction dataset; it is very small and is used here just for demo purposes.

Step 1: First, let's import our libraries

Here we are going to use:

  1. torch - for creating and training our neural network
  2. sklearn - for splitting the data into train and test sets, and for metrics
  3. pymlpipe - for monitoring and deployment
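
A minimal sketch of the imports (the PyMLPipe import path follows its documentation; adjust if your version differs):

```python
import pandas as pd
import torch
from torch import nn
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import accuracy_score, f1_score

# PyMLPipe tracking object
from pymlpipe.tabular import PyMLPipe
```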

Step 2: Now let's read the dataset and encode the categorical features

  1. We read the train.csv file (from the link above)
  2. We encode the categorical features
  3. We split the data into trainX (the features, i.e. the independent variables) and trainY (the target, i.e. the dependent variable)
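
A sketch of this step; the target column name ("churn") is an assumption, so substitute the actual target column from the downloaded file:

```python
# Read the training data
df = pd.read_csv("train.csv")

# Label-encode every object (categorical) column
for col in df.select_dtypes(include="object").columns:
    df[col] = LabelEncoder().fit_transform(df[col])

# Features (independent variables) and target (dependent variable);
# "churn" as the target column name is an assumption
trainX = df.drop(columns=["churn"])
trainY = df["churn"]
```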

Step 3: Next, let's divide the data into train and test sets

  1. Divide the data into train and test sets
  2. Convert the DataFrames into torch tensors and cast them to float

** Converting to FloatTensor is important: the neural network works on float values rather than long, and BCELoss (used here for binary classification) also expects float values when computing the loss.
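
A sketch of the split and tensor conversion (the test size and random seed are illustrative):

```python
# Hold out 20% of the rows as a test set
train_x, test_x, train_y, test_y = train_test_split(
    trainX, trainY, test_size=0.2, random_state=42
)

# Convert to float tensors; both the network and BCELoss expect float values
train_x = torch.from_numpy(train_x.values).float()
test_x = torch.from_numpy(test_x.values).float()
train_y = torch.from_numpy(train_y.values).float()
test_y = torch.from_numpy(test_y.values).float()
```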

Step 4: Define the Neural Network

This is a simple neural network with 3 layers:

Layer 1: Dense layer (29 x 15)

Layer 2: Dense layer (15 x 10)

Layer 3: Dense layer (10 x 1)

followed by a sigmoid activation function.
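
A minimal definition matching the layer sizes above; the class name and the ReLU activations between the dense layers are assumptions:

```python
class ChurnModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer1 = nn.Linear(29, 15)  # Dense layer (29 x 15)
        self.layer2 = nn.Linear(15, 10)  # Dense layer (15 x 10)
        self.layer3 = nn.Linear(10, 1)   # Dense layer (10 x 1)

    def forward(self, x):
        x = torch.relu(self.layer1(x))
        x = torch.relu(self.layer2(x))
        # Sigmoid squashes the final output to a probability for BCELoss
        return torch.sigmoid(self.layer3(x))
```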

Step 5: Initialize the model, loss function, and optimizer

Initialize the model.

We are using SGD as the optimizer; feel free to use any other, such as Adam.

The loss function is BCELoss (binary cross-entropy loss).
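
A sketch with an illustrative learning rate:

```python
model = ChurnModel()

# SGD optimizer; torch.optim.Adam(model.parameters(), lr=1e-3) works just as well
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Binary cross-entropy loss; expects float probabilities and float targets
criterion = nn.BCELoss()
```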


Step 6: Create a validation function [this is optional; you can use your own function]

model is the PyTorch model, testX holds the test features, and testY holds the test targets.

torch.where(prediction > .5, 1, 0) sets the classification threshold at 0.5.

Then we calculate the accuracy and F1 scores.
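
Based on the description above, the helper might look like this (the function name and returned dictionary are illustrative):

```python
def validate(model, testX, testY):
    model.eval()
    with torch.no_grad():
        prediction = model(testX).flatten()
        # Class threshold of 0.5 on the predicted probabilities
        classes = torch.where(prediction > 0.5, 1, 0)
    return {
        "accuracy": accuracy_score(testY.numpy(), classes.numpy()),
        "f1": f1_score(testY.numpy(), classes.numpy()),
    }
```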

Step 7: Initialize PyMLPipe

Initialize PyMLPipe.

Set an experiment name and version (see the last article for more details).
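
A sketch using the same calls as in the last article (the experiment name and version are illustrative):

```python
mlp = PyMLPipe()
mlp.set_experiment("ChurnPytorch")  # experiment name is illustrative
mlp.set_version(0.1)                # version is illustrative
```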

Step 8: Train the model

Now we can train the model (a condensed sketch of the full loop appears after the notes below).

mlp.run() → starts a run instance for monitoring

mlp.log_params() → logs the training parameters

mlp.register_artifact() → saves the training data and generates the data schema

** See the last article for more details on the above functions.

A few new things here:

mlp.log_metrics_continious(dict) → stores metrics continuously, meaning it records the metrics for each epoch of training.

mlp.pytorch.register_model(model_name, pytorch_model) → registers and saves the PyTorch model in torch.jit format for serving and prediction.

You can also use

mlp.pytorch.register_model_with_runtime(modelname, modelobject, train_data_sample)

  • train_data_sample is a sample of the input data; it can be random numbers but must have the correct tensor dimensions.
  • This method is preferred because, in future releases, these models can also be converted to other formats, e.g. "onnx", "hd5".
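
Putting the calls above together, the training loop might look roughly like this; the epoch count, the logged parameter names, the artifact name, and the use of mlp.run() as a context manager are assumptions based on the description above and the last article:

```python
EPOCHS = 100  # illustrative

with mlp.run():
    # Log the training parameters and register the training data + schema
    mlp.log_params({"optimizer": "SGD", "lr": 0.01, "epochs": EPOCHS})
    mlp.register_artifact("train.csv", df)

    for epoch in range(EPOCHS):
        model.train()
        optimizer.zero_grad()
        output = model(train_x).flatten()
        loss = criterion(output, train_y)
        loss.backward()
        optimizer.step()

        # Record metrics for every epoch so the UI can plot them over time
        scores = validate(model, test_x, test_y)
        scores["loss"] = loss.item()
        mlp.log_metrics_continious(scores)

    # Register the trained model in torch.jit format for serving
    mlp.pytorch.register_model("churn_model", model)
```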

Step 9: Finally, we can start the UI and see the details

Start the UI by running the command pymlpipeui from the terminal, or launch it from a Python script, as shown below.
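
A sketch of the programmatic start, assuming the start_ui helper from the PyMLPipe documentation (host and port are illustrative):

```python
from pymlpipe.pymlpipeUI import start_ui

# Serve the monitoring UI on all interfaces at port 8085
start_ui(host="0.0.0.0", port=8085)
```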

Once you start the UI, you can see the experiments.

Click on the RUN ID to see details

In the Models tab, you can see the model details: the training parameters as well as the `Torch Layers` used in the model.

In the Model Architecture tab, you can see the model visualization.

In the Training Logs tab, you can see the continuous metrics recorded during training; you can also plot them.

Click the Deploy button to deploy the DL model.

You can see the deployed models in the Show Deployment tab.

The deployment URL is your endpoint; you can send a POST request to it to get predictions.

You can click on the deployment URL to get an API screen.

If you are following along, you can send data like the example below to receive a prediction.

dtype is the data type expected by the model. As discussed, this model expects FloatTensor inputs, so we provide "dtype": "float".
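
A hedged example of such a request; the endpoint path, the "data" key, and the feature values are placeholders, so use the deployment URL shown in the UI and a row of the 29 features your model was trained on:

```python
import requests

# Replace with the deployment URL shown in the Show Deployment tab
url = "http://localhost:8085/predict/churn_model"

payload = {
    "data": [[0.0] * 29],  # one row of 29 feature values (placeholders)
    "dtype": "float",      # the model expects FloatTensor inputs
}

response = requests.post(url, json=payload)
print(response.json())
```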

GitHub link: https://github.com/neelindresh/pymlpipe

Contributions are always welcome.

Documentation: https://neelindresh.github.io/pymlpipe.documentation.io/

Hope you enjoyed the post. Leave a like and share.
