Introduction to NeuralPy: A Keras-like deep learning library that works on top of PyTorch

In this blog, we’ll learn about a new Deep Learning library, NeuralPy. It is similar to Keras in many ways but works on top of PyTorch. To understand NeuralPy, we’ll create a simple linear regression model that predicts the value of y for a given X.

Abhishek Chatterjee · The Startup · Jun 19, 2020

First, let's understand what NeuralPy is.

NeuralPy is a Keras-like Deep Learning library that works on top of PyTorch. It is entirely written in Python and available on GitHub under an MIT license. It is a simple, easy-to-use Deep Learning library that is cross-compatible with PyTorch models and suitable for all kinds of machine learning experiments, learning, research, etc.

Here are some highlights of NeuralPy:

  • Provides an easy interface that is suitable for fast prototyping, learning, and research
  • Works on top of PyTorch
  • Can run on both CPU and GPU
  • Cross-Compatible with PyTorch models

To learn more about NeuralPy, please check the NeuralPy GitHub repository and official documentation.

Note: NeuralPy is currently at a very early stage of development; because of that, it supports only a limited set of layers, optimizers, loss functions, etc. It is also unstable, and there may be bugs.

Now that we understand what NeuralPy is, let's set up the development environment.

NeuralPy works on top of PyTorch, so before installing NeuralPy, we need to install PyTorch.

Installing PyTorch

To install PyTorch, go to their official website at the following link: https://pytorch.org/get-started/locally/ and follow their instructions.

If you have CUDA available on your system, then please install the CUDA-supported version of PyTorch.

NeuralPy also needs Numpy, but as it is a dependency of PyTorch, there is no need to install it separately.
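Before moving on, we can quickly verify that PyTorch is installed correctly and check whether CUDA is available. This is just an optional sanity check using PyTorch's own API:

# Optional sanity check: verify the PyTorch install and CUDA availability
import torch

print(torch.__version__)          # installed PyTorch version
print(torch.cuda.is_available())  # True if PyTorch can see a CUDA GPU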

Installing NeuralPy

To install NeuralPy, run the following command in your terminal/cmd.

pip install neuralpy-torch

If you have multiple versions of Python installed on your system, you might need to use pip3 instead.
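To confirm that the installation worked, a simple import is enough. If the following runs without an error, NeuralPy is ready to use:

# Optional: confirm NeuralPy was installed correctly
import neuralpy

print("NeuralPy imported successfully")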

Once we have NeuralPy installed on our system, we're ready to start working on the Linear Regression model.

To build any Machine Learning model, we first need a dataset. For simplicity, in this blog we won't use a real dataset; instead, we'll create a synthetic one using numpy.

So first, import numpy and then type the following code. Here we use the np.random.rand method to create random data points for our model. We create three splits of the data: one for training, one for validation, and one for testing/evaluation.

# Importing dependencies
import numpy as np

# Random seed for numpy
np.random.seed(1969)

# Generating the data
# Training data
X_train = np.random.rand(100, 1) * 10
y_train = X_train + 5 * np.random.rand(100, 1)

# Validation data
X_validation = np.random.rand(100, 1) * 10
y_validation = X_validation + 5 * np.random.rand(100, 1)

# Test data
X_test = np.random.rand(100, 1) * 10
y_test = X_test + 5 * np.random.rand(100, 1)

Once we have the data ready, we can start making the model.

In this case, we'll use a simple Sequential model. NeuralPy provides a Sequential class that is similar to the Keras Sequential class in many ways. Internally, it uses PyTorch's Sequential class to build the model.

At the time of writing this blog, there are only two ways to make a model in NeuralPy: one through the Sequential class and one using the Model subclass. We'll discuss the Model subclass in a future blog; for now, check the documentation for more information.

To create a Sequential model, write the code below.

from neuralpy.models import Sequential

# Making the model
model = Sequential()

Here, we first import the Sequential class from the models module and then initialize it.

The Sequential class accepts 3 parameters, all of which are optional and have default values. In this blog, we'll stick to the defaults. For more information about those parameters, please check the documentation.
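As a rough sketch of what those parameters look like: the names force_cpu, training_device, and random_state are taken from the docs at the time of writing, so treat them as assumptions and verify them against the documentation for your version.

# Illustrative only: parameter names (force_cpu, training_device,
# random_state) are assumptions based on the docs at the time of
# writing; we'll keep using the default-value model created above.
custom_model = Sequential(force_cpu=False, training_device=None, random_state=1969)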

Once the model is ready, we can start adding layers to the model.

Currently, NeuralPy supports only one type of layer: the Dense layer. A Dense layer is a regular, densely connected neural network layer that performs a linear transformation of its input. Luckily, for this Linear Regression model, the Dense layer is all we need :)

To use this Dense Layer, first we need to import it. The Sequential model class provides an easy .add() method to add layers. We’ll use the add method and pass the Dense layer with proper parameters.

from neuralpy.layers import Dense
...
...
# Adding the layer
model.add(Dense(n_nodes=1, n_inputs=1))

Here we’re passing the Dense Layer with two parameters, n_nodes and n_inputs.

n_inputs is the size of the input; here, each sample in the dataset is a single number, so n_inputs is set to 1.

n_nodes is the size of the output of the layer. Since this is a Linear Regression model, we need just a single number as output, so n_nodes is also set to 1.

Along with these values, the Dense layer also supports some additional parameters. For a list of all possible parameters, please check the official documentation.
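For illustration, a Dense layer with a couple of those extra parameters might look like the snippet below. The bias and name parameter names are taken from the docs at the time of writing, so please verify them against the documentation before using them.

# Illustrative only: bias and name are assumed from the docs at the
# time of writing; we'll stick with the simpler layer added above.
Dense(n_nodes=1, n_inputs=1, bias=True, name="output_layer")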

Once the layers are added, we can build the model. For that, the Sequential class provides a simple .build() method, which is responsible for creating the real PyTorch model from the NeuralPy layers.

# Building the model
model.build()

Now we can use the .compile() method to compile our model. The .compile() method is responsible for attaching the Loss Function and Optimizer with the model.

To use .compile(), we first need to import the Loss Function and the Optimizer. NeuralPy supports various types of Loss Functions and Optimizers. For this model, we'll use the Adam Optimizer and the MSE Loss Function.

First, we’ll import the Loss Function and Optimizer and then pass them into the .compile() method.

from neuralpy.optimizer import Adam
from neuralpy.loss_functions import MSELoss
...
...
...
# Compiling the model
model.compile(optimizer=Adam(), loss_function=MSELoss())

Both Adam and MSELoss support several parameters; here, for simplicity, we're just using the default values. Please check the documentation for more information regarding the supported parameters.
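For example, a custom learning rate could be passed to Adam roughly like this. The learning_rate parameter name is taken from the docs at the time of writing, so treat it as an assumption and double-check the documentation.

# Illustrative only: learning_rate is assumed from the docs at the time
# of writing; for this blog we stick with the defaults compiled above.
model.compile(optimizer=Adam(learning_rate=0.001), loss_function=MSELoss())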

Now, finally, it's time to train the model!

To train the model, we use the .fit() method, one of the most important methods in the Sequential class. It accepts several parameters.

In our case, the first parameter we'll pass is train_data, followed by test_data. Both train_data and test_data are tuples of (X, y).

Finally, we have epochs and batch_size. Here we'll set epochs to 300 and batch_size to 4.

# Training the model
model.fit(train_data=(X_train, y_train), test_data=(X_validation, y_validation), epochs=300, batch_size=4)

Once the model is trained, we can evaluate it. The Sequential class provides an .evaluate() method to evaluate a model on test data.

# Evaluating
ev = model.evaluate(X=X_test, y=y_test, batch_size=4)

A Happy Ending to the Story

As we observed, NeuralPy is similar to Keras in many ways, but there are also some notable differences.

NeuralPy is cross-compatible with PyTorch models, which means we can create a model in PyTorch and train it with NeuralPy, or create a model in NeuralPy and train it using PyTorch.
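A minimal sketch of what that cross-compatibility looks like, assuming the .get_model() and .set_model() methods described in the docs at the time of writing (treat the names as assumptions and verify them against the documentation):

# Illustrative only: .get_model() / .set_model() are assumed from the
# docs at the time of writing; verify against the documentation.
pytorch_model = model.get_model()   # underlying torch.nn.Sequential model

# ...after modifying or training it with plain PyTorch, the model can be
# handed back to NeuralPy
model.set_model(pytorch_model)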

Currently, NeuralPy is limited: several features are missing, and there is no support for convolutional or recurrent layers yet. I'm constantly working on this project to add more features. If you're interested in contributing to this project, please feel free to reach out to me.

I created this project because there is no easy way to get started with PyTorch; there is no Keras for PyTorch. Also, I was bored, so it was an easy way to kill some time.

If you're interested in this project, please join the official NeuralPy Discord server (invite link: https://discord.gg/6aTTwbW). You can also follow me on Twitter at @imdeepmind.
