
Sigmoid Function with PyTorch

In this article, I will show you how to calculate the sigmoid (activation) function using PyTorch.

Ahmad Anis
Published in Analytics Vidhya · 4 min read · Feb 1, 2020

Sigmoid Function

First of all, we need to know what the sigmoid function is. The sigmoid function is very commonly used in classification algorithms to calculate a probability: it always returns a value between 0 and 1, which can be interpreted as the probability of an event.
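The article's own code is not shown here, but the definition above can be sketched directly in PyTorch (the function name `sigmoid` and the sample inputs are my own choices for illustration):

```python
import torch

# Sigmoid: sigma(x) = 1 / (1 + e^(-x)), which squashes any real number into (0, 1)
def sigmoid(x):
    return 1 / (1 + torch.exp(-x))

x = torch.tensor([-5.0, 0.0, 5.0])
print(sigmoid(x))  # values near 0, exactly 0.5, and near 1
```

Note that PyTorch also ships a built-in `torch.sigmoid()` that computes the same thing.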

Read more about the sigmoid function in detail here.

Sigmoid Function

PyTorch

PyTorch is a deep learning framework by the Facebook AI team. All deep learning frameworks have a backbone known as the tensor. You can think of a tensor as a generalization of vectors and matrices: a 1-d tensor is a vector, a 2-d tensor is a matrix, and a 3-d tensor is an array with three indices (e.g. the RGB channels of an image). A tensor can have any number of dimensions. Let's take a look at how we calculate the activation (the sigmoid function) with PyTorch.

PyTorch tensors can be added, multiplied, subtracted, etc., just like NumPy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use NumPy arrays.
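A quick sketch of what that NumPy-like behavior looks like (the specific values are arbitrary examples):

```python
import torch

a = torch.tensor([1.0, 2.0, 3.0])
b = torch.tensor([4.0, 5.0, 6.0])

print(a + b)  # element-wise sum: tensor([5., 7., 9.])
print(a * b)  # element-wise product: tensor([ 4., 10., 18.])
print(b - a)  # element-wise difference: tensor([3., 3., 3.])
```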

Let’s Code it and Understand

Let's generate some random data using PyTorch's built-in methods.

We use torch.manual_seed(), torch.randn(), and torch.randn_like() to generate the random data.
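The original data-generation code is not reproduced here; a typical setup using those three functions might look like this (the shapes and seed value are assumptions for illustration):

```python
import torch

torch.manual_seed(7)  # fix the random seed so results are reproducible

features = torch.randn((1, 5))        # one sample with 5 random input features
weights = torch.randn_like(features)  # random weights with the same shape as features
bias = torch.randn((1, 1))            # a single random bias term
```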

Now that we have generated random data, let's code our sigmoid function.

1st Method

We calculate the activation as sigmoid(w1x1 + w2x2 + … + wnxn + b): we multiply weights and features element by element (w1x1 + w2x2 + … + wnxn), sum the products, and add the bias. We send the whole result to the activation function, and the answer is stored in y.

2nd Method

The 2nd method is matrix multiplication. We multiply our features vector by our weights vector using torch.matmul() or torch.mm(), which is short for matrix multiplication.

We use torch.mm() or torch.matmul() all the time in PyTorch. Here we simply multiply the two vectors and add the bias, then send the result to the activation function. This method has a catch, which we will see shortly, and knowing how to handle it will help us a lot when we deal with more complex neural networks later. Let's code it and see. We are going to use the previously declared variables and functions.
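A naive attempt along these lines (assuming the same data as before) runs into the problem described next, so the failing call is wrapped in a `try`/`except` to show the error without crashing:

```python
import torch

torch.manual_seed(7)
features = torch.randn((1, 5))
weights = torch.randn_like(features)  # same shape as features

# Both tensors have the same shape, so the inner dimensions don't match
# and matrix multiplication fails with a RuntimeError.
try:
    torch.matmul(features, weights)
except RuntimeError as e:
    print(e)  # size/shape mismatch error
```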

It seems pretty simple, right? But here is the real problem. Both of our tensors have the same shape, which means they can't be multiplied: to multiply vectors/matrices, the number of columns of the 1st must equal the number of rows of the 2nd, which is not the case here. It returns the following error.

This is a very common error and one of the most important ones to know how to deal with. We have to reshape our weight tensor so that the dimensions line up for multiplication. Luckily, PyTorch gives us several functions for this: weights.reshape(), weights.resize_(), or weights.view(). There are slight differences between them, which you can read about in the official documentation. We will use weights.view().

Let's code it now.
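Putting it together, a sketch of the working matrix-multiplication version (again assuming the data and `sigmoid` helper defined earlier):

```python
import torch

def sigmoid(x):
    return 1 / (1 + torch.exp(-x))

torch.manual_seed(7)
features = torch.randn((1, 5))
weights = torch.randn_like(features)
bias = torch.randn((1, 1))

# view(5, 1) reshapes the weights into a column vector, so the
# multiplication is (1, 5) @ (5, 1) -> (1, 1), matching the
# element-wise sum from the first method.
y = sigmoid(torch.matmul(features, weights.view(5, 1)) + bias)
print(y)  # a single value between 0 and 1
```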

You see, that's how we calculate the activation function in PyTorch. It becomes even more powerful when we have multiple layers, i.e. hidden layers, in our neural network.

Want to know more about Deep Learning with PyTorch?

Intro to Deep Learning with PyTorch

Completely new to the field? Start here.

Beginners Learning Path for Machine Learning


Ahmad Anis
Analytics Vidhya

Deep Learning at Roll.ai, Researcher at Data Providence Initiative, Community Lead at Cohere for AI