Auto gradients of tensor objects
How do PyTorch’s tensor objects use built-in functionality to calculate gradients?
Differentiation and gradient calculation play a crucial role in updating the weights of a neural network, and PyTorch takes care of this automatically. Here’s an example.
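A minimal sketch to set up the example (the tensor values here are illustrative assumptions): create an input tensor with `requires_grad=True` so that autograd records the operations performed on it.

```python
import torch

# requires_grad=True tells autograd to track operations on x
# so that gradients with respect to x can be computed later
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
print(x)  # tensor([1., 2., 3.], requires_grad=True)
```

Any tensor computed from `x` will carry the history needed for differentiation.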
Now define the loss function; in this case, let’s use the sum of squared values (squared loss):

loss = Σx²
∂loss/∂x = 2x
Let’s see how this gradient is calculated using PyTorch’s built-in function, backward().
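A sketch of the call (again assuming the illustrative tensor above): compute the scalar loss from `x`, then call `backward()` on it to populate the gradients.

```python
import torch

x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
loss = (x ** 2).sum()  # loss = Σx², a scalar
loss.backward()        # computes d(loss)/dx and stores it in x.grad
```

`backward()` can be called without arguments here because the loss is a scalar.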
The gradients are stored on the input tensor itself, in its .grad attribute, and can be obtained as:
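A self-contained sketch (tensor values are illustrative assumptions) that reads the gradients back after `backward()`:

```python
import torch

x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
loss = (x ** 2).sum()
loss.backward()

# The gradient of Σx² with respect to x is 2x
print(x.grad)  # tensor([2., 4., 6.])
```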
Notice: the values are indeed 2*x, matching the analytical gradient.
Coming up next: Why use PyTorch’s tensors over NumPy arrays?