Loss Functions Unraveled

Om Pramod

2 min read · Aug 15, 2023


Part 1: Introduction

A loss function is a crucial component of neural networks and of machine learning algorithms more broadly. In a neural network, the loss function is a mathematical function that compares the network's predicted output with the true output (label); the difference between the two represents the error, or loss. Training a neural network means minimizing the value of the loss function by adjusting the weights and biases, thereby achieving a good fit to the training data. This is done with an optimization algorithm such as gradient descent. In short, the loss function provides a measure of how well the network is performing on a given task.
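To make "minimize the loss by adjusting the weights" concrete, here is a minimal sketch of gradient descent on an MSE loss for a one-parameter model y = w·x. The data, learning rate, and iteration count are illustrative choices, not from the article:

```python
import numpy as np

# Toy data generated by w = 2, so the loss is minimized at w = 2
x = np.array([1.0, 2.0, 3.0])
y_true = np.array([2.0, 4.0, 6.0])

w = 0.0    # initial weight
lr = 0.05  # learning rate

for _ in range(200):
    y_pred = w * x
    # Gradient of the MSE loss mean((w*x - y)^2) with respect to w
    grad = np.mean(2 * (y_pred - y_true) * x)
    w -= lr * grad

print(round(w, 3))  # converges toward 2.0
```

Each step moves the weight a small distance opposite the gradient of the loss, which is exactly the "adjusting the weights and biases" that training performs at scale.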

Fig. Loss and loss function (J)

Loss function vs Cost function:

Loss Function: The loss function is used to evaluate the performance of the network on a single training example. It provides a way to quantify the difference between the predicted output and the true output for that example.

Cost Function: The cost function is used to evaluate the performance of the network over the entire training dataset. It provides an average measure of the error made by the network on the training data. The cost function is calculated by summing the loss values for each training example and dividing by the number of examples. The cost function can also include regularization terms, which are used to prevent overfitting in the network.
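The distinction can be shown numerically: the loss is computed per example, and the cost averages those losses over the dataset. A small sketch with made-up predictions and labels:

```python
import numpy as np

# Illustrative predictions and labels (made up for this sketch)
y_pred = np.array([2.5, 0.0, 2.0])
y_true = np.array([3.0, -0.5, 2.0])

# Loss: squared error on each individual training example
per_example_loss = (y_pred - y_true) ** 2
print(per_example_loss)  # [0.25 0.25 0.  ]

# Cost: the average of the per-example losses over the dataset
cost = np.mean(per_example_loss)
print(cost)  # ≈ 0.1667
```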

Here’s a simple example in Python to illustrate the use of a cost function with L2 regularization:

import numpy as np

def mean_squared_error(y_pred, y_true):
    # Average squared difference between predictions and labels
    return np.mean((y_pred - y_true) ** 2)

def l2_regularization(weights, lambda_reg):
    # L2 penalty: (lambda/2) times the sum of squared weights
    return (lambda_reg / 2) * np.sum(weights ** 2)

def cost_function(y_pred, y_true, weights, lambda_reg):
    mse = mean_squared_error(y_pred, y_true)
    reg = l2_regularization(weights, lambda_reg)
    return mse + reg

y_pred = np.array([1, 2, 3, 4])
y_true = np.array([1.5, 2.5, 3.5, 4.5])
weights = np.array([0.1, 0.2, 0.3, 0.4])
lambda_reg = 0.1

cost = cost_function(y_pred, y_true, weights, lambda_reg)
print("Cost: ", cost)

Output:

Cost:  0.265

Types of Loss Functions in Neural Networks:

There are two main types of loss functions, corresponding to the two major classes of neural network tasks: regression loss functions and classification loss functions.
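As a preview of the two families, here is one common representative of each on toy data: mean squared error for regression, and binary cross-entropy for classification. The inputs are invented for illustration:

```python
import numpy as np

# Regression: mean squared error on real-valued targets
y_pred = np.array([2.0, 3.5])
y_true = np.array([2.5, 3.0])
mse = np.mean((y_pred - y_true) ** 2)
print(mse)  # 0.25

# Classification: binary cross-entropy on predicted probabilities
p_pred = np.array([0.9, 0.2])  # predicted probability of class 1
labels = np.array([1.0, 0.0])  # true class labels
bce = -np.mean(labels * np.log(p_pred) + (1 - labels) * np.log(1 - p_pred))
print(round(bce, 4))  # ≈ 0.1643
```

Regression losses penalize numeric distance from the target, while classification losses penalize assigning low probability to the correct class.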

Final Note: Thanks for reading! I hope you find this article informative.

Your thirst for knowledge is commendable, and I invite you to satiate it by joining me in the upcoming sequel: “Loss Functions Unraveled | Part 2: Regression Loss Functions.” Together, we’ll delve into specific loss functions tailored to regression tasks. Happy coding, and let’s continue our exploration into the heart of deep learning!
