Neeharika Patel
Python's Gurus
2 min read · Jul 2, 2024


๐ŸŽฏ ๐‘๐ž๐Ÿ๐ข๐ง๐ข๐ง๐  ๐ง๐ž๐ฎ๐ซ๐š๐ฅ ๐ข๐ง๐ญ๐ž๐ฅ๐ฅ๐ข๐ ๐ž๐ง๐œ๐ž ๐ฐ๐ข๐ญ๐ก ๐Ÿ๐จ๐ซ๐ฐ๐š๐ซ๐ ๐Ÿ๐ฅ๐จ๐ฐ ๐š๐ง๐ ๐›๐š๐œ๐ค๐ฐ๐š๐ซ๐ ๐ญ๐ฎ๐ง๐ข๐ง๐  ๐ŸŽฏ

โžก ๐…๐จ๐ซ๐ฐ๐š๐ซ๐ ๐๐ซ๐จ๐ฉ๐š๐ ๐š๐ญ๐ข๐จ๐ง โžก

Forward propagation refers to the process by which input data is passed through a neural network to generate an output. It involves the following steps:

๐Ÿ. ๐ˆ๐ง๐ฉ๐ฎ๐ญ ๐‹๐š๐ฒ๐ž๐ซ: The input data is fed into the network.

๐Ÿ. ๐‡๐ข๐๐๐ž๐ง ๐‹๐š๐ฒ๐ž๐ซ๐ฌ: The input data is then passed through one or more hidden layers. Each neuron in these layers applies a linear transformation followed by a non-linear activation function to the input data.

๐Ÿ‘. ๐Ž๐ฎ๐ญ๐ฉ๐ฎ๐ญ ๐‹๐š๐ฒ๐ž๐ซ: The final transformed data reaches the output layer, producing the networkโ€™s prediction or output.

During forward propagation, no adjustments to the network's weights and biases are made. It is simply the process of generating predictions based on the current state of the network.
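The three steps above can be sketched in a few lines of NumPy. Everything concrete here is an illustrative assumption, not something prescribed by the article: a tiny 3-input, 4-hidden-neuron, 1-output network with sigmoid activations and random initial weights.

```python
import numpy as np

def sigmoid(z):
    # Non-linear activation applied after each linear transformation
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# A tiny network: 3 inputs -> 4 hidden neurons -> 1 output
W1 = rng.normal(size=(3, 4))   # hidden-layer weights
b1 = np.zeros(4)               # hidden-layer biases
W2 = rng.normal(size=(4, 1))   # output-layer weights
b2 = np.zeros(1)               # output-layer biases

def forward(x):
    # Step 1-2: input flows through the hidden layer
    # (linear transformation, then activation)
    h = sigmoid(x @ W1 + b1)
    # Step 3: output layer produces the prediction
    return sigmoid(h @ W2 + b2)

x = np.array([0.5, -1.0, 2.0])  # example input
prediction = forward(x)
```

Note that `forward` only reads the weights; nothing is updated, which is exactly the point made above.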

โฌ… ๐๐š๐œ๐ค๐ฐ๐š๐ซ๐ ๐๐ซ๐จ๐ฉ๐š๐ ๐š๐ญ๐ข๐จ๐ง โฌ…

Backward propagation, or backpropagation, is the process used to update the network's parameters based on the error of the prediction. It involves the following steps:

๐Ÿ. ๐‚๐š๐ฅ๐œ๐ฎ๐ฅ๐š๐ญ๐ž ๐ญ๐ก๐ž ๐‹๐จ๐ฌ๐ฌ: The difference between the networkโ€™s prediction which is output from forward propagation and the actual target values is computed using a loss function.

๐Ÿ. ๐‚๐จ๐ฆ๐ฉ๐ฎ๐ญ๐ž ๐†๐ซ๐š๐๐ข๐ž๐ง๐ญ๐ฌ: Using the chain rule of calculus, the gradient of the loss with respect to each weight and bias in the network is computed. This involves working backward through the network, from the output layer to the input layer, hence the name backward propagation.

๐Ÿ‘. ๐”๐ฉ๐๐š๐ญ๐ž ๐๐š๐ซ๐š๐ฆ๐ž๐ญ๐ž๐ซ๐ฌ: The computed gradients are then used to update the networkโ€™s weights and biases in the direction that minimizes the loss. This is typically done using an optimization algorithm like gradient descent.

โœจ ๐’๐ฎ๐ฆ๐ฆ๐š๐ซ๐ฒ โœจ

๐…๐จ๐ซ๐ฐ๐š๐ซ๐ ๐๐ซ๐จ๐ฉ๐š๐ ๐š๐ญ๐ข๐จ๐ง: The process of passing input data through the network to generate an output.

๐๐š๐œ๐ค๐ฐ๐š๐ซ๐ ๐๐ซ๐จ๐ฉ๐š๐ ๐š๐ญ๐ข๐จ๐ง: The process of updating the networkโ€™s parameters by computing and using the gradients of the loss function concerning each parameter.

Both processes are repeated iteratively during the training phase of a neural network to minimize the loss and improve the model's performance.
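Putting the two together, a training loop just alternates forward and backward passes. This sketch trains the same single-neuron model as before on an assumed toy dataset; watching the recorded loss shrink is the whole point of the iteration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(2)
X = rng.normal(size=(32, 2))
y = (X.sum(axis=1) > 0).astype(float)   # toy binary targets
w, b, lr = np.zeros(2), 0.0, 0.1
eps = 1e-12                             # keeps log() away from zero

losses = []
for epoch in range(200):
    # Forward propagation: generate predictions
    y_hat = sigmoid(X @ w + b)
    losses.append(-np.mean(y * np.log(y_hat + eps)
                           + (1 - y) * np.log(1 - y_hat + eps)))
    # Backward propagation: gradients, then parameter update
    dz = (y_hat - y) / len(y)
    w -= lr * (X.T @ dz)
    b -= lr * dz.sum()
```

After the loop, `losses[-1]` is well below `losses[0]`: each iteration nudges the parameters downhill on the loss surface.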

#๐ƒ๐š๐ญ๐š๐€๐ง๐š๐ฅ๐ฒ๐ฌ๐ข๐ฌ #FowardPropagation #BackwordPropagation #DataScience #๐ƒ๐š๐ญ๐š๐„๐ง๐ ๐ข๐ง๐ž๐ž๐ซ๐ข๐ง๐  #๐Œ๐š๐œ๐ก๐ข๐ง๐ž๐‹๐ž๐š๐ซ๐ง๐ข๐ง๐  #๐ƒ๐ž๐ž๐ฉ๐‹๐ž๐š๐ซ๐ง๐ข๐ง๐  #LossFunction #๐๐ž๐ฎ๐ซ๐š๐ฅ๐๐ž๐ญ๐ฐ๐จ๐ซ๐ค #๐ƒ๐ข๐ ๐ข๐ญ๐š๐ฅ๐๐ซ๐š๐ข๐ง #๐€๐ซ๐ญ๐ข๐Ÿ๐ข๐œ๐ข๐š๐ฅ๐ˆ๐ง๐ญ๐ž๐ฅ๐ฅ๐ข๐ ๐ž๐ง๐œ๐ž #๐†๐ž๐ง๐ž๐ซ๐š๐ญ๐ข๐ฏ๐ž๐€๐ˆ #๐๐ž๐ซ๐œ๐ž๐ฉ๐ญ๐ซ๐จ๐ง #๐Œ๐‹๐ #LLM
