Refining Neural Intelligence with Forward Flow and Backward Tuning

Forward Propagation
Forward propagation refers to the process by which input data is passed through a neural network to generate an output. It involves the following steps:
1. Input Layer: The input data is fed into the network.
2. Hidden Layers: The data is then passed through one or more hidden layers. Each neuron in these layers applies a linear transformation followed by a non-linear activation function.
3. Output Layer: The final transformed data reaches the output layer, producing the network's prediction or output.
During forward propagation, no adjustments to the networkโs weights and biases are made. It is simply the process of generating predictions based on the current state of the network.
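The three steps above can be sketched with NumPy. The layer sizes, weight initialization, and activation choice here are illustrative assumptions, not part of any particular architecture:

```python
import numpy as np

def relu(x):
    # ReLU non-linear activation
    return np.maximum(0, x)

# Hypothetical sizes: 3 inputs -> 4 hidden units -> 1 output
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # hidden-layer parameters
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)   # output-layer parameters

def forward(x):
    h = relu(W1 @ x + b1)   # hidden layer: linear transform + activation
    return W2 @ h + b2      # output layer: the network's prediction

prediction = forward(np.array([0.5, -1.0, 2.0]))
print(prediction.shape)  # (1,)
```

Note that `forward` only reads the weights and biases; nothing is updated, which is exactly the point made above.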
Backward Propagation
Backward propagation, or backpropagation, is the process used to update the networkโs parameters based on the error of the prediction. It involves the following steps:
1. Calculate the Loss: The difference between the network's prediction (the output of forward propagation) and the actual target values is computed using a loss function.
2. Compute Gradients: Using the chain rule of calculus, the gradient of the loss with respect to each weight and bias in the network is computed. This involves working backward through the network, from the output layer to the input layer, hence the name backward propagation.
3. Update Parameters: The computed gradients are then used to update the network's weights and biases in the direction that minimizes the loss. This is typically done using an optimization algorithm like gradient descent.
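For a single linear neuron with a mean-squared-error loss, the three steps can be sketched end to end (the toy data and learning rate are my own illustrative choices; real networks repeat the same pattern layer by layer):

```python
import numpy as np

# Toy data generated by y = 2x, so the ideal parameters are w = 2, b = 0
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])
w, b = 0.0, 0.0
lr = 0.05  # learning rate

for _ in range(200):
    y_hat = w * x + b                       # forward pass: current prediction
    loss = np.mean((y_hat - y) ** 2)        # step 1: calculate the loss (MSE)
    dloss_dyhat = 2 * (y_hat - y) / len(x)  # step 2: chain rule, starting at the loss
    dw = np.sum(dloss_dyhat * x)            # dL/dw
    db = np.sum(dloss_dyhat)                # dL/db
    w -= lr * dw                            # step 3: gradient-descent update
    b -= lr * db

print(w, b)  # w approaches 2, b approaches 0
```

Each iteration moves `w` and `b` a small step against their gradients, so the loss shrinks toward zero.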
Summary
Forward Propagation: The process of passing input data through the network to generate an output.
Backward Propagation: The process of updating the network's parameters by computing and applying the gradients of the loss function with respect to each parameter.
Both processes are repeated iteratively during the training phase of a neural network to minimize the loss and improve the modelโs performance.
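One training loop that alternates the two passes might look like the following minimal sketch. The toy regression task, layer sizes, and learning rate are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy task: regress y = sin(x) from 64 samples
X = rng.uniform(-3, 3, size=(64, 1))
Y = np.sin(X)

# One hidden layer of 8 tanh units (sizes are illustrative)
W1, b1 = rng.normal(0, 0.5, (1, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)
lr = 0.05
losses = []

for epoch in range(500):
    # Forward propagation: input -> hidden -> output
    H = np.tanh(X @ W1 + b1)
    Y_hat = H @ W2 + b2
    losses.append(np.mean((Y_hat - Y) ** 2))

    # Backward propagation: chain rule from the output layer back to the input
    dY = 2 * (Y_hat - Y) / len(X)
    dW2, db2 = H.T @ dY, dY.sum(axis=0)
    dZ1 = (dY @ W2.T) * (1 - H ** 2)   # tanh'(z) = 1 - tanh(z)^2
    dW1, db1 = X.T @ dZ1, dZ1.sum(axis=0)

    # Parameter update via gradient descent
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The loss recorded each epoch falls over the course of training, which is the iterative improvement described above.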
#DataAnalysis #ForwardPropagation #BackwardPropagation #DataScience #DataEngineering #MachineLearning #DeepLearning #LossFunction #NeuralNetwork #DigitalBrain #ArtificialIntelligence #GenerativeAI #Perceptron #ANN #LLM
Python's Gurus
Thank you for being a part of the Python's Gurus community!