Two Steps Forward, Two Steps Back
Repeat Until Convergence (Neural Networks and Back Propagation)
In this post, I’ll start with a high-level review of what we’ve learned so far about neural networks and how they work, up through a complete forward pass. Then I’ll conceptually walk through the back propagation technique: using gradient descent to adjust the randomized weight and bias values so that predictions align more closely with the actual labels. Along the way, we’ll uncover some really neat mathematical effects of using the ReLU activation function, and see how the chain rule is applied to make finding the gradients across all layers tractable.
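
To make the shape of the whole loop concrete before we dig in, here is a minimal NumPy sketch of the cycle described above: a forward pass through one ReLU hidden layer, a backward pass built from the chain rule, and a single gradient descent update. This is an illustrative toy, not the exact network from this series; all names (W1, b1, W2, b2, lr) and the mean squared error loss are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 4 samples, 3 input features, 1 target value each (illustrative only).
X = rng.normal(size=(4, 3))
y = rng.normal(size=(4, 1))

# Randomly initialized weights and biases, as described above.
W1, b1 = rng.normal(size=(3, 5)), np.zeros((1, 5))
W2, b2 = rng.normal(size=(5, 1)), np.zeros((1, 1))
lr = 0.01  # learning rate

# --- Forward pass ---
z1 = X @ W1 + b1                   # hidden layer pre-activation
a1 = np.maximum(0, z1)             # ReLU: zero out negatives, pass positives through
y_hat = a1 @ W2 + b2               # linear output layer
loss = np.mean((y_hat - y) ** 2)   # mean squared error between predictions and labels

# --- Backward pass (chain rule, layer by layer) ---
d_yhat = 2 * (y_hat - y) / len(X)           # dLoss/dy_hat
dW2 = a1.T @ d_yhat                          # dLoss/dW2
db2 = d_yhat.sum(axis=0, keepdims=True)      # dLoss/db2
d_a1 = d_yhat @ W2.T                         # propagate back through the output layer
d_z1 = d_a1 * (z1 > 0)                       # ReLU derivative: 1 where z1 > 0, else 0
dW1 = X.T @ d_z1                             # dLoss/dW1
db1 = d_z1.sum(axis=0, keepdims=True)        # dLoss/db1

# --- Gradient descent update: nudge each parameter against its gradient ---
W1 -= lr * dW1; b1 -= lr * db1
W2 -= lr * dW2; b2 -= lr * db2
```

Running that forward/backward/update cycle repeatedly until the loss stops improving is the "repeat until convergence" part of the title; the rest of the post unpacks where each of those gradient expressions comes from.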