However, the computational power of RNNs makes them very hard to train. The main difficulty is the exploding or vanishing gradients problem. As we backpropagate through many layers, what happens to the magnitude of the gradients? If the weights are small, the gradients shrink exponentially; if the weights are big, the gradients grow exponentially. Typical feed-forward neural nets can cope with these exponential effects because they only have a few hidden layers. In an RNN trained on long sequences, by contrast, the gradients can easily explode or vanish. Even with good initial weights, it is very hard to detect that the current target output depends on an input from many time-steps ago, so RNNs have difficulty dealing with long-range dependencies.
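The exponential effect is easy to see in a toy scalar model: ignoring the nonlinearity's derivative, a gradient backpropagated through T time-steps is multiplied by the recurrent weight T times, so it scales like w**T. A minimal sketch (the function name and chosen weights are illustrative, not from any particular library):

```python
def gradient_magnitude(w, steps):
    """Magnitude of a unit gradient after backpropagating `steps` times
    through a scalar recurrent weight `w` (toy model: each step multiplies
    the gradient by w, so the result is |w| ** steps)."""
    grad = 1.0
    for _ in range(steps):
        grad *= w
    return abs(grad)

# Weights below 1 shrink the gradient exponentially; weights above 1 grow it.
small = gradient_magnitude(0.5, 50)  # 0.5 ** 50, about 8.9e-16: vanished
big = gradient_magnitude(1.5, 50)    # 1.5 ** 50, about 6.4e8: exploded
print(small, big)
```

With 50 time-steps the gradient is already either negligible or astronomically large, which is why even well-initialized RNNs struggle to learn dependencies spanning many steps.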