Here we have introduced new notation: *wˡᵤᵥ* denotes the weight of the connection from the *v*:th neuron in layer *l−1* to the *u*:th neuron in layer *l*, and *bˡᵤ* is the bias of the *u*:th neuron in layer *l*. The expression can be a bit confusing, particularly because of all the new indices. But the biggest takeaway is that the neural network is just a mathematical function, and this function can be *differentiated* with respect to any variable. Using our new notation, we will define an error function, or “cost” function, *C* on a sample of our training data, and then see how the error changes as we change our weights.
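To make the “network as a function” idea concrete, here is a minimal sketch (the layer sizes, sigmoid activation, and quadratic cost are illustrative assumptions, not fixed by the text). It builds a tiny network from weight matrices *Wˡ* and bias vectors *bˡ*, evaluates a cost *C* on one training sample, and estimates how *C* changes when a single weight is nudged, i.e. a numerical ∂C/∂w:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, weights, biases):
    # aˡ = sigmoid(Wˡ aˡ⁻¹ + bˡ), applied layer by layer
    a = x
    for W, b in zip(weights, biases):
        a = sigmoid(W @ a + b)
    return a

def cost(weights, biases, x, y):
    # Quadratic cost on a single training sample (an illustrative choice)
    return 0.5 * np.sum((forward(x, weights, biases) - y) ** 2)

# A tiny 2-3-1 network with random but fixed parameters
rng = np.random.default_rng(0)
weights = [rng.standard_normal((3, 2)), rng.standard_normal((1, 3))]
biases = [rng.standard_normal(3), rng.standard_normal(1)]
x, y = np.array([0.5, -0.2]), np.array([1.0])

# Estimate dC/dw for one weight: nudge it and watch the cost change
# (central finite difference, not backpropagation)
eps = 1e-6
weights[0][0, 0] += eps
c_plus = cost(weights, biases, x, y)
weights[0][0, 0] -= 2 * eps
c_minus = cost(weights, biases, x, y)
weights[0][0, 0] += eps  # restore the original weight
grad_estimate = (c_plus - c_minus) / (2 * eps)
print(grad_estimate)  # approximates ∂C/∂w for that single weight
```

This finite-difference probe is exactly the “see how the error changes as we change our weights” idea; backpropagation, which the following sections develop, computes the same derivatives analytically and far more efficiently.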