In current work on neural nets, there’s a definite tradeoff one sees. The more what’s going on inside the neural net is like a simple mathematical function with essentially arithmetic parameters, the easier it is to use ideas from calculus to train the network. But the more what’s going on is like a discrete program, or like a computation whose whole structure can change, the more difficult it is to train the network.
From Stephen Wolfram, “A New Kind of Science: A 15-Year View”

What does “training” mean? How does training look different with a function vs. a computation?
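To make the question concrete, here is a minimal sketch (my own illustration, not from Wolfram’s piece). In the smooth case the loss is a differentiable function of the parameter `w`, so calculus hands us a gradient to descend; in the discrete case the “parameter” is a rule table, the loss jumps rather than varying smoothly, and training degrades to search. The toy targets, learning rate, and random-search budget are all arbitrary choices for illustration.

```python
import numpy as np

# Smooth case: a one-parameter model y = w * x with mean-squared-error loss.
# The loss is a differentiable function of w, so calculus gives a gradient
# and plain gradient descent adjusts the arithmetic parameter continuously.
xs = np.array([1.0, 2.0, 3.0, 4.0])
ys = 2.0 * xs                      # targets from the true function y = 2x

w, lr = 0.0, 0.01                  # illustrative initial value and step size
for _ in range(200):
    grad = np.mean(2.0 * (w * xs - ys) * xs)   # d(loss)/dw from calculus
    w -= lr * grad
print(f"learned w = {w:.3f}")      # converges near 2.0

# Discrete case: the "parameter" is a lookup table mapping inputs to outputs.
# Changing one entry flips the output wholesale, so the loss jumps instead of
# varying smoothly; there is no gradient, and training becomes a search over
# discrete rules (here, crude random search).
rng = np.random.default_rng(0)

def loss(rule):
    return sum((rule[x] - 2 * x) ** 2 for x in rule)

candidates = ({x: int(rng.integers(0, 9)) for x in range(1, 5)}
              for _ in range(1000))
best = min(candidates, key=loss)
print("best random rule:", best, "loss:", loss(best))
```

With a thousand random tables there’s a decent chance of stumbling on the exact rule, but nothing guides the search the way the gradient does above; that seems to be the training gap the quote points at.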
