Why it’s Not Difficult to Train A Neural Network with a Dynamic Structure Anymore!
Tanay Gahlot

I would also add Chainer (http://chainer.org/) to the list.

It’s similar to Torch in that it is imperative (you define the computation graph by running it), but it is all Python, with light wrappers over NumPy/CUDA that perform backprop. Implementing new ops is as simple as writing two NumPy functions: one for the forward pass and one for the backward pass. It has already saved me a couple of times when implementing complex dynamic model architectures.
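To illustrate the idea, here is a minimal sketch of what “two functions per op” looks like: a forward function that computes the output and a backward function that applies the chain rule. This is plain Python for clarity, not actual Chainer code (Chainer itself has you subclass its Function class and write the same pair of methods over NumPy arrays).

```python
# Schematic of the forward/backward pair behind a custom op.
# Example op: y = x^2. (Illustrative sketch, not real Chainer API.)

def square_forward(x):
    # Forward pass: compute the op's output.
    return x * x

def square_backward(x, grad_y):
    # Backward pass: dy/dx = 2x, multiplied by the incoming
    # gradient per the chain rule.
    return 2 * x * grad_y

y = square_forward(3.0)          # forward: 3^2 = 9.0
gx = square_backward(3.0, 1.0)   # backward: 2*3*1 = 6.0
```

Because both halves are ordinary Python, the framework can build the graph as the code runs, which is what makes dynamic architectures straightforward.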
