Transparent Multi-GPU Training on TensorFlow with Keras

Very nice!

Do you have a similar utility for model parallelism? I have a big embedding variable that doesn't fit in a single GPU's memory, and I want to distribute it over multiple GPUs in one machine.
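One common approach to this is "mod" sharding: split the embedding table row-wise across devices, route each lookup id to the shard that owns it, and stitch the results back into the original order. Below is a minimal sketch of that pattern in TensorFlow 2.x eager mode; `NUM_GPUS`, `sharded_lookup`, and the shard variable names are hypothetical, and the sketch falls back to CPU when no GPUs are present so it stays runnable anywhere.

```python
import tensorflow as tf

NUM_GPUS = 2        # hypothetical number of shards/devices
VOCAB = 1000        # hypothetical vocabulary size
DIM = 64            # hypothetical embedding dimension

# Shard i owns the rows whose id satisfies id % NUM_GPUS == i,
# stored contiguously, so global id -> (shard id % N, local row id // N).
gpus = tf.config.list_physical_devices("GPU")
shards = []
for i in range(NUM_GPUS):
    rows = (VOCAB - i + NUM_GPUS - 1) // NUM_GPUS   # rows owned by shard i
    device = f"/GPU:{i}" if len(gpus) > i else "/CPU:0"
    with tf.device(device):
        shards.append(
            tf.Variable(tf.random.normal([rows, DIM]), name=f"emb_shard_{i}")
        )

def sharded_lookup(ids):
    """Look up embedding rows for `ids` across all shards."""
    ids = tf.convert_to_tensor(ids, dtype=tf.int32)
    owner = tf.math.floormod(ids, NUM_GPUS)          # shard owning each id
    # Partition both the ids and their positions in the batch by owner.
    id_parts = tf.dynamic_partition(ids, owner, NUM_GPUS)
    pos_parts = tf.dynamic_partition(
        tf.range(tf.shape(ids)[0]), owner, NUM_GPUS
    )
    # Each shard gathers its local rows (global id // NUM_GPUS).
    gathered = [
        tf.gather(shards[i], id_parts[i] // NUM_GPUS)
        for i in range(NUM_GPUS)
    ]
    # Stitch the per-shard results back into the original batch order.
    return tf.dynamic_stitch(pos_parts, gathered)
```

The gradient of `tf.gather` is sparse, so each shard's variable only receives updates for the rows actually touched; this is essentially what `tf.nn.embedding_lookup` did with `partition_strategy="mod"` in TF 1.x.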
