On a previous episode of AI Adventures, we looked at Colab as a great way to get started in the data science and machine learning world. But some models need to run for a long time (Colab instances will reset after several hours), or you may want more memory or GPUs than are provided for free.
The question then becomes: How can we hook up Colab’s frontend with some more compute power? We’re going to use Google Cloud Platform’s Deep Learning VMs to power up your Colab environment. …
Did you know that you can convert a Keras model to a TensorFlow Estimator? It will give you a whole host of options around distributed training and scaling. We’re going to prepare a Keras model for running at scale by converting it to a TensorFlow Estimator.
So we have a Keras model; easy to define, clear to read, and friendly to maintain. But it doesn’t do so well when scaling up to larger datasets or running across many machines.
Luckily, Keras and TensorFlow have some fantastic interoperability features.
What we’d like to do is convert our Keras model to a TensorFlow Estimator, which comes with distributed training built-in. That’s our ticket to solving our scaling challenges. …
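The conversion itself is a single call to `tf.keras.estimator.model_to_estimator`. Here's a minimal sketch, assuming a small made-up classification model and random stand-in data (the architecture, shapes, and training steps are illustrative, not from the screencast; the API shown is the TensorFlow 1.x-era endpoint, which still exists in TensorFlow 2.x in deprecated form):

```python
import numpy as np
import tensorflow as tf

# A small Keras model (hypothetical architecture, for illustration only).
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(3, activation='softmax'),
])

# The model must be compiled before it can be converted.
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# One line: Keras model in, TensorFlow Estimator out. Distributed
# training options (e.g. a tf.estimator.RunConfig) can be passed here.
estimator = tf.keras.estimator.model_to_estimator(keras_model=model)

# Train with an input_fn, just like any other Estimator. Features are
# keyed by the Keras input layer's name.
def input_fn():
    features = np.random.rand(32, 4).astype(np.float32)
    labels = tf.keras.utils.to_categorical(
        np.random.randint(0, 3, size=32), num_classes=3)
    dataset = tf.data.Dataset.from_tensor_slices(
        ({model.input_names[0]: features}, labels))
    return dataset.batch(8)

estimator.train(input_fn=input_fn, steps=4)
```

From there, the Estimator's `train`, `evaluate`, and distributed-training machinery all apply, without rewriting the model definition.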
This is a short post — the real substance is in the screencast below, where I walk through the code!
If you’re getting started in the machine learning world (and let’s be real here, who isn’t?), the tooling seems to just keep getting better and better. Keras has been a key tool for some time, and now it’s been integrated right into TensorFlow. Better together, right? And it just so happens that it’s never been easier to get started with Keras.
But wait, what exactly is Keras, and how can you use it to get started creating your own machine learning models? …
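To give a taste of how little code a Keras model takes, here's a minimal sketch that fits a single-unit network to the line y = 2x − 1 (the toy data and architecture are invented for illustration, not taken from the episode):

```python
import numpy as np
import tensorflow as tf

# Toy data sampled from the line y = 2x - 1 (illustrative only).
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=np.float32)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=np.float32)

# One Dense layer is the entire model: easy to define, clear to read.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(units=1, input_shape=(1,)),
])
model.compile(optimizer='sgd', loss='mean_squared_error')

# Train, then predict; the result should land close to 2*10 - 1 = 19.
model.fit(xs, ys, epochs=500, verbose=0)
prediction = model.predict(np.array([[10.0]]), verbose=0)
print(prediction)
```

That's the whole workflow: define, compile, fit, predict; the same pattern scales from this toy line fit up to real models.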