TensorFlow Object Detection API in 5 clicks from Colaboratory
If one of your objectives is to do research in data science, machine learning, or a similar area, while spending as little time as possible configuring the environment, a very good option from the Google Research team is Colaboratory.
For this occasion I prepared an implementation of the TensorFlow Object Detection model in just 5 clicks. How about we analyze the objects in an image, with a setup that takes less than 60 seconds?
You are surely wondering: how can I access this implementation? The answer is very simple: open the [Colaboratory notebook] and follow the proposed sequence of steps. *The details of each step are described below.
In this step I included the task of downloading the model to the virtual environment provided by Colaboratory. The rest of the libraries, such as TensorFlow, are already available when you connect to the environment, so they are not a concern for this implementation.
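The download step can be sketched as follows. The model name is the example used by the official Object Detection notebook; any frozen model from the TF1 detection model zoo can be substituted.

```python
import tarfile
import urllib.request

# Example model from the official notebook; swap in any TF1 model zoo name.
MODEL_NAME = 'ssd_mobilenet_v1_coco_2017_11_17'
MODEL_FILE = MODEL_NAME + '.tar.gz'
DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/'

def download_model(model_file=MODEL_FILE):
    """Fetch the model archive and unpack it into the Colab VM's filesystem."""
    urllib.request.urlretrieve(DOWNLOAD_BASE + model_file, model_file)
    with tarfile.open(model_file) as tar:
        tar.extractall()
```

Because the archive lands on the Colab virtual machine, nothing is written to your own computer.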
Now let's set up all the imports needed to run the object detection example. Pay special attention to the object detection utilities.
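A minimal sketch of those imports is shown below. NumPy, TensorFlow, and PIL come preinstalled in Colab; the `object_detection` package comes from the tensorflow/models repository, so the try/except is only there to make the sketch readable outside Colab.

```python
import numpy as np

try:
    import tensorflow as tf
    from PIL import Image
    # The object detection utilities highlighted in the text:
    from object_detection.utils import label_map_util
    from object_detection.utils import visualization_utils as vis_util
except ImportError:
    # Outside Colab, install TensorFlow and put tensorflow/models/research
    # on PYTHONPATH for these imports to resolve.
    tf = Image = label_map_util = vis_util = None
```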
This step is very important: we can swap the detection model for a more accurate one, but doing so can also greatly affect the performance of our detection functions. Pay attention to MODEL_NAME.
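To make the trade-off concrete, here is a hypothetical helper (not part of the notebook) that picks a MODEL_NAME. Both names are examples from the TF1 detection model zoo: SSD with MobileNet is fast but less accurate, while Faster R-CNN with ResNet-101 is slower but more accurate.

```python
# Example model zoo names illustrating the speed/accuracy trade-off.
FAST_MODEL = 'ssd_mobilenet_v1_coco_2017_11_17'           # fast, lower accuracy
ACCURATE_MODEL = 'faster_rcnn_resnet101_coco_2018_01_28'  # slower, higher accuracy

def pick_model(prefer_accuracy=False):
    """Return a MODEL_NAME to plug into the download step."""
    return ACCURATE_MODEL if prefer_accuracy else FAST_MODEL
```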
In addition, we load an already trained and saved (frozen) model. Finally, we define a function to convert an image into a NumPy array.
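The image-to-array conversion follows the official example; the frozen-graph loading is summarized in the comments because it relies on the TF 1.x API.

```python
import numpy as np

# Loading the frozen graph (TF 1.x API, as in the official notebook):
#   detection_graph = tf.Graph()
#   with detection_graph.as_default():
#       od_graph_def = tf.GraphDef()
#       with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as fid:
#           od_graph_def.ParseFromString(fid.read())
#       tf.import_graph_def(od_graph_def, name='')

def load_image_into_numpy_array(image):
    """Convert a PIL image into a (height, width, 3) uint8 array."""
    (im_width, im_height) = image.size
    return np.array(image.getdata()).reshape(
        (im_height, im_width, 3)).astype(np.uint8)
```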
In this segment of the implementation we set up the test images to be used; they are part of the official Object Detection example.
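This step can be sketched as below. The path assumes the tensorflow/models repository layout, where the official example ships two sample images.

```python
import os

# Directory and file names from the official example's repository layout.
PATH_TO_TEST_IMAGES_DIR = 'test_images'
TEST_IMAGE_PATHS = [
    os.path.join(PATH_TO_TEST_IMAGES_DIR, 'image{}.jpg'.format(i))
    for i in range(1, 3)
]
```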
We run the TensorFlow session, iterate over the images, and perform the detection of objects.
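The per-image detection loop can be sketched as follows, using the TF 1.x session API and the tensor names the frozen detection graphs expose. This is a sketch of the official example's loop, not a drop-in replacement for the notebook cell.

```python
import numpy as np

def run_inference(detection_graph, image_np):
    """Run one image through the frozen detection graph (TF 1.x API).

    `detection_graph` is the graph loaded earlier; `image_np` is the
    HxWx3 uint8 array produced by load_image_into_numpy_array.
    """
    import tensorflow as tf  # imported here to keep the sketch self-contained
    with detection_graph.as_default():
        with tf.Session(graph=detection_graph) as sess:
            image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
            boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
            scores = detection_graph.get_tensor_by_name('detection_scores:0')
            classes = detection_graph.get_tensor_by_name('detection_classes:0')
            num = detection_graph.get_tensor_by_name('num_detections:0')
            # The model expects a batch dimension: shape (1, H, W, 3).
            image_expanded = np.expand_dims(image_np, axis=0)
            return sess.run([boxes, scores, classes, num],
                            feed_dict={image_tensor: image_expanded})
```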
Note: a Google sign-in is required, which keeps the operation simple and transparent.
I also included some ideas for adding your own photographs and running detection tests. [custom personal images section in the Colaboratory notebook]
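One way to bring in your own photographs, assuming you are inside a Colab runtime, is the `google.colab.files.upload()` helper; this is a sketch, not the notebook's exact cell.

```python
def upload_custom_images():
    """Prompt the browser for image files to run detection on.

    Uses google.colab.files.upload(), which only works inside Colab.
    Returns the uploaded file names, ready to append to TEST_IMAGE_PATHS.
    """
    from google.colab import files
    uploaded = files.upload()
    return list(uploaded.keys())
```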