Computer Vision Mobile App — End-to-end AI pipeline demo using Onepanel

The following steps describe how to use the APIs consumed by the demo application. The code for the app is hosted here. The whole project can be visualized in the block diagram below:

Onepanel · Oct 2, 2019


Steps

  1. Resume dataset-upload-api and run video_upload.py to start the API.

The basic idea of file uploads is quite simple. It works like this:

A <form> tag is marked with enctype=multipart/form-data and an <input type=file> is placed in that form.

The application accesses the file from the files dictionary on the request object.

The application uses the save() method of the file to store it permanently somewhere on the filesystem.

The werkzeug.secure_filename() function is explained a little later. UPLOAD_FOLDER is where we will store the uploaded files, and ALLOWED_EXTENSIONS is the set of allowed file extensions.
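A minimal configuration sketch (the folder path and extension set below are assumptions, not necessarily what video_upload.py uses):

```python
import os
from flask import Flask, request, redirect, url_for, send_from_directory
from werkzeug.utils import secure_filename

# Assumed values for illustration; adjust them to match your deployment.
UPLOAD_FOLDER = '/data/uploads'
ALLOWED_EXTENSIONS = {'mp4', 'avi', 'mov', 'jpg', 'jpeg', 'png'}

app = Flask(__name__)
app.config['UPLOAD_FOLDER'] = UPLOAD_FOLDER
```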

Next come the functions that check whether an extension is valid, upload the file, and redirect the user to the URL for the uploaded file:
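A hedged sketch, continuing the snippet above (the route paths here are illustrative, not necessarily the ones exposed by video_upload.py):

```python
def allowed_file(filename):
    # A file is valid only if it has an extension and that extension is allowed.
    return '.' in filename and \
        filename.rsplit('.', 1)[1].lower() in ALLOWED_EXTENSIONS

@app.route('/upload', methods=['POST'])
def upload_file():
    file = request.files['file']
    if file and allowed_file(file.filename):
        # secure_filename() strips path separators and other unsafe characters.
        filename = secure_filename(file.filename)
        file.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))
        # Send the client to the URL where the uploaded file can be retrieved.
        return redirect(url_for('uploaded_file', filename=filename))
    return 'File type not allowed', 400

@app.route('/uploads/<filename>')
def uploaded_file(filename):
    return send_from_directory(app.config['UPLOAD_FOLDER'], filename)
```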

2. Install the TensorFlow Object Detection API

If it’s already installed, you can check your $PYTHONPATH and move on to the usage section. Here's a quick (unofficial) guide on how to do that. For more details, follow the official guide, Install TensorFlow Object Detection API.
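As a quick, unofficial sanity check (assuming the standard layout of the tensorflow/models repository):

```python
import sys

# The Object Detection API expects models/research and models/research/slim
# to be on PYTHONPATH; they should show up here after installation.
print([p for p in sys.path if 'research' in p])

# If this import succeeds, the API is installed and importable.
from object_detection.utils import label_map_util  # noqa: F401
```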

Usage

Run the converter script.

Leave the --attribute argument empty if you want the converter to use CVAT labels as the TFRecord labels; otherwise, specify a user attribute name like --attribute <attribute>.

Please run python converter.py --help for more details.

Once the data is annotated and converted to TFRecords, use the following to train a model on it:
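The exact command depends on your pipeline config; a hedged sketch, assuming the TF1-style model_main.py entry point from the Object Detection API and placeholder paths for the config and output directory:

```python
import subprocess

# Placeholder paths; point these at your pipeline config and an output directory.
PIPELINE_CONFIG = 'training/pipeline.config'
MODEL_DIR = 'training/output'

# Launch a training run of the Object Detection API.
subprocess.run([
    'python', 'object_detection/model_main.py',
    '--pipeline_config_path', PIPELINE_CONFIG,
    '--model_dir', MODEL_DIR,
    '--num_train_steps', '10000',
    '--alsologtostderr',
], check=True)
```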

Start the Inference API

Once we’ve retrained our model and exported it to disk, we can host it as a service. We’ll load the model from disk with a simple function that takes the graph definition directly from the file and uses it to generate a graph. TensorFlow does most of this for us.
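A minimal sketch of such a loading function (the helper name and the frozen_inference_graph.pb file name are assumptions; use whatever file you exported):

```python
import tensorflow as tf

def load_graph(model_file):
    # Read the serialized GraphDef from disk and import it into a fresh graph.
    graph = tf.Graph()
    graph_def = tf.compat.v1.GraphDef()
    with tf.io.gfile.GFile(model_file, 'rb') as f:
        graph_def.ParseFromString(f.read())
    with graph.as_default():
        tf.import_graph_def(graph_def, name='')
    return graph

graph = load_graph('frozen_inference_graph.pb')
```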

Using Flask, much of the heavy lifting around configuring a server and handling requests is done for us. After we’ve created a Flask app object:
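A minimal sketch:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)
```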

Then, we can easily create routes for where our classification service will live. Let’s create a default route to our classify() function that will allow us to pass an image to the endpoint for identification.

Using the decorator syntax to define the route configures the service so that our classify() function is called every time someone hits the root of our service address. We said we wanted users to be able to specify a file to be identified, so we’ll store that as a parameter from the request:
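Continuing the sketches above, a hedged version of the route might look like the following. The upload folder, the form field name "file", the behavior of read_tensor_from_image_file(), and the tensor names Placeholder:0 and final_result:0 are all assumptions; match them to your exported model.

```python
import os
import numpy as np
from PIL import Image
from werkzeug.utils import secure_filename

UPLOAD_FOLDER = '/tmp/uploads'  # assumed location for incoming images
os.makedirs(UPLOAD_FOLDER, exist_ok=True)

def read_tensor_from_image_file(file_name, input_height=299, input_width=299,
                                input_mean=0, input_std=255):
    # Decode, resize, and normalize the image into a [1, H, W, 3] float array.
    image = Image.open(file_name).convert('RGB').resize((input_width, input_height))
    data = (np.asarray(image, dtype=np.float32) - input_mean) / input_std
    return np.expand_dims(data, axis=0)

@app.route('/', methods=['POST'])
def classify():
    # The image arrives as multipart/form-data under the "file" key (assumed name).
    file = request.files['file']
    file_path = os.path.join(UPLOAD_FOLDER, secure_filename(file.filename))
    file.save(file_path)

    # Turn the saved image into the tensor the retrained graph expects.
    t = read_tensor_from_image_file(file_path)

    # Run the retrained graph to get class probabilities.
    input_op = graph.get_tensor_by_name('Placeholder:0')    # assumed input name
    output_op = graph.get_tensor_by_name('final_result:0')  # assumed output name
    with tf.compat.v1.Session(graph=graph) as sess:
        results = sess.run(output_op, {input_op: t})

    # Return the raw probability array as JSON.
    return jsonify(results[0].tolist())
```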

In this line, the variable t represents the image tensor created by the read_tensor_from_image_file() function. TensorFlow will then take that image and run the newly retrained model to generate predictions.

Those predictions come as a series of probabilities that indicate which of the classes (poodle, pug, or wiener dog) is the most likely. Since this is just a prediction service, it will simply return a JSON representation of the arrays.

Inside our script we can start our service with:
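For instance (binding to 0.0.0.0 is an assumption so the mobile app can reach the service; 5000 is Flask's default port):

```python
if __name__ == '__main__':
    # Listen on all interfaces so the Android app can reach the service.
    app.run(host='0.0.0.0', port=5000)
```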

Then, if we want to launch the script from the command line, all we have to do is run python app.py and it will initialize and start running on port 5000.

Using the Service

We can now use this service either by visiting it in a web browser or by making any REST call against that port. For an easy test, we can access it using the following code snippet:
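A hedged example using the requests library (the URL, form field name, and image path are assumptions matching the sketches above):

```python
import requests

# POST a local image to the classification endpoint and print the probabilities.
with open('dog.jpg', 'rb') as f:
    response = requests.post('http://localhost:5000/', files={'file': f})

print(response.json())
```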

Download the App

Download the Android app by scanning the QR code, or click on this link.
