Machine Learning Engineering Part 2: Understanding ports and running multiple Flask servers with different I/O.

Part 2 of hands-on coding tutorials on the fundamentals of Machine Learning Engineering. Creating models is fun, operationalizing them can be hard!

Bram Bloks
LaunchAI
6 min read · Mar 1, 2019


Short recap

Intent of this series

Creating machine learning models has become easier and very accessible; operationalizing them, however, often remains a challenge. As a result, valuable algorithms and models are left on the shelf, and data scientists, teams and companies do not benefit from their expensive AI/ML efforts. This series covers some essential theory and techniques in the domain of operationalizing ML, a.k.a. Machine Learning Engineering.

Today’s challenge

In the previous part, Machine Learning Engineering Part 1, we created an API from a custom algorithm that returned all the prime numbers in a given range. We deployed it as a web server on our local machine using the Python Flask framework. Once deployed, we sent some data to the server using POST requests and read back the response. In this part we will dive deeper into how data is sent to and retrieved from a web server, by doing the following:

  1. Adding GET requests
  2. Running multiple API servers simultaneously
  3. Upgrading the server to process different I/O data types

I have put all the code and resources for this series in a public Git repository, so you can access everything in one place.

Let’s get started!

1. Adding GET requests

At the end of the previous part of this series we had a REST API server running on port 3000, to which we could make POST requests. The server loaded and used our prime number algorithm. Our script looked like this:
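The full script from Part 1 lives in the repository; below is a minimal sketch of it. The helper get_primes and the JSON keys "range" and "primes" are assumptions made here for illustration, so adapt them to whatever your Part 1 code uses.

```python
# app.py - a minimal sketch of the Part 1 server; the helper name and the
# JSON keys ("range", "primes") are assumptions, adapt them to your own code.
from flask import Flask, request, jsonify

app = Flask(__name__)


def get_primes(limit):
    """Return all prime numbers from 2 up to and including `limit`."""
    primes = []
    for n in range(2, limit + 1):
        if all(n % p != 0 for p in primes):
            primes.append(n)
    return primes


@app.route('/process', methods=["POST"])
def process():
    data = request.json                          # e.g. {"range": 20}
    primes = get_primes(int(data["range"]))
    return jsonify({"primes": primes})


if __name__ == '__main__':
    app.run(host='127.0.0.1', port=3000)
```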

With the above script we run a web server on the host-port-suffix combination http://127.0.0.1:3000/process, which is the same as localhost:3000/process. You may have noticed that when you open this URL in your browser, you get a 405 error that states "Method Not Allowed". This is because we only enabled the POST method (sending data to the resource).

If we remove the suffix (http://127.0.0.1:3000/, the same as localhost:3000) and enter it in the browser, we get a 404 error: no resource available. That makes sense, since we only defined one route in our app, namely @app.route('/process', methods=["POST"]).

So far we have only been sending data to the server and fetching the response. To only retrieve data, without sending any, we can use GET requests. Luckily GET requests are an easy topic. Let’s update our server script with a simple GET request as below. Then restart the server.
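A sketch of the updated script, assuming the simple setup from above; the greeting text is just an example, and the /process route here only echoes the JSON back (keep your prime number logic from Part 1 in place):

```python
from flask import Flask, request, jsonify

app = Flask(__name__)


@app.route('/', methods=["GET"])
def index():
    # New: returned for any GET request to the base URL,
    # e.g. when you open http://127.0.0.1:3000/ in your browser.
    return "Hello, the API server is up and running!"


@app.route('/process', methods=["POST"])
def process():
    # Still accepts POST requests with JSON; here it simply echoes the
    # payload back - swap in your own processing logic.
    data = request.json
    return jsonify(data)


if __name__ == '__main__':
    app.run(host='127.0.0.1', port=3000)
```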

We have now specified a GET method for the base URL http://127.0.0.1:3000/. If we open this URL in our browser, we see the message that we put in our script.

If we look at our terminal at the same time, we can see that a GET request is made to our server every time we refresh the browser. You can also still send POST requests to the server; the server logs every request that is made. In your terminal it will look something like this:
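With the Flask development server the request log looks roughly like this (timestamps and details will differ on your machine):

```
127.0.0.1 - - [01/Mar/2019 10:02:11] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [01/Mar/2019 10:02:15] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [01/Mar/2019 10:02:31] "POST /process HTTP/1.1" 200 -
```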

Well done! Your Flask server is now equipped with POST and GET requests. Feel free to play around a bit with your code and add other options, or add more suffixes that enable requests.

2. Run multiple servers simultaneously

Running multiple Flask servers locally shouldn't really come as a surprise by now. Just as we have been playing around with the methods and the URL suffixes, we can also change the ports.

Create another Flask server script that runs on another port, for example port 4001; a minimal sketch is shown below. Then open two separate terminals and run both scripts. Each terminal will show its own server starting up on its own port.
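A sketch of the second server, reusing the GET/POST setup from above; apart from the greeting text, only the port number differs:

```python
# server_4001.py - a second server with the same structure as the first,
# bound to port 4001 instead of 3000.
from flask import Flask, request, jsonify

app = Flask(__name__)


@app.route('/', methods=["GET"])
def index():
    return "Hello from the server on port 4001!"


@app.route('/process', methods=["POST"])
def process():
    data = request.json
    return jsonify(data)


if __name__ == '__main__':
    app.run(host='127.0.0.1', port=4001)         # only the port changes
```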

Both servers are now running locally, each on its own port, and you can make requests to both. Note: make sure the port in your request script matches the server you want to reach. If you open both base URLs in the browser, you will retrieve the data from the GET requests.

3. Upgrade with different data input and output formats

Now that we have a feeling for how servers, ports and requests work, it is a good moment to look at how our servers can process different data types. From the last part you may recall that we were only able to send data in JSON format, for which we used the following Python script:
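A sketch of that request script, assuming the "range" payload key from the server sketch above; the original from Part 1 is in the repository:

```python
# request_json.py - sends a JSON payload to the prime number server.
import json

import requests

url = "http://127.0.0.1:3000/process"
headers = {"Content-Type": "application/json"}
payload = json.dumps({"range": 20})              # payload key is an assumption

response = requests.post(url, headers=headers, data=payload)
print(response.status_code)
print(response.json())
```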

Let's take the server running on port 4001 that we created in the previous paragraph and make sure it can receive images instead of JSON.

We start off simply by sending an image via a POST request and letting the server tell us whether the image was uploaded successfully or not. You can use any image you want for the POST request. Make sure you save it in a folder that is conveniently accessible from your scripts.

Since we are not using the prime number algorithm anymore we can leave the import and associated code out.

To validate that we have uploaded an image correctly, we open it with Pillow, a Python library for image processing. We then simply store the name of the object as a string and return that string.

In the processing part we enable the POST method and write a message that will be returned to the user via an if-else statement. Instead of reading JSON content via request.json, we now access the file that was attached under the key "image". For that we use file = request.files.get("image").
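A sketch of the upgraded server on port 4001; the wording of the returned messages is just an example:

```python
# image_server.py - accepts an image under the key "image" on port 4001.
from flask import Flask, request, jsonify
from PIL import Image

app = Flask(__name__)


@app.route('/process', methods=["POST"])
def process():
    file = request.files.get("image")            # file attached under the key "image"
    if file is not None:
        img = Image.open(file)                   # opening it with Pillow validates the upload
        message = f"Image '{file.filename}' ({img.format}) uploaded successfully"
    else:
        message = "No image found in the request"
    return jsonify({"message": message})


if __name__ == '__main__':
    app.run(host='127.0.0.1', port=4001)
```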

Now run the server.

Time to check whether our server can receive images and return a result.

We can modify the request script to send an image that we have stored locally. For this example I used the image of the gears displayed at the top of the article and put it in a folder called "images", to which the script refers. As good practice we wrap everything up in a function with some variables. Note that the script now specifies files instead of headers + data.
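A sketch of the modified request script; the folder and file name ("images/gears.jpg") are the ones used in this example, so swap in your own path:

```python
# request_image.py - sends a local image to the server on port 4001.
import requests


def send_image(path, url="http://127.0.0.1:4001/process"):
    # Open the image in binary mode and attach it under the key "image",
    # the same key the server reads with request.files.get("image").
    with open(path, "rb") as f:
        response = requests.post(url, files={"image": f})
    print(response.status_code)
    print(response.json())
    return response


if __name__ == '__main__':
    send_image("images/gears.jpg")               # example path; use your own image
```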

Run the server in one terminal and the request script in another. You will receive a notification that the POST request was made successfully (status code 200) and a dictionary object mentioning the uploaded image.

When running your request script multiple times, it should look something like this:
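Assuming the sketches above and an image called gears.jpg, each run of the request script would print something along these lines:

```
200
{'message': "Image 'gears.jpg' (JPEG) uploaded successfully"}
```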

Well done!

Store everything in a Git repository

We have been creating and editing multiple scripts by now. To keep things nice and tidy I have stored everything in a public Git repository ('repo' for short). Via the repo you can clone all the code samples directly to your computer. An explanation of how to do this is provided there too, or check out the official GitHub documentation.

If you haven't heard about Git before, this is your moment to start using it! Git is a version control system for your code and files. Using a platform like GitHub you can easily open-source your software and collaborate with other developers around the globe.

What’s next?

For the coming parts I have planned to upgrade our services with Machine Learning and containerize them using Docker. So stay tuned!

Like this article?

Was this tutorial useful? Feel free to share this story or connect with me on LinkedIn to stay up to date on new posts in this series!

The field of software engineering and DevOps is broad, options are endless and things may get complex quickly. I hope to help you in your journey of creating awesome AI applications!

