Creating a production ready Python REST backend with Flask-Rebar — Part 2

Emile Fugulin
8 min read · May 7, 2019

--

If you have not read the first part of the series, you can find it here. It goes at length into the “why” of Flask-Rebar. The real meat starts in this post! In this post and the next, I will slowly build a basic multi-tenant SaaS for online payments called stryp. If you see some “resemblances” with Stripe and its API, that’s a pure coincidence, I assure you 😅. The scope of this part is the account (also known as tenant) CRUD. We will discuss code organization, database connections and input validation. Ready? Let’s start!

Code organization

As in any Python and Flask (or Flask-like) project, we are pretty much free to organize our code however we want. If you come from a language or a framework that has a strong opinion on structure, you might feel lost at first. After a while, I found that you usually settle on one of the two big “families” that I call module-based and layer-based.

Django users will recognize the module-based family, since that is the organization enforced by that framework. With this architecture, you can have clearly defined “microservices” inside a bigger monolithic application and keep a good separation between them. Coming from a more Java (Spring, Dropwizard) background, I am more familiar with the layer-based approach, and it is the one I will be using for this project. I find this architecture better suited to smaller services where you don’t have a lot of controllers; otherwise it can get messy.

Database connection

As I said previously, the first thing we will add is the Accounts API. This will demonstrate a basic CRUD controller and be the base for our multi-tenant architecture. But before that, we need a database! For local development, I usually use a dockerized one with a docker-compose file:
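A minimal setup looks something like this (the image tag, credentials and database name below are placeholders of my choosing, not necessarily the original values):

```yaml
# docker-compose.yml: a throwaway PostgreSQL instance for local development
version: "3"
services:
  db:
    image: postgres:11
    environment:
      POSTGRES_USER: stryp
      POSTGRES_PASSWORD: stryp
      POSTGRES_DB: stryp
    ports:
      - "5432:5432"
```

A quick docker-compose up -d then gives you a database on localhost:5432 that you can destroy and recreate at will.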

Now, the only thing that remains is to add the Flask-SQLAlchemy extension so we can use the popular SQLAlchemy ORM. Let’s also add another nice extension called Flask-Migrate, which will help us manage our database migrations with Alembic (the tool used for migrations in SQLAlchemy). Since we will be using PostgreSQL, you will also need to install psycopg2.

If you have a keen eye, you might have noticed that I also declared a rebar registry. In the Flask-Rebar world, a registry acts like a Flask blueprint. The difference is that all routes need to live inside a registry. Since this SaaS is still quite simple, I decided to only create one “master” registry for the whole API with a prefix that will be appended to all my routes.

Dealing with the configuration

One thing that is often overlooked in tutorials is configuration management. It is crucial to get this right to keep our secrets secure and isolated by environment when running in the cloud. We have seen so many leaks caused by committed secrets; please don’t do this 😥. If you are a bigger company, I strongly suggest that you take a look at Vault. For smaller projects, I use the AWS Parameter Store, since it’s free and you can encrypt your strings. But for our current scope, a simple uncommitted local file will do. For that, create a .env file at the project root and a config.cfg file beside your app.py:
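Something along these lines, where the values reuse the placeholder credentials from the compose file (adjust to yours):

```
# .env (picked up automatically by the flask CLI when python-dotenv is installed)
FLASK_APP=app.py
APP_CONFIG=config.cfg
```

```
# config.cfg (Python syntax, loaded through app.config.from_envvar)
SQLALCHEMY_DATABASE_URI = "postgresql://stryp:stryp@localhost:5432/stryp"
SQLALCHEMY_TRACK_MODIFICATIONS = False
```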

The only remaining thing is to add this line when we create the app:

app.config.from_envvar("APP_CONFIG")

It will automagically load the variables, and SQLAlchemy will pick up the URI and connect to your local database. Again, make sure those files are not committed.

Creating our model

Now that we have a working database, we can create our first SQLAlchemy model! Note that I am using the full ORM here and not only the Core, though that would also be a valid choice. I like having my table names end with an “s”, so we must specify the table name explicitly in the model.

Note that we are using a PostgreSQL-specific type here for the primary key: UUID. The values can be handled as strings if as_uuid is false (the default) or as full Python UUID objects otherwise. It is always useful to set a default that automatically generates a new UUID so you don’t have to think about it when you create new instances. No need to set up an extension in our database; this is done purely in Python.

We are missing just one little thing: an actual table in our database! This is where Alembic comes into play. If you followed the example so far, you should be able to hop on your favourite command line tool and execute the following commands:

# Setup alembic
flask db init
# Initiate the first migration
flask db migrate -m "Create accounts table"

Alembic will compare what our database currently looks like with the models we have registered. It will then generate a migration script to adjust the database to our models. The file it generates is usually accurate, but you should review it to make sure. For example, I personally prefer to define the constraints directly in the column definitions instead of after. For the Account model, we end up with something like:

Then a simple flask db upgrade will apply the migrations to the database up to the most recent one (we only have one for now). Note that Alembic stores the migration version in its own table, so you don’t have to keep that information around. This is quite handy when you work with multiple databases that might not always be at the same version.

Creating a controller

Now that we have our data layer in place, let’s move up the stack to our controller layer. This is where we are going to see Flask-Rebar shine. I am purposefully skipping the traditional service layer for now; don’t worry, it will come in a later part 😬!

We start with a simple GET on an account. We can see that Flask-Rebar adds a single decorator to “rule” them all (it was actually funnier in my head, sorry 🙈). Bad joke aside, the rule is what you would normally put in a Flask route. The uuid prefix tells Flask to verify that the account_id string respects the format of a UUID (and it will return an error to the user if it doesn’t). Next, the marshal_schema indicates which Marshmallow schema to use for the return value (more on that later). Finally, the whole thing lives inside a handles call on the registry I mentioned earlier, so the whole route will be /api/accounts/<some UUID>.

At last, we enter the function. The Account model that we defined earlier exposes a special query property that queries the database session directly. Apart from that, the code is pretty much self-explanatory.

The next interesting part is inside the None check. Rebar provides a set of generic errors that can be raised and will return a nicely formatted JSON response with the proper HTTP code to the user. Rebar will also handle any unexpected exception and return an internal server error.

Example of the default message when NotFound is raised
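In JSON, that default body is simply (the message comes from Flask-Rebar’s bundled error messages):

```json
{
  "message": "Not Found"
}
```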

Serialization with Marshmallow

The last line of the GET handler above returned the database model. Dumping a database model directly would not be a pretty sight. Luckily, we specified a marshal_schema in the handler, which will be used to serialize our data with Marshmallow.

Apart from being a delicious treat (especially if roasted over a campfire 🔥), Marshmallow is also a very powerful Python library for serialization, deserialization and input validation. The base concept is a Schema, which is basically the model we want as input or output. For example, our account schema looks like:

Marshmallow will only pick the fields defined in the schema from the object that we give it. That way, you don’t have to worry about leaking information. Note that Marshmallow was created before typing in Python was a thing, so it still uses the “old” way of specifying types on fields. I predict that it will eventually switch to a typing-based approach like pydantic, but don’t quote me on that!

And with that, we are all set to receive a proper JSON response from our backend (I cheated and inserted a row in the database by hand):
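Something like this comes back (the values are made up for illustration):

```json
{
  "id": "7c9e6679-7425-40de-944b-e07fc1f90ae7",
  "name": "Acme",
  "email": "admin@acme.example"
}
```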

Expanding our controller

Hooray, we have a functioning backend! Our business won’t go very far, though, if we can’t create new accounts in our system, so let’s fix that right away.

Here we see a few new things. First, we now have a request_body_schema that is used to verify that the given JSON body has all the fields we require.

If we take a closer look at it, we see that it inherits from RequestSchema instead of a simple Schema. This is a helper for Marshmallow pre-3.0.0 that sets our schema to strict mode (meaning the input can’t contain more information than what we specify). We can also see that the parameters are all required, because we need them all in our application.

Error returned in case of an unexpected parameter
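The body of that 400 looks roughly like this (paraphrased; the exact wording comes from Flask-Rebar and Marshmallow, and the extra field name is just an example):

```json
{
  "message": "JSON body parameters are invalid.",
  "errors": {
    "foo": "Unknown field."
  }
}
```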

Second, the marshal_schema is now a dictionary. Rebar is very flexible: it can use a different schema (and thus JSON body) depending on the status code that is returned. Since we want to return a 201 instead of a 200 when the account is created, we need to specify it.

Third, Rebar provides a few helper methods, like get_validated_body, to retrieve information from the current request context. One useful trick I picked up recently is the use of ** to unpack the returned dictionary directly into our database model constructor, so we don’t have to map each field manually. Since the body is validated by the schema, it is safe to do that.
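The unpacking itself is plain Python; with a simplified stand-in for the Account constructor:

```python
class Account:
    def __init__(self, name, email):
        self.name = name
        self.email = email


# What get_validated_body() would hand back after validation
body = {"name": "Acme", "email": "admin@acme.example"}

# Equivalent to Account(name="Acme", email="admin@acme.example")
account = Account(**body)
```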

Putting everything together, we can now create a new account!

New account created with Postman

Conclusion

Wow, we have come a long way! In this article, we set up a new Flask-Rebar project from scratch, added a controller to create and get accounts, and handled the corner cases that could go wrong.

The full working code is always available on my GitHub. To help you further, I also created the missing CRUD methods. Feel free to take a look to see how it was done.

I am glad you stuck around until the end and I hope you enjoyed it! I will see you very soon in another part of this series on Flask-Rebar 😉.
