The ultimate async setup: FastAPI, SQLModel, Alembic, Pytest

ESTretyakov
May 22, 2022 · 9 min read


FastAPI is a popular topic nowadays, and I have decided to share my setup for an async web server using this framework. Here is a short description of the Python packages used in the article (just to give you the whole picture and save your time):

Poetry (https://python-poetry.org) — is a tool for dependency management and packaging in Python. It allows you to declare the libraries your project depends on and it will manage (install/update) them for you;

FastAPI (https://fastapi.tiangolo.com) — is a modern, fast (high-performance), web framework for building APIs with Python 3.6+ based on standard Python type hints;

Pydantic (https://pydantic-docs.helpmanual.io) — Data validation and settings management using Python type hinting;

SQLAlchemy (https://www.sqlalchemy.org) — SQLAlchemy is the Python SQL toolkit and Object Relational Mapper that gives application developers the full power and flexibility of SQL;

SQLModel (https://sqlmodel.tiangolo.com) — SQLModel is a library for interacting with SQL databases from Python code, with Python objects;

Alembic (https://alembic.sqlalchemy.org/en/latest/) — Alembic is a lightweight database migration tool for usage with the SQLAlchemy Database Toolkit for Python.

Description

The project structure (made with draw.io)

As an example for the async setup in this article, I am using a simple application. The application has only one table, which stores heroes’ records. The web service itself has the functionality to create, get, patch and delete records in the database.

Development environment

If you don’t want to waste your time on setting up a development environment just head to the next section.

Proceed to the Poetry web page to find the proper installation method for your platform. Once Poetry is installed, you need to initialise the project. In the following example I already have a directory with my environment created by PyCharm; to create a new project from scratch, please use this guide for basic Poetry usage.

As I mentioned before, my PyCharm IDE already has a venv environment set up, so I can install Poetry with a simple pip command (pip install poetry) and proceed to initialising the Poetry project:

cd path/to/my/project
poetry init

The Poetry initialisation offers a friendly dialog that should be self-explanatory. The only thing I would like to mention here is that I add the isort and flake8 packages to the Poetry dev environment for sorting imports and checking code style.

Let’s install crucial packages for the project (if you haven’t done this during the initialisation):

poetry add fastapi
poetry add sqlmodel
poetry add python-dotenv
poetry add uvicorn

At the end of the stage you should have two generated files in the root of your project: pyproject.toml and poetry.lock. In general, these two files describe the project dependencies.

Database

We use PostgreSQL for our setup, as it is, in my opinion, one of the most popular choices nowadays. To install the database I suggest using a Docker container; here is a simple command that uses the Bitnami image:

docker run --name heroes-pg -d -e POSTGRESQL_PASSWORD=thepass123 -p 5432:5432 bitnami/postgresql:13

If you know nothing about Docker, please visit the link to get some background and installation recipes. The command above will start a PostgreSQL container named “heroes-pg” and expose port 5432 on localhost; the password to the database is “thepass123”, and the default user name is “postgres”. For your convenience you can use pgadmin, dbeaver, DataGrip or any other tool to interact with the database. Here are the scripts I am using to create a separate database and a role for the app.
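A sketch of those scripts (the role and database names are chosen to match the connection string below):

-- Run as the postgres superuser (a sketch, not the exact original scripts)
CREATE ROLE hero WITH LOGIN PASSWORD 'heroPass123';
CREATE DATABASE heroes_db OWNER hero;
GRANT ALL PRIVILEGES ON DATABASE heroes_db TO hero;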

In this case the database connection string for the app will look like this:

postgresql+asyncpg://hero:heroPass123@0.0.0.0:5432/heroes_db

Environment variables

We are going to use the Pydantic approach to set environment variables for the project. Let’s create an .env file in the root of the project:

# BASE
API_V1_PREFIX="/api/v1"
DEBUG=True
PROJECT_NAME="Heroes App (local)"
VERSION="0.1.0"
DESCRIPTION="The API for Heroes app."
# DATABASE
DB_ASYNC_CONNECTION_STR="postgresql+asyncpg://hero:heroPass123@0.0.0.0:5432/heroes_db"

To read these variables I create a config.py (app/core/config.py) with Settings declaration. The content of config.py looks like this:
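A minimal sketch, assuming pydantic v1 (where BaseSettings still lives in the pydantic package); the field names mirror the variables in the .env file:

# app/core/config.py (a sketch)
from pydantic import BaseSettings


class Settings(BaseSettings):
    # BASE
    api_v1_prefix: str
    debug: bool
    project_name: str
    version: str
    description: str

    # DATABASE
    db_async_connection_str: str


settings = Settings()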

The file should be loaded on application initialisation, so place the following lines into __init__.py (app/__init__.py):
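A sketch; the ENV_FILE variable is my own convention here, so that a Dockerfile can point at a different env file later:

# app/__init__.py (a sketch)
from os import environ

from dotenv import load_dotenv

# Load environment variables before anything imports the settings
load_dotenv(environ.get("ENV_FILE", ".env"))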

Here I use the python-dotenv package to load the .env file, which will also let me point at different env files in Dockerfile instructions in the future.

Let’s test the environment variables by using some of them in the main.py (app/main.py) file:
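A sketch of the first version of main.py, under the assumptions of the config sketch above:

# app/main.py (a sketch)
from fastapi import FastAPI

from app.core.config import settings
from app.core.models import HealthCheck

app = FastAPI(
    title=settings.project_name,
    version=settings.version,
    description=settings.description,
    openapi_url=f"{settings.api_v1_prefix}/openapi.json",
    debug=settings.debug,
)


@app.get("/", response_model=HealthCheck, tags=["status"])
async def health_check():
    return {
        "name": settings.project_name,
        "version": settings.version,
        "description": settings.description,
    }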

Here I use the HealthCheck model from the models.py (app/core/models.py) file for a nice representation in SwaggerUI under the schemas section:
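The model itself is trivial (a sketch):

# app/core/models.py (a sketch)
from sqlmodel import SQLModel


class HealthCheck(SQLModel):
    name: str
    version: str
    description: str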

Use the command to start the service:

uvicorn app.main:app

You should be able to reach the service health check at http://0.0.0.0:8000, and the content will look like this:

{
    "name": "Heroes App (local)",
    "version": "0.1.0",
    "description": "The API for Heroes app."
}

The data in this JSON response reflects the data provided in the .env file.

SQLAlchemy

We are going to create a SQLAlchemy engine to be able to connect to the database. The declaration of the async engine is located in the db.py (app/core/db.py) file:
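A sketch, using SQLModel’s async session wrapper:

# app/core/db.py (a sketch)
from sqlalchemy.ext.asyncio import create_async_engine
from sqlalchemy.orm import sessionmaker
from sqlmodel.ext.asyncio.session import AsyncSession

from app.core.config import settings

async_engine = create_async_engine(
    settings.db_async_connection_str,
    echo=settings.debug,
    future=True,
)


async def get_async_session() -> AsyncSession:
    async_session = sessionmaker(
        bind=async_engine, class_=AsyncSession, expire_on_commit=False
    )
    async with async_session() as session:
        yield session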

As we use the asyncpg driver, don’t forget to install it: poetry add asyncpg. To pass a database session to a handler later on, you can use the Depends approach like this: connection: AsyncSession = Depends(get_async_session).

SQLModel

When we create a lot of models, we have common fields like “uuid”, “created_at” and “updated_at”. For this reason I suggest creating a separate file with the relevant base models that we are going to reuse. Here is how it is done in the models.py (app/core/models.py) file:
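A sketch; the server defaults are chosen to match the DDL shown later in the article:

# app/core/models.py (a sketch)
import uuid as uuid_pkg
from datetime import datetime

from sqlalchemy import text
from sqlmodel import Field, SQLModel


class UUIDModel(SQLModel):
    uuid: uuid_pkg.UUID = Field(
        default_factory=uuid_pkg.uuid4,
        primary_key=True,
        index=True,
        nullable=False,
        sa_column_kwargs={"server_default": text("gen_random_uuid()")},
    )


class TimestampModel(SQLModel):
    created_at: datetime = Field(
        default_factory=datetime.utcnow,
        nullable=False,
        sa_column_kwargs={"server_default": text("current_timestamp(0)")},
    )
    updated_at: datetime = Field(
        default_factory=datetime.utcnow,
        nullable=False,
        sa_column_kwargs={"server_default": text("current_timestamp(0)")},
    )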

Now that we have UUIDModel and TimestampModel, we are ready to create our first application model. In my practice I separate base and “private” fields: base fields are used both when we create and when we read data, while “private” fields like foreign keys and IDs go under the main model. I keep the model and the relevant functionality in a separate submodule, so the hero models.py (app/heroes/models.py) file looks like this:
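A sketch; the enum values are my own example, while the “hrs” prefix and the table name match the DDL shown later:

# app/heroes/models.py (a sketch)
from enum import Enum
from typing import Optional

from sqlalchemy import Column, event
from sqlalchemy import Enum as SAEnum
from sqlmodel import Field, SQLModel

from app.core.models import TimestampModel, UUIDModel

prefix = "hrs"


class HeroRole(str, Enum):
    mage = "mage"
    warrior = "warrior"
    tank = "tank"


class HeroBase(SQLModel):
    nickname: str = Field(max_length=255, nullable=False)
    role: HeroRole = Field(sa_column=Column(SAEnum(HeroRole, name=f"{prefix}_role")))


class Hero(HeroBase, UUIDModel, TimestampModel, table=True):
    __tablename__ = f"{prefix}_heroes"


class HeroRead(HeroBase, UUIDModel):
    ...


class HeroCreate(HeroBase):
    ...


class HeroPatch(HeroBase):
    nickname: Optional[str] = None


# The ENUM type must be created/dropped explicitly when the test suite
# rebuilds the schema with metadata.create_all()/drop_all()
hero_role_type = SAEnum(HeroRole, name=f"{prefix}_role")
event.listen(
    SQLModel.metadata,
    "before_create",
    lambda target, connection, **kw: hero_role_type.create(connection, checkfirst=True),
)
event.listen(
    SQLModel.metadata,
    "after_drop",
    lambda target, connection, **kw: hero_role_type.drop(connection, checkfirst=True),
)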

In the example I also demonstrate how to handle enum fields. The event listeners at the bottom of the file will help us later with testing: the testing environment will create and drop the database tables, so we have to provide listeners for the ENUM type creation. To create the table that represents the Hero model we need to use the alembic package.

Alembic

In order to start using Alembic you need to install the package: poetry add alembic. I prefer to keep all migrations under a “migrations” directory, so let’s initialise the Alembic setup: alembic init -t async migrations. The command will create a migrations directory with configuration files, and here we are going to make some changes.

I prefer to have a nice naming convention for tables and other database entities, so I suggest adding this piece of code for autogenerated migrations to the env.py (migrations/env.py) file:
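A sketch of the relevant env.py lines (the pk entry matches the pk_hrs_heroes constraint in the DDL later):

# migrations/env.py (a sketch)
from sqlmodel import SQLModel

# Consistent names for indexes, constraints and keys in autogenerated migrations
naming_convention = {
    "ix": "ix_%(column_0_label)s",
    "uq": "uq_%(table_name)s_%(column_0_name)s",
    "ck": "ck_%(table_name)s_%(constraint_name)s",
    "fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s",
    "pk": "pk_%(table_name)s",
}

target_metadata = SQLModel.metadata
target_metadata.naming_convention = naming_convention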

In order to let Alembic recognise your models, you have to import them in the env.py file, like this:
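For this project that is a single import (the noqa comment keeps flake8 quiet about the seemingly unused name):

from app.heroes.models import Hero  # noqa: F401  (imported so autogenerate sees the table)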

In some cases you may need to ignore tables created by other applications (like celery) or by database extensions (like postgis). For this reason I suggest adding this piece of code to ignore the list of tables mentioned in the .env file:
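A sketch, assuming a DB_EXCLUDE_TABLES variable in the .env file (that variable is my own convention and is not part of the .env shown earlier); the function then has to be passed as include_object=include_object in the context.configure(...) calls:

# migrations/env.py (a sketch)
import os

# Comma-separated list of table names Alembic should ignore
exclude_tables = os.environ.get("DB_EXCLUDE_TABLES", "").split(",")


def include_object(object, name, type_, reflected, compare_to):
    if type_ == "table" and name in exclude_tables:
        return False
    return True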

Also pay attention that we grab the database connection string from the .env file; in env.py that looks roughly like this:
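A sketch of the relevant fragment (config is the Alembic Config object already defined at the top of env.py):

from app.core.config import settings

config.set_main_option("sqlalchemy.url", settings.db_async_connection_str)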

We also need to modify the script.py.mako (migrations/script.py.mako) file so that SQLModel is present in the imported modules of every migration file. Add this line

import sqlmodel

just before

${imports if imports else ""}

OK, now we are ready for our first migration. Use this command to generate a migration file in the migrations/versions directory: alembic revision --autogenerate -m "heroes".

In my case the generated migration file name is “886322ad66ff_heroes”, and I suggest renaming it for your convenience, e.g. to “0001_886322ad66ff_heroes”. The renaming won’t affect Alembic’s behaviour, but it will help you to identify migrations in the evolutionary timeline.

Run the command alembic upgrade head to apply the changes to the database. If everything is OK, the hrs_heroes table will be created in the database with the following DDL:

create table if not exists hrs_heroes
(
    role       hrs_role,
    uuid       uuid default gen_random_uuid() not null
        constraint pk_hrs_heroes primary key,
    nickname   varchar(255) not null,
    created_at timestamp default CURRENT_TIMESTAMP(0) not null,
    updated_at timestamp default CURRENT_TIMESTAMP(0) not null
);

alter table hrs_heroes
    owner to hero;

create unique index if not exists ix_hrs_heroes_uuid
    on hrs_heroes (uuid);

A small note here: if you would like to downgrade the migration, don’t forget to add the op.execute("DROP TYPE hrs_role;") line to the downgrade function in the migration file.

CRUD

CRUD stands for the create, read, update and delete functionality, which we are going to implement in the crud.py (app/heroes/crud.py) file:
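A sketch of the class skeleton:

# app/heroes/crud.py (a sketch)
from uuid import UUID

from fastapi import HTTPException, status
from sqlalchemy import delete
from sqlmodel import select
from sqlmodel.ext.asyncio.session import AsyncSession

from app.heroes.models import Hero, HeroCreate, HeroPatch


class HeroesCRUD:
    def __init__(self, session: AsyncSession):
        self.session = session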

Here you can see all the imports and the __init__ function for our HeroesCRUD class. Let’s implement the async functionality one by one.
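First, the create method; a sketch of what it can look like:

# A method of HeroesCRUD (a sketch)
async def create(self, data: HeroCreate) -> Hero:
    hero = Hero(**data.dict())
    self.session.add(hero)
    await self.session.commit()
    await self.session.refresh(hero)
    return hero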

The signature of the function is simple: we provide data and retrieve the created Hero object. Don’t forget to perform a commit, otherwise your changes are not going to be saved to the database.
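Next, the get method (a sketch):

# A method of HeroesCRUD (a sketch)
async def get(self, hero_id: UUID) -> Hero:
    statement = select(Hero).where(Hero.uuid == hero_id)
    results = await self.session.execute(statement=statement)
    hero = results.scalar_one_or_none()
    if hero is None:
        raise HTTPException(
            status_code=status.HTTP_404_NOT_FOUND,
            detail="The hero hasn't been found!",
        )
    return hero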

To retrieve an object by ID we have implemented the get method. Here I use the select SQL builder, which I find quite comfortable for understanding what is going on in the database communication.
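The patch method (a sketch):

# A method of HeroesCRUD (a sketch)
async def patch(self, hero_id: UUID, data: HeroPatch) -> Hero:
    hero = await self.get(hero_id=hero_id)
    values = data.dict(exclude_unset=True)
    for key, value in values.items():
        setattr(hero, key, value)
    self.session.add(hero)
    await self.session.commit()
    await self.session.refresh(hero)
    return hero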

The record patching here is performed by updating object attributes; you could implement the same functionality with the update SQL builder function from sqlalchemy, in the same builder style as the select in the get method.
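And the delete method (a sketch):

# A method of HeroesCRUD (a sketch)
async def delete(self, hero_id: UUID) -> bool:
    statement = delete(Hero).where(Hero.uuid == hero_id)
    await self.session.execute(statement=statement)
    await self.session.commit()
    return True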

To delete an object you can use the SQL builder, as in the code above, or the async session’s delete method, like this: await self.session.delete(hero). Here we are done with the CRUD functionality; let’s proceed to the API handlers.

API handlers

We need to create a dependency that will provide the CRUD functionality to our handler functions. Let’s create one in the dependencies.py (app/heroes/dependencies.py) file:
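A sketch:

# app/heroes/dependencies.py (a sketch)
from fastapi import Depends
from sqlmodel.ext.asyncio.session import AsyncSession

from app.core.db import get_async_session
from app.heroes.crud import HeroesCRUD


async def get_heroes_crud(
    session: AsyncSession = Depends(get_async_session),
) -> HeroesCRUD:
    return HeroesCRUD(session=session)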

Now we are ready to create our first handler, which will create heroes in our database:
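A sketch, assuming the handlers live in an app/heroes/api.py file (the file name is my own choice here, the article doesn’t pin it down):

# app/heroes/api.py (a sketch)
from uuid import UUID

from fastapi import APIRouter, Depends, status

from app.heroes.crud import HeroesCRUD
from app.heroes.dependencies import get_heroes_crud
from app.heroes.models import HeroCreate, HeroPatch, HeroRead

router = APIRouter()


@router.post("", response_model=HeroRead, status_code=status.HTTP_201_CREATED)
async def create_hero(
    data: HeroCreate,
    heroes: HeroesCRUD = Depends(get_heroes_crud),
) -> HeroRead:
    return await heroes.create(data=data)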

The response to the request will provide the client with HeroRead model data in JSON format, as we have indicated that pydantic model in the response_model argument.
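The get and patch handlers follow the same pattern (a sketch, in the same file):

@router.get("/{hero_id}", response_model=HeroRead, status_code=status.HTTP_200_OK)
async def get_hero(
    hero_id: UUID,
    heroes: HeroesCRUD = Depends(get_heroes_crud),
) -> HeroRead:
    return await heroes.get(hero_id=hero_id)


@router.patch("/{hero_id}", response_model=HeroRead, status_code=status.HTTP_200_OK)
async def patch_hero(
    hero_id: UUID,
    data: HeroPatch,
    heroes: HeroesCRUD = Depends(get_heroes_crud),
) -> HeroRead:
    return await heroes.patch(hero_id=hero_id, data=data)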

We use the hero_id url path parameter to provide the uuid to the HeroesCRUD get method, which retrieves the record from the database.

For the delete handler we need another response model, which I call StatusMessage; this is a common model useful in lots of cases, so I place it in the models.py (app/core/models.py) file:
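A sketch, added to app/core/models.py:

class StatusMessage(SQLModel):
    status: bool
    message: str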

And the last handler looks like this:
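A sketch:

from app.core.models import StatusMessage


@router.delete("/{hero_id}", response_model=StatusMessage, status_code=status.HTTP_200_OK)
async def delete_hero(
    hero_id: UUID,
    heroes: HeroesCRUD = Depends(get_heroes_crud),
) -> StatusMessage:
    deleted = await heroes.delete(hero_id=hero_id)
    return StatusMessage(status=deleted, message="The hero has been deleted!")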

Don’t forget to import the StatusMessage model from app/core/models.py. Now we are ready to connect the router to the main application.

Here I use the endpoints.py (app/router/api_v1/endpoints.py) file to connect the handlers to the main app:
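A sketch (the import assumes the handlers module from above, app/heroes/api.py):

# app/router/api_v1/endpoints.py (a sketch)
from fastapi import APIRouter

from app.heroes.api import router as heroes_router

api_router = APIRouter()

api_router.include_router(heroes_router, prefix="/heroes", tags=["heroes"])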

The last thing here is to include this router in the main one in the app/main.py file:
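Building on the earlier main.py sketch:

from app.router.api_v1.endpoints import api_router

app.include_router(api_router, prefix=settings.api_v1_prefix)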

OK, we did it. Now start the server and check out http://0.0.0.0:8000/docs. You will see the SwaggerUI interface; try to play with it and explore more.

Pytest

Testing is a crucial part of any development process, so let’s create a proper setup for the async functionality. First of all, install the pytest, pytest-asyncio and httpx packages:

poetry add pytest
poetry add pytest-asyncio
poetry add httpx

We need to create a pytest.ini under the project root with this content:

[pytest]
python_files = tests.py test_*.py *_tests.py
asyncio_mode=auto

Also, we need a separate database to run our tests against. Let it be heroes_db_tests, alongside the current heroes_db database. Just use the same SQL scripts with a slight modification in naming, and add the following lines to the app/core/db.py file:
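One way to do it, building on the earlier db.py sketch (checking for pytest in sys.modules is my own convention; this replaces the direct use of settings.db_async_connection_str when creating the engine):

# app/core/db.py (a sketch)
import sys

from sqlalchemy.ext.asyncio import create_async_engine

from app.core.config import settings

# Pick the test database whenever the code is imported by pytest
db_connection_str = settings.db_async_connection_str
if "pytest" in sys.modules:
    db_connection_str = settings.db_async_test_connection_str

async_engine = create_async_engine(db_connection_str, echo=settings.debug, future=True)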

This piece of code detects whether the service is running under pytest and, if so, selects the testing database string for the engine creation.

Also don’t forget to extend the .env file and Settings object in app/core/config.py.

DB_ASYNC_TEST_CONNECTION_STR="postgresql+asyncpg://hero:heroPass123@0.0.0.0:5432/heroes_db_tests"
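And a matching field on the Settings object (a sketch):

# app/core/config.py -- add to the Settings class
db_async_test_connection_str: str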

After the manipulations above, we need to create a set of fixtures that will help us write proper async tests. All common fixtures are placed in the conftest.py (app/conftest.py) file. The first crucial fixture is event_loop; you won’t be able to run async tests without it:
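A sketch; overriding the default event_loop fixture with a session-scoped one lets all async fixtures and tests share a single loop:

# app/conftest.py (a sketch)
import asyncio

import pytest


@pytest.fixture(scope="session")
def event_loop():
    loop = asyncio.get_event_loop_policy().new_event_loop()
    yield loop
    loop.close()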

We will also need the async_client fixture to make requests to our endpoints; here is how it looks:
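A sketch (at the time of writing httpx accepted the app= shortcut; newer httpx versions use transport=ASGITransport(app=app) instead):

import pytest
from httpx import AsyncClient

from app.main import app


@pytest.fixture
async def async_client():
    async with AsyncClient(app=app, base_url="http://test") as client:
        yield client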

We also need database sessions to check the created objects in the database; here is how it is done:
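A sketch; the fixture creates the tables before a test and drops them afterwards, which keeps tests isolated and also fires the ENUM listeners from the models file:

import pytest
from sqlalchemy.orm import sessionmaker
from sqlmodel import SQLModel
from sqlmodel.ext.asyncio.session import AsyncSession

from app.core.db import async_engine


@pytest.fixture
async def async_session():
    session = sessionmaker(bind=async_engine, class_=AsyncSession, expire_on_commit=False)

    async with session() as s:
        async with async_engine.begin() as conn:
            await conn.run_sync(SQLModel.metadata.create_all)

        yield s

    async with async_engine.begin() as conn:
        await conn.run_sync(SQLModel.metadata.drop_all)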

When we need data for tests, to create objects and check responses, I use this fixture:
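A sketch; the built-in request fixture lets us resolve the path relative to the test module that uses the data:

import json
from pathlib import Path

import pytest


@pytest.fixture
def test_data(request) -> dict:
    path = Path(str(request.fspath)).parent / "data" / "data.json"
    with open(path, encoding="utf-8") as file:
        return json.load(file)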

The fixture retrieves data from the data.json file located in the data folder under the same directory as tests.py.

Writing tests

Let’s create a tests package under the app/heroes directory and a data.json file under the app/heroes/data directory; here is the content:
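Something along these lines (the payloads are my own example; the role value must be one of the HeroRole enum values from the models sketch):

{
  "case_create": {
    "payload": {
      "nickname": "genos",
      "role": "warrior"
    }
  },
  "case_patch": {
    "payload": {
      "nickname": "saitama"
    }
  }
}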

Now we are ready to write our tests; the first one creates a hero:
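A sketch of the test, under the assumptions of the previous sketches (the 201 status code and the /heroes prefix come from the handler setup above):

from httpx import AsyncClient
from sqlmodel import select
from sqlmodel.ext.asyncio.session import AsyncSession

from app.core.config import settings
from app.heroes.models import Hero


async def test_create_hero(
    async_client: AsyncClient,
    async_session: AsyncSession,
    test_data: dict,
):
    payload = test_data["case_create"]["payload"]
    response = await async_client.post(f"{settings.api_v1_prefix}/heroes", json=payload)
    assert response.status_code == 201

    got = response.json()

    # Verify the record actually landed in the database
    statement = select(Hero).where(Hero.uuid == got["uuid"])
    results = await async_session.execute(statement=statement)
    hero = results.scalar_one()

    assert hero.nickname == payload["nickname"]
    assert hero.role == payload["role"]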

Here we create a Hero object using the JSON payload from the test data provided in the data.json file, and we also query the database to assert that all fields match. I don’t think I need to comment much on the tests below:
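For completeness, here are sketches of the get and delete tests in the same tests.py (the patch test follows the same pattern; async_session is requested only so the schema gets created):

async def test_get_hero(
    async_client: AsyncClient,
    async_session: AsyncSession,
    test_data: dict,
):
    payload = test_data["case_create"]["payload"]
    created = (
        await async_client.post(f"{settings.api_v1_prefix}/heroes", json=payload)
    ).json()

    response = await async_client.get(f"{settings.api_v1_prefix}/heroes/{created['uuid']}")
    assert response.status_code == 200
    assert response.json()["nickname"] == payload["nickname"]


async def test_delete_hero(
    async_client: AsyncClient,
    async_session: AsyncSession,
    test_data: dict,
):
    payload = test_data["case_create"]["payload"]
    created = (
        await async_client.post(f"{settings.api_v1_prefix}/heroes", json=payload)
    ).json()

    response = await async_client.delete(f"{settings.api_v1_prefix}/heroes/{created['uuid']}")
    assert response.status_code == 200
    assert response.json() == {"status": True, "message": "The hero has been deleted!"}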

You are welcome to use the pytest command to run all the tests.

Summary

I hope I have managed to fully describe an async web-service setup using the FastAPI, SQLModel, Alembic and Pytest packages, by providing the implementation of a simple service.

I would be glad to hear about more solutions to enhance the setup. Thank you for reading; the whole project is available in the GitHub repository.
