Deploying Django Application on AWS with Terraform. Connecting PostgreSQL RDS

Yevhen Bondar
5 min read · Jul 29, 2022

In the previous part of this guide, we deployed the Django web application on AWS ECS.

In this part, we’ll create a PostgreSQL RDS instance on AWS, connect it to the Django ECS task, and enable access to Django Admin. We chose PostgreSQL because it supports complex SQL queries and MVCC, performs well under concurrent load, and has a large community. As AWS says:

PostgreSQL has become the preferred open source relational database for many enterprise developers and start-ups, powering leading business and mobile applications. Amazon RDS makes it easy to set up, operate, and scale PostgreSQL deployments in the cloud.

Add PostgreSQL to Django project

Go to the django-aws-backend folder and activate the virtual environment: cd ../django-aws-backend && . ./venv/bin/activate.

First, let’s set up PostgreSQL via Docker for local development. Create docker-compose.yml with the following content:
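A minimal docker-compose.yml for local development might look like this sketch (the database name and credentials are placeholder assumptions; pick your own):

```yaml
version: "3.9"

services:
  db:
    image: postgres:14
    # Map to 5433 on the host to avoid clashing with a native PostgreSQL install
    ports:
      - "5433:5432"
    environment:
      POSTGRES_USER: django_aws
      POSTGRES_PASSWORD: django_aws
      POSTGRES_DB: django_aws
    volumes:
      - postgres_data:/var/lib/postgresql/data

volumes:
  postgres_data:
```

The named volume keeps your local data across container restarts.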

Then run docker-compose up -d to start a container. Now, let's connect Django to PostgreSQL.

We use the non-standard port 5433 to avoid a potential conflict with a local PostgreSQL installation.

Next, we need to add pip packages: psycopg2-binary for working with PostgreSQL and django-environ to read the PostgreSQL connection string from the environment. Also, we’ll add the WhiteNoise package to serve static files for Django Admin.

Add these packages to the requirements.txt file and run pip install -r requirements.txt:
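The additions to requirements.txt might look like this (the versions are assumptions; pin whatever is current when you follow along):

```
psycopg2-binary==2.9.3
django-environ==0.9.0
whitenoise==6.2.0
```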

Now, change settings.py. First, let's load env variables. We try to read the .env file in the project root if it exists.

Now we can use environment variables in settings.py. Second, let's fill DATABASES from the environment:

We’ve set a default value for the DATABASE_URL to allow running the Django project locally without specifying this variable in the .env file.

Third, add the WhiteNoiseMiddleware to serve static files and specify the STATIC_ROOT variable.
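Taken together, the settings.py changes might look like this sketch (the default connection string matches the local Docker setup and is an assumption, as is the static folder name):

```python
import environ

# BASE_DIR is already defined at the top of the default settings.py
env = environ.Env()

# Read the .env file in the project root if it exists
environ.Env.read_env(BASE_DIR / ".env")

DATABASES = {
    # Fall back to the local Docker database when DATABASE_URL is not set
    "default": env.db(
        "DATABASE_URL",
        default="postgres://django_aws:django_aws@localhost:5433/django_aws",
    )
}

MIDDLEWARE = [
    "django.middleware.security.SecurityMiddleware",
    # WhiteNoise should sit right after SecurityMiddleware
    "whitenoise.middleware.WhiteNoiseMiddleware",
    # ... the rest of the default middleware ...
]

STATIC_URL = "static/"
STATIC_ROOT = BASE_DIR / "static"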

Also, add the line RUN ./manage.py collectstatic --noinput to the bottom of the Dockerfile so static files are collected into the static folder at build time.

Apply migrations, create a superuser, and start a web server:

(venv) $ python manage.py migrate
Operations to perform:
Apply all migrations: admin, auth, contenttypes, sessions
...
(venv) $ python manage.py createsuperuser
Username: admin
Email address:
Password:
Password (again):
Superuser created
(venv) $ python manage.py runserver
...
Starting development server at http://127.0.0.1:8000/

Go to http://127.0.0.1:8000/admin/ and sign in with your user.

Finally, let’s commit the changes, then build and push the Docker image.

$ git add .
$ git commit -m "add postgresql, environ and static files serving"
$ docker build . -t 947134793474.dkr.ecr.us-east-2.amazonaws.com/django-aws-backend:latest
$ docker push 947134793474.dkr.ecr.us-east-2.amazonaws.com/django-aws-backend:latest

We are done with the Django part. Next, we create a PostgreSQL instance on AWS and connect it to the ECS task.

Creating RDS

AWS provides the Relational Database Service to run PostgreSQL. Now, we’ll describe the RDS setup with Terraform. Go to the Terraform folder ../django-aws-infrastructure and add an rds.tf file:
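An rds.tf along these lines would do it (a sketch: the resource names, and the VPC, subnet, and ECS security-group references carried over from the previous parts, are assumptions):

```hcl
resource "aws_db_subnet_group" "prod" {
  name       = "prod"
  subnet_ids = [aws_subnet.prod_private_1.id, aws_subnet.prod_private_2.id]
}

resource "aws_security_group" "prod_rds" {
  name   = "prod-rds"
  vpc_id = aws_vpc.prod.id

  ingress {
    protocol  = "tcp"
    from_port = 5432
    to_port   = 5432
    # Allow PostgreSQL traffic from ECS tasks only
    security_groups = [aws_security_group.prod_ecs_backend.id]
  }
}

resource "aws_db_instance" "prod" {
  identifier             = "prod"
  engine                 = "postgres"
  engine_version         = "14"
  allocated_storage      = 20
  db_name                = var.prod_rds_db_name
  username               = var.prod_rds_username
  password               = var.prod_rds_password
  instance_class         = var.prod_rds_instance_class
  db_subnet_group_name   = aws_db_subnet_group.prod.name
  vpc_security_group_ids = [aws_security_group.prod_rds.id]
  skip_final_snapshot    = true
}
```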

Here we create an RDS instance, a DB subnet group, and a Security Group that allows incoming traffic on port 5432 from ECS tasks only. We’ll specify db_name, username, password, and instance_class in variables.tf:
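The variable declarations might look like this sketch (the defaults are assumptions):

```hcl
variable "prod_rds_db_name" {
  description = "postgres database name for production DB"
  default     = "django_aws"
}

variable "prod_rds_username" {
  description = "postgres username for production DB"
  default     = "django_aws"
}

variable "prod_rds_password" {
  description = "postgres password for production DB"
  # No default on purpose: the password must not end up in the repository
}

variable "prod_rds_instance_class" {
  description = "RDS instance class"
  default     = "db.t4g.micro"
}
```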

Notice that we haven’t provided a default value for the prod_rds_password variable, to prevent committing the database password to the repository. Terraform will prompt you for the value when you apply these changes.

$ terraform apply
var.prod_rds_password
postgres password for production DB
Enter a value:

Typing the password on every run is inconvenient and error-prone. Luckily, Terraform can read variable values from the environment. Create a .env file containing TF_VAR_prod_rds_password=YOUR_PASSWORD. Interrupt the password prompt, run export $(cat .env | xargs) to load the .env variables, and rerun terraform apply. Now Terraform picks up the password from the environment and creates an RDS PostgreSQL instance. Check it in the AWS RDS console.

Connecting to ECS

Let’s connect the RDS instance to ECS tasks.

Add the DATABASE_URL environment variable to backend_container.json.tpl:
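Inside the container definition, the new entry might look like this fragment (the template variable names are assumptions):

```json
"environment": [
  {
    "name": "DATABASE_URL",
    "value": "postgres://${rds_username}:${rds_password}@${rds_hostname}:${rds_port}/${rds_db_name}"
  }
]
```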

Add RDS variables to the prod_backend_web task definition in the ecs.tf file:
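The task definition would then pass the RDS attributes into the template, roughly like this sketch (resource and template-variable names are assumptions):

```hcl
resource "aws_ecs_task_definition" "prod_backend_web" {
  # ...existing arguments...
  container_definitions = templatefile(
    "templates/backend_container.json.tpl",
    {
      # ...existing template variables...
      rds_hostname = aws_db_instance.prod.address
      rds_port     = aws_db_instance.prod.port
      rds_db_name  = aws_db_instance.prod.db_name
      rds_username = aws_db_instance.prod.username
      rds_password = var.prod_rds_password
    }
  )
}
```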

Let’s apply the changes and update the ECS service with the new task definition. Run terraform apply, stop the current task via the web console, and wait for a new task to start.

Now, go to the admin URL on the load balancer hostname and try to log in with random credentials. You should get the relation "auth_user" does not exist error. This error means that the Django application successfully connected to PostgreSQL, but no migrations have been run yet.

Running migrations

Now, we need to run migrations and create a superuser. But how can we do it? Our infrastructure has no EC2 instances to connect via SSH and run this command. The solution is ECS Exec.

With Amazon ECS Exec, you can directly interact with containers without needing to first interact with the host container operating system, open inbound ports, or manage SSH keys.

First, we need to install the Session Manager plugin. For macOS, you can use brew install session-manager-plugin. For other platforms, check this link.

Next, we need to provide an IAM policy to the prod_backend_task role and enable execute_command for the ECS service. Add this code to the ecs.tf and apply changes:
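A sketch of these additions (the role and service names are assumptions carried over from the previous parts; the ssmmessages actions are what ECS Exec needs to open its channels):

```hcl
resource "aws_iam_role_policy" "prod_backend_task_ecs_exec" {
  name = "prod-backend-task-ecs-exec"
  role = aws_iam_role.prod_backend_task.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = [
        "ssmmessages:CreateControlChannel",
        "ssmmessages:CreateDataChannel",
        "ssmmessages:OpenControlChannel",
        "ssmmessages:OpenDataChannel",
      ]
      Resource = "*"
    }]
  })
}

resource "aws_ecs_service" "prod_backend_web" {
  # ...existing arguments...
  enable_execute_command = true
}
```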

Stop the old ECS task in the web console and wait for the new task.

Then let’s retrieve the new task ID via the CLI and run aws ecs execute-command. Create a new file with touch backend_web_shell.sh && chmod 777 backend_web_shell.sh and the following content:
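The script might look like this sketch (the cluster, service, and container names are assumptions carried over from the previous parts):

```bash
#!/bin/bash
# Grab the ID of the running task and open an interactive shell in its container.

TASK_ID=$(aws ecs list-tasks \
  --cluster prod \
  --service-name prod-backend-web \
  --query "taskArns[0]" \
  --output text | cut -d "/" -f 3)

aws ecs execute-command \
  --cluster prod \
  --task "$TASK_ID" \
  --container prod-backend-web \
  --interactive \
  --command "/bin/bash"
```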

Run ./backend_web_shell.sh to shell into the task. If you run into trouble, check the Amazon ECS Exec Checker. This script verifies that your CLI environment and ECS cluster/task are ready for ECS Exec.

Run migrations and create a superuser from the task’s console:

$ ./manage.py migrate
$ ./manage.py createsuperuser
Username: admin
Email address:
Password:
Password (again):
Superuser created

After this, open the admin URL of your deployment and sign in with the credentials you just created.

Congratulations! We’ve successfully connected PostgreSQL to the ECS service. Now you can commit changes in the Terraform project and move to the next part.

In the next part, we’ll set up CI/CD with GitLab.

You can find the source code of backend and infrastructure projects here and here.

If you need technical consulting on your project, check out our website or connect with me directly on LinkedIn.
