Intro to certificate-based authentication in PostgreSQL

Evgenii Saichik
11 min read · Mar 30, 2024
A secured connection between elephant and user
This image was created with the assistance of DALL·E 2

Hi everyone! Laziness drives progress. Don't you agree? As for me, I do. Usually, when we develop our applications, we deal with databases. Here I want to make a small remark: in this story, by "database" I primarily mean PostgreSQL. So everything in this story applies to this RDBMS, whereas it might not be completely true for other RDBMSs like MariaDB or Microsoft SQL Server.

TL;DR feel free to use the source code for this story located on GitHub. It's tested with macOS Sonoma.

A standard flow of database interaction includes authentication and authorization. In PostgreSQL, authorization is implemented with access privileges, and authentication is tied to roles. (I'm gonna write about both of them later.)

When we develop our application, as typical lazy developers we prefer to minimize our manual interaction with different routines. Username and password authentication is exactly one of them. So, how can we not only get rid of this routine but also increase the security level? That’s where a certificate-based authentication type comes to the rescue.

When we talk about database authentication, we're able to use different kinds of it. They include, for example: password, peer, certificate, and others. Without any doubt password authentication is great. Especially when we use our favorite "qwerty" password. But let's take a closer look at certificate-based authentication.

You might have seen this kind of authentication earlier. Usually, it is used on GitHub and in other applications that require an SSH connection, and so on. From a bird's-eye view, certificate-based authentication means that the client transfers a certificate instead of other types of credentials to the server.

And here the technical part begins.

Let’s create our requirements.

  1. As a user, I want to have a PostgreSQL instance.
  2. As a user, I want to be able to log in to that instance and perform some data manipulations.
  3. As a user, I don’t want to deal with passwords of any kind.

N.B. I'm gonna use docker-compose because it looks reasonable. All the configuration adjustments can be applied to a local installation of PostgreSQL and pgAdmin with minimal changes.

Without further ado, let’s do it!

Initial setup.

  1. As I mentioned earlier, I'm gonna use docker-compose. So we need to install it. I believe you have already found the official installation instructions and followed them.
  2. Also, we need OpenSSL. I hope it won’t be a problem for you to install it.
  3. Of course, we need our favorite IDE or a text editor.

Now let’s take a brief overview of PKI. In the most general case, we have a three-layered architecture:

The three-layered architecture of the certificate chain

At the very top, we have a Root CA (certificate authority). Usually, it comes built into your operating system or some application. In this example, we're gonna use a self-generated Root CA. It can sign an Intermediate CA or a client certificate.

In the middle comes an optional part. An Intermediate CA can either exist or not. It's almost the same as the Root CA. An Intermediate CA contains information about the Root or Intermediate CA which signed it. One of the differences is related to the distribution format: not all Intermediate CAs come built-in, and some applications may require them to be packaged into a single bundle with the client certificate.

And finally, we have an end-entity certificate. This kind of certificate is used to confirm the counterparty’s identity and to establish a secured connection.

When a client receives the server certificate, it performs certificate validation. This process includes checking the whole certificate chain to find a trusted CA. If we establish a mutual TLS connection, a server performs the same action with the client certificate.
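
For example, the same chain check can be reproduced manually with OpenSSL against the certificates we're gonna generate later in this story (file names are illustrative):

# Check that server.crt chains up to a trusted CA listed in root.crt
openssl verify -CAfile root.crt server.crt
# If an intermediate were involved, it would be passed via -untrusted:
# openssl verify -CAfile root.crt -untrusted intermediate.crt server.crt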

Now it’s time to think about connections and message exchange in our system.

Given the previously mentioned docker-compose, it's pretty obvious:

  1. We have a PostgreSQL service.
  2. We have a pgAdmin service.
  3. We have a browser on the host operating system.

So the interaction diagram may look like this:

Network interaction diagram

Where:

1–2 — TLS channel establishment. During this step, our services exchange TLS certificates, perform mutual certificate validation, and establish a secured channel between each other. This channel is mTLS-secured.

3–4 — Request-response sequence. It’s a pretty standard flow. As per our requirement 2, the PostgreSQL service performs request authentication based on a client-provided TLS certificate. This flow is performed inside the channel established in step 1–2.

5–6 — TLS channel establishment. At this step, the pgAdmin service and a client (the browser) establish a secured channel based only on the server certificate, in contrast to step 1–2. This channel is TLS-secured.

7–8 — Just like steps 3–4, this is a standard request-response sequence inside a secured channel.

To briefly sum this up, we have a mutual TLS (mTLS) connection between our services and a TLS (HTTPS) connection between a browser and the pgAdmin service.

Now that we understand the interaction requirements, we can define the certificate-related requirements. For the sake of simplicity, I won't use intermediate CAs. So I'm gonna use only one root certificate to sign the underlying certificate(s). We need to create the following kinds of certificates:

  • A Root certificate (1) to sign our mTLS-related certificates.
  • Server certificate used by the PostgreSQL service, signed by the Root certificate (1).
  • Client certificates used by end entities (e.g. our users), signed by the Root certificate (1). We need as many certificates as there are users we want to authorize to use our database.
  • A Root certificate (2) to sign our HTTPS server certificate.
  • And finally, we need the HTTPS server certificate, signed by the Root certificate (2).

Now it’s time to highlight a few obvious things about TLS certificates:

  1. In practice, the certificate material usually consists of two parts: the certificate's private key and the certificate per se.
  2. In my example, I’m gonna configure PostgreSQL to perform authorization based on the certificate’s CN (common name). However, it’s possible to configure the database to use authorization based on other parts of the certificate’s DN (distinguished name) or even based on the full certificate’s DN.
  3. Apple Keychain as well as Safari don’t support RSA keys with size greater than 8192 bits. I’m gonna use a key length of 8192 bits both for compatibility and generation speed reasons.

As I mentioned the certificate’s CN and DN, it’s time to take a brief overview of these parts. A certificate’s CN is an attribute within the subject field of the X.509 certificate that typically specifies the domain name that the certificate is intended to secure. The CN can also contain the name of an individual or entity if the certificate is being used for something other than securing web traffic, such as for client or server authentication in non-web contexts.

The whole value of the subject field of the X.509 certificate is the certificate's DN. It's a unique identifier of the entity to which the certificate is issued. The DN may consist of the following components (an example follows the list):

  • C stands for the country.
  • ST stands for the state or province.
  • L stands for the locality or city.
  • O stands for the organization’s name.
  • OU stands for the organizational unit within the organization.
  • CN stands for the Common Name described earlier in the story.
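
For example, a complete DN assembled from these components might look like this (the values are purely illustrative):

C=US, ST=California, L=Example City, O=Example Org, OU=Databases, CN=application_db_rw_user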

Enough words! Let’s go do some scripting! I’m gonna start with extracts from the bash script intended to generate our certificates. You can find the whole bash script here.

First, I’m gonna define some common variables like algorithm settings (as I mentioned earlier it will be rsa:8192), default certificate validity duration, fields for the certificate’s DN, and a list of our database users:

Common variables setup
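
A minimal sketch of such a variable block (the exact names and values in the real script may differ, and the superuser name is my assumption):

# Key algorithm and certificate validity (illustrative values)
KEY_ALG="rsa:8192"
DAYS_VALID=365
# Default DN fields
DN_C="US"
DN_ST="California"
DN_L="Example City"
DN_O="Example Org"
DN_OU="Databases"
# Database users that need client certificates (superuser name is an assumption)
DB_USERS=("postgres" "application_db_rw_user" "application_db_ro_user")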

Now we can generate the Root CA for an mTLS connection. But before that, we need to combine the certificate's DN into a single-line string:

Root CA generation for mTLS connection
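
Roughly, this step might look like the following sketch (file names and the extension section name are illustrative; the full script is on GitHub):

# Combine the DN fields into a single-line subject string
ROOT_SUBJ="/C=${DN_C}/ST=${DN_ST}/L=${DN_L}/O=${DN_O}/OU=${DN_OU}/CN=mTLS Root CA"
# Generate a private key and a Certificate Signing Request for the Root CA
openssl req -new -newkey "${KEY_ALG}" -nodes -keyout root.key -out root.csr -subj "${ROOT_SUBJ}"
# Self-sign the CSR using the combined configuration and the v3_mtls_root extension section
openssl x509 -req -in root.csr -signkey root.key -out root.crt -days "${DAYS_VALID}" \
  -extfile <(cat /etc/ssl/openssl.cnf v3.cnf) -extensions v3_mtls_root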

What's going on in the code above? We generate a private key for our certificate and a Certificate Signing Request (CSR). After that, we use these files with a custom configuration to create the Root certificate.

I will briefly describe the custom configuration later in the story. You can always find a full version of the custom configuration here.

With the Root certificate and key, we can generate the mTLS server certificate that is gonna be used by our PostgreSQL service:

mTLS certificate generation for PostgreSQL service
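
A sketch of this step (the CN is assumed to match the hostname of the PostgreSQL service, e.g. the docker-compose service name; the extension section name is illustrative):

SERVER_SUBJ="/C=${DN_C}/ST=${DN_ST}/L=${DN_L}/O=${DN_O}/OU=${DN_OU}/CN=postgres"
# Generate a private key and a CSR for the PostgreSQL server certificate
openssl req -new -newkey "${KEY_ALG}" -nodes -keyout server.key -out server.csr -subj "${SERVER_SUBJ}"
# Sign the CSR with the Root CA generated in the previous step
openssl x509 -req -in server.csr -CA root.crt -CAkey root.key -CAcreateserial \
  -out server.crt -days "${DAYS_VALID}" \
  -extfile <(cat /etc/ssl/openssl.cnf v3.cnf) -extensions v3_mtls_server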

As you can see, the generation process is pretty similar to the one described earlier. But here we're using root.key and root.crt generated in the previous step to sign the certificate.

Finally, we’re gonna generate mTLS client certificates for our database users:

mTLS certificate generation for PostgreSQL users
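
A sketch of the loop over our database users (the CN of each certificate must match the database role name, because that's what PostgreSQL will use for authentication; the extension section name is illustrative):

for DB_USER in "${DB_USERS[@]}"; do
  CLIENT_SUBJ="/C=${DN_C}/ST=${DN_ST}/L=${DN_L}/O=${DN_O}/OU=${DN_OU}/CN=${DB_USER}"
  # Private key and CSR for the client certificate
  openssl req -new -newkey "${KEY_ALG}" -nodes -keyout "${DB_USER}.key" -out "${DB_USER}.csr" -subj "${CLIENT_SUBJ}"
  # Sign the CSR with the mTLS Root CA
  openssl x509 -req -in "${DB_USER}.csr" -CA root.crt -CAkey root.key -CAcreateserial \
    -out "${DB_USER}.crt" -days "${DAYS_VALID}" \
    -extfile <(cat /etc/ssl/openssl.cnf v3.cnf) -extensions v3_mtls_client
done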

Once again, the generation process is pretty similar to the one described earlier. The generation of HTTPS-related certificates (both the Root CA and the server certificate) is similar as well, so I'll skip this part here. As I mentioned earlier, feel free to take a look at and use the whole bash script from my GitHub.

Now let's take a brief look at our OpenSSL configuration files. The /etc/ssl/openssl.cnf is a standard configuration file, and the v3.cnf file contains the custom configuration part. I combine both of these files into a single one using the cat command and pass the result as the value of the -extfile parameter using process substitution. So we're interested only in the settings from the v3.cnf file. You can find the full content of this file here.

Cnf files have an INI-like structure, i.e. they consist of sections made up of key-value pairs, and values can contain references to other sections. For example:

v3.cnf example
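
The real file is on GitHub; as a minimal sketch, the root CA part of it might look like this:

[ v3_mtls_root ]
# The basicConstraints value references another section of the same file
basicConstraints = critical, @mtls_root_basic_constraints
# The private key may sign other certificates and certificate revocation lists
keyUsage = critical, keyCertSign, cRLSign

[ mtls_root_basic_constraints ]
CA = true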

In the given example we have the section v3_mtls_root, where the basicConstraints key references the mtls_root_basic_constraints section.

When we generate our certificates using the openssl command, we specify the concrete section name of the certificate extensions configuration file as the value of the -extensions parameter (e.g. -extensions v3_mtls_root).

In this example, I configured the v3_mtls_root extension section to mark the generated certificate as a root CA and to state that the certificate's private key can be used to sign other certificates or a certificate revocation list.

You can find full documentation related to the certificate extension configuration format on the official website. Also, there is a brilliant answer related to the certificate extension configuration for different purposes.

Now we have all the required certificates for application configuration. It’s time to configure our services to use them. Let’s start with PostgreSQL service configuration.

First, we have to configure the pg_hba.conf file. As per the documentation, this file controls client authentication. Our requirements are pretty straightforward:

  1. All local (i.e. using Unix-domain sockets) connections should be allowed.
  2. All SSL-encrypted TCP/IP connections should perform full client certificate validation.
  3. All other connections should be prohibited.
Custom client authentication configuration example
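
The actual file lives in the repository; a minimal sketch covering these three requirements might look like this (the method used for local connections is an assumption on my side):

# TYPE    DATABASE  USER  ADDRESS  METHOD
# 1. Allow all local (Unix-domain socket) connections
local     all       all            trust
# 2. SSL TCP/IP connections authenticate with a client certificate
#    (the cert method implies clientcert=verify-full)
hostssl   all       all   all      cert
# 3. Reject everything else
host      all       all   all      reject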

For production usage, you'll probably prefer to use an mTLS connection combined with password authentication. To implement this you can simply replace the hostssl configuration line of pg_hba.conf with:

hostssl all all all scram-sha-256 clientcert=verify-full

So the user will be asked to provide a valid certificate to establish a secured connection with the server (and vice versa) and to input a password.

As the next step we have to configure SSL on the PostgreSQL side. It includes configuration of the following parameters: ssl, ssl_cert_file, ssl_key_file, ssl_ca_file and hba_file. Obviously, we're gonna set the ssl value to on and provide an actual value for every configured *_file parameter inside the container (see the volumes mapping explanation later in this story).

After that we have to configure the PostgreSQL service environment variables: POSTGRES_USER, POSTGRES_PASSWORD and POSTGRES_DB, as I don't want the database name to be the same as the POSTGRES_USER value. You can find a configuration example later in the story.

Now we can create database users, related authorities and some database structure. As you have seen earlier, we're gonna create a user with read-only access to our database (application_db_ro_user) and a user with read-write access to our database (application_db_rw_user).

Before we proceed with this step, let's look at the database role model. PostgreSQL doesn't have a separate entity like a user. A role with login permission is a kind of user. Roles can be combined into a tree using inheritance, but I'll skip this part for simplicity and I'll use inheritance only to "associate" a user with a specific role.

So let's create a role with read-write authorities to our database and a user with this role's authorities:

Creating a user with read-write authorities
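
A sketch of this script (the database name application_db and the exact set of privileges are illustrative):

-- Group role without login permission
CREATE ROLE application_db_rw NOLOGIN;

-- Read-write privileges (database name and privilege set are illustrative)
GRANT CONNECT ON DATABASE application_db TO application_db_rw;
GRANT USAGE ON SCHEMA public TO application_db_rw;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO application_db_rw;

-- Login role ("user") inheriting the group role's privileges; no password is needed
-- because authentication is performed with the client certificate
CREATE ROLE application_db_rw_user LOGIN INHERIT;
GRANT application_db_rw TO application_db_rw_user;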

In the script above we create a role with the name application_db_rw with no login permission and grant related privileges to the role. After that, we create a role with the name application_db_rw_user with login permission which also inherits all the permissions from the application_db_rw role.

Creating a role with read-only authorities as well as creating a related user is the same. So I’ll skip this part in the text of the story.

Finally, we have to configure ports and volumes mapping. The PostgreSQL service listens on container port 5432 by default, and we have to provide an explicit port mapping to our machine. In this example I'm gonna propagate the POSTGRES_LOCAL_PORT environment variable value or use port 54320 if the value is not defined: ${POSTGRES_LOCAL_PORT:-54320}:5432. And a bit later I'll explain the volumes mapping part.

Our service part for PostgreSQL may look like this:

Part of PostgreSQL service definition in docker-compose.yml
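
As a rough sketch (the image tag, host paths and default values below are my assumptions, not an exact copy of the repository):

services:
  postgres:
    image: postgres:16
    command: >-
      postgres
      -c ssl=on
      -c ssl_cert_file=/etc/postgresql/certs/server.crt
      -c ssl_key_file=/etc/postgresql/certs/server.key
      -c ssl_ca_file=/etc/postgresql/certs/root.crt
      -c hba_file=/etc/postgresql/pg_hba.conf
    environment:
      POSTGRES_USER: ${POSTGRES_USER:-postgres}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-changeme}
      POSTGRES_DB: application_db
    ports:
      - "${POSTGRES_LOCAL_PORT:-54320}:5432"
    volumes:
      # Persist database content and logs on the local machine
      - ./volumes/postgres/data:/var/lib/postgresql/data
      - ./volumes/postgres/logs:/var/log/postgresql
      # Server certificate, key and the mTLS root CA (read-only inside the container;
      # the key file must have restrictive permissions, e.g. 0600)
      - ./certs/server.crt:/etc/postgresql/certs/server.crt:ro
      - ./certs/server.key:/etc/postgresql/certs/server.key:ro
      - ./certs/root.crt:/etc/postgresql/certs/root.crt:ro
      # Custom client authentication configuration
      - ./config/pg_hba.conf:/etc/postgresql/pg_hba.conf:ro
      # Initialization scripts executed on the very first start of the service
      - ./init:/docker-entrypoint-initdb.d:ro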

Let's take a look at this service definition. As I mentioned earlier, I provide values for the ssl-related parameters (ssl, ssl_cert_file, ssl_key_file and ssl_ca_file), and I also provide a custom location for the pg_hba.conf file via the hba_file parameter.

Also, we have a few objects in the volumes mapping section. It's time to take a closer look at them. The mappings for the server certificate, key, root CA and pg_hba.conf correspond to the configuration parameters above. The :ro modifier means that the content of such a mapping can't be modified from inside the container. Two more mappings mean that I persist the database content and logs on the local machine. Finally, the last mappings are related to database initialization during the very first start of the service.

So far so good: we have all the required certificates and the PostgreSQL service definition. Our next step is to configure the pgAdmin service.

First, let's start with the environment variables configuration. We have to specify either values for PGADMIN_DEFAULT_EMAIL and PGADMIN_DEFAULT_PASSWORD or a value for PGADMIN_DEFAULT_PASSWORD_FILE (it's a requirement of the pgAdmin docker image). Also, we have to enable TLS by setting the PGADMIN_ENABLE_TLS value to true. And a couple more values to set up: PGADMIN_CONFIG_SERVER_MODE and PGADMIN_CONFIG_MASTER_PASSWORD_REQUIRED, both of which should have a "False" value.

So our service definition for the pgAdmin service may look like this:

Part of pgAdmin service definition in docker-compose.yml
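
Again as a sketch, sitting next to the postgres service in the same services section (the image tag and container paths are my assumptions, and the client certificate paths must match what we later put into servers.json):

  pgadmin:
    image: dpage/pgadmin4:8
    environment:
      PGADMIN_DEFAULT_EMAIL: ${PGADMIN_DEFAULT_EMAIL:-admin@example.com}
      PGADMIN_DEFAULT_PASSWORD: ${PGADMIN_DEFAULT_PASSWORD:-changeme}
      PGADMIN_ENABLE_TLS: "True"
      PGADMIN_CONFIG_SERVER_MODE: "False"
      PGADMIN_CONFIG_MASTER_PASSWORD_REQUIRED: "False"
    ports:
      # pgAdmin listens on port 443 inside the container when TLS is enabled
      - "${PGADMIN_LOCAL_PORT:-5050}:443"
    volumes:
      # Persist pgAdmin data on the local machine
      - ./volumes/pgadmin:/var/lib/pgadmin
      # Pre-configured server connections
      - ./config/servers.json:/pgadmin4/servers.json:ro
      # HTTPS certificate and key for the web interface
      - ./certs/https/server.crt:/certs/server.cert:ro
      - ./certs/https/server.key:/certs/server.key:ro
      # Client certificates and the mTLS root CA for database connections
      - ./certs/postgres.crt:/certs/clients/postgres.crt:ro
      - ./certs/postgres.key:/certs/clients/postgres.key:ro
      - ./certs/application_db_rw_user.crt:/certs/clients/application_db_rw_user.crt:ro
      - ./certs/application_db_rw_user.key:/certs/clients/application_db_rw_user.key:ro
      - ./certs/application_db_ro_user.crt:/certs/clients/application_db_ro_user.crt:ro
      - ./certs/application_db_ro_user.key:/certs/clients/application_db_ro_user.key:ro
      - ./certs/root.crt:/certs/clients/root.crt:ro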

Let's take a look at the volumes mapping. I mount the client certificates for the superuser, application_db_rw_user and application_db_ro_user, and I also provide the root mTLS CA certificate. As I described in the requirements earlier, I want to have a secured connection between the web interface of pgAdmin and a browser, so I provide the required HTTPS certificates and configure the corresponding port mapping. Another mapping means that I persist the pgAdmin volume data on the local machine. Finally, I have a mapping for the server settings: the servers.json file.

As I already know all the required settings, it’s not a problem at all to create a servers.json file for all users. Here is an example configuration for one of the created users:

Server configuration example for superuser
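
A sketch of such an entry for the superuser (the host matches the docker-compose service name and the certificate paths match the volumes mapping above; the database and user names are illustrative):

{
  "Servers": {
    "1": {
      "Name": "application_db (superuser)",
      "Group": "Servers",
      "Host": "postgres",
      "Port": 5432,
      "MaintenanceDB": "application_db",
      "Username": "postgres",
      "SSLMode": "verify-full",
      "SSLCert": "/certs/clients/postgres.crt",
      "SSLKey": "/certs/clients/postgres.key",
      "SSLRootCert": "/certs/clients/root.crt"
    }
  }
}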

Finally, we have all required components to start an application:

  1. We have all the required certificates.
  2. We have a configuration for the PostgreSQL service.
  3. We have a configuration for the pgAdmin service.
  4. We have a docker-compose file.

After starting docker-compose we have a PostgreSQL instance, by default located at port 54320, and a pgAdmin instance, by default located at port 5050.
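
If you prefer the command line to pgAdmin, the same client certificates work with psql too (paths are illustrative; verify-full would additionally require the server certificate to match the hostname you connect to):

# The private key must not be world-readable, otherwise libpq refuses to use it
chmod 600 certs/application_db_rw_user.key
psql "host=localhost port=54320 dbname=application_db user=application_db_rw_user \
  sslmode=verify-ca sslcert=certs/application_db_rw_user.crt \
  sslkey=certs/application_db_rw_user.key sslrootcert=certs/root.crt"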

I hope you enjoyed the story. We generated certificates, configured the services, and voilà: with the help of docker-compose we have everything up and running.
