
PostgreSQL Major Version Upgrade Guide

Streamlining Operations: The Path to PostgreSQL Major Version Upgrade

Anas Anjaria
5 min read · Mar 24, 2024

I recently had the opportunity to lead the major version upgrade of our production cluster, which manages approximately 2TB of critical data using PostgreSQL with TimescaleDB.

This milestone marks one of the proudest moments in my career 🚀. Previously, our upgrade process solely focused on the leader node, leaving replicas untouched.

Recognizing the potential risk posed by this approach, I led the effort to optimize the upgrade process. By extending the upgrade procedure to include existing replicas alongside the leader node, we mitigated potential vulnerabilities and significantly enhanced the stability and reliability of the cluster.

I take great pride in implementing this enhanced upgrade process, as it not only addressed past limitations but also laid the foundation for a more robust and reliable infrastructure to support our critical services.

I’m excited to share this journey with you 🙌.

Cluster overview

High-level overview of our PostgreSQL cluster

Our cluster, hosted on AWS, comprises three nodes each running the following components:

  1. PostgreSQL with TimescaleDB — Manages time-series data.
  2. Patroni — Ensures high availability with automatic replication and failover.
  3. HAProxy — Acts as a load balancer, distributing write workloads to a leader node and read workloads to replica nodes.

Dedicated etcd nodes, running on separate AWS EC2 instances, serve as the distributed configuration store (DCS) that Patroni uses to keep cluster state and coordinate leader election.

Now that we understand our cluster’s architecture, let’s dive into the upgrade process.

High-level overview

📝 This section provides a high-level overview of the process without going into technical details. Detailed technical explanations are provided later in the post.

Major version upgrades involve distinct steps for leader and replica nodes, including:

  1. Pre-upgrade processing.
  2. Upgrade a leader node.
  3. Upgrade a replica node.
  4. Post-upgrade processing.

Pre-upgrade processing

Before upgrading, ensure that the replica nodes can communicate with the leader node via SSH, and keep an additional node available so that you can fully roll back or recover from a failure.

Upgrade a leader node

Use PostgreSQL’s pg_upgrade tool for the leader node upgrade. The process involves the following steps (a command-level sketch follows this list):

  1. Disable all write activity on the leader node.
  2. Temporarily step Patroni down from managing the cluster.
  3. Shutdown database (or Postgres service).
  4. Initialize DB (initdb).
  5. Copy configuration files such as pg_hba.conf and postgresql.conf.
  6. Perform upgrade (pg_upgrade).
  7. Check the upgrade output for warnings; it may point out things you need to take care of after the upgrade.
  8. Adjust the Patroni configuration.
  9. Wipe the complete cluster state from the DCS (Distributed Configuration Store).
  10. Start Postgres service with target version.
  11. Start Patroni.

Congrats 🎉, you have successfully upgraded the leader node.
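For orientation, here is a minimal sketch of the commands behind these steps; the interactive script linked later in the post automates them. The 13 → 15 version numbers, directory layout, cluster name my-cluster, Patroni config path, and service names are placeholders for your environment, not the exact values used in our cluster.

#!/bin/bash
# Minimal sketch of the leader upgrade (13 -> 15). All paths and names are placeholders.

# Step 1: disable writes by forcing read-only transactions via Patroni's REST API.
curl --request PATCH --url http://localhost:8008/config \
  --header 'Content-Type: application/json' \
  --data '{"postgresql": {"parameters": {"default_transaction_read_only": "on"}}}'

# Step 2: put Patroni into maintenance mode and stop the agent on this node.
patronictl -c /etc/patroni.yml pause my-cluster
sudo systemctl stop patroni

# Step 3: stop the old server.
sudo -u postgres /usr/lib/postgresql/13/bin/pg_ctl -D /var/lib/postgresql/13 stop

# Step 4: initialize a fresh data directory for the target version.
sudo -u postgres /usr/lib/postgresql/15/bin/initdb -D /var/lib/postgresql/15

# Step 5: carry over configuration files.
sudo -u postgres cp /var/lib/postgresql/13/pg_hba.conf \
  /var/lib/postgresql/13/postgresql.conf /var/lib/postgresql/15/

# Steps 6-7: run the in-place upgrade (hard links instead of copies) and review its output.
cd /var/lib/postgresql
sudo -u postgres /usr/lib/postgresql/15/bin/pg_upgrade \
  --old-datadir /var/lib/postgresql/13 \
  --new-datadir /var/lib/postgresql/15 \
  --old-bindir /usr/lib/postgresql/13/bin \
  --new-bindir /usr/lib/postgresql/15/bin \
  --link

# Step 8: point the Patroni configuration (data_dir, bin_dir) at version 15.

# Step 9: wipe the old cluster state from the DCS; patronictl asks for confirmation.
patronictl -c /etc/patroni.yml remove my-cluster

# Steps 10-11: start Patroni again; it brings up PostgreSQL 15 and re-registers the cluster in the DCS.
sudo systemctl start patroni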

Upgrade a replica node

Unlike the leader node, replicas cannot be upgraded with pg_upgrade. The basic idea for upgrading an existing replica node is to synchronize its folder structure (the data and write-ahead log (WAL) directories) with the upgraded leader (see the sketch after this list).

  1. Shut down database (or Postgres service).
  2. Synchronize data and Write Ahead Log (WAL) directories.
  3. Adjust the Patroni configuration.
  4. Start up database (or Postgres service).
  5. Configure replication.

Congrats 🎉, you have successfully upgraded the replica node.
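To make the synchronization step concrete, here is a sketch of what such a replica upgrade boils down to, following the rsync recipe from the pg_upgrade documentation. It runs on the replica, reuses the SSH access set up later in this post, and assumes the same placeholder paths, versions, and service names as the leader sketch above.

#!/bin/bash
# Minimal sketch of the replica upgrade, run on the replica node.
# LEADER_IP and all paths are placeholders matching the leader sketch above.
LEADER_IP="..."

# Step 1: stop Patroni on the replica (this also shuts down its PostgreSQL instance).
sudo systemctl stop patroni

# Step 2: pull the old and new version directories from the leader in a single pass.
# Transferring both together preserves the hard links created by pg_upgrade --link,
# so the bulk of the data files is not copied over the network again.
sudo rsync --archive --delete --hard-links --size-only --no-inc-recursive \
  --rsync-path="sudo rsync" \
  "ec2-user@$LEADER_IP:/var/lib/postgresql/13" \
  "ec2-user@$LEADER_IP:/var/lib/postgresql/15" \
  /var/lib/postgresql/

# Steps 3-5: point the Patroni configuration at the version 15 binaries and data
# directory, then start Patroni; the node rejoins as a replica and resumes
# streaming WAL from the upgraded leader.
sudo systemctl start patroni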

Post-upgrade processing

Enable write activity on the leader node and rebuild statistics for tables.

Technical details

Close to zero downtime using in-place upgrade

Major version upgrades require downtime, but an in-place upgrade minimizes it. Using pg_upgrade with the --link option brings downtime close to zero, because only metadata is copied while data files are hard-linked rather than duplicated.

| **Tool**            | **What is copied**       | **Downtime needed** |
| ------------------- | ------------------------ | ------------------- |
| `pg_upgrade`        | Metadata and data files  | Downtime is needed  |
| `pg_upgrade --link` | Only metadata            | Close to zero       |

Source: Upgrading and updating PostgreSQL

Replica’s Upgrade Process — Synchronization of Folder Structure

This concept is easier to understand if you are familiar with how replication works.

High-level overview of replication

When incorporating a new replica node into our cluster, the leader node performs the following steps:

  1. Initial Data Replication: The leader node replicates everything except for WAL segments to the new replica.
  2. WAL Segment Replication: Subsequently, it replicates WAL segments to the replica.

Manually synchronizing the folder structure performs the same replication steps by hand, which is effectively how we upgrade the existing replica nodes.

Ensure Replica-Node Communication with Leader via SSH

We use rsync for the folder synchronization, which requires SSH communication between the replica and leader nodes.

You can use the following script to ensure successful communication.

#!/bin/bash

# Configuration
LEADER_IP="..."
LEADER_USER="..."

REPLICA_IP="..."
REPLICA_USER="..."

# generate key
ssh -i ~/.ssh/your.pem "$REPLICA_USER@$REPLICA_IP" \
'sudo ssh-keygen -t rsa -b 2048 -f /root/.ssh/id_rsa -N ""'

# download generated key on your machine
rsync -avz --progress --rsync-path="sudo rsync" \
-e "ssh -i ~/.ssh/your.pem" \
"$REPLICA_USER@$REPLICA_IP:/root/.ssh/id_rsa.pub" "$PWD/replica-ssh-key.pub"

# copying it to a leader node
cat replica-ssh-key.pub | ssh -i ~/.ssh/your.pem "$LEADER_USER@$LEADER_IP" \
'cat >> /home/ec2-user/.ssh/authorized_keys && echo "Key copied"'

⚠️ You need to run this script from your machine.
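As a quick sanity check (using the same placeholders as the script above), you can verify from your machine that the replica's root user can now reach the leader without a password prompt:

ssh -i ~/.ssh/your.pem "$REPLICA_USER@$REPLICA_IP" \
  "sudo ssh -o BatchMode=yes -o StrictHostKeyChecking=accept-new $LEADER_USER@$LEADER_IP echo leader-reachable"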

Importance of Temporarily Stepping Down Patroni

Patroni manages the cluster. It keeps track of the leader’s health and acts accordingly: when the leader becomes unhealthy, it performs a failover to ensure high availability.

Since the upgrade process makes the leader appear unhealthy, we temporarily step Patroni down so that it does not intervene; we are aware of the situation and do not want a failover in the middle of the upgrade.

Upgrade in Action

This section puts the workflow described above into practice with a concrete example.

Source code: https://github.com/anasanjaria/blogs/tree/main/postgres-upgrade

For demonstration, assume a cluster of two nodes: one leader and one replica, running PostgreSQL version 13.x and upgrading to version 15.x.

📝 For the sake of simplicity, I am running all the containers locally and persisting data in different directories.

Data for node-1 and node-2 is stored in data-1 and data-2, respectively.

Upgrading leader node

Execute the provided script to upgrade the leader node. The script guides through the process, requiring user input at various steps.

docker exec -it node-1 /opt/pgupgrade.sh

Upgrading replica node

Similarly, use the provided script to upgrade an existing replica node. Execute the script and follow the prompts for a successful upgrade.

./pgupgrade-replica.sh

Post upgrade process

Enabling write activity — let’s lift the write lock now by switching read-only mode off via Patroni’s REST API.

curl --request PATCH \
  --url http://localhost:8008/config \
  --header 'Content-Type: application/json' \
  --data '{
    "postgresql": {
      "parameters": {
        "default_transaction_read_only": "off"
      }
    }
  }'
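To confirm the change took effect, you can check the setting on the leader; the connection details here are placeholders:

psql -h localhost -U postgres -c 'SHOW default_transaction_read_only;'
# Expected output: off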

Rebuilding statistics — pg_upgrade does not carry optimizer statistics over to the new cluster, so rebuilding them is crucial for the performance of your production system. If you skip this step, you will end up with extremely slow queries.

Typically, the ANALYZE command rebuilds statistics, but it is extremely slow and therefore not practical for big databases.

vacuumdb is another tool for the same purpose and supports parallelism, which makes it the preferred choice for production systems.

/path/to/vacuumdb -d <DATABASE> \
-t <TABLE-1> -t <TABLE-2> ... -t <TABLE-n> \
--analyze-in-stages \
-j <CONCURRENT_CONNECTIONS>

💡TIP: Use as many concurrent connections as there are tables, since a single connection is used per table.
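For example, with a hypothetical database shop and three large tables orders, events, and metrics, the call could look like this; --analyze-in-stages first produces rough statistics quickly and then refines them, so queries become usable sooner:

/usr/lib/postgresql/15/bin/vacuumdb -d shop \
  -t orders -t events -t metrics \
  --analyze-in-stages \
  -j 3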

Conclusion

The successful upgrade of our production cluster marks a significant milestone in our journey towards maintaining a robust and reliable infrastructure.

By extending the upgrade process to include both leader and replica nodes, we’ve not only mitigated potential risks but also enhanced the stability and resilience of our critical services.

I hope you’ve found this journey as enlightening and empowering as I have. Here’s to smooth upgrades, robust clusters, and endless possibilities! 🚀

Until next time, happy upgrading! 😊
