Event Sourcing with PostgreSQL

Event Sourcing is an architectural pattern that stores all changes to application state as a sequence of events, and then sources the current state by replaying the events.

Not only can we query these events, we can also use the event log to reconstruct past states at arbitrary points in time, and as a foundation for adjusting state to cope with retroactive changes.


In a recent discussion about event sourcing and databases I suddenly remembered:

“PostgreSQL is AWESOME”

The fresh realisation of PostgreSQL’s awesome in turn led me to ask:

“What would an event sourced system look like if we pushed as much as possible to the database itself?”

And subsequently I was led down a rabbit hole of experimentation and wonder.

The Approach

Events are stored in the database and projections are generated immediately using triggers and functions.

The basic flow of action is:

 |> after insert trigger
 |> trigger function 
 |> projection function
 |> projection

For reasons that will become clear, we use a trigger function to wrap another function that generates the projection.

There are a few advantages to this model:

  1. Triggers ensure the projections are always up to date;
  2. Projections occur in the transaction context of the initial event insert; and
  3. We do not lose the ability to replay the event stream, reusing the projection logic (more on this later)


Events are stored in a simple events table.

CREATE TABLE "events" (
  "id" serial PRIMARY KEY,
  "uuid" uuid NOT NULL,
  "type" text NOT NULL,
  "body" jsonb NOT NULL,
  "inserted_at" timestamp(6) NOT NULL DEFAULT statement_timestamp()
);

A couple of things to note:

a) we assume that all objects/entities in the system have a globally unique identifier or uuid. The uuids probably need to be managed outside of the events table.

b) all events have a type, eg ‘user_create’ or ‘post_delete’

c) event data is stored as json using a jsonb column (postgresql is awesome)
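One way to manage uuids outside the events table is a small registry table. A minimal sketch, assuming the uuid-ossp extension for generation (the entities table itself is hypothetical, not part of the sample code):

```sql
create extension if not exists "uuid-ossp";

-- hypothetical registry: each entity gets its identifier here first,
-- and events reference that uuid
create table entities (
  uuid uuid primary key default uuid_generate_v4()
);
```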

With the structure in place, we can store events in the table.

Here we have an example event, tracking an update to the name of the user identified by a specified uuid.

insert into events (type, uuid, body)
values ('user_update', '11111111-1111-1111-1111-111111111111', '{"name": "Alice"}');

Projection Functions

A projection function does the actual work of handling the event data and mapping to the appropriate projection.

For example, in the case of the user_update event created above, we would like to take the name value from the event body and update a read-only record of the user.

We can create functions in PostgreSQL to do this.

Assuming we have a users table with a name and uuid identifier for users, the following function updates a user record in the table based on the `user_update` event:

create or replace function fn_project_user_update(uuid uuid, body jsonb) returns void
security definer
language plpgsql as $$
begin
  update users
     set name = body->>'name', updated_at = now()
   where users.uuid = fn_project_user_update.uuid;
end;
$$;

As PostgreSQL is awesome, we can pull the data from the json body using the body->>'name' syntax.


Once we have a projection function, we can wrap it in a trigger to have it called on insert. Triggers in PostgreSQL can be conditional, so we can ensure a trigger is only executed if a relevant event type has been inserted.

create or replace function fn_trigger_user_update() returns trigger
security definer
language plpgsql
as $$
begin
  perform fn_project_user_update(new.uuid, new.body);
  return new;
end;
$$;

create trigger event_insert_user_update after insert on events
for each row
when (new.type = 'user_update')
execute procedure fn_trigger_user_update();

The trigger is simply a wrapper around the projection function, and passes the new data through.

More complex events can be modeled as calls to multiple projection functions as required.
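As a sketch, a hypothetical user_delete event might fan out to two projection functions, one removing the user record and another cleaning up related records (fn_project_user_delete and fn_project_user_posts_delete are assumed here, not defined in this post):

```sql
create or replace function fn_trigger_user_delete() returns trigger
security definer
language plpgsql
as $$
begin
  -- both projections run in the transaction of the original event insert
  perform fn_project_user_delete(new.uuid, new.body);
  perform fn_project_user_posts_delete(new.uuid);
  return new;
end;
$$;
```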

Replaying the Event Stream

Using projection functions means that events can be handled outside the table trigger mechanism, and at any point the events can be replayed, simply by calling the function and passing the correct identifier and data.
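For example, the user_update projection function from earlier can be called directly, outside any trigger (the name value here is example data):

```sql
select fn_project_user_update(
  '11111111-1111-1111-1111-111111111111',
  '{"name": "Alice"}'::jsonb
);
```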

Using more PostgreSQL awesome, we can query and recreate an event stream.

The following recreates the user_* events for the user identified by the specified uuid.

do language plpgsql $$
declare
  e record;
begin
  for e in
    select type, uuid, body
      from events
     where uuid = '11111111-1111-1111-1111-111111111111'
     order by inserted_at asc
  loop
    case e.type
      when 'user_create' then
        perform fn_project_user_create(e.uuid, e.body);
      when 'user_update' then
        perform fn_project_user_update(e.uuid, e.body);
    end case;
  end loop;
end;
$$;

Any valid query can be used as the basis for the replay loop, with any combination of valid event handlers.
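For instance, a sketch that replays only user_update events within a time window (the cutoff date here is arbitrary):

```sql
do language plpgsql $$
declare
  e record;
begin
  for e in
    select uuid, body
      from events
     where type = 'user_update'
       and inserted_at >= timestamp '2016-01-01'
     order by inserted_at asc
  loop
    perform fn_project_user_update(e.uuid, e.body);
  end loop;
end;
$$;
```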

But wait, there’s more.

PostgreSQL is not limited to processing events iteratively: the projection could simply be a dynamic query on the events table (assuming performance is not an issue), a materialized view, or any other appropriate construct.

Here we create a materialized view to track the uuid and name of the users in the events table:

create materialized view users_view as
with t as (
  select *, row_number() over (partition by uuid order by inserted_at desc) as row_number
    from events
   where type = 'user_update'
)
select uuid, body->>'name' as name, inserted_at
  from t
 where row_number = 1;

select * from users_view;

In this case we assume that the most recent update event contains the correct user data, and we query to find the most recent user_update event for each user identified.
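One caveat: a materialized view is a snapshot, so it must be refreshed to reflect newly inserted events:

```sql
refresh materialized view users_view;
```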

PostgreSQL has a large number of tools worth exploring if you are considering an Event Sourcing model.

Sample code is on tobyhede/postgresql-event-sourcing.
