Testing PostgreSQL for Fun

Jonathan Fischoff
Jun 19, 2017 · 7 min read
Image copyright Lucina Lubin

When Programming is Not Fun

When the iteration loop is long, programming becomes frustrating.

Let’s say you have to change a low-level system that lacks a way to test it directly. To test the new code, you have to manually execute a complex series of time-consuming and onerous steps. When something goes wrong, all you know is that either your code is broken (for some unknown reason), or you exposed an existing issue in the monstrous system that depends on your change. It’s even worse when the code cannot be tested reproducibly, and you get different behavior with each run.

If that isn’t frustrating enough, imagine you are unable to implement the solution you envisioned. You can’t directly access the low-level system because a series of components — meant to abstract, simplify and create a safe alternative to your low-level system — are in the way. The components accomplish little of their stated purposes, but force your final design to be slower and more complex than necessary.

Finally, you learn that the dev environment you are testing in doesn’t match production’s behavior in crucial ways.

This is what it feels like to write database queries for many websites. It’s not fun. Let’s fix that.

In this post I’ll show how to make database programming fun by using packages that make testing easy. I’ll walk through an example of building a durable queue backed by PostgreSQL.

Isolated Temporary Databases

To make database programming fun, we need a quick way to iterate. We need to bring up instances of our database, PostgreSQL in my case, and directly test the queries we are writing.

One excellent option is pg_tmp. I’m going to utilize a Haskell analog I wrote called tmp-postgres and the test helper based on it: hspec-pg-transact.
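To give a feel for what tmp-postgres does on its own, here is a minimal sketch. The exact API has varied across versions (Temp.start’s argument list and connectionString’s type are assumptions from memory), so treat the names below as illustrative rather than authoritative; like the test helpers later, it requires initdb on the PATH:

```haskell
import qualified Database.Postgres.Temp as Temp
import Database.PostgreSQL.Simple
  (Only (..), close, connectPostgreSQL, query_)
import qualified Data.ByteString.Char8 as BSC
import Control.Exception (bracket)
import Control.Monad (void)

-- Sketch: start a throwaway postgres in a temp directory, run a
-- single query against it, and shut it down again.
main :: IO ()
main = do
  edb <- Temp.start []
  case edb of
    Left err -> error (show err)
    Right db -> do
      bracket
        (connectPostgreSQL (BSC.pack (Temp.connectionString db)))
        close
        $ \conn -> do
            [Only n] <- query_ conn "SELECT 1 + 1"
            print (n :: Int)
      void (Temp.stop db)
```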

hspec-pg-transact provides a helper for setting up a test suite that creates a temporary postgres process. Tests can use the optional itDB helper, which registers a test and runs its body in a single db transaction.

Here is a simple example of testing db queries with a temporary database.

describeDB migrate "Query" $
  itDB "work" $ do
    execute_ [sql|
      INSERT INTO things
      VALUES ('me') |]
    query_ [sql|
      SELECT name
      FROM things |]
      `shouldReturn` [Only "me"]

(You need to make sure that initdb is on the PATH. This is the case for brew on macOS, but on Ubuntu you must add /usr/lib/postgresql/VERSION/bin to the PATH.)

The full code, including import statements, is here. Let’s walk through a larger example.

Making postgresql-simple-queue

In one of my previous posts, I discussed an in-memory queue for sending emails. The problem with the queue, as Cale Gibbard pointed out, is that it’s not durable. However, PostgreSQL is durable, so let’s use it to make our queue durable, as Cale did.
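The queue needs a table to live in. The package ships its own migration; the sketch below reconstructs a plausible schema from the queries that follow (the uuid id and jsonb value columns are assumptions — ids come from randomIO and values are Aeson Values, so the real migration may differ):

```haskell
-- Sketch of the payloads schema, inferred from the queries below.
-- The actual migration in postgresql-simple-queue may differ.
migrate :: Connection -> IO ()
migrate conn = void $ execute_ conn [sql|
  CREATE TYPE state_t AS ENUM ('enqueued', 'locked', 'dequeued');

  CREATE TABLE payloads
    ( id          uuid        PRIMARY KEY
    , value       jsonb       NOT NULL
    , state       state_t     NOT NULL DEFAULT 'enqueued'
    , created_at  timestamptz NOT NULL DEFAULT clock_timestamp()
    , modified_at timestamptz NOT NULL DEFAULT clock_timestamp()
    );
|]
```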

We’ll add new payloads with enqueueDB. tryLockDB will grab the next payload for us to process. We’ll call dequeueDB on the payload once the processing is finished:

data Payload = Payload
  { pId         :: PayloadId
  , pValue      :: Value
  , pState      :: State
  , pCreatedAt  :: UTCTime
  , pModifiedAt :: UTCTime
  } deriving (Show, Eq)

enqueueDB :: Value -> DB PayloadId
enqueueDB value = do
  pid <- liftIO randomIO
  void $ execute
    [sql| INSERT INTO payloads (id, value)
          VALUES (?, ?)
    |]
    (pid, value)
  return $ PayloadId pid

tryLockDB :: DB (Maybe Payload)
tryLockDB = listToMaybe <$> query_
  [sql| UPDATE payloads
        SET state='locked'
        WHERE id in
          ( SELECT id
            FROM payloads
            WHERE state='enqueued'
            ORDER BY created_at ASC
            LIMIT 1
          )
        RETURNING id, value, state, created_at, modified_at
  |]

dequeueDB :: PayloadId -> DB ()
dequeueDB payloadId = void $ execute
  [sql| UPDATE payloads
        SET state='dequeued'
        WHERE id=?
  |]
  payloadId

All the operations occur in a DB monad, so we can compose them with other db operations and have them run in the same transaction. For instance, we could use this library to enqueue a payload and create a customer in a single transaction.
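As a concrete (hypothetical) example of that composition — createCustomerDB, welcomeEmail, and Customer below are made up for illustration; only enqueueDB comes from the library — a signup flow could pair customer creation with enqueueing a welcome email, so that if either step fails, both roll back:

```haskell
-- createCustomerDB is hypothetical; the point is that both actions
-- live in DB, so pg-transact runs them in one transaction.
signupDB :: Customer -> DB PayloadId
signupDB customer = do
  createCustomerDB customer                   -- e.g. INSERT INTO customers ...
  enqueueDB (toJSON (welcomeEmail customer))  -- from this library
```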

Back to Testing

We can utilize hspec-pg-transact to write our tests:

main :: IO ()
main = hspec spec

spec :: Spec
spec = describeDB init "Queue" $ do
  itDB "empty gives nothing" $
    tryLockDB `shouldReturn` Nothing
  itDB "enqueue/lock/dequeue" $ do
    theId <- enqueueDB $ String "!"
    Just Payload {..} <- tryLockDB
    pId `shouldBe` theId
    pValue `shouldBe` String "!"
    tryLockDB `shouldReturn` Nothing
    dequeueDB pId `shouldReturn` ()
    tryLockDB `shouldReturn` Nothing

We can load up our single test file, test/Database/QueueSpec.hs, and run it in isolation:

> :l test/Database/QueueSpec.hs
> :main
Database.Queue
  postgresql:///test?host=/tmp/tmp-postgres26345&port=57473
  empty gives nothing
  enqueue/lock/dequeue
Shutting Down
Finished in 1.8647 seconds
2 examples, 0 failures

Two seconds isn’t considered great in the unit-testing world, but for database testing we are in good shape. Not only are the tests fast, but we can also quickly iterate by reloading and rerunning a single file, all in single-digit seconds.

This actually scales quite well. The queries themselves take milliseconds; almost all of the time is taken for the temporary db startup, which we only need to do once.

Stress Tests

Simple tests are easy, but let’s write a more complicated stress test. Unlike the simple tests above, the stress test needs to run multiple transactions, so we can’t use the itDB test helper.

In this test we create 10 consumer threads that read from the queue and 1000 threads that enqueue messages:

it "stress queue" $ \testDB -> do
  let withPool' = withPool testDB
      elemCount = 1000 :: Int
      expected  = [0 .. elemCount - 1]
  ref <- newIORef []
  -- Make 10 queue consumer threads
  loopThreads <- replicateM 10 $ async $ fix $ \next -> do
    mpayload <- withPool' tryLock
    case mpayload of
      Nothing -> next
      Just Payload {..} -> do
        lastCount <- atomicModifyIORef ref $ \xs ->
          (pValue : xs, length xs + 1)
        withPool' $ \conn -> dequeue conn pId
        when (lastCount < elemCount) next
  -- Fork 1000 threads, each enqueueing its index
  forM_ [0 .. elemCount - 1] $ \i ->
    forkIO $ void $ withPool' $ flip enqueue $ toJSON i
  waitAnyCancel loopThreads
  Just decoded <- mapM (decode . encode) <$> readIORef ref
  sort decoded `shouldBe` sort expected

This test works, but it makes clear that there is an issue with our design: we have to poll the db constantly to see if there is new data.

Going Full Postgres

Luckily PostgreSQL has a solution for us, as lpsmith pointed out: NOTIFY and LISTEN. We make a new function, lock, that utilizes PostgreSQL’s pubsub feature:

notifyPayload :: Connection -> IO ()
notifyPayload conn = do
  Notification {..} <- getNotification conn
  unless (notificationChannel == "enqueue") $
    notifyPayload conn

lock :: Connection -> IO Payload
lock conn = bracket_
  (Simple.execute_ conn "LISTEN enqueue")
  (Simple.execute_ conn "UNLISTEN enqueue")
  $ fix $ \continue -> do
      m <- tryLock conn
      case m of
        Nothing -> do
          notifyPayload conn
          continue
        Just x -> return x

We extend enqueueDB to also NOTIFY our enqueue channel after it enqueues a payload:

enqueueDB :: Value -> DB PayloadId
enqueueDB value = do
  pid <- liftIO randomIO
  void $ execute
    [sql| INSERT INTO payloads (id, value)
          VALUES (?, ?);
          NOTIFY enqueue;
    |]
    (pid, value)
  return $ PayloadId pid

We can write a similar test for lock, and our loop becomes:

loopThreads <- replicateM 10 $ async $ fix $ \next -> do
  -- Blocks waiting for a new payload
  x <- withPool testDB lock
  lastCount <- atomicModifyIORef ref $ \xs ->
    (pValue x : xs, length xs + 1)
  withPool testDB $ \conn -> dequeue conn (pId x)
  when (lastCount < elemCount) next

Much nicer (ignoring the super small margins for mobile formatting :p). The final queue is packaged up as postgresql-simple-queue on hackage and github.

Fun is Hard to Write

The library went through a bunch of intermediate states before I pushed it to Hackage. I started with something simple, wrote stress tests (which thankfully found bugs), and added pubsub. Best of all, it was fun and not frustrating.

Don’t take my word for it: try it. pg-transact and hspec-pg-transact are on Hackage/Stackage.

Extra Credit: Email Queue Take 2

We can now write a better email-processing queue than the one discussed in my other post. First, we need a processing loop:

forever $ do
  payload <- withResource connectionPool lock
  sendEmail payload
  withResource connectionPool $ \conn ->
    dequeue conn (pId payload)

postgresql-simple-queue’s defaultMain provides this consumer loop for us and also handles command-line argument processing. We can easily make a queue consumer executable, like so:

main :: IO ()
main = do
  env <- newEnv Discover
  runResourceT $ runAWS env $
    defaultMain "aws-email-queue-consumer" $ \payload _ ->
      case fromJSON $ pValue payload of
        Success email -> do
          resp <- AWS.send $ makeEmail email
          logFailedRequest resp
        Error x -> throwIO $ userError $
          "Failed to decode payload as an Email: " ++ show x

See EmailQueue for the full example.

Hacker Noon is how hackers start their afternoons. We’re a part of the @AMI family. We are now accepting submissions and happy to discuss advertising & sponsorship opportunities.
