In my previous articles, where I summarised my findings from Martin Kleppmann’s book Designing Data-Intensive Applications, I guided readers through database transactions and the different isolation levels they can provide.

A database transaction is nothing more than an abstraction: a way for applications to pretend that certain faults do not exist (think network failures, disk corruption, race conditions). Transactions let us greatly simplify the code we write, because we can treat several database write operations as a single logical unit of execution. …
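
To make that concrete, here’s a minimal sketch of wrapping two writes in a single transaction, assuming a Rails app with ActiveRecord and a hypothetical Account model (the details aren’t from the book):

    # Sketch only: both updates commit together, or neither does.
    # If either update! raises, ActiveRecord rolls the whole transaction back.
    ActiveRecord::Base.transaction do
      source.update!(balance: source.balance - 100)
      destination.update!(balance: destination.balance + 100)
    end

If the second write fails, the first is undone as well, which is exactly the “single logical unit” behaviour described above.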

It’s not common that I write an article with such a clickbaity (and, admittedly, tacky) title that makes a claim I so desperately hope turns out to be wrong. But hey, you’re reading this, aren’t you? 😉

The impetus for the words you’re about to read comes from various musings that have crept their way into my socially isolated mind after two months of lockdown and almost zero social interaction. At this point I’m worried I might have forgotten how to interact with another person. Will we ever shake hands again?

Since the crisis began, I’ve had almost a dozen people message me asking the same…

In the last two articles on transaction isolation levels (here and here) we’ve discussed the varying degrees to which a database isolates the data you’re operating on. We saw how read committed isolation protects our application against dirty reads and dirty writes, and how snapshot isolation protects us against non-repeatable reads and read skew. We saw how phantoms can infiltrate our transactions and lead us to execute writes that shouldn’t be allowed.

None of these problems would exist if users would just behave and wait for someone else to finish their request before submitting theirs. …
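
As a rough illustration of how an application asks the database for one of these isolation levels, here’s a hedged ActiveRecord sketch; the Account model is hypothetical, and support for the :isolation option depends on the database adapter:

    # Sketch only: request repeatable read for this transaction.
    # In PostgreSQL, repeatable read is implemented as snapshot isolation,
    # so reads inside the block see one consistent snapshot of the data.
    Account.transaction(isolation: :repeatable_read) do
      account = Account.find(1)
      # ... further reads and writes run at the requested isolation level ...
    end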

In the previous article we explored two forms of weak isolation: read committed and snapshot isolation. We went through multiple examples to show the problems an application can face when reading data that may or may not reflect the true state of the system.

We’re going to build on this topic before moving on to stronger forms of isolation. In the last article we focussed only on the race conditions that can occur when clients read data. In this article we’ll be focussing on some of the messy problems involved when clients write data.

Lost update problem

This is probably the best-known race condition: two clients read the same data at the same time, modify it, and then write it back to the database. One of the modifications is lost, because each client’s update is based on a read that doesn’t include the other’s change; whichever write lands second silently overwrites the first. …
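
Here’s a hedged Ruby sketch of the race and one common fix, assuming a hypothetical Counter model with an integer value column:

    # The race: two clients both read value = 10, add 1 in memory and write
    # back 11. Whichever UPDATE runs second overwrites the other, so one
    # increment is lost.
    counter = Counter.find(1)
    counter.update!(value: counter.value + 1)

    # One fix: take a row-level lock (SELECT ... FOR UPDATE) so that the
    # read-modify-write cycle is atomic with respect to other writers.
    Counter.transaction do
      counter = Counter.lock.find(1)   # other writers block until we commit
      counter.update!(value: counter.value + 1)
    end

Atomic database operations (for example UPDATE counters SET value = value + 1) or a compare-and-set check achieve the same thing without an explicit lock.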

One of the finest books I’ve had the pleasure of reading recently is Designing Data-Intensive Applications by Martin Kleppmann. The book has become renowned for its comprehensive review of distributed systems and its interrogation of the various trade-offs involved in design choices. It took me three months to complete, and a lot of time to soak up what it taught. In the following series of posts I’d like to summarise some of the key insights I’ve learnt from each chapter, starting with my favourite of them all: database transactions.

As we know, many things can go wrong with databases. …

Think back to just 3 months ago, in December 2019. Looking back it doesn’t even seem real anymore. I remember mulled wine, festive markets, cinnamon lattes, Christmas parties and succulent roast dinners. If you were to go back in time to visit yourself just 12 weeks ago and describe your current reality today, your past self would probably spurt out a gush of champagne in laughter. Imagine telling yourself that in the next 3 months, 3 billion people across the globe would be placed under a form of house arrest and that something as simple as going out for dinner with friends would be illegal. …

Recently a nasty bug slipped past our test suite. I usually see bugs like this as an opportunity to assess the value of our automated tests. This one was caught by a stakeholder rather than our CI build; that’s a mistake for us to learn from.

The regression came when we ran rubocop -a across the codebase, which touched around 6–7 files. One of them had a line of code like event.attendees.select(&:attended).count. One of Rubocop’s rules takes a line like this and simplifies it to event.attendees.count(&:attended).
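
That refactor is harmless for a plain Ruby array, but on an ActiveRecord association the two forms can behave differently. A hedged sketch of the difference (exact behaviour varies by Rails version):

    # Loads the attendees, keeps those where attended is truthy, then counts
    # the resulting Ruby array: the behaviour we actually wanted.
    event.attendees.select(&:attended).count

    # On an ActiveRecord relation, count is a SQL calculation; in some Rails
    # versions the block is silently ignored, so this runs SELECT COUNT(*)
    # and counts every attendee regardless of the attended flag.
    event.attendees.count(&:attended)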

A company I’ve been working with lately had a very niche problem: only 65% of the users who received a confirmation email were clicking the link to confirm their accounts. That left 35% of users who had registered but whom we couldn’t engage via marketing emails and the like.

To include somebody in our email marketing campaigns and newsletters, we require two things — we need their permission, and we need to validate that the email address they’ve given us actually works. We were finding that a lot of users would opt-in to email when they signed up via our website, but then wouldn’t click the link in the confirmation email we sent them. …

All too often we frame technical debt as a purely technical issue; the clue is in the name, after all. Technical debt describes the burden a software development team takes on when features are rushed through without proper design consideration. It may be incurred, for instance, by saving time and skipping the automated tests that would protect the software from future regressions. It’s faster and more efficient to ship features this way, but you pay for it (with ‘interest’!) in the future.

I want to tackle this subject from a different angle in this article, and explore it from a moral perspective. Too often we view technical debt as a necessary evil or a nuisance imposed on us by managers. We desperately want to write clean, elegant, poetic code but are constantly at the mercy of those damn business people and their feature wish list. …

I recently started fixing a bug that was raised in our user acceptance testing (UAT) phase. Luckily this bug hadn’t reached production, so it was caught in the nick of time.

The bug was in the confirmation emails that users were receiving from Salesforce when they went to sign up on the website. With bugs like this I try to be very quick in deciding whether it’s a code issue or an environment issue. Nothing is more frustrating than debugging lines of code for hours only to find out a URL configuration variable changed in the UAT environment. …

Adrian Booth
