Published in Nerd For Tech
An opinionated Kotlin backend service - Database / Migration / ORM

This is the fourth part of my series on Kotlin based backend services.
If you missed the first three parts:

ORM

There are roughly three ways to interact with a database:

  • using SQL directly
  • using an SQL wrapping DSL
  • using ORM (object relational mapping)

These are of course just rough categories and there are all kinds of "in-between" solutions, but I don't want to elaborate on the different approaches here. My goal was not to create a framework for high-volume, low-latency transactions, so I decided to go with an ORM.

From the Java world there are some obvious contenders:

As usual I wanted a framework built for Kotlin with a clean and concise Kotlin DSL, and none of the Java frameworks fits that bill. AFAIK there are really only two ORM frameworks built with Kotlin in mind: Ktorm and Exposed.

Both offer a type-safe SQL-wrapping DSL as well as a DAO / entity API (the ORM). We'll ignore the SQL-wrapping part and focus on the ORM functionality. In my Ktor template I implemented both frameworks, and you can easily switch between the two in the application.conf file:

# possible values: [KTORM, EXPOSED]
orm = "KTORM"

Note that the API tests (article #5 of this series) run the tests twice, once using Ktorm, the second time using Exposed.

DB Connection

Connecting to the database is very simple with either of the frameworks:

// Exposed
org.jetbrains.exposed.sql.Database.connect(
    url,
    driver,
    user,
    password,
)

// Ktorm
org.ktorm.database.Database.connect(
    url,
    driver,
    user,
    password,
    logger = ConsoleLogger(LogLevel.DEBUG)
)

Both can also use a javax.sql.DataSource, as is usually the case in enterprise applications, e.g. to benefit from connection pooling. The syntax is identical for both frameworks: Database.connect(dataSource). A working code example including connection pooling (using HikariCP) can be found in the Ktor template.
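A minimal sketch of such a pooled setup, assuming HikariCP is on the classpath; the JDBC URL, credentials and pool size below are placeholders, not the template's actual values:

```kotlin
import com.zaxxer.hikari.HikariConfig
import com.zaxxer.hikari.HikariDataSource

// Build a pooled DataSource; all concrete values here are illustrative.
val dataSource = HikariDataSource(HikariConfig().apply {
    jdbcUrl = "jdbc:postgresql://localhost:5432/app"
    username = "app_user"
    password = "secret"
    maximumPoolSize = 10
})

// The same call works for both frameworks (shown here for Exposed):
org.jetbrains.exposed.sql.Database.connect(dataSource)
```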

Intermezzo: configuration in Ktor and dependency injection with Koin

Before moving on to the next topic, I’d like to elaborate on how Ktor handles configuration (including the database configuration we just used).

application.conf is usually the way to go with Ktor. There are predefined properties that determine the deployment, security etc., as you can see here. You can also use environment variables like ${PORT} which can be injected into the runtime environment, and you can provide default values for those variables using this syntax:

port = 5432
port = ${?DB_PORT}

The configuration can be read in a module as this.environment.config (this referring to the Application class):

environment.config.property("path.to.property")

It would be easy to expose the configuration as a "static" variable or a top-level function to every component in the application. We do however want the ability to modify the configuration, e.g. for testing purposes, and since I'm already using Koin as a dependency injection framework, I decided to leverage DI. I created an interface Config exposing some convenience methods to access properties and created a simple Koin binding:

single<Config> { ConfigImpl(environment.config) }

To get access to the configuration, a component implements the KoinComponent interface and does:

private val config by inject<Config>()

or in case of a function (which can’t implement KoinComponent):

val config = getKoinInstance<Config>()

With getKoinInstance being:

inline fun <reified T> getKoinInstance() =
    object : KoinComponent {
        val value: T by inject()
    }.value

Note that my template actually takes the configuration from HoconApplicationConfig(ConfigFactory.load()) instead of environment.config. The former seems to be more the Java way to do it but honestly I don’t know if there’s a difference, both seem to work as expected.

DB tables / entities

The syntax for defining database tables and entities is very similar for both frameworks:

Ktorm
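A rough sketch of what a Ktorm table/entity pair looks like (the names are illustrative, not necessarily those used in the template):

```kotlin
import org.ktorm.entity.Entity
import org.ktorm.schema.Table
import org.ktorm.schema.int
import org.ktorm.schema.varchar

// Ktorm entities are interfaces; the Entity.Factory companion enables
// the Customer { ... } creation syntax.
interface Customer : Entity<Customer> {
    companion object : Entity.Factory<Customer>()
    val id: Int
    var name: String
}

object Customers : Table<Customer>("customer") {
    // The column-to-property binding happens here, via bindTo
    val id = int("id").primaryKey().bindTo { it.id }
    val name = varchar("name").bindTo { it.name }
}
```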

Exposed
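The equivalent sketch for Exposed, again with illustrative names:

```kotlin
import org.jetbrains.exposed.dao.IntEntity
import org.jetbrains.exposed.dao.IntEntityClass
import org.jetbrains.exposed.dao.id.EntityID
import org.jetbrains.exposed.dao.id.IntIdTable

// IntIdTable provides the auto-incremented primary key implicitly
object Customers : IntIdTable("customer") {
    val name = varchar("name", 255)
}

// Exposed entities are regular classes; the binding happens here,
// via property delegation to the table columns
class Customer(id: EntityID<Int>) : IntEntity(id) {
    companion object : IntEntityClass<Customer>(Customers)
    var name by Customers.name
}
```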

  • Bind table columns to entity properties: Ktorm does the binding in the table definition (.bindTo{ it.propertyName }) while Exposed does it in the entity (by Customers.propertyName). IMO it makes more sense to bind the entity to the table and I also prefer Exposed’s delegation syntax.
  • Id definition: Both support primary keys with auto-increment or any other data type as primary key (without auto-increment of course). Exposed has a convenience class IntIdTable that eliminates the need to explicitly define the auto-incremented primary key.
  • Data types: Ktorm seems to support more data types:
    https://www.ktorm.org/en/schema-definition.html#SqlType
    https://github.com/JetBrains/Exposed/wiki/DataTypes
    I struggled a bit to get Postgres ENUMs to work properly (mostly with Exposed). Ktorm has a specific data type pgEnum in its Postgres support library (org.ktorm:ktorm-support-postgresql) that works out of the box. Exposed on the other hand requires defining a customEnumeration, which doesn't come out of the box and took me some time to find:
  • References: Exposed has a more concise syntax compared to Ktorm:
reference("account_id", Accounts)

compared to:

int("account_id").references(Accounts) { it.account }
  • Entities: in Ktorm entities are interfaces, which makes them hard to "re-use" e.g. as models for HTTP requests (you can't serialize/deserialize interfaces with kotlinx.serialization, for example). While data classes and regular classes have been supported since version 2.5 (see here), the limitations are too severe to make them a real alternative to interfaces (column binding is not available, entities need to be created manually, and the entity manipulation API is not available). With Exposed, entities are regular classes (though not data classes), which is definitely more convenient.
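The customEnumeration workaround for Postgres enums mentioned above can be sketched roughly like this, following the pattern documented in the Exposed wiki (enum, type and column names here are made up):

```kotlin
import org.jetbrains.exposed.dao.id.IntIdTable
import org.postgresql.util.PGobject

enum class CustomerStatus { ACTIVE, INACTIVE }

object Customers : IntIdTable("customer") {
    // "customer_status" must match the enum type created by the migration:
    // CREATE TYPE customer_status AS ENUM ('ACTIVE', 'INACTIVE');
    val status = customEnumeration(
        name = "status",
        sql = "customer_status",
        fromDb = { value -> CustomerStatus.valueOf(value as String) },
        toDb = { PGobject().apply { type = "customer_status"; this.value = it.name } }
    )
}
```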

Overall Exposed is the clear winner when it comes to the basics of table and entity definition!

Entity manipulation

All the manipulation code is encapsulated in the AccountDao.kt and CustomerDao.kt classes, so you can easily compare the two frameworks.

Ktorm: Account.dao / Customer.dao
Ktorm has the concept of an EntitySequence which is used to query entities. Inserting a new entity happens by adding it to the entity sequence; deleting and updating happen on the entity itself (after retrieving it from the sequence). Updates need to be explicitly written back using the flushChanges function.
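Sketched with made-up entity and table names (and assuming database is the connected org.ktorm.database.Database instance), the Ktorm flow looks roughly like this:

```kotlin
import org.ktorm.dsl.eq
import org.ktorm.entity.add
import org.ktorm.entity.find
import org.ktorm.entity.sequenceOf

val customers = database.sequenceOf(Customers)

// insert: add the entity to the sequence
customers.add(Customer { name = "Alice" })

// query, then update: mutate the entity and flush explicitly
val customer = customers.find { it.id eq 1 } ?: error("not found")
customer.name = "Bob"
customer.flushChanges()

// delete: on the entity itself
customer.delete()
```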

Exposed: Account.dao / Customer.dao
Exposed's manipulation API works off the entity itself: Entity.find() to query, Entity.new { } to insert new entities. Updates and deletes are performed on the entity directly, and there's no need to "sync" the changes with the database; Exposed takes care of that. One inconvenience is that every database operation needs an active transaction. If you don't wrap everything into a transaction { } call, it will fail with a "No transaction in context" error.
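The same flow with Exposed's DAO API, again with made-up names; note that everything sits inside the mandatory transaction { } block:

```kotlin
import org.jetbrains.exposed.sql.transactions.transaction

transaction {
    // insert
    val customer = Customer.new { name = "Alice" }

    // query
    val found = Customer.findById(customer.id) ?: return@transaction

    // update: plain assignment is enough, Exposed syncs it to the database
    found.name = "Bob"

    // delete
    found.delete()
}
```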

Overall, manipulation is straightforward in both frameworks and it's mostly a matter of taste which one you prefer. I find the need to wrap each operation into transaction { } with Exposed not just an inconvenience (making the code more verbose) but also a potential source of bugs.

Database Migration

Schema versioning and migration is a problem that needs a good solution. I decided to look into Liquibase and Flyway, and as far as I could tell they both do the job. Liquibase's advantage is that it supports XML, JSON and YAML for migration scripts on top of plain SQL. That helps in a heterogeneous environment with multiple database technologies (e.g. local development with H2 and testing/production using Postgres), where plain SQL would require separate migration scripts per database while Liquibase's technology-agnostic formats need only one. A good article comparing the two tools can be found here: Liquibase vs Flyway.

I ultimately decided to use Flyway. There's a free community edition whose two most severe limitations (IMO) are that it doesn't allow dry runs or undo (down migrations). After adding the Gradle dependency (org.flywaydb:flyway-core) all it needs is basically this piece of code:

Flyway
    .configure()
    .dataSource(jdbcUrl, username, password)
    .load()
    .migrate()

It will load the migration scripts from the resources folder and run the migration if needed.

That’s all. Make sure to stick to their versioning schema Vx__name.sql with two (sic!) underscores. Using a single underscore won’t work.
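To illustrate the convention, here's a small hypothetical helper (not part of Flyway itself) that checks a file name against the default versioned-migration pattern: V, a version (digits, optionally dot- or underscore-separated), exactly two underscores, a description, and the .sql suffix:

```kotlin
// Hypothetical validator mirroring Flyway's default naming convention
// for versioned migrations: V<version>__<description>.sql
val migrationNamePattern = Regex("""V\d+(?:[._]\d+)*__\w+\.sql""")

fun isValidMigrationName(name: String): Boolean =
    migrationNamePattern.matches(name)
```

A single underscore after the version (e.g. V1_create_tables.sql) fails the check, just as it makes Flyway ignore the script.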

Database Creation

Flyway only creates and migrates the schema but doesn't create the database itself. Initially I created the database manually following some tutorials and articles like this one: https://medium.com/coding-blocks/creating-user-database-and-adding-access-on-postgresql-8bfcd2f4a91e. Since my template is running in a Docker container, you also need to exec into the container to set everything up before Flyway can do its magic.

After having done that several times on different computers (time-consuming and error prone), I decided to automate the process and luckily found this answer: https://stackoverflow.com/a/61729115/534471.

The solution is basically to create an init-database.sh script (see here) and reference it in the docker-compose.yml file:

volumes:
  - ./init-database.sh:/docker-entrypoint-initdb.d/init-database.sh

No more manual creation of databases!

That’s it for part 4 of the series. If you enjoyed this, follow up with An opinionated Kotlin backend service - Testing.

As usual feel free to provide feedback. Happy coding!
