Towards Maintainable Elixir: The Anatomy of a Core Module

Saša Jurić (author of Elixir in Action) · Very Big Things · Mar 22, 2021


So far in this series I’ve focused on the higher level code design. Today I’ll dive a bit deeper and show the code of a typical context (core) module in Very Big Things’ projects. This article will repeat a few points from the previous posts, but it’s worth consolidating this information in a single place.

Building a changeset

To keep things simple we’ll study a small synthetic example. Let’s say we’re building a forum backend, and we need to implement the create post feature. A post is described by the following properties: author, title, and body.

We can start by adding the create_post function in the top-level core module called Forum. Here’s the initial take:
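The original snippet isn't reproduced here, so the following is a minimal sketch of what this initial version might look like (the schema fields, validations, and module layout are assumptions):

```elixir
defmodule Forum do
  import Ecto.Changeset

  alias Forum.{Post, Repo}

  def create_post(author, params) do
    # The interface layer has already normalized the input,
    # so here we only perform business-level validations.
    %Post{author_id: author.id}
    |> cast(params, [:title, :body])
    |> validate_required([:title, :body])
    |> Repo.insert()
  end
end
```

Note that the changeset is built inline, directly in the body of the operation.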

This function follows the pattern outlined in the second part of this series. The interface layer is responsible for normalizing the data, while the core is responsible for business level validations.

The first important point in this example is that the changeset builder function is inlined in the body of the core operation. This is another instance where we depart from the “blessed way” promoted by the official Ecto and Phoenix docs: we don’t keep public changeset functions in the schema module. This approach consolidates the parts which are logically tightly coupled. The changeset building logic is typically needed by a single context function, or occasionally by a couple of related context functions (e.g. update and create).

Consequently, our schema modules usually contain very little logic, mostly an occasional function which returns a value that can be computed from the schema fields (including its associations). For example, if in an online shopping system we have two schemas, Order and OrderItem, where each item has the fields price and quantity, both schema modules could contain the total_price/1 function. Usage of Ecto in schema modules is not allowed, which is enforced by the boundary tool.

Reusing the builder logic

We usually start by keeping the changeset builder code directly in the context function. However, if the logic becomes more complex, or if the changeset builder code needs to be shared between multiple functions, the builder code can be extracted into a separate private function. We avoid creating public changeset builder functions, because this leads to weakly typed abstractions which return overly vague free-form data.

Let’s see an example. We’ll expand our code by adding support for the edit_post operation. We’ll start by refactoring the existing code, moving most of the logic to the private helper called store_post:
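A sketch of this refactoring, under the same assumed fields as before:

```elixir
def create_post(author, params),
  do: store_post(%Post{author_id: author.id}, params)

defp store_post(post, params) do
  post
  |> cast(params, [:title, :body])
  |> validate_required([:title, :body])
  # insert_or_update covers both the create and the edit case
  |> Repo.insert_or_update()
end
```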

With these changes in place, edit_post can reuse the store logic:
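For example, a first version without any authorization checks might look like this (a sketch; error handling is deliberately omitted at this stage):

```elixir
def edit_post(post_id, params),
  do: store_post(Repo.get!(Post, post_id), params)
```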

With chaining

Typically our public core mutator functions return {:ok, result} | {:error, reason}. Since most of these functions need to perform multiple internal operations, they usually end up being implemented as a with chain. Let’s see this in action. Suppose we want to introduce a constraint that a post can only be edited by its author, a moderator, or an admin. Here’s the first take:
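A sketch of this first take, with the two helper functions it requires (the role names and fields are assumptions):

```elixir
def edit_post(editor, post_id, params) do
  with {:ok, post} <- fetch_post(post_id),
       :ok <- validate_post_editor(post, editor),
       do: store_post(post, params)
end

defp fetch_post(post_id) do
  case Repo.get(Post, post_id) do
    nil -> {:error, :not_found}
    post -> {:ok, post}
  end
end

defp validate_post_editor(post, editor) do
  if post.author_id == editor.id or editor.role in [:moderator, :admin],
    do: :ok,
    else: {:error, :unauthorized}
end
```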

This style allows us to clearly and concisely describe the main flow of the operation. On the flip side, it leads to a proliferation of micro-functions such as fetch_post and validate_post_editor. To be clear, writing small, even micro, functions is not discouraged, as long as they assist with code clarity. Hiding a complicated or cryptic expression behind an explanatory name can often help explain the intention better. But in this example the additional functions are introduced purely for mechanical reasons. We need them so we can normalize the results into a shape that can be used in the with expression. Consequently, the logic of the operation becomes too fragmented, forcing the reader to excessively jump back and forth.

As it turns out, with a couple of small general-purpose helper functions the entire implementation of the edit operation could be consolidated in a single function. Let’s see how we did it.

First, we’ve expanded our repo module with a set of fetch functions, such as fetch, fetch_by, and fetch_one, which behave similarly to the existing get_* functions, except they return the result as {:ok, result} | {:error, reason}.

Here’s a simplified implementation of fetch_one:
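A sketch, defined inside the repo module where Ecto’s one/1 is available:

```elixir
@spec fetch_one(Ecto.Queryable.t()) :: {:ok, struct} | {:error, :not_found}
def fetch_one(queryable) do
  case one(queryable) do
    nil -> {:error, :not_found}
    record -> {:ok, record}
  end
end
```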

The functions fetch and fetch_by can now be implemented on top of fetch_one.
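For example (a sketch, assuming `import Ecto.Query` in the enclosing repo module):

```elixir
def fetch(schema, id), do: fetch_by(schema, id: id)

def fetch_by(queryable, clauses) do
  queryable
  |> where(^Enum.to_list(clauses))
  |> fetch_one()
end
```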

In addition, we’ve created a small helper function called validate, which converts a boolean into :ok | {:error, reason}:
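The implementation is a one-liner:

```elixir
@spec validate(boolean, reason) :: :ok | {:error, reason} when reason: var
def validate(condition, reason),
  do: if(condition, do: :ok, else: {:error, reason})
```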

Many of our validations return {:error, :unauthorized} on failure, so we’ve added another small helper called authorize/1:
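This helper simply delegates to validate:

```elixir
def authorize(condition), do: validate(condition, :unauthorized)
```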

With these helpers in place, the edit operation can be expressed as:
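A sketch of the consolidated version (the role names remain assumptions):

```elixir
def edit_post(editor, post_id, params) do
  with {:ok, post} <- Repo.fetch(Post, post_id),
       :ok <-
         authorize(
           post.author_id == editor.id or
             editor.role in [:moderator, :admin]
         ),
       do: store_post(post, params)
end
```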

Arguably, the top-level flow is as concise as the previous version, but it doesn’t require creating additional private functions. Consequently, a reader can see the entire logic in a single place. Empirically we’ve established that these small general-purpose helpers allow us to significantly reduce the number of micro-functions and improve the reading experience.

Authorization

The previous example demonstrates another important part of our design approach: we typically deal with authorization inside the core. Whether some action is permitted or not most often doesn’t depend on the external interface, such as REST or GraphQL. Therefore, authorization is a core concern.

In addition, we avoid using 3rd party authorization libraries such as Bodyguard or Canada. We’ve established that such libraries don’t add significant value, and they might in fact lead to an overly fragmented code by forcing the developer to move the authorization logic into separate functions or modules. Our view of the world is that authorization is just another business-level constraint, and therefore it belongs together with other business-level validations. Mechanically, we treat authorization as an implementation-level conditional, avoiding the pattern of a separate “policy layer”, which usually complicates the code with little to no benefits.

Sometimes a dedicated policy abstraction might be needed. For example, let’s say that in our forum backend we need to support custom roles, allowing the superuser (admin) to create roles and assign permissions. This will require a set of database tables and a decision logic that, given an (account, operation) combination, returns true (authorized) or false (unauthorized). Such logic is most likely best placed in its own core sub-boundary (e.g. Forum.Policy) which will be used by other core boundaries.

Transactions

Often a context operation must perform multiple database operations, so it needs to run inside a transaction. There are two options for doing this: Repo.transaction(fn -> … end) or Ecto.Multi. Empirically we’ve found that both of these options introduce some amount of noise. The transaction function requires a manual rollback, which means that we need to add an else clause to the with chain. On the other hand, multi operations add a significant amount of noise in the shape of Multi.some_function(name, …). Finally, in both cases we need to unwrap the result of Repo.transaction to normalize it into {:ok, result} | {:error, reason}.

To reduce this noise, we’ve built our own helper called Repo.transact that works as follows:

  1. It takes a lambda, and runs it inside a db transaction.
  2. If the lambda returns {:ok, result}, the transaction is committed.
  3. If the lambda returns {:error, reason}, the transaction is rolled back.
  4. Either way, transact returns the result of the lambda, so there’s no need for unwrapping.

Let’s see it in action. Suppose that while the post is stored we also want to send notifications to the mentioned users. Typically, this will require inserting one database entry for each notification, so we can fulfill at-least-once delivery guarantees. Since the notification needs to be sent when the post is stored, regardless of whether it is created or edited, the best place for hosting this logic is the store_post/3 function:
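A sketch of this version. Here store_post gains a third argument, the acting user, bringing it to the store_post/3 arity mentioned above; the notification helper’s name and shape are assumptions:

```elixir
defp store_post(author, post, params) do
  Repo.transact(fn ->
    with {:ok, post} <- store_post_record(post, params),
         # one notification row per mentioned user, inside the same transaction
         :ok <- create_mention_notifications(author, post),
         do: {:ok, post}
  end)
end
```

If any step returns {:error, reason}, transact rolls the whole transaction back and returns that error unchanged.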

For the sake of brevity, the actual storing logic is pushed deeper into store_post_record. Given its current size and the size of the remaining code, it would also be fine if this logic was inlined in store_post.

Let’s see a simplified sketch of the transact function:
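The sketch relies on transaction/2 and rollback/1, which are available in the enclosing repo module via use Ecto.Repo:

```elixir
def transact(fun, opts \\ []) do
  transaction(
    fn ->
      case fun.() do
        # commit and return {:ok, result}
        {:ok, result} -> result
        # rollback; transaction/2 then returns {:error, reason}
        {:error, reason} -> rollback(reason)
      end
    end,
    opts
  )
end
```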

This function is bundled together with the previously mentioned fetch helpers in a module called VBT.Repo which is a part of our common internal library. To make these functions available in the repo, we replace use Ecto.Repo with use VBT.Repo. This change is performed automatically by our custom project generator.


Final thoughts

This post showcased our lower-level design approach. The key idea is that we’re consolidating things which are naturally tightly coupled, such as changeset builders, authorization, and repo operations. In simpler cases this entire logic can reside in a single function. Otherwise different parts can be extracted into separate internal functions.

Inside our core functions we try to express the flow using the with chain, or the |> pipeline in the less frequent cases where we don’t need to return an error. Utility functions such as validate and authorize, and repo extensions such as the fetch functions and transact, help us reduce the number of micro-functions.

As more functionality is added we look for opportunities to split the module. For example, if we end up supporting additional post-oriented operations, we could move the related code to the Forum.Post module, which would expose functions such as create, edit, like, flag, etc. The same module would also contain the query functions for fetching posts.

If Forum.Post becomes large, we need to consider further splitting the module. Broadly speaking, a large module with a large public API is a potential candidate for a “vertical” split, where a single abstraction is replaced by two (or more) peer abstractions. On the other hand, if the module is large but its API is small, extracting some logic into an internal sub-abstraction of the boundary is possibly a better option.