Setting up a testable environment for Open Banking data transmission: connecting two dockerized Elixir projects and testing requests between them

Demetrius Mânica
Trio HeadQuarters
7 min read · Jul 28, 2021


Here at Trio, we developers are very focused on having a well-configured test environment and a productive development environment. That gives us a safety net: we know when a newly developed feature breaks something else, and we can quickly pinpoint where the problem is and fix it ASAP.

Currently, we have a lot of separate projects that talk to each other via HTTP requests, but how can we test these requests during development and make sure the communication is working properly and the requests are doing what we really expect them to be doing?

So, suppose we have a dockerized project named trio-umbrella that wants to access an endpoint from a separate dockerized project named trio-hub-umbrella. How can we make requests between them, know the requests complete successfully, and write tests to make sure it is all working properly?

This article shows, step by step, how we did that in our projects.

I hope you like it. 🤘

Projects and environment configuration

Setting up our apps and environments

First, we need to set up our Docker container files.

We are using a similar Dockerfile for both our Elixir projects, as such:
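A minimal sketch of such a Dockerfile could look like this (the Elixir version and commands here are illustrative, not our exact file):

```dockerfile
# Illustrative base image; pin to your project's Elixir version
FROM elixir:1.11

# Install Hex and Rebar, the Elixir/Erlang build tools
RUN mix local.hex --force && \
    mix local.rebar --force

WORKDIR /app

# Copy the dependency manifests first so deps are cached between builds
COPY mix.exs mix.lock ./
RUN mix deps.get

# Copy the rest of the source
COPY . .

CMD ["mix", "phx.server"]
```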

Then, in our docker-compose.yml we set up a web service, which is our main app, and a tests service, which is our test environment, and a db service, for database purposes, which is not going to be addressed here.

So, for our trio-umbrella app, we have the following docker-compose.yml:
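A minimal version of that file could look like this (the service names come from the text; images, credentials and the exact port mapping are illustrative):

```yaml
version: "3"

services:
  web:
    build: .
    ports:
      - "4000:4000"  # illustrative mapping for trio-umbrella
    depends_on:
      - db

  tests:
    build: .
    environment:
      MIX_ENV: test
    command: mix test
    depends_on:
      - db

  db:
    image: postgres:12
    environment:
      POSTGRES_PASSWORD: postgres
```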

And for the trio-hub-umbrella app (note that the exposed ports are different from trio-umbrella's so they don't conflict when running locally):
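For trio-hub-umbrella, the file would be essentially the same, with the exposed port shifted. A sketch (the 4003→4000 mapping matches what's described later in the article; the rest is illustrative):

```yaml
version: "3"

services:
  web:
    build: .
    ports:
      - "4003:4000"  # expose on 4003 locally so it doesn't conflict with trio-umbrella

  tests:
    build: .
    environment:
      MIX_ENV: test
    command: mix test
```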

Consider both trio-umbrella and trio-hub-umbrella are Elixir Phoenix projects.

Configuring the endpoint that will be accessed in trio-hub

In trio-hub-umbrella, we set up a very simple endpoint that just returns a map of company data with status 200 (ok) if the provided company_id equals "valid-company-id", and an error message ("Company not found") with status 404 (not found) otherwise.

Router:
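A sketch of the route, assuming a standard Phoenix API router (module names and the URL path are hypothetical):

```elixir
defmodule TrioHubWeb.Router do
  use TrioHubWeb, :router

  pipeline :api do
    plug :accepts, ["json"]
  end

  scope "/api", TrioHubWeb do
    pipe_through :api

    # show company endpoint: GET /api/companies/:company_id
    get "/companies/:company_id", CompanyController, :show
  end
end
```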

CompanyController:
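The controller could be as simple as this (the company payload is illustrative; the behavior matches the description above):

```elixir
defmodule TrioHubWeb.CompanyController do
  use TrioHubWeb, :controller

  # Returns company data when the hard-coded valid id is given
  def show(conn, %{"company_id" => "valid-company-id"}) do
    json(conn, %{id: "valid-company-id", name: "Trio"})
  end

  # Any other id gets a 404 with an error message
  def show(conn, _params) do
    conn
    |> put_status(:not_found)
    |> json(%{error: "Company not found"})
  end
end
```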

Accessing the endpoint via trio-umbrella

First, we need to configure the dependencies we'll use to make HTTP requests and also to record these requests, so we can simulate different outcomes and keep the test cases testable and independent from the API being accessed. So, in trio-umbrella's mix.exs, we are going to add Tesla, Hackney and ExVCR to our deps:
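Something along these lines (version requirements are illustrative; pin whatever is current for your project):

```elixir
defp deps do
  [
    # ...your existing deps...
    {:tesla, "~> 1.4"},
    {:hackney, "~> 1.17"},
    {:exvcr, "~> 0.12", only: :test}
  ]
end
```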

Hackney is an HTTP client library for Erlang that is used by Tesla, which is an HTTP client for Elixir.

ExVCR is a library to record and replay HTTP requests.

Also, we have to add a config for Tesla, using the Hackney adapter, in our config.exs:
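The Tesla config is a one-liner:

```elixir
# config/config.exs
config :tesla, adapter: Tesla.Adapter.Hackney
```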

To access the endpoint we'll create an API client using Tesla, like so:
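A sketch of such a client (module and function names are hypothetical; the env variable and path match what's described in this article):

```elixir
defmodule Trio.TrioHubApi do
  use Tesla

  # Base URL comes from the docker network configuration (explained below)
  plug Tesla.Middleware.BaseUrl, System.get_env("TRIO_HUB_API_BASE_URL")

  # Encode request bodies and decode response bodies as JSON
  plug Tesla.Middleware.JSON

  # Calls Trio Hub's show company endpoint
  def show_company(company_id) do
    get("/api/companies/#{company_id}")
  end
end
```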

Note we are using an ENV variable, TRIO_HUB_API_BASE_URL. I'll get there in a bit. This variable is used to define our API URL based on the configuration of our docker network.

Below, you'll find how we configure our network and connect our containers to it.

Configuring a docker network

Assuming we have docker configured to be used in the terminal, we are going to configure our docker network. You can check the Docker networks docs, if you want.

First, we need to create a network. Let’s create a trio-blog docker network, like so:

Creating the network:

docker network create trio-blog

Checking if it was created:

docker network inspect trio-blog

To run the trio-hub-umbrella container, we are just running a simple:

docker-compose up web

Now, let’s connect our container to the network.

Finding out the container id through docker ps (1884b97cd6f3):

And connecting the container to the network through docker network connect:
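Using the container id found above:

```shell
docker network connect trio-blog 1884b97cd6f3
```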

You can check that the Containers key shown in the network inspection now includes a new container named trio-hub.

Now we can set our ENV variable to the IPv4 address shown inside the docker network. The inspection lists it with its subnet mask, as "172.21.0.2/16", but only the address itself, "172.21.0.2", goes in the URL.

In trio-umbrella's docker-compose.yml file, let's add our TRIO_HUB_API_BASE_URL env variable to our tests environment, like so:
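The relevant fragment of the tests service would look like this (the IP is the one we found above; remember it may differ on your machine):

```yaml
  tests:
    environment:
      MIX_ENV: test
      TRIO_HUB_API_BASE_URL: "http://172.21.0.2:4000"
```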

Note that you have to add the port too, and that it's not the exposed port (4003), but the internal one (4000).

Let's now get inside our trio-umbrella's tests environment by running a bash command:

docker-compose run --rm tests bash

We are almost there, now we just have to connect our bash test environment run to trio-blog network. Finding our container id (ea946edbddb0):

Finally, connecting it to our network, through docker network connect:
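Same command as before, with the new container id:

```shell
docker network connect trio-blog ea946edbddb0
```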

We can see a new container named trio-umbrella_tests_run_6972b4bf6c2e is connected to the trio-blog network.

It's all set up! Let's write and run our tests!

Writing and running tests

So, our plan here is to write and record three test cases for the API client for Trio Hub's show company endpoint:

  1. Make a request when server is unavailable;
  2. Make a request with an invalid company_id;
  3. Make a request with a valid company_id.

Let's start by configuring our base test file and our first test for the Trio Hub API client, like so:

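A sketch of that test file (module names and cassette paths are hypothetical; note that on the first, live run the error reason comes back as the atom :econnrefused, which is why the string assertion below only passes on replay, as discussed next):

```elixir
defmodule Trio.TrioHubApiTest do
  use ExUnit.Case, async: false
  use ExVCR.Mock, adapter: ExVCR.Adapter.Hackney

  alias Trio.TrioHubApi

  test "returns an error when the server is unavailable" do
    use_cassette "trio_hub/server_unavailable" do
      assert {:error, "econnrefused"} = TrioHubApi.show_company("valid-company-id")
    end
  end
end
```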

We are using ExVCR.Mock with an ExVCR.Adapter.Hackney adapter, so we can use the use_cassette option in our tests.

We are running a test case for when the service is unavailable, so let's simulate that and record the ExVCR cassette. To do that, we just stop our trio-hub container, so it's not accessible, leaving our network like this:

So I'll run the tests twice, so we can see the results:

Note that we are expecting a string in the first run, but the return is an atom. After the first run, the request is recorded in a cassette (shown below), and the second run receives an "econnrefused" string, because atoms in cassettes are recorded as strings.

So the resultant ExVCR cassette recorded is like so:
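It's a JSON file, roughly of this shape (fields trimmed down for illustration; the exact layout ExVCR writes has a few more keys):

```json
[
  {
    "request": {
      "method": "get",
      "url": "http://172.21.0.2:4000/api/companies/valid-company-id"
    },
    "response": {
      "type": "error",
      "body": "econnrefused"
    }
  }
]
```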

After the cassette for a request is recorded, whenever a test has a use_cassette option with the same name as the cassette file and makes the exact same request, it gets the response from the cassette file instead of making the real request again.

Let's now write the other two test cases, like so:
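Sticking with the same hypothetical module and cassette names, and assuming the Tesla JSON middleware decodes the response body into a map:

```elixir
  test "returns not found for an invalid company_id" do
    use_cassette "trio_hub/company_not_found" do
      assert {:ok, %Tesla.Env{status: 404, body: %{"error" => "Company not found"}}} =
               TrioHubApi.show_company("invalid-company-id")
    end
  end

  test "returns company data for a valid company_id" do
    use_cassette "trio_hub/company_found" do
      assert {:ok, %Tesla.Env{status: 200, body: body}} =
               TrioHubApi.show_company("valid-company-id")

      assert body["id"] == "valid-company-id"
    end
  end
```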

We are testing real requests now, so let's run trio-hub's container, so the network looks like so:

Let's run our tests. Results:

And the recorded cassettes:

That's it! Now every time we run our test suite, the tests get their results from the cassettes, so we don't need to make the real requests if we don't want to.

Conclusions

There are ongoing discussions and opinions about whether recording cassettes for HTTP requests is good practice, and I'm not getting deeper into that here. But it's important to keep in mind that cassettes can be a good tool if you want to test things like API response signatures, while they are not useful if you are testing API availability, for example.

So, keep in mind that you should always analyze and find what is best for your use cases. For now, this works for us, but maybe we'll learn something new and change our minds at some point and do things differently.

In general, I believe using good practices and patterns in our applications and making them developer friendly is very important to keep development productive and developers interested in what they are doing.

Also, having a good setup for test environments is essential for keeping code testable and maintainable.

I tried to give at least a glimpse of how we develop things here at Trio. I hope it didn't get too long and that it was helpful for you.

Any tips, criticism or suggestions are welcome if you feel like it.

Thanks! ^^
