Many books on carpentry or woodwork open with a chapter on proper workshop organisation and tool arrangement. I believe that development skills should likewise be viewed as craft and culture. A sensible approach to the working environment helps reduce the costs of development and further refinement, thanks to early problem detection and increased developer productivity. As the topic is really vast, I'm planning a series of articles.
By the end of the series, an attentive reader will have an accurate picture of the main approaches used in cutting-edge development and testing at various levels, from tooling to distributed cluster systems. This particular article focuses on the toolkit and a basic sandbox. If the topic proves interesting, the series will continue with cluster systems in Erlang, Golang, Elixir and Rust.
Note: to follow this article, you need docker, docker-compose and GNU Make installed on your machine. Docker setup does not take long, but bear in mind that your user must be added to the docker group.
The following code has been checked and tested on debian-like distributions only.
So, let's try to create an atomic counter. The app must meet the following requirements:
1. target system: ubuntu 16.04 LTS
2. basic HTTP API
3. stable work on Erlang/OTP >=19.0
4. more than 1k RPS under mixed read and write load, with 1 to 100 concurrent clients
5. basic interface of internal processes monitoring
The article focuses mainly on the principles and motivation behind decision-making rather than on detailed code listings. The full code is available at https://github.com/Vonmo/acounter.
Infrastructure isolation and virtualization
Nowadays there are a number of ways to solve the environment virtualization problem, and all of them have their own benefits and drawbacks, which makes choosing hard.
My idea is to use Docker containerization for server-side and distributed software development. Docker is a flexible modern tool which reduces equipment costs at the development stage and streamlines testing. Sometimes it even simplifies delivery to end users. For instance, there is no problem launching a cluster of 12–15 containers with various services on an average laptop. You can then model the interaction of these services, write integration tests in an environment close to the real one, check service scalability, or test failure handling, including serious crashes and subsequent recovery.
Note: both docker and docker-compose are suggested here as a solution for the development stage: a programming environment or staging for testers. Whether they are a suitable basis for production is beyond the scope of this article.
As our environment consists of two levels, which are the host and containers, we need two makefiles:
- Makefile is responsible for external control of the virtual environment: creating, launching and stopping docker containers, as well as running compilation and analysis tasks.
- docker.mk is in charge of everything happening to the code inside the containers
We can find the description of all the containers for our cluster in docker-compose.yml.
- Base. To meet para. 3 of the requirements, we build a base image and pack Erlang/OTP 19.3 and all the necessary software into it.
- Test. This container is inherited from Base, and the code is mounted into it quite transparently.
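The container layout above might be sketched in docker-compose.yml roughly as follows. This is an illustration only: the service name, image name and mount path are assumptions, not the actual values from the repository.

```yaml
# Illustrative sketch: names and paths are hypothetical.
version: "2"
services:
  test:
    image: acounter_test:latest   # inherits from the base image with Erlang/OTP
    volumes:
      - .:/project                # the code is mounted into the container
    working_dir: /project
    command: tail -f /dev/null    # keep the container alive for exec'd make tasks
```

Keeping the container idle with a long-running command lets the host Makefile execute build and test tasks inside it on demand.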
Note: if you have to ensure your app works on other versions of Erlang, the base image should be extended with those versions. Kerl is already installed in it, so all we have to do is add the required version of Erlang and extend the makefile with extra targets for building and testing on each version.
To control the virtual environment, the makefile provides the following targets:
- $ make build_imgs — creates desired docker images
- $ make up — runs and sets up the containers
- $ make down — clears testing environment
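The three targets above might look roughly like this in the host-level Makefile. Dockerfile paths and image names here are assumptions for illustration, not the actual ones from the repository.

```makefile
# Hypothetical sketch of the host-level Makefile (recipes must be tab-indented).

build_imgs:
	docker build -f docker/Dockerfile.base -t acounter_base:latest .
	docker build -f docker/Dockerfile.test -t acounter_test:latest .

up:
	docker-compose up -d

down:
	docker-compose down
```

The split between this file and docker.mk keeps host-side orchestration separate from the build logic that runs inside the containers.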
Dependencies management and project building in Erlang
Most programs we analyse and develop have dependencies. For instance, there might be a code dependency on a third-party library, or a tool dependency, such as a database migration tool.
By now we've dealt with tool dependencies as well as dependencies on binary libraries. Now let's look at dependency management and project building in Erlang. Of the de-facto standard tools in the Erlang world, the most popular are erlang.mk and rebar. As I personally mostly use rebar, let's opt for it.
Rebar main functions:
- Rebar tackles dependency management, release builds, and compiler and environment setup. Rebar3 also brings a lock file pinning dependency versions, and a vendor plug-in. The choice is up to us: with every build we can download all dependencies into the build directory, or we can freeze them in a local directory with the vendor plug-in and keep them in the app repository.
- Rebar3 also offers release building with the help of relx. To be honest, release building in Erlang is so vast a topic that it deserves a separate article. In this example we build a basic production release: the source code is excluded, debug info is stripped, and the release is ready to run on a target system without any additional steps, since the Erlang VM and a service init script are included in it.
- Extensions make it easy to run various useful tools (I will expand on this issue below).
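A rebar.config covering the points above might look roughly like this. Dependency names, versions and the release name are illustrative assumptions, not taken from the actual project.

```erlang
%% Hedged sketch of a rebar.config; deps and versions are hypothetical.
{erl_opts, [debug_info]}.

{deps, [
    {cowboy, "2.0.0"}   %% e.g. an HTTP server for the API
]}.

{relx, [
    {release, {acounter, "0.1.0"}, [acounter]},
    {extended_start_script, true}
]}.

{profiles, [
    %% prod: self-contained release with the Erlang VM bundled in
    {prod, [{relx, [{dev_mode, false}, {include_erts, true}]}]},
    %% test profile keeps debug info for the test tooling
    {test, [{erl_opts, [debug_info]}]}
]}.
```

With `include_erts` set to true, the target machine does not need Erlang installed at all, which matches the "no additional operations" goal above.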
The following makefile targets are used for building and testing:
- $ make tests — compiles app test profile and runs all tests
- $ make rel — compiles the final release
The struggle for high-quality results
A few words about Common Test Framework
Testing is a standardized approach in engineering; hardly any product is developed without some kind of tests. In the Erlang world there are two basic testing frameworks: eunit and common test (hereafter CT). Both enable us to test virtually all aspects of a system; it's just a matter of tool complexity and how much preparation is needed before actually running the tests. While eunit is aimed at unit testing, CT is a more flexible and multifunctional tool with a focus on integration testing.
In CT there is a clear hierarchy of the testing process. Specifications refine how tests are run. They are followed by suites, which integrate groups of test cases into logically complete units. Within a test group we can set the launch order of tests and their parallelism, as well as configure the test environment flexibly.
The key to configuration flexibility is the three-level model of test cases initialization and completion:
- init_per_suite/end_per_suite — is called once while launching a particular suite
- init_per_group/end_per_group — is called once for a certain group
- init_per_testcase/end_per_testcase — is called before each test within a group
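The three-level model above can be sketched as a minimal CT suite. The module, group and test case names here are illustrative, not taken from the actual project.

```erlang
%% Minimal CT suite sketch showing the three-level init/end model.
-module(counter_SUITE).
-compile(export_all).

all() -> [{group, http_api}].

%% a group whose tests may run in parallel
groups() ->
    [{http_api, [parallel], [get_counter_test, inc_counter_test]}].

init_per_suite(Config) ->
    %% once per suite: e.g. boot the application under test
    {ok, _} = application:ensure_all_started(acounter),
    Config.

end_per_suite(_Config) ->
    application:stop(acounter).

init_per_group(http_api, Config) ->
    %% once per group: e.g. prepare an HTTP client
    Config.

end_per_group(_Group, _Config) -> ok.

init_per_testcase(_Case, Config) ->
    %% before every test: e.g. reset counter state
    Config.

end_per_testcase(_Case, _Config) -> ok.

get_counter_test(_Config) -> ok.
inc_counter_test(_Config) -> ok.
```

Putting expensive setup (starting the application) at the suite level and cheap resets at the testcase level is exactly the "thoughtful environment initialization" that keeps test runs fast.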
I'm pretty sure that every developer working with eunit has faced a failing Heisentest. A frequent cause is applications left loaded in the test environment, which in their turn break the initialization of subsequent tests. Thanks to CT's flexibility you can tackle these problems, and many others, while also reducing test execution time through thoughtful environment initialization.
So, why might we need xref? In a nutshell, to reveal the dependencies between functions, apps or releases, or to detect dead code.
In big projects some code often ends up dead, for a number of reasons. Just imagine: we wrote function A in module X. Then it moved into module Z under the name A2. All the tests passed, but the developer forgot about X:A. Since it had been exported, the compiler didn't tell us that X:A is no longer used. Certainly, the sooner we get rid of dead code, the smaller our codebase and its maintenance costs will be.
How does xref work? It checks all the calls and matches them with function definitions in modules. If a function is defined but not used anywhere, a warning is raised. Xref can also answer the reverse question: where exactly a certain function is used.
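The dead-code scenario described above can be illustrated with a hypothetical module. The function names are made up for the example.

```erlang
%% Hypothetical module: a/0 was replaced by a2/0 but is still exported,
%% so the compiler stays silent, while xref's exports_not_used analysis
%% reports that the exported function a/0 is never called.
-module(x).
-export([a/0, a2/0]).

a() -> legacy_implementation.   %% dead: no caller remains

a2() -> new_implementation.     %% the actively used replacement
```

Because exported functions are part of the module's public surface, only a cross-reference analysis like xref, not the compiler, can tell you they are unused.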
To run xref in the working environment we use:
- $ make xref
In the previous section we learnt how to detect unused functions. But what if a function exists and is being used, while its arity (the number of arguments) or the arguments themselves don't correspond to the definition? There are also cases of dead code and type mismatches. Dialyzer was created to look for such discrepancies.
The following goal serves to use dialyzer in working environment:
- $ make dialyzer
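A hypothetical example of the kind of discrepancy dialyzer catches: a spec promises an integer argument, but a caller passes a string. Module and function names are made up for illustration.

```erlang
%% Hypothetical module demonstrating a contract violation dialyzer reports.
-module(demo).
-export([bump/1, broken_caller/0]).

-spec bump(integer()) -> integer().
bump(N) -> N + 1.

broken_caller() ->
    %% dialyzer: the call breaks the contract bump(integer()) -> integer()
    bump("ten").
```

The code compiles cleanly, and the crash would only surface at runtime; dialyzer finds it statically from the success typing and the spec.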
Automatic check of code style standards
Every team makes its own choice on whether to follow code style standards and, if so, which ones. Most big projects tend to follow such standards, as this common practice eliminates a number of problems in codebase maintenance.
As there is no universal IDE for Erlang (some are into emacs, while others love vim or sublime), the problem of automatic checking arises. Fortunately, there is elvis, which makes it easier to keep to the same code style standards without quarrels within a team.
Imagine that we’ve agreed to check code style standards before pushing to repository.
The following goal serves to use elvis:
- $ make lint
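An elvis.config defining the agreed rules might look roughly like this. The rule selection below is illustrative, not the project's actual configuration.

```erlang
%% Hedged sketch of an elvis.config; rule choices are hypothetical.
[{elvis, [
    {config, [
        #{dirs => ["src"],
          filter => "*.erl",
          rules => [
              {elvis_style, line_length, #{limit => 100}},
              {elvis_style, no_tabs},
              {elvis_style, used_ignored_variable}
          ]}
    ]}
]}].
```

Since the config lives in the repository, everyone on the team (and CI) checks against exactly the same rules, regardless of editor.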
Counter app development
Clone the repository:
$ git clone https://github.com/Vonmo/acounter.git
Run the sandbox:
$ make build_imgs
$ make up
Develop and test the main functionality in an iterative way. Run tests after every iteration:
$ make tests
When all the tests pass and your code is complete, it's a good idea to run load testing to check the system's compliance with para. 4 of the requirements. In this implementation the load testing is not entirely correct, since both the load generator and the system under test run on the same machine. Even so, it lets us assess the potential of our implementation. The generator 'warms up' the system before the main tests and then gradually increases the load.
At this stage we have a fully functional app. It can be packed into a release and delivered to end users:
$ make rel
To check your release out, you can run it in console mode:
$ ./_build/prod/rel/acounter/bin/acounter console
and open http://localhost:18085/. If you see the text "The little engine that could.", the release has started successfully and is working.
All in all, I'd like to thank you for your interest and patience. Together we've constructed a working sandbox which genuinely facilitates the development process. In the next articles I'll do my best to show how this sandbox can be extended for developing distributed and complex systems. Erlang is far from being the most popular language, but it's just perfect for server-side software and soft real-time systems.