Go: Database Integration Testing with Docker
An introduction to database integration testing in Go.
At Red Ventures, we write many of our APIs and services in Go. We love Go because it is simple, fast, and compiles to a self-contained binary that can be shipped to production. When developing APIs, integration testing is a vital part of the software development lifecycle. Good integration tests give us peace of mind when pushing new changes to production and provide fail-safes in our continuous delivery process.
Database Integration Tests
One area of testing that is often overlooked is database integration testing. Since nearly all of our APIs are backed by a database, it is necessary that we include database integration tests in our workflow. These tests ensure that schema modifications and database version upgrades don’t cause breaking changes.
In the past we would test code that performed database interactions using a mocking library (go-sqlmock). Mock databases were good for unit testing the interface functions themselves but they weren’t true integration tests. Since queries aren’t sent to the actual database engine, there is no validation of query syntax or key constraints. Also, the need to mock out every query caused duplicative work and made the tests very verbose.
After experiencing these issues, we decided to start running our integration tests against real databases. This allows us to test everything from a simple insert statement to foreign key constraints, and to validate that all of these still work after schema upgrades. With Docker, we are able to quickly spin up a database instance that runs the exact same version of MySQL as our production instance. The ability to easily control database versions in the MySQL Docker image is a huge bonus when testing version upgrades. Instead of blindly upgrading the database and hoping everything still works, we can first validate our tests against the Docker instance.
Example: Todo API
For this post, we will walk through a simple Todo API example (all of the code can be found on Github). The API uses a MySQL database to store our todo items and will provide the following functionality:
- Create a new todo Item
- Delete an existing todo Item
- Find a specific todo Item
- Update a specific todo Item
The repo contains the backing database schema which includes a single items table:
CREATE TABLE items (
id bigint(20) unsigned NOT NULL AUTO_INCREMENT,
title varchar(256) NOT NULL DEFAULT '',
description varchar(2048) DEFAULT NULL,
completed tinyint(1) NOT NULL DEFAULT '0',
created_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
updated_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
We will create an Item struct to represent a row in our items table:
type Item struct {
    ID          int64     `json:"id,omitempty"`
    Title       string    `json:"title,omitempty"`
    Description string    `json:"description,omitempty"`
    Completed   bool      `json:"completed"`
    CreatedAt   time.Time `json:"created_at"`
    UpdatedAt   time.Time `json:"updated_at"`
}
Store Interface
In the root of the API repo we have our Store interface. This defines the behavior that all storage implementations must adhere to.
type Store interface {
    StoreReader
    StoreWriter
    Close() error
}

type StoreReader interface {
    FindItemByID(string) (Item, error)
}

type StoreWriter interface {
    CreateItem(Item) (Item, error)
    DeleteItemByID(string) (Item, error)
    UpdateItemByID(string, Item) error
}
MySQL Implementation
Since our API’s backing store is a MySQL database, we will create a mysql.Store that implements the todo.Store interface shown above. The code that returns a new instance of mysql.Store can be found in the Github repo.
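As a rough sketch, the constructor might look something like the following. This is an illustration, not the repo's code: the queries map, the statement keys, and the exact SQL are assumptions (only QueryFindItemByID is implied by the store method shown later in the post), and the MySQL driver import is commented out so the sketch compiles with the standard library alone:

```go
package main

import (
	"database/sql"
	"fmt"
	// In the real project: _ "github.com/go-sql-driver/mysql"
	// (omitted here so the sketch builds without external dependencies).
)

// Statement keys; QueryFindItemByID matches the key used by FindItemByID below.
const (
	QueryFindItemByID = "findItemByID"
	QueryCreateItem   = "createItem"
)

// queries maps each key to its SQL text (hypothetical subset).
var queries = map[string]string{
	QueryFindItemByID: `SELECT id, title, description, completed, created_at, updated_at FROM items WHERE id = ?`,
	QueryCreateItem:   `INSERT INTO items (title, description, completed) VALUES (?, ?, ?)`,
}

// Store holds the connection pool and the prepared statements.
type Store struct {
	db    *sql.DB
	stmts map[string]*sql.Stmt
}

// NewStore opens a connection and prepares every query up front,
// so invalid SQL fails at startup rather than at request time.
func NewStore(dsn string) (*Store, error) {
	db, err := sql.Open("mysql", dsn)
	if err != nil {
		return nil, err
	}
	stmts := make(map[string]*sql.Stmt, len(queries))
	for name, q := range queries {
		stmt, err := db.Prepare(q)
		if err != nil {
			return nil, err
		}
		stmts[name] = stmt
	}
	return &Store{db: db, stmts: stmts}, nil
}

// Close releases the prepared statements and the connection pool.
func (s *Store) Close() error {
	for _, stmt := range s.stmts {
		stmt.Close()
	}
	return s.db.Close()
}

func main() {
	// No MySQL driver is registered in this standalone sketch,
	// so NewStore fails fast with an "unknown driver" error.
	_, err := NewStore("root:my-secret@tcp(127.0.0.1:3306)/todo")
	fmt.Println(err)
}
```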
When creating a new mysql.Store, we use prepared statements to verify that all of our SQL statements are syntactically correct and to guard against SQL injection.
API Server
The API server establishes a connection to our database, initializes a new mysql.Store, and sets up the router. Each route needs both an http.HandlerFunc and a corresponding store method to interact with the database. In the next section, I will just walk through fetching a single Item, but the rest of the endpoints can be found in the Github repo.
http.HandlerFunc
func FindItemByID(w http.ResponseWriter, r *http.Request) {
    vars := mux.Vars(r)

    i, err := s.FindItemByID(vars["itemID"])
    if err != nil {
        jsonResponse(w, http.StatusInternalServerError, err.Error())
        return // without this, a success body would also be written
    }

    res := map[string]interface{}{
        "item": i,
    }
    jsonResponse(w, http.StatusOK, res)
}
The handler function reads the itemID route variable, then looks up the corresponding id in the database. The matching Item is returned if there are no errors; otherwise the handler responds with a 500 Internal Server Error.
Store Method
func (s *Store) FindItemByID(id string) (todo.Item, error) {
    row := s.stmts[QueryFindItemByID].QueryRow(id)

    var i todo.Item
    err := row.Scan(
        &i.ID,
        &i.Title,
        &i.Description,
        &i.Completed,
        &i.CreatedAt,
        &i.UpdatedAt,
    )
    if err != nil {
        return todo.Item{}, err
    }
    return i, nil
}
This method runs the prepared statement that was created when the server was spun up, passing in the id. We then scan the result into an Item struct and return it.
Executing Tests
In order to run our tests, we will need a database to connect to. In the past I’ve installed and run MySQL locally, but the process was clunky and I ran into numerous issues during setup. I knew there had to be an easier way, so I did some research and came across the MySQL Docker container. When I saw that you could spin up a MySQL instance with just one Docker command, I thought it was too good to be true. I was able to spin up an exact replica of our production database in just a few seconds.
Run MySQL in a Container
For this example, we will start a MySQL instance with the following command:
docker run --name test-mysql --rm -e MYSQL_ROOT_PASSWORD=my-secret -e MYSQL_DATABASE=todo -d -p 3306:3306 -v `pwd`/schema.sql:/docker-entrypoint-initdb.d/schema.sql mysql
This does a few things:
- Assigns the name test-mysql to the container to make it easier to identify (otherwise Docker assigns a randomly generated name).
- Sets the --rm flag to remove the container once it exits.
- Sets the root password with the MYSQL_ROOT_PASSWORD environment variable. NOTE: This is insecure for production use cases.
- Creates a database named todo via the MYSQL_DATABASE environment variable.
- Runs the container in the background with the -d flag.
- Exposes the container’s port 3306 on our localhost’s port 3306 so that we can easily connect to it.
- Mounts our schema.sql file into the container’s docker-entrypoint-initdb.d directory. Any .sh or .sql scripts found in this directory are executed immediately after the database is created.
Run Tests
Now that our database is up and running, we can run the tests:
export DB_USERNAME=root
export DB_PASSWORD=my-secret
export DB_HOST=127.0.0.1
export DB_PORT=3306
cd mysql
go test
This will set the necessary environment variables to connect to our database, change to our mysql directory, and run the tests. Once all tests have successfully passed we can stop our Docker container:
docker stop test-mysql
Wrap Up
So far we’ve been very pleased with our switch to integration testing on real databases. Docker makes database setup and running tests locally a painless process, reducing friction in the development process. We no longer have to mock all of our queries and database interactions, which decreased the verbosity of our tests. We now provide true validation of our queries and key constraints, giving us greater peace of mind when pushing out changes. I highly recommend making the switch and hope that this post provides a good starting point in your journey.
Interested in solving complex problems and building services that scale? Come check us out at RedVentures.com.