There’s something about the human condition that gives us a love of speed. Faster vehicles. Faster computers. Faster internet. We’ve been chasing speed and efficiency for millennia, and so it’s natural for that desire to make its way into our development practices.
Development containers are a feature of a Visual Studio Code extension that lets you develop your applications inside a Docker container. They’re a wonderful tool for speeding up your development cycle, and by using them we get a few instantly recognizable benefits:
1. No need to install frameworks and dependencies locally. They’re included in your container and available to your application. Setting them up is easy: if you’ve ever installed a dependency from the command line, the Dockerfile contents will feel intuitive:
FROM mcr.microsoft.com/vscode/devcontainers/dotnetcore:0-3.1
RUN su vscode -c "source /usr/local/share/nvm/nvm.sh && nvm install lts/* 2>&1"
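For VS Code to pick this Dockerfile up, it’s referenced from a devcontainer.json file committed next to it. A minimal sketch might look like this (the name and forwarded port are placeholders for illustration):

```json
{
  "name": "dotnet-dev",
  "build": { "dockerfile": "Dockerfile" },
  "forwardPorts": [5000]
}
```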
2. Everything can be spun up quickly and torn down when it’s no longer useful. In this case, F5 becomes my magic “do all the things” button.
3. Setup for the development container is easily shareable because it’s committed alongside my source code. Now that I have a file that specifies the requirements of the container, I can give that to anyone and have a very low barrier to entry for the project. Likewise, any new project I pick up that has a dev container setup requires zero effort from me.
But there’s one piece missing from this utopia of lightning-fast, low-effort development environment setup. If the application you’re developing relies on access to a database, you have a few options:
- You can have a database running on your local machine, forward the necessary port to your docker container, and give the application a connection string to the host’s DB. But then you’ll likely need to apply schema changes and manipulate data over time in order to keep the database up to date.
- You can create a local database in the container itself, and let the application(s) access that directly. But then you’ll need to do some polling to wait for the database to be available, so that you can deploy any new scripts to bring the database up to date. You also lose the data whenever you restart the dev container, unless you manage the docker volumes manually.
- You can use your favourite cloud provider to provision your database, but you’d still have to deal with keeping it up to date and giving a connection string to your dev container.
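The polling step in the second option is usually a small retry loop that runs before any schema scripts are deployed. Here’s a minimal POSIX-shell sketch; the `RETRIES` and `DELAY` knobs are assumptions for this example, and `pg_isready` / `deploy-schema.sh` in the usage line are respectively the standard Postgres client check and a hypothetical migration script:

```shell
# Generic retry helper: run the given command until it succeeds,
# giving up after $RETRIES attempts with $DELAY seconds between tries.
retry_until() {
  max="${RETRIES:-30}"
  n=0
  until "$@"; do
    n=$((n + 1))
    if [ "$n" -ge "$max" ]; then
      echo "gave up after $max attempts" >&2
      return 1
    fi
    sleep "${DELAY:-1}"
  done
}
```

Usage inside the container might then be `retry_until pg_isready -h localhost -p 5432 && ./deploy-schema.sh`.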
Whichever of these options you choose, setup becomes a high-effort process, and development ends up slower and more painful than it needs to be.
By utilising a tool called Spawn, you can automate the creation of a cloud-hosted database from inside your development container. This database can already include the necessary schema(s) and data that you need to develop and test your application. You can save and rewind the state of your database, introducing new data or deleting the old for the next time you spin up your development container.
Adding Spawn to your pipeline is simple: as a command-line tool, it can be invoked from any terminal or script. Starting from a Spawn Data Image, you can create as many Spawn Data Containers as your applications need, quickly and efficiently.
Adding Spawn to the development container is as simple as installing it in the Dockerfile:
# Install spawnctl
RUN curl https://run.spawn.cc/install | sh && ln -s $HOME/.spawnctl/bin/spawnctl /usr/local/bin
And once your Spawn Data Containers are available, you can interact with them from any command-line process, using commands specifically designed to feel intuitive to anyone familiar with Kubernetes and Docker.
spawnctl get data-images
spawnctl get data-containers
spawnctl create data-image --image image-1
And what’s more, these Spawn Data Containers can be graduated back to Spawn Data Images, which can be kept and accessed elsewhere, or shared with your colleagues. Spawn Data Containers are meant to exist only as long as they’re useful, and are designed to be torn down once they’ve served their purpose.
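Putting that lifecycle together, a terminal session might look something like the sketch below. The subcommand and flag names are assumptions patterned on the commands shown above rather than verbatim spawnctl syntax, and `image-1`/`dev-db` are placeholder names, so check `spawnctl --help` for the exact invocations:

```shell
# Spin up a disposable data container from an existing data image
spawnctl create data-container --image image-1 --name dev-db

# ...develop and test your application against dev-db...

# Promote the container's current state back to a reusable data image
spawnctl graduate data-container dev-db

# Tear the container down once it has served its purpose
spawnctl delete data-container dev-db
```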
Spawn currently supports PostgreSQL, MySQL, MSSQL, MongoDB and Redis, with potential support for other database flavours in the future. It supports database creation from backups, a set of scripts or just a blank slate. This way, you can automate the creation and population of your schemas and data without any extra effort.
If you’re interested in trying out Spawn, there’s a beta program running at the moment, and anyone can sign up at https://spawn.cc/beta
After signing up to the beta, the team will be in touch to give you access.