Building with Microservices and Docker
Below is a blog post from our engineering team explaining our use of Docker at Dwolla. With Docker, we’re able to package applications and their dependencies to run within “containers”. To hear more from some of our engineers, follow them on Twitter at @bpholt, @skylernesheim, and @mtravi.
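As a minimal illustration of that packaging step, a Dockerfile describes everything a service needs to run. The base image, artifact name, and port below are hypothetical, not Dwolla’s actual configuration:

```dockerfile
# Hypothetical Dockerfile for a JVM-based microservice.
# The image, jar name, and port are illustrative assumptions.
FROM openjdk:8-jre

# Copy the service's build artifact into the image
COPY target/example-service.jar /app/example-service.jar

# The port the service listens on
EXPOSE 8080

# Run the service when the container starts
CMD ["java", "-jar", "/app/example-service.jar"]
```

Building this image produces a self-contained artifact that runs the same way on a laptop as it does in production.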
Dwolla’s system started like many other startups. The team built a monolithic app that served business needs and was manageable for the team size. As the business and team grew, it became apparent that the monolith would have to be divided.
The development team evaluated options and chose to move toward a microservices architecture hosted on Amazon Web Services. This architecture was selected because it allowed the team to independently build, deploy, and scale pieces of the system.
How we did it:
We started by building microservices as standalone applications running on their own EC2 instances, with multiple instances of services behind a load balancer. This structure works well in production, but we still observed a couple of sub-optimal areas.
- Running microservices on low-end generic VMs results in a large number of machines that need to be monitored and managed. Manually managing several microservices on a smaller number of larger instances was not ideal either.
- Inter-service dependencies make it a challenge to develop locally on this architecture.
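Containers address that second pain point: a tool like Docker Compose can start a service together with the services it depends on in a single command. A minimal sketch, in which the service names and images are hypothetical rather than Dwolla’s real services:

```yaml
# Hypothetical docker-compose.yml: runs a service locally alongside
# an upstream service it depends on. Names and images are illustrative.
version: "2"
services:
  api:
    build: .
    ports:
      - "8080:8080"
    depends_on:
      - accounts
  accounts:
    image: example/accounts-service:latest
```

With a file like this, `docker-compose up` brings up the whole local dependency graph instead of requiring each service to be checked out, built, and run by hand.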
About eight months ago, Dwolla’s platform team started looking at Docker alongside Amazon ECS. Most of the VMs running our microservices were not highly utilized, and rolling out new instances took longer than we wanted because we had to provision entirely new VMs.
Mesosphere and ECS were both good candidates for running containers in production, and both have distinct advantages. For our set of requirements, both were comparable in latency and throughput performance, so our focus quickly turned to cost and manageability. Dwolla runs on Amazon Web Services, so it was a natural choice to utilize a resource already within the ecosystem, and ECS’ integration with CloudFormation made it easy to integrate into our current CI pipeline. While there is a lot of innovation in this space, we decided that we would revisit the cluster technologies when our needs require it.
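For context on that CloudFormation integration, an ECS task definition and service can be declared alongside the rest of a stack in a template. A heavily trimmed sketch, where the resource names, image, memory, and desired count are illustrative assumptions and the referenced cluster is assumed to be defined elsewhere in the template:

```yaml
# Trimmed CloudFormation fragment defining an ECS task and service.
# All names and values are hypothetical; ExampleCluster is assumed
# to be declared elsewhere in the same template.
Resources:
  ExampleTaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      ContainerDefinitions:
        - Name: example-service
          Image: example/service:latest
          Memory: 256
          PortMappings:
            - ContainerPort: 8080
  ExampleService:
    Type: AWS::ECS::Service
    Properties:
      Cluster: !Ref ExampleCluster
      TaskDefinition: !Ref ExampleTaskDefinition
      DesiredCount: 2
```

Because a deployment is just a stack update, the same CI pipeline that already manages CloudFormation can roll out new container versions.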
Amazon ECS is a good selection for the majority of our services, but we run some services in an alternate environment. These services have higher security or IO requirements that make them less suitable for a shared environment. Because Docker does not provide complete process isolation, we deploy these services on their own instances, running separately from the majority of our services.
Our end goal is to migrate all services to Docker for improved management and monitoring automation. As we continue to develop our Dockerization process and tools, we’ll be learning and sharing that knowledge on the blog. Stay tuned.
Originally published at blog.dwolla.com on November 13, 2015.