Lessons in Preparing Docker Containers For Production
30 August 2016
I recently spent two weeks helping future architects and technology leaders understand cloud native architecture. We worked through many of the concepts covered on the Realscale website.
Part of the discussion included a focus on containerization as a strategy for modernizing legacy applications. A few of these leaders decided to build out part of their project using Docker containers. In this article, I’m going to share key lessons and realizations they discovered while embarking on this project.
Automate everything, as production Docker hosts are short-lived
Many of the developers assumed that their servers would be around for a long time. As a result, they didn't build in the proper automation to recover if their Docker host (in this case, an EC2 instance) were to fail or be terminated. The same risk applies to any data stored on the local filesystem rather than on remote block or filesystem-based storage.
Your production Docker environment needs to ensure Docker hosts can be replaced with new instances should they fail. This requires server monitoring and automation to make sure the right infrastructure is available for your production Docker instances. While I tend to use Cloud 66 for this, some teams are opting to use other services, or to manage it themselves using Docker Swarm or Kubernetes.
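As a sketch of what this automation can look like with Docker Swarm (one of the orchestrators mentioned above), a compose file can declare the desired number of replicas and a restart policy, and Swarm will reschedule containers onto healthy hosts when an instance or container fails. The service and image names below are illustrative:

```yaml
# docker-compose.yml (version 3 syntax) -- a minimal sketch of a Swarm
# service that is automatically replaced if its container or host fails.
version: "3"
services:
  web:
    image: example/web-app:1.0   # illustrative image name
    deploy:
      replicas: 3                # keep three copies running across the cluster
      restart_policy:
        condition: on-failure    # replace containers that exit unexpectedly
    ports:
      - "80:8080"
```

Deployed with `docker stack deploy`, a definition like this means a failed host takes no manual intervention: the remaining nodes pick up the lost replicas.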
Orchestrate your containers
Do you consider cloud servers to be short-lived, ephemeral resources? Containers often live even shorter lives. Depending on their purpose, containers may live for a few seconds, a few days, or longer. This has to be factored into the way you design and implement your code. If your code or application configuration assumes its environment is long-lived, or that other services it depends upon will always be around, that assumption will likely spell disaster.
Container management and orchestration solutions ensure that enough containers are available, can scale instances up or down as needed, and can control the host resources (e.g. CPU and memory) assigned to your containers as well. During the workshop, we discussed how solutions like Cloud 66 handle this without any extra scripting, to help them understand the benefits of selecting a vendor that fits their target architecture.
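To make this concrete, here is a minimal sketch of how an orchestrator such as Kubernetes expresses those guarantees: a desired replica count the scheduler maintains, plus per-container CPU and memory controls. The names and image are illustrative:

```yaml
# A minimal Kubernetes Deployment sketch: the orchestrator keeps three
# replicas running and controls the host resources each container may use.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 3                  # the scheduler replaces any replica that dies
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: example/api:1.0
          resources:
            requests:          # guaranteed share of host resources
              cpu: "250m"
              memory: "128Mi"
            limits:            # hard ceiling per container
              cpu: "500m"
              memory: "256Mi"
```

Scaling up or down is then a one-line change to `replicas` rather than a manual provisioning exercise.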
Some of the developers realized a little too late that when containers go away, so do their internal filesystems. If you deploy a database (in this case, MongoDB), then the database must mount an external volume for its data files. Otherwise, when the container is destroyed, any inserted data is destroyed with it. Any other important files must abide by the same constraints.
Docker has a nice introduction to volumes that explains how this works in greater detail.
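For the MongoDB case above, a sketch of the fix is to mount a named volume at the database's data directory, so the data outlives any individual container (the service and volume names are illustrative):

```yaml
# docker-compose.yml sketch: MongoDB with its data directory on a named
# volume, so inserted data survives when the container is destroyed.
version: "3"
services:
  mongo:
    image: mongo:3.2
    volumes:
      - mongo-data:/data/db    # /data/db is where MongoDB stores its data files
volumes:
  mongo-data:                  # managed by Docker; persists across container lifecycles
```

The same pattern applies to any container that writes files worth keeping: mount a volume at the write path rather than relying on the container's own filesystem.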
Choose your database wisely
I’ve written about this before, so I won’t dwell too much on it here. The leaders found that their selection of database positively or negatively impacted their infrastructure and implementation needs. Some teams selected a key/value store like Redis, only to realize that the effort required to implement certain aggregation functions would be better handled by a different database. Thankfully, the project was small and the issues were caught early. I’ve seen projects where this wasn’t the case, requiring considerable effort to replace the initial selection, or worse, lots of workarounds and stopgaps to get around the issue.
Serverless is promising, but not for everything…yet
A few of the leaders decided to explore a pure serverless architecture. While they were able to address many of their modernization needs with a serverless approach, a few things were missing: a complete API management solution for per-account rate limiting, endpoint and account-level usage reporting, and built-in fine-grained access control. These gaps can be closed with time and some code, but most teams want to focus on features rather than infrastructure.
Some leaders chose to mix a serverless approach with Docker containers to deploy their API management layer, requiring servers to be managed after all. By choosing the best container orchestration approach for your solution (as mentioned earlier), teams can still incorporate necessary infrastructure using a combination of Docker and serverless functions.
Is Docker production-ready?
Yes, Docker certainly is production-ready. Many organizations are realizing the benefits of using Docker, including GE, ADP, and others. Just make sure you take the time to plan out your production Docker environment. For further reading, check out “9 Critical Decisions for Running Docker in Production”.
Originally published at blog.cloud66.com on August 30, 2016.