In this post, I would like to summarize my thoughts about Domino on Docker. I will mainly focus on the question “What do I need to run Domino at scale in a containerized environment?” and the requirements that follow from it.
I’m pretty sure this post won’t cover everything, but it should at least be a starting point for what I think we would need to run Domino containers in a containerized production environment. Production, in this case, means Kubernetes, which is the de facto standard for running containers in production.
This post will not cover the question “What do I need to run Domino on my notebook for development purposes?” because the needs and requirements there are completely different.
As described above, this post is intended to make you aware of possible challenges and needs when running Domino at scale in a containerized production environment. I’m open to discussion.
Microservices
First of all, it’s really important to think in microservices. A microservice is a program that provides a single service, API or function, for example a web service based on a single Domino database, but not a list of databases or even a mix of services like HTTP, SMTP and IMAP. This is really important to reduce complexity, but also to become more flexible and scalable. The following topics will give further reasons that come back to the microservice approach described here.
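As a sketch of the idea, a service split along these lines keeps each container down to a single responsibility. The image names below are purely hypothetical; no such images exist today:

```yaml
# docker-compose style sketch: one Domino service per container.
# All image names are hypothetical, for illustration only.
services:
  web-app:
    image: example/domino-web:latest    # one web-enabled database over HTTP
  smtp:
    image: example/domino-smtp:latest   # SMTP routing only
  imap:
    image: example/domino-imap:latest   # IMAP access only
```

Each of these containers can then be checked, scaled and replaced independently, which is exactly what the following topics rely on.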
Further details on Microservices can be found here.
Console logging
By default, Domino logs its console output to log.nsf as well as to console.log. In a containerized environment, the common way is to redirect all output to stdout and stderr. From there it is easy to collect all output and later combine and manage it in common log management tools. It also makes the logs quickly accessible via the CLI (docker logs / kubectl logs).
Of course, there are other solutions, such as gathering the logs with a sidecar container, but the approach described above is the easiest as well as the most common one.
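A common container pattern for this is to symlink the log file to the stdout of the container’s main process (the official nginx and httpd images do the same with their access logs). A minimal entrypoint sketch, where the data directory path and the IBM_TECHNICAL_SUPPORT location are assumptions based on a default Domino layout:

```shell
#!/bin/sh
# Entrypoint sketch: make Domino's console.log end up on the container's
# stdout so `docker logs` / `kubectl logs` can read it.
# DATA_DIR and the IBM_TECHNICAL_SUPPORT subdirectory are assumptions
# based on a default Domino data directory layout.
DATA_DIR="${DATA_DIR:-/local/notesdata}"
LOG_DIR="$DATA_DIR/IBM_TECHNICAL_SUPPORT"

mkdir -p "$LOG_DIR"
# /proc/1/fd/1 is the stdout of PID 1, i.e. the container's main process.
ln -sf /proc/1/fd/1 "$LOG_DIR/console.log"
```

Everything Domino writes into console.log then flows straight into the container log stream, with no extra agent inside the container.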
Further details on log management with Kubernetes can be found here.
Health and readiness checks
Health and readiness checks are needed so that a container orchestrator can decide when to forward requests to a container and when to kill it or start new container instances. Those checks can be served by an HTTP endpoint (provided by the running service), a TCP check or a script (executed inside the container).
Let’s assume we have a Domino container running two web-enabled databases as well as an HTTP and an LDAP task. When do you decide to kill the container? When one of the two applications isn’t available? When the LDAP task stops responding? Once more, it’s really important to think in microservices!
On the other hand, just killing a Domino server without a proper shutdown is not a good idea either. But a graceful shutdown leads to downtime: the application stops answering requests while the Domino server is still shutting down, and only once the server has stopped can the container actually be killed and replaced.
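In Kubernetes terms, the split between “stop routing traffic” and “restart the container” maps to readiness and liveness probes, and proper termination maps to a preStop hook plus a long enough grace period. A sketch, where the image name, the port, the /health endpoint and the shutdown command are all assumptions, not existing Domino features:

```yaml
# Pod sketch: probes and graceful shutdown for a single-service Domino
# container. Image, port, /health endpoint and stop command are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: domino-web-app
spec:
  terminationGracePeriodSeconds: 300    # give Domino time to shut down cleanly
  containers:
  - name: domino
    image: example/domino-web:latest
    ports:
    - containerPort: 80
    readinessProbe:                     # failing: stop routing traffic
      httpGet:
        path: /health
        port: 80
      periodSeconds: 10
    livenessProbe:                      # failing: restart the container
      httpGet:
        path: /health
        port: 80
      initialDelaySeconds: 120          # Domino startup is not instant
      periodSeconds: 30
      failureThreshold: 3
    lifecycle:
      preStop:                          # proper server termination before the kill
        exec:
          command: ["/entrypoint.sh", "stop"]
```

Note that this only works cleanly if the container serves a single application, which brings us back to the microservice approach.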
Further details on health and readiness checks can be found here.
Scalability
Let’s assume we have a web-service database running in a container. Our orchestrator decides, based on a defined threshold, to scale our service by starting more instances. For Domino, this would mean that a new server has to be registered, configured and started, automatically and within seconds, and stopped and deleted again when scaling down. We would no longer care about a single Domino server instance because it would not be persistent, which of course would cause many issues today.
And I haven’t even started talking about Notes-based applications and how to scale them. Just one example: we would need to expose multiple servers on a single host. Because we could not put a proxy in front of the Notes protocol, this could only be done by using different ports (e.g. 32587 instead of 1352). Adding multiple IPs to a host would also lead to issues in a multi-node environment.
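For the web case, the orchestrator side of this already exists: a HorizontalPodAutoscaler can start and stop instances based on a threshold. The sketch below assumes a hypothetical stateless Deployment named domino-web-app; the hard part, automatically registering and deregistering the Domino servers behind it, is exactly what is missing today:

```yaml
# HPA sketch: scale a (hypothetical) stateless Domino web deployment
# up and down based on CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: domino-web-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: domino-web-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 75
```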
Further details on Container scalability can be found here.
Support for automation and easier configuration
I already touched on this topic in the scalability section. Easily adding instances to a domain and/or cluster is key. Some examples:
- Joining a server to a cluster via an API by providing a secret, as many other modern applications do.
- Supporting common solutions like reading configuration parameters (notes.ini) from exposed environment variables.
- Providing certificates and other secrets (ID files) by mounting Kubernetes Secrets into the container instance.
- Providing APIs to change global configuration such as server, configuration and website documents.
Those examples would not only help in a containerized environment but also for any automated deployments.
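What such support could look like in a pod spec, with all variable names, secret names and paths being hypothetical:

```yaml
# Container spec sketch: notes.ini values from environment variables and
# IDs/secrets from Kubernetes Secrets. All names and paths are made up.
containers:
- name: domino
  image: example/domino-web:latest
  env:
  - name: NOTESINI_ServerTasks          # hypothetical notes.ini mapping
    value: "http,replica,router"
  - name: CLUSTER_JOIN_SECRET           # hypothetical cluster join token
    valueFrom:
      secretKeyRef:
        name: domino-cluster
        key: join-token
  volumeMounts:
  - name: server-id
    mountPath: /local/secrets           # assumed location for the server ID
    readOnly: true
volumes:
- name: server-id
  secret:
    secretName: domino-server-id
```

Rotating a secret or changing a notes.ini parameter would then be a declarative change in the manifest rather than a manual step on the server console.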
Reduced version dependencies
Version-dependent files such as templates and other related Domino data files (e.g. the HTTP directory) need to be strictly separated from the databases to make it easier to upgrade or downgrade between Domino versions. Only real data files (databases) should be stored within the Domino data directory.
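Expressed as volumes, only the databases would live on a persistent volume, while templates and other version-specific files stay in the image and are replaced simply by swapping the image tag. The image name, tag and paths below are assumptions:

```yaml
# Sketch: persistent volume for databases only; templates and HTTP files
# remain part of the image and are swapped on upgrade. Paths are assumptions.
containers:
- name: domino
  image: example/domino-web:12.0.1      # hypothetical versioned image
  volumeMounts:
  - name: domino-data
    mountPath: /local/notesdata         # real data files (databases) only
volumes:
- name: domino-data
  persistentVolumeClaim:
    claimName: domino-data
```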
Support for smaller Images
Small images are key to running and scaling containers without any downtime. As an example, a small base image could be Alpine at around 5 MB, instead of a full-blown CentOS or RHEL image.
The topics above are at least some of the requirements that I believe would be necessary to run Domino at scale in a containerized production environment. In my opinion, many pieces are still missing before we can seriously think about using Domino on Docker in production. As said above, I’m open to discussion.