Is it possible to share your docker file, even if it’s just genericized as to names of services and…

Sure, you can take a look here:

Basically, it’s a mix of a few other Dockerfiles I’ve seen. Note that I’m using supervisor to run things, and that the config script must also run when the container launches. The reason is that I replace the Nginx webroot with a path inside the mounted EFS. That path changes based on the ECS Task, so we can run multiple environments from the same Docker image. I stripped out a bunch of other stuff because it included things not relevant to WordPress (we also had some static pages and such outside of WP).
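As a rough sketch of that pattern (the base image, package names, and file names like config.sh and supervisord.conf are illustrative, not the actual gist contents):

```dockerfile
FROM ubuntu:16.04

# Install nginx, php-fpm, and supervisor (package names are illustrative)
RUN apt-get update && apt-get install -y nginx php7.0-fpm supervisor

# supervisord keeps nginx and php-fpm running in the foreground;
# the config script rewrites paths from env vars before they start
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
COPY config.sh /usr/local/bin/config.sh

# Run the config script at container launch, then hand off to supervisord
CMD ["/bin/sh", "-c", "/usr/local/bin/config.sh && supervisord -n"]
```

The key point is that the environment-specific work happens at container start, not at image build time.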

Also note the user data file in this gist. It’s what you’ll use for your EC2 instances (ECS-optimized AMIs) to have them mount EFS. The cluster name is “default” in this case, but again, we have multiple clusters, so you may want to change that depending on your needs.
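The shape of that user data looks something like the following sketch. The filesystem ID, region, and mount point are placeholders, and the NFS mount options are just common EFS defaults, not necessarily what the gist uses:

```shell
#!/bin/bash
# Join this instance to the ECS cluster (the gist uses "default")
echo "ECS_CLUSTER=default" >> /etc/ecs/ecs.config

# Placeholders: substitute your real EFS filesystem ID and region
EFS_ID=fs-12345678
MOUNT_POINT=/mnt/efs

# Mount EFS over NFSv4.1 and persist it across reboots
mkdir -p "$MOUNT_POINT"
echo "$EFS_ID.efs.us-east-1.amazonaws.com:/ $MOUNT_POINT nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 0 0" >> /etc/fstab
mount -a
```

Since this runs on the EC2 host, every container on that instance can reach the EFS contents through a bind mount defined in the ECS Task definition.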

So back to your question: the user data script for EC2 and the ECS Task definition are what really hook up the EFS, not the Dockerfile. If you mounted to /var/www/html (or wherever your webroot is), you wouldn’t need any of the sed commands in the config script. However, you would then need a separate Dockerfile for each environment, and a change to your Nginx or PHP config could mean building and pushing several images to the ECS Registry. In our case, I had a script read environment variables (set by ECS) for a more reusable Dockerfile, so we only need to manage one Docker image. We do leverage tags on that image, though, so we don’t accidentally push a new image to production with Nginx config changes that could cause problems without being QA’d.
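To make the sed part concrete, here is a self-contained sketch. The WEBROOT variable name, the config path, and the EFS path are all assumptions for illustration; the demo writes a throwaway nginx config locally rather than touching /etc/nginx:

```shell
#!/bin/sh
# Stand-in for the real nginx site config path
CONF=./nginx-default.conf
printf 'server {\n    root /var/www/html;\n}\n' > "$CONF"

# WEBROOT mimics the env var set per environment in the ECS task
# definition; fall back to an example EFS path if it's unset
WEBROOT="${WEBROOT:-/mnt/efs/staging/webroot}"

# Swap the stock webroot for the environment's EFS path
sed -i "s|root /var/www/html;|root ${WEBROOT};|" "$CONF"
cat "$CONF"
```

Because the rewrite keys off an environment variable, the same image serves staging and production; only the task definition differs.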

