Containerizing our Magento stack on AWS

Cheikh MAHAMAN
My Local Farmer Engineering
6 min read · May 24, 2022

As part of our move to AWS, we started rethinking our eCommerce Magento stack. In this post, we will cover the architecture we chose to adopt, how we reached this decision, and the highlights/lowlights.

Disclaimer
I Love My Local Farmer is a fictional company inspired by customer interactions with AWS Solutions Architects. Any stories told in this blog are not related to a specific customer. Similarities with any real companies, people, or situations are purely coincidental. Stories in this blog represent the views of the authors and are not endorsed by AWS.

Genesis

A few years ago, without much knowledge of how to build an eCommerce website, we started looking for off-the-shelf options on the market. Among various options like osCommerce, OpenCart, Zen Cart, and WordPress, we picked Magento.

After some years, we reached the limits of this architecture for many reasons: management (upgrades/updates), manual scaling and the cost it incurred, disaster recovery, etc. As part of our move to the cloud, we decided, as a first step, to modernize it.

Architecture choices & Design

As we were discovering AWS and its overwhelming number of compute services, we had to decide what we wanted to do, which was not easy. Based on our experience and the small size of our team, our (biased) objective was to have a platform where we would manage as little as possible. Therefore, Amazon EC2 was out of consideration. We then investigated Amazon ECS and Amazon EKS.

In our last posts ([1] & [2]), our colleagues explained why they chose Amazon ECS with AWS Fargate, and not Amazon EKS, as the platform to run their containerized workloads. We wanted to capitalize on what our teams had already done and reuse the tooling they had already tested.

With that said, we only had half of the solution... well, even less. How would we manage the lifecycle of our Docker images? Should we build them ourselves from scratch and maintain them? During our research we found a Magento Docker image provided by Bitnami on the Amazon public registry (Amazon ECR Public Gallery). We chose to start a PoC using this image instead of building and managing a new one ourselves. We used Magento version 2.4.4 as it was the one we had on-premises.

For the dependencies, Magento needed a MariaDB database and an Elasticsearch cluster. Of course, our motto remained the same: “All managed — when possible”. We decided to build our PoC with Amazon RDS for MariaDB and Amazon Elasticsearch Service. We considered using Aurora Serverless for the database, and maybe exploring Amazon OpenSearch Service, but our goal was to minimize our effort, not to complicate our PoC or our migration. We decided to keep the same engine/version as the one we had on our production system (well, we didn't really stick to that, and we will explain why shortly).

Our target architecture was as follows:

1/ An Application Load Balancer forwarding requests to our Magento application

2/ An ECS cluster on Fargate used to deploy our Magento application across 2 Availability Zones

3/ An Amazon RDS for MariaDB database with a Multi-AZ configuration

4/ An OpenSearch cluster with an Elasticsearch engine (...not really).

This architecture is derived from the mandatory requirements (database and search) to set up a Magento 2.4.4 application. There are other components, such as Redis, Varnish, and RabbitMQ, which are neither mandatory nor covered in this PoC but should be considered for a production workload.

We chose to use AWS CDK to create our infrastructure as code and to build reusable deliverables. This choice, like the one made for ECS, was made to leverage internal knowledge and capabilities. As a small team, we didn't want to introduce new tooling but to focus on reusing the existing ones.

You can find the CDK project on this GitHub repository. It is composed of 3 main components.
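As a rough illustration of how these components fit together, a minimal CDK v2 app.py could wire the three stacks as shown below. The module paths, stack class names, and the properties passed between stacks (VPC, database, search domain) are assumptions made for this sketch; the actual wiring lives in the repository.

#!/usr/bin/env python3
import aws_cdk as cdk

# Hypothetical module and class names; the repo ships magento_db_stack.py,
# magento_es_stack.py and magento_app_stack.py, but the classes may differ.
from magento.magento_db_stack import MagentoDbStack
from magento.magento_es_stack import MagentoEsStack
from magento.magento_app_stack import MagentoAppStack

app = cdk.App()

# The database stack is assumed to own the VPC and expose it to the others.
db_stack = MagentoDbStack(app, "MagentoDbStack")
es_stack = MagentoEsStack(app, "MagentoEsStack", vpc=db_stack.vpc)
MagentoAppStack(app, "MagentoAppStack",
                vpc=db_stack.vpc,
                database=db_stack.db,
                search_domain=es_stack.domain)

app.synth()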

MariaDB cluster

As explained earlier, we chose to use the same database engine that we have on-prem. We were curious about the capabilities (scaling and manageability) offered by the latest Aurora Serverless, but because we wanted to keep it simple, we used MariaDB version 10.4 (please refer to magento_db_stack.py in the solution code repository).
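As an illustration, a minimal CDK v2 definition of such a database could look like the snippet below. The instance size and database name are assumptions for the sketch; the real values are in magento_db_stack.py.

from aws_cdk import aws_ec2 as ec2, aws_rds as rds

# Inside the stack's __init__, assuming an existing `vpc` is in scope.
# Multi-AZ Amazon RDS for MariaDB 10.4, matching the on-premises engine version.
db = rds.DatabaseInstance(
    self, "MagentoDb",
    engine=rds.DatabaseInstanceEngine.maria_db(
        version=rds.MariaDbEngineVersion.VER_10_4
    ),
    vpc=vpc,
    multi_az=True,
    instance_type=ec2.InstanceType.of(
        ec2.InstanceClass.BURSTABLE3, ec2.InstanceSize.MEDIUM  # assumed sizing
    ),
    database_name="magento",  # assumed schema name
)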

Elasticsearch domain

Here we struggled a bit. Magento 2.4.4 requires Elasticsearch 7.16, but as this version is not available on AWS (7.10 is the latest), we looked for one on elastic.co. Unfortunately, only 7.17.3 was available there, so we would have had to upgrade our Magento to 2.4.5, as per Adobe's statement in the requirements ("Adobe only supports the combination of system requirements described in the following table."), which we didn't want.
Based on this page, we saw that the migration from Elasticsearch to OpenSearch seemed "simple". So we decided to go with OpenSearch 1.2, hoping for a smooth transition (please refer to magento_es_stack.py in the solution code repository).
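A minimal CDK sketch of such a search domain might look like the following. The node count, instance type, and VPC layout are assumptions for illustration; see magento_es_stack.py for the real configuration.

from aws_cdk import aws_opensearchservice as opensearch

# OpenSearch 1.2 domain spread over 2 Availability Zones and reachable only
# from inside the VPC (no public endpoint). Assumes `vpc` spans two AZs with
# private subnets; sizing values are placeholders for the PoC.
domain = opensearch.Domain(
    self, "MagentoSearch",
    version=opensearch.EngineVersion.OPENSEARCH_1_2,
    vpc=vpc,
    zone_awareness=opensearch.ZoneAwarenessConfig(
        enabled=True, availability_zone_count=2
    ),
    capacity=opensearch.CapacityConfig(
        data_nodes=2,  # must be a multiple of the AZ count
        data_node_instance_type="t3.small.search",
    ),
)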

ECS Fargate cluster for Magento

For our ECS deployment, we used the ApplicationLoadBalancedFargateService construct. It is a Fargate service running on an ECS cluster, fronted by an Application Load Balancer. The configuration was quite simple (please refer to magento_app_stack.py in the solution code repository).
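A stripped-down version of that configuration could look like this. The image tag, container port, environment variable names (taken from the Bitnami image conventions), task sizing, and grace period are illustrative assumptions, and the `db` and `domain` objects are assumed to come from the previous sketches; check magento_app_stack.py and the Bitnami image documentation for the exact values.

from aws_cdk import Duration
from aws_cdk import aws_ecs as ecs, aws_ecs_patterns as ecs_patterns

# Fargate service behind an Application Load Balancer, starting with a single
# task while Magento bootstraps the database and the search indexes.
service = ecs_patterns.ApplicationLoadBalancedFargateService(
    self, "MagentoService",
    vpc=vpc,
    cpu=2048,
    memory_limit_mib=4096,
    desired_count=1,
    # Generous grace period so the first container can finish its bootstrap
    # before load balancer health checks start counting (value to be tuned).
    health_check_grace_period=Duration.minutes(15),
    task_image_options=ecs_patterns.ApplicationLoadBalancedTaskImageOptions(
        image=ecs.ContainerImage.from_registry("public.ecr.aws/bitnami/magento:2.4.4"),
        container_port=8080,  # the Bitnami image serves HTTP on 8080
        environment={
            # Variable names follow the Bitnami Magento image conventions;
            # credentials should come from Secrets Manager, not plain env vars.
            "MAGENTO_DATABASE_HOST": db.db_instance_endpoint_address,
            "MAGENTO_ELASTICSEARCH_HOST": domain.domain_endpoint,
        },
    ),
)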

This example is provided as a sample that relies on HTTP, which should never be used in a production system. Use encrypted traffic instead. Please follow the HTTPS prerequisites in the repository to activate the HTTPS configuration.

Managed, yes... but was it that simple?

We managed to deploy a full (empty) Magento application, but we faced some issues and open questions that I would like to highlight here:

  • CDK issues: Our CDK adoption came with some challenges related to the CDK version we used. We started our project with the latest CDK v2 and, as beginners, we tried to use examples available on the internet and in the official documentation ([1] & [2]). We mainly found examples written for CDK v1 and sometimes struggled to make them work on v2.
  • OpenSearch: As explained earlier, we decided to use OpenSearch 1.2 to keep our Magento at 2.4.4. The configuration was challenging, as we could not connect to OpenSearch on AWS when a master user is used for authentication. We decided to leave OpenSearch without authentication and locked down access to VPC-only access (no public access). The next step is to deep dive into this issue and test the other available authentication options (IAM, Cognito).
  • AutoScaling: When running on premises, we didn't think that much about scalability rules; we were just adding instances on demand when needed. Now we need to think a bit more about the scaling rules on our ECS cluster (scaling policies and boundaries) while keeping cost in mind (see the sketch after this list).
  • Boot time: Magento's first start takes time to initialize the database schemas and the search indexes. We need to be aware of the health_check_grace_period, which is the period of time, in seconds, during which the Amazon ECS scheduler ignores unhealthy Elastic Load Balancing targets after a task first launches. It needs to be adjusted to make sure the first container has enough time to bootstrap the database and configure the indexes.
  • Related to the previous point, we decided to start with one container because of the first boot, where Magento configures the database and the search index. We were not sure how the first boot would behave if we started 2 containers in parallel. This is something that needs to be tested further.
  • Amazon RDS vs Amazon Aurora Serverless: As stated previously, we were curious about using Aurora Serverless to reduce our operational overhead even further. Even if we were clear about right-sizing the ACUs, we would still need to benchmark the scaling performance and compare the cost of Amazon Aurora, Amazon Aurora Serverless, and Amazon RDS (MariaDB) against the management effort. Also keep in mind that if we move to Amazon Aurora (MySQL or PostgreSQL), we may need to migrate and/or convert some tables from MariaDB (to be tested).
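For the auto scaling point above, here is a sketch of what a first set of scaling rules could look like on the Fargate service created earlier. The capacity bounds, target CPU utilization, and cooldowns are placeholders to be tuned against real traffic and cost data.

from aws_cdk import Duration

# Bound the task count to keep cost predictable, then track CPU utilization.
scaling = service.service.auto_scale_task_count(
    min_capacity=1,
    max_capacity=4,
)
scaling.scale_on_cpu_utilization(
    "MagentoCpuScaling",
    target_utilization_percent=60,
    scale_in_cooldown=Duration.minutes(5),
    scale_out_cooldown=Duration.minutes(2),
)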

What’s next

In an upcoming post, we will share the next steps. Now that we have the blueprints for containerizing our Magento app, we will share how we migrate our stack with near-zero downtime.


Cheikh MAHAMAN
My Local Farmer Engineering

AWS Solutions Architect with a passion for technology and cooking. Posts are my own.