It has been almost two months since the decision to go for a multi-instance architecture was made, and since then the technical track has focused on the much-needed groundwork to enable such an architecture.
High-level, global architecture
The multi-instance architecture will allow each Greenpeace National/Regional office (NRO) to have its own customisable WordPress instance. This means the infrastructure will have some services that are shared or centralised, and others that are distributed, one per NRO.
Some services will be centralised to serve all NROs, such as continuous integration, unit testing and monitoring. This is needed to ensure overall consistency and quality across the platform, while at the same time reducing the maintenance needed for each office. The majority of the tools intended to support each of these shared services have been reviewed and tested; some are still going through selection review. Here is a map of the shared services and tools explored so far:
Each NRO will have 3 different Planet 4 pods or environments: a testing environment to run the automated tests and manual functional testing, a staging environment to perform user acceptance testing (UAT), and a production environment, where the end-user platform will sit open to the public.
Each of these environments will be hosted on either 2 containers or 2 webheads (depending on the hosting architecture selected), and will be supported by a high-availability key-value database (Redis Sentinel) to handle sessions. Below is a representation of how the Planet 4 local instance architecture will look.
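To illustrate the session handling mentioned above, here is a minimal `php.ini` sketch for storing PHP sessions in Redis. It assumes the phpredis extension is installed; the hostname is a placeholder, and in a Sentinel setup the application would still need a Sentinel-aware client or proxy to resolve the current master:

```ini
; php.ini (fragment) -- illustrative only
; Store PHP sessions in Redis instead of local files,
; so any webhead/container can serve any user's session.
session.save_handler = redis
session.save_path = "tcp://redis-master.planet4.internal:6379?timeout=2"
```

Keeping sessions out of the web nodes is what allows the two containers or webheads per environment to be treated as interchangeable.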
Infrastructure deployment tool
The new Greenpeace websites will be deployed on Google Cloud through Google Deployment Manager. The IT team explored three types of tools to deploy and manage the Planet 4 infrastructure in a reproducible, predictable, scalable and continuously testable way. These are:
- Google Cloud Console
- Google Deployment Manager (GDM)
- A third-party service (like Terraform, for example)
We were looking for a tool that supports automation, has version control, supports infrastructure changes and can scale deployments to multiple nodes. Both Deployment Manager and Terraform suit these tasks, but due to the problems Terraform has with the current version of the Google Kubernetes Engine API, Deployment Manager is the option that seems to fit Planet 4's requirements best at this moment. GDM automates the creation and management of the platform, allowing the infrastructure to be managed as code, including descriptive information to track changes.
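As a concrete illustration of the infrastructure-as-code approach, a GDM configuration is just a versionable YAML file. The sketch below is hypothetical (the resource name, cluster name and zone are placeholders), but it shows the shape of a config that would declare a small Kubernetes cluster:

```yaml
# deployment.yaml -- hypothetical GDM config, names and zone are placeholders
resources:
- name: planet4-cluster
  type: container.v1.cluster
  properties:
    zone: europe-west1-b
    cluster:
      name: planet4-test
      initialNodeCount: 2
```

Such a file would typically be applied with `gcloud deployment-manager deployments create planet4 --config deployment.yaml`, and changes to it can be reviewed and tracked like any other code.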
At this stage the final hosting architecture of the website has not yet been decided. Two different approaches have been explored, deployed and tested in parallel to measure the performance, high availability and scalability of the website:
- Virtual machine approach (also referred to as the “classic setup”, as this is the architecture Greenpeace usually uses for other services): the traditional method of building servers. In March, a test setup for this approach was built, with around five variations the team wants to run performance tests against.
- Docker containers approach (Kubernetes/pods), which provides the ability to package and run an application in an isolated environment (container), allowing many containers to run simultaneously on a given host. A test setup for this approach was released in March and will be further evaluated by the team in the coming weeks.
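For the container approach, the two-container-per-environment layout described earlier maps naturally onto a Kubernetes Deployment. The manifest below is an illustrative sketch only; the names, labels and image tag are assumptions, not the project's actual configuration:

```yaml
# Illustrative Kubernetes manifest -- names and image tag are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: planet4-wordpress
spec:
  replicas: 2                      # two interchangeable web containers
  selector:
    matchLabels:
      app: planet4-wordpress
  template:
    metadata:
      labels:
        app: planet4-wordpress
    spec:
      containers:
      - name: wordpress
        image: wordpress:4.7-apache
        ports:
        - containerPort: 80
```

With sessions held in Redis rather than on the web nodes, Kubernetes can replace or reschedule either replica without logging users out.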
Both approaches have been set up on Google Cloud due to its high renewable-energy rankings, its high-availability capabilities, and the option to spread users across instances in different Google Cloud zones within the same region.
A WordPress instance has been set up on each architecture, and performance tests are about to be executed on both to see how each setup behaves under stress conditions. The architecture that performs best and facilitates long-term maintainability will be the one chosen. These tests are critical to ensure Planet 4 will have the right setup to support high volumes of traffic, such as that of the Greenpeace.org audience or bigger.
Continuous Integration and Testing
A Jenkins server has been set up to start building the integration pipeline and to configure the jobs needed to automate the deployments and run the automated tests. Jenkins was chosen due to its wide range of integrations with other tools and its flexible configuration options.
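To make the pipeline idea concrete, a Jenkins job of this kind is often described in a declarative Jenkinsfile. The stages and shell commands below are hypothetical placeholders, sketching only the build/test/deploy flow described above:

```groovy
// Jenkinsfile (declarative pipeline) -- illustrative sketch only;
// the stage names and shell commands are placeholders, not the project's actual jobs
pipeline {
  agent any
  stages {
    stage('Build') {
      steps { sh 'composer install --no-dev' }   // resolve site dependencies
    }
    stage('Test') {
      steps { sh 'composer test' }               // run the automated test suite
    }
    stage('Deploy to test') {
      steps { sh './deploy.sh test' }            // push to the testing environment
    }
  }
}
```

Keeping the pipeline definition in the repository, next to the code it builds, means changes to the pipeline itself go through the same review process as everything else.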
At the same time, two performance testing tools have been reviewed to see which can better accomplish the types of testing we intend to do. JMeter and Gatling were considered, and the choice is currently leaning towards Gatling, mostly due to its ease of use for scripting and its reporting abilities.
Progress on the software architecture has also been made. This baseline work will save us some time and help accelerate the roll-out process once the functional specifications of the websites are ready.
As simplified in the scheme above, the idea is to leverage multiple tools to automate as much of the Planet 4 deployment as possible:
- Composer: Composer is used to define and manage the dependencies of each site, such as the WordPress version used or which modules and themes should be enabled. Composer builds the WordPress instance from a site definition file (aka the composer file). This file contains the list of packages (plugins, themes, libraries) provided in public and private Composer repositories.
- The WordPress command line interface: wp-cli solves the need for automating core installation and updates, as well as the enabling and disabling of plugins and themes. It is called by Composer, once the dependencies are downloaded locally, to run the installation/update scripts and to activate the selected themes and plugins.
- A Composer registry (Satis): serves the Planet 4 themes and plugins, as well as references to the official releases of WordPress (and other plugins that do not support Composer). The Planet 4 registry will be built using Satis. This registry will host a selection of plugins and themes officially supported by the project team, to encourage local offices to reuse and share trusted resources and so facilitate maintenance.
- The site definition repository: a light repository containing all the information needed to build a site locally (e.g. the composer file and a local configuration file containing information such as database credentials). An example site called “planet4-base” was built using these principles; this base site uses the official master and child themes.
- The master and child themes: even though the design phase has not started yet, the theme repositories have already been created, uploaded to GitHub, and made publicly available. It is envisioned that, depending on their customisation needs (and development/design abilities), each office will be able to fork and extend the official master or child themes.
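Putting the pieces above together, a site definition file could look something like the sketch below. The registry URL, version constraints and the install script are illustrative assumptions (only “planet4-base” is named in this post); the `scripts` section shows how Composer can hand off to wp-cli after dependencies are fetched:

```json
{
  "name": "greenpeace/planet4-base",
  "repositories": [
    { "type": "composer", "url": "https://satis.example.org/planet4" }
  ],
  "require": {
    "johnpbloch/wordpress": "~4.7",
    "greenpeace/planet4-master-theme": "^0.1"
  },
  "scripts": {
    "site:install": [
      "wp core install --url=planet4.test --title='Planet 4' --admin_user=admin"
    ]
  }
}
```

Because everything a site needs is declared in this one file, spinning up a new NRO instance becomes a matter of forking the repository, adjusting the package list, and running the build.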
The Planet 4 project offers you the chance to innovate fearlessly. Please feel free to apply and join the team.
You can also join the team on a volunteer basis. If any of these areas of work interests you, please fill in this form and tick “Technical track”, and we will drop you a line (of code) soon.
Also, we’d love to hear your comments and receive feedback to improve the technical approach of this project. Please add your comments below or send an email to the Google group.
Written in collaboration with Remy, Larry Titus and Diego Lendoiro.