At eDreams ODIGEO we have a complex infrastructure, complex in terms of component relationships. When we deploy an application in GKE, several components interact with each other, for example:
- Monitoring stack
- Infrastructure stack
- Persistence stack
- Application stack
As for our web infrastructure, we have 200+ apps running as microservices, both API services and robots. In building the tooling around them, we faced five main challenges:
- Build a tool that glues all of the stacks together in one single place.
- Reduce the creation time of a new app from ~1 hour to 5–10 minutes.
- Make it simple and accessible for devs.
- Keep the tool agnostic from the code's point of view (no Python, Go, or other scripting languages).
- Keep it simple!
With all of this in mind, we decided to build a tool based entirely on Ansible.
The name of this app is Sushi because “In the same way as Sushi is made, we take several components from different sources and create a meta-application ready to be delivered with 0 interaction from the consumer.”
The tool is packaged into a Docker container and launched via a Jenkins pipeline.
The Ansible playbooks are built very organically: every module is incorporated as a dependency, and those dependencies are described via Ansible Galaxy, so it is quite easy for us to add new functionality to the tool.
Here is a simplified example of such a workflow:
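As a minimal sketch of how this could be wired up, an Ansible Galaxy requirements file might declare the stack roles as dependencies, and an entry-point playbook might apply them. All role names, repository URLs, and versions below are hypothetical, not Sushi's actual dependencies:

```yaml
# requirements.yml — hypothetical role list, for illustration only
- src: git@git.example.com:ansible-roles/monitoring-stack.git
  scm: git
  version: "1.4.0"
  name: monitoring-stack
- src: git@git.example.com:ansible-roles/persistence-stack.git
  scm: git
  version: "2.1.3"
  name: persistence-stack

# sushi.yml — illustrative entry-point playbook applying the roles
- hosts: localhost
  roles:
    - monitoring-stack
    - persistence-stack
```

With a layout like this, adding a new capability is just a matter of publishing a new role and appending it to the requirements file.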
All the configuration needed by Sushi is stored in a single YAML file located in each application's code repository. This way we separate the configuration from the application, a standard good practice in configuration management.
As an example, here is a snippet of the global app configuration that defines some variables for an application using Memorystore and Bigtable resources in one of our Google Cloud clusters. Sushi will use this information to create those resources via Terragrunt*.
*Terragrunt is one of the tools we use internally to create our infrastructure.
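Such a snippet could look roughly like the following. This is a sketch only: the keys, project ID, and cluster name are assumptions for illustration, not Sushi's real schema:

```yaml
# Illustrative only — field names and values are hypothetical
app_name: tickets
gcp_project: edo-web-prod        # hypothetical project ID
cluster: gke-europe-west1        # hypothetical cluster name

infrastructure:
  memorystore:
    enabled: true
    tier: STANDARD_HA
    memory_size_gb: 4
  bigtable:
    enabled: true
    num_nodes: 3
    storage_type: SSD
```

From a block like this, a tool can template the corresponding Terragrunt inputs and create the Memorystore instance and Bigtable cluster without the developer touching any Terraform code.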
Obviously, the configuration YAML can get quite extensive, especially for applications with many dependencies, since Sushi acts as a “frontend” for all our “backend” tools: Terraform, Helm, Kafka, etc. It is a bit too long to cover in a single article.
I don’t want to finish without mentioning that every application deployment using this method generates metadata to be consumed by other reporting tools. The way we generate this metadata is quite simple: we add the app metadata to the same YAML configuration file, and Sushi stores it in our regional Consul K/V database.
Here’s a short snippet of the global YAML configuration file for our tickets app:
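A sketch of what such a metadata section could look like, with the caveat that every field name below is an assumption for illustration, not the actual schema:

```yaml
# Illustrative only — metadata fields are hypothetical
metadata:
  app: tickets
  team: web-platform
  tier: backend
  language: java
  repository: git@git.example.com:apps/tickets.git
```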
And here is the metadata part as stored in our Consul K/V. The information is available to be consumed by any reporting or inventory tool.
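For instance, such metadata might be stored under a per-app key such as `sushi/apps/tickets/metadata` (a hypothetical path), with the value serialized as JSON. Both the key layout and the payload below are illustrative:

```json
{
  "app": "tickets",
  "team": "web-platform",
  "tier": "backend",
  "language": "java"
}
```

Keeping the value as plain JSON means any inventory or reporting tool can read it from Consul without knowing anything about Sushi itself.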
In conclusion, Sushi gives us better granularity in how we configure our apps and better traceability when we deploy them inside our clusters. While this approach works well for us and is a good starting point, we are still developing new modules to integrate Sushi with even more tools from our internal ecosystem.