IT Operations Technology Stack

Carles San Agustin
Worldsensing TechBlog
4 min read · Apr 3, 2019

At Worldsensing, we create products with various infrastructure demands. Some installations are single-node, others distributed or clustered, but we are always evolving towards security, stability and excellence. We also have some legacy systems we must support and upgrade.

We start by designing the architecture for the most restrictive environment, which in our case is an on-premises installation. These are some of the obstacles we have encountered so far:

  • Nodes that cannot scale, either horizontally or vertically.
  • External support teams that lead to long response times.
  • Restrictive inbound and outbound firewall rules.
  • No Internet access.
  • Etc.

From there we can adapt the design to less restrictive setups, normally an easier task. This way we can deploy on bare-metal and cloud environments following a standard model, without changing much of the initial design.

The IT Operations team provides our company teams with standardized tools and methods for all products and deployments. Unifying the tools this way allows us to focus more on learning and improving them; we gain knowledge through practice, so we can support and maintain the installed versions.

Software

There is a huge ecosystem of software tools to help development and operations teams, with companies providing technical solutions for every technical need. From all those solutions we chose the ones that give our projects increased deployment security, stability and quality, and that our teams feel most comfortable working with.

We prefer selecting open-source software because:

  • Wider integration options between software solutions.
  • Community help: documentation, forums, chat rooms, …
  • Speed: you can fix your own bugs.
  • Customizability, flexibility, … freedom!
  • More talent recruitment opportunities.
  • It helps prevent vendor lock-in.

Let’s see the list of software tools we are currently using.

Cloud:

We deploy our solutions both on premises and in cloud environments (private or public), depending on customer requests. Open-source solutions are our first pick when building infrastructure tools because we don't want provider lock-in; we want flexibility.

Our company history has left us with legacy providers, and we are slowly moving towards better, more stable cloud provider solutions. Customer demands are also taken into consideration when deploying our applications, due to integrations, zone locations, certifications, … These reasons lead us to use various cloud providers; in summary, I would say we are a multi-cloud company.

Containers:

We use containerized applications due to our microservices architecture and application modularity. This helps us separate the operating system from the application when troubleshooting, and it enables things like parallel release deployments.

Our infrastructure is based on configuration management scripts following Infrastructure as Code principles. We use Docker Compose files for single-node deployments and Kubernetes resource files for cluster deployments. Both options allow our developers to specify the application environment and its customizations via script files, which are then versioned.
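
As an illustration, a single-node deployment can be described in a minimal Docker Compose file. The service names, images and paths below are hypothetical, not our actual stack:

```yaml
# docker-compose.yml — hypothetical single-node deployment, kept in Git
version: "3"
services:
  api:
    image: example/api:1.2.0        # pinned version, changed via a commit
    env_file: .env                  # per-customer customizations
    restart: unless-stopped
  db:
    image: postgres:11
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```

Because the whole file is plain text, every change to the environment is reviewable and versioned like any other code change.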

Configuration Management:

Infrastructure changes are pushed to nodes with configuration management scripts. As explained above, we use Docker Compose and Kubernetes resource files to build applications; we use Ansible playbooks to deploy those scripts, run them and build the applications. Ansible, Python and Bash are also used to trigger actions on the infrastructure: networking, storage, node creation, cloud API interactions, etc.
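
A playbook for this kind of push could look like the sketch below. The `copy` and `command` modules are standard Ansible modules; the host group, paths and file names are invented for the example:

```yaml
# deploy.yml — push the versioned Compose file and (re)start the application
- hosts: app_nodes
  become: yes
  tasks:
    - name: Copy the versioned Docker Compose file to the node
      copy:
        src: files/docker-compose.yml
        dest: /opt/app/docker-compose.yml

    - name: Start or update the application
      command: docker-compose up -d
      args:
        chdir: /opt/app
```

Running `ansible-playbook deploy.yml` against an inventory then applies the same change to every node in the group.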

Development tools:

The GitOps technique uses the Git version control system as the single source of truth for declarative infrastructure definitions and changes. Deployments are composed of groups of script files with different syntaxes: YAML, Makefiles, Python, Bash, etc. Versioning changes to these text files gives us a history of the decisions and actions applied to different customizations and environments. Executables, libraries, variables, customizations, … are all versioned as text files.
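
The workflow can be sketched in a few commands: the desired state lives in a repository, and every infrastructure change is a commit. The repository contents and commit messages here are illustrative:

```shell
# GitOps sketch: the deployment manifest is a versioned text file,
# so an upgrade is just another commit in the history.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "ops@example.com"
git config user.name "IT Ops"

# The "desired state" of the environment, e.g. a Compose file.
cat > docker-compose.yml <<'EOF'
services:
  api:
    image: example/api:1.0.0
EOF
git add docker-compose.yml
git commit -q -m "deploy: api 1.0.0"

# Upgrading the application = editing the file and committing the change.
sed -i 's/1\.0\.0/1.1.0/' docker-compose.yml
git commit -q -am "deploy: api 1.1.0"
git log --oneline
```

The log then doubles as an audit trail of decisions applied to that environment.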

Make is used to standardize the instructions to start, stop, build, delete, … so development and operations talk the same language. Developers or operators can edit Makefiles to add protocol instructions that are later used by tools like Ansible or Jenkins. Make configuration files are also a simple way of customizing environments or installations.
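
For example, a Makefile can expose a shared vocabulary of targets. The target names and the `ENV` variable below are illustrative, not our actual conventions:

```makefile
# Makefile — shared entry points for developers, operators and CI tools.
# ENV is an overridable customization, e.g. `make deploy ENV=production`.
ENV ?= staging

build:
	docker-compose build

start:
	docker-compose up -d

stop:
	docker-compose down

# Jenkins or Ansible can simply call `make deploy` instead of ad-hoc scripts.
deploy: build start
```

Whether a human or a Jenkins job runs `make deploy`, the steps executed are exactly the same.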

Monitoring & Backup:

We are monitoring our installations with metrics, notifications and logging on top of our Docker deployments. It is the best way to check the performance of the different elements, and it helps our teams troubleshoot issues. It gives us eyes on the full stack, from nodes to application containers. The TICK (Telegraf, InfluxDB, Chronograf & Kapacitor) and ELK (Elasticsearch, Logstash & Kibana) stacks are our selected solutions.
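
In the TICK stack, for instance, a few lines of Telegraf configuration are enough to ship host and container metrics to InfluxDB. The plugin names are real Telegraf plugins; the URL and database name are placeholders:

```toml
# telegraf.conf — minimal metrics pipeline: host + Docker -> InfluxDB
[[inputs.cpu]]
[[inputs.mem]]
[[inputs.docker]]
  endpoint = "unix:///var/run/docker.sock"

[[outputs.influxdb]]
  urls = ["http://influxdb.example.internal:8086"]
  database = "telegraf"
```

From there, Chronograf dashboards and Kapacitor alerts are built on top of the same database.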

In our Kubernetes ecosystem we have just switched to Heptio Ark for cluster backup and disaster recovery. Even with Infrastructure as Code versioned in Git repositories, we like to have a snapshot of the resources currently in use in our customers' projects; we find it makes recovering from a disaster easy.
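
The day-to-day usage is a handful of CLI calls against the cluster; the backup names and namespace below are invented for the example:

```shell
# Snapshot the resources of a namespace (requires Ark running in the cluster).
ark backup create customer-a --include-namespaces customer-a

# Schedule a recurring backup with a cron expression.
ark schedule create nightly --schedule "0 1 * * *"

# After a disaster, restore the cluster state from a named backup.
ark restore create --from-backup customer-a
```

These commands only make sense against a live Kubernetes cluster with Ark deployed, so treat them as a sketch of the workflow rather than a runnable script.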

CI/Trigger:

Our workflows are triggered by Jenkins jobs. We use job variables to introduce changes and customizations into the tasks. A workflow can be a deployment to protected systems, automation of repetitive tasks, Docker image updates, … We also use Bitbucket Pipelines for CI/CD.
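
A parameterized job of this kind can be written as a declarative Jenkinsfile. The syntax is standard Jenkins Pipeline; the stage, parameter and Make target names are invented for the example:

```groovy
// Jenkinsfile — parameterized deployment workflow (illustrative)
pipeline {
    agent any
    parameters {
        string(name: 'ENV', defaultValue: 'staging',
               description: 'Target environment for the deployment')
    }
    stages {
        stage('Deploy') {
            steps {
                // Reuse the shared Make vocabulary instead of ad-hoc scripts.
                sh "make deploy ENV=${params.ENV}"
            }
        }
    }
}
```

The job variable becomes a form field in Jenkins, so the same job can deploy to any environment by changing one parameter.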

Summary

We have seen a list of software tools and how we use them at Worldsensing. This is not the only toolset we could use, but it suits us. We keep updating it to adopt new features and stay on top of security issues.

I hope you find this post interesting. Please share your own stack of tools in the comments below, and see you soon!


Senior DevOps Engineer & Geek. Photos, mountains, movies, running & beers!