Platform Deep Dive Series: DevOps Culture & Tech

Jon Sharratt · Published in sailci · 4 min read · Sep 25, 2018

Sail CI aims to solve complex problems to help teams focus on delivering customer value every second of every day (including our own).

As we evolve and improve the Sail CI platform (in open alpha at the time of writing), we want to share with everyone what happens to your source code when you use the platform, and how.

In this first part of our series of deep dives, we give a high-level overview of the tech stack Sail CI is composed of. Each following blog post will dive deeper into one component and share insights into how we use it.

To begin, the Sail CI team have decided to experiment with and track the ever-evolving technologies that are part of the Cloud Native Computing Foundation. Part of our culture is to allow team members to experiment and “always be learning”, to help improve the platform and try out new ideas. We believe that DevOps and culture go hand in hand, and that automation and CI practices reinforce a better culture all round within teams.

We have a firm belief that we should practice what we preach (a.k.a. dogfooding), and in doing so we want to foster a culture of enablement for teams to experiment, fail softly (more on this in another post) and deliver new and exciting features every second of every day.

Teams should have true autonomy with the ability to be decoupled and self-serve in every aspect of the value they deliver for our customers. We follow the classic mantra of “you build it, you run it” and get out of the way.

Tech

With all that said, let’s dig into the tech stack (current at the time of writing) that makes up the Sail CI platform and helps enable all of this:

Kubernetes

It should probably be no surprise that Sail CI has Kubernetes at its core: we extend the core Kubernetes APIs to schedule and run built images. We currently use Google Kubernetes Engine to host our initial offering; in the future we will support managed Kubernetes services across all cloud providers as we strive to allow scheduling build tasks on any cloud and any device.
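To make “schedule and run built images” concrete, here is a minimal sketch (not Sail CI’s actual code) of submitting a one-off build as a Kubernetes Job using client-go; the namespace, image and command are invented, and the Create call assumes a recent client-go release that accepts a context:

```go
package main

import (
	"context"
	"log"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Assumes the scheduler itself runs inside the cluster.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	backoff := int32(0)
	job := &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "build-1234", Namespace: "builds"},
		Spec: batchv1.JobSpec{
			BackoffLimit: &backoff,
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyNever,
					Containers: []corev1.Container{{
						// The image name and command are made up for illustration.
						Name:    "task",
						Image:   "registry.example.com/acme/app:build-1234",
						Command: []string{"make", "test"},
					}},
				},
			},
		},
	}

	// Submit the Job and let Kubernetes schedule the built image.
	if _, err := clientset.BatchV1().Jobs("builds").Create(context.Background(), job, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("build job scheduled")
}
```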

Check it out at:
https://kubernetes.io/

Ambassador

Built on top of Envoy, Ambassador allows teams to self-serve and manage routing for their services. With purely just configuration, teams can route traffic to their new services efficiently and autonomously, and they can use more advanced features such as traffic shadowing, rate limiting and authorisation to test services in production.
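As an illustration of the “purely just configuration” point: in the Ambassador releases current around this time, routing rules were declared as Mapping definitions inside a getambassador.io/config annotation on an ordinary Kubernetes Service. The sketch below is not Sail CI’s real config; the route names, prefix and canary service are invented, and it simply builds such a Service in Go and prints it as YAML, including a shadow Mapping of the kind used for testing in production:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"sigs.k8s.io/yaml"
)

// A hypothetical Ambassador Mapping pair: the first routes /builds/ traffic to
// the builds service, the second shadows a copy of that traffic to a canary.
const ambassadorConfig = `---
apiVersion: ambassador/v0
kind: Mapping
name: builds_mapping
prefix: /builds/
service: builds-service
---
apiVersion: ambassador/v0
kind: Mapping
name: builds_shadow_mapping
prefix: /builds/
service: builds-service-canary
shadow: true`

func main() {
	svc := corev1.Service{
		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Service"},
		ObjectMeta: metav1.ObjectMeta{
			Name: "builds-service",
			// Ambassador picks up its routing rules from this annotation.
			Annotations: map[string]string{"getambassador.io/config": ambassadorConfig},
		},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "builds"},
			Ports:    []corev1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(8080)}},
		},
	}

	out, err := yaml.Marshal(svc)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```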

Check it out at:
https://www.getambassador.io/

NATS Streaming

Generally, the API gateways configured within Sail CI using Ambassador are microservices that translate incoming events (such as webhooks from GitHub / GitLab) into “cloud events” (see below), which are then published using NATS Streaming. NATS Streaming gives teams persistent events with an at-least-once delivery guarantee in the case of any failures. Sail CI itself is an entirely event-driven platform.
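Here is a minimal sketch of that flow using the NATS Streaming Go client (imported as stan from the current stan.go repository; the older go-nats-streaming import path also works for earlier setups). The cluster ID, client ID, subject and durable name are placeholders; the durable subscription with manual acks is what gives the at-least-once redelivery behaviour mentioned above:

```go
package main

import (
	"log"
	"time"

	stan "github.com/nats-io/stan.go"
)

func main() {
	// Cluster ID, client ID and URL are placeholders for illustration.
	sc, err := stan.Connect("sail-cluster", "github-gateway",
		stan.NatsURL("nats://nats:4222"))
	if err != nil {
		log.Fatal(err)
	}
	defer sc.Close()

	// Publish an event; the channel is persisted by NATS Streaming.
	if err := sc.Publish("source.push", []byte(`{"eventType":"source.push"}`)); err != nil {
		log.Fatal(err)
	}

	// A durable subscription with manual acks: if this service crashes before
	// Ack is called, the message is redelivered, i.e. at-least-once delivery.
	_, err = sc.Subscribe("source.push", func(m *stan.Msg) {
		log.Printf("handling event: %s", m.Data)
		m.Ack()
	}, stan.DurableName("build-scheduler"),
		stan.SetManualAckMode(),
		stan.AckWait(30*time.Second))
	if err != nil {
		log.Fatal(err)
	}

	select {} // block forever; a real service would handle shutdown properly
}
```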

Check it out at:
https://nats.io/

Cloud Events

The messages that we publish use Cloud Events, a new specification being formed and developed under the Cloud Native Computing Foundation. We currently track the serverless working group and keep ourselves in the loop as the spec evolves. We ensure every microservice we build is decoupled, which allows teams to experiment more freely: if a team member fancies trying a new exotic language because they feel it will improve a customer’s experience, no problem.
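For a feel of what one of these envelopes looks like, here is a hedged sketch of a cloud event in Go; the attribute names follow the 1.0 spec (earlier drafts, current when this was written, used slightly different names such as eventType), and the event type, source and payload are invented:

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// cloudEvent mirrors the envelope defined by the CloudEvents spec.
type cloudEvent struct {
	SpecVersion     string          `json:"specversion"`
	Type            string          `json:"type"`
	Source          string          `json:"source"`
	ID              string          `json:"id"`
	Time            time.Time       `json:"time"`
	DataContentType string          `json:"datacontenttype"`
	Data            json.RawMessage `json:"data"`
}

func main() {
	// Roughly what a gateway might emit after receiving a GitHub push webhook;
	// all values here are made up for illustration.
	evt := cloudEvent{
		SpecVersion:     "1.0",
		Type:            "com.github.push",
		Source:          "/gateways/github",
		ID:              "8c2f7a9e-0001",
		Time:            time.Now().UTC(),
		DataContentType: "application/json",
		Data:            json.RawMessage(`{"repository":"acme/app","ref":"refs/heads/master"}`),
	}

	out, _ := json.MarshalIndent(evt, "", "  ")
	fmt.Println(string(out))
}
```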

Check it out at:
https://cloudevents.io/

Kubebuilder

To create new operators within Kubernetes that extend the core APIs, we use the powerful toolkit provided by Kubebuilder. It allows us to write custom operators in Golang that manage the reconciliation loop within Kubernetes effectively, and the CLI it provides lets us quickly scaffold new operators and deploy them with ease.
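Sail CI’s own custom resources are not shown here, so the sketch below reconciles a plain batch/v1 Job purely to show the shape of the reconcile loop a Kubebuilder-scaffolded controller implements on top of controller-runtime; the method signature assumes a recent controller-runtime release:

```go
package controllers

import (
	"context"

	batchv1 "k8s.io/api/batch/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// BuildJobReconciler is a stand-in for the kind of controller Kubebuilder
// scaffolds; Sail CI's real operators reconcile their own custom resources.
type BuildJobReconciler struct {
	client.Client
}

// Reconcile is called whenever a watched object changes; it drives the actual
// state of the cluster towards the desired state recorded in the object.
func (r *BuildJobReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	var job batchv1.Job
	if err := r.Get(ctx, req.NamespacedName, &job); err != nil {
		// The object may already be gone; nothing to do in that case.
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// Illustrative logic only: react to the build job finishing.
	if job.Status.Succeeded > 0 {
		// e.g. publish a "build succeeded" cloud event, update task status, ...
	}

	return ctrl.Result{}, nil
}

// SetupWithManager registers the reconciler so it is triggered on Job changes.
func (r *BuildJobReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&batchv1.Job{}).
		Complete(r)
}
```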

Check it out at:
https://book.kubebuilder.io/

Hasura GraphQL Engine & Postgres

To store data, we use tried and tested Postgres. We couple this with a new GraphQL engine that has arrived in spectacular fashion. We wanted an effective way of querying Postgres while supporting real-time updates via our public GraphQL API and web application, and Hasura provides us with that real-time capability using GraphQL subscriptions over WebSockets.

Additionally, one hugely significant feature Hasura provides is a way to record schema changes, from which it automatically generates the migration files in an instant. Developers can “record and commit” each change; once pushed, the migrations get applied as part of the CI pipeline. It provides a much faster workflow for evolving our database than traditional ORM tooling, and so far we have found it a great way to sketch out schema changes and updates before committing them.
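As a hedged illustration (the table, columns, endpoint path and admin secret header are all placeholders, and older Hasura versions used slightly different paths and header names), this is roughly what querying an auto-generated Hasura schema looks like from Go; the same selection set can be run as a subscription over a websocket for live updates:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Hypothetical schema: a "builds" table tracked by Hasura, which then
	// exposes matching query and subscription fields for it.
	query := `query RecentBuilds {
	  builds(order_by: {created_at: desc}, limit: 5) {
	    id
	    status
	    created_at
	  }
	}`

	body, _ := json.Marshal(map[string]string{"query": query})
	req, err := http.NewRequest(http.MethodPost, "http://hasura:8080/v1/graphql", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("X-Hasura-Admin-Secret", "not-a-real-secret")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}
```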

Check it out at:
https://hasura.io/

GraphQL Public API & GraphQL Bindings

Given our previous tech choice, it should come as no surprise that our public API uses GraphQL. Although primitive at the time of writing, we will be expanding it rapidly as we build out the Sail CI web application. Using Hasura, we now have the ability to provide subscriptions and live updates for every piece of data that changes as part of a CI pipeline within Sail CI. Of course, to help automate and gain some safety, we generate the GraphQL bindings automatically using GraphQL introspection against the Hasura GraphQL API.
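get-graphql-schema and graphql-binding are Node tools, but the step they rely on is plain GraphQL introspection. As a rough sketch of that step only (the endpoint and output file are placeholders, and this is not how our pipeline actually invokes the tooling), fetching a minimal slice of the schema from Go looks like this:

```go
package main

import (
	"bytes"
	"encoding/json"
	"io"
	"log"
	"net/http"
	"os"
)

// A deliberately minimal introspection query; tools like get-graphql-schema
// send the full standard introspection query to recover the complete schema.
const introspection = `{
  __schema {
    queryType { name }
    types { name kind }
  }
}`

func main() {
	// The endpoint stands in for the Hasura GraphQL API.
	body, _ := json.Marshal(map[string]string{"query": introspection})
	resp, err := http.Post("http://hasura:8080/v1/graphql", "application/json", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	raw, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}

	// The introspection result is what binding generation is driven from.
	if err := os.WriteFile("schema.json", raw, 0o644); err != nil {
		log.Fatal(err)
	}
}
```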

Check it out at:

https://graphql.org/
https://github.com/prisma/get-graphql-schema
https://github.com/graphql-binding/graphql-binding

We love to chat about all things Kubernetes and DevOps, so do reach out to us or ask questions on Twitter and Spectrum.

Next in line for a deeper dive is the Hasura GraphQL Engine.

Originally published at sail.ci.
