A CoPilot Story (Pt 1): Welcome to CoPilot

This is what we’ve been working on for the past six months…

Alex Jupiter
Make Us Proud

--

A CoPilot Story Table Of Contents

1. Welcome to CoPilot

2. The Product Creation Process and Designing Deploying Via A Docker Compose

3. Designing Metrics

4. Designing Notifications

5. Designing The Service & Application Catalogue

6. Designing Versioning

Introduction

In January I joined Make Us Proud as a Product Designer. My task: to work on the future of application and infrastructure management.

BOOM!

In really simple terms, the product I would help to build enables the running of internet applications. Whether it’s Instagram, Amazon or Gmail, all of these applications need somewhere for their code to live and function. The products that provide this functionality can all be put under the general banner of cloud computing providers.

There are many cloud computing providers on the market. Amazon has their product called Amazon Web Services (AWS) and Google has theirs called Google Cloud. There are also smaller players such as Digital Ocean and Heroku.

All of these products offer different functionality and are suited to different use cases and people. For example, AWS is great for experienced software engineers who know the intricacies of application and infrastructure management, whereas Heroku is good for the less experienced, who don’t need as much functionality and want something very simple to get their application running (interestingly, Heroku actually runs on top of AWS, but if this is confusing don’t worry about it for now).

The product I would help to mature in this market is Triton. Triton is built and managed by Joyent (an early pioneer of Node.js and cloud computing), which was recently acquired by Samsung. We have big plans for Triton and it’s exciting to finally share them with the community.

To develop new features for Triton we are releasing a Beta product called CoPilot that incorporates ideas that will eventually make their way into Triton over the coming year.

CoPilot, at the time of publication, is an experimental product and we’re going to be working hard to improve it in the coming months — feedback is thoroughly appreciated.

CoPilot (and Triton) is unlike anything else on the market for application and infrastructure management; here I am going to explain why.

Technical Explanation

With such a technical product an explanation is never easy, and if you’re familiar with these concepts then this will only be partly useful. However, I’m writing this so that hopefully anyone with a little knowledge of the internet can come to it and understand the project we’ve worked so hard on.

To start this explanation, it’s worth understanding how internet applications (or “apps”) are constructed. Here we are going to take the example of a blog, such as TechCrunch, that runs on the blogging software WordPress.

Firstly, it’s important to understand that a blog requires multiple services to run effectively, and at scale. For example, if we are going to create a WordPress blog, we are going to need a WordPress service. And if we want to store the information for our blog, we are also going to need a database service, such as Percona.

WordPress and Percona are the first two services that make up our blog application; now we’re going to introduce a few more.

Other services play supporting roles. Memcached speeds the blog up by keeping frequently requested information in memory rather than fetching it from the database every time; NFS provides shared file storage so that every part of the blog can read the same uploaded files; a service such as NGINX actually opens the whole blog up to the internet (and provides extra functionality that’s not relevant for now); and lastly a service such as Consul tells all of the other services how to connect with each other: it’s the service registry, or service catalogue.

The important thing to understand is that an application is made up of different services, and all of these services work in tandem; they’re like cogs in a machine.
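
To make this concrete, here is a rough sketch of how a stack of services like this might be written down in a Docker Compose file (a tool we’ll come back to later in this post). The service names and images below are purely illustrative, and NFS, which would normally be provided as shared storage rather than as another container, is left out for simplicity:

    version: '2.1'

    services:
      wordpress:        # the blogging software itself
        image: wordpress
        depends_on:
          - database
      database:         # stores blog posts, comments and users
        image: percona
      memcached:        # keeps frequently requested data in memory
        image: memcached
      nginx:            # opens the whole blog up to the internet
        image: nginx
        ports:
          - "80:80"
      consul:           # the service registry / catalogue
        image: consul

Running docker-compose up -d against a file like this would start one copy of each service, all working together as a single application.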

CoPilot is the first time services can be managed from a UI using Joyent’s technology.

In our blog example, all of the services can be represented in the following topology diagram:

An iteration of the topology design for CoPilot

Now, there is probably quite a lot in the above diagram that doesn’t quite make sense at this stage of the explanation. Let’s address these points below; through this explanation we’ll come to understand Joyent’s technology in greater detail, as well as its unique value propositions.

Firstly, in the above diagram, each service is described as having a number of ‘inst’, an abbreviation of instances. Each instance can be thought of as a virtual computer running only one application, and multiple instances together make up a service.
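
As a small, hypothetical illustration of the relationship between services and instances: with a Compose file like the sketch above, asking for three instances of the WordPress service is a one-line operation (the service name is just our blog example):

    # ask Docker Compose to run three instances of the wordpress service
    docker-compose scale wordpress=3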

Why do we use the term virtual computer here? Well, this brings us onto a tricky concept that is inextricably linked to understanding instances: virtualisation.

Virtualisation: reproducing the behaviour and functions of something, such as a computer, in software, without that thing existing as a separate physical machine.

Virtualisation is a bit difficult to get your head around; however, it can be useful to think about it in terms of creating computers inside computers (if this hurts your mind, hold on for a second). For example, a virtual machine (VM) is an emulation of a computer within a computer.

There are many types of virtualisation. Joyent takes the approach of operating-system virtualisation, which creates multiple isolated virtual environments to run applications in, all on top of the same piece of hardware and operating system (OS). This makes Joyent unique, and you’ll come to understand the process by the end of this post.

Other providers also emulate the hardware with virtual machines (VMs), which leads to extra layers of virtualisation and so to losses in performance.

To get to grips with why operating-system virtualisation leads to increases in performance, it’s important to understand the different layers of functionality in general computing.

The layers in computing

A computer is a piece of hardware, with an operating system (OS) sitting on top of it, and application code (sometimes called an image) sitting on top of that. A user then interacts with the application.

Using an example of something we’re all familiar with: a piece of hardware is an iPhone, the OS is iOS, the application (or “app” for short) is something like the calculator, and you are the user.

In the world of application and infrastructure management, the hardware is a package of computing power in a data centre (a huge collection of computers), the operating system is something like Ubuntu (it could also be Microsoft Windows) and the application is WordPress.

With the hosting of applications in the cloud, the hardware is abstracted away from the process. A user interacts with an application over the internet, and that application lives somewhere within a data centre (or many data centres). The hardware itself, what it is and where it’s located, doesn’t matter (there are slight variations on this model but we won’t concern ourselves with these just yet).

A basic client-server model, where the server is the data centre in application and infrastructure management.

Now, as originally explained, Joyent employs a method of operating-system virtualisation. The reason this increases performance is that, in reality, we only care about the application code (the bit the user is interacting with); everything else is there just to support this functionality. This is why Joyent does its best to put the application code as close as possible to the hardware, whilst still allowing that code to be replicated many times to cope with larger and larger volumes of usage.

Triton virtualises at the level of the operating system, which means that multiple applications can share the same OS. The application code sits closer to the hardware, meaning fewer levels of virtualisation and therefore quicker performance.

The below diagram illustrates why this method of virtualisation is preferred:

A comparison between virtual machines and OS virtualisation. Notice how the app on the right-hand side runs closer to the infrastructure, and so improves performance.

At this stage a couple of new terms need to be introduced:

  • bin/libs (or binaries and libraries) are supplementary pieces of code for an application (a thorough understanding is not needed here)
  • A hypervisor is software that creates and runs virtual machines (this is not needed in operating-system virtualisation)
  • Docker Engine is the tool that runs this method of virtualisation

One more thing…

Those “App 1”, “App 2” and “App 3” parts in the diagram above are the “multiple virtual isolated environments to run applications in” that we mentioned earlier: they are containers filled with application code.

Container virtualisation is carried out by a technology called Docker. There are other ways of carrying out this virtualisation, however Docker is the de-facto standard that most developers are familiar with. Therefore, any method of container virtualisation needs to be compatible with Docker (if you would like a more thorough understanding of Docker, please refer to my colleague Antonas’ post here).
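
For a flavour of what Docker looks like in practice, this is roughly all it takes to start a containerised NGINX with the Docker command line (a generic example, nothing CoPilot-specific):

    # download the official nginx image and run it as a container,
    # publishing the container's port 80 on port 8080 of this machine
    docker run -d --name my-nginx -p 8080:80 nginx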

But there is one problem with Docker: it’s only compatible with one OS, Linux. Now, Linux is super popular, but it’s pretty old and complicated to use. In the words of one developer I asked, it’s a “clusterfuck” (his words, not mine).

To get around the problems of Linux and take containerisation technology to a whole new level, Joyent has integrated Docker with the incredible SmartOS, which has numerous advantages over Linux:

  • SmartOS has a super advanced file system (more on that here if you’re interested)
  • SmartOS is more secure (using a technology called Zones, more again here if you’re interested)
  • SmartOS has an increased ability to debug applications (using a technology called DTrace, again more here if you’re super into this sort of stuff, and there’s a small example just after this list)
  • SmartOS allows containers to run closer to the hardware, or on bare metal, which makes everything run that bit faster
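
As promised in the list above, here is a classic DTrace one-liner of the kind SmartOS makes possible. It watches every system call happening on the machine, live, and counts them by the name of the program making them (purely an illustration; nothing here is specific to CoPilot):

    # count system calls by program name, across the whole machine, as they happen
    dtrace -n 'syscall:::entry { @[execname] = count(); }'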

So there you go: Triton/Joyent allows you to run containers on SmartOS, something that no other provider allows. That’s incredible (trust me).

Secondly, going back to the topology view (yes, we’re still working our way through an explanation of how this diagram has come together), the Consul service is illustrated outside of the topology. This is because Consul is the service registry: it tells each service how to connect to the other services, and it provides the information behind the topology illustration we now see. Consul is bi-directionally connected to every service, so to avoid confusion we’ve decided to illustrate it outside of the topology. It is also required for any application that uses Joyent’s extremely useful technology ContainerPilot (more on this in the next section).
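
To give a feel for what “service registry” means in practice, Consul exposes a small HTTP API (and a DNS interface) that anything can query to find out where a service is running and whether it is healthy. Assuming Consul is reachable at its default address and port, and using our blog example’s service name, a query might look like this:

    # ask Consul for the healthy instances of the wordpress service,
    # and the addresses at which they can be reached
    curl http://localhost:8500/v1/health/service/wordpress?passing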

Thirdly, each service has a health icon describing the status of the service. This status is calculated with a piece of Joyent technology: ContainerPilot. ContainerPilot is the application’s orchestrator; it takes care of the operational tasks of the application, such as configuring containers as they start, managing dependencies, handling errors and, of course, performing health checks (amongst other things).

ContainerPilot is special in the way that it keeps the orchestration of services (application orchestration) separate from the orchestration of containers (infrastructure orchestration): this is an implementation of what Joyent has coined the Autopilot Pattern.

The reason the Autopilot Pattern is so effective is that it allows the freedom to use any kind of container orchestrator (examples include Kubernetes, Docker Compose and Mesosphere Marathon).

Being able to use different container orchestrators is important because some of them are integrated tightly with the application (taking care of service discovery and configuration), and it’s important not to be locked in to specific technologies; at the same time, having these tasks consolidated reduces setup time dramatically.

ContainerPilot makes the application’s orchestration independent of the container orchestrator by moving the orchestration of services into the container itself. This not only allows multiple container orchestrators to be used, but also automates many of the operational tasks related to configuring a container as it’s started, including the re-configuration of containers during scaling, and health checking.

An illustration of ContainerPilot being utilised in an application
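
As a rough sketch of what this looks like inside a container: a ContainerPilot configuration tells it where Consul lives, how to health-check the service it wraps, and which other services (backends) to watch for changes. The exact fields vary between ContainerPilot versions, and the service names and commands below are purely illustrative:

    {
      "consul": "consul:8500",
      "services": [
        {
          "name": "wordpress",
          "port": 80,
          "health": "curl --fail -s http://localhost:80/",
          "poll": 10,
          "ttl": 30
        }
      ],
      "backends": [
        {
          "name": "database",
          "poll": 10,
          "onChange": "reconfigure-wordpress.sh"
        }
      ]
    }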

Hopefully now the name CoPilot makes sense…

In Summary

Before we started the project, all of these technologies were already available. The ability to deploy containers on SmartOS and to implement ContainerPilot had been grown by open source communities, led by Joyent. The ability to abstract instances into services was also available through the Triton CLI.
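
For example, with Joyent’s Triton command line tool installed and pointed at a data centre, listing the instances behind your services is a single command (shown here just to give a flavour of the existing CLI):

    # list the instances running under your Triton account
    triton instance list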

Our job was to bring all of these technologies together into one awesome experimental product, so that anyone (well, nearly anyone) could utilise the ground-breaking application and infrastructure management tools that Joyent had already built.

What we have released for CoPilot is just the beginning. Someone can come to CoPilot, create services and monitor them with metrics. But we have so much more planned.

What follows is a series on the designs that we plan to implement in CoPilot and then, eventually, in Triton.

The reason why we are releasing our designs is to elicit feedback from the open source community on our concepts. So please do let us know your thoughts.

The GitHub repo for CoPilot is here.

If you’d like to be one of our user testers (and get $250 of free credits on Joyent!) please email aj@makeusproud.com

A CoPilot Story Table Of Contents

1. Welcome to CoPilot

2. The Product Creation Process and Designing Deploying Via A Docker Compose

3. Designing Metrics

4. Designing Notifications

5. Designing The Service & Application Catalogue

6. Designing Versioning

--

Alex Jupiter
Make Us Proud

Product Consultant. Email me to see how we can work together to change the world: alex@alexjupiter.com