Linode Cluster Toolkit — Part 1

Karthik Shiraly
Published in Linode Cube
Jul 19, 2017

Nowadays, a deployment, whether for big data, highly available databases or load-balanced web applications, typically involves multiple clusters of many servers providing web serving, data processing, storage or coordination services.

In this article, I’ll introduce the Linode Cluster Toolkit, a software library, and LinodeTool, a command-line tool, that together make cluster deployments on the Linode cloud quick, simple and secure.

Motivation

If you are a Linode customer, you have probably used the Linode Manager web application. It’s a user interface — to be used by people.

Linode also provides APIs — Application Programming Interfaces — to be used by applications. They enable applications to programmatically do everything that a user can do using Linode Manager, such as creating servers, disks, images and network configurations, and much more. Linode provides a newer RESTful API (currently in beta) and an older API.
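To make that concrete, here is a minimal sketch of what talking to the v4 API looks like from Python, using the requests library to list existing Linodes. It assumes a personal access token in a LINODE_API_TOKEN environment variable, and it illustrates the raw API rather than the toolkit.

# Minimal sketch: list existing Linodes through the v4 API using Python's
# requests library. Assumes a personal access token in LINODE_API_TOKEN.
import os
import requests

API_ROOT = "https://api.linode.com/v4"
headers = {"Authorization": "Bearer " + os.environ["LINODE_API_TOKEN"]}

response = requests.get(API_ROOT + "/linode/instances", headers=headers)
response.raise_for_status()
for node in response.json()["data"]:
    print(node["id"], node["label"], node["region"], node["status"])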

Using the Linode Manager for creating clusters consisting of dozens of servers is simply impractical and cumbersome because its user interface is not designed for cluster provisioning. Cluster provisioning is best done using purpose-built tools that leverage the APIs.

My first brush with the API was a couple of years ago, when I built a Solr search cluster for a client. Since then I have gone through the experience of deploying clusters of different software stacks on Linode, using whatever programming languages and technologies seemed appropriate in the context of the overall project. I have used shell scripts, Python, combinations of Python with Ansible or SaltStack, and tried out technologies like Terraform.

From all these projects, I discovered some common deployment patterns one runs into again and again. I was re-implementing the same kinds of deployment, security and configuration steps for each one, to the point that it was consuming valuable development time and distracting me from the actual functional goals of those projects. The configuration and security goals of all the projects appeared similar enough at a high level that it often felt like rework, yet they were subtly different in low-level details in a way that made reusing previously written code cumbersome.

You might be wondering why anybody would need a separate toolkit or tool at all. Isn’t something like Ansible enough? Sure, existing solutions can be coaxed to do most of these things but the devil is in the details, and the coaxing has to be done every time for every project.

So, I decided to spend some time creating a reusable toolkit for Linode cluster management that I can then use in future projects without getting distracted by common deployment and security aspects.

In the sections below, I’ll describe some of the common deployment patterns and use cases that the toolkit and LinodeTool support. Some of these address aspects of Linode’s cloud architecture and API, some address Linux issues in general, and some address aspects common to multiple big data stacks.

Target Audience

The toolkit and LinodeTool are useful for a variety of software professionals:

  • Data Engineers, Data Scientists and Machine Learning Engineers who wish to deploy or experiment with data processing clusters on the Linode cloud.
  • SaaS Developers who are using Linode for deploying their software-as-a-service to their customers.
    For example, you may be providing a text search SaaS that automatically deploys a dedicated Solr or ElasticSearch Linode cluster for each of your customers. For that kind of customer-tunable, dynamically configurable deployment, something like Ansible has limitations, because many of its concepts are static and its declarative approach requires one to jump through hoops to achieve the same level of flexibility. Using a toolkit like this and writing lightweight Ansible modules and plug-ins on top of it makes that flexibility easier to achieve.
  • Devops Engineers and System Administrators who currently deploy or are looking to deploy products on the Linode cloud.
  • Product Developers who provide infrastructure management products and wish to support Linode cloud.
  • Website Developers looking for a cloud to develop, test and deploy websites for their employers or clients.
  • Website Administrators and Owners who maintain high traffic websites and are looking for ways to improve web server and database availability.
  • System and Performance Test Engineers who write testing scripts to test installation, deployment and performance of their products on cloud infrastructures.

Some Terminology

Before getting into the toolkit’s capabilities, I should introduce the terminology the toolkit and these articles use.

  • Linode Cluster Toolkit is simply “the toolkit” or “LCT”.
  • “LinodeTool” is a command-line tool that makes cluster and single-node deployments easy, using the toolkit.
  • “Plan” means a plan for a cluster, not a Linode plan.
  • For the latter, I use “type” or “linode type,” the same term used by Linode’s v4 API.
  • While the older v3 API used numeric identifiers for types, the new v4 API uses alphanumeric ones like “g5-nanode-1” (for the Linode 1GB type) and “g5-standard-1” (for the Linode 2GB type). I’ll be using the new identifiers in the article, but the toolkit itself supports both.
  • A “Region” refers to a Linode datacenter.
  • While the v3 API used numeric identifiers for datacenters, the new v4 API uses alphanumeric ones like “eu-central-1a” for regions. I’ll be using these new region identifiers in the article, but the toolkit itself supports both.
  • Provisioning — I use this term to refer to creation and initialization of infrastructure such as servers, disks, networks and NodeBalancers. Provisioning steps are usually specific to an infrastructure provider (such as Linode) and have to change for a different infrastructure provider.
  • Configuring — I use this term to refer to installation and configuration of a software stack and its OS dependencies. Configuration steps are usually reusable across infrastructure providers.
  • Orchestration — When a system involves multiple clusters running different software stacks and collaborating with one another, their operations have to be coordinated in some order, much like an orchestra.

Some examples to illustrate orchestration:

1) In a high-availability WordPress installation consisting of a cluster of web servers and a cluster of MySQL servers, the MySQL cluster should be started before the WordPress cluster and should be stopped only after the WordPress cluster stops.

2) In a real-time IoT event handling system consisting of a Storm cluster and a Zookeeper cluster, the Zookeeper cluster should be started before the Storm cluster and should be stopped only after the Storm cluster stops to avoid data loss.

Okay! Now onto the fun parts…

Linode Cluster Toolkit’s Architecture

The toolkit is a Python library that provides a set of useful interfaces for different kinds of client applications and scripts. It’s targeted at software developers. It supports both Python 3 and Python 2 environments.

LinodeTool is an executable command-line tool that makes it easy to provision and maintain clusters and single servers on the Linode cloud.

Internally, the toolkit implements its services using multiple providers to cater to application clients across the complexity spectrum, from simple shell scripts to multi-tenant SaaS systems.

In the sections below, I’ll describe a subset of the features and deployment patterns that the toolkit and LinodeTool support…

Basic Cluster Creation

A core concept of the toolkit is the Cluster Plan. A plan specifies the regions, number of nodes, types of nodes and some essential configuration information.

Let’s see some examples of cluster plans. Here’s a cluster plan for a cross-region, highly-available, disaster-recoverable 82-node WordPress setup involving Apache web servers with WordPress, Memcached, MySQL cluster with NDB, Block Stores and NodeBalancers.

HA WordPress cluster plan. See the full plan here.

Here’s another example, a cluster plan for a 52-node big data IoT system involving Spark Streaming, Kafka input pipelines in multiple regions, a PostgreSQL cluster, high memory instances and block stores.

IoT cluster plan. See the full plan here.

Using LinodeTool, provisioning and basic configurations of these massive clusters is as simple as:

$ linodetool cluster create 'ha-wordpress' ha-wordpress-plan.yaml

$ linodetool cluster create 'iot-cluster1' iot-plan.yaml

$ linodetool cluster create 'iot-cluster2' iot-plan.yaml

See the project’s README for more details on usage of LinodeTool.

Using the toolkit’s APIs to create clusters is equally easy. See the project’s README for basic usage examples.

Easy Single Node Provisioning

Since single node provisioning is such a common task, LinodeTool provides an easy command to do so:

$ linodetool node create newark '1gb' 'ubuntu 16.04 lts'

LinodeTool is very liberal about matching region/datacenter names, type names, distribution names or kernel names. It can handle partial matches and case mismatches. It expects just the bare essential information — for everything else it either uses sane defaults or generates the required information.
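As an illustration of the kind of liberal matching involved (not LinodeTool’s actual implementation), here is a small Python sketch that resolves a loosely typed region name against a list of known region identifiers using aliases, substring matching and difflib’s close-match heuristic. The alias table and region list are assumptions for this example.

# Illustrative sketch: resolve a loosely specified region name against known
# region identifiers using aliases, substrings and close-match heuristics.
import difflib

KNOWN_REGIONS = ["us-east-1a", "us-west-1a", "eu-central-1a", "ap-northeast-1a"]
REGION_ALIASES = {"newark": "us-east-1a", "frankfurt": "eu-central-1a"}  # assumed mapping

def resolve_region(user_input):
    text = user_input.strip().lower()
    if text in REGION_ALIASES:
        return REGION_ALIASES[text]
    substring_matches = [r for r in KNOWN_REGIONS if text in r]
    if len(substring_matches) == 1:
        return substring_matches[0]
    close = difflib.get_close_matches(text, KNOWN_REGIONS, n=1, cutoff=0.6)
    return close[0] if close else None

print(resolve_region("Newark"))      # -> us-east-1a
print(resolve_region("eu-central"))  # -> eu-central-1a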

All the capabilities of LCT such as node configuration strategies, inventory storage and querying, and secrets storage are available for single nodes too.

See the project’s README for more details on LinodeTool single node features.

Powerful Node Initialization and Configuration Options

Cluster creations can be complex. A cluster as a whole as well as its constituent nodes undergo multiple state transitions. For example, here are typical state transitions from the point of view of a single node:

  • created
  • first boot
  • SSH accessible
  • cluster starting
  • cluster ready
  • all clusters in region ready
  • entire plan is ready

Deployments of many software stacks often involve some configuration actions after one or more of these transitions.

In addition, there are multiple technologies to model these configuration actions, including:

  • Linode StackScripts
  • cloud-init scripts, which are very common in other clouds
  • Ansible playbooks
  • other configuration management technologies

Solution: In order to enable reuse of existing configuration scripts, LCT supports StackScripts, cloud-init and Ansible for configuration actions. Cluster plans can specify configuration actions using any of these technologies for each of the state transitions (with some technical limitations outside LCT’s control, such as StackScripts being available only for first boot).
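The snippet below sketches the general idea of attaching configuration actions to state transitions. The transition names echo the list above, but the hook table, runner functions and their signatures are purely illustrative; they are not LCT’s real plan schema or API.

# Illustrative only: dispatch configuration actions attached to node state
# transitions. The registry and runner functions below are hypothetical.

def run_stackscript(node, action):
    print("running StackScript", action["id"], "on first boot of", node)

def run_ansible_playbook(node, action):
    print("running playbook", action["playbook"], "against", node)

RUNNERS = {"stackscript": run_stackscript, "ansible": run_ansible_playbook}

# Configuration actions a plan might attach to each transition.
HOOKS = {
    "first_boot":     [{"kind": "stackscript", "id": 12345}],
    "ssh_accessible": [{"kind": "ansible", "playbook": "base-hardening.yml"}],
    "cluster_ready":  [{"kind": "ansible", "playbook": "start-services.yml"}],
}

def on_transition(node, transition):
    for action in HOOKS.get(transition, []):
        RUNNERS[action["kind"]](node, action)

on_transition("node-01", "ssh_accessible")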

Storage Allocation and Filesystem Support

Every Linode type comes with a maximum storage capacity which can be spread across multiple virtual disks. Further, with the new Block Storage feature (currently in beta), a Linode can go beyond its type’s maximum capacity to virtually unlimited capacity.

This kind of allocation across multiple disks is necessary for some stacks, such as GlusterFS and Ceph. They also recommend that their storage areas be formatted with filesystems like XFS instead of the more common EXT4. But the Linode API supports only EXT4 and EXT3 filesystems.

Solution: The toolkit supports specifying storage allocations and filesystems, including Block Storage, in cluster plans, and sharing them between different nodes as depicted here.
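For stacks that want XFS, one workaround under the current API is to create the disk as raw and format it from inside the node. The sketch below shows that idea; the disks endpoint and fields follow the v4 API documentation, while the device path and the SSH-based formatting step are assumptions that depend on the node’s configuration profile.

# Sketch of the XFS workaround: create a raw disk through the v4 API, then
# format it from inside the node. The device path depends on the node's
# configuration profile and is an assumption here.
import os
import subprocess
import requests

API_ROOT = "https://api.linode.com/v4"
headers = {"Authorization": "Bearer " + os.environ["LINODE_API_TOKEN"]}

def create_raw_disk(linode_id, label, size_mb):
    response = requests.post(
        "{}/linode/instances/{}/disks".format(API_ROOT, linode_id),
        headers=headers,
        json={"label": label, "size": size_mb, "filesystem": "raw"})
    response.raise_for_status()
    return response.json()

def format_as_xfs(node_ip, device="/dev/sdc"):
    # Run mkfs.xfs on the node over SSH (root key-based access assumed).
    subprocess.check_call(["ssh", "root@{}".format(node_ip), "mkfs.xfs", "-f", device])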

Task Queues

Creating a cluster of dozens of nodes involves many network operations such as API requests and SSH commands. It can take a long time if done sequentially, which is often a problem with deployment scripts written using shell languages like bash.

In addition, the Linode API imposes rate limits, which can result in failures if exceeded.

Solution: The toolkit supports both fully sequential and parallel operation queue patterns (the latter using the popular and proven Celery framework and RabbitMQ) while respecting the API’s rate limits.

The toolkit caters to use cases at both ends of the size spectrum. While creating a small temporary cluster for experiments, it can be configured to use a simple sequential queue. For a SaaS that creates large clusters on behalf of multiple customers, it can be configured to use dedicated, guaranteed-delivery queues.
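To give a feel for the parallel pattern, here is a sketch of a provisioning step expressed as a Celery task backed by RabbitMQ, with a per-task rate limit so that concurrent workers stay under the API’s request limits. The module name, task body and limit value are assumptions, not the toolkit’s internals.

# Sketch of the parallel queue pattern: one provisioning step per Celery task,
# RabbitMQ as the broker, and a rate limit to respect the API's request limits.
from celery import Celery

app = Celery("provisioning", broker="amqp://guest@localhost//")

@app.task(bind=True, rate_limit="60/m", max_retries=3)
def create_node(self, plan_node):
    try:
        # ... issue the Linode API request that creates this one node ...
        pass
    except Exception as exc:
        raise self.retry(exc=exc, countdown=2 ** self.request.retries)

# Fan out one task per node; a simple sequential queue would just call
# create_node(plan_node) in a loop instead of create_node.delay(plan_node).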

Node Names

When creating a single node, the API expects an application to pass the following attributes:

Node creation attributes

The label and group attributes are names shown in tools like the Linode Manager.

When creating a cluster of dozens of nodes, it’s common practice to label nodes with the same name followed by a running numeric counter, such as hdfs-cluster-01, hdfs-cluster-02, …

This naming convention looks simple, but given that nodes can be removed, added or re-inserted in a cluster, managing these counters with tools like Ansible can be rather clumsy and error-prone (involving tricks like using a potentially variable inventory index, setting facts against a dynamic inventory, fact-caching, etc.).

Solution: The toolkit solves this problem by supporting placeholders for labels and groups, and storing states of running counters.
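A minimal sketch of that idea, assuming a label template syntax like “hdfs-cluster-{counter:02d}” and a small JSON file for persisting counters (both assumptions for illustration, not the toolkit’s actual format):

# Expand a label template with a running counter that is persisted across
# runs, so removed or re-added nodes never silently reuse a number.
import json
import os

COUNTER_FILE = "counters.json"

def next_label(template, counter_name):
    counters = {}
    if os.path.exists(COUNTER_FILE):
        with open(COUNTER_FILE) as f:
            counters = json.load(f)
    counters[counter_name] = counters.get(counter_name, 0) + 1
    with open(COUNTER_FILE, "w") as f:
        json.dump(counters, f)
    return template.format(counter=counters[counter_name])

print(next_label("hdfs-cluster-{counter:02d}", "hdfs-cluster"))  # hdfs-cluster-01
print(next_label("hdfs-cluster-{counter:02d}", "hdfs-cluster"))  # hdfs-cluster-02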

Host Names

Host names are far more important than node names. Many distributed software stacks use host names to find and communicate with collaborator nodes.

Linode’s default host name for every node is simply “localhost” (this is probably distribution-specific), which can cause serious configuration failures in a cluster of nodes.

Much like node names, managing host names using Ansible can be clumsy and error-prone.

Solution: The toolkit solves this problem by allowing applications and users to specify host name expressions with placeholder variables. The toolkit stores states of any running counters used in these expressions.
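Once a host name has been generated from such an expression, it still has to be applied on the node itself. Here is a sketch of that step over SSH, assuming root key-based access and a systemd-based distribution; the helper name is illustrative.

# Apply a generated host name on a freshly created node so it no longer
# reports "localhost". Assumes root key-based SSH and systemd (hostnamectl).
import subprocess

def apply_hostname(node_ip, hostname):
    commands = [
        "hostnamectl set-hostname {}".format(hostname),
        # Map the name to a loopback address so local name resolution works.
        "grep -q '{0}' /etc/hosts || echo '127.0.1.1 {0}' >> /etc/hosts".format(hostname),
    ]
    for command in commands:
        subprocess.check_call(["ssh", "root@{}".format(node_ip), command])

# apply_hostname("45.79.1.23", "hdfs-cluster-01")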

The toolkit also supports more advanced name resolution strategies such as separate public and private names, deploying custom DNS server or deploying split-horizon DNS. These will be explained in Part 2 of this series.

Essential Security

I love software engineering, but one aspect I absolutely dislike is security. I suppose it’s the asymmetry of effort and skill required when up against malicious actors, with little satisfaction or sense of security to show for all that effort. One way to reduce time wasted on security is to use secure-by-default configurations. Unfortunately, a newly created Linode comes up somewhat short in that regard.

Here are some common security shortcomings, some of which are possibly distribution-specific:

  • All ports are open by default, and all packets are accepted
  • Password authentication for SSH is enabled
  • Private network is not really private.
    Despite the name, a service binding to a port on the private network interface is reachable from all other servers in the same datacenter, including servers of other customers. From a security standpoint, the private IP address is almost as open and vulnerable as the public IP address.
  • There are no non-root users

Solution: The toolkit solves these security problems by taking a secure-by-default approach:

  • Every cluster node’s firewall is configured to close all ports and drop all incoming and outgoing packets by default (SSH access being the only exception).
    The toolkit provides software interfaces to automatically open and close appropriate ports required by software stacks. This will be covered in Part 2.
  • SSH access is hardened. Password authentication is disabled. Root authentication is only possible using keys. The toolkit supports different keys for every node.
  • At least one non-root user account is created. It, too, requires key-based authentication for SSH. Sudo is configured not to prompt for passwords, since password prompts are a nuisance when configuring dozens of nodes.
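To make the secure-by-default idea concrete, here is a simplified sketch of the kind of commands applied to each node: default-deny firewall rules with an SSH exception, and sshd hardening. It uses plain iptables and sed over SSH for illustration; the toolkit’s actual mechanism and rule set may differ, and service names vary across distributions.

# Simplified hardening sketch: default-deny firewall with SSH allowed, and
# password authentication disabled for sshd. Distribution details vary.
import subprocess

HARDENING_COMMANDS = [
    "iptables -A INPUT -i lo -j ACCEPT",
    "iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT",
    "iptables -A INPUT -p tcp --dport 22 -j ACCEPT",
    "iptables -P INPUT DROP",
    "iptables -P FORWARD DROP",
    "sed -i 's/^#\\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config",
    "sed -i 's/^#\\?PermitRootLogin.*/PermitRootLogin prohibit-password/' /etc/ssh/sshd_config",
    "systemctl restart sshd",  # the service is named "ssh" on Debian/Ubuntu
]

def harden_node(node_ip):
    for command in HARDENING_COMMANDS:
        subprocess.check_call(["ssh", "root@{}".format(node_ip), command])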

More Security — Users and Groups

Distributed software often involves multiple processes which should execute with different sets of credentials. Creation of multiple users and groups to run specific processes is a very common configuration pattern for distributed software.

Solution: The toolkit supports specifying users and groups in a plan, and creates them after provisioning. Sets of users and groups can be specified just once in the plan and shared by different sets of nodes — all those nodes will be configured with those users and groups.
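As an illustration, the sketch below turns a small users-and-groups specification into groupadd/useradd commands run on a node over SSH. The specification structure is an assumption for this example and is not the toolkit’s plan schema.

# Create the groups and system users that a stack's processes will run as.
# The spec structure below is illustrative, not LCT's actual plan format.
import subprocess

USERS_AND_GROUPS = {
    "groups": ["hadoop"],
    "users": [
        {"name": "hdfs", "groups": ["hadoop"], "system": True},
        {"name": "yarn", "groups": ["hadoop"], "system": True},
    ],
}

def create_users_and_groups(node_ip, spec):
    commands = ["groupadd -f {}".format(group) for group in spec["groups"]]
    for user in spec["users"]:
        commands.append(
            "id -u {name} >/dev/null 2>&1 || useradd {flag} -G {groups} {name}".format(
                name=user["name"],
                flag="--system" if user.get("system") else "",
                groups=",".join(user["groups"])))
    for command in commands:
        subprocess.check_call(["ssh", "root@{}".format(node_ip), command])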

Even More Security — Secrets Generation and Management

Cluster management can involve potentially hundreds of secrets like passwords, SSH keys, API keys or OAuth tokens.

These are secrets that your development or production teams or your customers may have to access occasionally. So how do you manage hundreds of secrets without compromising data or customer security?

What is necessary is a kind of password manager to store these secrets and control access to them, but one that applications can use programmatically.

Solution: The toolkit provides password and SSH key generation capabilities. It also provides secrets management interfaces to store and query secrets. These interfaces can be implemented using any secrets management provider. For large clusters in enterprise deployments, LCT comes with built-in support for HashiCorp’s Vault.

At the other end of the spectrum, for a lone developer experimenting on a temporary cluster, a full-fledged secrets management solution may feel like overkill. For such use cases, the toolkit can be configured to not store secrets at all or store them in plain-text or in simple encrypted files.
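For the Vault-backed case, the sketch below shows the general shape of generating a secret and storing it per cluster and node through the hvac client library. The Vault address, token handling and secret paths are assumptions, not the toolkit’s actual conventions.

# Generate a root password for a node and keep it in HashiCorp Vault via the
# hvac client (KV v1 style paths). Address, token and paths are illustrative.
import secrets  # Python 3.6+; older versions can use os.urandom-based generation
import hvac

client = hvac.Client(url="http://127.0.0.1:8200", token="example-dev-token")

def store_root_password(cluster, node_label):
    password = secrets.token_urlsafe(24)
    client.write("secret/{}/{}/root".format(cluster, node_label), password=password)
    return password

def fetch_root_password(cluster, node_label):
    return client.read("secret/{}/{}/root".format(cluster, node_label))["data"]["password"]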

Cluster Inventory Storage

We often need to know the status and composition of a cluster, in addition to the status of its individual nodes…

  • Is a certain Zookeeper cluster currently starting, running, stopping or stopped?
  • What are the IP addresses of a cluster’s nodes, so configuration files can be transferred to all of them?
  • Which nodes are tagged as master nodes in a cluster?

Solution: The toolkit supports storing and querying such cluster-level and node-level status information.

For storing such information, multiple storage back-ends, such as relational databases or plain text files, are provided to suit different use cases.
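A minimal sketch of one such back-end, using a SQLite file to record node entries and answer queries like the ones above (the schema and status values are illustrative):

# A tiny SQLite-backed inventory: record nodes and query them later.
import sqlite3

db = sqlite3.connect("inventory.db")
db.execute("""CREATE TABLE IF NOT EXISTS nodes (
    cluster TEXT, label TEXT, public_ip TEXT, tags TEXT, status TEXT)""")

def record_node(cluster, label, public_ip, tags, status):
    db.execute("INSERT INTO nodes VALUES (?, ?, ?, ?, ?)",
               (cluster, label, public_ip, ",".join(tags), status))
    db.commit()

def master_ips(cluster):
    rows = db.execute(
        "SELECT public_ip FROM nodes WHERE cluster = ? AND tags LIKE '%master%'",
        (cluster,))
    return [row[0] for row in rows]

record_node("zk-cluster", "zk-01", "45.79.1.10", ["zookeeper", "master"], "running")
print(master_ips("zk-cluster"))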

Error Handling and Retries

Provisioning and configuring dozens of nodes involves hundreds of API and other requests. It’s not uncommon for a request to fail due to a temporary network glitch.

But a badly written deployment script that aborts the rest of the cluster creation because a single DNS request failed temporarily while creating the 38th node out of 50 can be quite messy to recover from, speaking from my own experience. Perhaps the script does not support retrying from the 38th node, or does not support destroying the nodes it has already created because cluster creation did not fully succeed. Hastily written or insufficiently tested deployment scripts can result in such problems.

Solution: The toolkit supports robust error handling and recovery for all types of client applications…

  • It supports the recommended practice of multiple automatic retries with exponential back-off, before sending an operation to the failed list.
  • It does not abort remaining cluster operations just because there’s a failure in the middle.
  • It tracks current status of cluster operations and supports resuming from wherever it had stopped.
  • It supports explicit retrying of failed tasks on demand.
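Here is a small sketch of that retry pattern, assuming a hypothetical provision_node operation that occasionally fails: each operation gets a few attempts with exponential back-off, failures are collected rather than aborting the run, and the failed list can be retried later.

# Retry a flaky operation with exponential back-off; collect failures instead
# of aborting the remaining cluster operations. provision_node is a stand-in.
import random
import time

def provision_node(label):
    # Placeholder for a real API or SSH operation that sometimes fails.
    if random.random() < 0.3:
        raise ConnectionError("temporary network glitch")
    return label

def with_retries(operation, max_attempts=4, base_delay=2.0):
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # let the caller record this operation as failed
            time.sleep(base_delay * (2 ** attempt))  # 2s, 4s, 8s, ...

failed = []
for node in ["node-37", "node-38", "node-39"]:
    try:
        with_retries(lambda label=node: provision_node(label))
    except Exception:
        failed.append(node)  # keep going; retry failed nodes on demand later
print("failed:", failed)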

Cluster Shutdown

Every Linode is automatically monitored by a shutdown watchdog named “Lassie.”

“Lassie is a Shutdown Watchdog that monitors your Linode and will reboot it if it powers off unexpectedly. It works by issuing a boot job when your Linode powers off without a shutdown job being responsible.

To prevent a loop, Lassie will give up if there have been more than 5 boot jobs issued within 15 minutes.”

What this means is that if a Linode is shut down using a Linux command like “shutdown -h now”, it will be automatically restarted.

To prevent restarts, either the watchdog should be disabled (which is not a good idea because it prevents recovery from other kinds of shutdowns) or the Linode should be shut down by a shutdown API request.

Solution: The toolkit shuts down clusters by issuing shutdown API requests and also supports disabling the watchdog.

The toolkit also orchestrates cluster shutdowns by shutting down services in specified sequence on subsets of nodes — an essential operation to prevent data loss in many distributed stacks.
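For completeness, here is a sketch of the API-driven shutdown for a single node, using the v4 shutdown endpoint as documented; orchestration ordering across clusters happens before these calls are issued.

# Shut a node down through the API rather than from inside the OS, so Lassie
# does not immediately boot it back up. Assumes LINODE_API_TOKEN is set.
import os
import requests

API_ROOT = "https://api.linode.com/v4"
headers = {"Authorization": "Bearer " + os.environ["LINODE_API_TOKEN"]}

def shutdown_node(linode_id):
    response = requests.post(
        "{}/linode/instances/{}/shutdown".format(API_ROOT, linode_id),
        headers=headers)
    response.raise_for_status()

# Example ordering: stop the Storm nodes before the Zookeeper nodes.
# for linode_id in storm_node_ids + zookeeper_node_ids:
#     shutdown_node(linode_id)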

Code, Installation and Feedback

Detailed code and installation instructions are available in the toolkit’s GitHub repo: https://github.com/pathbreak/linode-cluster-toolkit.

Please provide any feature requests or other feedback either as a GitHub issue, or via a reply to this announcement post on the Linode forums.

Conclusion

These were some basic deployment patterns supported by the Linode Cluster Toolkit.

Follow me on Medium and stay tuned for Part 2 where I’ll be covering more advanced patterns that the toolkit supports — including private clouds across regions, advanced DNS topologies, NodeBalancer provisioning, disk image management, Ansible wrapper module and Bash wrapper for the toolkit, and much more!

(Editor’s Note: Linode will publish Part 2 on its Cube next Wednesday, July 26, 2017.)

Credits

About me: I’m a software consultant and architect specializing in big data, data science and machine learning, with 14 years of experience. I run Pathbreak Consulting, which provides consulting services in these areas for startups and other businesses. I blog here and I’m on GitHub. You can contact me via my website or LinkedIn.

Please feel free to share below any comments or insights about your experience using Linode, Linode API or the Linode Cluster Toolkit. You are welcome to report bugs or feature requests on the project’s GitHub repo. If you found this blog useful, consider sharing it through social media.
