Published in Aljabr

#5: Configuring on all cylinders

A case study in declarative workflows

In our last post, we delved into Make — an iconic and important tool in the history of workflows. This week we turn to another case study: the first major step-function leap forward in distributed workflow after Make. The era of configuration management systems makes an interesting case study in distributed coordination, because it added new principles of process safety and orchestration that formed the basis for modern cloud systems like Kubernetes. In this post, we consider CFEngine, an innovative forerunner that brought many of those ideas to light.

There is surely no tool more associated with system administration and less associated with data processing than CFEngine, and yet it may come as a surprise to many that pipelines of both kinds are represented within its repertoire. What is interesting about CFEngine is that it took on the challenge of distributed systems early, developing techniques for making processes parallelizable, safe, synchronizable, and repeatable across multiple hosts.

CFEngine was designed during the age in which many of us assumed that a single administrator would manage all the details of a distributed system from a single location, often according to a single masterplan. However, because it was developed at a university (where all the kids are special), that singular plan was more often than not abandoned, and machines could opt in or opt out of sharing plans and resources on a more ad hoc basis. This was before the concept of machine clusters ever existed — then, there was only a clustering of cultures. CFEngine took the idea of reasoning about a desired end-state for computing to an entirely new and revolutionary level, building in atomic “promises” about converging on well-defined outcomes. Convergent goals left a system to roll into its correct state every time an agent woke up to police the outcomes. And agents could be embedded in any host or container process.
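To make the idea concrete, here is a minimal sketch of a convergent promise (the file path is a hypothetical example, and m() is the standard-library body for setting a file mode): the agent compares actual state with promised state and acts only on the difference, so repeated runs are harmless:

```
bundle agent converge_motd
{
files:
  # Hypothetical example file; if the file already exists with the
  # promised permissions, the agent does nothing on the next run -
  # that "fix only the difference" behaviour is what makes it convergent
  "/etc/motd"
    create => "true",
    perms => m("644");
}
```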

Verbosity or fragmentation?

CFEngine tried to take away the pain of scripting procedural logic, with all the loops and ifs that clouded the intent. It started simply and declaratively, but was quickly forced to add back some basic computational ideas, such as parameterization and modularity, in order to represent common patterns efficiently without writing reams of formless data. It could look very simple, when all the details were concealed (and trusted):

body common control
{
bundlesequence => {
  webserver("on"),
  dns("on"),
  security_set("on"),
  ftp("off")
};
}

Or very verbose, when they were declared (because users need to see in order to trust):

bundle agent fix_service(service) # process data in class "res"
{
files:
  "$(res.cfg_file[$(service)])"
    copy_from => cp("$(g.masterfiles)/$(service)","policy_host.mydomain"),
    perms => p("0600","root","root"),
    classes => define("$(service)_restart", "failed"),
    comment => "Copy a stock configuration file template from repository";
processes:
  "$(res.daemon[$(service)])"
    restart_class => canonify("$(service)_restart"),
    comment => "Check that the server process is running...";
commands:
  "$(res.start[$(service)])"
    comment => "Method for starting this service",
    ifvarclass => canonify("$(service)_restart");
}

To distribute execution across a cluster of hosts, users would direct operational promises to labels or "classes" of hosts, by name and circumstance, and ordering could be assured by declaring dependencies to form a DAG. This added an unfamiliar layer of complexity, however, which encouraged some users to simply log onto every machine manually, or to use a tool like Ansible instead.
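As a sketch of how that targeting looked (the class name webservers and the command path are hypothetical), promises guarded by a class expression are only kept on hosts where that class is defined:

```
bundle agent role_dispatch
{
processes:
 webservers::   # only hosts classified as webservers keep these promises
  "httpd"
    restart_class => "httpd_restart";
commands:
 webservers.httpd_restart::   # '.' means logical AND of classes
  "/usr/sbin/apachectl start";
}
```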

Knowledge management

After a few years, CFEngine developed into a knowledge-oriented system, one that documented relationships between parts and could report in detail on which of its promises were kept, when and where. Users either loved that or hated it, but the ability to scale our understanding of increasingly complex systems seems like a modern imperative.

Although it preceded the idea of cluster computing (apart from number-crunching clusters), CFEngine allowed each host to be an independent entity, like containers today, or it could coordinate and orchestrate hosts in groups or "classes" using its peer-to-peer networking stack and a system of tags or labels. This was an early form of the tagging later familiar from classes in HTML and CSS, and later still from labels in Kubernetes. At the time, it was cumbersome, because the ability to cluster preceded any good model for scaling our understanding of such ideas. Later, the concept of workspaces was built on this, at about the time Kubernetes was released, and CFEngine innovation was effectively frozen.

Kubernetes supported most of the necessary ideas and was already quite CFEngine-like in its self-healing, label oriented approach. Moreover, Docker containers refactored the interface problem and shifted away from the idea of a single point-of-control to a delegated model, which was possible with CFEngine if you could put up with the verbosity. More importantly a platform like Kubernetes took care of many details which would have to be set up explicitly in CFEngine.

Pipelines for workflow and more

CFEngine could make pipelines. As something like a mixture of Cron and Make, it had the ability to configure and execute tasks, with separation into namespaces and subroutines, parameterized recursion, and so on. Its notion of adaptive locking offered safety against denial of service, unchecked recursion, multiple simultaneous instances of agent execution, and many other realistic failure cases that would bring down systems based on shell scripts.
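Locking was controlled through action bodies; the following sketch uses the ifelapsed and expireafter attributes (the values and the command path are illustrative):

```
body action throttled
{
ifelapsed => "60";    # do not re-verify this promise within 60 minutes
expireafter => "15";  # expire a hung run of this promise after 15 minutes
}

bundle agent pipeline_step
{
commands:
 # Hypothetical pipeline command; the lock prevents overlapping instances
 "/usr/local/bin/process_batch"
   action => throttled,
   comment => "At most one instance, at most once per hour";
}
```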

One of the problems with CFEngine’s single point of definition was that it all looked very verbose, wrapping in structures to separate concerns. In a Kubernetes world, all this has been separated into many different files, where one loses the ability to get an immediate overview. This has been a tension throughout the history of configuration and programming: one monolithic design, or lots of finely sprinkled dust-like fractals of state coordination logic? If the underlying engine is designed well, this shouldn’t matter, but experience shows that it does matter to user aesthetics. Tastes change.

Lessons learned?

In a way, CFEngine offered too much freedom and generality, in an opinionated way (designed to be safe). Users do not always appreciate the combination of freedom with guard rails, and some would write a lot of unnecessary code to work around these constraints.

The way one separates what networking famously called the “data plane” and the “control plane” is subtle. Control also relies on data, sometimes quite a lot of it, so we try to make two data planes and a semantics plane instead. The example below shows how a typical control change may involve a lot of data, some of which we want to pass as parameters (in order to reuse code semantics) and some of which can be completely internalized like an expert system.

bundle agent addpasswd
{
vars:
  "pwd[mark]" string => "mark:x:1000:100:Mark B:/home/mark:/bin/bash";
  "pwd[fred]" string => "fred:x:1001:100:Right Said:/home/fred:/bin/bash";
  "pwd[jane]" string => "jane:x:1002:100:Jane Doe:/home/jane:/bin/bash";
files:
  "/etc/passwd" # Use standard library functions
    create => "true",
    comment => "Ensure listed users are present",
    perms => mog("644","root","root"),
    edit_line => append_users_starting("addpasswd.pwd");
}

bundle agent services
{
vars:
  "service" slist => { "dhcp", "ntp", "sshd" };
methods:
  "any" usebundle => fix_service("$(service)"),
    comment => "Make sure the basic application services are running";
}

The bottom line seems to be that CFEngine was too early in approaching these ideas in a world that was not quite ready for them.

Today, we have more concepts to build on, which tend to align user habits and forge greater agreement. We also have the backing of iconic companies, like Google and Facebook, whose choices align industry beliefs.

For more about CFEngine, see:

  1. http://markburgess.org/blog_principles.html
  2. http://markburgess.org/blog_cd.html
  3. https://cfengine.com/company/blog-detail/self-repairing-deployment-pipelines/
