Pushing Kickstarts with Ansible

Or integrating vSphere, Kickstart and Ansible for fun and profit

Jack Price
3 min read · Apr 11, 2017

Kickstart is a method for automating the installation of Red Hat-derived Linux distributions. Rather than manually answering the repetitive questions posed by the OS installer, a single file provides the whole configuration up front:

keyboard --vckeymap=gb --xlayouts='gb'
lang en_GB.UTF-8
timezone Europe/London --isUtc
network --bootproto=dhcp
rootpw --plaintext "p455w0rd"

A typical large-scale scenario would involve network booting, with Cobbler serving these Kickstart config files dynamically. This approach has its merits: it’s flexible, gives you a nice front end for spinning up new servers, and so on.

This scenario may work for you, but we decided against it for a few reasons:

  • We didn’t want to add another cog to our infrastructure, especially one that would become critical and need to be made highly available.
  • We don’t deploy brand-new servers often enough to justify a dedicated service (though often enough to want to automate the process in the first place).

We needed a way to build and push out configuration on-demand.

Network-less Kickstart

The trick is poorly documented: rather than providing a network location to pull Kickstart files from, we attach a small ISO containing the configuration instead. If you delve into the Kickstart documentation, you’ll see that any volume labelled OEMDRV is automatically mounted, and a Kickstart installation is initiated from the ks.cfg file within.

You can quite easily create such a volume from a directory containing a configured ks.cfg file with the mkisofs utility:

$ mkisofs -V OEMDRV -o kickstart.iso path/to/directory

Note: To get mkisofs on your Mac, you may need to brew install cdrtools.

Writing out a Kickstart file by hand is obviously not what we want to do, so…

Ansible to the Rescue

We love Ansible and use it heavily, so it made sense to leverage it here.

We can use its template module to build a custom Kickstart file for each host defined in our inventory.

A basic Kickstart template and a playbook that generates a configuration file for each host in our Ansible inventory might look something like this (reconstructed in simplified form: the file layout and the root_password_hash variable are illustrative, not our exact setup):
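# templates/ks.cfg.j2: rendered once per host
keyboard --vckeymap=gb --xlayouts='gb'
lang en_GB.UTF-8
timezone Europe/London --isUtc
network --bootproto=dhcp --hostname={{ inventory_hostname }}
rootpw --iscrypted {{ root_password_hash }}

# kickstart.yml: render a ks.cfg into build/<hostname>/ for every inventory host
- hosts: all
  gather_facts: no
  connection: local
  tasks:
    - name: Ensure the per-host build directory exists
      file:
        path: build/{{ inventory_hostname }}
        state: directory

    - name: Render the Kickstart template for this host
      template:
        src: templates/ks.cfg.j2
        dest: build/{{ inventory_hostname }}/ks.cfg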

There’s no limit to the complexity we can add here, and thanks to Ansible it’ll all be abstracted away into our host_vars, group_vars and so on.

We can also take care of the ‘bootstrapping’ step that’s often needed before Ansible can manage a machine: installing hard dependencies (Python and its libraries), adding users and SSH keys, and so on.
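For example, the %post section of the template might install Python and create a dedicated user for Ansible to connect as. A minimal sketch; the username, key and sudoers policy are placeholders:

%post
# Python is Ansible's one hard dependency on the managed host
yum install -y python

# A dedicated user for Ansible to connect as, with passwordless sudo
useradd ansible
mkdir -m 0700 /home/ansible/.ssh
echo "ssh-rsa AAAA... ansible@control" > /home/ansible/.ssh/authorized_keys
chmod 0600 /home/ansible/.ssh/authorized_keys
chown -R ansible:ansible /home/ansible/.ssh
echo "ansible ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/ansible
chmod 0440 /etc/sudoers.d/ansible
%end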

A Makefile and a simple build script make generating configuration ISOs on the fly a doddle.

An example Makefile and simple build wrapper, again reconstructed in simplified form (the playbook and directory layout follow the sketch above, and a per-host host_vars file is assumed so make can tell when a rebuild is needed):
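# Makefile (recipe lines must be tab-indented)
build/%.iso: host_vars/%.yml templates/ks.cfg.j2
	./build.sh $*

# build.sh
#!/bin/sh
set -e
host="$1"

# Re-render the Kickstart config for just this host...
ansible-playbook kickstart.yml --limit "$host"

# ...then wrap it in an OEMDRV-labelled ISO for the installer to find
mkisofs -V OEMDRV -o "build/$host.iso" "build/$host"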

Now building configured Kickstart ISOs is as simple as adding the new host to the inventory and configuring its variables.

$ make build/new-hostname.iso

Side note: You could just as easily drive this step with pure Ansible, but that would force a rebuild of the ISO on every run.

You could stop here and deploy your Kickstart configs by hand, but that’s still one manual step too many.

vSphere

The final bit of the puzzle is combining templated Kickstart configs with automating their deployment in our vSphere cluster. Once again, we’ll use Ansible.

Side note: The vSphere support in Ansible isn’t mature yet, so the modules feel less polished, the documentation is lacking and idempotency doesn’t always work. We’ve patched some of the modules internally, and I’ll release them as soon as they’re cleaned up.

Once again, everything here is fully customisable through Ansible’s variable system (the playbook is simplified so you can see what’s going on).
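A rough reconstruction, using the vsphere_copy and vsphere_guest modules available at the time; the vCenter address, datacenter, datastore and ESXi host names are all placeholders:

# deploy.yml: upload each host's Kickstart ISO, then build the VM with it attached
- hosts: all
  gather_facts: no
  connection: local
  tasks:
    - name: Upload the Kickstart ISO to the datastore
      vsphere_copy:
        host: vcenter.example.com
        login: "{{ vsphere_user }}"
        password: "{{ vsphere_password }}"
        src: build/{{ inventory_hostname }}.iso
        datacenter: DC1
        datastore: datastore1
        path: kickstart/{{ inventory_hostname }}.iso

    - name: Create the VM with the Kickstart ISO in its CD drive
      vsphere_guest:
        vcenter_hostname: vcenter.example.com
        username: "{{ vsphere_user }}"
        password: "{{ vsphere_password }}"
        guest: "{{ inventory_hostname }}"
        state: powered_on
        vm_disk:
          disk1:
            size_gb: 20
            type: thin
            datastore: datastore1
        vm_nic:
          nic1:
            type: vmxnet3
            network: VM Network
            network_type: standard
        vm_hardware:
          memory_mb: 2048
          num_cpus: 2
          osid: rhel7_64Guest
          scsi: paravirtual
          vm_cdrom:
            type: iso
            iso_path: datastore1/kickstart/{{ inventory_hostname }}.iso
        esxi:
          datacenter: DC1
          hostname: esxi01.example.com

Deploying a new machine end to end is then two commands: make build/new-hostname.iso followed by ansible-playbook deploy.yml --limit new-hostname.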

Conclusion

With this system, we’ve removed the last manual process from deploying infrastructure. It means we keep all of our infrastructure configuration in one format, in one repository.

It also brings the whole infrastructure under our rigorous CI and testing framework (more on that in a future post).

The Future

This system is robust and will last us for the foreseeable future. Given the ease with which we can now redeploy from scratch, I’d like to implement a system whereby old servers (>30 days, maybe) are cycled out of production and re-installed from scratch.
