More on PXE Booting SmartOS on Packet and Provisioning with Terraform and SaltStack

A couple of months back, I wrote about a manual version of this process to boot SmartOS using Packet’s custom iPXE feature:

In the time since, I’ve written some automation to facilitate this process, and have turned this into a consumable service for anyone wanting to give SmartOS a try:

With this information, using the custom iPXE method from the guide above on Packet (via the UI) with the following URL will boot the machine over the network, and you can proceed to install the system however you choose:

http://pxe.sfo2.gourmet.yoga/smartos.ipxe

However, this is a pain if you want to provision more than a single node at once, so using a tool like Terraform, a block like this can suffice:

provider "packet" {
auth_token = "${var.packet_api_key}"
}
resource "packet_project" "smart_os" {
name = "SmartOS Hosts"
}
resource "packet_device" "smartos-hosts" {
hostname = "${format("smartos-%02d", count.index)}"
count = "${var.count}"
operating_system = "custom_ipxe"
billing_cycle = "hourly"
project_id = "${packet_project.smart_os.id}"
plan = "baremetal_0"
provisioner "local-exec" {
command = "cd tmp; wget ${var.ipxe_script_url} -O ipxe_file"
}
user_data     = "${file("tmp/ipxe_file")}"
facility = "${var.packet_facility}"
}

which pulls down the iPXE config into a tempfile (this assumes a tmp directory in your Terraform working directory; update the path as you see fit) and passes it to Packet’s API as user data for any number of nodes you choose to create (by setting the count in your tfvars definitions).
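For completeness, here is a minimal sketch of the variable definitions that block assumes; the names mirror the references above, and the default values and the ewr1 facility are just placeholders:

# variables.tf (sketch): declarations matching the references above.
variable "packet_api_key" {}

# "count" as a variable name works with the 0.11-era syntax used here;
# newer Terraform reserves it, so you would rename it (e.g. node_count).
variable "count" {
  default = 1
}

variable "ipxe_script_url" {
  default = "http://pxe.sfo2.gourmet.yoga/smartos.ipxe"
}

variable "packet_facility" {
  # any Packet facility code, e.g. ewr1 or sjc1
  default = "ewr1"
}

# terraform.tfvars (placeholders)
packet_api_key  = "YOUR_PACKET_API_TOKEN"
count           = 3
packet_facility = "ewr1"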

And you can proceed using the SOS console steps from the original guide to access your new hosts.

How do I run my own PXE server?

I cover this briefly in the original guide, but like I said, that was a highly manual process, so it makes sense to reduce it to a configuration management state that I can just apply on provisioning (for example, you can have Terraform bootstrap the server with the required packages via cloud-init, which I don’t cover here, but you can see an example of it on another provider here).
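To give a rough idea of what that cloud-init bootstrap might look like (a sketch only, not something covered in the original guide; the Salt master address is a placeholder):

#cloud-config
# Sketch: install the Salt minion on first boot and point it at a master,
# so the states below can be applied as soon as the node comes up.
packages:
  - salt-minion

write_files:
  - path: /etc/salt/minion.d/master.conf
    content: |
      master: salt.example.com

runcmd:
  - systemctl enable --now salt-minion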

This is a super simple set-up, and while there are more sophisticated methods of serving a PXE-booted environment, for our example here a single web server will suffice to serve both the iPXE config and the SmartOS archive (the example in the blog above uses the ISO on a mount, but because this example also presumes keeping the archive updated, this way is quicker and requires fewer steps to complete on a routine basis).

In my case, I do this using SaltStack, so I whip up a Salt state. In my Salt directory, I’ll have a top.sls file to match states to hosts (in this case, all states will be applied to all hosts):

base:
  '*':
    - smartos

Then smartos.sls, which will install Nginx, pull down the latest SmartOS archive from Joyent, and extract it:

nginx:
  pkg.installed

extract_smartos:
  archive.extracted:
    - name: /usr/share/nginx/html/smartos
    - archive_format: tar
    - source: https://us-east.manta.joyent.com/Joyent_Dev/public/SmartOS/platform-latest.tgz
    - user: root
    - group: root
    - skip_verify: True
    - if_missing: /usr/share/nginx/html/smartos

/usr/share/nginx/html/smartos.ipxe:
  file.managed:
    - source: salt://files/smartos.ipxe
    - user: root
    - group: root

You’ll see that it also manages smartos.ipxe, which is the configuration you want your new nodes to load. Create a subdirectory, files, in your Salt directory, and populate smartos.ipxe there with:

#!ipxe
dhcp
set base-url http://{YOUR_SERVER_IP}
kernel ${base-url}/smartos/platform/i86pc/kernel/amd64/unix
initrd ${base-url}/smartos/platform/i86pc/amd64/boot_archive
boot

and then you can apply the highstate:

salt "*" state.highstate
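Optionally (this isn’t in the original walkthrough), you can do a dry run of just the smartos state first, and then confirm the web server is actually answering for the iPXE script:

# Dry run: show what the smartos state would change, without applying it
salt '*' state.apply smartos test=True

# After applying, confirm the iPXE config is being served (placeholder IP)
curl http://{YOUR_SERVER_IP}/smartos.ipxe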

This is useful, for example, if you wanted to provide this as a highly available service and had multiple nodes hosting the archive behind a proxy or load balancer. An example of this can be found here, where I spin up such a cluster via Terraform to support this set-up (albeit on DigitalOcean, but the logic applies to any Salt-minion-running host you choose to apply the above state to):

Why Terraform and SaltStack?

Yawn, I know, I talk about this separation a lot, but it’s one that I’ll probably never stop talking about:

Terraform is platform-provider specific, which makes it an amazing resource if you use a supported provider: you can manage infrastructure as state via the HCL domain-specific language. As configuration management, however, it leaves a lot to be desired, because a) that’s not what it’s meant to do, and b) your software doesn’t need that kind of vendor lock-in where it isn’t required; it’s actually a detriment to making the best use of Terraform as it is.

This is where SaltStack comes in: you can deploy your configuration (in this case I deploy a static website along with it, but you can usually integrate your Git server, or whatever CI pipeline you may have, to handle that part) on any system running a supported operating system. Salt has, in my experience, the best coverage across a range of operating systems, including illumos-based systems like OmniOS and SmartOS itself, which makes this portable and usable across all of your providers.

This also gives you the benefit of keeping your provisioning processes light, and your configuration management unconcerned with platform beyond what it’s supposed to be targeting.

Final Thoughts & Resources

Of course, there are a lot of implementation concerns in how I laid this out here, but it all becomes a lot less painful once you’ve done the (not always fun) work of building configuration management and provisioning tooling specific to your environment, when, frankly, you’d rather just be developing. That said, if your provider is supported by Terraform, I recommend you take a look and see what can be automated away; you might solve a productivity issue you didn’t know you had. Ditto SaltStack: once you get it right, provided nothing about the environment itself changes, you can apply and re-apply state and take all the headwork out of your operations.

Here is a sample Salt repo for setting up the iPXE web server and managing updates to the pieces of managed state across your web server (`cron.sls`, for example, sets a cron job to update the source tree every 12 hours; you likely don’t need it to be that aggressive, but this paradigm can be applied to any source archive, not just SmartOS!):
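The repo has the real thing, but as a rough sketch of the pattern (the refresh command here is illustrative; the cron.present state and the 12-hour schedule are the point):

# cron.sls (sketch): refresh the served SmartOS platform archive every 12 hours.
# The exact refresh command in the repo may differ; this one simply re-downloads
# the latest platform tarball from Joyent and unpacks it under the served directory.
update_smartos_archive:
  cron.present:
    - name: 'curl -sSL https://us-east.manta.joyent.com/Joyent_Dev/public/SmartOS/platform-latest.tgz | tar -xzf - -C /usr/share/nginx/html/smartos'
    - identifier: update_smartos_archive
    - user: root
    - minute: 0
    - hour: '*/12'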

For the sake of demonstration, I’ll also include a similar setup to this on DigitalOcean via Terraform:

This replicates the setup described above, using Terraform providers to do it on DigitalOcean (that vendor-specific aspect of the provisioning tools at hand); like I said, it can be used as-is with the SaltStack states above, just as they could be applied to hardware on Packet or to resources on public virtualized environments like DO and AWS.
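For reference, the DigitalOcean side of that looks roughly like the following (a sketch, not the repo verbatim; the image, region, size, node count, and SSH key reference are placeholders):

provider "digitalocean" {
  token = "${var.do_token}"
}

resource "digitalocean_droplet" "pxe_web" {
  count    = "${var.node_count}"
  name     = "${format("pxe-web-%02d", count.index)}"
  image    = "ubuntu-18-04-x64"
  region   = "sfo2"
  size     = "s-1vcpu-1gb"
  ssh_keys = ["${var.ssh_key_fingerprint}"]

  # Bootstrap the Salt minion via cloud-init so the states above can be applied.
  user_data = "${file("cloud-init/salt-minion.yml")}"
}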