Creating a KVM Virtualization Environment on Packet.net with Terraform

Joseph D. Marhee
Apr 6, 2018

This will take you through using Terraform to create KVM hypervisors, and check them into DNS (in my case, I do this on DigitalOcean as a post-provisioning step).

Your Terraform script will have three components: your authentication block, your project block (basically a demarcation on the Packet side for the set of resources we'll be provisioning), and the servers themselves:

provider "packet" {
  auth_token = "${var.packet_api_key}"
}resource "packet_project" "kvm-lab" {
  name           = "Compute Hosts"
}resource "packet_device" "kvm-node" {
  hostname = "${format("compute-%02d.kvm", count.index)}"
  count = "${var.count}"
  operating_system = "ubuntu_16_04"
  billing_cycle    = "hourly"
  project_id       = "${packet_project.kvm-lab.id}"
  plan             = "baremetal_0"
  facility      = "${var.packet_facility}"

So, we're creating a project, labeled kvm-lab, and that ID is used in this line (note that the packet_device block above is left open on purpose; the connection and provisioner blocks we add below all live inside it):

project_id       = "${packet_project.kvm-lab.id}"

by referencing the packet_project object's id attribute, much like accessing a dictionary value by key name in any language that supports it. You'll see this pattern repeated a few times: for example, when we refer to a different resource, like packet_device and its hostname, or when referencing something internal to the current resource with self.<attribute>.
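As a quick illustration (these lines are hypothetical examples, not part of the plan above), those two reference styles look like this:

# An attribute of another resource, as <type>.<name>.<attribute>:
project_id = "${packet_project.kvm-lab.id}"

# An attribute of the enclosing resource itself, from inside its provisioners:
command = "echo ${self.hostname}"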

The third type of substitution you'll see is var, which refers to either default or user-defined variables that you provide to a resource, as we did for facility with var.packet_facility. Particularly relevant is our variable node_count, which is the number of packet_device objects we'll create:

count = "${var.node_count}"
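Each of these variables needs a declaration. A minimal variables.tf sketch might look like the following; the defaults shown are my assumptions, not necessarily what the original repository uses:

variable "packet_api_key" {}
variable "digitalocean_api_key" {}

variable "node_count" {
  default = 3
}

variable "packet_type" {
  default = "baremetal_0"
}

variable "packet_facility" {
  default = "sjc1"
}

variable "priv_ssh_key_path" {}
variable "ssh_public_key_path" {}
variable "slack_hook_url" {}
variable "slack_channel" {}
variable "do_domain" {}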

In the packet_device resource, we'll of course want to do a little more than just provision the host and set it aside, so we'll also create something of an init script, which we can generate locally as part of the resource and populate from a template using (possibly) familiar stream editing:

provisioner "local-exec" {
    command = "sed -e 's|NODENAME|${self.hostname}|' -e 's|FACILITYNAME|${var.packet_facility}|' -e 's|SLACK_HOOK|${var.slack_hook_url}|' -e 's|SLACK_CHANNEL|${var.slack_channel}|' -e 's|DO_KEY|${var.digitalocean_api_key}|' -e 's|ENDPOINT|${var.do_domain}|' templates/ddns-temp.py > files/ddns-up-${self.hostname}.py"
  }provisioner "file" {
    source = "files/ddns-up-${self.hostname}.py"
    destination = "/root/ddns-up.py"
  }

In the above, we're running sed locally against a file called templates/ddns-temp.py, substituting in our variables and other attributes, and writing the result to files/ddns-up-${self.hostname}.py (the hostname suffix tells us which host each output file is for); the file provisioner then uploads it onto the host as /root/ddns-up.py.
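The template itself isn't shown in this post, but judging from the sed tokens above, its header presumably defines constants along these lines (a sketch; the Python names are my own):

# templates/ddns-temp.py: placeholder tokens replaced by the local-exec sed
NODE_NAME     = "NODENAME"       # becomes e.g. compute-00.kvm
FACILITY      = "FACILITYNAME"   # becomes e.g. sjc1
SLACK_HOOK    = "SLACK_HOOK"     # Slack incoming webhook URL
SLACK_CHANNEL = "SLACK_CHANNEL"
DO_API_KEY    = "DO_KEY"         # DigitalOcean API token
DOMAIN        = "ENDPOINT"       # the DNS zone to check the host into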

Of course, for Terraform to access the host, you'll need connection details ahead of these provisioner blocks:

connection {
    user = "root"
    type = "ssh"
private_key = "${file(var.priv_ssh_key_path)}"
    timeout = "2m"
  }

and you can re-use this connection information to remotely execute commands on the host itself once it comes online:

provisioner "remote-exec" {
    inline = [
      "apt-get update",
      "apt-get install -y qemu-kvm libvirt-bin virt-manager bridge-utils python python-pip",
      "pip install requests",
      "python /root/ddns-up-${self.hostname}.py"
    ] 
  }

The above installs the KVM software and bridge utilities. (My use case on Ubuntu 16.04 already creates a private bridge for a NAT'd VM network, but yours may require an Internet-facing bridge; you can configure that through these remote commands, or through the user_data attribute of the packet_device resource, as sketched below. The benefit of user_data is that no connection block is required, and you can still template it, but unlike our remote-exec here, the commands run by cloud-init will not be tracked for successful exits by Terraform.) This block also installs the requirements for my DNS script and then runs it.
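If you took the user_data route instead, a minimal sketch might look like this (scripts/kvm-setup.sh is a placeholder of my own for whatever cloud-init payload you'd template):

resource "packet_device" "kvm-node" {
  # ... same arguments as the resource above ...

  # Runs via cloud-init on first boot; no connection block is required,
  # but Terraform will not track whether these commands exit successfully.
  user_data = "${file("scripts/kvm-setup.sh")}"
}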

In order to move on from here, you can provide your variable values in a terraform.tfvars file like this:

packet_api_key = ""
digitalocean_api_key = ""
node_count = 3
packet_type = "baremetal_0"
priv_ssh_key_path = "./packet-key"
ssh_public_key_path = "packet-key.pub"
packet_facility = "sjc1"
slack_hook_url = ""
slack_channel = ""
do_domain = ""

then you can proceed to plan and run your Terraform script:

terraform plan
terraform apply

Keep in mind, if this is the first time Terraform has been run in this project directory, you may need to initialize it with terraform init. I also like to create a new key pair for each project like this (you can skip this and just set the path to an existing key pair in your tfvars file when you apply your Terraform plan), so I put both steps into a shell script I run before creating the environment:

#!/bin/bash
if [ ! -d .terraform ]; then
 terraform init;
fi

ssh-keygen -t rsa -b 4096 -C "$(whoami)@$(hostname)" -f packet-key -N ''

Once the apply completes, you'll see the confirmation in Terraform's output. You can also create an output.tf file for a nicer presentation, though that's totally optional, something like:

output "Your KVM Host Addresses" {
  value = "${join(",", packet_device.kvm-node.*.access_public_ipv4)}"
}

which dumps out something like this once completed:

Apply complete! Resources: 4 added, 0 changed, 0 destroyed.

Outputs:

Your KVM Host Addresses = {IP1},{IP2},{IP3}

But because I like a little extra flair, and might be collaborating on such a project, I used the DNS script not only to check the host into DNS, but also to post to Slack, so everyone in the channel knows when I've created a node.

The script itself just creates a DNS record with my DNS provider, then makes an incoming-webhook request to Slack, so I know when a host has come online completely and been checked into DNS. It's a somewhat flashy way of announcing that the task finished and the host is accessible, and it only notifies the channel on successful deploys rather than on every attempt.
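The original script isn't reproduced in this post, but here's a minimal sketch of the same idea, assuming the template constants from earlier, DigitalOcean's domain-records API, a Slack incoming webhook, and an echo-your-IP service to discover the host's address:

import requests

# These constants are the sed-substituted template values from earlier
NODE_NAME     = "NODENAME"
SLACK_HOOK    = "SLACK_HOOK"
SLACK_CHANNEL = "SLACK_CHANNEL"
DO_API_KEY    = "DO_KEY"
DOMAIN        = "ENDPOINT"

# Discover this host's public IPv4 address
ip = requests.get("https://api.ipify.org").text

# Create an A record for the host via DigitalOcean's domain records API
record = requests.post(
    "https://api.digitalocean.com/v2/domains/%s/records" % DOMAIN,
    headers={"Authorization": "Bearer %s" % DO_API_KEY},
    json={"type": "A", "name": NODE_NAME, "data": ip},
)
record.raise_for_status()

# Announce the successful check-in to Slack via the incoming webhook
requests.post(SLACK_HOOK, json={
    "channel": SLACK_CHANNEL,
    "text": "%s.%s is online at %s and checked into DNS" % (NODE_NAME, DOMAIN, ip),
})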

From here, you can either extend the setup you provision through Terraform, or supplement it with other configuration management (e.g., SaltStack or Ansible); Terraform won't get in your way from the point the hosts come online. You can also use other providers to extend this functionality: many SaaS solutions, such as network services (CloudFlare, for example) and monitoring (NewRelic), can be brought into your Terraform configuration as well.
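For example, here's a hedged sketch of adding CloudFlare records for these hosts with the cloudflare provider (the zone and the cloudflare_* variables are placeholders of my own, and the provider's arguments may differ by version):

provider "cloudflare" {
  email = "${var.cloudflare_email}"
  token = "${var.cloudflare_token}"
}

resource "cloudflare_record" "kvm-node" {
  count  = "${var.node_count}"
  domain = "example.com"
  name   = "${element(packet_device.kvm-node.*.hostname, count.index)}"
  value  = "${element(packet_device.kvm-node.*.access_public_ipv4, count.index)}"
  type   = "A"
}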

If you’d like to try out this project specifically, my repository with the complete template can be found here:
