Using Terraform to provision Oracle Cloud Infrastructure Classic Instances with Persistent Block Storage Attachments

Stephen Cross
Published in Oracle Developers
Jan 18, 2018 · 5 min read

This article looks at the options for configuring and managing the lifecycle of Oracle Cloud Infrastructure Compute Classic instances with persistent boot and data storage volumes, using Terraform and the opc provider.

First, let's look at how Terraform provisions a basic compute instance:

resource "opc_compute_instance" "storage-example" {
name = "storage-example"
image = "/oracle/public/OL_7.2_UEKR4_x86_64"
shape = "oc3"
}

As there are no explicit storage resources defined, Terraform will create an ephemeral instance with local boot storage. When the instance is destroyed the local storage is gone and unrecoverable. Ephemeral instances are ideal when there are no local persistence requirements, but if you need to start/stop, resize, or easily recover the instance after an error or outage, then the instance will need persistent block storage.

Persistent Boot Volume

Let's extend the base instance definition to add a persistent boot volume.

resource "opc_compute_storage_volume" "boot-volume" {
size = "20"
name = "boot-volume"
bootable = true
image_list = "/oracle/public/OL_7.2_UEKR4_x86_64"
image_list_entry = 1
}
resource "opc_compute_instance" "storage-example" {
name = "storage-example"
shape = "oc3"
storage {
index = 1
volume = "${opc_compute_storage_volume.boot-volume.name}"
}
boot_order = [ 1 ]
}

Now Terraform will create a bootable block storage volume using the requested base image, create the instance with the boot volume attached, and boot the instance from that volume. The boot_order attribute identifies which storage attachment to use as the boot volume.

With separate instance and storage resources, the instance resource can now be modified (e.g. setting the desired state from running to suspended, or changing the shape) while the storage volume is left unchanged. Modifying the storage resource definition (e.g. changing the size, storage_type, or image) is still a destructive event. The Terraform prevent_destroy flag can be used to reduce the chance of accidentally destroying the storage resource.

resource "opc_compute_storage_volume" "boot-volume" {
size = "20"
name = "boot-volume"
bootable = true
image_list = "/oracle/public/OL_7.2_UEKR4_x86_64"
image_list_entry = 1
lifecycle {
prevent_destroy = true
}

}

Persistent Data Volumes

In addition to the boot volume, we can attach multiple data volumes. This is useful for separating the OS from the application and data partitions, and it enables easier resizing. Let's add a new data volume to our instance definition.

resource "opc_compute_storage_volume" "boot-volume" {
...
}
resource "opc_compute_storage_volume" "data-volume" {
size = "20"
name = "data-volume"
}
resource "opc_compute_instance" "storage-example" {
name = "storage-example"
shape = "oc3"
storage {
index = 1
volume = "${opc_compute_storage_volume.boot-volume.name}"
}
storage {
index = 2
volume = "${opc_compute_storage_volume.data-volume.name}"
}

boot_order = [ 1 ]
}

Great, now we have an instance with persistent boot and data volumes. As before, the prevent_destroy option can be used to further guard the resources from accidental deletion. If you run a terraform apply on the above configuration you will notice that the instance resource is destroyed and recreated; this is because storage attachments declared in the instance definition are part of the launch definition and are attached at boot time.

Attaching storage at boot time is usually what you want, enabling start-up scripts to automatically bring up any applications and access data on the storage volumes. But let's consider a situation where you want to attach a storage volume to an already running instance without stopping and recreating it.

Dynamic Storage Attachments

Using the storage attachment resource we can associate a new storage volume with an already created/running instance. Add the following to the previous configuration:

resource "opc_compute_storage_volume" "attached-volume" {
size = "20"
name = "attached-volume"
}
resource "opc_compute_storage_attachment" "storage-attachment" {
instance = "${opc_compute_instance.storage-example.name}"
index = 3
storage_volume = "${opc_compute_storage_volume.attached-volume.name}"
}

A key distinction here is that the storage attachment is only associated with the instance after the instance creation has completed, including running any provisioners that may be part of the instance definition. Detection and mounting of the new volume will be specific to the instance's OS (see Mounting and Unmounting a Storage Volume). For instances running Oracle Linux the new storage attachment will automatically appear as the block device /dev/xvdd.

Formatting and mounting the storage volume can be managed in the Terraform configuration with a remote-exec provisioner in the storage attachment resource.

Note: from here on we assume the instance has been configured for ssh access, with an appropriately provisioned ssh key, and is accessible to Terraform on the local network. For brevity and to stay on topic, configuring the appropriate keys, bastion host, vpn, ip networks, and/or public ip reservation etc. is left as an exercise for the reader.
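One small piece the next example does rely on is the ssh_private_key_file variable referenced in its connection block. A minimal declaration might look like the following sketch; the variable name matches the example below, but the default path is only an illustration:

variable "ssh_private_key_file" {
  description = "Path to the private key matching the ssh key provisioned on the instance"
  default     = "~/.ssh/id_rsa" # illustrative default; point this at your own key
}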

resource "opc_compute_storage_attachment" "storage-attachment" {
instance = "${opc_compute_instance.storage-example.name}"
index = 3
storage_volume = "${opc_compute_storage_volume.attached-volume.name}"
connection {
type = "ssh"
host = "${opc_compute_instance.storage-example.ip_address}"
user = "opc"
private_key = "${file(var.ssh_private_key_file)}"
timeout = "10m"
}
provisioner "remote-exec" {
inline = [
"
sudo mkfs -t ext3 /dev/xvdd", # FORMAT THE VOLUME
"
sudo mkdir /mnt/store", # CREATE THE MOUNT POINT
"sudo mount /dev/xvdd /mnt/store" # MOUNT THE VOLUME

]
}

}

This example will reformat the storage every time the volume is (re)attached. If you are using a storage volume that already has data (e.g. a volume previously attached to a different instance, or restored from a snapshot) you'll want to remove that step.
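For example, when attaching a volume that already contains a filesystem, a provisioner that only creates the mount point and mounts the device (no mkfs) might look like this sketch, under the same ssh assumptions as above:

  provisioner "remote-exec" {
    inline = [
      "sudo mkdir -p /mnt/store",        # CREATE THE MOUNT POINT IF IT DOESN'T ALREADY EXIST
      "sudo mount /dev/xvdd /mnt/store"  # MOUNT THE EXISTING FILESYSTEM WITHOUT FORMATTING
    ]
  }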

Initializing and mounting the storage are clearly important to be able to use the storage volume, but probably more important is ensuring that detaching a storage volume is done cleanly to avoid potential data corruption. Destroying the storage attachment resource is the cloud equivalent of yanking out a USB drive without ejecting it first.

Terraform provides a special destroy time provisioner option to execute commands during destroy rather than create. To ensure the volume is unmounted prior to detaching, add the following additional remote-exec provisioner to the storage attachment resource definition:

  provisioner "remote-exec" {
when = "destroy"
inline = [

"sudo umount /mnt/store" # UNMOUNT THE VOLUME

]
}

However, be sure to take heed of the following snippet from the Terraform documentation:

Destroy-time provisioners can only run if they remain in the configuration at the time a resource is destroyed. If a resource block with a destroy-time provisioner is removed entirely from the configuration, its provisioner configurations are removed along with it and thus the destroy provisioner won't run. To work around this, a multi-step process can be used to safely remove a resource with a destroy-time provisioner:

1. Update the resource configuration to include count = 0.
2. Apply the configuration to destroy any existing instances of the resource, including running the destroy provisioner.
3. Remove the resource block entirely from configuration, along with its provisioner blocks.
4. Apply again, at which point no further action should be taken since the resources were already destroyed.
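Applied to our configuration, the first step of that workaround might look like the following sketch: set count = 0 on the attachment resource, apply (which destroys the attachment and runs the unmount), and only then delete the block entirely:

resource "opc_compute_storage_attachment" "storage-attachment" {
  count          = 0  # destroys the existing attachment, running its destroy provisioner, before the block is removed
  instance       = "${opc_compute_instance.storage-example.name}"
  index          = 3
  storage_volume = "${opc_compute_storage_volume.attached-volume.name}"

  # connection and provisioner blocks unchanged
  ...
}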
