Unable to update resource policies attached to disks? Here’s how.

Satyapal Singh
Google Cloud - Community
7 min read · Feb 5, 2023

Resource policies, once attached to disks, inherently cannot be updated in place. Nevertheless, changing or updating such policies is an indispensable requirement in many real-world scenarios. In this article, we decode this issue and present a viable approach to solving it.

Terminology

  1. Resource Policy:
    A policy that can be attached to a resource to specify or schedule actions on that resource. In our case, this resource is going to be a disk, which in turn can be in use by a VM. Keep in mind that resource policies come in different flavours: instance_schedule_policy, group_placement_policy and snapshot_schedule_policy. We make use of the snapshot_schedule_policy flavour for our demonstration.
  2. Snapshot Schedule Policy:
    A policy that can be used for creating snapshots of a persistent disk at a given frequency. The schedule can be hourly, daily or weekly; a sketch of the hourly and weekly variants follows this list.
  3. Disks:
    Persistent disks are durable storage devices that function similarly to the physical disks in a desktop or a server; Compute Engine manages the hardware behind these devices. They are located independently from your virtual machine instances, so you can detach or move them to keep your data even after you delete your instances.
  4. Disk Resource Policy Attachment:
    A resource that attaches existing resource policies to a disk. You can only add one policy, which will be applied to this disk for scheduling snapshot creation.
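
For reference, here is a minimal sketch of the hourly and weekly flavours of the snapshot schedule. The policy names here are illustrative, and the project/region simply mirror the demonstration below:

resource "google_compute_resource_policy" "hourly" {
  name    = "schedule-hourly" # illustrative name
  region  = "us-central1"
  project = "wpp-poc-project"

  snapshot_schedule_policy {
    schedule {
      hourly_schedule {
        hours_in_cycle = 4       # take a snapshot every 4 hours
        start_time     = "00:00" # UTC
      }
    }
  }
}

resource "google_compute_resource_policy" "weekly" {
  name    = "schedule-weekly" # illustrative name
  region  = "us-central1"
  project = "wpp-poc-project"

  snapshot_schedule_policy {
    schedule {
      weekly_schedule {
        day_of_weeks {
          day        = "MONDAY"
          start_time = "05:00" # UTC
        }
      }
    }
  }
}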

Note: This article only covers the resource google_compute_disk_resource_policy_attachment for attaching existing resource policies to zonal disks. Please make use of google_compute_region_disk_resource_policy_attachment for working with regional disks.
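
For completeness, the regional variant looks almost identical; the zone is simply replaced by a region. A minimal sketch, assuming a regional disk named test-regional-disk already exists:

resource "google_compute_region_disk_resource_policy_attachment" "attachment" {
  name    = "schedule-terraform" # name of an existing resource policy
  project = "wpp-poc-project"
  disk    = "test-regional-disk" # assumed pre-existing regional disk
  region  = "us-central1"
}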

Problem Statement:

A resource policy once created and attached to a disk cannot be edited. Here’s some sample code for provisioning the required resources for our demonstration:

resource "google_compute_resource_policy" "default" {
name = "schedule-terraform"
region = "us-central1"
project = "wpp-poc-project"
snapshot_schedule_policy {
schedule {
daily_schedule {
days_in_cycle = 1
start_time = "05:00"
}
}
}
}

resource "google_compute_disk" "default" {
name = "test-disk"
provider = google-beta
project = "wpp-poc-project"
type = "pd-ssd"
zone = "us-central1-a"
image = "debian-11-bullseye-v20220719"
labels = {
environment = "dev"
}
resource_policies = [google_compute_resource_policy.default.name]
}

resource "google_compute_instance" "default" {
name = "test-vm"
project = "wpp-poc-project"
machine_type = "e2-medium"
zone = "us-central1-a"

tags = ["foo", "bar"]

attached_disk {
source = google_compute_disk.default.name
}

boot_disk {
initialize_params {
image = "debian-11-bullseye-v20220719"
}
}

network_interface {
network = "default"
}
}

Performing a terraform apply creates the three resources: the resource policy ‘schedule-terraform’ is created and attached to ‘test-disk’, which in turn is in use by the VM instance ‘test-vm’.

Updating the resource policy and then running terraform apply runs into the following error. The change forces Terraform to delete the existing policy associated with the disk, create a new resource policy with the updated parameter, and then attach the new policy to the disk. However, the very first step in this plan fails, because a policy that is already attached to a disk cannot be deleted in the first place.
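
For illustration, suppose the update is as simple as moving the daily snapshot an hour earlier. Only the changed block is shown; the rest of the policy stays as above:

snapshot_schedule_policy {
  schedule {
    daily_schedule {
      days_in_cycle = 1
      start_time    = "04:00" # changed from "05:00"
    }
  }
}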

(Screenshots: the updated resource policy, and the resulting error stating that the resource policy cannot be edited.)

Note: Even changing the name of the resource policy, in addition to the actual policy parameters, to try and create a new policy altogether fails with the same error.

Solution: Resource Policy Attachment

As already stated above, you cannot update, modify or delete resource policies as-is. The solution to this problem is to create attachments between the disks and the policies, and to update these attachments rather than the policies themselves. That is to say, if you want to update the resource policy on a disk, you should create an isolated resource policy and attach it to your existing disk.

Let’s have a look at how this can be achieved. First, avoid passing the created resource policy directly to the disk resource as a parameter. To achieve this, we have to tweak our code a little and start anew with the introduction of a new resource.

Note: Before trying to run the following code, make sure you clean up all the existing resources by running terraform destroy. This command may itself run into a similar error to the one above; simply run it again and it will get rid of the resource policy.

resource "google_compute_resource_policy" "default" {
name = "schedule-terraform-1"
region = "us-central1"
project = "wpp-poc-project"
snapshot_schedule_policy {
schedule {
daily_schedule {
days_in_cycle = 1
start_time = "04:00"
}
}
}
}

resource "google_compute_disk" "default" {
name = "test-disk"
provider = google-beta
project = "wpp-poc-project"
type = "pd-ssd"
zone = "us-central1-a"
image = "debian-11-bullseye-v20220719"
labels = {
environment = "dev"
}
}

resource "google_compute_disk_resource_policy_attachment" "attachment" {
name = google_compute_resource_policy.default.name
project = "wpp-poc-project"
disk = google_compute_disk.default.name
zone = "us-central1-a"
}

resource "google_compute_instance" "default" {
name = "test-vm"
project = "wpp-poc-project"
machine_type = "e2-medium"
zone = "us-central1-a"

tags = ["foo", "bar"]

attached_disk {
source = google_compute_disk.default.name
}

boot_disk {
initialize_params {
image = "debian-11-bullseye-v20220719"
}
}

network_interface {
network = "default"
}
}

This construct now allows Terraform to create the new resource policy with updated parameters, delete the old resource policy attached to the disk, and attach the new policy to it.
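
For example (the new name and start time below are purely illustrative), an update is now just an in-place edit of the policy block. Because the attachment references the policy by name, changing the name forces Terraform to replace the policy and its attachment together:

resource "google_compute_resource_policy" "default" {
  name    = "schedule-terraform-2" # new name forces re-creation of policy and attachment
  region  = "us-central1"
  project = "wpp-poc-project"

  snapshot_schedule_policy {
    schedule {
      daily_schedule {
        days_in_cycle = 1
        start_time    = "06:00" # updated parameter
      }
    }
  }
}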

(Screenshots: deletion of the older resource policy and attachment, followed by creation of newer versions of both; the disk with the new resource policy attached.)

This approach works, but it has its pitfalls. Every time you want to update the resource policy, a new resource policy is created and the old one is deleted at the same time. So, if you wanted to use the old policy again at a later point in time, you would have to recreate it then.

A better approach is to completely isolate the creation of resource policies from their attachment to disks. The resource policies can be created through the console or through Terraform; we have already created two such policies, named schedule-1 and schedule-2 (a sketch follows).
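
If you prefer Terraform over the console, these pre-existing policies could live in a separate configuration of their own. A sketch; the schedules shown are illustrative:

resource "google_compute_resource_policy" "schedule_1" {
  name    = "schedule-1"
  region  = "us-central1"
  project = "wpp-poc-project"

  snapshot_schedule_policy {
    schedule {
      daily_schedule {
        days_in_cycle = 1
        start_time    = "04:00" # illustrative schedule
      }
    }
  }
}

resource "google_compute_resource_policy" "schedule_2" {
  name    = "schedule-2"
  region  = "us-central1"
  project = "wpp-poc-project"

  snapshot_schedule_policy {
    schedule {
      daily_schedule {
        days_in_cycle = 1
        start_time    = "06:00" # illustrative schedule
      }
    }
  }
}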

Our previous code now takes the following form, which only attaches the already created resource policies to disks. We pass the policy names manually to the policy attachment resource:

resource "google_compute_disk" "default" {
name = "test-disk"
provider = google-beta
project = "wpp-poc-project"
type = "pd-ssd"
zone = "us-central1-a"
image = "debian-11-bullseye-v20220719"
labels = {
environment = "dev"
}
}

resource "google_compute_disk_resource_policy_attachment" "attachment" {
name = "schedule-1" #MANUALLY PASSING THE PRE-EXISTING POLICY NAME
project = "wpp-poc-project"
disk = google_compute_disk.default.name
zone = "us-central1-a"
}

resource "google_compute_instance" "default" {
name = "test-vm"
project = "wpp-poc-project"
machine_type = "e2-medium"
zone = "us-central1-a"

tags = ["foo", "bar"]

attached_disk {
source = google_compute_disk.default.name
}

boot_disk {
initialize_params {
image = "debian-11-bullseye-v20220719"
}
}

network_interface {
network = "default"
}
}

The apply succeeds and attaches the specified resource policy to the disk.

(Screenshots: creation of the attachment for policy ‘schedule-1’, along with destruction of the old attachment and old policy; the pre-existing ‘schedule-1’ now attached to the disk.)

Now, switching to a new resource policy becomes very convenient: just plug the new policy name into the policy attachment resource.
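
The switch is a one-line change in the attachment resource:

resource "google_compute_disk_resource_policy_attachment" "attachment" {
  name    = "schedule-2" # was "schedule-1"
  project = "wpp-poc-project"
  disk    = google_compute_disk.default.name
  zone    = "us-central1-a"
}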

(Screenshots: the pre-existing policy ‘schedule-2’ substituted for ‘schedule-1’; Terraform now gets rid of only the old attachment, not the old policy itself; the disk with its policy updated to ‘schedule-2’.)

Conclusion:

Resource policies, once attached to persistent disks, cannot be updated or deleted. The only workaround to this problem is to maintain a set of pre-existing resource policies with the desired parameters and to create resource policy attachments between the disks and these policies. Any update to the policy attached to a disk then means updating only the attachment, which circumvents the errors associated with updating policies on the disk in place.
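
As a closing thought, the attachment pattern combines nicely with an input variable, so that switching policies becomes a single variable change rather than an edit to the resource block. A sketch, assuming the named policies already exist:

variable "snapshot_policy_name" {
  type        = string
  description = "Name of a pre-existing snapshot schedule policy"
  default     = "schedule-1"
}

resource "google_compute_disk_resource_policy_attachment" "attachment" {
  name    = var.snapshot_policy_name
  project = "wpp-poc-project"
  disk    = google_compute_disk.default.name
  zone    = "us-central1-a"
}

Switching to the second policy is then just terraform apply -var="snapshot_policy_name=schedule-2".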
