Writing my first Terraform Resource

David Harrison
Published in LADbibleGroup
Feb 19, 2019 · 4 min read

We utilise Terraform in our CI/CD process, primarily to manage resources in Google Cloud Platform. Recently, we designed an automation stack that runs a function nightly. Terraforming the Pub/Sub topic and Cloud Functions was achieved using Google's provider resources. What we discovered early on is that Google's Cloud Scheduler wasn't yet available as a resource in the official provider.

After searching through their GitHub repo, it didn't appear to be in development either; however, there was a previously raised issue with some interest. I got in touch with the other developer and we started collaborating on building support for the resource.

Initially, this post was going to be about my experience writing a Terraform plugin. After a couple of days of work, a comment on the original issue caused us to pivot the development: the process for developing Terraform plugins for Google had moved to a new repo called Magic Modules.

The purpose behind Magic Modules is to have one centralised configuration for each of Google Cloud Platform's resources that can be built into each of the supported providers' tools. The providers are currently Chef, Puppet, Ansible and Terraform.

The work we'd already done on the Terraform repo made porting the logic over to the Magic Modules repo easier than starting afresh. First, though, a bit on the process of writing a plugin for Terraform. Terraform plugins are written in Go; I'd never written Go before, but I've had some experience with statically typed languages like C++ and Objective-C.

Writing support for Cloud Scheduler in Google's provider plugin involved importing the Go package google.golang.org/api/cloudscheduler/v1beta1 and initialising the service with an HTTP client. This allows Get/Create/Delete calls to be fulfilled via the RESTful API; we'll come to these calls later.

import (
    ...
    "google.golang.org/api/cloudscheduler/v1beta1"
    ...
)

type Config struct {
    ...
    clientCloudScheduler *cloudscheduler.Service
    ...
}

func (c *Config) loadAndValidate() error {
    ...
    log.Printf("[INFO] Instantiating Google Cloud Scheduler Client...")
    c.clientCloudScheduler, err = cloudscheduler.New(client)
    if err != nil {
        return err
    }
    ...
}

The next step is defining the resource on the provider; the map key is the name that will be used in the Terraform config when declaring the resource. Note that resources belong in the provider's ResourcesMap (DataSourcesMap is for read-only data sources).

func Provider() terraform.ResourceProvider {
    return &schema.Provider{
        ...
        // Resources are registered in ResourcesMap, not DataSourcesMap.
        ResourcesMap: map[string]*schema.Resource{
            ...
            "google_cloudscheduler_job": resourceCloudSchedulerJob(),
            ...
        },
    }
}

The function used as the map value above, resourceCloudSchedulerJob, defines the resource.

func resourceCloudSchedulerJob() *schema.Resource {
    return &schema.Resource{
        Create: resourceCloudSchedulerJobCreate,
        Read:   resourceCloudSchedulerJobRead,
        Delete: resourceCloudSchedulerJobDelete,
        Exists: resourceCloudSchedulerJobExists,

        Schema: map[string]*schema.Schema{
            "name": {
                Type:     schema.TypeString,
                Required: true,
            },
            ...
        },
    }
}

The method will return a Resource struct with a map of Schema structs and CRUD methods defined on it.

The Schema struct defines the field's type, whether it is optional or required, and other details like validation, ConflictsWith, Computed and more.

The CRUD methods are called when Terraform determines they're needed, based on the diff between the state read from the API and the config described by the schema. The Read method writes to the schema.ResourceData; this info is diffed against the desired resource config to determine whether an Update/Create/Delete should be called.
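Conceptually, the plan step can be sketched like this. This is not Terraform's actual implementation, just a minimal, self-contained illustration of how the state read from the API is compared against the desired config to pick an action:

```go
package main

import "fmt"

// decideAction is a simplified illustration of Terraform's plan logic:
// it compares the state read from the API (via Read) against the desired
// config. forceNew marks fields whose change requires replacing the resource.
func decideAction(state, config map[string]string, forceNew map[string]bool) string {
	if state == nil {
		return "create" // nothing exists yet
	}
	if config == nil {
		return "delete" // resource removed from config
	}
	for key, want := range config {
		if state[key] != want {
			if forceNew[key] {
				return "replace" // delete old, create new
			}
			return "update"
		}
	}
	return "no-op"
}

func main() {
	state := map[string]string{"name": "nightly-job", "schedule": "0 2 * * *"}
	config := map[string]string{"name": "nightly-job", "schedule": "0 3 * * *"}
	forceNew := map[string]bool{"schedule": true}
	fmt.Println(decideAction(state, config, forceNew)) // prints "replace"
}
```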

The Create and Delete methods should create and delete the resource on the API respectively. I didn't implement an Update method on my resource, as Cloud Scheduler jobs can't currently be updated; instead I set ForceNew: true on every field to ensure that on any change, the old resource would be deleted and a new resource created in its place.

If you were writing support for a resource that is updatable, you can call HasChange on the schema.ResourceData and build the payload to send to the provider API's corresponding endpoint.
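In the real provider that's `d.HasChange(key)` on the `*schema.ResourceData` passed into the Update function. The snippet below is a self-contained sketch of the pattern, with a hypothetical stub type standing in for the SDK's ResourceData:

```go
package main

import "fmt"

// resourceData is a hypothetical stand-in for terraform's *schema.ResourceData,
// holding the prior state and the new desired config for illustration only.
type resourceData struct {
	oldValues, newValues map[string]string
}

// HasChange mirrors the behaviour of the SDK method of the same name:
// true when the field's value differs between state and config.
func (d *resourceData) HasChange(key string) bool {
	return d.oldValues[key] != d.newValues[key]
}

// buildPatch collects only the changed fields into an update payload.
func buildPatch(d *resourceData, keys []string) map[string]string {
	patch := map[string]string{}
	for _, key := range keys {
		if d.HasChange(key) {
			patch[key] = d.newValues[key]
		}
	}
	return patch
}

func main() {
	d := &resourceData{
		oldValues: map[string]string{"schedule": "0 2 * * *", "description": "nightly"},
		newValues: map[string]string{"schedule": "0 3 * * *", "description": "nightly"},
	}
	// Only "schedule" changed, so only it ends up in the patch.
	fmt.Println(buildPatch(d, []string{"schedule", "description"}))
}
```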

Moving the logic from above over to Magic Modules (MM) is achieved mostly using YAML key/values. MM expects the YAML config to be written to an api.yaml file inside a product directory under products/, e.g., products/cloudscheduler/api.yaml.

--- !ruby/object:Api::Product
name: Cloud Scheduler
prefix: gcloudscheduler
versions:
  - !ruby/object:Api::Product::Version
    name: beta
    base_url: https://cloudscheduler.googleapis.com/v1beta1/
scopes:
  - https://www.googleapis.com/auth/cloud-platform
objects:
  - !ruby/object:Api::Resource
    name: 'Job'
    properties:
      - !ruby/object:Api::Type::String
        name: name
        description: |
          The name of the job.
        required: true
        input: true

As well as the API config, you'll need to provide a provider-specific YAML config, e.g., products/cloudscheduler/terraform.yaml. This defines the name of the resource and any overrides for the resource and its properties. Also included are custom files that bootstrap the resource into the provider; for instance, Terraform needs the product~compile.yaml file, which in turn generates the resource-map code that is later added to the provider.

--- !ruby/object:Provider::Terraform::Config
name: CloudScheduler
overrides: !ruby/object:Provider::ResourceOverrides
  Job: !ruby/object:Provider::Terraform::ResourceOverride
    properties:
      name: !ruby/object:Provider::Terraform::PropertyOverride
        custom_expand: 'templates/terraform/custom_expand/cloud_scheduler_job_name.erb'
        custom_flatten: 'templates/terraform/custom_flatten/name_from_self_link.erb'
files: !ruby/object:Provider::Config::Files
  compile:
<%= lines(indent(compile('provider/terraform/product~compile.yaml'), 4)) -%>

The overrides here build the name string the API expects; if any field should be modified before being sent to the API, this is the place to do it. It's also the place to include validation, via the validation key.
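A property override using the validation key might look something like the fragment below. The regex is illustrative only, and the exact override schema may have changed, so check the Magic Modules repo for the current shape:

```yaml
name: !ruby/object:Provider::Terraform::PropertyOverride
  validation: !ruby/object:Provider::Terraform::Validation
    regex: '^[a-zA-Z0-9_-]{1,500}$'
```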

Custom expanders and flattening methods should be written to the templates directory for the provider. The code is Go written as a string inside a Ruby template file, so it can't be tested thoroughly; for that reason, keep this logic short and simple.

For validation helpers, write them to the third_party/terraform/utils/validation.go file.
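A validator there is just a function with Terraform's standard validation signature, `func(interface{}, string) ([]string, []error)`. Something along these lines; the function name and the name rule shown are illustrative, not the exact Cloud Scheduler constraint:

```go
package main

import (
	"fmt"
	"regexp"
)

// validateCloudSchedulerJobName follows terraform's SchemaValidateFunc
// signature. The pattern below is an illustrative constraint, not the
// API's exact naming rule.
func validateCloudSchedulerJobName(v interface{}, k string) (ws []string, errs []error) {
	value, ok := v.(string)
	if !ok {
		errs = append(errs, fmt.Errorf("%q must be a string", k))
		return
	}
	if !regexp.MustCompile(`^[a-zA-Z0-9_-]{1,500}$`).MatchString(value) {
		errs = append(errs, fmt.Errorf(
			"%q (%q) may only contain letters, numbers, hyphens and underscores", k, value))
	}
	return
}

func main() {
	_, errs := validateCloudSchedulerJobName("nightly-job", "name")
	fmt.Println(len(errs)) // prints 0: valid name

	_, errs = validateCloudSchedulerJobName("bad name!", "name")
	fmt.Println(len(errs)) // prints 1: rejected
}
```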

Compiling your resource is documented in the Magic Modules repo, but the gist is:

Clone the existing Terraform provider repo to your Go path:

git clone git@github.com:terraform-providers/terraform-provider-google-beta.git ${GOPATH}/src/github.com/terraform-providers/terraform-provider-google-beta

Then in the MM root run the following to compile the whole provider:

bundle exec compiler -a -v beta -e terraform -o "${GOPATH}/src/github.com/terraform-providers/terraform-provider-google-beta/"

Back in the Terraform provider repo root, run:

make test
make build

This tests the provider and builds the binary to the Go bin path:

${GOPATH}/bin/terraform-provider-google-beta

You can test that the binary works against your resource config by copying it into the .terraform/plugins/darwin_amd64/ directory inside the directory you're running Terraform from. Note: the .terraform directory is created after running 'terraform init'.
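A minimal resource block to exercise the new binary might look like this. The resource type follows the key registered earlier; the field shown is an assumption based on the v1beta1 API, so check the generated schema for the actual attribute names:

```hcl
resource "google_cloudscheduler_job" "nightly" {
  name = "nightly-job"
  # Remaining fields (schedule, target, etc.) as defined by the
  # generated schema, e.g.:
  # schedule = "0 2 * * *"
}
```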

David Harrison

Web Developer, iOS Rookie Developer & Bastardiser of Design. Consuming Tech News and Excelling at Sharing Apple Rumours.