Reusable Terraform modules for OCI

We’ve recently published a number of reusable Terraform modules for OCI on GitHub and the Terraform Registry. You can find them under the oracle-terraform-modules namespace on both.

In this post, I’ll walk you through the modules I’m working on, their purpose and how you can use them in your own projects.

terraform-oci-vcn

terraform-oci-vcn is a module for creating a VCN. Creating a VCN is simple, right? So what’s the big deal with this module? Well, it also optionally creates Internet, NAT and Service gateways, together with the corresponding route tables.

Let’s first see how to reuse the VCN module. We’ll use the published version on the registry:

module "vcn" {source  = "oracle-terraform-modules/vcn/oci"
version = "1.0.1"
# provider parameters
region = var.region
# general oci parameters
compartment_id = var.compartment_id
label_prefix = var.label_prefix
# vcn parameters
internet_gateway_enabled = var.internet_gateway_enabled
nat_gateway_enabled = var.nat_gateway_enabled
service_gateway_enabled = var.service_gateway_enabled
tags = var.tags
vcn_cidr = var.vcn_cidr
vcn_dns_label = var.vcn_dns_label
vcn_name = var.vcn_name
}

As an example, if you need to create a public subnet, you also need a route table. Assuming you created the VCN as above, all you then need to do is pass in the route table id, which is helpfully returned by the vcn module:

resource "oci_core_subnet" "bastion" {  
route_table_id = module.vcn.ig_route_id
...
...
}

terraform-oci-bastion

terraform-oci-bastion helps you add a bastion host to your VCN. Just like the VCN module, it lets you control all aspects of the bastion host, such as the subnet mask of your bastion subnet, the shape of the bastion instance and its timezone. By default, it uses Oracle Autonomous Linux so you can use Ksplice to detect exploit attempts, and you can optionally enable an OCI Notifications topic. You can also use your own image if you wish.

There are a couple of developer features that will help while you are actively developing, e.g.:

bastion_upgrade      = false
notification_enabled = false

Setting these to false ensures that:

  1. the bastion host does not run a yum update, so it becomes available much faster and any dependent infrastructure provisioning can proceed sooner.
  2. no notification topic is created, so there is nothing extra to destroy later.

Together, they speed up setup and tear-down and improve your own productivity.

Using the bastion module is also easy:

module "bastion" {source  = "oracle-terraform-modules/bastion/oci"
version = "1.0.2"
# provider identity parameters
api_fingerprint = var.oci_base_provider.api_fingerprint
api_private_key_path = var.oci_base_provider.api_private_key_path
region = var.oci_base_provider.region
tenancy_id = var.oci_base_provider.tenancy_id
user_id = var.oci_base_provider.user_id
# general oci parameters
compartment_id = var.oci_base_general.compartment_id
label_prefix = var.oci_base_general.label_prefix
# network parameters
availability_domain = var.oci_base_bastion.availability_domain
bastion_access = var.oci_base_bastion.bastion_access
ig_route_id = module.vcn.ig_route_id
netnum = var.oci_base_bastion.netnum
newbits = var.oci_base_bastion.newbits
vcn_id = module.vcn.vcn_id
# bastion parameters
bastion_enabled = var.oci_base_bastion.bastion_enabled
bastion_image_id = var.oci_base_bastion.bastion_image_id
bastion_shape = var.oci_base_bastion.bastion_shape
bastion_upgrade = var.oci_base_bastion.bastion_upgrade
ssh_public_key = ""
ssh_public_key_path = var.oci_base_bastion.ssh_public_key_path
timezone = var.oci_base_bastion.timezone
# notification
notification_enabled = var.oci_base_bastion.notification_enabled
notification_endpoint = var.oci_base_bastion.notification_endpoint
notification_protocol = var.oci_base_bastion.notification_protocol
notification_topic = var.oci_base_bastion.notification_topic
# tags
tags = var.oci_base_bastion.tags
}

All you then need to do is ensure that your variable values are set in your variable file.
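
For example, a minimal terraform.tfvars might look like the sketch below. The exact keys each object expects are defined by the module’s variables; the values here are purely illustrative:

# terraform.tfvars -- illustrative values only
oci_base_bastion = {
  availability_domain = 1
  bastion_access      = "ANYWHERE"
  bastion_enabled     = true
  bastion_image_id    = "Autonomous"
  bastion_shape       = "VM.Standard.E2.1"
  bastion_upgrade     = false
  netnum              = 32
  newbits             = 13
  ssh_public_key_path = "~/.ssh/id_rsa.pub"
  timezone            = "America/Los_Angeles"
  ...
}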

terraform-oci-operator

terraform-oci-operator creates a private subnet and a host in it. The host comes pre-installed with the oci-cli as well as the oci-python-sdk. You can also add your own tools or SDKs later if you wish, as we do in the terraform-oci-oke project, where we install kubectl and helm on the operator. Similarly, in the terraform-oci-olcne project, we add a number of OLCNE tools.

You can give the host instance_principal access, which means that when you log in to the operator host, you don’t need to store your OCI API keys there in order to run oci-cli commands or oci-sdk scripts.

By default, the operator module grants the host the full range of permissions in the compartment you choose, so you should carefully evaluate whether you want to enable this. You may also want to lower the permission level; currently, you’ll have to change the code that creates the policy itself, but we’ll look at making that configurable in the future. Note that there’s no notification for the operator yet; at the moment it’s just a placeholder. If you wish to have notifications for the operator host, you can extend the module and add them.
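
For reference, enabling instance_principal generally boils down to a dynamic group and a policy along the lines of the sketch below. This is not the module’s actual code; the names and the matching rule are hypothetical:

resource "oci_identity_dynamic_group" "operator" {
  compartment_id = var.tenancy_id  # dynamic groups live at the tenancy level
  name           = "operator-instances"  # hypothetical name
  description    = "instances allowed to act as the operator"
  matching_rule  = "ALL {instance.compartment.id = '${var.compartment_id}'}"
}

resource "oci_identity_policy" "operator" {
  compartment_id = var.compartment_id
  name           = "operator-policy"  # hypothetical name
  description    = "permissions granted to the operator host"

  # 'manage all-resources' is the broad default discussed above; replace it
  # with narrower verbs and resource types to lower the permission level.
  statements = [
    "Allow dynamic-group operator-instances to manage all-resources in compartment id ${var.compartment_id}",
  ]
}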

module "operator" {source  = "oracle-terraform-modules/operator/oci"
version = "1.0.7"
# provider identity parameters
api_fingerprint = var.api_fingerprint
api_private_key_path = var.api_private_key_path
region = var.region
tenancy_id = var.tenancy_id
user_id = var.user_id
# general oci parameters
compartment_id = var.compartment_id
label_prefix = var.label_prefix
# network parameters
availability_domain = var.availability_domain
nat_route_id = module.vcn.nat_route_id
netnum = var.netnum
newbits = var.newbits
vcn_id = module.vcn.vcn_id
# operator parameters
operator_enabled = var.operator_enabled
operator_image_id = var.operator_image_id
operator_instance_principal = var.enable_instance_principal
operator_shape = var.operator_shape
operator_upgrade = var.operator_upgrade
ssh_public_key = ""
ssh_public_key_path = var.ssh_public_key_path
timezone = var.timezone
# notification
notification_enabled = var.notification_enabled
notification_endpoint = var.notification_endpoint
notification_protocol = var.notification_protocol
notification_topic = var.notification_topic
# tags
tags = var.oci_base_operator.tags
}

We created the operator module because:

  1. in some cases, we need to run post-provisioning commands
  2. we wanted to give administrators a place where they can run oci commands safely, without the need to transport and store keys on an internet-facing host
  3. we wanted to do (1) and (2) without requiring users to install additional tools and add-ons locally, e.g. kubectl and helm

We use the operator host in the terraform-oci-oke module to install additional add-ons such as calico, and in the terraform-oci-olcne module to finish the installation steps.

terraform-oci-base

terraform-oci-base is a composite module. It assembles the three modules above (vcn, bastion and operator) into one, so you don’t have to do the assembling yourself.

Similar to the others, the base module is easy to use:

module "base" {  
source = "oracle-terraform-modules/base/oci"
version = "1.2.3"
# general oci parameters
oci_base_general = local.oci_base_general
# identity
oci_base_provider = local.oci_base_provider
# vcn parameters
oci_base_vcn = local.oci_base_vcn
# bastion parameters
oci_base_bastion = local.oci_base_bastion
# operator server parameters
oci_base_operator = local.oci_base_operator
}

The terraform-oci-base module uses Terraform complex types as inputs. In your own project, you can declare your root variables as simple types and assemble the complex types as locals, which you can then pass as parameters, as shown above.
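
For example, here is a minimal sketch of how one of those locals could be assembled from simple root variables, reusing the vcn parameters shown earlier (the exact attributes each object expects are defined by the base module):

locals {
  oci_base_vcn = {
    internet_gateway_enabled = var.internet_gateway_enabled
    nat_gateway_enabled      = var.nat_gateway_enabled
    service_gateway_enabled  = var.service_gateway_enabled
    tags                     = var.tags
    vcn_cidr                 = var.vcn_cidr
    vcn_dns_label            = var.vcn_dns_label
    vcn_name                 = var.vcn_name
  }
}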

Where do we use the base module? Right now, we are using it in terraform-oci-oke and terraform-oci-olcne.

terraform-oci-oke

terraform-oci-oke is an end-to-end OKE (managed Kubernetes) provisioning module. It sets a lot of sensible defaults, but it also allows you to configure it according to your own needs.

It has a significant number of features:

  • public/private workers
  • support for mixed workloads
  • internal/public or mixed load balancers
  • installation of popular addons such as calico
  • encryption of etcd using keys in OCI Vault
  • creation of a secret for OCIR
  • installation of tools such as kubectl and helm on the operator, with corresponding aliases
  • creation of service accounts on the Kubernetes cluster which you can use for CI/CD

The main aim of this module is to make a Kubernetes admin’s life easier. Painless provisioning, destroying and reuse of existing infrastructure are the Prime Directives of this project. We try to strike a balance between adding features and keeping the project and the Kubernetes cluster as lean and mean as possible.

This makes it easier to include more add-ons, either in the project itself or in higher-level projects. You can thus build on it to install more add-ons in the OKE cluster, such as the OCI Service Broker, which lets you consume services such as OCI Autonomous Database, Streaming and Object Storage, or your own favourite ingress controllers.

terraform-oci-olcne

terraform-oci-olcne provisions an OLCNE environment. OLCNE is an opinionated take on cloud native, consisting of, among others, Kubernetes for orchestration and Istio as a service mesh. At the moment, this Terraform module is available as a technical preview. The objective is to allow those interested in OLCNE to use OCI for their evaluation.

Reusing

We enable reuse by ensuring each module has the necessary outputs, e.g. the vcn module returns the ids of the VCN, the NAT gateway and the route tables for the Internet and NAT gateways. The bastion module returns the public IP address of the bastion host, and the operator module returns the private IP address of the operator host. Since the base module is a composite, it returns all of those, plus convenient commands to ssh to the bastion or the operator.
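
If you want these values handy after terraform apply, you can re-export them from your own root module. A minimal sketch, assuming the base module names its outputs after the values described above:

output "bastion_public_ip" {
  value = module.base.bastion_public_ip  # assumed output name
}

output "operator_private_ip" {
  value = module.base.operator_private_ip  # assumed output name
}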

Using the outputs, we can now do more complex things. For example, in the OKE module we wanted to install calico in the Kubernetes cluster, so we use a null_resource that connects to the operator via the bastion and performs the installation for us:

resource "null_resource" "calico_enabled" {
  connection {
    host                = var.oke_operator.operator_private_ip
    ...
    bastion_host        = var.oke_operator.bastion_public_ip
    bastion_user        = "opc"
    bastion_private_key = ...
    ...
  }
  ...
}

Similarly, in the OLCNE module, rather than creating a separate operator host, we reuse the existing operator created in the base module. We then extend the operator with additional network configuration such as NSGs (Network Security Groups), retrieving private ssh keys, etc.
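
As a rough illustration of that kind of extension, an NSG and a rule can be created alongside the operator as in the sketch below. This is not the OLCNE module’s actual code; names and CIDRs are hypothetical:

resource "oci_core_network_security_group" "operator" {
  compartment_id = var.compartment_id
  vcn_id         = module.base.vcn_id  # assumed output name
  display_name   = "operator-nsg"  # hypothetical
}

resource "oci_core_network_security_group_security_rule" "operator_ssh" {
  network_security_group_id = oci_core_network_security_group.operator.id
  direction                 = "INGRESS"
  protocol                  = "6"  # TCP
  source                    = "10.0.0.0/16"  # e.g. the VCN CIDR
  source_type               = "CIDR_BLOCK"

  tcp_options {
    destination_port_range {
      min = 22
      max = 22
    }
  }
}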

GitHub Cloning vs GitHub Module vs Registry

When would you clone a module from GitHub into your own Terraform project, use GitHub as the source of your Terraform module, or use the registry?

Here’s a simple guideline:

  • Clone the GitHub repo when you need to change the code or add functionality to the underlying module
  • Use a GitHub source when you need changes that have not yet made it into a release and you are impatient to try them (see the example below)
  • Use the registry modules if you are only looking to reuse them.
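
For reference, here is what the last two options look like for the vcn module (the git ref is illustrative):

# from the registry, pinned to a release
module "vcn" {
  source  = "oracle-terraform-modules/vcn/oci"
  version = "1.0.1"
  ...
}

# straight from GitHub, e.g. to pick up unreleased changes
module "vcn" {
  source = "github.com/oracle-terraform-modules/terraform-oci-vcn?ref=master"
  ...
}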

IP Address Management

A big challenge when designing cloud infrastructure is IP address management. One has to balance scalability against blast radius, the number of subnets against their size, and security against ease of management, all while avoiding overlapping subnets. Add to that the fact that users may also want to reuse existing infrastructure (VCNs, subnets), peer with other VCNs, deploy in hybrid mode or use transit routing.

Choosing the right IP range for your VCNs and subnets is therefore very important. Terraform provides the cidrsubnet function to carve up your network programmatically. I strongly recommend reading a refresher on networks, subnets and CIDR, as well as one of the excellent articles on using the cidrsubnet function.

In the terraform-oci-oke and terraform-oci-olcne modules, we show how to use this function programmatically.

For each project, we need the following subnets (approximate size in parentheses):

  • OKE: bastion (3), operator (3), workers (5000, the maximum number of worker nodes supported by Kubernetes), internal load balancer (30) and public load balancer (30)
  • OLCNE: bastion (3), operator (3), master (5), workers (5000), internal load balancer (30) and public load balancer (30)

Why do we need 3 IP addresses for the bastion and the operator? In the future, we may want to use a reserved public IP address, an instance pool and a private floating IP address. As for the load balancers, a default of around 30 gives us plenty of options. All of this is of course up to the user, but we want to start with a few sensible defaults while giving ourselves plenty of room to grow. Also, remember that OCI reserves the first two and the last IP addresses in each subnet.

Using the IP Calc tool and terraform console, we can start carving out the subnets. In Terraform terms, given the CIDR block of the VCN, what we are looking for are the newbits and netnum parameters for each subnet.

This is what we use as default for terraform-oci-oke:

variable "netnum" {  description = "0-based index of the subnet when the network is masked with the newbit. Used as netnum parameter for cidrsubnet function."  default = {
bastion = 32
int_lb = 16
operator = 33
pub_lb = 17
workers = 1
}
type = map
}
variable "newbits" { description = "The masks for the subnets within the virtual network. Used as newbits parameter for cidrsubnet function." default = {
bastion = 13
lb = 11
operator = 13
workers = 2
}
type = map
}
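
With those defaults and a hypothetical VCN CIDR of 10.0.0.0/16, a quick session in terraform console shows where each subnet lands:

> cidrsubnet("10.0.0.0/16", 13, 32)
"10.0.1.0/29"    # bastion: 8 addresses, 5 usable after OCI's 3 reserved
> cidrsubnet("10.0.0.0/16", 13, 33)
"10.0.1.8/29"    # operator
> cidrsubnet("10.0.0.0/16", 2, 1)
"10.0.64.0/18"   # workers: 16,384 addresses, ample room for 5000 nodes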

We set the defaults in a map, and we can now pass their respective values to the base module (which will in turn pass them to the bastion and operator modules) and to the network module in terraform-oci-oke.

In the network module, we look up the values for the load balancers and workers and use the cidrsubnet function to create the subnets:

# define in locals
locals {
  worker_subnet = cidrsubnet(var.oke_network_vcn.vcn_cidr, var.oke_network_vcn.newbits["workers"], var.oke_network_vcn.netnum["workers"])
}

# reference the local value
resource "oci_core_subnet" "workers" {
  cidr_block = local.worker_subnet
  ...
}

Now that we have a way to carve up the VCN, we can control these values using a Terraform variable file.
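
For instance, to carve /28 load balancer subnets instead of /27, you could override the maps in your terraform.tfvars (illustrative values; when you change newbits, remember to recompute the netnum indices so the subnets don't overlap):

# terraform.tfvars -- illustrative overrides
newbits = {
  bastion  = 13
  lb       = 12  # /28 load balancer subnets in a /16 VCN
  operator = 13
  workers  = 2
}

netnum = {
  bastion  = 32
  int_lb   = 32
  operator = 33
  pub_lb   = 33
  workers  = 1
}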

Conclusion

You can try the modules above, build upon them and publish your own. They take care of a lot of repetitive tasks and infrastructure requirements.

If you have additional requirements, please reach out to us on GitHub. Your time and contributions are greatly appreciated.
