Serverless Deployment on Huawei Cloud with Terraform: How It Works and Practical Example

Burak Ovalı · Published in Huawei Developers · Mar 8, 2023
Scale-out Redis with FunctionGraph

Introduction

With its ever-expanding ecosystem, Huawei is becoming an important cloud provider. This article shows how to handle serverless deployment on Huawei Cloud, which offers more than 200 services. To make the subject clearer, the article proceeds with a demo, so we both get to know Huawei Cloud services closely and see how they are used without the need for deep cloud knowledge.

Case

Let’s consider a scenario for a clearer understanding of the subject. For example, suppose a customer needs a Redis service and wants to scale out this Redis instance automatically based on memory usage. How can we solve this problem with the serverless paradigm?

Solution Architecture

First of all, we need a Redis service (DCS for Redis) for this scenario. We need an O&M service (Cloud Eye) that will monitor this service and generate an alarm when conditions are exceeded. We need a message notification service (SMN) that will send notifications when the alarm is triggered. Finally, we need a service (FunctionGraph) that provides a serverless deployment model. In this way, we can determine the services needed for this case. Let’s examine the relationship between these services in more detail. The following architecture is an example for this case:

Scale-out Redis with FunctionGraph
  • Step 1: Cloud Eye constantly monitors the memory usage of the DCS instance. We can create alarm rules in the Cloud Eye service, and the alarm is triggered according to these rules.
  • Step 2: The SMN service provides an instant notification when the alarm is active on the Cloud Eye side.
  • Step 3: When SMN sends a notification, this notification triggers the function in the FunctionGraph service.
  • Step 4: The triggered function increases the memory of the instance using the DCS APIs, that is, it scales out. The Python SDK is used at this stage.

Service Overview

  1. Distributed Cache Service for Redis (DCS for Redis) is an online, distributed, in-memory cache service.
  2. Cloud Eye is a multi-dimensional resource monitoring service. You can use Cloud Eye to monitor resources, set alarm rules, identify resource exceptions, and quickly respond to resource changes.
  3. Simple Message Notification (SMN) is a reliable and flexible large-scale message notification service. It pushes messages to specified subscription endpoints and greatly reduces system coupling.
  4. FunctionGraph hosts and computes functions in a serverless context. It automatically scales to suit fluctuations in resource demands during peaks and spikes while requiring no reservation of dedicated servers or capacities.

Deploy

We made assumptions about the case and came up with a solution. Let’s deploy this architecture on Huawei Cloud. From this point on, instead of using the Huawei Cloud console, we will proceed with Terraform, which is an Infrastructure as Code (IaC) tool.

Let’s deploy the solution by following the steps in the architecture in order. Of course, we need to create some Terraform files before that.

Let’s define a variable for Terraform. In this way, we can isolate critical information from the source code.

Isolate Critical Variables
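A minimal sketch of such a file, assuming a terraform.tfvars that only assigns values and stays out of version control (all values are placeholders):

# terraform.tfvars — kept out of the repository so credentials never reach the source code
region     = "ap-southeast-3"
access_key = "<YOUR_ACCESS_KEY>"
secret_key = "<YOUR_SECRET_KEY>"
# plus any other values you prefer to keep out of the code (VPC, subnet, Redis password, ...)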

Let’s define the input variables we will use.

Input Variables
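The corresponding declarations might look like this minimal sketch (names and defaults are illustrative, not the article’s exact file):

# variables.tf — input variables referenced by the rest of the configuration
variable "region" {
  description = "Huawei Cloud region to deploy into"
  type        = string
  default     = "ap-southeast-3"
}

variable "access_key" {
  description = "Access key of the Huawei Cloud account"
  type        = string
  sensitive   = true
}

variable "secret_key" {
  description = "Secret key of the Huawei Cloud account"
  type        = string
  sensitive   = true
}

variable "vpc_id" {
  description = "ID of the VPC that will host the DCS instance"
  type        = string
}

variable "subnet_id" {
  description = "ID of the subnet that will host the DCS instance"
  type        = string
}

variable "redis_password" {
  description = "Password of the DCS Redis instance"
  type        = string
  sensitive   = true
}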

We defined the variables for later use. Finally, let’s define a provider for Terraform and configure its settings.

Huawei Cloud Provider in Terraform
Provider Configuration
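A minimal sketch of the provider setup, assuming the official huaweicloud/huaweicloud provider and the variables defined above:

# providers.tf — register the Huawei Cloud provider and authenticate it with our variables
terraform {
  required_providers {
    huaweicloud = {
      source  = "huaweicloud/huaweicloud"
      version = ">= 1.40.0"
    }
  }
}

provider "huaweicloud" {
  region     = var.region
  access_key = var.access_key
  secret_key = var.secret_key
}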

After these steps, we can start using Huawei Cloud resources.

In the first step, we stated that we need to create a Redis instance. So let’s create a low-flavor Redis instance; 0.125 GB of memory will be enough for this article. While creating a Redis instance, we can configure the settings in different ways according to need. Since this article is an overview, let’s configure only the settings we need. Keep it simple :)

The source code for the DCS for Redis instance is as follows:

DCS for Redis Instance Configuration
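A minimal sketch of this configuration, following the description below (the availability zone and instance name are illustrative; network IDs come from the variables defined earlier):

# dcs.tf — pick a single-node 0.125 GB flavor and create the Redis 4.0 instance
data "huaweicloud_dcs_flavors" "single_flavors" {
  cache_mode = "single"
  capacity   = 0.125
}

resource "huaweicloud_dcs_instance" "redis" {
  name               = "redis-demo"
  engine             = "Redis"
  engine_version     = "4.0"
  capacity           = 0.125
  flavor             = data.huaweicloud_dcs_flavors.single_flavors.flavors[0].name
  availability_zones = ["ap-southeast-3a"]
  vpc_id             = var.vpc_id
  subnet_id          = var.subnet_id
  password           = var.redis_password
}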

A data block requests that Terraform read from a given data source (“huaweicloud_dcs_flavors”) and export the result under the given local name (“single_flavors”). In this step, we have determined the flavor for the DCS instance. We could have done this step under the resource block.

Under the resource block, we configured the basic settings we need for Redis. Here we created a Redis instance with 0.125 GB of memory that uses the Redis 4.0 engine.

Now that the left side of the first step has been created, let’s create a Cloud Eye alarm rule and monitor the Redis instance according to its memory usage.

Cloud Eye Configuration
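A minimal sketch of the alarm rule, assuming the DCS memory-usage metric (the metric name, threshold, unit, and period are placeholders to adapt; the arguments are explained below):

# ce.tf — alarm rule that watches the memory usage of the DCS instance
resource "huaweicloud_ces_alarmrule" "dcs_memory_alarm" {
  alarm_name = "dcs-memory-usage-alarm"

  metric {
    namespace   = "SYS.DCS"       # Cloud Eye namespace for DCS
    metric_name = "memory_usage"  # assumed metric; see "Services Interconnected with Cloud Eye"

    dimensions {
      name  = "dcs_instance_id"
      value = huaweicloud_dcs_instance.redis.id
    }
  }

  condition {
    period              = 300
    filter              = "average"
    comparison_operator = ">="
    value               = 80    # placeholder threshold
    unit                = "%"   # adjust metric, value, and unit to your monitoring target
    count               = 1
  }

  alarm_action_enabled = false  # enabled later, when the SMN alarm action is added
}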

Let’s explain the above arguments step by step:

  • alarm_name = Specifies the name of an alarm rule.
  • metric = Specifies the alarm metrics.
  • metric.namespace = Specifies the namespace in service.item format. For details, see Services Interconnected with Cloud Eye.
  • metric.metric_name = Specifies the metric name. For details, see Services Interconnected with Cloud Eye.
  • metric.dimensions = Specifies the list of metric dimensions. For details, see Services Interconnected with Cloud Eye.
  • metric.dimensions.name = Specifies the dimension name.
  • metric.dimensions.value = Specifies the dimension value. In this step, we assign the ID of the DCS instance we created as the value.
  • condition = Specifies the alarm triggering condition.
  • condition.period = Specifies the alarm checking period in seconds. The value can be 0, 1, 300, 1200, 3600, 14400, and 86400.
  • condition.filter = Specifies the data rollup method. The value can be max, min, average, sum, or variance.
  • condition.comparison_operator = Specifies the comparison condition of alarm thresholds. The value can be >, =, <, >=, or <=.
  • condition.value = Specifies the alarm threshold.
  • condition.unit = Specifies the data unit. Changing this creates a new resource. For details, see Services Interconnected with Cloud Eye.
  • condition.count = Specifies the number of consecutive occurrence times. The value ranges from 1 to 5.

To summarize the steps above, we created an alarm rule using the Cloud Eye service; this alarm is triggered when the memory usage of the DCS instance reaches 0.80 GB or more.

In the second step, let’s create a topic in the SMN service so that SMN provides an instant notification when the alarm is triggered. We can also receive notifications through different channels such as email and SMS by adding a subscription to the SMN topic. For the SMN service, it is enough to add a few lines of code to our previous ce.tf file.

Cloud Eye’s Alarm Rule Configuration
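A sketch of these additions to ce.tf, assuming the alarm rule from the previous step (topic names are illustrative; only the new lines of the alarm rule are shown):

# ce.tf (additions) — SMN topic that the alarm will notify
resource "huaweicloud_smn_topic" "scale_out" {
  name         = "dcs-scale-out"
  display_name = "DCS scale-out notifications"
}

resource "huaweicloud_ces_alarmrule" "dcs_memory_alarm" {
  # ... alarm_name, metric, and condition blocks unchanged from the previous step ...

  alarm_action_enabled = true

  alarm_actions {
    type              = "notification"
    notification_list = [huaweicloud_smn_topic.scale_out.topic_urn]
  }
}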

We added the notification feature with the first four lines and the last four lines. First, we create a topic in the SMN service, and in the last lines we set an action for when the alarm is triggered. In this way we provide an instant notification, which will act as the trigger for our function.

I will explain steps 3 and 4 together. In step 3, we stated that we provide an instant notification with SMN as the alarm action. For this, let’s first create a Python function using the FunctionGraph service. With this function we will invoke the DCS APIs through the Python SDK. You can get the SDK packages from Huawei Cloud’s official repository; we need the huaweicloud-sdk-core and huaweicloud-sdk-dcs dependencies in this demo.

We need to upload the dependencies required by the function to the dependency section. There are several ways to do this; for this article, I prefer the OBS method.

What is this OBS?

Object Storage Service (OBS) is a stable, secure, efficient, and easy-to-use cloud storage service that is scalable and compatible, allowing storage of any amount of unstructured data in any format. In this solution, we zip our dependencies and source code, upload them from local to OBS, and create our function from the source code uploaded to OBS. The following architecture illustrates this structure:

Create Functions and Dependencies from OBS/Bucket

As you can see, our source code is ready locally. Now let’s upload it to OBS with Terraform.

Upload Dependencies and Source Codes to OBS/Bucket
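A minimal sketch of these three resources (bucket, key, and local file names are illustrative):

# obs.tf — private bucket plus the two zip packages uploaded from local disk
resource "huaweicloud_obs_bucket" "functions" {
  bucket = "fgs-artifacts-demo"
  acl    = "private"
}

resource "huaweicloud_obs_bucket_object" "dependency" {
  bucket = huaweicloud_obs_bucket.functions.bucket
  key    = "huaweicloud-sdk-dcs.zip"
  source = "packages/huaweicloud-sdk-dcs.zip"  # zipped Python SDK dependencies
}

resource "huaweicloud_obs_bucket_object" "source_code" {
  bucket = huaweicloud_obs_bucket.functions.bucket
  key    = "scale_out.zip"
  source = "src/scale_out.zip"                 # zipped function source code
}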

We created a private OBS bucket with the first resource. With the second and third resources, we uploaded the dependencies and source code as objects to the bucket we created in OBS. After this step, we can create our function with FunctionGraph.

Create Function
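A sketch of the function, its dependency, and the SMN trigger, referencing the resources created above (the article fetches the existing bucket, packages, and topic with data blocks instead; names, handler, and region are illustrative, and the arguments are explained below):

# fgs.tf — custom dependency built from the zip package in OBS
resource "huaweicloud_fgs_dependency" "dcs_sdk" {
  name    = "huaweicloud-sdk-dcs"
  runtime = "Python3.6"
  link    = "https://${huaweicloud_obs_bucket.functions.bucket}.obs.${var.region}.myhuaweicloud.com/${huaweicloud_obs_bucket_object.dependency.key}"
}

# The function itself, created from the source package in OBS
resource "huaweicloud_fgs_function" "scale_out" {
  name        = "dcs-scale-out"
  app         = "default"
  handler     = "index.handler"  # module.function entry point, illustrative
  memory_size = 128
  timeout     = 30
  runtime     = "Python3.6"
  code_type   = "obs"
  code_url    = "https://${huaweicloud_obs_bucket.functions.bucket}.obs.${var.region}.myhuaweicloud.com/${huaweicloud_obs_bucket_object.source_code.key}"
  depend_list = [huaweicloud_fgs_dependency.dcs_sdk.id]
}

# SMN trigger: the topic notified by the alarm invokes the function
resource "huaweicloud_fgs_trigger" "smn" {
  function_urn = huaweicloud_fgs_function.scale_out.urn
  type         = "SMN"

  smn {
    topic_urn = huaweicloud_smn_topic.scale_out.topic_urn
  }
}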

We obtain the existing bucket, source code/packages, and SMN data with data blocks. We create the function with the huaweicloud_fgs_function resource. Here’s an overview:

  • name = Specifies the name of the function.
  • app = Specifies the group to which the function belongs.
  • memory_size = Specifies the memory size (MB) allocated to the function.
  • runtime = Specifies the environment for executing the function.
  • timeout = Specifies the timeout interval of the function, which ranges from 3s to 900s.
  • handler = Specifies the entry point of the function.
  • code_type = Specifies the function code type, which can be: inline, zip, jar or obs.
  • code_url = Specifies the code URL. This parameter is mandatory when code_type is set to obs.
  • depend_list = Specifies the ID list of the dependencies.

The arguments are clearly defined. To summarize, we create a function with the Python 3.6 runtime using our source code and dependencies in OBS. Let me share the Python source code for those who are curious:

Python SDK Usage Example
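A rough sketch of such a function, using the huaweicloud-sdk-core and huaweicloud-sdk-dcs packages (the request/body class names should be verified against the installed SDK version; credentials, region, flavor, and instance ID are placeholders):

# Rough sketch: scale the DCS instance out via the DCS v2 API.
# Class names follow the SDK's generated naming; verify them against your
# huaweicloudsdkdcs version. All credentials/IDs below are placeholders.
from huaweicloudsdkcore.auth.credentials import BasicCredentials
from huaweicloudsdkdcs.v2 import DcsClient, ResizeInstanceRequest, ResizeInstanceBody
from huaweicloudsdkdcs.v2.region.dcs_region import DcsRegion


def handler(event, context):
    # In FunctionGraph, credentials are usually injected via environment
    # variables or an agency rather than hard-coded.
    credentials = BasicCredentials("<access_key>", "<secret_key>")

    client = (
        DcsClient.new_builder()
        .with_credentials(credentials)
        .with_region(DcsRegion.value_of("ap-southeast-3"))
        .build()
    )

    # Resize the instance to a larger flavor (values are illustrative).
    request = ResizeInstanceRequest(instance_id="<dcs_instance_id>")
    request.body = ResizeInstanceBody(spec_code="redis.single.xu1.large.4", new_capacity=4)
    return client.resize_instance(request)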

The above source code increases the DCS flavor from 0.125 GB to 4 GB when the DCS memory usage increases.

We create our dependency with the huaweicloud_fgs_dependency resource; this resource is actually created before huaweicloud_fgs_function. We also set a trigger for the function with the huaweicloud_fgs_trigger resource. This trigger is the SMN topic.

Conclusion

For the serverless approach, you have seen Huawei Cloud’s FunctionGraph service and how it interacts with other services. I explained the whole process with a real example. Of course, this solution can be improved in many ways; we may discuss these improvements in future articles.

If you want to get to know Huawei Cloud services better, you can register to use the free packages or test Huawei Cloud services in a lab environment.

Thanks for reading. Feel free to contact me on my LinkedIn account for further questions or comments.

References

Huawei Cloud’s Python SDK Github Repo:

Huawei Cloud’s Official Documentation:

Huawei Cloud’s Forum:

Terraform Documentation: https://registry.terraform.io/providers/huaweicloud/huaweicloud/latest/docs
