How Can We Run Containerized Applications on Huawei Cloud

Burak Ovalı · Huawei Developers · Apr 4, 2023

Cloud Container Engine — Huawei Cloud

Introduction

Docker introduced a new paradigm to container technology, and almost every application is now containerized with Docker. So how are these containers managed? The common answer is Kubernetes, open-source software that lets you deploy and manage containerized applications at scale. So where does CCE fit in? Although Kubernetes is easy to use by default, deploying its master and worker nodes in a production environment requires knowledge and time. This is where Huawei Cloud's CCE service helps.

Huawei Cloud Container Engine (CCE) for Kubernetes

CCE is a highly reliable, high-performance service through which enterprises can manage containerized applications. CCE supports elastic application scaling and native Kubernetes applications and tools, allowing you to easily set up a container runtime environment on the cloud.

Cloud Container Engine Architecture — Huawei Cloud

CCE Product Advantages:

  • High Performance: Containers can run directly on high-performance physical servers, delivering performance comparable to physical machines.
  • Fast Creation: Thanks to SDI technology, a physical machine can be created in 5 minutes, and a container cluster can be created in 6 minutes.
  • Flexible Scaling: Applications can be automatically scaled in a matter of seconds to meet fluctuating demands.
  • Support for Stateful Containerized Applications: CCE works closely with storage services to provide highly available volumes for data persistence. This suits stateful containerized applications, which save data or state from each session.
  • High Availability: High Availability is set up on both the cluster control plane and across availability zones. Graceful scale-out and scale-in of containerized applications ensure high service continuity.
  • Open Source: CCE is compatible with native Kubernetes/Docker versions and promptly incorporates the latest improvements from the communities.

As you can see, CCE is an essential service with many important advantages. With the CCE service, you can delegate knowledge-intensive configurations of Kubernetes to Huawei.

Why Cloud Container Engine (CCE)?

CCE is a one-stop platform integrating compute, networking, storage, and many other services. It supports heterogeneous computing architectures such as GPU, NPU, and Arm. Supporting multi-AZ and multi-region disaster recovery, CCE ensures high availability of Kubernetes clusters.

The CCE service has many use cases. The most common are:

  • Microservices: A monolithic application is decoupled into multiple lightweight modules. Each module can be independently upgraded or scaled in quick response to market changes.
  • Environment independence: Containers isolate applications from their environments, ensuring a consistent environment across the development, testing, and O&M phases.
  • Auto scaling in seconds: CCE can scale container instances automatically to accommodate spikes in application load. Resources are also allocated at a fine granularity so applications make optimal use of them.
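The second-level autoscaling described above uses standard Kubernetes mechanisms. As a hedged sketch, a HorizontalPodAutoscaler manifest like the following (the workload name and thresholds are illustrative, not from this article) scales a Deployment between 2 and 10 pods based on CPU load:

```yaml
# Minimal HorizontalPodAutoscaler (standard autoscaling/v2 API;
# names and thresholds are illustrative)
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: django-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: django-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```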

How to Use Cloud Container Engine (CCE)

You can use CCE through the console, kubectl, or APIs. Complete the following tasks to get started with CCE:

Usage of Cloud Container Engine — Huawei Cloud
  1. Register a HUAWEI CLOUD account and grant permissions to IAM users. A HUAWEI CLOUD account has permission to use CCE, but the IAM users created under it do not. You need to grant the permission to IAM users manually.
  2. Create a cluster.
  3. Create a Workload from an Image or Chart.
  4. View workload status and logs. Upgrade, scale and monitor the workload.

Preparations

Before using CCE, make the following preparations:

  • Registering a Huawei Cloud Account

If you already have a HUAWEI CLOUD account, skip this part.

  • Topping Up Your Account

Ensure that your account has sufficient balance. For details about CCE pricing information, see Product Pricing Details.

  • Creating an IAM User (Optional)

If you want to allow multiple users to manage your resources without sharing your password or keys, you can create users using IAM and grant permissions to the users.

  • Creating a VPC

A VPC provides an isolated, configurable, and manageable virtual network for CCE clusters. Before creating the first cluster, ensure that a VPC has been created. For details, see Creating a VPC. If you already have a VPC available, skip this step.

  • Creating a Key Pair (Optional)

The cloud platform uses public key cryptography to protect the login information of your CCE nodes. Passwords or key pairs are used for identity authentication during remote login to nodes. If you choose the key pair login mode, you need to specify the key pair name and provide the private key when logging in to CCE nodes over SSH. For details, see Creating a Key Pair. If you choose the password login mode, skip this task.

Deploying a Django App to a Kubernetes Environment (DEMO)

ELB Service Architecture — CCE

We will complete the following steps:

  1. Dockerize and Upload the Image to SWR
  2. Create a Cluster
  3. Create a Node
  4. Create a Deployment from an Image
  5. Create a LoadBalancer Service

Dockerize and Upload the Image to SWR

The application must be containerized before it can be deployed to a Kubernetes environment, so let's turn our application into a container image. In this demo, the application is developed with Python's Django framework; the process may vary depending on the application or the language it is developed in. Docker is used for containerization.

The Dockerfile is as follows:

Dockerized Django

The basic Dockerfile is as above.
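For readers without access to the screenshot, a minimal Dockerfile for a Django project might look like this (the Python version, port, and entry command are illustrative assumptions, not taken from the original image):

```dockerfile
# Minimal Dockerfile for a Django app (illustrative sketch)
FROM python:3.10-slim

WORKDIR /app

# Install dependencies first to take advantage of Docker layer caching
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 8000

# The development server is shown for simplicity; a WSGI server
# such as gunicorn is typically used in production
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
```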

We need to run the build command on the command line, but before this step, let’s get to know Huawei Cloud’s Software Repository for Container (SWR) service.

SWR enables you to securely manage container images to build containerized applications. Check out this article for details on using SWR.

In summary, we need to create an organization in the SWR service. In this demo, the cce-demo organization was created as an example.

Software Repository for Container — Huawei Cloud

Now let's build and tag the image on the command line.

We will store the Docker image in Huawei's SWR, so the tag must follow the Huawei format. Run the following command to tag the Django app image:

docker tag [Image name 1:tag 1] [Image repository address]/[Organization name]/[Image name 2:tag 2]

In the preceding command:

  • [Image name 1:tag 1]: name and tag of the image to be uploaded.
  • [Image repository address]: The domain name at the end of the login command.
  • [Organization name]: name of the organization created.
  • [Image name 2:tag 2]: desired image name and tag.
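The tag format above can be sketched as a small shell snippet, using the values from this demo:

```shell
# Compose the SWR image reference from its parts
# (region, organization, and image name are the values used in this demo)
REGION="tr-west-1"
ORG="cce-demo"
IMAGE="django-app"
TAG="latest"
SWR_ENDPOINT="swr.${REGION}.myhuaweicloud.com"
IMAGE_REF="${SWR_ENDPOINT}/${ORG}/${IMAGE}:${TAG}"
echo "${IMAGE_REF}"
# → prints swr.tr-west-1.myhuaweicloud.com/cce-demo/django-app:latest
```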

To push the image, you must obtain the login command generated by SWR and run it in the terminal of a system with the Docker engine installed. See here for details.

Example for this tutorial:

docker build -t swr.tr-west-1.myhuaweicloud.com/cce-demo/django-app .

Push the image to the image repository (SWR):

docker push swr.tr-west-1.myhuaweicloud.com/cce-demo/django-app:latest

To view the pushed image, go to the SWR console and refresh the My Images page.

Software Repository for Container — Huawei Cloud

Create a Cluster

Select the CCE service from the service list, then click the Create button under CCE Cluster. Let's take a quick look at the configurations in the window that appears.

Master Node CCE — Huawei Cloud

Basic Settings:

  • Cluster Name: name of the cluster to be created.
  • Enterprise Project: displayed only for enterprise users who have enabled Enterprise Project Management.
  • Cluster Version: the cluster's Kubernetes baseline version. The latest version is recommended.
  • Cluster Scale: maximum number of worker nodes the cluster can manage. If you select 50 nodes, the cluster can manage up to 50 worker nodes.
  • HA: deployment mode of the master nodes. Random deploys master nodes in random AZs.
  • Network Model: retain the default settings.
  • VPC: the VPC where the cluster will be located.
  • Master Node Subnet: the subnet where the cluster's master nodes are located.

After you complete the basic settings, the cluster will be created. After 5–6 minutes, its status will change to Running.

Master Node CCE — Huawei Cloud

Create a Node

We have created the master nodes. Now let's create a worker node. Click Nodes under Resources in the console, then click the Create Node button at the top right.

Worker Node CCE — Huawei Cloud
  • AZ: Retain the default value.
  • Node Type: Select Elastic Cloud Server (VM).
  • Specifications: Select node specifications that fit your business needs.
  • OS: Select the operating system (OS) of the nodes to be created.
  • Node Name: Enter a node name.
  • Login Mode: Use a password or key pair to log in to the node.
  • If the login mode is Password: the default username is root. Enter and confirm the password for logging in to the node. Remember this password; if you forget it, the system cannot retrieve it and you will have to reset it.
  • If the login mode is Key Pair: select a key pair for logging in to the node and select the check box to acknowledge that you have obtained the key file and that without it you cannot log in to the node. A key pair is used for identity authentication when you remotely log in to a node. If no key pair is available, click Create Key Pair. For details on how to create a key pair, see Creating a Key Pair.
Worker Node CCE — Huawei Cloud
  • System Disk: Set disk type and capacity based on site requirements. The default disk capacity is 50 GB.
  • Data Disk: Set disk type and capacity based on site requirements. The default disk capacity is 100 GB.
  • VPC: Use the default value, that is, the VPC selected during cluster creation.
  • Node Subnet: Select a subnet in which the node runs.

After you complete the settings, the node will be created. After 5–6 minutes, its status will change to Running.

Worker Node CCE — Huawei Cloud

Create a Deployment

The following is the procedure for creating a containerized workload from a container image. In the navigation pane, choose Workloads. Then, click Create Workload.

Deployment CCE — Huawei Cloud
  • Workload Type: Select Deployment.
  • Workload Name: Set it to django-app.
  • Pods: Set the number of pod instances to run.
Deployment CCE — Huawei Cloud
Deployment CCE — Huawei Cloud
  • Container Name: Name the container.
  • Image Name: Click Select Image and select the image used by the container.
  • Image Tag: Select the image tag to be deployed.
  • Pull Policy: Image update or pull policy. If you select Always, the image is pulled from the image repository each time. If you do not select Always, the existing image of the node is preferentially used. If the image does not exist, the image is pulled from the image repository.
  • CPU Quota Request: minimum number of CPU cores required by a container. The default value is 0.25 cores.
  • CPU Quota Limit: maximum number of CPU cores available for a container.
  • Memory Quota Request: minimum amount of memory required by a container. The default value is 512 MiB.
  • Memory Quota Limit: maximum amount of memory available for a container. When memory usage exceeds the specified memory limit, the container will be terminated.
  • Privileged Container: Programs in a privileged container have certain privileges.
  • Init Container: Indicates whether to use the container as an init container.
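The console fields above map directly onto a standard Kubernetes Deployment manifest. A hedged sketch (the replica count and resource limits are illustrative; the image path follows this demo's SWR example):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: django-app                              # Workload Name
spec:
  replicas: 2                                   # Pods (illustrative count)
  selector:
    matchLabels:
      app: django-app
  template:
    metadata:
      labels:
        app: django-app
    spec:
      containers:
        - name: django-app                      # Container Name
          # Image Name and Image Tag, as pushed to SWR earlier
          image: swr.tr-west-1.myhuaweicloud.com/cce-demo/django-app:latest
          imagePullPolicy: Always               # Pull Policy
          resources:
            requests:
              cpu: 250m                         # CPU Quota Request (0.25 cores)
              memory: 512Mi                     # Memory Quota Request
            limits:
              cpu: 500m                         # CPU Quota Limit (illustrative)
              memory: 1Gi                       # Memory Quota Limit (illustrative)
```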

After you complete the settings, the Deployment will be created. After about a minute, its status will change to Running.

Deployment CCE — Huawei Cloud

Create a LoadBalancer Service

In the navigation pane, choose Networks. Then, click Create Service. The LoadBalancer access address is in the format of <IP address of public network load balancer>:<access port>, for example, 10.117.117.117:80. In this access mode, requests are transmitted through an ELB load balancer to a node and then forwarded to the destination pod through the Service.

Elastic Load Balancer Service — Huawei Cloud
Elastic Load Balancer Service — Huawei Cloud
  • Service Name: Specify a Service name, which can be the same as the workload name.
  • Access Type: Select LoadBalancer.
  • Selector: Add a label and click Add. A Service selects a pod based on the added label. You can also click Reference Workload Label to reference the label of an existing workload. In the dialog box that is displayed, select a workload and click OK.
  • Select the load balancer to interconnect. Only load balancers in the same VPC as the cluster are supported. If no load balancer is available, click Create Load Balancer to create one on the ELB console.
  • The CCE console supports automatic creation of a load balancer. Select Auto create from the drop-down list box. Enter the load balancer name and choose whether to access the public network (if yes, an EIP with a bandwidth of 5 Mbit/s will be created. By default, the load balancer is billed by traffic.) You also need to select the AZ, subnet, and flavor for the dedicated load balancer. Currently, only dedicated load balancers of the network type (TCP/UDP) can be automatically created.
  • Protocol: protocol used by the Service.
  • Service Port: port used by the Service. The port number ranges from 1 to 65535.
  • Container Port: port on which the workload listens. For example, Nginx uses port 80 by default.
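In manifest form, the same Service could be declared roughly as follows. The kubernetes.io/elb.id annotation is the Huawei CCE-specific way to bind a Service to an existing load balancer (the ID is a placeholder you would replace with your own); container port 8000 is an illustrative Django default:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: django-app             # Service Name (may match the workload name)
  annotations:
    # Huawei CCE-specific: bind to an existing ELB (placeholder ID)
    kubernetes.io/elb.id: <your-elb-id>
spec:
  type: LoadBalancer           # Access Type
  selector:
    app: django-app            # Selector: must match the workload's pod label
  ports:
    - protocol: TCP            # Protocol
      port: 80                 # Service Port
      targetPort: 8000         # Container Port the workload listens on
```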

After you complete the settings, the LoadBalancer Service will be created. After about a minute, its status will change to Active.

Elastic Load Balancer Service — Huawei Cloud

After the LoadBalancer Service is active, you can access the application at the automatically assigned public IP address.

Conclusion

In this demo, we used Huawei Cloud's Cloud Container Engine service, which offers managed Kubernetes infrastructure, and deployed an application directly without dealing with the master nodes. In our next article, we will add an Nginx Ingress to this demo.
