Serverless Containers with Cloud Container Instance — CCI

Burak Ovalı · Huawei Developers · May 22, 2023

Intro

Cloud Container Instance (CCI) is a serverless container engine that lets users run containers without creating or managing server clusters. In the serverless model, users do not need to worry about the state of the underlying servers because the provider handles all of that, which improves development efficiency and reduces IT costs. Traditionally, containers run on Kubernetes and users have to create a cluster first; that is not the case with CCI.

Under the serverless model, CCI allows users to directly create and use containerized workloads on the console or by using kubectl or Kubernetes APIs, and pay only for the resources consumed by these workloads.

Access to CCI

Architecture of CCI

CCI provides an environment for managing and controlling workloads by integrating Kubernetes resources with Huawei Cloud services. The image below shows a simplified view of the CCI architecture.

Architecture of CCI

As the bottom layer of the diagram shows, CCI works in integration with other Huawei Cloud services.

CCI supports high-performance and heterogeneous computing architectures (x86, GPU, Ascend) by running containers on physical machines.

As the architecture also shows, CCI uses Kata Containers, which combine the high performance of containers with VM-level isolation.

With unified cluster management and workload scheduling, you don't need to manage clusters.

Finally, CCI supports Kubernetes features because it has a Kubernetes-based model layer.

Advantages of CCI

Out of the Box: CCI allows us to run Containers in a serverless model environment without creating Kubernetes Server Clusters.

Fast Scaling: Kubernetes cluster resources are effectively unlimited from a single user’s perspective. Resources can be scaled in seconds, helping you cope with changes in service load and meet cloud Service-Level Agreements (SLAs).

Per-Second Billing: Resources can be billed on demand by seconds to reduce costs.

High Security: CCI provides VM-level isolation without compromising startup speed, offering a better container experience.

After hearing that it provides VM-level isolation, we might worry about startup speed. How does Huawei Cloud keep startup fast with VM-level isolation? The answer is simple: native support for Kata Containers.

Kata Containers

Quick Demo

Let’s run a quick demo by following Huawei Cloud’s documentation. Prerequisites for the demo are a Huawei Cloud account and a machine with Git and Docker installed.

The figure below illustrates the general procedures for using CCI.

Procedures for using CCI

1- Clone Repository Using Git

By running the command below, we can easily download the Git repository of the 2048 game:

git clone https://gitee.com/jorgensen/2048.git

2- Build an Image

The downloaded repo already contains a ready-made Dockerfile. Let’s build the image, tagging it so that it can be pushed to SWR:

$ cd 2048
$ docker build -t swr.ap-southeast-3.myhuaweicloud.com/burak/2048:latest .
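Optionally, before pushing, you can run the image locally to make sure it works. A quick sketch, assuming the container serves the game over HTTP on port 80 (check the Dockerfile for the actual port):

# Run the freshly built image locally and map it to port 8080 on the host.
# The container port (80) is an assumption; adjust it to match the Dockerfile.
docker run --rm -d -p 8080:80 swr.ap-southeast-3.myhuaweicloud.com/burak/2048:latest
# Then open http://localhost:8080 in a browser.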

3- Push the Image to SWR

Let’s push the image we just built to SWR (SoftWare Repository for Container), Huawei Cloud’s image repository service.

For how to log in to SWR, you can review my article here.

$ docker push swr.ap-southeast-3.myhuaweicloud.com/burak/2048:latest

After the docker push completes, you can see the image stored in SWR.

SWR Images

4- Create a Namespace

At the beginning of the article, we mentioned the different ways of using the CCI service. First, we will use the console; at the end of the article, I will show the necessary steps for kubectl.

We can access the CCI service by searching for CCI in the Service List.

CCI

Let’s click the Namespaces button on the left and then click the Create button under General-Computing.

You may see a Coming Soon notice for GPU and Ascend because CCI has only recently become available for international accounts; both architectures will be supported soon.

Creating a CCI

Let’s set a name for the namespace in this field; it follows the same logic as the Kubernetes Namespace resource. I keep RBAC disabled for now. If you enable this option, access to resources in the namespace will be controlled by RBAC policies.

Creating a Namespace

After selecting VPC and Subnet, let’s click the Create button and complete the Namespace creation process.

List of Namespaces

5- Create a Workload

A Deployment is a service-oriented encapsulation of pods and contains one or more pod replicas, each replica having the same role. The system automatically distributes requests to multiple pod replicas of a Deployment. All pod replicas share the same storage volume. Workloads can be accessed from private and public networks through service objects or load balancers, and containers can access public networks through NAT gateways.
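For reference, the Deployment we are about to create through the console could also be expressed as a Kubernetes manifest. The snippet below is only a rough sketch based on this demo: the name, labels, and resource values are illustrative assumptions, and the namespace matches the cci-ns-demo namespace that appears in the kubectl output later in the article.

# Rough sketch of a Deployment manifest for the 2048 image.
# Name, labels, namespace, and resource values are illustrative assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: game-2048
  namespace: cci-ns-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: game-2048
  template:
    metadata:
      labels:
        app: game-2048
    spec:
      containers:
        - name: game-2048
          image: swr.ap-southeast-3.myhuaweicloud.com/burak/2048:latest
          resources:
            requests:
              cpu: "500m"
              memory: 1Gi
            limits:
              cpu: "500m"
              memory: 1Gi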

Let’s click on Deployments under Workloads from the console on the left. Then click the Create from Image button at the top right.

Let’s keep the settings on this page as shown below; the field names are self-explanatory.

Creating a Deployment

Then let’s select the image that we pushed to SWR.

Creating a Deployment

Standard Output File Collection: Enable this function if you want Application Operations Management (AOM) to collect standard output files. You can search for standard output files collected for a workload in AOM.

You can disable this option; I will discuss the AOM service in detail in a separate article.

We can also configure other settings if needed by expanding Advanced Settings. I’m skipping that part for this demo.

Creating a Deployment

Once we have finished the configuration here, we can click the Next: Configure Access Settings button.

If you are using the 2048 image, you can configure the settings as in the image below. If you are using a different image, configure them according to your application.

Creating a Deployment

Intranet access: Configure a domain name or internal domain name/virtual IP address for the current workload so that the workload can be accessed by other workloads in the intranet. There are two access modes: Service and ELB.

Internet access: Allow access to the workload from public networks through load balancers.

Do not use: No entry is provided to allow access from other workloads. This mode is ideal for computing scenarios where communication with external systems is not required.
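For the intranet Service mode described above, the equivalent Kubernetes object would look roughly like the sketch below. It reuses the labels and namespace from the earlier Deployment sketch and assumes the container listens on port 80; adjust both to your own workload.

# Rough sketch of a ClusterIP Service for the 2048 Deployment.
# The selector, namespace, and port 80 are illustrative assumptions.
apiVersion: v1
kind: Service
metadata:
  name: game-2048
  namespace: cci-ns-demo
spec:
  type: ClusterIP
  selector:
    app: game-2048
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP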

After configuring the settings here, let’s press the Next: Configure Advanced Settings button.

We can leave this field as Default.

Creating a Deployment

Let’s complete the workload creation process by clicking Next: Confirm and then Submit.

6- Access the Workload

All steps are completed. Now we can access our application from the browser.

Let’s click on Ingress under Network Management and open the Access Address in a browser.

Access to Application

Our application is working, congratulations.

We did a quick demo. Before we finish the article, let’s quickly see how we can access the CCI service with kubectl.

Using kubectl

CCI allows you to use native or customized kubectl to create resources. To access CCI with kubectl, we first need to download Huawei Cloud’s cci-iam-authenticator binary. We can download it from CCI’s official website; I’m leaving the link here.

After downloading the binary, you need to move it to a directory on your PATH, depending on your operating system.

Download Binary File and Move to $PATH

To download the binary file:

wget https://cci-iam-authenticator.obs.cn-north-4.myhuaweicloud.com/latest/linux-amd64/cci-iam-authenticator

Grant execute permission on cci-iam-authenticator and move it to a directory on your PATH. On Linux, you can use the following commands:

chmod +x ./cci-iam-authenticator
sudo mv ./cci-iam-authenticator /usr/local/bin

Let’s verify:

cci-iam-authenticator --version

Output for the last command:

cci-iam-authenticator version 
Version: 2.6.17
GitCommit: c2405a7dbc9134ba3b4832f3a89b62eb5f85b382
BuildDate: 2020-06-17T10:35:41Z

Initialize the cci-iam-authenticator configuration using AK/SK

There are two different methods here, AK/SK and Username/Password. I will continue with the AK/SK method.

The configuration syntax using AK/SK is as follows:

cci-iam-authenticator generate-kubeconfig --cci-endpoint=https://$endpoint --ak=$AK --sk=$SK

For a list of Regions and Endpoints, see the link here. Your endpoint may be different from mine as I am running this demo in the Singapore Region.

You can follow the steps here to obtain an Access Key and Secret Key.
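If you want to define them as environment variables yourself, a minimal sketch with placeholder values looks like this:

# Placeholder values; replace them with your own Access Key and Secret Key.
export AK=your-access-key-id
export SK=your-secret-access-key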

My Access Key and Secret Key are already defined as environment variables on my system. Don’t forget to replace the $AK and $SK variables with your own values in the command below.

cci-iam-authenticator generate-kubeconfig --cci-endpoint=https://cci.ap-southeast-3.myhuaweicloud.com --ak=$AK --sk=$SK

After executing the command above, you should get an output like this:

Switched to context "cci-context-ap-southeast-3-XXXXXXXXXX"

Congratulations! We can now access CCI with kubectl. Let’s check one last time:

kubectl get pods -A

## OUTPUT ##
NAMESPACE     NAME                                       AGE
cci-ns-demo   cci-deployment-20235201-55d8c978f4-n7gxg   3h18m
cci-ns-demo   coredns-5cb5fc7dbb-2xmkj                   3h18m
cci-ns-demo   coredns-5cb5fc7dbb-7bdmk                   3h18m
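Now that kubectl can reach CCI, the same 2048 workload could also be managed entirely from the command line. The commands below are a sketch assuming the Deployment and Service manifests sketched earlier in the article have been saved locally as deployment.yaml and service.yaml (hypothetical file names):

# Apply the manifests (file names are illustrative).
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml

# Scale the workload and watch the pods come up.
kubectl scale deployment game-2048 --replicas=3 -n cci-ns-demo
kubectl get pods -n cci-ns-demo -w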

Conclusion

In this article, we discussed Huawei Cloud’s CCI service, which provides a serverless environment for running containers. CCI successfully integrates Kata Containers technology, raising container security to the level of VM isolation. The CCI service has only recently been activated for international accounts, so it still has a roadmap to complete, but it is worth remembering that most of this roadmap has already been delivered.
