How to Deploy WSO2 API Manager on Google Kubernetes Engine

WSO2 API Manager can be deployed on almost all well-known virtual machine based infrastructure platforms and container cluster managers. At WSO2 we have implemented Kubernetes resources for automating API Manager deployments on Kubernetes using standard deployment patterns. Nevertheless, some aspects of a deployment may change depending on the underlying infrastructure on which the Kubernetes cluster is created. For instance, when a Kubernetes cluster is created on a public cloud platform such as Google Cloud, using the platform's own SQL services for creating database server instances, storage services for managing persistent volumes, and load balancer services for routing external traffic may provide more value than managing those components ourselves on the same platform. Such managed services can be created in any preferred region in a few steps, and they offer high availability within a region, automated periodic backups, support from Google engineers, and, most importantly, billing only for actual usage.

Currently, Kubernetes resources shipped by WSO2 do not include infrastructure-specific resources or instructions for using them on Google Kubernetes Engine (GKE). Many WSO2 users around the globe have used them for creating API management solutions in on-premise data centers, which has allowed us to fix issues and improve them over time. In this article, I will explain how to deploy WSO2 API Manager v2.x on GKE using deployment pattern 1. This should help you understand the fundamentals and apply the same concepts to any other deployment pattern. Please refer to my previous article, “Architecting API Management Solutions with WSO2 API Manager”, for details on API Manager deployment patterns.

Deployment Architecture

Figure 1: WSO2 API-M Deployment Architecture for Deployment Pattern 1 for GKE

The above diagram illustrates the deployment architecture of WSO2 API Manager deployment pattern 1 on GKE. Cloud SQL (MySQL 5.7, second generation) is used for creating the API Manager and API Manager Analytics databases. An NFS server is installed on a Compute Engine virtual machine instance, with its filesystem created on a persistent disk for preserving state. The NFS server is used for creating a ReadWriteMany persistent volume through which the API Manager instances share API files and throttling policies. It is also used for creating two ReadWriteOnce persistent volumes that preserve the state of the API Manager Analytics instances. Finally, the API Gateway transports and the API Manager UI transports are exposed using two cloud load balancers, allowing the standard SSL port (443) to be used for HTTPS communication.

Steps to Follow

This deployment process requires seven main steps. Please follow each section in detail and execute the instructions given:

  • Step 1: Install Prerequisites
  • Step 2: Create an NFS Server
  • Step 3: Create a Kubernetes Cluster
  • Step 4: Create a MySQL Database Server
  • Step 5: Deploy WSO2 API Manager
  • Step 6: Create GCP Load Balancers
  • Step 7: Deployment Verification

Step 1: Install Prerequisites

1. Install Google Cloud CLI (gcloud) by following its official installation guide:

# OSX
wget https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-194.0.0-darwin-x86_64.tar.gz
tar -xvf google-cloud-sdk-194.0.0-darwin-x86_64.tar.gz
./google-cloud-sdk/install.sh
./google-cloud-sdk/bin/gcloud init
# Linux
wget https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-194.0.0-linux-x86_64.tar.gz
tar -xvf google-cloud-sdk-194.0.0-linux-x86_64.tar.gz
./google-cloud-sdk/install.sh
./google-cloud-sdk/bin/gcloud init

2. Install Kubernetes CLI using gcloud CLI:

gcloud components install kubectl
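
You can verify both installations with a quick version check:

gcloud version
kubectl version --client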

Step 2: Create an NFS Server

WSO2 API Manager requires persistent volumes (PVs) with ReadWriteMany (RWX) capability on Kubernetes for sharing API files among API Gateway instances and throttling policies among Traffic Manager instances. According to the Kubernetes documentation, GCE Persistent Disk does not support that feature at the time of writing. Therefore, in this POC I will be using an NFS server on Google Compute Engine (GCE). Nevertheless, the same can be achieved with any other PV type that supports RWX, such as GlusterFS, CephFS, Quobyte, etc.
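
For context, a ReadWriteMany volume claim on the Kubernetes side would look roughly like the sketch below; the claim name is illustrative, and the actual persistent volumes and claims used in this POC are shipped with the wso2/kubernetes-apim repository:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-rwx-claim # hypothetical name, for illustration only
spec:
  storageClassName: nfs
  accessModes:
    - ReadWriteMany # allows multiple pods, on different nodes, to mount the volume read-write
  resources:
    requests:
      storage: 1Gi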

  1. Log in to the Google Cloud Console using your Google account.

2. Navigate to Compute Engine/VM instances and click on the “Create” button for creating an NFS server. Provide a name, select the required zone and scroll down:

3. Click on “Disks/Add item” and create a persistent disk to be used by the NFS server:

This would allow the filesystem of the NFS server to be preserved even if the VM instance is terminated:

4. Now, click on the “Create” button to create the NFS server VM instance:

5. Once the NFS server VM instance is created, click on the SSH button and connect to it using the Google Cloud terminal.

6. List the disks using the lsblk command and note the name of the new disk; in this scenario it’s called “sdb”:

sudo lsblk
NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda      8:0    0  10G  0 disk
└─sda1   8:1    0  10G  0 part /
sdb      8:16   0  40G  0 disk

7. Format the additional disk using the below command, changing the device name accordingly (/dev/<device-name>):

sudo mkfs.ext4 -m 0 -F -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/sdb
mke2fs 1.43.4 (31-Jan-2017)
Discarding device blocks: done
Creating filesystem with 10485760 4k blocks and 2621440 inodes
Filesystem UUID: eb7e410a-28b4-4a93-bdbb-c4502162e572
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
2654208, 4096000, 7962624
Allocating group tables: done                            
Writing inode tables: done
Creating journal (65536 blocks): done
Writing superblocks and filesystem accounting information: done

8. Create a directory to be served as the mount point of the new disk:

sudo mkdir -p /mnt/disks/nfs-server

9. Use the mount command to mount the disk to the instance and grant write access to the device for all users:

sudo mount -o discard,defaults /dev/sdb /mnt/disks/nfs-server
sudo chmod a+w /mnt/disks/nfs-server
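
Optionally, to have the disk remount automatically after a VM reboot, an /etc/fstab entry can be added; this is a sketch assuming the device is /dev/sdb:

# look up the disk UUID and append an fstab entry for it
echo UUID=$(sudo blkid -s UUID -o value /dev/sdb) /mnt/disks/nfs-server ext4 discard,defaults,nofail 0 2 | sudo tee -a /etc/fstab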

10. Execute below commands to install the NFS server:

sudo apt update
sudo apt install nfs-kernel-server -y

11. Create a new directory to be shared with NFS clients and bind mount the NFS server disk path to it:

sudo mkdir /exports
sudo mount --bind /mnt/disks/nfs-server /exports

12. Create three folders inside the /exports directory; these will be used by the three persistent volumes of API Manager in a later step:

sudo mkdir /exports/pv-1
sudo mkdir /exports/pv-2
sudo mkdir /exports/pv-3

13. Add the following line to the /etc/exports file to expose the /exports folder via the NFS server:

sudo su
echo "/exports *(rw,nohide,insecure,no_subtree_check,async,no_root_squash)" >> /etc/exports
exit

14. Execute the below commands to re-export the shared directory, restart the NFS server and verify the exports:

sudo exportfs -a
sudo service nfs-kernel-server restart
sudo showmount -e
Export list for nfs-server:
/exports *
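
Optionally, the export can be tested from any other VM in the same network before relying on it; the mount point below is illustrative:

sudo apt install nfs-common -y
sudo mkdir -p /mnt/nfs-test
sudo mount -t nfs <nfs-server-internal-ip>:/exports /mnt/nfs-test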

Step 3: Create a Kubernetes Cluster

1. Now let’s create a new Kubernetes cluster on GKE. Navigate to Kubernetes Engine and click on the “Create” button to create a new cluster:

2. Change the machine type to 8 vCPUs, 16 GB memory using the customize option, set the size of the cluster to 2 nodes and create the cluster:

This configuration provides 16 vCPUs and 32 GB of memory in total. API Manager deployment pattern 1 has four components, each of which requires around 4 vCPUs and 8 GB of memory.
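
If you prefer the CLI over the console, a roughly equivalent cluster can be created with gcloud; the custom machine type below encodes 8 vCPUs and 16384 MB of memory per node:

gcloud container clusters create wso2-cluster-1 --zone us-central1-a --num-nodes 2 --machine-type custom-8-16384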

3. Once the Kubernetes cluster is created, click on the “Connect” button, copy the gcloud command and execute it on the local machine for configuring kubectl:

gcloud container clusters get-credentials wso2-cluster-1 --zone us-central1-a --project <gcloud-project-name>

4. Execute the below command to open a proxy on the local machine for accessing the Kubernetes Dashboard:

kubectl proxy
Starting to serve on 127.0.0.1:8001

5. Find the access token configured in kubectl by executing the below command and traversing to <cluster-name>/user/auth-provider/config section:

kubectl config view
- name: gke_<gcloud-project-name>_us-central1-a_wso2-cluster-1
  user:
    auth-provider:
      config:
        access-token: <access-token-value>

6. Open a web browser, visit http://localhost:8001/ui and enter the access token found above:

Step 4: Create a MySQL Database Server

Next, we need to create a MySQL database server. Please note that in this POC I have used a single database server for creating the databases required by both API Manager and API Manager Analytics, due to the way the database scripts and data sources are currently designed in the wso2/kubernetes-apim Git repository. In production deployments it is recommended to create a separate database server for Analytics, as sharing one server may affect the performance of the API Gateway.
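
The database instance can likewise be created from the CLI instead of the console; the tier below is only an example:

gcloud sql instances create wso2-apim-db --database-version=MYSQL_5_7 --region=us-central1 --tier=db-n1-standard-1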

  1. Navigate to SQL/Cloud SQL Instances in Google cloud console and click on “Create instance” button.
  2. Select MySQL 5.6 or 5.7 as the database engine and press the “Next” button:

3. Select MySQL Second Generation type and proceed to the next step:

4. Provide an instance id (“wso2-apim-db”), a root password and create the database instance:

5. Once the MySQL server is created, navigate to the instance details/Authorization tab and add your local machine’s IP address and the Kubernetes nodes’ public IP addresses to allow MySQL client connections from those sources:
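
The same authorization can be applied from the CLI; note that --authorized-networks replaces the entire list, so all addresses should be passed in a single call:

gcloud sql instances patch wso2-apim-db --authorized-networks=<local-machine-ip>/32,<k8s-node-1-ip>/32,<k8s-node-2-ip>/32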

6. Connect to the MySQL server, navigate to the ${kubernetes_apim} directory (the local clone of the wso2/kubernetes-apim repository created in Step 5) in the terminal, and execute the MySQL scripts provided for creating the required databases:

# option 1: connect using the gcloud CLI
gcloud sql connect wso2-apim-db --user=root
# option 2: connect directly using the MySQL client
cd ${kubernetes_apim}/base/mysql/scripts
mysql -h <mysql-server-public-ip-address> -u root -p
source mysql-apimgtdb.sql 
source mysql-configdb.sql
source mysql-govregdb.sql
source mysql-mbstoredb.sql
source mysql-statdbs.sql
source mysql-userdb.sql

7. Change the character set of the stats database (statdb) to latin1:

ALTER DATABASE statdb CHARACTER SET latin1 COLLATE latin1_bin;

Step 5: Deploy WSO2 API Manager

  1. Clone WSO2 API Manager Kubernetes Resources Git repository and switch to the latest v2.1.0 tag:
git clone https://github.com/wso2/kubernetes-apim
cd kubernetes-apim
export kubernetes_apim=$(pwd) # this folder path is referred to as ${kubernetes_apim}
git checkout tags/<2.1.0-latest-tag>

2. Update the MySQL server IP address in the following files:

# update analytics 1 datasources
mysql_server_ip_address=<mysql-server-ip-address>
sed -i.bak s/apim-rdbms/${mysql_server_ip_address}/g ${kubernetes_apim}/pattern-1/confs/apim-analytics-1/repository/conf/datasources/analytics-datasources.xml
sed -i.bak s/apim-rdbms/${mysql_server_ip_address}/g ${kubernetes_apim}/pattern-1/confs/apim-analytics-1/repository/conf/datasources/master-datasources.xml
sed -i.bak s/apim-rdbms/${mysql_server_ip_address}/g ${kubernetes_apim}/pattern-1/confs/apim-analytics-1/repository/conf/datasources/stats-datasources.xml
# update analytics 2 datasources
sed -i.bak s/apim-rdbms/${mysql_server_ip_address}/g ${kubernetes_apim}/pattern-1/confs/apim-analytics-2/repository/conf/datasources/analytics-datasources.xml
sed -i.bak s/apim-rdbms/${mysql_server_ip_address}/g ${kubernetes_apim}/pattern-1/confs/apim-analytics-2/repository/conf/datasources/master-datasources.xml
sed -i.bak s/apim-rdbms/${mysql_server_ip_address}/g ${kubernetes_apim}/pattern-1/confs/apim-analytics-2/repository/conf/datasources/stats-datasources.xml
# update apim datasources
sed -i.bak s/apim-rdbms/${mysql_server_ip_address}/g ${kubernetes_apim}/pattern-1/confs/apim-manager-worker/repository/conf/datasources/master-datasources.xml
sed -i.bak s/apim-rdbms/${mysql_server_ip_address}/g ${kubernetes_apim}/pattern-1/confs/apim-worker/repository/conf/datasources/master-datasources.xml

If you also changed the default root password when creating the MySQL database server, update it in the above files as well.
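
If that is the case, something like the following can locate and update the password elements in bulk; it assumes the default password string in the checked-out datasource files is root, so verify the actual value in the files before running it:

grep -rl "<password>" ${kubernetes_apim}/pattern-1/confs/
find ${kubernetes_apim}/pattern-1/confs -name "*datasources.xml" -exec sed -i.bak "s|<password>root</password>|<password><your-password></password>|g" {} \;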

3. Update the NFS server IP address and the server path in the following persistent volume file:

vi ${kubernetes_apim}/pattern-1/artifacts/volumes/persistent-volumes.yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-1
  labels:
    type: local
    pattern: wso2apim-pattern-1
spec:
  storageClassName: nfs
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.128.0.2
    path: "/exports/pv-1"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-2
  labels:
    type: local
    pattern: wso2apim-pattern-1
spec:
  storageClassName: nfs
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.128.0.2
    path: "/exports/pv-2"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-3
  labels:
    type: local
    pattern: wso2apim-pattern-1
spec:
  storageClassName: nfs
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.128.0.2
    path: "/exports/pv-3"

4. Create a new Kubernetes namespace called wso2, a service account with the name wso2svcacct under the same namespace and set the default namespace to wso2:

kubectl create namespace wso2
kubectl create serviceaccount wso2svcacct -n wso2
kubectl config set-context $(kubectl config current-context) --namespace=wso2

5. Create the persistent volumes:

kubectl create -f ${kubernetes_apim}/pattern-1/artifacts/volumes/persistent-volumes.yaml
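
The volumes should appear with the status “Available” until the corresponding persistent volume claims bind to them:

kubectl get pv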

6. Create API Manager Config Maps:

# create apim analytics 1 config maps
kubectl create configmap apim-analytics-1-bin --from-file=${kubernetes_apim}/pattern-1/confs/apim-analytics-1/bin/
kubectl create configmap apim-analytics-1-conf --from-file=${kubernetes_apim}/pattern-1/confs/apim-analytics-1/repository/conf/
kubectl create configmap apim-analytics-1-spark --from-file=${kubernetes_apim}/pattern-1/confs/apim-analytics-1/repository/conf/analytics/spark/
kubectl create configmap apim-analytics-1-axis2 --from-file=${kubernetes_apim}/pattern-1/confs/apim-analytics-1/repository/conf/axis2/
kubectl create configmap apim-analytics-1-datasources --from-file=${kubernetes_apim}/pattern-1/confs/apim-analytics-1/repository/conf/datasources/
kubectl create configmap apim-analytics-1-tomcat --from-file=${kubernetes_apim}/pattern-1/confs/apim-analytics-1/repository/conf/tomcat/
kubectl create configmap apim-analytics-1-conf-analytics --from-file=${kubernetes_apim}/pattern-1/confs/apim-analytics-1/repository/conf/analytics/
# create apim analytics 2 config maps
kubectl create configmap apim-analytics-2-bin --from-file=${kubernetes_apim}/pattern-1/confs/apim-analytics-2/bin/
kubectl create configmap apim-analytics-2-conf --from-file=${kubernetes_apim}/pattern-1/confs/apim-analytics-2/repository/conf/
kubectl create configmap apim-analytics-2-spark --from-file=${kubernetes_apim}/pattern-1/confs/apim-analytics-2/repository/conf/analytics/spark/
kubectl create configmap apim-analytics-2-axis2 --from-file=${kubernetes_apim}/pattern-1/confs/apim-analytics-2/repository/conf/axis2/
kubectl create configmap apim-analytics-2-datasources --from-file=${kubernetes_apim}/pattern-1/confs/apim-analytics-2/repository/conf/datasources/
kubectl create configmap apim-analytics-2-tomcat --from-file=${kubernetes_apim}/pattern-1/confs/apim-analytics-2/repository/conf/tomcat/
kubectl create configmap apim-analytics-2-conf-analytics --from-file=${kubernetes_apim}/pattern-1/confs/apim-analytics-2/repository/conf/analytics/
# create apim config maps
kubectl create configmap apim-manager-worker-bin --from-file=${kubernetes_apim}/pattern-1/confs/apim-manager-worker/bin/
kubectl create configmap apim-manager-worker-conf --from-file=${kubernetes_apim}/pattern-1/confs/apim-manager-worker/repository/conf/
kubectl create configmap apim-manager-worker-identity --from-file=${kubernetes_apim}/pattern-1/confs/apim-manager-worker/repository/conf/identity/
kubectl create configmap apim-manager-worker-axis2 --from-file=${kubernetes_apim}/pattern-1/confs/apim-manager-worker/repository/conf/axis2/
kubectl create configmap apim-manager-worker-datasources --from-file=${kubernetes_apim}/pattern-1/confs/apim-manager-worker/repository/conf/datasources/
kubectl create configmap apim-manager-worker-tomcat --from-file=${kubernetes_apim}/pattern-1/confs/apim-manager-worker/repository/conf/tomcat/
kubectl create configmap apim-worker-bin --from-file=${kubernetes_apim}/pattern-1/confs/apim-worker/bin/
kubectl create configmap apim-worker-conf --from-file=${kubernetes_apim}/pattern-1/confs/apim-worker/repository/conf/
kubectl create configmap apim-worker-identity --from-file=${kubernetes_apim}/pattern-1/confs/apim-worker/repository/conf/identity/
kubectl create configmap apim-worker-axis2 --from-file=${kubernetes_apim}/pattern-1/confs/apim-worker/repository/conf/axis2/
kubectl create configmap apim-worker-datasources --from-file=${kubernetes_apim}/pattern-1/confs/apim-worker/repository/conf/datasources/
kubectl create configmap apim-worker-tomcat --from-file=${kubernetes_apim}/pattern-1/confs/apim-worker/repository/conf/tomcat/

7. Sign up at wso2.com and create a Kubernetes secret for pulling API Manager Docker images from docker.wso2.com using your WSO2 credentials:

kubectl create secret docker-registry regcred --docker-server=docker.wso2.com --docker-username=<your-email> --docker-password=<your-password> --docker-email=<your-email>

8. Add the Kubernetes secret name in the following format to the API Manager deployment files listed below:

Image pull secret definition:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ...
  labels:
    ...
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        ...
    spec:
      containers:
      ...
      imagePullSecrets:
      - name: regcred

Deployment file list to be updated:

${kubernetes_apim}/pattern-1/artifacts/apim-analytics/wso2apim-analytics-1-deployment.yaml
${kubernetes_apim}/pattern-1/artifacts/apim-analytics/wso2apim-analytics-2-deployment.yaml
${kubernetes_apim}/pattern-1/artifacts/apim/wso2apim-manager-worker-deployment.yaml
${kubernetes_apim}/pattern-1/artifacts/apim/wso2apim-worker-deployment.yaml

9. Create API Manager Kubernetes services and persistent volume claims:

cd ${kubernetes_apim}/pattern-1/artifacts/
kubectl create -f apim-analytics/wso2apim-analytics-service.yaml
kubectl create -f apim-analytics/wso2apim-analytics-1-service.yaml
kubectl create -f apim-analytics/wso2apim-analytics-2-service.yaml
kubectl create -f apim/wso2apim-service.yaml
kubectl create -f apim/wso2apim-manager-worker-service.yaml
kubectl create -f apim/wso2apim-worker-service.yaml
kubectl create -f apim/wso2apim-mgt-volume-claim.yaml
kubectl create -f apim-analytics/wso2apim-analytics-volume-claim.yaml

10. Create API Manager Analytics Kubernetes deployments:

cd ${kubernetes_apim}/pattern-1/artifacts/
kubectl create -f apim-analytics/wso2apim-analytics-1-deployment.yaml
kubectl create -f apim-analytics/wso2apim-analytics-2-deployment.yaml

11. Check the state of the API Manager Analytics pods:

kubectl get pods

NAME                                    READY     STATUS    RESTARTS   AGE
wso2apim-analytics-1-f69b87d67-ttxcs    1/1       Running   2          15m
wso2apim-analytics-2-69f94c77d8-xmwl9   1/1       Running   0          4m

12. Once the API Manager Analytics pods are running, check the logs and wait for the servers to become active:

kubectl get pods
NAME                                    READY     STATUS    RESTARTS   AGE
wso2apim-analytics-1-85997f88cc-xnkw6   1/1       Running   0          7m
wso2apim-analytics-2-69f94c77d8-sskjj   1/1       Running   0          6m
kubectl logs <pod-name>

In the logs, scan for errors and wait until the server URLs are printed.

13. Create API Manager deployments:

cd ${kubernetes_apim}/pattern-1/artifacts/
kubectl create -f apim/wso2apim-manager-worker-deployment.yaml
kubectl create -f apim/wso2apim-worker-deployment.yaml

14. Once the API Manager pods are running, check the logs and wait for them to become active:

kubectl get pods
NAME                                       READY     STATUS    RESTARTS   AGE
wso2apim-analytics-1-85997f88cc-xnkw6      1/1       Running   0          11m
wso2apim-analytics-2-69f94c77d8-sskjj      1/1       Running   0          11m
wso2apim-manager-worker-74b45ccf6f-77nw9   1/1       Running   0          3m
wso2apim-worker-7bdc5dfccc-47wgw           1/1       Running   0          1m
kubectl logs <pod-name>

In the logs, scan for errors and wait until the server URLs are printed.

Step 6: Create GCP Load Balancers

The Kubernetes services created in step 5 do not include any load balancer type services for exposing UI and API gateway transports via GCP load balancers. Let’s create two new Kubernetes services for this purpose.

  1. Create a new yaml file with the name wso2apim-load-balancer-service.yaml for exposing UI transports with the following content:
apiVersion: v1
kind: Service
metadata:
  name: wso2apim-load-balancer
  labels:
    app: wso2apim
    pattern: wso2apim-pattern-1
spec:
  ports:
  - name: 'servlet-http'
    protocol: TCP
    port: 80
    targetPort: 9763
  - name: 'servlet-https'
    protocol: TCP
    port: 443
    targetPort: 9443
  selector:
    app: wso2apim
    pattern: wso2apim-pattern-1
  sessionAffinity: ClientIP
  type: LoadBalancer

2. Create a new yaml file with the name wso2apim-gw-load-balancer-service.yaml for exposing API gateway transports with the following content:

apiVersion: v1
kind: Service
metadata:
  name: wso2apim-gw-load-balancer
  labels:
    app: wso2apim
    pattern: wso2apim-pattern-1
spec:
  ports:
  - name: 'pass-through-http'
    protocol: TCP
    port: 80
    targetPort: 8280
  - name: 'pass-through-https'
    protocol: TCP
    port: 443
    targetPort: 8243
  selector:
    app: wso2apim
    pattern: wso2apim-pattern-1
  type: LoadBalancer

3. Create the above services using kubectl:

kubectl create -f wso2apim-load-balancer-service.yaml
kubectl create -f wso2apim-gw-load-balancer-service.yaml

4. Check the status of the services via the Kubernetes Dashboard and wait for the external endpoints to appear:
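
The same information is available through kubectl; the EXTERNAL-IP column will show <pending> until GCP finishes provisioning the load balancers:

kubectl get services -n wso2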

5. Now add two new /etc/hosts entries with the external IP addresses of the services:

sudo sh -c "echo '<wso2apim-load-balancer-service-external-ip> wso2apim' >> /etc/hosts"
sudo sh -c "echo '<wso2apim-gw-load-balancer-service-external-ip> wso2apim-gw' >> /etc/hosts"

6. Now, access the API Publisher UI using the following URL in a web browser. You might need to accept the self-signed SSL certificate warning raised by the browser and use the default admin credentials (username: admin, password: admin):

https://wso2apim/publisher

Step 7: Deployment Verification

  1. In the Publisher UI click on the “Deploy Sample API” button and deploy the given sample PizzaShack API:

2. Now click on the PizzaShack API and then click on the “View in Store” link:

3. Now log in using the default admin credentials (username: admin, password: admin) and subscribe to the PizzaShack API:

4. Click on the “View Subscription” button, navigate to the API application information page and generate OAuth2 keys:

5. Open a new web browser tab, enter the API Gateway URL and accept the self-signed SSL certificate warning:

https://wso2apim-gw/

6. Now, navigate back to the PizzaShack API in the API Store and invoke the API resource GET /menu using the API Console:
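
The same resource can also be invoked with curl once an access token has been generated; the context path below assumes the default PizzaShack sample context (pizzashack, version 1.0.0), which may differ across API Manager versions:

curl -k -H "Authorization: Bearer <access-token>" https://wso2apim-gw/pizzashack/1.0.0/menu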

Conclusion

WSO2 API Manager can be deployed in Kubernetes environments using the Kubernetes resources provided by WSO2. Nevertheless, infrastructure resources such as databases, storage, and load balancers can be consumed from the underlying infrastructure on which the Kubernetes cluster is created, for better optimization. This reduces the overhead of managing such components ourselves on the same platform and, most importantly, it reduces the overall maintenance cost. On GKE, Cloud SQL, Compute Engine persistent disks, and load balancing services can be used for this purpose. According to the Kubernetes documentation, at the moment Google Cloud does not provide a storage option for Kubernetes with ReadWriteMany capability, even though ZFS/Avere is supported on GCP. Therefore, a distributed file system such as NFS, GlusterFS, CephFS, etc. would need to be used.

According to the above design, WSO2 API Manager production deployments would need to consider making the NFS server highly available, preserving the NFS server's persistent disk, creating two separate MySQL database servers for API Manager and Analytics to optimize the performance of the API Gateway, making the databases highly available within a region, and, most importantly, following the best practices proposed by GCP for securing the deployment. While implementing this POC I found a collection of improvements that we can make in the WSO2 API Manager Kubernetes resources, related to Docker image size, persistent volume permission management, Kubernetes services, database architecture, the deployment process, etc. I have already made one improvement by reducing the Docker image size and removing the sudoers file, and I look forward to discussing the remaining improvements with the WSO2 community and incorporating them in future releases.

References

[1] Google Cloud Platform Documentation: https://cloud.google.com/docs/

[2] Kubernetes Documentation: https://kubernetes.io/docs/home/

[3] WSO2 API Manager Documentation: https://docs.wso2.com/display/AM210/WSO2+API+Manager+Documentation

[4] WSO2 API Manager Kubernetes Resources: https://github.com/wso2/kubernetes-apim

[5] Map AWS services to Google Cloud Platform products: https://cloud.google.com/free/docs/map-aws-google-cloud-platform

[6] Best Practices for Enterprise Organizations, Google Cloud Platform: https://cloud.google.com/docs/enterprise/best-practices-for-enterprise-organizations