Pi to OCI: using ARM to develop and deploy apps to Kubernetes (OKE)

Gabriele Provinciali
Published in Oracle Developers
Jun 3, 2021

OKE on ARM

Recently, I’ve been excited about the announcements regarding the availability of the Ampere A1 on OCI and, more generally, about ARM-based computing as a renewed take on the RISC architectures we dealt with in the past. ARM is available at the infrastructure level and, above all, for designing and implementing cloud-native and container-native apps with the world’s leading orchestration engine: the ubiquitous K8s, available on OCI via the Container Engine for Kubernetes (OKE).

Piece of Pi

Immediately after the news, I thought that one of my local Raspberry Pi boards (RPi 4, 8 GB RAM), currently hosting a node of a MicroK8s cluster, could also be used as an ARM developer workstation. Moreover, it would be interesting to check for compatibility issues or any other idiosyncrasies I might experience.

The Pi

An oversimplified schema of the required components is as follows:

Yes, it’s that simple.

First things first, I needed to find a suitable Docker image for this test. I compared the specs of the Ampere A1 with the RPi’s CPU: both adopt the 64-bit ARMv8 architecture, so I selected the arm64v8/ubuntu image for this tinkering session.
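A quick sanity check before pulling anything, just a minimal sketch: on a 64-bit OS the Pi should report aarch64, and the base image should pull without a platform warning.

# Confirm the RPi is running a 64-bit kernel/userland (expect: aarch64)
uname -m

# Pull the 64-bit ARMv8 Ubuntu base image
docker pull arm64v8/ubuntu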

Before reserving my OKE cluster, I tried a simple application built with Node.js, Express and Jade, using just Docker on my RPi. Here’s the Dockerfile:
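A minimal sketch of such a Dockerfile, assuming the app follows the usual Express layout, with a start script in package.json and listening on port 3000:

# Minimal sketch — the base image is the one chosen above; the package
# layout and start command are assumptions about a typical Express app.
FROM arm64v8/ubuntu:20.04

# Install Node.js and npm on the ARM64 Ubuntu base image
RUN apt-get update && \
    apt-get install -y nodejs npm && \
    rm -rf /var/lib/apt/lists/*

WORKDIR /app

# Install dependencies first to take advantage of the Docker layer cache
COPY package*.json ./
RUN npm install

# Copy the rest of the application sources
COPY . .

EXPOSE 3000
CMD ["npm", "start"]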

That Dockerfile was used by the Docker build command:

docker build -t gabba/gabbasite-armv8 .

Then, I ran the application locally via the Docker run command:

docker run --rm -d --name gabbasite -p 3000:3000 gabba/gabbasite-armv8:latest

Having exposed the web app on port 3000, I accessed it:

OKE on ARM
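A quick curl from another machine on the LAN works too (the Pi’s address below is just a placeholder):

# Check that the app answers on port 3000 (replace with your Pi's address)
curl -I http://192.168.1.50:3000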

So, the first part was OK. I just needed to push the image to a repository for later use:

docker push gabba/gabbasite-armv8

Afterwards, it was time to try this application on the ARM-OKE power combo.

Next, I launched an OKE Quick Create on Oracle Cloud Infrastructure to create a K8s cluster with the available ARM shape (and we’ll stick with this bare-minimum, three-node setup).

The OKE service allows you, via the Quick Create option, to create a cluster within minutes by reserving the necessary Compute and Network resources. Handy for developers! Follow the guide here for details and remember to select the shape called VM.Standard.A1.Flex.

Then, I configured the local environment on the RPi to access the K8s cluster, which meant setting up the oci-cli configuration and generating the ~/.kube/config file. At the end of the procedure, it’s possible to work with the OKE cluster directly from the RPi shell:
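For reference, the final step looks roughly like this (the cluster OCID and region below are placeholders, and it assumes oci setup config has already been run):

# Generate a kubeconfig for the OKE cluster (placeholder OCID and region)
oci ce cluster create-kubeconfig \
  --cluster-id ocid1.cluster.oc1.eu-frankfurt-1.aaaa... \
  --file ~/.kube/config \
  --region eu-frankfurt-1 \
  --token-version 2.0.0

# Verify access from the RPi
kubectl get nodes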

Good old kubectl…

Writing a deployment file

Old habits die hard. I didn’t mix the deployment directives with the service definition: both are YAML files and could be grouped together so that a single command would suffice, but my experience with OKE/K8s deployments in production environments has taught me to keep separate files. Anyway, this is the deployment.yaml:

deployment.yaml
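A minimal sketch of its contents (the names, labels and replica count are illustrative; the image is the one pushed earlier, and the container listens on port 3000):

# Sketch of deployment.yaml — names, labels and replica count are illustrative
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gabbasite-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: gabbasite
  template:
    metadata:
      labels:
        app: gabbasite
    spec:
      containers:
        - name: gabbasite
          image: gabba/gabbasite-armv8:latest
          ports:
            - containerPort: 3000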

This is the service.yaml, which exposes the app to the cruel world by means of a Load Balancer:

service.yaml
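Again in sketch form (the service name and external port are assumptions; on OKE, type LoadBalancer provisions an OCI Load Balancer automatically):

# Sketch of service.yaml — service name and external port are assumptions
apiVersion: v1
kind: Service
metadata:
  name: gabbasite-service
spec:
  type: LoadBalancer
  selector:
    app: gabbasite
  ports:
    - port: 80
      targetPort: 3000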

Some simple commands will suffice to test the two configurations:

kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
Results…

Then I checked the status of nodes, services and pods:

Nodes, services and pods
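In practice, that boils down to a few gets (the wide output is just a habit):

kubectl get nodes -o wide
kubectl get services
kubectl get pods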

OKE was kind enough to provide me with a public IP address associated with the Load Balancer, which I accessed via a web browser. The result seemed promising:
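As a side note, the external IP can also be fetched from the shell (the service name matches the sketch above):

# Print the Load Balancer's public IP once it has been provisioned
kubectl get service gabbasite-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'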

Outcomes

  • You can easily use and experiment with an RPi 4 to build the software and the Docker images needed for OKE on ARM. This method may not be as flashy and fast (or as overpriced) as an ARM-based laptop from your favorite fruit brand, but it’s an order of magnitude cheaper.
  • After configuring the Pi with valid OCI and K8s credentials, you can deploy an application directly to OKE (ARM to ARM).
  • ARM is the next big thing. Check it out using the Free Tier and the Always Free Tier on OCI.

Happy tinkering!
