Using Amazon EKS Anywhere to Create Kubernetes Clusters on Equinix Bare Metal

TensorIoT Editor · Published in TensorIoT · 12 min read · Jan 24, 2023

by Shunmuga Prakash, Pranav Noogri, and Nicholas Burden at TensorIoT

Pushing the Pedal to Bare Metal?

Today, TensorIoT is going to share a technical deep dive into using Equinix Bare Metal and Amazon EKS Anywhere to create and deploy Kubernetes clusters. Equinix and AWS have a long-standing partnership that connects customers to the AWS Cloud via 38 AWS Direct Connect sites worldwide. There is a wide range of use cases for Kubernetes clusters, since they allow containers to run in most environments without being tied to a specific operating system. Let’s get started!

What is Equinix Bare Metal?

A “bare metal” computing platform is like an empty room with no appliances: you need to furnish it yourself! Most virtual machine services require customers to pay not only for the virtual computer but also for the pre-loaded software that makes it useful. Another option is a “bare metal” server, which lets customers install their own operating system and software to provision the machine exactly as needed. Equinix provides a top-tier bare metal service, with multiple data centers where customers can provision physical servers to run workloads in edge and on-premises environments. Provisioned Equinix servers give customers dedicated hardware, along with APIs and DevOps/orchestration tooling, for dynamic provisioning either on demand or for a fixed duration, with month-to-month, 1-year, or 3-year commitments available.

What is Amazon EKS Anywhere?

Amazon EKS Anywhere is an official management tool for deploying Amazon EKS Distro. Note that there are slight differences between Amazon EKS and Amazon EKS Anywhere, but for the purposes of this blog it’s enough to know that customers use EKS for cloud-based workloads and EKS Anywhere for hybrid workloads (edge/enterprise data center). Although EKS Anywhere initially supported workloads on VMware vSphere-based environments, the addition of bare metal support allows customers to deploy their cluster workloads natively on-premises without virtualization.

What are the advantages of running Amazon EKS Anywhere on Bare Metal?

Let’s look at some of the benefits gained by running EKS Anywhere on Bare Metal! Running EKS Anywhere on Bare Metal improves performance, lowers latency, provides better connectivity, and enables scalability on demand. This means that developers can provision clusters without virtualization and run their applications both in the cloud and on-premises. Further, running Kubernetes on Bare Metal instances allows better server utilization, since the operating system communicates directly with the physical hardware.

Migrating apps from on-premises data centers to the cloud can be a time-consuming process, which is why TensorIoT provides migration services to help partners expedite this important transition. But if your company isn’t looking for a migration expert to help make the transition, you can use EKS Anywhere on Bare Metal to run apps more reliably than self-managed Kubernetes offerings and benefit from consistency with EKS workloads running in the AWS Cloud. Running Amazon EKS Anywhere on Bare Metal provides a combination of flexibility and control for Kubernetes-based applications globally, via the web console or Terraform scripts.

Launching a Bare Metal server in Equinix console

Step 1:

The first step is to sign in to the Equinix Metal console. Open the console at https://console.equinix.com/

Enter your email address and password to log in, or click “Register here” if you have not yet registered.

Step 2:

Click New Server to start deploying a server.

Step 3:

On the main page, under Deploy On Demand Servers, choose the server location closest to you. For this walkthrough, we’ll select Dallas (DA).

Browse to c3.medium.x86 in the list and choose it as your server.

Step 4:

Choose Ubuntu server from the list.

Be sure to specify the Hostname, leave everything else set to the defaults, and select Deploy now.

Step 5:

View Servers in the console to check the status of your server.

Step 6 :

SSH into the instance you want to use to deploy your applications:

Spinning up the Equinix server and setting up the cluster using EKS Anywhere

Setting up the base:

1. First, update Ubuntu Linux packages for security and apply pending patches. Run the following commands:

$ sudo apt update

$ sudo apt upgrade

2. Install Golang version 1.13 on Ubuntu Linux 20.04 LTS:

$ sudo apt install golang-go

3. Install jq:

$ sudo apt install jq

4. Install Metal-cli:

Run the commands:

wget --quiet https://github.com/equinix/metal-cli/releases/download/v0.9.0/metal-linux-amd64

chmod +x metal-linux-amd64

mv metal-linux-amd64 metal

Steps to run locally and in the Equinix Metal Console

1. Create an EKS-A Admin machine: Using the metal-cli:

Create an API Key and register it with the Metal CLI:

./metal init

Equinix Metal API Tokens can be obtained through the portal at https://console.equinix.com/. See https://metal.equinix.com/developers/docs/accounts/users/ for more details.

Token (hidden):

metal device create --plan=m3.small.x86 --metro=da --hostname eksa-admin --operating-system ubuntu_20_04

2. Create a VLAN:

metal vlan create --metro da --description eks-anywhere --vxlan 1000

3. Create a Public IP Reservation (16 addresses):

metal ip request --metro da --type public_ipv4 --quantity 16 --tags eksa

The executable snippets later on use shell variables to refer to specific pool addresses. You can identify the correct IP reservation by its “eksa” tag.
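As a hedged sketch of what those variables hold (the addresses below are hypothetical placeholders; in practice the gateway, netmask, and reservation ID come from `metal ip list` for the "eksa"-tagged reservation), the eksa-admin address is derived from the pool gateway like this:

```shell
# Hypothetical pool values; the real ones come from the "eksa"-tagged
# reservation shown by `metal ip list` (addresses here are placeholders)
POOL_GW="147.75.100.1"       # gateway of the 16-address reservation
POOL_NM="255.255.255.240"    # netmask of a /28 (16-address) block

# eksa-admin takes the first address after the gateway ("1" in the
# hardware.csv loop later on); compute it with Python's ipaddress module
POOL_ADMIN=$(python3 -c 'import ipaddress; print(ipaddress.IPv4Address("'$POOL_GW'") + 1)')
echo "$POOL_ADMIN"
```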

4. Create a Metal Gateway:

metal gateway create --ip-reservation-id $POOL_ID --virtual-network $VLAN_ID

5. Create Tinkerbell worker nodes eksa-node-001 and eksa-node-002 with the custom iPXE URL http://{eks-a-public-address}. These nodes will be provisioned as EKS-A control plane or worker nodes.

for a in {1..2}; do
  ./metal device create --plan m3.small.x86 --metro da --hostname eksa-node-00$a \
    --ipxe-script-url http://$POOL_ADMIN/ipxe/ --operating-system custom_ipxe
done

Note that the ipxe-script-url doesn’t actually get used in this process; we’re just setting it because it is a requirement for using the custom_ipxe operating system type.

6. Add the VLAN to the eksa-admin bond0 port:

metal port vlan -i $PORT_ADMIN -a $VLAN_ID

Configure the Layer 2 VLAN network on eksa-admin with this snippet:

ssh root@$PUB_ADMIN tee -a /etc/network/interfaces << EOS

auto bond0.1000

iface bond0.1000 inet static

pre-up sleep 5

address $POOL_ADMIN

netmask $POOL_NM

vlan-raw-device bond0

EOS

Activate the Layer 2 VLAN network with this command:

ssh root@$PUB_ADMIN systemctl restart networking

7. Convert the eksa-node-* network ports to Layer 2 unbonded and attach them to the VLAN:

node_ids=$(metal devices list -o json | jq -r '.[] | select(.hostname | startswith("eksa-node")) | .id')

i=1 # We will increment "i" for the eksa-node-* nodes. "1" represents the eksa-admin node.

for id in $(echo $node_ids); do
  let i++
  BOND0_PORT=$(metal devices get -i $id -o json | jq -r '.network_ports[] | select(.name == "bond0") | .id')
  ETH0_PORT=$(metal devices get -i $id -o json | jq -r '.network_ports[] | select(.name == "eth0") | .id')
  metal port convert -i $BOND0_PORT --layer2 --bonded=false --force
  metal port vlan -i $ETH0_PORT -a $VLAN_ID
done

8. Capture the MAC addresses and create a hardware.csv file destined for /root/ on eksa-admin (run this on the host with the metal CLI installed):

i. Create the CSV Header:

echo hostname,vendor,mac,ip_address,gateway,netmask,nameservers,disk,labels > hardware.csv

ii. Use metal and jq to grab the hardware MAC addresses and append them to hardware.csv:

node_ids=$(metal devices list -o json | jq -r '.[] | select(.hostname | startswith("eksa-node")) | .id')

i=1 # We will increment "i" for the eksa-node-* nodes. "1" represents the eksa-admin node.

for id in $(echo $node_ids); do
  # Configure only the first node as a control-plane node
  if [ "$i" = 1 ]; then TYPE=cp; else TYPE=dp; fi # change to 3 for HA
  NODENAME="eksa-node-00$i"
  let i++
  MAC=$(metal device get -i $id -o json | jq -r '.network_ports | .[] | select(.name == "eth0") | .data.mac')
  IP=$(python3 -c 'import ipaddress; print(str(ipaddress.IPv4Address("'${POOL_GW}'") + '$i'))')
  echo "$NODENAME,Equinix,${MAC},${IP},${POOL_GW},${POOL_NM},8.8.8.8,/dev/sda,type=${TYPE}" >> hardware.csv
done

The BMC fields are omitted because Equinix Metal does not expose the BMC of nodes. EKS Anywhere will skip BMC steps with this configuration.

iii. Copy hardware.csv to eksa-admin:

scp hardware.csv root@$PUB_ADMIN:/root

We’ve now provided the eksa-admin machine with all of the variables and configuration needed in preparation.
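As a quick, self-contained illustration of the loop above, the snippet below reproduces the IP arithmetic with placeholder values and sanity-checks that a generated row carries the nine fields the CSV header declares (the placeholder values, the /tmp path, and the check itself are our own additions for illustration, not part of the original flow):

```shell
# Placeholder values standing in for the real pool variables
POOL_GW="147.75.100.1"
POOL_NM="255.255.255.240"
i=2   # first worker node: eksa-admin is gateway+1, so nodes start at gateway+2

# Same ipaddress arithmetic used in the hardware.csv loop
IP=$(python3 -c 'import ipaddress; print(str(ipaddress.IPv4Address("'${POOL_GW}'") + '$i'))')
echo "eksa-node-001,Equinix,00:00:00:00:00:01,${IP},${POOL_GW},${POOL_NM},8.8.8.8,/dev/sda,type=cp" > /tmp/hardware-check.csv

# Sanity check: each row should carry the 9 fields named in the CSV header
COLS=$(awk -F, '{print NF}' /tmp/hardware-check.csv)
echo "$IP"
echo "$COLS"
```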

Deploy a cluster using EKS-Anywhere on Bare Metal:

Initial setup:

Log in to eksa-admin with the LC_POOL_ADMIN and LC_POOL_VIP variables defined.

# SSH into eksa-admin. The special args and environment setting are just tricks to plumb $POOL_ADMIN and $POOL_VIP into the eksa-admin environment.

LC_POOL_ADMIN=$POOL_ADMIN LC_POOL_VIP=$POOL_VIP LC_TINK_VIP=$TINK_VIP ssh -o SendEnv=LC_POOL_ADMIN,LC_POOL_VIP,LC_TINK_VIP root@$PUB_ADMIN

1. Install eksctl and the eksctl-anywhere plugin on eksa-admin:

curl "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" \
  --silent --location \
  | tar xz -C /tmp

sudo mv /tmp/eksctl /usr/local/bin/

export EKSA_RELEASE="0.10.1" OS="$(uname -s | tr A-Z a-z)" RELEASE_NUMBER=15

curl "https://anywhere-assets.eks.amazonaws.com/releases/eks-a/${RELEASE_NUMBER}/artifacts/eks-a/v${EKSA_RELEASE}/${OS}/amd64/eksctl-anywhere-v${EKSA_RELEASE}-${OS}-amd64.tar.gz" \
  --silent --location \
  | tar xz ./eksctl-anywhere

sudo mv ./eksctl-anywhere /usr/local/bin/

2. Install kubectl on eksa-admin:

snap install kubectl --channel=1.23 --classic

Version 1.23 matches the version used in the eks-anywhere repository.

3. Install Docker.

Run the upstream Docker convenience install script:

curl -fsSL https://get.docker.com -o get-docker.sh

sudo sh get-docker.sh

4. Create the EKS-A cluster config:

export CLUSTER_NAME=my-eksa-cluster

eksctl anywhere generate clusterconfig $CLUSTER_NAME --provider tinkerbell > $CLUSTER_NAME.yaml

5. Manually set control-plane IP for Cluster resource in the config

6. Manually set the TinkerbellDatacenterConfig resource spec in config:

7. Manually set the public SSH key in TinkerbellMachineConfig users[name=ec2-user].sshAuthorizedKeys. The SSH key can be generated locally on eksa-admin (ssh-keygen -t rsa) or be an existing user key.

ssh-keygen -t rsa

cat /root/.ssh/id_rsa.pub

8. Manually set the hardwareSelector for each TinkerbellMachineConfig; for the control plane machine, match the type=cp label assigned in hardware.csv.

9. Change the osFamily to ubuntu in each TinkerbellMachineConfig section.

10. Create the EKS-A cluster. Double-check that $LC_POOL_ADMIN and $CLUSTER_NAME are set correctly before running this (they were passed through SSH or defined in previous steps); otherwise, set them manually!

eksctl anywhere create cluster --filename $CLUSTER_NAME.yaml \
  --hardware-csv hardware.csv --tinkerbell-bootstrap-ip $LC_POOL_ADMIN

11. Review the final details of the cluster and its IPs.
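Steps 5 through 9 amount to hand edits in the generated cluster config. The fragment below is a rough sketch under assumed values (the cluster name, IP addresses, and key are placeholders, and only the fields touched by those steps are shown); field names follow the EKS Anywhere Tinkerbell schema:

```yaml
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-eksa-cluster
spec:
  controlPlaneConfiguration:
    count: 1
    endpoint:
      host: "147.75.100.14"        # placeholder: the control-plane VIP from the pool
    machineGroupRef:
      kind: TinkerbellMachineConfig
      name: my-eksa-cluster-cp
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: TinkerbellDatacenterConfig
metadata:
  name: my-eksa-cluster
spec:
  tinkerbellIP: "147.75.100.15"    # placeholder: the Tinkerbell VIP
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: TinkerbellMachineConfig
metadata:
  name: my-eksa-cluster-cp
spec:
  hardwareSelector:
    type: cp                       # matches the type=cp label in hardware.csv
  osFamily: ubuntu
  users:
    - name: ec2-user
      sshAuthorizedKeys:
        - "ssh-rsa AAAA... (paste the key from /root/.ssh/id_rsa.pub)"
```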

Configuring Kubernetes

1. Log in to the eksa-admin machine.

2. Then copy the generated kubeconfig to the default location:

cp /root/my-eksa-cluster/my-eksa-cluster-eks-a-cluster.kubeconfig /root/.kube/config

3. You can now use the cluster like you would any Kubernetes cluster.

4. Deploy the test application.

We’ve created a simple test application for you to verify your cluster is working properly. You can deploy it with the following command (this is the standard EKS Anywhere test workload):

kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"

5. To see the new pod running in your cluster, type:

kubectl get pods -l app=hello-eks-a

6. To check the logs of the container and make sure it started successfully, type:

kubectl logs -l app=hello-eks-a

7. There is also a default web page being served from the container. You can forward the deployment port to your local machine with

kubectl port-forward deploy/hello-eks-a 8000:80

8. To view the example application page:

curl localhost:8000

Deploy Sample Application Using Helm:

Step 1:

SSH into the EKS-A Admin node and follow the EKS-A on Bare Metal instructions to continue within the Kubernetes environment.

Step 2: Install Helm using the official installer script, and check the version:

curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

helm version

Step 3: Add a chart repository so we have something to start with; the later steps use the Bitnami repo:

helm repo add bitnami https://charts.bitnami.com/bitnami

Step 4: Finally, let’s configure Bash completion for the helm command:

source <(helm completion bash)

Step 5: To update Helm’s local list of charts, run:

helm repo update

Step 6:

Next, we are going to install NGINX with the help of Helm.

Step 7:

Search the Bitnami repo, just for nginx:

helm search repo bitnami/nginx

Step 8: To install bitnami/nginx with the help of Helm, run the following command:

helm install mywebserver bitnami/nginx

Step 9: To review the underlying Kubernetes services, pods, and deployments, run the following command:

kubectl get svc,po,deploy

Step 10: To verify the Pod object was successfully deployed, we can run the following command:

kubectl get pods

Step 11: To get the external IP of the NGINX service, run the following command:

kubectl get service mywebserver-nginx

Conclusion
In this blog, we have taken Equinix Bare Metal and EKS knowledge and applied best practices to help you accelerate your infrastructure modernization, along with sharing some of the features and benefits of the Equinix Metal server when it comes to deploying EKS clusters. TensorIoT’s experts can migrate and modernize your infrastructure into both cloud and hybrid setups according to your needs, so if you’re interested in streamlining your process, please contact us today to learn more!

For more information on these subjects, you may also be interested in reading:

Getting started with Amazon EKS Anywhere on Bare Metal: https://aws.amazon.com/blogs/containers/getting-started-with-eks-anywhere-on-bare-metal/

[1] https://github.com/tinkerbell/

[2] https://github.com/kubernetes-sigs/kind

[3] https://github.com/kubernetes-sigs/cluster-api
