Running HashiQube on Multi-Arch (Arm and x86) / Multi-OS (Linux, Mac, Windows) with Docker Desktop and Vagrant

Riaan Nolan
8 min read · May 30, 2022


With the arrival of the Apple Mac M1 Arm Chipsets, having a development environment that works for everyone became a little trickier.

First off, VirtualBox doesn’t run on the Apple M1 ARM chipset. That left me with just Docker, which runs fairly consistently on Mac (Intel and ARM chips), Windows and Linux.

But let’s see if we can run HashiQube on Multi-Arch and Multi-OS consistently using just Vagrant and Docker Desktop ❤

Hands-on DevOps Lab HashiQube

A bit of history: 3 years ago I created HashiQube, a Development Lab using all the HashiCorp products.

HashiQube has helped me and many others get to know the HashiCorp products: how they work, and how they integrate with many other services.

See: https://hashiqube.com and https://github.com/star3am/hashiqube

HashiQube uses Vagrant and its providers, such as Docker, VirtualBox and Hyper-V, to declare and run virtual machines or Docker containers with all the HashiCorp products running: Vault, Consul, Nomad, Terraform, Packer, Waypoint, Boundary and, as a bonus, Sentinel.

With the arrival of the Apple M1 ARM chipset, I needed to run HashiQube on Multi-Arch and Multi-OS reliably. It took some time to figure it out, lol.

My colleague Greg Luxford sent me a Medium article one day that looked like a possible solution: https://betterprogramming.pub/managing-virtual-machines-under-vagrant-on-a-mac-m1-aebc650bc12c

And with some help from https://github.com/rofrano/vagrant-docker-provider, I was able to get a Dockerfile that works on both Mac ARM and Intel chips. But I wanted Vagrant to orchestrate all of this for me; let’s see what we did to accomplish that.
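
For reference, here is a minimal sketch of what such an image might look like. This is an assumption modelled on the vagrant-docker-provider approach, not HashiQube’s exact Dockerfile; the package list and vagrant user setup are illustrative:

FROM ubuntu:20.04

ENV DEBIAN_FRONTEND=noninteractive

# systemd as PID 1, plus sshd and sudo so Vagrant can connect and provision
RUN apt-get update && \
    apt-get install -y systemd systemd-sysv openssh-server sudo && \
    rm -rf /var/lib/apt/lists/*

# Vagrant connects as the "vagrant" user with passwordless sudo
# (a real image would also install Vagrant's insecure public key in
# /home/vagrant/.ssh/authorized_keys)
RUN useradd -m -s /bin/bash vagrant && \
    echo 'vagrant ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/vagrant

EXPOSE 22

# Boot systemd, which in turn starts sshd - this matches the
# "/usr/sbin/init" COMMAND visible in docker ps later on
CMD ["/usr/sbin/init"]

Because the ubuntu base image is multi-arch, the same Dockerfile builds on both ARM and Intel machines. The Vagrant side of that orchestration is the Docker provider block below.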

# IMPORTANT:
# https://developers.redhat.com/blog/2016/09/13/running-systemd-in-a-non-privileged-container
# https://github.com/containers/podman/issues/3295
# --tmpfs /tmp : Create a temporary filesystem in /tmp
# --tmpfs /run : Create another temporary filesystem in /run
# --tmpfs /run/lock : Apparently having a tmpfs in /run isn't enough - you ALSO need one in /run/lock
# -v /sys/fs/cgroup:/sys/fs/cgroup:ro : Mount the CGroup kernel configuration values into the container
# https://github.com/docker/for-mac/issues/6073
# Docker Desktop now uses cgroupv2. If you need to run systemd in a container then:
# * Ensure your version of systemd supports cgroupv2. It must be at least systemd 247. Consider upgrading any centos:7 images to centos:8.
# * Containers running systemd need the following options: --privileged --cgroupns=host -v /sys/fs/cgroup:/sys/fs/cgroup:rw
# https://betterprogramming.pub/managing-virtual-machines-under-vagrant-on-a-mac-m1-aebc650bc12c
config.vm.provider "docker" do |docker, override|
  override.vm.box = nil
  docker.build_dir = "."
  docker.remains_running = true
  docker.has_ssh = true
  docker.privileged = true
  # BUG: https://github.com/hashicorp/vagrant/issues/12602
  # moved to create_args
  # docker.volumes = ['/sys/fs/cgroup:/sys/fs/cgroup:rw']
  docker.create_args = ['-v', '/sys/fs/cgroup:/sys/fs/cgroup:rw', '--cgroupns=host', '--tmpfs=/tmp:exec', '--tmpfs=/var/lib/docker:mode=0777,dev,size=15g,suid,exec', '--tmpfs=/run', '--tmpfs=/run/lock'] # '--memory=10g', '--memory-swap=14g', '--oom-kill-disable'
end

Let’s get our tools together. We will need Docker Desktop and Vagrant.

Vagrant is really the “Scaffolding in Code”, so we are able to use Vagrant to:

  • Entirely define a machine’s state in code
  • Utilize provisioners to modularize a solution (see the sketch just below this list)
  • Utilize providers on different Operating Systems and Architectures
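
To make that concrete, here is a minimal Vagrantfile sketch of named provisioners that can be targeted individually with --provision-with. The names mirror HashiQube’s provisioners, but the inline scripts are placeholder assumptions:

Vagrant.configure("2") do |config|
  # run: "never" means a provisioner only runs when asked for explicitly,
  # e.g. vagrant up --provision-with basetools
  config.vm.provision "basetools", type: "shell", run: "never", inline: <<-SHELL
    apt-get update && apt-get install -y curl unzip jq
  SHELL

  config.vm.provision "vault", type: "shell", run: "never", inline: <<-SHELL
    echo "download, install and start Vault here"
  SHELL
end

This is what makes the per-component vagrant up commands below possible: each product is a separate, individually runnable provisioner.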

Consider being able to run the following commands on Mac Intel, Linux, Windows and Mac ARM machines.

  • vagrant up --provision-with basetools --provider docker
  • vagrant up --provision-with vault --provider docker
  • vagrant up --provision-with docker --provider docker
  • vagrant up --provision-with nomad --provider docker
  • vagrant up --provision-with consul --provider docker
  • vagrant up --provision-with terraform --provider docker

As you can probably tell by now, we want to run a Docker daemon on our laptop or personal computer (the machine that runs HashiQube), but we also want a Docker daemon running inside the container or VM (used by Nomad or Minikube to run containers).
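
A quick way to see that there really are two separate daemons is to compare their server versions (--format is standard docker info templating; this condenses the full output shown below):

# Host daemon (Docker Desktop)
docker info --format '{{.ServerVersion}}'

# Daemon inside the HashiQube container, via Vagrant
vagrant ssh -c "docker info --format '{{.ServerVersion}}'"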

Let’s check our Docker client and server versions on our Mac.

➜  hashiqube git:(master) docker info
Client:
Context: default
Debug Mode: false
Plugins:
buildx: Docker Buildx (Docker Inc., v0.8.2)
compose: Docker Compose (Docker Inc., v2.5.0)
sbom: View the packaged-based Software Bill Of Materials (SBOM) for an image (Anchore Inc., 0.6.0)
scan: Docker Scan (Docker Inc., v0.17.0)
Server:
Containers: 3
Running: 2
Paused: 0
Stopped: 1
Images: 74
Server Version: 20.10.14
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Cgroup Version: 2

Let’s list our containers using docker ps

➜  hashiqube git:(master) docker ps
CONTAINER ID   IMAGE          COMMAND            CREATED        STATUS        PORTS                  NAMES
a37c18acc230   01d990ee1dba   "/usr/sbin/init"   44 hours ago   Up 16 hours   0.0.0.0:1433->1433/tcp, 0.0.0.0:3000->3000/tcp, 0.0.0.0:3306->3306/tcp, 0.0.0.0:3333->3333/tcp, 0.0.0.0:4566->4566/tcp, 0.0.0.0:4646->4646/tcp, 0.0.0.0:5001-5002->5001-5002/tcp, 0.0.0.0:5432->5432/tcp, 0.0.0.0:5580->5580/tcp, 0.0.0.0:5601-5602->5601-5602/tcp, 0.0.0.0:8043->8043/tcp, 0.0.0.0:8088->8088/tcp, 0.0.0.0:8200->8200/tcp, 0.0.0.0:8500->8500/tcp, 0.0.0.0:8800->8800/tcp, 0.0.0.0:8888-8889->8888-8889/tcp, 0.0.0.0:9000-9002->9000-9002/tcp, 0.0.0.0:9011->9011/tcp, 0.0.0.0:9022->9022/tcp, 0.0.0.0:9080-9081->9080-9081/tcp, 0.0.0.0:9090->9090/tcp, 0.0.0.0:9093->9093/tcp, 0.0.0.0:9200->9200/tcp, 0.0.0.0:9333->9333/tcp, 0.0.0.0:9443->9443/tcp, 0.0.0.0:9702->9702/tcp, 0.0.0.0:9998-9999->9998-9999/tcp, 0.0.0.0:10888->10888/tcp, 0.0.0.0:18888->18888/tcp, 0.0.0.0:19200->19200/tcp, 0.0.0.0:19702->19702/tcp, 0.0.0.0:31506->31506/tcp, 0.0.0.0:32022->32022/tcp, 0.0.0.0:8600->8600/udp, 0.0.0.0:2255->22/tcp, 0.0.0.0:33389->389/tcp, 0.0.0.0:4443->443/tcp, 0.0.0.0:5005->50001/tcp   hashiqube_hashiqube0serviceconsul_1653711358

And finally let’s check our Docker Client and Server versions inside that container, using Vagrant.

➜  hashiqube git:(master) vagrant ssh -c "docker info"
Client:
Context: default
Debug Mode: false
Plugins:
app: Docker App (Docker Inc., v0.9.1-beta3)
buildx: Docker Buildx (Docker Inc., v0.8.2-docker)
scan: Docker Scan (Docker Inc., v0.17.0)
Server:
Containers: 2
Running: 2
Paused: 0
Stopped: 0
Images: 7
Server Version: 20.10.16
Storage Driver: overlay2
Backing Filesystem: tmpfs
Supports d_type: true
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
Cgroup Driver: systemd
Cgroup Version: 2

And the same via docker exec:

➜  hashiqube git:(master) docker exec -it a37c18acc230 /bin/bash -c "docker info"
Client:
Context: default
Debug Mode: false
Plugins:
app: Docker App (Docker Inc., v0.9.1-beta3)
buildx: Docker Buildx (Docker Inc., v0.8.2-docker)
scan: Docker Scan (Docker Inc., v0.17.0)
Server:
Containers: 2
Running: 2
Paused: 0
Stopped: 0
Images: 7
Server Version: 20.10.16
Storage Driver: overlay2
Backing Filesystem: tmpfs
Supports d_type: true
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
Cgroup Driver: systemd
Cgroup Version: 2

What made this all work were the tmpfs and cgroup volumes we mount inside the container via the tmpfs and mount arguments. Without the line below, the storage driver inside the container was VFS (painfully slow); with this config I was able to get OverlayFS on top of tmpfs. For more info see: https://docs.docker.com/storage/storagedriver/select-storage-driver/

--tmpfs=/var/lib/docker:mode=0777,dev,size=15g,suid,exec

And I was unable to boot the container at all without this line:

-v /sys/fs/cgroup:/sys/fs/cgroup:rw
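
You can confirm which storage driver the inner daemon ended up with using the same docker info templating:

# Prints "overlay2" with the tmpfs mount in place; "vfs" without it
vagrant ssh -c "docker info --format '{{.Driver}}'"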

Let’s bring it up and see what it looks like. We will start by bringing up the image generic/ubuntu2004:

vagrant up --provision-with basetools --provider docker
vagrant up --provision-with docker --provider docker
vagrant up --provision-with docsify --provider docker
vagrant up --provision-with vault --provider docker
vagrant up --provision-with consul --provider docker
vagrant up --provision-with nomad --provider docker

and this will allow you to ssh into the container like so:

vagrant ssh

vagrant up --provision-with docsify --provider docker

Access Documentation on http://localhost:3333/

vagrant up --provision-with consul --provider docker

Access Consul on http://localhost:8500/

vagrant up --provision-with vault --provider docker

Access Vault on http://localhost:8200/

vagrant up --provision-with nomad --provider docker

Access Nomad on http://localhost:4646/

vagrant up --provision-with waypoint --provider docker

Access Waypoint on https://localhost:9702/
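
Once they are up, a quick smoke test from the host confirms each service is answering. These are the standard Consul, Vault and Nomad HTTP API status endpoints, on the ports mapped above:

# Consul: returns the current leader address
curl -s http://localhost:8500/v1/status/leader

# Vault: returns initialization and seal status
curl -s http://localhost:8200/v1/sys/health

# Nomad: returns the current leader address
curl -s http://localhost:4646/v1/status/leader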

Things that tripped me up while trying to get this to work were:

  • WSL2 wrong volume mounts and being unable to boot Ubuntu 20.04
  • Finding a Vagrant box that supports VirtualBox, Hyper-V and Docker
  • OOM-killed containers (see the note after this list)
  • OverlayFS on top of OverlayFS (a Docker daemon running inside a Docker container)
  • Cgroups
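
On the OOM front, the Vagrantfile above keeps a possible mitigation commented out at the end of create_args. If your containers get OOM-killed, you could try appending those limits (the values are the ones from that comment; tune them for your machine):

docker.create_args += ['--memory=10g', '--memory-swap=14g', '--oom-kill-disable']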

Links that helped me throughout this project:

  • https://betterprogramming.pub/managing-virtual-machines-under-vagrant-on-a-mac-m1-aebc650bc12c
  • https://github.com/rofrano/vagrant-docker-provider
  • https://developers.redhat.com/blog/2016/09/13/running-systemd-in-a-non-privileged-container
  • https://github.com/containers/podman/issues/3295
  • https://github.com/docker/for-mac/issues/6073
  • https://github.com/hashicorp/vagrant/issues/12602
  • https://docs.docker.com/storage/storagedriver/select-storage-driver/

As a BONUS! While testing Waypoint, I had the opportunity to bring up Minikube as well; let’s see how that works.

vagrant up --provision-with minikube --provider docker

Access Minikube Dashboard on http://localhost:10888

vagrant@hashiqube0:~$ kubectl get po,svc -A
NAMESPACE              NAME                                            READY   STATUS      RESTARTS      AGE
default                pod/hello-minikube-7bc9d7884c-2sjk9             1/1     Running     0             12m
ingress-nginx          pod/ingress-nginx-admission-create-wj4n9        0/1     Completed   0             14m
ingress-nginx          pod/ingress-nginx-admission-patch-2p9rt         0/1     Completed   0             14m
ingress-nginx          pod/ingress-nginx-controller-cc8496874-2hm9g    1/1     Running     0             14m
kube-system            pod/coredns-64897985d-j6r5t                     1/1     Running     0             14m
kube-system            pod/etcd-minikube                               1/1     Running     0             14m
kube-system            pod/kube-apiserver-minikube                     1/1     Running     0             14m
kube-system            pod/kube-controller-manager-minikube            1/1     Running     0             14m
kube-system            pod/kube-proxy-4t86q                            1/1     Running     0             14m
kube-system            pod/kube-scheduler-minikube                     1/1     Running     0             14m
kube-system            pod/registry-88qj5                              1/1     Running     0             13m
kube-system            pod/registry-proxy-nd78l                        1/1     Running     0             13m
kube-system            pod/storage-provisioner                         1/1     Running     1 (14m ago)   14m
kubernetes-dashboard   pod/dashboard-metrics-scraper-58549894f-kzsv6   1/1     Running     0             12m
kubernetes-dashboard   pod/kubernetes-dashboard-ccd587f44-962t5        1/1     Running     0             12m

NAMESPACE              NAME                                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
default                service/hello-minikube                       NodePort    10.100.135.91   <none>        8080:30559/TCP               12m
default                service/kubernetes                           ClusterIP   10.96.0.1       <none>        443/TCP                      15m
ingress-nginx          service/ingress-nginx-controller             NodePort    10.96.220.171   <none>        80:30342/TCP,443:32452/TCP   14m
ingress-nginx          service/ingress-nginx-controller-admission   ClusterIP   10.99.54.83     <none>        443/TCP                      14m
kube-system            service/kube-dns                             ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP       15m
kube-system            service/registry                             ClusterIP   10.97.119.219   <none>        80/TCP,443/TCP               13m
kubernetes-dashboard   service/dashboard-metrics-scraper            ClusterIP   10.97.74.188    <none>        8000/TCP                     12m
kubernetes-dashboard   service/kubernetes-dashboard                 ClusterIP   10.96.193.213   <none>        80/TCP                       12m
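
To hit the hello-minikube service shown above from inside the VM, a standard kubectl port-forward works; the service name and port 8080 come straight from the output:

# Forward local port 8080 to the hello-minikube service, then test it
kubectl port-forward service/hello-minikube 8080:8080 &
curl http://localhost:8080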

Having a consistent Developer Lab defined in code enables you to:

  • Get developers started and set up in a short time
  • Keep environments consistent, which means fewer surprises in production
  • Test many different versions and integrations quickly

And that’s a wrap for now, folks. I hope you enjoyed this as much as I did, and that it helps you create a consistent Developer Lab that runs on Multi-Arch and Multi-OS environments.

For more information, see https://hashiqube.com or https://github.com/star3am/hashiqube, and my LinkedIn profile https://www.linkedin.com/in/riaannolan/

#weareservian #hashicorp #hashiqube #vault #nomad #consul #waypoint #kubernetes
