Getting Started with AirGap Deployment of Longhorn Block Storage with Zarf

Jason van Brackel
Published in Defense Unicorns
14 min read · Mar 20, 2023

What is Longhorn?

Longhorn is a cloud-native block storage project from the Cloud Native Computing Foundation designed to be deployed on Kubernetes clusters. For backups, it uses object storage such as Amazon S3, or MinIO for non-cloud deployments.

It provides block storage that is highly available, scalable, and fault-tolerant by distributing the storage across multiple nodes in a Kubernetes cluster and replicating the data across those nodes in Longhorn Volumes.

Longhorn High Level Architecture source: https://longhorn.io

This ensures that even if a node fails, the data remains available and can be accessed from other nodes. Longhorn also provides backup and restore, snapshotting, and encryption to ensure that data is secure and recoverable.
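As a rough sketch of how the object-storage backup target mentioned above is wired up, Longhorn's backup settings point at an S3-compatible endpoint. The bucket, region, and secret names below are hypothetical, and the key names should be verified against the Longhorn chart's values.yaml for your version:

```yaml
# Hypothetical Helm values fragment for Longhorn backups.
defaultSettings:
  # S3-style backup target; for MinIO, point this at your MinIO service.
  backupTarget: s3://my-backup-bucket@us-east-1/
  # Kubernetes Secret holding AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
  # (and an endpoint override when using MinIO).
  backupTargetCredentialSecret: my-backup-secret
```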

In addition, it supports the creation of Disaster Recovery clusters that can be used to restore data if an entire cluster goes down.

Note: This blog post has an associated YouTube video you can find on the Defense Unicorns YouTube channel. Because these examples evolved during the month this post was developed, there are slight differences between the video, the screenshots, and the code.

Environment Setup

For this example, we’ll assume no test environment setup, so your mileage may vary.

Install the Zarf Binary

doug in ~/src/github.com/defenseunicorns 🦄 zarf version
v0.24.2

Create a Kubernetes Cluster

You’ll need a Kubernetes cluster. Installing a container runtime like Docker and setting up a Kubernetes cluster are beyond the scope of this tutorial, but I’ll quickly show you how to create a cluster with K3d.

doug in ~/src/github.com/defenseunicorns 🦄 k3d cluster create longhorn-example
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-longhorn-example'
INFO[0000] Created image volume k3d-longhorn-example-images
INFO[0000] Starting new tools node...
INFO[0000] Starting Node 'k3d-longhorn-example-tools'
INFO[0001] Creating node 'k3d-longhorn-example-server-0'
INFO[0001] Creating LoadBalancer 'k3d-longhorn-example-serverlb'
INFO[0001] Using the k3d-tools node to gather environment information
INFO[0001] Starting new tools node...
INFO[0001] Starting Node 'k3d-longhorn-example-tools'
INFO[0002] Starting cluster 'longhorn-example'
INFO[0002] Starting servers...
INFO[0002] Starting Node 'k3d-longhorn-example-server-0'
INFO[0006] All agents already running.
INFO[0006] Starting helpers...
INFO[0006] Starting Node 'k3d-longhorn-example-serverlb'
INFO[0012] Injecting records for hostAliases (incl. host.k3d.internal) and for 3 network members into CoreDNS configmap...
INFO[0015] Cluster 'longhorn-example' created successfully!
INFO[0015] You can now use it like this:
kubectl cluster-info
  • Note: If you get the following error, make sure Docker is running.
doug in ~/src/github.com/defenseunicorns 🦄 k3d cluster create
ERRO[0000] Failed to get nodes for cluster 'k3s-default': docker failed to get containers with labels 'map[k3d.cluster:k3s-default]': failed to list containers: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
INFO[0000] Prep: Network
ERRO[0000] Failed Cluster Preparation: Failed Network Preparation: failed to create cluster network: failed to check for duplicate docker networks: docker failed to list networks: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
ERRO[0000] Failed to create cluster >>> Rolling Back
INFO[0000] Deleting cluster 'k3s-default'
ERRO[0000] Failed to get nodes for cluster 'k3s-default': docker failed to get containers with labels 'map[k3d.cluster:k3s-default]': failed to list containers: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
ERRO[0000] failed to get cluster: No nodes found for given cluster
FATA[0000] Cluster creation FAILED, also FAILED to rollback changes!

Initialize Zarf

Now that you have a Kubernetes cluster, you’re going to initialize Zarf with the zarf init command. The output will look different depending on your operating system. Feel free to accept or reject the optional components; they’re not essential to our work today.

doug in ~/src/github.com/defenseunicorns 🦄 zarf init
Saving log file to /var/folders/bk/rz1xx2sd5zn134c0_j1s2n5r0000gp/T/zarf-2023-02-23-11-39-26-2360079011.log
(Zarf ASCII art banner omitted)

It seems the init package could not be found locally, but can be downloaded from
https://github.com/defenseunicorns/zarf/releases/download/v0.24.2/zarf-init-arm64-v0.24.2.tar.zst
Note: This will require an internet connection.
? Do you want to download this init package? Yes
✔ Downloading https://github.com/defenseunicorns/zarf/releases/download/v0.24.2/zarf-init-arm64-v0.24.2.tar.zst
✔ Loading Zarf Package /Users/jason/.zarf-cache/zarf-init-arm64-v0.24.2.tar.zst
kind: ZarfInitConfig
metadata:
  name: init
  description: Used to establish a new Zarf cluster
  architecture: arm64
build:
  terminal: fv-az363-679
  user: runner
  architecture: arm64
  timestamp: Tue, 14 Feb 2023 02:03:31 +0000
  version: v0.24.2
  migrations:
  - scripts-to-actions
components:
- name: zarf-injector
  description: |
    Bootstraps a Kubernetes cluster by cloning a running pod in the cluster and hosting the registry image.
    Removed and destroyed after the Zarf Registry is self-hosting the registry image.
  required: true
  cosignKeyPath: cosign.pub
  files:
  - source: sget://defenseunicorns/zarf-injector:arm64-2023-02-09
    target: "###ZARF_TEMP###/zarf-injector"
    executable: true
- name: zarf-seed-registry
  description: |
    Deploys the Zarf Registry using the registry image provided by the Zarf Injector.
  required: true
  charts:
  - name: docker-registry
    releaseName: zarf-docker-registry
    version: 1.0.0
    namespace: zarf
    valuesFiles:
    - packages/zarf-registry/registry-values.yaml
    - packages/zarf-registry/registry-values-seed.yaml
    localPath: packages/zarf-registry/chart
- name: zarf-registry
  description: |
    Updates the Zarf Registry to use the self-hosted registry image.
    Serves as the primary docker registry for the cluster.
  required: true
  charts:
  - name: docker-registry
    releaseName: zarf-docker-registry
    version: 1.0.0
    namespace: zarf
    valuesFiles:
    - packages/zarf-registry/registry-values.yaml
    localPath: packages/zarf-registry/chart
  manifests:
  - name: registry-connect
    namespace: zarf
    files:
    - packages/zarf-registry/connect.yaml
  - name: kep-1755-registry-annotation
    namespace: zarf
    files:
    - packages/zarf-registry/configmap.yaml
  images:
  - registry:2.8.1
- name: zarf-agent
  description: |
    A Kubernetes mutating webhook to enable automated URL rewriting for container
    images and git repository references in Kubernetes manifests. This prevents
    the need to manually update URLs from their original sources to the Zarf-managed
    docker registry and git server.
  required: true
  actions:
    onCreate:
      before:
      - cmd: make init-package-local-agent AGENT_IMAGE="agent:v0.24.2"
  manifests:
  - name: zarf-agent
    namespace: zarf
    files:
    - packages/zarf-agent/manifests/service.yaml
    - packages/zarf-agent/manifests/secret.yaml
    - packages/zarf-agent/manifests/deployment.yaml
    - packages/zarf-agent/manifests/webhook.yaml
  images:
  - ghcr.io/defenseunicorns/zarf/agent:v0.24.2
- name: logging
  description: |
    Deploys the Promtail Grafana & Loki (PGL) stack.
    Aggregates logs from different containers and presents them in a web dashboard.
    Recommended if no other logging stack is deployed in the cluster.
  charts:
  - name: loki-stack
    releaseName: zarf-loki-stack
    url: https://grafana.github.io/helm-charts
    version: 2.8.9
    namespace: zarf
    valuesFiles:
    - packages/logging-pgl/pgl-values.yaml
  manifests:
  - name: logging-connect
    namespace: zarf
    files:
    - packages/logging-pgl/connect.yaml
  images:
  - docker.io/grafana/promtail:2.7.0
  - grafana/grafana:8.3.5
  - grafana/loki:2.6.1
  - quay.io/kiwigrid/k8s-sidecar:1.19.2
- name: git-server
  description: |
    Deploys Gitea to provide git repositories for Kubernetes configurations.
    Required for GitOps deployments if no other git server is available.
  actions:
    onDeploy:
      after:
      - maxTotalSeconds: 60
        maxRetries: 3
        cmd: ./zarf internal create-read-only-gitea-user
  charts:
  - name: gitea
    releaseName: zarf-gitea
    url: https://dl.gitea.io/charts
    version: 7.0.2
    namespace: zarf
    valuesFiles:
    - packages/gitea/gitea-values.yaml
  manifests:
  - name: git-connect
    namespace: zarf
    files:
    - packages/gitea/connect.yaml
  images:
  - gitea/gitea:1.18.3
variables:
- name: K3S_ARGS
  description: Arguments to pass to K3s
  default: --disable traefik
- name: REGISTRY_EXISTING_PVC
  description: "Optional: Use an existing PVC for the registry instead of creating a new one. If this is set, the REGISTRY_PVC_SIZE variable will be ignored."
- name: REGISTRY_PVC_SIZE
  description: The size of the persistent volume claim for the registry
  default: 20Gi
- name: REGISTRY_CPU_REQ
  description: The CPU request for the registry
  default: 100m
- name: REGISTRY_MEM_REQ
  description: The memory request for the registry
  default: 256Mi
- name: REGISTRY_CPU_LIMIT
  description: The CPU limit for the registry
  default: "3"
- name: REGISTRY_MEM_LIMIT
  description: The memory limit for the registry
  default: 2Gi
- name: REGISTRY_HPA_MIN
  description: The minimum number of registry replicas
  default: "1"
- name: REGISTRY_HPA_MAX
  description: The maximum number of registry replicas
  default: "5"
- name: REGISTRY_HPA_ENABLE
  description: Enable the Horizontal Pod Autoscaler for the registry
  default: "true"
- name: GIT_SERVER_EXISTING_PVC
  description: "Optional: Use an existing PVC for the git server instead of creating a new one. If this is set, the GIT_SERVER_PVC_SIZE variable will be ignored."
- name: GIT_SERVER_PVC_SIZE
  description: The size of the persistent volume claim for git server
  default: 10Gi
- name: GIT_SERVER_CPU_REQ
  description: The CPU request for git server
  default: 200m
- name: GIT_SERVER_MEM_REQ
  description: The memory request for git server
  default: 512Mi
- name: GIT_SERVER_CPU_LIMIT
  description: The CPU limit for git server
  default: "3"
- name: GIT_SERVER_MEM_LIMIT
  description: The memory limit for git server
  default: 2Gi
constants:
- name: AGENT_IMAGE
  value: agent:v0.24.2
This package has 9 artifacts with software bill-of-materials (SBOM) included. You can view them now
in the zarf-sbom folder in this directory or to go directly to one, open this in your browser:
/Users/jason/src/github.com/defenseunicorns/zarf-sbom/sbom-viewer-docker.io_grafana_promtail_2.7.0.html
* This directory will be removed after package deployment.
? Deploy this Zarf package? Yes
───────────────────────────────────────────────────────────────────────────────────────
name: logging
charts:
- name: loki-stack
  releaseName: zarf-loki-stack
  url: https://grafana.github.io/helm-charts
  version: 2.8.9
  namespace: zarf
  valuesFiles:
  - packages/logging-pgl/pgl-values.yaml
manifests:
- name: logging-connect
  namespace: zarf
  files:
  - packages/logging-pgl/connect.yaml
images:
- docker.io/grafana/promtail:2.7.0
- grafana/grafana:8.3.5
- grafana/loki:2.6.1
- quay.io/kiwigrid/k8s-sidecar:1.19.2
Deploys the Promtail Grafana & Loki (PGL) stack. Aggregates logs from different containers and
presents them in a web dashboard. Recommended if no other logging stack is deployed in the cluster.
? Deploy the logging component? No
───────────────────────────────────────────────────────────────────────────────────────
name: git-server
actions:
  onDeploy:
    after:
    - maxTotalSeconds: 60
      maxRetries: 3
      cmd: ./zarf internal create-read-only-gitea-user
charts:
- name: gitea
  releaseName: zarf-gitea
  url: https://dl.gitea.io/charts
  version: 7.0.2
  namespace: zarf
  valuesFiles:
  - packages/gitea/gitea-values.yaml
manifests:
- name: git-connect
  namespace: zarf
  files:
  - packages/gitea/connect.yaml
images:
- gitea/gitea:1.18.3
Deploys Gitea to provide git repositories for Kubernetes configurations. Required for GitOps
deployments if no other git server is available.
? Deploy the git-server component? No

📦 ZARF-INJECTOR COMPONENT

✔ Copying 1 files
✔ Gathering cluster information
✔ Attempting to bootstrap the seed image into the cluster
📦 ZARF-SEED-REGISTRY COMPONENT

✔ Loading the Zarf State from the Kubernetes cluster
✔ Processing helm chart docker-registry:1.0.0 from Zarf-generated helm chart

📦 ZARF-REGISTRY COMPONENT

✔ Creating port forwarding tunnel at http://127.0.0.1:61003/v2/_catalog
✔ Storing images in the zarf registry
✔ Processing helm chart docker-registry:1.0.0 from Zarf-generated helm chart
✔ Starting helm chart generation registry-connect
✔ Processing helm chart raw-init-zarf-registry-registry-connect:0.1.1677170366 from Zarf-generated helm chart
✔ Starting helm chart generation kep-1755-registry-annotation
✔ Processing helm chart raw-init-zarf-registry-kep-1755-registry-annotation:0.1.1677170366 from Zarf-generated helm chart

📦 ZARF-AGENT COMPONENT

✔ Creating port forwarding tunnel at http://127.0.0.1:61015/v2/_catalog
✔ Storing images in the zarf registry
✔ Starting helm chart generation zarf-agent
✔ Processing helm chart raw-init-zarf-agent-zarf-agent:0.1.1677170366 from Zarf-generated helm chart
✔ Zarf deployment complete

Application | Username | Password | Connect
Registry | zarf-push | QFFiAMQ4GBW6BJ01fc4czE2W | zarf connect registry
doug in ~/src/github.com/defenseunicorns 🦄

Longhorn Prerequisites

Longhorn takes advantage of a number of technologies like NFS and open-iscsi. Make sure these are installed on your nodes: the Longhorn Zarf package includes a built-in check for the Longhorn prerequisites spelled out in the Installation Requirements section of the Longhorn documentation. If they are not installed, the Zarf package deployment will fail.
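On Debian/Ubuntu nodes, for example, the host packages behind these prerequisites are commonly installed as sketched below. The package names are my assumption here; verify them against the Longhorn installation requirements for your distro and version. The sketch only builds and prints the command, since running it needs root on every node:

```shell
# Illustrative sketch: build the install command for Longhorn's host
# prerequisites on Debian/Ubuntu (verify package names against the Longhorn
# installation requirements for your distro and version).
PKGS="open-iscsi nfs-common"
INSTALL_CMD="sudo apt-get install -y ${PKGS}"
# Print rather than run it: installation needs root on every node.
echo "${INSTALL_CMD}"
```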

Getting The Longhorn Zarf Example

To create the Zarf Longhorn Package, you can start with the Zarf Longhorn Example.

  • Navigate to the Zarf repository.
  • Press the Code button and copy the HTTPS URL.
Screenshot of the Code drop down button with the HTTPS tab open.
  • Clone the git repository in your favorite terminal
doug in ~ 🦄 git clone https://github.com/defenseunicorns/zarf.git
Cloning into 'zarf'...
remote: Enumerating objects: 16346, done.
remote: Counting objects: 100% (846/846), done.
remote: Compressing objects: 100% (388/388), done.
remote: Total 16346 (delta 515), reused 741 (delta 450), pack-reused 15500
Receiving objects: 100% (16346/16346), 37.57 MiB | 7.04 MiB/s, done.
Resolving deltas: 100% (10796/10796), done.
  • Navigate to the examples folder.
doug in ~ 🦄 cd zarf/examples/longhorn
doug in ~/zarf/examples/longhorn on main 🦄

Longhorn Zarf Files

  • Let’s examine the Zarf package files. We’ll discuss each file one at a time.

README.md

The README.md file is used to populate the example page in the Zarf documentation. It also populates the folder view for that directory on github.com.

values.yaml

The values.yaml file is the default values file for Longhorn, with modifications required to support K3s. Modify this for your needs.

connect.yaml

This file creates a Service for the longhorn-ui so an operator can connect to it with the zarf connect command, which creates a Kubernetes port-forward to the Service.

Any service with the zarf.dev/connect-name label can be accessed using Zarf. The zarf.dev/connect-description annotation is used to populate the output of the zarf connect list command.

In addition to this label and annotation, Zarf also supports the zarf.dev/connect-url annotation. This URL is the initial URL that will be opened in the web browser when the operator runs the zarf connect command. You can see examples of this in the dos-games example and in the docker registry included in the ZarfInitPackage.
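Putting those pieces together, a connect Service might look like this minimal sketch. The Service name, port, and selector are hypothetical; the label and annotations are the ones described above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-ui
  labels:
    # Makes the service reachable via `zarf connect example`.
    zarf.dev/connect-name: example
  annotations:
    # Shown in the output of `zarf connect list`.
    zarf.dev/connect-description: "Connect to the example UI"
    # Initial URL path opened in the browser by `zarf connect`.
    zarf.dev/connect-url: "/"
spec:
  selector:
    app: example-ui
  ports:
  - port: 80
    targetPort: 8080
```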

zarf.yaml

This is the ZarfPackageConfig for the Longhorn example. Let’s explore each section of this manifest’s components.

Screenshot of the zarf.yaml file lines concerned with the Longhorn environment check

This is the Longhorn environment check, used to check the nodes and ensure they have installed the Longhorn prerequisites.

Screenshot of the zarf.yaml file lines concerned with the onDeploy before action and the onRemove before action. These ensure that the environment is checked for Longhorn prerequisites, and that a prerequisite kubectl command is run so Longhorn can be removed when requested.

These actions are onDeploy and onRemove. Zarf supports actions for different events. In this example, the onDeploy > before action runs the environment check. The onRemove > before action runs a kubectl command that allows the operator to remove Longhorn. It’s a safety precaution so you don’t remove data unintentionally: if you don’t have backups and run a Zarf remove, your data and replicas will be irreversibly gone.

Screenshot of the zarf.yaml file lines concerned with files included in the package. jq is a prerequisite of the environment check and must be included with the package.

jq is a JSON command line parser used by the Longhorn environment check. It is installed on most Linux systems, but we can’t assume that, so we include it in the package.

Screenshot of the zarf.yaml file lines concerned with the Kubernetes manifest for the zarf connect service.

These lines are concerned with the aforementioned connect.yaml Kubernetes service manifest.

Screenshot of the zarf.yaml file lines concerned with the Longhorn helm chart and the associated Longhorn helm values file.

When deploying the chart with Zarf, these lines will pull the Helm chart from the Longhorn Helm repository and apply the aforementioned local values.yaml file.

Screenshot of the zarf.yaml file lines concerned with the container images required to install Longhorn in an air-gapped environment.

These are the images required to deploy and run Longhorn in an air-gapped environment.

Creating the Zarf Package

  • Run the zarf package create command in the Longhorn directory.
doug in ~/zarf/examples/longhorn on main 🦄 zarf package create .

Saving log file to
/var/folders/bk/rz1xx2sd5zn134c0_j1s2n5r0000gp/T/zarf-2023-03-03-11-23-49-1243511006.log

Using build directory .

kind: ZarfPackageConfig
metadata:
  name: longhorn-example
  description: Example package for Longhorn cloud native distributed block storage for Kubernetes
  architecture: arm64
build:
  terminal: JvBUnicornMBP5
  user: jason
  architecture: arm64
  timestamp: Fri, 03 Mar 2023 11:23:49 -0500
  version: v0.24.2-6-g22e84ab
  migrations:
  - scripts-to-actions
components:
- name: longhorn-environment-check
  required: true
  files:
  - source: https://raw.githubusercontent.com/longhorn/longhorn/v1.4.0/scripts/environment_check.sh
    target: environment_check.sh
    executable: true
- name: longhorn
  description: Deploy Longhorn into a Kubernetes cluster. https://longhorn.io
  required: true
  actions:
    onDeploy:
      before:
      - env:
        - PATH=$PATH:./
      - cmd: alias kubectl="zarf tools kubectl"
      - cmd: ./environment_check.sh
    onRemove:
      before:
      - env:
        - PATH=$PATH:./
      - cmd: alias kubectl="zarf tools kubectl"
      - cmd: "kubectl -n longhorn-system patch -p '{\"value\": \"true\"}' --type=merge lhs deleting-confirmation-flag"
  files:
  - source: https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64
    target: jq
    executable: true
  charts:
  - name: longhorn
    url: https://charts.longhorn.io
    version: 1.4.0
    namespace: longhorn-system
    valuesFiles:
    - values.yaml
  manifests:
  - name: longhorn-connect
    namespace: longhorn-system
    files:
    - connect.yaml
  images:
  - longhornio/csi-attacher:v3.4.0
  - longhornio/csi-provisioner:v2.1.2
  - longhornio/csi-resizer:v1.3.0
  - longhornio/csi-snapshotter:v5.0.1
  - longhornio/csi-node-driver-registrar:v2.5.0
  - longhornio/livenessprobe:v2.8.0
  - longhornio/backing-image-manager:v1.4.0
  - longhornio/longhorn-engine:v1.4.0
  - longhornio/longhorn-instance-manager:v1.4.0
  - longhornio/longhorn-manager:v1.4.0
  - longhornio/longhorn-share-manager:v1.4.0
  - longhornio/longhorn-ui:v1.4.0
  - longhornio/support-bundle-kit:v0.0.17
? Create this Zarf package? Yes

Specify a maximum file size for this package in Megabytes. Above this size, the package will be
split into multiple files. 0 will disable this feature.
? Please provide a value for "Maximum Package Size" 0


📦 LONGHORN-ENVIRONMENT-CHECK COMPONENT


✔ Downloading https://raw.githubusercontent.com/longhorn/longhorn/v1.4.0/scripts/environment_check.sh


📦 LONGHORN COMPONENT


✔ Processing helm chart longhorn:1.4.0 from repo https://charts.longhorn.io
✔ Downloading https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64
✔ Loading 1 K8s manifests


📦 COMPONENT IMAGES


✔ Loading metadata for 13 images. This step may take several seconds to complete.
✔ Pulling 13 images (786.80 MBs)
✔ Creating SBOMs for 13 images and 2 components with files.

Deploying the Zarf Package

  • You can deploy the Longhorn Package with the zarf package deploy command.
doug in ~/zarf/examples/longhorn on main 🦄 zarf package deploy

Saving log file to /tmp/zarf-2023-03-03-19-57-43-1761103531.log
✔ Loading Zarf Package zarf-package-longhorn-example-amd64.tar.zst

kind: ZarfPackageConfig
metadata:
  name: longhorn-example
  description: Example package for Longhorn cloud native distributed block storage for Kubernetes
  architecture: amd64
build:
  terminal: k3s80controller1
  user: kuber
  architecture: amd64
  timestamp: Fri, 03 Mar 2023 19:51:41 +0000
  version: v0.24.3
  migrations:
  - scripts-to-actions
components:
- name: longhorn-environment-check
  required: true
  files:
  - source: https://raw.githubusercontent.com/longhorn/longhorn/v1.4.0/scripts/environment_check.sh
    target: environment_check.sh
    executable: true
- name: longhorn
  description: Deploy Longhorn into a Kubernetes cluster. https://longhorn.io
  required: true
  actions:
    onDeploy:
      before:
      - env:
        - PATH=$PATH:./
      - cmd: alias kubectl="zarf tools kubectl"
      - cmd: ./environment_check.sh
    onRemove:
      before:
      - env:
        - PATH=$PATH:./
      - cmd: alias kubectl="zarf tools kubectl"
      - cmd: "kubectl -n longhorn-system patch -p '{\"value\": \"true\"}' --type=merge lhs deleting-confirmation-flag"
  files:
  - source: https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64
    target: jq
    executable: true
  charts:
  - name: longhorn
    url: https://charts.longhorn.io
    version: 1.4.0
    namespace: longhorn-system
    valuesFiles:
    - values.yaml
  manifests:
  - name: longhorn-connect
    namespace: longhorn-system
    files:
    - connect.yaml
  images:
  - longhornio/csi-attacher:v3.4.0
  - longhornio/csi-provisioner:v2.1.2
  - longhornio/csi-resizer:v1.3.0
  - longhornio/csi-snapshotter:v5.0.1
  - longhornio/csi-node-driver-registrar:v2.5.0
  - longhornio/livenessprobe:v2.8.0
  - longhornio/backing-image-manager:v1.4.0
  - longhornio/longhorn-engine:v1.4.0
  - longhornio/longhorn-instance-manager:v1.4.0
  - longhornio/longhorn-manager:v1.4.0
  - longhornio/longhorn-share-manager:v1.4.0
  - longhornio/longhorn-ui:v1.4.0
  - longhornio/support-bundle-kit:v0.0.17
This package has 15 artifacts with software bill-of-materials (SBOM) included. You can view them now
in the zarf-sbom folder in this directory or to go directly to one, open this in your browser:
/home/kuber/zarf/examples/longhorn/zarf-sbom/sbom-viewer-longhornio_backing-image-manager_v1.4.0.html

* This directory will be removed after package deployment.

? Deploy this Zarf package? Yes


📦 LONGHORN-ENVIRONMENT-CHECK COMPONENT


✔ Copying 1 files


📦 LONGHORN COMPONENT


✔ Completed ""
✔ Completed "alias kubectl="zarf tools kubectl""
[INFO] Required dependencies 'kubectl jq mktemp' are installed.
[INFO] Hostname uniqueness check is passed.
[INFO] Waiting for longhorn-environment-check pods to become ready (0/2)...
[INFO] All longhorn-environment-check pods are ready (2/2).
[WARN] Unable to check kernel config CONFIG_NFS_V4_1 on node k3s80controller1
[WARN] Unable to check kernel config CONFIG_NFS_V4_1 on node k3s80worker0
[WARN] Unable to check kernel config CONFIG_NFS_V4_2 on node k3s80controller1
[WARN] Unable to check kernel config CONFIG_NFS_V4_2 on node k3s80worker0
[WARN] NFS client kernel support, CONFIG_NFS_V4_1 CONFIG_NFS_V4_2, is not enabled on Longhorn nodes. Please refer to https://longhorn.io/docs/1.4.0/deploy/install/#installing-nfsv4-client for more information.
[INFO] Required packages are installed.
[INFO] MountPropagation is enabled.
[INFO] Cleaning up longhorn-environment-check pods...
[INFO] Cleanup completed.
✔ Completed "./environment_check.sh"
✔ Copying 1 files
✔ Loading the Zarf State from the Kubernetes cluster
✔ Creating port forwarding tunnel at http://127.0.0.1:43401/v2/_catalog
✔ Storing images in the zarf registry
✔ Processing helm chart longhorn:1.4.0 from https://charts.longhorn.io
✔ Starting helm chart generation longhorn-connect
✔ Processing helm chart raw-longhorn-example-longhorn-longhorn-connect:0.1.1677873463 from Zarf-generated helm chart
✔ Zarf deployment complete

Connect Command | Description
zarf connect longhorn-ui | Connect to the Longhorn User Interface

Using Zarf Connect to Confirm Everything is Working

  • Run the zarf connect command to make sure everything is installed properly.
doug in ~/zarf/examples/longhorn on main 🦄 zarf connect longhorn-ui

Saving log file to
/var/folders/bk/rz1xx2sd5zn134c0_j1s2n5r0000gp/T/zarf-2023-03-03-15-29-23-1248805990.log
⠋ Looking for a Zarf Connect Label in the cluster (0s)
✔ Creating port forwarding tunnel at http://127.0.0.1:58961
http://127.0.0.1:58961
Screenshot of the Longhorn frontend user interface.

This should open your web browser displaying the Longhorn front end user interface.
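With the deployment confirmed, workloads can request replicated block storage through Longhorn's StorageClass. A minimal PersistentVolumeClaim sketch, assuming the chart's default StorageClass name of longhorn (the claim name and size are hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-data
spec:
  accessModes:
  - ReadWriteOnce
  # Default StorageClass installed by the Longhorn chart.
  storageClassName: longhorn
  resources:
    requests:
      storage: 2Gi
```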

Connect with Us

If you have any questions about this or run into any issues, please feel free to reach out to us in the #Zarf channel in Kubernetes Slack.
