Deployment and configuration of MVI-Edge using KubeStellar

Jun 19, 2023

by Franco Stellari, Braulio Dumba, and Robert Filepp

Table of contents:

1. Overview
2. Install and run KubeStellar
3. Onboard two MicroShift clusters, running on two NVIDIA Jetson Nanos
4. Prepare KubeStellar Workload Management Workspaces (WMW)
5. Connect ArgoCD to KubeStellar WMWs
6. Deploy and configure IBM MVI-Edge using ArgoCD
7. Conclusions

1. Overview

IBM Maximo Visual Inspection (MVI) makes deep-learning-based Computer Vision (CV) more accessible to users, enabling them to quickly label images and to train and deploy AI vision models. IBM MVI is built for easy and rapid deployment and for simple training of models through a drag-and-drop interface. Containerized MVI-Edge and iOS applications are available to collect data, process images, and run inferences at the user's site. For example, an MVI-Edge application could be deployed on a manufacturing line to continuously inspect parts, components, and final products to guarantee high quality. In particular, MVI-Edge can be connected to multiple cameras, such as specialized cameras, drones, and vehicle cameras, while its dashboard allows users to manage and view inspections across multiple inspection points. Each MVI-Edge application can configure multiple Stations, each aggregating one or more Inspections, for example one per camera input source. In each Inspection, users can create custom rules that define whether the identified objects pass or fail the inspection. Both types of applications connect to a central MVI service, which we will refer to in this article as MVI-Hub, that allows users to develop and train new models and to store training images, inspection images, trained models, and so on. These resources are made available to the multiple MVI-Edge applications connected to it.

In this article, we will show how to use KubeStellar to deploy and configure MVI-Edge instances on two MicroShift clusters running on two NVIDIA Jetson Nano (JN) developer kit Single Board Computers (SBCs).

KubeStellar is a multicluster configuration management solution for edge, multi-cloud, and hybrid cloud. It aims to handle disconnected operation for clusters that do not always have connectivity, large-scale deployments, small clusters with limited resources, and different types of clouds: edge, sovereign, regulated, high-performance, and on-premises. You can learn more from the KubeStellar documentation.

In our demonstration, we deploy the KubeStellar server in an Ubuntu 22.04 LTS virtual machine, alongside a Kind cluster used to run an instance of ArgoCD.

In our example, the two MVI-Edge applications will be slightly different for each of the MicroShift clusters. Firstly, each application will have a unique name, so that it can be uniquely addressed. Secondly, each application will be uniquely configured to perform different tasks, for example, one instance will run a local AI model, while the other instance will run a remote model residing in the MVI-Hub.

An AWS Elastic Container Registry stores the container images of two custom operators that are used to deploy and configure the MVI-Edge application. Custom Resource Definitions are used to determine the settings for each MicroShift cluster.

As of KubeStellar release v0.2.2, used for this blog, resource customization is not fully implemented. Therefore, in order to set up two different instances of MVI-Edge on the two MicroShift clusters, we will be using two separate KubeStellar Workload Management Workspaces (WMWs).

The MVI-Edge application deployment and configuration require a variety of namespaced and non-namespaced Kubernetes objects, such as Deployments, Services, Service Accounts, Roles, Role Bindings, Cluster Roles, Cluster Role Bindings, Custom Resource Definitions, Custom Resources, and Secrets.

While the Secrets will be manually applied to the two WMWs using kubectl apply commands, the remaining objects will be deployed to the WMWs in two phases, following Continuous Delivery (CD) best practices based on ArgoCD.

The figures below illustrate the overall architecture of the deployment and configuration of the MVI-Edge application discussed in this demonstration.

Architecture of the deployment and configuration of the MVI-Edge application using KubeStellar

During the first application deployment phase, the deployment operator YAML for each MVI-Edge application is pushed to the corresponding Git repositories. ArgoCD will detect the updates and apply the new objects to the corresponding KubeStellar wmw. After some time, the KubeStellar Syncer will detect the new objects in the KubeStellar workspace and sync them into the MicroShift cluster. The KubeStellar Syncer runs in each targeted cluster and watches for Kubernetes objects in the KubeStellar Workspaces that have been assigned to this specific cluster by using an Edge Placement Custom Resource. After syncing and applying the operator objects into the cluster(s), the MVI-Edge operator(s) will deploy the MVI-Edge application based on configuration specified in a corresponding Custom Resource.

After some time, the MVI-Edge application will be running and its User Interface (UI) will be accessible from the web. However, at this point, the application is not configured: it is not connected to the MVI-Hub, it does not have access to any AI model, and it does not have an inspection setup.

To fully configure the application, the configuration operator YAML is then pushed to the corresponding Git repositories. ArgoCD will again detect the updates and apply the new objects to the corresponding KubeStellar wmw. Once again, after some time, the KubeStellar syncer will detect the new objects in the KubeStellar workspace and sync them into the MicroShift cluster. Once the configuration operator is running it will use the information contained in another Custom Resource to fully configure the MVI-Edge application.

2. Install and run KubeStellar

In this example, the KubeStellar server v0.2.2 is installed in an Ubuntu 22.04 LTS virtual machine following the simple instructions described in KubeStellar Quickstart page.

It should be noted that KubeStellar requires kubectl version 1.23-1.25 and jq to be preinstalled, but Go, Docker, and Kind are not required because we are using remote MicroShift clusters running on a separate physical machine.
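A quick way to confirm these prerequisites before bootstrapping (the version check refers to the kubectl client) could be, for example:

# Check the prerequisites: kubectl client 1.23-1.25 and jq
kubectl version --client
jq --version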

The installation process is very simple, and it auto detects the host type and architecture. A single bootstrap command can be used to install and run both KubeStellar and its kcp dependency:

bash <(curl -s https://raw.githubusercontent.com/kcp-dev/edge-mc/main/bootstrap/bootstrap-kubestellar.sh) --kubestellar-version v0.2.2 --bind-address $(ifconfig | grep -A 1 'enp0s8' | tail -1 | awk '{print $2}')

It should be noted that, in this case, we have used the flag --kubestellar-version v0.2.2 to select KubeStellar v0.2.2 and --bind-address to specify a binding address. Since we are running KubeStellar in an Ubuntu virtual machine but plan to use target clusters running remotely on a different physical machine, we have to make sure to bind kcp to an IP address of the virtual machine that is reachable from the remote MicroShift clusters.
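If the network interface on your virtual machine is not named enp0s8, a simpler, interface-agnostic way to obtain a routable address of the VM is, for example:

# Interface-agnostic alternative: take the first routable IP of the VM
hostname -I | awk '{print $1}'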

The bootstrap command should yield output like:

< KubeStellar bootstrap started >----------------
< Ensure kcp is installed >----------------------
kcp not found in the PATH.
Downloading kcp+plugins v0.11.0 linux/amd64...
Installing kcp+plugins into '/home/vagrant/kcp'...
Add kcp folder to the PATH: export PATH="/home/vagrant/kcp/bin:$PATH"
< Ensure kcp is running >-----------------------
Running kcp bound to address 192.168.1.1... logfile=/home/vagrant/kcp_log.txt
Waiting for kcp to be ready... it may take a while
kcp version v0.11.0 ... ok
Export KUBECONFIG environment variable: export KUBECONFIG="/home/vagrant/.kcp/admin.kubeconfig"
< Ensure KubeStellar is installed >-------------
KubeStellar not found in the PATH.
Downloading KubeStellar v0.2.2 linux/amd64...
Installing KubeStellar into '/home/vagrant/kubestellar'...
Add KubeStellar folder to the PATH: export PATH="/home/vagrant/kubestellar/bin:$PATH"
< Ensure KubeStellar is running >---------------
Starting or restarting KubeStellar...
****************************************
Launching KubeStellar ...
****************************************
Workspace "espw" (type root:organization) created. Waiting for it to be ready...
Workspace "espw" (type root:organization) is ready to use.
Current workspace is "root:espw" (type root:organization).
Finished populate the espw with kubestellar crds and apiexports
mailbox-controller is running (log file: /home/vagrant/kubestellar-logs/mailbox-controller-log.txt)
scheduler is running (log file: /home/vagrant/kubestellar-logs/kubestellar-scheduler-log.txt)
placement translator is running (log file: /home/vagrant/kubestellar-logs/placement-translator-log.txt)
****************************************
Finished launching KubeStellar ...
****************************************
Current workspace is "root".
< KubeStellar bootstrap completed successfully >-
Please create/update the following environment variables:
export PATH="/home/vagrant/kcp/bin:$PATH"
export KUBECONFIG="/home/vagrant/.kcp/admin.kubeconfig"
export PATH="/home/vagrant/kubestellar/bin:$PATH"

At this point, we just need to export the PATH and KUBECONFIG environment variables as prescribed by the Quickstart instructions:

export PATH="$PATH:$(pwd)/kcp/bin:$(pwd)/kubestellar/bin"
export KUBECONFIG="$(pwd)/.kcp/admin.kubeconfig"

The kubectl ws tree command can be used to verify the presence of the KubeStellar Edge Service Provider Workspace (espw):

.
└── root
├── compute
└── espw

This is all it takes to get an instance of KubeStellar up and running, and ready to be used.

3. Onboard two MicroShift clusters, running on two NVIDIA Jetson Nanos

In this example, we will use KubeStellar to deploy and configure two different instances of MVI-Edge into two MicroShift clusters running on two NVIDIA Jetson Nano SBCs. For this purpose, we have followed the excellent tutorial by Alexei Karve on how to install Ubuntu 20.04 (the latest version supporting NVIDIA GPU libraries) and MicroShift 4.8.0 (the latest version supporting Ubuntu).

First, let us create an mvi-demo workspace for our demo under the root workspace:

kubectl ws "root"
kubectl ws create "mvi-demo" --enter
Workspace "mvi-demo" (type root:organization) created. Waiting for it to be ready...
Workspace "mvi-demo" (type root:organization) is ready to use.
Current workspace is "root:mvi-demo" (type root:organization)

Then, let us create a KubeStellar Inventory Management Workspace (imw) in the newly created mvi-demo workspace:

kubectl ws create imw --enter
Workspace "imw" (type root:universal) created. Waiting for it to be ready...
Workspace "imw" (type root:universal) is ready to use.
Current workspace is "root:mvi-demo:imw" (type root:universal).

At this point, we can prepare the two KubeStellar syncers that will be deployed in the two MicroShift clusters and used to connect them back to the KubeStellar server. We will call the two syncers jn-a and jn-b and add the corresponding labels jn=a and jn=b, respectively. It should be noted that the following commands require jq to be installed.

kubectl kubestellar prep-for-cluster --imw "root:mvi-demo:imw" jn-a jn=a
kubectl kubestellar prep-for-cluster --imw "root:mvi-demo:imw" jn-b jn=b

If everything works as expected, two new YAML files should be created in the current folder:

jn-a-syncer.yaml
jn-b-syncer.yaml

At this point, each syncer YAML file needs to be copied to its corresponding MicroShift cluster and applied to run the KubeStellar syncer.
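One way to copy the generated manifests, assuming SSH access to the two Jetson Nanos (the hostnames below are hypothetical placeholders):

# Copy each syncer manifest to its Jetson Nano (hostnames are placeholders)
scp jn-a-syncer.yaml user@jn-a.local:~/
scp jn-b-syncer.yaml user@jn-b.local:~/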

In the context of the jn-a MicroShift cluster, let us deploy the jn-a-syncer.yaml file with the command:

$  oc apply -f jn-a-syncer.yaml
namespace/kcp-edge-syncer-jn-a-1srb3ev8 created
serviceaccount/kcp-edge-syncer-jn-a-1srb3ev8 created
secret/kcp-edge-syncer-jn-a-1srb3ev8-token created
clusterrole.rbac.authorization.k8s.io/kcp-edge-syncer-jn-a-1srb3ev8 created
clusterrolebinding.rbac.authorization.k8s.io/kcp-edge-syncer-jn-a-1srb3ev8 created
secret/kcp-edge-syncer-jn-a-1srb3ev8 created
deployment.apps/kcp-edge-syncer-jn-a-1srb3ev8 created

After a few seconds, we can verify that the corresponding syncer pod is running correctly:

# Context: jn-a MicroShift cluster
$ oc get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kcp-edge-syncer-jn-a-1srb3ev8 kcp-edge-syncer-jn-a-1srb3ev8-846bd666df-579s7 1/1 Running 0 28s

Similarly, jn-b-syncer.yaml is deployed in the jn-b MicroShift cluster.
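In the context of the jn-b cluster, the command mirrors the one above:

oc apply -f jn-b-syncer.yaml

After a few seconds, the syncer pod can again be verified: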

# Context: jn-b MicroShift cluster
$ oc get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kcp-edge-syncer-jn-b-17dcit57 kcp-edge-syncer-jn-b-17dcit57-594fcf76b6-l7w5l 1/1 Running 0 38s

These few commands are all that it takes to onboard the two MicroShift clusters running on NVIDIA Jetson Nano SBCs.

4. Prepare KubeStellar Workload Management Workspaces (WMW)

In this section, we will show how to prepare KubeStellar Workload Management Workspaces (WMW) for the MVI-Edge workload.

In order to set up two different instances of MVI-Edge on the two MicroShift clusters, we will be using two separate workspaces in the KubeStellar server.

First, let us create the jn-a WMW under "root:mvi-demo" that will be used for the jn-a MicroShift cluster:

$ kubectl ws "root:mvi-demo"
$ kubectl kubestellar ensure wmw jn-a
Workspace "jn-a" (type root:universal) created. Waiting for it to be ready...
Workspace "jn-a" (type root:universal) is ready to use.
Current workspace is "root:mvi-demo:jn-a" (type root:universal).
apibinding.apis.kcp.io/bind-espw created
apibinding.apis.kcp.io/bind-kube created

Then, let us apply the Edge Placement YAML based on the list of resources needed by MVI-Edge. This custom object selects which namespace and cluster-level objects will be synced to the jn-a MicroShift cluster. More information about Edge Placement objects and Scheduling can be found in the KubeStellar documentation and in the excellent blog post by Jun Duan titled "Make Multi-Cluster Scheduling a No-brainer".

kubectl apply -f - <<EOF
apiVersion: edge.kcp.io/v1alpha1
kind: EdgePlacement
metadata:
  name: jn-a
spec:
  locationSelectors:
  - matchLabels: {"jn":"a"}
  namespaceSelector:
    matchLabels: {"mvi":"demo"}
  nonNamespacedObjects:
  - apiGroup: apis.kcp.io
    resources:
    - apibindings
    resourceNames:
    - bind-kube
  - apiGroup: apiextensions.k8s.io
    resources:
    - customresourcedefinitions
    resourceNames:
    - mviapps.edge.mvi.com
    - mviconfigs.edge.mvi.com
  - apiGroup: rbac.authorization.k8s.io
    resources:
    - clusterroles
    resourceNames:
    - mvi-app-operator-manager-role
    - mvi-app-operator-metrics-reader
    - mvi-app-operator-proxy-role
    - mvi-config-operator-manager-role
    - mvi-config-operator-metrics-reader
    - mvi-config-operator-proxy-role
  - apiGroup: rbac.authorization.k8s.io
    resources:
    - clusterrolebindings
    resourceNames:
    - mvi-app-operator-manager-rolebinding
    - mvi-app-operator-proxy-rolebinding
    - mvi-config-operator-manager-rolebinding
    - mvi-config-operator-proxy-rolebinding
EOF

By inspecting the above YAML, we can observe a first spec section targeting the KubeStellar syncer created with the label jn=a:

locationSelectors:
- matchLabels: {"jn":"a"}

A second spec section lists the label(s) used to select the namespaces to be synced to the MicroShift cluster, in this case mvi=demo:

namespaceSelector:
  matchLabels: {"mvi":"demo"}

Finally, there is a list of the non-namespaced objects, separated by type. It is important to note that in KubeStellar v0.2.2 these are expressed as resources, which are lowercase and plural, rather than as kinds, which are UpperCamelCase and singular. This also differs from kubectl command usage, which is case insensitive.

For example:

resources:
- customresourcedefinitions

Additionally, resourceNames: does not currently support wildcard or regex pattern matching.

As a result of the creation of the jn-a Edge Placement object, a Single Placement Slice, called jn-a, is also created in the same jn-a WMW:

$ kubectl get SinglePlacementSlice jn-a -o yaml
apiVersion: edge.kcp.io/v1alpha1
destinations:
- cluster: 2ndrw3mnehklnn24
  locationName: jn-a
  syncTargetName: jn-a
  syncTargetUID: 6600b404-e7da-4317-af4e-e927744fab84
kind: SinglePlacementSlice
metadata:
  annotations:
    kcp.io/cluster: 1x7xaymnjpzufe61
  creationTimestamp: "2023-06-02T19:36:43Z"
  generation: 1
  name: jn-a
  ownerReferences:
  - apiVersion: edge.kcp.io/v1alpha1
    kind: EdgePlacement
    name: jn-a
    uid: d9503367-9f6b-4f82-aca1-790a7bf9c02f
  resourceVersion: "1213"
  uid: 77a63a4d-9876-4c5a-8c82-ced4b1ae3763

The same process is repeated to create a jn-b WMW under "root:mvi-demo" and an Edge Placement object targeting the jn=b label of the previously created KubeStellar syncer for the jn-b MicroShift cluster.
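In abbreviated form, the jn-b counterpart looks like the sketch below; the nonNamespacedObjects list is identical to the one shown for jn-a and is elided here:

kubectl ws "root:mvi-demo"
kubectl kubestellar ensure wmw jn-b
kubectl apply -f - <<EOF
apiVersion: edge.kcp.io/v1alpha1
kind: EdgePlacement
metadata:
  name: jn-b
spec:
  locationSelectors:
  - matchLabels: {"jn":"b"}
  namespaceSelector:
    matchLabels: {"mvi":"demo"}
  nonNamespacedObjects:
  # ... same entries (apibindings, CRDs, cluster roles, and
  # cluster role bindings) as in the jn-a EdgePlacement above ...
EOF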

Now, the workspace tree should look like below, with an IMW and two WMWs (jn-a and jn-b) setup under the mvi-demo workspace, and two KubeStellar Mailbox workspaces corresponding to the two WMWs under the espw workspace:

$ kubectl ws "root"
$ kubectl ws tree
.
└── root
├── compute
├── espw
│ ├── 2ndrw3mnehklnn24-mb-30a3260d-91e3-46be-82b6-e84751670834
│ └── 2ndrw3mnehklnn24-mb-6600b404-e7da-4317-af4e-e927744fab84
└── mvi-demo
├── imw
├── jn-a
└── jn-b

The MVI-Edge application and the operators required for its deployment and configuration need several secrets to access private registries, as well as credentials to connect to the MVI-Hub. Since we do not want to store such information in a Git repository, we will now manually apply them to the KubeStellar WMWs.

First, we create an mvi Namespace in the "root:mvi-demo:jn-a" WMW, making sure to use the same mvi=demo label referenced in the Edge Placement object above:

kubectl ws "root:mvi-demo:jn-a"
kubectl apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
name: mvi
labels: {"mvi":"demo"}
EOF

Then we create the three secrets required to access the operator and MVI-Edge container images in the mvi Namespace:

kubectl create secret docker-registry mvi-edge-us \
--namespace mvi \
--docker-server="..." \
--docker-username="..." \
--docker-password="..."

kubectl create secret docker-registry mvi-edge-cp \
--namespace mvi \
--docker-server="..." \
--docker-username="..." \
--docker-password="..."

kubectl create secret docker-registry mvi-edge-aws \
--namespace mvi \
--docker-server="..." \
--docker-username=AWS \
--docker-password="$(aws ecr get-login-password --region us-west-2)"

Finally, we can add the credentials required by the MVI-Edge application to access the MVI-Hub, for example to download new models:

kubectl apply -n mvi -f mvi-edge-cred-jn-a.yaml
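For reference, mvi-edge-cred-jn-a.yaml is an Opaque Secret carrying the MVI-Hub connection details; its actual content is not shown in this article, so the key names below are purely hypothetical placeholders:

# Hypothetical sketch of mvi-edge-cred-jn-a.yaml -- all key names are placeholders
apiVersion: v1
kind: Secret
metadata:
  name: mvi-edge-creds
type: Opaque
stringData:
  hubUrl: "https://<mvi-hub-address>"
  hubUser: "<user>"
  hubApiKey: "<api-key>"
  edgeName: "mvi-edge-jn-a"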

We can now confirm that the secrets have been created in the KubeStellar WMW:

$ kubectl get secrets -n mvi
NAME TYPE DATA AGE
mvi-edge-aws kubernetes.io/dockerconfigjson 1 2m11s
mvi-edge-cp kubernetes.io/dockerconfigjson 1 3m14s
mvi-edge-creds Opaque 4 39s
mvi-edge-us kubernetes.io/dockerconfigjson 1 3m23s

and successfully transferred by the KubeStellar Syncer to the corresponding MicroShift cluster running on the NVIDIA Jetson Nano jn-a:

$ oc get secrets -n mvi
NAME TYPE DATA AGE
mvi-edge-aws kubernetes.io/dockerconfigjson 1 4m28s
mvi-edge-cp kubernetes.io/dockerconfigjson 1 5m36s
mvi-edge-creds Opaque 4 2m48s
mvi-edge-us kubernetes.io/dockerconfigjson 1 5m36s

Once again, the same process is repeated for the root:mvi-demo:jn-b WMW.
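Abbreviated, and assuming a corresponding mvi-edge-cred-jn-b.yaml credentials file, the jn-b preparation looks like this:

kubectl ws "root:mvi-demo:jn-b"
kubectl apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: mvi
  labels: {"mvi":"demo"}
EOF
# ... create the same three docker-registry secrets as for jn-a ...
kubectl apply -n mvi -f mvi-edge-cred-jn-b.yaml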

5. Connect ArgoCD to KubeStellar WMWs

In this section, we bring up an instance of ArgoCD running in a Kind cluster following the instructions here and set it up to work with the previously created jn-a and jn-b WMWs.

Here we will base our work on the following excellent tutorials “Sync 10,000 Argo CD Applications in One Shot” by Jun Duan and “How To Sync 10,000 Argo CD Applications in One Shot, By Yourself” by Robert Filepp.
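In a nutshell, and assuming the standard upstream ArgoCD install manifest (the linked instructions may differ in some details), the Kind cluster and ArgoCD can be brought up with:

# Run against the Kind cluster's kubeconfig, not the kcp admin.kubeconfig
kind create cluster --name argocd
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml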

In the context of the Kind cluster running in the Ubuntu 22.04 LTS virtual machine:

$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
argocd argocd-application-controller-0 1/1 Running 0 50s
argocd argocd-applicationset-controller-54c565cb6b-njmm9 1/1 Running 0 50s
argocd argocd-dex-server-56d48d4bcf-md9hk 1/1 Running 0 50s
argocd argocd-notifications-controller-54d7bdc957-qlvvd 1/1 Running 0 50s
argocd argocd-redis-685866888c-4k6ld 1/1 Running 0 50s
argocd argocd-repo-server-57696966b-wvg6h 1/1 Running 0 50s
argocd argocd-server-5d76f98658-xrxw2 1/1 Running 0 50s

We can retrieve ArgoCD admin password with the following command:

export ARGOCD_PASSWD=$(kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d)

In order to work with KubeStellar workspaces, we need to update ArgoCD tenancy with the following command:

kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/name: argocd-cm
    app.kubernetes.io/part-of: argocd
  name: argocd-cm
  namespace: argocd
data:
  resource.exclusions: |
    - apiGroups:
      - "tenancy.kcp.dev" # kcp dev
      - "tenancy.kcp.io"  # kcp prod
      kinds:
      - "ClusterWorkspace"
      clusters:
      - "*"
EOF

and restart ArgoCD with the following command:

kubectl rollout restart deployment argocd-server -n argocd

Back in the context of the KubeStellar server, we need to prepare each of the WMWs. In "root:mvi-demo:jn-a", we need to create an empty kube-system namespace that is expected by ArgoCD:

kubectl ws "root:mvi-demo:jn-a"
kubectl create ns kube-system

Then, we need to create a context in KubeStellar kubeconfig pointing to the "root:mvi-demo:jn-a" workspace:

kubectl kcp workspace create-context

Finally, using the ArgoCD CLI, we can log in to ArgoCD and add the newly created context as a target cluster:

yes | argocd login --username admin --password $ARGOCD_PASSWD $ARGOCD_URL:$ARGOCD_PORT
yes | argocd cluster add "root:mvi-demo:jn-a"

where $ARGOCD_PASSWD can be retrieved as shown above and $ARGOCD_URL:$ARGOCD_PORT points to the ArgoCD address.
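For example, a simple port-forward to the default argocd-server Service would give (values are illustrative):

# Expose the ArgoCD API server locally; add --insecure to argocd login
# if the certificate is self-signed
kubectl port-forward svc/argocd-server -n argocd 8080:443 &
export ARGOCD_URL=localhost
export ARGOCD_PORT=8080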

The same steps are repeated for the "root:mvi-demo:jn-b" workspace.
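That is:

kubectl ws "root:mvi-demo:jn-b"
kubectl create ns kube-system
kubectl kcp workspace create-context
yes | argocd cluster add "root:mvi-demo:jn-b"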

Finally, we can switch back to the original context:

kubectl config use-context workspace.kcp.io/current

In ArgoCD UI we can confirm the presence of the two clusters mapping KubeStellar jn-a and jn-b WMWs:

ArgoCD UI showing the KubeStellar jn-a and jn-b WMWs mounted as clusters

From this point, we can set up the MVI-Edge applications for jn-a and jn-b as shown in the image below (a CLI sketch follows the screenshot). Each application targets a different KubeStellar WMW as its destination.

ArgoCD UI showing the MVI-Edge applications
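For reference, a roughly equivalent way to define one of these applications from the ArgoCD CLI is sketched below; the Git repository URL and path are hypothetical placeholders:

# Repository URL and path are placeholders -- substitute your own GitOps repo
argocd app create mvi-edge-jn-a \
  --repo https://github.com/example-org/mvi-edge-gitops.git \
  --path jn-a \
  --dest-name "root:mvi-demo:jn-a" \
  --dest-namespace mvi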

6. Deploy and configure MVI-Edge using ArgoCD

After pushing the YAML files for the MVI-Edge application deployment operator to the Git repository, we can manually sync the objects to the KubeStellar WMWs using the ArgoCD sync button.

ArgoCD shows that the objects have been synced to the KubeStellar WMW

Inside the KubeStellar "root:mvi-demo:jn-a" WMW, we can see the new operator deployment and corresponding MVIApp Custom Resource:

$ kubectl get deployment,MVIApp -n mvi
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/mvi-app-operator-controller-manager 0/1 0 0 16m
NAME AGE
mviapp.edge.mvi.com/mvi-edge-jn-a 16m

After a few minutes, we can check that the operator is indeed running in the MicroShift cluster of the NVIDIA Jetson Nano jn-a and that it has created the MVI-Edge application pod mvi-edge-jn-a-pod:

$ oc get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kcp-edge-syncer-jn-b-17dcit57 kcp-edge-syncer-jn-b-17dcit57-594fcf76b6-l7w5l 1/1 Running 0 2d18h
mvi mvi-app-operator-controller-manager-57cf4779f7-x59fx 2/2 Running 0 9m31s
mvi mvi-edge-jn-a-pod 3/3 Running 0 9m8s

After retrieving the route of the MVI-Edge application with the command:

$ oc get route -n mvi
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
mvi-edge-jn-b-route mvi-edge-jn-b-route-mvi.cluster.local mvi-edge-jn-b-service tcp-443 passthrough None

we can finally browse to the MVI-Edge login screen:

MVI-Edge login screen

After logging in with the default username and password, we can see that, while the application is up and running, it is not configured yet. Referring to the three figures below, one can notice that the MVI-Edge application is not connected to the MVI-Hub, no models are available, and no inspection station has been created:

MVI-Edge application is not connected to the MVI-Hub
MVI-Edge application without any model available
MVI-Edge application without any inspection station setup

In order to properly configure the MVI-Edge application, we push the YAML files for the MVI-Edge application configuration operator to the Git repository. Then we can manually sync the objects to the KubeStellar WMWs using the ArgoCD sync button.

Inside the KubeStellar "root:mvi-demo:jn-a" WMW, we can see the new config operator deployment and corresponding MVIConfig Custom Resource in addition to the previously shown app operator:

$ kubectl get deployment,MVIConfig -n mvi
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/mvi-app-operator-controller-manager 0/1 0 0 35m
deployment.apps/mvi-config-operator-controller-manager 0/1 0 0 26m
NAME AGE
mviconfig.edge.mvi.com/mviconfig-test-inspection-v2 26m

After a few minutes, we can check that the KubeStellar Syncer has detected the new Kubernetes objects in the jn-a WMW and applied them to the MicroShift cluster of the NVIDIA Jetson Nano jn-a:

$ oc get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kcp-edge-syncer-jn-a-1srb3ev8 kcp-edge-syncer-jn-a-1srb3ev8-846bd666df-579s7 1/1 Running 0 2d18h
mvi mvi-app-operator-controller-manager-57cf4779f7-prpj2 2/2 Running 0 9m54s
mvi mvi-config-operator-controller-manager-b65bf4cfb-4rn96 2/2 Running 0 39s
mvi mvi-edge-jn-a-pod 3/3 Running 0 9m31s

Browsing again to the MVI-Edge UI, we can confirm that models are now available from the connected MVI-Hub:

Models are available for download in the MVI-Edge application

Furthermore, a new inspection has been setup and it is running:

Inspection running in the MVI-Edge application

Diving into the configuration of the inspection, we can observe that it is using a remote augmented v2 model running on the MVI-Hub training server:

Inspection configuration for the MVI-Edge application running on the NVIDIA Jetson Nano `jn-a`

Similar steps are followed to configure the second MicroShift cluster running on the second NVIDIA Jetson Nano jn-b. However, by looking at the inspection configuration, we can observe that in this case a local model is being used during the evaluation of the images:

Inspection configuration for the MVI-Edge application running on the NVIDIA Jetson Nano `jn-b`

7. Conclusions

In this blog, we have shown how easy it is to use KubeStellar v0.2.2 to deploy and configure two different instances of the IBM Maximo Visual Inspection (MVI) Edge application, a Computer Vision (CV) and AI workload, on two MicroShift clusters running on two NVIDIA Jetson Nano Single Board Computers.

In particular, we have covered the few easy steps of the KubeStellar Quickstart required to install and run KubeStellar and to connect it to an ArgoCD-based Continuous Delivery pipeline.

Finally, we have shown that the MVI-Edge application can be successfully deployed and configured to use different models from the MVI-Hub.

Watch a recorded demonstration on our YouTube channel:
