21 Air-gapped deployments with Edge Image Builder
21.1 Intro
This guide will show how to deploy several of the SUSE Edge components completely air-gapped on SLE Micro 5.5 utilizing Edge Image Builder (EIB) (Chapter 9, Edge Image Builder). With this, you’ll be able to boot a customized, ready to boot (CRB) image created by EIB and have the specified components deployed on either an RKE2 or K3s cluster without an Internet connection or any manual steps. This configuration is highly desirable for customers that want to pre-bake all artifacts required for deployment into their OS image, so they are immediately available on boot.
We will cover an air-gapped installation of:

Rancher (Section 21.6)

NeuVector (Section 21.7)

Longhorn (Section 21.8)

KubeVirt and CDI (Section 21.9)
EIB will parse and pre-download all images referenced in the provided Helm charts and Kubernetes manifests. However, some components may attempt to pull additional container images and create Kubernetes resources based on those at runtime. In these cases we have to manually specify the necessary images in the definition file if we want to set up a completely air-gapped environment.
21.2 Prerequisites
If you’re following this guide, it’s assumed that you are already familiar with EIB (Chapter 9, Edge Image Builder). If not, please follow the quick start guide (Chapter 3, Standalone clusters with Edge Image Builder) to better understand the concepts shown in practice below.
21.3 Libvirt Network Configuration
To demonstrate the air-gapped deployment, this guide uses a simulated air-gapped libvirt network, and the following configuration is tailored to that. For your own deployments, you may have to modify the host1.local.yaml configuration that will be introduced in the next step.
If you would like to use the same libvirt network configuration, follow along. If not, skip to Section 21.4, “Base Directory Configuration”.
Let’s create an isolated network configuration with a DHCP address range of 192.168.100.2-192.168.100.254 on the 192.168.100.0/24 subnet:
cat << EOF > isolatednetwork.xml
<network>
  <name>isolatednetwork</name>
  <bridge name='virbr1' stp='on' delay='0'/>
  <ip address='192.168.100.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.100.2' end='192.168.100.254'/>
    </dhcp>
  </ip>
</network>
EOF
Now, the only thing left is to create the network and start it:
virsh net-define isolatednetwork.xml
virsh net-start isolatednetwork
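To confirm that the network is active (and optionally have it start together with the host), the standard virsh commands can be used:

# The network should be listed as active
virsh net-list --all

# Optionally start the network automatically on host boot
virsh net-autostart isolatednetwork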
21.4 Base Directory Configuration
The base directory configuration is the same across all different components, so we will set it up here.
We will first create the necessary subdirectories:
export CONFIG_DIR=$HOME/config

mkdir -p $CONFIG_DIR/base-images
mkdir -p $CONFIG_DIR/network
mkdir -p $CONFIG_DIR/kubernetes/helm/values
Make sure to add whichever base image you plan to use into the base-images directory. This guide will focus on the Self-Install ISO, available from the SUSE download portal.
Let’s copy the downloaded image:
cp SLE-Micro.x86_64-5.5.0-Default-SelfInstall-GM2.install.iso $CONFIG_DIR/base-images/slemicro.iso
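Optionally, you can verify the integrity of the downloaded ISO by comparing its SHA256 checksum against the value published alongside the image on the download page:

sha256sum SLE-Micro.x86_64-5.5.0-Default-SelfInstall-GM2.install.iso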
Note that EIB never modifies the base image input; it produces a new, customized image from it.
Let’s create a file containing the desired network configuration:
cat << EOF > $CONFIG_DIR/network/host1.local.yaml
routes:
  config:
  - destination: 0.0.0.0/0
    metric: 100
    next-hop-address: 192.168.100.1
    next-hop-interface: eth0
    table-id: 254
  - destination: 192.168.100.0/24
    metric: 100
    next-hop-address:
    next-hop-interface: eth0
    table-id: 254

dns-resolver:
  config:
    server:
    - 192.168.100.1
    - 8.8.8.8

interfaces:
- name: eth0
  type: ethernet
  state: up
  mac-address: 34:8A:B1:4B:16:E7
  ipv4:
    address:
    - ip: 192.168.100.50
      prefix-length: 24
    dhcp: false
    enabled: true
  ipv6:
    enabled: false
EOF
This configuration ensures the following are present on the provisioned systems (using the specified MAC address):

an Ethernet interface with a static IP address

routing

DNS

the hostname (host1.local)
The resulting file structure should now look like:
├── kubernetes/
│   └── helm/
│       └── values/
├── base-images/
│   └── slemicro.iso
└── network/
    └── host1.local.yaml
21.5 Base Definition File
Edge Image Builder uses definition files to modify the SLE Micro images. These files contain the majority of configurable options. Many of these options will be repeated across the different component sections, so we will list and explain those here.
A full list of the customization options in the definition file can be found in the upstream documentation.
We will take a look at the following fields which will be present in all definition files:
apiVersion: 1.0
image:
  imageType: iso
  arch: x86_64
  baseImage: slemicro.iso
  outputImageName: eib-image.iso
operatingSystem:
  users:
    - username: root
      encryptedPassword: $6$jHugJNNd3HElGsUZ$eodjVe4te5ps44SVcWshdfWizrP.xAyd71CVEXazBJ/.v799/WRCBXxfYmunlBO2yp1hm/zb4r8EmnrrNCF.P/
kubernetes:
  version: v1.28.13+rke2r1
embeddedArtifactRegistry:
  images:
    - ...
The image section is required, and it specifies the input image, its architecture and type, as well as what the output image will be called.

The operatingSystem section is optional, and contains configuration to enable login on the provisioned systems with the root/eib username/password combination.
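The encrypted password shown throughout this guide corresponds to eib. If you want a different password, you can generate a compatible SHA-512 hash yourself, for example with OpenSSL:

# Prompts for a password and prints the hash to use as encryptedPassword
openssl passwd -6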
The kubernetes section is optional, and it defines the Kubernetes type and version. We are going to use Kubernetes 1.28.13 and RKE2 by default. Use kubernetes.version: v1.28.13+k3s1 if K3s is desired instead. Unless explicitly configured via the kubernetes.nodes field, all clusters we bootstrap in this guide will be single-node ones.
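For reference, a multi-node cluster would be declared through that field. The following is a sketch of what this might look like; the hostnames are placeholders, and the exact schema should be checked against the upstream documentation:

kubernetes:
  version: v1.28.13+rke2r1
  network:
    apiVIP: 192.168.100.151
  nodes:
    # Hostnames must match the names of the provisioned machines
    - hostname: node1.local
      type: server
    - hostname: node2.local
      type: agent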
The embeddedArtifactRegistry section lists all images which are only referenced and pulled at runtime by the specific component.
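There is no foolproof way to enumerate runtime-only images, but rendering a chart locally and extracting the statically templated image references is a useful starting point. A rough sketch, assuming Helm is available on a connected machine:

# Add the repository and render the chart without installing it
helm repo add rancher-prime https://charts.rancher.com/server-charts/prime
helm template rancher rancher-prime/rancher --version 2.8.8 --namespace cattle-system \
  | grep -oP 'image:\s*"?\K[^"\s]+' | sort -u

Anything pulled by an operator after installation will not show up in this output and has to be taken from the component’s documentation or release artifacts, as we do below.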
21.6 Rancher Installation
The Rancher (Chapter 4, Rancher) deployment shown here is highly slimmed down for demonstration purposes. For your actual deployments, additional artifacts may be necessary depending on your configuration.
The Rancher v2.8.8 container images file lists all the images required for an air-gapped installation.
There are over 600 container images in total, which means that the resulting CRB image would be roughly 30 GB. For our Rancher installation, we will strip that list down to the smallest working configuration. From there, you can add back any images you may need for your deployments.
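If you would rather start from the complete list and prune it yourself, the release asset can be converted into the definition-file format with a one-liner. A sketch, assuming the downloaded rancher-images.txt lists one image per line without a registry prefix:

# Prefix each image with the Rancher registry and the YAML list syntax
sed 's|^|    - name: registry.rancher.com/|' rancher-images.txt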
We will create the definition file at $CONFIG_DIR/eib-iso-definition.yaml and include the stripped down image list:
apiVersion: 1.0
image:
  imageType: iso
  arch: x86_64
  baseImage: slemicro.iso
  outputImageName: eib-image.iso
operatingSystem:
  users:
    - username: root
      encryptedPassword: $6$jHugJNNd3HElGsUZ$eodjVe4te5ps44SVcWshdfWizrP.xAyd71CVEXazBJ/.v799/WRCBXxfYmunlBO2yp1hm/zb4r8EmnrrNCF.P/
kubernetes:
  version: v1.28.13+rke2r1
  network:
    apiVIP: 192.168.100.151
  manifests:
    urls:
      - https://github.com/cert-manager/cert-manager/releases/download/v1.14.2/cert-manager.crds.yaml
  helm:
    charts:
      - name: rancher
        version: 2.8.8
        repositoryName: rancher-prime
        valuesFile: rancher-values.yaml
        targetNamespace: cattle-system
        createNamespace: true
        installationNamespace: kube-system
      - name: cert-manager
        installationNamespace: kube-system
        createNamespace: true
        repositoryName: jetstack
        targetNamespace: cert-manager
        version: 1.14.2
    repositories:
      - name: jetstack
        url: https://charts.jetstack.io
      - name: rancher-prime
        url: https://charts.rancher.com/server-charts/prime
embeddedArtifactRegistry:
  images:
    - name: registry.rancher.com/rancher/backup-restore-operator:v4.0.3
    - name: registry.rancher.com/rancher/calico-cni:v3.27.4-rancher1
    - name: registry.rancher.com/rancher/cis-operator:v1.0.15
    - name: registry.rancher.com/rancher/coreos-kube-state-metrics:v1.9.7
    - name: registry.rancher.com/rancher/coreos-prometheus-config-reloader:v0.38.1
    - name: registry.rancher.com/rancher/coreos-prometheus-operator:v0.38.1
    - name: registry.rancher.com/rancher/flannel-cni:v0.3.0-rancher9
    - name: registry.rancher.com/rancher/fleet-agent:v0.9.9
    - name: registry.rancher.com/rancher/fleet:v0.9.9
    - name: registry.rancher.com/rancher/gitjob:v0.9.13
    - name: registry.rancher.com/rancher/grafana-grafana:7.1.5
    - name: registry.rancher.com/rancher/hardened-addon-resizer:1.8.20-build20240410
    - name: registry.rancher.com/rancher/hardened-calico:v3.28.1-build20240806
    - name: registry.rancher.com/rancher/hardened-cluster-autoscaler:v1.8.10-build20240124
    - name: registry.rancher.com/rancher/hardened-cni-plugins:v1.5.1-build20240805
    - name: registry.rancher.com/rancher/hardened-coredns:v1.11.1-build20240305
    - name: registry.rancher.com/rancher/hardened-dns-node-cache:1.22.28-build20240125
    - name: registry.rancher.com/rancher/hardened-etcd:v3.5.13-k3s1-build20240531
    - name: registry.rancher.com/rancher/hardened-flannel:v0.25.5-build20240801
    - name: registry.rancher.com/rancher/hardened-k8s-metrics-server:v0.7.1-build20240401
    - name: registry.rancher.com/rancher/hardened-kubernetes:v1.28.13-rke2r1-build20240815
    - name: registry.rancher.com/rancher/hardened-multus-cni:v4.0.2-build20240612
    - name: registry.rancher.com/rancher/hardened-node-feature-discovery:v0.15.4-build20240513
    - name: registry.rancher.com/rancher/hardened-whereabouts:v0.7.0-build20240429
    - name: registry.rancher.com/rancher/helm-project-operator:v0.2.1
    - name: registry.rancher.com/rancher/istio-kubectl:1.5.10
    - name: registry.rancher.com/rancher/jimmidyson-configmap-reload:v0.3.0
    - name: registry.rancher.com/rancher/k3s-upgrade:v1.28.13-k3s1
    - name: registry.rancher.com/rancher/klipper-helm:v0.8.4-build20240523
    - name: registry.rancher.com/rancher/klipper-lb:v0.4.9
    - name: registry.rancher.com/rancher/kube-api-auth:v0.2.1
    - name: registry.rancher.com/rancher/kubectl:v1.28.12
    - name: registry.rancher.com/rancher/library-nginx:1.19.2-alpine
    - name: registry.rancher.com/rancher/local-path-provisioner:v0.0.28
    - name: registry.rancher.com/rancher/machine:v0.15.0-rancher116
    - name: registry.rancher.com/rancher/mirrored-cluster-api-controller:v1.4.4
    - name: registry.rancher.com/rancher/nginx-ingress-controller:v1.10.4-hardened2
    - name: registry.rancher.com/rancher/pause:3.6
    - name: registry.rancher.com/rancher/prom-alertmanager:v0.21.0
    - name: registry.rancher.com/rancher/prom-node-exporter:v1.0.1
    - name: registry.rancher.com/rancher/prom-prometheus:v2.18.2
    - name: registry.rancher.com/rancher/prometheus-auth:v0.2.2
    - name: registry.rancher.com/rancher/prometheus-federator:v0.3.4
    - name: registry.rancher.com/rancher/pushprox-client:v0.1.3-rancher2-client
    - name: registry.rancher.com/rancher/pushprox-proxy:v0.1.3-rancher2-proxy
    - name: registry.rancher.com/rancher/rancher-agent:v2.8.8
    - name: registry.rancher.com/rancher/rancher-csp-adapter:v3.0.1
    - name: registry.rancher.com/rancher/rancher-webhook:v0.4.11
    - name: registry.rancher.com/rancher/rancher:v2.8.8
    - name: registry.rancher.com/rancher/rke-tools:v0.1.102
    - name: registry.rancher.com/rancher/rke2-cloud-provider:v1.29.3-build20240515
    - name: registry.rancher.com/rancher/rke2-runtime:v1.28.13-rke2r1
    - name: registry.rancher.com/rancher/rke2-upgrade:v1.28.13-rke2r1
    - name: registry.rancher.com/rancher/security-scan:v0.2.17
    - name: registry.rancher.com/rancher/shell:v0.1.26
    - name: registry.rancher.com/rancher/system-agent-installer-k3s:v1.28.13-k3s1
    - name: registry.rancher.com/rancher/system-agent-installer-rke2:v1.28.13-rke2r1
    - name: registry.rancher.com/rancher/system-agent:v0.3.9-suc
    - name: registry.rancher.com/rancher/system-upgrade-controller:v0.13.4
    - name: registry.rancher.com/rancher/ui-plugin-catalog:2.1.0
    - name: registry.rancher.com/rancher/ui-plugin-operator:v0.1.1
    - name: registry.rancher.com/rancher/webhook-receiver:v0.2.5
    - name: registry.rancher.com/rancher/kubectl:v1.20.2
    - name: registry.rancher.com/rancher/shell:v0.1.24
    - name: registry.rancher.com/rancher/mirrored-ingress-nginx-kube-webhook-certgen:v1.4.1
    - name: registry.rancher.com/rancher/mirrored-ingress-nginx-kube-webhook-certgen:v20221220-controller-v1.5.1-58-g787ea74b6
    - name: registry.rancher.com/rancher/mirrored-ingress-nginx-kube-webhook-certgen:v20230312-helm-chart-4.5.2-28-g66a760794
    - name: registry.rancher.com/rancher/mirrored-ingress-nginx-kube-webhook-certgen:v20231011-8b53cabe0
    - name: registry.rancher.com/rancher/mirrored-ingress-nginx-kube-webhook-certgen:v20231226-1a7112e06
Compared to the full list of 602 container images, this slimmed down version contains only 62, which makes the new CRB image only about 7 GB.
We also need to create a Helm values file for Rancher:
cat << EOF > $CONFIG_DIR/kubernetes/helm/values/rancher-values.yaml
hostname: 192.168.100.50.sslip.io
replicas: 1
bootstrapPassword: "adminadminadmin"
systemDefaultRegistry: registry.rancher.com
useBundledSystemChart: true
EOF
Setting the systemDefaultRegistry to registry.rancher.com allows Rancher to automatically look for images in the embedded artifact registry started within the CRB image at boot. Omitting this field may result in failure to find the container images on the node.
Let’s build the image:
podman run --rm -it --privileged -v $CONFIG_DIR:/eib \
  registry.suse.com/edge/edge-image-builder:1.0.2 \
  build --definition-file eib-iso-definition.yaml
The output should be similar to the following:
Generating image customization components...
Identifier ................... [SUCCESS]
Custom Files ................. [SKIPPED]
Time ......................... [SKIPPED]
Network ...................... [SUCCESS]
Groups ....................... [SKIPPED]
Users ........................ [SUCCESS]
Proxy ........................ [SKIPPED]
Rpm .......................... [SKIPPED]
Systemd ...................... [SKIPPED]
Elemental .................... [SKIPPED]
Suma ......................... [SKIPPED]
Downloading file: dl-manifest-1.yaml 100% (437/437 kB, 17 MB/s)
Populating Embedded Artifact Registry... 100% (69/69, 26 it/min)
Embedded Artifact Registry ... [SUCCESS]
Keymap ....................... [SUCCESS]
Configuring Kubernetes component...
The Kubernetes CNI is not explicitly set, defaulting to 'cilium'.
Downloading file: rke2_installer.sh
Downloading file: rke2-images-core.linux-amd64.tar.zst 100% (780/780 MB, 115 MB/s)
Downloading file: rke2-images-cilium.linux-amd64.tar.zst 100% (367/367 MB, 108 MB/s)
Downloading file: rke2.linux-amd64.tar.gz 100% (34/34 MB, 117 MB/s)
Downloading file: sha256sum-amd64.txt 100% (3.9/3.9 kB, 34 MB/s)
Downloading file: dl-manifest-1.yaml 100% (437/437 kB, 106 MB/s)
Kubernetes ................... [SUCCESS]
Certificates ................. [SKIPPED]
Building ISO image...
Kernel Params ................ [SKIPPED]
Image build complete!
Once a node using the built image is provisioned, we can verify the Rancher installation:
/var/lib/rancher/rke2/bin/kubectl get all -A --kubeconfig /etc/rancher/rke2/rke2.yaml
The output should be similar to the following, showing that everything has been successfully deployed:
NAMESPACE                         NAME                                                      READY   STATUS      RESTARTS   AGE
cattle-fleet-local-system         pod/fleet-agent-68f4d5d5f7-tdlk7                          1/1     Running     0          34s
cattle-fleet-system               pod/fleet-controller-85564cc978-pbtvk                     1/1     Running     0          5m51s
cattle-fleet-system               pod/gitjob-9dc58fb5b-7cwsw                                1/1     Running     0          5m51s
cattle-provisioning-capi-system   pod/capi-controller-manager-5c57b4b8f7-wlp5k              1/1     Running     0          4m52s
cattle-system                     pod/helm-operation-4fk5c                                  0/2     Completed   0          37s
cattle-system                     pod/helm-operation-6zgbq                                  0/2     Completed   0          4m54s
cattle-system                     pod/helm-operation-cjds5                                  0/2     Completed   0          5m37s
cattle-system                     pod/helm-operation-kt5c2                                  0/2     Completed   0          5m21s
cattle-system                     pod/helm-operation-ppgtw                                  0/2     Completed   0          5m30s
cattle-system                     pod/helm-operation-tvcwk                                  0/2     Completed   0          5m54s
cattle-system                     pod/helm-operation-wpxd4                                  0/2     Completed   0          53s
cattle-system                     pod/rancher-58575f9575-svrg2                              1/1     Running     0          6m34s
cattle-system                     pod/rancher-webhook-5c6556f7ff-vgmkt                      1/1     Running     0          5m19s
cert-manager                      pod/cert-manager-6c69f9f796-fkm8f                         1/1     Running     0          7m14s
cert-manager                      pod/cert-manager-cainjector-584f44558c-wg7p6              1/1     Running     0          7m14s
cert-manager                      pod/cert-manager-webhook-76f9945d6f-lv2nv                 1/1     Running     0          7m14s
endpoint-copier-operator          pod/endpoint-copier-operator-58964b659b-l64dk             1/1     Running     0          7m16s
endpoint-copier-operator          pod/endpoint-copier-operator-58964b659b-z9t9d             1/1     Running     0          7m16s
kube-system                       pod/cilium-fht55                                          1/1     Running     0          7m32s
kube-system                       pod/cilium-operator-558bbf6cfd-gwfwf                      1/1     Running     0          7m32s
kube-system                       pod/cilium-operator-558bbf6cfd-qsxb5                      0/1     Pending     0          7m32s
kube-system                       pod/cloud-controller-manager-host1.local                  1/1     Running     0          7m21s
kube-system                       pod/etcd-host1.local                                      1/1     Running     0          7m8s
kube-system                       pod/helm-install-cert-manager-fvbtt                       0/1     Completed   0          8m12s
kube-system                       pod/helm-install-endpoint-copier-operator-5kkgw           0/1     Completed   0          8m12s
kube-system                       pod/helm-install-metallb-zfphb                            0/1     Completed   0          8m12s
kube-system                       pod/helm-install-rancher-nc4nt                            0/1     Completed   2          8m12s
kube-system                       pod/helm-install-rke2-cilium-7wq87                        0/1     Completed   0          8m12s
kube-system                       pod/helm-install-rke2-coredns-nl4gc                       0/1     Completed   0          8m12s
kube-system                       pod/helm-install-rke2-ingress-nginx-svjqd                 0/1     Completed   0          8m12s
kube-system                       pod/helm-install-rke2-metrics-server-gqgqz                0/1     Completed   0          8m12s
kube-system                       pod/helm-install-rke2-snapshot-controller-crd-r6b5p       0/1     Completed   0          8m12s
kube-system                       pod/helm-install-rke2-snapshot-controller-ss9v4           0/1     Completed   1          8m12s
kube-system                       pod/helm-install-rke2-snapshot-validation-webhook-vlkpn   0/1     Completed   0          8m12s
kube-system                       pod/kube-apiserver-host1.local                            1/1     Running     0          7m29s
kube-system                       pod/kube-controller-manager-host1.local                   1/1     Running     0          7m30s
kube-system                       pod/kube-proxy-host1.local                                1/1     Running     0          7m30s
kube-system                       pod/kube-scheduler-host1.local                            1/1     Running     0          7m42s
kube-system                       pod/rke2-coredns-rke2-coredns-6c8d9bb6d-qlwc8             1/1     Running     0          7m31s
kube-system                       pod/rke2-coredns-rke2-coredns-autoscaler-55fb4bbbcf-j5r2z 1/1     Running     0          7m31s
kube-system                       pod/rke2-ingress-nginx-controller-4h2mm                   1/1     Running     0          7m3s
kube-system                       pod/rke2-metrics-server-544c8c66fc-lsrc6                  1/1     Running     0          7m15s
kube-system                       pod/rke2-snapshot-controller-59cc9cd8f4-4wx75             1/1     Running     0          7m14s
kube-system                       pod/rke2-snapshot-validation-webhook-54c5989b65-5kp2x     1/1     Running     0          7m15s
metallb-system                    pod/metallb-controller-5895d8446d-z54lm                   1/1     Running     0          7m15s
metallb-system                    pod/metallb-speaker-fxwgk                                 1/1     Running     0          7m15s

NAMESPACE                         NAME                                               TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)                         AGE
cattle-fleet-system               service/gitjob                                     ClusterIP      10.43.30.8      <none>            80/TCP                          5m51s
cattle-provisioning-capi-system   service/capi-webhook-service                       ClusterIP      10.43.7.100     <none>            443/TCP                         4m52s
cattle-system                     service/rancher                                    ClusterIP      10.43.100.229   <none>            80/TCP,443/TCP                  6m34s
cattle-system                     service/rancher-webhook                            ClusterIP      10.43.121.133   <none>            443/TCP                         5m19s
cert-manager                      service/cert-manager                               ClusterIP      10.43.140.65    <none>            9402/TCP                        7m14s
cert-manager                      service/cert-manager-webhook                       ClusterIP      10.43.108.158   <none>            443/TCP                         7m14s
default                           service/kubernetes                                 ClusterIP      10.43.0.1       <none>            443/TCP                         8m26s
default                           service/kubernetes-vip                             LoadBalancer   10.43.138.138   192.168.100.151   9345:31006/TCP,6443:31599/TCP   8m21s
kube-system                       service/cilium-agent                               ClusterIP      None            <none>            9964/TCP                        7m32s
kube-system                       service/rke2-coredns-rke2-coredns                  ClusterIP      10.43.0.10      <none>            53/UDP,53/TCP                   7m31s
kube-system                       service/rke2-ingress-nginx-controller-admission    ClusterIP      10.43.157.19    <none>            443/TCP                         7m3s
kube-system                       service/rke2-metrics-server                        ClusterIP      10.43.4.123     <none>            443/TCP                         7m15s
kube-system                       service/rke2-snapshot-validation-webhook           ClusterIP      10.43.91.161    <none>            443/TCP                         7m16s
metallb-system                    service/metallb-webhook-service                    ClusterIP      10.43.71.192    <none>            443/TCP                         7m15s

NAMESPACE        NAME                                            DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system      daemonset.apps/cilium                           1         1         1       1            1           kubernetes.io/os=linux   7m32s
kube-system      daemonset.apps/rke2-ingress-nginx-controller    1         1         1       1            1           kubernetes.io/os=linux   7m3s
metallb-system   daemonset.apps/metallb-speaker                  1         1         1       1            1           kubernetes.io/os=linux   7m15s

NAMESPACE                         NAME                                                   READY   UP-TO-DATE   AVAILABLE   AGE
cattle-fleet-local-system         deployment.apps/fleet-agent                            1/1     1            1           34s
cattle-fleet-system               deployment.apps/fleet-controller                       1/1     1            1           5m51s
cattle-fleet-system               deployment.apps/gitjob                                 1/1     1            1           5m51s
cattle-provisioning-capi-system   deployment.apps/capi-controller-manager                1/1     1            1           4m52s
cattle-system                     deployment.apps/rancher                                1/1     1            1           6m34s
cattle-system                     deployment.apps/rancher-webhook                        1/1     1            1           5m19s
cert-manager                      deployment.apps/cert-manager                           1/1     1            1           7m14s
cert-manager                      deployment.apps/cert-manager-cainjector                1/1     1            1           7m14s
cert-manager                      deployment.apps/cert-manager-webhook                   1/1     1            1           7m14s
endpoint-copier-operator          deployment.apps/endpoint-copier-operator               2/2     2            2           7m16s
kube-system                       deployment.apps/cilium-operator                        1/2     2            1           7m32s
kube-system                       deployment.apps/rke2-coredns-rke2-coredns              1/1     1            1           7m31s
kube-system                       deployment.apps/rke2-coredns-rke2-coredns-autoscaler   1/1     1            1           7m31s
kube-system                       deployment.apps/rke2-metrics-server                    1/1     1            1           7m15s
kube-system                       deployment.apps/rke2-snapshot-controller               1/1     1            1           7m14s
kube-system                       deployment.apps/rke2-snapshot-validation-webhook       1/1     1            1           7m15s
metallb-system                    deployment.apps/metallb-controller                     1/1     1            1           7m15s

NAMESPACE                         NAME                                                              DESIRED   CURRENT   READY   AGE
cattle-fleet-local-system         replicaset.apps/fleet-agent-68f4d5d5f7                            1         1         1       34s
cattle-fleet-system               replicaset.apps/fleet-controller-85564cc978                       1         1         1       5m51s
cattle-fleet-system               replicaset.apps/gitjob-9dc58fb5b                                  1         1         1       5m51s
cattle-provisioning-capi-system   replicaset.apps/capi-controller-manager-5c57b4b8f7                1         1         1       4m52s
cattle-system                     replicaset.apps/rancher-58575f9575                                1         1         1       6m34s
cattle-system                     replicaset.apps/rancher-webhook-5c6556f7ff                        1         1         1       5m19s
cert-manager                      replicaset.apps/cert-manager-6c69f9f796                           1         1         1       7m14s
cert-manager                      replicaset.apps/cert-manager-cainjector-584f44558c                1         1         1       7m14s
cert-manager                      replicaset.apps/cert-manager-webhook-76f9945d6f                   1         1         1       7m14s
endpoint-copier-operator          replicaset.apps/endpoint-copier-operator-58964b659b               2         2         2       7m16s
kube-system                       replicaset.apps/cilium-operator-558bbf6cfd                        2         2         1       7m32s
kube-system                       replicaset.apps/rke2-coredns-rke2-coredns-6c8d9bb6d               1         1         1       7m31s
kube-system                       replicaset.apps/rke2-coredns-rke2-coredns-autoscaler-55fb4bbbcf   1         1         1       7m31s
kube-system                       replicaset.apps/rke2-metrics-server-544c8c66fc                    1         1         1       7m15s
kube-system                       replicaset.apps/rke2-snapshot-controller-59cc9cd8f4               1         1         1       7m14s
kube-system                       replicaset.apps/rke2-snapshot-validation-webhook-54c5989b65       1         1         1       7m15s
metallb-system                    replicaset.apps/metallb-controller-5895d8446d                     1         1         1       7m15s

NAMESPACE     NAME                                                     COMPLETIONS   DURATION   AGE
kube-system   job.batch/helm-install-cert-manager                      1/1           85s        8m21s
kube-system   job.batch/helm-install-endpoint-copier-operator          1/1           59s        8m21s
kube-system   job.batch/helm-install-metallb                           1/1           60s        8m21s
kube-system   job.batch/helm-install-rancher                           1/1           100s       8m21s
kube-system   job.batch/helm-install-rke2-cilium                       1/1           44s        8m18s
kube-system   job.batch/helm-install-rke2-coredns                      1/1           45s        8m18s
kube-system   job.batch/helm-install-rke2-ingress-nginx                1/1           76s        8m16s
kube-system   job.batch/helm-install-rke2-metrics-server               1/1           60s        8m16s
kube-system   job.batch/helm-install-rke2-snapshot-controller          1/1           61s        8m15s
kube-system   job.batch/helm-install-rke2-snapshot-controller-crd      1/1           60s        8m16s
kube-system   job.batch/helm-install-rke2-snapshot-validation-webhook  1/1           60s        8m14s
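For convenience, instead of passing the kubeconfig and the full binary path on every invocation, you can export them once per shell session:

export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
export PATH=$PATH:/var/lib/rancher/rke2/bin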
When we go to https://192.168.100.50.sslip.io and log in with the adminadminadmin password that we set earlier, we are greeted with the Rancher dashboard.
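If the dashboard is not reachable, a quick check from another machine on the same network is to request the ingress endpoint directly (-k skips verification of the self-signed certificate):

curl -kI https://192.168.100.50.sslip.io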
21.7 NeuVector Installation
Unlike the Rancher installation, the NeuVector installation does not require any special handling in EIB. EIB will automatically air-gap every image required by NeuVector.
We will create the definition file:
apiVersion: 1.0
image:
  imageType: iso
  arch: x86_64
  baseImage: slemicro.iso
  outputImageName: eib-image.iso
operatingSystem:
  users:
    - username: root
      encryptedPassword: $6$jHugJNNd3HElGsUZ$eodjVe4te5ps44SVcWshdfWizrP.xAyd71CVEXazBJ/.v799/WRCBXxfYmunlBO2yp1hm/zb4r8EmnrrNCF.P/
kubernetes:
  version: v1.28.13+rke2r1
  helm:
    charts:
      - name: neuvector-crd
        version: 103.0.3+up2.7.6
        repositoryName: rancher-charts
        targetNamespace: neuvector
        createNamespace: true
        installationNamespace: kube-system
        valuesFile: neuvector-values.yaml
      - name: neuvector
        version: 103.0.3+up2.7.6
        repositoryName: rancher-charts
        targetNamespace: neuvector
        createNamespace: true
        installationNamespace: kube-system
        valuesFile: neuvector-values.yaml
    repositories:
      - name: rancher-charts
        url: https://charts.rancher.io/
We will also create a Helm values file for NeuVector:
cat << EOF > $CONFIG_DIR/kubernetes/helm/values/neuvector-values.yaml
controller:
  replicas: 1
manager:
  enabled: false
cve:
  scanner:
    enabled: false
    replicas: 1
k3s:
  enabled: true
crdwebhook:
  enabled: false
EOF
Let’s build the image:
podman run --rm -it --privileged -v $CONFIG_DIR:/eib \
  registry.suse.com/edge/edge-image-builder:1.0.2 \
  build --definition-file eib-iso-definition.yaml
The output should be similar to the following:
Generating image customization components...
Identifier ................... [SUCCESS]
Custom Files ................. [SKIPPED]
Time ......................... [SKIPPED]
Network ...................... [SUCCESS]
Groups ....................... [SKIPPED]
Users ........................ [SUCCESS]
Proxy ........................ [SKIPPED]
Rpm .......................... [SKIPPED]
Systemd ...................... [SKIPPED]
Elemental .................... [SKIPPED]
Suma ......................... [SKIPPED]
Populating Embedded Artifact Registry... 100% (6/6, 20 it/min)
Embedded Artifact Registry ... [SUCCESS]
Keymap ....................... [SUCCESS]
Configuring Kubernetes component...
The Kubernetes CNI is not explicitly set, defaulting to 'cilium'.
Downloading file: rke2_installer.sh
Kubernetes ................... [SUCCESS]
Certificates ................. [SKIPPED]
Building ISO image...
Kernel Params ................ [SKIPPED]
Image build complete!
Once a node using the built image is provisioned, we can verify the NeuVector installation:
/var/lib/rancher/rke2/bin/kubectl get all -n neuvector --kubeconfig /etc/rancher/rke2/rke2.yaml
The output should be similar to the following, showing that everything has been successfully deployed:
NAME                                            READY   STATUS    RESTARTS   AGE
pod/neuvector-controller-pod-bc74745cf-x9fsc    1/1     Running   0          13m
pod/neuvector-enforcer-pod-vzw7t                1/1     Running   0          13m

NAME                                      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                         AGE
service/neuvector-svc-admission-webhook   ClusterIP   10.43.240.25   <none>        443/TCP                         13m
service/neuvector-svc-controller          ClusterIP   None           <none>        18300/TCP,18301/TCP,18301/UDP   13m

NAME                                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/neuvector-enforcer-pod    1         1         1       1            1           <none>          13m

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/neuvector-controller-pod   1/1     1            1           13m

NAME                                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/neuvector-controller-pod-bc74745cf   1         1         1       13m

NAME                                    SCHEDULE    SUSPEND   ACTIVE   LAST SCHEDULE   AGE
cronjob.batch/neuvector-updater-pod     0 0 * * *   False     0        <none>          13m
21.8 Longhorn Installation
The official documentation for Longhorn contains a longhorn-images.txt file which lists all the images required for an air-gapped installation.
We will be including them in our definition file. Let’s create it:
apiVersion: 1.0
image:
  imageType: iso
  arch: x86_64
  baseImage: slemicro.iso
  outputImageName: eib-image.iso
operatingSystem:
  users:
    - username: root
      encryptedPassword: $6$jHugJNNd3HElGsUZ$eodjVe4te5ps44SVcWshdfWizrP.xAyd71CVEXazBJ/.v799/WRCBXxfYmunlBO2yp1hm/zb4r8EmnrrNCF.P/
kubernetes:
  version: v1.28.13+rke2r1
  helm:
    charts:
      - name: longhorn
        repositoryName: longhorn
        targetNamespace: longhorn-system
        createNamespace: true
        version: 1.6.1
    repositories:
      - name: longhorn
        url: https://charts.longhorn.io
embeddedArtifactRegistry:
  images:
    - name: longhornio/csi-attacher:v4.4.2
    - name: longhornio/csi-provisioner:v3.6.2
    - name: longhornio/csi-resizer:v1.9.2
    - name: longhornio/csi-snapshotter:v6.3.2
    - name: longhornio/csi-node-driver-registrar:v2.9.2
    - name: longhornio/livenessprobe:v2.12.0
    - name: longhornio/backing-image-manager:v1.6.1
    - name: longhornio/longhorn-engine:v1.6.1
    - name: longhornio/longhorn-instance-manager:v1.6.1
    - name: longhornio/longhorn-manager:v1.6.1
    - name: longhornio/longhorn-share-manager:v1.6.1
    - name: longhornio/longhorn-ui:v1.6.1
    - name: longhornio/support-bundle-kit:v0.0.36
Let’s build the image:
podman run --rm -it --privileged -v $CONFIG_DIR:/eib \
  registry.suse.com/edge/edge-image-builder:1.0.2 \
  build --definition-file eib-iso-definition.yaml
The output should be similar to the following:
Generating image customization components...
Identifier ................... [SUCCESS]
Custom Files ................. [SKIPPED]
Time ......................... [SKIPPED]
Network ...................... [SUCCESS]
Groups ....................... [SKIPPED]
Users ........................ [SUCCESS]
Proxy ........................ [SKIPPED]
Rpm .......................... [SKIPPED]
Systemd ...................... [SKIPPED]
Elemental .................... [SKIPPED]
Suma ......................... [SKIPPED]
Populating Embedded Artifact Registry... 100% (13/13, 20 it/min)
Embedded Artifact Registry ... [SUCCESS]
Keymap ....................... [SUCCESS]
Configuring Kubernetes component...
The Kubernetes CNI is not explicitly set, defaulting to 'cilium'.
Downloading file: rke2_installer.sh
Downloading file: rke2-images-core.linux-amd64.tar.zst 100% (782/782 MB, 108 MB/s)
Downloading file: rke2-images-cilium.linux-amd64.tar.zst 100% (367/367 MB, 104 MB/s)
Downloading file: rke2.linux-amd64.tar.gz 100% (34/34 MB, 108 MB/s)
Downloading file: sha256sum-amd64.txt 100% (3.9/3.9 kB, 7.5 MB/s)
Kubernetes ................... [SUCCESS]
Certificates ................. [SKIPPED]
Building ISO image...
Kernel Params ................ [SKIPPED]
Image build complete!
Once a node using the built image is provisioned, we can verify the Longhorn installation:
/var/lib/rancher/rke2/bin/kubectl get all -n longhorn-system --kubeconfig /etc/rancher/rke2/rke2.yaml
The output should be similar to the following, showing that everything has been successfully deployed:
NAME                                                    READY   STATUS    RESTARTS      AGE
pod/csi-attacher-5c4bfdcf59-9hgvv                       1/1     Running   0             35s
pod/csi-attacher-5c4bfdcf59-dt6jl                       1/1     Running   0             35s
pod/csi-attacher-5c4bfdcf59-swpwq                       1/1     Running   0             35s
pod/csi-provisioner-667796df57-dfrzw                    1/1     Running   0             35s
pod/csi-provisioner-667796df57-tvsrt                    1/1     Running   0             35s
pod/csi-provisioner-667796df57-xszsx                    1/1     Running   0             35s
pod/csi-resizer-694f8f5f64-6khlb                        1/1     Running   0             35s
pod/csi-resizer-694f8f5f64-gnr45                        1/1     Running   0             35s
pod/csi-resizer-694f8f5f64-sbl4k                        1/1     Running   0             35s
pod/csi-snapshotter-959b69d4b-2k4v8                     1/1     Running   0             35s
pod/csi-snapshotter-959b69d4b-9d8wl                     1/1     Running   0             35s
pod/csi-snapshotter-959b69d4b-l2w95                     1/1     Running   0             35s
pod/engine-image-ei-5cefaf2b-cwd8f                      1/1     Running   0             43s
pod/instance-manager-f0d17f96bc92f3cc44787a2a347f6a98   1/1     Running   0             43s
pod/longhorn-csi-plugin-szv7t                           3/3     Running   0             35s
pod/longhorn-driver-deployer-9f4fc86-q8fz2              1/1     Running   0             83s
pod/longhorn-manager-zp66l                              1/1     Running   0             83s
pod/longhorn-ui-5f4b7bbf69-k645d                        1/1     Running   3 (65s ago)   83s
pod/longhorn-ui-5f4b7bbf69-t7xt4                        1/1     Running   3 (62s ago)   83s

NAME                                  TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/longhorn-admission-webhook    ClusterIP   10.43.74.59    <none>        9502/TCP   83s
service/longhorn-backend              ClusterIP   10.43.45.206   <none>        9500/TCP   83s
service/longhorn-conversion-webhook   ClusterIP   10.43.83.108   <none>        9501/TCP   83s
service/longhorn-engine-manager       ClusterIP   None           <none>        <none>     83s
service/longhorn-frontend             ClusterIP   10.43.84.55    <none>        80/TCP     83s
service/longhorn-recovery-backend     ClusterIP   10.43.75.200   <none>        9503/TCP   83s
service/longhorn-replica-manager      ClusterIP   None           <none>        <none>     83s

NAME                                       DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/engine-image-ei-5cefaf2b    1         1         1       1            1           <none>          43s
daemonset.apps/longhorn-csi-plugin         1         1         1       1            1           <none>          35s
daemonset.apps/longhorn-manager            1         1         1       1            1           <none>          83s

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/csi-attacher               3/3     3            3           35s
deployment.apps/csi-provisioner            3/3     3            3           35s
deployment.apps/csi-resizer                3/3     3            3           35s
deployment.apps/csi-snapshotter            3/3     3            3           35s
deployment.apps/longhorn-driver-deployer   1/1     1            1           83s
deployment.apps/longhorn-ui                2/2     2            2           83s

NAME                                               DESIRED   CURRENT   READY   AGE
replicaset.apps/csi-attacher-5c4bfdcf59            3         3         3       35s
replicaset.apps/csi-provisioner-667796df57         3         3         3       35s
replicaset.apps/csi-resizer-694f8f5f64             3         3         3       35s
replicaset.apps/csi-snapshotter-959b69d4b          3         3         3       35s
replicaset.apps/longhorn-driver-deployer-9f4fc86   1         1         1       83s
replicaset.apps/longhorn-ui-5f4b7bbf69             2         2         2       83s
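Longhorn additionally depends on open-iscsi being present on the host (see its upstream installation requirements). If volumes fail to attach, you can confirm on the node that the iSCSI tooling is available and the daemon is running:

# Both should succeed on a correctly prepared host
iscsiadm --version
systemctl status iscsid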
21.9 KubeVirt and CDI Installation
The Helm charts for both KubeVirt and CDI only install their respective operators. It is up to the operators to deploy the rest of the components, which means that we have to include all necessary container images in our definition file. Let’s create it:
apiVersion: 1.0
image:
  imageType: iso
  arch: x86_64
  baseImage: slemicro.iso
  outputImageName: eib-image.iso
operatingSystem:
  users:
    - username: root
      encryptedPassword: $6$jHugJNNd3HElGsUZ$eodjVe4te5ps44SVcWshdfWizrP.xAyd71CVEXazBJ/.v799/WRCBXxfYmunlBO2yp1hm/zb4r8EmnrrNCF.P/
kubernetes:
  version: v1.28.13+rke2r1
  helm:
    charts:
      - name: kubevirt-chart
        repositoryName: suse-edge
        version: 0.3.0
        targetNamespace: kubevirt-system
        createNamespace: true
        installationNamespace: kube-system
      - name: cdi-chart
        repositoryName: suse-edge
        version: 0.3.0
        targetNamespace: cdi-system
        createNamespace: true
        installationNamespace: kube-system
    repositories:
      - name: suse-edge
        url: oci://registry.suse.com/edge
embeddedArtifactRegistry:
  images:
    - name: registry.suse.com/suse/sles/15.5/cdi-uploadproxy:1.59.0-150500.6.18.1
    - name: registry.suse.com/suse/sles/15.5/cdi-uploadserver:1.59.0-150500.6.18.1
    - name: registry.suse.com/suse/sles/15.5/cdi-apiserver:1.59.0-150500.6.18.1
    - name: registry.suse.com/suse/sles/15.5/cdi-controller:1.59.0-150500.6.18.1
    - name: registry.suse.com/suse/sles/15.5/cdi-importer:1.59.0-150500.6.18.1
    - name: registry.suse.com/suse/sles/15.5/cdi-cloner:1.59.0-150500.6.18.1
    - name: registry.suse.com/suse/sles/15.5/virt-api:1.2.2-150500.8.21.1
    - name: registry.suse.com/suse/sles/15.5/virt-controller:1.2.2-150500.8.21.1
    - name: registry.suse.com/suse/sles/15.5/virt-launcher:1.2.2-150500.8.21.1
    - name: registry.suse.com/suse/sles/15.5/virt-handler:1.2.2-150500.8.21.1
    - name: registry.suse.com/suse/sles/15.5/virt-exportproxy:1.2.2-150500.8.21.1
    - name: registry.suse.com/suse/sles/15.5/virt-exportserver:1.2.2-150500.8.21.1
Let’s build the image:
podman run --rm -it --privileged -v $CONFIG_DIR:/eib \
  registry.suse.com/edge/edge-image-builder:1.0.2 \
  build --definition-file eib-iso-definition.yaml
The output should be similar to the following:
Generating image customization components...
Identifier ................... [SUCCESS]
Custom Files ................. [SKIPPED]
Time ......................... [SKIPPED]
Network ...................... [SUCCESS]
Groups ....................... [SKIPPED]
Users ........................ [SUCCESS]
Proxy ........................ [SKIPPED]
Rpm .......................... [SKIPPED]
Systemd ...................... [SKIPPED]
Elemental .................... [SKIPPED]
Suma ......................... [SKIPPED]
Populating Embedded Artifact Registry... 100% (13/13, 6 it/min)
Embedded Artifact Registry ... [SUCCESS]
Keymap ....................... [SUCCESS]
Configuring Kubernetes component...
The Kubernetes CNI is not explicitly set, defaulting to 'cilium'.
Downloading file: rke2_installer.sh
Kubernetes ................... [SUCCESS]
Certificates ................. [SKIPPED]
Building ISO image...
Kernel Params ................ [SKIPPED]
Image build complete!
Once a node using the built image is provisioned, we can verify the installation of both KubeVirt and CDI.
Verify KubeVirt:
/var/lib/rancher/rke2/bin/kubectl get all -n kubevirt-system --kubeconfig /etc/rancher/rke2/rke2.yaml
The output should be similar to the following, showing that everything has been successfully deployed:
NAME                                   READY   STATUS    RESTARTS   AGE
pod/virt-api-75dd5896c-ck24g           1/1     Running   0          2m11s
pod/virt-controller-54b46dffbc-8j8x9   1/1     Running   0          106s
pod/virt-controller-54b46dffbc-qhpkc   1/1     Running   0          106s
pod/virt-handler-qbbcq                 1/1     Running   0          106s
pod/virt-operator-b599bcd7b-mq87d      1/1     Running   0          2m38s
pod/virt-operator-b599bcd7b-q7hkg      1/1     Running   0          2m38s

NAME                                  TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/kubevirt-operator-webhook     ClusterIP   10.43.60.25     <none>        443/TCP   2m14s
service/kubevirt-prometheus-metrics   ClusterIP   None            <none>        443/TCP   2m14s
service/virt-api                      ClusterIP   10.43.70.57     <none>        443/TCP   2m14s
service/virt-exportproxy              ClusterIP   10.43.255.129   <none>        443/TCP   2m14s

NAME                          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
daemonset.apps/virt-handler   1         1         1       1            1           kubernetes.io/os=linux   106s

NAME                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/virt-api          1/1     1            1           2m11s
deployment.apps/virt-controller   2/2     2            2           106s
deployment.apps/virt-operator     2/2     2            2           2m38s

NAME                                         DESIRED   CURRENT   READY   AGE
replicaset.apps/virt-api-75dd5896c           1         1         1       2m11s
replicaset.apps/virt-controller-54b46dffbc   2         2         2       106s
replicaset.apps/virt-operator-b599bcd7b      2         2         2       2m38s

NAME                            AGE     PHASE
kubevirt.kubevirt.io/kubevirt   2m38s   Deployed
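With KubeVirt reporting the Deployed phase, you could smoke-test it by starting a minimal virtual machine. The following is a sketch only: the containerdisk image (quay.io/containerdisks/alpine) is an example and, in a fully air-gapped setup, would itself have to be added to the embeddedArtifactRegistry section:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: test-vm
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 512Mi
      volumes:
        - name: rootdisk
          # Example image only; mirror it first in an air-gapped environment
          containerDisk:
            image: quay.io/containerdisks/alpine

Applying this manifest with kubectl apply -f and watching for the corresponding virt-launcher pod confirms that the operator can schedule workloads.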
Verify CDI:
/var/lib/rancher/rke2/bin/kubectl get all -n cdi-system --kubeconfig /etc/rancher/rke2/rke2.yaml
The output should be similar to the following, showing that everything has been successfully deployed:
NAME                                  READY   STATUS    RESTARTS   AGE
pod/cdi-apiserver-85dff89756-7j97k    1/1     Running   0          2m56s
pod/cdi-deployment-66b96bf79f-6whvj   1/1     Running   0          2m56s
pod/cdi-operator-8f5f4654d-786rc      1/1     Running   0          3m
pod/cdi-uploadproxy-77db4ccd8-mzjz5   1/1     Running   0          2m56s

NAME                             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/cdi-api                  ClusterIP   10.43.66.178    <none>        443/TCP    2m56s
service/cdi-prometheus-metrics   ClusterIP   10.43.99.119    <none>        8080/TCP   2m56s
service/cdi-uploadproxy          ClusterIP   10.43.207.154   <none>        443/TCP    2m56s

NAME                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/cdi-apiserver     1/1     1            1           2m56s
deployment.apps/cdi-deployment    1/1     1            1           2m56s
deployment.apps/cdi-operator      1/1     1            1           3m
deployment.apps/cdi-uploadproxy   1/1     1            1           2m56s

NAME                                        DESIRED   CURRENT   READY   AGE
replicaset.apps/cdi-apiserver-85dff89756    1         1         1       2m56s
replicaset.apps/cdi-deployment-66b96bf79f   1         1         1       2m56s
replicaset.apps/cdi-operator-8f5f4654d      1         1         1       3m
replicaset.apps/cdi-uploadproxy-77db4ccd8   1         1         1       2m56s
21.10 Troubleshooting
If you run into any issues while building the images or are looking to further test and debug the process, please refer to the upstream documentation.