This is unreleased documentation for SUSE® Virtual Clusters v1.1.0 (Dev).
Using a Custom Container Runtime
By default, SUSE Virtual Clusters runs virtual cluster server and agent pods using the host cluster’s default container runtime (typically runc). You can override this by setting the runtimeClassName field on the Cluster resource, which lets virtual clusters run on alternative runtimes, such as crun, Kata Containers, or gVisor.
This guide walks you through configuring a host cluster with crun and creating a virtual cluster that uses it.
Prepare the host cluster
Follow these steps to install crun and register it with containerd on the host cluster before referencing it from a virtual cluster:
- Install K3s on the host, if it is not already running. For installation instructions, see the K3s documentation.
- Download the `crun` binary from the crun releases page and place it on the host:

  mv crun.amd64 /usr/local/bin/crun
  chmod +x /usr/local/bin/crun

- Restart K3s so that `containerd` picks up the new runtime:

  systemctl restart k3s
Verification
- Confirm that K3s auto-detected `crun` by checking that the following appears in `/var/lib/rancher/k3s/agent/etc/containerd/config.toml`:
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes."crun"]
runtime_type = "io.containerd.runc.v2"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes."crun".options]
BinaryName = "/usr/local/bin/crun"
SystemdCgroup = true
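If the runtime does not appear in the generated configuration, it is worth confirming that the binary itself is in place and runnable. A quick check, assuming the install path used above:

```shell
# Verify the crun binary is installed at the expected path and executable;
# this prints the crun version and build information if the install succeeded
/usr/local/bin/crun --version
```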
K3s also creates a matching RuntimeClass resource in the cluster, which is what the runtimeClassName field on the virtual cluster will reference.
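Before creating the virtual cluster, you can confirm from the host cluster that this RuntimeClass exists:

```shell
# List the runtime class registered on the host cluster;
# the command fails if no RuntimeClass named "crun" exists
kubectl get runtimeclass crun
```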
Create a virtual cluster with the runtime class
With the SUSE Virtual Clusters controller installed (see Quick Start), create a Cluster resource and set spec.runtimeClassName to the name of the runtime registered on the host:
apiVersion: k3k.io/v1beta1
kind: Cluster
metadata:
  name: cruncluster
  namespace: test-x
spec:
  runtimeClassName: crun
  mode: virtual
  servers: 1
  agents: 0
  expose:
    nodePort: {}
  persistence:
    type: dynamic
    storageRequestSize: 2G
  tlsSANs:
    - <IP-of-host-server>
The server pod for the virtual cluster is scheduled on the host using the specified runtime class.
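To confirm which runtime class the server pod was actually scheduled with, you can inspect the pods in the cluster's namespace on the host (names here follow the example above):

```shell
# Show each pod in the virtual cluster's namespace together with its runtime class;
# the server pod should report "crun" in the RUNTIME column
kubectl get pods -n test-x \
  -o custom-columns=NAME:.metadata.name,RUNTIME:.spec.runtimeClassName
```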
The runtime class name must match a RuntimeClass resource that exists on the host cluster.
Verify the virtual cluster
Once the server pod is ready, generate a kubeconfig with k3kcli and confirm the virtual cluster is reachable.
k3kcli kubeconfig generate --name cruncluster --namespace test-x
The command writes a kubeconfig file to the current directory and prints the path. Export it and query the control plane:
export KUBECONFIG=$(pwd)/test-x-cruncluster-kubeconfig.yaml
kubectl cluster-info
A working virtual cluster returns output similar to:
Kubernetes control plane is running at https://<host-ip>:<nodeport>
CoreDNS is running at https://<host-ip>:<nodeport>/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://<host-ip>:<nodeport>/api/v1/namespaces/kube-system/services/https:metrics-server:https/proxy
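As a further smoke test, you can schedule a workload inside the virtual cluster itself, using the kubeconfig exported above. The pod name and image here are arbitrary examples:

```shell
# Run a throwaway pod inside the virtual cluster and wait until it is Ready
kubectl run crun-test --image=nginx --restart=Never
kubectl wait pod/crun-test --for=condition=Ready --timeout=120s
```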
Related fields: securityContext and hostUsers
Two additional Cluster spec fields control pod-level security for the server and agent pods of the virtual cluster. They can be combined with runtimeClassName or used independently.
securityContext
spec.securityContext sets a custom SecurityContext on the agent and server pods. In virtual mode, this overrides the default SecurityContext that SUSE Virtual Clusters would otherwise apply.
apiVersion: k3k.io/v1beta1
kind: Cluster
metadata:
  name: cruncluster
  namespace: test-x
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    allowPrivilegeEscalation: false
  # ...
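Assuming the field is applied at the pod level as described above, you can verify the override took effect by reading back the SecurityContext of the pods on the host cluster:

```shell
# Print the pod-level securityContext for each pod in the cluster's namespace
kubectl get pods -n test-x \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.securityContext}{"\n"}{end}'
```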
hostUsers
spec.hostUsers controls whether the server and agent pods run in the host’s user namespace.
- When `true` or unset, the pods run in the host user namespace (the default).
- When `false`, a new user namespace is created for the pods, isolating their UIDs/GIDs from the host.
apiVersion: k3k.io/v1beta1
kind: Cluster
metadata:
  name: cruncluster
  namespace: test-x
spec:
  hostUsers: false
  # ...
User namespaces require kernel support and a compatible container runtime. See the Kubernetes user namespaces documentation for prerequisites.
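One way to check that a pod really runs in a separate user namespace is to inspect its UID map from inside the container: in a non-host user namespace the mapping differs from the full-range identity mapping `0 0 4294967295`. A hedged example, assuming a pod named `crun-test` is running in the virtual cluster:

```shell
# Inside a user namespace, /proc/self/uid_map shows a remapped UID range
# rather than the host identity mapping "0 0 4294967295"
kubectl exec crun-test -- cat /proc/self/uid_map
```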