Requirements
Each node in the Kubernetes cluster where SUSE Storage is installed must fulfill the following requirements:
- A container runtime compatible with Kubernetes (Docker v1.13+, containerd v1.3.7+, etc.)
- Kubernetes v1.25 or later
- `open-iscsi` is installed, and the `iscsid` daemon is running on all the nodes. This is necessary, since SUSE Storage relies on `iscsiadm` on the host to provide persistent volumes to Kubernetes. For help installing `open-iscsi`, refer to Installing open-iscsi.
- RWX support requires that each node has an NFSv4 client installed.
  - For installing an NFSv4 client, refer to Installing NFSv4 client.
- The host filesystem supports the file extents feature to store the data. Currently we support:
  - ext4
  - XFS
- `bash`, `curl`, `findmnt`, `grep`, `awk`, `blkid`, `lsblk` must be installed (see the check sketch after this list).
- Mount propagation must be enabled.
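A quick way to verify the utility and filesystem requirements on a node is a short shell check. This is only an illustrative sketch, not an official tool, and it assumes the default data path `/var/lib/longhorn`:

```
# Check that the utilities SUSE Storage expects are present on this node.
for cmd in bash curl findmnt grep awk blkid lsblk; do
  command -v "$cmd" >/dev/null 2>&1 && echo "OK:      $cmd" || echo "MISSING: $cmd"
done

# Confirm that the filesystem backing the default data path is ext4 or XFS.
findmnt -n -o FSTYPE --target /var/lib/longhorn
```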
The SUSE Storage workloads must be able to run as root in order for SUSE Storage to be deployed and operated properly.
The Longhorn Command Line Tool can be used to check the Longhorn environment for potential issues.
For the minimum recommended hardware, refer to the best practices guide.
OS/Distro Specific Configuration
You must perform additional setups before using SUSE Storage with certain operating systems and distributions.
- Google Kubernetes Engine (GKE): See GKE.
- K3s clusters: See K3s.
- RKE clusters with CoreOS: See RKE and CoreOS.
- OCP/OKD clusters: See OKD.
- Talos Linux clusters: See Talos Linux.
- Container-Optimized OS: See Container-Optimized OS.
Checking the Kubernetes Version
Use the following command to check your Kubernetes server version:
```
kubectl version
```

Result:

```
Client Version: version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.10", GitCommit:"b8609d4dd75c5d6fba4a5eaa63a5507cb39a6e99", GitTreeState:"clean", BuildDate:"2023-10-18T11:44:31Z", GoVersion:"go1.20.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.10+k3s2", GitCommit:"cb5cb5557f34e240e38c68a8c4ca2506c68b1d86", GitTreeState:"clean", BuildDate:"2023-11-08T03:21:46Z", GoVersion:"go1.20.10", Compiler:"gc", Platform:"linux/amd64"}
```

The Server Version should be greater than or equal to v1.25.
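If you prefer to script this check, one possible approach (a sketch that assumes `kubectl` and `jq` are available) is to compare the server's minor version against 25:

```
# Exit non-zero if the Kubernetes server is older than v1.25.
minor=$(kubectl version -o json | jq -r '.serverVersion.minor' | tr -d '+')
if [ "$minor" -lt 25 ]; then
  echo "Kubernetes server is older than v1.25" >&2
  exit 1
fi
```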
Pod Security Policy
SUSE Storage is shipped with a default Pod Security Policy that will give SUSE Storage the necessary privileges to be able to run properly.
No special configuration is needed for SUSE Storage to work properly on clusters with Pod Security Policy enabled.
Notes on Mount Propagation
If your Kubernetes cluster was provisioned by Rancher v2.0.7 or later, the MountPropagation feature is enabled by default.
If MountPropagation is disabled, the Base Image feature will be disabled.
Root and Privileged Permission
SUSE Storage components require root access with privileged permissions to perform volume operations and management, because SUSE Storage relies on system resources on the host across different namespaces. For example, SUSE Storage uses `nsenter` to check block device usage or to encrypt and decrypt volumes on the host.
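As an illustration of this pattern (a sketch, not the exact invocation SUSE Storage uses), a privileged container that shares the host PID namespace can enter the host's mount namespace like this:

```
# From a privileged container with hostPID: true, target PID 1 (the host
# init process) to enter the host mount namespace and inspect a mount.
nsenter --target 1 --mount -- findmnt /var/lib/longhorn
```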
Below are the directories that SUSE Storage components require root and privileged access to:
- Longhorn Manager
  - `/boot`: Get information about required modules from `/boot/config-$(uname -r)` on the host.
  - `/dev`: Block devices created by Longhorn are under the `/dev` path.
  - `/proc`: Find the recognized host process, such as the container runtime, then use `nsenter` to access the mounts on the host to understand disk usage.
  - `/var/lib/longhorn`: The default path for storing volume data on a host.
- Longhorn Engine Image
  - `/var/lib/longhorn/engine-binaries`: The default path for storing the Longhorn engine binaries.
- Longhorn Instance Manager
  - `/`: Access any data path on this node and access Longhorn engine binaries.
  - `/dev`: Block devices created by Longhorn are under the `/dev` path.
  - `/proc`: Find the recognized host process, such as the container runtime, then use `nsenter` to manage iSCSI targets and initiators, as well as some file system operations.
- Longhorn Share Manager
  - `/dev`: Block devices created by Longhorn are under the `/dev` path.
  - `/lib/modules`: Kernel modules required by `cryptsetup` for volume encryption.
  - `/proc`: Find the recognized host process, such as the container runtime, then use `nsenter` for volume encryption.
  - `/sys`: Support volume encryption by `cryptsetup`.
- Longhorn CSI Plugin
  - `/`: For host checks via the NFS custom mounter (deprecated). Note that this will be removed in a future release.
  - `/dev`: Block devices created by Longhorn are under the `/dev` path.
  - `/lib/modules`: Kernel modules required by the Longhorn CSI plugin.
  - `/sys`: Support volume encryption by `cryptsetup`.
  - `/var/lib/kubelet/plugins/kubernetes.io/csi`: The path where the Longhorn CSI plugin creates the staging path (via `NodeStageVolume`) of a block device. The staging path will be bind-mounted to the target path `/var/lib/kubelet/pods` (via `NodePublishVolume`) so that a single volume can be mounted to multiple Pods.
  - `/var/lib/kubelet/plugins_registry`: The path where the node-driver-registrar registers the CSI plugin with kubelet.
  - `/var/lib/kubelet/plugins/driver.longhorn.io`: The path of the socket used for communication between kubelet and the Longhorn CSI driver.
  - `/var/lib/kubelet/pods`: The path where the Longhorn CSI driver mounts volumes from the target path (via `NodePublishVolume`).
- Longhorn CSI Attacher/Provisioner/Resizer/Snapshotter
  - `/var/lib/kubelet/plugins/driver.longhorn.io`: The path of the socket used for communication between kubelet and the Longhorn CSI driver.
- Longhorn Backing Image Manager
  - `/var/lib/longhorn`: The default path for storing data on the host.
- Longhorn Backing Image Data Source
  - `/var/lib/longhorn`: The default path for storing data on the host.
- Longhorn System Restore Rollout
  - `/var/lib/longhorn/engine-binaries`: The default path for storing the Longhorn engine binaries.
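If SUSE Storage is already installed, you can see these host mounts on a live cluster. The following command is a sketch that assumes the default `longhorn-system` namespace and the `longhorn-manager` DaemonSet name:

```
# Print each volume name and the host path it maps to, if any.
kubectl -n longhorn-system get daemonset longhorn-manager \
  -o jsonpath='{range .spec.template.spec.volumes[*]}{.name}{"\t"}{.hostPath.path}{"\n"}{end}'
```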
Installing open-iscsi
The command used to install open-iscsi differs depending on the Linux distribution.
For GKE, we recommend using Ubuntu as the guest OS image since it contains `open-iscsi` already.
You may need to edit the cluster security group to allow SSH access.
- SUSE and openSUSE: Run the following commands:

  ```
  zypper install open-iscsi
  systemctl enable iscsid
  systemctl start iscsid
  ```

- Debian and Ubuntu: Run the following command:

  ```
  apt-get install open-iscsi
  ```

- RHEL, CentOS, and EKS (EKS Kubernetes Worker AMI with AmazonLinux2 image): Run the following commands:

  ```
  yum --setopt=tsflags=noscripts install iscsi-initiator-utils
  echo "InitiatorName=$(/sbin/iscsi-iname)" > /etc/iscsi/initiatorname.iscsi
  systemctl enable iscsid
  systemctl start iscsid
  ```

- Talos Linux: See Talos Linux Support.
- Container-Optimized OS: See Container-Optimized OS Support.
Please ensure the `iscsi_tcp` module has been loaded before the `iscsid` service starts. Generally, it is loaded automatically along with the package installation; if not, load it manually:

```
modprobe iscsi_tcp
```
> **Note:** On SUSE and openSUSE, the `iscsi_tcp` module is included only in the `kernel-default` package. If the `kernel-default-base` package is installed on your system, you must replace it with `kernel-default`.
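To manually confirm both the daemon and the module on a node (a quick check assuming systemd is in use):

```
systemctl is-active iscsid   # expected output: active
lsmod | grep iscsi_tcp       # the module should appear in the list
```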
We also provide an iscsi installer to make it easier for users to install open-iscsi automatically:
```
kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.9.2/deploy/prerequisite/longhorn-iscsi-installation.yaml
```

After the deployment, run the following command to check the status of the installer pods:

```
kubectl -n longhorn-system get pod | grep longhorn-iscsi-installation
longhorn-iscsi-installation-49hd7   1/1   Running   0   21m
longhorn-iscsi-installation-pzb7r   1/1   Running   0   39m
```
You can also check the logs with the following command to see the installation result:

```
kubectl -n longhorn-system logs longhorn-iscsi-installation-pzb7r -c iscsi-installation
...
Installed:
  iscsi-initiator-utils.x86_64 0:6.2.0.874-7.amzn2
Dependency Installed:
  iscsi-initiator-utils-iscsiuio.x86_64 0:6.2.0.874-7.amzn2
Complete!
Created symlink from /etc/systemd/system/multi-user.target.wants/iscsid.service to /usr/lib/systemd/system/iscsid.service.
iscsi install successfully
```
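Once the installation has succeeded on every node, the installer DaemonSet is no longer needed and can be removed if you wish:

```
kubectl delete -f https://raw.githubusercontent.com/longhorn/longhorn/v1.9.2/deploy/prerequisite/longhorn-iscsi-installation.yaml
```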
In rare cases, it may be required to modify the installed SELinux policy to get SUSE Storage working. If you are running an up-to-date version of a Fedora downstream distribution (e.g. Fedora, RHEL, Rocky, CentOS, etc.) and plan to leave SELinux enabled, see the KB for details.
Installing NFSv4 client
The backup feature requires NFSv4, v4.1, or v4.2, and the ReadWriteMany (RWX) volume feature requires NFSv4.1. Before installing the NFSv4 client userspace daemon and utilities, make sure that client kernel support is enabled on each SUSE Storage node.
- Check that NFSv4.1 support is enabled in the kernel:

  ```
  cat /boot/config-`uname -r` | grep CONFIG_NFS_V4_1
  ```

- Check that NFSv4.2 support is enabled in the kernel:

  ```
  cat /boot/config-`uname -r` | grep CONFIG_NFS_V4_2
  ```
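On a kernel with the support enabled, each command should print a line similar to the following (the exact output varies by distribution; `y` means the feature is built in):

```
CONFIG_NFS_V4_1=y
CONFIG_NFS_V4_2=y
```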
The command used to install an NFSv4 client differs depending on the Linux distribution.
- For Debian and Ubuntu, use this command:

  ```
  apt-get install nfs-common
  ```

- For RHEL, CentOS, and EKS (EKS Kubernetes Worker AMI with AmazonLinux2 image), use this command:

  ```
  yum install nfs-utils
  ```

- For SUSE/OpenSUSE, you can install an NFSv4 client via:

  ```
  zypper install nfs-client
  ```

- For Talos Linux, the NFS client is part of the `kubelet` image maintained by the Talos team.
- For Container-Optimized OS, NFS is supported with the node image.
We also provide an nfs installer to make it easier for users to install `nfs-client` automatically:

```
kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.9.2/deploy/prerequisite/longhorn-nfs-installation.yaml
```

After the deployment, run the following command to check the status of the installer pods:

```
kubectl -n longhorn-system get pod | grep longhorn-nfs-installation
NAME                              READY   STATUS    RESTARTS   AGE
longhorn-nfs-installation-t2v9v   1/1     Running   0          143m
longhorn-nfs-installation-7nphm   1/1     Running   0          143m
```
You can also check the logs with the following command to see the installation result:

```
kubectl -n longhorn-system logs longhorn-nfs-installation-t2v9v -c nfs-installation
...
nfs install successfully
```
Installing Cryptsetup and LUKS
Cryptsetup is an open-source utility used to conveniently set up dm-crypt based device-mapper targets. SUSE Storage uses the LUKS2 (Linux Unified Key Setup) format, the standard for Linux disk encryption, to support volume encryption.
The command used to install the cryptsetup tool differs depending on the Linux distribution.
- For Debian and Ubuntu, use this command:

  ```
  apt-get install cryptsetup
  ```

- For RHEL, CentOS, Rocky Linux, and EKS (EKS Kubernetes Worker AMI with AmazonLinux2 image), use this command:

  ```
  yum install cryptsetup
  ```

- For SUSE/OpenSUSE, use this command:

  ```
  zypper install cryptsetup
  ```
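After installation, you can confirm the tool is available on the node:

```
cryptsetup --version
```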
Installing Device Mapper Userspace Tool
The device mapper is a framework provided by the Linux kernel for mapping physical block devices onto higher-level virtual block devices. It forms the foundation of dm-crypt disk encryption and provides the linear dm device on top of a v2 volume. The device mapper is typically included by default in many Linux distributions, but some lightweight or highly customized distributions, or a minimal installation of a distribution, might exclude it to save space or reduce complexity.
The command used to install the device mapper differs depending on the Linux distribution.
- For Debian and Ubuntu, use this command:

  ```
  apt-get install dmsetup
  ```

- For RHEL, CentOS, Rocky Linux, and EKS (EKS Kubernetes Worker AMI with AmazonLinux2 image), use this command:

  ```
  yum install device-mapper
  ```

- For SUSE/OpenSUSE, use this command:

  ```
  zypper install device-mapper
  ```
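After installation, you can confirm that both the userspace tool and the kernel driver are available:

```
dmsetup version
```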
Longhorn Command Line Tool
Checking Prerequisites Using Longhorn Command Line Tool
The longhornctl tool is a command-line interface (CLI) for Longhorn operations. For more information, see Command Line Tool (longhornctl).
To check prerequisites and configurations, download the `longhornctl` tool and then run the `check` sub-command:

```
# For AMD64 platform
curl -sSfL -o longhornctl https://github.com/longhorn/cli/releases/download/v1.9.2/longhornctl-linux-amd64

# For ARM platform
curl -sSfL -o longhornctl https://github.com/longhorn/cli/releases/download/v1.9.2/longhornctl-linux-arm64

chmod +x longhornctl
./longhornctl check preflight
```

Example of result:

```
INFO[2024-01-01T00:00:01Z] Initializing preflight checker
INFO[2024-01-01T00:00:01Z] Cleaning up preflight checker
INFO[2024-01-01T00:00:01Z] Running preflight checker
INFO[2024-01-01T00:00:02Z] Retrieved preflight checker result:
worker1:
  info:
  - Service iscsid is running
  - NFS4 is supported
  - Package nfs-common is installed
  - Package open-iscsi is installed
  warn:
  - multipathd.service is running. Please refer to https://longhorn.io/kb/troubleshooting-volume-with-multipath/ for more information.
worker2:
  info:
  - Service iscsid is running
  - NFS4 is supported
  - Package nfs-common is not installed
  - Package open-iscsi is installed
```

Installing Prerequisites Using Longhorn Command Line Tool
Use the install sub-command to install and set up the preflight dependencies before installing Longhorn. This involves operations that may require a system reboot on certain Linux distributions.
Here are examples of how to use the install sub-command:
- To run from a locally downloaded `longhornctl` binary:

  ```
  ./longhornctl install preflight
  ```

- To run with explicit `kube-config` and `image` parameters:

  ```
  longhornctl --kube-config ~/.kube/config --image longhornio/longhorn-cli:v1.9.2 install preflight
  ```
Example of result after running the install command:
```
INFO[2025-03-11T08:17:57+08:00] Initializing preflight installer
INFO[2025-03-11T08:17:57+08:00] Cleaning up preflight installer
INFO[2025-03-11T08:17:57+08:00] Running preflight installer
INFO[2025-03-11T08:17:57+08:00] Installing dependencies with package manager
INFO[2025-03-11T08:18:28+08:00] Installed dependencies with package manager
INFO[2025-03-11T08:18:28+08:00] Cleaning up preflight installer
INFO[2025-03-11T08:18:28+08:00] Completed preflight installer. Use 'longhornctl check preflight' to check the result.
```

> **Note:** On some immutable Linux distributions, such as SUSE Linux Enterprise Micro (SLE Micro), you might need to reboot worker nodes after running the `install` command. The documentation of the Linux distribution you are using should outline such requirements. For example, the SLE Micro documentation explains that changes made by the `transactional-update` command become active only after a reboot.