Hardware and Network Requirements
SUSE Virtualization is an HCI solution that runs on bare-metal servers, so each node must meet minimum hardware and network requirements before you install and run it.
A cluster needs at least three nodes to fully realize the multi-node features. The first node added to the cluster is the management node by default. When the cluster has three or more nodes, the two nodes added after the first are automatically promoted to management nodes to form a high-availability (HA) cluster.
The latest versions support the deployment of single-node clusters. However, such clusters do not support high availability, multiple replicas, or live migration.
Hardware Requirements
SUSE Virtualization nodes have the following hardware requirements and recommendations for installation and testing.
Hardware | Development/Testing | Production |
---|---|---|
CPU | x86_64 (with hardware-assisted virtualization); 8 cores minimum | x86_64 (with hardware-assisted virtualization); 16 cores minimum |
Memory | 32 GB minimum | 64 GB minimum |
Disk capacity | 250 GB minimum (180 GB minimum when using multiple disks) | 500 GB minimum |
Disk performance | 5,000+ random IOPS per disk (SSD/NVMe); management node storage must meet etcd speed requirements. Only local disks and hardware RAID are supported. | 5,000+ random IOPS per disk (SSD/NVMe); management node storage must meet etcd speed requirements. Only local disks and hardware RAID are supported. |
Network card count | Management cluster network: 1 NIC required, 2 NICs recommended; VM workload network: 1 NIC required, at least 2 NICs recommended (does not apply to the witness node) | Management cluster network: 1 NIC required, 2 NICs recommended; VM workload network: 1 NIC required, at least 2 NICs recommended (does not apply to the witness node) |
Network card speed | 1 Gbps Ethernet minimum | 10 Gbps Ethernet minimum |
Network switch | Port trunking for VLAN support | Port trunking for VLAN support |
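To get a rough sense of whether a disk meets the random IOPS requirement before installation, you can run a quick read probe. A dedicated benchmark such as fio gives far more representative numbers (higher queue depths, mixed read/write patterns); the following minimal Python sketch only performs a single-threaded O_DIRECT random-read check against a hypothetical test file.

```python
# Rough probe of random 4 KiB read IOPS using O_DIRECT (Linux only).
# The test file path is a hypothetical example; point it at the disk you
# want to check and make sure you have permission to create/read the file.
import mmap
import os
import random
import time

PATH = "/var/lib/iops-probe.bin"  # hypothetical test file location
FILE_SIZE = 1 << 30               # 1 GiB test file
BLOCK = 4096                      # O_DIRECT needs block-aligned I/O
DURATION = 10.0                   # seconds to run the probe

# Create the test file with buffered writes if it does not exist yet.
if not os.path.exists(PATH):
    with open(PATH, "wb") as f:
        for _ in range(FILE_SIZE // (1 << 20)):
            f.write(os.urandom(1 << 20))
        f.flush()
        os.fsync(f.fileno())

# mmap allocations are page-aligned, which satisfies O_DIRECT alignment rules.
buf = mmap.mmap(-1, BLOCK)
fd = os.open(PATH, os.O_RDONLY | os.O_DIRECT)
blocks = os.fstat(fd).st_size // BLOCK
if blocks == 0:
    raise SystemExit(f"{PATH} is too small for a {BLOCK}-byte read test")

ops = 0
deadline = time.monotonic() + DURATION
while time.monotonic() < deadline:
    # Read one random block directly from the device, bypassing the page cache.
    os.preadv(fd, [buf], random.randrange(blocks) * BLOCK)
    ops += 1
os.close(fd)

print(f"~{ops / DURATION:.0f} random {BLOCK}-byte read IOPS (single thread, queue depth 1)")
```

Because the probe uses queue depth 1, treat its output as a conservative lower bound rather than a pass/fail verdict on the 5,000 IOPS figure.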
CPU Specifications
Live Migration functions correctly only if the CPUs of all physical servers in the cluster have the same specifications. This requirement applies to all operations that rely on Live Migration functionality, such as automatic VM migration when Maintenance Mode is enabled.
Newer CPUs (even those from the same vendor, generation, and family) can have varying capabilities that may be exposed to VM operating systems. To ensure VM stability, Live Migration checks if the CPU capabilities are consistent, and blocks migration attempts when the source and destination are incompatible.
When creating clusters, adding more hosts to a cluster, and replacing hosts, always use CPUs with the same specifications to prevent operational constraints.
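One rough way to verify that hosts report matching CPU capabilities before adding them to a cluster is to compare the model name and feature flags each node exposes. The following sketch is only an illustration and assumes you have collected /proc/cpuinfo from every node into local files (the script and file names are not part of the product).

```python
# Minimal sketch: compare the CPU model and feature flags reported by each node.
# It assumes you have copied /proc/cpuinfo from every node to local files
# (the script and file names are illustrative, not part of the product).
import sys

def cpu_signature(path):
    """Return (model name, flag set) parsed from a /proc/cpuinfo dump."""
    model, flags = None, set()
    with open(path) as f:
        for line in f:
            if line.startswith("model name") and model is None:
                model = line.split(":", 1)[1].strip()
            elif line.startswith("flags") and not flags:
                flags = set(line.split(":", 1)[1].split())
    return model, flags

if __name__ == "__main__":
    if len(sys.argv) < 3:
        raise SystemExit("usage: compare_cpuinfo.py FILE FILE [FILE ...]")
    signatures = {path: cpu_signature(path) for path in sys.argv[1:]}
    reference = signatures[sys.argv[1]]
    for path, sig in signatures.items():
        status = "OK" if sig == reference else "MISMATCH"
        print(f"{status}: {path} -> {sig[0]}")
```

Running it as `python3 compare_cpuinfo.py node1.cpuinfo node2.cpuinfo node3.cpuinfo` prints `MISMATCH` for any node whose model or flag set differs from the first file; keep in mind that BIOS settings and microcode versions also influence the features a host exposes.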
Network Requirements
Nodes have the following network requirements for installation.
Port Requirements for Nodes
Nodes require the following port connections or inbound rules. Typically, all outbound traffic is allowed.
Protocol | Port | Source | Description |
---|---|---|---|
TCP | 2379 | Management nodes | Etcd client port |
TCP | 2381 | Management nodes | Etcd metrics collection |
TCP | 2380 | Management nodes | Etcd peer port |
TCP | 2382 | Management nodes | Etcd client port (HTTP only) |
TCP | 10010 | Management and compute nodes | Containerd |
TCP | 6443 | Management nodes | Kubernetes API |
TCP | 9345 | Management nodes | Kubernetes API |
TCP | 10252 | Management nodes | Kube-controller-manager health checks |
TCP | 10257 | Management nodes | Kube-controller-manager secure port |
TCP | 10251 | Management nodes | Kube-scheduler health checks |
TCP | 10259 | Management nodes | Kube-scheduler secure port |
TCP | 10250 | Management and compute nodes | Kubelet |
TCP | 10256 | Management and compute nodes | Kube-proxy health checks |
TCP | 10258 | Management nodes | cloud-controller-manager |
TCP | 10260 | Management nodes | cloud-controller-manager |
TCP | 9091 | Management and compute nodes | Canal calico-node felix |
TCP | 9099 | Management and compute nodes | Canal CNI health checks |
UDP | 8472 | Management and compute nodes | Canal CNI with VxLAN |
TCP | 2112 | Management nodes | Kube-vip |
TCP | 6444 | Management and compute nodes | RKE2 agent |
TCP | 10246/10247/10248/10249 | Management and compute nodes | Nginx worker process |
TCP | 8181 | Management and compute nodes | Nginx-ingress-controller |
TCP | 8444 | Management and compute nodes | Nginx-ingress-controller |
TCP | 10245 | Management and compute nodes | Nginx-ingress-controller |
TCP | 80 | Management and compute nodes | Nginx |
TCP | 9796 | Management and compute nodes | Node-exporter |
TCP | 30000-32767 | Management and compute nodes | NodePort port range |
TCP | 22 | Management and compute nodes | sshd |
UDP | 68 | Management and compute nodes | Wicked |
TCP | 3260 | Management and compute nodes | iscsid |
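To check that the required ports are actually reachable from the appropriate source node, a simple TCP connectivity probe can help. The following sketch is illustrative: the target host name and the sampled ports are assumptions, and UDP ports (8472, 68) cannot be verified with a plain connection attempt.

```python
# Minimal sketch: probe a sample of the required TCP ports on a target node.
# The target host name and the port selection are illustrative; adjust both
# to your environment. UDP ports cannot be checked this way.
import socket

TARGET = "mgmt-node1.example.com"  # hypothetical management node

PORTS = {
    22: "sshd",
    2379: "Etcd client port",
    6443: "Kubernetes API",
    9345: "Kubernetes API",
    10250: "Kubelet",
}

def tcp_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port, description in PORTS.items():
    state = "reachable" if tcp_open(TARGET, port) else "unreachable"
    print(f"{TARGET}:{port} ({description}) is {state}")
```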
Port Requirements for Integrating with SUSE Rancher Prime
If you want to integrate with SUSE Rancher Prime, you need to make sure that all SUSE Virtualization nodes can connect to TCP port 443 of the SUSE Rancher Prime load balancer.
When you use SUSE Rancher Prime to provision guest Kubernetes clusters on SUSE Virtualization VMs, those VMs must also be able to connect to TCP port 443 of the SUSE Rancher Prime load balancer. Otherwise, SUSE Rancher Prime cannot manage the cluster. For more information, refer to Rancher Architecture.
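A quick way to verify this requirement from a SUSE Virtualization node (or from a VM in the guest cluster) is a plain TCP connection test against the load balancer; the sketch below uses a placeholder host name.

```python
# Minimal sketch: confirm that this node can reach TCP port 443 of the
# SUSE Rancher Prime load balancer. The host name below is a placeholder.
import socket

RANCHER_LB = "rancher.example.com"  # hypothetical load balancer address

try:
    with socket.create_connection((RANCHER_LB, 443), timeout=5.0):
        print(f"TCP 443 on {RANCHER_LB} is reachable")
except OSError as err:
    raise SystemExit(f"Cannot reach {RANCHER_LB}:443 ({err})")
```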