Automatic Replica Balancing
When replicas are scheduled unevenly across nodes or zones, the SUSE Storage Replica Auto Balance setting enables automatic replica rebalancing when a new node becomes available to the cluster or when the replica count for a volume is updated.
Replica Auto Balance Settings
Global setting
SUSE Storage supports three options for the global replica auto-balance setting:

- disabled. This is the default option; no replica auto-balancing is done.
- least-effort. This option instructs SUSE Storage to balance replicas for minimal redundancy. For example, after adding node-2, a volume with 4 off-balanced replicas rebalances only 1 replica:

  node-1
  +-- replica-a
  +-- replica-b
  +-- replica-c
  node-2
  +-- replica-d

- best-effort. This option instructs SUSE Storage to try to balance replicas for even redundancy. For example, after adding node-2, a volume with 4 off-balanced replicas rebalances 2 replicas:

  node-1
  +-- replica-a
  +-- replica-b
  node-2
  +-- replica-c
  +-- replica-d
SUSE Storage does not forcibly reschedule replicas to a zone that does not have enough nodes to support even balance. Instead, it reschedules replicas to achieve balance at the node level.
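To observe where replicas land as balancing proceeds, you can inspect the Replica custom resources directly. A minimal sketch, assuming SUSE Storage is installed in the default longhorn-system namespace:

# List each replica with the volume it belongs to and the node it is scheduled on
# (namespace and field paths are assumptions; verify them in your release)
kubectl -n longhorn-system get replicas.longhorn.io \
  -o custom-columns=NAME:.metadata.name,VOLUME:.spec.volumeName,NODE:.spec.nodeID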
Volume-specific setting
SUSE Storage also supports setting Replica Auto Balance for individual volumes. The setting can be specified in volume.spec.replicaAutoBalance, and it overrules the global setting (a kubectl sketch for this follows the option list below).
There are four options available for the individual volume setting:

- ignored. This is the default option, which instructs SUSE Storage to inherit from the global setting.
- disabled. This option instructs SUSE Storage that no replica auto-balancing should be done.
- least-effort. This option instructs SUSE Storage to balance replicas for minimal redundancy. For example, after adding node-2, a volume with 4 off-balanced replicas rebalances only 1 replica:

  node-1
  +-- replica-a
  +-- replica-b
  +-- replica-c
  node-2
  +-- replica-d

- best-effort. This option instructs SUSE Storage to try to balance replicas for even redundancy. For example, after adding node-2, a volume with 4 off-balanced replicas rebalances 2 replicas:

  node-1
  +-- replica-a
  +-- replica-b
  node-2
  +-- replica-c
  +-- replica-d
SUSE Storage does not forcibly reschedule replicas to a zone that does not have enough nodes to support even balance. Instead, it reschedules replicas to achieve balance at the node level.
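If you manage Volume resources directly with kubectl, the per-volume setting can be patched on the Volume custom resource. A minimal sketch, assuming the default longhorn-system namespace and a hypothetical volume named my-volume:

# Override the global setting for one volume (volume name and namespace are illustrative)
kubectl -n longhorn-system patch volumes.longhorn.io my-volume \
  --type merge -p '{"spec":{"replicaAutoBalance":"best-effort"}}'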
How to Set Replica Auto Balance For Volumes
There are three ways to set Replica Auto Balance for SUSE Storage volumes:
Change the global setting
You can change the global default setting for Replica Auto Balance in the SUSE Storage UI settings.
The global setting functions only as a default value, similar to the replica count; it does not change the settings of any existing volumes.
When a volume is created without specifying Replica Auto Balance, SUSE Storage automatically sets it to ignored so that it inherits from the global setting.
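The same global default can also be changed from the command line by patching the corresponding Setting custom resource. A minimal sketch, assuming the default longhorn-system namespace and the setting name replica-auto-balance:

# Change the global default used by volumes set to ignored
# (setting name and namespace are assumptions; verify them in your release)
kubectl -n longhorn-system patch settings.longhorn.io replica-auto-balance \
  --type merge -p '{"value":"best-effort"}'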
Set individual volumes to auto-balance replicas using the SUSE Storage UI
You can change the Replica Auto Balance setting for an individual volume after creation on the volume detail page, or update multiple volumes at once on the volume list page.
Set individual volumes to auto-balance replicas using a StorageClass
SUSE Storage also exposes the Replica Auto Balance setting as a parameter in a StorageClass.
You can create a StorageClass with a specified Replica Auto Balance setting, then create PVCs using this StorageClass.
For example, the YAML below defines a StorageClass that tells the SUSE Storage CSI driver to set Replica Auto Balance to least-effort:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: hyper-converged
provisioner: driver.longhorn.io
allowVolumeExpansion: true
parameters:
  numberOfReplicas: "3"
  replicaAutoBalance: "least-effort"
  staleReplicaTimeout: "2880" # 48 hours in minutes
  fromBackup: ""
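A PersistentVolumeClaim can then reference the class by name. A minimal sketch, with the PVC name and requested size chosen purely for illustration:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hyper-converged-pvc   # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: hyper-converged
  resources:
    requests:
      storage: 2Gi            # illustrative size

Volumes provisioned from this PVC inherit replicaAutoBalance: least-effort from the StorageClass parameters.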
Replica Auto Balance Disk Pressure Threshold (%)
When Replica Auto Balance is enabled with best-effort, you can set a Replica Auto Balance Disk Pressure Threshold (%). This threshold defines the disk usage level at which SUSE Storage automatically attempts to migrate replicas to another disk on the same node.
For example, if the threshold is set to 75%, SUSE Storage tries to migrate replicas sequentially when disk consumption reaches 75% of capacity.
SUSE Storage prioritizes balancing replicas across nodes and zones first. Once nodes and zones are balanced, it then considers balancing replicas across disks within a single node based on disk pressure.
Since SUSE Storage v1.7.0, when rebuilding replicas on the same node, SUSE Storage uses local file data synchronization for more efficient data transfer.
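Like the other options above, the threshold is a global setting that can be adjusted in the UI or by patching its Setting custom resource. A minimal sketch, assuming the setting is named replica-auto-balance-disk-pressure-percentage (verify the exact name in your release) and the default longhorn-system namespace:

# Trigger intra-node rebalancing when a disk reaches 75% usage
# (setting name and namespace are assumptions; check your release)
kubectl -n longhorn-system patch settings.longhorn.io replica-auto-balance-disk-pressure-percentage \
  --type merge -p '{"value":"75"}'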