
35 Edge 3.5 migration

This section explains how to migrate your management and downstream clusters from Edge 3.4 to Edge 3.5.0.

Important

Always perform cluster migrations from the latest Z-stream release of Edge 3.4.

Always migrate to the Edge 3.5.0 release. For subsequent post-migration upgrades, refer to the management (Chapter 36, Management Cluster) and downstream (Chapter 37, Downstream clusters) cluster sections.

The following table lists the different cluster types and the corresponding upgrade methods:

Table 35.1: Clusters and methods to upgrade downstream clusters

EIB provisioned clusters: see Section 35.1.3, “Fleet” for details.

Metal3 provisioned clusters: see Downstream cluster upgrades (Section 44.3, “Downstream cluster upgrades”) for details.

Phone-home provisioned clusters: see Upgrading the Kubernetes Version for the Kubernetes version upgrade, and Downstream clusters (Chapter 37, Downstream clusters) for SUC, operating system and other components.

35.1 Management Cluster

This section covers the following topics:

Section 35.1.1, “Prerequisites” - prerequisite steps to complete before starting the migration.

Section 35.1.2, “Upgrade Controller” - how to do a management cluster migration using the Upgrade Controller (Chapter 22, Upgrade Controller).

Section 35.1.3, “Fleet” - how to do a management cluster migration using Fleet (Chapter 8, Fleet).

35.1.1 Prerequisites

35.1.1.1 Upgrade the Bare Metal Operator CRDs

Note

Applies only to CAPI/Metal3 management clusters that require a Metal3 (Chapter 10, Metal3) chart upgrade.

The Metal3 Helm chart includes the Bare Metal Operator (BMO) CRDs by leveraging Helm’s CRD directory.

However, this approach has certain limitations, particularly the inability to upgrade CRDs in this directory using Helm. For more information, refer to the Helm documentation.

As a result, before upgrading Metal3 to an Edge 3.5.0 compatible version, users must manually upgrade the underlying BMO CRDs.

On a machine with Helm installed and kubectl configured to point to your management cluster:

  1. Manually apply the BMO CRDs:

    helm show crds oci://registry.suse.com/edge/charts/metal3 --version 305.0.21+up0.13.0 | kubectl apply -f -
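
    To confirm that the CRDs were applied, you can optionally list them afterwards, for example:

    kubectl get crds | grep metal3.io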

35.1.1.2 Prepare for SUSE Storage Migration

When upgrading to Edge 3.5.0, it is necessary to migrate from the previous Longhorn chart to the SUSE Storage chart, which is maintained in the SUSE Application Collection.

Note

This procedure applies only to management clusters that require a Longhorn chart upgrade.

You must ensure the management cluster is first updated to the latest 3.4 z-stream, which contains the necessary 1.9.2 SUSE Storage version.

The SUSE Storage chart no longer ships a separate CRD Helm chart; the CRDs are packaged in the main chart. It is therefore necessary to follow some additional migration steps, as described in the SUSE Storage documentation.

  1. First, check the currently installed version of the longhorn-crd chart:

    helm list --all-namespaces | grep longhorn-crd
    longhorn-crd                    longhorn-system 1               2026-01-16 02:17:16.7804359 +0000 UTC   deployed        longhorn-crd-107.1.1+up1.9.2          v1.9.2
  2. Next, retrieve the longhorn-crd chart for the currently installed Longhorn version from the rancher/charts repository. To do this, you must first download this script.

    The script is executed like so:

    ./download-longhorn-crd-chart.sh 107.1.1+up1.9.2

    Now the longhorn-crd chart will be downloaded to the local directory ./107.1.1+up1.9.2

  3. After it is downloaded, check to make sure it is the correct version by opening the Chart.yaml and verifying the appVersion is v1.9.2:

    cat 107.1.1+up1.9.2/Chart.yaml
    
    annotations:
      catalog.cattle.io/certified: rancher
      catalog.cattle.io/hidden: "true"
      catalog.cattle.io/namespace: longhorn-system
      catalog.cattle.io/release-name: longhorn-crd
    apiVersion: v1
    appVersion: v1.9.2
    description: Installs the CRDs for longhorn.
    name: longhorn-crd
    version: 107.1.1+up1.9.2
  4. Next, patch the helm.sh/resource-policy annotation to the value keep in templates/crds.yaml within the downloaded longhorn-crd chart. This ensures that Helm does not delete the CRDs when the release is uninstalled.

    To do this, download this script to automatically patch the annotation:

    ./patch-resource-policy-annotation.sh 107.1.1+up1.9.2/templates/crds.yaml
    
    Processing CRDs in '107.1.1+up1.9.2/templates/crds.yaml'...
    Creating backup: '/tmp/crds.yaml.original'
    Successfully processed the file
    Original file backed up to: '/tmp/crds.yaml.original'
    Modified file saved as: '107.1.1+up1.9.2/templates/crds.yaml'
    Found 22 CustomResourceDefinition(s)
    Added 22 helm.sh/resource-policy: keep annotation(s)
    Original file: 4575 lines, Modified file: 4597 lines
  5. To verify that the CRDs have been correctly patched, check the diff between the original template and the patched one to ensure the value of keep is set for helm.sh/resource-policy:

    vim -d /tmp/crds.yaml.original 107.1.1+up1.9.2/templates/crds.yaml
  6. Next, upgrade the longhorn-crd Helm release using the locally patched chart:

    helm upgrade longhorn-crd -n longhorn-system ./107.1.1+up1.9.2
  7. Now uninstall the longhorn-crd Helm release from your system. Due to the applied patch, the CRDs will remain:

    helm uninstall longhorn-crd --namespace longhorn-system
    
    These resources were kept due to the resource policy:
    [CustomResourceDefinition] backingimagedatasources.longhorn.io
    [CustomResourceDefinition] backingimagemanagers.longhorn.io
    [CustomResourceDefinition] nodes.longhorn.io
    [CustomResourceDefinition] orphans.longhorn.io
    [CustomResourceDefinition] recurringjobs.longhorn.io
    [CustomResourceDefinition] replicas.longhorn.io
    [CustomResourceDefinition] settings.longhorn.io
    [CustomResourceDefinition] sharemanagers.longhorn.io
    [CustomResourceDefinition] snapshots.longhorn.io
    [CustomResourceDefinition] supportbundles.longhorn.io
    [CustomResourceDefinition] systembackups.longhorn.io
    [CustomResourceDefinition] systemrestores.longhorn.io
    [CustomResourceDefinition] backingimages.longhorn.io
    [CustomResourceDefinition] volumeattachments.longhorn.io
    [CustomResourceDefinition] volumes.longhorn.io
    [CustomResourceDefinition] backupbackingimages.longhorn.io
    [CustomResourceDefinition] backups.longhorn.io
    [CustomResourceDefinition] backuptargets.longhorn.io
    [CustomResourceDefinition] backupvolumes.longhorn.io
    [CustomResourceDefinition] engineimages.longhorn.io
    [CustomResourceDefinition] engines.longhorn.io
    [CustomResourceDefinition] instancemanagers.longhorn.io
    
    release "longhorn-crd" uninstalled
  8. Ensure that the longhorn-crd chart is uninstalled by re-running helm list --all-namespaces | grep longhorn-crd and verifying that longhorn-crd is no longer listed.

    Following this, you need to update the ownership labels on the existing Longhorn CRDs to prepare for upgrade to the SUSE Storage Helm chart.

    Download and run this script to perform the replacement:

    ./migrate-crd-ownership.sh
    
    # The output will look like the following for each CRD. The most important thing to note is that each operation says "Successfully updated CRD..." at the end:
    
    Processing CRD: volumes.longhorn.io
    Warning: resource customresourcedefinitions/volumes.longhorn.io is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
    customresourcedefinition.apiextensions.k8s.io/volumes.longhorn.io configured
    Successfully updated CRD: volumes.longhorn.io
    Note

    Login credentials for the Rancher Application Collection registry can be found in the access tokens section of your settings (you must be signed in).

    Additional documentation for the Rancher Application Collection can be found here.

  9. After all of the CRDs have been prepared, log in to the Rancher Application Collection so that you can pull the Helm chart:

    helm registry login dp.apps.rancher.io -u ${APPS.RANCHER.IO_USERNAME} -p ${APPS.RANCHER.IO_ACCESS_TOKEN}
  10. Then create a docker-registry secret so you are able to pull the container images:

    kubectl create secret docker-registry rancher-app-collection \
      --namespace longhorn-system \
      --docker-server=dp.apps.rancher.io \
      --docker-username="${APPS.RANCHER.IO_USERNAME}" \
      --docker-password="${APPS.RANCHER.IO_ACCESS_TOKEN}"
  11. Finally, upgrade your Longhorn installation to SUSE Storage:

    helm upgrade longhorn oci://dp.apps.rancher.io/charts/suse-storage \
    	--namespace longhorn-system \
    	--version 1.10.1 \
    	--set privateRegistry.registrySecret=rancher-app-collection

    You can provide a values.yaml file by appending -f values.yaml to the upgrade command if you wish.
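
    As an optional final check, you can confirm that the longhorn-crd release is gone and that the storage release is now based on the suse-storage chart, for example:

    helm list --all-namespaces | grep -E 'longhorn|suse-storage'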

35.1.1.3 Prepare for Rancher Turtles upgrade

Note

Applies only to CAPI/Metal3 management clusters that require a Rancher Turtles chart upgrade.

You must ensure the management cluster is first updated to the latest 3.4 z-stream, which contains the necessary 0.24.3 Rancher Turtles version.

Starting with Rancher 2.13, Rancher Turtles is installed by default; it is therefore necessary to follow some additional migration steps, as described in the Rancher Turtles documentation.

First, remove the installed CAPIProvider resources:

kubectl delete capiprovider -A --all
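
One possible way to wait for the deletion to finish is shown below; repeatedly checking kubectl get capiprovider -A until nothing is returned works just as well:

kubectl wait --for=delete capiprovider --all -A --timeout=300s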

After the step above has completed, remove the installed rancher-turtles chart and the rancher-turtles-airgap-resources chart (if installed). When these charts were installed via Edge Image Builder, this requires removing the corresponding HelmChart resources:

kubectl delete -n kube-system helmchart rancher-turtles
kubectl delete -n kube-system helmchart rancher-turtles-airgap-resources

Next, patch the CRD resources as described in the Rancher Turtles documentation:

kubectl patch crd capiproviders.turtles-capi.cattle.io --type=json -p='[{"op": "add", "path": "/metadata/annotations/meta.helm.sh~1release-namespace", "value": "cattle-turtles-system"}]'
kubectl patch crd clusterctlconfigs.turtles-capi.cattle.io --type=json -p='[{"op": "add", "path": "/metadata/annotations/meta.helm.sh~1release-namespace", "value": "cattle-turtles-system"}]'
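
To confirm the patches were applied, you can optionally read one of the annotations back, for example:

kubectl get crd capiproviders.turtles-capi.cattle.io -o jsonpath='{.metadata.annotations.meta\.helm\.sh/release-namespace}'

The expected output is cattle-turtles-system.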

Now follow the regular steps to upgrade the management cluster to Edge 3.5.0.

35.1.1.4 Rancher Turtles post-upgrade

After following the steps below to upgrade to Edge 3.5.0, it is necessary to install the new rancher-turtles-providers Helm chart. This creates new CAPIProvider resources to replace those removed in the pre-upgrade steps above.

This chart installation should be done via a HelmChart resource to enable future automated upgrades via the Upgrade Controller:

helm pull oci://registry.suse.com/edge/charts/rancher-turtles-providers --version 305.0.4+up0.25.1

cat > turtles-providers-helmchart.yaml <<EOF
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  annotations:
    edge.suse.com/repository-url: oci://registry.suse.com/edge/charts/rancher-turtles-providers
  name: rancher-turtles-providers
  namespace: kube-system
spec:
  chartContent: $(base64 -w 0 rancher-turtles-providers-305.0.4+up0.25.1.tgz)
  failurePolicy: reinstall
  createNamespace: true
  targetNamespace: cattle-turtles-system
  version: 305.0.4+up0.25.1
EOF
kubectl apply -f turtles-providers-helmchart.yaml

After a few minutes, output similar to the following should be observed:

kubectl get capiprovider -A
NAMESPACE                   NAME                 TYPE             PROVIDERNAME    INSTALLEDVERSION   PHASE
capm3-system                metal3               infrastructure   metal3          v1.10.4            Ready
cattle-capi-system          cluster-api          core             cluster-api     v1.10.6            Ready
fleet-addon-system          fleet                addon            rancher-fleet   v0.12.0            Ready
metal3-ipam-system          metal3ipam           ipam             metal3ipam      v1.10.4            Ready
rke2-bootstrap-system       rke2-bootstrap       bootstrap        rke2            v0.21.1            Ready
rke2-control-plane-system   rke2-control-plane   controlPlane     rke2            v0.21.1            Ready

35.1.2 Upgrade Controller

Important

The Upgrade Controller currently supports Edge release migrations only for non air-gapped management clusters.

The following topics are covered as part of this section:

Section 35.1.2.1, “Prerequisites” - prerequisites specific to the Upgrade Controller.

Section 35.1.2.2, “Migration steps” - steps for migrating a management cluster to a new Edge version using the Upgrade Controller.

35.1.2.1 Prerequisites

35.1.2.1.1 Edge 3.5 Upgrade Controller

Before using the Upgrade Controller, you must first ensure that it is running a version that is capable of migrating to the desired Edge release.

To do this:

  1. If you already have Upgrade Controller deployed from a previous Edge release, upgrade its chart:

    helm upgrade upgrade-controller -n upgrade-controller-system oci://registry.suse.com/edge/charts/upgrade-controller --version 305.0.3+up0.1.3
  2. If you do not have Upgrade Controller deployed, follow Section 22.3, “Installing the Upgrade Controller”.
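
In either case, you can check which chart version is deployed before proceeding, for example (adjust the namespace if the Upgrade Controller was installed elsewhere):

helm list -n upgrade-controller-system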

35.1.2.2 Migration steps

Performing a management cluster migration with the Upgrade Controller is fundamentally similar to executing an upgrade.

The only difference is that your UpgradePlan must specify the 3.5.0 release version:

apiVersion: lifecycle.suse.com/v1alpha1
kind: UpgradePlan
metadata:
  name: upgrade-plan-mgmt
  # Change to the namespace of your Upgrade Controller
  namespace: CHANGE_ME
spec:
  releaseVersion: 3.5.0
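
Assuming the manifest above is saved as upgrade-plan-mgmt.yaml (a placeholder file name), it is applied like any other resource:

kubectl apply -f upgrade-plan-mgmt.yaml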

For information on how to use the above UpgradePlan to do a migration, refer to Upgrade Controller upgrade process (Section 36.1, “Upgrade Controller”).

35.1.3 Fleet

Note

Whenever possible, use the Upgrade Controller (Section 35.1.2, “Upgrade Controller”) for migration.

Refer to this section only for use cases not covered by the Upgrade Controller.

Performing a management cluster migration with Fleet is fundamentally similar to executing an upgrade.

The key differences are:

  1. The fleets must be used from the release-3.5.0 release of the suse-edge/fleet-examples repository (see the example after this list).

  2. Charts scheduled for an upgrade must be upgraded to versions compatible with the Edge 3.5.0 release. For a list of the Edge 3.5.0 components, refer to Section 53.3, “Release 3.5.0”.
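
For example, a local checkout of the correct fleets could look as follows, assuming the release is available as a Git branch or tag named release-3.5.0:

git clone https://github.com/suse-edge/fleet-examples.git
cd fleet-examples
git checkout release-3.5.0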

Important

To ensure a successful Edge 3.5.0 migration, it is important that users comply with the points outlined above.

Considering the points above, users can follow the management cluster Fleet (Section 36.2, “Fleet”) documentation for a comprehensive guide on the steps required to perform a migration.

35.2 Downstream Clusters

Section 35.2.1, “Fleet” - how to do a downstream cluster migration using Fleet (Chapter 8, Fleet).

35.2.1 Fleet

Performing a downstream cluster migration with Fleet is fundamentally similar to executing an upgrade.

The key differences are:

  1. The fleets must be used from the release-3.5.0 release of the suse-edge/fleet-examples repository.

  2. Charts scheduled for an upgrade must be upgraded to versions compatible with the Edge 3.5.0 release. For a list of the Edge 3.5.0 components, refer to Section 53.3, “Release 3.5.0”.

Important

To ensure a successful Edge 3.5.0 migration, it is important that users comply with the points outlined above.

Considering the points above, users can follow the downstream cluster Fleet (Section 37.1, “Fleet”) documentation for a comprehensive guide on the steps required to perform a migration.