Legacy SUSE Multi-Linux Manager Server Migration to Container

To migrate a legacy SUSE Multi-Linux Manager Server to a container, a new machine is required.

In the context of this migration, the legacy SUSE Multi-Linux Manager Server (RPM installation) is sometimes also referred to as the old server.

1. Requirements and Considerations

1.1. Hostnames

In-place migration is not possible, and the migration procedure does not currently offer any hostname renaming functionality.

Thus the fully qualified domain name (FQDN) on the new server will remain identical to that on the legacy server.

After migration, it is necessary to update the DHCP and DNS records to point to the new server.

For more information, see Finalize migration.

2. GPG Keys

  • Self-trusted GPG keys are not migrated.

  • GPG keys that are trusted only in the RPM database are not migrated. As a result, synchronizing channels with spacewalk-repo-sync can fail.

  • The administrator must migrate these keys manually from the legacy SUSE Multi-Linux Manager installation to the container host after the actual server migration.

Procedure: Manual Migration of the GPG Keys to New Server
  1. Copy the keys from the legacy SUSE Multi-Linux Manager Server to the container host of the new server.

  2. Then add each key to the migrated server with the command mgradm gpg add <PATH_TO_KEY_FILE>.
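
The two steps above can be sketched as follows. The key directory /root/gpg-keys/ and the FQDN oldserver.example.com are hypothetical examples; replace them with the paths and hostname of your environment:

# Copy the custom GPG keys from the legacy server to the container host
scp root@oldserver.example.com:/root/gpg-keys/*.key /root/gpg-keys/

# Add each key to the migrated server
for key in /root/gpg-keys/*.key; do
    mgradm gpg add "$key"
done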

2.1. Initial Preparation on the Legacy Server

The migration can take a long time, depending on the amount of data that needs to be replicated. To reduce downtime, the migration can be run multiple times in a process of initial replication, one or more re-replications, and a final replication with switch-over, while all services on the legacy server stay up and running.

Only during the final replication do the services on the legacy server need to be stopped.

For all non-final replication runs, add the --prepare parameter to prevent the services on the legacy server from being stopped automatically. For example:

mgradm migrate podman <oldserver.fqdn> --prepare
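
A complete iterative migration might then look like the following sketch, where only the last run omits --prepare and therefore stops the services on the legacy server. The FQDN is a placeholder:

# Initial replication; legacy services keep running
mgradm migrate podman oldserver.example.com --prepare

# Optional re-replication to shorten the final delta
mgradm migrate podman oldserver.example.com --prepare

# Final replication and switch-over; legacy services are stopped
mgradm migrate podman oldserver.example.com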
Procedure: Initial Preparation on the Legacy Server
  1. Stop the SUSE Multi-Linux Manager services:

    spacewalk-service stop
  2. Stop the PostgreSQL service:

    systemctl stop postgresql

2.2. SSH Connection Preparation

Procedure: Preparing the SSH connection
  1. Ensure that an SSH key for root exists on the new 5.1 server. If a key does not exist, create it with the command:

    ssh-keygen -t rsa
  2. Prepare the SSH configuration and agent on the new server host so that a connection to the legacy server does not prompt for a password:

    eval $(ssh-agent); ssh-add

    To establish a connection without a password prompt, the migration script relies on an SSH agent running on the new server. If the agent is not yet active, start it by running eval $(ssh-agent). Then add the SSH key to the running agent with ssh-add followed by the path to the private key. You will be prompted for the passphrase of the private key during this process.

  3. Copy the public SSH key to the legacy SUSE Multi-Linux Manager Server (<oldserver.fqdn>) with ssh-copy-id. Replace <oldserver.fqdn> with the FQDN of the legacy server:

    ssh-copy-id <oldserver.fqdn>

    The SSH key will be copied into the legacy server’s ~/.ssh/authorized_keys file. For more information, see the ssh-copy-id manpage.

  4. Establish an SSH connection from the new server to the legacy SUSE Multi-Linux Manager Server to verify that no password is needed. There must also be no problems with the host fingerprint; in case of trouble, remove old fingerprints from the ~/.ssh/known_hosts file and try again. The fingerprint will be stored in the local ~/.ssh/known_hosts file.
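
This check might look like the following sketch; the FQDN is a placeholder:

# Must succeed without a password prompt
ssh root@oldserver.example.com true

# In case of a changed host fingerprint, remove the stale entry and retry
ssh-keygen -R oldserver.example.com
ssh root@oldserver.example.com true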

2.3. Perform the Migration

When planning your migration from a legacy SUSE Multi-Linux Manager to a containerized SUSE Multi-Linux Manager, ensure that your target instance meets or exceeds the specifications of the legacy setup. This includes, but is not limited to, memory (RAM), CPU cores, storage, and network bandwidth.

SUSE Multi-Linux Manager server hosts that are hardened for security may restrict execution of files from the /tmp folder. In such cases, as a workaround, export the TMPDIR environment variable to another existing path before running mgradm. For example:

export TMPDIR=/path/to/other/tmp

Future SUSE Multi-Linux Manager updates will change the tools so that this workaround is no longer necessary.

2.3.1. Configure Custom Persistent Storage

Configuring persistent storage is optional, but it is strongly recommended: without it, the container can easily run into full-disk conditions. Configure custom persistent storage with the mgr-storage-server tool.

For more information, see mgr-storage-server --help. This tool simplifies creating the container storage and database volumes.

Use the command in the following manner:

mgr-storage-server <storage-disk-device> [<database-disk-device>]

Devices must not have any filesystem. The command aborts if a filesystem exists on the storage device.

For example:

mgr-storage-server /dev/nvme1n1 /dev/nvme2n1

This command will create the persistent storage volumes at /var/lib/containers/storage/volumes.
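
To check that the volumes were actually created on the container host, you can list them with Podman. The volume name in the inspect command is a placeholder; use a name from the podman volume ls output:

podman volume ls
podman volume inspect <volume-name> --format '{{ .Mountpoint }}'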

2.3.2. Perform the Migration

  1. Execute the following command to install a new SUSE Multi-Linux Manager server. Replace <oldserver.fqdn> with the FQDN of the legacy server:

    mgradm migrate podman <oldserver.fqdn>
  2. Migrate trusted SSL CA certificates.

2.3.3. Migration of the Certificates

Trusted SSL CA certificates that were installed as part of an RPM and stored on a legacy SUSE Multi-Linux Manager in the /usr/share/pki/trust/anchors/ directory will not be migrated. Because SUSE does not install RPM packages in the container, the administrator must migrate these certificate files manually from the legacy installation after migration:

Procedure: Migrating the Certificates
  1. Copy the file from the legacy server to the new server, for example as /local/ca.file.

  2. Copy the file into the container with the command:

    mgrctl cp /local/ca.file server:/etc/pki/trust/anchors/
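
Taken together, the certificate migration might look like the following sketch. The source path and FQDN are hypothetical examples, and the final trust-store refresh inside the container is an assumption that may not be needed on all versions:

# Copy the CA file from the legacy server to the new server host
scp root@oldserver.example.com:/usr/share/pki/trust/anchors/ca.file /local/ca.file

# Copy the file into the container
mgrctl cp /local/ca.file server:/etc/pki/trust/anchors/

# Assumption: refresh the trust store inside the container
mgrctl exec -- update-ca-certificates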

2.3.4. Finalize migration

After successfully running the mgradm migrate command, the Salt setup on all clients will still point to the legacy server.

To redirect them to the new 5.1 server, it is required to rename the new server at the infrastructure level (DHCP and DNS) to use the same FQDN and IP address as the legacy server.
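
After updating the records, you can verify from a client that the legacy FQDN now resolves to the new server. The FQDN is a placeholder:

# Should return the IP address of the new server
dig +short oldserver.example.com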

If something goes wrong with the migration, it is possible to restart the legacy system. As root, restart PostgreSQL and the Spacewalk services with the following commands:

systemctl start postgresql
spacewalk-service start

3. Kubernetes Preparations

Before executing the migration with the mgradm migrate command, it is essential to predefine persistent volumes, especially because the migration job starts the container from scratch.

For more information, see the installation section on preparing these volumes in Persistent Container Volumes.
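
As a minimal sketch, a PersistentVolume could be predefined as follows. The volume name, capacity, storage class, and path are hypothetical examples; use the volume names and sizes documented in Persistent Container Volumes:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: var-pgsql
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  hostPath:
    path: /data/var-pgsql
EOF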

4. Migrating

Execute the following command to install a new SUSE Multi-Linux Manager server, replacing <oldserver.fqdn> with the appropriate FQDN of the legacy server:

mgradm migrate podman <oldserver.fqdn>

or

mgradm migrate kubernetes <oldserver.fqdn>

After successfully running the mgradm migrate command, the Salt setup on all clients will still point to the legacy server. To redirect them to the new server, it is required to rename the new server at the infrastructure level (DHCP and DNS) to use the same FQDN and IP address as the legacy server.