Applies to SUSE Linux Enterprise Server 15 SP6

17 Migrating VM Guests

One of the major advantages of virtualization is that VM Guests are portable. When a VM Host Server needs maintenance, or when the host becomes overloaded, the guests can be moved to another VM Host Server. KVM and Xen even support live migrations during which the VM Guest is constantly available.

17.1 Types of migration

Depending on the required scenario, there are three ways you can migrate virtual machines (VMs).

Live migration

The source VM continues to run while its configuration and memory are transferred to the target host. When the transfer is complete, the source VM is suspended and the target VM is resumed.

Live migration is useful for VMs that need to be online without any downtime.

Note

VMs experiencing heavy I/O load or frequent memory page writes are challenging to live migrate. In such cases, consider using non-live or offline migration.

Non-live migration

The source VM is suspended and its configuration and memory are transferred to the target host. Then the target VM is resumed.

Non-live migration is more reliable than live migration, although it creates downtime for the VM. If downtime is tolerable, non-live migration can be an option for VMs that are difficult to live migrate.

Offline migration

The VM definition is transferred to the target host. The source VM is not stopped and the target VM is not resumed.

Offline migration can be used to migrate inactive VMs.

Important

The --persistent option must be used together with offline migration.
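
For example, with virsh, an offline migration of an inactive guest could look like the following sketch (the guest name and connection URI are placeholders):

> virsh migrate --offline --persistent VM_NAME qemu+ssh://tux@jupiter.example.com/system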

17.2 Migration requirements

To successfully migrate a VM Guest to another VM Host Server, the following requirements need to be met:

  • The source and target systems must have the same architecture.

  • Storage devices must be accessible from both machines, for example, via NFS or iSCSI. For more information, see Chapter 13, Advanced storage topics.

    This is also true for CD-ROM or floppy images that are connected during the move. However, you can disconnect them before the move as described in Section 14.11, “Ejecting and changing floppy or CD/DVD-ROM media with Virtual Machine Manager”.

  • libvirtd needs to run on both VM Host Servers and you must be able to open a remote libvirt connection between the target and the source host (or vice versa). Refer to Section 12.3, “Configuring remote connections” for details.

  • If a firewall is running on the target host, ports need to be opened to allow the migration. If you do not specify a port during the migration process, libvirt chooses one from the range 49152-49215. Make sure that either this range (recommended) or a dedicated port of your choice is opened in the firewall on the target host; see the firewalld sketch after this list.

  • The source and target machines should be in the same subnet on the network; otherwise, networking fails after the migration.

  • All VM Host Servers participating in migration must have the same UID for the qemu user and the same GIDs for the kvm, qemu and libvirt groups.

  • No running or paused VM Guest with the same name may exist on the target host. If a shut-down machine with the same name exists, its configuration is overwritten.

  • All CPU models, except the host CPU model, are supported when migrating VM Guests.

  • The SATA disk device type is not migratable.

  • The file system pass-through feature is incompatible with migration.

  • The VM Host Server and VM Guest need to have proper timekeeping installed. See Chapter 20, VM Guest clock settings.

  • No physical devices can be passed from host to guest. Live migration is currently not supported when using devices with PCI pass-through or SR-IOV. If live migration needs to be supported, use software virtualization (paravirtualization or full virtualization).

  • The cache mode is an important setting for migration. See Section 19.6, “Cache modes and live migration”.

  • Backward migration, for example, from SLES 15 SP2 to 15 SP1, is not supported.

  • SUSE strives to support live migration of VM Guests from a VM Host Server running a service pack under LTSS to a VM Host Server running a newer service pack within the same SLES major version. For example, VM Guest migration from a SLES 12 SP2 host to a SLES 12 SP5 host. SUSE only performs minimal testing of LTSS-to-newer migration scenarios and recommends thorough on-site testing before attempting to migrate critical VM Guests.

  • The image directory should be located in the same path on both hosts.

  • All hosts should be on the same level of microcode (especially the Spectre microcode updates). This can be achieved by installing the latest updates of SUSE Linux Enterprise Server on all hosts.
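
To satisfy the firewall requirement above, open the libvirt migration port range on the target host. A minimal sketch using firewalld, the default firewall in SUSE Linux Enterprise Server (the public zone is an assumption; adjust it to your setup):

> sudo firewall-cmd --permanent --zone=public --add-port=49152-49215/tcp
> sudo firewall-cmd --reload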

17.3 Live-migrating with Virtual Machine Manager

When using Virtual Machine Manager to migrate VM Guests, it does not matter on which machine the application is started. You can start Virtual Machine Manager on the source host, on the target host, or even on a third host. In the latter case, you need to be able to open remote connections to both the target and the source host.

  1. Start Virtual Machine Manager and establish a connection to the target or the source host. If Virtual Machine Manager was started on neither the target nor the source host, connections to both hosts need to be opened.

  2. Right-click the VM Guest that you want to migrate and choose Migrate. Make sure the guest is running or paused—it is not possible to migrate guests that are shut down.

    Tip: Increasing the speed of the migration

    To increase the speed of the migration, pause the VM Guest. This is the equivalent of non-live migration described in Section 17.1, “Types of migration”.

  3. Choose a New Host for the VM Guest. If the desired target host does not show up, make sure that you are connected to the host.

    To change the default options for connecting to the remote host, under Connection, set the Mode, and the target host's Address (IP address or host name) and Port. If you specify a Port, you must also specify an Address.

    Under Advanced options, choose whether the move should be permanent (default) or temporary, using Temporary move.

    Additionally, there is the option Allow unsafe, which allows migrating without disabling the cache of the VM Host Server. This can speed up the migration, but it only works when the current configuration allows for a consistent view of the VM Guest storage without using cache="none"/O_DIRECT.

    Note: Bandwidth option

    In recent versions of Virtual Machine Manager, the option of setting a bandwidth for the migration has been removed. To set a specific bandwidth, use virsh instead; see the sketch after these steps.

  4. To perform the migration, click Migrate.

    When the migration is complete, the Migrate window closes and the VM Guest is now listed on the new host in the Virtual Machine Manager window. The original VM Guest is still available on the source host in the shut-down state.
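
As mentioned in the bandwidth note above, a migration bandwidth limit can only be set with virsh. A minimal sketch that caps the transfer rate at 100 MiB/s (the guest name and connection URI are placeholders):

> virsh migrate --live --bandwidth 100 VM_NAME qemu+ssh://tux@jupiter.example.com/system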

17.4 Migrating with virsh

To migrate a VM Guest with virsh migrate, you need to have direct or remote shell access to the VM Host Server, because the command needs to be run on the host. The migration command looks like this:

> virsh migrate [OPTIONS] VM_ID_or_NAME CONNECTION_URI [--migrateuri tcp://REMOTE_HOST:PORT]

The most important options are listed below. See virsh help migrate for a full list.

--live

Does a live migration. If not specified, the guest is paused during the migration (non-live migration).

--suspend

Leaves the VM paused on the target host after a live or non-live migration.

--persistent

Persists the migrated VM on the target host. Without this option, the VM is not included in the list of domains reported by virsh list --all when it is shut down.

--undefinesource

When specified, the VM Guest definition on the source host is deleted after a successful migration. However, virtual disks attached to this guest are not deleted.

--parallel --parallel-connections NUM_OF_CONNECTIONS

Parallel migration can be used to increase migration data throughput when a single migration thread is not capable of saturating the network link between the source and target hosts. On hosts with 40 GbE network interfaces, for example, it may require four migration threads to saturate the link. Parallel migration can significantly reduce the time required to migrate VMs with large amounts of memory; see the parallel migration example below.

The following examples use mercury.example.com as the source system and jupiter.example.com as the target system; the VM Guest's name is opensuse131 with ID 37.

Non-live migration with default parameters
> virsh migrate 37 qemu+ssh://tux@jupiter.example.com/system
Transient live migration with default parameters
> virsh migrate --live opensuse131 qemu+ssh://tux@jupiter.example.com/system
Persistent live migration; delete VM definition on source
> virsh migrate --live --persistent --undefinesource 37 \
qemu+tls://tux@jupiter.example.com/system
Non-live migration using port 49152
> virsh migrate opensuse131 qemu+ssh://tux@jupiter.example.com/system \
--migrateuri tcp://jupiter.example.com:49152
Live migration transferring all used storage
> virsh migrate --live --persistent --copy-storage-all \
opensuse156 qemu+ssh://tux@jupiter.example.com/system
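Parallel live migration using four connections (a sketch based on the --parallel options above; adjust the connection count to your link speed)
> virsh migrate --live --parallel --parallel-connections 4 \
opensuse156 qemu+ssh://tux@jupiter.example.com/system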
Important

When migrating a VM's storage using the --copy-storage-all option, the storage must be placed in a libvirt storage pool. A storage pool with the same type and name as the source pool must exist on the target host.

To obtain the XML representation of the source pool, use the following command:

> sudo virsh pool-dumpxml EXAMPLE_POOL > EXAMPLE_POOL.xml

To create and start the storage pool on the target host, copy its XML representation there and use the following commands:

> sudo virsh pool-define EXAMPLE_POOL.xml
> sudo virsh pool-start EXAMPLE_POOL
Note: Transient compared to persistent migrations

By default, virsh migrate creates a temporary (transient) copy of the VM Guest on the target host. A shut-down version of the original guest description remains on the source host. A transient copy is deleted from the target host after the guest is shut down.

To create a permanent copy of a guest on the target host, use the switch --persistent. A shut-down version of the original guest description remains on the source host, too. Use the option --undefinesource together with --persistent for a real move where a permanent copy is created on the target host and the version on the source host is deleted.

It is not recommended to use --undefinesource without the --persistent option, since this results in the loss of both VM Guest definitions when the guest is shut down on the target host.

17.5 Step-by-step example

17.5.1 Exporting the storage

First, you need to export the storage to share the guest image between the hosts. This can be done with an NFS server. In the following example, we want to share the /volume1/VM directory with all machines on the 10.0.1.0/24 network, using a SUSE Linux Enterprise NFS server. As the root user, edit the /etc/exports file and add:

/volume1/VM 10.0.1.0/24(rw,sync,no_root_squash)

You need to restart the NFS server:

> sudo systemctl restart nfsserver
> sudo exportfs
/volume1/VM      10.0.1.0/24
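
You can optionally verify from another machine that the export is visible. A sketch assuming the NFS client tools are installed and that the server has the IP address 10.0.1.99 used in the next section:

> showmount -e 10.0.1.99
Export list for 10.0.1.99:
/volume1/VM 10.0.1.0/24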

17.5.2 Defining the pool on the target hosts

On each host to which you want to migrate the VM Guest, the pool must be defined to be able to access the volume that contains the guest image. Our NFS server IP address is 10.0.1.99, its share is the /volume1/VM directory, and we want to mount it at /var/lib/libvirt/images/VM. The pool name is VM. To define this pool, create a VM.xml file with the following content:

<pool type='netfs'>
  <name>VM</name>
  <source>
    <host name='10.0.1.99'/>
    <dir path='/volume1/VM'/>
    <format type='auto'/>
  </source>
  <target>
    <path>/var/lib/libvirt/images/VM</path>
    <permissions>
      <mode>0755</mode>
      <owner>-1</owner>
      <group>-1</group>
    </permissions>
  </target>
</pool>

Then load it into libvirt using the pool-define command:

# virsh pool-define VM.xml

An alternative way to define this pool is to use the virsh command:

# virsh pool-define-as VM --type netfs --source-host 10.0.1.99 \
     --source-path /volume1/VM --target /var/lib/libvirt/images/VM
Pool VM defined

The following commands assume that you are in the interactive shell of virsh, which you can enter by running the virsh command without any arguments. The pool can then be set to start automatically at host boot (autostart option):

virsh # pool-autostart VM
Pool VM marked as autostarted

To disable the autostart:

virsh # pool-autostart VM --disable
Pool VM unmarked as autostarted
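
Note that defining a pool does not start it. To make the pool active before checking it, start it once (expected output shown, assuming the NFS share mounts cleanly):

virsh # pool-start VM
Pool VM started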

Check if the pool is present:

virsh # pool-list --all
 Name                 State      Autostart
-------------------------------------------
 default              active     yes
 VM                   active     yes

virsh # pool-info VM
Name:           VM
UUID:           42efe1b3-7eaa-4e24-a06a-ba7c9ee29741
State:          running
Persistent:     yes
Autostart:      yes
Capacity:       2.68 TiB
Allocation:     2.38 TiB
Available:      306.05 GiB

Warning: Pool needs to exist on all target hosts

Remember: this pool must be defined on each host where you want to be able to migrate your VM Guest.

17.5.3 Creating the volume

The pool has been defined—now we need a volume which contains the disk image:

virsh # vol-create-as VM sled12.qcow2 8G --format qcow2
Vol sled12.qcow2 created

The volume name shown here is used later to install the guest with virt-install.
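
You can verify that the volume exists in the pool (the path below assumes the target directory from the pool definition above):

virsh # vol-list VM
 Name           Path
------------------------------------------------------------
 sled12.qcow2   /var/lib/libvirt/images/VM/sled12.qcow2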

17.5.4 Creating the VM Guest

Let us create a SUSE Linux Enterprise Server VM Guest with the virt-install command. The VM pool is specified with the --disk option; cache=none is recommended if you do not want to use the --unsafe option while doing the migration.

# virt-install --connect qemu:///system --virt-type kvm --name \
   sles15 --memory 1024 --disk vol=VM/sled12.qcow2,cache=none --cdrom \
   /mnt/install/ISO/SLE-15-Server-DVD-x86_64-Build0327-Media1.iso --graphics \
   vnc --os-variant sled15
Starting install...
Creating domain...
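
Once the installation has started, you can check that the new domain is known to libvirt (the ID shown is an example):

virsh # list
 Id   Name     State
------------------------
 1    sles15   running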

17.5.5 Migrating the VM Guest

Everything is now ready to perform the migration. Run the migrate command on the VM Host Server that currently hosts the VM Guest, and choose the target host.

virsh # migrate --live sles15 --verbose qemu+ssh://TARGET_IP_OR_HOSTNAME/system
Password:
Migration: [ 12 %]
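
When the progress reaches 100 %, the guest runs on the target host. To verify, open a connection to the target host and list its domains (the ID shown is an example):

> virsh -c qemu+ssh://TARGET_IP_OR_HOSTNAME/system list
 Id   Name     State
------------------------
 1    sles15   running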