Applies to SUSE Linux Enterprise Server 15 SP3

13 Advanced storage topics

This chapter introduces advanced topics about manipulating storage from the perspective of the VM Host Server.

13.1 Locking disk files and block devices with virtlockd

Locking block devices and disk files prevents concurrent writes to these resources from different VM Guests. It provides protection against starting the same VM Guest twice, or adding the same disk to two different virtual machines. This reduces the risk of a virtual machine's disk image becoming corrupted because of a wrong configuration.

The locking is controlled by a daemon called virtlockd. Since it operates independently of the libvirtd daemon, locks endure a crash or restart of libvirtd. Locks even persist across an update of virtlockd itself, because the daemon can re-execute itself, so VM Guests do not need to be restarted when virtlockd is updated. virtlockd is supported for KVM, QEMU, and Xen.

13.1.1 Enable locking

Locking virtual disks is not enabled by default on SUSE Linux Enterprise Server. To enable and automatically start it upon rebooting, perform the following steps:

  1. Edit /etc/libvirt/qemu.conf and set

    lock_manager = "lockd"
  2. Start the virtlockd daemon with the following command:

    > sudo systemctl start virtlockd
  3. Restart the libvirtd daemon with:

    > sudo systemctl restart libvirtd
  4. Make sure virtlockd is automatically started when booting the system:

    > sudo systemctl enable virtlockd
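
After completing these steps, you can verify that the daemon is running and will be started at boot:

> sudo systemctl status virtlockd
> sudo systemctl is-enabled virtlockd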

13.1.2 Configure locking

By default, virtlockd is configured to automatically lock all disks configured for your VM Guests. The default setting uses a "direct" lockspace, where the locks are acquired against the actual file paths associated with the VM Guest <disk> devices. For example, an fcntl() lock is acquired directly on /var/lib/libvirt/images/my-server/disk0.raw when the VM Guest contains the following <disk> device:

<disk type='file' device='disk'>
 <driver name='qemu' type='raw'/>
 <source file='/var/lib/libvirt/images/my-server/disk0.raw'/>
 <target dev='vda' bus='virtio'/>
</disk>

The virtlockd configuration can be changed by editing the file /etc/libvirt/qemu-lockd.conf. The file also contains detailed comments with further information. Make sure to activate configuration changes by reloading virtlockd:

> sudo systemctl reload virtlockd
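
For orientation, two settings in this file that you may want to review are sketched below. The option names match those found in the shipped qemu-lockd.conf; the values and comments here are illustrative only, so check the comments in your copy of the file for the authoritative defaults:

# Acquire leases automatically for every <disk> device of a VM Guest
auto_disk_leases = 1

# Refuse to start VM Guests whose disks are not protected by a lease
require_lease_for_disks = 1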

13.1.2.1 Enabling an indirect lockspace

The default configuration of virtlockd uses a direct lockspace. This means that the locks are acquired against the actual file paths associated with the <disk> devices.

If the disk file paths are not accessible to all hosts, virtlockd can be configured to allow an indirect lockspace. This means that a hash of the disk file path is used to create a file in the indirect lockspace directory. The locks are then held on these hash files instead of the actual disk file paths. Indirect lockspace is also useful if the file system containing the disk files does not support fcntl() locks. An indirect lockspace is specified with the file_lockspace_dir setting:

file_lockspace_dir = "/MY_LOCKSPACE_DIRECTORY"
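
As a quick check, while a VM Guest with locked disks is running you can list the lockspace directory (the directory name above is a placeholder); the lock files there are named after hashes of the disk file paths rather than the paths themselves:

> sudo ls -l /MY_LOCKSPACE_DIRECTORY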

13.1.2.2 Enable locking on LVM or iSCSI volumes

To lock virtual disks placed on LVM or iSCSI volumes that are shared by several hosts, locking needs to be done by UUID rather than by path (the default). Furthermore, the lockspace directory needs to be placed on a shared file system accessible to all hosts sharing the volume. Set the following options for LVM and/or iSCSI:

lvm_lockspace_dir = "/MY_LOCKSPACE_DIRECTORY"
iscsi_lockspace_dir = "/MY_LOCKSPACE_DIRECTORY"
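
With these settings, the lock name is derived from the volume's UUID instead of its path. To display the UUID of an LVM volume that backs a guest disk (the volume path below is an example), you can run:

> sudo lvs --noheadings -o uuid /dev/vg00/home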

13.2 Online resizing of guest block devices

Sometimes you need to change the size of the block device used by your guest system, either extending or shrinking it. For example, when the disk space originally allocated is no longer enough, it is time to increase its size. If the guest disk resides on a logical volume, you can resize it while the guest system is running. This is a big advantage over offline disk resizing (see the virt-resize command from the guestfs tools described in Section 20.3, “Guestfs tools”), because the service provided by the guest is not interrupted by the resizing process. To resize a VM Guest disk, follow these steps:

Procedure 13.1: Online resizing of guest disk
  1. Inside the guest system, check the current size of the disk (for example /dev/vda).

    # fdisk -l /dev/vda
    Disk /dev/vda: 160.0 GB, 160041885696 bytes, 312581808 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
  2. On the host, resize the logical volume holding the /dev/vda disk of the guest to the required size, for example 200 GB.

    # lvresize -L 200G /dev/mapper/vg00-home
    Extending logical volume home to 200 GiB
    Logical volume home successfully resized
  3. On the host, resize the guest's block device that is backed by /dev/mapper/vg00-home, so that the running guest notices the new size. Note that you can find the DOMAIN_ID with virsh list.

    # virsh blockresize --path /dev/vg00/home --size 200G DOMAIN_ID
    Block device '/dev/vg00/home' is resized
  4. Check that the new disk size is accepted by the guest.

    # fdisk -l /dev/vda
    Disk /dev/vda: 200.0 GB, 200052357120 bytes, 390727260 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
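
The guest now sees the larger block device, but any partitions and file systems on it keep their previous size and need to be grown separately inside the guest. As a minimal sketch, assuming the disk holds a single ext4 partition and the growpart tool (package name varies by distribution) is available in the guest:

# growpart /dev/vda 1
# resize2fs /dev/vda1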

13.3 Sharing directories between host and guests (file system pass-through)

libvirt allows sharing directories between host and guests using QEMU's file system pass-through (also called VirtFS) feature. Such a directory can also be accessed by several VM Guests at once and can therefore be used to exchange files between VM Guests.

Note: Windows guests and file system pass-through

Note that sharing directories between VM Host Server and Windows guests via File System Pass-Through does not work, because Windows lacks the drivers required to mount the shared directory.

To make a shared directory available on a VM Guest, proceed as follows:

  1. Open the guest's console in Virtual Machine Manager and either choose View › Details from the menu or click Show virtual hardware details in the toolbar. Choose Add Hardware › Filesystem to open the Filesystem Passthrough dialog.

  2. Driver allows you to choose between a Handle-based or Path-based driver. The default setting is Path. Mode lets you choose the security model, which influences the way file permissions are set on the host. Three options are available:

    Passthrough (default)

    Files on the file system are directly created with the client-user's credentials. This is very similar to the NFSv3 model.

    Squash

    Same as Passthrough, but failures of privileged operations like chown are ignored. This is required when KVM is not run with root privileges.

    Mapped

    Files are created with the file server's credentials (qemu.qemu). The user credentials and the client-user's credentials are saved in extended attributes. This model is recommended when host and guest domains should be kept completely isolated.

  3. Specify the path to the directory on the VM Host Server with Source Path. Enter a string at Target Path that will be used as a tag to mount the shared directory. Note that the string entered in this field is a tag only, not a path on the VM Guest.

  4. Apply the setting. If the VM Guest is currently running, you need to shut it down to apply the new setting (rebooting the guest is not sufficient).

  5. Boot the VM Guest. To mount the shared directory, enter the following command:

    > sudo mount -t 9p -o trans=virtio,version=9p2000.L,rw TAG /MOUNT_POINT

    To make the shared directory permanently available, add the following line to the /etc/fstab file:

    TAG   /MOUNT_POINT    9p  trans=virtio,version=9p2000.L,rw    0   0
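
The settings made in this procedure correspond to a <filesystem> device in the guest's domain XML. A minimal sketch with a Path-based driver and the Passthrough security model (the source directory is an example; TAG is the tag chosen above):

<filesystem type='mount' accessmode='passthrough'>
 <driver type='path'/>
 <source dir='/path/on/host'/>
 <target dir='TAG'/>
</filesystem>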

13.4 Using RADOS block devices with libvirt

RADOS Block Devices (RBD) store data in a Ceph cluster. They provide snapshotting, replication, and data consistency. You can use an RBD from your libvirt-managed VM Guests similarly to how you use other block devices.
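
For illustration, such a disk is defined in the guest's domain XML as a network disk using the rbd protocol. A minimal sketch (pool name, image name, and monitor host are examples; authentication, if your Ceph cluster requires it, is configured in addition):

<disk type='network' device='disk'>
 <driver name='qemu' type='raw'/>
 <source protocol='rbd' name='libvirt-pool/vm-disk1'>
  <host name='ceph-mon1.example.com' port='6789'/>
 </source>
 <target dev='vdb' bus='virtio'/>
</disk>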

For more details, refer to the SUSE Enterprise Storage Administration Guide, chapter Using libvirt with Ceph. The SUSE Enterprise Storage documentation is available from https://documentation.suse.com/ses/.