Applies to SUSE Linux Enterprise Server 15 SP2

24 Block Devices in Xen

24.1 Mapping Physical Storage to Virtual Disks

The disk(s) specification for a Xen domain in the domain configuration file is as straightforward as the following example:

disk = [ 'format=raw,vdev=hdc,access=ro,devtype=cdrom,target=/root/image.iso' ]

It defines a disk block device based on the /root/image.iso disk image file. The disk will be seen as hdc by the guest, with read-only (ro) access. The type of the device is cdrom with raw format.

The following example defines an identical device, but using simplified positional syntax:

disk = [ '/root/image.iso,raw,hdc,ro,cdrom' ]

You can include multiple disk definitions on the same line, separated by commas. If a parameter is not specified, its default value is used:

disk = [ '/root/image.iso,raw,hdc,ro,cdrom','/dev/vg/guest-volume,,hda','...' ]

List of Parameters
target

Source block device or disk image file path.

format

The format of the image file. Default is raw.

vdev

Virtual device as seen by the guest. Supported values are hd[x], xvd[x], sd[x], etc. See /usr/share/doc/packages/xen/misc/vbd-interface.txt for more details. This parameter is mandatory.

access

Whether the block device is provided to the guest in read-only or read-write mode. Supported values are ro or r for read-only, and rw or w for read/write access. Default is ro for devtype=cdrom, and rw for other device types.

devtype

Qualifies the virtual device type. The only supported value is cdrom.

backendtype

The back-end implementation to use. Supported values are phy, tap, and qdisk. Normally, this option should not be specified, as the back-end type is automatically determined.

script

Specifies that target is not a normal host path, but rather information to be interpreted by the specified executable script. If the script name does not point to an absolute path, the script file is searched for in /etc/xen/scripts. These scripts are normally named block-<script_name>.
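
Combining several of the named parameters above, a hedged example of a read/write raw disk backed by an LVM volume on the host could look like the following (the virtual device name xvda is an assumption; the volume path matches the earlier example):

disk = [ 'format=raw,vdev=xvda,access=rw,target=/dev/vg/guest-volume' ]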

For more information about specifying virtual disks, see /usr/share/doc/packages/xen/misc/xl-disk-configuration.txt.

24.2 Mapping Network Storage to Virtual Disk

Similar to mapping a local disk image (see Section 24.1, “Mapping Physical Storage to Virtual Disks”), you can map a network disk as a virtual disk as well.

The following example shows mapping of an RBD (RADOS Block Device) disk with multiple Ceph monitors and cephx authentication enabled:

disk = [ 'vdev=hdc, backendtype=qdisk, \
target=rbd:libvirt-pool/new-libvirt-image:\
id=libvirt:key=AQDsPWtW8JoXJBAAyLPQe7MhCC+JPkI3QuhaAw==:auth_supported=cephx;none:\
mon_host=137.65.135.205\\:6789;137.65.135.206\\:6789;137.65.135.207\\:6789' ]

The following is an example of an NBD (Network Block Device) disk mapping:

disk = [ 'vdev=hdc, backendtype=qdisk, target=nbd:151.155.144.82:5555' ]

24.3 File-Backed Virtual Disks and Loopback Devices

When a virtual machine is running, each of its file-backed virtual disks consumes a loopback device on the host. By default, the host allows up to 64 loopback devices to be consumed.

To simultaneously run more file-backed virtual disks on a host, you can increase the number of available loopback devices by adding the following option to the host’s /etc/modprobe.conf.local file.

options loop max_loop=x

where x is the maximum number of loopback devices to create.

Changes take effect after the module is reloaded.

Tip

Enter rmmod loop and modprobe loop to unload and reload the module. In case rmmod does not work, unmount all existing loop devices or reboot the computer.
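
For example, to allow up to 128 loopback devices (the value 128 is an assumption; choose a value that fits your setup), you could run the following commands as root:

echo "options loop max_loop=128" >> /etc/modprobe.conf.local
rmmod loop
modprobe loop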

24.4 Resizing Block Devices

While it is always possible to add new block devices to a VM Guest system, it is sometimes more desirable to increase the size of an existing block device. If such a modification is already planned during deployment of the VM Guest, take the following basic considerations into account:

  • Use a block device that may be increased in size. LVM devices and file system images are commonly used.

  • Do not partition the device inside the VM Guest, but create the file system directly on the main device. For example, use /dev/xvdb directly instead of adding partitions to /dev/xvdb.

  • Make sure that the file system to be used can be resized. Sometimes, for example with Ext3, some features must be switched off before the file system can be resized. XFS is a file system that can be resized online while mounted. Use the command xfs_growfs to resize it after the underlying block device has been increased in size. For more information about XFS, see man 8 xfs_growfs.

When resizing an LVM device that is assigned to a VM Guest, the new size is automatically known to the VM Guest. No further action is needed to inform the VM Guest about the new size of the block device.
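
As a hedged illustration, assume the guest disk is backed by the host logical volume /dev/vg/guest-volume and holds an XFS file system mounted at /data inside the VM Guest (the size increment and mount point are assumptions):

lvextend -L +10G /dev/vg/guest-volume    # on the VM Host Server
xfs_growfs /data                         # inside the VM Guest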

When using file system images, a loop device is used to attach the image file to the guest. For more information about resizing that image and refreshing the size information for the VM Guest, see Section 26.2, “Sparse Image Files and Disk Space”.

24.5 Scripts for Managing Advanced Storage Scenarios

There are scripts that can help with managing advanced storage scenarios, such as disk environments provided by dmmd (device mapper, multi-disk), including LVM environments built upon a software RAID set, or a software RAID set built upon an LVM environment. These scripts are part of the xen-tools package. After installation, they can be found in /etc/xen/scripts:

  • block-dmmd

  • block-drbd-probe

  • block-npiv

The scripts allow external commands to perform some action, or series of actions, on the block devices before they are served up to a guest.

Formerly, these scripts could only be used with xl or libxl via the script= disk configuration syntax. They can now also be used with libvirt by specifying the base name of the block script in the <source> element of the disk. For example:

<source dev='dmmd:md;/dev/md0;lvm;/dev/vgxen/lv-vm01'/>
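
For comparison, a sketch of the equivalent xl disk configuration using script= might look like the following (the virtual device name, format, and access mode are assumptions; the dmmd target string matches the libvirt example above):

disk = [ 'format=raw, vdev=xvdb, access=rw, script=block-dmmd, target=md;/dev/md0;lvm;/dev/vgxen/lv-vm01' ]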