26 Block devices in Xen #
26.1 Mapping physical storage to virtual disks #
The disk specification for a Xen domain in the domain configuration file is as straightforward as the following example:
disk = [ 'format=raw,vdev=hdc,access=ro,devtype=cdrom,target=/root/image.iso' ]
It defines a disk block device based on the /root/image.iso disk image file. The image is seen as hdc by the guest, with read-only (ro) access. The type of the device is cdrom, with raw format.
The following example defines an identical device, but using simplified positional syntax:
disk = [ '/root/image.iso,raw,hdc,ro,cdrom' ]
You can include more disk definitions in the same line, each one separated by a comma. If a parameter is not specified, then its default value is taken:
disk = [ '/root/image.iso,raw,hdc,ro,cdrom','/dev/vg/guest-volume,,hda','...' ]
- target
  Source block device or disk image path.
- format
  The format of the image file. Default is raw.
- vdev
  Virtual device as seen by the guest. Supported values are hd[x], xvd[x], sd[x] etc. See /usr/share/doc/packages/xen/misc/vbd-interface.txt for more details. This parameter is mandatory.
- access
  Whether the block device is provided to the guest in read-only or read-write mode. Supported values are ro or r for read-only, and rw or w for read/write access. Default is ro for devtype=cdrom, and rw for other device types.
- devtype
  Qualifies virtual device type. Supported value is cdrom.
- backendtype
  The back-end implementation to use. Supported values are phy, tap, and qdisk. Normally this option should not be specified, as the back-end type is automatically determined.
- script
  Specifies that target is not a normal host path, but rather information to be interpreted by the executable program. The specified script file is looked for in /etc/xen/scripts if it does not point to an absolute path. These scripts are normally called block-<script_name>.
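To illustrate how these parameters combine, the following is a minimal sketch that maps an LVM volume (the path /dev/vg/guest-volume is only an example) to the guest as xvda with read/write access:
disk = [ 'target=/dev/vg/guest-volume,vdev=xvda,access=rw' ]
Since backendtype is omitted, the back-end implementation is determined automatically, as recommended above.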
For more information about specifying virtual disks, see /usr/share/doc/packages/xen/misc/xl-disk-configuration.txt.
26.2 Mapping network storage to virtual disk #
Similar to mapping a local disk image (see Section 26.1, “Mapping physical storage to virtual disks”), you can map a network disk as a virtual disk as well.
The following example shows mapping of an RBD (RADOS Block Device) disk with multiple Ceph monitors and cephx authentication enabled:
disk = [ 'vdev=hdc, backendtype=qdisk, \
target=rbd:libvirt-pool/new-libvirt-image:\
id=libvirt:key=AQDsPWtW8JoXJBAAyLPQe7MhCC+JPkI3QuhaAw==:auth_supported=cephx;none:\
mon_host=137.65.135.205\\:6789;137.65.135.206\\:6789;137.65.135.207\\:6789' ]
Following is an example of an NBD (Network Block Device) disk mapping:
disk = [ 'vdev=hdc, backendtype=qdisk, target=nbd:151.155.144.82:5555' ]
26.3 File-backed virtual disks and loopback devices #
When a virtual machine is running, each of its file-backed virtual disks consumes a loopback device on the host. By default, the host allows up to 64 loopback devices to be consumed.
To simultaneously run more file-backed virtual disks on a host, you can
increase the number of available loopback devices by adding the following
option to the host’s /etc/modprobe.conf.local
file.
options loop max_loop=x
where x
is the maximum number of loopback devices to
create.
Changes take effect after the module is reloaded.
Enter rmmod loop
and modprobe
loop
to unload and reload the module. In case
rmmod
does not work, unmount all existing loop
devices or reboot the computer.
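For example, the following commands (run as root on the host; the value 128 is only an illustrative choice) append the option and reload the module so the new limit takes effect:
# Allow up to 128 loopback devices on the host
echo "options loop max_loop=128" >> /etc/modprobe.conf.local
# Reload the loop module; unmount loop devices or reboot if rmmod fails
rmmod loop
modprobe loop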
26.4 Resizing block devices #
While it is always possible to add new block devices to a VM Guest system, it is sometimes more desirable to increase the size of an existing block device. If such a system modification is already planned during deployment of the VM Guest, take the following basic considerations into account:
- Use a block device that may be increased in size. LVM devices and file system images are commonly used.
- Do not partition the device inside the VM Guest, but use the main device directly to apply the file system. For example, use /dev/xvdb directly instead of adding partitions to /dev/xvdb.
- Make sure that the file system to be used can be resized. Sometimes, for example with Ext3, certain features must be switched off to be able to resize the file system. A file system that can be resized online, while it is mounted, is XFS. Use the command xfs_growfs to resize that file system after the underlying block device has been increased in size. For more information about XFS, see man 8 xfs_growfs.
When resizing an LVM device that is assigned to a VM Guest, the new size is automatically known to the VM Guest. No further action is needed to inform the VM Guest about the new size of the block device.
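As a short sketch, assume the guest's disk is backed by the logical volume /dev/vg/guest-volume (as in the earlier example) and carries an XFS file system mounted at the hypothetical mount point /data inside the guest. Growing the disk could then look like this:
# On the VM Host Server: enlarge the backing logical volume by 10 GB
lvextend -L +10G /dev/vg/guest-volume
# Inside the VM Guest: grow the mounted XFS file system to the new size
xfs_growfs /data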
When using file system images, a loop device is used to attach the image file to the guest. For more information about resizing that image and refreshing the size information for the VM Guest, see Section 28.2, “Sparse image files and disk space”.
26.5 Scripts for managing advanced storage scenarios #
There are scripts that can help with managing advanced storage scenarios
such as disk environments provided by dmmd
(“device mapper—multi disk”) including LVM
environments built upon a software RAID set, or a software RAID set built
upon an LVM environment. These scripts are part of the
xen-tools package. After installation, they can be
found in /etc/xen/scripts
:
block-dmmd
block-drbd-probe
block-npiv
The scripts allow external commands to perform a specific action, or a series of actions, on the block devices before they are served up to a guest.
These scripts could formerly only be used with xl or libxl using the disk configuration syntax script=. They can now be used with libvirt by specifying the base name of the block script in the <source> element of the disk. For example:
<source dev='dmmd:md;/dev/md0;lvm;/dev/vgxen/lv-vm01'/>
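For comparison, a hedged sketch of a corresponding xl disk line, assuming the same md/LVM stack and the block-dmmd script from /etc/xen/scripts, might look like the following; the exact format of the target string is interpreted by the script itself, so check the script and /usr/share/doc/packages/xen/misc/xl-disk-configuration.txt before relying on it:
disk = [ 'vdev=xvda, access=rw, script=block-dmmd, target=md;/dev/md0;lvm;/dev/vgxen/lv-vm01' ]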