Applies to SUSE Enterprise Storage 7.1

26 libvirt and Ceph

The libvirt library creates a virtual machine abstraction layer between hypervisor interfaces and the software applications that use them. With libvirt, developers and system administrators can focus on a common management framework, common API, and common shell interface (virsh) for many different hypervisors, including QEMU/KVM, Xen, LXC, and VirtualBox.

Ceph block devices support QEMU/KVM. You can use Ceph block devices with software that interfaces with libvirt. Cloud solutions use libvirt to interact with QEMU/KVM, and QEMU/KVM interacts with Ceph block devices via librbd.

To create VMs that use Ceph block devices, use the procedures in the following sections. In the examples, we have used libvirt-pool for the pool name, client.libvirt for the user name, and new-libvirt-image for the image name. You may use any value you like, but ensure you replace those values when executing commands in the subsequent procedures.

26.1 Configuring Ceph with libvirt

To configure Ceph for use with libvirt, perform the following steps:

  1. Create a pool. The following example uses the pool name libvirt-pool with 128 placement groups.

    cephuser@adm > ceph osd pool create libvirt-pool 128 128

    Verify that the pool exists.

    cephuser@adm > ceph osd lspools
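
    If the cluster reports a health warning that the new pool is not associated with an application, you can tag it for RBD use. This is an optional, hedged step; whether it is needed depends on your Ceph release and how the pool was created:

    cephuser@adm > ceph osd pool application enable libvirt-pool rbd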
  2. Create a Ceph user. The following example uses the Ceph user name client.libvirt and references libvirt-pool.

    cephuser@adm > ceph auth get-or-create client.libvirt mon 'profile rbd' osd \
     'profile rbd pool=libvirt-pool'

    Verify that the user name exists.

    cephuser@adm > ceph auth list
    Note: User name or ID

    libvirt will access Ceph using the ID libvirt, not the Ceph name client.libvirt. See Section 30.2.1.1, “User” for a detailed explanation of the difference between ID and name.

  3. Use QEMU to create an image in your RBD pool. The following example uses the image name new-libvirt-image and references libvirt-pool.

    Tip: Keyring file location

    The libvirt user key is stored in a keyring file placed in the /etc/ceph directory. The keyring file needs to have an appropriate name that includes the name of the Ceph cluster it belongs to. For the default cluster name 'ceph', the keyring file name is /etc/ceph/ceph.client.libvirt.keyring.

    If the keyring does not exist, create it with:

    cephuser@adm > ceph auth get client.libvirt > /etc/ceph/ceph.client.libvirt.keyring

    # qemu-img create -f raw rbd:libvirt-pool/new-libvirt-image:id=libvirt 2G

    Verify the image exists.

    cephuser@adm > rbd -p libvirt-pool ls
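
    To inspect the image in more detail, you can optionally query it with rbd info. This is a hedged example run from the admin node:

    cephuser@adm > rbd info libvirt-pool/new-libvirt-image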

26.2 Preparing the VM manager

You may use libvirt without a VM manager, but you may find it simpler to create your first domain with virt-manager.

  1. Install a virtual machine manager.

    # zypper in virt-manager
  2. Prepare/download an OS image of the system you want to run virtualized.

  3. Launch the virtual machine manager.

    virt-manager
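
    Alternatively, the domain described in the next section can also be created from the command line with the virt-install tool, if it is installed. The following is only a rough sketch; the memory size, vCPU count, and image path are placeholder values you would adjust:

    # virt-install --name libvirt-virtual-machine \
      --memory 1024 --vcpus 1 \
      --disk /path/to/image/recent-linux.img \
      --import --os-variant generic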

26.3 Creating a VM

To create a VM with virt-manager, perform the following steps:

  1. Choose the connection from the list, right-click it, and select New.

  2. Import the existing disk image by providing the path to the existing storage. Specify the OS type and memory settings, and name the virtual machine, for example libvirt-virtual-machine.

  3. Finish the configuration and start the VM.

  4. Verify that the newly created domain exists with sudo virsh list. If needed, specify the connection string, such as

    virsh -c qemu+ssh://root@vm_host_hostname/system list
    Id    Name                           State
    -----------------------------------------------
    [...]
     9     libvirt-virtual-machine       running
  5. Log in to the VM and stop it before configuring it for use with Ceph.
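
    Alternatively, you can shut the domain down from the host with virsh, using the example domain name from this chapter:

    # virsh shutdown libvirt-virtual-machine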

26.4 Configuring the VM

In this chapter, we focus on configuring VMs for integration with Ceph using virsh. virsh commands often require root privileges (sudo); if run without them, they will not return appropriate results or notify you that root privileges are required. For a reference of virsh commands, refer to man 1 virsh (requires the package libvirt-client to be installed).

  1. Open the configuration file with virsh edit vm-domain-name.

    # virsh edit libvirt-virtual-machine
  2. Under <devices> there should be a <disk> entry.

    <devices>
        <emulator>/usr/bin/qemu-system-SYSTEM-ARCH</emulator>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw'/>
          <source file='/path/to/image/recent-linux.img'/>
          <target dev='vda' bus='virtio'/>
          <address type='drive' controller='0' bus='0' unit='0'/>
        </disk>

    Replace /path/to/image/recent-linux.img with the path to the OS image.

    Important

    Use sudo virsh edit instead of a text editor. If you edit the configuration file under /etc/libvirt/qemu with a text editor, libvirt may not recognize the change. If there is a discrepancy between the contents of the XML file under /etc/libvirt/qemu and the result of sudo virsh dumpxml vm-domain-name, then your VM may not work properly.
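
    To review the definition that libvirt is actually using, you can dump the active domain XML; the example below uses the domain name from this chapter:

    # virsh dumpxml libvirt-virtual-machine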

  3. Add the Ceph RBD image you previously created as a <disk> entry.

    <disk type='network' device='disk'>
            <source protocol='rbd' name='libvirt-pool/new-libvirt-image'>
                    <host name='monitor-host' port='6789'/>
            </source>
            <target dev='vdb' bus='virtio'/>
    </disk>

    Replace monitor-host with the name of your host, and replace the pool and/or image name as necessary. You may add multiple <host> entries for your Ceph monitors, as in the sketch below. The dev attribute is the logical device name that will appear under the /dev directory of your VM; it must be unique within the domain, which is why this example uses vdb while the OS disk already occupies vda. The optional bus attribute indicates the type of disk device to emulate. The valid settings are driver specific (for example ide, scsi, virtio, xen, usb, or sata).
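
    For example, a disk entry referencing three monitors could look like the following sketch; the monitor host names here are placeholders for your own nodes:

    <disk type='network' device='disk'>
            <source protocol='rbd' name='libvirt-pool/new-libvirt-image'>
                    <host name='mon1.example.com' port='6789'/>
                    <host name='mon2.example.com' port='6789'/>
                    <host name='mon3.example.com' port='6789'/>
            </source>
            <target dev='vdb' bus='virtio'/>
    </disk>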

  4. Save the file.

  5. If your Ceph cluster has authentication enabled (it does by default), you must generate a secret. Open an editor of your choice and create a file called secret.xml with the following content:

    <secret ephemeral='no' private='no'>
            <usage type='ceph'>
                    <name>client.libvirt secret</name>
            </usage>
    </secret>
  6. Define the secret.

    # virsh secret-define --file secret.xml
    <uuid of secret is output here>
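
    If you need to look up the UUID again later, you can list the secrets known to libvirt:

    # virsh secret-list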
  7. Get the client.libvirt key and save the key string to a file.

    cephuser@adm > ceph auth get-key client.libvirt | sudo tee client.libvirt.key
  8. Set the value of the secret, using the UUID that was output when you defined it.

    # virsh secret-set-value --secret uuid of secret \
    --base64 $(cat client.libvirt.key) && rm client.libvirt.key secret.xml

    You must also reference the secret in the domain configuration by adding an <auth> entry to the <disk> element you entered earlier (replacing the uuid value with the result of the command line example above).

    # virsh edit libvirt-virtual-machine

    Then, add the <auth> element to the domain configuration file:

    ...
    </source>
    <auth username='libvirt'>
            <secret type='ceph' uuid='9ec59067-fdbc-a6c0-03ff-df165c0587b8'/>
    </auth>
    <target ...
    Note

    The example ID is libvirt, not the Ceph name client.libvirt, as generated in step 2 of Section 26.1, “Configuring Ceph with libvirt”. Ensure you use the ID component of the Ceph name you generated. If for some reason you need to regenerate the secret, you will need to execute sudo virsh secret-undefine uuid before executing sudo virsh secret-set-value again.
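
    As an optional, hedged check that the secret value was stored correctly, you can compare what libvirt returns with the key known to Ceph; replace uuid of secret with your actual UUID. Both commands should print the same key string:

    # virsh secret-get-value uuid of secret
    cephuser@adm > ceph auth get-key client.libvirt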

26.5 Summary

Once you have configured the VM for use with Ceph, you can start the VM. To verify that the VM and Ceph are communicating, you may perform the following procedures.

  1. Check to see if Ceph is running:

    cephuser@adm > ceph health
  2. Check to see if the VM is running:

    # virsh list
  3. Check to see if the VM is communicating with Ceph. Replace vm-domain-name with the name of your VM domain:

    # virsh qemu-monitor-command --hmp vm-domain-name 'info block'
  4. Check to see if the device from <target dev='vdb' bus='virtio'/> appears under /dev or under /proc/partitions:

    > ls /dev
    > cat /proc/partitions
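
    On most Linux guests, lsblk gives a more readable view of the same information. Assuming the example <target dev='vdb' bus='virtio'/> entry from this chapter, the RBD-backed disk should appear as /dev/vdb:

    > lsblk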