Applies to SUSE Linux Enterprise Server 11 SP4

2 Setting Up a Virtual Machine Host

This section documents how to set up and use SUSE Linux Enterprise Server 11 SP4 as a virtual machine host.

In most cases, the hardware requirements for the Domain0 are the same as those for the SUSE Linux Enterprise Server operating system, but additional CPU, disk, memory, and network resources should be added to accommodate the resource demands of all planned VM Guest systems.


Remember that VM Guest systems, just like physical machines, perform better when they run on faster processors and have access to more system memory.

The following table lists the minimum hardware requirements for running a typical virtualized environment. Add further resources according to the number and type of the planned guest systems.

Table 2.1: Hardware Requirements

System Component     Minimum Requirements

Computer             Pentium II or AMD K7 450 MHz processor

Memory               512 MB of RAM for the host

Free Disk Space      7 GB of available disk space for the host

Optical Drive        DVD-ROM drive

Hard Drive           20 GB

Network Device       Ethernet 100 Mbps

IP Address           • One IP address on a subnet for the host.
                     • One IP address on a subnet for each VM Guest.


Xen virtualization technology is available in SUSE Linux Enterprise Server products based on code path 10 and later. Code path 10 products include Open Enterprise Server 2 Linux, SUSE Linux Enterprise Server 10, SUSE Linux Enterprise Desktop 10, and openSUSE 10.x.

The virtual machine host requires a number of software packages and their dependencies to be installed. To install all necessary packages, run YaST Software Management, select View › Patterns and choose Xen Virtual Machine Host Server for installation. The installation can also be performed with YaST using the module Virtualization › Install Hypervisor and Tools.

After the Xen software is installed, restart the computer.

Updates are available through your update channel. To make sure the latest updates are installed, run YaST Online Update after the installation has finished.

2.1 Best Practices and Suggestions

When installing and configuring the SUSE Linux Enterprise operating system on the host, be aware of the following best practices and suggestions:

  • If the host should always run as a Xen host, activate the Xen boot entry as the default boot section:

    1. In YaST, select System › Boot Loader.

    2. Select the Xen section, then click Set as Default.

    3. Click Finish.

  • Close Virtual Machine Manager if you are not actively using it and restart it when needed. Closing Virtual Machine Manager does not affect the state of virtual machines.

  • For best performance, only the applications and processes required for virtualization should be installed on the virtual machine host.

  • When using both iSCSI and OCFS2 to host Xen images, the latency required by the OCFS2 default timeouts in SP2 may not be met. To reconfigure this timeout, run /etc/init.d/o2cb configure or edit O2CB_HEARTBEAT_THRESHOLD in the system configuration.

2.2 Managing Domain0 Memory

When the host is set up, a percentage of system memory is reserved for the hypervisor, and all remaining memory is automatically allocated to Domain0.

A better solution is to set a default amount of memory for Domain0, so the memory can be allocated appropriately to the hypervisor. An adequate amount would be 20 percent of the total system memory, up to 2 GB. An appropriate minimum amount would be 512 MB.
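This sizing rule can be sketched as a small shell helper. The function below is hypothetical (not part of the product); it only applies the 20-percent/2-GB/512-MB rule stated above:

```shell
# Hypothetical helper applying the rule above: 20 percent of total
# system memory, capped at 2 GB, with a 512 MB minimum.
dom0_mem_mb() {
    total_mb=$1                       # total system memory in MB
    rec=$(( total_mb / 5 ))           # 20 percent
    [ "$rec" -gt 2048 ] && rec=2048   # cap at 2 GB
    [ "$rec" -lt 512 ] && rec=512     # floor of 512 MB
    echo "$rec"
}

dom0_mem_mb 8192     # 8 GB host  -> 1638
dom0_mem_mb 32768    # 32 GB host -> 2048 (capped)
dom0_mem_mb 1024     # 1 GB host  -> 512 (floor)
```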

2.2.1 Setting a Maximum Amount of Memory

  1. Determine the amount of memory to set for Domain0.

  2. At Domain0, type xm info to view the amount of memory that is available on the machine. The memory that is currently allocated by Domain0 can be determined with the command xm list.

  3. Run YaST › Boot Loader.

  4. Select the Xen section.

  5. In Additional Xen Hypervisor Parameters, add dom0_mem=mem_amount, where mem_amount is the maximum amount of memory to allocate to Domain0. Append K, M, or G to specify the unit, for example dom0_mem=768M.

  6. Restart the computer to apply the changes.
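After these steps, the Xen section in /boot/grub/menu.lst carries the parameter on the hypervisor line. A hedged sketch, with illustrative kernel and partition names:

```
title Xen
    root (hd0,0)
    kernel /boot/xen.gz dom0_mem=768M
    module /boot/vmlinuz-xen root=/dev/sda2 splash=silent showopts
    module /boot/initrd-xen
```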

2.2.2 Setting a Minimum Amount of Memory

To set a minimum amount of memory for Domain0, edit the dom0-min-mem parameter in the /etc/xen/xend-config.sxp file and restart Xend. For more information, see Section 5.2, “Controlling the Host by Modifying Xend Settings”.
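A minimal sketch of the corresponding line in /etc/xen/xend-config.sxp (the value is in MB; 512 matches the recommended minimum above):

```
(dom0-min-mem 512)
```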

2.3 Network Card in Fully Virtualized Guests

In a fully virtualized guest, the default network card is an emulated Realtek network card. However, it is also possible to use the split network driver to run the communication between Domain0 and a VM Guest. By default, both interfaces are presented to the VM Guest, because the drivers of some operating systems require both to be present.

When using SUSE Linux Enterprise, only the paravirtualized network cards are available for the VM Guest by default. The following network options are available:

emulated

To use an emulated network interface like an emulated Realtek card, specify (type ioemu) in the vif device section of the Xend configuration. An example configuration would look like:

        (bridge br0)
        (uuid e2b8f872-88c7-0a4a-b965-82f7d5bdd31e)
        (devid 0)
        (mac 00:16:3e:54:79:a6)
        (model rtl8139)
        (type ioemu)

Find more details about editing the Xend configuration at Section 5.3, “Configuring a Virtual Machine by Modifying its Xend Settings”.


paravirtualized

When neither a model nor a type is specified, Xend uses the paravirtualized network interface:

        (bridge br0)
        (mac 00:16:3e:50:66:a4)
        (script /etc/xen/scripts/vif-bridge)
        (uuid 0a94b603-8b90-3ba8-bd1a-ac940c326514)
        (backend 0)

emulated and paravirtualized

To offer the administrator both options, specify both type and model. The Xend configuration would look like:

        (bridge br0)
        (uuid e2b8f872-88c7-0a4a-b965-82f7d5bdd31e)
        (devid 0)
        (mac 00:16:3e:54:79:a6)
        (model rtl8139)
        (type netfront)

In this case, one of the network interfaces should be disabled on the VM Guest.

2.4 Starting the Virtual Machine Host

If virtualization software is correctly installed, the computer boots to display the GRUB boot loader with a Xen option on the menu. Select this option to start the virtual machine host.

Note: Xen and Kdump

In Xen, the hypervisor manages the memory resource. If you need to reserve system memory for a recovery kernel in Domain0, this memory has to be reserved by the hypervisor. Thus, it is necessary to add the parameter crashkernel=size@offset to the kernel line that loads the hypervisor, not to the module line that holds the other Domain0 boot options.
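In the GRUB configuration this means the parameter belongs on the line loading xen.gz. A hedged sketch with illustrative values:

```
title Xen
    root (hd0,0)
    kernel /boot/xen.gz crashkernel=256M@16M
    module /boot/vmlinuz-xen root=/dev/sda2
    module /boot/initrd-xen
```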

If the Xen option is not on the GRUB menu, review the steps for installation and verify that the GRUB boot loader has been updated. If the installation has been done without selecting the Xen pattern, run YaST Software Management, select View › Patterns, and choose Xen Virtual Machine Host Server for installation.

After booting the hypervisor, the Domain0 virtual machine starts and displays its graphical desktop environment. If you did not install a graphical desktop, the command line environment appears.

Tip: Graphics Problems

If the graphics system does not work properly, add vga=ask to the boot parameters to choose a mode interactively. To make a setting permanent, use vga=mode-0x???, where ??? is calculated as 0x100 plus the VESA mode number from http://en.wikipedia.org/wiki/VESA_BIOS_Extensions, for example vga=mode-0x361.

Before starting to install virtual guests, make sure that the system time is correct. To do this, configure NTP (Network Time Protocol) on the controlling domain:

  1. In YaST select Network Services › NTP Configuration.

  2. Select the option to automatically start the NTP daemon during boot. Provide the IP address of an existing NTP time server, then click Finish.
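The same setup can be sketched from the command line on SUSE Linux Enterprise Server 11 (the server address is a placeholder; substitute your own NTP server):

```
# add a time server to the NTP configuration (address is a placeholder)
echo "server ntp.example.com" >> /etc/ntp.conf
# start the NTP daemon now and enable it at boot
rcntp start
chkconfig ntp on
```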

Note: Time Services on Virtual Guests

Hardware clocks commonly are not very precise. All modern operating systems try to correct the system time compared to the hardware time by means of an additional time source. To get the correct time on all VM Guest systems, also activate the network time services on each respective guest or make sure that the guest uses the system time of the host. For more about Independent Wallclocks in SUSE Linux Enterprise Server see Section 13.2, “Virtual Machine Clock Settings”.

For more information about managing virtual machines, see Chapter 5, Managing a Virtualization Environment.

2.5 PCI Pass-Through

To take full advantage of VM Guest systems, it is sometimes necessary to assign specific PCI devices to a dedicated domain. When using fully virtualized guests, this functionality is only available if the chipset of the system supports this feature, and if it is activated from the BIOS.

This feature is available from both AMD* and Intel*. On AMD machines, the feature is called IOMMU; Intel calls it VT-d. Note that Intel VT technology alone is not sufficient to use this feature for fully virtualized guests. To make sure that your computer supports this feature, ask your supplier specifically to deliver a system that supports PCI Pass-Through.

  • Some graphics drivers use highly optimized ways to access DMA. This is not always supported, and thus using graphics cards may be difficult.

  • When accessing PCI devices behind a PCIe bridge, all of the PCI devices must be assigned to a single guest. This limitation does not apply to PCIe devices.

  • Guests with dedicated PCI devices cannot be live migrated to a different host.

The configuration of PCI Pass-Through is twofold. First, the hypervisor must be informed that a PCI device should be available for reassigning. Second, the PCI device must be assigned to the VM Guest.

2.5.1 Configuring the Hypervisor for PCI Pass-Through

  1. Select a device to reassign to a VM Guest. To do this, run lspci and read the device number. For example, if lspci contains the following line:

    06:01.0 Ethernet controller: Digital Equipment Corporation DECchip 21142/43 (rev 41)

    In this case, the PCI number is 06:01.0.

  2. Edit /etc/sysconfig/pciback and add the PCI device number to the XEN_PCI_HIDE_LIST option, for example:

    XEN_PCI_HIDE_LIST="06:01.0"

  3. As root, reload the pciback service:

    rcpciback reload
  4. Check if the device is in the list of assignable devices with the command

    xm pci-list-assignable-devices

Solution without Host System Restart

If you want to avoid restarting the host system, there is an alternative procedure to prepare the host system for PCI Pass-Through via the /sys/bus/pci file system:

  1. Identify the PCI device and store it to a variable for easier handling.

    # export PCI_DOMAIN_BUS_SLOT_FUNC=06:01.0
  2. Check which driver is currently bound to the device and save its name to a variable.

    # readlink /sys/bus/pci/devices/0000\:06\:01.0/driver
    # export DRIVER_NAME=igb
  3. Detach the driver from the device, and load the pciback module.

    # echo -n $PCI_DOMAIN_BUS_SLOT_FUNC > \
    # modprobe pciback
  4. Add a new slot to the pciback's list.

    # echo -n $PCI_DOMAIN_BUS_SLOT_FUNC > \
  5. Bind the PCI device to pciback.

    # echo -n $PCI_DOMAIN_BUS_SLOT_FUNC > \

The device is now ready to be used in a VM Guest by specifying 'pci=[$PCI_DOMAIN_BUS_SLOT_FUNC]' in the guest configuration file.
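For reference, a complete sketch of steps 3 to 5 in one place. The sysfs target paths shown here are the conventional xen-pciback interface and should be treated as assumptions; they may vary with the kernel version:

```
# identify the device and its current driver (example values)
export PCI_DOMAIN_BUS_SLOT_FUNC=0000:06:01.0
export DRIVER_NAME=igb

# step 3: detach the driver from the device and load pciback
echo -n $PCI_DOMAIN_BUS_SLOT_FUNC > /sys/bus/pci/drivers/$DRIVER_NAME/unbind
modprobe pciback

# step 4: add a new slot to pciback's list (path is an assumption)
echo -n $PCI_DOMAIN_BUS_SLOT_FUNC > /sys/bus/pci/drivers/pciback/new_slot

# step 5: bind the PCI device to pciback (path is an assumption)
echo -n $PCI_DOMAIN_BUS_SLOT_FUNC > /sys/bus/pci/drivers/pciback/bind
```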

2.5.2 Assigning PCI Devices to VM Guest Systems

There are several possibilities to dedicate a PCI device to a VM Guest:

Adding the device while installing

During installation, add a pci line to the guest configuration file, listing the PCI number of the device to dedicate.

If you want the Xen tools to manage preparing and assigning a PCI device to a VM Guest when it is activated, add managed=1 to the pci setting in the guest configuration file, denoting that it is a 'managed' PCI device.
When the VM Guest is activated, the Xen tools will unbind the PCI device from its existing driver, bind it to pciback, and attach the device to the VM. When the VM is shut down, the tools will rebind the device to its original driver. When using the managed mode, there is no need to configure the hypervisor for PCI Pass-Through as described in Section 2.5.1, “Configuring the Hypervisor for PCI Pass-Through”.
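A sketch of the corresponding pci lines in the guest configuration file. The device number is the 06:01.0 example used throughout this section, and the managed=1 syntax is as used by the xm/xend tool stack; treat both as illustrative:

```
# unmanaged: the hypervisor must already be configured as in Section 2.5.1
pci=['06:01.0']

# managed: the Xen tools bind the device to pciback when the guest starts
pci=['06:01.0,managed=1']
```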

Hot adding PCI devices to VM Guest systems

The command xm may be used to add or remove PCI devices on the fly. For example, to add the device with number 06:01.0 to the guest named sles11, use:

xm pci-attach sles11 06:01.0

Adding the PCI device to Xend

To add the device permanently, insert the following section into the Xend database:

            (slot 0x01)
            (domain 0x0)
            (bus 0x06)
            (vslt 0x0)
            (func 0x0)

For more information about modifying the Xend database, see Section 5.3, “Configuring a Virtual Machine by Modifying its Xend Settings”.

After assigning the PCI device to the VM Guest, the guest system itself must provide the configuration and device drivers for this device.

2.5.3 VGA Pass-Through

Xen 4.0 and newer supports VGA graphics adapter pass-through on fully virtualized VM Guests. The guest can take full control of the graphics adapter, with high-performance full 3D and video acceleration.

  • VGA Pass-Through functionality is similar to PCI Pass-Through and as such also requires IOMMU (or Intel VT-d) support from the motherboard chipset and BIOS.

  • Only the primary graphics adapter (the one that is used when you power on the computer) can be used with VGA Pass-Through.

  • VGA Pass-Through is supported only for fully virtualized guests. Paravirtual guests (PV) are not supported.

  • The graphics card cannot be shared between multiple VM Guests using VGA Pass-Through — you can dedicate it to one guest only.

To enable VGA Pass-Through, specify the PCI controller ID of the VGA graphics adapter in your fully virtualized guest configuration file. The ID has the form yy:zz.n and can be found with lspci -v on Domain0.
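A hedged sketch of the settings in question (the gfx_passthru option name is as used by the xm tool stack; substitute the real PCI ID for yy:zz.n):

```
gfx_passthru=1
pci=['yy:zz.n']
```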
