Applies to SUSE Linux Enterprise Server 12 SP5

29 Running Virtual Machines with qemu-system-ARCH

Once you have a virtual disk image ready (for more information on disk images, see Section 28.2, “Managing Disk Images with qemu-img”), it is time to start the related virtual machine. Section 28.1, “Basic Installation with qemu-system-ARCH” introduced simple commands to install and run a VM Guest. This chapter focuses on a more detailed explanation of qemu-system-ARCH usage, and shows solutions for more specific tasks. For a complete list of qemu-system-ARCH's options, see its manual page (man 1 qemu).

29.1 Basic qemu-system-ARCH Invocation

The qemu-system-ARCH command uses the following syntax:

qemu-system-ARCH options1 disk_img2

1

qemu-system-ARCH understands many options. Most of them define parameters of the emulated hardware, while others affect more general emulator behavior. If you do not supply any options, default values are used, and you need to supply the path to a disk image to be run.

2

Path to the disk image holding the guest system you want to virtualize. qemu-system-ARCH supports many image formats. Use qemu-img --help to list them. If you do not supply the path to a disk image as a separate argument, you need to use the -drive file= option.

29.2 General qemu-system-ARCH Options

This section introduces general qemu-system-ARCH options and options related to the basic emulated hardware, such as the virtual machine's processor, memory, model type, or time processing methods.

-name NAME_OF_GUEST

Specifies the name of the running guest system. The name is displayed in the window caption and used for the VNC server.
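
For example, the following command line (the guest name and image path are only illustrative) labels the guest accordingly:

qemu-system-x86_64 -name "SLES 12" -hda /images/sles_base.raw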

-boot OPTIONS

Specifies the order in which the defined drives will be booted. Drives are represented by letters, where a and b stand for the floppy drives 1 and 2, c stands for the first hard disk, d stands for the first CD-ROM drive, and n to p stand for Etherboot network adapters.

For example, qemu-system-ARCH [...] -boot order=ndc first tries to boot from network, then from the first CD-ROM drive, and finally from the first hard disk.

-pidfile FILENAME

Stores QEMU's process identification number (PID) in a file. This is useful if you run QEMU from a script.

-nodefaults

By default QEMU creates basic virtual devices even if you do not specify them on the command line. This option turns this feature off, and you must specify every single device manually, including graphics and network cards, parallel and serial ports, and virtual consoles. Even the QEMU monitor is not attached by default.

-daemonize

Daemonizes the QEMU process after it is started. QEMU will detach from the standard input and standard output after it is ready to receive connections on any of its devices.
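
For example, to run a guest in the background and record its PID, you can combine both options (the PID file path is only an example):

qemu-system-x86_64 -daemonize -pidfile /var/run/qemu-sles.pid -hda /images/sles_base.raw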

Note
Note: SeaBIOS BIOS Implementation

SeaBIOS is the default BIOS. It can boot from USB devices and from any drive (CD-ROM, floppy, or hard disk). It has USB mouse and keyboard support and supports multiple VGA cards. For more information about SeaBIOS, refer to the SeaBIOS Website.

29.2.1 Basic Virtual Hardware

29.2.1.1 Machine Type

Use the -M option to specify the type of the emulated machine. Run qemu-system-ARCH -M help to view a list of supported machine types.
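
For example, to select a specific machine type (the type shown here also appears in later examples in this chapter):

qemu-system-x86_64 -M pc-i440fx-2.7 [...]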

Note
Note: ISA-PC

The machine type isapc: ISA-only-PC is unsupported.

29.2.1.2 CPU Model

To specify the type of the processor (CPU) model, run qemu-system-ARCH -cpu MODEL. Use qemu-system-ARCH -cpu help to view a list of supported CPU models.

CPU flags information can be found at CPUID Wikipedia.
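
For example, to use the kvm64 model that also appears in other examples in this chapter:

qemu-system-x86_64 -cpu kvm64 [...]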

29.2.1.3 Other Basic Options

The following is a list of the most commonly used options when launching QEMU from the command line. To see all available options, refer to the qemu-doc man page.

-m MEGABYTES

Specifies how many megabytes are used for the virtual RAM size.

-balloon virtio

Specifies a paravirtualized device to dynamically change the amount of virtual RAM assigned to VM Guest. The top limit is the amount of memory specified with -m.
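
For example, to give the guest 1 GB of virtual RAM and allow it to be adjusted at runtime via the balloon device (the memory size is only an example):

qemu-system-x86_64 [...] -m 1024 -balloon virtio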

-smp NUMBER_OF_CPUS

Specifies how many CPUs will be emulated. QEMU supports up to 255 CPUs on the PC platform (up to 64 with KVM acceleration used). This option also takes other CPU-related parameters, such as number of sockets, number of cores per socket, or number of threads per core.
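
For example, the following topology gives the guest four virtual CPUs arranged as two sockets with two cores each (the numbers are only illustrative):

qemu-system-x86_64 [...] -smp 4,sockets=2,cores=2,threads=1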

The following is an example of a working qemu-system-ARCH command line:

tux > qemu-system-x86_64 -name "SLES 12 SP2" -M pc-i440fx-2.7 -m 512 \
-machine accel=kvm -cpu kvm64 -smp 2 -drive format=raw,file=/images/sles.raw
Figure 29.1: QEMU Window with SLES 11 SP3 as VM Guest
-no-acpi

Disables ACPI support.

-S

QEMU starts with the CPU stopped. To start the CPU, enter c in the QEMU monitor. For more information, see Chapter 30, Virtual Machine Administration Using QEMU Monitor.

29.2.2 Storing and Reading Configuration of Virtual Devices

-readconfig CFG_FILE

Instead of entering the device configuration options on the command line each time you want to run a VM Guest, qemu-system-ARCH can read them from a file that was either previously saved with -writeconfig or edited manually.

-writeconfig CFG_FILE

Dumps the current virtual machine's device configuration to a text file. It can then be re-used with the -readconfig option.

tux > qemu-system-x86_64 -name "SLES 12 SP2" -machine accel=kvm -M pc-i440fx-2.7 -m 512 -cpu kvm64 \
-smp 2 /images/sles.raw -writeconfig /images/sles.cfg
(exited)
tux > cat /images/sles.cfg
# qemu config file

[drive]
  index = "0"
  media = "disk"
  file = "/images/sles_base.raw"

This way you can effectively manage the device configuration of your virtual machines in one well-arranged place.
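
For example, assuming the configuration file saved above, a later invocation can read it back instead of repeating the drive definition on the command line:

tux > qemu-system-x86_64 -name "SLES 12 SP2" -machine accel=kvm -M pc-i440fx-2.7 -m 512 -cpu kvm64 \
-smp 2 -readconfig /images/sles.cfg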

29.2.3 Guest Real-Time Clock

-rtc OPTIONS

Specifies the way the RTC is handled inside a VM Guest. By default, the clock of the guest is derived from that of the host system. Therefore, it is recommended that the host system clock is synchronized with an accurate external clock (for example, via NTP service).

If you need to isolate the VM Guest clock from the host one, specify clock=vm instead of the default clock=host.

You can also specify the initial time of the VM Guest's clock with the base option:

qemu-system-x86_64 [...] -rtc clock=vm,base=2010-12-03T01:02:00

Instead of a time stamp, you can specify utc or localtime. The former instructs VM Guest to start at the current UTC value (Coordinated Universal Time, see http://en.wikipedia.org/wiki/UTC), while the latter applies the local time setting.

29.3 Using Devices in QEMU

QEMU virtual machines emulate all devices needed to run a VM Guest. QEMU supports, for example, several types of network cards, block devices (hard and removable drives), USB devices, character devices (serial and parallel ports), or multimedia devices (graphic and sound cards). This section introduces options to configure various types of supported devices.

Tip
Tip

If your device, such as -drive, needs a special driver and driver properties to be set, specify them with the -device option, and identify the drive with the drive= suboption. For example:

qemu-system-x86_64 [...] -drive if=none,id=drive0,format=raw \
-device virtio-blk-pci,drive=drive0,scsi=off ...

To get help on available drivers and their properties, use -device ? and -device DRIVER,?.

29.3.1 Block Devices

Block devices are vital for virtual machines. In general, these are fixed or removable storage media usually called drives. One of the connected hard disks typically holds the guest operating system to be virtualized.

Virtual Machine drives are defined with -drive. This option has many sub-options, some of which are described in this section. For the complete list, see the manual page (man 1 qemu).

Sub-options for the -drive Option
file=image_fname

Specifies the path to the disk image that will be used with this drive. If not specified, an empty (removable) drive is assumed.

if=drive_interface

Specifies the type of interface to which the drive is connected. Currently only floppy, scsi, ide, or virtio are supported by SUSE. virtio defines a paravirtualized disk driver. Default is ide.

index=index_of_connector

Specifies the index number of a connector on the disk interface (see the if option) where the drive is connected. If not specified, the index is automatically incremented.

media=type

Specifies the type of media. Can be disk for hard disks, or cdrom for removable CD-ROM drives.

format=img_fmt

Specifies the format of the connected disk image. If not specified, the format is autodetected. Currently, SUSE supports qcow2, qed and raw formats.

cache=method

Specifies the caching method for the drive. Possible values are unsafe, writethrough, writeback, directsync, or none. To improve performance when using the qcow2 image format, select writeback. none disables the host page cache and, therefore, is the safest option. Default for image files is writeback. For more information, see Chapter 15, Disk Cache Modes.
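
For example, the following drive definition combines several of the sub-options described above (the image path is only illustrative):

qemu-system-x86_64 [...] -drive file=/images/sles.qcow2,if=virtio,media=disk,format=qcow2,cache=writeback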

Tip
Tip

To simplify defining block devices, QEMU understands several shortcuts which you may find handy when entering the qemu-system-ARCH command line.

You can use

qemu-system-x86_64 -cdrom /images/cdrom.iso

instead of

qemu-system-x86_64 -drive file=/images/cdrom.iso,index=2,media=cdrom

and

qemu-system-x86_64 -hda /images/image1.raw -hdb /images/image2.raw -hdc \
/images/image3.raw -hdd /images/image4.raw

instead of

qemu-system-x86_64 -drive file=/images/image1.raw,index=0,media=disk \
-drive file=/images/image2.raw,index=1,media=disk \
-drive file=/images/image3.raw,index=2,media=disk \
-drive file=/images/image4.raw,index=3,media=disk
Tip
Tip: Using Host Drives Instead of Images

As an alternative to using disk images (see Section 28.2, “Managing Disk Images with qemu-img”) you can also use existing VM Host Server disks, connect them as drives, and access them from VM Guest. Use the host disk device directly instead of disk image file names.

To access the host CD-ROM drive, use

qemu-system-x86_64 [...] -drive file=/dev/cdrom,media=cdrom

To access the host hard disk, use

qemu-system-x86_64 [...] -drive file=/dev/hdb,media=disk

A host drive used by a VM Guest must not be accessed concurrently by the VM Host Server or another VM Guest.

29.3.1.1 Freeing Unused Guest Disk Space

A sparse image file is a type of disk image file that grows in size as the user adds data to it, taking up only as much disk space as is actually stored in it. For example, if you copy 1 GB of data into the sparse disk image, its size grows by 1 GB. If you then delete, for example, 500 MB of that data, the image size does not, by default, decrease as expected.

This is why the discard=on option was introduced on the KVM command line. It tells the hypervisor to automatically free the holes after data is deleted from the sparse guest image. Note that this option is valid only for the if=scsi drive interface:

qemu-system-x86_64 [...] -drive file=/path/to/file.img,if=scsi,discard=on
Important
Important: Support Status

if=scsi is not supported. This interface does not map to virtio-scsi, but rather to the lsi SCSI adapter.

29.3.1.2 IOThreads

IOThreads are dedicated event loop threads for virtio devices to perform I/O requests in order to improve scalability, especially on an SMP VM Host Server with SMP VM Guests using many disk devices. Instead of using QEMU's main event loop for I/O processing, IOThreads allow spreading I/O work across multiple CPUs and can improve latency when properly configured.

IOThreads are enabled by defining IOThread objects. virtio devices can then use the objects for their I/O event loops. Many virtio devices can use a single IOThread object, or virtio devices and IOThread objects can be configured in a 1:1 mapping. The following example creates a single IOThread with ID iothread0 which is then used as the event loop for two virtio-blk devices.

tux > qemu-system-x86_64 [...] -object iothread,id=iothread0 \
-drive if=none,id=drive0,cache=none,aio=native,\
format=raw,file=filename -device virtio-blk-pci,drive=drive0,scsi=off,\
iothread=iothread0 -drive if=none,id=drive1,cache=none,aio=native,\
format=raw,file=filename -device virtio-blk-pci,drive=drive1,scsi=off,\
iothread=iothread0 [...]

The following qemu command line example illustrates a 1:1 virtio device to IOThread mapping:

tux > qemu-system-x86_64 [...] -object iothread,id=iothread0 \
-object iothread,id=iothread1 -drive if=none,id=drive0,cache=none,aio=native,\
format=raw,file=filename -device virtio-blk-pci,drive=drive0,scsi=off,\
iothread=iothread0 -drive if=none,id=drive1,cache=none,aio=native,\
format=raw,file=filename -device virtio-blk-pci,drive=drive1,scsi=off,\
iothread=iothread1 [...]

29.3.1.3 Bio-Based I/O Path for virtio-blk

For better performance of I/O-intensive applications, a new I/O path was introduced for the virtio-blk interface in kernel version 3.7. This bio-based block device driver skips the I/O scheduler, and thus shortens the I/O path in guest and has lower latency. It is especially useful for high-speed storage devices, such as SSD disks.

The driver is disabled by default. To use it, do the following:

  1. Append virtio_blk.use_bio=1 to the kernel command line on the guest. You can do so via YaST › System › Boot Loader.

    You can also do this by editing /etc/default/grub: search for the line that contains GRUB_CMDLINE_LINUX_DEFAULT=, and append the kernel parameter at the end (see the sketch after this list). Then run grub2-mkconfig >/boot/grub2/grub.cfg to update the grub2 boot menu.

  2. Reboot the guest with the new kernel command line active.
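
The following is a minimal sketch of the edited line in /etc/default/grub (the pre-existing parameters are only placeholders) and the command, run as root, that rebuilds the boot menu:

GRUB_CMDLINE_LINUX_DEFAULT="resume=/dev/sda1 splash=silent quiet showopts virtio_blk.use_bio=1"

grub2-mkconfig >/boot/grub2/grub.cfg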

Tip
Tip: Bio-Based Driver on Slow Devices

The bio-based virtio-blk driver does not help on slow devices such as spinning hard disks. The reason is that the benefit of I/O scheduling outweighs what the shortened bio path offers. Do not use the bio-based driver on slow devices.

29.3.1.4 Accessing iSCSI Resources Directly

QEMU now integrates with libiscsi. This allows QEMU to access iSCSI resources directly and use them as virtual machine block devices. This feature does not require any host iSCSI initiator configuration, as is needed for a libvirt iSCSI target based storage pool setup. Instead it directly connects guest storage interfaces to an iSCSI target LUN by means of the user space library libiscsi. iSCSI-based disk devices can also be specified in the libvirt XML configuration.

Note
Note: RAW Image Format

This feature is only available using the RAW image format, as the iSCSI protocol has some technical limitations.

The following is the QEMU command line interface for iSCSI connectivity.

Note
Note: virt-manager Limitation

The use of libiscsi-based storage provisioning is not yet exposed by the virt-manager interface; it must instead be configured by directly editing the guest XML or by using the QEMU command line, as shown below.

qemu-system-x86_64 -machine accel=kvm \
  -drive file=iscsi://192.168.100.1:3260/iqn.2016-08.com.example:314605ab-a88e-49af-b4eb-664808a3443b/0,\
  format=raw,if=none,id=mydrive,cache=none \
  -device ide-hd,bus=ide.0,unit=0,drive=mydrive ...

Here is an example snippet of a guest domain XML file that uses the protocol-based iSCSI:

<devices>
...
  <disk type='network' device='disk'>
    <driver name='qemu' type='raw'/>
    <source protocol='iscsi' name='iqn.2013-07.com.example:iscsi-nopool/2'>
      <host name='example.com' port='3260'/>
    </source>
    <auth username='myuser'>
      <secret type='iscsi' usage='libvirtiscsi'/>
    </auth>
    <target dev='vda' bus='virtio'/>
  </disk>
</devices>

Contrast that with an example that uses the host-based iSCSI initiator which virt-manager sets up:

<devices>
...
  <disk type='block' device='disk'>
    <driver name='qemu' type='raw' cache='none' io='native'/>
    <source dev='/dev/disk/by-path/scsi-0:0:0:0'/>
    <target dev='hda' bus='ide'/>
    <address type='drive' controller='0' bus='0' target='0' unit='0'/>
  </disk>
  <controller type='ide' index='0'>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x01'
             function='0x1'/>
  </controller>
</devices>

29.3.1.5 Using RADOS Block Devices with QEMU

RADOS Block Devices (RBD) store data in a Ceph cluster. They allow snapshotting, replication, and data consistency. You can use an RBD from your KVM-managed VM Guests similarly to how you use other block devices.

Refer to SUSE Enterprise Storage documentation for more details.
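
As a minimal sketch (the pool and image names are hypothetical, and a working Ceph client configuration on the VM Host Server is assumed), an RBD image can be attached like any other drive using QEMU's rbd: syntax:

qemu-system-x86_64 [...] -drive file=rbd:rbd-pool/sles-image,format=raw,if=virtio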

29.3.2 Graphic Devices and Display Options

This section describes QEMU options affecting the type of the emulated video card and the way VM Guest graphical output is displayed.

29.3.2.1 Defining Video Cards

QEMU uses -vga to define a video card used to display VM Guest graphical output. The -vga option understands the following values:

none

Disables video cards on VM Guest (no video card is emulated). You can still access the running VM Guest via the serial console.

std

Emulates a standard VESA 2.0 VBE video card. Use it if you intend to use high display resolution on VM Guest.

cirrus

Emulates a Cirrus Logic GD5446 video card. A good choice if you need high compatibility of the emulated video hardware. Most operating systems (even Windows 95) recognize this type of card.

Tip
Tip

For best video performance with the cirrus type, use 16-bit color depth both on VM Guest and VM Host Server.

29.3.2.2 Display Options

The following options affect the way VM Guest graphical output is displayed.

-display gtk

Display video output in a GTK window. This interface provides UI elements to configure and control the VM during runtime.

-display sdl

Display video output via SDL, usually in a separate graphics window. For more information, see the SDL documentation.

-spice option[,option[,...]]

Enables the spice remote desktop protocol.
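
For example, the following sketch starts a Spice server on an arbitrary port with ticketing (password protection) disabled, which is only acceptable on trusted networks; the port number and the qxl video card choice are assumptions:

qemu-system-x86_64 [...] -vga qxl -spice port=5930,disable-ticketing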

-display vnc

Refer to Section 29.5, “Viewing a VM Guest with VNC” for more information.

-nographic

Disables QEMU's graphical output. The emulated serial port is redirected to the console.

After starting the virtual machine with -nographic, press Ctrl-A H in the virtual console to view the list of other useful shortcuts, for example, to toggle between the console and the QEMU monitor.

tux > qemu-system-x86_64 -hda /images/sles_base.raw -nographic

C-a h    print this help
C-a x    exit emulator
C-a s    save disk data back to file (if -snapshot)
C-a t    toggle console timestamps
C-a b    send break (magic sysrq)
C-a c    switch between console and monitor
C-a C-a  sends C-a
(pressed C-a c)

QEMU 2.3.1 monitor - type 'help' for more information
(qemu)
-no-frame

Disables decorations for the QEMU window. Convenient for dedicated desktop work space.

-full-screen

Starts QEMU graphical output in full screen mode.

-no-quit

Disables the close button of the QEMU window and prevents it from being closed by force.

-alt-grab, -ctrl-grab

By default, the QEMU window releases the captured mouse after pressing Ctrl+Alt. You can change the key combination to either Ctrl+Alt+Shift (-alt-grab), or the right Ctrl key (-ctrl-grab).

29.3.3 USB Devices

There are two ways to create USB devices usable by the VM Guest in KVM: you can either emulate new USB devices inside a VM Guest, or assign an existing host USB device to a VM Guest. To use USB devices in QEMU you first need to enable the generic USB driver with the -usb option. Then you can specify individual devices with the -usbdevice option.

29.3.3.1 Emulating USB Devices in VM Guest

SUSE currently supports the following types of USB devices: disk, host, serial, braille, net, mouse, and tablet.

Types of USB devices for the -usbdevice option
disk

Emulates a mass storage device based on a file. The optional format option specifies the image format explicitly instead of having it detected.

qemu-system-x86_64 [...] -usbdevice disk:format=raw:/virt/usb_disk.raw
host

Pass through the host device (identified by bus.addr).

serial

Serial converter to a host character device.

braille

Emulates a braille device using BrlAPI to display the braille output.

net

Emulates a network adapter that supports CDC Ethernet and RNDIS protocols.

mouse

Emulates a virtual USB mouse. This option overrides the default PS/2 mouse emulation. The following example shows the hardware status of a mouse on VM Guest started with qemu-system-ARCH [...] -usbdevice mouse:

tux > sudo hwinfo --mouse
20: USB 00.0: 10503 USB Mouse
[Created at usb.122]
UDI: /org/freedesktop/Hal/devices/usb_device_627_1_1_if0
[...]
Hardware Class: mouse
Model: "Adomax QEMU USB Mouse"
Hotplug: USB
Vendor: usb 0x0627 "Adomax Technology Co., Ltd"
Device: usb 0x0001 "QEMU USB Mouse"
[...]
tablet

Emulates a pointer device that uses absolute coordinates (such as touchscreen). This option overrides the default PS/2 mouse emulation. The tablet device is useful if you are viewing VM Guest via the VNC protocol. See Section 29.5, “Viewing a VM Guest with VNC” for more information.
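
For example, to enable the generic USB driver and replace the default PS/2 mouse emulation with the absolute-coordinate tablet device:

qemu-system-x86_64 [...] -usb -usbdevice tablet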

29.3.4 Character Devices

Use -chardev to create a new character device. The option uses the following general syntax:

qemu-system-x86_64 [...] -chardev BACKEND_TYPE,id=ID_STRING

where BACKEND_TYPE can be one of null, socket, udp, msmouse, vc, file, pipe, console, serial, pty, stdio, braille, tty, or parport. All character devices must have a unique identification string up to 127 characters long. It is used to identify the device in other related directives. For the complete description of all back-ends' sub-options, see the manual page (man 1 qemu). A brief description of the available back-ends follows:

null

Creates an empty device that outputs no data and drops any data it receives.

stdio

Connects to QEMU's process standard input and standard output.

socket

Creates a two-way stream socket. If PATH is specified, a Unix socket is created:

qemu-system-x86_64 [...] -chardev \
socket,id=unix_socket1,path=/tmp/unix_socket1,server

The server suboption specifies that the socket is a listening socket.

If PORT is specified, a TCP socket is created:

qemu-system-x86_64 [...] -chardev \
socket,id=tcp_socket1,host=localhost,port=7777,server,nowait

The command creates a local listening (server) TCP socket on port 7777. QEMU will not block waiting for a client to connect to the listening port (nowait).

udp

Sends all network traffic from VM Guest to a remote host over the UDP protocol.

qemu-system-x86_64 [...] \
-chardev udp,id=udp_fwd,host=mercury.example.com,port=7777

The command binds port 7777 on the remote host mercury.example.com and sends VM Guest network traffic there.

vc

Creates a new QEMU text console. You can optionally specify the dimensions of the virtual console:

qemu-system-x86_64 [...] -chardev vc,id=vc1,width=640,height=480 \
-mon chardev=vc1

The command creates a new virtual console called vc1 of the specified size, and connects the QEMU monitor to it.

file

Logs all traffic from VM Guest to a file on VM Host Server. The path is required; the file will be created if it does not exist.

qemu-system-x86_64 [...] \
-chardev file,id=qemu_log1,path=/var/log/qemu/guest1.log

By default QEMU creates a set of character devices for serial and parallel ports, and a special console for QEMU monitor. However, you can create your own character devices and use them for the mentioned purposes. The following options will help you:

-serial CHAR_DEV

Redirects the VM Guest's virtual serial port to a character device CHAR_DEV on VM Host Server. By default, it is a virtual console (vc) in graphical mode, and stdio in non-graphical mode. The -serial option understands many sub-options. See the manual page man 1 qemu for a complete list of them.

You can emulate up to 4 serial ports. Use -serial none to disable all serial ports.
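
For example, to capture the guest's first serial port into a log file on the VM Host Server (the path is only illustrative):

qemu-system-x86_64 [...] -serial file:/var/log/qemu/guest1-serial.log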

-parallel DEVICE

Redirects the VM Guest's parallel port to a DEVICE. This option supports the same devices as -serial.

Tip
Tip

With SUSE Linux Enterprise Server as a VM Host Server, you can directly use the hardware parallel port devices /dev/parportN where N is the number of the port.

You can emulate up to 3 parallel ports. Use -parallel none to disable all parallel ports.
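
For example, to pass the first hardware parallel port of the VM Host Server through to the guest (assuming such a port exists on the host):

qemu-system-x86_64 [...] -parallel /dev/parport0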

-monitor CHAR_DEV

Redirects the QEMU monitor to a character device CHAR_DEV on VM Host Server. This option supports the same devices as -serial. By default, it is a virtual console (vc) in a graphical mode, and stdio in non-graphical mode.
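
For example, the following sketch exposes the QEMU monitor on a local TCP port (the port number is arbitrary) without blocking at start-up:

qemu-system-x86_64 [...] -monitor telnet:localhost:4444,server,nowait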

For a complete list of available character devices back-ends, see the man page (man 1 qemu).

29.4 Networking in QEMU

Use the -netdev option in combination with -device to define a specific type of networking and a network interface card for your VM Guest. The syntax for the -netdev option is

-netdev type[,prop[=value][,...]]

Currently, SUSE supports the following network types: user, bridge, and tap. For a complete list of -netdev sub-options, see the manual page (man 1 qemu).

Supported -netdev Sub-options
bridge

Uses a specified network helper to configure the TAP interface and attach it to a specified bridge. For more information, see Section 29.4.3, “Bridged Networking”.

user

Specifies user-mode networking. For more information, see Section 29.4.2, “User-Mode Networking”.

tap

Specifies bridged or routed networking. For more information, see Section 29.4.3, “Bridged Networking”.

29.4.1 Defining a Network Interface Card

Use -netdev together with the related -device option to add a new emulated network card:

qemu-system-x86_64 [...] \
-netdev tap1,id=hostnet0 \
-device virtio-net-pci2,netdev=hostnet0,vlan=13,\
macaddr=00:16:35:AF:94:4B4,name=ncard1

1

Specifies the network device type.

2

Specifies the model of the network card. Use qemu-system-ARCH -device help and search for the Network devices: section to get the list of all network card models supported by QEMU on your platform.

Currently, SUSE supports the models rtl8139, e1000 and its variants e1000-82540em, e1000-82544gc and e1000-82545em, and virtio-net-pci. To view a list of options for a specific driver, add help as a driver option:

qemu-system-x86_64 -device e1000,help
e1000.mac=macaddr
e1000.vlan=vlan
e1000.netdev=netdev
e1000.bootindex=int32
e1000.autonegotiation=on/off
e1000.mitigation=on/off
e1000.addr=pci-devfn
e1000.romfile=str
e1000.rombar=uint32
e1000.multifunction=on/off
e1000.command_serr_enable=on/off

3

Connects the network interface to VLAN number 1. You can specify your own number; it is mainly useful for identification purposes. If you omit this suboption, QEMU uses the default 0.

4

Specifies the Media Access Control (MAC) address for the network card. It is a unique identifier and you are advised to always specify it. If you do not, QEMU supplies its own default MAC address, which may create a MAC address conflict within the related VLAN.

29.4.2 User-Mode Networking

The -netdev user option instructs QEMU to use user-mode networking. This is the default if no networking mode is selected. Therefore, these command lines are equivalent:

qemu-system-x86_64 -hda /images/sles_base.raw
qemu-system-x86_64 -hda /images/sles_base.raw -netdev user,id=hostnet0

This mode is useful if you want to allow the VM Guest to access external network resources, such as the Internet. By default, no incoming traffic is permitted and therefore, the VM Guest is not visible to other machines on the network. No administrator privileges are required in this networking mode. User mode is also useful for network-booting your VM Guest from a local directory on the VM Host Server.

The VM Guest allocates an IP address from a virtual DHCP server. VM Host Server (the DHCP server) is reachable at 10.0.2.2, while the IP address range for allocation starts from 10.0.2.15. You can use ssh to connect to VM Host Server at 10.0.2.2, and scp to copy files back and forth.
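
For example, from inside the VM Guest (assuming a user tux exists on the VM Host Server, and the file path is only illustrative):

tux > ssh tux@10.0.2.2
tux > scp /tmp/results.log tux@10.0.2.2:/tmp/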

29.4.2.1 Command Line Examples

This section shows several examples on how to set up user-mode networking with QEMU.

Example 29.1: Restricted User-mode Networking
qemu-system-x86_64 [...] \
-netdev user1,id=hostnet0 \
-device virtio-net-pci,netdev=hostnet0,vlan=12,name=user_net13,restrict=yes4

1

Specifies user-mode networking.

2

Connects to VLAN number 1. If omitted, defaults to 0.

3

Specifies a human-readable name of the network stack. Useful when identifying it in the QEMU monitor.

4

Isolates VM Guest. It then cannot communicate with VM Host Server and no network packets will be routed to the external network.

Example 29.2: User-mode Networking with Custom IP Range
qemu-system-x86_64 [...] \
-netdev user,id=hostnet0 \
-device virtio-net-pci,netdev=hostnet0,net=10.2.0.0/81,host=10.2.0.62,\
dhcpstart=10.2.0.203,hostname=tux_kvm_guest4

1

Specifies the IP address of the network that VM Guest sees and optionally the netmask. Default is 10.0.2.0/8.

2

Specifies the VM Host Server IP address that VM Guest sees. Default is 10.0.2.2.

3

Specifies the first of the 16 IP addresses that the built-in DHCP server can assign to VM Guest. Default is 10.0.2.15.

4

Specifies the host name that the built-in DHCP server will assign to VM Guest.

Example 29.3: User-mode Networking with Network-boot and TFTP
qemu-system-x86_64 [...] \
-netdev user,id=hostnet0 \
-device virtio-net-pci,netdev=hostnet0,tftp=/images/tftp_dir1,\
bootfile=/images/boot/pxelinux.02

1

Activates a built-in TFTP (a file transfer protocol with the functionality of a very basic FTP) server. The files in the specified directory will be visible to a VM Guest as the root of a TFTP server.

2

Broadcasts the specified file as a BOOTP (a network protocol that offers an IP address and a network location of a boot image, often used in diskless workstations) file. When used together with tftp, the VM Guest can boot from network from the local directory on the host.

Example 29.4: User-mode Networking with Host Port Forwarding
qemu-system-x86_64 [...] \
-netdev user,id=hostnet0 \
-device virtio-net-pci,netdev=hostnet0,hostfwd=tcp::2222-:22

Forwards incoming TCP connections on port 2222 on the host to port 22 (SSH) on VM Guest. If sshd is running on VM Guest, enter

ssh qemu_host -p 2222

where qemu_host is the host name or IP address of the host system, to get an SSH prompt from VM Guest.

29.4.3 Bridged Networking

With the -netdev tap option, QEMU creates a network bridge by connecting the host TAP network device to a specified VLAN of VM Guest. Its network interface is then visible to the rest of the network. This method does not work by default and needs to be explicitly specified.

First, create a network bridge and add a VM Host Server physical network interface (usually eth0) to it:

  1. Start YaST Control Center and select System › Network Settings.

  2. Click Add and select Bridge from the Device Type drop-down box in the Hardware Dialog window. Click Next.

  3. Choose whether you need a dynamically or statically assigned IP address, and fill the related network settings if applicable.

  4. In the Bridged Devices pane, select the Ethernet device to add to the bridge.

    Figure 29.2: Configuring Network Bridge with YaST

    Click Next. When asked about adapting an already configured device, click Continue.

  5. Click OK to apply the changes. Check if the bridge is created:

    tux > brctl show
    bridge name bridge id          STP enabled  interfaces
    br0         8000.001676d670e4  no           eth0

29.4.3.1 Connecting to a Bridge Manually

Use the following example script to connect VM Guest to the newly created bridge interface br0. Several commands in the script are run via the sudo mechanism because they require root privileges.

Note
Note: Required Packages

Make sure the tunctl and bridge-utils packages are installed on the VM Host Server. If not, install them with zypper in tunctl bridge-utils.

#!/bin/bash
bridge=br01
tap=$(sudo tunctl -u $(whoami) -b)2
sudo ip link set $tap up3
sleep 1s4
sudo brctl addif $bridge $tap5
qemu-system-x86_64 -machine accel=kvm -m 512 -hda /images/sles_base.raw \
-netdev tap,id=hostnet0 \
-device virtio-net-pci,netdev=hostnet0,vlan=0,macaddr=00:16:35:AF:94:4B,\
ifname=$tap6,script=no7,downscript=no
sudo brctl delif $bridge $tap8
sudo ip link set $tap down9
sudo tunctl -d $tap10

1

Name of the bridge device.

2

Prepare a new TAP device and assign it to the user who runs the script. TAP devices are virtual network devices often used for virtualization and emulation setups.

3

Bring up the newly created TAP network interface.

4

Make a 1-second pause to make sure the new TAP network interface is really up.

5

Add the new TAP device to the network bridge br0.

6

The ifname= suboption specifies the name of the TAP network interface used for bridging.

7

Before qemu-system-ARCH connects to a network bridge, it checks the script and downscript values. If it finds the specified scripts on the VM Host Server file system, it runs the script before it connects to the network bridge and downscript after it exits the network environment. You can use these scripts to first set up and bring up the bridged network devices, and then to tear them down. By default, /etc/qemu-ifup and /etc/qemu-ifdown are examined. If script=no and downscript=no are specified, the script execution is disabled and you need to take care of it manually.

8

Deletes the TAP interface from a network bridge br0.

9

Sets the state of the TAP device to down.

10

Destroys the TAP device.

29.4.3.2 Connecting to a Bridge with qemu-bridge-helper

Another way to connect VM Guest to a network through a network bridge is by means of the qemu-bridge-helper helper program. It configures the TAP interface for you, and attaches it to the specified bridge. The default helper executable is /usr/lib/qemu-bridge-helper. The helper executable is setuid root and is only executable by members of the virtualization group (kvm). Therefore the qemu-system-ARCH command itself does not need to be run with root privileges.

The helper is automatically called when you specify a network bridge:

qemu-system-x86_64 [...] \
 -netdev bridge,id=hostnet0,vlan=0,br=br0 \
 -device virtio-net-pci,netdev=hostnet0

You can specify your own custom helper script that will take care of the TAP device (de)configuration, with the helper=/path/to/your/helper option:

qemu-system-x86_64 [...] \
 -netdev bridge,id=hostnet0,vlan=0,br=br0,helper=/path/to/bridge-helper \
 -device virtio-net-pci,netdev=hostnet0
Tip
Tip

To define access privileges to qemu-bridge-helper, inspect the /etc/qemu/bridge.conf file. For example the following directive

allow br0

allows the qemu-system-ARCH command to connect its VM Guest to the network bridge br0.

29.5 Viewing a VM Guest with VNC

By default QEMU uses a GTK (a cross-platform toolkit library) window to display the graphical output of a VM Guest. With the -vnc option specified, you can make QEMU listen on a specified VNC display and redirect its graphical output to the VNC session.

Tip
Tip

When working with QEMU's virtual machine via VNC session, it is useful to work with the -usbdevice tablet option.

Moreover, if you need to use another keyboard layout than the default en-us, specify it with the -k option.

The first suboption of -vnc must be a display value. The -vnc option understands the following display specifications:

host:display

Only connections from host on the display number display will be accepted. The TCP port on which the VNC session is then running is normally 5900 + the display number. If you do not specify host, connections will be accepted from any host.

unix:path

The VNC server listens for connections on Unix domain sockets. The path option specifies the location of the related Unix socket.

none

The VNC server functionality is initialized, but the server itself is not started. You can start the VNC server later with the QEMU monitor. For more information, see Chapter 30, Virtual Machine Administration Using QEMU Monitor.

Following the display value there may be one or more option flags separated by commas. Valid options are:

reverse

Connect to a listening VNC client via a reverse connection.

websocket

Opens an additional TCP listening port dedicated to VNC Websocket connections. By definition the Websocket port is 5700+display.

password

Require that password-based authentication is used for client connections.

tls

Require that clients use TLS when communicating with the VNC server.

x509=/path/to/certificate/dir

Valid if TLS is specified. Require that x509 credentials are used for negotiating the TLS session.

x509verify=/path/to/certificate/dir

Valid if TLS is specified. Require that x509 credentials are used for negotiating the TLS session, and additionally require the client certificate to be validated.

sasl

Require that the client uses SASL to authenticate with the VNC server.

acl

Turn on access control lists for checking of the x509 client certificate and SASL party.

lossy

Enable lossy compression methods (gradient, JPEG, ...).

non-adaptive

Disable adaptive encodings. Adaptive encodings are enabled by default.

share=[allow-exclusive|force-shared|ignore]

Set display sharing policy.

Note
Note

For more details about the display options, see the qemu-doc man page.

An example VNC usage:

tux > qemu-system-x86_64 [...] -vnc :5
(on the client:)
wilber > vinagre venus:5905 &
Figure 29.3: QEMU VNC Session

29.5.1 Secure VNC Connections

The default VNC server setup does not use any form of authentication. In the previous example, any user can connect and view the QEMU VNC session from any host on the network.

There are several levels of security that you can apply to your VNC client/server connection. You can either protect your connection with a password, use x509 certificates, use SASL authentication, or even combine some authentication methods in one QEMU command.

See Section B.2, “Generating x509 Client/Server Certificates” for more information about the x509 certificates generation. For more information about configuring x509 certificates on a VM Host Server and the client, see Section 11.3.2, “Remote TLS/SSL Connection with x509 Certificate (qemu+tls or xen+tls)” and Section 11.3.2.3, “Configuring the Client and Testing the Setup”.

The Vinagre VNC viewer supports advanced authentication mechanisms. Therefore, it will be used to view the graphical output of VM Guest in the following examples. For this example, let us assume that the server x509 certificates ca-cert.pem, server-cert.pem, and server-key.pem are located in the /etc/pki/qemu directory on the host, while the client's certificates are distributed in the following locations on the client:

/etc/pki/CA/cacert.pem
/etc/pki/libvirt-vnc/clientcert.pem
/etc/pki/libvirt-vnc/private/clientkey.pem
Example 29.5: Password Authentication
qemu-system-x86_64 [...] -vnc :5,password -monitor stdio

Starts the VM Guest graphical output on VNC display number 5 (usually port 5905). The password suboption initializes a simple password-based authentication method. There is no password set by default and you need to set one with the change vnc password command in QEMU monitor:

QEMU 2.3.1 monitor - type 'help' for more information
(qemu) change vnc password
Password: ****

You need the -monitor stdio option here, because you would not be able to manage the QEMU monitor without redirecting its input/output.

Figure 29.4: Authentication Dialog in Vinagre
Example 29.6: x509 Certificate Authentication

The QEMU VNC server can use TLS encryption for the session and x509 certificates for authentication. The server asks the client for a certificate and validates it against the CA certificate. Use this authentication type if your company provides an internal certificate authority.

qemu-system-x86_64 [...] -vnc :5,tls,x509verify=/etc/pki/qemu
Example 29.7: x509 Certificate and Password Authentication

You can combine the password authentication with TLS encryption and x509 certificate authentication to create a two-layer authentication model for clients. Remember to set the password in the QEMU monitor after you run the following command:

qemu-system-x86_64 [...] -vnc :5,password,tls,x509verify=/etc/pki/qemu \
-monitor stdio
Example 29.8: SASL Authentication

Simple Authentication and Security Layer (SASL) is a framework for authentication and data security in Internet protocols. It integrates several authentication mechanisms, like PAM, Kerberos, LDAP and more. SASL keeps its own user database, so the connecting user accounts do not need to exist on VM Host Server.

For security reasons, you are advised to combine SASL authentication with TLS encryption and x509 certificates:

qemu-system-x86_64 [...] -vnc :5,tls,x509,sasl -monitor stdio