6 Installation of virtualization components #
6.1 Introduction #
To run a virtualization server (VM Host Server) that can host one or more guest systems (VM Guests), you need to install required virtualization components on the server. These components vary depending on which virtualization technology you want to use.
6.2 Installing virtualization components #
You can install the virtualization tools required to run a VM Host Server in one of the following ways:
By selecting a specific system role during SUSE Linux Enterprise Server installation on the VM Host Server.
By running the YaST Virtualization module on an already installed and running SUSE Linux Enterprise Server.
By installing specific installation patterns on an already installed and running SUSE Linux Enterprise Server.
6.2.1 Specifying a system role #
You can install all the tools required for virtualization during the installation of SUSE Linux Enterprise Server on the VM Host Server. During the installation, you are presented with the System Role screen.
Here you can select either the KVM Virtualization Host or the Xen Virtualization Host role. The appropriate software selection and setup is then performed automatically during the SUSE Linux Enterprise Server installation.
Both virtualization system roles create a dedicated /var/lib/libvirt partition and enable the firewalld and Kdump services.
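To confirm that the role was applied as expected, you can inspect the dedicated mount point and the enabled services. This is an illustrative check only; firewalld and kdump are the standard service names on SUSE Linux Enterprise Server:
> findmnt /var/lib/libvirt
> systemctl is-enabled firewalld kdump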
6.2.2 Running the YaST Virtualization module #
Depending on the scope of the SUSE Linux Enterprise Server installation on the VM Host Server, none of the virtualization tools may be installed on your system. They are automatically installed when configuring the hypervisor with the YaST Virtualization module.
The YaST Virtualization module is included in the yast2-vm package. Verify it is installed on the VM Host Server before installing virtualization components.
To install the KVM virtualization environment and related tools, proceed as follows:
Start YaST and select Virtualization › Install Hypervisor and Tools.
Select KVM server for a minimal installation of the QEMU and KVM environment. Select KVM tools to use the libvirt-based management stack as well. Confirm with Accept.
YaST offers to automatically configure a network bridge on the VM Host Server. It ensures proper networking capabilities of the VM Guest. Agree to do so by selecting Yes, otherwise choose No.
After the setup has been finished, you can start creating and configuring VM Guests. Rebooting the VM Host Server is not required.
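To confirm that the KVM host is functional, you can check that the KVM device node exists and, assuming the libvirt-based management stack was installed, that the libvirtd service is active. These commands are an illustrative check only:
> ls -l /dev/kvm
> sudo systemctl status libvirtd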
To install the Xen virtualization environment, proceed as follows:
Start YaST and select Virtualization › Install Hypervisor and Tools.
Select Xen server for a minimal installation of the Xen environment. Select Xen tools to use the libvirt-based management stack as well. Confirm with Accept.
YaST offers to automatically configure a network bridge on the VM Host Server. It ensures proper networking capabilities of the VM Guest. Agree to do so by selecting Yes, otherwise choose No.
After the setup has been finished, you need to reboot the machine with the Xen kernel.
Tip: Default boot kernel
If everything works as expected, change the default boot kernel with YaST and make the Xen-enabled kernel the default. For more information about changing the default kernel, see Section 18.3, "Configuring the boot loader with YaST".
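After rebooting into the Xen kernel, a simple way to verify that the hypervisor is active (assuming the Xen tools are installed) is to list the running domains, which should include Domain-0:
> sudo xl list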
6.2.3 Installing specific installation patterns #
Related software packages from SUSE Linux Enterprise Server software repositories are organized into installation patterns. You can use these patterns to install specific virtualization components on an already running SUSE Linux Enterprise Server. Use zypper to install them:
zypper install -t pattern PATTERN_NAME
To install the KVM environment, consider the following patterns:
kvm_server
Installs a basic VM Host Server with the KVM and QEMU environments.
kvm_tools
Installs libvirt tools for managing and monitoring VM Guests in a KVM environment.
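For example, to set up a complete KVM VM Host Server with both patterns in a single step:
zypper install -t pattern kvm_server kvm_tools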
To install the Xen environment, consider the following patterns:
xen_server
Installs a basic Xen VM Host Server.
xen_tools
Installs libvirt tools for managing and monitoring VM Guests in a Xen environment.
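Similarly, to install a complete Xen VM Host Server with both patterns:
zypper install -t pattern xen_server xen_tools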
6.3 Enable nested virtualization in KVM #
KVM's nested virtualization is still a technology preview. It is provided for testing purposes and is not supported.
Nested guests are KVM guests run in a KVM guest. When describing nested guests, we use the following virtualization layers:
- L0
A bare metal host running KVM.
- L1
A virtual machine running on L0. Because it can run another KVM, it is called a guest hypervisor.
- L2
A virtual machine running on L1. It is called a nested guest.
Nested virtualization has many advantages. You can benefit from it in the following scenarios:
Manage your own virtual machines directly with your hypervisor of choice in cloud environments.
Enable the live migration of hypervisors and their guest virtual machines as a single entity.
Note: Live migration of a nested VM Guest is not supported.
Use it for software development and testing.
To enable nesting temporarily, remove the module and reload it with the nested KVM module parameter:
For Intel CPUs, run:
> sudo modprobe -r kvm_intel && sudo modprobe kvm_intel nested=1
For AMD CPUs, run:
> sudo modprobe -r kvm_amd && sudo modprobe kvm_amd nested=1
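To verify that nesting is enabled after reloading the module, you can read the parameter back from sysfs (use kvm_amd for AMD CPUs). This is an optional check, not part of the procedure itself:
> cat /sys/module/kvm_intel/parameters/nested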
To enable nesting permanently, enable the nested KVM module parameter in the /etc/modprobe.d/kvm_*.conf file, depending on your CPU:
For Intel CPUs, edit /etc/modprobe.d/kvm_intel.conf and add the following line:
options kvm_intel nested=1
For AMD CPUs, edit /etc/modprobe.d/kvm_amd.conf and add the following line:
options kvm_amd nested=1
When your L0 host is capable of nesting, you can start an L1 guest in one of the following ways:
Use the -cpu host QEMU command line option.
Add the vmx (for Intel CPUs) or the svm (for AMD CPUs) CPU feature to the -cpu QEMU command line option, which enables virtualization for the virtual CPU.
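As an illustrative sketch only, an L1 guest could be started from the command line with full host CPU passthrough as follows; the disk image name, memory size, and CPU count are placeholders:
> sudo qemu-system-x86_64 -enable-kvm -cpu host -m 4096 -smp 2 -drive file=l1-guest.qcow2,format=qcow2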
6.3.1 VMware ESX as a guest hypervisor #
If you use VMware ESX as a guest hypervisor on top of a KVM bare metal hypervisor, you may experience unstable network communication. This problem occurs especially between nested KVM guests and the KVM bare metal hypervisor or external network. The following default CPU configuration of the nested KVM guest is causing the problem:
<cpu mode='host-model' check='partial'/>
To fix it, modify the CPU configuration as follows:
[...]
<cpu mode='host-passthrough' check='none'>
  <cache mode='passthrough'/>
</cpu>
[...]
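One way to apply this change, assuming the nested guest is managed with libvirt, is to edit its domain definition and then restart the guest:
> sudo virsh edit GUEST_NAME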