documentation.suse.com / SUSE Linux Enterprise Server Documentation / Virtualization Guide / Introduction / Virtualization limits and support
Applies to SUSE Linux Enterprise Server 15 SP5

7 Virtualization limits and support

Important

QEMU is only supported when used for virtualization together with the KVM or Xen hypervisors. The TCG accelerator is not supported, even when it is distributed within SUSE products. Users must not rely on QEMU TCG to provide guest isolation, or for any security guarantees. See also https://qemu-project.gitlab.io/qemu/system/security.html.

7.1 Architecture support

7.1.1 KVM hardware requirements

SUSE supports KVM full virtualization on AMD64/Intel 64, AArch64, IBM Z and IBM LinuxONE hosts.

  • On the AMD64/Intel 64 architecture, KVM is designed around hardware virtualization features included in AMD* (AMD-V) and Intel* (VT-x) CPUs. It supports virtualization features of chipsets and PCI devices, such as an I/O Memory Mapping Unit (IOMMU) and Single Root I/O Virtualization (SR-IOV). You can test whether your CPU supports hardware virtualization with the following command:

    > egrep '(vmx|svm)' /proc/cpuinfo

    If this command returns no output, your processor either does not support hardware virtualization, or this feature has been disabled in the BIOS or firmware.

    The following Web sites identify AMD64/Intel 64 processors that support hardware virtualization: https://ark.intel.com/Products/VirtualizationTechnology (for Intel CPUs), and https://products.amd.com/ (for AMD CPUs).

  • On the Arm architecture, Armv8-A processors include support for virtualization.

  • On the Arm architecture, we only support running QEMU/KVM with the CPU model host (named host-passthrough in Virtual Machine Manager and libvirt).
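On AMD64/Intel 64 hosts, the checks above can be combined into a short sanity check. A sketch, assuming a Linux host shell; note that /dev/kvm only appears after the KVM modules are loaded:

```shell
# Count CPU flag lines with hardware virtualization support
# (vmx = Intel VT-x, svm = AMD-V); 0 means the CPU lacks the
# feature or it is disabled in the BIOS/firmware.
grep -E -c '(vmx|svm)' /proc/cpuinfo || true

# If KVM is usable, the /dev/kvm device node exists.
ls -l /dev/kvm 2>/dev/null || echo "KVM not available"
```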

Note: KVM kernel modules not loading

The KVM kernel modules only load if the CPU hardware virtualization features are available.
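If the CPU flags are present but the modules did not load automatically, they can be loaded manually. A sketch, assuming an Intel host (use kvm_amd on AMD hosts):

```shell
# Load the vendor-specific KVM module (this pulls in the generic
# kvm module as a dependency).
sudo modprobe kvm_intel 2>/dev/null || echo "module load failed"

# Verify which KVM modules are loaded.
lsmod 2>/dev/null | grep '^kvm' || echo "no kvm modules loaded"
```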

The general minimum hardware requirements for the VM Host Server are the same as outlined in Section 2.1, “Hardware requirements”. However, additional RAM is needed for each virtualized guest: at least the same amount that is needed for a physical installation. It is also strongly recommended to have at least one processor core or hyper-thread for each running guest.

Note: AArch64

AArch64 is a continuously evolving platform. It does not have a traditional standards and compliance certification program to enable interoperability with operating systems and hypervisors. Ask your vendor for the support statement on SUSE Linux Enterprise Server.

Note: POWER

Running KVM or Xen hypervisors on the POWER platform is not supported.

7.1.2 Xen hardware requirements

SUSE supports Xen on AMD64/Intel 64.

7.2 Hypervisor limits

New features and virtualization limits for Xen and KVM are outlined in the Release Notes for each Service Pack (SP).

Only packages that are part of the official repositories for SUSE Linux Enterprise Server are supported. Conversely, all optional subpackages and plug-ins (for QEMU, libvirt) provided via SUSE Package Hub are not supported.

For the maximum total virtual CPUs per host, see Section 4.5.1, “Assigning CPUs”. The total number of virtual CPUs should be proportional to the number of available physical CPUs.
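As a rough sizing sketch, the proportionality rule can be expressed with host CPU counts. The 2x over-commit factor below is an illustrative assumption, not a SUSE recommendation; consult Section 4.5.1, “Assigning CPUs” for actual guidance:

```shell
# Number of physical CPUs (threads) visible on the host.
PHYS=$(nproc)

# Illustrative ceiling: keep the total vCPUs across all guests
# within FACTOR times the physical CPU count.
FACTOR=2
MAX_VCPUS=$((PHYS * FACTOR))
echo "physical CPUs: $PHYS, suggested vCPU ceiling: $MAX_VCPUS"
```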

Note: 32-bit hypervisor

With SUSE Linux Enterprise Server 11 SP2, we removed virtualization host facilities from 32-bit editions. 32-bit guests are not affected and are fully supported using the provided 64-bit hypervisor.

7.2.1 KVM limits

The following are the supported (and tested) virtualization limits of a SUSE Linux Enterprise Server 15 SP5 host running Linux guests on AMD64/Intel 64. For other operating systems, refer to the specific vendor.

Table 7.1: KVM VM limits

Maximum virtual CPUs per VM: 768

Maximum memory per VM: 6 TiB

Note

KVM host limits are identical to those of SUSE Linux Enterprise Server (see the corresponding section of the release notes), except for:

  • Maximum virtual CPUs per VM: see recommendations in the Virtualization Best Practices Guide regarding over-commitment of physical CPUs at Section 4.5.1, “Assigning CPUs”. The total virtual CPUs should be proportional to the available physical CPUs.

7.2.2 Xen limits

Table 7.2: Xen VM limits

Maximum virtual CPUs per VM: 32 (general HVM recommendation), 64 (HVM Windows guest), 128 (trusted HVMs), or 512 (PV)

Maximum memory per VM: 2 TiB (64-bit guest), 16 GiB (32-bit guest with PAE)

Table 7.3: Xen host limits

Maximum total physical CPUs: 1024

Maximum total virtual CPUs per host: See the recommendations in the Virtualization Best Practices Guide regarding over-commitment of physical CPUs at Section 4.5.1, “Assigning CPUs”. The total number of virtual CPUs should be proportional to the number of available physical CPUs.

Maximum physical memory: It is recommended to stay below the 16 TiB address boundary.

Suspend and hibernate modes: Not supported.

7.3 Supported host environments (hypervisors)

This section describes the support status of SUSE Linux Enterprise Server 15 SP5 running as a guest operating system on top of different virtualization hosts (hypervisors).

Table 7.4: Supported SUSE host environments

SUSE Linux Enterprise Server 12 SP5: KVM

SUSE Linux Enterprise Server 15 SP3 to SP6: Xen and KVM

You can also search the SUSE YES certification database.

The level of support is as follows:
  • Support for SUSE host operating systems is full L3 (both for the guest and host) according to the respective product life cycle.

  • SUSE provides full L3 support for SUSE Linux Enterprise Server guests within third-party host environments.

  • Support for the host and cooperation with SUSE Linux Enterprise Server guests must be provided by the host system's vendor.

7.4 Supported guest operating systems

This section lists the support status for guest operating systems virtualized on top of SUSE Linux Enterprise Server 15 SP5 for KVM and Xen hypervisors.

Important

Microsoft Windows guests can be rebooted by libvirt/virsh only if paravirtualized drivers are installed in the guest. Refer to https://www.suse.com/products/vmdriverpack/ for more details on downloading and installing PV drivers.
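With the PV drivers in place, the guest can then be rebooted from the host. A sketch with a hypothetical guest name; the command is printed with echo here, since running it requires a live libvirt host:

```shell
VM=win2022   # hypothetical Windows guest name

# Gracefully reboot the guest; for Windows guests this works only
# when the paravirtualized (VMDP) drivers are installed inside it.
echo virsh reboot "$VM"
```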

The following guest operating systems are fully supported (L3):
  • SUSE Linux Enterprise Server 11 SP4

  • SUSE Linux Enterprise Server 12 SP3, 12 SP4, 12 SP5

  • SUSE Linux Enterprise Server 15 GA, 15 SP1, 15 SP2, 15 SP3, 15 SP4, 15 SP5, 15 SP6

  • SUSE Linux Enterprise Micro 5.3, 5.4, 5.5, 6

  • Windows Server 2012+, 2012 R2+, 2016, 2019, 2022

  • Microsoft Windows Server Catalog

  • Oracle Linux 6, 7, 8 (KVM hypervisor only)

The following guest operating systems are supported as a technology preview (L2, fixes if reasonable):
  • SLED 15 SP3

Red Hat and CentOS guest operating systems are fully supported (L3) if the customer has purchased SUSE Liberty Linux. Refer to the SUSE Liberty Linux documentation at https://documentation.suse.com/liberty for the list of available combinations and supported releases. In other cases, they are supported on a limited basis (L2, fixes if reasonable).

Note: RHEL PV drivers

Starting from RHEL 7.2, Red Hat removed Xen PV drivers.

The following guest operating systems are supported on a commercially reasonable basis (L2, fixes if reasonable):
  • Windows 8+, 8.1+, 10+

All other guest operating systems
  • In other combinations, L2 support is provided, but fixes are available only if feasible. SUSE fully supports the host OS (hypervisor); issues in the guest OS need to be supported by the respective OS vendor. If a fix involves both the host and guest environments, the customer needs to approach both SUSE and the guest VM OS vendor.

  • All guest operating systems are supported both fully virtualized and paravirtualized. The exceptions are Windows systems, which are only supported fully virtualized (but they can use PV drivers: https://www.suse.com/products/vmdriverpack/), and OES operating systems, which are supported only paravirtualized.

  • All guest operating systems are supported both in 32-bit and 64-bit environments, unless stated otherwise.

7.4.1 Availability of paravirtualized drivers

To improve the performance of the guest operating system, paravirtualized drivers are provided when available. Although they are not required, it is strongly recommended to use them.

Starting with SUSE Linux Enterprise Server 12 SP2, we switched to a PVops kernel. We are no longer using a dedicated kernel-xen package:

  • The kernel-default+kernel-xen on dom0 was replaced by the kernel-default package.

  • The kernel-xen package on PV domU was replaced by the kernel-default package.

  • The kernel-default+xen-kmp on HVM domU was replaced by kernel-default.

For SUSE Linux Enterprise Server 12 SP1 and older (down to 10 SP4), the paravirtualized drivers are included in a dedicated kernel-xen package.

The paravirtualized drivers are available as follows:

SUSE Linux Enterprise Server 12 / 12 SP1 / 12 SP2

Included in kernel

SUSE Linux Enterprise Server 11 / 11 SP1 / 11 SP2 / 11 SP3 / 11 SP4

Included in kernel

SUSE Linux Enterprise Server 10 SP4

Included in kernel

Red Hat

Available since Red Hat Enterprise Linux 5.4. Starting from Red Hat Enterprise Linux 7.2, Red Hat removed the PV drivers.

Windows

SUSE has developed virtio-based drivers for Windows, which are available in the Virtual Machine Driver Pack (VMDP). For more information, see https://www.suse.com/products/vmdriverpack/.

7.5 Supported VM migration scenarios

SUSE Linux Enterprise Server supports migrating a virtual machine from one physical host to another.

7.5.1 Offline migration scenarios

SUSE supports offline migration: powering off a guest VM, then moving it to a host running a different SLE product, for example from SLE 12 to SLE 15 SPX. The following host operating system combinations are fully supported (L3) for migrating guests from one host to another:

Table 7.5: Supported offline migration guests
The matrix covers source SLES hosts 12 SP3, 12 SP4, 12 SP5, 15 GA, 15 SP1, 15 SP2, 15 SP3 and 15 SP4 against target SLES hosts 12 SP3 through 15 SP5. Each combination is marked as one of the following: fully compatible and fully supported; supported for the KVM hypervisor only; or not supported.

7.5.2 Live migration scenarios

This section lists the support status of live migration scenarios when running virtualized on top of SLES. Also refer to Section 16.2, “Migration requirements”. The following host operating system combinations are fully supported (L3) according to the respective product life cycle.

Note: Live migration
  • SUSE always supports live migration of virtual machines between hosts running SLES with successive service pack numbers. For example, from SLES 15 SP2 to 15 SP3.

  • SUSE strives to support live migration of virtual machines from a host running a service pack under LTSS to a host running a newer service pack, within the same major version of SUSE Linux Enterprise Server. For example, virtual machine migration from a SLES 12 SP2 host to a SLES 12 SP5 host. SUSE only performs minimal testing of LTSS-to-newer migration scenarios and recommends thorough on-site testing before attempting to migrate critical virtual machines.

Important: Xen live migration

Live migration between SLE 11 and SLE 12 is not supported because of the different tool stacks. See the Release Notes for more details.

Table 7.6: Supported live-migration guests
The matrix covers source SLES hosts 12 SP3, 12 SP4, 12 SP5, 15 GA, 15 SP1, 15 SP2, 15 SP3 and 15 SP4 against target SLES hosts 12 SP4 through 15 SP5. Each combination is marked as one of the following: fully compatible and fully supported; supported for the KVM hypervisor only; or not supported.

7.6 Feature support

Important: Nested virtualization: tech preview

Nested virtualization allows you to run a virtual machine inside another VM while still using hardware acceleration from the host. It has lower performance and adds complexity to debugging. Nested virtualization is normally used for testing purposes. In SUSE Linux Enterprise Server, nested virtualization is a technology preview: it is provided for testing only and is not supported. Bugs can be reported, but they are treated with low priority. Any attempt to live migrate, or to save or restore, VMs in the presence of nested virtualization is also explicitly unsupported.
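Whether nested virtualization is enabled on a KVM host can be read from the KVM module parameter. A sketch, assuming an Intel host (the module is kvm_amd on AMD hosts); the nested_state helper is a hypothetical convenience function, not part of any SUSE tool:

```shell
# Hypothetical helper: normalize the parameter value (Y/1 = enabled).
nested_state() { case "$1" in Y|y|1) echo enabled;; *) echo disabled;; esac; }

# Read the nested parameter; it prints Y/N (or 1/0 on older kernels).
# Defaults to N when the file is absent (module not loaded).
NESTED=$(cat /sys/module/kvm_intel/parameters/nested 2>/dev/null || echo N)
echo "nested virtualization: $(nested_state "$NESTED")"
```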

Important: Post-copy live migration: tech preview

Post-copy is a live migration method intended to get VMs running on the destination host as soon as possible, with the VM RAM transferred gradually in the background as needed. Under some conditions, this can be an optimization compared to the traditional pre-copy method. However, it comes with a major drawback: an error during the migration (especially a network failure) can cause the whole VM RAM contents to be lost. Therefore, we recommend using only pre-copy in production; post-copy can be used for testing and experimentation when losing the VM state is not a major concern.
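A post-copy live migration is driven in two steps with virsh. A sketch with hypothetical VM and host names; the commands are printed with echo here, since running them requires a live libvirt setup:

```shell
VM=sles15vm                          # hypothetical guest name
DEST=qemu+ssh://target-host/system   # hypothetical destination URI

# Step 1: start a normal pre-copy live migration with post-copy permitted.
echo virsh migrate --live --postcopy "$VM" "$DEST"

# Step 2: once the migration is underway, switch the remaining RAM
# transfer to post-copy mode (a network failure past this point can
# lose the VM state).
echo virsh migrate-postcopy "$VM"
```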

7.6.1 Xen host (Dom0)

Table 7.7: Feature support—host (Dom0)

Features (Xen):
  • Network and block device hotplugging
  • Physical CPU hotplugging
  • Virtual CPU hotplugging
  • Virtual CPU pinning
  • Virtual CPU capping
  • Intel* VT-x2: FlexPriority, FlexMigrate (migration constraints apply to dissimilar CPU architectures)
  • Intel* VT-d2 (DMA remapping with interrupt filtering and queued invalidation)
  • AMD* IOMMU (I/O page table with guest-to-host physical address translation)
Note: Adding or removing physical CPUs at runtime is not supported

The addition or removal of physical CPUs at runtime is not supported. However, virtual CPUs can be added or removed for each VM Guest while offline.
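Virtual CPU counts can be changed per VM Guest with virsh. A sketch with a hypothetical guest name; the commands are printed with echo here, since running them requires a live libvirt host:

```shell
VM=sles15vm   # hypothetical guest name

# Change the persistent vCPU count while the guest is shut off.
echo virsh setvcpus "$VM" 4 --config

# Confirm the maximum and current vCPU counts.
echo virsh vcpucount "$VM"
```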

7.6.2 Guest feature support

Note: Live migration of Xen PV guests

For live migration, both source and target system architectures need to match; that is, the processors (AMD* or Intel*) must be the same. Unless CPU ID masking is used, such as with Intel FlexMigration, the target should feature the same processor revision as the source, or a more recent one. If VMs are moved among different systems, the same rules apply for each move. To avoid optimized code failing at runtime or at application start-up, source and target CPUs need to expose the same processor extensions. Xen exposes the physical CPU extensions to the VMs transparently. To summarize, guests can be 32-bit or 64-bit, but the VM Host Servers must be identical.

Note: Windows guest

Hotplugging of virtual network and virtual block devices, and resizing, shrinking and restoring dynamic virtual memory are supported in Xen and KVM only if PV drivers are being used (VMDP).

Note: Intel FlexMigration

For machines that support Intel FlexMigration, CPU-ID masking and faulting allow for more flexibility in cross-CPU migration.

Tip

For KVM, a detailed description of supported limits, features, recommended settings and scenarios, and other useful information is maintained in kvm-supported.txt. This file is part of the KVM package and can be found in /usr/share/doc/packages/qemu-kvm.

Table 7.8: Guest feature support for Xen and KVM

Features (columns: Xen PV guest (DomU), Xen FV guest, KVM FV guest; each cell is marked as fully compatible and fully supported, or not supported):
  • Virtual network and virtual block device hotplugging
  • Virtual CPU hotplugging
  • Virtual CPU over-commitment
  • Dynamic virtual memory resize
  • VM save and restore
  • VM live migration [1]
  • VM snapshot
  • Advanced debugging with GDBC
  • Dom0 metrics visible to VM
  • Memory ballooning
  • PCI pass-through [2]
  • AMD SEV [3]

[1] See Section 16.2, “Migration requirements”.
[2] NetWare guests are excluded.
[3] See https://documentation.suse.com/sles/html/SLES-amd-sev/article-amd-sev.html