
The Adaptable Linux Platform Guide

This guide introduces the Adaptable Linux Platform (ALP): its deployment, system management, software installation, and running of containerized workloads. To contribute to this ALP documentation, find its sources at https://github.com/SUSE/doc-modular/edit/main/xml/.


ALP is a lightweight operating system. Instead of applications distributed in traditional software packages, it runs containerized and virtualized workloads.


This guide provides an overview of what ALP is and how it differs from traditional operating systems. It also describes how to administer ALP and install and manage individual workloads.


To understand the concepts and perform the tasks described in this guide, you need solid knowledge of and practical experience with the Linux operating system.


After reading this guide, you will be able to deploy ALP, modify its file system in a transactional way, and install and run specific workloads on top of it.

Publication Date: 01 Dec 2022

1 General description

1.1 What is ALP?

The Adaptable Linux Platform (ALP) is a lightweight operating system. Instead of applications distributed in traditional software packages, it runs containerized and virtualized workloads.

1.2 Core components of ALP

The Adaptable Linux Platform (ALP) consists of the following components:

Base operating system

The core of ALP which runs all required services. It is an immutable operating system with a read-only root file system. The file system is modified by transactional updates which utilize the snapshotting feature of BTRFS.

Transactional updates

The transactional-update command performs changes on the file system. You can use it to install software, update existing workloads, or apply software patches. Because it uses file system snapshots, applied changes can be easily rolled back.

Container orchestration

ALP runs containerized workloads instead of applications packed in software packages. The default container orchestrator in ALP is Podman, which is responsible for managing containers and container images.

Containerized workloads

Workloads replace traditional applications. A containerized workload contains all software dependencies required to run a specific application or tool.


Cockpit

A Web-based graphical interface to administer single or multiple ALP workloads from one place. It helps you manage, for example, user accounts, network settings, or container orchestration.

1.3 Benefits of ALP

The Adaptable Linux Platform offers the following customer benefits:

  • High security of running workloads.

  • Minimal maintenance required to keep workloads up to date.

  • Stable immutable base operating system that utilizes transactions when modifying the file system.

  • Ability to roll back modifications on the file system in case the transaction result is undesirable.

1.5 Available workloads for the Adaptable Linux Platform

1.5.1 Introduction

This article lists and describes workloads that are available for the Adaptable Linux Platform (ALP). You can find the source files of the workloads at https://build.opensuse.org/project/show/SUSE:ALP:Workloads.

1.5.2 YaST

The following YaST container images are available:


The base YaST workload. It contains the text version of YaST (ncurses).

For more details, refer to Section 4.7, “Running the YaST workload using Podman”.


This workload adds the Qt-based graphical user interface.


This workload exposes the standard graphical interface via a VNC server and uses a JavaScript VNC client to render the screen in a Web browser.

1.5.3 KVM

This workload adds virtualization capability to ALP so that you can use it as a VM Host Server. It uses the KVM hypervisor supported by the libvirt toolkit.

For more details, refer to Section 4.8, “Running the KVM virtualization workload using Podman”.

1.5.4 Cockpit Web server

This workload adds the Cockpit Web server to the Adaptable Linux Platform so that you can administer the system and containers via a user-friendly interface in your Web browser.

For more details, refer to Section 4.9, “Running the Cockpit Web server using Podman”.

2 Deployment

2.1 Introduction

The Adaptable Linux Platform (ALP) is distributed as a pre-built raw disk image. You can download the latest published image from https://download.opensuse.org/repositories/SUSE:/ALP:/PUBLISH/images/. There are two types of ALP images, depending on whether you intend to run ALP on an encrypted or an unencrypted disk. You can deploy ALP either with a minimal initial configuration (JeOS Firstboot), or use additional tools, Combustion and Ignition, to specify a detailed system setup.

2.2 First boot detection

The deployment configuration runs on the first boot only. To distinguish between the first and subsequent boots, the flag file /boot/writable/firstboot_happened is created after the first boot finishes. If the file is not present in the file system, the attribute ignition.firstboot is passed to the kernel command line, which triggers Ignition in the initramfs or the Combustion dracut module.

Note: The flag file is always created

Even if the configuration fails, for example, due to improper or missing configuration files, the /boot/writable/firstboot_happened flag file is still created.


You may force the first-boot configuration on a subsequent boot by passing the ignition.firstboot attribute to the kernel command line or by deleting the /boot/writable/firstboot_happened flag file.
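For example, forcing the configuration to run again can be done with a short snippet like the following (an illustrative sketch; run as root on an ALP system):

```shell
# Illustrative: re-enable the first-boot configuration by removing the
# flag file described above (requires root on a real ALP system).
flag=/boot/writable/firstboot_happened
if [ -f "$flag" ]; then
  rm "$flag"
fi
echo "first-boot configuration will run again if $flag is absent"
```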

2.3 Default partitioning

The pre-built images are delivered with a default partitioning scheme. You can change it during the first boot by using Ignition or Combustion.

Important: BTRFS is mandatory for the root file system

If you intend to perform any changes to the default partitioning scheme, the root file system must be BTRFS.

Each image has the following subvolumes:


The /etc directory is mounted as overlayfs, where the upper directory is mounted to /var/lib/overlay/1/etc/.

You can recognize the subvolumes mounted by default by the option x-initrd.mount in /etc/fstab. Other subvolumes or partitions must be configured either by Ignition or Combustion.
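To see which entries are mounted already in the initrd, you can filter /etc/fstab for the x-initrd.mount option. The following self-contained sketch runs against a sample fstab (its content is an assumption for illustration); on a real ALP system, point it at /etc/fstab:

```shell
# Print mount points whose fstab options (4th field) include x-initrd.mount.
extract_initrd_mounts() {
  awk '$4 ~ /x-initrd\.mount/ {print $2}' "$1"
}

# Sample fstab for illustration; replace with /etc/fstab on a real system.
cat > /tmp/fstab.sample <<'EOF'
UUID=abcd / btrfs ro 0 0
UUID=abcd /var btrfs subvol=/@/var,x-initrd.mount 0 0
UUID=abcd /etc overlay defaults,x-initrd.mount 0 0
EOF

extract_initrd_mounts /tmp/fstab.sample
```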

2.5 Deploying ALP

2.5.1 Introduction

This article describes how to deploy the Adaptable Linux Platform (ALP) raw disk image. It applies to ALP running both on encrypted and unencrypted disk.

2.5.2 Hardware requirements

The minimum supported hardware requirements for deploying ALP are as follows:

CPU

The AMD64/Intel 64 CPU architecture is supported.

Maximum number of CPUs

The maximum number of CPUs supported by software design is 8192.


Memory

ALP requires at least 1 GB of RAM. Bear in mind that this is a minimal value for the operating system; the actual memory requirements depend on the workload.

Hard disk

The minimum hard disk space is 12 GB, while the recommended size is 20 GB. Adjust the value according to the workloads of your containers.

Important: Encrypted image does not expand to the full disk capacity

As of now, the encrypted image does not expand to the full disk capacity automatically. As a workaround, the following steps are required:

  1. Use the qemu-img command to increase the disk image to the desired size.

  2. Set up the virtual machine and boot it. When the JeOS Firstboot wizard asks you which method to use for encryption, select passphrase.

  3. When the system is ready, use the parted command to resize the partition where the LUKS device resides (for example, partition number 3) to the desired size.

  4. Run the cryptsetup resize luks command. When asked, enter the passphrase to resize the encrypted device.

  5. Run the transactional-update shell command to open a read-write shell in the current disk snapshot. Then resize the BTRFS file system to the desired size, for example:

    # btrfs fi resize max /
  6. Leave the shell with exit and reboot the system with reboot.
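The steps above can be sketched as a shell helper. The function name and the image path argument are hypothetical, and the in-guest commands are left as comments because they must be run inside the booted virtual machine:

```shell
# Sketch of the encrypted-image resize workaround (steps 1-6 above).
# Only step 1 runs on the host; the guest-side steps are comments.
resize_encrypted_alp_image() {
  local image=$1 new_size=$2
  qemu-img resize "$image" "$new_size"    # step 1: grow the disk image
  # Inside the booted guest (steps 3-6):
  #   parted /dev/vda resizepart 3 100%   # partition holding the LUKS device
  #   cryptsetup resize luks              # enter the passphrase when asked
  #   transactional-update shell          # read-write shell in a new snapshot
  #     btrfs fi resize max /
  #   exit && reboot
}
```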

2.5.3 Deploying ALP on a KVM VM Host Server

This procedure describes steps to deploy ALP as a KVM virtual machine using the Virtual Machine Manager.

  1. Download the ALP virtual machine image to the host where you will run ALP. Go to https://download.opensuse.org/repositories/SUSE:/ALP:/PUBLISH/images/ and download the latest disk image of ALP.

    For example, for the unencrypted image:

    > curl -LO https://download.opensuse.org/repositories/SUSE:/ALP:/PUBLISH/images/ALP-VM.x86_64-0.0.1-kvm-Build15.17.qcow2

    And for the encrypted image:

    > curl -LO https://download.opensuse.org/repositories/SUSE:/ALP:/PUBLISH/images/ALP-VM.x86_64-0.0.1-kvm_encrypted-Build15.18.qcow2
  2. Start Virtual Machine Manager, select File › New VM and Import existing disk image. Confirm with Forward.

  3. Specify the path to the ALP disk image that you previously downloaded and the type of Linux OS you are deploying, for example, Generic Linux 2020. Confirm with Forward.

  4. Specify the amount of memory and number of processors that you want to assign to the ALP virtual machine and confirm with Forward.

  5. Specify the name for the virtual machine and the network to be used.

  6. If you are deploying an encrypted ALP image, perform these additional steps:

    1. Enable Customize configuration before install and confirm with Finish.

    2. Click Overview from the left menu and change the boot method from BIOS to UEFI for secure boot. Confirm with Apply.

      Set UEFI firmware for the encrypted ALP image
      Figure 2.1: Set UEFI firmware for the encrypted ALP image
    3. Add a Trusted Platform Module (TPM) device. Click Add Hardware, select TPM from the left menu, and select the Emulated type.

      Add emulated TPM device
      Figure 2.2: Add emulated TPM device

      Confirm with Finish and start the ALP deployment by clicking Begin Installation from the top menu.

  7. Continue with one of the following:

    • If you want to deploy ALP with only minimal setup options, confirm with Finish. The ALP disk image will be booted and JeOS Firstboot will take care of the deployment. Refer to Section 2.5.4, “Deploying ALP with JeOS Firstboot” for next steps.

    • If you want to specify detailed deployment options, use the Ignition or Combustion tools to supply your setup during the disk image boot process. For more details, refer to Section 2.6, “Configuring with Ignition” and Section 2.7, “Configuring with Combustion”.

2.5.4 Deploying ALP with JeOS Firstboot

When booting the ALP raw image for the first time, JeOS Firstboot enables you to perform a minimal configuration of your system:

  1. After booting the ALP disk image, you will be presented with a bootloader screen. Select ALP and confirm with Enter.

    ALP boot screen
    Figure 2.3: ALP boot screen
  2. JeOS Firstboot displays a welcome screen. Confirm with Enter.

    JeOS Firstboot screen
    Figure 2.4: JeOS Firstboot screen
  3. On the next screens, select the keyboard layout, confirm the license agreement, and select the time zone.

  4. In the Enter root password dialog window, enter a password for the root user and confirm it.

    Enter root password
    Figure 2.5: Enter root password
  5. When deploying with an encrypted disk, follow these additional steps:

    1. Select the desired protection method and confirm with OK.

      Select method for encryption
      Figure 2.6: Select method for encryption
    2. Enter a recovery password for LUKS encryption and retype it. The root file system re-encryption will begin.

  6. ALP is successfully deployed using a minimal initial configuration.

2.5.5 Summary

After the deployment of ALP is finished, you are presented with the login prompt. Log in as root, and you are ready to set up the system and install additional workloads.

2.5.6 Next steps

2.6 Configuring with Ignition

2.6.1 What is Ignition?

Ignition is a provisioning tool that enables you to configure a system according to your specification on the first boot.

2.6.2 How does Ignition work?

When the system is booted for the first time, Ignition is loaded as part of an initramfs and searches for a configuration file in a specific directory (for example, on a USB flash disk) or at a provided URL. All changes are performed before the kernel switches from the temporary file system to the real root file system (before the switch_root command is issued).

Ignition uses a configuration file in the JSON format named config.ign. For better human readability, you can create a YAML file and convert it to JSON. For details, refer to 'Task: Converting YAML file into JSON'.

The config.ign file

When installing on bare metal, the configuration file config.ign must reside in the ignition subdirectory on the configuration media labeled ignition. The directory structure must look as follows:

<root directory>
└── ignition
    └── config.ign
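For example, the layout can be assembled in a staging directory before copying it to a device labeled ignition (a sketch; the mkfs and labeling steps are omitted, and the one-line config is a minimal example):

```shell
# Build the required directory layout for an Ignition configuration medium.
staging=$(mktemp -d)
mkdir -p "$staging/ignition"

# Minimal config.ign (the version attribute is mandatory for ALP).
cat > "$staging/ignition/config.ign" <<'EOF'
{"ignition": {"version": "3.3.0"}}
EOF

find "$staging" -mindepth 1 | sort   # show the resulting tree
```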

If you intend to configure a virtual machine with Virtual Machine Manager (libvirt), provide the path to the config.ign file in its XML definition, for example:

<domain ... >
  <sysinfo type="fwcfg">
    <entry name="opt/com.coreos/config" file="/location/to/config.ign"/>
  </sysinfo>
  ...
</domain>

The config.ign file contains various data types: objects, strings, integers, booleans and lists of objects. For a complete specification, refer to Ignition specification v3.3.0.

The version attribute is mandatory and, in the case of ALP, its value must be set to 3.3.0 or any lower version. Otherwise, Ignition fails.

If you want to log in to your system as root, you must at least include a password for root. However, we recommend establishing access via SSH keys. If you configure a password, make sure it is secure. If you use a randomly generated password, use at least 10 characters. If you create your password manually, use more than 10 characters and combine uppercase and lowercase letters and numbers.

Converting YAML formatted files into JSON

Introduction

JSON is a universal file format for storing structured data. Applications, for example, Ignition, use it to store and retrieve their configuration. Because JSON's syntax is complex and hard to read for human beings, you can write the configuration in a more friendly format called YAML and then convert it into JSON.

Converting YAML files into JSON format

The tool that converts Ignition-specific vocabularies in YAML files into JSON format is butane. It also verifies the syntax of the YAML file to catch potential errors in the structure. For the latest version of butane, add the following repository:

> sudo zypper ar -f \
  https://download.opensuse.org/repositories/devel:/kubic:/ignition/openSUSE_Tumbleweed/ \
  ignition
Replace openSUSE_Tumbleweed with one of the following (depending on your distribution):

  • 'openSUSE_Leap_$releasever'

  • 15.3

Now you can install the butane tool:

> sudo zypper ref && sudo zypper in butane

After the installation is complete, you can invoke butane by running:

> butane -p -o config.ign config.fcc
  • config.fcc is the path to the YAML configuration file.

  • config.ign is the path to the output JSON configuration file.

  • The -p command option adds line breaks to the output file, making it more readable.

Summary

After you have completed the described steps, you can write and store configuration files in YAML format and convert them to JSON when applications, for example, Ignition, require it.

Ignition configuration examples

Configuration examples in YAML

This section provides common examples of the Ignition configuration in the YAML format. Note that Ignition does not accept configuration in the YAML format but in JSON. To convert a YAML file to the JSON format, use the butane tool as described in Converting YAML formatted files into JSON.

Note: The version attribute is mandatory

Each config.fcc must include version 1.4.0 or lower, which is then converted to the corresponding Ignition specification.

Storage configuration

The storage attribute is used to configure partitions, RAID, define file systems, create files, and so on. To define partitions, use the disks attribute. The filesystems attribute is used to format partitions and define mount points of particular partitions. The files attribute can be used to create files in the file system. Each of the mentioned attributes is described in the following sections.

The disks attribute

The disks attribute is a list of devices that enables you to define partitions on these devices. The disks attribute must contain at least one device, other attributes are optional. The following example will use a single virtual device and divide the disk into four partitions:

variant: fcos
version: 1.0.0
storage:
  disks:
    - device: "/dev/vda"
      wipe_table: true
      partitions:
        - label: root
          number: 1
          type_guid: 4F68BCE3-E8CD-4DB1-96E7-FBCAF984B709
        - label: boot
          number: 2
          type_guid: BC13C2FF-59E6-4262-A352-B275FD6F7172
        - label: swap
          number: 3
          type_guid: 0657FD6D-A4AB-43C4-84E5-0933C84B4F4F
        - label: home
          number: 4
          type_guid: 933AC7E1-2EB4-4F13-B844-0E14E2AEF915

The raid attribute

The raid attribute is a list of RAID arrays. The following attributes of raid are mandatory:

level

The level of the particular RAID array (linear, raid0, raid1, raid2, raid3, raid4, raid5, raid6).

devices

The list of devices in the array, referenced by their absolute paths.

name

The name that will be used for the md device.

variant: fcos
version: 1.0.0
storage:
  raid:
    - name: system
      level: raid1
      devices:
        - "/dev/sda"
        - "/dev/sdb"

The filesystems attribute

The filesystems attribute must contain the following attributes:

device

The absolute path to the device, typically /dev/sda in the case of a physical disk.

format

The file system format (btrfs, ext4, xfs, vfat or swap).

In the case of ALP, the root file system must be formatted to btrfs.

The following example demonstrates using the filesystems attribute. The /dev/sda1 partition, formatted to btrfs, will be mounted at /opt. The device will not be erased.

variant: fcos
version: 1.0.0
storage:
  filesystems:
    - path: /opt
      device: "/dev/sda1"
      format: btrfs
      wipe_filesystem: false

The files attribute

You can use the files attribute to create any files on your machine. Bear in mind that if you want to create files outside the default partitioning scheme, you need to define the directories by using the filesystems attribute.

In the following example, a host name is set by using the files attribute. The file /etc/hostname will be created with the content alp-1:

variant: fcos
version: 1.0.0
storage:
  files:
    - path: /etc/hostname
      mode: 0644
      overwrite: true
      contents:
        inline: "alp-1"

The directories attribute

The directories attribute is a list of directories that will be created in the file system. The directories attribute must contain at least one path attribute.

variant: fcos
version: 1.0.0
storage:
  directories:
    - path: /home/tux
      user:
        name: tux

Users administration

The passwd attribute is used to add users. If you intend to log in to your system, create the root user, set the root password and/or add an SSH key to the Ignition configuration. You need to hash the root password, for example, by using the openssl command:

 openssl passwd -6

The command creates a hash of the password you chose. Use this hash as the value of the password_hash attribute.

variant: fcos
version: 1.0.0
passwd:
  users:
    - name: root
      password_hash: "$6$PfKm6Fv5WbqOvZ0C$g4kByYM.D2B5GCsgluuqDNL87oeXiHqctr6INNNmF75WPGgkLn9O9uVx4iEe3UdbbhaHbTJ1vpZymKWuDIrWI1"
      ssh_authorized_keys:
        - ssh-rsa long...key user@host

The users attribute must contain at least one name attribute. The ssh_authorized_keys attribute is a list of SSH keys for the user.

Enabling systemd services

You can enable systemd services by specifying them in the systemd attribute.

variant: fcos
version: 1.0.0
systemd:
  units:
    - name: sshd.service
      enabled: true

The name must be the exact name of a service to be enabled (including the suffix).

2.7 Configuring with Combustion

2.7.1 What is Combustion?

Combustion is a dracut module that enables you to configure your system on the first boot. You can use Combustion, for example, to change the default partitions, set user passwords, create files, or install packages.

2.7.2 How does Combustion work?

Combustion is invoked after the ignition.firstboot argument is passed to the kernel command line. Combustion reads a provided file named script, executes the included commands, and thus performs changes to the file system. If script includes the network flag, Combustion tries to configure the network. After /sysroot is mounted, Combustion tries to activate all mount points in /etc/fstab and then calls transactional-update to apply other changes, such as setting the root password or installing packages.

The script file

When installing on bare metal, the configuration file script must reside in the combustion subdirectory on the configuration media labeled combustion. The directory structure must look as follows:

<root directory>
└── combustion
    ├── script
    └── other files

If you intend to configure a virtual machine with Virtual Machine Manager (libvirt), provide the path to the script file in its XML definition, for example:

<domain ... >
  <sysinfo type="fwcfg">
    <entry name="opt/org.opensuse.combustion/script" file="/location/to/script"/>
  </sysinfo>
  ...
</domain>
Tip: Using Combustion together with Ignition

Combustion can be used along with Ignition. If you intend to do so, label your configuration medium ignition and include the ignition directory with config.ign in your directory structure as shown below:

<root directory>
├── combustion
│   ├── script
│   └── other files
└── ignition
    └── config.ign

In this scenario, Ignition runs before Combustion.

Combustion configuration examples

The script configuration file

The script configuration file is a set of commands that are parsed and executed by Combustion in a transactional-update shell. This article provides examples of configuration tasks performed by Combustion.

Important: Include interpreter declaration

As the script file is interpreted by Bash, always start the file with the interpreter declaration on its first line:

#!/bin/bash
To log in to your system, include at least the root password. However, we recommend establishing authentication using SSH keys. If you need to use a root password, make sure to configure a secure one. If you use a randomly generated password, use at least 10 characters. If you create your password manually, use more than 10 characters and combine uppercase and lowercase letters and numbers.

Network configuration

To configure and use the network connection during the first boot, add the following statement to script:

# combustion: network

Using this statement passes the rd.neednet=1 argument to dracut. If you do not use the statement, the system is configured without a network connection.

Partitioning

ALP raw images are delivered with a default partitioning scheme as described in Section 2.3, “Default partitioning”. You might want to use a different partitioning scheme. The following set of example snippets moves /home to a different partition.

Note: Performing changes outside of directories included in snapshots

The following script performs changes that are not included in snapshots. If the script fails and the snapshot is discarded, some changes remain visible and cannot be reverted, for example, the changes to the /dev/vdb device.

The following snippet creates a GPT partitioning schema with a single partition on the /dev/vdb device:

sfdisk /dev/vdb <<EOF
label: gpt
type=linux
EOF

partition=/dev/vdb1

The partition is formatted to BTRFS:

wipefs --all ${partition}
mkfs.btrfs ${partition}

Any existing content of /home is moved to the new location by the following snippet:

mount /home
mount ${partition} /mnt
rsync -aAXP /home/ /mnt/
umount /home /mnt

The snippet below removes an old entry in /etc/fstab and creates a new entry:

awk -i inplace '$2 != "/home"' /etc/fstab
echo "$(blkid -o export ${partition} | grep ^UUID=) /home btrfs defaults 0 0" >>/etc/fstab

Setting a password for root

Before you set the root password, generate a hash of the password, for example, by using the openssl passwd -6 command. To set the password, add the following to the script:

echo 'root:$5$.wn2BZHlEJ5R3B1C$TAHEchlU.h2tvfOpOki54NaHpGYKwdNhjaBuSpDotD7' | chpasswd -e

Adding SSH keys

The following snippet creates a directory to store the root's SSH key and then copies the public SSH key located on the configuration device to the authorized_keys file.

mkdir -pm700 /root/.ssh/
cat id_rsa_new.pub >> /root/.ssh/authorized_keys

The SSH service must be enabled if you need to use remote login via SSH. For details, refer to Enabling services.

Enabling services

To enable system services, for example, the SSH service, add the following line to script:

systemctl enable sshd.service

Installing packages
Important: Network connection and registering your system may be necessary

As some packages may require additional subscription, you may need to register your system beforehand. An available network connection may also be needed to install additional packages.

During the first boot configuration, you can install additional packages to your system. For example, you can install the vim editor by adding:

zypper --non-interactive install vim-small

Bear in mind that you cannot use zypper directly after the configuration is complete and you boot into the configured system. To perform changes later, use the transactional-update command to create a changed snapshot.
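Putting the snippets from this article together, a minimal script file might look as follows. This is a sketch: PASSWORD_HASH is a placeholder for a hash generated with openssl passwd -6, and the package choice is only an example:

```shell
#!/bin/bash
# combustion: network

# Enable the SSH service for remote login.
systemctl enable sshd.service

# Set the root password (PASSWORD_HASH is a placeholder).
echo 'root:PASSWORD_HASH' | chpasswd -e

# Install an additional package (requires network and configured repositories).
zypper --non-interactive install vim-small
```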

3 Transactional updates

3.1 What are transactional updates?

The Adaptable Linux Platform (ALP) was designed to use a read-only root file system. This means that after the deployment is complete, you are not able to perform direct modifications to the root file system, for example, by using the zypper command. Instead, ALP introduces the concept of transactional updates which enables you to modify your system and keep it up to date.

3.2 How do transactional updates work?

Each time you call the transactional-update command to change your system—either to install a package, perform an update or apply a patch—the following actions take place:

Procedure 3.1: Modifying the root file system
  1. A new read-write snapshot is created from your current root file system, or from a snapshot that you specified.

  2. All changes are applied (updates, patches or package installation).

  3. The snapshot is switched back to read-only mode.

  4. The new root file system snapshot is prepared, so that it will be active after you reboot.

  5. After rebooting, the new root file system is set as the default snapshot.


    Bear in mind that without rebooting your system, the changes will not be applied.


If you do not reboot your machine before performing further changes, the transactional-update command will create a new snapshot from the current root file system. This means that you will end up with several parallel snapshots, each including that particular change but not changes from the other invocations of the command. After reboot, the most recently created snapshot will be used as your new root file system, and it will not include changes done in the previous snapshots.

3.3 Software repositories

The current ALP image points to the following two software repositories:



ALP

This repository is enabled by default. It is a subset of the build repository and an equivalent of the POOL repository known from other SUSE software products. It remains unchanged until the release of the next ALP prototype.


If you need a package which is not included in the ALP repository, you may find it in the ALP-Build repository. To enable it, run:

# zypper mr -e ALP-Build


ALP-Build

This repository is disabled by default. It is used for building the project. It includes all packages built in the SUSE:ALP project in the Build Service and will move forward over time with future development.

3.4 Benefits of transactional updates

  • They are atomic—the update is applied only if it completes successfully.

  • Changes are applied in a separate snapshot and so do not influence the running system.

  • Changes can easily be rolled back.

3.6 Usage of the transactional-update command

3.6.1 transactional-update usage

The transactional-update command enables the atomic installation or removal of updates. Updates are applied only if all of them can be successfully installed. transactional-update creates a snapshot of your system and uses it to update the system. Later you can restore this snapshot. All changes become active only after reboot.

The transactional-update command syntax is as follows:

transactional-update [option] [general_command] [package_command] standalone_command
Note: Running transactional-update without arguments

If you do not specify any command or option while running the transactional-update command, the system updates itself.

Possible command parameters are described further.

transactional-update options
--interactive, -i

Can be used along with a package command to turn on interactive mode.

--non-interactive, -n

Can be used along with a package command to turn on non-interactive mode.

--continue [number], -c

The --continue option is for making multiple changes to an existing snapshot without rebooting.

The default transactional-update behavior is to create a new snapshot from the current root file system. If you forget something, such as installing a new package, you have to reboot to apply your previous changes, run transactional-update again to install the forgotten package, and reboot again. You cannot run the transactional-update command multiple times without rebooting to add more changes to the snapshot, because this will create separate independent snapshots that do not include changes from the previous snapshots.

Use the --continue option to make as many changes as you want without rebooting. A separate snapshot is made each time, and each snapshot contains all the changes you made in the previous snapshots, plus your new changes. Repeat this process as many times as you want, and when the final snapshot includes everything you want, reboot the system, and your final snapshot becomes the new root file system.

Another useful feature of the --continue option is that you may select any existing snapshot as the base for your new snapshot. The following example demonstrates running transactional-update to install a new package in a snapshot based on snapshot 13, and then running it again to install another package:

# transactional-update pkg install package_1
# transactional-update --continue 13 pkg install package_2

--no-selfupdate

Disables self-updating of transactional-update.

--drop-if-no-change, -d

Discards the snapshot created by transactional-update if there were no changes to the root file system. If there are changes to the /etc directory, those changes are merged back into the current file system.


--quiet

The transactional-update command does not output to stdout.

--help, -h

Prints help for the transactional-update command.


--version

Displays the version of the transactional-update command.

The general commands are the following:

General commands

cleanup-snapshots

The command marks all unused snapshots that are intended to be removed.


cleanup-overlays

The command removes all unused overlay layers of /etc.


cleanup

The command combines the cleanup-snapshots and cleanup-overlays commands.


grub.cfg

Use this command to rebuild the GRUB boot loader configuration file.


bootloader

The command reinstalls the boot loader.


initrd

Use the command to rebuild initrd.


kdump

In case you perform changes to your hardware or storage, you may need to rebuild the kdump initrd.


shell

Opens a read-write shell in the new snapshot before exiting. The command is typically used for debugging purposes.


reboot

The system reboots after the transactional-update command is complete.

run <command>

Runs the provided command in a new snapshot.


setup-selinux

Installs and enables the targeted SELinux policy.

The package commands are the following:

Package commands

dup

Performs an upgrade of your system. The default option for this command is --non-interactive.


migration

The command migrates your system to a selected target. Typically, it is used to upgrade your system if it has been registered via SUSE Customer Center.


patch

Checks for available patches and installs them. The default option for this command is --non-interactive.

pkg install

Installs individual packages from the available channels using the zypper install command. This command can also be used to install Program Temporary Fix (PTF) RPM files. The default option for this command is --interactive.

# transactional-update pkg install package_name


# transactional-update pkg install rpm1 rpm2
pkg remove

Removes individual packages from the active snapshot using the zypper remove command. This command can also be used to remove PTF RPM files. The default option for this command is --interactive.

# transactional-update pkg remove package_name
pkg update

Updates individual packages from the active snapshot using the zypper update command. Only packages that are part of the snapshot of the base file system can be updated. The default option for this command is --interactive.

# transactional-update pkg update package_name

register

Registers or deregisters your system. For a complete usage description, refer to the section “The register command”.


up

Updates installed packages to newer versions. The default option for this command is --non-interactive.

The standalone commands are the following:

Standalone commands
rollback <snapshot number>

This sets the default subvolume. The current system is set as the new default root file system. If you specify a number, that snapshot is used as the default root file system. On a read-only file system, it does not create any additional snapshots.

# transactional-update rollback snapshot_number
rollback last

This command sets the last known working snapshot as the default.
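A minimal recovery sketch, assuming the previous snapshot was working:

```shell
# Make the last known working snapshot the default root file system,
# then reboot into it.
transactional-update rollback last
reboot
```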


status

This prints a list of available snapshots. The currently booted one is marked with an asterisk, the default snapshot is marked with a plus sign.

The register command

The register command enables you to handle all tasks regarding registration and subscription management. You can supply the following options:


--list-extensions

With this option, the command will list available extensions for your system. You can use the output to find a product identifier for product activation.

-p, --product

Use this option to specify a product for activation. The product identifier has the following format: <name>/<version>/<architecture>, for example, sle-module-live-patching/15.3/x86_64. The appropriate command will then be the following:

# transactional-update register -p sle-module-live-patching/15.3/x86_64
-r, --regcode

Register your system with the registration code provided. The command will register the subscription and enable software repositories.
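Options can be combined. A hedged sketch that registers the system with a registration code and an email address (both values are placeholders):

```shell
transactional-update register -r REGISTRATION_CODE -e user@example.com
```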

-d, --de-register

The option deregisters the system, or when used along with the -p option, deregisters an extension.

-e, --email

Specify an email address that will be used in SUSE Customer Center for registration.


--url

Specify the URL of your registration server. The URL is stored in the configuration and will be used in subsequent command invocations. For example:

# transactional-update register --url https://scc.suse.com
-s, --status

Displays the current registration status in JSON format.


--write-config

Writes the provided option values to the /etc/SUSEConnect configuration file.


--cleanup

Removes old system credentials.


--version

Prints the version.


--help

Displays the usage of the command.

4 Containers and Podman

4.1 What are containers and Podman?

Containers offer a lightweight virtualization method to run multiple virtual environments (containers) simultaneously on a single host. Unlike technologies such as Xen or KVM, where the processor simulates a complete hardware environment and a hypervisor controls virtual machines, containers provide virtualization on the operating system level, where the kernel controls the isolated containers.

Podman is a short name for Pod Manager Tool. It is a daemonless container engine that enables you to run and deploy applications using containers and container images. Podman provides a command line interface to manage containers.

4.2 How does Podman work?

Podman provides integration with systemd. This way you can control containers via systemd units. You can create these units for existing containers as well as generate units that can start containers if they do not exist in the system. Moreover, Podman can run systemd inside containers.
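As a sketch of this integration, Podman can generate a systemd unit file for an existing container (web is a hypothetical container name, and the target path is illustrative):

```shell
# Generate a unit file (container-web.service) for the container "web"
podman generate systemd --name web --files
# Install and enable the unit so the container starts at boot
cp container-web.service /etc/systemd/system/
systemctl enable --now container-web.service
```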

Podman enables you to organize your containers into pods. Pods share the same network interface and resources. A typical use case for organizing a group of containers into a pod is a container that runs a database and a container with a client that accesses the database.
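The database/client use case could be sketched as follows; the image names, port, and password are illustrative assumptions, not part of ALP:

```shell
# Create a pod and publish the database port on the host
podman pod create --name db-pod -p 5432:5432
# Run the database server inside the pod
podman run -d --pod db-pod -e POSTGRES_PASSWORD=example docker.io/library/postgres
# A client in the same pod reaches the server via the shared network
podman run -it --pod db-pod docker.io/library/postgres psql -h 127.0.0.1 -U postgres
```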

4.2.1 Pods architecture

A pod is a group of containers that share the same namespace, ports and network connection. Usually, containers within one pod can communicate directly with each other. Each pod contains an infrastructure container (INFRA), whose purpose is to hold the namespace. INFRA also enables Podman to add other containers to the pod. Port bindings, cgroup-parent values, and kernel namespaces are all assigned to the infrastructure container. Therefore, later changes of these values are not possible.

Pods architecture
Figure 4.1: Pods architecture

Each container in a pod has its own instance of a monitoring program. The monitoring program watches the container's process and if the container dies, the monitoring program saves its exit code. The program also holds open the tty interface for the particular container. The monitoring program enables you to run containers in the detached mode when Podman exits, because this program continues to run and enables you to attach tty later.

4.3 Benefits of containers

  • Containers make it possible to isolate applications in self-contained units.

  • Containers provide near-native performance. Depending on the runtime, a container can use the host kernel directly, thus minimizing overhead.

  • It is possible to control network interfaces and apply resources inside containers through kernel control groups.

Enabling Podman

4.5.1 Introduction

This article helps you verify that Podman is installed on the ALP system and provides guidelines to enable its systemd service when Cockpit requires it.

4.5.2 Requirements

  • Deployed ALP base OS.

4.5.3 Installing Podman

  1. Verify that Podman is installed on your system by running the following command:

    # zypper se -i podman
  2. If Podman is not listed in the output, install it by running:

    # transactional-update pkg install podman*
  3. Reboot the ALP host for the changes to take effect.

  4. Optionally, enable and start the podman.service service for applications that require it, such as Cockpit. You can enable it either in Cockpit by clicking Podman containers › Start podman, or by running the following command:

    # systemctl enable --now podman.service

4.5.4 Enabling rootless mode

By default, Podman requires root privileges. To enable rootless mode for the current user, run the following command:

> sudo usermod --add-subuids 100000-165535 \
  --add-subgids 100000-165535 USER

Reboot the machine to enable the change. The command above defines a range of local UIDs to which the UIDs allocated to users inside the container are mapped on the host. Note that the ranges defined for different users must not overlap. It is also important that the ranges do not reuse the UID of an existing local user or group. By default, adding a user with the useradd command automatically allocates subUID and subGID ranges.
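To check the result, you can inspect the allocated ranges and the resulting user-namespace mapping (USER is a placeholder):

```shell
# Show the subordinate UID/GID ranges allocated to the user
grep USER /etc/subuid /etc/subgid
# After logging in again, show the UID mapping Podman will use
podman unshare cat /proc/self/uid_map
```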

Note: Limitations of rootless containers

Running a container with Podman in rootless mode on ALP may fail, because the container might need access to directories or files that require root privileges.

4.5.5 Next steps

Podman usage

This article introduces basic Podman usage that you may need when running containerized workloads.

4.6.1 Getting container images

To run a container, you need an image. An image includes all dependencies needed to run an application. You can obtain images from an image registry. Available registries are defined in the /etc/containers/registries.conf configuration file. If you have a local image registry or want to use other registries, add the registries into the configuration file.

Important: No tools for building images in ALP

ALP does not provide tools for building custom images. Therefore, the only way to get an image is to pull it from an image registry.

The podman pull command pulls an image from an image registry. The syntax is as follows:

# podman pull [OPTIONS] SOURCE

The source can be an image without the registry name. In that case, Podman tries to pull the image from all registries configured in the /etc/containers/registries.conf file. The default image tag is latest. The default location of pulled images is /var/lib/containers/storage/overlay-images/.
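For example, pulling a fully qualified image reference with an explicit tag skips the registry search; the image path below is illustrative:

```shell
# registry/namespace/name:tag, so no search across configured registries
podman pull registry.opensuse.org/opensuse/tumbleweed:latest
# Verify that the image is now stored locally
podman image ls
```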

To view all possible options of the podman pull command, run:

# podman pull --help
Note: Getting images using Cockpit

If you are using Cockpit, you can also pull images from an image registry in the Podman containers menu by clicking + Get new image.

Podman enables you to search for images in an image registry or a list of registries using the command:

# podman search IMAGE_NAME

4.6.2 Working with containers

The following section covers common container management tasks. This includes creating, starting, and modifying containers.


Running containers


For specific details on running ALP containers, refer to links in the Section 1.5, “Available workloads for the Adaptable Linux Platform” article.

After you have pulled your container image, you can create containers based on it. You can run an instance of the image using the podman run command. The command syntax is as follows:


# podman run [OPTIONS] IMAGE

IMAGE is specified in the format transport:path. If transport is omitted, the default docker is used. The path can reference a specific image registry. If omitted, Podman searches for the image in the registries defined in the /etc/containers/registries.conf file. An example that runs a container called sles15 based on the sle15 image follows:

# podman run --name sles15 registry.opensuse.org/suse/templates/images/sle-15-sp3/base/images/suse/sle15

Below is a list of frequently used options. For a complete list of available options, run the command: podman run --help.

--detach, -d

The container will run in the background.

--env, -e=env

This option sets arbitrary environment variables that are available to the process launched inside the container. If an environment variable is specified without a value, Podman checks the host environment for a value and sets the variable only if it is set on the host.


--help

Prints help for the podman run command.

--hostname=name, -h

Sets the container host name that is available inside the container.


--pod=name

Runs the container in an existing pod. To create a pod, prefix the pod name with new:.


--read-only

Mounts the container’s root file system as read-only.


--systemd=true|false

Runs the container in systemd mode. The default is true.

Stopping containers

If the podman run command finishes successfully, a new container is started. You can stop the container by running:

# podman stop [OPTIONS] CONTAINER

You can specify a single container name or ID or a space-separated list of containers. The command takes the following options:

--all, -a

Stops all running containers.

--latest, -l

Instead of providing a container name, the last created container will be stopped.

--time, -t=seconds

Seconds to wait before forcibly stopping the container.
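For example, to give a container ten seconds to shut down cleanly before it is killed (web is a hypothetical container name):

```shell
podman stop --time 10 web
```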

To view all possible options of the podman stop command, run the following:

# podman stop --help

Starting containers

To start already created but stopped containers, use the podman start command. The command syntax is as follows:

# podman start [OPTIONS] CONTAINER

CONTAINER can be a container name or a container ID.

For a complete list of possible options of podman start, run the command:

# podman start --help

Updating containers

To update an existing container, follow these steps:

  1. Identify the image of the container that you want to update, for example, yast-mgmt-qt:

    > podman image ls
    REPOSITORY                                                                                                  TAG         IMAGE ID      CREATED      SIZE
    registry.opensuse.org/suse/alp/workloads/publish/tumbleweed_containerfiles/suse/alp/workloads/yast-mgmt-qt  latest      f349194a439d  13 days ago  674 MB
  2. Pull the image from the registry to find out if there is a newer version. If you do not specify a version tag, the latest tag is used:

    # podman pull registry.opensuse.org/suse/alp/workloads/publish/tumbleweed_containerfiles/suse/alp/workloads/yast-mgmt-qt
    Trying to pull registry.opensuse.org/suse/alp/workloads/publish/tumbleweed_containerfiles/suse/alp/workloads/yast-mgmt-qt:latest...
    Getting image source signatures
    Copying blob 6bfbcdeee2ec done
    Writing manifest to image destination
    Storing signatures
  3. If the container is running, identify its ID and stop it:

    # podman ps
    CONTAINER ID  IMAGE                                                                             COMMAND     CREATED         STATUS
    28fef404417b /workloads/tumbleweed_containerfiles/suse/alp/workloads/yast-mgmt-ncurses:latest               2 weeks ago     Up 24 seconds ago
    # podman stop 28fef404417b
  4. Run the container following specific instructions at Section 1.5, “Available workloads for the Adaptable Linux Platform”, for example:

    # podman container runlabel run \
     registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/yast-mgmt-ncurses:latest

Committing modified containers

You can run a new container with specific attributes that are not part of the original image. To save the container with these attributes as a new image, you can use the podman commit command:


# podman commit [OPTIONS] CONTAINER IMAGE

CONTAINER is a container name or a container ID. IMAGE is the new image name. If the image name does not start with a registry name, the value localhost is used.
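A short sketch; the container and image names are illustrative:

```shell
# Save the current state of the container "web" as a new image;
# without a registry prefix it is stored as localhost/my-image:latest
podman commit web my-image
podman image ls localhost/my-image
```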

When using Cockpit, you can perform the commit operation directly from a container's Details, by clicking Commit. A dialog box opens. Specify all required details as shown below and click Commit:

Committing a container in Cockpit
Figure 4.2: Committing a container in Cockpit

Listing containers

Podman enables you to list all running containers using the podman ps command. The generic syntax of the command is as follows:

# podman ps [OPTIONS]

Command options can change the displayed information. For example, using the --all option will output all containers created by Podman (not only the running containers).
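The output can also be reduced with a Go template; the fields below are standard podman ps format fields:

```shell
# Show only the name and status of every container, including stopped ones
podman ps --all --format "{{.Names}}\t{{.Status}}"
```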

For a complete list of podman ps options, run:

# podman ps --help

Removing containers

To remove one or more unused containers from the host, use the podman rm command as follows:


# podman rm [OPTIONS] CONTAINER

CONTAINER can be a container name or a container ID.

The command does not remove the specified container if the container is running. To remove a running container, use the -f option.

For a complete list of podman rm options, run:

# podman rm --help
Note: Deleting all stopped containers

You can delete all stopped containers from your host with a single command:

# podman container prune

Make sure that each stopped container is intended to be removed before you run the command, otherwise you might remove containers that are still in use and were stopped only temporarily.

4.6.3 Working with pods

Containers can be grouped into a pod. The containers in the pod then share the network, PID, and IPC namespaces. Pods can be managed by podman pod commands. This section provides an overview of the commands for managing pods.

Creating pods

The command podman pod create is used to create a pod. The syntax of the command is as follows:

# podman pod create [OPTIONS]

The command outputs the pod ID. By default, the pods are created without being started. You can start a pod by running a container in the pod, or by starting the pod as described in the section “Starting/stopping/restarting pods”.

Note: Default pod names

If you do not specify a pod name with the --name option, Podman will assign a default name for the pod.
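For example, a pod can be created with an explicit name and a published port (values are illustrative). Note that port bindings belong to the pod's infrastructure container and cannot be changed later:

```shell
podman pod create --name web-pod -p 8080:80
```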

For a complete list of possible options, run the following command:

# podman pod create --help

Listing pods

You can list all pods by running the command:

# podman pod list

The output looks as follows:

POD ID        NAME               STATUS   CREATED       # OF CONTAINERS  INFRA ID
30fba506fecb  upbeat_mcclintock  Created  19 hours ago  1                4324f40c9651
976a83b4d88b  nervous_feynman    Running  19 hours ago  2                daa5732ecd02

As each pod includes the INFRA container, the number of containers in a pod is always larger than zero.

Starting/stopping/restarting pods

After a pod is created, you must start it, because it is not in the running state by default. In the commands below, POD can be a pod name or a pod ID.

To start a pod, run the command:

# podman pod start [OPTIONS] POD

For a complete list of possible options, run:

# podman pod start --help

To stop a pod, use the podman pod stop as follows:

# podman pod stop POD

To restart a pod, use the podman pod restart command as follows:

# podman pod restart POD

Managing containers in a pod

To add a new container to a pod, use the podman run command with the option --pod. A general syntax of the command follows:

# podman run [OPTIONS] --pod POD_NAME IMAGE

For details about the podman run command, refer to the section “Running containers”.

Note: Only new containers can be added to a pod

The podman start command does not allow for starting a container in a pod if the container was not added to the pod during the container's initial running.

You cannot remove a container from a pod and keep the container running, because the container itself is removed from the host.

Other actions like start, restart and stop can be performed on specific containers without affecting the status of the pod.

Removing pods

There are two ways to remove pods. You can use the podman pod rm command to remove one or more pods. Alternatively, you can remove all stopped pods using the podman pod prune command.

To remove a pod or several pods, run the podman pod rm command as follows:

# podman pod rm POD

POD can be a pod name or a pod ID.

To remove all currently stopped pods, use the podman pod prune command. Make sure that all stopped pods are intended to be removed before you run the podman pod prune command, otherwise you might remove pods that are still in use.

Monitoring processes in pods

To view all containers in all pods, use the following command:

# podman ps -a --pod

The output of the command will be similar to the following one:

CONTAINER ID  IMAGE                       COMMAND    CREATED       STATUS                 [...]
4324f40c9651  k8s.gcr.io/pause:3.2                   21 hours ago  Created
daa5732ecd02  k8s.gcr.io/pause:3.2                   22 hours ago  Up 3 hours ago
e5c8e360c54b  localhost/test:latest       /bin/bash  3 days ago    Exited (137) 3 days ago
82dad15828f7  localhost/opensuse/toolbox  /bin/bash  3 days ago    Exited (137) 3 days ago
1a23da456b6f  docker.io/i386/ubuntu       /bin/bash  4 days ago    Exited (0) 6 hours ago
df890193f651  localhost/opensuse/toolbox  /bin/bash  4 days ago    Created

The first two records are the INFRA containers of each pod, based on the k8s.gcr.io/pause:3.2 image. Other containers in the output are stand-alone containers that do not belong to any pod.

Running the YaST workload using Podman

4.7.1 Introduction

This article describes how to start the YaST workload on the Adaptable Linux Platform (ALP).

4.7.2 Requirements

  • Deployed ALP base OS.

  • Installed and enabled Podman.

4.7.3 Starting YaST in text mode

To start the text version (ncurses) of YaST as a workload, follow these steps:

  1. Identify the full URL address in a registry of container images, for example:

    > podman search yast-mgmt-ncurses
  2. To start the container, run the following command:

    # podman container runlabel run \
    YaST running in text mode on ALP
    Figure 4.3: YaST running in text mode on ALP

4.7.4 Starting graphical YaST

To start the graphical Qt version of YaST as a workload, follow these steps:

  1. To view the graphical YaST on your local X server, you need to use SSH X forwarding. This requires the xauth package to be installed, with the change applied by rebooting the host:

    # transactional-update pkg install xauth && reboot
  2. Connect to the ALP host using ssh with the X forwarding enabled:

    > ssh -X ALP_HOST
  3. Identify the full URL address in a registry of container images, for example:

    > podman search yast-mgmt-qt
  4. To start the container, run the following command:

    # podman container runlabel run \
    Running graphical YaST on top of ALP
    Figure 4.4: Running graphical YaST on top of ALP

Running the KVM virtualization workload using Podman

4.8.1 Introduction

This article describes how to run KVM VM Host Server on the Adaptable Linux Platform (ALP).

4.8.2 Requirements

  • Deployed ALP base OS.

  • When running ALP in a virtualized environment, you need to enable nested KVM virtualization on the bare-metal host operating system and use the kernel-default kernel instead of the default kernel-default-base kernel in ALP.

  • Installed and enabled Podman.

4.8.3 Starting the KVM workload

ALP can serve as a host running virtual machines. The following procedure describes steps to prepare the ALP host to run containerized KVM VM Host Server and run an example VM Guest on top of it.

  1. Identify the KVM workload image:

    # podman search kvm
  2. Pull the image from the registry and install all the wrapper scripts:

    # podman container runlabel install registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/kvm:latest
  3. Create the libvirtd container from the downloaded image:

    # kvm-container-manage.sh create
  4. Start the container:

    # kvm-container-manage.sh start
  5. Optionally, run a VM Guest on top of the started KVM VM Host Server using the virt-install.sh script.


    virt-install.sh uses the openSUSE-Tumbleweed-JeOS.x86_64-OpenStack-Cloud.qcow2 image by default. To specify another VM image, modify the APPLIANCE_MIRROR and APPLIANCE options in the /etc/kvm-container.conf file.


    virsh.sh is a wrapper script to launch the virsh command inside the container (the default container name is libvirtd).

    > virt-install.sh
    Starting install...
    Password for first root login is: OPjQok1nlfKp5DRZ
    Allocating 'Tumbleweed-JeOS_5221fd7860.qcow2'            |    0 B  00:00:00 ...
    Creating domain...                                       |    0 B  00:00:00
    Running text console command: virsh --connect qemu:///system console Tumbleweed-JeOS_5221fd7860
    Connected to domain 'Tumbleweed-JeOS_5221fd7860'
    Escape character is ^] (Ctrl + ])
    Welcome to openSUSE Tumbleweed 20220919 - Kernel 5.19.8-1-default (hvc0).
    eth0: fe80::5054:ff:fe5a:c416
    localhost login:

Usage of the kvm-container-manage.sh script

The kvm-container-manage.sh script is used to manage the KVM server container on the Adaptable Linux Platform (ALP). This article lists each subcommand of the script and describes its purpose.

kvm-container-manage.sh create

Creates a KVM server container from a previously downloaded container image. To download the images, use podman, for example:

# podman container runlabel install registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/kvm:latest
kvm-container-manage.sh start

Starts the KVM server container.

kvm-container-manage.sh virsh list

Lists all running VM Guests. Append the --all option to list all VM Guests, both running and stopped.

kvm-container-manage.sh stop

Stops the running KVM server container.

kvm-container-manage.sh uninstall

Cleans the host environment by uninstalling all files that were required to run the KVM server container.

Running the Cockpit Web server using Podman

4.9.1 Introduction

This article describes how to run a containerized Cockpit Web server on the Adaptable Linux Platform (ALP) using Podman.


An alternative way of installing and enabling the Cockpit Web server is described in https://en.opensuse.org/openSUSE:ALP/Workgroups/SysMngmnt/Cockpit#Install_the_Web_Server_Via_Packages.

4.9.2 Requirements

  • Deployed ALP base OS.

  • Installed and enabled Podman.

  • Installed the alp_cockpit pattern.

4.9.3 Starting the Cockpit workload

Cockpit is a tool to administer one or more hosts from one place via a Web user interface. Its default functionality is extended by plug-ins that you can install additionally. You do not need the Cockpit Web user interface installed on every ALP host. One instance of the Web interface can connect to multiple hosts if they have the alp_cockpit pattern installed.

ALP has the base part of the Cockpit component installed by default. It is included in the alp_cockpit pattern. To install and run Cockpit's Web interface, follow these steps:

  1. Identify the Cockpit Web server workload image:

    # podman search cockpit-ws
  2. Pull the image from the registry:

    # podman container runlabel install \
  3. Run the Cockpit's containerized Web server:

    # podman container runlabel --name cockpit-ws run \
  4. To run the Cockpit's Web server on each ALP boot, enable its service:

    # systemctl enable cockpit.service
  5. To view the Cockpit Web user interface, point your Web browser to the following address and accept the self-signed certificate:

    Cockpit running on ALP
    Figure 4.5: Cockpit running on ALP

4.9.4 Next steps

Adding more functionality to Cockpit

Introduction

After you deploy Cockpit on the Adaptable Linux Platform (ALP), it provides default functionality. The following sections describe how to extend it by installing additional Cockpit extensions. Note that you need to reboot ALP to apply the changes.


Some packages described in this article are available from the ALP-Build repository which may be disabled by default. To make sure the repository is enabled, run the following command:

# zypper mr -e ALP-Build && zypper refresh

Metrics

To enable the visualization of some current metrics, install the PCP extension:

# transactional-update pkg install cockpit-pcp
# reboot
Metrics and history in Cockpit
Figure 4.6: Metrics and history in Cockpit

Software updates

To be able to perform transactional software updates from Cockpit, install the cockpit-tukit package:

# transactional-update pkg install cockpit-tukit
# reboot
Software Updates in Cockpit
Figure 4.7: Software updates in Cockpit

Storage devices

To manage local storage devices and their associated technologies, install the cockpit-storaged package:

# transactional-update pkg install cockpit-storaged
# reboot
Storage in Cockpit
Figure 4.8: Storage in Cockpit