This guide introduces the Adaptable Linux Platform (ALP)—its deployment, system management, software installation, and the running of containerized workloads. To help improve this ALP documentation, you can find its sources at https://github.com/SUSE/doc-modular/edit/main/xml/.
- WHAT?
ALP is a lightweight operating system. Instead of applications distributed in traditional software packages, it runs containerized and virtualized workloads.
- WHY?
This guide provides an overview of what ALP is and how it differs from traditional operating systems. It also describes how to administer ALP and how to install and manage individual workloads.
- EFFORT?
To understand the concepts and perform the tasks described in this guide, you need solid knowledge of and practical experience with the Linux operating system.
- GOAL!
After having read this guide, you will be able to deploy ALP, modify its file system in a transactional way, and install and run specific workloads on top of it.
- 1 General description
- 2 Deployment
- 3 Transactional updates
- 4 Containers and Podman
- 5 Workloads
- 5.1 Introduction
- 5.2 YaST
- 5.3 KVM
- 5.4 Cockpit Web server
- 5.5 GDM
- 5.6 firewalld
- 5.7 Grafana
- 5.8 Related topics
- 5.9 Running the YaST workload using Podman
- 5.10 Running the KVM virtualization workload using Podman
- 5.11 Running the Cockpit Web server using Podman
- 5.12 Running the GNOME Display Manager workload using Podman
- 5.13 Running firewalld using Podman
- 5.14 Running the Grafana workload using Podman
- 2.1 Set UEFI firmware for the encrypted ALP image
- 2.2 Add an emulated TPM device
- 2.3 The D-Installer GUI
- 2.4 Configuring a wired connection
- 2.5 Configuring ALP storage
- 2.6 Advanced storage settings
- 2.7 Creating a user account
- 2.8 ALP installation in progress
- 2.9 ALP boot screen
- 2.10 JeOS Firstboot screen
- 2.11 Enter root password
- 2.12 Select method for encryption
- 4.1 Pods architecture
- 4.2 Committing a container in Cockpit
- 5.1 YaST running in text mode on ALP
- 5.2 Running graphical YaST on top of ALP
- 5.3 Cockpit running on ALP
- 5.4 Metrics and history in Cockpit
- 5.5 Software updates in Cockpit
- 5.6 Storage in Cockpit
- 5.7 GNOME Settings on top of ALP
- 5.8 Grafana data sources
- 5.9 Prometheus URL configuration in Grafana
- 5.10 Creating a Grafana dashboard
- 5.11 New Grafana dashboard
1 General description #
1.1 What is ALP? #
The Adaptable Linux Platform (ALP) is a lightweight operating system. Instead of applications distributed in traditional software packages, it runs containerized and virtualized workloads.
1.2 Core components of ALP #
The Adaptable Linux Platform (ALP) consists of the following components:
- Base operating system
The core of ALP which runs all required services. It is an immutable operating system with a read-only root file system. The file system is modified by transactional updates which utilize the snapshotting feature of BTRFS.
- Transactional updates
The transactional-update command performs changes on the file system. You can use it to install software, update existing workloads, or apply software patches. Because it uses file system snapshots, applied changes can be easily rolled back.
- Container orchestration
ALP runs containerized workloads instead of applications packed in software packages. The default container orchestrator in ALP is Podman which is responsible for managing containers and container images.
- Containerized workloads
Workloads replace traditional applications. A containerized workload contains all software dependencies required to run a specific application or tool.
- Cockpit
A Web-based graphical interface to administer single or multiple ALP workloads from one place. It helps you manage, for example, user accounts, network settings, or container orchestration.
1.3 Benefits of ALP #
The Adaptable Linux Platform offers the following customer benefits:
High security of running workloads.
Minimal maintenance effort to keep the workloads up to date.
Stable immutable base operating system that utilizes transactions when modifying the file system.
Ability to roll back modifications on the file system in case the transaction result is undesirable.
2 Deployment #
2.1 Introduction #
The Adaptable Linux Platform (ALP) is distributed either as a disk image of the ALP installer named D-Installer, or as a pre-built ALP raw disk image.
2.1.1 D-Installer #
While D-Installer handles both bare-metal and virtualized deployments, it is the preferred method for bare-metal deployments. ALP deployment using D-Installer is similar to a traditional operating system setup. After booting the D-Installer image, the installer uses a user-friendly graphical interface to walk you through the system configuration and deployment.
2.1.2 Raw disk image #
This method also handles both bare-metal and virtualized deployments. It differs from the D-Installer deployment in that you do not boot an installer but the actual ALP image itself. On first boot, you can configure basic system options using an ncurses user interface. Using a raw disk image, you can fine-tune the deployment setup with the Combustion and Ignition tools.
2.2 Hardware requirements #
The minimum supported hardware requirements for deploying ALP follow:
- CPU
AMD64/Intel 64 and AArch64 CPU architectures are supported.
- Maximum number of CPUs
The maximum number of CPUs supported by software design is 8192.
- Memory
ALP requires at least 1 GB RAM. Bear in mind that this is the minimum value for the operating system; the actual memory requirement depends on the workload.
- Hard disk
The minimum hard disk space is 12 GB, while the recommended value is 20 GB of hard disk space. Adjust the value according to the workloads of your containers.
2.4 Preparing an ALP virtual machine #
2.4.1 Introduction #
This article describes how to configure a new virtual machine suitable for the ALP deployment by using the Virtual Machine Manager.
2.4.2 Requirements #
A VM Host Server with KVM hypervisor.
Depending on the installation method, download either the ALP raw disk image from https://download.opensuse.org/repositories/SUSE:/ALP:/PUBLISH/images or the D-Installer image from https://download.opensuse.org/repositories/SUSE:/ALP:/PUBLISH/images/iso on the VM Host Server where you will run virtualized ALP.
Note: For the raw disk image deployment, there are two types of images, depending on whether you intend to run ALP on an encrypted disk or an unencrypted disk.
As of now, the encrypted raw disk image does not expand to the full disk capacity automatically. As a workaround, the following steps are required:
1. Use the qemu-img command to increase the disk image to the desired size.
2. Set up the virtual machine and boot it. When the JeOS Firstboot wizard asks you which method to use for encryption, select the desired method.
3. When the system is ready, use the parted command to resize the partition where the LUKS device resides (for example, partition number 3) to the desired size.
4. Run the cryptsetup resize luks command. When asked, enter the passphrase to resize the encrypted device.
5. Run the transactional-update shell command to open a read-write shell in the current disk snapshot. Then resize the BTRFS file system to the desired size, for example:
# btrfs fi resize max /
6. Leave the shell with exit and reboot the system with reboot.
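For illustration, the individual steps might look as follows. This sketch assumes the encrypted image file name used later in this guide, a disk presented as /dev/vda inside the virtual machine, and the LUKS partition being partition number 3; adjust the names and sizes to your environment.
On the VM Host Server, before the first boot:
> qemu-img resize ALP-VM.x86_64-0.0.1-kvm_encrypted-Build15.18.qcow2 30G
Inside the booted ALP virtual machine:
# parted /dev/vda resizepart 3 100%
# cryptsetup resize luks
# transactional-update shell
# btrfs filesystem resize max /
# exit
# reboot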
2.4.3 Configuring a virtual machine for ALP deployment #
1. Start Virtual Machine Manager and create a new virtual machine. For deployment using D-Installer, select the option to install from local install media (the ISO image). For the raw disk deployment, select the option to import an existing disk image. Confirm your choice.
2. Specify the path to the ALP disk image that you previously downloaded and the type of Linux OS you are deploying, for example, Generic Linux 2020. Confirm.
3. Specify the amount of memory and the number of processors that you want to assign to the ALP virtual machine and confirm.
4. For deployment using D-Installer, enable storage for the virtual machine and specify the size of the disk image.
5. Specify the name for the virtual machine and the network to be used.
6. If you are deploying an encrypted ALP image, perform these additional steps:
   a. Enable the option to customize the configuration before installation and confirm.
   b. In the boot options, change the boot method from BIOS to UEFI for secure boot and apply the change.
   Figure 2.1: Set UEFI firmware for the encrypted ALP image #
   c. Add a Trusted Platform Module (TPM) device: add new hardware, select the TPM device, and choose the emulated type.
   Figure 2.2: Add an emulated TPM device #
   d. Confirm the settings and start the ALP deployment.
7. For the raw disk image deployment, to deploy ALP with only minimal setup options, confirm the settings. The disk image will be booted, and JeOS Firstboot will take care of the deployment. Refer to Section 2.6.2, “Deploying ALP with JeOS Firstboot” for next steps.
   Tip: You can fine-tune the deployment setup by using the Ignition or Combustion tools. For more details, refer to Section 2.6.6, “Configuring with Ignition” and Section 2.6.7, “Configuring with Combustion”.
8. To continue the deployment by using D-Installer, confirm the settings and continue with Section 2.5, “Deploying ALP using D-Installer”.
2.4.4 Summary #
Now your virtualization environment is ready for the ALP deployment process.
2.4.5 Next steps #
For deployment using D-Installer, refer to Section 2.5, “Deploying ALP using D-Installer”.
For raw disk deployment, continue with Section 2.6.2, “Deploying ALP with JeOS Firstboot”.
2.5 Deploying ALP using D-Installer #
2.5.1 Introduction #
This article describes how to deploy the Adaptable Linux Platform (ALP) using D-Installer.
2.5.2 Deploying ALP using D-Installer #
Download the ALP D-Installer image from https://download.opensuse.org/repositories/SUSE:/ALP:/PUBLISH/images/iso. For example:
> curl -LO https://download.opensuse.org/repositories/SUSE:/ALP:/PUBLISH/images/iso/d-installer-live.x86_64-0.6-ALP-Build3.6.iso
If you are deploying ALP as a VM Guest, you need to first prepare the virtual machine. To do this, follow the steps in Section 2.4, “Preparing an ALP virtual machine”.
1. After booting the D-Installer image, select the installation entry from the boot menu. A graphical installer will appear with configuration options affecting the ALP deployment.
   Figure 2.3: The D-Installer GUI #
2. Click the default language and select your preferred language from the drop-down list.
3. Configure network settings. Click the default wired connection and configure the network according to your needs. You can, for example, change the networking mode, add IP addresses and related prefixes or netmasks, gateway, and DNS servers.
   Figure 2.4: Configuring a wired connection #
   You can also configure a wireless connection to utilize your local wireless network.
4. Configure storage.
   Figure 2.5: Configuring ALP storage #
   You can select a device for ALP installation or specify advanced storage settings, for example, whether to use LVM or encrypted devices.
   Figure 2.6: Advanced storage settings #
   Tip: If you enable disk encryption, you will be asked for a decryption password on each reboot. Because the GRUB 2 boot loader does not enable switching keyboard layouts, select a password made of alphanumeric characters and be aware of national keyboard layout differences. For extended post-deployment information about disk encryption, refer to Section 2.7.2, “Full disk encryption”.
5. In the user settings section, specify a root password, upload an SSH public key, or create an additional user account and enable auto-login for it.
   Figure 2.7: Creating a user account #
6. To begin the installation, click the install button and confirm. The installation process will start.
   Figure 2.8: ALP installation in progress #
7. After the installation is finished, reboot the machine and select the installed system from the boot menu.
2.5.3 Summary #
After the deployment of ALP is finished, you are presented with the login prompt. Log in as root, and you are ready to set up the system and install additional workloads.
2.5.4 Next steps #
Install additional software with transactional-update. Refer to Chapter 3, Transactional updates for more details.
Install and run additional workloads. Refer to Chapter 5, Workloads for more details.
2.6 Deploying ALP using a raw disk image #
2.6.1 Introduction #
This article describes how to deploy the Adaptable Linux Platform (ALP) raw disk image. It applies to ALP running both on encrypted and unencrypted disk.
2.6.1.1 First boot detection #
The deployment configuration runs on the first boot only. To distinguish between the first and subsequent boots, the flag file /boot/writable/firstboot_happened is created after the first boot finishes. If the file is not present in the file system, the attribute ignition.firstboot is passed to the kernel command line, which triggers the configuration in initramfs (Ignition) or the run of a specific dracut module (Combustion). After completing the first boot, the /boot/writable/firstboot_happened flag file is created.
Note that the /boot/writable/firstboot_happened flag file is created even if the configuration was not successful, for example, because of improper or missing configuration files.
You can force the first boot configuration on a subsequent boot by passing the ignition.firstboot attribute to the kernel command line or by deleting the /boot/writable/firstboot_happened flag file.
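For example, to force the configuration to run again on the next boot, you can simply remove the flag file before rebooting (alternatively, add ignition.firstboot to the kernel command line in the boot loader):
# rm /boot/writable/firstboot_happened
# reboot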
2.6.1.2 Default partitioning #
The pre-built images are delivered with a default partitioning scheme. You can change it during the first boot by using Ignition or Combustion.
If you intend to perform any changes to the default partitioning scheme, the root file system must be BTRFS.
Each image has the following subvolumes:
/home /root /opt /srv /usr/local /var
The /etc directory is mounted as overlayfs, where the upper directory is mounted to /var/lib/overlay/1/etc/.
You can recognize the subvolumes mounted by default by the option x-initrd.mount in /etc/fstab.
Other subvolumes or partitions must be configured either by Ignition or Combustion.
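An illustrative /etc/fstab entry for one of the default subvolumes might look like the following line; the UUID and subvolume path are placeholders, not values from a real installation:
UUID=402f4a4e-0000-4e2a-8f1c-111111111111 /home btrfs defaults,subvol=/@/home,x-initrd.mount 0 0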
2.6.2 Deploying ALP with JeOS Firstboot #
When booting the ALP raw image for the first time, JeOS Firstboot enables you to perform a minimal configuration of your system. If you need more control over the deployment process, find more information in Section 2.6.6, “Configuring with Ignition” and Section 2.6.7, “Configuring with Combustion”.
Download the ALP raw disk image from https://download.opensuse.org/repositories/SUSE:/ALP:/PUBLISH/images. There are two types of images, depending on whether you intend to run ALP on an encrypted disk or an unencrypted disk.
For example, for the unencrypted image:
> curl -LO https://download.opensuse.org/repositories/SUSE:/ALP:/PUBLISH/images/ALP-VM.x86_64-0.0.1-kvm-Build15.17.qcow2
And for the encrypted image:
> curl -LO https://download.opensuse.org/repositories/SUSE:/ALP:/PUBLISH/images/ALP-VM.x86_64-0.0.1-kvm_encrypted-Build15.18.qcow2
If you are deploying ALP as a VM Guest, you need to first prepare the virtual machine by following Section 2.4, “Preparing an ALP virtual machine”.
1. After booting the ALP disk image, you will be presented with a boot loader screen. Select the boot entry and confirm with Enter.
   Figure 2.9: ALP boot screen #
2. JeOS Firstboot displays a welcome screen. Confirm with Enter.
   Figure 2.10: JeOS Firstboot screen #
3. On the next screens, select the keyboard layout, confirm the license agreement, and select the time zone.
4. In the password dialog, enter a password for root and confirm it.
   Figure 2.11: Enter root password #
5. When deploying with an encrypted disk, follow these additional steps:
   a. Select the desired protection method and confirm.
   Figure 2.12: Select method for encryption #
   b. Enter a recovery password for LUKS encryption and retype it. The root file system re-encryption will begin.
ALP is successfully deployed using a minimal initial configuration.
2.6.3 Summary #
After the deployment of ALP is finished, you are presented with the login prompt. Log in as root, and you are ready to set up the system and install additional workloads.
2.6.4 Next steps #
Install additional software with transactional-update. Refer to Chapter 3, Transactional updates for more details.
Install and run additional workloads. Refer to Chapter 5, Workloads for more details.
2.6.6 Configuring with Ignition #
2.6.6.1 What is Ignition? #
Ignition is a provisioning tool that enables you to configure a system according to your specification on the first boot.
2.6.6.2 How does Ignition work? #
When the system is booted for the first time, Ignition is loaded as part of an initramfs and searches for a configuration file within a specific directory (on a USB flash disk), or you can provide a URL. All changes are performed before the kernel switches from the temporary file system to the real root file system (before the switch_root command is issued).
Ignition uses a configuration file in the JSON format named config.ign. For better human readability, you can create a YAML file and convert it to JSON. For details, refer to Section 2.6.6.4, “Converting YAML formatted files into JSON”.
2.6.6.2.1 config.ign #
When installing on bare metal, the configuration file config.ign must reside in the ignition subdirectory on the configuration media labeled ignition. The directory structure must look as follows:
<root directory>
└── ignition
    └── config.ign
If you intend to configure a virtual machine with Virtual Machine Manager (libvirt), provide the path to the config.ign file in its XML definition, for example:
<domain ... >
  <sysinfo type="fwcfg">
    <entry name="opt/com.coreos/config" file="/location/to/config.ign"/>
  </sysinfo>
</domain>
The config.ign file contains various data types: objects, strings, integers, booleans and lists of objects. For a complete specification, refer to Ignition specification v3.3.0.
The version attribute is mandatory and, in the case of ALP, its value must be set either to 3.3.0 or to any lower version. Otherwise, Ignition will fail.
If you want to log in to your system as root, you must at least include a password for root. However, it is recommended to establish access via SSH keys. To configure a password, make sure to use a secure one. If you use a randomly generated password, use at least 10 characters. If you create your password manually, use even more than 10 characters and combine uppercase and lowercase letters and numbers.
2.6.6.4 Converting YAML formatted files into JSON #
2.6.6.4.1 Introduction #
JSON is a universal file format for storing structured data. Applications, for example, Ignition, use it to store and retrieve their configuration. Because JSON's syntax is complex and hard to read for human beings, you can write the configuration in a more friendly format called YAML and then convert it into JSON.
2.6.6.4.2 Converting YAML files into JSON format #
The tool that converts Ignition-specific vocabularies in YAML files
into JSON format is butane
. It also verifies the
syntax of the YAML file to catch potential errors in the structure. For
the latest version of butane
, add the following
repository:
>
sudo
zypper ar -f \
  https://download.opensuse.org/repositories/devel:/kubic:/ignition/openSUSE_Tumbleweed/ \
  devel_kubic_ignition
Replace openSUSE_Tumbleweed
with one of the following
(depending on your distribution):
'openSUSE_Leap_$releasever'
15.3
Now you can install the butane
tool:
>
sudo
zypper ref && zypper in butane
After the installation is complete, you can invoke
butane
by running:
>
butane -p -o config.ign config.fcc
config.fcc
is the path to the input YAML configuration file.
config.ign is the path to the output JSON configuration file.
The -p command option adds line breaks to the output file and thus makes it more readable.
2.6.6.4.3 Summary #
After you completed the described steps, you can write and store configuration files in YAML format while providing them in JSON format if applications, for example, Ignition, require it.
2.6.6.5 Ignition configuration examples #
2.6.6.5.1 Configuration examples in YAML #
This section provides some common examples of Ignition configuration in the YAML format. Note that Ignition does not accept configuration in the YAML format but rather JSON. To convert a YAML file to the JSON format, use the butane tool as described in Section 2.6.6.4.1, “Introduction”.
Note: The version attribute is mandatory. Each config.fcc must include version 1.4.0 or lower, which is then converted to the corresponding Ignition specification.
2.6.6.5.1.1 Storage configuration #
The storage
attribute is used to configure
partitions, RAID, define file systems, create files, etc. To define
partitions, use the disks
attribute. The
filesystems
attribute is used to format partitions
and define mount points of particular partitions. The
files
attribute can be used to create files in the
file system. Each of the mentioned attributes is described in the
following sections.
2.6.6.5.1.1.1 The disks
attribute #
The disks
attribute is a list of devices that
enables you to define partitions on these devices. The
disks
attribute must contain at least one
device
, other attributes are optional. The
following example will use a single virtual device and divide the
disk into four partitions:
variant: fcos
version: 1.0.0
storage:
  disks:
    - device: "/dev/vda"
      wipe_table: true
      partitions:
        - label: root
          number: 1
          type_guid: 4F68BCE3-E8CD-4DB1-96E7-FBCAF984B709
        - label: boot
          number: 2
          type_guid: BC13C2FF-59E6-4262-A352-B275FD6F7172
        - label: swap
          number: 3
          type_guid: 0657FD6D-A4AB-43C4-84E5-0933C84B4F4F
        - label: home
          number: 4
          type_guid: 933AC7E1-2EB4-4F13-B844-0E14E2AEF915
2.6.6.5.1.1.2 The raid
attribute #
The raid
is a list of RAID arrays. The following
attributes of raid
are mandatory:
- level
a level of the particular RAID array (linear, raid0, raid1, raid2, raid3, raid4, raid5, raid6)
- devices
a list of devices in the array referenced by their absolute paths
- name
a name that will be used for the md device
variant: fcos
version: 1.0.0
storage:
  raid:
    - name: system
      level: raid1
      devices:
        - "/dev/sda"
        - "/dev/sdb"
2.6.6.5.1.1.3 The filesystems
attribute #
filesystems
must contain the following attributes:
- device
the absolute path to the device, typically
/dev/sda
in case of physical disk- format
the file system format (btrfs, ext4, xfs, vfat or swap)
Note: In the case of ALP, the root file system must be formatted to btrfs.
The following example demonstrates using the
filesystems
attribute. The
/opt
directory will be mounted to the
/dev/sda1
partition, which is formatted to btrfs.
The device will not be erased.
variant: fcos
version: 1.0.0
storage:
  filesystems:
    - path: /opt
      device: "/dev/sda1"
      format: btrfs
      wipe_filesystem: false
2.6.6.5.1.1.4 The files
attribute #
You can use the files
attribute to create any
files on your machine. Bear in mind that if you want to create files
outside the default partitioning schema, you need to define the
directories by using the filesystems
attribute.
In the following example, a host name is created by using the
files
attribute. The file
/etc/hostname
will be created with the
alp-1 host name:
variant: fcos
version: 1.0.0
storage:
  files:
    - path: /etc/hostname
      mode: 0644
      overwrite: true
      contents:
        inline: "alp-1"
2.6.6.5.1.1.5 The directories
attribute #
The directories
attribute is a list of directories
that will be created in the file system. The
directories
attribute must contain at least one
path
attribute.
variant: fcos
version: 1.0.0
storage:
  directories:
    - path: /home/tux
      user:
        name: tux
2.6.6.5.1.2 Users administration #
The passwd
attribute is used to add users. If you
intend to log in to your system, create root
and set the
root
's password and/or add the SSH key to the Ignition
configuration. You need to hash the root
password, for example by
using the openssl
command:
openssl passwd -6
The command creates a hash of the password you chose. Use this hash as
the value of the password_hash
attribute.
variant: fcos
version: 1.0.0
passwd:
  users:
    - name: root
      password_hash: "$6$PfKm6Fv5WbqOvZ0C$g4kByYM.D2B5GCsgluuqDNL87oeXiHqctr6INNNmF75WPGgkLn9O9uVx4iEe3UdbbhaHbTJ1vpZymKWuDIrWI1"
      ssh_authorized_keys:
        - ssh-rsa long...key user@host
The users
attribute must contain at least one
name
attribute.
ssh_authorized_keys
is a list of ssh keys for the
user.
2.6.6.5.1.3 Enabling systemd
services #
You can enable systemd
services by specifying them in the
systemd
attribute.
variant: fcos
version: 1.0.0
systemd:
  units:
    - name: sshd.service
      enabled: true
The name
must be the exact name of a service to be
enabled (including the suffix).
2.6.7 Configuring with Combustion #
2.6.7.1 What is Combustion? #
Combustion is a dracut module that enables you to configure your system on the first boot. You can use Combustion, for example, to change the default partitions, set user passwords, create files, or install packages.
2.6.7.2 How does Combustion work? #
Combustion is invoked after the ignition.firstboot
argument is passed to the kernel command line. Combustion reads a
provided file named script
, executes included
commands, and thus performs changes to the file system. If
script
includes the network flag, Combustion tries
to configure the network. After /sysroot
is mounted,
Combustion tries to activate all mount points in
/etc/fstab
and then calls
transactional-update
to apply other changes, for
example, setting root
password or installing packages.
2.6.7.2.1 The script
file #
When installing on bare metal, the configuration file script must reside in the combustion subdirectory on the configuration media labeled combustion. The directory structure must look as follows:
<root directory>
└── combustion
    └── script
    └── other files
If you intend to configure a virtual machine with Virtual Machine Manager (libvirt), provide the path to the script file in its XML definition, for example:
<domain ... >
  <sysinfo type="fwcfg">
    <entry name="opt/org.opensuse.combustion/script" file="/location/to/script"/>
  </sysinfo>
</domain>
Combustion can be used along with Ignition. If you intend to do so, label your configuration medium ignition and include the ignition directory with the config.ign in your directory structure as shown below:
<root directory>
└── combustion
    └── script
    └── other files
└── ignition
    └── config.ign
In this scenario, Ignition runs before Combustion.
2.6.7.4 Combustion configuration examples #
2.6.7.4.1 The script configuration file #
The script
configuration file is a set of commands
that are parsed and executed by Combustion in a transactional-update
shell. This
article provides examples of configuration tasks performed by
Combustion.
As the script
file is interpreted by Bash, always
start the file with the interpreter declaration at its first line:
#!/usr/bin/bash
To log in to your system, include at least the root
password.
However, it is recommended to establish the authentication using SSH
keys. If you need to use a root
password, make sure to configure a
secure password. If you use a randomly generated password, use at least
10 characters. If you create your password manually, use even more than
10 characters and combine uppercase and lowercase letters and numbers.
2.6.7.4.1.1 Network configuration #
To configure and use the network connection during the first boot, add
the following statement to script
:
# combustion: network
Using this statement will pass the rd.neednet=1
argument to dracut. If you do not use the statement, the system will be
configured without any network connection.
2.6.7.4.1.2 Partitioning #
ALP raw images are delivered with a default partitioning scheme
as described in Section 2.6.1.2, “Default partitioning”.
You might want to use a different partitioning. The following set of
example snippets moves the /home
to a different
partition.
The following script performs changes that are not included in
snapshots. If the script fails and the snapshot is discarded, some
changes remain visible and cannot be reverted, for example, the
changes to the /dev/vdb
device.
The following snippet creates a GPT partitioning schema with a single
partition on the /dev/vdb
device:
sfdisk /dev/vdb <<EOF
label: gpt
type=linux
EOF
partition=/dev/vdb1
The partition is formatted to BTRFS:
wipefs --all ${partition}
mkfs.btrfs ${partition}
Possible content of /home
is moved to the new
/home
folder location by the following snippet:
mount /home
mount ${partition} /mnt
rsync -aAXP /home/ /mnt/
umount /home /mnt
The snippet below removes an old entry in
/etc/fstab
and creates a new entry:
awk -i inplace '$2 != "/home"' /etc/fstab
echo "$(blkid -o export ${partition} | grep ^UUID=) /home btrfs defaults 0 0" >>/etc/fstab
2.6.7.4.1.3 Setting a password for root
#
Before you set the root password, generate a hash of the password, for example, by using the openssl passwd -6 command. To set the password, add the following to the script:
echo 'root:$5$.wn2BZHlEJ5R3B1C$TAHEchlU.h2tvfOpOki54NaHpGYKwdNhjaBuSpDotD7' | chpasswd -e
2.6.7.4.1.4 Adding SSH keys #
The following snippet creates a directory to store the root
's SSH
key and then copies the public SSH key located on the configuration
device to the authorized_keys
file.
mkdir -pm700 /root/.ssh/
cat id_rsa_new.pub >> /root/.ssh/authorized_keys
The SSH service must be enabled in case you need to use remote login via SSH. For details, refer to Section 2.6.7.4.1.5, “Enabling services”.
2.6.7.4.1.5 Enabling services #
To enable system services, for example, the SSH service, add the
following line to script
:
systemctl enable sshd.service
2.6.7.4.1.6 Installing packages #
As some packages may require additional subscription, you may need to register your system beforehand. An available network connection may also be needed to install additional packages.
During the first boot configuration, you can install additional
packages to your system. For example, you can install the
vim
editor by adding:
zypper --non-interactive install vim-small
Bear in mind that you will not be able to use
zypper
after the configuration is complete and you
boot to the configured system. To perform changes later, you must use
the transactional-update
command to create a
changed snapshot.
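Putting the snippets from this article together, a minimal complete script file could look like the following sketch. It reuses the placeholder password hash and key file name from the examples above; replace them with your own values.
#!/usr/bin/bash
# combustion: network

# Set the root password (generate your own hash beforehand)
echo 'root:$5$.wn2BZHlEJ5R3B1C$TAHEchlU.h2tvfOpOki54NaHpGYKwdNhjaBuSpDotD7' | chpasswd -e

# Allow remote root login via SSH
mkdir -pm700 /root/.ssh/
cat id_rsa_new.pub >> /root/.ssh/authorized_keys
systemctl enable sshd.service

# Install an additional package
zypper --non-interactive install vim-small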
2.7 Post-deployment considerations #
2.7.1 Introduction #
This article includes important information and tasks that you need to consider after you successfully deploy the Adaptable Linux Platform (ALP).
2.7.2 Full disk encryption #
2.7.2.1 Change encryption password #
During the ALP deployment, you entered a password that is used for disk encryption. If you want to change the password, run the following command:
#
fdectl passwd
2.7.2.2 TPM device #
Without a TPM chip, you need to enter the encryption password to decrypt the disk on each ALP boot. On systems that have a TPM 2.0 chip, ALP deployed with D-Installer supports automatic protection of the LUKS volume with the TPM device. The requirement is that the machine must have UEFI Secure Boot enabled.
If the D-Installer detects a TPM 2.0 chip and UEFI Secure Boot, it will create a secondary LUKS key. On the first boot, ALP will use the TPM to protect this key and configure the GRUB 2 boot loader to automatically unwrap the key. Be aware that you must remove the ISO after the installer has finished and before the system boots for the first time. This is because we use the TPM to ensure that the system comes up with exactly the same configuration before unlocking the LUKS partition.
This allows you to use the full disk encryption without having to type the disk password on each reboot. However, the disk password is still there and can be used for recovery. For example, after updating the GRUB 2 boot loader, or the SHIM loader, the TPM will no longer be able to unseal the secondary key correctly, and GRUB 2 will have to fall back to the password.
2.7.3 SELinux #
Security-Enhanced Linux (SELinux) is a security framework that increases system security by defining access controls for applications, processes and files on the file system.
ALP ships with SELinux enabled and set to the restrictive enforcing mode for increased security. The enforcing mode can lead to processes or workloads not behaving correctly, because the default policy may be too strict. If you observe such unexpected issues, set SELinux to the permissive mode, which does not enforce SELinux policies but still logs offenses against them as AVC denial messages.
To set SELinux to the permissive mode temporarily, run:
#
setenforce 0
To set SELinux to the permissive mode permanently, edit
/etc/selinux/config
and update it to include the
following line:
SELINUX=permissive
If you have been running SELinux in permissive mode, you need to relabel your system to bring it back into a consistent state, because the permissive mode allows you to reach states that are not reachable otherwise. To relabel the system, run the following commands and the system will reboot:
# touch /etc/selinux/.autorelabel
# reboot
To monitor AVCs, search the Audit log and systemd
journal for log
messages similar to the following one:
type=AVC msg=audit(1669971354.731:25): avc: denied { create } \
  for pid=1264 comm="ModemManager" scontext=system_u:system_r:modemmanager_t:s0 \
  tcontext=system_u:system_r:modemmanager_t:s0 tclass=qipcrtr_socket permissive=0
To filter such messages, you can use the following commands:
#
tail -f /var/log/audit/audit.log | grep -i AVC
and
#
journalctl -f | grep -i AVC
For more advanced search, use the following command:
#
ausearch -m avc,user_avc,selinux_err -i
If such messages appear while using the application that did not behave
correctly when SELinux was set to the enforce mode, the policies are
probably too restrictive and need updating. You can help to fine-tune
SELinux policies by creating a bug report at
https://bugzilla.suse.com/enter_bug.cgi?classification=SUSE%20ALP%20-%20SUSE%20Adaptable%20Linux%20Platform.
Specify Basesystem
as a component, include the word
SELinux
in the bug subject, and attach the gathered
unique lines that include AVCs together with reproduction steps.
3 Transactional updates #
3.1 What are transactional updates? #
The Adaptable Linux Platform (ALP) was designed to use a read-only root file system.
This means that after the deployment is complete, you are not able to
perform direct modifications to the root file system, for example, by
using the zypper
command. Instead, ALP
introduces the concept of transactional updates which enables you to
modify your system and keep it up to date.
3.2 How do transactional updates work? #
Each time you call the transactional-update
command to change your system—either
to install a package, perform an update or apply a patch—the
following actions take place:
A new read-write snapshot is created from your current root file system, or from a snapshot that you specified.
All changes are applied (updates, patches or package installation).
The snapshot is switched back to read-only mode.
The new root file system snapshot is prepared, so that it will be active after you reboot.
After rebooting, the new root file system is set as the default snapshot.
Note: Bear in mind that without rebooting your system, the changes will not be applied.
If you do not reboot your machine before performing further changes,
the transactional-update
command will create a new snapshot from the current root
file system. This means that you will end up with several parallel
snapshots, each including that particular change but not changes from
the other invocations of the command. After reboot, the most recently
created snapshot will be used as your new root file system, and it will
not include changes done in the previous snapshots.
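For example, a typical cycle of installing a package and activating the change looks as follows; the package name is only an illustration, and the full command reference follows later in this chapter:
# transactional-update pkg install vim-small
# reboot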
3.3 Software repositories #
The current ALP image points to the following two software repositories:
- ALP
  https://download.opensuse.org/repositories/SUSE:/ALP:/PUBLISH/images/repo/ALP-0.1-x86_64-Media1/
  This repository is enabled. It is a subset of the build repository and an equivalent of the POOL repository known from other SUSE software products. It will remain unchanged until the release of the next ALP prototype.
  Tip: If you need a package which is not included in the ALP repository, you may find it in the ALP-Build repository. To enable it, run:
  # zypper mr -e ALP-Build
- ALP-Build
  https://download.opensuse.org/repositories/SUSE:/ALP/standard/
  This repository is disabled by default. It is used for building the project. It includes all packages built in the SUSE:ALP project in the build service and will keep moving forward over time with future development.
3.4 Benefits of transactional updates #
They are atomic—the update is applied only if it completes successfully.
Changes are applied in a separate snapshot and so do not influence the running system.
Changes can easily be rolled back.
3.6 Usage of the transactional-update command #
3.6.1 transactional-update usage #
The transactional-update
command enables the atomic installation or removal of
updates. Updates are applied only if all of them can be successfully
installed. transactional-update
creates a snapshot of your system and uses it to
update the system. Later you can restore this snapshot. All changes
become active only after reboot.
The transactional-update
command syntax is as follows:
transactional-update [option]
[general_command] [package_command] standalone_command
transactional-update
without arguments
If you do not specify any command or option while running the transactional-update
command, the system updates itself.
Possible command parameters are described further.
transactional-update options #
- --interactive, -i
  Can be used along with a package command to turn on interactive mode.
- --non-interactive, -n
  Can be used along with a package command to turn on non-interactive mode.
- --continue [number], -c
  The --continue option is for making multiple changes to an existing snapshot without rebooting.
  The default transactional-update behavior is to create a new snapshot from the current root file system. If you forget something, such as installing a new package, you have to reboot to apply your previous changes, run transactional-update again to install the forgotten package, and reboot again. You cannot run the transactional-update command multiple times without rebooting to add more changes to the snapshot, because this creates separate independent snapshots that do not include changes from the previous snapshots.
  Use the --continue option to make as many changes as you want without rebooting. A separate snapshot is made each time, and each snapshot contains all the changes you made in the previous snapshots, plus your new changes. Repeat this process as many times as you want, and when the final snapshot includes everything you want, reboot the system, and your final snapshot becomes the new root file system.
  Another useful feature of the --continue option is that you may select any existing snapshot as the base for your new snapshot. The following example demonstrates running transactional-update to install a new package in a snapshot based on snapshot 13, and then running it again to install another package:
  # transactional-update pkg install package_1
  # transactional-update --continue 13 pkg install package_2
- --no-selfupdate
  Disables self-updating of transactional-update.
- --drop-if-no-change, -d
  Discards the snapshot created by transactional-update if there were no changes to the root file system. If there are some changes to the /etc directory, those changes are merged back to the current file system.
- --quiet
  The transactional-update command will not output to stdout.
- --help, -h
  Prints help for the transactional-update command.
- --version
  Displays the version of the transactional-update command.
The general commands are the following:
- cleanup-snapshots
  The command marks all unused snapshots that are intended to be removed.
- cleanup-overlays
  The command removes all unused overlay layers of /etc.
- cleanup
  The command combines the cleanup-snapshots and cleanup-overlays commands.
- grub.cfg
  Use this command to rebuild the GRUB boot loader configuration file.
- bootloader
  The command reinstalls the boot loader.
- initrd
  Use the command to rebuild initrd.
- kdump
  In case you perform changes to your hardware or storage, you may need to rebuild the kdump initrd.
- shell
  Opens a read-write shell in the new snapshot before exiting. The command is typically used for debugging purposes.
- reboot
  The system reboots after the transactional-update command is complete.
- run <command>
  Runs the provided command in a new snapshot.
- setup-selinux
  Installs and enables the targeted SELinux policy.
The package commands are the following:
- dup
  Performs an upgrade of your system. The default option for this command is --non-interactive.
- migration
  The command migrates your system to a selected target. Typically, it is used to upgrade your system if it has been registered via SUSE Customer Center.
- patch
  Checks for available patches and installs them. The default option for this command is --non-interactive.
- pkg install
  Installs individual packages from the available channels using the zypper install command. This command can also be used to install Program Temporary Fix (PTF) RPM files. The default option for this command is --interactive.
  # transactional-update pkg install package_name
  or
  # transactional-update pkg install rpm1 rpm2
- pkg remove
  Removes individual packages from the active snapshot using the zypper remove command. This command can also be used to remove PTF RPM files. The default option for this command is --interactive.
  # transactional-update pkg remove package_name
- pkg update
  Updates individual packages from the active snapshot using the zypper update command. Only packages that are part of the snapshot of the base file system can be updated. The default option for this command is --interactive.
  # transactional-update pkg update package_name
- register
  Registers or deregisters your system. For a complete usage description, refer to Section 3.6.1.1, “The register command”.
- up
  Updates installed packages to newer versions. The default option for this command is --non-interactive.
The standalone commands are the following:
- rollback <snapshot number>
  This sets the default subvolume. The current system is set as the new default root file system. If you specify a number, that snapshot is used as the default root file system. On a read-only file system, it does not create any additional snapshots.
  # transactional-update rollback snapshot_number
- rollback last
  This command sets the last snapshot known to be working as the default.
- status
  This prints a list of available snapshots. The currently booted one is marked with an asterisk, the default snapshot is marked with a plus sign.
3.6.1.1 The register command #
The register command enables you to handle all tasks regarding registration and subscription management. You can supply the following options:
- --list-extensions
  With this option, the command lists available extensions for your system. You can use the output to find a product identifier for product activation.
- -p, --product
  Use this option to specify a product for activation. The product identifier has the following format: <name>/<version>/<architecture>, for example, sle-module-live-patching/15.3/x86_64. The appropriate command will then be the following:
  # transactional-update register -p sle-module-live-patching/15.3/x86_64
- -r, --regcode
  Register your system with the registration code provided. The command will register the subscription and enable software repositories.
- -d, --de-register
  The option deregisters the system, or when used along with the -p option, deregisters an extension.
- -e, --email
  Specify an email address that will be used in SUSE Customer Center for registration.
- --url
  Specify the URL of your registration server. The URL is stored in the configuration and will be used in subsequent command invocations. For example:
  # transactional-update register --url https://scc.suse.com
- -s, --status
  Displays the current registration status in JSON format.
- --write-config
  Writes the provided option values to the /etc/SUSEConnect configuration file.
- --cleanup
  Removes old system credentials.
- --version
  Prints the version.
- --help
  Displays the usage of the command.
4 Containers and Podman #
4.1 What are containers and Podman? #
Containers offer a lightweight virtualization method to run multiple virtual environments (containers) simultaneously on a single host. Unlike technologies such as Xen or KVM, where the processor simulates a complete hardware environment and a hypervisor controls virtual machines, containers provide virtualization on the operating system level, where the kernel controls the isolated containers.
Podman is a short name for Pod Manager Tool. It is a daemonless container engine that enables you to run and deploy applications using containers and container images. Podman provides a command line interface to manage containers.
4.2 How does Podman work? #
Podman provides integration with systemd
. This way you can control
containers via systemd
units. You can create these units for existing
containers as well as generate units that can start containers if they do
not exist in the system. Moreover, Podman can run systemd
inside
containers.
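As an illustration of the systemd integration, assuming a container named my-web already exists, you could generate and enable a unit for it roughly as follows (a sketch, not the only possible workflow):
# podman generate systemd --new --name my-web > /etc/systemd/system/container-my-web.service
# systemctl daemon-reload
# systemctl enable --now container-my-web.service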
Podman enables you to organize your containers into pods. Pods share the same network interface and resources. A typical use case for organizing a group of containers into a pod is a container that runs a database and a container with a client that accesses the database.
4.2.1 Pods architecture #
A pod is a group of containers that share the same namespace, ports and
network connection. Usually, containers within one pod can communicate
directly with each other. Each pod contains an infrastructure container
(INFRA
), whose purpose is to hold the namespace.
INFRA
also enables Podman to add other containers
to the pod. Port bindings, cgroup-parent values, and kernel namespaces
are all assigned to the infrastructure container. Therefore, later
changes of these values are not possible.
Each container in a pod has its own instance of a monitoring program. The monitoring program watches the container's process and if the container dies, the monitoring program saves its exit code. The program also holds open the tty interface for the particular container. The monitoring program enables you to run containers in the detached mode when Podman exits, because this program continues to run and enables you to attach tty later.
4.3 Benefits of containers #
Containers make it possible to isolate applications in self-contained units.
Containers provide near-native performance. Depending on the runtime, a container can use the host kernel directly, thus minimizing overhead.
It is possible to control network interfaces and apply resources inside containers through kernel control groups.
4.5 Enabling Podman #
4.5.1 Introduction #
This article helps you verify that Podman is installed on the
ALP system and provides guidelines to enable its systemd
service
when Cockpit requires it.
4.5.2 Requirements #
Deployed ALP base OS.
4.5.3 Installing Podman #
1. Verify that Podman is installed on your system by running the following command:
   # zypper se -i podman
2. If Podman is not listed in the output, install it by running:
   # transactional-update pkg install podman*
3. Reboot the ALP host for the changes to take effect.
4. Optionally, enable and start the podman.service service for applications that require it, such as Cockpit. You can enable it either from the Podman containers page in Cockpit, or by running the following command:
   # systemctl enable --now podman.service
4.5.4 Enabling rootless mode #
By default, Podman requires root
privileges. To enable rootless
mode for the current user, run the following command:
>
sudo usermod --add-subuids 100000-165535 \
--add-subgids 100000-165535 USER
Reboot the machine to enable the change. The command above defines a
range of local UIDs to which the UIDs allocated to users inside the
container are mapped on the host. Note that the ranges defined for
different users must not overlap. It is also important that the ranges do
not reuse the UID of an existing local user or group. By default, adding
a user with the useradd
command automatically
allocates subUID and subGID ranges.
Running a container with Podman in rootless mode on ALP may fail, because the container might need access to directories or files that require root privileges.
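To verify the rootless setup after the reboot, you can check the assigned sub-ID ranges and run a test container as the unprivileged user; the user and image names below are only examples:
> grep tux /etc/subuid /etc/subgid
> podman run --rm registry.opensuse.org/opensuse/busybox echo "rootless container works"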
4.5.5 Next steps #
Run containerized workloads. For details, refer to Chapter 5, Workloads.
4.6 Podman usage #
This article introduces basic Podman usage that you may need when running containerized workloads.
4.6.1 Getting container images #
To run a container, you need an image. An image includes all dependencies
needed to run an application. You can obtain images from an image
registry. Available registries are defined in the
/etc/containers/registries.conf
configuration file.
If you have a local image registry or want to use other registries, add
the registries into the configuration file.
ALP does not provide tools for building custom images. Therefore, the only way to get an image is to pull it from an image registry.
The podman pull
command pulls an image from an image
registry. The syntax is as follows:
#
podman pull [OPTIONS] SOURCE
The source can be an image without the
registry name. In that case, Podman tries to pull the image from all
registries configured in the
/etc/containers/registries.conf
file. The default
image tag is latest
. The default location of pulled
images is
/var/lib/containers/storage/overlay-images/
.
To view all possible options of the podman pull
command, run:
#
podman pull --help
If you are using Cockpit, you can also pull images from an image registry on the images page by clicking the download action.
Podman enables you to search for images in an image registry or a list of registries using the command:
#
podman search IMAGE_NAME
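For example, to look up and pull a small test image from the openSUSE registry (the image name serves only as an illustration):
# podman search opensuse/busybox
# podman pull registry.opensuse.org/opensuse/busybox:latest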
4.6.2 Working with containers #
The following section covers common container management tasks. This includes creating, starting, and modifying containers.
The current version of ALP does not provide tools for building custom images. Therefore, the only way to get a container image is to pull it from an image registry.
4.6.2.1 Running containers #
For specific details on running ALP containers, refer to links in the Chapter 5, Workloads article.
After you have pulled your container image, you can create containers
based on it. You can run an instance of the image using the
podman run
command. The command syntax is as
follows:
#
podman run [OPTIONS] IMAGE [COMMAND [ARG...]]
IMAGE is specified in the format transport:path. If transport is omitted, the default docker is used. The path can refer to a specific image registry. If omitted, Podman searches for the image in registries defined in the /etc/containers/registries.conf file. An example that runs a container named sles15 based on the sle15 image follows:
# podman run --name sles15 registry.opensuse.org/suse/templates/images/sle-15-sp3/base/images/suse/sle15
Below is a list of frequently used options. For a complete list of available options, run the command podman run --help. A short combined example follows the list.
- --detach, -d
  The container will run in the background.
- --env, -e=env
  This option allows arbitrary environment variables that are available for the process to be launched inside of the container. If an environment variable is specified without a value, Podman will check the host environment for a value and set the variable only if it is set on the host.
- --help
  Prints help for the podman run command.
- --hostname=name, -h
  Sets the container host name that is available inside the container.
- --pod=name
  Runs the container in an existing pod. To create a pod, prefix the pod name with new:.
- --read-only
  Mounts the container's root file system as read-only.
- --systemd=true|false|always
  Runs the container in systemd mode. The default is true.
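A minimal combined example, using IMAGE as a placeholder for any image available in your configured registries:
# podman run --detach --name alp-test --hostname alp-test --env MY_VAR=value IMAGE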
4.6.2.2 Stopping containers #
If the podman run
command finished successfully, a
new container has been started. You can stop the container by running:
#
podman stop [OPTIONS] CONTAINER
You can specify a single container name or ID or a space-separated list of containers. The command takes the following options:
- --all, -a
  Stops all running containers.
- --latest, -l
  Instead of providing a container name, the last created container will be stopped.
- --time, -t=seconds
  Seconds to wait before forcibly stopping the container.
To view all possible options of the podman stop
command, run the following:
#
podman stop --help
4.6.2.3 Starting containers #
To start already created but stopped containers, use the
podman start
command. The command syntax is as
follows:
#
podman start [OPTIONS] CONTAINER
CONTAINER can be a container name or a container ID.
For a complete list of possible options of podman
start
, run the command:
#
podman start --help
4.6.2.4 Updating containers #
To update an existing container, follow these steps:
1. Identify the image of the container that you want to update, for example, yast-mgmt-qt:
   > podman image ls
   REPOSITORY                                                                                                   TAG     IMAGE ID      CREATED      SIZE
   [...]
   registry.opensuse.org/suse/alp/workloads/publish/tumbleweed_containerfiles/suse/alp/workloads/yast-mgmt-qt   latest  f349194a439d  13 days ago  674 MB
2. Pull the image from the registry to find out if there is a newer version. If you do not specify a version tag, the latest tag is used:
   # podman pull registry.opensuse.org/suse/alp/workloads/publish/tumbleweed_containerfiles/suse/alp/workloads/yast-mgmt-qt
   Trying to pull registry.opensuse.org/suse/alp/workloads/publish/tumbleweed_containerfiles/suse/alp/workloads/yast-mgmt-qt:latest...
   Getting image source signatures
   Copying blob 6bfbcdeee2ec done
   [...]
   Writing manifest to image destination
   Storing signatures
   f349194a439da249587fbd8baffc5659b390aa14c8db1d597e95be703490ffb1
3. If the container is running, identify its ID and stop it:
   # podman ps
   CONTAINER ID  IMAGE                                                                              COMMAND  CREATED      STATUS [...]
   28fef404417b  /workloads/tumbleweed_containerfiles/suse/alp/workloads/yast-mgmt-ncurses:latest            2 weeks ago  Up 24 seconds ago
   # podman stop 28fef404417b
4. Run the container following the specific instructions in Chapter 5, Workloads, for example:
   # podman container runlabel run \
     registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/yast-mgmt-ncurses:latest
4.6.2.5 Committing modified containers #
You can run a new container with specific attributes that are not part
of the original image. To save the container with these attributes as a
new image, you can use the podman commit
command:
#
podman commit [OPTIONS] CONTAINER IMAGE
CONTAINER is a container name or a container
ID. IMAGE is the new image name. If the
image name does not start with a registry name, the value
localhost
is used.
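For example, to save a modified container as a new image in the local storage (the container and image names are illustrative):
# podman commit sles15 localhost/sles15-configured:latest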
When using Cockpit, you can perform the commit operation directly from a container's details view by clicking the commit action. A dialog box opens. Specify all required details as shown below and confirm:
Figure 4.2: Committing a container in Cockpit #
4.6.2.6 Listing containers #
Podman enables you to list all running containers using the
podman ps
command. The generic syntax of the command
is as follows:
#
podman ps [OPTIONS]
Command options can change the displayed information. For example,
using the --all
option will output all containers
created by Podman (not only the running containers).
For a complete list of podman ps
options, run:
#
podman ps --help
4.6.2.7 Removing containers #
To remove one or more unused containers from the host, use the
podman rm
command as follows:
#
podman rm [OPTIONS] CONTAINER
CONTAINER can be a container name or a container ID.
The command does not remove the specified container if the container is
running. To remove a running container, use the -f
option.
For a complete list of podman rm
options, run:
#
podman rm --help
You can delete all stopped containers from your host with a single command:
#
podman container prune
Make sure that each stopped container is intended to be removed before you run the command, otherwise you might remove containers that are still in use and were stopped only temporarily.
4.6.3 Working with pods #
Containers can be grouped into a pod. The containers in the pod then
share network, pid, and IPC namespace. Pods can be managed by
podman pod
commands. This section provides an overview
of the commands for managing pods.
4.6.3.1 Creating pods #
The command podman pod create
is used to create a
pod. The syntax of the command is as follows:
#
podman pod create [OPTIONS]
The command outputs the pod ID. By default, the pods are created without being started. You can start a pod by running a container in the pod, or by starting the pod as described in Section 4.6.3.3, “Starting/stopping/restarting pods”.
If you do not specify a pod name with the --name
option, Podman will assign a default name for the pod.
For a complete list of possible options, run the following command:
#
podman pod create --help
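As a sketch, the following creates a named pod with a published port and then runs a container inside it; the pod name and the IMAGE placeholder are only illustrative:
# podman pod create --name web-pod -p 8080:80
# podman run -d --pod web-pod IMAGE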
4.6.3.2 Listing pods #
You can list all pods by running the command:
#
podman pod list
The output looks as follows:
POD ID        NAME               STATUS   CREATED       # OF CONTAINERS  INFRA ID
30fba506fecb  upbeat_mcclintock  Created  19 hours ago  1                4324f40c9651
976a83b4d88b  nervous_feynman    Running  19 hours ago  2                daa5732ecd02
As each pod includes the INFRA
container, the number
of containers in a pod is always larger than zero.
4.6.3.3 Starting/stopping/restarting pods #
After a pod is created, you must start it, as it is not in the state
running
by default. In the commands below,
POD can be a pod name or a pod ID.
To start a pod, run the command:
#
podman pod start [OPTIONS] POD
For a complete list of possible options, run:
#
podman pod start --help
To stop a pod, use the podman pod stop command as follows:
#
podman pod stop POD
To restart a pod, use the podman pod restart
command
as follows:
#
podman pod restart POD
4.6.3.4 Managing containers in a pod #
To add a new container to a pod, use the podman run
command with the option --pod
. A general syntax of
the command follows:
#
podman run [OPTIONS] --pod POD_NAME IMAGE
For details about the podman run
command, refer to
Section 4.6.2.1, “Running containers”.
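For example, assuming the hypothetical pod web_pod created earlier exists, the following command runs an openSUSE toolbox container inside it (the image path is only an example; any image supported by Podman works):
# podman run -it --pod web_pod registry.opensuse.org/opensuse/toolbox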
The podman start command cannot start a container inside a pod if the container was not added to the pod when it was initially run.
You cannot remove a container from a pod and keep the container running, because the container itself is removed from the host.
Other actions like start, restart and stop can be performed on specific containers without affecting the status of the pod.
4.6.3.5 Removing pods #
There are two ways to remove pods. You can use the podman pod
rm
command to remove one or more pods. Alternatively, you can
remove all stopped pods using the podman pod prune
command.
To remove a pod or several pods, run the podman pod
rm
command as follows:
#
podman pod rm POD
POD can be a pod name or a pod ID.
To remove all currently stopped pods, use the podman pod prune command. Before running podman pod prune, make sure that all stopped pods are intended to be removed; otherwise, you might remove pods that are still in use.
4.6.3.6 Monitoring processes in pods #
To view all containers in all pods, use the following command:
#
podman ps -a --pod
The output of the command will be similar to the following one:
CONTAINER ID  IMAGE                       COMMAND    CREATED       STATUS [...]
4324f40c9651  k8s.gcr.io/pause:3.2                   21 hours ago  Created
daa5732ecd02  k8s.gcr.io/pause:3.2                   22 hours ago  Up 3 hours ago
e5c8e360c54b  localhost/test:latest       /bin/bash  3 days ago    Exited (137) 3 days ago
82dad15828f7  localhost/opensuse/toolbox  /bin/bash  3 days ago    Exited (137) 3 days ago
1a23da456b6f  docker.io/i386/ubuntu       /bin/bash  4 days ago    Exited (0) 6 hours ago
df890193f651  localhost/opensuse/toolbox  /bin/bash  4 days ago    Created
The first two records are the INFRA
containers of
each pod, based on the k8s.gcr.io/pause:3.2
image.
Other containers in the output are stand-alone containers that do not
belong to any pod.
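In addition to podman ps -a --pod, you can inspect the processes of a single pod with podman pod top, for example (hypothetical pod name):
# podman pod top web_pod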
5 Workloads #
5.1 Introduction #
The Adaptable Linux Platform (ALP) runs containerized workloads instead of traditional applications. Images of these containers are stored in image registries online. ALP can run any containerized workload that is supported by the default container manager Podman. This article lists and describes workloads securely distributed and supported by SUSE. You can find the source files of the workloads at https://build.opensuse.org/project/show/SUSE:ALP:Workloads.
5.2 YaST #
The following YaST container images are available:
- yast-mgmt-ncurses
The base YaST workload. It contains the text version of YaST (ncurses).
For more details, refer to Section 5.9, “Running the YaST workload using Podman”.
- yast-mgmt-qt
This workload adds the Qt-based graphical user interface.
- yast-mgmt-web
This workload exposes the standard graphical interface via a VNC server and uses a JavaScript VNC client to render the screen in a Web browser.
5.3 KVM #
This workload adds virtualization capability to ALP so that you
can use it as a VM Host Server. It uses the KVM hypervisor supported by the
libvirt
toolkit.
For more details, refer to Section 5.10, “Running the KVM virtualization workload using Podman”.
5.4 Cockpit Web server #
This workload adds the Cockpit Web server to ALP so that you can administer the system and containers via a user-friendly interface in your Web browser.
For more details, refer to Section 5.11, “Running the Cockpit Web server using Podman”.
5.5 GDM #
This workload runs GDM and a basic GNOME environment. For more details, refer to Section 5.12, “Running the GNOME Display Manager workload using Podman”.
5.6 firewalld
#
This workload adds firewall capability to ALP to define the trust level of network connections or interfaces.
For more details, refer to
Section 5.13, “Running firewalld
using Podman”.
5.7 Grafana #
This workload adds a Web-based dashboard to the ALP host that lets you query, monitor, visualize and better understand existing data residing on any client host.
For more details, refer to Section 5.14, “Running the Grafana workload using Podman”.
5.9 Running the YaST workload using Podman #
5.9.1 Introduction #
This article describes how to start the YaST workload on the Adaptable Linux Platform (ALP).
5.9.2 Requirements #
Deployed ALP base OS.
Installed and enabled Podman.
5.9.3 Starting YaST in text mode #
To start the text version (ncurses) of YaST as a workload, follow these steps:
Identify the full URL address in a registry of container images, for example:
>
podman search yast-mgmt-ncurses
registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/yast-mgmt-ncurses [...]
To start the container, run the following command:
#
podman container runlabel run \
  registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/yast-mgmt-ncurses:latest
Figure 5.1: YaST running in text mode on ALP #
5.9.4 Starting graphical YaST #
To start the graphical Qt version of YaST as a workload, follow these steps:
To view the graphical YaST on your local X server, you need to use SSH X forwarding. It requires the xauth package to be installed, followed by a reboot of the host:
#
transactional-update pkg install xauth && reboot
Connect to the ALP host using ssh with X forwarding enabled:
> ssh -X ALP_HOST
Identify the full URL address in a registry of container images, for example:
>
podman search yast-mgmt-qt
registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/yast-mgmt-qt [...]
To start the container, run the following command:
#
podman container runlabel run \
  registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/yast-mgmt-qt:latest
Figure 5.2: Running graphical YaST on top of ALP #
5.10 Running the KVM virtualization workload using Podman #
5.10.1 Introduction #
This article describes how to run KVM VM Host Server on the Adaptable Linux Platform (ALP).
5.10.2 Requirements #
Deployed ALP base OS.
When running ALP in a virtualized environment, you need to enable nested KVM virtualization on the bare-metal host operating system and use the kernel-default kernel instead of the default kernel-default-base kernel in ALP. A quick way to check nested virtualization is shown after this list.
Installed and enabled Podman.
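One way to check whether nested virtualization is already enabled on an Intel-based bare-metal host is to query the KVM kernel module parameter (on AMD hosts, use kvm_amd instead). A value of Y or 1 means it is enabled:
> cat /sys/module/kvm_intel/parameters/nested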
5.10.3 Starting the KVM workload #
ALP can serve as a host running virtual machines. The following procedure describes steps to prepare the ALP host to run containerized KVM VM Host Server and run an example VM Guest on top of it.
Identify the KVM workload image:
#
podman search kvm
registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/kvm
Pull the image from the registry and install all the wrapper scripts:
#
podman container runlabel install registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/kvm:latest
Create the libvirtd container from the downloaded image:
#
kvm-container-manage.sh create
Start the container:
#
kvm-container-manage.sh start
Optionally, run a VM Guest on top of the started KVM VM Host Server using the virt-install.sh script.
Tip
virt-install.sh uses the openSUSE-Tumbleweed-JeOS.x86_64-OpenStack-Cloud.qcow2 image by default. To specify another VM image, modify the APPLIANCE_MIRROR and APPLIANCE options in the /etc/kvm-container.conf file.
Tip
virsh.sh is a wrapper script to launch the virsh command inside the container (the default container name is libvirtd).
> virt-install.sh
[...]
Starting install...
Password for first root login is: OPjQok1nlfKp5DRZ
Allocating 'Tumbleweed-JeOS_5221fd7860.qcow2'  |    0 B  00:00:00 ...
Creating domain...                             |    0 B  00:00:00
Running text console command: virsh --connect qemu:///system console Tumbleweed-JeOS_5221fd7860
Connected to domain 'Tumbleweed-JeOS_5221fd7860'
Escape character is ^] (Ctrl + ])

Welcome to openSUSE Tumbleweed 20220919 - Kernel 5.19.8-1-default (hvc0).

eth0: 192.168.10.67 fe80::5054:ff:fe5a:c416

localhost login:
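Once the container is running, you can, for example, list all VM Guests through the virsh.sh wrapper mentioned above:
> virsh.sh list --all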
Usage of the kvm-container-manage.sh
script #
The kvm-container-manage.sh
script is used to manage the
KVM server container on the Adaptable Linux Platform (ALP). This article lists each
subcommand of the script and describes its purpose.
kvm-container-manage.sh create
Creates a KVM server container from a previously downloaded container image. To download the images, use
podman
, for example:#
podman container runlabel install registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/kvm:latest
kvm-container-manage.sh start
Starts the KVM server container.
kvm-container-manage.sh virsh list
Lists all running VM Guests. Append the
--all
option to get the list of all—running and stopped—VM Guests.kvm-container-manage.sh stop
Stops the running KVM server container.
kvm-container-manage.sh uninstall
Cleans the host environment by uninstalling all files that were required to run the KVM server container.
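For illustration, a typical sequence combining the subcommands above could look as follows: first check for running VM Guests, then stop the KVM server container and remove the helper files again:
# kvm-container-manage.sh virsh list --all
# kvm-container-manage.sh stop
# kvm-container-manage.sh uninstall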
5.11 Running the Cockpit Web server using Podman #
5.11.1 Introduction #
This article describes how to run a containerized Cockpit Web server on the Adaptable Linux Platform (ALP) using Podman.
An alternative way of installing and enabling the Cockpit Web server is described in https://en.opensuse.org/openSUSE:ALP/Workgroups/SysMngmnt/Cockpit#Install_the_Web_Server_Via_Packages.
5.11.2 Requirements #
Deployed ALP base OS.
Installed and enabled Podman.
Installed the alp_cockpit pattern.
5.11.3 Starting the Cockpit workload #
Cockpit is a tool to administer one or more hosts from one place via a Web user interface. Its default functionality can be extended by installing additional plug-ins. You do not need the Cockpit Web user interface installed on every ALP host. One instance of the Web interface can connect to multiple hosts if they have the alp_cockpit pattern installed.
ALP has the base part of the Cockpit component installed by default. It is included in the alp_cockpit pattern. To install and run Cockpit's Web interface, follow these steps:
Identify the Cockpit Web server workload image:
#
podman search cockpit-ws
registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/cockpit-ws
Pull the image from the registry:
#
podman container runlabel install \
  registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/cockpit-ws:latest
Run Cockpit's containerized Web server:
#
podman container runlabel --name cockpit-ws run \
  registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/cockpit-ws:latest
To run the Cockpit Web server on each ALP boot, enable its service:
#
systemctl enable cockpit.service
To view the Cockpit Web user interface, point your Web browser to the following address and accept the self-signed certificate:
https://HOSTNAME_OR_IP_OF_ALP_HOST:9090
Figure 5.3: Cockpit running on ALP #
5.11.4 Next steps #
Administer the system using Cockpit.
Install and run additional workloads. For their list and description, refer to Chapter 5, Workloads.
5.11.6 Adding more functionality to Cockpit #
5.11.6.1 Introduction #
After you deploy Cockpit on the Adaptable Linux Platform (ALP), it already provides default functionality. The following sections describe how to extend it by installing additional Cockpit extensions. Note that you need to reboot ALP to apply the changes.
Some packages described in this article are available from the
ALP-Build
repository which may be disabled by
default. To make sure the repository is enabled, run the following
command:
#
zypper mr -e ALP-Build && zypper refresh
5.11.6.2 Metrics #
To enable the visualization of some current metrics, install the PCP extension:
#
transactional-update pkg install cockpit-pcp
#
reboot
5.11.6.3 Software updates #
To be able to perform transactional software updates from Cockpit, install the cockpit-tukit package:
#
transactional-update pkg install cockpit-tukit
#
reboot
5.11.6.4 Storage devices #
To manage local storage devices and their associated technologies, install the cockpit-storaged package:
#
transactional-update pkg install cockpit-storaged
#
reboot
5.12 Running the GNOME Display Manager workload using Podman #
5.12.1 Introduction #
This article describes how to deploy and run the GNOME Display Manager (GDM) on the Adaptable Linux Platform (ALP).
5.12.2 Requirements #
Deployed ALP base OS
Installed and enabled Podman
5.12.3 Starting the GDM workload #
On the ALP host system, install accountsservice and systemd-experimental packages:
#
transactional-update pkg install accountsservice systemd-experimental
#
reboot
Verify that SELinux is configured in the permissive mode and enable the permissive mode if required:
#
setenforce 0
Identify the GDM container:
>
podman search gdm
registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/gdm [...]
Download and recreate the GDM container locally:
#
podman container runlabel install \
  registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/gdm:latest
Reload the affected systemd services:
#
systemctl daemon-reload
#
systemctl reload dbus
#
systemctl restart accounts-daemon
Run the GDM container.
For a standalone process in a container, run:
#
systemctl start gdm.service
Alternatively, run the command manually:
#
podman container runlabel --name gdm run \
  registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/gdm:latest
For systems with systemd running in a container, run:
#
systemctl start gdm-systemd.service
Alternatively, run the command manually:
#
podman container runlabel run-systemd --name gdm \ registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/gdm:latest
The GDM starts. After you log in, a basic GNOME environment opens.
Figure 5.7: GNOME Settings on top of ALP #
If you need to clean the environment from all deployed files, run the following command:
#
podman container runlabel uninstall \
registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/gdm:latest
5.13 Running firewalld using Podman #
5.13.1 Introduction #
This article describes how to run a containerized firewalld
on the
Adaptable Linux Platform (ALP) using Podman.
The firewalld
container needs access to the host network and needs to
run as a privileged container. The container image uses the system dbus
instance. Therefore, you need to install dbus and
polkit configuration files first.
5.13.2 Requirements #
Deployed ALP base OS
Installed and enabled Podman
Installed alp_cockpit pattern
5.13.3 Running the firewalld
workload #
Identify the firewalld workload image:
#
podman search firewalld
registry.opensuse.org/suse/alp/workloads/tumbleweed_images/suse/alp/workloads/firewalld
Verify that firewalld is not installed in the host system. Remove it, if necessary, and reboot the ALP host:
#
transactional-update pkg remove firewalld
reboot
Initialize the environment:
#
podman container runlabel install \
  registry.opensuse.org/suse/alp/workloads/tumbleweed_images/suse/alp/workloads/firewalld
The command prepares the system and creates the following files on the host system:
/etc/dbus-1/system.d/FirewallD.conf
/etc/polkit-1/actions/org.fedoraproject.FirewallD1.policy 1
/etc/systemd/system/firewalld.service 2
/etc/default/container-firewalld
/usr/local/bin/firewall-cmd 3
1 The polkit policy file will only be installed if polkit itself is installed. It may be necessary to restart the dbus and polkit daemons afterwards.
2 The systemd service and the corresponding configuration file /etc/default/container-firewalld allow you to start and stop the container using systemd if Podman is used as the container manager.
3 /usr/local/bin/firewall-cmd is a wrapper to call the firewall-cmd command inside the container. Docker and Podman are supported.
Run the container:
#
podman container runlabel run \
  registry.opensuse.org/suse/alp/workloads/tumbleweed_images/suse/alp/workloads/firewalld
The command will run the container as a privileged container with the host network. Additionally, /etc/firewalld and the dbus socket are mounted into the container.
Tip
If your container manager is Podman, you can operate firewalld by using its systemd unit files, for example:
#
systemctl start firewalld
Optionally, you can remove the firewalld workload and clean the environment from all related files. Configuration files are left on the system.
#
podman container runlabel uninstall \
  registry.opensuse.org/suse/alp/workloads/tumbleweed_images/suse/alp/workloads/firewalld
5.13.3.1 Managing the firewalld
instance #
After the firewalld
container is started, you can manage its
instance in two ways. You can manually call its client application via
the podman exec
command, for example:
podman exec firewalld firewall-cmd OPTIONS
Alternatively, you can use a shorter syntax by running the
firewall-cmd
wrapper script.
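For example, both of the following commands show the current firewall configuration; the first one uses podman exec directly, the second one the wrapper script:
# podman exec firewalld firewall-cmd --list-all
# firewall-cmd --list-all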
5.13.3.2 firewalld
manual pages #
To read the firewalld
manual page, run the following command:
>
podman run -i --rm \
registry.opensuse.org/suse/alp/workloads/tumbleweed_images/suse/alp/workloads/firewalld \
man firewalld
To read the firewall-cmd
manual page, run the
following command:
>
podman run -i --rm \
registry.opensuse.org/suse/alp/workloads/tumbleweed_images/suse/alp/workloads/firewalld \
man firewall-cmd
5.14 Running the Grafana workload using Podman #
5.14.1 Introduction #
This article describes how to run the Grafana visualization tool on the Adaptable Linux Platform (ALP).
5.14.2 Requirements #
Deployed ALP base OS
Installed and enabled Podman
5.14.3 Starting the Grafana workload #
This section describes how to start the Grafana workload, set up a client that provides real data for testing, and configure the Grafana Web application to visualize the client's data.
Identify the Grafana workload image:
#
podman search grafana
registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/grafana
Pull the image from the registry and prepare the environment:
#
podman container runlabel install \
  registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/grafana:latest
Create the grafana container from the downloaded image:
#
grafana-container-manage.sh create
Start the container with the Grafana server:
#
grafana-container-manage.sh start
5.14.4 Setting up a Grafana client #
To test Grafana, you need to set up a client that will provide real data to the Grafana server.
Log in to the client host and install the golang-github-prometheus-node_exporter and golang-github-prometheus-prometheus packages:
#
zypper in golang-github-prometheus-node_exporter golang-github-prometheus-prometheus
Note
If your Grafana server and client hosts are virtualized by a KVM containerized workload, use the --network option when creating the pod, because the --publish option does not work in this scenario. To get the IP of the VM Host Server default network, run the following command on the VM Host Server:
> virsh net-dhcp-leases default
Restart the Prometheus services on the client host:
#
systemctl restart prometheus-node_exporter.service
#
systemctl restart prometheus
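To verify that the client exports metrics before configuring Grafana, you can query the node exporter on its default port 9100 (assuming curl is installed on the client):
> curl http://localhost:9100/metrics | head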
5.14.5 Configuring the Grafana Web application #
To configure a data source for the Grafana Web dashboard, follow these steps:
Open the Grafana Web page that is running on port 3000 on the ALP host where the Grafana workload is running, for example:
>
firefox http://ALP_HOST_IP_ADDRESS:3000
Log in to Grafana. The default user name and password are both set to admin. After logging in, enter a new password.
Add the Prometheus data source provided by the client. In the left panel, hover your mouse over the gear icon and select the data sources entry.
Figure 5.8: Grafana data sources #
Add a new data source and select the Prometheus type. Fill in the URL of the client where the Prometheus service runs on port 9090, for example:
Figure 5.9: Prometheus URL configuration in Grafana #
Confirm the new data source.
Create a dashboard based on Prometheus data. Hover your mouse over the plus sign in the left panel and select the import option.
Figure 5.10: Creating a Grafana dashboard #
Enter 405 as the dashboard ID and confirm.
From the drop-down list at the bottom, select the data source you have already created and confirm. Grafana shows your newly created dashboard.
Figure 5.11: New Grafana dashboard #
Usage of the grafana-container-manage.sh
script #
The grafana-container-manage.sh
script is used to manage
the Grafana container on the Adaptable Linux Platform (ALP). This article lists each
subcommand of the script and describes its purpose.
grafana-container-manage.sh create
Pulls the Grafana image and creates the corresponding container.
grafana-container-manage.sh install
Installs additional files that are required to manage the
grafana
container.grafana-container-manage.sh start
Starts the container called
grafana
.grafana-container-manage.sh uninstall
Uninstalls all files on the host that were required to manage the
grafana
container.grafana-container-manage.sh stop
Stops the
grafana
container.grafana-container-manage.sh rm
Deletes the
grafana
container.grafana-container-manage.sh rmcache
Removes the cached container image.
grafana-container-manage.sh
Runs the
grafana
container.grafana-container-manage.sh bash
Runs the
bash
shell inside thegrafana
container.grafana-container-manage.sh logs
Displays log messages of the
grafana
container.
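For illustration, a typical sequence for inspecting and then removing the container with the subcommands above could look like this:
# grafana-container-manage.sh logs
# grafana-container-manage.sh stop
# grafana-container-manage.sh rm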