5 Workloads #
5.1 Introduction #
The Adaptable Linux Platform (ALP) runs containerized workloads instead of traditional applications. Images of these containers are stored in image registries online. ALP can run any containerized workload that is supported by the default container manager Podman. This article lists and describes workloads securely distributed and supported by SUSE. You can find the source files of the workloads at https://build.opensuse.org/project/show/SUSE:ALP:Workloads.
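All these workloads follow the same Podman-based pattern: search for the image in a registry, install it via its run label (which may set up wrapper scripts and systemd units), and run it. The following is a generic sketch of the pattern used throughout this chapter, with WORKLOAD and REGISTRY_URL as placeholders rather than real names:

> podman search WORKLOAD
# podman container runlabel install REGISTRY_URL/WORKLOAD:latest
# podman container runlabel run REGISTRY_URL/WORKLOAD:latest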
5.2 YaST #
The following YaST container images are available:
- yast-mgmt-ncurses
The base YaST workload. It contains the text version of YaST (ncurses).
For more details, refer to Section 5.9, “Running the YaST workload using Podman”.
- yast-mgmt-qt
This workload adds the Qt-based graphical user interface.
- yast-mgmt-web
This workload exposes the standard graphical interface via a VNC server and uses a JavaScript VNC client to render the screen in a Web browser.
5.3 KVM #
This workload adds virtualization capability to ALP so that you can use it as a VM Host Server. It uses the KVM hypervisor supported by the libvirt toolkit.
For more details, refer to Section 5.10, “Running the KVM virtualization workload using Podman”.
5.4 Cockpit Web server #
This workload adds the Cockpit Web server to ALP so that you can administer the system and containers via a user-friendly interface in your Web browser.
For more details, refer to Section 5.11, “Running the Cockpit Web server using Podman”.
5.5 GDM #
This workload runs GDM and a basic GNOME environment. For more details, refer to Section 5.12, “Running the GNOME Display Manager workload using Podman”.
5.6 firewalld #
This workload adds firewall capability to ALP to define the trust level of network connections or interfaces.
For more details, refer to Section 5.13, “Running firewalld using Podman”.
5.7 Grafana #
This workload adds a Web-based dashboard to the ALP host that lets you query, monitor, visualize and better understand existing data residing on any client host.
For more details, refer to Section 5.14, “Running the Grafana workload using Podman”.
5.9 Running the YaST workload using Podman #
5.9.1 Introduction #
This article describes how to start the YaST workload on the Adaptable Linux Platform (ALP).
5.9.2 Requirements #
Deployed ALP base OS.
Installed and enabled Podman.
5.9.3 Starting YaST in text mode #
To start the text version (ncurses) of YaST as a workload, follow these steps:
Identify the full URL address in a registry of container images, for example:

> podman search yast-mgmt-ncurses
registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/yast-mgmt-ncurses
[...]

To start the container, run the following command:

# podman container runlabel run \
  registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/yast-mgmt-ncurses:latest

Figure 5.1: YaST running in text mode on ALP #
5.9.4 Starting graphical YaST #
To start the graphical Qt version of YaST as a workload, follow these steps:
To view the graphical YaST on your local X server, you need to use SSH X forwarding. It requires the xauth package, which you install and apply by rebooting the host:

# transactional-update pkg install xauth && reboot

Connect to the ALP host using ssh with X forwarding enabled:

> ssh -X ALP_HOST
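Optionally, verify that X forwarding is active before starting YaST. With a standard OpenSSH setup, sshd sets the DISPLAY variable in the forwarded session (the exact value may differ):

> echo $DISPLAY
localhost:10.0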
Identify the full URL address in a registry of container images, for example:

> podman search yast-mgmt-qt
registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/yast-mgmt-qt
[...]

To start the container, run the following command:

# podman container runlabel run \
  registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/yast-mgmt-qt:latest

Figure 5.2: Running graphical YaST on top of ALP #
5.10 Running the KVM virtualization workload using Podman #
5.10.1 Introduction #
This article describes how to run a KVM VM Host Server on the Adaptable Linux Platform (ALP).
5.10.2 Requirements #
Deployed ALP base OS.
When running ALP in a virtualized environment, you need to enable nested KVM virtualization on the bare-metal host operating system and use the kernel-default kernel instead of the default kernel-default-base in ALP.
Installed and enabled Podman.
5.10.3 Starting the KVM workload #
ALP can serve as a host running virtual machines. The following procedure describes steps to prepare the ALP host to run a containerized KVM VM Host Server and to run an example VM Guest on top of it.
Identify the KVM workload image:

# podman search kvm
registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/kvm

Pull the image from the registry and install all the wrapper scripts:

# podman container runlabel install \
  registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/kvm:latest

Create the libvirtd container from the downloaded image:

# kvm-container-manage.sh create

Start the container:

# kvm-container-manage.sh start

Optionally, run a VM Guest on top of the started KVM VM Host Server using the virt-install.sh script.

Tip: virt-install.sh uses the openSUSE-Tumbleweed-JeOS.x86_64-OpenStack-Cloud.qcow2 image by default. To specify another VM image, modify the APPLIANCE_MIRROR and APPLIANCE options in the /etc/kvm-container.conf file.

Tip: virsh.sh is a wrapper script to launch the virsh command inside the container (the default container name is libvirtd).

> virt-install.sh
[...]
Starting install...
Password for first root login is: OPjQok1nlfKp5DRZ
Allocating 'Tumbleweed-JeOS_5221fd7860.qcow2'    |    0 B  00:00:00 ...
Creating domain...                               |    0 B  00:00:00
Running text console command: virsh --connect qemu:///system console Tumbleweed-JeOS_5221fd7860
Connected to domain 'Tumbleweed-JeOS_5221fd7860'
Escape character is ^] (Ctrl + ])

Welcome to openSUSE Tumbleweed 20220919 - Kernel 5.19.8-1-default (hvc0).

eth0: 192.168.10.67 fe80::5054:ff:fe5a:c416

localhost login:
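Because virsh.sh wraps the virsh command inside the libvirtd container, you can use it to inspect the VM Guest you just created, for example:

> virsh.sh list --all
[...]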
5.10.4 Usage of the kvm-container-manage.sh script #
The kvm-container-manage.sh script is used to manage the KVM server container on the Adaptable Linux Platform (ALP). This article lists each subcommand of the script and describes its purpose.
kvm-container-manage.sh create
Creates a KVM server container from a previously downloaded container image. To download the images, use podman, for example:

# podman container runlabel install \
  registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/kvm:latest

kvm-container-manage.sh start
Starts the KVM server container.
kvm-container-manage.sh virsh list
Lists all running VM Guests. Append the --all option to get the list of all VM Guests, both running and stopped (see the example session after this list).

kvm-container-manage.sh stop
Stops the running KVM server container.
kvm-container-manage.sh uninstall
Cleans the host environment by uninstalling all files that were required to run the KVM server container.
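An example session combining these subcommands might look as follows; this is an illustrative sketch with abbreviated output:

# kvm-container-manage.sh create
# kvm-container-manage.sh start
# kvm-container-manage.sh virsh list --all
[...]
# kvm-container-manage.sh stop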
5.11 Running the Cockpit Web server using Podman #
5.11.1 Introduction #
This article describes how to run a containerized Cockpit Web server on the Adaptable Linux Platform (ALP) using Podman.
An alternative way of installing and enabling the Cockpit Web server is described in https://en.opensuse.org/openSUSE:ALP/Workgroups/SysMngmnt/Cockpit#Install_the_Web_Server_Via_Packages.
5.11.2 Requirements #
Deployed ALP base OS.
Installed and enabled Podman.
Installed the alp_cockpit pattern.
5.11.3 Starting the Cockpit workload #
Cockpit is a tool to administer one or more hosts from one place via a Web user interface. Its default functionality can be extended by plug-ins that you install additionally. You do not need the Cockpit Web user interface installed on every ALP host. One instance of the Web interface can connect to multiple hosts if they have the alp_cockpit pattern installed.
ALP has the base part of the Cockpit component installed by default. It is included in the alp_cockpit pattern. To install and run Cockpit's Web interface, follow these steps:
Identify the Cockpit Web server workload image:

# podman search cockpit-ws
registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/cockpit-ws

Pull the image from the registry:

# podman container runlabel install \
  registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/cockpit-ws:latest

Run Cockpit's containerized Web server:

# podman container runlabel --name cockpit-ws run \
  registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/cockpit-ws:latest

To run Cockpit's Web server on each ALP boot, enable its service:

# systemctl enable cockpit.service

To view the Cockpit Web user interface, point your Web browser to the following address and accept the self-signed certificate:

https://HOSTNAME_OR_IP_OF_ALP_HOST:9090
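Optionally, verify from the ALP host that the Web server responds before opening a browser; the -k option tells curl to accept the self-signed certificate (this check is an addition to the procedure above):

> curl -k -I https://localhost:9090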
Figure 5.3: Cockpit running on ALP #
5.11.4 Next steps #
Administer the system using Cockpit.
Install and run additional workloads. For their list and description, refer to Chapter 5, Workloads.
5.11.6 Adding more functionality to Cockpit #
5.11.6.1 Introduction #
After you deploy Cockpit on the Adaptable Linux Platform (ALP), it provides a default set of functionality. The following sections describe how to extend it by installing additional Cockpit extensions. Note that you need to reboot ALP to apply the changes.
Some packages described in this article are available from the ALP-Build repository, which may be disabled by default. To make sure the repository is enabled and its metadata is refreshed, run the following command:

# zypper mr -e ALP-Build && zypper refresh
5.11.6.2 Metrics #
To enable the visualization of some current metrics, install the PCP extension:
# transactional-update pkg install cockpit-pcp
# reboot
5.11.6.3 Software updates #
To be able to perform transactional software updates from Cockpit, install the cockpit-tukit package:
# transactional-update pkg install cockpit-tukit
# reboot
5.11.6.4 Storage devices #
To manage local storage devices and their associated technologies, install the cockpit-storaged package:
# transactional-update pkg install cockpit-storaged
# reboot
5.12 Running the GNOME Display Manager workload using Podman #
5.12.1 Introduction #
This article describes how to deploy and run the GNOME Display Manager (GDM) on the Adaptable Linux Platform (ALP).
5.12.2 Requirements #
Deployed ALP base OS
Installed and enabled Podman
5.12.3 Starting the GDM workload #
On the ALP host system, install the accountsservice and systemd-experimental packages:

# transactional-update pkg install accountsservice systemd-experimental
# reboot

Verify that SELinux is configured in permissive mode, and enable permissive mode if required:

# setenforce 0
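To check the current SELinux mode first, you can run getenforce; this is an optional check, not part of the original procedure:

> getenforce
Permissive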
Identify the GDM container:

> podman search gdm
registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/gdm
[...]

Download and recreate the GDM container locally:
# podman container runlabel install \
  registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/gdm:latest

Reload the affected systemd services:

# systemctl daemon-reload
# systemctl reload dbus
# systemctl restart accounts-daemon

Run the GDM container.

For a standalone process in a container, run:

# systemctl start gdm.service

Alternatively, run the command manually:

# podman container runlabel --name gdm run \
  registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/gdm:latest

For systems with systemd running in a container, run:

# systemctl start gdm-systemd.service

Alternatively, run the command manually:

# podman container runlabel run-systemd --name gdm \
  registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/gdm:latest
GDM starts. After you log in, a basic GNOME environment opens.
Figure 5.7: GNOME Settings on top of ALP #
If you need to clean the environment from all deployed files, run the following command:
# podman container runlabel uninstall \
  registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/gdm:latest
5.13 Running firewalld using Podman #
5.13.1 Introduction #
This article describes how to run a containerized firewalld on the Adaptable Linux Platform (ALP) using Podman.
The firewalld container needs access to the host network and needs to run as a privileged container. The container image uses the system dbus instance. Therefore, you need to install dbus and polkit configuration files first.
5.13.2 Requirements #
Deployed ALP base OS
Installed and enabled Podman
Installed alp_cockpit pattern
5.13.3 Running the firewalld workload #
Identify the firewalld workload image:

# podman search firewalld
registry.opensuse.org/suse/alp/workloads/tumbleweed_images/suse/alp/workloads/firewalld

Verify that firewalld is not installed on the host system. Remove it if necessary, and reboot the ALP host:

# transactional-update pkg remove firewalld
# reboot

Initialize the environment:

# podman container runlabel install \
  registry.opensuse.org/suse/alp/workloads/tumbleweed_images/suse/alp/workloads/firewalld

The command prepares the system and creates the following files on the host system:
/etc/dbus-1/system.d/FirewallD.conf
/etc/polkit-1/actions/org.fedoraproject.FirewallD1.policy (1)
/etc/systemd/system/firewalld.service (2)
/etc/default/container-firewalld
/usr/local/bin/firewall-cmd (3)
(1) The polkit policy file is only installed if polkit itself is installed. It may be necessary to restart the dbus and polkit daemons afterwards.
(2) The systemd service and the corresponding configuration file /etc/default/container-firewalld allow starting and stopping the container using systemd if Podman is used as the container manager.
(3) /usr/local/bin/firewall-cmd is a wrapper that calls firewall-cmd inside the container. Docker and Podman are supported.

Run the container:
# podman container runlabel run \
  registry.opensuse.org/suse/alp/workloads/tumbleweed_images/suse/alp/workloads/firewalld

The command will run the container as a privileged container with the host network. Additionally, /etc/firewalld and the dbus socket are mounted into the container.

Tip: If your container manager is Podman, you can operate firewalld by using its systemd unit files, for example:

# systemctl start firewalld

Optionally, you can remove the firewalld workload and clean the environment from all related files. Configuration files are left on the system.

# podman container runlabel uninstall \
  registry.opensuse.org/suse/alp/workloads/tumbleweed_images/suse/alp/workloads/firewalld
5.13.3.1 Managing the firewalld instance #
After the firewalld container is started, you can manage its instance in two ways. You can manually call its client application via the podman exec command, for example:

# podman exec firewalld firewall-cmd OPTIONS
Alternatively, you can use a shorter syntax by running the firewall-cmd wrapper script.
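For example, to inspect the configuration of the default zone, the following two commands are equivalent (--list-all is a standard firewall-cmd option):

# podman exec firewalld firewall-cmd --list-all
# firewall-cmd --list-all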
5.13.3.2 firewalld manual pages #
To read the firewalld manual page, run the following command:

> podman run -i --rm \
  registry.opensuse.org/suse/alp/workloads/tumbleweed_images/suse/alp/workloads/firewalld \
  man firewalld
To read the firewall-cmd manual page, run the following command:

> podman run -i --rm \
  registry.opensuse.org/suse/alp/workloads/tumbleweed_images/suse/alp/workloads/firewalld \
  man firewall-cmd
5.14 Running the Grafana workload using Podman #
5.14.1 Introduction #
This article describes how to run the Grafana visualization tool on the Adaptable Linux Platform (ALP).
5.14.2 Requirements #
Deployed ALP base OS
Installed and enabled Podman
5.14.3 Starting the Grafana workload #
This section describes how to start the Grafana workload, set up a client so that you can test it with real data, and configure the Grafana Web application to visualize the client's data.
Identify the Grafana workload image:

# podman search grafana
registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/grafana

Pull the image from the registry and prepare the environment:

# podman container runlabel install \
  registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/grafana:latest

Create the grafana container from the downloaded image:

# grafana-container-manage.sh create

Start the container with the Grafana server:

# grafana-container-manage.sh start
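Optionally, verify that the Grafana server is up before configuring it. This check assumes Grafana listens on its default port 3000, the port used later in this procedure:

> curl -I http://localhost:3000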
5.14.4 Setting up a Grafana client #
To test Grafana, you need to set up a client that will provide real data to the Grafana server.
Log in to the client host and install the golang-github-prometheus-node_exporter and golang-github-prometheus-prometheus packages:
# zypper in golang-github-prometheus-node_exporter golang-github-prometheus-prometheus

Note: If your Grafana server and client hosts are virtualized by a KVM containerized workload, use the --network option while creating the POD, because the --publish option does not work in this scenario. To get the IP of the VM Host Server default network, run the following command on the VM Host Server:

> virsh net-dhcp-leases default

Restart the Prometheus services on the client host:

# systemctl restart prometheus-node_exporter.service
# systemctl restart prometheus
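You can also confirm that the node exporter is serving metrics before wiring it into Grafana; this assumes the exporter's default port 9100:

> curl -s http://localhost:9100/metrics | head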
5.14.5 Configuring the Grafana Web application #
To configure a data source for the Grafana Web dashboard, follow these steps:
Open the Grafana Web page that is running on port 3000 on the ALP host where the Grafana workload is running, for example:

> firefox http://ALP_HOST_IP_ADDRESS:3000

Log in to Grafana. The default user name and password are both set to admin. After logging in, enter a new password.

Add the Prometheus data source provided by the client. In the left panel, hover your mouse over the gear icon and select Data sources.

Figure 5.8: Grafana data sources #

Click Add data source and select Prometheus. Fill the URL field with the URL of the client where the Prometheus service runs on port 9090, for example:

Figure 5.9: Prometheus URL configuration in Grafana #

Confirm with Save & test.

Create a dashboard based on Prometheus data. Hover your mouse over the plus sign in the left panel and select Import.

Figure 5.10: Creating a Grafana dashboard #

Enter 405 as the dashboard ID and confirm with Load.

From the data source drop-down list at the bottom, select the data source you have already created. Confirm with Import.

Grafana shows your newly created dashboard.

Figure 5.11: New Grafana dashboard #
5.14.6 Usage of the grafana-container-manage.sh script #
The grafana-container-manage.sh script is used to manage the Grafana container on the Adaptable Linux Platform (ALP). This article lists each subcommand of the script and describes its purpose.
grafana-container-manage.sh create
Pulls the Grafana image and creates the corresponding container.

grafana-container-manage.sh install
Installs additional files that are required to manage the grafana container.

grafana-container-manage.sh start
Starts the container called grafana.

grafana-container-manage.sh uninstall
Uninstalls all files on the host that were required to manage the grafana container.

grafana-container-manage.sh stop
Stops the grafana container.

grafana-container-manage.sh rm
Deletes the grafana container.

grafana-container-manage.sh rmcache
Removes the container image from the cache.

grafana-container-manage.sh run
Runs the grafana container.

grafana-container-manage.sh bash
Runs the bash shell inside the grafana container.

grafana-container-manage.sh logs
Displays log messages of the grafana container.
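A typical lifecycle using these subcommands might look as follows; this is an illustrative sketch, not output from a real session:

# grafana-container-manage.sh create
# grafana-container-manage.sh start
# grafana-container-manage.sh logs
# grafana-container-manage.sh stop
# grafana-container-manage.sh rm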