
5 Workloads

5.1 Introduction

The Adaptable Linux Platform (ALP) runs containerized workloads instead of traditional applications. Images of these containers are stored in image registries online. ALP can run any containerized workload that is supported by the default container manager Podman. This article lists and describes workloads securely distributed and supported by SUSE. You can find the source files of the workloads at https://build.opensuse.org/project/show/SUSE:ALP:Workloads.

5.2 YaST

The following YaST container images are available:

yast-mgmt-ncurses

The base YaST workload. It contains the text version of YaST (ncurses).

For more details, refer to Section 5.9, “Running the YaST workload using Podman”.

yast-mgmt-qt

This workload adds the Qt-based graphical user interface.

yast-mgmt-web

This workload exposes the standard graphical interface via a VNC server and uses a JavaScript VNC client to render the screen in a Web browser.
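
As an illustrative sketch only, assuming this image follows the same registry path and runlabel convention as the other YaST workloads, you could start it with:

# podman container runlabel run \
 registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/yast-mgmt-web:latest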

5.3 KVM

This workload adds virtualization capability to ALP so that you can use it as a VM Host Server. It uses the KVM hypervisor managed by the libvirt toolkit.

For more details, refer to Section 5.10, “Running the KVM virtualization workload using Podman”.

5.4 Cockpit Web server

This workload adds the Cockpit Web server to ALP so that you can administer the system and containers via a user-friendly interface in your Web browser.

For more details, refer to Section 5.11, “Running the Cockpit Web server using Podman”.

5.5 GDM

This workload runs GDM and a basic GNOME environment. For more details, refer to Section 5.12, “Running the GNOME Display Manager workload using Podman”.

5.6 firewalld

This workload adds firewall capability to ALP to define the trust level of network connections or interfaces.

For more details, refer to Section 5.13, “Running firewalld using Podman”.

5.7 Grafana

This workload adds a Web-based dashboard to the ALP host that lets you query, monitor and visualize data residing on any client host.

For more details, refer to Section 5.14, “Running the Grafana workload using Podman”.

5.9 Running the YaST workload using Podman

5.9.1 Introduction

This article describes how to start the YaST workload on the Adaptable Linux Platform (ALP).

5.9.2 Requirements

  • Deployed ALP base OS.

  • Installed and enabled Podman.

5.9.3 Starting YaST in text mode

To start the text version (ncurses) of YaST as a workload, follow these steps:

  1. Identify the full URL of the workload image in a container image registry, for example:

    > podman search yast-mgmt-ncurses
    registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/yast-mgmt-ncurses
    [...]
  2. To start the container, run the following command:

    # podman container runlabel run \
     registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/yast-mgmt-ncurses:latest
    Figure 5.1: YaST running in text mode on ALP

5.9.4 Starting graphical YaST

To start the graphical Qt version of YaST as a workload, follow these steps:

  1. To view the graphical YaST on your local X server, you need SSH X forwarding. It requires the xauth package, whose installation is applied by rebooting the host:

    # transactional-update pkg install xauth && reboot
  2. Connect to the ALP host using ssh with the X forwarding enabled:

    > ssh -X ALP_HOST
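
    Once connected, you can verify that X forwarding is active. A DISPLAY value similar to the following (the exact value is illustrative) indicates success:

    > echo $DISPLAY
    localhost:10.0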
  3. Identify the full URL of the workload image in a container image registry, for example:

    > podman search yast-mgmt-qt
    registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/yast-mgmt-qt
    [...]
  4. To start the container, run the following command:

    # podman container runlabel run \
     registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/yast-mgmt-qt:latest
    Figure 5.2: Running graphical YaST on top of ALP

5.10 Running the KVM virtualization workload using Podman

5.10.1 Introduction

This article describes how to run a KVM VM Host Server on the Adaptable Linux Platform (ALP).

5.10.2 Requirements

  • Deployed ALP base OS.

  • When running ALP in a virtualized environment, you need to enable nested KVM virtualization on the bare-metal host operating system and use the kernel-default kernel instead of the default kernel-default-base kernel in ALP.

  • Installed and enabled Podman.

5.10.3 Starting the KVM workload

ALP can serve as a host running virtual machines. The following procedure describes how to prepare the ALP host to run a containerized KVM VM Host Server and how to run an example VM Guest on top of it.

  1. Identify the KVM workload image:

    # podman search kvm
    registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/kvm
  2. Pull the image from the registry and install all the wrapper scripts:

    # podman container runlabel install registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/kvm:latest
  3. Create the libvirtd container from the downloaded image:

    # kvm-container-manage.sh create
  4. Start the container:

    # kvm-container-manage.sh start
  5. Optionally, run a VM Guest on top of the started KVM VM Host Server using the virt-install.sh script.

    Tip

    virt-install.sh uses the openSUSE-Tumbleweed-JeOS.x86_64-OpenStack-Cloud.qcow2 image by default. To specify another VM image, modify the APPLIANCE_MIRROR and APPLIANCE options in the /etc/kvm-container.conf file.
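
    A sketch of these options in /etc/kvm-container.conf; the image name is the documented default, while the mirror URL is only an illustrative assumption:

    APPLIANCE_MIRROR=https://download.opensuse.org/tumbleweed/appliances
    APPLIANCE=openSUSE-Tumbleweed-JeOS.x86_64-OpenStack-Cloud.qcow2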

    Tip

    virsh.sh is a wrapper script to launch the virsh command inside the container (the default container name is libvirtd).

    > virt-install.sh
    [...]
    Starting install...
    Password for first root login is: OPjQok1nlfKp5DRZ
    Allocating 'Tumbleweed-JeOS_5221fd7860.qcow2'            |    0 B  00:00:00 ...
    Creating domain...                                       |    0 B  00:00:00
    Running text console command: virsh --connect qemu:///system console Tumbleweed-JeOS_5221fd7860
    Connected to domain 'Tumbleweed-JeOS_5221fd7860'
    Escape character is ^] (Ctrl + ])
    
    Welcome to openSUSE Tumbleweed 20220919 - Kernel 5.19.8-1-default (hvc0).
    
    eth0: 192.168.10.67 fe80::5054:ff:fe5a:c416
    
    localhost login:
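
    To leave the guest console, press Ctrl+]. You can reconnect to the guest later via the virsh wrapper mentioned above, for example:

    > virsh.sh console Tumbleweed-JeOS_5221fd7860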

5.10.4 Usage of the kvm-container-manage.sh script

The kvm-container-manage.sh script is used to manage the KVM server container on the Adaptable Linux Platform (ALP). This article lists each subcommand of the script and describes its purpose.

kvm-container-manage.sh create

Creates a KVM server container from a previously downloaded container image. To download the images, use podman, for example:

# podman container runlabel install registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/kvm:latest
kvm-container-manage.sh start

Starts the KVM server container.

kvm-container-manage.sh virsh list

Lists all running VM Guests. Append the --all option to list all VM Guests, both running and stopped.
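
For example, with the guest from the previous section running, the output could look similar to this (illustrative):

# kvm-container-manage.sh virsh list --all
 Id   Name                         State
-----------------------------------------
 1    Tumbleweed-JeOS_5221fd7860   running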

kvm-container-manage.sh stop

Stops the running KVM server container.

kvm-container-manage.sh uninstall

Cleans the host environment by uninstalling all files that were required to run the KVM server container.

5.11 Running the Cockpit Web server using Podman

5.11.1 Introduction

This article describes how to run a containerized Cockpit Web server on the Adaptable Linux Platform (ALP) using Podman.

Note

An alternative way of installing and enabling the Cockpit Web server is described in https://en.opensuse.org/openSUSE:ALP/Workgroups/SysMngmnt/Cockpit#Install_the_Web_Server_Via_Packages.

5.11.2 Requirements

  • Deployed ALP base OS.

  • Installed and enabled Podman.

  • Installed the alp_cockpit pattern.

5.11.3 Starting the Cockpit workload

Cockpit is a tool to administer one or more hosts from one place via a Web user interface. You can extend its default functionality by installing additional plug-ins. You do not need the Cockpit Web user interface installed on every ALP host. One instance of the Web interface can connect to multiple hosts if they have the alp_cockpit pattern installed.

ALP has the base part of the Cockpit component installed by default. It is included in the alp_cockpit pattern. To install and run Cockpit's Web interface, follow these steps:

  1. Identify the Cockpit Web server workload image:

    # podman search cockpit-ws
    registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/cockpit-ws
  2. Pull the image from the registry:

    # podman container runlabel install \
     registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/cockpit-ws:latest
  3. Run Cockpit's containerized Web server:

    # podman container runlabel --name cockpit-ws run \
     registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/cockpit-ws:latest
  4. To run Cockpit's Web server on each boot of ALP, enable its service:

    # systemctl enable cockpit.service
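
    Optionally, verify that the Web server container is running (podman ps lists running containers; the filter narrows the output to the cockpit-ws container):

    # podman ps --filter name=cockpit-ws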
  5. To view the Cockpit Web user interface, point your Web browser to the following address and accept the self-signed certificate:

    https://HOSTNAME_OR_IP_OF_ALP_HOST:9090
    Figure 5.3: Cockpit running on ALP

5.11.4 Next steps

  • Administer the system using Cockpit.

  • Install and run additional workloads. For their list and description, refer to Chapter 5, Workloads.

5.11.6 Adding more functionality to Cockpit

5.11.6.1 Introduction

After you deploy Cockpit on the Adaptable Linux Platform (ALP), it already provides default functionality. The following sections describe how to extend it by installing additional Cockpit extensions. Note that you need to reboot ALP to apply the changes.

Important

Some packages described in this article are available from the ALP-Build repository, which may be disabled by default. To make sure the repository is enabled, run the following command:

# zypper mr -e ALP-Build && zypper refresh

5.11.6.2 Metrics

To enable the visualization of current metrics, install the PCP extension:

# transactional-update pkg install cockpit-pcp
# reboot
Figure 5.4: Metrics and history in Cockpit

5.11.6.3 Software updates

To be able to perform transactional software updates from Cockpit, install the cockpit-tukit package:

# transactional-update pkg install cockpit-tukit
# reboot
Figure 5.5: Software updates in Cockpit

5.11.6.4 Storage devices

To manage local storage devices and their associated technologies, install the cockpit-storaged package:

# transactional-update pkg install cockpit-storaged
# reboot
Figure 5.6: Storage in Cockpit

5.12 Running the GNOME Display Manager workload using Podman

5.12.1 Introduction

This article describes how to deploy and run the GNOME Display Manager (GDM) on the Adaptable Linux Platform (ALP).

5.12.2 Requirements

  • Deployed ALP base OS

  • Installed and enabled Podman

5.12.3 Starting the GDM workload

  1. On the ALP host system, install the accountsservice and systemd-experimental packages:

    # transactional-update pkg install accountsservice systemd-experimental
    # reboot
  2. Verify that SELinux is configured in permissive mode, and enable it if required:

    # setenforce 0
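
    You can confirm the current mode with getenforce, which should report Permissive:

    # getenforce
    Permissive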
  3. Identify the GDM container:

    > podman search gdm
    registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/gdm
    [...]
  4. Download the image and create the GDM container locally:

    # podman container runlabel install \
     registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/gdm:latest
  5. Reload the affected systemd services:

    # systemctl daemon-reload
    # systemctl reload dbus
    # systemctl restart accounts-daemon
  6. Run the GDM container.

    1. For a standalone process in a container, run:

      # systemctl start gdm.service

      Alternatively, run the command manually:

      # podman container runlabel --name gdm run \
       registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/gdm:latest
    2. For systems with systemd running in a container, run:

      # systemctl start gdm-systemd.service

      Alternatively, run the command manually:

      # podman container runlabel run-systemd --name gdm \
       registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/gdm:latest
  7. GDM starts. After you log in, a basic GNOME environment opens.

    Figure 5.7: GNOME Settings on top of ALP
Tip: Uninstalling deployed files

If you need to remove all deployed files from the environment, run the following command:

# podman container runlabel uninstall \
 registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/gdm:latest

5.13 Running firewalld using Podman

5.13.1 Introduction

This article describes how to run a containerized firewalld on the Adaptable Linux Platform (ALP) using Podman.

Important

The firewalld container needs access to the host network and must run as a privileged container. The container image uses the system dbus instance; therefore, you need to install the dbus and polkit configuration files first.

5.13.2 Requirements

  • Deployed ALP base OS

  • Installed and enabled Podman

  • Installed alp_cockpit pattern

5.13.3 Running the firewalld workload

  1. Identify the firewalld workload image:

    # podman search firewalld
    registry.opensuse.org/suse/alp/workloads/tumbleweed_images/suse/alp/workloads/firewalld
  2. Verify that firewalld is not installed on the host system. If necessary, remove it and reboot the ALP host:

    # transactional-update pkg remove firewalld
    # reboot
  3. Initialize the environment:

    # podman container runlabel install \
    registry.opensuse.org/suse/alp/workloads/tumbleweed_images/suse/alp/workloads/firewalld

    The command prepares the system and creates the following files on the host system:

    /etc/dbus-1/system.d/FirewallD.conf
    /etc/polkit-1/actions/org.fedoraproject.FirewallD1.policy (1)
    /etc/systemd/system/firewalld.service (2)
    /etc/default/container-firewalld
    /usr/local/bin/firewall-cmd (3)

    (1) The polkit policy file is installed only if polkit itself is installed. You may need to restart the dbus and polkit daemons afterwards.

    (2) The systemd service and the corresponding configuration file /etc/default/container-firewalld allow starting and stopping the container using systemd if Podman is used as the container manager.

    (3) /usr/local/bin/firewall-cmd is a wrapper script that calls firewall-cmd inside the container. Docker and Podman are supported.

  4. Run the container:

    # podman container runlabel run \
     registry.opensuse.org/suse/alp/workloads/tumbleweed_images/suse/alp/workloads/firewalld

    The command runs the container as a privileged container with access to the host network. Additionally, /etc/firewalld and the dbus socket are mounted into the container.

    Tip

    If your container manager is Podman, you can operate firewalld by using its systemd unit files, for example:

    # systemctl start firewalld
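
    Assuming you also want firewalld started on every boot, enable the installed unit:

    # systemctl enable firewalld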
  5. Optionally, you can remove the firewalld workload and clean the environment of all related files. Configuration files are left on the system:

    # podman container runlabel uninstall \
     registry.opensuse.org/suse/alp/workloads/tumbleweed_images/suse/alp/workloads/firewalld

5.13.3.1 Managing the firewalld instance

After the firewalld container is started, you can manage its instance in two ways. You can manually call its client application via the podman exec command, for example:

# podman exec firewalld firewall-cmd OPTIONS

Alternatively, you can use a shorter syntax by running the firewall-cmd wrapper script.
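
For example, both of the following list the complete configuration of the default zone; the shorter form assumes that /usr/local/bin is in your PATH:

# podman exec firewalld firewall-cmd --list-all
# firewall-cmd --list-all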

5.13.3.2 firewalld manual pages

To read the firewalld manual page, run the following command:

> podman run -i --rm \
 registry.opensuse.org/suse/alp/workloads/tumbleweed_images/suse/alp/workloads/firewalld \
 man firewalld

To read the firewall-cmd manual page, run the following command:

> podman run -i --rm \
 registry.opensuse.org/suse/alp/workloads/tumbleweed_images/suse/alp/workloads/firewalld \
 man firewall-cmd

5.14 Running the Grafana workload using Podman

5.14.1 Introduction

This article describes how to run the Grafana visualization tool on the Adaptable Linux Platform (ALP).

5.14.2 Requirements

  • Deployed ALP base OS

  • Installed and enabled Podman

5.14.3 Starting the Grafana workload

This section describes how to start the Grafana workload, set up a client so that you can test it with real data, and configure the Grafana Web application to visualize the client's data.

  1. Identify the Grafana workload image:

    # podman search grafana
    registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/grafana
  2. Pull the image from the registry and prepare the environment:

    # podman container runlabel install \
     registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/grafana:latest
  3. Create the grafana container from the downloaded image:

    # grafana-container-manage.sh create
  4. Start the container with the Grafana server:

    # grafana-container-manage.sh start
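
    Optionally, inspect the server's startup messages with the script's logs subcommand (described later in this chapter):

    # grafana-container-manage.sh logs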

5.14.4 Setting up a Grafana client

To test Grafana, you need to set up a client that will provide real data to the Grafana server.

  1. Log in to the client host and install the golang-github-prometheus-node_exporter and golang-github-prometheus-prometheus packages:

    # zypper in golang-github-prometheus-node_exporter golang-github-prometheus-prometheus
    Note

    If your Grafana server and client hosts are virtualized by the KVM containerized workload, use the --network option when creating the pod, because the --publish option does not work in this scenario. To get the IP of the VM Host Server default network, run the following command on the VM Host Server:

    > virsh net-dhcp-leases default
  2. Restart the Prometheus services on the client host:

    # systemctl restart prometheus-node_exporter.service
    # systemctl restart prometheus
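
    The restarted Prometheus service must scrape the node exporter to provide data to Grafana. A minimal sketch of the scrape configuration; the file path /etc/prometheus/prometheus.yml and the node exporter port 9100 are the usual defaults, but verify them on your system:

    # excerpt from /etc/prometheus/prometheus.yml
    scrape_configs:
      - job_name: node
        static_configs:
          - targets: ['localhost:9100']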

5.14.5 Configuring the Grafana Web application

To configure a data source for the Grafana Web dashboard, follow these steps:

  1. Open the Grafana Web page, which runs on port 3000 of the ALP host where the Grafana workload is running, for example:

    > firefox http://ALP_HOST_IP_ADDRESS:3000
  2. Log in to Grafana. The default user name and password are both set to admin. After logging in, enter a new password.

  3. Add the Prometheus data source provided by the client. In the left panel, hover your mouse over the gear icon and select Data sources.

    Figure 5.8: Grafana data sources
  4. Click Add data source and select Prometheus. Fill the URL field with the URL of the client where the Prometheus service runs on port 9090, for example:

    Figure 5.9: Prometheus URL configuration in Grafana

    Confirm with Save & test.

  5. Create a dashboard based on Prometheus data. Hover your mouse over the plus sign in the left panel and select Import.

    Figure 5.10: Creating a Grafana dashboard
  6. Enter 405 as the dashboard ID and confirm with Load.

  7. From the Prometheus drop-down list at the bottom, select the data source you have already created. Confirm with Import.

  8. Grafana shows your newly created dashboard.

    Figure 5.11: New Grafana dashboard

5.14.6 Usage of the grafana-container-manage.sh script

The grafana-container-manage.sh script is used to manage the Grafana container on the Adaptable Linux Platform (ALP). This article lists each subcommand of the script and describes its purpose.

grafana-container-manage.sh create

Pulls the Grafana image and creates the corresponding container.

grafana-container-manage.sh install

Installs additional files that are required to manage the grafana container.

grafana-container-manage.sh start

Starts the container called grafana.

grafana-container-manage.sh uninstall

Uninstalls all files on the host that were required to manage the grafana container.

grafana-container-manage.sh stop

Stops the grafana container.

grafana-container-manage.sh rm

Deletes the grafana container.

grafana-container-manage.sh rmcache

Removes the container image from the cache.

grafana-container-manage.sh run

Runs the grafana container.

grafana-container-manage.sh bash

Runs the bash shell inside the grafana container.

grafana-container-manage.sh logs

Displays log messages of the grafana container.