
5 SUSE Workloads

SUSE ALP Dolomite runs containerized workloads instead of traditional applications. Images of these containers are stored in image registries online. ALP Dolomite can run any containerized workload that is supported by the default container manager Podman. This article lists and describes workloads securely distributed and supported by SUSE. You can find the source files of the workloads at https://build.opensuse.org/project/show/SUSE:ALP:Workloads.

5.1 Common requirements

To run workloads on ALP Dolomite using Podman, you generally need to have:

  • Deployed ALP Dolomite.

  • Installed and enabled Podman.
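If Podman is not yet part of your ALP Dolomite image, a minimal sketch of adding it with transactional-update might look as follows; the package name podman and the subsequent reboot are assumptions based on the usual transactional-update workflow:

# transactional-update pkg install podman
# reboot

After the reboot, podman version confirms that the container manager is available.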

5.2 Running firewalld using Podman

This article describes how to run a containerized firewalld on SUSE ALP Dolomite using Podman. This workload adds a firewall capability to ALP Dolomite so that you can define the trust level of network connections and interfaces.

Important

The firewalld container needs access to the host network and needs to run as a privileged container. The container image uses the system dbus instance. Therefore, you need to install dbus and polkit configuration files first.

5.2.1 Running the firewalld workload

  1. Identify the firewalld workload image:

    # podman search firewalld
    [...]
    registry.opensuse.org/suse/alp/workloads/tumbleweed_images/suse/alp/workloads/firewalld
  2. Verify that firewalld is not installed in the host system. Remove it, if necessary, and reboot the ALP Dolomite host:

    # transactional-update pkg remove firewalld
    # reboot
  3. Initialize the environment:

    # podman container runlabel install \
    registry.opensuse.org/suse/alp/workloads/tumbleweed_images/suse/alp/workloads/firewalld

    The command prepares the system and creates the following files on the host system:

    /etc/dbus-1/system.d/FirewallD.conf
    /etc/polkit-1/actions/org.fedoraproject.FirewallD1.policy 1
    /etc/systemd/system/firewalld.service 2
    /etc/default/container-firewalld
    /usr/local/bin/firewall-cmd 3

    1

    The polkit policy file is only installed if polkit itself is installed. It may be necessary to restart the dbus and polkit daemons afterwards.

    2

    The systemd service and the corresponding configuration file /etc/default/container-firewalld allow you to start and stop the container using systemd when Podman is used as the container manager.

    3

    /usr/local/bin/firewall-cmd is a wrapper that calls firewall-cmd inside the container. Both Docker and Podman are supported.

  4. Run the container:

    # podman container runlabel run \
     registry.opensuse.org/suse/alp/workloads/tumbleweed_images/suse/alp/workloads/firewalld

    The command will run the container as a privileged container with the host network. Additionally, /etc/firewalld and the dbus socket are mounted into the container.

    Tip

    If your container manager is Podman, you can operate firewalld by using its systemd unit files, for example:

    # systemctl start firewalld
  5. Optionally, you can remove the firewalld workload and clean up the related files from the environment. Configuration files are left on the system:

    # podman container runlabel uninstall \
     registry.opensuse.org/suse/alp/workloads/tumbleweed_images/suse/alp/workloads/firewalld

5.2.1.1 Managing the firewalld instance

After the firewalld container is started, you can manage its instance in two ways. You can manually call its client application via the podman exec command, for example:

# podman exec firewalld firewall-cmd OPTIONS

Alternatively, you can use a shorter syntax by running the firewall-cmd wrapper script.
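For example, you can check the daemon state with the long syntax and list the active configuration with the wrapper script; the firewall-cmd options shown are only illustrative:

# podman exec firewalld firewall-cmd --state
# firewall-cmd --list-all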

5.2.1.2 firewalld manual pages

To read the firewalld manual page, run the following command:

> podman run -i --rm \
 registry.opensuse.org/suse/alp/workloads/tumbleweed_images/suse/alp/workloads/firewalld \
 man firewalld

To read the firewall-cmd manual page, run the following command:

> podman run -i --rm \
 registry.opensuse.org/suse/alp/workloads/tumbleweed_images/suse/alp/workloads/firewalld \
 man firewall-cmd

5.3 Running the Grafana workload using Podman

This article describes how to run the Grafana visualization tool on SUSE ALP Dolomite. This workload adds a Web-based dashboard to the ALP Dolomite host that lets you query, monitor, visualize and better understand existing data residing on any client host.

5.3.1 Starting the Grafana workload

This section describes how to start the Grafana workload, set up a client so that you can test it with real data, and configure the Grafana Web application to visualize the client's data.

  1. Identify the Grafana workload image:

    # podman search grafana
    [...]
    registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/grafana
  2. Pull the image from the registry and prepare the environment:

    # podman container runlabel install \
     registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/grafana:latest
  3. Create the grafana container from the downloaded image:

    # grafana-container-manage.sh create
  4. Start the container with the Grafana server:

    # grafana-container-manage.sh start
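To verify that the workload is up, you can check the container status and, assuming the default port 3000, the Web interface:

# podman ps --filter name=grafana
# curl -I http://localhost:3000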

5.3.2 Setting up a Grafana client

To test Grafana, you need to set up a client that will provide real data to the Grafana server.

  1. Log in to the client host and install the golang-github-prometheus-node_exporter and golang-github-prometheus-prometheus packages:

    # zypper in golang-github-prometheus-node_exporter golang-github-prometheus-prometheus
    Note

    If your Grafana server and client hosts are virtual machines managed by a KVM containerized workload, use the --network option when creating the pod, because the --publish option does not work in this scenario. To get the IP addresses leased on the VM Host Server's default network, run the following command on the VM Host Server:

    > virsh net-dhcp-leases default
  2. Restart the Prometheus services on the client host:

    # systemctl restart prometheus-node_exporter.service
    # systemctl restart prometheus
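To verify that the client exposes data, you can query both services locally; the default ports 9100 for the node exporter and 9090 for Prometheus are assumed here:

> curl -s http://localhost:9100/metrics | head
> curl -s http://localhost:9090/-/ready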

5.3.3 Configuring the Grafana Web application

To configure a data source for the Grafana Web dashboard, follow these steps:

  1. Open the Grafana Web interface, which listens on port 3000 of the ALP Dolomite host where the Grafana workload runs, for example:

    > firefox http://ALP_HOST_IP_ADDRESS:3000
  2. Log in to Grafana. The default user name and password are both set to admin. After logging in, enter a new password.

  3. Add the Prometheus data source provided by the client. In the left panel, hover your mouse over the gear icon and select Data sources.

    Figure 5.1: Grafana data sources
  4. Click Add data source and select Prometheus. Fill the URL field with the URL of the client where the Prometheus service runs on port 9090, for example:

    Figure 5.2: Prometheus URL configuration in Grafana

    Confirm with Save & test.

  5. Create a dashboard based on Prometheus data. Hover your mouse over the plus sign in the left panel and select Import.

    Figure 5.3: Creating a Grafana dashboard
  6. Enter 405 as the dashboard ID and confirm with Load.

  7. From the Prometheus drop-down list at the bottom, select the data source you have already created. Confirm with Import.

  8. Grafana shows your newly created dashboard.

    Figure 5.4: New Grafana dashboard
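For reference, the Prometheus URL entered in step 4 typically has the following form, where CLIENT_IP_ADDRESS stands for the address of the client host set up in Section 5.3.2 and 9090 is the default Prometheus port:

http://CLIENT_IP_ADDRESS:9090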

5.3.4 Usage of the grafana-container-manage.sh script

The grafana-container-manage.sh script is used to manage the Grafana container on SUSE ALP Dolomite. This article lists each subcommand of the script and describes its purpose.

grafana-container-manage.sh create

Pulls the Grafana image and creates the corresponding container.

grafana-container-manage.sh install

Installs additional files that are required to manage the grafana container.

grafana-container-manage.sh start

Starts the container called grafana.

grafana-container-manage.sh uninstall

Uninstalls all files on the host that were required to manage the grafana container.

grafana-container-manage.sh stop

Stops the grafana container.

grafana-container-manage.sh rm

Deletes the grafana container.

grafana-container-manage.sh rmcache

Removes the container image from the local cache.

grafana-container-manage.sh

Runs the grafana container.

grafana-container-manage.sh bash

Runs the bash shell inside the grafana container.

grafana-container-manage.sh logs

Displays log messages of the grafana container.
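As an illustration, a typical life cycle of the grafana container might look like the following sequence; it only combines the subcommands described above:

# grafana-container-manage.sh create
# grafana-container-manage.sh start
# grafana-container-manage.sh logs
# grafana-container-manage.sh stop
# grafana-container-manage.sh rm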

5.4 Running the NeuVector workload using Podman

NeuVector is a powerful container security platform that includes end-to-end vulnerability scanning and complete runtime protection for containers, pods and hosts. This article describes how to run NeuVector on SUSE ALP Dolomite.

Important

NeuVector requires SELinux to be set to permissive mode. To do so, run the following command:

# setenforce 0

You can find more details about SELinux in Section 2.5.3, “SELinux”.
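You can verify the current mode as follows; after the command above, getenforce should report Permissive:

# getenforce
Permissive

Note that setenforce does not persist across reboots. On a standard SELinux setup, the persistent mode is controlled by the SELINUX= line in /etc/selinux/config; this is generic SELinux behavior and not specific to ALP Dolomite.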

5.4.1 Starting NeuVector

  1. Identify the NeuVector workload image:

    # podman search neuvector
    [...]
    registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/neuvector
  2. Pull the image from the registry and install systemd services to handle NeuVector container start-up and shutdown:

    # podman container runlabel install \
      registry.opensuse.org/suse/alp/workloads/bci_containerfiles/suse/alp/workloads/neuvector-demo:latest
  3. Start the NeuVector service:

    # systemctl start neuvector.service
  4. Connect to NeuVector in the Web browser by entering the following URL:

    https://HOST_RUNNING_NEUVECTOR_SERVICE:8443

    You need to accept the warning about the self-signed SSL certificate and log in with the following default credentials: admin / admin.
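Before connecting with the browser, you can confirm that the service and its containers are running; neuvector.service is the unit started above:

# systemctl status neuvector.service
# podman ps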

5.4.2 Uninstalling NeuVector

To uninstall NeuVector, run the following command:

# podman container runlabel uninstall \
  registry.opensuse.org/suse/alp/workloads/bci_containerfiles/suse/alp/workloads/neuvector-demo:latest

5.5 Running the Ansible workload using Podman

Ansible is a suite of tools for managing and provisioning data centers via definition files. This article describes how to run Ansible on SUSE ALP Dolomite.

Important

The python3-lxml and python3-rpm packages are required for Ansible to interact with libvirt and to gather package facts. The kernel-default-base package does not contain the required drivers for multiple NetworkManager or nmcli operations, such as creating bonded interfaces. Replace it with kernel-default:

# transactional-update pkg install python3-rpm python3-lxml kernel-default -kernel-default-base
# shutdown -r now

5.5.1 Installing Ansible commands

  1. Identify the Ansible workload image:

    # podman search ansible
    [...]
    registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/ansible
  2. Pull the image from the registry and install Ansible commands.

    1. For root, the Ansible commands are placed in the /usr/local/bin directory. Run the following command to install Ansible commands for root:

      # podman container runlabel install \
        registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/ansible:latest
      Tip: Example Ansible playbooks

      If you installed the Ansible commands as root, you can find example playbooks in the /usr/local/share/ansible-container/examples directory.

    2. For non-root users, the Ansible commands are placed in the bin/ subdirectory of the current working directory. When installing them in your home directory, verify that the bin/ subdirectory exists. Run the following command to install the Ansible commands in your home directory:

      > cd && podman container runlabel user-install \
        registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/ansible:latest

After the successful installation of Ansible, the following commands are available:

  • ansible

  • ansible-community

  • ansible-config

  • ansible-connection

  • ansible-console

  • ansible-doc

  • ansible-galaxy

  • ansible-inventory

  • ansible-lint

  • ansible-playbook

  • ansible-pull

  • ansible-test

  • ansible-vault
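To verify that the installed wrappers work, you can run any of them; for example, ansible --version should report the Ansible version provided by the container image:

> ansible --version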

5.5.2 Uninstalling Ansible

To uninstall Ansible as root, run the following command:

# podman container runlabel uninstall \
  registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/ansible:latest

To uninstall Ansible as non-root, run the following command:

> cd && podman container runlabel user-uninstall \
  registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/ansible:latest

5.5.3 Operation via SSH

Because Ansible is running inside a container, the default localhost environment is the container itself and not the system hosting the container instance. Therefore, any changes made to the localhost environment are made to the container and are lost when the container exits.

Instead, Ansible can manage the host that is running the container, reachable as host.containers.internal, over an SSH connection, using an Ansible inventory similar to the following example:

alphost_group:
  hosts:
    alphost:
      ansible_host: host.containers.internal
      ansible_python_interpreter: /usr/bin/python3

An equivalent alphost default inventory item has also been added to the container's /etc/ansible/hosts inventory, which can be used by the ansible command-line tool. For example, to run the setup module to collect and show system facts from the alphost, run a command similar to the following:

# ansible alphost -m setup
  alphost | SUCCESS => {
    "ansible_facts": {
[...]
    },
    "changed": false
}
Tip

The inventory record may also contain other hosts to be managed.

Important: Set up SSH keys

The container must be able to connect to the system being managed. The following conditions must be fulfilled:

  • The system supports SSH access.

  • SSH keys are created using ssh-keygen.

  • The public SSH key is included in the .ssh/authorized_keys file for the target user.

The preferred method is to use a non-root account that has passwordless sudo privileges. Any operations in Ansible playbooks that require system privileges need to use the become: true setting.

Note that SSH access can be validated with the ssh localhost command.
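A minimal sketch of such a key setup, run as the user who installed the Ansible commands, might look as follows; the target account user is illustrative, and it is assumed that the generated key pair ends up in a location the container can access, such as your home directory:

> ssh-keygen -t ed25519
> ssh-copy-id user@host.containers.internal

Instead of ssh-copy-id, you can also append the public key to the target user's .ssh/authorized_keys file manually.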

5.5.4 Examples of Ansible playbooks

5.5.4.1 Introduction

On the ALP Dolomite system where the Ansible workload container has been installed using the install runlabel (refer to Section 5.5.1, “Installing Ansible commands” for more details), the examples are available in /usr/local/share/ansible-container/examples/ansible.

5.5.4.2 Simple Ansible test

The playbook.yml playbook tests several common Ansible operations, such as gathering facts and testing for installed packages. To invoke the play, change to the directory /usr/local/share/ansible-container/examples/ansible and enter the following command:

> ansible-playbook playbook.yml
...
PLAY RECAP *********************************************************************
alphost    : ok=8 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

5.5.4.3 Drive nmcli to change system networking

The network.yml playbook uses the community.general.nmcli plugin to test common network operations, such as assigning static IP addresses to NICs or creating bonded interfaces.

The NICs, IP addresses, bond names, and bonded NICs are defined in the vars section of the network.yml file. Update it to reflect the current environment. To invoke the play, change to the directory /usr/local/share/ansible-container/examples/ansible and enter the following command:

> ansible-playbook network.yml
...
TASK [Ping test Bond IPs] ************************************************************************************************
ok: [alphost] => (item={'name': 'bondcon0', 'ifname': 'bond0', 'ip4': '192.168.181.10/24', 'gw4': '192.168.181.2', 'mode': 'active-backup'})
ok: [alphost] => (item={'name': 'bondcon1', 'ifname': 'bond1', 'ip4': '192.168.181.11/24', 'gw4': '192.168.181.2', 'mode': 'balance-alb'})

TASK [Ping test static nics IPs] *****************************************************************************************
ok: [alphost] => (item={'name': 'enp2s0', 'ifname': 'enp2s0', 'ip4': '192.168.181.3/24', 'gw4': '192.168.181.2', 'dns4': ['8.8.8.8']})
ok: [alphost] => (item={'name': 'enp3s0', 'ifname': 'enp3s0', 'ip4': '192.168.181.4/24', 'gw4': '192.168.181.2', 'dns4': ['8.8.8.8']})

PLAY RECAP ***************************************************************************************************************
alphost                    : ok=9    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
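If you want to preview the changes before touching live interfaces, you can use the standard --check flag of ansible-playbook for a dry run; keep in mind that not every nmcli operation fully supports check mode:

> ansible-playbook network.yml --check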

5.5.4.4 Set up ALP Dolomite as a libvirt host

The setup_libvirt_host.yml playbook installs the kvm-container workload and enables the libvirtd systemd service. To invoke the play, change to the directory /usr/local/share/ansible-container/examples/ansible and enter the following command:

> ansible-playbook setup_libvirt_host.yml
...
PLAY RECAP *********************************************************************
alphost    : ok=9 changed=2 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0

> sudo /usr/local/bin/virsh list --all
using /etc/kvm-container.conf as configuration file
+ podman exec -ti libvirtd virsh list --all
Authorization not available.
Check if polkit service is running or see debug message for more information.
Note

If the required kernel and supporting packages are not already installed, a reboot is required to complete the installation of missing packages. Follow the directions generated by the playbook. After the reboot has completed successfully, rerun the playbook to finish the setup of the libvirtd service.

5.5.4.5 Create an openSUSE Tumbleweed appliance VM

The playbook creates and starts a libvirt managed VM called tumbleweed that is based on the latest available openSUSE Tumbleweed appliance VM image.

It uses the setup_libvirt_host.yml playbook (see Section 5.5.4.4, “Set up ALP Dolomite as a libvirt host”) to ensure that the ALP Dolomite host is ready to manage VMs before creating the new one. It may fail and prompt you to reboot; after rebooting, run the playbook again to finish setting up libvirt and create the VM.

To invoke the play, change to the directory /usr/local/share/ansible-container/examples/ansible and enter the following command:

> ansible-playbook create_tumbleweed_vm.yml
...
TASK [Query list of libvirt VMs] ***********************************************
ok: [alphost]

TASK [Show that Tumbleweed appliance has been created] *************************
ok: [alphost] => {
    "msg": "Running VMs: tumbleweed"
}

PLAY RECAP *********************************************************************
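Once the play has finished, you can interact with the new VM through the same virsh wrapper that the libvirt host setup installed; the VM name tumbleweed is the one created by the playbook:

> sudo /usr/local/bin/virsh list --all
> sudo /usr/local/bin/virsh console tumbleweed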

5.6 Running the Kea DHCP server using Podman

Kea is an open-source DHCP server that supports both DHCPv4 and DHCPv6 protocols. It provides IPv6 prefix delegation, host reservations optionally stored in a database, PXE boot, high-availability setup and other features.

5.6.1 Deploying and running the Kea workload

  1. Identify the Kea DHCP server container image:

    # podman search kea
    [...]
    registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/kea
  2. Pull the image from the registry:

    # podman pull \
    registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/kea:latest
  3. Install all required parts of the Kea workload:

    # podman container runlabel install \
    registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/kea:latest

    The previous command installs:

    • Default configuration files in the /etc/kea directory

    • The keactrl wrapper in the /usr/local/bin directory

    • systemd service files for the dhcp4 and dhcp6 containers in the /etc/systemd/system/ directory

  4. Run the Kea DHCP server. You can run it either by using systemd unit files or manually.

    Tip

    To run the DHCP server with firewalld active, you need to add exception rules based on the DHCP version you are using.

    For DHCPv4:

    > sudo firewall-cmd --add-service=dhcp

    For DHCPv6:

    > sudo firewall-cmd --add-service=dhcpv6
    1. To run Kea as a systemd service, use one of the following commands:

      # systemctl start kea-dhcp4.service

      Or, for DHCPv6:

      # systemctl start kea-dhcp6.service
    2. To run Kea manually, use one of the following commands:

      # podman container runlabel run \
      registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/kea:latest

      Or, for DHCPv6:

      # podman container runlabel run_dhcp6 \
      registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/kea:latest
  5. Optionally, you can uninstall the Kea workload. The following command removes all Kea-related files except for the configuration directory and its content:

    # podman container runlabel uninstall \
    registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/kea:latest
    Tip

    The purge runlabel removes the Kea configuration directory /etc/kea but leaves the rest of the Kea deployment in place:

    # podman container runlabel purge \
    registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/kea:latest

5.6.2 Configuration files

The Kea configuration files—kea-dhcp4.conf and kea-dhcp6.conf—are located in the /etc/kea directory. They include the default configuration. You can find detailed information about configuring the DHCP server in the official documentation at https://kea.readthedocs.io/.

Tip

If you modify configuration files, run keactrl reload to apply the changes to the running servers.

5.6.3 The keactrl wrapper

The installed keactrl wrapper uses the original keactrl tool to send commands to the deployed containers. It uses the same options as the original tool with one exception: the -s option is adjusted to send commands to the DHCPv4 server (-s dhcp4) or the DHCPv6 server (-s dhcp6). If -s is not specified, keactrl sends commands to both servers if they are started.
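For example, to query the status of the DHCPv4 server only, or to reload the configuration of the DHCPv6 server only, you could run commands similar to the following; status and reload are standard keactrl commands:

# keactrl status -s dhcp4
# keactrl reload -s dhcp6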

5.7 For more information