SUSE Linux Enterprise Server 15 SP2

Container Guide

This guide provides an introduction to the SUSE container ecosystem. This document is a work in progress. The content in this document is subject to change without notice.

Authors: Dmitri Popov and Nora Kořánová
Publication Date: November 26, 2021
About This Guide
Required Background
Giving Feedback
Documentation Conventions
1 Introduction to Linux Containers
1.1 Key Concepts and Brief Introduction to Podman
2 Tools for Building Images and Managing Containers
2.1 Tools Available to Customers
2.2 SUSE Build Tools
2.3 Building Official SLE Images
3 Docker Open Source Engine Overview
3.1 Docker Open Source Engine Architecture
4 Setting Up Docker Open Source Engine
4.1 Preparing the Host
4.2 Configuring the Network
4.3 Storage Drivers
4.4 Updates
5 Configuring Image Storage
5.1 What is Docker Registry?
5.2 Running a Docker Registry
5.3 Limitations
5.4 Portus
6 Obtaining Containers
6.1 SUSE Linux Enterprise Base Images
6.2 SUSE Container Properties
6.3 SUSE Registry
6.4 Verifying Containers
6.5 Comparing Containers
6.6 On-Premises Registry
7 Creating Custom Container Images
7.1 Pulling Base SLES Images
7.2 Customizing SLES Container Images
8 Creating Application Images
8.1 Running an Application with Specific Package Versions
8.2 Running Applications with a Specific Configuration
8.3 Sharing Data Between an Application and the Host System
8.4 Applications Running in the Background
9 Working with Containers
9.1 Starting and Removing Containers
10 Podman Overview
10.1 Podman Installation
10.2 Podman Basic Usage
11 Buildah Overview
11.1 Podman and Buildah
11.2 Buildah Installation
11.3 Building Images with Buildah
12 Container Orchestration
12.1 Pod Deployment with Podman
13 Troubleshooting
13.1 Analyze Container Images with container-diff
14 Support Plans
14.1 Supported Containers on SUSE Host Environments
14.2 Supported Container Host Environments
A Terminology
B GNU Licenses
B.1 GNU Free Documentation License

Copyright © 2006–2021 SUSE LLC and contributors. All rights reserved.

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant Section being this copyright notice and license. A copy of the license version 1.2 is included in the section entitled GNU Free Documentation License.

For SUSE trademarks, see https://www.suse.com/company/legal/. All other third-party trademarks are the property of their respective owners. Trademark symbols (®, ™, etc.) denote trademarks of SUSE and its affiliates. Asterisks (*) denote third-party trademarks.

All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors nor the translators shall be held liable for possible errors or the consequences thereof.

About This Guide

This guide provides an introduction to the SUSE container ecosystem.

1 Required Background

To keep the scope of these guidelines manageable, certain technical assumptions have been made:

  • You have some computer experience and are familiar with common technical terms.

  • You are familiar with the documentation for your system and the network on which it runs.

  • You have a basic understanding of Linux systems.

2 Giving Feedback

Your feedback and contributions to this documentation are welcome! Several channels are available:

Service Requests and Support

For services and support options available for your product, refer to https://www.suse.com/support/.

To open a service request, you need a subscription at SUSE Customer Center. Go to https://scc.suse.com/support/requests, log in, and click Create New.

Bug Reports

Report issues with the documentation at https://bugzilla.suse.com/. To simplify this process, you can use the Report Documentation Bug links next to headlines in the HTML version of this document. These preselect the right product and category in Bugzilla and add a link to the current section. You can start typing your bug report right away. A Bugzilla account is required.

Contributions

To contribute to this documentation, use the Edit Source links next to headlines in the HTML version of this document. They take you to the source code on GitHub, where you can open a pull request. A GitHub account is required.

For more information about the documentation environment used for this documentation, see the repository's README.

Mail

Alternatively, you can report errors and send feedback concerning the documentation to <>. Make sure to include the document title, the product version and the publication date of the documentation. Refer to the relevant section number and title (or include the URL) and provide a concise description of the problem.

3 Documentation Conventions

The following notices and typographical conventions are used in this documentation:

  • /etc/passwd: directory names and file names

  • PLACEHOLDER: replace PLACEHOLDER with the actual value

  • PATH: the environment variable PATH

  • ls, --help: commands, options, and parameters

  • user: users or groups

  • package name: name of a package

  • Alt, AltF1: a key to press or a key combination; keys are shown in uppercase as on a keyboard

  • File, File › Save As: menu items, buttons

  • AMD/Intel This paragraph is only relevant for the AMD64/Intel 64 architecture. The arrows mark the beginning and the end of the text block.

    IBM Z, POWER This paragraph is only relevant for the architectures IBM Z and POWER. The arrows mark the beginning and the end of the text block.

  • Dancing Penguins (Chapter Penguins, ↑Another Manual): This is a reference to a chapter in another manual.

  • Commands that must be run with root privileges. Often you can also prefix these commands with the sudo command to run them as a non-privileged user.

    root # command
    tux > sudo command
  • Commands that can be run by non-privileged users.

    tux > command
  • Notices

    Warning: Warning Notice

    Vital information you must be aware of before proceeding. Warns you about security issues, potential loss of data, damage to hardware, or physical hazards.

    Important: Important Notice

    Important information you should be aware of before proceeding.

    Note: Note Notice

    Additional information, for example about differences in software versions.

    Tip: Tip Notice

    Helpful information, like a guideline or a piece of practical advice.

1 Introduction to Linux Containers

Linux containers offer a lightweight virtualization method to run multiple virtual environments (containers) simultaneously on a single host. Unlike technologies like Xen or KVM, where the processor simulates a complete hardware environment and a hypervisor controls virtual machines, containers provide virtualization at the operating system level, where the kernel controls the isolated containers.

Advantages of Using Containers
  • Containers make it possible to isolate applications in self-contained units.

  • Containers provide near-native performance. Depending on the runtime, a container can use the host kernel directly, thus minimizing overhead.

  • It is possible to control network interfaces and apply resources inside containers through kernel control groups (see Book “System Analysis and Tuning Guide”, Chapter 9 “Kernel Control Groups”).

Limitations of Containers
  • Containers run on the host system's kernel, so they cannot use different kernels or different kernel versions.

  • Only Linux-based applications can be containerized.

  • Containers are not secure, and the overall security depends on the host system. Containerized applications can be secured through AppArmor or SELinux profiles. Securing containers is harder than securing virtual machines, due to the larger attack surface.

1.1 Key Concepts and Brief Introduction to Podman

Although Docker Open Source Engine is a popular choice for working with images and containers, Podman provides a drop-in replacement for Docker that offers several advantages. While Chapter 10, Podman Overview provides more information on Podman, this chapter offers a quick introduction to key concepts and a basic procedure of creating a container image and using it to run a container.

Running a container, either on a local machine or a cloud service, usually involves the following steps:

  1. Fetch a base image by pulling it from a registry to your local machine

  2. Create a Dockerfile and use it to build a custom image on top of the base image

  3. Use the created image to start one or more containers

To run a container, you need an image. An image includes all the dependencies needed to run the application. For example, the SLE base image contains the SLE distribution with a minimal package selection.

While it is possible to create an image from scratch, few applications would work in such an empty environment. Thus, using an existing base image is more practical in most situations. A base image has no parent, meaning it is not based on another image.

Although you can use a base image for running containers, the main purpose of base images is to serve as foundations for creating custom images that can run containers with specific applications, servers, services, and so on.

Both base and custom images are usually available through a repository of images called a registry. Unless a registry is explicitly specified, Podman pulls images from the Docker Hub registry. While you can fetch a base image manually, Podman can do that automatically when building a custom image.

To build a custom image, you need to create a special file called Containerfile or Dockerfile, which contains build instructions. For example, a Dockerfile can contain instructions to update the system software, install the desired application, open specific network ports, run commands, and so on.

You can build images not only from base images, but also on top of custom images. So you can have an image consisting of multiple layers.

1.1.1 Practical Example

The following procedure shows how to build a custom Docker image that can be used to run a container with a simple PHP application called example, served using the built-in PHP development server.

Procedure 1.1: Building an Image and Running a Container
  1. Install Podman:

    tux > sudo zypper in podman
  2. Switch to the PHP project's directory and create a file named Dockerfile:

    tux > cd example
    tux > touch Dockerfile
  3. Open the Dockerfile file for editing, and add the following:

    FROM php:7.4-cli
    COPY . /usr/src/example
    WORKDIR /usr/src/example
    EXPOSE 8000
    CMD [ "php", "-S", "0.0.0.0:8000" ]
  4. Build a container image:

    tux > sudo podman build -t example .
  5. Run a container:

    tux > sudo podman run -it -p8000:8000 --rm example
  6. Point the browser to localhost:8000 to access the application running in the container.

Note that SUSE does not provide support for third-party images, such as the one used in this example.

2 Tools for Building Images and Managing Containers

This chapter provides a brief overview of tools for building images and managing containers. Most of the tools mentioned below are part of the SUSE Linux Enterprise Server 15 SP2 Containers Module. You can see the full list of packages in the Containers Module in the SUSE Customer Center.

2.1 Tools Available to Customers

2.1.1 Docker

Docker is a system for creating and managing containers. Its core is the Docker Open Source Engine—a lightweight virtualization solution to run containers simultaneously on a single host. Docker containers can be built using Dockerfiles (see Dockerfile). For a general introduction to Docker Open Source Engine, refer to Chapter 3, Docker Open Source Engine Overview.

2.1.2 Podman

Podman stands for Pod Manager tool. It is a daemonless container engine for developing, managing, and running Open Container Initiative (OCI) containers on a Linux system, and it offers a drop-in alternative for Docker. Podman is the default container runtime in openSUSE Kubic—a certified Kubernetes distribution built on top of openSUSE. For a general introduction to Podman, refer to Chapter 10, Podman Overview.

2.1.3 Buildah

Buildah facilitates building OCI container images. It is a complementary tool to Podman, and podman build uses Buildah to perform container image builds. Buildah makes it possible to build images from scratch, from existing images, and using Dockerfiles. OCI images built using the Buildah command-line tool and the underlying OCI-based technologies (for example, containers/image and containers/storage) are portable and can therefore run in a Docker Open Source Engine environment.

For information on installing and using Buildah, refer to Chapter 11, Buildah Overview.

2.2 SUSE Build Tools

2.2.1 Open Build Service

The Open Build Service (OBS) provides free infrastructure for building and storing RPM packages including various container formats. The OBS Container Registry provides a detailed listing of all container images built by the OBS, complete with commands for pulling the images into your local Docker environment. The OBS openSUSE container image templates can be modified to specific needs, which offers the easiest way to create your own container branch. Container images can be built with native Docker tools from an existing image using a Dockerfile. Alternatively, images can be built from scratch using the KIWI image-building solution.

Instructions on how to build images on OBS can be found at https://openbuildservice.org/2018/05/09/container-building-and-distribution/.

2.2.2 KIWI

KIWI Next Generation is a multi-purpose tool for building images. In addition to container images, regular installation ISO images, and images for virtual machines, KIWI can build images that boot via PXE, as well as Vagrant boxes. The main building block in KIWI is an image XML description, a directory that includes the config.xml or .kiwi file along with scripts or configuration data. The process of creating images with KIWI is fully automated and does not require any user interaction. Any information required for the image creation process is provided by the primary configuration file config.xml. The image can be customized using the config.sh and images.sh scripts.

Note

It is important to distinguish between KIWI NG (currently version 9.20.9) and its unmaintained legacy versions (7.x.x or older), now called KIWI Legacy.

For specific information on how to install KIWI and use it to build images, see the KIWI documentation. A collection of example image descriptions can be found in the KIWI GitHub repository.

KIWI's man pages provide information on using the tool. To access man pages, install the kiwi-man-pages package.

2.3 Building Official SLE Images

Images are considered official only if they are built using the Internal Build Service.

There are no official SLE container images on https://build.opensuse.org, and the RPMs exported there are not identical to the internal ones. This means that it is not possible to build officially supported images on https://build.opensuse.org.

3 Docker Open Source Engine Overview

The Docker Open Source Engine is a lightweight virtualization solution to run multiple virtual Linux environments (containers) simultaneously on top of a single Linux kernel, without a hypervisor. Containers are isolated using Kernel cgroups (Control groups) and Namespaces.

Full virtualization solutions, such as Xen, KVM, or libvirt, are based on simulating a complete hardware environment and running multiple operating system instances inside these virtual machines. The Docker Open Source Engine provides operating-system-level virtualization: a single Linux kernel controls multiple isolated containers.

The Docker Open Source Engine allows developers and system administrators to manage the complete life cycle of images, and makes it easy to build, ship, and run images containing applications.

Docker Open Source Engine has the following advantages:

  • Isolation of applications through containers.

  • Near-native performance, as the Docker Open Source Engine manages allocation of resources in real time.

  • Control network interfaces and resources available inside containers through cgroups.

  • Versioning of images.

  • Building new images based on existing ones.

  • Container orchestration.

Docker Open Source Engine has the following limitations:

  • Containers run on the host system's kernel and cannot use a different kernel.

  • Only supports Linux applications and not other operating systems.

  • Docker Open Source Engine is not a full virtualization stack like Xen, KVM, or libvirt.

  • Security depends on the host system. Refer to the official security documentation for more details.

3.1 Docker Open Source Engine Architecture

Docker Open Source Engine uses a client/server architecture. You can use the CLI client to communicate with the daemon. The daemon performs operations with containers and manages images locally or in a registry. The CLI client can run on the same server as the host daemon or on a different machine. The CLI client communicates with the daemon by using network sockets. The architecture is shown in Figure 3.1, “The Docker Open Source Engine Architecture”.

The Docker Open Source Engine Architecture
Figure 3.1: The Docker Open Source Engine Architecture

4 Setting Up Docker Open Source Engine

4.1 Preparing the Host

Prepare the host as described below. Before installing any Docker-related packages, you need to enable the container module:

Note: Built-in Docker Orchestration Support

Starting with Docker Open Source Engine 1.12, container orchestration is now an integral part of the Docker Open Source Engine. Even though this feature is available in SUSE Linux Enterprise Server, it is not supported by SUSE and is only provided as a technology preview. Use Kubernetes for container orchestration. For details, refer to the Kubernetes documentation.

Procedure 4.1: Enabling the Container Module Using Graphical User Interface YaST
  1. Start YaST, and select Software › Software Repositories.

  2. Click Add to open the add-on dialog.

  3. Select Extensions and Modules from Registration Server and click Next.

  4. From the list of available extensions and modules, select Container Module 15 x86_64 and click Next.

    The containers module and its repositories will be added to your system.

  5. If you use Repository Mirroring Tool, update the list of repositories on the RMT server.

Procedure 4.2: Enabling the Container Module from Command Line Using SUSEConnect
  • The Container Module can also be added with the following command:

    tux > sudo SUSEConnect -p sle-module-containers/15.2/x86_64
Procedure 4.3: Installing and Setting Up the Docker Open Source Engine
  1. Install the docker package:

    tux > sudo zypper install docker
  2. To automatically start the Docker service at boot time:

    tux > sudo systemctl enable docker.service

    This also automatically enables docker.socket.

  3. To use Portus (for more info on Portus, see Section 5.4, “Portus”) and an SSL-secured registry:

    1. Open the /etc/sysconfig/docker file. Search for the parameter DOCKER_OPTS and add --insecure-registry ADDRESS_OF_YOUR_REGISTRY.

    2. Copy the CA certificate of your registry to the /etc/pki/trust/anchors/ directory (and, if your setup requires it, to /etc/docker/certs.d/REGISTRY_ADDRESS):

      tux > sudo cp CA /etc/pki/trust/anchors/
    3. Update the system certificate store:

      tux > sudo update-ca-certificates
  4. Start the Docker service:

    tux > sudo systemctl start docker.service

    This automatically starts docker.socket.

The Docker daemon listens on a local socket accessible only by the root user and by the members of the docker group. The docker group is automatically created during package installation.

To allow a certain user to connect to the local Docker daemon, use the following command:

tux > sudo /usr/sbin/usermod -aG docker USERNAME

This allows the user to communicate with the local Docker daemon.

4.2 Configuring the Network

To give the containers access to the external network, enable the net.ipv4.ip_forward kernel parameter.
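
For example, you can enable the parameter at runtime with sysctl and persist it across reboots with a drop-in file (the file name under /etc/sysctl.d/ is arbitrary and only illustrative):

tux > sudo sysctl -w net.ipv4.ip_forward=1
tux > echo "net.ipv4.ip_forward = 1" | sudo tee /etc/sysctl.d/90-ip-forward.conf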

4.2.1 How the Docker Open Source Engine Interacts with iptables

To learn more about how containers interact with each other and the system firewall, see the Docker documentation.

It is also possible to completely prevent the Docker Open Source Engine from manipulating iptables. See the Docker documentation.

4.3 Storage Drivers

Docker Open Source Engine supports different storage drivers:

  • vfs: this driver is automatically used when the Docker host file system does not support copy-on-write. This driver is simpler than the others listed and does not leverage certain advantages of the Docker Open Source Engine such as shared layers. It is a reliable but slow driver.

  • devicemapper: this driver relies on the device-mapper thin provisioning module. It supports copy-on-write, so it leverages all the advantages of the Docker Open Source Engine.

  • btrfs: this driver relies on Btrfs to provide all the features required by the Docker Open Source Engine. To use this driver the /var/lib/docker directory must be on a Btrfs file system.

Since SUSE Linux Enterprise Server 12, the Btrfs file system is used by default, which forces the Docker Open Source Engine to use the btrfs driver.

It is possible to specify what driver to use by changing the value of the DOCKER_OPTS variable defined in the /etc/sysconfig/docker file. This can be done either manually or using YaST by browsing to System › /etc/sysconfig Editor › System › Management › DOCKER_OPTS menu and entering the -s storage_driver string.

For example, to force the usage of the devicemapper driver enter the following text:

DOCKER_OPTS="-s devicemapper"
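
To verify which storage driver the running daemon uses, check the output of docker info (the driver shown below is only an example):

tux > sudo docker info | grep "Storage Driver"
Storage Driver: btrfs
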
Important: Mounting /var/lib/docker

It is recommended to mount /var/lib/docker on a separate partition or volume. In case of file system corruption, this would leave the operating system running the Docker Open Source Engine unaffected.

If you choose the Btrfs file system for /var/lib/docker, it is strongly recommended to create a subvolume for it. This ensures that the directory is excluded from file system snapshots. If you do not exclude /var/lib/docker from snapshots, the file system will likely run out of disk space soon after you start deploying containers. In addition, a rollback to a previous snapshot will also reset the Docker database and images. For more information, see Book “Administration Guide”, Chapter 7 “System Recovery and Snapshot Management with Snapper”, Section 7.1.4.3 “Creating and Mounting New Subvolumes”.

4.4 Updates

All updates to the docker package are marked as interactive (that is, no automatic updates) to avoid accidental updates breaking running container workloads. In general, we recommend stopping all running containers before applying an update to Docker Open Source Engine.
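
For example, the following commands stop all running containers and then apply the update. This is only a sketch; adjust it to your workload before use:

tux > sudo docker stop $(sudo docker ps --quiet)
tux > sudo zypper update docker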

To avoid data loss, we do not recommend having workloads rely on containers being startable after an update to Docker Open Source Engine. Although it is technically possible to keep containers running during an update via the --live-restore option, experience has shown that such updates can introduce regressions. SUSE does not support this feature.

5 Configuring Image Storage

Before creating custom images, decide where you want to store images. The easiest solution is to push images to Docker Hub. By default, all images pushed to Docker Hub are public. Make sure not to publish sensitive data or software not licensed for public use.

You can restrict access to custom container images with the following:

  • Docker Hub allows creating private repositories for paid subscribers.

  • An on-site Docker Registry allows storing all the container images used by your organization. This can be combined with Portus to secure the registry.

This chapter describes the second option: setting up an on-site Docker Registry and combining it with Portus.

5.1 What is Docker Registry?

Docker Registry is an open-source platform for storing and retrieving container images. By running a local instance of Docker Registry, you can avoid using Docker Hub entirely.

Docker Registry is also used by Docker Hub. However, from a user's point of view, Docker Hub consists of the following components:

The user interface (UI)

The part that is accessed by users using a browser. The UI provides an easy way to browse the contents of Docker Hub, either manually or using a search feature. It can also be used to create organizations of different users.

This component is closed-source.

The authentication component

This component is used to protect the images stored in Docker Hub. It validates all push, pull, and search requests.

This component is closed-source.

The storage back-end

A place that images are uploaded to and downloaded from. It is provided by Docker Registry.

This component is open-source.

5.2 Running a Docker Registry

The SUSE Registry provides a container image that makes it possible to run a local Docker Registry as a container. Before you start a container, create a config.yml file with the following example configuration:

version: 0.1
log:
  level: info
storage:
  filesystem:
    rootdirectory: /var/lib/docker-registry
http:
  addr: 0.0.0.0:5000

Also create an empty directory that will be mapped to the /var/lib/docker-registry directory inside the container. This directory is used for storing container images.

Run the following command to pull the registry container image from the SUSE Registry and start a container that can be accessed on port 5000:

podman run -d --restart=always --name registry -p 5000:5000 \
-v /PATH/config.yml:/etc/docker/registry/config.yml \
-v /PATH/DIR:/var/lib/docker-registry \
registry.suse.com/sles12/registry:2.6.2

To make it easier to manage the registry, create a corresponding system unit:

root #  podman generate systemd registry >  \
 /etc/systemd/system/suse_registry.service

Enable and start the registry service, then verify its status:

root # systemctl enable suse_registry.service
root # systemctl start suse_registry.service
root # systemctl status suse_registry.service
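
To verify that the registry works, you can pull an image, tag it with the address of the local registry, and push it. The example below assumes the registry is reachable on localhost:5000; --tls-verify=false is needed because the example configuration above does not set up TLS:

tux > podman pull registry.suse.com/suse/sle15
tux > podman tag registry.suse.com/suse/sle15 localhost:5000/suse/sle15
tux > podman push --tls-verify=false localhost:5000/suse/sle15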

For more details about Docker Registry and its configuration, see the official documentation at https://docs.docker.com/registry/.

5.3 Limitations

Docker Registry has two major limitations:

  • It lacks any form of authentication. This means that anyone with access to Docker Registry can push and pull images, and even overwrite existing ones.

  • There is no way to see which images have been pushed to Docker Registry. You need to manually keep track of what is stored in it. There is also no search functionality. These limitations are resolved by installing Portus.

5.4 Portus

Portus is an authentication service and user interface for Docker Registry. It is an open-source project created by SUSE to address the limitations of local instances of Docker Registry. By combining Portus and Docker Registry, it is possible to have a secure and enterprise-ready on-premises version of Docker Hub.

Portus is available for SUSE Linux Enterprise Server customers as a container image from SUSE Container Registry. For example, to pull the 2.4.3 tag of the SUSE Linux Enterprise Server 12 image, run the following command:

tux > podman pull registry.suse.com/sles12/portus:2.4.3

In addition to the official version of the Portus image from SUSE Container Registry, there is a community version on Docker Hub. However, if you are a SUSE Linux Enterprise Server customer, we strongly suggest using the official Portus image. The Portus image for SUSE Linux Enterprise Server customers contains the same code as the community one, so the setup instructions from http://port.us.org/docs/deploy.html apply to both images.

6 Obtaining Containers

This chapter provides information on obtaining container images.

6.1 SUSE Linux Enterprise Base Images

SUSE offers a number of official base container images that can be used as a starting point for building custom containers. Each SLE base image provides a minimal environment with a shell and package management.

Base images are available from https://registry.suse.com. For information about the SUSE Registry, see Section 6.3, “SUSE Registry”. All base images in the SUSE Registry have the status General Availability (that is, they are suitable for production use); this includes the images for the LTSS releases of SLES 12 and SLES 15. SUSE Linux Enterprise base images in the SUSE Registry receive security updates and are covered by the SUSE support plans. For more information about these support plans, see Chapter 14, Support Plans.

6.2 SUSE Container Properties

SUSE container images have identifiers that provide information about their version, origin, and creation time. The individual identifiers listed below can be accessed after you pull a container image from the repository and run podman inspect on it.
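
For example, the following commands pull the SLE 15 base image and read one of its labels (the label name is described in Section 6.2.2, “Labels”):

tux > podman pull registry.suse.com/suse/sle15
tux > podman inspect --format '{{ index .Labels "org.opencontainers.image.version" }}' registry.suse.com/suse/sle15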

6.2.1 Repository Names

Repository names start with the name of the product, for example: suse/sle..., opensuse/tumbleweed, or caasp/.... The SLE 15 containers for all service packs reside in the repository suse/sle15. However, for SLE 12, there is a separate repository name for each service pack, for example suse/sles12sp3, suse/sles12sp4, suse/sles12sp5.

6.2.2 Labels

Labels help to identify images. All SLE container image labels begin with com.suse.PRODUCT.CONTAINER_NAME followed by a further specification. Container images also contain org.opencontainers.image labels.

Below is a list of all currently defined labels.

org.opencontainers.image.title, com.suse.sle.base.title
  • Must be provided by derived images: Yes

  • OCI notation: org.opencontainers.image.title

  • Description: Title of the image

  • Example: SUSE Linux Enterprise 15 Base Container

org.opencontainers.image.description, com.suse.sle.base.description
  • Must be provided by derived images: Yes

  • OCI notation: org.opencontainers.image.description

  • Description: Short description of the image

  • Example: Image containing a minimal environment for containers based on SUSE Linux Enterprise 15

org.opencontainers.image.version, com.suse.sle.base.version
  • Must be provided by derived images: Yes

  • OCI notation: org.opencontainers.image.version

  • Description: Image version (MAJOR.SP.CICOUNT.BUILDCOUNT)

  • Example: 15.0.4.2

org.opencontainers.image.created, com.suse.sle.base.created
  • Must be provided by derived images: Yes

  • OCI notation: org.opencontainers.image.created

  • Description: Timestamp of image build

  • Example: 2018-07-27T14:12:30Z

org.opencontainers.image.vendor, com.suse.sle.base.vendor
  • Must be provided by derived images: No

  • OCI notation: org.opencontainers.image.vendor

  • Description: Image vendor

  • Example: SUSE LLC

org.opencontainers.image.url, com.suse.sle.base.url
  • Must be provided by derived images: No

  • OCI notation: org.opencontainers.image.url

  • Description: Additional information

  • Example: https://www.suse.com/products/server/

org.openbuildservice.disturl, com.suse.sle.base.disturl
  • Must be provided by derived images: Yes

  • OCI notation: org.openbuildservice.disturl

  • Description: Image OBS URL

  • Example: obs://build.suse.de/SUSE:SLE-15:Update:CR/images/2951b67133dd6384cacb28203174e030-sles15-image

org.opensuse.reference, com.suse.sle.base.reference
  • Must be provided by derived images: Yes

  • OCI notation: org.opensuse.reference

  • Description: Reference pointing to the image. The image you get with docker pull REF_NAME must not change.

  • Example: registry.suse.com/suse/sle15:4.2

6.2.3 Tags

Tags are used to refer to images. A tag forms a part of the image's name. Unlike labels, tags can be freely defined, and they are usually used to indicate a version number.

If a tag exists in multiple images, the newest image is used. The image maintainer decides which tags to assign to the container image.

The conventional tag format is repository name: image version specification (usually version number). For example, the tag for the latest published image of SUSE Linux Enterprise Server 15 SP2 would be suse/sle15:15.2.
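
For example, to pull this image from the SUSE Registry by its tag:

tux > podman pull registry.suse.com/suse/sle15:15.2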

6.3 SUSE Registry

The official SUSE Registry is available at https://registry.suse.com. It contains tested and updated SUSE Linux Enterprise base container images. All images in the SUSE Registry undergo a maintenance process. The images are built to contain the latest available updates and fixes. The SUSE Registry's Web user interface lists a subset of the available images.

6.4 Verifying Containers

Signatures for images available through SUSE Registry are stored in the Notary. You can verify the signature of a specific image using the following command:

docker trust inspect --pretty registry.suse.com/suse/IMAGE:TAG

For example, the command docker trust inspect --pretty registry.suse.com/suse/sle15:latest verifies the signature of the latest SLE15 base image.

To automatically validate an image when you pull it, set the environment variable DOCKER_CONTENT_TRUST to 1. For example:

env DOCKER_CONTENT_TRUST=1 docker pull registry.suse.com/suse/sle15:latest

6.5 Comparing Containers

The container-diff tool can be used for analyzing and comparing container images. container-diff can examine images along several different criteria, including the following:

  • Docker Image History

  • Image file system

  • DEB packages

  • RPM packages

  • PyPI packages

  • NPM packages

You can inspect a single image, or perform a diff operation on two images. container-diff supports Docker images located in both a local Docker daemon and a remote registry. It is also possible to use the tool with .tar, .tar.gz, and .tgz archives.
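
For example, the following commands analyze the history of a single image from the local daemon and compare the RPM packages of two images in a remote registry (illustrative invocations; see the tool's documentation for all options):

tux > container-diff analyze daemon://IMAGE --type=history
tux > container-diff diff remote://IMAGE_ONE remote://IMAGE_TWO --type=rpm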

The container-diff package is part of the SUSE Linux Enterprise Server 15 SP2 Containers Module. Alternatively, it can be installed separately. For instructions on installing it, see the container-diff documentation.

6.6 On-Premises Registry

6.6.1 Portus

Portus is an on-premises application that provides a graphical interface and an authorization mechanism for Docker registries. For a more detailed description of Portus functionality, see http://port.us.org/features.html.

Portus can be deployed using a standard Docker container, inside a Kubernetes cluster, or on bare metal. For deployment options and instructions on how to get started with Portus in a development environment, see http://port.us.org/docs/deploy.html.

7 Creating Custom Container Images

To create a custom image, you need a base image of SUSE Linux Enterprise Server. You can use any of the pre-built SUSE Linux Enterprise Server images.

7.1 Pulling Base SLES Images

To obtain a pre-built base image for SUSE Linux Enterprise 12 SP3 and later, use the following command:

tux > docker pull registry.suse.com/suse/IMAGENAME

For example, for SUSE Linux Enterprise Server 15, the command is as follows:

tux > docker pull registry.suse.com/suse/sle15

sle2docker is not required, because the image is being pulled from the Docker Registry.

For information on obtaining specific base images, refer to Section 6.1, “SUSE Linux Enterprise Base Images”.

When the container image is ready, you can customize it as described in Section 7.2, “Customizing SLES Container Images”.

7.2 Customizing SLES Container Images

The pre-built images do not have any repositories configured and do not include any modules or extensions. They contain a zypper service that contacts either the SUSE® Customer Center or a Repository Mirroring Tool (RMT) server, according to the configuration of the SUSE Linux Enterprise Server host that runs the container. The service obtains the list of repositories available for the product used by the container image. You can also directly declare extensions in your Dockerfile. For more information, see Section 7.2.3, “Adding SLE Extensions and Modules to Images”.

You do not need to add any credentials to the container image, because the machine credentials are automatically injected into the /run/secrets directory in the container by the docker daemon. The same applies to the /etc/SUSEConnect file of the host system, which is automatically injected into the /run/secrets directory.

Note: Credentials and Security

The contents of the /run/secrets directory are never included in a container image, hence there is no risk of your credentials leaking.

Note: Building Images on Systems Registered with RMT

When the host system used for building container images is registered with RMT, the default behavior allows only building containers of the same code base as the host. For example, if your container host is an SLE 15 system, you can only build SLE 15-based images on that host by default. To build images for a different SLE version, for example SLE 12 on an SLE 15 host, the host machine credentials for the target release can be injected into the container as outlined below.

When the host system is registered with SUSE Customer Center, this restriction does not apply.

Note: Building Container Images in On-Demand SLE Instances in the Public Cloud

Building container images on SLE instances that were launched as so-called on-demand or pay as you go instances on a public cloud (AWS, GCE, or Azure) requires additional steps. To install packages and updates, the on-demand public cloud instances are connected to update infrastructure. This infrastructure is based on RMT servers operated by SUSE on the various public cloud providers.

Therefore, your machines need to locate the required services and authenticate with them. This can be done using the containerbuild-regionsrv service. This service is available in the public cloud images provided through the marketplaces of the various public cloud providers. Before building an image, this service must be started on the public cloud instance by running the following command:

tux > sudo systemctl start containerbuild-regionsrv

To start it automatically on system start-up, enable it:

tux > sudo systemctl enable containerbuild-regionsrv

The Zypper plug-ins provided by the SLE base images connect to this service and retrieve authentication details and information about which update server to talk to. For this to work, the container has to be built with host networking enabled, for example:

tux > docker build --network host build-directory/

Since the update infrastructure in the public clouds is based upon RMT, the restrictions on building SLE images for SLE versions different from the SLE version of the host apply as well (see Note: Building Images on Systems Registered with RMT).

To obtain the list of repositories, use the following command:

tux > sudo zypper ref -s

This automatically adds all the repositories to the container. For each repository added to the system, a new file will be created under /etc/zypp/repos.d. The URLs of these repositories include an access token that automatically expires after 12 hours. To renew the token, run the command zypper ref -s. Including these files in a container image does not pose any security risk.

To use a different set of credentials, put a custom /etc/zypp/credentials.d/SCCcredentials file inside of the container image. It contains the machine credentials that have the subscription you want to use. The same applies to the SUSEConnect file: to override the existing file on the host system running the container, add a custom /etc/SUSEConnect file inside of the container image.
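
The following is a minimal sketch of a Dockerfile that injects such custom credentials. It assumes a file named SCCcredentials in the build context:

FROM registry.suse.com/suse/sle15
# Inject alternative machine credentials from the build context (illustrative)
COPY SCCcredentials /etc/zypp/credentials.d/SCCcredentials
RUN zypper ref -s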

Now you can create a custom container image by using a Dockerfile as described in Section 7.2.1, “Creating a Custom Image for SLE 12 SP3 and Later”.

If you want to move your application to a container, see Chapter 8, Creating Application Images.

After you have edited the Dockerfile, build the image by running the following command in the same directory in which the Dockerfile resides:

tux > docker build .

For more information about docker build options, see the official Docker documentation.

Note: Creating Application Images

For information about creating a Dockerfile for the application you want to run inside a container, see Chapter 8, Creating Application Images.

7.2.1 Creating a Custom Image for SLE 12 SP3 and Later

The following Dockerfile creates a simple container image based on SUSE Linux Enterprise Server 15:

        FROM registry.suse.com/suse/sle15

        RUN zypper ref -s
        RUN zypper -n in vim

When the Docker host machine is registered with an internal RMT server, the image requires the SSL certificate used by RMT:

        FROM registry.suse.com/suse/sle15

        # Import the crt file of our private SMT server
        ADD http://smt.example.com/smt.crt /etc/pki/trust/anchors/smt.crt
        RUN update-ca-certificates

        RUN zypper ref -s
        RUN zypper -n in vim

7.2.2 Meta Information in SLE Container Images

Starting with SUSE Linux Enterprise 12 SP3, all base container images include information such as a build time-stamp and description. This information is provided in the form of labels attached to the base images, and is therefore available for derived images and containers (see Section 6.2.2, “Labels”). This information can be viewed with docker inspect:

        tux > docker inspect registry.suse.com/suse/sle15
        [...]
        "Labels": {
            "com.suse.sle.base.created": "2020-11-23T11:51:32.695975200Z",
            "com.suse.sle.base.description": "Image containing a minimal environment for containers based on SUSE Linux Enterprise Server 15 SP2.",
            "com.suse.sle.base.disturl": "obs://build.suse.de/SUSE:SLE-15-SP2:Update:CR/images/4a8871be8078bcef2e2417e2a98fc3a0-sles15-image",
            "com.suse.sle.base.reference": "registry.suse.com/suse/sle15:15.2.8.2.794",
            "com.suse.sle.base.title": "SUSE Linux Enterprise Server 15 SP2 Base Container",
            "com.suse.sle.base.url": "https://www.suse.com/products/server/",
            "com.suse.sle.base.vendor": "SUSE LLC",
            "com.suse.sle.base.version": "15.2.8.2.794",
            "org.openbuildservice.disturl": "obs://build.suse.de/SUSE:SLE-15-SP2:Update:CR/images/4a8871be8078bcef2e2417e2a98fc3a0-sles15-image",
            "org.opencontainers.image.created": "2020-11-23T11:51:32.695975200Z",
            "org.opencontainers.image.description": "Image containing a minimal environment for containers based on SUSE Linux Enterprise Server 15 SP2.",
            "org.opencontainers.image.title": "SUSE Linux Enterprise Server 15 SP2 Base Container",
            "org.opencontainers.image.url": "https://www.suse.com/products/server/",
            "org.opencontainers.image.vendor": "SUSE LLC",
            "org.opencontainers.image.version": "15.2.8.2.794",
            "org.opensuse.reference": "registry.suse.com/suse/sle15:15.2.8.2.794"
        },
        [...]

All labels are shown twice, to ensure that in derived images, the information about the original base image is still visible and not overwritten.

7.2.3 Adding SLE Extensions and Modules to Images

If you have subscriptions to SUSE Linux Enterprise Server extensions or modules that you would like to use in your custom image, you can add them to the container image by specifying the ADDITIONAL_MODULES environment variable:

ENV ADDITIONAL_MODULES sle-module-desktop-applications,sle-module-development-tools
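
In context, a Dockerfile using the variable could look as follows. The gcc package serves only as an illustrative package from the Development Tools module:

FROM registry.suse.com/suse/sle15
ENV ADDITIONAL_MODULES sle-module-development-tools
RUN zypper ref -s && zypper --non-interactive in gcc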

8 Creating Application Images

Docker Open Source Engine is designed to allow running multiple separate application environments in parallel, with lower resource use than when using full virtual machines. Several types of applications are suitable for running inside containers: daemons, Web servers, and applications that expose IP ports for communications. You can use Docker Open Source Engine to automate the building and deployment processes by performing the build process inside a container, building an image, and then deploying containers based on the image.

Running an application inside a container has the following advantages.

  • The image with the application is portable across servers running different Linux host distributions and versions.

  • You can share the image of the application using a repository.

  • You can use different versions of software in the container and on the host system, without creating dependency issues.

  • You can run several instances of the same application that are completely independent from each other.

Using Docker Open Source Engine to build applications has the following advantages.

  • You can prepare an image of the complete build environment.

  • The application can run in the same environment it was built in.

  • Developers can test their code in the same environment as used in production.

The following section provides examples and recommendations on creating container images for applications. Before proceeding, make sure that you have activated your SUSE Linux Enterprise Server base image as described in Section 7.1, “Pulling Base SLES Images”.

8.1 Running an Application with Specific Package Versions

If your application needs a version of a package different from the package installed on the system, you can create a container image that includes the package version the application requires. The following example Dockerfile allows building an image based on an up-to-date version of SUSE Linux Enterprise Server with an older version of the example package:

FROM registry.suse.com/suse/sle15
LABEL maintainer=tux
RUN zypper ref && zypper in -f example-1.0.0-0
COPY application.rpm /tmp/
RUN zypper --non-interactive in /tmp/application.rpm
ENTRYPOINT ["/etc/bin/application"]
CMD ["-i"]

Build the image by running the following command in the directory that the Dockerfile resides in:

tux > docker build --tag tux_application:latest .

The Dockerfile example shown above performs the following operations during the docker build:

  1. Updates the SUSE Linux Enterprise Server repositories.

  2. Installs the desired version of the example package.

  3. Copies the application package to the image. The binary RPM must be placed in the build context.

  4. Installs the application from the copied RPM package.

  5. The last two steps run the application after a container is started.

After a successful build of the tux_application image, you can start a container based on the new image using the following command:

tux > docker run -it --name application_instance tux_application:latest

Keep in mind that after closing the application, the container exits as well.

8.2 Running Applications with a Specific Configuration

To run an instance using a different configuration, create a derived image and include additional configuration with it. For example, if your application is called example and can be configured using the file /etc/example/configuration_example, you could use:

FROM registry.suse.com/suse/sle15 1
RUN zypper ref && zypper --non-interactive in example 2
ENV BACKUP=/backup 3
RUN mkdir -p $BACKUP 4
COPY configuration_example /etc/example/ 5
ENTRYPOINT ["/etc/bin/example"] 6

The above example Dockerfile performs the following operations:

1

Pulls the sle15 base image, as described in Section 7.1, “Pulling Base SLES Images”.

2

Refreshes the repositories and installs the example application.

3

Sets a BACKUP environment variable (the variable persists to containers started from the image). You can always overwrite the value of the variable while running the container by specifying a new value.

4

Creates the directory /backup.

5

Copies the configuration_example to the image.

6

Runs the example application.

You can now build the image. After a successful build, you can run a container based on the image you just created.

8.3 Sharing Data Between an Application and the Host System

Docker Open Source Engine allows sharing data between the host and a container by using volumes. You can specify a mount point directly in the Dockerfile. However, you cannot specify a directory on the host system in the Dockerfile, as the directory may not be accessible at build time. Find the mounted directory under /var/lib/docker/volumes/ on the host system.

Note: Discarding Changes to the Directory to Be Shared

After you specify a mount point by using the VOLUME instruction, all changes performed to the directory with the RUN instruction are discarded. After the mount point is specified, the volume becomes a part of a temporary container, which is removed after a successful build. This means that for certain actions to take effect, they must be performed before specifying a mount point. For example, if you need to change permissions, do this before you specify the directory as a mount point in the Dockerfile.
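
The following minimal sketch illustrates this ordering (the directory and user are illustrative):

FROM registry.suse.com/suse/sle15
# Adjust permissions BEFORE declaring the volume; changes made after VOLUME are discarded
RUN mkdir -p /data && chown -R wwwrun /data
VOLUME /data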

Specify a particular mount point on the host system when running a container by using the -v option:

tux > docker run -it --name testing -v /home/tux/data:/data sles12sp4:latest /bin/bash
Note

The -v option overwrites the VOLUME instruction if you specify the same mount point in the container.

The following example image contains a Web server that reads Web content from the host's file system. The Dockerfile could look as follows:

FROM registry.suse.com/suse/sles12sp4
RUN zypper ref && zypper --non-interactive in apache2
COPY apache2 /etc/sysconfig/
RUN chown -R admin /data
EXPOSE 80
VOLUME /data
ENTRYPOINT ["apache2ctl"]

The example above installs the Apache Web server to the image and copies the entire configuration to the image. The data directory is owned by the admin user and is used as a mount point to store Web pages.

8.4 Applications Running in the Background

If your application needs to run in the background as a daemon, or as an application exposing ports for communication, you can run the container in the background.

An example Dockerfile for an application exposing a port looks as follows:

Example 8.1: Building an Apache2 Web Server Container (Dockerfile)
FROM registry.suse.com/suse/sle15 1
LABEL maintainer=tux 2
ADD etc/ /etc/zypp/ 3
RUN zypper refs && zypper refresh 4
RUN zypper --non-interactive in apache2 5
RUN echo "The Web server is running" > /srv/www/htdocs/test.html 6
# COPY data/* /srv/www/htdocs/ 7
EXPOSE 80 8
ENTRYPOINT ["/usr/sbin/httpd"]
CMD ["-D", "FOREGROUND"]

1

Pulls the base image, as described in Section 7.1, “Pulling Base SLES Images”.

2

Maintainer of the image (optional).

3

The repositories and service files to be copied to /etc/zypp/repos.d and /etc/zypp/services.d. This makes the repositories and services registered on the host available in the container.

4

Command to refresh repositories and services.

5

Command to install Apache2.

6

Test line for debugging purposes. This line can be removed if everything works as expected.

7

A COPY instruction to copy data from the host system to the directory in the container used by the server. The leading hash character (#) marks this line as a comment: it is not executed.

8

The exposed port for the Apache Web server.

Note: Make Sure the Ports Used by the Container Image Are Unused

To use port 80, make sure there is no other server software running on this port on the host.

To use the container, proceed as follows:

Procedure 8.1: Testing the Apache2 Web Server
  1. Prepare the host system for the build process.

    1. Make sure the host system is subscribed to the Server Applications Module of SUSE Linux Enterprise Server. To view installed modules or install additional modules, open YaST and select Add System Extensions or Modules.

    2. Make sure the SUSE Linux Enterprise images from the SUSE Registry are installed as described in Section 7.1, “Pulling Base SLES Images”.

    3. Save the Dockerfile from Example 8.1, “Building an Apache2 Web Server Container (Dockerfile)” into the docker directory.

    4. Within the container, you need access to software repositories and services that are registered on the host. To make them available, copy repositories and service files from the host to the docker/etc directory:

      tux > cd docker
      tux > mkdir etc
      tux > sudo cp -a /etc/zypp/{repos.d,services.d} etc/

      Instead of copying all repository and service files, you can also copy only the subset that is required by the container.

    5. Add Web site data (such as HTML files) into the docker/data directory. The contents of this directory are copied to the container image and are thus published by the Web server.

  2. Build the container. Set a tag for your image with the -t option (in the command below, it is tux/apache2):

    tux > sudo docker build -t tux/apache2 .

    Docker Open Source Engine executes the instructions provided in the Dockerfile: pull the base image, copy content, refresh repositories, install Apache2, and so on.

  3. Start a container instance from the image created in the previous step:

    tux > docker run --detach --interactive --tty tux/apache2

    Docker Open Source Engine returns the container ID, for example:

    7bd674eb196d330d50f8a3cfc2bc61a243a4a535390767250b11a7886134ab93
  4. Point a browser to http://localhost:80/test.html. You should see the message The Web server is running.

  5. To see an overview of running containers, use:

    tux > docker ps --latest
    CONTAINER ID        IMAGE               COMMAND                  [...]
    7bd674eb196d
    tux/apache2         "/usr/sbin/httpd -..."   [...]

    To stop and delete the container, run the following command:

    tux > docker rm --force 7bd674eb196d

You can use the resulting container to serve your data with the Apache2 Web server by following these steps:

Procedure 8.2: Creating a Container with your Own Data
  1. In the Dockerfile, remove the comment character (#) from the line containing the COPY instruction (callout 7 in Example 8.1), so that your Web site data is copied into the image.

  2. Rebuild the image as described in Step 2 of Procedure 8.1.

  3. Run the image in the detached mode:

    tux > docker run --detach --interactive --tty tux/apache2

    Docker Open Source Engine responds with the container ID, for example:

    e43fff4ae9832ecdb7677c058a73039d7610c32145a1d9b6ad0a4ed52b5c4dc7

To view the published data, point a browser at http://localhost:80/test.html.

To avoid copying Web site data into the container, share a directory of the host with the container. For more information, see https://docs.docker.com/storage/volumes/.

9 Working with Containers

After you have created a custom image, you can start containers based on it. You can run an instance of the image using the docker run command. The command accepts several arguments:

  • A container name—it is recommended to name your container

  • A user to use in your container

  • A mount point

  • A particular host name, and so on
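
For example, the following command combines several of these arguments (all values are illustrative):

tux > docker run -it --name sles15 --hostname sles15-test --user 1000 \
 -v /home/tux/data:/data registry.suse.com/suse/sle15 /bin/bash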

9.1 Starting and Removing Containers

Containers normally exit when their main process finishes. For example, if a container starts a particular application, the container exits as soon as the application quits. You can start the container again by running:

tux > docker start -ai <container name>

To remove unused containers:

tux > docker rm <container name>

10 Podman Overview

Podman is short for Pod Manager Tool. It is a daemonless container engine for developing, managing, and running Open Container Initiative (OCI) containers on a Linux system, and it offers a drop-in alternative for Docker. Podman is the default container runtime in openSUSE Kubic—a certified Kubernetes distribution built on top of openSUSE. Podman can be used to create OCI-compliant container images using a Dockerfile and a range of commands identical to Docker Open Source Engine. For example, the podman build command performs the same task as docker build. In other words, Podman provides a drop-in replacement for Docker Open Source Engine.

Moving from Docker Open Source Engine to Podman does not require any changes in the established workflow. There is no need to rebuild images, and you can use the exact same commands to build and manage images, as well as to run and control containers.

Podman differs from Docker Open Source Engine in two important ways.

  • Podman does not use a daemon, so the container engine interacts directly with an image registry, containers, and image storage. Because Podman has no daemon, it provides integration with systemd. This makes it possible to control containers via systemd units. You can create these units for existing containers, as well as generate units that can start containers if they do not exist in the system. Moreover, Podman can run systemd inside containers.

  • Because Podman relies on several namespaces, which provide an isolation mechanism for Linux processes, it does not require root privileges to create and run containers. This means that Podman can run as root as well as in an unprivileged environment. Moreover, a container created by an unprivileged user cannot obtain higher privileges on the host than the container's creator.
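As a brief illustration of the systemd integration, the following sketch generates a systemd user unit for an existing container and starts it. MYCONTAINER is a placeholder for the name of an existing container, and the unit path assumes a rootless setup:

tux > podman generate systemd --name MYCONTAINER > ~/.config/systemd/user/container-MYCONTAINER.service
tux > systemctl --user daemon-reload
tux > systemctl --user start container-MYCONTAINER.service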

10.1 Podman Installation

To install Podman, run sudo zypper in podman. Then run podman --version to check whether Podman has been installed successfully.

By default, Podman requires root privileges. To enable rootless mode for the current user, run the following command:

tux >  sudo usermod --add-subuids 200000-201000 --add-subgids 200000-201000 $USER

Reboot the machine to enable the change. Instead of rebooting, you can stop the session of the current user. To do this, run loginctl list-sessions | grep $USER and note the session ID. Then use the loginctl kill-session SESSION_ID command to terminate the session.

The command above defines a range of local uids on the host onto which the uids allocated to users inside the container are mapped. Note that the ranges defined for different users must not overlap. It is also important that the ranges do not reuse the uid of an existing local user or group. By default, adding a user with useradd on SLES 15 automatically allocates subuid and subgid ranges.
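To verify that the ranges have been allocated, you can inspect the /etc/subuid and /etc/subgid files. The output below is an example for the user tux with the range defined above (the range 200000-201000 covers 1001 IDs):

tux > grep tux /etc/subuid /etc/subgid
/etc/subuid:tux:200000:1001
/etc/subgid:tux:200000:1001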

Running a container with Podman in rootless mode on SUSE Linux Enterprise Server may fail, because the container needs read access to the SUSE Customer Center credentials. For example, running a container with podman run -it --rm registry.suse.com/suse/sle15 bash and then executing zypper ref results in the following error message:

Refreshing service 'container-suseconnect-zypp'.
Problem retrieving the repository index file for service 'container-suseconnect-zypp':
[container-suseconnect-zypp|file:/usr/lib/zypp/plugins/services/container-suseconnect-zypp] 
Warning: Skipping service 'container-suseconnect-zypp' because of the above error.
Warning: There are no enabled repositories defined.
Use 'zypper addrepo' or 'zypper modifyrepo' commands to add or enable repositories

To solve the problem, grant the current user the required access rights by running the following command on the host:

tux > sudo setfacl -m u:$USER:r /etc/zypp/credentials.d/*

Log out and log in again to apply the changes.

To give multiple users the required access, create a dedicated group using the groupadd GROUPNAME command. Then use the following commands to change the group ownership and permissions of the files in the /etc/zypp/credentials.d/ directory.

tux > sudo chgrp GROUPNAME /etc/zypp/credentials.d/*
tux > sudo chmod g+r /etc/zypp/credentials.d/*

You can then grant a specific user the required read access by adding them to the created group (see the example below).
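For example, to add the user tux to the group (GROUPNAME is the placeholder used above):

tux > sudo usermod --append --groups GROUPNAME tux

The user must log out and log in again for the new group membership to take effect.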

10.2 Podman Basic Usage

Since Podman is compatible with Docker Open Source Engine, it features the same commands and options. For example, the podman pull command fetches a container image from a registry, while the podman build command is used to build images.

One of the advantages of Podman over Docker Open Source Engine is that Podman can be configured to search multiple registries. To make Podman search the SUSE registry first and use Docker Hub as a fallback, add the following configuration to the /etc/containers/registries.conf file:

[registries.search]
registries = ["registry.suse.com", "docker.io"]
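With this configuration in place, short image names are resolved against registry.suse.com first. For example, the following command pulls registry.suse.com/suse/sle15 if the image is available there, and falls back to docker.io otherwise:

tux > podman pull suse/sle15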

Similar to Docker Open Source Engine, Podman can run containers in an interactive mode, allowing you to inspect and work with an image. To run suse/sle15 in interactive mode, use the following command:

tux > podman run --rm -ti suse/sle15

10.2.1 Building Images with Podman

Podman can build images from a Dockerfile. The podman build command behaves like docker build, and it accepts the same options.
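For example, the following command builds an image from a Dockerfile in the current directory and tags it (the tag tux/apache2 is an example value):

tux > podman build -t tux/apache2 .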

Podman's companion tool Buildah provides an alternative way to build images. For further information about Buildah, refer to Chapter 11, Buildah Overview.

11 Buildah Overview

Buildah is a tool for building OCI-compliant container images. Buildah can handle the following tasks.

  • Create containers from scratch, or from an existing image.

  • Create an image from a working container or from a Dockerfile.

  • Build images in the OCI or Docker Open Source Engine image formats.

  • Mount a working container's root filesystem for manipulation.

  • Use the updated contents of a container's root filesystem as a filesystem layer to create a new image.

  • Delete a working container or an image and rename a local container.

Compared to Docker Open Source Engine, Buildah has several advantages.

  • The tool makes it possible to mount a working container's filesystem, so that it is accessible from the host.

  • The process of building container images using Buildah can be automated via scripts by using Buildah's subcommands instead of a Containerfile or Dockerfile.

  • Similar to Podman, Buildah does not require a daemon to run and can be used by unprivileged users.

  • It is possible to build images inside a container without mounting the Docker socket, which improves security.

11.1 Podman and Buildah

Both Podman and Buildah can be used to build container images. While Podman makes it possible to build images using Dockerfiles, Buildah offers an expanded range of image building options and capabilities.

11.2 Buildah Installation

To install Buildah, run sudo zypper in buildah. Run buildah --version to check whether Buildah has been installed successfully.

If you already have Podman installed and set up for use in the rootless mode, Buildah can be used in an unprivileged environment without any further configuration. If you need to enable the rootless mode for Buildah, run the following command:

tux > sudo usermod --add-subuids 200000-201000 --add-subgids 200000-201000 $USER

This command enables the rootless mode for the current user. After running the command, log out and log in again to enable the changes.

The command above defines a range of local uids on the host onto which the uids allocated to users inside the container are mapped. Note that the ranges defined for different users must not overlap. It is also important that the ranges do not reuse the uid of any existing local user or group. By default, adding a user with useradd on SLES 15 automatically allocates subuid and subgid ranges.

Note: Buildah in rootless mode

In rootless mode, Buildah commands must be executed in a modified user namespace of the user. To enter this user namespace, run the buildah unshare command. Otherwise, the buildah mount command will fail (see the sketch below).
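The following sketch shows the pattern. It assumes that $container holds the name of a working container created with buildah from; the second and third commands run in the shell opened by buildah unshare:

tux > buildah unshare
tux > mnt=$(buildah mount $container)
tux > ls $mnt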

11.3 Building Images with Buildah

Instead of a special file with build instructions, Buildah uses individual commands to build an image. Building an image with Buildah involves several steps: run a container based on the specified image, edit the container (install packages, configure settings, and so on), configure the container options, and commit all changes into a new image. While this process may include additional steps, such as mounting the container's filesystem and working with it, the basic workflow logic remains the same.

The following example can give you a general idea of how to build an image with Buildah.

Example 11.1: Build image example
container=$(buildah from suse/sle15) 1
buildah run $container zypper up 2
buildah copy $container . /usr/src/example/ 3
buildah config --workingdir /usr/src/example $container
buildah config --port 8000 $container
buildah config --cmd "php -S 0.0.0.0:8000" $container 4
buildah config --label maintainer="Tux" $container
buildah config --label version="0.1" $container 5
buildah commit $container example 6
buildah rm $container 7

1

Run a container (also called a working container) based on the specified image (in this case, sle15).

2

Run a command in the working container you just created. In this example, Buildah runs the zypper up command.

3

Copy files and directories to the specified location in the container. In this example, Buildah copies the entire contents of the current directory to /usr/src/example/.

4

The buildah config commands specify container options. This includes defining a working directory, exposing a port, and running a command inside the container.

5

The buildah config --label command allows you to assign labels to the container. This may include the maintainer, description, version, and so on.

6

Create an image from the working container by committing all the modifications.

7

Delete the working container.
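To verify the result and try out the new image, you can list the local images and run the image with Podman. The sketch below assumes the example above, where the image was committed under the name example and configured to listen on port 8000:

tux > buildah images
tux > podman run --rm -p 8000:8000 example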

12 Container Orchestration

12.1 Pod Deployment with Podman

In addition to building and managing images, Podman makes it possible to work with pods. A pod is a group of one or more containers with shared resources, such as the network interface. A pod usually encapsulates an application composed of multiple containers into a single unit.

The podman pod command can be used to create, delete, query, and inspect pods. To create a new pod, run the podman pod create command. This creates a pod with a random name. To list the existing pods, use the podman pod list command. The output of the command looks as follows (the STATUS and CREATED columns are omitted for brevity):

POD ID        NAME                # OF CONTAINERS   INFRA ID
399a120a09ff  suspicious_curie    1                 e57820093817

Notice that the command assigned a random name to the pod (suspicious_curie in this case). You can use the --name parameter to assign the desired name to a pod, as shown below.
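For example, to create a pod named MYPOD:

tux > podman pod create --name MYPOD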

To examine the pod and its contents, run the podman ps -a --pod command and take a look at the output (the COMMAND, CREATED, STATUS, PORTS, and POD ID columns are omitted for brevity):

CONTAINER ID  IMAGE                 NAMES              PODNAME
e57820093817  k8s.gcr.io/pause:3.2  399a120a09ff-infra suspicious_curie

The created pod has an infra container, identified by the k8s.gcr.io/pause image name. The purpose of this container is to reserve the namespaces associated with the pod and to allow Podman to add other containers to the pod.

Using the podman run --pod command, you can run a container and add it to the desired pod. For example, the command below runs a container based on the suse/sle15 image and adds the container to the suspicious_curie pod:

tux > podman run -d --pod suspicious_curie registry.suse.com/suse/sle15 sleep 1h

The command above adds a container that sleeps for 60 minutes and then exits. Run podman ps -a --pod again, and you should see that the pod now has two containers.

Containers in a pod can be restarted, stopped, and started without affecting the overall status of the pod. For example, you can stop a container using the sudo podman stop CONTAINER_NAME command.

To stop the pod, use the podman pod stop command:

tux > podman pod stop suspicious_curie

13 Troubleshooting

13.1 Analyze Container Images with container-diff

In case a custom Docker Open Source Engine container image built on top of the SLE base container image is not working as expected, the container-diff tool can help you analyze the image and collect information relevant for troubleshooting.

container-diff makes it possible to analyze image changes by computing differences between images and presenting the diff in a human-readable and actionable format. The tool can find differences in system packages, language-level packages, and files in a container image.

container-diff can handle local container images (using the prefix daemon://), images in a remote registry (using the prefix remote://), and images saved as .tar archives. You can use container-diff to compute the diff between a local version of an image and a remote version.
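For example, the following sketch compares the file system contents of the local and the remote version of an image (IMAGE is a placeholder):

tux > sudo container-diff diff daemon://IMAGE remote://IMAGE --type=file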

To install container-diff, run the sudo zypper in container-diff command.

13.1.1 Basic container-diff commands

The command container-diff analyze IMAGE runs a standard analysis on a single image. By default, it returns the hash and size of the container image. For more information that can help you identify and fix problems, use the specific analyzers. Use the --type parameter to specify the desired analyzer. The two most useful analyzers are history (returns a list of descriptions of how an image layer was created) and file (returns a list of file system contents, including names, paths, and sizes):

tux > sudo container-diff analyze --type=history daemon://IMAGE
tux > sudo container-diff analyze --type=file daemon://IMAGE

To view all available parameters and their brief descriptions, run the container-diff analyze --help command.

Using the container-diff diff command, you can compare two container images and examine differences between them. Similar to the container-diff analyze command, container-diff diff supports several parameters. The example command below compares two images and returns a list of descriptions of how IMAGE_2 was created from IMAGE_1.

tux > sudo container-diff diff daemon://IMAGE_1 daemon://IMAGE_2 --type=history

To view all available parameters and their brief descriptions, run the container-diff diff --help command.

14 Support Plans

This chapter explains how SLES container support plans work.

There are three guiding principles of SUSE container support.

  1. The container image lifecycle follows the lifecycle of the related products.

    For example, SLES 15 SP2 container images follow the SLES 15 SP2 lifecycle.

  2. Container release status also matches the status of the related product.

    For example, if SLES 15 SP2 is in Alpha, Beta, RC or GA stage, the related containers have the same release status.

  3. Containers are built using the packages from the related products.

    For example, SLES 15 SP2 container images are built using the same packages as the main SLES 15 SP2 release.

14.1 Supported Containers on SUSE Host Environments

The following support options (tiers) apply to SUSE Linux Enterprise Server and SUSE CaaS Platform containers on SUSE host environments.

14.1.1 Tier One

Containers Delivered by SUSE

Containers delivered by SUSE are fully supported. This applies to both the container and host environment as well as all products under support. This includes both general support and Long Term Service Pack Support (LTSS).

14.1.2 Tier Two

Containers Delivered by Partners with an Agreement Ensuring a Joint Engineering Collaboration

This tier targets important Independent Software Vendors (ISVs). Partner containers with a joint engineering collaboration agreement are fully supported. This applies to both the container and host environment as well as all products under support (both general, as well as LTSS) covered by the agreement. Products not covered by the agreement fall under Tier Three.

14.1.3 Tier Three

All Other Third-Party Containers

The SUSE container host environment is fully supported. However, the container vendor is responsible for handling issues related to third-party containers they maintain.

14.2 Supported Container Host Environments

The support options (tiers) covered below apply to the following container host environments:

14.2.1 Tier One

SUSE Products

This tier applies to SUSE Linux Enterprise Server, SUSE CaaS Platform, and SUSE Cloud Application Platform. Both the containers and host environments delivered by SUSE are fully supported, as are all products under support. This includes both general support and LTSS.

14.2.2 Tier Two

Third-Party Vendors with an Agreement Ensuring a Joint Engineering Collaboration

Partner containers and host environments with a joint engineering collaboration agreement are fully supported. This applies to both the container and host environment as well as all products under support (both general and LTSS) covered by the agreement.

14.2.3 Tier Three

Selected Third-Party Vendors with No Agreement

This tier targets environments delivered by selected third-party vendors. While SUSE-based containers are fully supported, issues in the host environment must be handled by the host environment vendor. SUSE supports components that come from the SUSE base containers. Packages from SUSE repositories are also supported. Additional components and applications in the containers are not covered by SUSE support. A SLE subscription is required for building a derived container.

14.2.4 Tier Four

Any Other Container Host Environment

Any container host environment not mentioned in the support tiers above has limited support. Details can be discussed with the SUSE Support Team, who might triage the issue and recommend alternative solutions. In any other case, issues in the host environment must be handled by the host environment vendor.

A Terminology

Container

A container is a running instance based on a particular container image. Each container can be distinguished by a unique container ID.

Control groups

Control groups, also called cgroups, are a Linux kernel feature that allows aggregating or partitioning tasks (processes) and all their children into hierarchically organized groups, to manage their resource limits.

Docker Open Source Engine

Docker Open Source Engine is a server-client type application that performs all tasks related to containers. Docker Open Source Engine comprises the following:

  • Daemon:  The server side of Docker Open Source Engine, which manages all Docker objects (images, containers, network connections used by containers, etc.).

  • REST API:  Applications can use this API to communicate directly with the daemon.

  • CLI Client:  Enables you to communicate with the daemon. If the daemon is running on a different machine than the CLI client, the CLI client can communicate by using network sockets or the REST API provided by Docker Open Source Engine.

Dockerfile

A Dockerfile provides instructions on how to build a container image. Docker Open Source Engine reads instructions in the Dockerfile and builds a new image according to the instructions.

Image

An image is a read-only template used to create a container. A Docker image is made of a series of layers built one on top of the other. Each layer corresponds to a permanent change, for example, an update of an application. The instructions for these changes are stored in a file called a Dockerfile. For more details, see the official Docker documentation.

Container Image

A container image is an unchangeable, static file that includes executable code, so it can run an isolated process on IT infrastructure. The image comprises the system libraries, system tools, and other platform settings a program needs to run on a containerization platform. A container image is compiled from file system layers built on top of a parent or base image.

Base Image

A base image is an image that does not have a parent image. In a Dockerfile, a base image is identified by the FROM scratch directive.

Parent Image

The image that serves as the basis for another container image. In other words, if an image is not a base image, it is derived from a parent image. In a Dockerfile, the FROM directive points to the parent image. Most Docker containers are created using parent images.

Namespaces

Docker Open Source Engine uses Linux namespaces for its containers, which isolate the resources reserved for particular containers.

Orchestration

In a production environment, you typically need a cluster with many containers on each cluster node. The containers must cooperate and you need a framework that enables you to automatically manage the containers. The act of automatic container management is called container orchestration and is typically handled by Kubernetes.

Registry

A registry is storage for already created images. It typically contains several repositories. There are two types of registries:

  • public registry: Any (usually registered) user can download and use images. A typical example of a public registry is Docker Hub.

  • private registry: Access is restricted to particular users, or from a particular private network.

Repository

A repository is storage for images in a registry.
