Applies to SUSE CaaS Platform 4.2.4

2 Deployment Scenarios

2.1 Default Deployment Scenario

The default scenario consists of a SUSE CaaS Platform cluster, an external load balancer, and a management workstation.

The minimum viable failure tolerant configuration for the cluster is 3 master nodes and 2 worker nodes. For more information, refer to Chapter 1, Requirements.

Figure: SUSE CaaS Platform Components

2.2 Air Gapped Deployment

An air gapped deployment is defined by not allowing any direct connection to the Internet or external networks from the cluster during setup or runtime.

All data that is transferred to the cluster must be transferred in a secure fashion.


An air gapped deployment can be combined with any of the other deployment types; it adds a set of steps that must be performed before or during the steps of the concrete deployment.

Important: Scope Of This Document

This document focuses on providing mirrors for the resources provided by SUSE and required for basic SUSE CaaS Platform functionality. If you require additional functionality, you can use these instructions as an example on how to provide additional mirrors.

Providing a full set of mirroring instructions, for all usage scenarios, is beyond the scope of this document.

2.2.1 Process Checklist

The steps that must be performed for an air gapped installation are:

  1. Read the concepts section.

    Section 2.2.2, “Concepts”

  2. Deploy mirror servers on external and internal networks.

    Section, “Mirror Servers”

  3. Install Repository Mirroring Tool (RMT) on servers.

    Section 2.2.4, “RPM Repository Mirror”

  4. Configure container image registry on servers.

    Section 2.2.6, “Container Registry Mirror”

  5. Configure Helm Chart repository on internal mirror.

    Section 2.2.7, “Helm Chart Repository Mirror”

  6. Perform the Repository Mirroring Tool (RMT) update procedure to populate the RPM repository.

    Section 2.2.5, “Updating RPM Repository Mirror”

  7. Perform the shared update procedure to populate the Helm chart repository and registry services.

    Section 2.2.8, “Updating Registry Mirror For Helm Charts”

  8. Deploy SUSE CaaS Platform and configure the nodes to use the respective services on the internal network.

    Section 2.2.9, “Deploying SUSE CaaS Platform”

    RPM Packages: Section, “Client Configuration”

    Helm Charts: Section, “Client Configuration”

    Container Images: Section, “Client Configuration”

2.2.2 Concepts

Network Separation

For an air gapped scenario we assume a network separation into three logical parts.

Figure: Air Gap Network Separation

  • Outside the controlled network.

  • Inside the controlled network, outside the air gapped network.

  • Inside the air gapped network.

The following instructions will use these three terms to refer to parts of the infrastructure. For example: "internal mirror" refers to the mirroring server on the air gapped network. The terms air gapped and internal will be used interchangeably.

Mirrored Resources

In order to disconnect SUSE CaaS Platform from the external network, we provide ways for the components to retrieve data from alternative sources inside the internal (air gapped) network.

You will need to create a mirror server inside the internal network, which acts as a replacement for the default sources.

The three main sources that must be replaced are:

  • SUSE Linux Enterprise Server RPM packages

    Provided by the SUSE package repositories

  • Helm installation charts

    Provided by the SUSE helm chart repository (https://kubernetes-charts.suse.com/)

  • Container images

    Provided by the SUSE container registry (https://registry.suse.com)

You will provide replacements for these resources on a dedicated server inside your internal (air gapped) network.

The internal mirror must be updated with data retrieved from the original upstream sources in a trusted and secure fashion. To achieve this, you will need an additional mirroring server outside of the air gapped network which acts as a first stage mirror and allows retrieving data from the internet.

Updating of mirrors happens in three stages.

  1. Update the external mirror from upstream.

  2. Transfer the updated data onto a trusted storage device.

  3. Update the internal mirror from the trusted storage device.

Once the replacement sources are in place, the key components are reconfigured to use the mirrors as their main sources.

RPM Package Repository Mirroring

Mirroring of the RPM repositories is handled by the Repository Mirroring Tool for SUSE Linux Enterprise Server 15. The tool provides functionality that mirrors the upstream SUSE package repositories on the local network. This is intended to minimize reliance on SUSE infrastructure for updating large volumes of machines. The air gapped deployment uses the same technology to provide the packages locally for the air gapped environment.

SUSE Linux Enterprise Server bundles software packages into so called modules. You must enable the SUSE CaaS Platform, SUSE Linux Enterprise Server, and Containers modules in addition to the modules enabled by default. All enabled modules need to be mirrored inside the air gapped network in order to provide the necessary software for other parts of this scenario.

Repository Mirroring Tool (RMT) will provide a repository server that holds the packages and related metadata for SUSE Linux Enterprise Server, so that clients can install them as if from the upstream repository. Data is synchronized to the external mirror once a day automatically, or a synchronization can be forced via the CLI.

You can copy this data to your trusted storage at any point and update the internal mirror.

Helm Chart and Container Image Mirroring

SUSE CaaS Platform uses Helm as one method to install additional software on the cluster. The logic behind this relies on Charts, which are configuration files that tell Kubernetes how to deploy software and its dependencies. The actual software installed using this method is delivered as container images. The download location of the container image is stored inside the Helm chart.

Container images are provided by SUSE and others on so called registries. The SUSE container registry is used to update the SUSE CaaS Platform components.

To mirror container images inside the air gapped environment, you will run two container image registry services that are used to pull and in turn serve these images. The registry service is shipped as a container image itself.

Helm charts are provided independently from container images and can be developed by any number of sources. Please make sure that you trust the origin of container images referenced in the helm charts.

We provide helm-mirror to allow downloading all charts present in a chart repository in bulk and moreover to extract all container image URLs from the charts. skopeo is used to download all the images referred to in the Helm charts from their respective registry.

Helm charts will be provided to the internal network by a webserver and refer to the container images hosted on the internal registry mirror.

Once mirroring is configured, you will not have to modify Dockerfile(s) or Kubernetes manifests to use the mirrors. The requests are passed through the container engine which forwards them to the configured mirrors. For example: All images with a prefix registry.suse.com/ will be automatically pulled from the configured (internal) mirror instead.
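As a minimal sketch of that rewrite (the mirror hostname below is a placeholder value, not part of any real configuration):

```shell
# Placeholder internal mirror endpoint; substitute your registry's FQDN.
MIRROR="registry01.mydomain.local:5000"

mirror_location() {
    # The upstream registry name is kept as a path prefix on the mirror,
    # which is how the registries.conf mapping shown later is laid out.
    printf '%s/%s\n' "$MIRROR" "$1"
}

mirror_location "registry.suse.com/caasp/v4/pause:3.2"
# → registry01.mydomain.local:5000/registry.suse.com/caasp/v4/pause:3.2
```

The container engine performs an equivalent rewrite internally; no image references in manifests need to change.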

For further information on registry mirror configuration, refer to https://documentation.suse.com/suse-caasp/4.2/single-html/caasp-admin/#_configuring_container_registries_for_cri_o.

2.2.3 Requirements

Mirror Servers

Note: Shared Mirror Server

If you have multiple SUSE CaaS Platform clusters or a very large number of nodes accessing the mirrors, you should increase the sizing of CPU/RAM.

Storage sizing depends on your intended update frequency and data retention model. If you want to keep snapshots or images of repository states at various points, you must increase storage size accordingly.

You will need to provide and maintain at least two machines in addition to your SUSE CaaS Platform cluster. These mirror servers will reside on the external part of your network and the internal (air gapped) network respectively.

For more information on the requirements of a SUSE Linux Enterprise 15 server, refer to: Installation Preparation.


External

This machine will host the Repository Mirroring Tool (RMT) for RPM packages and the container image registry for container images.

  • 1 host machine for the mirror server.

    • SLES 15

    • 2 (v)CPU

    • 4 GB RAM

    • 250 GB Storage

Internal (Air gapped)

This machine will host the Repository Mirroring Tool (RMT) for RPM packages and the container image registry for container images, as well as the Helm chart repository files.

  • 1 host machine for the mirror server.

    • SLES 15

    • 2 (v)CPU

    • 8 GB RAM

    • 500 GB Storage

Important: Adjust Number Of Mirror Servers

This scenario description does not contain any fallback contingencies for the mirror servers. Add additional mirror servers (behind a load balancer) if you require additional reliability/availability.

Procedure: Provision Mirror Servers
  1. Set up two SUSE Linux Enterprise Server 15 machines: one on the external network and one on the internal (air gapped) network.

  2. Make sure you have enabled the Containers module on both servers.

  3. Make sure you have Repository Mirroring Tool installed on both servers.

Networking

Note: Additional Port Configuration

If you choose to add more container image registries to your internal network, these must run on different ports than the standard registry running on port 5000. Configure your network to allow for this communication accordingly.

Ports

The external mirror server must be able to exchange outgoing traffic with upstream sources on ports 80 and 443.

All members of the SUSE CaaS Platform cluster must be able to communicate with the internal mirror server(s) within the air gapped network. You must configure at least these ports in all firewalls between the cluster and the internal mirror:

  • 80 HTTP - Repository Mirroring Tool (RMT) Server and Helm chart repository mirror

  • 443 HTTPS - Repository Mirroring Tool (RMT) Server and Helm chart repository mirror

  • 5000 HTTPS - Container image registry

Hostnames / FQDN

You need to define fully qualified domain names (FQDN) for both of the mirror servers in their respective network. These hostnames are the basis for the required SSL certificates and are used by the components to access the respective mirror sources.

SSL Certificates

You will need SSL/TLS certificates to secure services on each server.

On the air gapped network, certificates need to cover the hostname of your server and the subdomains for the registry (registry.) and helm chart repository (charts.). You must add corresponding aliases to the certificate.


You can use a wildcard certificate that covers the hostname and its subdomains.
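As an illustration, a self-signed certificate covering the hostname and both subdomain aliases can be generated and inspected with OpenSSL. Here mymirror.local is a placeholder for your mirror's FQDN, and the -addext option requires OpenSSL 1.1.1 or later:

```shell
# Generate a self-signed certificate whose SubjectAltName covers the
# mirror hostname plus the registry. and charts. subdomains.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout rmt-server.key -out rmt-server.crt \
  -subj "/CN=mymirror.local" \
  -addext "subjectAltName=DNS:mymirror.local,DNS:registry.mymirror.local,DNS:charts.mymirror.local"

# Inspect the certificate to confirm the subdomain aliases are present.
openssl x509 -in rmt-server.crt -noout -ext subjectAltName
```

A certificate from your internal CA with the same SubjectAltName entries works identically.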

The certificates can be self-signed, or you can re-use the certificates created by Repository Mirroring Tool (RMT) during the setup of the mirror servers.

Place the certificate, CA certificate, and key file in /etc/rmt/ssl/ as rmt-server.crt, rmt-ca.crt, and rmt-server.key.

These certificates can be re-used by all three mirror services.

Make sure the CA certificate is available system wide on all SUSE CaaS Platform nodes, so it can be used by the deployed components.

You can add system wide certificates with following commands on all nodes:

sudo cp /etc/rmt/ssl/rmt-ca.crt /etc/pki/trust/anchors/
sudo update-ca-certificates

Trusted Storage

Transferring data from the external network mirror to the internal mirror can be performed in many ways. The most common way is portable storage (USB keys or external hard drives).

Sizing of the storage is dependent on the number of data sources that need to be stored. Container images can easily measure several Gigabytes per item; although they are generally smaller for Kubernetes related applications. The overall size of any given RPM repository is at least tens of Gigabytes. For example: At the time of writing, the package repository for SUSE Linux Enterprise Server contains approximately 36 GB of data.

The storage must be formatted to a file system type supporting files larger than 4 GB.

We recommend external storage with at least 128 GB.

Note: Mount Point For Storage In Examples

In the following procedures, we will assume the storage (when connected) is mounted at /mnt/storage. Please make sure to adjust the mount point in the respective command to where the device is actually available.

Note: Handling Of Trusted Storage

Data integrity checks, duplication, backup, and secure handling procedures of trusted storage are beyond the scope of this document.

2.2.4 RPM Repository Mirror

Mirror Configuration

Note: Deploy The Mirror Before SUSE CaaS Platform Cluster Deployment

The mirror on the air gapped network must be running and populated before you deploy SUSE CaaS Platform.

Procedure: Configure The External Mirror
  1. Connect the external mirror to SUSE Customer Center as described in these instructions.

    Important: Mirror Registration

    During the installation of Repository Mirroring Tool (RMT) you will be asked for login credentials. On the external mirror, you need to enter your SUSE Customer Center login credentials to register. On the internal mirror, you can skip the SUSE Customer Center login, since registration will not be possible without an internet connection to SUSE Customer Center.

  2. You need to disable the automatic repository sync on the internal server. Otherwise it will attempt to download information from SUSE Customer Center, which cannot be reached from inside the air gapped network.

    sudo systemctl stop rmt-server-sync.timer
    sudo systemctl disable rmt-server-sync.timer

Now you need to perform the update procedure to do an initial sync of data from the upstream sources to the external mirror, and from the external to the internal mirror. Refer to: Section 2.2.5, “Updating RPM Repository Mirror”.

Client Configuration

Follow these instructions to configure all SUSE CaaS Platform nodes to use the package repository mirror server in the air gapped network.

2.2.5 Updating RPM Repository Mirror

Follow these instructions to update the external server, transfer the data to a storage device, and use that device to update the air gapped server.
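In outline, and assuming the standard RMT command line client (rmt-cli) with its offline export/import subcommands, the procedure looks roughly like this; the mount point matches the examples in this chapter, and you should verify the exact invocations against your RMT version:

```shell
# On the external mirror: force an immediate synchronization with
# SUSE Customer Center, then export settings and mirrored data to the
# trusted storage (assumed mounted at /mnt/storage).
sudo rmt-cli sync
sudo rmt-cli export settings /mnt/storage/rmt/
sudo rmt-cli export data /mnt/storage/rmt/
sudo rmt-cli export repos /mnt/storage/rmt/

# On the internal mirror: after reconnecting the trusted storage,
# import the data and repository metadata.
sudo rmt-cli import data /mnt/storage/rmt/
sudo rmt-cli import repos /mnt/storage/rmt/
```
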

2.2.6 Container Registry Mirror

Note: Mirroring Multiple Image Registries / Chart Repositories

You can mirror images and charts from multiple registries in one shared internal registry. However, we do not recommend this due to potential conflicts between the registries.

We highly recommend running separate helm chart and container registry mirrors for each source registry.

Additional mirror registries must be run on separate mirror servers for technical reasons.

Mirror Configuration

The container image registry is provided as a container image itself. You must download the registry container from SUSE and run it on the respective server.

Note: Which Images To Mirror

SUSE CaaS Platform requires a base set of images to be mirrored, as they contain the core services needed to run the cluster.

This list of base images can be found under the following link: https://documentation.suse.com/external-tree/en-us/suse-caasp/4/skuba-cluster-images.txt

Alternatively, the list can be obtained from skuba itself. Run this command on the machine where skuba is installed:

skuba cluster images

This will print out a list of the images skuba is expecting to use on the cluster to be bootstrapped.

Mirror these images and set up the CRI-O registries configuration to point to the location where they are mirrored.
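As a sketch, such an image list can be turned into skopeo copy invocations targeting the mirror. Here a fixed sample list stands in for the output of `skuba cluster images`, and the mirror hostname and image tags are placeholder values:

```shell
# Placeholder internal mirror endpoint; substitute your registry's FQDN.
MIRROR="mymirror.local:5000"

# In a real run, pipe `skuba cluster images` into the loop instead of
# the sample here-document below (the tags shown are examples only).
cmds=$(while read -r image; do
    echo "skopeo copy docker://${image} docker://${MIRROR}/${image}"
done <<'EOF'
registry.suse.com/caasp/v4/pause:3.2
registry.suse.com/caasp/v4/etcd:3.4.13
EOF
)
echo "$cmds"
```

Each generated command copies one image to the mirror while keeping the upstream registry name as a path prefix, matching the registries.conf mapping shown later in this section.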

In addition to the base images used by skuba, Helm 2 also requires the helm-tiller image. When using Helm 3 in an air-gapped deployment the helm-tiller image is not required.


These images need to be available in the external and internal mirrors at the time you try to deploy SUSE CaaS Platform. Image tags will vary depending on the version of Kubernetes you install.

Note: Internal Registry Mirror Is Read Only

For security reasons, the internal registry mirror is configured in read-only mode. Therefore, pushing container images to this mirror will not be possible. It can only serve images that were previously pulled and cached by the external mirror and then uploaded to the internal mirror.

You can modify and store your own container images on the external registry and transfer them with the other container images using the same process. If you need to be able to modify and store container images on the internal network, we recommend creating a new registry that will hold these images. The steps needed to run your own full container image registry are not part of this document.

For more information you can refer to: SLES15 - Docker Open Source Engine Guide: What is Docker Registry?.

We will re-use the nginx webserver that is running as part of Repository Mirroring Tool (RMT) to act as a reverse proxy for the container image registry service and to serve the chart repository files. This step is not necessary for the external host.

Procedure: Set Up Reverse Proxy and Virtual Host
  1. SSH into the internal mirror server.

  2. Create a virtual host configuration file /etc/nginx/vhosts.d/registry-server-https.conf.

    Replace mymirror.local with the hostname of your mirror server for which you created the SSL certificates.

    upstream docker-registry {
        server 127.0.0.1:5000;
    }

    map $upstream_http_docker_distribution_api_version $docker_distribution_api_version {
      '' 'registry/2.0';
    }

    server {
        listen 443   ssl;
        server_name  registry.mymirror.local;

        access_log  /var/log/nginx/registry_https_access.log;
        error_log   /var/log/nginx/registry_https_error.log;
        root        /usr/share/rmt/public;

        ssl_certificate     /etc/rmt/ssl/rmt-server.crt;
        ssl_certificate_key /etc/rmt/ssl/rmt-server.key;
        ssl_protocols       TLSv1.2 TLSv1.3;

        # disable any limits to avoid HTTP 413 for large image uploads
        client_max_body_size 0;

        location /v2/ {
          # Do not allow connections from docker 1.5 and earlier
          # docker pre-1.6.0 did not properly set the user agent on ping, catch "Go *" user agents
          if ($http_user_agent ~ "^(docker\/1\.(3|4|5(?!\.[0-9]-dev))|Go ).*$" ) {
            return 404;
          }

          ## If $docker_distribution_api_version is empty, the header is not added.
          ## See the map directive above where this variable is defined.
          add_header 'Docker-Distribution-Api-Version' $docker_distribution_api_version always;

          proxy_pass                          http://docker-registry;
          proxy_set_header  Host              $http_host;   # required for docker client's sake
          proxy_set_header  X-Real-IP         $remote_addr; # pass on real client's IP
          proxy_set_header  X-Forwarded-For   $proxy_add_x_forwarded_for;
          proxy_set_header  X-Forwarded-Proto $scheme;
          proxy_read_timeout                  900;
        }
    }
  3. Create a virtual host configuration file /etc/nginx/vhosts.d/charts-server-https.conf.

    Replace mymirror.local with the hostname of your mirror server for which you created the SSL certificates.

    server {
      listen 443   ssl;
      server_name  charts.mymirror.local;

      access_log  /var/log/nginx/charts_https_access.log;
      error_log   /var/log/nginx/charts_https_error.log;
      root        /srv/www/;

      ssl_certificate     /etc/rmt/ssl/rmt-server.crt;
      ssl_certificate_key /etc/rmt/ssl/rmt-server.key;
      ssl_protocols       TLSv1.2 TLSv1.3;

      location /charts {
        autoindex on;
      }
    }
  4. Restart nginx for the changes to take effect.

    sudo systemctl restart nginx
Procedure: Set Up The External Mirror
  1. SSH into the external mirror server.

  2. Install docker, helm-mirror, and skopeo.

    sudo zypper in docker helm-mirror skopeo

    The helm-mirror documentation gives instructions for installing it as a Helm plugin, but in an air gapped environment the plugin installation will not be available. Please use the zypper installation as described here.

  3. Start the docker service and enable it at boot time:

    sudo systemctl enable --now docker.service
  4. Pull the registry container image from SUSE.

    sudo docker pull registry.suse.com/sles12/registry:2.6.2
  5. Save the pulled image to a .tar file.

    sudo docker save -o /tmp/registry.tar registry.suse.com/sles12/registry:2.6.2
  6. Connect the trusted storage to the external mirror. Move the registry image onto the storage.

    mv /tmp/registry.tar /mnt/storage/registry.tar
  7. Create basic authentication credentials for the container image registry.

    Replace USERNAME and PASSWORD with proper credentials of your choosing.

    sudo mkdir -p /etc/docker/registry/{auth,certs}
    sudo docker run --entrypoint htpasswd registry.suse.com/sles12/registry:2.6.2 -Bbn <USERNAME> <PASSWORD> | sudo tee /etc/docker/registry/auth/htpasswd
  8. Create the /etc/docker/registry/config.yml configuration file.


    Requiring authentication appears to break when CRI-O is used as the client, so the internal registry does not use any authentication.

    version: 0.1
    log:
      fields:
        service: registry
    storage:
      cache:
        blobdescriptor: inmemory
      filesystem:
        rootdirectory: /var/lib/registry
    auth:
      htpasswd:
        # basic-realm is an example realm name
        realm: basic-realm
        path: /etc/docker/registry/auth/htpasswd
    http:
      addr: 0.0.0.0:5000
      headers:
        X-Content-Type-Options: [nosniff]
      tls:
        certificate: /etc/rmt/ssl/rmt-server.crt
        key: /etc/rmt/ssl/rmt-server.key
    health:
      storagedriver:
        enabled: true
        interval: 10s
        threshold: 3

    For more details on the configuration, refer to: Docker Registry: Configuration

  9. Start the registry container.

    sudo docker run -d -p 5000:5000 -v /etc/rmt/ssl:/etc/rmt/ssl:ro --restart=always --name registry \
    -v /etc/docker/registry:/etc/docker/registry:ro \
    -v /var/lib/registry:/var/lib/registry registry.suse.com/sles12/registry:2.6.2
Procedure: Set Up Internal Mirror
  1. SSH into the internal mirror server.

  2. Install docker .

    sudo zypper in docker
  3. Start the docker service and enable it at boot time:

    sudo systemctl enable --now docker.service
  4. Connect the trusted storage to the internal mirror and load the registry container image to the local file system.

    sudo docker load -i /mnt/storage/registry.tar
  5. Create the /etc/docker/registry/config.yml configuration file.

    sudo mkdir -p /etc/docker/registry/
    version: 0.1
    log:
      fields:
        service: registry
    storage:
      cache:
        blobdescriptor: inmemory
      filesystem:
        rootdirectory: /var/lib/registry
      maintenance:
        readonly:
          enabled: true
    http:
      addr: 0.0.0.0:5000
      headers:
        X-Content-Type-Options: [nosniff]
      tls:
        certificate: /etc/rmt/ssl/rmt-server.crt
        key: /etc/rmt/ssl/rmt-server.key
    health:
      storagedriver:
        enabled: true
        interval: 10s
        threshold: 3

    For more details on the configuration, refer to: Docker Registry: Configuration

  6. Start the registry container.

    sudo docker run -d -p 5000:5000 -v /etc/rmt/ssl:/etc/rmt/ssl:ro --restart=always --name registry \
    -v /etc/docker/registry:/etc/docker/registry:ro \
    -v /var/lib/registry:/var/lib/registry registry.suse.com/sles12/registry:2.6.2
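With a registry container running, a quick hedged check is to query the registry catalog endpoint of the Docker Registry HTTP API. The hostname and credentials below are placeholders; the -u flag applies only to the external mirror, since the internal one runs without authentication:

```shell
# List the repositories the registry currently serves.
# mymirror.local and USERNAME:PASSWORD are placeholder values.
curl -sk -u USERNAME:PASSWORD https://mymirror.local:5000/v2/_catalog
```
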

Now, you should have the registries set up and listening on port 5000 on their respective servers.

Client Configuration


The example provided with the installation is in the old v1 format of the CRI-O registries syntax. You must replace/remove all content from the example file and build a new file based on the v2 syntax.

The example below is written in the correct v2 syntax. registries.conf is written using TOML.

Configure /etc/containers/registries.conf to set up mirroring from registry.suse.com to the internal mirror. This needs to be done on all cluster nodes. Make sure to adjust the domain name to that of your local registry:

[[registry]]
prefix = "registry.suse.com"
location = "registry01.mydomain.local:5000/registry.suse.com"

[[registry]]
prefix = "docker.io"
location = "registry01.mydomain.local:5000/docker.io"

[[registry]]
prefix = "docker.io/library"
location = "registry01.mydomain.local:5000/docker.io"

[[registry]]
prefix = "quay.io"
location = "registry01.mydomain.local:5000/quay.io"

[[registry]]
prefix = "k8s.gcr.io"
location = "registry01.mydomain.local:5000/k8s.gcr.io"

[[registry]]
prefix = "gcr.io"
location = "registry01.mydomain.local:5000/gcr.io"

For detailed information about the configuration format see https://documentation.suse.com/suse-caasp/4.2/single-html/caasp-admin/#_configuring_container_registries_for_cri_o.

2.2.7 Helm Chart Repository Mirror


To make use of the helm charts, you must complete Section 2.2.6, “Container Registry Mirror”.

The helm charts will require images available from a registry mirror. The charts themselves are served on a simple webserver and do not require any particular configuration apart from basic networking availability and a hostname.

Mirror Configuration

Update the Helm chart repository by following the shared update procedure Section 2.2.8, “Updating Registry Mirror For Helm Charts”.

Client Configuration

Add the webserver as a repo to helm.

This step needs to be performed on a machine where Helm is installed and configured to talk to the Tiller server in the SUSE CaaS Platform cluster. For steps to install Helm 2 and Tiller, reference https://documentation.suse.com/suse-caasp/4.2/single-html/caasp-admin/#_installing_tiller and be sure that the helm-tiller image has been mirrored as described in Section 2.2.8, “Updating Registry Mirror For Helm Charts”.

To initialize Helm 2 before the helm-tiller image is ready, use the command:

helm init --client-only --skip-refresh

<SUSE_MIRROR> will be the user-defined name for this repository listed by Helm. The name of the repository must adhere to Helm Chart naming conventions.

helm repo add <SUSE_MIRROR> https://charts.<MYMIRROR.LOCAL>

2.2.8 Updating Registry Mirror For Helm Charts

Note: Live Update Of Registry

There is no need to stop the container image registry services while doing the update procedures. All changed images will be re-indexed automatically.

Helm charts and container images must be refreshed in the same procedure, otherwise charts might refer to image versions that are not mirrored or you are mirroring outdated image versions that cause the chart deployment to fail.

Procedure: Pull Data From Upstream Sources
  1. SSH into the mirror server on the external network.

  2. Download all charts from the repository to the file system (e.g. /tmp/charts).

    This action will download all charts and rewrite the Helm chart repository URL embedded in them. Replace http://charts.mymirror.local with the hostname of the webserver providing the Helm chart repository on the internal network.

    mkdir /tmp/charts
    cd /tmp/charts
    helm-mirror --new-root-url http://charts.mymirror.local https://kubernetes-charts.suse.com /tmp/charts
  3. Translate the chart information into the skopeo format.

    helm-mirror inspect-images /tmp/charts -o skopeo=sync.yaml --ignore-errors
    Note: Ignoring Chart Errors

    The helm-mirror tool will attempt to render and inspect all downloaded charts. Some charts will have values that are filled from environment data on their source repository and produce errors. You can still proceed with this step by using the --ignore-errors flag.

  4. Add helm-tiller to the image list.

    Edit the sync.yaml file and add an entry for the helm-tiller image, nested under the existing registry.suse.com images section, after any other entries. Verify the version of the image matches the version used in helm init as documented in https://documentation.suse.com/suse-caasp/4.2/single-html/caasp-admin/#_installing_tiller and adjust the entry if needed.

        caasp/v4/helm-tiller:
          - 2.16.12
  5. Download all the referenced images using skopeo.

    mkdir /tmp/skopeodata
    skopeo sync --src yaml --dest dir sync.yaml /tmp/skopeodata

    skopeo will automatically create a directory named after the hostname of the registry from which you are downloading the images. The final path will be something like /tmp/skopeodata/registry.suse.com/.

  6. Populate the local registry with the downloaded data.

    For --dest-creds you must use the credentials you created during Section, “Mirror Configuration”.

    skopeo sync --dest-creds USERNAME:PASSWORD \
    --src dir --dest docker \
    /tmp/skopeodata/registry.suse.com/ mymirror.local:5000
  7. After the synchronization is done, you can remove the skopeodata directory.

    rm -rf /tmp/skopeodata
Procedure: Transfer Data To Secure Storage
  1. Connect the trusted storage to the external mirror.

  2. Transfer the container image data to the trusted storage. This will remove all files and directories that are no longer present on the external host from the trusted storage.

    rsync -aP /var/lib/registry/ /mnt/storage/registry/ --delete
  3. Transfer the helm chart data to the trusted storage.

    rsync -aP /tmp/charts/ /mnt/storage/charts --delete
  4. Connect the trusted storage to the internal mirror.

  5. Transfer the container image data to the internal mirror. This will remove all files and directories that are no longer present on the trusted storage from the internal mirror.

    The target directory is /var/lib/registry.

    rsync -aP /mnt/storage/registry/ /var/lib/registry/ --delete
  6. Transfer the helm chart data to the internal mirror. This will remove all charts that do not exist on the trusted storage. If you have added any charts to the location manually, please back up these first and restore after the sync from the trusted storage is done.

    rsync -aP /mnt/storage/charts/ /srv/www/charts/ --delete
  7. Set the file ownership to nginx:nginx and the permissions to 555.

    sudo chown -R nginx:nginx /srv/www/charts/
    sudo chmod -R 555 /srv/www/charts/
Procedure: Refresh Information On The SUSE CaaS Platform Cluster
  1. Update the repository information on the machine on which you are using Helm to install software to the cluster.

    helm repo update

    You can now deploy additional software on your SUSE CaaS Platform cluster. Refer to: https://documentation.suse.com/suse-caasp/4.2/single-html/caasp-admin/#software-installation.

2.2.9 Deploying SUSE CaaS Platform

Use the SUSE CaaS Platform Deployment Guide as usual. Some of the considerations below apply, depending on the chosen installation medium.

Make sure to add the CA certificate of your Repository Mirroring Tool (RMT) server during deployment. Refer to: Section, “SSL Certificates”.

Using the ISO

From YaST, register the node against the Repository Mirroring Tool (RMT) server. This ensures the node's zypper repositories point at Repository Mirroring Tool (RMT). Moreover, all available updates will be installed, so there is no need to manually install updates right after the installation.

Using AutoYaST

Ensure the admin node is registered against Repository Mirroring Tool (RMT); this ensures that the nodes provisioned by AutoYaST are also registered against Repository Mirroring Tool (RMT) and have all updates applied.

2.2.10 Troubleshooting

Skopeo Fails Because Of Self Signed Certificate

If you are using a self-signed certificate for the registry, you can use the --dest-cert-dir /path/to/the/cert parameter to provide the certificate.

Registering An Existing Node Against Repository Mirroring Tool (RMT)

Refer to: Section, “Client Configuration”.

Helm Chart Connection Terminated By HTTPS To HTTP

When the registry mirror is using a virtual repository URL, you may need to manually modify the Helm chart index.yaml to point at the correct HTTPS base URL.

Helm Tiller Container Fails To Start

If the URL for the helm-tiller image is not available when helm init is invoked, the pod for Tiller may be stuck in the "ImagePullBackOff" state. To clear out the broken pod and prepare to attempt helm init again, use the following command.

helm reset --force