Applies to SUSE CaaS Platform 4.5.2

3 Software Management

3.1 Software Installation

Software can be installed in three basic layers:

Base OS layer

Linux RPM packages, kernel, etc. Installation via AutoYaST, Terraform, or zypper.

Kubernetes Stack

Software that supports or controls the execution of workloads in Kubernetes.

Container image

What can be installed, and how, depends entirely on the actual makeup of the container. Please refer to the documentation of your respective container image for further details.

Note

Installation of software in container images is beyond the scope of this document.

3.1.1 Base OS

Applications that will be deployed to Kubernetes will typically contain all the required software to be executed. In some cases, especially when it comes to the hardware layer abstraction (storage backends, GPU), additional packages must be installed on the underlying operating system outside of Kubernetes.

Note

The following examples show the installation of packages required for Ceph. Please adjust the list of packages and repositories to whichever software you need to install.

While you can install any software package from the SLES ecosystem, doing so falls outside the support scope of SUSE CaaS Platform.

3.1.1.1 Initial Rollout

During the rollout of nodes you can use either AutoYaST or Terraform (depending on your chosen deployment type) to automatically install packages to all nodes.

For example, to install additional packages required by the Ceph storage backend you can modify your autoyast.xml or tfvars.yml files to include the additional repositories and instructions to install xfsprogs and ceph-common.

  1. tfvars.yml

    # EXAMPLE:
    # repositories = {
    #   repository1 = "http://example.my.repo.com/repository1/"
    #   repository2 = "http://example.my.repo.com/repository2/"
    # }
    repositories = {
            ....
    }
    
    # Minimum required packages. Do not remove them.
    # Feel free to add more packages
    packages = [
      "kernel-default",
      "-kernel-default-base",
      "xfsprogs",
      "ceph-common"
    ]
  2. autoyast.xml

    <!-- install required packages -->
    <software>
      <image/>
      <products config:type="list">
        <product>SLES</product>
      </products>
      <instsource/>
      <patterns config:type="list">
        <pattern>base</pattern>
        <pattern>enhanced_base</pattern>
        <pattern>minimal_base</pattern>
        <pattern>basesystem</pattern>
      </patterns>
      <packages config:type="list">
        <package>ceph-common</package>
        <package>xfsprogs</package>
      </packages>
    </software>

3.1.1.2 Existing Cluster

To install software on existing cluster nodes, you must use zypper on each node individually. Simply log in to a node via SSH and run:

sudo zypper in ceph-common xfsprogs
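
If the packages you need are not available from your configured repositories, add the repository that provides them first. A minimal sketch, assuming a hypothetical repository URL and alias (replace both with your actual package source):

# "my-repo" and the URL are placeholders; substitute your actual repository
sudo zypper addrepo http://example.my.repo.com/repository1/ my-repo
sudo zypper refresh
sudo zypper in ceph-common xfsprogs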

3.1.2 Kubernetes Stack

3.1.2.1 Installing Helm

As of SUSE CaaS Platform 4.5.2, Helm 3 is the default and provided by the package repository. To install, run the following command from the location where you normally run skuba commands:

sudo zypper install helm3
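
You can verify the installation by querying the client version (the helm3 package provides the binary as helm3):

helm3 version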

3.1.2.2 Helm 2 to 3 Migration

Note

The process for migrating an installation from Helm 2 to Helm 3 has been documented and tested by the Helm community. Refer to the helm-2to3 plugin documentation at https://github.com/helm/helm-2to3.

3.1.2.2.1 Preconditions
  • A healthy SUSE CaaS Platform 4.5.x installation with applications deployed using Helm 2 and Tiller.

  • A system on which skuba and Helm 2 have previously been run.

    • The procedure below requires an internet connection to install the 2to3 plugin. If the installation is in an air-gapped environment, the system may need to be moved back out of the air-gapped environment.

  • These instructions are written for a single cluster managed from a single Helm 2 installation. If more than one cluster is being managed by this installation of Helm 2, please refer to https://github.com/helm/helm-2to3 for further details and do not perform the cleanup step until all clusters have been migrated.

3.1.2.2.2 Migration Procedure

This is a procedure for migrating a SUSE CaaS Platform 4.5 deployment that has used Helm 2 to deploy applications.

  1. Install the helm3 package in the same location where you normally run skuba commands (alongside the helm2 package):

    sudo zypper in helm3
  2. Install the 2to3 plugin:

    helm3 plugin install https://github.com/helm/helm-2to3.git
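
    To confirm the plugin was registered, list the installed plugins; the output should include an entry named 2to3:

    helm3 plugin list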
  3. Back up the Helm 2 data found in the following locations:

    1. Helm 2 home folder.

    2. Release data from the cluster. Refer to How Helm Uses ConfigMaps to Store Data for details on how Helm 2 stores release data in the cluster. This should apply similarly if Helm 2 is configured for secrets.
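
    A minimal backup sketch, assuming the default Helm 2 home folder and the default ConfigMaps storage backend (Tiller labels its release ConfigMaps with OWNER=TILLER in the kube-system namespace):

      # Back up the Helm 2 home folder (default location; adjust if HELM_HOME is set)
      cp -a ~/.helm ~/.helm-backup
      # Export the release data Tiller stores as ConfigMaps in kube-system
      kubectl get configmaps -n kube-system -l OWNER=TILLER -o yaml > helm2-releases-backup.yaml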

  4. Move configuration from 2 to 3:

    helm3 2to3 move config
    1. After the move, if you have installed any custom plugins, check that they work correctly with Helm 3. If needed, remove and re-add them as described in https://github.com/helm/helm-2to3.

    2. If you have configured any local helm chart repositories, you will need to remove and re-add them. For example:

      helm3 repo remove <my-custom-repo>
      helm3 repo add <my-custom-repo> <url-to-custom-repo>
      helm3 repo update
  5. Migrate Helm releases (deployed charts) in place:

    helm3 2to3 convert RELEASE
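
    Replace RELEASE with the name of a deployed release. A sketch for converting every release in one pass, assuming the Helm 2 binary is still available as helm:

    # List Helm 2 release names and convert each one
    for release in $(helm list --short); do
      helm3 2to3 convert "$release"
    done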
  6. Clean up Helm 2 data:

    Warning

    Tiller will be cleaned up, and Helm 2 will not be usable on this cluster after cleanup.

    helm3 2to3 cleanup
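
    To preview what would be removed before committing, you can first run the command with the plugin's --dry-run flag:

    helm3 2to3 cleanup --dry-run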
  7. You may now uninstall the helm2 package and, from now on, use the helm command line provided by the helm3 package.

    sudo zypper remove helm2
3.1.2.2.3 Migration Procedure (Air gap)
Note

If you are upgrading in an air-gapped environment, you must manually install the "developer" version of the 2to3 plugin.

  1. Install the helm3 package in the same location where you normally run skuba commands (alongside the helm2 package):

    sudo zypper in helm3
  2. Download the latest release from https://github.com/helm/helm-2to3/releases

  3. On your internal workstation, unpack the archive file:

    mkdir ./helm-2to3
    tar -xvf helm-2to3_0.7.0_linux_amd64.tar.gz -C ./helm-2to3
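
    The unpacked directory should contain the plugin manifest (plugin.yaml) and the 2to3 binary, which the installation in the next step relies on; you can check with:

    ls ./helm-2to3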
  4. Install the plugin:

    export HELM_LINTER_PLUGIN_NO_INSTALL_HOOK=true
    helm plugin install ./helm-2to3

    The expected output should contain a message like:

    Development mode: not downloading versioned release.
    Installed plugin: 2to3
  5. Now copy the installed plugin to a subdirectory to allow manual execution:

    cd $HOME/.helm/plugins/helm-2to3/
    mkdir bin
    cp 2to3 bin/2to3
  6. Back up the Helm 2 data found in the following locations:

    1. Helm 2 home folder.

    2. Release data from the cluster. Refer to How Helm Uses ConfigMaps to Store Data for details on how Helm 2 stores release data in the cluster. This should apply similarly if Helm 2 is configured for secrets.

  7. Move configuration from 2 to 3:

    helm3 2to3 move config
    1. After the move, if you have installed any custom plugins, check that they work correctly with Helm 3. If needed, remove and re-add them as described in https://github.com/helm/helm-2to3.

    2. If you have configured any local helm chart repositories, you will need to remove and re-add them. For example:

      helm3 repo remove <my-custom-repo>
      helm3 repo add <my-custom-repo> <url-to-custom-repo>
      helm3 repo update
  8. Migrate Helm releases (deployed charts) in place:

    helm3 2to3 convert RELEASE
  9. Clean up Helm 2 data:

    Warning

    Tiller will be cleaned up, and Helm 2 will not be usable on this cluster after cleanup.

    helm3 2to3 cleanup
  10. You may now uninstall the helm2 package and, from now on, use the helm command line provided by the helm3 package.

    sudo zypper remove helm2