Applies to SUSE CaaS Platform 4.2.4

3 Software Management

3.1 Software Installation

Software can be installed in three basic layers:

Base OS layer

Linux RPM packages, kernel, etc. Installation via AutoYaST, Terraform, or zypper.

Kubernetes Stack

Software that supports or controls the execution of workloads in Kubernetes

Container image

What can be installed, and how, depends entirely on the actual makeup of the container. Please refer to the documentation of your respective container image for further details.

Note

Installation of software in container images is beyond the scope of this document.

3.1.1 Base OS

Applications that will be deployed to Kubernetes will typically contain all the required software to be executed. In some cases, especially when it comes to the hardware layer abstraction (storage backends, GPU), additional packages must be installed on the underlying operating system outside of Kubernetes.

Note

The following examples show the installation of packages required for Ceph. Please adjust the list of packages and repositories to whichever software you need to install.

While you can install any software package from the SLES ecosystem, doing so falls outside the support scope for SUSE CaaS Platform.

3.1.1.1 Initial Rollout

During the rollout of nodes you can use either AutoYaST or Terraform (depending on your chosen deployment type) to automatically install packages to all nodes.

For example, to install additional packages required by the Ceph storage backend you can modify your autoyast.xml or tfvars.yml files to include the additional repositories and instructions to install xfsprogs and ceph-common.

  1. tfvars.yml

    # EXAMPLE:
    # repositories = {
    #   repository1 = "http://example.my.repo.com/repository1/"
    #   repository2 = "http://example.my.repo.com/repository2/"
    # }
    repositories = {
            ....
    }
    
    # Minimum required packages. Do not remove them.
    # Feel free to add more packages
    packages = [
      "kernel-default",
      "-kernel-default-base",
      "xfsprogs",
      "ceph-common"
    ]
  2. autoyast.xml

    <!-- install required packages -->
    <software>
      <image/>
      <products config:type="list">
        <product>SLES</product>
      </products>
      <instsource/>
      <patterns config:type="list">
        <pattern>base</pattern>
        <pattern>enhanced_base</pattern>
        <pattern>minimal_base</pattern>
        <pattern>basesystem</pattern>
      </patterns>
      <packages config:type="list">
        <package>ceph-common</package>
        <package>xfsprogs</package>
      </packages>
    </software>
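
The autoyast.xml example above installs the packages but assumes the repository that provides them is already known to the installer. A minimal, hedged sketch of how such a repository could be declared with a standard AutoYaST add-on section is shown below; the element names follow the usual AutoYaST schema, and the URL is the placeholder from the tfvars.yml example, which you must replace with your actual repository:

<!-- Hedged sketch: declare an additional package repository.
     The media_url is a placeholder taken from the example above. -->
<add-on>
  <add_on_products config:type="list">
    <listentry>
      <media_url>http://example.my.repo.com/repository1/</media_url>
      <product_dir>/</product_dir>
      <ask_on_error config:type="boolean">true</ask_on_error>
    </listentry>
  </add_on_products>
</add-on>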

3.1.1.2 Existing Cluster

To install software on existing cluster nodes, you must use zypper on each node individually. Simply log in to a node via SSH and run:

sudo zypper in ceph-common xfsprogs
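
If the packages are not available from the repositories already configured on the node, add the required repository first. A short sketch, reusing the placeholder repository URL from the tfvars.yml example above (replace the URL and alias with your own):

# Add the repository, refresh the metadata, then install the packages
sudo zypper addrepo http://example.my.repo.com/repository1/ example-repo
sudo zypper refresh
sudo zypper in ceph-common xfsprogs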

3.1.2 Kubernetes stack

3.1.2.1 Installing Helm

As of SUSE CaaS Platform 4.2.4, Helm is part of the SUSE CaaS Platform package repository. To use it, you only need to run the following command on the machine where you normally run skuba commands:

sudo zypper install helm

Helm 2 is the default for SUSE CaaS Platform 4.2.4. Helm 3 is offered as an alternate tool and may be installed in parallel to aid migration.

sudo zypper install helm3
sudo update-alternatives --set helm /usr/bin/helm3
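
As an optional check (not part of the official procedure), you can confirm which binary the helm command now resolves to:

# Shows the alternatives entry for helm and the currently selected binary
update-alternatives --display helm
# Prints the Helm version; with Helm 2 the server version only appears once Tiller is installed
helm version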
Warning

Unless you are migrating from SUSE CaaS Platform 4.2 with Helm charts already deployed or have legacy Helm charts that only work with Helm 2, please use Helm 3.

Support for Helm 2 is planned to end in November 2020. Helm 3 is offered as an alternative in SUSE CaaS Platform 4.5.0 and will become the default tool in the following release. Please see Section 3.1.2.3, “Helm 2 to 3 Migration” for upgrade instructions and upgrade as soon as feasible.

3.1.2.2 Installing Tiller

Note

Tiller is only a requirement for Helm 2 and has been removed from Helm 3. If using Helm 3, please skip this section.

As of SUSE CaaS Platform 4.2.4, Tiller is not part of the SUSE CaaS Platform package repository, but it is available as a helm chart from the chart repository. To deploy the Tiller server, choose one of the following methods:

3.1.2.2.1 Unsecured Tiller Deployment

This will install Tiller without additional certificate security.

kubectl create serviceaccount --namespace kube-system tiller

kubectl create clusterrolebinding tiller \
    --clusterrole=cluster-admin \
    --serviceaccount=kube-system:tiller

helm init \
    --tiller-image registry.suse.com/caasp/v4/helm-tiller:2.16.12 \
    --service-account tiller
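
As a quick optional sanity check, you can verify that Tiller came up, assuming the default deployment name tiller-deploy that helm init creates in the kube-system namespace:

# Wait for the Tiller deployment to become ready
kubectl -n kube-system rollout status deployment tiller-deploy
# Should now report both the client and the server (Tiller) version
helm version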
3.1.2.2.2 Secured Tiller Deployment with TLS certificate

This installs Tiller with TLS certificate security.

3.1.2.2.2.1 Trusted Certificates

Please refer to Section 5.10.10.1.1, “Trusted Server Certificate” and Section 5.10.10.1.2, “Trusted Client Certificate” for how to sign the trusted tiller and helm certificates. In server.conf, set IP.1 to 127.0.0.1.

Then import the trusted certificates into the Kubernetes cluster. In this example, the trusted certificate files are ca.crt, tiller.crt, tiller.key, helm.crt, and helm.key.
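
For reference, the IP.1 entry mentioned above refers to the subjectAltName list in the OpenSSL configuration used to sign the server certificate. A minimal fragment (the full server.conf layout is described in the sections referenced above) might look like:

[ alt_names ]
IP.1 = 127.0.0.1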

3.1.2.2.2.2 Self-signed Certificates (optional)

Please refer to Section 5.10.10.2.2, “Self-signed Server Certificate” and Section 5.10.10.2.3, “Self-signed Client Certificate” for how to sign the self-signed tiller and helm certificates. In server.conf, set IP.1 to 127.0.0.1.

Then import the certificates into the Kubernetes cluster. In this example, the certificate files are ca.crt, tiller.crt, tiller.key, helm.crt, and helm.key.

  1. Deploy Tiller server with TLS certificate

    kubectl create serviceaccount --namespace kube-system tiller
    kubectl create clusterrolebinding tiller \
        --clusterrole=cluster-admin \
        --serviceaccount=kube-system:tiller
    
    helm init \
        --tiller-tls \
        --tiller-tls-verify \
        --tiller-tls-cert tiller.crt \
        --tiller-tls-key tiller.key \
        --tls-ca-cert ca.crt \
        --tiller-image registry.suse.com/caasp/v4/helm-tiller:2.16.12 \
        --service-account tiller
  2. Configure Helm client with TLS certificate

    Set up the $HELM_HOME environment variable and copy the CA certificate, the helm client certificate, and the key to the $HELM_HOME path.

    export HELM_HOME=<path/to/helm/home>
    
    cp ca.crt $HELM_HOME/ca.pem
    cp helm.crt $HELM_HOME/cert.pem
    cp helm.key $HELM_HOME/key.pem

    Then pass the --tls flag to helm commands. For example:

    helm ls --tls [flags]
    helm install --tls <CHART> [flags]
    helm upgrade --tls <RELEASE_NAME> <CHART> [flags]
    helm del --tls <RELEASE_NAME> [flags]
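
To verify that the client certificates in $HELM_HOME are picked up correctly, a quick optional check is:

# Reports the client and Tiller server versions over the TLS connection
helm version --tls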

3.1.2.3 Helm 2 to 3 Migration

Note

The process for migrating an installation from Helm 2 to Helm 3 has been documented and tested by the Helm community. Refer to https://github.com/helm/helm-2to3 for details.

3.1.2.3.1 Preconditions
  • A healthy SUSE CaaS Platform installation with applications deployed using Helm 2 and Tiller.

  • A system on which skuba and version 2 of helm have been run previously.

    • The procedure below requires an internet connection to install the 2to3 plugin. If the installation is in an air gap environment, the system may need to be temporarily moved out of that environment.

  • These instructions are written for a single cluster managed from a single Helm 2 installation. If more than one cluster is managed by this installation of Helm 2, please refer to https://github.com/helm/helm-2to3 for further details and do not perform the clean-up step until all clusters are migrated.

3.1.2.3.2 Migration Procedure

This is a procedure for migrating a SUSE CaaS Platform deployment that has used Helm 2 to deploy applications.

  1. Install the helm3 package on the machine where you normally run skuba commands (alongside the helm package):

    sudo zypper in helm3
  2. Install the 2to3 plugin:

    helm3 plugin install https://github.com/helm/helm-2to3.git
  3. Back up the Helm 2 data found in the following locations (a hedged example of this backup is sketched after this procedure):

    1. Helm 2 home folder.

    2. Release data from the cluster. Refer to How Helm Uses ConfigMaps to Store Data for details on how Helm 2 stores release data in the cluster. This should apply similarly if Helm 2 is configured for secrets.

  4. Move configuration from 2 to 3:

    helm3 2to3 move config
    1. After the move, if you have installed any custom plugins, check that they work correctly with Helm 3. If needed, remove and re-add them as described in https://github.com/helm/helm-2to3.

    2. If you have configured any local helm chart repositories, you will need to remove and re-add them. For example:

      helm3 repo remove <my-custom-repo>
      helm3 repo add <my-custom-repo> <url-to-custom-repo>
      helm3 repo update
  5. Migrate Helm releases (deployed charts) in place:

    helm3 2to3 convert RELEASE
  6. Clean up Helm 2 data:

    Warning

    Tiller will be cleaned up, and Helm 2 will not be usable on this cluster after cleanup.

    helm3 2to3 cleanup
  7. You may now set the helm command line to use the helm3 package from now on.

    sudo update-alternatives --set helm /usr/bin/helm3
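
The following is a hedged sketch of the backup in step 3 and of enumerating the releases for step 5. It assumes the default Helm 2 home folder (~/.helm) and the default ConfigMaps storage backend, in which Tiller labels its release data with OWNER=TILLER in the kube-system namespace; adjust accordingly if your installation is configured for secrets:

# Back up the Helm 2 client configuration (default home folder)
cp -r ~/.helm ~/helm2-home-backup

# Back up the release data stored by Tiller (default ConfigMaps backend)
kubectl get configmaps -n kube-system -l OWNER=TILLER -o yaml > helm2-releases-backup.yaml

# List the Helm 2 releases to convert in step 5
helm ls --all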
3.1.2.3.3 Migration Procedure (Air gap)
Note

If you are upgrading in an air gap environment, you must manually install the "developer" version of the 2to3 plugin.

  1. Install the helm3 package on the machine where you normally run skuba commands (alongside the helm2 package):

    sudo zypper in helm3
  2. Download the latest release from https://github.com/helm/helm-2to3/releases

  3. On your internal workstation, unpack the archive file:

    mkdir ./helm-2to3
    tar -xvf helm-2to3_0.7.0_linux_amd64.tar.gz -C ./helm-2to3
  4. Install the plugin:

    export HELM_LINTER_PLUGIN_NO_INSTALL_HOOK=true
    helm plugin install ./helm-2to3

    The expected output should contain a message like:

    Development mode: not downloading versioned release.
    Installed plugin: 2to3
  5. Now copy the installed plugin to a subdirectory to allow manual execution:

    cd $HOME/.helm/plugins/helm-2to3/
    mkdir bin
    cp 2to3 bin/2to3
  6. Back up the Helm 2 data found in the following locations:

    1. Helm 2 home folder.

    2. Release data from the cluster. Refer to How Helm Uses ConfigMaps to Store Data for details on how Helm 2 stores release data in the cluster. This should apply similarly if Helm 2 is configured for secrets.

  7. Move configuration from 2 to 3:

    helm3 2to3 move config
    1. After the move, if you have installed any custom plugins, check that they work correctly with Helm 3. If needed, remove and re-add them as described in https://github.com/helm/helm-2to3.

    2. If you have configured any local helm chart repositories, you will need to remove and re-add them. For example:

      helm3 repo remove <my-custom-repo>
      helm3 repo add <my-custom-repo> <url-to-custom-repo>
      helm3 repo update
  8. Migrate Helm releases (deployed charts) in place:

    helm3 2to3 convert RELEASE
  9. Clean up Helm 2 data:

    Warning

    Tiller will be cleaned up, and Helm 2 will not be usable on this cluster after cleanup.

    helm3 2to3 cleanup
  10. You may now uninstall the helm2 package and use the helm command line from the helm3 package going forward.

    sudo zypper remove helm2
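
As an optional final check after either migration procedure, you can confirm that the converted releases are now visible to Helm 3:

# Lists releases across all namespaces as seen by Helm 3
helm3 ls --all-namespaces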