Applies to SUSE Linux Enterprise Server 15-SP2, K3s 1.20.6, SUSE Rancher 2.5.8

5 Deployment

This section describes the process steps for deploying the SUSE Rancher solution. It covers the deployment of each component layer, starting from a basic, functional proof of concept, noting considerations for migration toward production, and providing the scaling guidance needed to complete the solution.

5.1 Deployment overview

The deployment stack is represented in the following figure:

Figure 5.1: SUSE Rancher Deployment Stack

Details are covered for each layer in the following sections.

Note

The following sections’ content is ordered and described from the bottom layer up to the top.

5.2 Compute Platform

Preparation(s)

For each node used in the deployment:

  • Validate that the necessary CPU, memory, disk capacity, and network interconnects (quantity and type) are present for each node and its intended role. Refer to the recommended CPU/Memory/Disk/Networking requirements as noted in the SUSE Rancher Hardware Requirements.

  • Further suggestions

    • Disk : Use a pair of local, direct-attached, mirrored disk drives on each node (SSDs are preferred); these will become the target for the operating system installation.

    • Network : Prepare an IP addressing scheme and optionally create both a public and private network, along with the respective subnets and desired VLAN designations for the target environment.

      • Baseboard Management Controller : If present, consider using a distinct management network for controlled access.

    • Boot Settings : Reset BIOS/uEFI to defaults for a known, consistent baseline state, or apply desired, localized values.

    • Firmware : Use consistent and up-to-date versions of BIOS/uEFI/device firmware to reduce potential troubleshooting issues later.

5.3 SUSE Linux Enterprise Server

Utilize an enterprise-grade Linux operating system, like SUSE Linux Enterprise Server, as the base software layer.

Preparation(s)

To meet the solution stack prerequisites and requirements, SUSE operating system offerings, like SUSE Linux Enterprise Server, can be utilized.

  1. Ensure these services are in place and configured for this node to use:

    • Domain Name Service ( DNS ) - an external network-accessible service to map IP Addresses to hostnames

    • Network Time Protocol ( NTP ) - an external network-accessible service to obtain and synchronize system times to aid in timestamp consistency

    • Software Update Service - access to a network-based repository for software update packages. This can be accessed directly from each node via registration to

      • the general, internet-based SUSE Customer Center ( SCC ) or

      • an organization’s SUSE Manager infrastructure or

      • a local server running an instance of Repository Mirroring Tool ( RMT )

        Note

        During the node’s installation, it can be pointed to the respective update service. This can also be accomplished post-installation with the command-line tool named SUSEConnect.
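
The preparation items above can be spot-checked from each node. A minimal sketch follows; hostnames and the registration code are placeholders, and the SUSEConnect lines assume a registered-capable SUSE system:

```shell
# DNS: confirm a known hostname resolves (replace localhost with your own hosts).
getent hosts localhost >/dev/null && echo "DNS: ok"

# NTP: confirm time synchronization on systemd-based systems, where available.
command -v timedatectl >/dev/null && timedatectl show --property=NTPSynchronized --value \
  || echo "NTP check skipped"

# Software updates: register post-installation with SUSEConnect, for example
# against SCC with a registration code, or against a local RMT server:
#   SUSEConnect --regcode <YOUR-REGCODE> --email admin@example.com
#   SUSEConnect --url https://rmt.example.com
#   SUSEConnect --status-text
```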

Deployment Process

On the compute platform node, install the noted SUSE operating system by following the steps in the respective product installation documentation.

Deployment Consideration(s)

To further optimize deployment factors, leverage the following practices:

  • Automation

    • To reduce user intervention, unattended deployments of the operating system can be automated

      • for ISO-based installations, by referring to the AutoYaST Guide

      • for raw-image based installation, by configuring the Ignition and Combustion tooling as described in the Installation Quick Start

5.4 K3s

Utilize K3s, a lightweight, certified Kubernetes distribution, as the Kubernetes layer of the solution stack.

Preparation(s)

To meet the solution stack prerequisites and requirements, the following must be in place before installing K3s.

  1. Ensure these services are in place and configured for this node to use:

    • Domain Name Service ( DNS ) - an external network-accessible service to map IP Addresses to hostnames

    • Network Time Protocol ( NTP ) - an external network-accessible service to obtain and synchronize system times to aid in timestamp consistency

    • Software Update Service - access to a network-based repository for software update packages. This can be accessed directly from each node via registration to

      • the general, internet-based SUSE Customer Center ( SCC ) or

      • an organization’s SUSE Manager infrastructure or

      • a local server running an instance of Repository Mirroring Tool ( RMT )

        Note

        During the node’s installation, it can be pointed to the respective update service. This can also be accomplished post-installation with the command-line tool named SUSEConnect.

  2. Identify the appropriate, supported version of the K3s binary (e.g. vX.YY.ZZ+k3s1), by reviewing the "Rancher Support Matrix" on the Support and Maintenance Terms of Service web page.

Deployment Process

Perform the following steps to install the first K3s server on one of the nodes to be used for the Kubernetes control plane:

  1. Set the following variable with the noted version of K3s, as found during the preparation steps.

    K3s_VERSION=""
  2. Install the version of K3s with embedded etcd enabled:

    curl -sfL https://get.k3s.io | \
    	INSTALL_K3S_VERSION=${K3s_VERSION} \
    	INSTALL_K3S_EXEC='server --cluster-init --write-kubeconfig-mode=644' \
    	sh -s -
    Tip

    To address availability and possible scaling to a multi-node cluster, etcd is enabled instead of the default SQLite datastore.

    • Monitor the progress of the installation: watch -c "kubectl get deployments -A"

      • The K3s deployment is complete when all of the deployments (coredns, local-path-provisioner, metrics-server, and traefik) show at least "1" under "AVAILABLE"

      • Use Ctrl+c to exit the watch loop after all deployment pods are running
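
The "AVAILABLE" check above can also be scripted. A minimal sketch follows, using sample output in the format printed by kubectl get deployments -A; in practice, pipe the live command into the same filter:

```shell
# Sample output in the format printed by `kubectl get deployments -A`.
sample='NAMESPACE     NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   coredns                  1/1     1            1           5m
kube-system   local-path-provisioner   1/1     1            1           5m
kube-system   metrics-server           1/1     1            1           5m
kube-system   traefik                  1/1     1            1           4m'

# Report any deployment whose AVAILABLE column is still 0; exit non-zero if so.
echo "$sample" | awk 'NR>1 && $5 == 0 { print $2 " not available"; bad=1 } END { exit bad }' \
  && echo "all deployments available"
```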

Deployment Consideration(s)

To further optimize deployment factors, leverage the following practices:

  • Availability

    • A full high-availability K3s cluster is recommended for production workloads. The etcd key/value store (aka database) requires an odd number of servers (aka master nodes) be allocated to the K3s cluster. In this case, two additional control-plane servers should be added; for a total of three.

      1. Deploy the same operating system on the new compute platform nodes, then log into the new nodes as root or as a user with sudo privileges.

      2. Execute the following sets of commands on each of the remaining control-plane nodes:

        • Set the following additional variables, as appropriate for this cluster

          # Private IP preferred, if available
          FIRST_SERVER_IP=""
          
          # From /var/lib/rancher/k3s/server/node-token file on the first server
          NODE_TOKEN=""
          
          # Match the K3s version of the first server
          K3s_VERSION=""
        • Install K3s

          curl -sfL https://get.k3s.io | \
          	INSTALL_K3S_VERSION=${K3s_VERSION} \
          	K3S_URL=https://${FIRST_SERVER_IP}:6443 \
          	K3S_TOKEN=${NODE_TOKEN} \
          	K3S_KUBECONFIG_MODE="644" INSTALL_K3S_EXEC='server' \
          	sh -
        • Monitor the progress of the installation: watch -c "kubectl get deployments -A"

          • The K3s deployment is complete when all of the deployments (coredns, local-path-provisioner, metrics-server, and traefik) show at least "1" under "AVAILABLE"

          • Use Ctrl+c to exit the watch loop after all deployment pods are running

            By default, the K3s server nodes are available to run non-control-plane workloads. In this case, the K3s default behavior is perfect for the SUSE Rancher server cluster as it doesn’t require additional agent (aka worker) nodes to maintain a highly available SUSE Rancher server application.

            Note

            This can be changed to the normal Kubernetes default by adding a taint to each server node. See the official Kubernetes documentation for more information on how to do that.

        • (Optional) In cases where agent nodes are desired, execute the following sets of commands, using the same "K3s_VERSION", "FIRST_SERVER_IP", and "NODE_TOKEN" variable settings as above, on each of the agent nodes to add it to the K3s cluster:

          curl -sfL https://get.k3s.io | \
          	INSTALL_K3S_VERSION=${K3s_VERSION} \
          	K3S_URL=https://${FIRST_SERVER_IP}:6443 \
          	K3S_TOKEN=${NODE_TOKEN} \
          	K3S_KUBECONFIG_MODE="644" \
          	sh -
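
Gathering the join variables for the additional nodes can also be scripted from the first server. A sketch follows, assuming SSH access to that server; the version line below is a sample of k3s --version output (the commit hash is illustrative):

```shell
# On the first server, the join token is readable at:
#   sudo cat /var/lib/rancher/k3s/server/node-token
# The installed version is reported by `k3s --version`; the first line looks like:
k3s_version_line='k3s version v1.20.6+k3s1 (726abf1d)'   # sample output line

# The third whitespace-separated field is the version tag to reuse on joining nodes.
K3s_VERSION="$(echo "$k3s_version_line" | awk '{print $3; exit}')"
echo "K3s_VERSION=${K3s_VERSION}"
```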

5.5 SUSE Rancher

Utilize SUSE Rancher as the Kubernetes management layer of the solution stack.

Preparation(s)

To meet the solution stack prerequisites and requirements, the following must be in place before installing SUSE Rancher.

  1. Ensure these services are in place and configured for this node to use:

    • Domain Name Service ( DNS ) - an external network-accessible service to map IP Addresses to hostnames

    • Network Time Protocol ( NTP ) - an external network-accessible service to obtain and synchronize system times to aid in timestamp consistency

    • Software Update Service - access to a network-based repository for software update packages. This can be accessed directly from each node via registration to

      • the general, internet-based SUSE Customer Center ( SCC ) or

      • an organization’s SUSE Manager infrastructure or

      • a local server running an instance of Repository Mirroring Tool ( RMT )

        Note

        During the node’s installation, it can be pointed to the respective update service. This can also be accomplished post-installation with the command-line tool named SUSEConnect.

Deployment Process

While logged into the node, as root or with sudo privileges, install SUSE Rancher:

  1. Create the Helm Chart custom resource for cert-manager:

    • Set the following variable with the desired version of cert-manager

      CERT_MANAGER_VERSION=""
      Note

      At this time, the most current, supported version of cert-manager is v1.0.4.

    • Create the cert-manager Helm Chart custom resource manifest

      cat <<EOF> cert-manager-helm-crd.yaml
      apiVersion: helm.cattle.io/v1
      kind: HelmChart
      metadata:
        name: cert-manager
        namespace: kube-system
      spec:
        chart: cert-manager
        targetNamespace: cert-manager
        version: ${CERT_MANAGER_VERSION}
        repo: https://charts.jetstack.io
      EOF
    • Create the cert-manager CRDs and apply the Helm Chart resource manifest:

      kubectl create namespace cert-manager
      
      kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/${CERT_MANAGER_VERSION}/cert-manager.crds.yaml
      
      sudo mv cert-manager-helm-crd.yaml /var/lib/rancher/k3s/server/manifests/
      • Monitor the progress of the installation: watch -c "kubectl get deployments -A"

        • The deployment is complete when all deployments (cert-manager, cert-manager-cainjector, cert-manager-webhook) show at least "1" as "AVAILABLE"

        • Use Ctrl+c to exit the watch loop after all pods are running
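
Because the heredoc above is unquoted, ${CERT_MANAGER_VERSION} expands when the file is written. A quick sanity check before moving the manifest into place might look like the following (the version value is an example):

```shell
CERT_MANAGER_VERSION="v1.0.4"   # example value, per the note above

# Re-create the manifest exactly as in the step above.
cat <<EOF> cert-manager-helm-crd.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: cert-manager
  namespace: kube-system
spec:
  chart: cert-manager
  targetNamespace: cert-manager
  version: ${CERT_MANAGER_VERSION}
  repo: https://charts.jetstack.io
EOF

# Confirm the version variable expanded before applying the manifest.
grep -q "version: ${CERT_MANAGER_VERSION}" cert-manager-helm-crd.yaml && echo "manifest ok"
```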

  2. Create the Helm Chart custom resource for SUSE Rancher:

    • Set the following variable to the hostname of the SUSE Rancher server instance

      HOSTNAME=""
      Note

      This hostname should be resolvable to an IP address of the K3s host, or a load balancer/proxy server that supports this installation of SUSE Rancher.

    • Create the SUSE Rancher Helm Chart custom resource manifest

      cat <<EOF> suse-rancher-helm-crd.yaml
      apiVersion: helm.cattle.io/v1
      kind: HelmChart
      metadata:
        name: rancher
        namespace: kube-system
      spec:
        chart: rancher
        targetNamespace: cattle-system
        repo: https://releases.rancher.com/server-charts/stable
        set:
          hostname: ${HOSTNAME}
      EOF
    • Apply the Helm Chart resource manifest:

      kubectl create namespace cattle-system
      sudo mv suse-rancher-helm-crd.yaml /var/lib/rancher/k3s/server/manifests/
      • Monitor the progress of the installation: watch -c "kubectl get pods -n cattle-system"

        • The installation is complete when all pods have a status of "Completed" or a status of "Running" with the number of "READY" pods being "1/1", "2/2", etc.

        • Use Ctrl+c to exit the watch loop after all pods are running
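
The "READY" check can be scripted in the same way. A sketch follows, using sample output in the format printed by kubectl get pods -n cattle-system (pod names are illustrative):

```shell
# Sample output in the format printed by `kubectl get pods -n cattle-system`.
sample='NAME                        READY   STATUS      RESTARTS   AGE
helm-install-rancher-x8zkq  0/1     Completed   0          3m
rancher-6f77f5cbb4-abcde    1/1     Running     0          2m'

# A Running pod is not ready while its READY counts differ (e.g. "0/1").
echo "$sample" | awk 'NR>1 && $3 == "Running" { split($2, c, "/"); if (c[1] != c[2]) { print $1 " not ready"; bad=1 } } END { exit bad }' \
  && echo "all pods ready"
```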

  3. (Optional) Create an SSH tunnel to access SUSE Rancher:

    Note

    This optional step is useful in cases where NAT routers and/or firewalls prevent the client web browser from reaching the exposed SUSE Rancher server IP address and/or port. This step requires that a Linux host is accessible through SSH from the client system and that the Linux host can reach the exposed SUSE Rancher service. The SUSE Rancher hostname should be resolvable to the appropriate IP address by the local workstation.

    • Create an SSH tunnel through the Linux host to the IP address and port of the exposed SUSE Rancher service:

      ssh -N -D 8080 user@Linux-host
    • On the local workstation web browser, change the SOCKS Host settings to "127.0.0.1" and port "8080"

      Note

      This will route all traffic from this web browser through the remote Linux host. Be sure to close the tunnel and revert the SOCKS Host settings when you’re done.
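
Once the tunnel is up, reachability can be checked from the workstation through the SOCKS proxy before switching the browser over. A sketch with a placeholder hostname:

```shell
# Verify the Rancher UI answers through the SOCKS tunnel; the hostname is a
# placeholder, and -k skips verification of the self-signed certificate.
#   curl -sk --socks5-hostname 127.0.0.1:8080 https://suse-rancher.sandbox.local/ \
#     -o /dev/null -w '%{http_code}\n'
```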

  4. Connect to the SUSE Rancher web UI and configure SUSE Rancher:

    • On the client system, use a web browser to connect to the SUSE Rancher service

    • Provide a new Admin password

      Important

      On the second configuration page, ensure the "Rancher Server URL" is set to the hostname specified when creating the SUSE Rancher HelmChart custom resource and the port is 443.

      • e.g., suse-rancher.sandbox.local:443

Deployment Consideration(s)

To further optimize deployment factors, leverage the following practices

  • Availability

    • In instances where a load balancer is used to access a K3s cluster, deploying two additional K3s cluster nodes, for a total of three, will automatically make SUSE Rancher highly available.

  • Security

    • The basic deployment steps described above deploy SUSE Rancher with automatically generated, self-signed security certificates. Other options are to have SUSE Rancher create public certificates via Let’s Encrypt, associated with a publicly resolvable hostname for the SUSE Rancher server, or to provide preconfigured, private certificates. See the SUSE Rancher product documentation for more information.

  • Integrity

    • This deployment of SUSE Rancher uses the K3s etcd key/value store to persist its data and configuration, which offers several advantages. With a multi-node cluster, etcd’s resiliency through replication removes the need to provide highly available storage. In addition, backing up the K3s etcd store protects both the cluster and the SUSE Rancher installation, and permits restoration to a given state.
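
For example, on-demand backups of the embedded datastore can be taken on a server node. A sketch, assuming K3s’s built-in etcd snapshot support; the path shown is the default:

```shell
# Take an on-demand snapshot of the embedded etcd datastore:
#   k3s etcd-snapshot --name pre-change
# Snapshots are written under /var/lib/rancher/k3s/server/db/snapshots/ by default.
```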

After this successful deployment of the SUSE Rancher solution, review the product documentation for details on how downstream Kubernetes clusters can be:

  • deployed ( refer to sub-section "Setting up Kubernetes Clusters in Rancher" ) or

  • imported ( refer to sub-section "Importing Existing Clusters" ), then

  • managed ( refer to sub-section "Cluster Administration" ) and

  • accessed ( refer to sub-section "Cluster Access" ), to orchestrate workloads, maintain security, and use the many other functions that are readily available.
