Applies to SUSE Linux Enterprise Server 15 SP3, Rancher Kubernetes Engine Government 1.20.14

5 Deployment

This section describes the deployment process for the Rancher Kubernetes Engine Government solution. It covers the deployment of each component layer, starting as a base, functional proof of concept, along with considerations for migrating toward production and the scaling guidance needed to create the solution.

5.1 Deployment overview

The deployment stack is represented in the following figure, and details are covered for each layer in the following sections.

Figure 5.1: Deployment Stack - Rancher Kubernetes Engine Government

Note

The following subsections are ordered and described from the bottom layer of the stack up to the top.

5.2 Compute Platform

Preparation(s)

For each node used in the deployment:

  • Validate the necessary CPU, memory, disk capacity, and network interconnect quantity and type are present for each node and its intended role. Refer to the recommended CPU/Memory/Disk/Networking requirements as noted in the Rancher Kubernetes Engine Government Hardware Requirements.

  • Further suggestions

    • Disk: Ensure a pair of local, direct-attached, mirrored disk drives is present on each node (SSDs are preferred); these will become the target for the operating system installation.

    • Network: Prepare an IP addressing scheme and optionally create both a public and a private network, along with the respective subnets and desired VLAN designations for the target environment.

      • Baseboard Management Controller: If present, consider using a distinct management network for controlled access.

    • Boot Settings: Reset BIOS/uEFI to defaults for a known, consistent baseline state, or apply the desired, localized values.

    • Firmware: Use consistent and up-to-date versions of BIOS/uEFI/device firmware to reduce potential troubleshooting issues later.

5.3 SUSE Linux Enterprise Server

As the base software layer, use an enterprise-grade Linux operating system, for example SUSE Linux Enterprise Server.

Preparation(s)

To meet the solution stack prerequisites and requirements, SUSE operating system offerings, such as SUSE Linux Enterprise Server, can be used.

  1. Ensure these services are in place and configured for this node to use:

    • Domain Name System (DNS) - an external, network-accessible service to map host names to IP addresses

    • Network Time Protocol (NTP) - an external network-accessible service to obtain and synchronize system times to aid in time stamp consistency

    • Software Update Service - access to a network-based repository for software update packages. This can be accessed directly from each node via registration to

      • the general, internet-based SUSE Customer Center (SCC) or

      • an organization’s SUSE Manager infrastructure or

      • a local server running an instance of Repository Mirroring Tool (RMT)

        Note

        During the node’s installation, it can be pointed to the respective update service. This can also be accomplished post-installation with the command-line tool SUSEConnect, as shown in the example below.
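
        For example, a minimal post-installation registration sketch (the registration code, e-mail address, and RMT URL shown are hypothetical placeholders):

          # Register against the SUSE Customer Center
          SUSEConnect -r REGISTRATION_CODE -e admin@example.com

          # Or register against a local RMT server instead
          SUSEConnect --url https://rmt.example.com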

Deployment Process

On the compute platform node, install the noted SUSE operating system, by following these steps:

  1. Download the SUSE Linux Enterprise Server product (either the ISO or the Virtual Machine image).

    • Identify the appropriate, supported version of SUSE Linux Enterprise Server by reviewing the support matrix for SUSE Rancher versions Web page.

  2. The installation process can be performed with default values by following the steps described in the product documentation; see the Installation Quick Start.

    Tip

    Adjust both the password and the local network addressing setup to comply with local environment guidelines and requirements.

Deployment Consideration(s)

To further optimize deployment factors, leverage the following practices:

  • Automation

    • To reduce user intervention, unattended deployments of SUSE Linux Enterprise Server can be automated, for example with AutoYaST, as sketched below.
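
      A minimal sketch of how an unattended installation might be started, assuming an AutoYaST control file is hosted at a hypothetical URL (creating the control file itself is beyond the scope of this section):

        # At the installer boot prompt, append a parameter pointing to the AutoYaST control file
        autoyast=http://example.com/autoyast/sles.xml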

5.4 Rancher Kubernetes Engine Government

Preparation(s)
  1. Identify the appropriate, desired version of the Rancher Kubernetes Engine Government (for example vX.YY.ZZ+rke2rV) by reviewing

    • the "Supported Rancher Kubernetes Engine Government Versions" associated with the respective SUSE Rancher version from "Rancher Kubernetes Engine Government Downstream Clusters" section, or

    • the "Releases" on the Download Web page.

  2. For Rancher Kubernetes Engine Government versions 1.21 and higher, if the host kernel supports AppArmor, the AppArmor tools (usually available via the "apparmor-parser" package) must also be present prior to installing Rancher Kubernetes Engine Government.

    • On the SUSE Linux Enterprise Server node, install this required package:

      zypper install apparmor-parser
  3. For the underlying operating system firewall service, either

    • enable and configure the necessary inbound ports or

    • stop and completely disable the firewall service (see the example below).
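
    A minimal sketch of the second option on SUSE Linux Enterprise Server 15, which uses firewalld as its default firewall service:

      systemctl disable --now firewalld.service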

Deployment Process

Perform the following steps to install the first Rancher Kubernetes Engine Government server on one of the nodes to be used for the Kubernetes control plane:

  1. Set the following variable with the noted version of Rancher Kubernetes Engine Government, as found during the preparation steps.

    RKE2_VERSION=""
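
    For example, a hypothetical value following the vX.YY.ZZ+rke2rV pattern noted above (confirm the exact release identified during the preparation steps):

      RKE2_VERSION="v1.20.14+rke2r1"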
  2. Install the appropriate version of Rancher Kubernetes Engine Government:

    • Download the installer script:

      curl -sfL https://get.rke2.io | \
      	INSTALL_RKE2_VERSION=${RKE2_VERSION} sh -
    • Set the following variable with the URL that will be used to access the SUSE Rancher server. This may be based on one or more DNS entries, a reverse-proxy server, or a load balancer:

      RKE2_subjectAltName=
    • Create the RKE2 config.yaml file:

      mkdir -p /etc/rancher/rke2/
      cat <<EOF> /etc/rancher/rke2/config.yaml
      write-kubeconfig-mode: "0644"
      tls-san:
        - "${RKE2_subjectAltName}"
      EOF
  3. Start and enable the RKE2 service, which will begin installing the required Kubernetes components:

    systemctl enable --now rke2-server.service
    • Include the Rancher Kubernetes Engine Government binary directories in this user’s path:

      echo "PATH=${PATH}:/opt/rke2/bin:/var/lib/rancher/rke2/bin/" >> ~/.bashrc
      source ~/.bashrc
    • Monitor the progress of the installation:

      export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
      watch -c "kubectl get deployments -A"
      Note

      For the first two to three minutes of the installation, the initial output will include the error phrase "The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?". As Kubernetes services get started, this will be replaced with "No resources found". About four minutes after beginning the installation, the output will begin showing the deployments being created, and after six to seven minutes the installation should be complete.

      • The Rancher Kubernetes Engine Government deployment is complete when elements of all the deployments (coredns, ingress, and metrics-server) show at least "1" as "AVAILABLE"

        • Use Ctrl+c to exit the watch loop after all deployment pods are running

Deployment Consideration(s)

To further optimize deployment factors, leverage the following practices:

  • Availability

    • A full high-availability Rancher Kubernetes Engine Government cluster is recommended for production workloads. The etcd key/value store (aka database) requires that an odd number of servers (aka master nodes) be allocated to the Rancher Kubernetes Engine Government cluster. In this case, two additional control-plane servers should be added, for a total of three.

      1. Deploy the same operating system on the new compute platform nodes

      2. Log in to the first server node and create a new config.yaml file for the remaining two server nodes:

        • Set the following variables, as appropriate for this cluster:

          # Private IP preferred, if available
          FIRST_SERVER_IP=""
          
          # Private IP preferred, if available
          SECOND_SERVER_IP=""
          
          # Private IP preferred, if available
          THIRD_SERVER_IP=""
          
          # From the /var/lib/rancher/rke2/server/node-token file on the first server
          NODE_TOKEN=""
          
          # Match the version of the first server (Hint: `kubectl get nodes`)
          RKE2_VERSION=""
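
          On the first server, the token value can be read directly from the file noted above, for example:

            NODE_TOKEN=$(cat /var/lib/rancher/rke2/server/node-token)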
        • Create the new config.yaml file:

          echo "server: https://${FIRST_SERVER_IP}:9345" > config.yaml
          echo "token: ${NODE_TOKEN}" >> config.yaml
          cat /etc/rancher/rke2/config.yaml >> config.yaml
          Tip

          The next steps require using SCP and SSH. Setting up passwordless SSH, and/or using ssh-agent, from the first server node to the second and third nodes will make these steps quicker and easier.
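
          For example, a minimal sketch of setting up key-based SSH from the first server node (the key type is an arbitrary choice):

            # Generate a key pair on the first server (accept the defaults or set a passphrase)
            ssh-keygen -t ed25519

            # Copy the public key to the other two server nodes
            ssh-copy-id ${SECOND_SERVER_IP}
            ssh-copy-id ${THIRD_SERVER_IP}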

        • Copy the new config.yaml file to the remaining two server nodes:

          scp config.yaml ${SECOND_SERVER_IP}:~/
          scp config.yaml ${THIRD_SERVER_IP}:~/
        • Copy the config.yaml file to the correct location in the file system:

          ssh ${SECOND_SERVER_IP} << EOF
          mkdir -p /etc/rancher/rke2/
          cp ~/config.yaml /etc/rancher/rke2/config.yaml
          cat /etc/rancher/rke2/config.yaml
          EOF
          
          ssh ${THIRD_SERVER_IP} << EOF
          mkdir -p /etc/rancher/rke2/
          cp ~/config.yaml /etc/rancher/rke2/config.yaml
          cat /etc/rancher/rke2/config.yaml
          EOF
        • Execute the following sets of commands on each of the remaining control-plane nodes:

          • Install Rancher Kubernetes Engine Government

            ssh ${SECOND_SERVER_IP} << EOF
            curl -sfL https://get.rke2.io | \
            	INSTALL_RKE2_VERSION=${RKE2_VERSION} sh -
            systemctl enable --now rke2-server.service
            EOF
            
            ssh ${THIRD_SERVER_IP} << EOF
            curl -sfL https://get.rke2.io | \
            	INSTALL_RKE2_VERSION=${RKE2_VERSION} sh -
            systemctl enable --now rke2-server.service
            EOF
        • Monitor the progress of the new server nodes joining the Rancher Kubernetes Engine Government cluster: watch -c "kubectl get nodes"

          • It takes up to eight minutes for each node to join the cluster

          • A node has deployed correctly when its status is "Ready" and it holds the roles of "control-plane,etcd,master"

          • Use Ctrl+c to exit the watch loop after all server nodes show as "Ready"

            Note

            By default, these server nodes can also run user workloads. This can be changed to the normal Kubernetes default of dedicated control-plane nodes by adding a taint to each server node, as sketched below. See the official Kubernetes documentation for more information on how to do that.
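
            A minimal sketch, assuming the goal is to keep user workloads off the server nodes; the node name is a placeholder and the taint shown is the conventional Kubernetes control-plane taint:

              # Taint a server node so that regular workloads are no longer scheduled on it
              kubectl taint nodes SERVER_NODE_NAME node-role.kubernetes.io/control-plane=:NoSchedule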

      3. (Optional) In cases where agent nodes are desired, execute the following sets of commands, using the same "RKE2_VERSION", "FIRST_SERVER_IP" and "NODE_TOKEN" variable settings as above, on each of the agent nodes to add them to the Rancher Kubernetes Engine Government cluster:

        curl -sfL https://get.rke2.io | \
        	INSTALL_RKE2_VERSION=${RKE2_VERSION} \
        	RKE2_URL=https://${FIRST_SERVER_IP}:6443 \
        	RKE2_TOKEN=${NODE_TOKEN} \
        	RKE2_KUBECONFIG_MODE="644" \
        	sh -
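
        If the installer does not start the agent automatically, the agent service may also need to be enabled and started on each agent node (a sketch, assuming the default RKE2 agent service name):

          systemctl enable --now rke2-agent.service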

After this successful deployment of the Rancher Kubernetes Engine Government solution, review the product documentation for details on how to directly use this Kubernetes cluster. Furthermore, by reviewing the SUSE Rancher product documentation, this solution can also be:

  • imported (refer to sub-section "Importing Existing Clusters"), then

  • managed (refer to sub-section "Cluster Administration") and

  • accessed (refer to sub-section "Cluster Access") to orchestrate workloads, maintain security, and use many other readily available functions.