Applies to SUSE Linux Enterprise Server 15-SP2, K3s 1.21.2

5 Deployment

This section describes the process steps for deploying the K3s solution. It covers the deployment of each component layer, starting with a basic, functional proof of concept, along with considerations for migrating toward production and the scaling guidance needed to build out the solution.

5.1 Deployment overview

The deployment stack is represented in the following figure:

Figure 5.1: K3s Deployment Stack

Details for each layer are covered in the following sections.

Note

The content of the following sections is ordered and described from the bottom layer of the stack up to the top.

5.2 Compute Platform

Preparation(s)

For each node used in the deployment:

  • Validate that the necessary CPU, memory, disk capacity, and network interconnects (quantity and type) are present for each node and its intended role. Refer to the CPU, memory, disk, and networking requirements noted in the SUSE Rancher Hardware Requirements.

  • Further suggestions

    • Disk: Ensure a pair of local, direct-attached, mirrored disk drives is present on each node (SSDs are preferred); these will become the target for the operating system installation.

    • Network: Prepare an IP addressing scheme and optionally create both a public and a private network, along with the respective subnets and desired VLAN designations for the target environment.

      • Baseboard Management Controller: If present, consider using a distinct management network for controlled access.

    • Boot Settings: Reset BIOS/uEFI settings to defaults for a known, consistent baseline state, or apply desired, site-specific values.

    • Firmware: Use consistent, up-to-date BIOS/uEFI and device firmware versions to reduce potential troubleshooting issues later.

5.3 SUSE Linux Enterprise Server

Utilize an enterprise-grade Linux operating system, like SUSE Linux Enterprise Server, as the base software layer.

Preparation(s)

To meet the solution stack prerequisites and requirements, SUSE operating system offerings, like SUSE Linux Enterprise Server, can be utilized.

  1. Ensure these services are in place and configured for this node to use:

    • Domain Name Service (DNS) - an external, network-accessible service to resolve hostnames to IP addresses

    • Network Time Protocol (NTP) - an external network-accessible service to obtain and synchronize system times to aid in timestamp consistency

    • Software Update Service - access to a network-based repository for software update packages. This can be accessed directly from each node via registration to

      • the general, internet-based SUSE Customer Center (SCC) or

      • an organization’s SUSE Manager infrastructure or

      • a local server running an instance of Repository Mirroring Tool (RMT)

        Note

        During the node’s installation, it can be pointed to the respective update service. This can also be accomplished post-installation with the command-line tool named SUSEConnect.
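
        As a minimal, hypothetical sketch of such a post-installation registration with SUSEConnect (the registration code, e-mail address, and RMT URL below are placeholders to replace with your own values):

            # Register this node directly with SUSE Customer Center (placeholder values)
            SUSEConnect --regcode <REGISTRATION_CODE> --email <admin@example.com>

            # Alternatively, register against a local RMT server (placeholder URL)
            SUSEConnect --url https://rmt.example.com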

Deployment Process

On each compute platform node, install the noted SUSE operating system, following its standard installation process.

Deployment Consideration(s)

To further optimize deployment factors, leverage the following practices:

  • Automation

    • To reduce user intervention, unattended deployments of the operating system can be automated

      • for ISO-based installations of SUSE Linux Enterprise Server, by referring to the AutoYaST Guide (see the sketch after this list)

      • for raw-image based installations of SUSE Linux Enterprise Micro, by configuring the Ignition and Combustion tooling as described in its Installation Quick Start
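
      As an illustrative sketch of the ISO-based approach, the installer can be pointed at an AutoYaST control file through a kernel boot parameter (the control file URL below is a placeholder):

        # Appended to the installation kernel command line at boot
        autoyast=http://example.com/autoinst.xml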

5.4 K3s

Utilize a lightweight, certified Kubernetes distribution, like K3s, as the Kubernetes layer.

Preparation(s)

To meet the solution stack prerequisites and requirements, SUSE operating system offerings, like SUSE Linux Enterprise Server, can be utilized.

  1. Ensure these services are in place and configured for this node to use:

    • Domain Name Service (DNS) - an external, network-accessible service to resolve hostnames to IP addresses

    • Network Time Protocol (NTP) - an external network-accessible service to obtain and synchronize system times to aid in timestamp consistency

    • Software Update Service - access to a network-based repository for software update packages. This can be accessed directly from each node via registration to

      • the general, internet-based SUSE Customer Center (SCC) or

      • an organization’s SUSE Manager infrastructure or

      • a local server running an instance of Repository Mirroring Tool (RMT)

        Note

        During the node’s installation, it can be pointed to the respective update service. This can also be accomplished post-installation with the command-line tool named SUSEConnect.

  2. Identify the appropriate, desired version of the K3s binary (e.g. vX.YY.ZZ+k3s1) by reviewing the "Releases" on the download web page.
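
    As one optional way to list recent release tags from the command line (this sketch assumes the k3s-io/k3s GitHub repository and access to the public GitHub API; reviewing the releases page in a browser works just as well):

      # Show the five most recent K3s release tags
      curl -s https://api.github.com/repos/k3s-io/k3s/releases | grep '"tag_name"' | head -n 5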

Deployment Process

Perform the following steps to install the first K3s server on one of the nodes to be used for the Kubernetes control plane:

  1. Set the following variable with the noted version of K3s, as found during the preparation steps.

    K3s_VERSION=""
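    # e.g. K3s_VERSION="v1.21.2+k3s1"  (hypothetical release tag; use the version identified during preparation)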
  2. Install the version of K3s with embedded etcd enabled:

    curl -sfL https://get.k3s.io | \
    	INSTALL_K3S_VERSION=${K3s_VERSION} \
    	INSTALL_K3S_EXEC='server --cluster-init --write-kubeconfig-mode=644' \
    	sh -s -
    Tip

    To address availability and possible scaling to a multi-node cluster, etcd is enabled instead of the default SQLite datastore.

    • Monitor the progress of the installation: watch -c "kubectl get deployments -A"

      • The K3s deployment is complete when elements of all the deployments (coredns, local-path-provisioner, metrics-server, and traefik) show at least "1" as "AVAILABLE"

      • Use Ctrl+c to exit the watch loop after all deployment pods are running
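
    Once all deployments report as available, the cluster state can also be checked with standard commands; a brief, optional verification sketch (output will vary by environment):

      # Confirm the node has registered with the cluster and is Ready
      kubectl get nodes -o wide

      # Confirm the K3s service is active on this node
      sudo systemctl status k3s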

Deployment Consideration(s)

To further optimize deployment factors, leverage the following practices:

  • Availability

    • A full high-availability K3s cluster is recommended for production workloads. The etcd key/value store (aka database) requires that an odd number of servers (aka master nodes) be allocated to the K3s cluster so that quorum can be maintained; for example, a three-server cluster has a quorum of two and tolerates the loss of one server. In this case, two additional control-plane servers should be added, for a total of three.

      1. Deploy the same operating system on the new compute platform nodes, then log into the new nodes as root or as a user with sudo privileges.

      2. Execute the following sets of commands on each of the remaining control-plane nodes:

        • Set the following additional variables, as appropriate for this cluster

          # Private IP preferred, if available
          FIRST_SERVER_IP=""
          
          # From /var/lib/rancher/k3s/server/node-token file on the first server
          NODE_TOKEN=""
          
          # Match the K3s version used on the first server
          K3s_VERSION=""
        • Install K3s

          curl -sfL https://get.k3s.io | \
          	INSTALL_K3S_VERSION=${K3s_VERSION} \
          	K3S_URL=https://${FIRST_SERVER_IP}:6443 \
          	K3S_TOKEN=${NODE_TOKEN} \
          	K3S_KUBECONFIG_MODE="644" INSTALL_K3S_EXEC='server' \
          	sh -
        • Monitor the progress of the installation: watch -c "kubectl get deployments -A"

          • The K3s deployment is complete when elements of all the deployments (coredns, local-path-provisioner, metrics-server, and traefik) show at least "1" as "AVAILABLE"

          • Use Ctrl+c to exit the watch loop after all deployment pods are running

            Note

            By default, K3s server nodes are schedulable, so workloads can also run on them. This can be changed to the normal Kubernetes default by adding a taint to each server node (see the sketch after this list). See the official Kubernetes documentation for more information on how to do that.

        • (Optional) In cases where agent nodes are desired, execute the following set of commands, using the same "K3s_VERSION", "FIRST_SERVER_IP", and "NODE_TOKEN" variable settings as above, on each of the agent nodes to add it to the K3s cluster:

          curl -sfL https://get.k3s.io | \
          	INSTALL_K3S_VERSION=${K3s_VERSION} \
          	K3S_URL=https://${FIRST_SERVER_IP}:6443 \
          	K3S_TOKEN=${NODE_TOKEN} \
          	K3S_KUBECONFIG_MODE="644" \
          	sh -
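
        As referenced in the note above, a minimal, hypothetical sketch of dedicating server nodes to the control plane by tainting them (the node name placeholder and the taint key follow common Kubernetes conventions and may need adjusting for your environment):

          # Prevent ordinary workloads from being scheduled on a server node;
          # repeat for each server node, replacing the <SERVER_NODE_NAME> placeholder
          kubectl taint nodes <SERVER_NODE_NAME> node-role.kubernetes.io/master=true:NoSchedule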

After this successful deployment of the K3s solution, review the product documentation for details on how to directly utilize this Kubernetes cluster. Furthermore, by reviewing the SUSE Rancher product documentation, this solution can also be:

  • imported (refer to sub-section "Importing Existing Clusters"), then

  • managed (refer to sub-section "Cluster Administration") and

  • accessed (refer to sub-section "Cluster Access") to orchestrate workloads, maintain security, and use the many other functions that are readily available.
