Applies to SUSE Linux Enterprise Server 15 SP3, Rancher Kubernetes Engine 1.2.16

5 Deployment

This section describes the process steps for the deployment of the Rancher Kubernetes Engine solution. It covers the steps to deploy each of the component layers, starting with a base, functional proof of concept, then addressing considerations for migrating toward production and providing the scaling guidance needed to create the solution.

5.1 Deployment overview

The deployment stack is represented in the following figure:

Figure 5.1: Deployment Stack - Rancher Kubernetes Engine

Details for each layer are covered in the following sections.

Note

The following sections are ordered and described from the bottom layer of the stack up to the top.

5.2 Compute Platform

The base, starting configuration can reside entirely within a single Hewlett Packard Enterprise Synergy Frame. Given the relatively small resource requirements of a Rancher Kubernetes Engine deployment, a viable approach is to deploy it as a virtual machine (VM) on the target nodes, on top of an existing hypervisor, like KVM. For a physical host, the tools described below can be used during setup of the server.

Preparation(s)

The HPE Integrated Lights Out (iLO) is designed for secure local and remote server management and helps IT administrators deploy, update and monitor HPE servers anywhere, anytime.

  1. Upgrade your basic iLO license for additional functionality, such as graphical remote console and virtual media access to allow the remote usage of software image files (ISO files), which can be used for installing operating systems or updating servers.

    • (Optional) - iLO Federation enables you to manage multiple servers from one system using the iLO web interface.

  2. For nodes situated in an HPE Synergy enclosure, like the HPE Synergy SY480 used in this deployment:

    • Set up the necessary items in the Hewlett Packard Enterprise OneView interface, including:

      • Settings → Addresses and Identifiers (Subnets and Address Ranges)

      • Networks → Create (associate subnets and designate bandwidths)

      • Network Sets → Create (aggregate all the necessary Networks)

      • Logical Interconnects → Edit (include the respective Network Sets)

      • Logical Interconnect Groups → Edit (include the respective Network Sets)

      • Server Profile Templates → Create (or use existing hypervisor templates)

      • OS Deployment mode (can be configured to boot from PXE, local storage or shared storage)

      • Firmware (upgrade to the latest and strive for consistency across node types)

      • Manage Connections (assign the Network Set to be bonded across NICs)

      • Local Storage (create the internal RAID1 set and request additional drives for the respective roles)

      • Manage Boot/BIOS/iLO Settings

      • Server Profile → Create (assign the role template to the target model)

    • Add Servers and Assign Server Roles

      • Use the Discover function from Hewlett Packard Enterprise OneView to see all of the available nodes that can be assigned to their respective roles:

      • Then drag and drop the nodes into the roles and ensure there is no missing configuration information by reviewing and editing each node’s server details

      • Manage Settings - set up DNS/NTP, designate Disk Models/NIC Mappings/Interface Model/Networks

      • Manage Subnet and Netmask - edit the Management Network information, ensuring it matches the settings made in Hewlett Packard Enterprise OneView

Deployment Process

On the respective compute module node, determine if a hypervisor is already available for the solution’s virtual machines.

  1. If this will be the first use of this node, an option is to deploy a KVM hypervisor based upon SUSE Linux Enterprise Server by following the Virtualization Guide.

    • Given the simplicity of the deployment, the operating system and hypervisor can be installed with the SUSE Linux Enterprise Server ISO media and the Hewlett Packard Enterprise Integrated Lights Out virtual media and virtual console methodology.

  2. Then, for the solution VM, use the hypervisor user interface to allocate the necessary CPU, memory, disk and networking as noted in the SUSE Rancher hardware requirements.

Deployment Consideration(s)

To further optimize deployment factors, leverage the following practices:

  • Automation

    • For HPE Synergy servers, you can simplify multiple compute module setups and configurations by leveraging the Hewlett Packard Enterprise OneView SDK for Terraform Provider.

    • For nodes running KVM, you can leverage either virt-install or the Terraform Libvirt Provider to quickly and efficiently automate the deployment of multiple virtual machines (see the sketch after this list).

  • Availability

    • While the initial deployment only requires a single VM, as noted in later deployment sections, having multiple VMs provides the resiliency needed for high availability. To reduce single points of failure, it is beneficial to spread multi-VM deployments across multiple hypervisor nodes. Consistent hypervisor and compute module configurations, with the resources needed for the VMs, will yield a robust, reliable production implementation.
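
As a hedged illustration of the virt-install path, a single solution VM could be created on a KVM host roughly as follows (hypothetical VM name, resource sizes, bridge name, ISO path and OS variant; size the VM according to the SUSE Rancher hardware requirements):

  # Create a SUSE Linux Enterprise Server based VM on the local KVM hypervisor
  sudo virt-install \
    --name rke-node-1 \
    --vcpus 4 \
    --memory 16384 \
    --disk size=80 \
    --cdrom /var/lib/libvirt/images/SLE-15-SP3-Full-x86_64.iso \
    --os-variant sle15sp3 \
    --network bridge=br0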

5.3 SUSE Linux Enterprise Server

As the base software layer, use an enterprise-grade Linux operating system. For example, SUSE Linux Enterprise Server.

Preparation(s)

To meet the solution stack prerequisites and requirements, SUSE operating system offerings, like SUSE Linux Enterprise Server, can be used.

  1. Ensure these services are in place and configured for this node to use:

    • Domain Name Service (DNS) - an external network-accessible service to map IP Addresses to host names

    • Network Time Protocol (NTP) - an external network-accessible service to obtain and synchronize system times to aid in time stamp consistency

    • Software Update Service - access to a network-based repository for software update packages. This can be accessed directly from each node via registration to

      • the general, internet-based SUSE Customer Center (SCC) or

      • an organization’s SUSE Manager infrastructure or

      • a local server running an instance of Repository Mirroring Tool (RMT)

        Note

        During the node’s installation, it can be pointed to the respective update service. This can also be accomplished post-installation with the command line tool named SUSEConnect.
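
        For example, a minimal post-installation registration sketch (the registration code, e-mail address and RMT URL are hypothetical placeholders):

          # Register this node against the SUSE Customer Center
          sudo SUSEConnect -r <REGISTRATION_CODE> -e <email-address>

          # Or register against a local RMT server instead
          sudo SUSEConnect --url https://rmt.example.com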

Deployment Process

On the compute platform node, install the noted SUSE operating system by following these steps:

  1. Download the SUSE Linux Enterprise Server product (either the ISO or the Virtual Machine image)

    • Identify the appropriate, supported version of SUSE Linux Enterprise Server by reviewing the support matrix for SUSE Rancher versions Web page.

  2. The installation process can be performed with default values by following the steps in the product documentation; see the Installation Quick Start

    Tip

    Adjust both the password and the local network addressing setup to comply with local environment guidelines and requirements.

Deployment Consideration(s)

To further optimize deployment factors, leverage the following practices:

  • Automation

    • To reduce user intervention, unattended deployments of SUSE Linux Enterprise Server can be automated, for example with AutoYaST (a minimal sketch follows)
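
      As one hedged sketch, a node can be booted from the installation media with a boot parameter that points to an AutoYaST control file served over the network (hypothetical URL and file name):

        autoyast=http://autoyast.example.com/autoinst.xml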

5.4 Rancher Kubernetes Engine

Preparation(s)

  1. Identify the appropriate, desired version of the Rancher Kubernetes Engine binary (for example vX.Y.Z) that includes the needed Kubernetes version by reviewing

    • the "Supported Rancher Kubernetes Engine Versions" associated with the respective SUSE Rancher version from "Rancher Kubernetes Engine Downstream Clusters" section, or

    • the "Releases" on the Download Web page.

  2. On the target node with a default installation of SUSE Linux Enterprise Server operating system, log in to the node either as root or as a user with sudo privileges and enable the required container runtime engine

    sudo SUSEConnect -p sle-module-containers/15.3/x86_64
    sudo zypper refresh ; sudo zypper install docker
    sudo systemctl enable --now docker.service
    • Then validate the container runtime engine is working

      sudo systemctl status docker.service
      sudo docker ps --all
  3. For the underlying operating system firewall service, either

    • enable and configure the necessary inbound ports or

    • stop and completely disable the firewall service.
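
    As one hedged example of the first option (assuming firewalld is in use; consult the Rancher Kubernetes Engine documentation for the complete list of required ports):

      sudo firewall-cmd --permanent --add-port=6443/tcp        # Kubernetes API server
      sudo firewall-cmd --permanent --add-port=2379-2380/tcp   # etcd client/peer traffic
      sudo firewall-cmd --permanent --add-port=10250/tcp       # kubelet
      sudo firewall-cmd --reload

    Or, for the second option, stop and disable the firewall service entirely:

      sudo systemctl disable --now firewalld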

Deployment Process

The primary steps for deploying this Rancher Kubernetes Engine Kubernetes cluster are:

Note

Installing Rancher Kubernetes Engine requires a client system (i.e. admin workstation) that has been configured with kubectl.
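
If kubectl is not yet installed on the admin workstation, one hedged option (among several supported installation methods) is to fetch the upstream binary; the version should match the cluster's Kubernetes release:

  # Download the latest stable kubectl release and place it on the PATH
  curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
  chmod +x kubectl
  sudo mv kubectl /usr/local/bin/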

  1. Download the Rancher Kubernetes Engine binary according to the instructions on the product documentation page, then follow the directions on that page, with the exceptions noted in the steps below.
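
    For example, a minimal download sketch (the version vX.Y.Z is a placeholder; use the release identified during the preparation steps):

      # Fetch the RKE release binary, make it executable and place it on the PATH
      curl -LO https://github.com/rancher/rke/releases/download/vX.Y.Z/rke_linux-amd64
      chmod +x rke_linux-amd64
      sudo mv rke_linux-amd64 /usr/local/bin/rke
      rke --version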

  2. Create the cluster.yml file with the command rke config

    Note

    See product documentation for example-yamls and config-options for detailed examples and descriptions of the cluster.yml parameters.

    • It is recommended to create a unique SSH key for this Rancher Kubernetes Engine cluster with the command ssh-keygen

      • Provide the path to that key for the option "Cluster Level SSH Private Key Path"

    • The option "Number of Hosts" refers to the number of hosts to configure at this time

      • Additional hosts can be added very easily after Rancher Kubernetes Engine cluster creation

      • For this implementation it is recommended to configure one or three hosts

    • Give all hosts the roles of "Control Plane", "Worker", and "etcd"

    • Answer "n" for the option "Enable PodSecurityPolicy"

  3. Update the cluster.yml file before continuing with the step "Deploying Kubernetes with RKE"

  4. If a load balancer has been deployed for the Rancher Kubernetes Engine control-plane nodes, update the cluster.yml file before deploying Rancher Kubernetes Engine to include the IP address or FQDN of the load balancer. The appropriate location is under authentication.sans. For example:

    LB_IP_Host=""
    authentication:
      strategy: x509
      sans: ["${LB_IP_Host}"]
  5. Verify password-less SSH is available from the admin workstation to each of the cluster hosts as the user specified in the cluster.yml file
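
    For example, a quick hedged check (hypothetical key path, user and address; repeat for each host listed in cluster.yml):

      # Copy the cluster SSH public key to the host, if not already done
      ssh-copy-id -i ~/.ssh/rke_cluster_key.pub <user>@<node-address>

      # Confirm the login succeeds without a password prompt
      ssh -i ~/.ssh/rke_cluster_key <user>@<node-address> 'hostname'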

  6. When ready, run rke up to create the RKE cluster

  7. After the rke up command completes, the RKE cluster will continue the Kubernetes installation process

    • Monitor the progress of the installation:

      • Export the variable KUBECONFIG to the absolute path name of the kube_config_cluster.yml file, for example export KUBECONFIG=~/rke-cluster/kube_config_cluster.yml

      • Run the command: watch -c "kubectl get deployments -A"

        • The cluster deployment is complete when elements of all the deployments show at least "1" as "AVAILABLE"

        • Use Ctrl+c to exit the watch loop after all deployment pods are running

          Tip

          To address availability and possible scaling to a multiple-node cluster, etcd is deployed as the Kubernetes datastore.

Deployment Consideration(s)

To further optimize deployment factors, leverage the following practices:

  • Availability

    • A full high-availability Rancher Kubernetes Engine cluster is recommended for production workloads. For this use case, two additional hosts should be added, for a total of three. All three hosts will perform the roles of control-plane, etcd, and worker.

      1. Deploy the same operating system on the new compute platform nodes, and prepare them in the same way as the first node

      2. Update the cluster.yml file to include the additional nodes

        • Using a text editor, copy the information for the first node (found under the "nodes:" section)

          • The node information usually starts with "- address:" and ends with the start of another node entry or the beginning of the "services:" section, for example:

            - address: 172.16.240.71
              port: "22"
              internal_address: ""
              role:
              - controlplane
              - worker
              - etcd
            
            . . .
            
              labels: {}
              taints: []
        • Paste the information into the same section, once for each additional host

        • Update the pasted information, as appropriate, for each additional host

      3. When the cluster.yml file is updated with the information specific to each node, run the command rke up

        • Run the command: watch -c "kubectl get deployments -A"

          • The cluster deployment is complete when elements of all the deployments show at least "1" as "AVAILABLE"

          • Use Ctrl+c to exit the watch loop after all deployment pods are running

After this successful deployment of the Rancher Kubernetes Engine solution, review the product documentation for details on how to directly use this Kubernetes cluster. Furthermore, by reviewing the SUSE Rancher product documentation, this solution can also be:

  • imported (refer to subsection "Importing Existing Clusters"), then

  • managed (refer to subsection "Cluster Administration") and

  • accessed (refer to subsection "Cluster Access") to orchestrate workloads, maintain security and use the many other functions that are readily available.