5 Deployment #
This section describes the process steps for the deployment of the SUSE Rancher solution. It walks through the deployment of each component layer, starting from a basic, functional proof-of-concept, then adds considerations for migrating toward production and the scaling guidance needed to complete the solution.
5.1 Deployment overview #
The deployment stack is represented in the following figure, and details are covered for each layer in the following sections.
The content of the following sections is ordered and described from the bottom layer up to the top.
5.2 Compute Platform #
The base, starting configuration can reside entirely within a single Hewlett Packard Enterprise Synergy Frame. Given the relatively small resource requirements of a SUSE Rancher deployment, a viable approach is to deploy it as a virtual machine (VM) on the target nodes, on top of an existing hypervisor such as KVM. For a physical host, there are tools that can be used during the setup of the server, as described below.
- Preparation(s)
The HPE Integrated Lights Out [iLO] is designed for secure local and remote server management and helps IT administrators deploy, update and monitor HPE servers anywhere, anytime.
Upgrade your basic iLO license for additional functionality, such as graphical remote console and virtual media access to allow the remote usage of software image files (ISO files), which can be used for installing operating systems or updating servers.
(Optional) - iLO Federation enables you to manage multiple servers from one system using the iLO web interface.
For nodes situated in an HPE Synergy enclosure, like HPE Synergy SY480 used in the deployment:
Set up the necessary items in the Hewlett Packard Enterprise OneView interface, including:
Settings → Addresses and Identifiers (Subnets and Address Ranges)
Networks → Create (associate subnets and designate bandwidths)
Network Sets → Create (aggregate all the necessary Networks)
Logical Interconnects → Edit (include the respective Network Sets)
Logical Interconnect Groups → Edit (include the respective Network Sets)
Server Profile Templates → Create (or use existing hypervisor templates)
OS Deployment mode → could be configured to boot from PXE, local storage, shared storage
Firmware (upgrade to the latest and strive for consistency across node types)
Manage Connections (assign the Network Set to be bonded across NICs)
Local Storage (create the internal RAID1 set and request additional drives for the respective roles)
Manage Boot/BIOS/iLO Settings
Server Profile → Create (assign the role template to the target model)
Add Servers and Assign Server Roles
Use the Discover function from Hewlett Packard Enterprise OneView to see all of the available nodes that can be assigned to their respective roles:
Then drag and drop the nodes into the roles and ensure there is no missing configuration information by reviewing and editing each node’s server details.
Manage Settings - set up DNS/NTP, designate Disk Models/NIC Mappings/Interface Model/Networks
Manage Subnet and Netmask - edit Management Network information, ensuring a match exists to those set up in Hewlett Packard Enterprise OneView
- Deployment Process
On the respective compute module node, determine if a hypervisor is already available for the solution’s virtual machines.
If this will be the first use of this node, an option is to deploy a KVM hypervisor, based upon SUSE Linux Enterprise Server by following the Virtualization Guide.
Given the simplicity of the deployment, the operating system and hypervisor can be installed with the SUSE Linux Enterprise Server ISO media and the Hewlett Packard Enterprise Integrated Lights Out virtual media and virtual console methodology.
Then, for the solution VM, use the hypervisor user interface to allocate the necessary CPU, memory, disk and networking as noted in the SUSE Rancher hardware requirements.
- Deployment Consideration(s)
To further optimize deployment factors, leverage the following practices:
For HPE Synergy servers, you can simplify multiple compute module setups and configurations, leveraging the Hewlett Packard Enterprise OneView SDK for Terraform Provider.
For nodes running KVM, you can leverage either virt-install or the Terraform Libvirt Provider to quickly and efficiently automate the deployment of multiple virtual machines (see the sketch after this list).
While the initial deployment only requires a single VM, as noted in later deployment sections, having multiple VMs provides the resiliency needed for high availability. To reduce single points of failure, it is beneficial to spread multi-VM deployments across multiple hypervisor nodes. Consistent hypervisor and compute module configurations, sized with the resources the VMs need, will therefore yield a robust, reliable production implementation.
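As an illustration of the virt-install approach mentioned above, the following sketch creates a single VM from a pre-built disk image on a KVM host. The VM name, image path, bridge name and sizing are placeholder assumptions and should be adapted to the local environment and to the SUSE Rancher hardware requirements.
# Minimal sketch: name, image path, bridge and sizing are placeholder values
virt-install \
  --name rancher-vm01 \
  --vcpus 4 \
  --memory 16384 \
  --disk path=/var/lib/libvirt/images/rancher-vm01.qcow2 \
  --network bridge=br0 \
  --import \
  --os-variant generic \
  --noautoconsole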
5.3 SUSE Linux Enterprise Micro #
As the base software layer, use an enterprise-grade Linux operating system, for example SUSE Linux Enterprise Micro.
- Preparation(s)
To meet the solution stack prerequisites and requirements, SUSE operating system offerings, such as SUSE Linux Enterprise Micro, can be used.
Ensure these services are in place and configured for this node to use:
Domain Name Service (DNS) - an external network-accessible service to map IP Addresses to host names
Network Time Protocol (NTP) - an external network-accessible service to obtain and synchronize system times to aid in time stamp consistency
Software Update Service - access to a network-based repository for software update packages. This can be accessed directly from each node via registration to
the general, internet-based SUSE Customer Center (SCC) or
an organization’s SUSE Manager infrastructure or
a local server running an instance of Repository Mirroring Tool (RMT)
Note: During the node’s installation, it can be pointed to the respective update service. This can also be accomplished post-installation with the command-line tool SUSEConnect, as sketched below.
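For example, a minimal post-installation registration sketch, assuming a valid registration code (the code, e-mail address and RMT URL shown are placeholders):
# Register with the SUSE Customer Center (placeholder registration code and e-mail)
SUSEConnect -r <REGISTRATION_CODE> -e admin@example.com

# Alternatively, register against a local RMT server (placeholder URL)
SUSEConnect --url https://rmt.example.com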
- Deployment Process
On the compute platform node, install the noted SUSE operating system, by following these steps:
Download the SUSE Linux Enterprise Micro product (either the ISO or the Virtual Machine image)
Identify the appropriate, supported version of SUSE Linux Enterprise Micro by reviewing the support matrix Web page for the respective SUSE Rancher version.
The installation process can be performed with default values by following the steps in the product documentation; see the Installation Quick Start.
Tip: Adjust both the password and the local network addressing setup to comply with local environment guidelines and requirements.
- Deployment Consideration(s)
To further optimize deployment factors, leverage the following practices:
To reduce user intervention, unattended deployments of SUSE Linux Enterprise Micro can be automated
for ISO-based installations, by referring to the AutoYaST Guide
for raw-image based installations, by configuring the Ignition and Combustion tooling as described in the Installation Quick Start (see the sketch after this list)
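As an illustration only, a minimal Combustion script sketch is shown below; it assumes the script is supplied on a configuration medium labeled "combustion", as described in the Installation Quick Start, and the password hash and enabled service are placeholders.
#!/bin/bash
# combustion: network
# Minimal sketch of a Combustion configuration script; values are placeholders

# Set the root password from a pre-generated hash (placeholder)
echo 'root:$6$PLACEHOLDER_HASH' | chpasswd -e

# Enable remote access via SSH
systemctl enable sshd.service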
5.4 K3s #
- Preparation(s)
Identify the appropriate, desired version of the K3s binary (for example vX.YY.ZZ+k3s1) by reviewing
the "Installing SUSE Rancher on K3s" associated with the respective SUSE Rancher version, or
the "Releases" on the Download Web page.
For the underlying operating system firewall service, either
enable and configure the necessary inbound ports (see the example after this list), or
stop and completely disable the firewall service.
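For example, on a node using firewalld, the commonly documented K3s inbound ports could be opened as follows. This is a sketch; verify the required ports against the K3s documentation for the chosen cluster configuration.
# Kubernetes API server
firewall-cmd --permanent --add-port=6443/tcp
# Flannel VXLAN overlay traffic between cluster nodes
firewall-cmd --permanent --add-port=8472/udp
# Kubelet metrics
firewall-cmd --permanent --add-port=10250/tcp
# Embedded etcd peer and client traffic (HA server nodes)
firewall-cmd --permanent --add-port=2379-2380/tcp
# Apply the permanent rules
firewall-cmd --reload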
- Deployment Process
Perform the following steps to install the first K3s server on one of the nodes to be used for the Kubernetes control plane:
Set the following variable with the noted version of K3s, as found during the preparation steps.
K3s_VERSION=""
Install the version of K3s with embedded etcd enabled:
curl -sfL https://get.k3s.io | \
  INSTALL_K3S_VERSION=${K3s_VERSION} \
  INSTALL_K3S_SKIP_SELINUX_RPM=true \
  INSTALL_K3S_EXEC='server --cluster-init --write-kubeconfig-mode=644' \
  sh -s -
Tip: To address availability and possible scaling to a multi-node cluster, etcd is enabled instead of the default SQLite datastore.
Monitor the progress of the installation:
watch -c "kubectl get deployments -A"
The K3s deployment is complete when elements of all the deployments (coredns, local-path-provisioner, metrics-server, and traefik) show at least "1" as "AVAILABLE".
Use Ctrl+c to exit the watch loop after all deployment pods are running.
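Optionally, as a quick sanity check, the cluster node status can also be reviewed:
kubectl get nodes -o wide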
- Deployment Consideration(s)
To further optimize deployment factors, leverage the following practices:
A full high-availability K3s cluster is recommended for production workloads. The etcd key/value store (aka database) requires that an odd number of servers (aka master nodes) be allocated to the K3s cluster. In this case, two additional control-plane servers should be added, for a total of three.
Deploy the same operating system on the new compute platform nodes, then log in to the new nodes as root or as a user with sudo privileges.
Execute the following sets of commands on each of the remaining control-plane nodes:
Set the following additional variables, as appropriate for this cluster
# Private IP preferred, if available
FIRST_SERVER_IP=""

# From the /var/lib/rancher/k3s/server/node-token file on the first server
NODE_TOKEN=""

# Match the version used on the first server
K3s_VERSION=""
Install K3s:
curl -sfL https://get.k3s.io | \
  INSTALL_K3S_VERSION=${K3s_VERSION} \
  INSTALL_K3S_SKIP_SELINUX_RPM=true \
  K3S_URL=https://${FIRST_SERVER_IP}:6443 \
  K3S_TOKEN=${NODE_TOKEN} \
  K3S_KUBECONFIG_MODE="644" \
  INSTALL_K3S_EXEC='server' \
  sh -
Monitor the progress of the installation:
watch -c "kubectl get deployments -A"
The K3s deployment is complete when elements of all the deployments (coredns, local-path-provisioner, metrics-server, and traefik) show at least "1" as "AVAILABLE".
Use Ctrl+c to exit the watch loop after all deployment pods are running.
By default, the K3s server nodes are available to run non-control-plane workloads. In this case, the K3s default behavior is perfect for the SUSE Rancher server cluster as it does not require additional agent (aka worker) nodes to maintain a highly available SUSE Rancher server application.
Note: This can be changed to the normal Kubernetes default by adding a taint to each server node. See the official Kubernetes documentation for more information on how to do that.
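As a hedged illustration of such a taint, a command similar to the following could be applied to each server node; the node name is a placeholder and the taint key shown is the one commonly used for dedicated K3s servers (confirm against the K3s and Kubernetes documentation):
# Example only: keep general workloads off a server node (placeholder node name)
kubectl taint nodes <server-node-name> CriticalAddonsOnly=true:NoExecute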
(Optional) In cases where agent nodes are desired, execute the following sets of commands, using the same "K3s_VERSION", "FIRST_SERVER_IP", and "NODE_TOKEN" variable settings as above, on each of the agent nodes to add it to the K3s cluster:
curl -sfL https://get.k3s.io | \
  INSTALL_K3S_VERSION=${K3s_VERSION} \
  INSTALL_K3S_SKIP_SELINUX_RPM=true \
  K3S_URL=https://${FIRST_SERVER_IP}:6443 \
  K3S_TOKEN=${NODE_TOKEN} \
  K3S_KUBECONFIG_MODE="644" \
  sh -
5.5 SUSE Rancher #
- Preparation(s)
For the respective node’s firewall service, either
enable and configure the necessary inbound ports or
stop and completely disable the firewall service.
Determine the desired SSL configuration for TLS termination
Rancher-generated TLS certificate (Note: this is the easiest way of installing SUSE Rancher, using self-signed certificates)
Let’s Encrypt
Bring your own certificate
Obtain a Helm binary matching the respective Kubernetes version for this SUSE Rancher implementation.
Note: Ensure the kubeconfig file generated by K3s, /etc/rancher/k3s/k3s.yaml, is made available to kubectl and the helm command, as shown below.
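For example, assuming the default K3s kubeconfig location:
# Point kubectl and helm at the kubeconfig generated by K3s
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml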
- Deployment Process
While logged in to the node, as root or with sudo privileges, install SUSE Rancher:
Install cert-manager
Set the following variable with the desired version of cert-manager
CERT_MANAGER_VERSION=""
Note: At this time, the most current, supported version of cert-manager is v1.5.1.
Create the cert-manager CRDs and install the cert-manager Helm chart:
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/${CERT_MANAGER_VERSION}/cert-manager.crds.yaml

# Add the Jetstack Helm repository
helm repo add jetstack https://charts.jetstack.io

# Update your local Helm chart repository cache
helm repo update

# Install the cert-manager Helm chart
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version ${CERT_MANAGER_VERSION}
Check the progress of the installation, looking for all pods to be in running status:
kubectl get pods --namespace cert-manager
Add the SUSE Rancher helm chart repository:
helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
Create a namespace for SUSE Rancher
kubectl create namespace cattle-system
Prepare to use the Helm Chart for SUSE Rancher:
Set the following variable to the host name of the SUSE Rancher server instance
HOSTNAME=""
Note: This host name should be resolvable to an IP address of the K3s host, or a load balancer/proxy server that supports this installation of SUSE Rancher.
Set the following variable to the number of deployed K3s nodes planned to host the SUSE Rancher service
REPLICAS=""
Set the following variable to the desired version of SUSE Rancher server instance
RANCHER_VERSION=""
Install the SUSE Rancher Helm Chart
helm install rancher rancher-stable/rancher \
  --namespace cattle-system \
  --set hostname=${HOSTNAME} \
  --set replicas=${REPLICAS} \
  --version=${RANCHER_VERSION}
Monitor the progress of the installation:
kubectl -n cattle-system rollout status deploy/rancher
(Optional) Create an SSH tunnel to access SUSE Rancher:
Note: This optional step is useful in cases where NAT routers and/or firewalls prevent the client Web browser from reaching the exposed SUSE Rancher server IP address and/or port. This step requires that a Linux host is accessible through SSH from the client system and that the Linux host can reach the exposed SUSE Rancher service. The SUSE Rancher host name should be resolvable to the appropriate IP address by the local workstation.
Create an SSH tunnel through the Linux host to the IP address of the SUSE Rancher server on the NodePort, as noted in Step 3:
ssh -N -D 8080 user@Linux-host
On the local workstation Web browser, change the SOCKS Host settings to "127.0.0.1" and port "8080".
Note: This will route all traffic from this Web browser through the remote Linux host. Be sure to close the tunnel and revert the SOCKS Host settings when you are done.
Connect to the SUSE Rancher Web UI:
On a client system, use a Web browser to connect to the SUSE Rancher service via HTTPS.
Provide a new Admin password.
Important: On the second configuration page, ensure the "Rancher Server URL" is set to the host name specified when installing the SUSE Rancher Helm Chart and the port is 443.
- Deployment Consideration(s)
To further optimize deployment factors, leverage the following practices:
In instances where a load balancer is used to access a K3s cluster, deploying two additional K3s cluster nodes, for a total of three, will automatically make SUSE Rancher highly available.
The basic deployment steps described above are for deploying SUSE Rancher with automatically generated, self-signed security certificates. Other options are to have SUSE Rancher create public certificates via Let’s Encrypt associated with a publicly resolvable host name for the SUSE Rancher server, or to provide preconfigured, private certificates.
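As a sketch of those alternatives, the TLS source can be selected through the SUSE Rancher Helm Chart options at install time; the e-mail address below is a placeholder, and the full set of required options (for example, the secret holding private certificates) should be taken from the SUSE Rancher documentation.
# Example: request publicly trusted certificates via Let's Encrypt (placeholder e-mail)
helm install rancher rancher-stable/rancher \
  --namespace cattle-system \
  --set hostname=${HOSTNAME} \
  --set replicas=${REPLICAS} \
  --set ingress.tls.source=letsEncrypt \
  --set letsEncrypt.email=admin@example.com \
  --version=${RANCHER_VERSION}

# Example: bring your own, preconfigured private certificates stored in a Kubernetes secret
#   --set ingress.tls.source=secret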
This deployment of SUSE Rancher uses the K3s etcd key/value store to persist its data and configuration, which offers several advantages. With a multi-node cluster and this resiliency through replication, having to provide highly-available storage is not needed. In addition, backing up the K3s etcd store protects the cluster and the installation of SUSE Rancher and permits restoration of a given state.
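For example, K3s includes a built-in snapshot facility for the embedded etcd datastore; an on-demand snapshot could be taken on a server node as sketched below (scheduled snapshots and retention are also configurable through K3s server options):
# Take an on-demand snapshot of the embedded etcd datastore (run on a K3s server node)
k3s etcd-snapshot save

# Snapshots are written to /var/lib/rancher/k3s/server/db/snapshots by default
ls /var/lib/rancher/k3s/server/db/snapshots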
After this successful deployment of the SUSE Rancher solution, review the product documentation for details on how downstream Kubernetes clusters can be:
deployed (refer to sub-section "Setting up Kubernetes Clusters in Rancher") or
imported (refer to sub-section "Importing Existing Clusters"), then
managed (refer to sub-section "Cluster Administration") and
accessed (refer to sub-section "Cluster Access"), so that workload orchestration, security maintenance and many more functions are readily available.