5 Deployment #
This section describes the process steps for deploying the Rancher Kubernetes Engine Government solution. It covers the steps to deploy each component layer, starting with a base, functional proof of concept, along with considerations for migrating toward production and the scaling guidance needed to create the solution.
5.1 Deployment overview #
The deployment stack is represented in the following figure, and details are covered for each layer in the following sections.
The content of the following sections is ordered from the bottom layer up to the top.
5.2 Compute Platform #
The base, starting configuration can reside entirely within a single Cisco UCS server. Given the relatively small resource requirements of a Rancher Kubernetes Engine Government deployment, a viable approach is to deploy it as a virtual machine (VM) on the target nodes, on top of an existing hypervisor, like KVM.
- Preparation(s)
For a physical host that is racked, cabled, and powered up, like the Cisco UCS C240 SD M5 used in this deployment:
If using Cisco UCS Integrated Management Controller (IMC):
Provide a DHCP server to assign an IP address to the Cisco UCS Integrated Management Controller, or use a monitor, keyboard, and mouse for the initial IMC configuration
Log into the interface as admin
On the left menu, click Storage → Cisco 12G Modular Raid Controller.
Create a virtual drive from unused physical drives: for example, pick two drives for the operating system and click the >> button. Under the virtual drive properties, enter boot as the name and click Create Virtual Drive, then OK.
On the left menu, click Networking → Adapter Card MLOM.
Click the vNICs tab. The factory default configuration comes with two vNICs defined, one assigned to port 0 and one assigned to port 1. Both vNICs are configured to allow any kind of traffic, with or without a VLAN tag. VLAN IDs must be managed at the operating system level.
Tip: A great feature of the Cisco VIC card is the ability to define multiple virtual network adapters presented to the operating system, each configured for a specific use. For example, admin traffic should be configured with MTU 1500 to be compatible with all communication partners, whereas the network for storage-intensive traffic should be configured with MTU 9000 for best throughput. For high availability, the two network devices per traffic type are combined in a bond at the operating system layer.
These new settings become active with the next power cycle of the server. At the top right side of the window, click Host Power → Power Off, then in the pop-up window click OK.
On the top menu item list, select Launch vKVM.
Select the Virtual Media tab and activate Virtual Devices.
Click the Virtual Media tab again to select Map CD/DVD.
In the Virtual Media - CD/DVD window, browse to the respective operating system media, open it, and use the image for a system boot.
- Deployment Process
On the respective compute module node, determine whether a hypervisor is already available for the solution's virtual machines.
If this will be the first use of this node, an option is to deploy a KVM hypervisor based upon SUSE Linux Enterprise Server, by following the Virtualization Guide.
Given the simplicity of the deployment, the operating system and hypervisor can be installed with the SUSE Linux Enterprise Server ISO media and the Cisco IMC virtual media and virtual console methodology.
Then for the solution VM, use the hypervisor user interface to allocate the necessary CPU, memory, disk, and networking resources, as noted in the SUSE Rancher hardware requirements.
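For KVM-based hosts, the VM can also be created from the command line. The following is a minimal sketch using virt-install, assuming a libvirt-based SUSE Linux Enterprise Server hypervisor; the VM name, ISO path, bridge name, and resource sizes are illustrative placeholders and should be matched to the SUSE Rancher hardware requirements:

# All names, paths, and sizes below are illustrative placeholders
virt-install \
  --name rke2-node-1 \
  --vcpus 4 \
  --memory 16384 \
  --disk size=100 \
  --cdrom /var/lib/libvirt/images/SLES-15-Full-x86_64.iso \
  --network bridge=br0 \
  --os-variant sle15sp4   # adjust to the installed SLES release (see: osinfo-query os)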
- Deployment Consideration(s)
To further optimize deployment factors, leverage the following practices:
To monitor and operate a Cisco UCS server from Intersight, the first step is to claim the device. The following procedure provides the steps to claim the Cisco UCS C240 server manually in Intersight.
Log on to the Intersight web interface and navigate to Admin → Targets.
On the top right corner of the window, click Claim a New Target.
In the next window, select Compute / Fabric → Cisco UCS Server (Standalone), then click Start.
In another tab of the web browser, log on to the Cisco Integrated Management Controller portal of the Cisco UCS C240 SD M5 and navigate to Admin → Device Connector.
Back in Intersight, enter the Device ID and Claim Code from the server and click Claim. The server is now listed in Intersight under Targets and under Servers.
Enable Tunneled vKVM and click Save. Tunneled vKVM allows Intersight to open the vKVM window in case the client has no direct network access to the server on the local LAN or via VPN.
Navigate to Operate → Servers → name of the new server to see the details and actions available for this system. The available actions depend on the Intersight license tier for this server and the privileges of the user account in use.
Note: Please have a look at Intersight Licensing to get an overview of the functions available with the different license tiers.
Now you can remotely manage the server, leverage existing or set up specific deployment profiles for the use case, and perform the operating system installation.
Tip: An even more advanced infrastructure-as-code approach with Intersight can use Terraform.
While the initial deployment only requires a single VM, as noted in later deployment sections, multiple VMs provide the resiliency needed for high availability. To reduce single points of failure, spread multi-VM deployments across multiple hypervisor nodes. Consistent hypervisor and compute module configurations, with the resources needed for the SUSE Rancher VMs, will yield a robust, reliable production implementation.
5.3 SUSE Linux Enterprise Server #
As the base software layer, use an enterprise-grade Linux operating system, for example, SUSE Linux Enterprise Server.
- Preparation(s)
To meet the solution stack prerequisites and requirements, SUSE operating system offerings, like SUSE Linux Enterprise Server, can be used.
Ensure these services are in place and configured for this node to use:
Domain Name Service (DNS) - an external network-accessible service to map IP addresses to host names
Network Time Protocol (NTP) - an external network-accessible service to obtain and synchronize system times to aid in time stamp consistency
Software Update Service - access to a network-based repository for software update packages. This can be accessed directly from each node via registration to
the general, internet-based SUSE Customer Center (SCC) or
an organization’s SUSE Manager infrastructure or
a local server running an instance of Repository Mirroring Tool (RMT)
Note: During the node's installation, it can be pointed to the respective update service. This can also be accomplished post-installation with the command-line tool SUSEConnect.
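For instance, a minimal, hedged sketch of post-installation registration with SUSEConnect (the registration code and RMT URL are placeholders):

# Register against the SUSE Customer Center (registration code is a placeholder)
SUSEConnect --regcode <REGISTRATION-CODE>

# Alternatively, register against a local RMT server (URL is illustrative)
SUSEConnect --url https://rmt.example.com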
- Deployment Process
On the compute platform node, install the noted SUSE operating system by following these steps:
Download the SUSE Linux Enterprise Server product (either the ISO or the Virtual Machine image)
Identify the appropriate, supported version of SUSE Linux Enterprise Server by reviewing the support matrix Web page for SUSE Rancher versions.
The installation process can be performed with default values by following the steps in the product documentation; see the Installation Quick Start
Tip: Adjust both the password and the local network addressing setup to comply with local environment guidelines and requirements.
- Deployment Consideration(s)
To further optimize deployment factors, leverage the following practices:
To reduce user intervention, unattended deployments of SUSE Linux Enterprise Server can be automated for ISO-based installations by referring to the AutoYaST Guide, for example as sketched below.
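As a brief, hedged illustration, an unattended, ISO-based installation can be started by adding the autoyast boot parameter to the installer's kernel command line; the profile URL below is a placeholder for a locally hosted AutoYaST control file:

# Appended to the installer boot options (URL is illustrative)
autoyast=http://deploy.example.com/profiles/sles-rke2.xml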
5.4 Rancher Kubernetes Engine Government #
- Preparation(s)
Identify the appropriate, desired version of the Rancher Kubernetes Engine Government (for example vX.YY.ZZ+rke2rV) by reviewing
the "Supported Rancher Kubernetes Engine Government Versions" associated with the respective SUSE Rancher version from "Rancher Kubernetes Engine Government Downstream Clusters" section, or
the "Releases" on the Download Web page.
For Rancher Kubernetes Engine Government versions 1.21 and higher, if the host kernel supports AppArmor, the AppArmor tools (usually available via the "apparmor-parser" package) must also be present prior to installing Rancher Kubernetes Engine Government.
On the SUSE Linux Enterprise Server node, install this required package:
zypper install apparmor-parser
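A quick, optional sanity check (a sketch using standard tooling) can confirm the package is present and that the running kernel has AppArmor enabled:

# Confirm the package is installed
rpm -q apparmor-parser
# "Y" indicates the running kernel has AppArmor enabled
cat /sys/module/apparmor/parameters/enabled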
For the underlying operating system firewall service, either
enable and configure the necessary inbound ports or
stop and completely disable the firewall service.
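For example, on a node running firewalld, either option could look like the following sketch; the port list is illustrative (6443 and 9345 appear in the deployment steps below), so consult the Rancher Kubernetes Engine Government networking requirements for the complete set:

# Option 1: open required inbound ports (illustrative subset)
firewall-cmd --permanent --add-port=6443/tcp   # Kubernetes API
firewall-cmd --permanent --add-port=9345/tcp   # RKE2 supervisor API
firewall-cmd --reload

# Option 2: stop and completely disable the firewall service
systemctl disable --now firewalld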
- Deployment Process
Perform the following steps to install the first Rancher Kubernetes Engine Government server on one of the nodes to be used for the Kubernetes control plane:
Set the following variable with the noted version of Rancher Kubernetes Engine Government, as found during the preparation steps.
RKE2_VERSION=""
Install the appropriate version of Rancher Kubernetes Engine Government:
Download the installer script:
curl -sfL https://get.rke2.io | \
  INSTALL_RKE2_VERSION=${RKE2_VERSION} sh -
Set the following variable with the URL that will be used to access the SUSE Rancher server. This may be based on one or more DNS entries, a reverse-proxy server, or a load balancer:
RKE2_subjectAltName=""
Create the RKE2 config.yaml file:
mkdir -p /etc/rancher/rke2/
cat <<EOF > /etc/rancher/rke2/config.yaml
write-kubeconfig-mode: "0644"
tls-san:
  - "${RKE2_subjectAltName}"
EOF
Start and enable the RKE2 service, which will begin installing the required Kubernetes components:
systemctl enable --now rke2-server.service
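Optionally, confirm the service came up and follow its logs with standard systemd tooling:

systemctl status rke2-server.service
journalctl -u rke2-server.service -f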
Include the Rancher Kubernetes Engine Government binary directories in this user’s path:
echo "PATH=${PATH}:/opt/rke2/bin:/var/lib/rancher/rke2/bin/" >> ~/.bashrc source ~/.bashrc
Monitor the progress of the installation:
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
watch -c "kubectl get deployments -A"
Note: For the first two to three minutes of the installation, the initial output will include the error phrase "The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?". As Kubernetes services start, this will be replaced with "No resources found". About four minutes after beginning the installation, the output will begin showing the deployments being created, and after six to seven minutes the installation should be complete.
The Rancher Kubernetes Engine Government deployment is complete when elements of all the deployments (coredns, ingress, and metrics-server) show at least "1" as "AVAILABLE"
Use Ctrl+c to exit the watch loop after all deployment pods are running
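At this point, the single-node cluster can also be inspected directly, for example:

kubectl get nodes -o wide
kubectl get pods -A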
- Deployment Consideration(s)
To further optimize deployment factors, leverage the following practices:
A full high-availability Rancher Kubernetes Engine Government cluster is recommended for production workloads. The etcd key/value store (that is, the database) requires that an odd number of servers (also known as master nodes) be allocated to the Rancher Kubernetes Engine Government cluster. In this case, two additional control-plane servers should be added, for a total of three, which allows the cluster to tolerate the loss of one server while maintaining etcd quorum.
Deploy the same operating system on the new compute platform nodes
Log in to the first server node and create a new config.yaml file for the remaining two server nodes:
Set the following variables, as appropriate for this cluster
# Private IP preferred, if available
FIRST_SERVER_IP=""
# Private IP preferred, if available
SECOND_SERVER_IP=""
# Private IP preferred, if available
THIRD_SERVER_IP=""
# From the /var/lib/rancher/rke2/server/node-token file on the first server
NODE_TOKEN=""
# Match the version of the first server (Hint: `kubectl get nodes`)
RKE2_VERSION=""
Create the new config.yaml file:
echo "server: https://${FIRST_SERVER_IP}:9345" > config.yaml echo "token: ${NODE_TOKEN}" >> config.yaml cat /etc/rancher/rke2/config.yaml >> config.yaml
Tip: The next steps require using SCP and SSH. Setting up passwordless SSH, and/or using ssh-agent, from the first server node to the second and third nodes will make these steps quicker and easier.
Copy the new config.yaml file to the remaining two server nodes:
scp config.yaml ${SECOND_SERVER_IP}:~/
scp config.yaml ${THIRD_SERVER_IP}:~/
Move the config.yaml file to the correct location in the file system:
ssh ${SECOND_SERVER_IP} << EOF
mkdir -p /etc/rancher/rke2/
cp ~/config.yaml /etc/rancher/rke2/config.yaml
cat /etc/rancher/rke2/config.yaml
EOF
ssh ${THIRD_SERVER_IP} << EOF
mkdir -p /etc/rancher/rke2/
cp ~/config.yaml /etc/rancher/rke2/config.yaml
cat /etc/rancher/rke2/config.yaml
EOF
Execute the following sets of commands on each of the remaining control-plane nodes:
Install Rancher Kubernetes Engine Government
ssh ${SECOND_SERVER_IP} << EOF
curl -sfL https://get.rke2.io | \
  INSTALL_RKE2_VERSION=${RKE2_VERSION} sh -
systemctl enable --now rke2-server.service
EOF
ssh ${THIRD_SERVER_IP} << EOF
curl -sfL https://get.rke2.io | \
  INSTALL_RKE2_VERSION=${RKE2_VERSION} sh -
systemctl enable --now rke2-server.service
EOF
Monitor the progress of the new server nodes joining the Rancher Kubernetes Engine Government cluster:
watch -c "kubectl get nodes"
It takes up to eight minutes for each node to join the cluster
A node has deployed correctly when its status is "Ready" and it holds the roles of "control-plane,etcd,master"
Use Ctrl+c to exit the watch loop after all server nodes show a status of "Ready"
Note: By default, Rancher Kubernetes Engine Government server nodes remain schedulable for workloads. This can be changed to the normal Kubernetes default by adding a taint to each server node, as sketched below. See the official Kubernetes documentation for more information on how to do that.
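A hedged sketch of such a taint follows; the node name is a placeholder, and the taint key may differ on older Kubernetes releases (for example, node-role.kubernetes.io/master):

# Prevent ordinary workloads from being scheduled on a server node
kubectl taint nodes <server-node-name> node-role.kubernetes.io/control-plane=:NoSchedule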
(Optional) In cases where agent nodes are desired, execute the following set of commands, using the same "RKE2_VERSION", "FIRST_SERVER_IP", and "NODE_TOKEN" variable settings as above, on each of the agent nodes to add it to the Rancher Kubernetes Engine Government cluster:
curl -sfL https://get.rke2.io | \
  INSTALL_RKE2_VERSION=${RKE2_VERSION} \
  RKE2_URL=https://${FIRST_SERVER_IP}:6443 \
  RKE2_TOKEN=${NODE_TOKEN} \
  RKE2_KUBECONFIG_MODE="644" \
  sh -
After this successful deployment of the Rancher Kubernetes Engine Government solution, review the product documentation for details on how to directly use this Kubernetes cluster. Furthermore, by reviewing the SUSE Rancher product documentation this solution can also be:
imported (refer to sub-section "Importing Existing Clusters"), then
managed (refer to sub-section "Cluster Administration") and
accessed (refer to sub-section "Cluster Access") to orchestrate workloads, maintain security, and leverage many more readily available functions.
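As a final, hedged example of direct cluster access from a workstation with kubectl installed (the host name and paths are illustrative), the kubeconfig generated by Rancher Kubernetes Engine Government can be copied and repointed at the cluster's reachable address:

# Copy the kubeconfig from the first server node (host name is illustrative)
scp <first-server>:/etc/rancher/rke2/rke2.yaml ~/.kube/rke2.yaml
# Replace the loopback address with the address used for ${RKE2_subjectAltName}
sed -i 's/127.0.0.1/<RKE2_subjectAltName>/' ~/.kube/rke2.yaml
export KUBECONFIG=~/.kube/rke2.yaml
kubectl get nodes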