SUSE Linux Enterprise Server 15 SP3, K3s 1.20.14, SUSE Rancher 2.5.12

Layered Stack Deployment of SUSE Rancher

Integrated with Cisco (R)

Technical Reference Documentation
Reference Configuration
Date: 2022-04-12

The purpose of this document is to provide an overview and procedure of implementing SUSE (R) and partner offerings for SUSE Rancher, as a multi-cluster container management platform for organizations that deploy containerized workloads, orchestrated by Kubernetes. SUSE Rancher makes it easy to deploy, manage, and use Kubernetes everywhere, meet IT requirements, and empower DevOps teams.

Disclaimer: Documents published as part of the series SUSE Technical Reference Documentation have been contributed voluntarily by SUSE employees and third parties. They are meant to serve as examples of how particular actions can be performed. They have been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. SUSE cannot verify that actions described in these documents do what is claimed or whether actions described have unintended consequences. SUSE LLC, its affiliates, the authors, and the translators may not be held liable for possible errors or the consequences thereof.

1 Introduction

On the digital transformation journey to a full cloud-native landscape, the use of microservices becomes the main approach, with Kubernetes as the dominant technology for container orchestration.[1] With its large community of developers and abundant features and capabilities, Kubernetes has become the de-facto standard and is included across most container-as-a-service platforms. With all of these technologies in place, both developer and operations teams can effectively deploy, manage and deliver functionality to their end users in a resilient and agile manner.

1.1 Motivation

Once on such a digital transformation journey, it is also relevant to focus on areas like:

Workload(s)

Determine how to manage and launch internally developed containerized, microservice workloads

Kubernetes

While any developer or organization may simply start with a single, Kubernetes-based deployment, it is very common for that number of cluster instances to rapidly grow. While each of these may have specific focus areas, it becomes imperative to figure out how to use, manage, maintain and replicate all of these instances over time.

This is where SUSE Rancher leads the industry: it can manage access, usage, infrastructure and applications across clusters that are Cloud Native Computing Foundation (CNCF[2]) conformant and certified[3], anywhere across edge, on-premises data centers, or cloud service providers. SUSE Rancher optimizes creating and managing Kubernetes clusters such as:

  • Lightweight edge-centric K3s

  • Rancher Kubernetes Engine (RKE)

  • Rancher Kubernetes Engine Government (RKE2)

  • and other Kubernetes clusters that are based upon CNCF certified Kubernetes distributions or installations

and deployed across various supported infrastructure elements.

Compute Platform(s)

To optimize availability, performance, scalability and integrity, assess current system or hosting platforms from Independent Hardware Vendors (IHVs), such as Cisco®, as the platform for physical bare-metal machines, hypervisors and virtual machines.

1.2 Scope

The scope of this document is to provide a layered reference configuration for SUSE Rancher. This can be done with a variety of layered solution stacks, making SUSE Rancher a fundamental component for managing multiple Kubernetes ecosystems.

1.3 Audience

This document is intended for IT decision makers, architects, system administrators and technicians who are implementing a flexible, software-defined Kubernetes management platform. One should still be familiar with the traditional IT infrastructure pillars — networking, computing and storage — along with the local use cases for sizing, scaling and limitations within each pillar's environment.

2 Business aspect

Agility is driving developers toward more cloud-native methodologies that focus on microservices architectures and streamlined workflows. Container technologies, like Kubernetes, embody this agile approach and help enable cloud-native transformation.

By unifying IT operations with Kubernetes, organizations realize key benefits like increased reliability, improved security and greater efficiencies with standardized automation. Therefore, Kubernetes infrastructure platforms are adopted by enterprises to deliver:

Cluster Operations

Improved Production and DevOps efficiencies with simplified cluster usage and robust operations

Security Policy & User Management

Consistent security policy enforcement plus advanced user management on any Kubernetes infrastructure

Access to Shared Tools & Services

A high level of reliability with easy, consistent access to a broad set of tools and services

2.1 Business problem

Many organizations are deploying Kubernetes clusters everywhere — in the cloud, on-premises, and at the edge — to unify IT operations. Such organizations can realize dramatic benefits, including:

  • Consistently deliver a high level of reliability on any infrastructure

  • Improve DevOps efficiency with standardized automation

  • Ensure enforcement of security policies on any infrastructure

However, simply relying on upstream Kubernetes alone can introduce overhead and risk because Kubernetes clusters are typically deployed:

  • Without central visibility

  • Without consistent security policies

  • And must be managed independently

Deploying Kubernetes at scale requires consideration of a large ecosystem, encompassing many software and infrastructure components and providers, along with the ability to continually address the needs and concerns of:

Developers

These team members focus on writing code to build their apps securely using a preferred workflow, and need a simple, push-button mechanism to deploy their containerized workloads where needed.

IT Operators

General infrastructure requirements for the stacked, underlying infrastructure still rely upon the traditional IT pillars. Ease of deployment, availability, scalability, resiliency, performance, security and integrity remain core concerns to be addressed for administrative control and observability.

Beyond just the core infrastructure software layers of managed Kubernetes clusters, organizations may also be impacted by:

Compute Platform

Potential inconsistencies and impacts of multiple target system platforms for the distributed deployments of the cluster elements, across:

  • physical bare-metal machines, hypervisors and virtual machines

2.2 Business value

By allowing operations teams to focus on infrastructure and developers to deploy code the way they want to, SUSE and the Rancher offerings help bring products to market faster and accelerate an organization’s digital transformation.

SUSE Rancher is a fundamental part of the complete software stack for teams adopting containers. It provides DevOps teams with integrated tools for running containerized workloads while also addressing the operational and security challenges of managing multiple Kubernetes clusters across any targeted infrastructure.

Developers

SUSE Rancher makes it easy to securely deploy containerized applications no matter where the Kubernetes infrastructure runs — in the cloud, on-premises, or at the edge. Developers can use Helm or the App Catalog to deploy and manage applications across any or all of these environments, ensuring multi-cluster consistency with a single deployment process.

IT Operators

SUSE Rancher not only deploys and manages production-grade Kubernetes clusters from datacenter to cloud to the edge, it also unites them with centralized authentication, access control and observability. Further, it streamlines cluster deployment on bare metal or virtual machines and maintains them using defined security policies.

With this increased consistency of the managed Kubernetes infrastructure clusters, organizations benefit from an even higher level of the Cloud Native Computing model where each layer only relies upon the API and version of the adjacent layer, such as:

Compute Platform

Using the above software application and technology solutions with the server platforms offered by Cisco Unified Computing System (UCS) brings increased productivity, reduced total cost of ownership, and scalability into your computing realm. Cisco UCS is based upon industry-standard, x86-architecture servers with Cisco innovations and delivers a better balance of CPU, memory, and I/O resources. This balance brings processor power to life with more than 150 world-record-setting benchmark results that demonstrate leadership in application areas including virtualization, cloud computing, enterprise applications, database management systems, enterprise middleware, high-performance computing, and basic CPU integer and floating-point performance metrics.

  • Match servers to workloads - The breadth of the server product line makes the process of matching servers to workloads straightforward, enabling you to achieve the best balance of CPU, memory, I/O, internal disk, and external storage-access resources using the blade, rack, multinode, or storage server form factor that best meets your organization’s data center requirements and preferred purchasing model.

  • Powered by AMD EPYC processors or Intel Xeon Scalable processors

  • Industry-leading bandwidth - Cisco UCS virtual interface cards have dramatically simplified the deployment of servers for specific applications. By making the number and type of I/O devices programmable on demand, they enable organizations to deploy and repurpose server I/O configurations without ever touching the hardware.

  • Lower infrastructure cost - Designed for lower infrastructure cost per server, Cisco UCS makes scaling fast, easy, and inexpensive in comparison to manually configured approaches.

  • Rack server deployment flexibility - Cisco UCS C-Series Rack Servers are unique in the industry because they can be integrated with Cisco UCS connectivity and management or used as stand-alone servers.

  • Integrated Management Controller (IMC) - The IMC runs in the system’s Baseboard Management Controller (BMC). When a Cisco UCS C-Series Rack Server is integrated into a Cisco UCS domain, the fabric interconnects interface with the IMC to make the server part of a single unified management domain. When used as a standalone server, direct access to the IMC through the server’s management port allows a range of software tools (including Cisco Intersight) to configure the server through its API.

3 Architectural overview

This section outlines the core elements of the SUSE Rancher solution, along with the suggested target platforms and components.

3.1 Solution architecture

The figure below illustrates the high-level architecture of the SUSE Rancher installation that manages multiple downstream Kubernetes clusters:

Rancher architecture
Figure 3.1: Architecture Overview - SUSE Rancher
Authentication Proxy

A user is authenticated via SUSE Rancher and then, if authorized, can access both the SUSE Rancher environment and the downstream clusters and workloads.

API Server

This provides the programmatic interface back-end for a user, using command line interactions with SUSE Rancher and the managed clusters.

Data Store

The purpose of this service is to capture the configuration and state of SUSE Rancher and the managed clusters to aid in backup and recovery processes.

Cluster Controller

Interacting with a cluster agent on the downstream cluster, the cluster controller allows the communication path for users and services to leverage for workloads and cluster management.

When set up, users can interact with SUSE Rancher through the Web-based user interface (UI), the command line interface (CLI), and programmatically through the application programming interface (API). Depending upon the assigned roles, group membership and privileges, a user could:

  • manage all clusters, users, roles, projects

  • deploy new clusters, import other clusters, or remove existing ones

  • manage workloads across respective or labelled clusters

  • simply view clusters or workloads, or benefit from what is running

For the best performance and security, the recommended deployment is a dedicated Kubernetes cluster for the SUSE Rancher management server. Running user workloads on this cluster is not advised. After deploying SUSE Rancher, one can then create or import clusters for orchestrated workloads.

4 Component model

This section describes the various components being used to create a SUSE Rancher solution deployment, in the perspective of top to bottom ordering. When completed, the SUSE Rancher instance enables the management of multiple, downstream Kubernetes clusters.

4.1 Component overview

By using:

  • Software

    • Multi-cluster Management Server - SUSE Rancher

    • Kubernetes Platform - K3s

    • Linux Operating System - SUSE Linux Enterprise Server

  • Compute Platform

    • Cisco UCS

you can create the necessary infrastructure and services. Further details for these components are described in the following sections.

4.2 Software - SUSE Rancher

SUSE Rancher is a Kubernetes native multi-cluster container management platform. It addresses these challenges by delivering the following key functions, as shown in the following figure:

Rancher overview
Figure 4.1: Component Overview - SUSE Rancher
Certified Kubernetes Distributions

SUSE Rancher supports management of any CNCF certified Kubernetes distribution for:

  • development, edge, branch workloads, SUSE offerings like K3s, a CNCF certified lightweight distribution of Kubernetes

  • workload infrastructures, either on-premise or public-cloud based, SUSE offerings like Rancher Kubernetes Engine (RKE) or Rancher Kubernetes Engine Government (RKE2), as CNCF certified Kubernetes distributions for both bare-metal and virtualized servers

  • the public cloud, hosted Kubernetes services like

    • Amazon Elastic Kubernetes Service (EKS⁠[4]),

    • Azure Kubernetes Service (AKS⁠[5]) and

    • Google Kubernetes Engine (GKE⁠[6]).

Simplified Cluster Operations and Infrastructure Management

SUSE Rancher provides simple, consistent cluster operations including provisioning and templates, configuration and lifecycle version management, along with visibility and diagnostics.

Security and Authentication

SUSE Rancher integrates and utilizes existing directory services, to automate processes and apply a consistent set of identity and access management (IAM) plus security policies for all the managed clusters, no matter where they are running.

Policy Enforcement and Governance

SUSE Rancher includes audit and security guideline enforcement, monitoring and logging functions, along with user, network and workload policies distributed across all managed clusters.

Platform Services

SUSE Rancher also provides a rich catalog of services for building, deploying and scaling containerized applications, including app packaging, logging, monitoring and service mesh.

Tip

Learn more about SUSE Rancher

For a production implementation of SUSE Rancher, deploying upon a Kubernetes platform is required and the next sections describe the suggested component layering approach.
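As a preview of that layering, a production SUSE Rancher install typically lands on an existing Kubernetes cluster via Helm. The commands below are a minimal sketch, not the full procedure; the hostname is a placeholder, and cert-manager must already be in place when Rancher-generated certificates are used:

```shell
# Minimal sketch of a Helm-based SUSE Rancher install on an existing
# Kubernetes cluster; rancher.example.com is a placeholder hostname.
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
kubectl create namespace cattle-system
helm install rancher rancher-latest/rancher \
    --namespace cattle-system \
    --set hostname=rancher.example.com
```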

4.3 Software - K3s

K3s is packaged as a single binary, which is about 50 megabytes in size. Bundled in that single binary is everything needed to run Kubernetes anywhere, including low-powered IoT and Edge-based devices. The binary includes:

  • the container runtime

  • important host utilities such as iptables, socat and du

The only OS dependencies are the Linux kernel itself and proper dev, proc and sysfs mounts (set up automatically on all modern Linux distributions). K3s bundles the Kubernetes components:

  • kube-apiserver,

  • kube-controller-manager,

  • kube-scheduler,

  • kubelet and

  • kube-proxy

into combined processes that are presented as a simple server and agent model, as represented in the following figure:

K3s overview
Figure 4.2: Component Overview - K3s

K3s can run as a complete cluster on a single node or can be expanded into a multi-node cluster. Besides the core Kubernetes components, these are also included:

  • containerd,

  • Flannel,

  • CoreDNS,

  • ingress controller and

  • a simple host port-based service load balancer.

All of these components are optional and can be swapped out for your implementation of choice. With these included components, you get a fully functional and CNCF-conformant cluster so you can start running apps right away. K3s is now a CNCF Sandbox project, being the first Kubernetes distribution ever to be adopted into sandbox.
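For example, assuming the bundled Traefik ingress controller and service load balancer are the components to be swapped out, the K3s installer accepts --disable flags for them, leaving room for alternatives of your choice:

```shell
# Sketch: install K3s without the bundled ingress controller and
# service load balancer, so replacement components can be deployed instead.
curl -sfL https://get.k3s.io | \
    INSTALL_K3S_EXEC='server --disable traefik --disable servicelb' \
    sh -s -
```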

Tip

Learn more about K3s

4.4 Software - SUSE Linux Enterprise Server

SUSE Linux Enterprise Server (SLES) is an adaptable and easy-to-manage platform that allows developers and administrators to deploy business-critical workloads on-premises, in the cloud and at the edge. It is a Linux operating system that is adaptable to any environment, optimized for performance, security and reliability. As a multimodal operating system that paves the way for IT transformation in the software-defined era, SUSE Linux Enterprise Server simplifies multimodal IT, makes traditional IT infrastructure efficient and provides an engaging platform for developers. As a result, one can easily deploy and transition business-critical workloads across on-premises and public cloud environments.

Designed for interoperability, SUSE Linux Enterprise Server integrates into classical Unix and Windows environments, supports open standard interfaces for systems management, and has been certified for IPv6 compatibility. This modular, general purpose operating system runs on four processor architectures and is available with optional extensions that provide advanced capabilities for tasks such as real time computing and high availability clustering. SUSE Linux Enterprise Server is optimized to run as a high performing guest on leading hypervisors and supports an unlimited number of virtual machines per physical system with a single subscription. This makes it the perfect guest operating system for virtual computing.

4.5 Compute Platform

Leveraging the enterprise grade functionality of the operating system mentioned in the previous section, many compute platforms can be the foundation of the deployment:

  • Virtual machines on supported hypervisors or hosted on cloud service providers

  • Physical, baremetal or single-board computers, either on-premises or hosted by cloud service providers

Note

To complete self-testing of hardware with SUSE YES Certified Process, you can download and install the respective SUSE operating system support-pack version of SUSE Linux Enterprise Server and the YES test suite. Then run the tests per the instructions in the test kit, fixing any problems encountered and when corrected, re-run all tests to obtain clean test results. Submit the test results into the SUSE Bulletin System (SBS) for audit, review and validation.

Tip

Certified systems and hypervisors can be verified via SUSE YES Certified Bulletins and then can be leveraged as supported nodes for this deployment, as long as the certification refers to the respective version of the underlying SUSE operating system required.

Cisco UCS C-Series Rack Servers

Cisco UCS C-Series Rack Servers deliver unified computing in an industry-standard form factor to reduce TCO and increase agility. Each server addresses varying workload challenges through a balance of processing, memory, I/O, and internal storage resources. These servers can be deployed as stand-alone servers or as part of a Cisco Unified Computing System (Cisco UCS) managed environment to take advantage of Cisco’s standards-based unified computing innovations that help reduce customers’ Total Cost of Ownership (TCO) and increase their business agility.

Server product-line and model options abound in the Cisco UCS C-Series Rack Servers, including:

  • Cisco UCS C240 SD M5 - a high-performance compute solution in a dense 2-socket, 2-Rack-Unit, 22” form-factor to handle the most critical real-time compute applications. This front-access server can be deployed stand-alone or as part of a Cisco Unified Computing System (Cisco UCS) to deliver an exceptional management experience for a variety of applications by:

    • incorporating the 2nd generation of Intel Xeon Scalable processors, Intel Optane Memory, and various drive options including All-NVMe, SAS and SATA drives.

    • being density optimized to accommodate space constrained environments while still offering industry-leading performance and expandability. It supports a wide range of workloads from enterprise to edge applications such as Multi-access Edge Compute (MEC).

    Note

    The Cisco UCS Hardware Compatibility List provides a lookup tool for server and OS support, covering versions of SUSE offerings.

  • Cisco Intersight - To simplify multiple compute module setups and configurations, leverage Cisco Intersight, an API-driven, cloud-based system management platform that integrates with the Cisco Integrated Management Controller. It is designed to help organizations achieve their IT management and operations with a higher level of automation, simplicity, and operational efficiency. It is a new generation of global management tool for the Cisco UCS and Cisco HyperFlex systems and provides a holistic and unified approach to managing the customers’ distributed and virtualized environments. Cisco Intersight simplifies the installation, monitoring, troubleshooting, upgrade, and support for your infrastructure with the following benefits:

    • Cloud-Based Management: The ability to manage Cisco UCS and Cisco HyperFlex from the cloud gives customers speed, simplicity, and easy scaling in the management of their infrastructure, whether in data centers or remote and branch office locations.

    • Automation: Unified API in Cisco UCS and Cisco HyperFlex systems enables policy-driven configuration and management of the infrastructure and it makes Intersight itself and the devices connected to it fully programmable and DevOps friendly. An even more advanced infrastructure-as-code approach with Intersight can use Terraform.

    • Analytics and Telemetry: Intersight monitors the health and relationships of all the physical and virtual infrastructure components. It also collects telemetry and configuration information for developing the intelligence of the platform, in accordance with Cisco information security requirements.

    • Connected Cisco Technical Assistance Center (TAC): Solid integration with Cisco TAC enables more efficient and proactive technical support. Intersight provides enhanced operations automation by expediting sending files to speed troubleshooting.

    • Recommendation Engine: Driven by analytics and machine learning, Intersight recommendation engine provides actionable intelligence for IT operations management from the daily increasing knowledge base and practical insights learned in the entire system.

    • Management as a Service: Cisco Intersight provides management as a service and is designed to be infinitely scalable and easy to implement. It relieves users of the burden of maintaining systems management software and hardware.

Note

A sample bill of materials, in Chapter 9, Appendix, cites the necessary quantities of all components, along with a reference to the minimum resource requirements needed by the software components.

5 Deployment

This section describes the process steps for the deployment of the SUSE Rancher solution. It covers deploying each of the component layers, starting from a base functional proof of concept, with considerations for migrating toward production and the scaling guidance needed to complete the solution.

5.1 Deployment overview

The deployment stack is represented in the following figure:

rc Rancher K3s SLES Cisco deployment
Figure 5.1: Deployment Stack - SUSE Rancher

and details are covered for each layer in the following sections.

Note

The following section’s content is ordered and described from the bottom layer up to the top.

5.2 Compute Platform

The base, starting configuration can reside all within a single Cisco UCS server. Based upon the relatively small resource requirements for a SUSE Rancher deployment, a viable approach is to deploy as a virtual machine (VM) on the target nodes, on top of an existing hypervisor, like KVM.

Preparation(s)

For a physical host that is racked, cabled and powered up, like the Cisco UCS C240 SD M5 used in this deployment:

  1. If using Cisco UCS Integrated Management Controller (IMC):

    • Provide a DHCP Server for an IP address to the Cisco UCS Integrated Management Controller or use a monitor, keyboard, and mouse for initial IMC configuration

  2. Log into the interface as admin

    • On left menu click on Storage → Cisco 12G Modular Raid Controller

      • Create virtual drive from unused physical drives, for example pick two drives for the operating system and click on >> button. Under virtual drive properties enter boot as the name and click on Create Virtual Drive, then OK.

    • On the left menu click on Networking → Adapter Card MLOM

      • Click on the vNICs tab, and the factory default configuration comes with two vNICs defined with one vNIC assigned to port 0 and one vNIC assigned to port 1. Both vNICs are configured to allow any kind of traffic, with or without a VLAN tag. VLAN IDs must be managed on the operating system level.

        Tip

        A great feature of the Cisco VIC card is the possibility to define multiple virtual network adapters presented to the operating system, each configured best for a specific use. For example, admin traffic should be configured with MTU 1500 to be compatible with all communication partners, whereas the network for storage-intensive traffic should be configured with MTU 9000 for best throughput. For high availability, the two network devices per traffic type are combined in a bond on the operating system layer.

  3. These new settings become active with the next power cycle of the server. At the top right side of the window click on Host Power → Power Off, in the pop-up windows click on OK.

  4. On the top menu item list, select Launch vKVM

    • Select the Virtual Media tab and activate Virtual Devices

    • Click the Virtual Media tab to select Map CD/DVD

    • In the Virtual Media - CD/DVD window, browse to respective operating system media, open and use the image for a system boot.

Deployment Process

On the respective compute module node, determine if a hypervisor is already available for the solution's virtual machines.

  1. If this will be the first use of this node, an option is to deploy a KVM hypervisor, based upon SUSE Linux Enterprise Server by following the Virtualization Guide.

    • Given the simplicity of the deployment, the operating system and hypervisor can be installed with the SUSE Linux Enterprise Server ISO media and the Cisco IMC virtual media and virtual console methodology.

  2. Then for the solution VM, use the hypervisor user interface to allocate the necessary CPU, memory, disk and networking as noted in the SUSE Rancher hardware requirements.
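As an illustration of step 2 on a KVM hypervisor, a VM sized along the lines of the SUSE Rancher hardware requirements could be created with virt-install; the VM name, resource sizes, ISO path and network below are assumptions to adapt to the local environment:

```shell
# Hypothetical example: create a SLES VM on the KVM host for this solution.
# Name, memory/vCPU/disk sizing, ISO path and network are placeholders.
virt-install \
    --name rancher-node \
    --memory 16384 \
    --vcpus 4 \
    --disk size=100 \
    --cdrom /var/lib/libvirt/images/SLE-15-SP3-Full-x86_64-Media1.iso \
    --os-variant sle15sp3 \
    --network network=default
```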

Deployment Consideration(s)

To further optimize deployment factors, leverage the following practices:

  • Automation

    • To monitor and operate a Cisco UCS server from Intersight, the first step is to claim the device. The following procedure provides the steps to claim the Cisco UCS C240 server manually in Intersight.

      • Log on to the Intersight web interface and navigate to Admin → Targets

      • On the top right corner of the window click on Claim a New Target

      • In the next window, select Compute / Fabric → Cisco UCS Server (Standalone), click on Start

      • In another tab of the web browser, log on to the Cisco Integrated Management Controller portal of the Cisco UCS C240 SD M5 and navigate to Admin → Device Connector

      • Back in Intersight, enter the Device ID and Claim Code from the server and click on Claim. The server is now listed in Intersight under Targets and under Servers

      • Enable Tunneled vKVM and click on Save. Tunneled vKVM allows Intersight to open the vKVM window in case the client has no direct network access to the server on the local LAN or via VPN.

      • Navigate to Operate → Servers → name of the new server to see the details and Actions available for this system.

      • The available actions are based on the Intersight license level available for this server and the privileges of the user account in use.

        Note

        Please have a look at Intersight Licensing to get an overview of the functions available with the different license tiers.

      • Now you can remotely manage the server, leverage existing deployment profiles or set up specific ones for the use case, and perform the operating system installation.

        Tip

        An even more advanced infrastructure-as-code approach with Intersight can use Terraform.

  • Availability

    • While the initial deployment only requires a single VM, as noted in later deployment sections, having multiple VMs provides the resiliency needed for high availability. To reduce single points of failure, it is beneficial to spread multi-VM deployments across multiple hypervisor nodes. Consistent hypervisor and compute module configurations, with the needed resources for the SUSE Rancher VMs, will yield a robust, reliable production implementation.

5.3 SUSE Linux Enterprise Server

As the base software layer, use an enterprise-grade Linux operating system. For example, SUSE Linux Enterprise Server.

Preparation(s)

To meet the solution stack prerequisites and requirements, SUSE operating system offerings, like SUSE Linux Enterprise Server can be used.

  1. Ensure these services are in place and configured for this node to use:

    • Domain Name Service (DNS) - an external network-accessible service to map IP Addresses to host names

    • Network Time Protocol (NTP) - an external network-accessible service to obtain and synchronize system times to aid in time stamp consistency

    • Software Update Service - access to a network-based repository for software update packages. This can be accessed directly from each node via registration to

      • the general, internet-based SUSE Customer Center (SCC) or

      • an organization’s SUSE Manager infrastructure or

      • a local server running an instance of Repository Mirroring Tool (RMT)

        Note

        During the node’s installation, it can be pointed to the respective update service. This can also be accomplished post-installation with the command line tool named SUSEConnect.
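For instance, post-installation registration with SUSEConnect could look like the following sketch; the registration code and RMT hostname are placeholders:

```shell
# Register the node against the SUSE Customer Center
# (the registration code is a placeholder)
SUSEConnect --regcode <REGISTRATION-CODE>

# ...or point the node at a local RMT server instead
# (the hostname is a placeholder)
SUSEConnect --url https://rmt.example.com
```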

Deployment Process

On the compute platform node, install the noted SUSE operating system, by following these steps:

  1. Download the SUSE Linux Enterprise Server product (either for the ISO or Virtual Machine image)

    • Identify the appropriate, supported version of SUSE Linux Enterprise Server by reviewing the support matrix Web page for the SUSE Rancher versions.

  2. The installation can be performed with default values by following the steps in the product documentation; see the Installation Quick Start

    Tip

    Adjust both the password and the local network addressing setup to comply with local environment guidelines and requirements.

Deployment Consideration(s)

To further optimize deployment factors, leverage the following practices:

  • Automation

    • To reduce user intervention, unattended deployments of SUSE Linux Enterprise Server can be automated
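
For example, unattended installations of SUSE Linux Enterprise Server are commonly driven by AutoYaST, where the installer is pointed at a control file over the network via the kernel command line (for example through PXE boot). The URL and boot options below are illustrative placeholders:

```shell
# Placeholder location of an AutoYaST control file on an internal web server
AY_PROFILE_URL="http://deploy.example.com/autoyast/sles15sp3.xml"

# Appended to the installer's kernel command line, this directs the
# installation to run unattended with DHCP-based network setup:
KERNEL_APPEND="autoyast=${AY_PROFILE_URL} netsetup=dhcp"

echo "${KERNEL_APPEND}"
```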

5.4 K3s

Preparation(s)
  1. Identify the appropriate, desired version of the K3s binary (for example vX.YY.ZZ+k3s1) by reviewing

    • the "Installing SUSE Rancher on K3s" associated with the respective SUSE Rancher version, or

    • the "Releases" on the Download Web page.

  2. For the underlying operating system firewall service, either

    • enable and configure the necessary inbound ports or

    • stop and completely disable the firewall service.
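
If the firewall is to remain enabled, the inbound ports below are the ones commonly needed by a K3s server with embedded etcd (port list drawn from the K3s networking documentation; verify against your K3s version). The helper only prints the firewall-cmd invocations so they can be reviewed before running them as root:

```shell
# Ports typically required by a K3s server node with embedded etcd:
#   6443/tcp      - Kubernetes API server
#   8472/udp      - Flannel VXLAN (node-to-node pod traffic)
#   10250/tcp     - kubelet metrics
#   2379-2380/tcp - embedded etcd client/peer traffic (multi-server only)
K3S_PORTS="6443/tcp 8472/udp 10250/tcp 2379-2380/tcp"

print_firewall_cmds() {
  for port in ${K3S_PORTS}; do
    echo "firewall-cmd --permanent --add-port=${port}"
  done
  echo "firewall-cmd --reload"
}

print_firewall_cmds
```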

Deployment Process

Perform the following steps to install the first K3s server on one of the nodes to be used for the Kubernetes control plane:

  1. Set the following variable with the noted version of K3s, as found during the preparation steps.

    K3s_VERSION=""
  2. Install the version of K3s with embedded etcd enabled:

    curl -sfL https://get.k3s.io | \
            INSTALL_K3S_VERSION=${K3s_VERSION} \
            INSTALL_K3S_EXEC='server --cluster-init --write-kubeconfig-mode=644' \
            sh -s -
    Tip

    To address availability and possible scaling to a multi-node cluster, etcd is enabled instead of the default SQLite datastore.

    • Monitor the progress of the installation: watch -c "kubectl get deployments -A"

      • The K3s deployment is complete when elements of all the deployments (coredns, local-path-provisioner, metrics-server, and traefik) show at least "1" as "AVAILABLE"

      • Use Ctrl+c to exit the watch loop after all deployment pods are running
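
As a non-interactive alternative to the watch loop above, kubectl's wait subcommand can block until the deployments report Available; the sketch below only prints the command so it can be reviewed before running it on the node:

```shell
# Wait up to five minutes for every deployment in every namespace to
# become Available (the exit status is non-zero on timeout):
WAIT_CMD="kubectl wait deployment --all --all-namespaces --for=condition=Available --timeout=300s"

# Run it directly on the K3s node once kubectl is configured:
#   ${WAIT_CMD}
echo "${WAIT_CMD}"
```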

Deployment Consideration(s)

To further optimize deployment factors, leverage the following practices:

  • Availability

    • A full high-availability K3s cluster is recommended for production workloads. The etcd key/value store (aka database) requires that an odd number of servers (aka master nodes) be allocated to the K3s cluster. In this case, two additional control-plane servers should be added, for a total of three.

      1. Deploy the same operating system on the new compute platform nodes, then log in to the new nodes as root or as a user with sudo privileges.

      2. Execute the following sets of commands on each of the remaining control-plane nodes:

        • Set the following additional variables, as appropriate for this cluster

          # Private IP preferred, if available
          FIRST_SERVER_IP=""
          
          # From /var/lib/rancher/k3s/server/node-token file on the first server
          NODE_TOKEN=""
          
          # Match the version used on the first server
          K3s_VERSION=""
        • Install K3s

          curl -sfL https://get.k3s.io | \
          	INSTALL_K3S_VERSION=${K3s_VERSION} \
          	K3S_URL=https://${FIRST_SERVER_IP}:6443 \
          	K3S_TOKEN=${NODE_TOKEN} \
          	K3S_KUBECONFIG_MODE="644" INSTALL_K3S_EXEC='server' \
          	sh -
        • Monitor the progress of the installation: watch -c "kubectl get deployments -A"

          • The K3s deployment is complete when elements of all the deployments (coredns, local-path-provisioner, metrics-server, and traefik) show at least "1" as "AVAILABLE"

          • Use Ctrl+c to exit the watch loop after all deployment pods are running

            By default, the K3s server nodes are available to run non-control-plane workloads. In this case, the K3s default behavior is perfect for the SUSE Rancher server cluster as it does not require additional agent (aka worker) nodes to maintain a highly available SUSE Rancher server application.

            Note

            This can be changed to the normal Kubernetes default by adding a taint to each server node. See the official Kubernetes documentation for more information on how to do that.

        • (Optional) In cases where agent nodes are desired, execute the following sets of commands, using the same "K3s_VERSION", "FIRST_SERVER_IP", and "NODE_TOKEN" variable settings as above, on each of the agent nodes to add it to the K3s cluster:

          curl -sfL https://get.k3s.io | \
          	INSTALL_K3S_VERSION=${K3s_VERSION} \
          	K3S_URL=https://${FIRST_SERVER_IP}:6443 \
          	K3S_TOKEN=${NODE_TOKEN} \
          	K3S_KUBECONFIG_MODE="644" \
          	sh -
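
The join steps above all derive the same registration endpoint from the first server; a small helper (the IP address and token below are illustrative placeholders) makes that construction explicit:

```shell
# Illustrative placeholders; on a real cluster, take the IP from the first
# server and the token from its /var/lib/rancher/k3s/server/node-token file
FIRST_SERVER_IP="10.0.0.11"
NODE_TOKEN="<contents of node-token>"

# Both additional servers and agents register against the first server's
# supervisor port, 6443:
k3s_join_url() {
  echo "https://${1}:6443"
}

K3S_URL="$(k3s_join_url "${FIRST_SERVER_IP}")"
echo "${K3S_URL}"
```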

5.5 SUSE Rancher

Preparation(s)
  1. For the respective node’s firewall service, either

    • enable and configure the necessary inbound ports or

    • stop and completely disable the firewall service.

  2. Determine the desired SSL configuration for TLS termination

    • Rancher-generated TLS certificate - the easiest way of installing SUSE Rancher, using self-signed certificates

    • Let’s Encrypt

    • Bring your own certificate

  3. Obtain a Helm binary matching the respective Kubernetes version for this SUSE Rancher implementation.

    Note

    Set the kubeconfig for kubectl to the K3s-generated file, /etc/rancher/k3s/k3s.yaml, so that it can also be leveraged by the helm command.
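
For example, the kubeconfig produced by K3s can be exported for both kubectl and helm in one step (path per the K3s defaults; read access was already granted by the write-kubeconfig-mode install option above):

```shell
# K3s writes its kubeconfig here by default
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

# With KUBECONFIG exported, both tools address the K3s cluster:
#   kubectl get nodes
#   helm version
echo "${KUBECONFIG}"
```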

Deployment Process

While logged in to the node, as root or with sudo privileges, install SUSE Rancher:

  1. Install cert-manager

    • Set the following variable with the desired version of cert-manager

      CERT_MANAGER_VERSION=""
      Note

      At this time, the most current, supported version of cert-manager is v1.5.1

    • Create the cert-manager CRDs and apply the Helm Chart resource manifest

      kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/${CERT_MANAGER_VERSION}/cert-manager.crds.yaml
      
      # Add the Jetstack Helm repository
      helm repo add jetstack https://charts.jetstack.io
      
      # Update your local Helm chart repository cache
      helm repo update
      
      # Install the cert-manager Helm chart
      helm install cert-manager jetstack/cert-manager \
        --namespace cert-manager \
        --create-namespace \
        --version ${CERT_MANAGER_VERSION}
      • Check the progress of the installation, looking for all pods to be in running status:

        kubectl get pods --namespace cert-manager
  2. Add the SUSE Rancher helm chart repository:

    helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
  3. Create a namespace for SUSE Rancher

    kubectl create namespace cattle-system
  4. Prepare to use the Helm Chart for SUSE Rancher:

    • Set the following variable to the host name of the SUSE Rancher server instance

      HOSTNAME=""
      Note

      This host name should be resolvable to an IP address of the K3s host, or a load balancer/proxy server that supports this installation of SUSE Rancher.

    • Set the following variable to the number of deployed K3s nodes planned to host the SUSE Rancher service

      REPLICAS=""
    • Set the following variable to the desired version of SUSE Rancher server instance

      RANCHER_VERSION=""
    • Install the SUSE Rancher Helm Chart

      helm install rancher rancher-stable/rancher \
        --namespace cattle-system \
        --set hostname=${HOSTNAME} \
        --set replicas=${REPLICAS} \
        --version=${RANCHER_VERSION}
      • Monitor the progress of the installation:

        kubectl -n cattle-system rollout status deploy/rancher
  5. (Optional) Create an SSH tunnel to access SUSE Rancher:

    Note

    This optional step is useful in cases where NAT routers and/or firewalls prevent the client Web browser from reaching the exposed SUSE Rancher server IP address and/or port. This step requires that a Linux host is accessible through SSH from the client system and that the Linux host can reach the exposed SUSE Rancher service. The SUSE Rancher host name should be resolvable to the appropriate IP address by the local workstation.

    • Create an SSH tunnel through the Linux host to the IP address of the SUSE Rancher server on the NodePort, as noted in Step 3:

      ssh -N -D 8080 user@Linux-host
    • On the local workstation Web browser, change the SOCKS Host settings to "127.0.0.1" and port "8080".

      Note

      This will route all traffic from this Web browser through the remote Linux host. Be sure to close the tunnel and revert the SOCKS Host settings when you are done.
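
Where a SOCKS proxy is not desirable, a plain local port forward through the same intermediate Linux host is a possible alternative (host names below are placeholders); the local browser would then target https://localhost:8443, which requires the SUSE Rancher host name to also resolve locally, for example via an /etc/hosts entry:

```shell
# Placeholder names; substitute the real Rancher host and SSH login
RANCHER_HOST="rancher.example.com"
SSH_LOGIN="user@linux-host"

# Forward local port 8443 through the Linux host to the Rancher service:
SSH_CMD="ssh -N -L 8443:${RANCHER_HOST}:443 ${SSH_LOGIN}"
echo "${SSH_CMD}"
```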

  6. Connect to the SUSE Rancher Web UI:

    • On a client system, use a Web browser to connect to the SUSE Rancher service, via HTTPS.

    • Provide a new Admin password.

      Important

      On the second configuration page, ensure the "Rancher Server URL" is set to the host name specified when installing the SUSE Rancher Helm Chart and the port is 443.

Deployment Consideration(s)

To further optimize deployment factors, leverage the following practices:

  • Availability

    • In instances where a load balancer is used to access a K3s cluster, deploying two additional K3s cluster nodes, for a total of three, will automatically make SUSE Rancher highly available.

  • Security

    • The basic deployment steps described above are for deploying SUSE Rancher with automatically generated, self-signed security certificates. Other options are to have SUSE Rancher create public certificates via Let’s Encrypt associated with a publicly resolvable host name for the SUSE Rancher server, or to provide preconfigured, private certificates.

  • Integrity

    • This deployment of SUSE Rancher uses the K3s etcd key/value store to persist its data and configuration, which offers several advantages. With a multi-node cluster and this resiliency through replication, having to provide highly-available storage is not needed. In addition, backing up the K3s etcd store protects the cluster and the installation of SUSE Rancher and permits restoration of a given state.
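
In line with the integrity point above, the embedded etcd datastore can be snapshotted on demand with the k3s binary (subcommand flags vary across K3s releases; consult the backup and restore documentation for your version). The sketch only prints the command for review before running it as root on a server node:

```shell
# On-demand snapshot of the embedded etcd datastore; by default, snapshots
# are written under /var/lib/rancher/k3s/server/db/snapshots
SNAPSHOT_CMD="k3s etcd-snapshot"
echo "${SNAPSHOT_CMD}"
```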

After this successful deployment of the SUSE Rancher solution, review the product documentation for details on how downstream Kubernetes clusters can be:

  • deployed (refer to sub-section "Setting up Kubernetes Clusters in Rancher") or

  • imported (refer to sub-section "Importing Existing Clusters"), then

  • managed (refer to sub-section "Cluster Administration") and

  • accessed (refer to sub-section "Cluster Access") to address workload orchestration, security maintenance, and the many other functions that are readily available.

6 Summary

Using components and offerings from SUSE and the Rancher portfolio, plus Cisco UCS Rack Servers, streamlines the ability to quickly and effectively engage in a digital transformation, taking advantage of cloud-native resources and disciplines. Such technology approaches let you transform infrastructure into a durable, reliable, enterprise-grade environment.

Simplify

Simplify and optimize your existing IT environments

  • Using SUSE Rancher enables you to simplify Kubernetes cluster deployment and management of the infrastructure components.

Modernize

Bring applications and data into modern computing

  • With SUSE Rancher, the digital transformation to containerized applications can be extended, in a distributed computing context, to benefit from the ability both to manage many target clusters, for each of the respective user bases, and to simplify the actual workload deployments.

Accelerate

Accelerate business transformation through the power of open source software

  • Given the open source nature of SUSE Rancher and the underlying software components, you can simplify management and achieve significant IT savings as you scale orchestrated microservice deployments wherever needed and for whatever use cases arise, in an agile and innovative way.

7 References

White Papers

8 Glossary

  • Document Scope

    Reference Configuration

    A guide with the basic steps to deploy the layered stack of components from both the SUSE and partner portfolios. This is considered a fundamental basis to demonstrate a specific, tested configuration of components.

    Reference Architectures⁠[7]

    A guide with the general steps to deploy and validate the structured solution components from both the SUSE and partner portfolios. This provides a shareable template of consistency for consumers to leverage for similar production ready solutions, including design considerations, implementation suggestions and best practices.

    Best Practice

    Information that can overlap both the SUSE and partner space. It can be provided either as a stand-alone guide offering reliable technical information not covered in other product documentation, based on real-life installation and implementation experiences from subject matter experts, or as complementary, embedded sections within any of the above documentation types, describing considerations and possible steps forward.

  • Factor(s)

    Automation⁠[8]

    Infrastructure automation enables speed through faster execution when configuring the infrastructure and aims at providing visibility to help other teams across the enterprise work quickly and more efficiently. Automation removes the risk associated with human error, like manual misconfiguration; removing this can decrease downtime and increase reliability. These outcomes and attributes help the enterprise move toward implementing a culture of DevOps, the combined working of development and operations.

    Availability⁠[9]

    The probability that an item operates satisfactorily, without failure or downtime, under stated conditions as a function of its reliability, redundancy and maintainability attributes. Some major objectives for achieving a desired service level are:

    • Preventing or reducing the likelihood and frequency of failures via design decisions within the allowed cost of ownership

    • Correcting or coping with possible component failures via resiliency, automated failover and disaster-recovery processes

    • Estimating and analyzing current conditions to prevent unexpected failures via predictive maintenance

    Integrity⁠[10]

    Integrity is the maintenance of, and the assurance of, the accuracy and consistency of a specific element over its entire lifecycle. Both physical and logical aspects must be managed to ensure stability, performance, re-usability and maintainability.

    Security⁠[11]

    Security is about ensuring freedom from or resilience against potential harm, including protection from destructive or hostile forces. To minimize risks, one must manage governance to avoid tampering, maintain access controls to prevent unauthorized usage and integrate layers of defense, reporting and recovery tactics.

  • Deployment Flavor(s)

    Proof-of-Concept⁠[12]

    A partial or nearly complete prototype constructed to demonstrate functionality and feasibility for verifying specific aspects or concepts under consideration. This is often a starting point when evaluating a new, transitional technology. Sometimes it starts as a Minimum Viable Product (MVP⁠[13]) that has just enough features to satisfy an initial set of requests. After such insights and feedback are obtained and potentially addressed, redeployments may be used to iteratively branch into other realms or to incorporate other known working functionality.

    Production

    A deployed environment that target customers or users can interact with and rely upon to meet their needs, plus be operationally sustainable in terms of resource usage and economic constraints.

    Scaling

    The flexibility of a system environment to either vertically scale-up, horizontally scale-out or conversely scale-down by adding or subtracting resources as needed. Attributes like capacity and performance are often the primary requirements to address, while still maintaining functional consistency and reliability.

9 Appendix

The following sections provide a bill of materials listing for the respective component layer(s) of the described deployment.

9.1 Compute platform bill of materials

Sample set of computing platform models, components and resources.

Role             | Qty | SKU              | Component                                            | Notes
Compute Platform | 1-3 | UCSC-C240-M5SD   | Cisco UCS C240 SD M5                                 | items below listed per node
                 | 2   | UCS-CPU-I6248    | Intel Xeon-Gold 6248 (2.5GHz/20-core/150W) Processor |
                 | 8   | UCS-MR-X32G2RT-H | 32 GB DDR4-2933-MHz RDIMM/2Rx4                       |
                 | 1   | UCSC-RAID-M5     | Cisco 12G Modular RAID controller with 1GB cache     |
                 | 4   | UCS-HD12TB10K12N | 1.2 TB 12G SAS 10K RPM SFF HDD                       |

9.2 Software bill of materials

Sample set of software, support and services.

Role                    | Qty | SKU        | Component                                                           | Configuration
Operating System        | 1-3 | 874-006875 | SUSE Linux Enterprise Server, x86_64, Priority Subscription, 1 Year | per node (up to 2 sockets, stackable) or 2 VMs
Kubernetes Management   | 1   | R-0001-PS1 | SUSE Rancher, x86-64, Priority Subscription, 1 Year                 | per deployed instance
Rancher Management      | 2   | R-0004-PS1 | Rancher 10 Nodes, x86-64 or aarch64, Priority Subscription, 1 Year  | requires priority server subscription
Consulting and Training | 1   | R-0001-QSO | Rancher Quick Start, Go Live Services                               |

Note

For the software components, other support term durations are also available.

9.3 Documentation configuration / attributes

This document was built using the following AsciiDoc and DocBook Authoring and Publishing Suite (DAPS) attributes:

Appendix=1 ArchOv=1 Automation=1 Availability=1 BP=1 BPBV=1 CompMod=1 DepConsiderations=1 Deployment=1 FCTR=1 FLVR=1 GFDL=1 Glossary=1 HWComp=1 HWDepCfg=1 IHV-Cisco-C240-SD=1 IHV-Cisco=1 Integrity=1 LN=1 PoC=1 Production=1 RA=1 RC=1 References=1 Requirements=1 SWComp=1 SWDepCfg=1 Scaling=1 Security=1 docdate=2022-04-12 env-daps=1 focusRancher=1 iIHV=1 iK3s=1 iRKE1=1 iRKE2=1 iRMT=1 iRancher=1 iSLEMicro=1 iSLES=1 iSUMa=1 layerK3s=1 layerSLES=1

11 GNU Free Documentation License

Copyright © 2000, 2001, 2002 Free Software Foundation, Inc. 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.

0. PREAMBLE

The purpose of this License is to make a manual, textbook, or other functional and useful document "free" in the sense of freedom: to assure everyone the effective freedom to copy and redistribute it, with or without modifying it, either commercially or noncommercially. Secondarily, this License preserves for the author and publisher a way to get credit for their work, while not being considered responsible for modifications made by others.

This License is a kind of "copyleft", which means that derivative works of the document must themselves be free in the same sense. It complements the GNU General Public License, which is a copyleft license designed for free software.

We have designed this License in order to use it for manuals for free software, because free software needs free documentation: a free program should come with manuals providing the same freedoms that the software does. But this License is not limited to software manuals; it can be used for any textual work, regardless of subject matter or whether it is published as a printed book. We recommend this License principally for works whose purpose is instruction or reference.

1. APPLICABILITY AND DEFINITIONS

This License applies to any manual or other work, in any medium, that contains a notice placed by the copyright holder saying it can be distributed under the terms of this License. Such a notice grants a world-wide, royalty-free license, unlimited in duration, to use that work under the conditions stated herein. The "Document", below, refers to any such manual or work. Any member of the public is a licensee, and is addressed as "you". You accept the license if you copy, modify or distribute the work in a way requiring permission under copyright law.

A "Modified Version" of the Document means any work containing the Document or a portion of it, either copied verbatim, or with modifications and/or translated into another language.

A "Secondary Section" is a named appendix or a front-matter section of the Document that deals exclusively with the relationship of the publishers or authors of the Document to the Document’s overall subject (or to related matters) and contains nothing that could fall directly within that overall subject. (Thus, if the Document is in part a textbook of mathematics, a Secondary Section may not explain any mathematics.) The relationship could be a matter of historical connection with the subject or with related matters, or of legal, commercial, philosophical, ethical or political position regarding them.

The "Invariant Sections" are certain Secondary Sections whose titles are designated, as being those of Invariant Sections, in the notice that says that the Document is released under this License. If a section does not fit the above definition of Secondary then it is not allowed to be designated as Invariant. The Document may contain zero Invariant Sections. If the Document does not identify any Invariant Sections then there are none.

The "Cover Texts" are certain short passages of text that are listed, as Front-Cover Texts or Back-Cover Texts, in the notice that says that the Document is released under this License. A Front-Cover Text may be at most 5 words, and a Back-Cover Text may be at most 25 words.

A "Transparent" copy of the Document means a machine-readable copy, represented in a format whose specification is available to the general public, that is suitable for revising the document straightforwardly with generic text editors or (for images composed of pixels) generic paint programs or (for drawings) some widely available drawing editor, and that is suitable for input to text formatters or for automatic translation to a variety of formats suitable for input to text formatters. A copy made in an otherwise Transparent file format whose markup, or absence of markup, has been arranged to thwart or discourage subsequent modification by readers is not Transparent. An image format is not Transparent if used for any substantial amount of text. A copy that is not "Transparent" is called "Opaque".

Examples of suitable formats for Transparent copies include plain ASCII without markup, Texinfo input format, LaTeX input format, SGML or XML using a publicly available DTD, and standard-conforming simple HTML, PostScript or PDF designed for human modification. Examples of transparent image formats include PNG, XCF and JPG. Opaque formats include proprietary formats that can be read and edited only by proprietary word processors, SGML or XML for which the DTD and/or processing tools are not generally available, and the machine-generated HTML, PostScript or PDF produced by some word processors for output purposes only.

The "Title Page" means, for a printed book, the title page itself, plus such following pages as are needed to hold, legibly, the material this License requires to appear in the title page. For works in formats which do not have any title page as such, "Title Page" means the text near the most prominent appearance of the work’s title, preceding the beginning of the body of the text.

A section "Entitled XYZ" means a named subunit of the Document whose title either is precisely XYZ or contains XYZ in parentheses following text that translates XYZ in another language. (Here XYZ stands for a specific section name mentioned below, such as "Acknowledgements", "Dedications", "Endorsements", or "History".) To "Preserve the Title" of such a section when you modify the Document means that it remains a section "Entitled XYZ" according to this definition.

The Document may include Warranty Disclaimers next to the notice which states that this License applies to the Document. These Warranty Disclaimers are considered to be included by reference in this License, but only as regards disclaiming warranties: any other implication that these Warranty Disclaimers may have is void and has no effect on the meaning of this License.

2. VERBATIM COPYING

You may copy and distribute the Document in any medium, either commercially or noncommercially, provided that this License, the copyright notices, and the license notice saying this License applies to the Document are reproduced in all copies, and that you add no other conditions whatsoever to those of this License. You may not use technical measures to obstruct or control the reading or further copying of the copies you make or distribute. However, you may accept compensation in exchange for copies. If you distribute a large enough number of copies you must also follow the conditions in section 3.

You may also lend copies, under the same conditions stated above, and you may publicly display copies.

3. COPYING IN QUANTITY

If you publish printed copies (or copies in media that commonly have printed covers) of the Document, numbering more than 100, and the Document’s license notice requires Cover Texts, you must enclose the copies in covers that carry, clearly and legibly, all these Cover Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on the back cover. Both covers must also clearly and legibly identify you as the publisher of these copies. The front cover must present the full title with all words of the title equally prominent and visible. You may add other material on the covers in addition. Copying with changes limited to the covers, as long as they preserve the title of the Document and satisfy these conditions, can be treated as verbatim copying in other respects.

If the required texts for either cover are too voluminous to fit legibly, you should put the first ones listed (as many as fit reasonably) on the actual cover, and continue the rest onto adjacent pages.

If you publish or distribute Opaque copies of the Document numbering more than 100, you must either include a machine-readable Transparent copy along with each Opaque copy, or state in or with each Opaque copy a computer-network location from which the general network-using public has access to download using public-standard network protocols a complete Transparent copy of the Document, free of added material. If you use the latter option, you must take reasonably prudent steps, when you begin distribution of Opaque copies in quantity, to ensure that this Transparent copy will remain thus accessible at the stated location until at least one year after the last time you distribute an Opaque copy (directly or through your agents or retailers) of that edition to the public.

It is requested, but not required, that you contact the authors of the Document well before redistributing any large number of copies, to give them a chance to provide you with an updated version of the Document.

4. MODIFICATIONS

You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and 3 above, provided that you release the Modified Version under precisely this License, with the Modified Version filling the role of the Document, thus licensing distribution and modification of the Modified Version to whoever possesses a copy of it. In addition, you must do these things in the Modified Version:

  1. Use in the Title Page (and on the covers, if any) a title distinct from that of the Document, and from those of previous versions (which should, if there were any, be listed in the History section of the Document). You may use the same title as a previous version if the original publisher of that version gives permission.

  2. List on the Title Page, as authors, one or more persons or entities responsible for authorship of the modifications in the Modified Version, together with at least five of the principal authors of the Document (all of its principal authors, if it has fewer than five), unless they release you from this requirement.

  3. State on the Title page the name of the publisher of the Modified Version, as the publisher.

  4. Preserve all the copyright notices of the Document.

  5. Add an appropriate copyright notice for your modifications adjacent to the other copyright notices.

  6. Include, immediately after the copyright notices, a license notice giving the public permission to use the Modified Version under the terms of this License, in the form shown in the Addendum below.

  7. Preserve in that license notice the full lists of Invariant Sections and required Cover Texts given in the Document’s license notice.

  8. Include an unaltered copy of this License.

  9. Preserve the section Entitled "History", Preserve its Title, and add to it an item stating at least the title, year, new authors, and publisher of the Modified Version as given on the Title Page. If there is no section Entitled "History" in the Document, create one stating the title, year, authors, and publisher of the Document as given on its Title Page, then add an item describing the Modified Version as stated in the previous sentence.

  10. Preserve the network location, if any, given in the Document for public access to a Transparent copy of the Document, and likewise the network locations given in the Document for previous versions it was based on. These may be placed in the "History" section. You may omit a network location for a work that was published at least four years before the Document itself, or if the original publisher of the version it refers to gives permission.

  11. For any section Entitled "Acknowledgements" or "Dedications", Preserve the Title of the section, and preserve in the section all the substance and tone of each of the contributor acknowledgements and/or dedications given therein.

  12. Preserve all the Invariant Sections of the Document, unaltered in their text and in their titles. Section numbers or the equivalent are not considered part of the section titles.

  13. Delete any section Entitled "Endorsements". Such a section may not be included in the Modified Version.

  14. Do not retitle any existing section to be Entitled "Endorsements" or to conflict in title with any Invariant Section.

  15. Preserve any Warranty Disclaimers.

If the Modified Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the Document, you may at your option designate some or all of these sections as invariant. To do this, add their titles to the list of Invariant Sections in the Modified Version’s license notice. These titles must be distinct from any other section titles.

You may add a section Entitled "Endorsements", provided it contains nothing but endorsements of your Modified Version by various parties—​for example, statements of peer review or that the text has been approved by an organization as the authoritative definition of a standard.

You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified Version. Only one passage of Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by) any one entity. If the Document already includes a cover text for the same cover, previously added by you or by arrangement made by the same entity you are acting on behalf of, you may not add another; but you may replace the old one, on explicit permission from the previous publisher that added the old one.

The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorsement of any Modified Version.

5. COMBINING DOCUMENTS

You may combine the Document with other documents released under this License, under the terms defined in section 4 above for modified versions, provided that you include in the combination all of the Invariant Sections of all of the original documents, unmodified, and list them all as Invariant Sections of your combined work in its license notice, and that you preserve all their Warranty Disclaimers.

The combined work need only contain one copy of this License, and multiple identical Invariant Sections may be replaced with a single copy. If there are multiple Invariant Sections with the same name but different contents, make the title of each such section unique by adding at the end of it, in parentheses, the name of the original author or publisher of that section if known, or else a unique number. Make the same adjustment to the section titles in the list of Invariant Sections in the license notice of the combined work.

In the combination, you must combine any sections Entitled "History" in the various original documents, forming one section Entitled "History"; likewise combine any sections Entitled "Acknowledgements", and any sections Entitled "Dedications". You must delete all sections Entitled "Endorsements".

6. COLLECTIONS OF DOCUMENTS

You may make a collection consisting of the Document and other documents released under this License, and replace the individual copies of this License in the various documents with a single copy that is included in the collection, provided that you follow the rules of this License for verbatim copying of each of the documents in all other respects.

You may extract a single document from such a collection, and distribute it individually under this License, provided you insert a copy of this License into the extracted document, and follow this License in all other respects regarding verbatim copying of that document.

7. AGGREGATION WITH INDEPENDENT WORKS

A compilation of the Document or its derivatives with other separate and independent documents or works, in or on a volume of a storage or distribution medium, is called an "aggregate" if the copyright resulting from the compilation is not used to limit the legal rights of the compilation’s users beyond what the individual works permit. When the Document is included in an aggregate, this License does not apply to the other works in the aggregate which are not themselves derivative works of the Document.

If the Cover Text requirement of section 3 is applicable to these copies of the Document, then if the Document is less than one half of the entire aggregate, the Document’s Cover Texts may be placed on covers that bracket the Document within the aggregate, or the electronic equivalent of covers if the Document is in electronic form. Otherwise they must appear on printed covers that bracket the whole aggregate.

8. TRANSLATION

Translation is considered a kind of modification, so you may distribute translations of the Document under the terms of section 4. Replacing Invariant Sections with translations requires special permission from their copyright holders, but you may include translations of some or all Invariant Sections in addition to the original versions of these Invariant Sections. You may include a translation of this License, and all the license notices in the Document, and any Warranty Disclaimers, provided that you also include the original English version of this License and the original versions of those notices and disclaimers. In case of a disagreement between the translation and the original version of this License or a notice or disclaimer, the original version will prevail.

If a section in the Document is Entitled "Acknowledgements", "Dedications", or "History", the requirement (section 4) to Preserve its Title (section 1) will typically require changing the actual title.

9. TERMINATION

You may not copy, modify, sublicense, or distribute the Document except as expressly provided for under this License. Any other attempt to copy, modify, sublicense or distribute the Document is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance.

10. FUTURE REVISIONS OF THIS LICENSE

The Free Software Foundation may publish new, revised versions of the GNU Free Documentation License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. See http://www.gnu.org/copyleft/.

Each version of the License is given a distinguishing version number. If the Document specifies that a particular numbered version of this License "or any later version" applies to it, you have the option of following the terms and conditions either of that specified version or of any later version that has been published (not as a draft) by the Free Software Foundation. If the Document does not specify a version number of this License, you may choose any version ever published (not as a draft) by the Free Software Foundation.

ADDENDUM: How to use this License for your documents

Copyright (c) YEAR YOUR NAME.
   Permission is granted to copy, distribute and/or modify this document
   under the terms of the GNU Free Documentation License, Version 1.2
   or any later version published by the Free Software Foundation;
   with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts.
   A copy of the license is included in the section entitled “GNU
   Free Documentation License”.

If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts, replace the “with…Texts.” line with this:

with the Invariant Sections being LIST THEIR TITLES, with the
   Front-Cover Texts being LIST, and with the Back-Cover Texts being LIST.

If you have Invariant Sections without Cover Texts, or some other combination of the three, merge those two alternatives to suit the situation.

If your document contains nontrivial examples of program code, we recommend releasing these examples in parallel under your choice of free software license, such as the GNU General Public License, to permit their use in free software.