1 Introduction #
On the digital transformation journey to a fully cloud-native landscape, microservices have become the main architectural approach, and Kubernetes has emerged as the dominant technology for orchestrating such containers.[1] With its large community of developers and abundant features and capabilities, Kubernetes has become the de facto standard and is included across most container-as-a-service platforms. With these technologies in place, both development and operations teams can effectively deploy, manage and deliver functionality to their end users in a resilient and agile manner.
1.1 Motivation #
Once on such a digital transformation journey, it is also relevant to focus on areas such as:
- Workload(s)
Determine how to manage and launch internally developed, containerized microservice workloads.
- Kubernetes
As developers and organizations continue their journey from simple, containerized microservices toward having these workloads orchestrated and deployed wherever they are needed, the ability to install, monitor and use such Kubernetes infrastructure becomes a core requirement. Deployments that are Cloud Native Computing Foundation (CNCF[2]) conformant and certified[3] are essential for both development and production workloads.
Addressing common frustrations around installation complexity, Rancher Kubernetes Engine reduces many host dependencies and provides a stable path for deployment, upgrades and rollbacks for core use cases.
- Compute Platform(s)
To optimize availability, performance, scalability and integrity, assess the current system or hosting platforms.
1.2 Scope #
The scope of this document is to provide a general reference implementation of Rancher Kubernetes Engine. It can be applied in a variety of scenarios to create an enterprise Kubernetes cluster deployment anywhere.
1.3 Audience #
This document is intended for IT decision makers, architects, system administrators and technicians who are implementing a flexible, software-defined Kubernetes platform. Readers should be familiar with the traditional IT infrastructure pillars of networking, computing and storage, along with the local use cases for sizing, scaling and limitations within each pillar's environment.