# Installing Cluster API Operator

Installing Cluster API Operator by following this page (without it being a Helm dependency to SUSE® Rancher Prime Cluster API) is not a recommended installation method and is intended only for local development purposes.

This section describes how to install Cluster API Operator in the Kubernetes cluster.
## Installing Cluster API (CAPI) and providers

CAPI and the desired CAPI providers can be installed using the Helm-based installation for Cluster API Operator, or as a Helm dependency of the SUSE® Rancher Prime Cluster API.
## Install manually with Helm (alternative)

To install Cluster API Operator with version 1.9.4 of the CAPI + Docker provider using Helm, follow these steps:

- Add the Helm repositories for the Cluster API Operator and Cert-Manager:

  ```bash
  helm repo add capi-operator https://kubernetes-sigs.github.io/cluster-api-operator
  helm repo add jetstack https://charts.jetstack.io
  ```

- Update the Helm repositories:

  ```bash
  helm repo update
  ```

- Install Cert-Manager:

  ```bash
  helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --set installCRDs=true
  ```

- Install the Cluster API Operator:

  ```bash
  helm install capi-operator capi-operator/cluster-api-operator \
    --create-namespace -n capi-operator-system \
    --set infrastructure=docker:v1.9.4 \
    --set core=cluster-api:v1.9.4 \
    --timeout 90s --wait
  # Core Cluster API with kubeadm bootstrap and control plane providers will also be installed
  ```
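After the install completes, a quick sanity check is to confirm the operator pod is running and the core provider reconciled. This is a sketch assuming the default release namespace used above; the `CoreProvider` kind comes from the operator's CRDs, and status columns may vary by operator version:

```bash
# Operator controller should be Running in the release namespace
kubectl get pods -n capi-operator-system

# Core provider object created from the chart values; its status should
# eventually report the installed version
kubectl get coreprovider -A
```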
To provide additional environment variables, enable feature gates, or supply cloud credentials, similar to `clusterctl` common provider secrets, the name and namespace of a variables secret can be specified for the Cluster API Operator via the chart's `configSecret` values, as shown below.
```bash
helm install capi-operator capi-operator/cluster-api-operator \
  --create-namespace -n capi-operator-system \
  --set infrastructure=docker:v1.9.4 \
  --set core=cluster-api:v1.9.4 \
  --set configSecret.name=<secret_name> \
  --set configSecret.namespace=<secret_namespace> \
  --timeout 90s \
  --wait
```
Example secret data:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: variables
  namespace: default
type: Opaque
stringData:
  CLUSTER_TOPOLOGY: "true"
  EXP_CLUSTER_RESOURCE_SET: "true"
```
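If the secret is saved to a file, it can be created before installing the chart; the filename here is illustrative:

```bash
kubectl apply -f variables-secret.yaml
```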
To select more than one desired provider to be installed together with the Cluster API Operator, the `infrastructure` flag can be specified with multiple provider names separated by a semicolon. For example:

```bash
helm install ... --set infrastructure="docker:v1.9.4;aws:v2.7.1"
```
The `infrastructure` flag is set to `docker:v1.9.4;aws:v2.7.1`, representing the desired provider names. This means that the Cluster API Operator will install and manage multiple providers, Docker and AWS, with versions v1.9.4 and v2.7.1 respectively.
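Each selected infrastructure provider is represented by an `InfrastructureProvider` resource managed by the operator. Assuming the operator CRDs are installed, the providers can be listed with:

```bash
kubectl get infrastructureprovider -A
```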
The cluster is now ready to install SUSE® Rancher Prime Cluster API. The default behavior when installing the chart is to install Cluster API Operator as a Helm dependency. Since we decided to install it manually before installing SUSE® Rancher Prime Cluster API, the `cluster-api-operator.enabled` feature must be explicitly disabled, as it would otherwise conflict with the existing installation. You can refer to Install SUSE® Rancher Prime Cluster API without Cluster API Operator to see the next steps.

For more fine-grained control of the providers and other components installed with CAPI, see the Add the infrastructure provider section.
## Install SUSE® Rancher Prime Cluster API without Cluster API Operator as a Helm dependency

This option is only suitable for development purposes and is not recommended for production environments.
The `rancher-turtles` chart is available at https://rancher.github.io/turtles, and this Helm repository must be added before proceeding with the installation:

```bash
helm repo add turtles https://rancher.github.io/turtles
helm repo update
```
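Optionally, you can confirm the chart is visible in the newly added repository (the listed version will vary):

```bash
helm search repo turtles/rancher-turtles
```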
It can then be installed into the `rancher-turtles-system` namespace with:
```bash
helm install rancher-turtles turtles/rancher-turtles --version v0.16.0 \
  -n rancher-turtles-system \
  --set cluster-api-operator.enabled=false \
  --set cluster-api-operator.cluster-api.enabled=false \
  --create-namespace --wait \
  --dependency-update
```
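A quick check that the release deployed successfully (pod names vary between chart versions):

```bash
kubectl get pods -n rancher-turtles-system
```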
As you can see, we are telling Helm to skip installing `cluster-api-operator` as a dependency.

In this case, you must apply the following resource configurations to the cluster in order to be able to use CAPRKE2:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: rke2-bootstrap-system
---
apiVersion: v1
kind: Namespace
metadata:
  name: rke2-control-plane-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: capi-additional-rbac-roles
  namespace: capi-system
data:
  manifests: |-
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: provisioning-rke-cattle-io
      labels:
        cluster.x-k8s.io/aggregate-to-manager: "true"
    rules:
    - apiGroups: ["rke.cattle.io"]
      resources: ["*"]
      verbs: ["*"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: provisioning-rke-machine-cattle-io
      labels:
        cluster.x-k8s.io/aggregate-to-manager: "true"
    rules:
    - apiGroups: ["rke-machine.cattle.io"]
      resources: ["*"]
      verbs: ["*"]
---
apiVersion: turtles-capi.cattle.io/v1alpha1
kind: CAPIProvider
metadata:
  name: cluster-api
  namespace: capi-system
spec:
  name: cluster-api
  type: core
  version: v1.9.4
  additionalManifests:
    name: capi-additional-rbac-roles
    namespace: capi-system
  configSecret:
    name: capi-env-variables
---
apiVersion: turtles-capi.cattle.io/v1alpha1
kind: CAPIProvider
metadata:
  name: rke2-bootstrap
  namespace: rke2-bootstrap-system
spec:
  name: rke2
  type: bootstrap
  configSecret:
    name: capi-env-variables
---
apiVersion: turtles-capi.cattle.io/v1alpha1
kind: CAPIProvider
metadata:
  name: rke2-control-plane
  namespace: rke2-control-plane-system
spec:
  name: rke2
  type: controlPlane
  configSecret:
    name: capi-env-variables
```
In the resource manifests above, ensure that the version set for the core CAPI provider matches the version of CAPI supported by Rancher Turtles.
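Note that the `CAPIProvider` resources above reference a `configSecret` named `capi-env-variables` that is not defined on this page. Below is a minimal sketch of such a secret, following the format of the earlier variables secret; the variable values are assumptions and should be adapted to your environment, and since `configSecret` sets no namespace, the operator typically resolves the secret in the provider's own namespace, so it may need to exist in each of the three namespaces:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: capi-env-variables
  namespace: capi-system   # repeat per provider namespace if required
type: Opaque
stringData:
  # Illustrative feature gates; set the variables your providers need
  CLUSTER_TOPOLOGY: "true"
  EXP_CLUSTER_RESOURCE_SET: "true"
```

Once the secret exists, the manifests can be applied and the providers inspected (the filename is illustrative):

```bash
kubectl apply -f caprke2-providers.yaml
kubectl get capiproviders -A
```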