Creating an RKE2 Kubernetes Cluster

You can now provision RKE2 Kubernetes clusters on top of the Harvester cluster in Rancher using the built-in Harvester node driver.

(Screenshot: rke2-cluster)
  • A VLAN network is required for the Harvester node driver.

  • The Harvester node driver only supports cloud images.

  • For the port requirements of guest clusters deployed within Harvester, refer to the port requirements documentation.

  • For the RKE2 with Harvester cloud provider support matrix, refer to the Harvester website.

Backward Compatibility Notice

There is a known backward compatibility issue when using Harvester cloud provider v0.2.2 or later. If your Harvester version is earlier than v1.2.0 and you intend to use newer RKE2 versions (v1.26.6+rke2r1, v1.25.11+rke2r1, v1.24.15+rke2r1, or later), you must upgrade your Harvester cluster to v1.2.0 or later before upgrading the guest Kubernetes cluster or the Harvester cloud provider.

For a detailed support matrix, please refer to the Harvester CCM & CSI Driver with RKE2 Releases section of the official website.

Create your cloud credentials

  1. Click ☰ > Cluster Management.

  2. Click Cloud Credentials.

  3. Click Create.

  4. Click Harvester.

  5. Enter a name for your cloud credential.

  6. Select "Imported Harvester Cluster".

  7. Click Create.

(Screenshot: create-harvester-cloud-credentials)

Create an RKE2 Kubernetes cluster

You can create an RKE2 Kubernetes cluster from the Cluster Management page using the Harvester node driver.

  1. Select the Clusters menu.

  2. Click Create.

  3. Toggle the switch to RKE2/K3s.

  4. Select the Harvester node driver.

  5. Select a Cloud Credential.

  6. Enter Cluster Name (required).

  7. Enter Namespace (required).

  8. Enter Image (required).

  9. Enter Network Name (required).

  10. Enter SSH User (required).

  11. (Optional) Go to Show Advanced > User Data to install the required packages on the VM.

    #cloud-config
    packages:
      - iptables

    Calico and Canal require the iptables or xtables-nft package to be installed on the node. For more details, refer to the RKE2 known issues.

  12. Click Create.

    (Screenshots: create-rke2-harvester-cluster-1, create-rke2-harvester-cluster-2, create-rke2-harvester-cluster-3)

    • RKE2 v1.21.5+rke2r2 or above provides a built-in Harvester Cloud Provider and Guest CSI driver integration.

    • Only imported Harvester clusters are supported by the Harvester node driver.

Add node affinity

The Harvester node driver supports scheduling a group of machines to particular nodes through node affinity rules, which can improve availability and resource utilization.

Node affinity can be added to the machine pools during the cluster creation:

  1. Click the Show Advanced button, then click Add Node Selector.

  2. Set priority to Required if you wish the scheduler to schedule the machines only when the rules are met.

  3. Click Add Rule to specify the node affinity rules, e.g., for the topology spread constraints use case, you can add the region and zone labels as follows:

    key: topology.kubernetes.io/region
    operator: in list
    values: us-east-1
    ---
    key: topology.kubernetes.io/zone
    operator: in list
    values: us-east-1a
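For reference, the two rules above map roughly onto standard Kubernetes nodeAffinity syntax. The following is an illustrative sketch only; the node driver generates the actual machine spec from the UI fields:

```yaml
affinity:
  nodeAffinity:
    # "Priority: Required" in the UI corresponds to a hard requirement
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.kubernetes.io/region
              operator: In          # "in list" in the UI
              values:
                - us-east-1
            - key: topology.kubernetes.io/zone
              operator: In
              values:
                - us-east-1a
```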

Add workload affinity

The workload affinity rules allow you to constrain which nodes your machines can be scheduled on based on the labels of workloads (VMs and Pods) already running on these nodes, instead of the node labels.

Workload affinity rules can be added to the machine pools during the cluster creation:

  1. Select Show Advanced and choose Add Workload Selector.

  2. Select the Type: Affinity or Anti-Affinity.

  3. Select the Priority. Preferred means the rule is optional; Required means it is mandatory.

  4. Select the namespaces for the target workloads.

  5. Select Add Rule to specify the workload affinity rules.

  6. Set Topology Key to specify the label key that divides Harvester hosts into different topologies.
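As an illustration, a Required Anti-Affinity rule with a topology key corresponds roughly to the following Kubernetes podAntiAffinity stanza. This is a sketch only; the workload label, namespace, and topology key below are hypothetical examples:

```yaml
affinity:
  podAntiAffinity:
    # "Priority: Required" in the UI corresponds to a hard requirement
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: database                 # hypothetical label of the target workloads
        namespaces:
          - default                       # namespaces selected in step 4
        topologyKey: kubernetes.io/hostname   # one topology per Harvester host
```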

Update RKE2 Kubernetes cluster

The highlighted fields of the RKE2 machine pool shown below represent the Harvester VM configuration. Any modification to these fields triggers node reprovisioning.

(Screenshot: rke2-harvester-fields)

Using the Harvester node driver in an air-gapped environment

RKE2 provisioning relies on the qemu-guest-agent package to get the IP of the virtual machine.

Calico and Canal require the iptables or xtables-nft package to be installed on the node.

However, it may not be feasible to install packages in an air-gapped environment.

You can address the installation constraints with the following options:

  • Option 1. Use a VM image preconfigured with the required packages (e.g., iptables, qemu-guest-agent).

  • Option 2. Go to Show Advanced > User Data to allow VMs to install the required packages via an HTTP(S) proxy.

Example user data in Harvester node template:

#cloud-config
apt:
  http_proxy: http://192.168.0.1:3128
  https_proxy: http://192.168.0.1:3128
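The proxy settings can also be combined with a package list so that the VM installs everything RKE2 provisioning needs through the proxy. The proxy address below is an example; replace it with one reachable from your environment:

```yaml
#cloud-config
apt:
  http_proxy: http://192.168.0.1:3128
  https_proxy: http://192.168.0.1:3128
packages:
  - qemu-guest-agent   # used by RKE2 provisioning to report the VM IP
  - iptables           # required by Calico and Canal
```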