Setting up the Amazon Cloud Provider

Important:

In Kubernetes 1.27 and later, you must use an out-of-tree AWS cloud provider. In-tree cloud providers have been deprecated, and the in-tree Amazon cloud provider has been removed completely; it won’t work after an upgrade to Kubernetes 1.27. The steps listed below are still required to set up an Amazon cloud provider: you can set up an out-of-tree cloud provider after creating an IAM role and configuring the ClusterID.

You can also migrate from an in-tree to an out-of-tree AWS cloud provider on Kubernetes 1.26 and earlier. All existing clusters must migrate prior to upgrading to v1.27 in order to stay functional.

Starting with Kubernetes 1.23, you must deactivate the CSIMigrationAWS feature gate to use the in-tree AWS cloud provider. You can do this by setting feature-gates=CSIMigrationAWS=false as an additional argument for the cluster’s Kubelet, Controller Manager, API Server and Scheduler in the advanced cluster configuration.
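As a sketch of what that advanced configuration might look like on an RKE1 cluster (the service key names below assume the RKE1 cluster.yml schema):

```yaml
# Sketch only: disables CSI migration for the in-tree AWS provider on
# Kubernetes 1.23+. Service keys follow the RKE1 cluster.yml schema.
services:
  kube-api:
    extra_args:
      feature-gates: CSIMigrationAWS=false
  kube-controller:
    extra_args:
      feature-gates: CSIMigrationAWS=false
  scheduler:
    extra_args:
      feature-gates: CSIMigrationAWS=false
  kubelet:
    extra_args:
      feature-gates: CSIMigrationAWS=false
```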

When you use Amazon as a cloud provider, you can leverage the following capabilities:

  • Load Balancers: Launch an AWS Elastic Load Balancer (ELB) when you select Layer-4 Load Balancer in Port Mapping or when you launch a Service with type: LoadBalancer.

  • Persistent Volumes: Use AWS Elastic Block Stores (EBS) for persistent volumes.
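As an illustration of the load balancer capability, a Service of type LoadBalancer such as the following (the names and ports are hypothetical) causes the cloud provider to provision an ELB:

```yaml
# Hypothetical example: exposing workloads labeled app=my-app via an ELB.
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80         # ELB listener port
      targetPort: 8080 # container port
```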

See the cloud-provider-aws README for more information about the Amazon cloud provider.

To set up the Amazon cloud provider:

1. Create an IAM Role and attach to the instances

All nodes added to the cluster must be able to interact with EC2 so that they can create and remove resources. You can enable this interaction by attaching an IAM role to the instance. See the Amazon documentation, Creating an IAM Role, for how to create one. There are two example policies:

  • The first policy is for the nodes with the controlplane role. These nodes must be able to create and remove EC2 resources. The following IAM policy is an example; remove any permissions your use case doesn't need.

  • The second policy is for the nodes with the etcd or worker role. These nodes only need to retrieve information from EC2.

While creating an Amazon EC2 cluster, you must fill in the IAM Instance Profile Name (not ARN) of the created IAM role when creating the Node Template.

While creating a Custom cluster, you must manually attach the IAM role to the instance(s).
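For a Custom cluster, the role can be attached from the EC2 console or, as a sketch, with the AWS CLI; the instance ID and instance profile name below are placeholders:

```shell
# Placeholders: substitute your instance ID and the instance profile
# that wraps the IAM role created above.
aws ec2 associate-iam-instance-profile \
  --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=rancher-controlplane-profile
```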

IAM Policy for nodes with the controlplane role:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:DescribeAutoScalingGroups",
        "autoscaling:DescribeLaunchConfigurations",
        "autoscaling:DescribeTags",
        "ec2:DescribeInstances",
        "ec2:DescribeRegions",
        "ec2:DescribeRouteTables",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeSubnets",
        "ec2:DescribeVolumes",
        "ec2:CreateSecurityGroup",
        "ec2:CreateTags",
        "ec2:CreateVolume",
        "ec2:ModifyInstanceAttribute",
        "ec2:ModifyVolume",
        "ec2:AttachVolume",
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:CreateRoute",
        "ec2:DeleteRoute",
        "ec2:DeleteSecurityGroup",
        "ec2:DeleteVolume",
        "ec2:DetachVolume",
        "ec2:RevokeSecurityGroupIngress",
        "ec2:DescribeVpcs",
        "elasticloadbalancing:AddTags",
        "elasticloadbalancing:AttachLoadBalancerToSubnets",
        "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer",
        "elasticloadbalancing:CreateLoadBalancer",
        "elasticloadbalancing:CreateLoadBalancerPolicy",
        "elasticloadbalancing:CreateLoadBalancerListeners",
        "elasticloadbalancing:ConfigureHealthCheck",
        "elasticloadbalancing:DeleteLoadBalancer",
        "elasticloadbalancing:DeleteLoadBalancerListeners",
        "elasticloadbalancing:DescribeLoadBalancers",
        "elasticloadbalancing:DescribeLoadBalancerAttributes",
        "elasticloadbalancing:DetachLoadBalancerFromSubnets",
        "elasticloadbalancing:DeregisterInstancesFromLoadBalancer",
        "elasticloadbalancing:ModifyLoadBalancerAttributes",
        "elasticloadbalancing:RegisterInstancesWithLoadBalancer",
        "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer",
        "elasticloadbalancing:CreateListener",
        "elasticloadbalancing:CreateTargetGroup",
        "elasticloadbalancing:DeleteListener",
        "elasticloadbalancing:DeleteTargetGroup",
        "elasticloadbalancing:DescribeListeners",
        "elasticloadbalancing:DescribeLoadBalancerPolicies",
        "elasticloadbalancing:DescribeTargetGroups",
        "elasticloadbalancing:DescribeTargetHealth",
        "elasticloadbalancing:ModifyListener",
        "elasticloadbalancing:ModifyTargetGroup",
        "elasticloadbalancing:RegisterTargets",
        "elasticloadbalancing:SetLoadBalancerPoliciesOfListener",
        "iam:CreateServiceLinkedRole",
        "kms:DescribeKey"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}

IAM policy for nodes with the etcd or worker role:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "ec2:DescribeRegions",
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:GetRepositoryPolicy",
        "ecr:DescribeRepositories",
        "ecr:ListImages",
        "ecr:BatchGetImage"
      ],
      "Resource": "*"
    }
  ]
}

2. Configure the ClusterID

The following resources need to be tagged with a ClusterID:

  • Nodes: All hosts added in Rancher.

  • Subnet: The subnet used for your cluster.

  • Security Group: The security group used for your cluster.

Do not tag multiple security groups. Tagging multiple groups generates an error when creating an Elastic Load Balancer (ELB).

When you create an Amazon EC2 Cluster, the ClusterID is automatically configured for the created nodes. Other resources still need to be manually tagged.

Use the following tag:

Key = kubernetes.io/cluster/<cluster-id>
Value = owned

Setting the value of the tag to owned tells the cluster that all resources with this tag are owned and managed by this cluster.
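As a sketch, the subnet and security group can be tagged with the AWS CLI; the cluster ID and resource IDs below are placeholders:

```shell
# Placeholders: substitute your cluster ID, subnet ID, and security group ID.
CLUSTER_ID="c-abc123"
TAG="Key=kubernetes.io/cluster/${CLUSTER_ID},Value=owned"
aws ec2 create-tags \
  --resources subnet-0123456789abcdef0 sg-0123456789abcdef0 \
  --tags "${TAG}"
```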

If you share resources between clusters, you can change the tag to:

Key = kubernetes.io/cluster/<cluster-id>
Value = shared

The string value, <cluster-id>, is the Kubernetes cluster’s ID.

Do not tag a resource with multiple owned or shared tags.

Using Amazon Elastic Container Registry (ECR)

The kubelet can automatically obtain ECR credentials when the IAM profile described in Create an IAM Role and attach to the instances is attached to the instance(s). On Kubernetes versions older than v1.15.0, the Amazon cloud provider must also be configured in the cluster for this to work. Starting with Kubernetes v1.15.0, the kubelet can obtain ECR credentials without the Amazon cloud provider being configured in the cluster.
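With the credentials in place, workloads can reference ECR images directly; the account ID, region, and repository name below are placeholders:

```yaml
# Hypothetical Pod pulling from a private ECR repository; the kubelet
# resolves the registry credentials via the attached IAM instance profile.
apiVersion: v1
kind: Pod
metadata:
  name: ecr-example
spec:
  containers:
    - name: app
      image: 123456789012.dkr.ecr.us-west-2.amazonaws.com/my-app:latest
```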

Using the Out-of-Tree AWS Cloud Provider

  1. Node name conventions and other prerequisites must be followed for the cloud provider to find the instance correctly.
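    One way to check this on an instance is to compare the EC2 private DNS name from the instance metadata service with the hostname the node will register under (a sketch; the IMDSv1 endpoint is shown, and IMDSv2 would require a session token):

    ```shell
    # The node name must match the EC2 private DNS name for the cloud
    # provider to find the instance.
    curl -s http://169.254.169.254/latest/meta-data/hostname
    hostname -f
    ```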

  2. Rancher-managed RKE2/K3s clusters don’t support configuring providerID. However, the engine sets the node name correctly if the following configuration is set on the provisioning cluster object:

    spec:
      rkeConfig:
        machineGlobalConfig:
          cloud-provider-name: aws

    This option will be passed to the configuration of the various Kubernetes components that run on the node, and must be overridden per component to prevent the in-tree provider from running unintentionally:

    Override on Etcd:

    spec:
      rkeConfig:
        machineSelectorConfig:
          - config:
              kubelet-arg:
                - cloud-provider=external
            machineLabelSelector:
              matchExpressions:
                - key: rke.cattle.io/etcd-role
                  operator: In
                  values:
                    - 'true'

    Override on Control Plane:

    spec:
      rkeConfig:
        machineSelectorConfig:
          - config:
              disable-cloud-controller: true
              kube-apiserver-arg:
                - cloud-provider=external
              kube-controller-manager-arg:
                - cloud-provider=external
              kubelet-arg:
                - cloud-provider=external
            machineLabelSelector:
              matchExpressions:
                - key: rke.cattle.io/control-plane-role
                  operator: In
                  values:
                    - 'true'

    Override on Worker:

    spec:
      rkeConfig:
        machineSelectorConfig:
          - config:
              kubelet-arg:
                - cloud-provider=external
            machineLabelSelector:
              matchExpressions:
                - key: rke.cattle.io/worker-role
                  operator: In
                  values:
                    - 'true'
  3. Select Amazon if relying on the above mechanism to set the provider ID. Otherwise, select External (out-of-tree) cloud provider, which sets --cloud-provider=external for Kubernetes components.

  4. Specify the aws-cloud-controller-manager Helm chart as an additional manifest to install:

    spec:
      rkeConfig:
        additionalManifest: |-
          apiVersion: helm.cattle.io/v1
          kind: HelmChart
          metadata:
            name: aws-cloud-controller-manager
            namespace: kube-system
          spec:
            chart: aws-cloud-controller-manager
            repo: https://kubernetes.github.io/cloud-provider-aws
            targetNamespace: kube-system
            bootstrap: true
            valuesContent: |-
              hostNetworking: true
              nodeSelector:
                node-role.kubernetes.io/control-plane: "true"
              args:
                - --configure-cloud-routes=false
                - --v=5
                - --cloud-provider=aws

Helm Chart Installation from CLI

Official upstream docs for Helm chart installation can be found on GitHub.

  1. Add the Helm repository:

    helm repo add aws-cloud-controller-manager https://kubernetes.github.io/cloud-provider-aws
    helm repo update
  2. Create a values.yaml file with the following contents to override the default values.yaml:

    # values.yaml
    hostNetworking: true
    tolerations:
      - effect: NoSchedule
        key: node.cloudprovider.kubernetes.io/uninitialized
        value: 'true'
      - effect: NoSchedule
        value: 'true'
        key: node-role.kubernetes.io/control-plane
    nodeSelector:
      node-role.kubernetes.io/control-plane: 'true'
    args:
      - --configure-cloud-routes=false
      - --use-service-account-credentials=true
      - --v=2
      - --cloud-provider=aws
    clusterRoleRules:
      - apiGroups:
          - ""
        resources:
          - events
        verbs:
          - create
          - patch
          - update
      - apiGroups:
          - ""
        resources:
          - nodes
        verbs:
          - '*'
      - apiGroups:
          - ""
        resources:
          - nodes/status
        verbs:
          - patch
      - apiGroups:
          - ""
        resources:
          - services
        verbs:
          - list
          - patch
          - update
          - watch
      - apiGroups:
          - ""
        resources:
          - services/status
        verbs:
          - list
          - patch
          - update
          - watch
      - apiGroups:
          - ""
        resources:
          - serviceaccounts
        verbs:
          - create
          - get
      - apiGroups:
          - ""
        resources:
          - persistentvolumes
        verbs:
          - get
          - list
          - update
          - watch
      - apiGroups:
          - ""
        resources:
          - endpoints
        verbs:
          - create
          - get
          - list
          - watch
          - update
      - apiGroups:
          - coordination.k8s.io
        resources:
          - leases
        verbs:
          - create
          - get
          - list
          - watch
          - update
      - apiGroups:
          - ""
        resources:
          - serviceaccounts/token
        verbs:
          - create
  3. Install the Helm chart:

    helm upgrade --install aws-cloud-controller-manager aws-cloud-controller-manager/aws-cloud-controller-manager --values values.yaml

    Verify that the Helm chart installed successfully:

    helm status -n kube-system aws-cloud-controller-manager
  4. (Optional) Verify that the cloud controller manager update succeeded:

    kubectl rollout status daemonset -n kube-system aws-cloud-controller-manager
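    You can also check that nodes have received an AWS providerID once the cloud controller manager has initialized them (a sketch):

    ```shell
    # Each node's spec.providerID should look like aws:///<az>/<instance-id>
    # after the cloud controller manager initializes the node.
    kubectl get nodes -o custom-columns=NAME:.metadata.name,PROVIDER_ID:.spec.providerID
    ```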

Helm Chart Installation from UI

  1. Click ☰, then select the name of the cluster from the left navigation.

  2. Select Apps > Repositories.

  3. Click the Create button.

  4. Enter https://kubernetes.github.io/cloud-provider-aws in the Index URL field.

  5. Select Apps > Charts from the left navigation and install aws-cloud-controller-manager.

  6. Select the namespace, kube-system, and enable Customize Helm options before install.

  7. Add the following container arguments:

      - '--use-service-account-credentials=true'
      - '--configure-cloud-routes=false'
  8. Add get to verbs for serviceaccounts resources in clusterRoleRules. This allows the cloud controller manager to get service accounts upon startup.

      - apiGroups:
          - ''
        resources:
          - serviceaccounts
        verbs:
          - create
          - get
  9. Rancher-provisioned RKE2 nodes are tainted with node-role.kubernetes.io/control-plane. Update the tolerations and the nodeSelector:

    tolerations:
      - effect: NoSchedule
        key: node.cloudprovider.kubernetes.io/uninitialized
        value: 'true'
      - effect: NoSchedule
        value: 'true'
        key: node-role.kubernetes.io/control-plane
    nodeSelector:
      node-role.kubernetes.io/control-plane: 'true'

    There’s currently a known issue where nodeSelector can’t be updated from the Rancher UI. Continue installing the chart and then edit the Daemonset manually to set the nodeSelector:


    nodeSelector:
      node-role.kubernetes.io/control-plane: 'true'
  10. Install the chart and confirm that the Daemonset aws-cloud-controller-manager is running. Verify that the aws-cloud-controller-manager pods are running in the target namespace (kube-system, unless modified in step 6).