Setting up the Amazon Cloud Provider
Important:
In Kubernetes 1.27 and later, you must use an out-of-tree AWS cloud provider. In-tree cloud providers have been deprecated, and the in-tree Amazon cloud provider has been removed completely; it won't work after an upgrade to Kubernetes 1.27. The steps listed below are still required to set up an Amazon cloud provider: you can set up an out-of-tree cloud provider after creating an IAM role and configuring the ClusterID. You can also migrate from an in-tree to an out-of-tree AWS cloud provider on Kubernetes 1.26 and earlier. All existing clusters must migrate prior to upgrading to v1.27 in order to stay functional. Starting with Kubernetes 1.23, you must deactivate the `CSIMigrationAWS` feature gate in order to use the in-tree AWS cloud provider.
When you use Amazon as a cloud provider, you can leverage the following capabilities:
- Load Balancers: Launch an AWS Elastic Load Balancer (ELB) when you select Layer-4 Load Balancer in Port Mapping or when you launch a Service with `type: LoadBalancer`.
- Persistent Volumes: Use AWS Elastic Block Store (EBS) for persistent volumes.
See the cloud-provider-aws README for more information about the Amazon cloud provider.
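For example, once the cloud provider is configured, a Service of `type: LoadBalancer` provisions an AWS ELB. A minimal sketch; the workload name and ports below are placeholders, not from this guide:

```yaml
# Hypothetical example: exposing a workload named "my-app" (placeholder)
# through an AWS ELB via a Service of type LoadBalancer.
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb           # placeholder name
spec:
  type: LoadBalancer        # the cloud provider provisions an AWS ELB for this Service
  selector:
    app: my-app             # must match your workload's labels
  ports:
    - port: 80              # port exposed by the ELB
      targetPort: 8080      # container port (placeholder)
```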
To set up the Amazon cloud provider, complete the following steps:
1. Create an IAM Role and attach it to the instances
All nodes added to the cluster must be able to interact with EC2 so that they can create and remove resources. You can enable this interaction by using an IAM role attached to the instance. See the Amazon documentation on Creating an IAM Role for instructions on how to create an IAM role. There are two example policies:
- The first policy is for nodes with the `controlplane` role. These nodes must be able to create and remove EC2 resources. The following IAM policy is an example; remove any permissions that your use case doesn't need.
- The second policy is for nodes with the `etcd` or `worker` role. These nodes only need to be able to retrieve information from EC2.
While creating an Amazon EC2 cluster, you must fill in the IAM Instance Profile Name (not ARN) of the created IAM role when creating the Node Template.
While creating a Custom cluster, you must manually attach the IAM role to the instance(s).
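For Custom clusters, the instance profile can be attached with the AWS CLI, for example. This is a sketch; the instance ID and profile name are placeholders:

```bash
# Hypothetical values: replace the instance ID and profile name with your own.
aws ec2 associate-iam-instance-profile \
  --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=rancher-controlplane-profile
```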
IAM Policy for nodes with the `controlplane` role:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:DescribeAutoScalingGroups",
        "autoscaling:DescribeLaunchConfigurations",
        "autoscaling:DescribeTags",
        "ec2:DescribeInstances",
        "ec2:DescribeRegions",
        "ec2:DescribeRouteTables",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeSubnets",
        "ec2:DescribeVolumes",
        "ec2:CreateSecurityGroup",
        "ec2:CreateTags",
        "ec2:CreateVolume",
        "ec2:ModifyInstanceAttribute",
        "ec2:ModifyVolume",
        "ec2:AttachVolume",
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:CreateRoute",
        "ec2:DeleteRoute",
        "ec2:DeleteSecurityGroup",
        "ec2:DeleteVolume",
        "ec2:DetachVolume",
        "ec2:RevokeSecurityGroupIngress",
        "ec2:DescribeVpcs",
        "elasticloadbalancing:AddTags",
        "elasticloadbalancing:AttachLoadBalancerToSubnets",
        "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer",
        "elasticloadbalancing:CreateLoadBalancer",
        "elasticloadbalancing:CreateLoadBalancerPolicy",
        "elasticloadbalancing:CreateLoadBalancerListeners",
        "elasticloadbalancing:ConfigureHealthCheck",
        "elasticloadbalancing:DeleteLoadBalancer",
        "elasticloadbalancing:DeleteLoadBalancerListeners",
        "elasticloadbalancing:DescribeLoadBalancers",
        "elasticloadbalancing:DescribeLoadBalancerAttributes",
        "elasticloadbalancing:DetachLoadBalancerFromSubnets",
        "elasticloadbalancing:DeregisterInstancesFromLoadBalancer",
        "elasticloadbalancing:ModifyLoadBalancerAttributes",
        "elasticloadbalancing:RegisterInstancesWithLoadBalancer",
        "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer",
        "elasticloadbalancing:CreateListener",
        "elasticloadbalancing:CreateTargetGroup",
        "elasticloadbalancing:DeleteListener",
        "elasticloadbalancing:DeleteTargetGroup",
        "elasticloadbalancing:DescribeListeners",
        "elasticloadbalancing:DescribeLoadBalancerPolicies",
        "elasticloadbalancing:DescribeTargetGroups",
        "elasticloadbalancing:DescribeTargetHealth",
        "elasticloadbalancing:ModifyListener",
        "elasticloadbalancing:ModifyTargetGroup",
        "elasticloadbalancing:RegisterTargets",
        "elasticloadbalancing:SetLoadBalancerPoliciesOfListener",
        "iam:CreateServiceLinkedRole",
        "kms:DescribeKey"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
```
IAM Policy for nodes with the `etcd` or `worker` role:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "ec2:DescribeRegions",
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:GetRepositoryPolicy",
        "ecr:DescribeRepositories",
        "ecr:ListImages",
        "ecr:BatchGetImage"
      ],
      "Resource": "*"
    }
  ]
}
```
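As an illustration of how these example policies might be created and attached with the AWS CLI. The role, profile, policy, and file names below are placeholders, and the role also needs a trust policy that lets `ec2.amazonaws.com` assume it:

```bash
# Hypothetical names and file paths; adjust to your environment.
# Create the role with an EC2 trust policy, attach the policy above, and
# expose it to instances through an instance profile.
aws iam create-role --role-name rancher-controlplane-role \
  --assume-role-policy-document file://ec2-trust-policy.json
aws iam put-role-policy --role-name rancher-controlplane-role \
  --policy-name rancher-controlplane-policy \
  --policy-document file://controlplane-policy.json
aws iam create-instance-profile --instance-profile-name rancher-controlplane-profile
aws iam add-role-to-instance-profile \
  --instance-profile-name rancher-controlplane-profile \
  --role-name rancher-controlplane-role
```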
2. Configure the ClusterID
The following resources need to be tagged with a `ClusterID`:
- Nodes: All hosts added in Rancher.
- Subnet: The subnet used for your cluster.
- Security Group: The security group used for your cluster.
Do not tag multiple security groups. Tagging multiple groups generates an error when creating an Elastic Load Balancer (ELB).
When you create an Amazon EC2 cluster, the `ClusterID` is automatically configured for the created nodes. Other resources still need to be tagged manually.
Use the following tag:

Key = `kubernetes.io/cluster/<cluster-id>`
Value = `owned`

Setting the value of the tag to `owned` tells the cluster that all resources with this tag are owned and managed by this cluster.

If you share resources between clusters, you can change the tag to:

Key = `kubernetes.io/cluster/<cluster-id>`
Value = `shared`
The string value, `<cluster-id>`, is the Kubernetes cluster's ID.
Do not tag a resource with multiple owned or shared tags.
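For example, the subnet and security group can be tagged with the AWS CLI. A sketch with placeholder resource IDs:

```bash
# Hypothetical IDs: replace the subnet/security group IDs and <cluster-id> with your own.
aws ec2 create-tags \
  --resources subnet-0123456789abcdef0 sg-0123456789abcdef0 \
  --tags Key=kubernetes.io/cluster/<cluster-id>,Value=owned
```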
Using Amazon Elastic Container Registry (ECR)
The kubelet component has the ability to automatically obtain ECR credentials when the IAM profile mentioned in Create an IAM Role and attach it to the instances is attached to the instance(s). When using a Kubernetes version older than v1.15.0, the Amazon cloud provider needs to be configured in the cluster. Starting with Kubernetes version v1.15.0, the kubelet can obtain ECR credentials without having the Amazon cloud provider configured in the cluster.
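With that IAM profile attached, workloads can pull ECR images directly, without an `imagePullSecret`. A minimal sketch; the account ID, region, and repository are placeholders:

```yaml
# Hypothetical ECR image reference; no imagePullSecrets are needed because the
# kubelet obtains ECR credentials via the instance's IAM role.
apiVersion: v1
kind: Pod
metadata:
  name: ecr-example
spec:
  containers:
    - name: app
      image: 123456789012.dkr.ecr.us-west-2.amazonaws.com/my-app:latest
```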
Using the Out-of-Tree AWS Cloud Provider
**RKE2**
- Node name conventions and other prerequisites must be followed for the cloud provider to find the instance correctly.
- Rancher managed RKE2/K3s clusters don't support configuring `providerID`. However, the engine will set the node name correctly if the following configuration is set on the provisioning cluster object:

```yaml
spec:
  rkeConfig:
    machineGlobalConfig:
      cloud-provider-name: aws
```
This option will be passed to the configuration of the various Kubernetes components that run on the node, and must be overridden per component to prevent the in-tree provider from running unintentionally:
Override on Etcd:

```yaml
spec:
  rkeConfig:
    machineSelectorConfig:
      - config:
          kubelet-arg:
            - cloud-provider=external
        machineLabelSelector:
          matchExpressions:
            - key: rke.cattle.io/etcd-role
              operator: In
              values:
                - 'true'
```
Override on Control Plane:

```yaml
spec:
  rkeConfig:
    machineSelectorConfig:
      - config:
          disable-cloud-controller: true
          kube-apiserver-arg:
            - cloud-provider=external
          kube-controller-manager-arg:
            - cloud-provider=external
          kubelet-arg:
            - cloud-provider=external
        machineLabelSelector:
          matchExpressions:
            - key: rke.cattle.io/control-plane-role
              operator: In
              values:
                - 'true'
```
Override on Worker:

```yaml
spec:
  rkeConfig:
    machineSelectorConfig:
      - config:
          kubelet-arg:
            - cloud-provider=external
        machineLabelSelector:
          matchExpressions:
            - key: rke.cattle.io/worker-role
              operator: In
              values:
                - 'true'
```
- Select Amazon if relying on the above mechanism to set the provider ID. Otherwise, select External (out-of-tree) cloud provider, which sets `--cloud-provider=external` for Kubernetes components.
- Specify the `aws-cloud-controller-manager` Helm chart as an additional manifest to install:

```yaml
spec:
  rkeConfig:
    additionalManifest: |-
      apiVersion: helm.cattle.io/v1
      kind: HelmChart
      metadata:
        name: aws-cloud-controller-manager
        namespace: kube-system
      spec:
        chart: aws-cloud-controller-manager
        repo: https://kubernetes.github.io/cloud-provider-aws
        targetNamespace: kube-system
        bootstrap: true
        valuesContent: |-
          hostNetworking: true
          nodeSelector:
            node-role.kubernetes.io/control-plane: "true"
          args:
            - --configure-cloud-routes=false
            - --v=5
            - --cloud-provider=aws
```
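After provisioning, you can check that the additional manifest was applied. This is a sketch that assumes the built-in RKE2/K3s helm-controller reconciles the `HelmChart` resource defined above and that the chart's DaemonSet keeps its default name:

```bash
# The HelmChart custom resource is reconciled by the built-in helm-controller.
kubectl get helmchart aws-cloud-controller-manager -n kube-system
kubectl rollout status daemonset -n kube-system aws-cloud-controller-manager
```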
**RKE**

- Node name conventions and other prerequisites must be followed so that the cloud provider can find the instance. Rancher provisioned clusters don't support configuring `providerID`.

If you use IP-based naming, the nodes must be named after the instance followed by the regional domain name (`ip-xxx-xxx-xxx-xxx.ec2.<region>.internal`). If you have a custom domain name set in the DHCP options, you must set `--hostname-override` on `kube-proxy` and `kubelet` to match this naming convention.

To meet node naming conventions, Rancher allows setting `useInstanceMetadataHostname` when the External Amazon cloud provider is selected. Enabling `useInstanceMetadataHostname` will query the EC2 metadata service and set `http://169.254.169.254/latest/meta-data/hostname` as `hostname-override` for `kubelet` and `kube-proxy`:

```yaml
rancher_kubernetes_engine_config:
  cloud_provider:
    name: external-aws
    useInstanceMetadataHostname: true
```
You must not enable `useInstanceMetadataHostname` when setting custom values for `hostname-override` for custom clusters. When you create a custom cluster, add `--node-name` to the `docker run` node registration command to set `hostname-override`, for example `"$(hostname -f)"`. This can be done manually or by using Show Advanced Options in the Rancher UI to add Node Name.
- Select the cloud provider.

Selecting External Amazon (out-of-tree) sets `--cloud-provider=external` and enables `useInstanceMetadataHostname`. As mentioned in step 1, enabling `useInstanceMetadataHostname` will query the EC2 metadata service and set `http://169.254.169.254/latest/meta-data/hostname` as `hostname-override` for `kubelet` and `kube-proxy`.

You must disable `useInstanceMetadataHostname` when setting a custom node name for custom clusters via `node-name`.

```yaml
rancher_kubernetes_engine_config:
  cloud_provider:
    name: external-aws
    useInstanceMetadataHostname: true/false
```
Existing clusters that use an External cloud provider will set `--cloud-provider=external` for Kubernetes components but won't set the node name.
- Install the AWS cloud controller manager after the cluster finishes provisioning. Note that the cluster isn't successfully provisioned and nodes are still in an `uninitialized` state until you deploy the cloud controller manager. This can be done manually, or via Helm charts in the UI.

Refer to the official AWS upstream documentation for the cloud controller manager.
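To see whether nodes are still waiting to be initialized, you can inspect their taints. A sketch:

```bash
# Nodes stay tainted with node.cloudprovider.kubernetes.io/uninitialized
# until the cloud controller manager initializes them.
kubectl get nodes
kubectl describe nodes | grep Taints
```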
Helm Chart Installation from CLI
**RKE2**
Official upstream docs for Helm chart installation can be found on GitHub.
- Add the Helm repository:

```bash
helm repo add aws-cloud-controller-manager https://kubernetes.github.io/cloud-provider-aws
helm repo update
```
- Create a `values.yaml` file with the following contents to override the default `values.yaml`:

```yaml
# values.yaml
hostNetworking: true
tolerations:
  - effect: NoSchedule
    key: node.cloudprovider.kubernetes.io/uninitialized
    value: 'true'
  - effect: NoSchedule
    value: 'true'
    key: node-role.kubernetes.io/control-plane
nodeSelector:
  node-role.kubernetes.io/control-plane: 'true'
args:
  - --configure-cloud-routes=false
  - --use-service-account-credentials=true
  - --v=2
  - --cloud-provider=aws
clusterRoleRules:
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
      - update
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - '*'
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - list
      - patch
      - update
      - watch
  - apiGroups:
      - ""
    resources:
      - services/status
    verbs:
      - list
      - patch
      - update
      - watch
  - apiGroups:
      - ''
    resources:
      - serviceaccounts
    verbs:
      - create
      - get
  - apiGroups:
      - ""
    resources:
      - persistentvolumes
    verbs:
      - get
      - list
      - update
      - watch
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - create
      - get
      - list
      - watch
      - update
  - apiGroups:
      - coordination.k8s.io
    resources:
      - leases
    verbs:
      - create
      - get
      - list
      - watch
      - update
  - apiGroups:
      - ""
    resources:
      - serviceaccounts/token
    verbs:
      - create
```
- Install the Helm chart:

```bash
helm upgrade --install aws-cloud-controller-manager aws-cloud-controller-manager/aws-cloud-controller-manager --values values.yaml
```

Verify that the Helm chart installed successfully:

```bash
helm status -n kube-system aws-cloud-controller-manager
```
- (Optional) Verify that the cloud controller manager update succeeded:

```bash
kubectl rollout status daemonset -n kube-system aws-cloud-controller-manager
```
**RKE**

Official upstream docs for Helm chart installation can be found on GitHub.
- Add the Helm repository:

```bash
helm repo add aws-cloud-controller-manager https://kubernetes.github.io/cloud-provider-aws
helm repo update
```
- Create a `values.yaml` file with the following contents to override the default `values.yaml`:

```yaml
# values.yaml
hostNetworking: true
tolerations:
  - effect: NoSchedule
    key: node.cloudprovider.kubernetes.io/uninitialized
    value: 'true'
  - effect: NoSchedule
    value: 'true'
    key: node-role.kubernetes.io/controlplane
nodeSelector:
  node-role.kubernetes.io/controlplane: 'true'
args:
  - --configure-cloud-routes=false
  - --use-service-account-credentials=true
  - --v=2
  - --cloud-provider=aws
clusterRoleRules:
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
      - update
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - '*'
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - list
      - patch
      - update
      - watch
  - apiGroups:
      - ""
    resources:
      - services/status
    verbs:
      - list
      - patch
      - update
      - watch
  - apiGroups:
      - ''
    resources:
      - serviceaccounts
    verbs:
      - create
      - get
  - apiGroups:
      - ""
    resources:
      - persistentvolumes
    verbs:
      - get
      - list
      - update
      - watch
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - create
      - get
      - list
      - watch
      - update
  - apiGroups:
      - coordination.k8s.io
    resources:
      - leases
    verbs:
      - create
      - get
      - list
      - watch
      - update
  - apiGroups:
      - ""
    resources:
      - serviceaccounts/token
    verbs:
      - create
```
- Install the Helm chart:

```bash
helm upgrade --install aws-cloud-controller-manager -n kube-system aws-cloud-controller-manager/aws-cloud-controller-manager --values values.yaml
```

Verify that the Helm chart installed successfully:

```bash
helm status -n kube-system aws-cloud-controller-manager
```
- If present, edit the Daemonset to remove the default node selector `node-role.kubernetes.io/control-plane: ""`:

```bash
kubectl edit daemonset aws-cloud-controller-manager -n kube-system
```
- (Optional) Verify that the cloud controller manager update succeeded:

```bash
kubectl rollout status daemonset -n kube-system aws-cloud-controller-manager
```
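Once the cloud controller manager is running, you can confirm that it has initialized the nodes, that is, the `node.cloudprovider.kubernetes.io/uninitialized` taint is gone and each node has an AWS `providerID`. A sketch:

```bash
# Each node should report a providerID of the form aws:///<zone>/<instance-id>
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.providerID}{"\n"}{end}'
# The uninitialized taint should no longer appear.
kubectl describe nodes | grep Taints
```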
Helm Chart Installation from UI
**RKE2**
- Click ☰, then select the name of the cluster from the left navigation.
- Select Apps > Repositories.
- Click the Create button.
- Enter `https://kubernetes.github.io/cloud-provider-aws` in the Index URL field.
- Select Apps > Charts from the left navigation and install aws-cloud-controller-manager.
- Select the namespace, `kube-system`, and enable Customize Helm options before install.
- Add the following container arguments:

```yaml
- '--use-service-account-credentials=true'
- '--configure-cloud-routes=false'
```
- Add `get` to `verbs` for `serviceaccounts` resources in `clusterRoleRules`. This allows the cloud controller manager to get service accounts upon startup.

```yaml
- apiGroups:
    - ''
  resources:
    - serviceaccounts
  verbs:
    - create
    - get
```
- Rancher-provisioned RKE2 nodes are tainted `node-role.kubernetes.io/control-plane`. Update tolerations and the nodeSelector:

```yaml
tolerations:
  - effect: NoSchedule
    key: node.cloudprovider.kubernetes.io/uninitialized
    value: 'true'
  - effect: NoSchedule
    value: 'true'
    key: node-role.kubernetes.io/control-plane

nodeSelector:
  node-role.kubernetes.io/control-plane: 'true'
```

There's currently a known issue where nodeSelector can't be updated from the Rancher UI. Continue installing the chart and then edit the Daemonset manually to set the `nodeSelector`:

```yaml
nodeSelector:
  node-role.kubernetes.io/control-plane: 'true'
```
- Install the chart and confirm that the Daemonset `aws-cloud-controller-manager` is running. Verify `aws-cloud-controller-manager` pods are running in the target namespace (`kube-system` unless modified in step 6).
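As a quick check from the command line (a sketch; adjust the namespace if you changed it in step 6):

```bash
kubectl get pods -n kube-system | grep aws-cloud-controller-manager
kubectl rollout status daemonset -n kube-system aws-cloud-controller-manager
```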
**RKE**

- Click ☰, then select the name of the cluster from the left navigation.
- Select Apps > Repositories.
- Click the Create button.
- Enter `https://kubernetes.github.io/cloud-provider-aws` in the Index URL field.
- Select Apps > Charts from the left navigation and install aws-cloud-controller-manager.
- Select the namespace, `kube-system`, and enable Customize Helm options before install.
- Add the following container arguments:

```yaml
- '--use-service-account-credentials=true'
- '--configure-cloud-routes=false'
```
- Add `get` to `verbs` for `serviceaccounts` resources in `clusterRoleRules`. This allows the cloud controller manager to get service accounts upon startup:

```yaml
- apiGroups:
    - ''
  resources:
    - serviceaccounts
  verbs:
    - create
    - get
```
- Rancher-provisioned RKE nodes are tainted `node-role.kubernetes.io/controlplane`. Update tolerations and the nodeSelector:

```yaml
tolerations:
  - effect: NoSchedule
    key: node.cloudprovider.kubernetes.io/uninitialized
    value: 'true'
  - effect: NoSchedule
    value: 'true'
    key: node-role.kubernetes.io/controlplane

nodeSelector:
  node-role.kubernetes.io/controlplane: 'true'
```

There's currently a known issue where `nodeSelector` can't be updated from the Rancher UI. Continue installing the chart and then edit the Daemonset manually to set the `nodeSelector`:

```yaml
nodeSelector:
  node-role.kubernetes.io/controlplane: 'true'
```
- Install the chart and confirm that the Daemonset `aws-cloud-controller-manager` deploys successfully:

```bash
kubectl rollout status daemonset -n kube-system aws-cloud-controller-manager
```