Troubleshooting Controlplane Nodes
Prerequisites
As RKE2 and K3s rely on containerd as the container runtime, crictl replaces Docker for container management. Before running the troubleshooting commands below, make sure crictl is on your PATH and configured to reach the correct containerd socket.
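On RKE2, crictl and its configuration file ship under the RKE2 data directory. A minimal setup sketch, assuming a default install location (adjust /var/lib/rancher/rke2 if you use a custom data directory):

```shell
# RKE2 bundles crictl and its config under its own data directory.
# Paths below assume a default RKE2 install.
export CRI_CONFIG_FILE=/var/lib/rancher/rke2/agent/etc/crictl.yaml
export PATH="$PATH:/var/lib/rancher/rke2/bin"
```

The K3s install script normally symlinks crictl to the k3s binary in /usr/local/bin, so no extra configuration is usually needed on K3s nodes.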
Check if the Controlplane Components are Running
RKE2: There are three specific containers launched on nodes with the controlplane role:

- kube-apiserver
- kube-controller-manager
- kube-scheduler
The containers should have state Running. You can check this using crictl:
crictl ps | grep -E 'kube-apiserver|kube-controller-manager|kube-scheduler'
Example output:
CONTAINER       IMAGE           CREATED       STATE     NAME                      ATTEMPT   POD ID          POD                       NAMESPACE
deb8a96948594   138b1e685e151   11 days ago   Running   kube-controller-manager   0         0996426295dc5   kube-controller-manager   kube-system
f5abb4c7846e4   138b1e685e151   11 days ago   Running   kube-scheduler            0         80cd9f30af0be   kube-scheduler            kube-system
ecd8a6991c22a   138b1e685e151   11 days ago   Running   kube-apiserver            0         58e042fabe78c   kube-apiserver            kube-system
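To check all three components in one pass, a small script can scan the captured `crictl ps` output. This is a sketch: `check_component` is a hypothetical helper, not a crictl subcommand.

```shell
# Sketch: report whether each controlplane component shows state Running
# in `crictl ps` output. check_component is a hypothetical helper.
check_component() {
  # $1 = captured `crictl ps` output, $2 = container name to look for
  if echo "$1" | grep -E -q "Running[[:space:]]+$2([[:space:]]|\$)"; then
    echo "$2: Running"
  else
    echo "$2: NOT running"
  fi
}

ps_output=$(crictl ps 2>/dev/null || true)   # empty when run off-node
for name in kube-apiserver kube-controller-manager kube-scheduler; do
  check_component "$ps_output" "$name"
done
```

Any component reported as NOT running is the first place to look in the logs (see below).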
K3s: These components run as embedded processes within the K3s service. They do not run as separate containers, so their status is tied to the k3s systemd service:
systemctl status k3s
Controlplane Logging
If you added multiple nodes with the controlplane role, run the commands below on each of those nodes. The logs can contain information on what the problem could be.
RKE2:
crictl logs $(crictl ps --name kube-apiserver -q)
crictl logs $(crictl ps --name kube-controller-manager -q)
crictl logs $(crictl ps --name kube-scheduler -q)
K3s:
journalctl -u k3s | grep -i "kube-apiserver"
journalctl -u k3s | grep -i "kube-controller-manager"
journalctl -u k3s | grep -i "kube-scheduler"
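Instead of re-running journalctl once per component, you can capture the service log once and filter it per component. This is a sketch: `filter_component` is a hypothetical helper, not a journalctl feature.

```shell
# Sketch: capture the k3s journal once, then filter it per controlplane
# component. filter_component is a hypothetical helper.
filter_component() {
  # $1 = captured log text, $2 = component name to filter on
  printf '%s\n' "$1" | grep -i -- "$2" || true
}

logs=$(journalctl -u k3s --no-pager 2>/dev/null || true)  # empty off-node
for name in kube-apiserver kube-controller-manager kube-scheduler; do
  echo "=== $name ==="
  filter_component "$logs" "$name"
done
```

The `|| true` keeps the loop going when a component has no matching log lines.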