
CIS 1.10 Self-Assessment Guide

Overview

This document is a companion to the K3s security hardening guide. The hardening guide provides prescriptive guidance for hardening a production installation of K3s, and this benchmark guide is meant to help you evaluate the level of security of the hardened cluster against each control in the CIS Kubernetes Benchmark. It is to be used by K3s operators, security teams, auditors, and decision makers.

This guide is specific to the v1.28-v1.31 release lines of K3s and v1.10 of the CIS Kubernetes Benchmark.

For more information about each control, including detailed descriptions and remediations for failing tests, you can refer to the corresponding section of the CIS Kubernetes Benchmark v1.10. You can download the benchmark, after creating a free account, at the Center for Internet Security (CIS).

Testing Controls Methodology

Each control in the CIS Kubernetes Benchmark was evaluated against a K3s cluster that was configured according to the accompanying hardening guide.

Where control audits differ from the original CIS benchmark, the audit commands specific to K3s are provided for testing.

These are the possible results for each control:

  • Pass - The K3s cluster under test passed the audit outlined in the benchmark.

  • Not Applicable - The control is not applicable to K3s because of how it is designed to operate. The remediation section will explain why this is so.

  • Warn - The control is manual in the CIS benchmark and it depends on the cluster's use case or some other factor that must be determined by the cluster operator. These controls have been evaluated to ensure K3s does not prevent their implementation, but no further configuration or auditing of the cluster under test has been performed.

This guide makes the assumption that K3s is running as a Systemd unit. Your installation may vary and will require you to adjust the "audit" commands to fit your scenario.
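
If K3s is not running under systemd (for example an OpenRC install, where the install script typically writes logs to /var/log/k3s.log), the journalctl-based audits below will return nothing. A minimal sketch of an equivalent log-file audit, assuming that log path (the helper name is illustrative):

```shell
# Hypothetical helper: recover the same "Running kube-apiserver ..."
# line that the journalctl-based audits inspect, but from a plain
# log file instead of the systemd journal.
last_apiserver_line() {
  grep 'Running kube-apiserver' "$1" | tail -n1
}

# OpenRC default log location; adjust for your installation. The
# guard keeps the command from failing when the file is absent.
last_apiserver_line /var/log/k3s.log 2>/dev/null || true
```

The remaining per-flag audits then work the same way, by piping this line through the appropriate `grep`.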

1.1 Control Plane Node Configuration Files

1.1.1 Ensure that the API server pod specification file permissions are set to 600 or more restrictive (Automated)

*Result:* Not Applicable

Rationale:

By default, K3s embeds the API server within the k3s process. There is no API server pod specification file.

1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated)

*Result:* Not Applicable

Rationale:

By default, K3s embeds the API server within the k3s process. There is no API server pod specification file.

1.1.3 Ensure that the controller manager pod specification file permissions are set to 600 or more restrictive (Automated)

*Result:* Not Applicable

Rationale:

By default, K3s embeds the controller manager within the k3s process. There is no controller manager pod specification file.

1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root (Automated)

*Result:* Not Applicable

Rationale:

By default, K3s embeds the controller manager within the k3s process. There is no controller manager pod specification file.

1.1.5 Ensure that the scheduler pod specification file permissions are set to 600 or more restrictive (Automated)

*Result:* Not Applicable

Rationale:

By default, K3s embeds the scheduler within the k3s process. There is no scheduler pod specification file.

1.1.6 Ensure that the scheduler pod specification file ownership is set to root:root (Automated)

*Result:* Not Applicable

Rationale:

By default, K3s embeds the scheduler within the k3s process. There is no scheduler pod specification file.

1.1.7 Ensure that the etcd pod specification file permissions are set to 600 or more restrictive (Automated)

*Result:* Not Applicable

Rationale:

By default, K3s embeds etcd within the k3s process. There is no etcd pod specification file.

1.1.8 Ensure that the etcd pod specification file ownership is set to root:root (Automated)

*Result:* Not Applicable

Rationale:

By default, K3s embeds etcd within the k3s process. There is no etcd pod specification file.

1.1.9 Ensure that the container network interface file permissions are set to 600 or more restrictive (Automated)

*Result:* PASS

Audit:

find /var/lib/cni/networks -type f ! -name lock 2> /dev/null | xargs --no-run-if-empty stat -c permissions=%a

Expected Result: permissions has permissions 600, expected 600 or more restrictive

Returned Value:
permissions=600
permissions=600
permissions=600
permissions=600
permissions=600
permissions=600
Remediation:

By default, K3s sets the CNI file permissions to 600. Note that for many CNIs, a lock file is created with permissions 750. This is expected and can be ignored. If you modify your CNI configuration, ensure that the permissions are set to 600. For example, chmod 600 /var/lib/cni/networks/<filename>
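
As a convenience, the per-file chmod above can be applied across the whole directory in one pass. A sketch, assuming GNU find and chmod, that skips the expected 750 lock file (the helper name is illustrative):

```shell
# Remove group/other access from every CNI file except the lock file,
# leaving the owner's bits intact (so 644 becomes 600, 600 stays 600).
tighten_cni_perms() {
  find "$1" -type f ! -name lock -exec chmod go-rwx {} + 2>/dev/null
}

tighten_cni_perms /var/lib/cni/networks || true
```

Re-running the audit command afterwards should show permissions=600 for every file.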

1.1.10 Ensure that the container network interface file ownership is set to root:root (Automated)

*Result:* PASS

Audit:

find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c %U:%G

Expected Result: 'root:root' is present

Returned Value:
root:root
root:root
root:root
root:root
root:root
root:root
root:root
Remediation:

Run the below command (based on the file location on your system) on the control plane node. For example, chown root:root /var/lib/cni/networks/<filename>

1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive (Manual)

*Result:* PASS

Audit:

stat -c permissions=%a /var/lib/rancher/k3s/server/db/etcd

Expected Result: permissions has permissions 700, expected 700 or more restrictive

Returned Value:
permissions=700
Remediation:

Not applicable to non-etcd clusters. If running a server without the etcd role, this check does not apply. If the control plane and etcd roles are on the same node but this check is a WARN, then on the etcd server node, get the etcd data directory, passed as an argument --data-dir, from the command 'ps -ef | grep etcd'. Run the below command (based on the etcd data directory found above). For example, chmod 700 /var/lib/rancher/k3s/server/db/etcd

1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated)

*Result:* Not Applicable

Rationale:

For K3s, etcd is embedded within the k3s process. There is no separate etcd process. Therefore the ownership of the etcd data directory is managed by the k3s process and should be root:root.

1.1.13 Ensure that the admin.conf file permissions are set to 600 or more restrictive (Automated)

*Result:* PASS

Audit:

/bin/sh -c 'if test -e /var/lib/rancher/k3s/server/cred/admin.kubeconfig; then stat -c permissions=%a /var/lib/rancher/k3s/server/cred/admin.kubeconfig; fi'

Expected Result: permissions has permissions 600, expected 600 or more restrictive

Returned Value:
permissions=600
Remediation:

Run the below command (based on the file location on your system) on the control plane node. For example, chmod 600 /var/lib/rancher/k3s/server/cred/admin.kubeconfig

1.1.14 Ensure that the admin.conf file ownership is set to root:root (Automated)

*Result:* PASS

Audit:

/bin/sh -c 'if test -e /var/lib/rancher/k3s/server/cred/admin.kubeconfig; then stat -c %U:%G /var/lib/rancher/k3s/server/cred/admin.kubeconfig; fi'

Expected Result: 'root:root' is equal to 'root:root'

Returned Value:
root:root
Remediation:

Run the below command (based on the file location on your system) on the control plane node. For example, chown root:root /var/lib/rancher/k3s/server/cred/admin.kubeconfig

1.1.15 Ensure that the scheduler.conf file permissions are set to 600 or more restrictive (Automated)

*Result:* PASS

Audit:

/bin/sh -c 'if test -e /var/lib/rancher/k3s/server/cred/scheduler.kubeconfig; then stat -c permissions=%a /var/lib/rancher/k3s/server/cred/scheduler.kubeconfig; fi'

Expected Result: permissions has permissions 600, expected 600 or more restrictive

Returned Value:
permissions=600
Remediation:

Run the below command (based on the file location on your system) on the control plane node. For example, chmod 600 /var/lib/rancher/k3s/server/cred/scheduler.kubeconfig

1.1.16 Ensure that the scheduler.conf file ownership is set to root:root (Automated)

*Result:* PASS

Audit:

/bin/sh -c 'if test -e /var/lib/rancher/k3s/server/cred/scheduler.kubeconfig; then stat -c %U:%G /var/lib/rancher/k3s/server/cred/scheduler.kubeconfig; fi'

Expected Result: 'root:root' is present

Returned Value:
root:root
Remediation:

Run the below command (based on the file location on your system) on the control plane node. For example, chown root:root /var/lib/rancher/k3s/server/cred/scheduler.kubeconfig

1.1.17 Ensure that the controller-manager.conf file permissions are set to 600 or more restrictive (Automated)

*Result:* PASS

Audit:

/bin/sh -c 'if test -e /var/lib/rancher/k3s/server/cred/controller.kubeconfig; then stat -c permissions=%a /var/lib/rancher/k3s/server/cred/controller.kubeconfig; fi'

Expected Result: permissions has permissions 600, expected 600 or more restrictive

Returned Value:
permissions=600
Remediation:

Run the below command (based on the file location on your system) on the control plane node. For example, chmod 600 /var/lib/rancher/k3s/server/cred/controller.kubeconfig

1.1.18 Ensure that the controller-manager.conf file ownership is set to root:root (Automated)

*Result:* PASS

Audit:

stat -c %U:%G /var/lib/rancher/k3s/server/cred/controller.kubeconfig

Expected Result: 'root:root' is equal to 'root:root'

Returned Value:
root:root
Remediation:

Run the below command (based on the file location on your system) on the control plane node. For example, chown root:root /var/lib/rancher/k3s/server/cred/controller.kubeconfig

1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated)

*Result:* PASS

Audit:

stat -c %U:%G /var/lib/rancher/k3s/server/tls

Expected Result: 'root:root' is present

Returned Value:
root:root
Remediation:

Run the below command (based on the file location on your system) on the control plane node. For example, chown -R root:root /var/lib/rancher/k3s/server/tls

1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to 600 or more restrictive (Manual)

*Result:* WARN

Remediation: Run the below command (based on the file location on your system) on the master node. For example, chmod -R 600 /var/lib/rancher/k3s/server/tls/*.crt

1.1.21 Ensure that the Kubernetes PKI key file permissions are set to 600 (Automated)

*Result:* PASS

Audit:

/bin/sh -c 'stat -c permissions=%a /var/lib/rancher/k3s/server/tls/*.key'

Expected Result: permissions has permissions 600, expected 600 or more restrictive

Returned Value:
permissions=600
permissions=600
permissions=600
permissions=600
permissions=600
permissions=600
permissions=600
permissions=600
permissions=600
permissions=600
permissions=600
permissions=600
permissions=600
permissions=600
permissions=600
permissions=600
permissions=600
Remediation:

Run the below command (based on the file location on your system) on the master node. For example, chmod -R 600 /var/lib/rancher/k3s/server/tls/*.key

1.2 API Server

1.2.1 Ensure that the --anonymous-auth argument is set to false (Automated)

*Result:* PASS

Audit:

journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'anonymous-auth'

Expected Result: '--anonymous-auth' is equal to 'false'

Returned Value:
Sep 11 17:22:08 server-0 k3s[2234]: time="2025-09-11T17:22:08Z" level=info msg="Running kube-apiserver --admission-control-config-file=/var/lib/rancher/k3s/server/psa.yaml --advertise-address=10.10.10.100 --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --encryption-provider-config-automatic-reload=true --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Remediation:

By default, K3s sets the --anonymous-auth argument to false. If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove anything similar to below.

kube-apiserver-arg:
  - "anonymous-auth=true"
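
To audit the config file directly rather than the service logs, a small sketch (the helper name is illustrative) that fails when /etc/rancher/k3s/config.yaml re-enables anonymous auth:

```shell
# Succeeds when the config file does not contain "anonymous-auth=true";
# a missing config file also counts as a pass, since K3s then uses its
# secure default.
check_no_anonymous_auth() {
  if [ -f "$1" ]; then
    ! grep -q 'anonymous-auth=true' "$1"
  fi
}

check_no_anonymous_auth /etc/rancher/k3s/config.yaml
```

Note that edits to config.yaml only take effect after the k3s service is restarted (e.g. `systemctl restart k3s`).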

1.2.2 Ensure that the --token-auth-file parameter is not set (Automated)

*Result:* PASS

Audit:

journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1

Expected Result: '--token-auth-file' is not present

Returned Value:
Sep 11 17:22:08 server-0 k3s[2234]: time="2025-09-11T17:22:08Z" level=info msg="Running kube-apiserver --admission-control-config-file=/var/lib/rancher/k3s/server/psa.yaml --advertise-address=10.10.10.100 --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --encryption-provider-config-automatic-reload=true --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Remediation:

Follow the documentation and configure alternate mechanisms for authentication. If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove anything similar to below.

kube-apiserver-arg:
  - "token-auth-file=<path>"

1.2.3 Ensure that the --DenyServiceExternalIPs is set (Manual)

*Result:* WARN

Remediation: By default, K3s does not set DenyServiceExternalIPs. To enable this flag, edit the K3s config file /etc/rancher/k3s/config.yaml as shown below.

kube-apiserver-arg:
  - "enable-admission-plugins=DenyServiceExternalIPs"

1.2.4 Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated)

*Result:* PASS

Audit:

journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1

Expected Result: '--kubelet-client-certificate' is present AND '--kubelet-client-key' is present

Returned Value:
Sep 11 17:22:08 server-0 k3s[2234]: time="2025-09-11T17:22:08Z" level=info msg="Running kube-apiserver --admission-control-config-file=/var/lib/rancher/k3s/server/psa.yaml --advertise-address=10.10.10.100 --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --encryption-provider-config-automatic-reload=true --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Remediation:

By default, K3s automatically provides the kubelet client certificate and key. They are generated and located at /var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt and /var/lib/rancher/k3s/server/tls/client-kube-apiserver.key. If for some reason you need to provide your own certificate and key, you can set the below parameters in the K3s config file /etc/rancher/k3s/config.yaml.

kube-apiserver-arg:
  - "kubelet-client-certificate=<path/to/client-cert-file>"
  - "kubelet-client-key=<path/to/client-key-file>"

1.2.5 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)

*Result:* PASS

Audit:

journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'kubelet-certificate-authority'

Expected Result: '--kubelet-certificate-authority' is present

Returned Value:
Sep 11 17:22:08 server-0 k3s[2234]: time="2025-09-11T17:22:08Z" level=info msg="Running kube-apiserver --admission-control-config-file=/var/lib/rancher/k3s/server/psa.yaml --advertise-address=10.10.10.100 --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --encryption-provider-config-automatic-reload=true --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Remediation:

By default, K3s automatically provides the kubelet CA certificate file, at /var/lib/rancher/k3s/server/tls/server-ca.crt. If for some reason you need to provide your own CA certificate, look at using the k3s certificate command line tool. If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines similar to below.

kube-apiserver-arg:
  - "kubelet-certificate-authority=<path/to/ca-cert-file>"

1.2.6 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)

*Result:* PASS

Audit:

journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'authorization-mode'

Expected Result: '--authorization-mode' does not have 'AlwaysAllow'

Returned Value:
Sep 11 17:22:08 server-0 k3s[2234]: time="2025-09-11T17:22:08Z" level=info msg="Running kube-apiserver --admission-control-config-file=/var/lib/rancher/k3s/server/psa.yaml --advertise-address=10.10.10.100 --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --encryption-provider-config-automatic-reload=true --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Remediation:

By default, K3s does not set --authorization-mode to AlwaysAllow. If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines similar to below.

kube-apiserver-arg:
  - "authorization-mode=AlwaysAllow"

1.2.7 Ensure that the --authorization-mode argument includes Node (Automated)

*Result:* PASS

Audit:

journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'authorization-mode'

Expected Result: '--authorization-mode' has 'Node'

Returned Value:
Sep 11 17:22:08 server-0 k3s[2234]: time="2025-09-11T17:22:08Z" level=info msg="Running kube-apiserver --admission-control-config-file=/var/lib/rancher/k3s/server/psa.yaml --advertise-address=10.10.10.100 --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --encryption-provider-config-automatic-reload=true --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Remediation:

By default, K3s sets --authorization-mode to Node and RBAC. If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and ensure that you are not overriding authorization-mode.

1.2.8 Ensure that the --authorization-mode argument includes RBAC (Automated)

*Result:* PASS

Audit:

journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'authorization-mode'

Expected Result: '--authorization-mode' has 'RBAC'

Returned Value:
Sep 11 17:22:08 server-0 k3s[2234]: time="2025-09-11T17:22:08Z" level=info msg="Running kube-apiserver --admission-control-config-file=/var/lib/rancher/k3s/server/psa.yaml --advertise-address=10.10.10.100 --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --encryption-provider-config-automatic-reload=true --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Remediation:

By default, K3s sets --authorization-mode to Node and RBAC. If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and ensure that you are not overriding authorization-mode.

1.2.9 Ensure that the admission control plugin EventRateLimit is set (Manual)

*Result:* WARN

Remediation: Follow the Kubernetes documentation and set the desired limits in a configuration file. Then, edit the K3s config file /etc/rancher/k3s/config.yaml and set the below parameters.

kube-apiserver-arg:
  - "enable-admission-plugins=...,EventRateLimit,..."
  - "admission-control-config-file=<path/to/configuration/file>"
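
The admission control configuration file referenced above might look like the following sketch. The file paths, qps, and burst values are illustrative, not K3s defaults; see the Kubernetes EventRateLimit documentation for the full schema.

```yaml
# Admission configuration file (e.g. /var/lib/rancher/k3s/server/admission.yaml),
# passed via admission-control-config-file above. It points the
# EventRateLimit plugin at its own configuration file.
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
  - name: EventRateLimit
    path: /var/lib/rancher/k3s/server/eventconfig.yaml

# Separate file, /var/lib/rancher/k3s/server/eventconfig.yaml:
#
# apiVersion: eventratelimit.admission.k8s.io/v1alpha1
# kind: Configuration
# limits:
#   - type: Server   # one shared rate-limit bucket for the whole API server
#     qps: 50
#     burst: 100
```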

1.2.10 Ensure that the admission control plugin AlwaysAdmit is not set (Automated)

*Result:* PASS

Audit:

journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'

Expected Result: '--enable-admission-plugins' does not have 'AlwaysAdmit' OR '--enable-admission-plugins' is not present

Returned Value:
Sep 11 17:22:08 server-0 k3s[2234]: time="2025-09-11T17:22:08Z" level=info msg="Running kube-apiserver --admission-control-config-file=/var/lib/rancher/k3s/server/psa.yaml --advertise-address=10.10.10.100 --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --encryption-provider-config-automatic-reload=true --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
修复:

默认情况下,K3s 不会将 --enable-admission-plugins 设置为 AlwaysAdmit。 如果此检查失败,请编辑 K3s 配置文件 /etc/rancher/k3s/config.yaml,删除与以下类似的行。

kube-apiserver-arg:
  - "enable-admission-plugins=AlwaysAdmit"
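作为补充,下面给出一个假设性的 shell 片段,演示如何从 kube-apiserver 的启动参数中提取 enable-admission-plugins 的取值并确认其中不包含 AlwaysAdmit。其中 LINE 为虚构的日志节选,并非 K3s 的实际输出;实际检查时可将其替换为 `journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1` 的输出。

```shell
# 假设示例:LINE 为虚构的 kube-apiserver 启动参数片段
LINE='--enable-admission-plugins=NodeRestriction --profiling=false'
# 提取 enable-admission-plugins 的取值(逗号分隔的插件列表)
plugins=$(printf '%s\n' "$LINE" | grep -o -- '--enable-admission-plugins=[^ ]*' | cut -d= -f2)
# 确认 AlwaysAdmit 不在列表中
case ",$plugins," in
  *,AlwaysAdmit,*) echo "FAIL" ;;
  *)               echo "PASS" ;;  # 输出: PASS
esac
```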

1.2.11 确保 admission 控制插件 AlwaysPullImages 已设置(手动)

*结果:*警告

修复: 根据 CIS 指南:"此设置可能会影响离线或隔离的集群,这些集群已预加载镜像,且无法访问镜像仓库来拉取正在使用的镜像。此设置不适用于采用该配置的集群。" 如需启用,请编辑 K3s 配置文件 /etc/rancher/k3s/config.yaml 并设置以下参数。

kube-apiserver-arg:
  - "enable-admission-plugins=...,AlwaysPullImages,..."

1.2.12 确保 admission 控制插件 ServiceAccount 已设置(自动化)

*结果:*PASS

Audit:

journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1

预期结果: '--disable-admission-plugins' 不包含 'ServiceAccount',或 '--disable-admission-plugins' 不存在

返回值:
Sep 11 17:22:08 server-0 k3s[2234]: time="2025-09-11T17:22:08Z" level=info msg="Running kube-apiserver --admission-control-config-file=/var/lib/rancher/k3s/server/psa.yaml --advertise-address=10.10.10.100 --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --encryption-provider-config-automatic-reload=true --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
修复:

默认情况下,K3s 不会将 --disable-admission-plugins 设置为任何值。 请遵循文档并根据您的环境创建 ServiceAccount 对象。 如果此检查失败,请编辑 K3s 配置文件 /etc/rancher/k3s/config.yaml,删除与以下类似的行。

kube-apiserver-arg:
  - "disable-admission-plugins=ServiceAccount"

1.2.13 确保 admission 控制插件 NamespaceLifecycle 已设置(自动化)

*结果:*PASS

Audit:

journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1

预期结果: '--disable-admission-plugins' 不包含 'NamespaceLifecycle',或 '--disable-admission-plugins' 不存在

返回值:
Sep 11 17:22:08 server-0 k3s[2234]: time="2025-09-11T17:22:08Z" level=info msg="Running kube-apiserver --admission-control-config-file=/var/lib/rancher/k3s/server/psa.yaml --advertise-address=10.10.10.100 --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --encryption-provider-config-automatic-reload=true --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
修复:

默认情况下,K3s 不会将 --disable-admission-plugins 设置为任何值。 如果此检查失败,请编辑 K3s 配置文件 /etc/rancher/k3s/config.yaml,删除与以下类似的行。

kube-apiserver-arg:
  - "disable-admission-plugins=...,NamespaceLifecycle,..."

1.2.14 确保 admission 控制插件 NodeRestriction 已设置(自动化)

*结果:*PASS

Audit:

journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'

预期结果: '--enable-admission-plugins' 包含 'NodeRestriction'

返回值:
Sep 11 17:22:08 server-0 k3s[2234]: time="2025-09-11T17:22:08Z" level=info msg="Running kube-apiserver --admission-control-config-file=/var/lib/rancher/k3s/server/psa.yaml --advertise-address=10.10.10.100 --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --encryption-provider-config-automatic-reload=true --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
修复:

默认情况下,K3s 将 --enable-admission-plugins 设置为 NodeRestriction。 如果使用 K3s 配置文件 /etc/rancher/k3s/config.yaml,请检查是否覆盖了 admission 插件;如有覆盖,请确保列表中包含 NodeRestriction。

kube-apiserver-arg:
  - "enable-admission-plugins=...,NodeRestriction,..."
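与上一项互补,该控制要求 NodeRestriction 出现在启用的插件列表中。下面是一个假设性的 shell 片段,演示这一包含性检查;LINE 为虚构的参数节选,实际检查时可替换为审计命令的真实输出。

```shell
# 假设示例:LINE 为虚构的 kube-apiserver 启动参数片段
LINE='--enable-admission-plugins=NodeRestriction --profiling=false'
plugins=$(printf '%s\n' "$LINE" | grep -o -- '--enable-admission-plugins=[^ ]*' | cut -d= -f2)
# 确认 NodeRestriction 出现在插件列表中
case ",$plugins," in
  *,NodeRestriction,*) echo "PASS" ;;  # 输出: PASS
  *)                   echo "FAIL" ;;
esac
```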

1.2.15 确保 --profiling 参数设置为 false(自动化)

*结果:*PASS

Audit:

journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'profiling'

预期结果: '--profiling' 等于 'false'

返回值:
Sep 11 17:22:08 server-0 k3s[2234]: time="2025-09-11T17:22:08Z" level=info msg="Running kube-apiserver --admission-control-config-file=/var/lib/rancher/k3s/server/psa.yaml --advertise-address=10.10.10.100 --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --encryption-provider-config-automatic-reload=true --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
修复:

默认情况下,K3s 将 --profiling 参数设置为 false。 如果此检查失败,请编辑 K3s 配置文件 /etc/rancher/k3s/config.yaml,删除与以下类似的行。

kube-apiserver-arg:
  - "profiling=true"

1.2.16 确保 --audit-log-path 参数已设置(手动)

*结果:*PASS

Audit:

journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1

预期结果: '--audit-log-path' 存在

返回值:
Sep 11 17:22:08 server-0 k3s[2234]: time="2025-09-11T17:22:08Z" level=info msg="Running kube-apiserver --admission-control-config-file=/var/lib/rancher/k3s/server/psa.yaml --advertise-address=10.10.10.100 --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --encryption-provider-config-automatic-reload=true --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
修复:

编辑 K3s 配置文件 /etc/rancher/k3s/config.yaml,并将 audit-log-path 参数设置为您希望审计日志写入的合适路径和文件,例如,

kube-apiserver-arg:
  - "audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log"

1.2.17 确保 --audit-log-maxage 参数设置为 30 或适当的值(手动)

*结果:*PASS

Audit:

journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1

预期结果: '--audit-log-maxage' 大于或等于 30

返回值:
Sep 11 17:22:08 server-0 k3s[2234]: time="2025-09-11T17:22:08Z" level=info msg="Running kube-apiserver --admission-control-config-file=/var/lib/rancher/k3s/server/psa.yaml --advertise-address=10.10.10.100 --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --encryption-provider-config-automatic-reload=true --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
修复:

在控制平面节点上编辑 K3s 配置文件 /etc/rancher/k3s/config.yaml,并将 audit-log-maxage 参数设置为 30 或适当的天数,例如,

kube-apiserver-arg:
  - "audit-log-maxage=30"

1.2.18 确保 --audit-log-maxbackup 参数设置为 10 或适当的值(手动)

*结果:*PASS

Audit:

journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1

预期结果: '--audit-log-maxbackup' 大于或等于 10

返回值:
Sep 11 17:22:08 server-0 k3s[2234]: time="2025-09-11T17:22:08Z" level=info msg="Running kube-apiserver --admission-control-config-file=/var/lib/rancher/k3s/server/psa.yaml --advertise-address=10.10.10.100 --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --encryption-provider-config-automatic-reload=true --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
修复:

在控制平面节点上编辑 K3s 配置文件 /etc/rancher/k3s/config.yaml,并将 audit-log-maxbackup 参数设置为 10 或适当的值。例如,

kube-apiserver-arg:
  - "audit-log-maxbackup=10"

1.2.19 确保 --audit-log-maxsize 参数设置为 100 或适当的值(手动)

*结果:*PASS

Audit:

journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1

预期结果: '--audit-log-maxsize' 大于或等于 100

返回值:
Sep 11 17:22:08 server-0 k3s[2234]: time="2025-09-11T17:22:08Z" level=info msg="Running kube-apiserver --admission-control-config-file=/var/lib/rancher/k3s/server/psa.yaml --advertise-address=10.10.10.100 --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --encryption-provider-config-automatic-reload=true --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
修复:

在控制平面节点上编辑 K3s 配置文件 /etc/rancher/k3s/config.yaml,并将 audit-log-maxsize 参数设置为合适的大小(以 MB 为单位)。例如,

kube-apiserver-arg:
  - "audit-log-maxsize=100"
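1.2.17 至 1.2.19 三项的检查逻辑相同:从启动参数中提取数值并与最小阈值比较。下面是一个假设性的 shell 片段,演示这种比较方式;LINE 为虚构的参数节选,check_min 为本示例自定义的辅助函数。

```shell
# 假设示例:LINE 为虚构的 kube-apiserver 启动参数片段
LINE='--audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100'
# check_min <参数名> <最小值>:提取 --<参数名>= 的数值并与最小值比较
check_min() {
  val=$(printf '%s\n' "$LINE" | grep -o -- "--$1=[0-9]*" | cut -d= -f2)
  if [ -n "$val" ] && [ "$val" -ge "$2" ]; then
    echo "PASS: $1=$val"
  else
    echo "FAIL: $1"
  fi
}
check_min audit-log-maxage 30     # 输出: PASS: audit-log-maxage=30
check_min audit-log-maxbackup 10  # 输出: PASS: audit-log-maxbackup=10
check_min audit-log-maxsize 100   # 输出: PASS: audit-log-maxsize=100
```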

1.2.20 确保 --request-timeout 参数设置为适当的值(手动)

*结果:*警告

修复: 根据 CIS 指南,"建议按需设置此限制,仅在必要时才更改 60 秒的默认值"。 如有需要,请编辑 K3s 配置文件 /etc/rancher/k3s/config.yaml 并设置以下参数。例如,

kube-apiserver-arg:
  - "request-timeout=300s"

1.2.21 确保 --service-account-lookup 参数设置为 true(自动化)

*结果:*PASS

Audit:

journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1

预期结果: '--service-account-lookup' 不存在,或 '--service-account-lookup' 等于 'true'

返回值:
Sep 11 17:22:08 server-0 k3s[2234]: time="2025-09-11T17:22:08Z" level=info msg="Running kube-apiserver --admission-control-config-file=/var/lib/rancher/k3s/server/psa.yaml --advertise-address=10.10.10.100 --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --encryption-provider-config-automatic-reload=true --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
修复:

默认情况下,K3s 不会设置 --service-account-lookup 参数,此时该参数的默认值 true 生效。 如果此检查失败,请编辑 K3s 配置文件 /etc/rancher/k3s/config.yaml 并设置 service-account-lookup。例如,

kube-apiserver-arg:
  - "service-account-lookup=true"

或者,您可以从此文件中删除 service-account-lookup 参数,以便默认设置生效。
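下面用一个假设性的 shell 片段演示"参数缺省或显式为 true 均视为通过"的判断逻辑;LINE 为虚构的参数节选,其中未出现 --service-account-lookup。

```shell
# 假设示例:LINE 为虚构的参数片段,其中未出现 --service-account-lookup
LINE='--secure-port=6444 --profiling=false'
val=$(printf '%s\n' "$LINE" | grep -o -- '--service-account-lookup=[^ ]*' | cut -d= -f2)
# 参数缺省(默认值 true 生效)或显式设置为 true 均视为通过
if [ -z "$val" ] || [ "$val" = "true" ]; then
  echo "PASS"   # 输出: PASS
else
  echo "FAIL: service-account-lookup=$val"
fi
```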

1.2.22 确保 --service-account-key-file 参数设置正确(自动化)

*结果:*PASS

Audit:

journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1

预期结果: '--service-account-key-file' 存在

返回值:
Sep 11 17:22:08 server-0 k3s[2234]: time="2025-09-11T17:22:08Z" level=info msg="Running kube-apiserver --admission-control-config-file=/var/lib/rancher/k3s/server/psa.yaml --advertise-address=10.10.10.100 --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --encryption-provider-config-automatic-reload=true --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Remediation:

K3s automatically generates and sets the service account key file. It is located at /var/lib/rancher/k3s/server/tls/service.key. If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines like below.

kube-apiserver-arg:
  - "service-account-key-file=<path>"
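
The pattern used throughout these audits — checking whether a given flag appears on the logged kube-apiserver command line — can be scripted. A minimal sketch; the helper name `has_flag` and the abbreviated sample line are illustrative, not part of K3s (a live audit would feed in `journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1`):

```shell
#!/bin/sh
# has_flag FLAG LINE — reports whether --FLAG= appears in LINE.
has_flag() {
  case " $2" in
    *" --$1="*) echo "--$1 is present" ;;
    *)          echo "--$1 is NOT present" ;;
  esac
}

# Abbreviated stand-in for the logged kube-apiserver command line.
SAMPLE='Running kube-apiserver --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --profiling=false'
has_flag service-account-key-file "$SAMPLE"
has_flag basic-auth-file "$SAMPLE"
```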

1.2.23 Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)

*Result:* PASS

Audit:

if [ "$(journalctl -m -u k3s | grep -m1 'Managed etcd cluster' | wc -l)" -gt 0 ]; then
  journalctl -m -u k3s | grep -m1 'Running kube-apiserver' | tail -n1
else
  echo "--etcd-certfile AND --etcd-keyfile"
fi

Expected Result: '--etcd-certfile' is present AND '--etcd-keyfile' is present

Returned Value:
Sep 11 17:22:08 server-0 k3s[2234]: time="2025-09-11T17:22:08Z" level=info msg="Running kube-apiserver --admission-control-config-file=/var/lib/rancher/k3s/server/psa.yaml --advertise-address=10.10.10.100 --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --encryption-provider-config-automatic-reload=true --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Remediation:

K3s automatically generates and sets the etcd certificate and key files. They are located at /var/lib/rancher/k3s/server/tls/etcd/client.crt and /var/lib/rancher/k3s/server/tls/etcd/client.key. If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines like below.

kube-apiserver-arg:
  - "etcd-certfile=<path>"
  - "etcd-keyfile=<path>"

1.2.24 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)

*Result:* PASS

Audit:

journalctl -m -u k3s | grep -A1 'Running kube-apiserver' | tail -n2

Expected Result: '--tls-cert-file' is present AND '--tls-private-key-file' is present

Returned Value:
Sep 11 17:22:08 server-0 k3s[2234]: time="2025-09-11T17:22:08Z" level=info msg="Running kube-apiserver --admission-control-config-file=/var/lib/rancher/k3s/server/psa.yaml --advertise-address=10.10.10.100 --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --encryption-provider-config-automatic-reload=true --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Sep 11 17:22:08 server-0 k3s[2234]: time="2025-09-11T17:22:08Z" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --profiling=false --secure-port=10259"
Remediation:

By default, K3s automatically generates and provides the TLS certificate and private key for the apiserver. They are generated and located at /var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt and /var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key. If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines like below.

kube-apiserver-arg:
  - "tls-cert-file=<path>"
  - "tls-private-key-file=<path>"

1.2.25 Ensure that the --client-ca-file argument is set as appropriate (Automated)

*Result:* PASS

Audit:

journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'client-ca-file'

Expected Result: '--client-ca-file' is present

Returned Value:
Sep 11 17:22:08 server-0 k3s[2234]: time="2025-09-11T17:22:08Z" level=info msg="Running kube-apiserver --admission-control-config-file=/var/lib/rancher/k3s/server/psa.yaml --advertise-address=10.10.10.100 --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --encryption-provider-config-automatic-reload=true --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Remediation:

By default, K3s automatically provides the client certificate authority file. It is generated and located at /var/lib/rancher/k3s/server/tls/client-ca.crt. If for some reason you need to provide your own CA certificate, look into using the k3s certificate command line tool. If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines like below.

kube-apiserver-arg:
  - "client-ca-file=<path>"

1.2.26 Ensure that the --etcd-cafile argument is set as appropriate (Automated)

*Result:* PASS

Audit:

journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-cafile'

Expected Result: '--etcd-cafile' is present

Returned Value:
Sep 11 17:22:08 server-0 k3s[2234]: time="2025-09-11T17:22:08Z" level=info msg="Running kube-apiserver --admission-control-config-file=/var/lib/rancher/k3s/server/psa.yaml --advertise-address=10.10.10.100 --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --encryption-provider-config-automatic-reload=true --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Remediation:

By default, K3s automatically provides the etcd certificate authority file. It is generated and located at /var/lib/rancher/k3s/server/tls/etcd/server-ca.crt. If for some reason you need to provide your own CA certificate, look into using the k3s certificate command line tool. If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines like below.

kube-apiserver-arg:
  - "etcd-cafile=<path>"

1.2.27 Ensure that the --encryption-provider-config argument is set as appropriate (Manual)

*Result:* PASS

Audit:

journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'encryption-provider-config'

Expected Result: '--encryption-provider-config' is present

Returned Value:
Sep 11 17:22:08 server-0 k3s[2234]: time="2025-09-11T17:22:08Z" level=info msg="Running kube-apiserver --admission-control-config-file=/var/lib/rancher/k3s/server/psa.yaml --advertise-address=10.10.10.100 --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --encryption-provider-config-automatic-reload=true --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Remediation:

K3s can be configured to use encryption providers to encrypt secrets at rest. Edit the K3s config file /etc/rancher/k3s/config.yaml on the control plane node and set the parameter secrets-encryption: true. Secrets encryption can then be managed with the k3s secrets-encrypt command line tool. If needed, you can find the generated encryption config at /var/lib/rancher/k3s/server/cred/encryption-config.json.
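
In config-file form, the remediation above is a single key. A minimal sketch of /etc/rancher/k3s/config.yaml (merge the key into any existing file rather than replacing it):

```yaml
# /etc/rancher/k3s/config.yaml
# Enables encryption of secrets at rest; K3s then generates
# /var/lib/rancher/k3s/server/cred/encryption-config.json and passes
# --encryption-provider-config to the embedded kube-apiserver.
secrets-encryption: true
```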

1.2.28 Ensure that encryption providers are appropriately configured (Manual)

*Result:* PASS

Audit:

ENCRYPTION_PROVIDER_CONFIG=$(journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -- --encryption-provider-config | sed 's%.*encryption-provider-config[= ]\([^ ]*\).*%\1%')
if test -e $ENCRYPTION_PROVIDER_CONFIG; then grep -o 'providers\"\:\[.*\]' $ENCRYPTION_PROVIDER_CONFIG | grep -o "[A-Za-z]*" | head -2 | tail -1  | sed 's/^/provider=/'; fi

Expected Result: 'provider' contains valid elements from 'aescbc,kms,secretbox'

Returned Value:
provider=aescbc
Remediation:

K3s can be configured to use encryption providers to encrypt secrets at rest. K3s will utilize the aescbc provider. Edit the K3s config file /etc/rancher/k3s/config.yaml on the control plane node and set the parameter secrets-encryption: true. Secrets encryption can then be managed with the k3s secrets-encrypt command line tool. If needed, you can find the generated encryption config at /var/lib/rancher/k3s/server/cred/encryption-config.json.
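
The audit's extraction pipeline can be exercised against a sample file to see what it pulls out. A sketch with slightly simplified quoting; the JSON here is an abbreviated, illustrative stand-in for the real encryption-config.json, which also holds base64 key material and must remain readable by root only:

```shell
#!/bin/sh
# Write an abbreviated encryption config to a temp file.
CFG=$(mktemp)
printf '%s' '{"kind":"EncryptionConfiguration","resources":[{"resources":["secrets"],"providers":[{"aescbc":{"keys":[]}},{"identity":{}}]}]}' > "$CFG"

# Same idea as the audit: isolate the providers array, take the first
# provider name after the literal word "providers".
provider=$(grep -o 'providers":\[.*\]' "$CFG" | grep -o '[A-Za-z]*' | head -2 | tail -1 | sed 's/^/provider=/')
echo "$provider"
rm -f "$CFG"
```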

1.2.29 Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Automated)

*Result:* PASS

Audit:

journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'tls-cipher-suites'

Expected Result: '--tls-cipher-suites' contains valid elements from 'TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384'

Returned Value:
Sep 11 17:22:08 server-0 k3s[2234]: time="2025-09-11T17:22:08Z" level=info msg="Running kube-apiserver --admission-control-config-file=/var/lib/rancher/k3s/server/psa.yaml --advertise-address=10.10.10.100 --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --encryption-provider-config-automatic-reload=true --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Remediation:

By default, the K3s kube-apiserver complies with this test. Changes to these values may cause regressions, so ensure that all apiserver clients support the new TLS configuration before applying it in production deployments. If a custom TLS configuration is required, consider creating a custom version of this rule that aligns with your requirements. If this check fails, remove any custom configuration around tls-cipher-suites, or update the /etc/rancher/k3s/config.yaml file to match the default by adding the following:

kube-apiserver-arg:
  - "tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305"
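
Verifying that every configured suite is on the benchmark's allowed list can be done mechanically. A sketch; the allowlist below is abbreviated to the six suites K3s configures by default (the full benchmark list above is longer), and the sample SUITES string is illustrative:

```shell
#!/bin/sh
# Every suite in SUITES must appear in ALLOWED for the control to pass.
ALLOWED="TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305 TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305"
SUITES="TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305"

result=PASS
for s in $(printf '%s' "$SUITES" | tr ',' ' '); do
  case " $ALLOWED " in
    *" $s "*) ;;          # suite is on the allowlist
    *) result=FAIL ;;     # any unknown suite fails the check
  esac
done
echo "$result"
```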

1.3 Controller Manager

1.3.1 Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Manual)

*Result:* PASS

Audit:

journalctl -m -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'terminated-pod-gc-threshold'

Expected Result: '--terminated-pod-gc-threshold' is present

Returned Value:
Sep 11 17:22:08 server-0 k3s[2234]: time="2025-09-11T17:22:08Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --terminated-pod-gc-threshold=10 --use-service-account-credentials=true"
Remediation:

Edit the K3s config file /etc/rancher/k3s/config.yaml on the control plane node and set --terminated-pod-gc-threshold to an appropriate threshold, for example:

kube-controller-manager-arg:
  - "terminated-pod-gc-threshold=10"
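
To confirm the threshold actually in effect, the value can be parsed out of the logged command line. A sketch; `get_flag_value` is an illustrative helper, not a K3s command, and the sample line is abbreviated (a live audit would use the journalctl output instead):

```shell
#!/bin/sh
# get_flag_value FLAG LINE — prints the value of --FLAG= from LINE.
get_flag_value() {
  printf '%s\n' "$2" | sed -n "s/.*--$1=\([^ \"]*\).*/\1/p"
}

LINE='Running kube-controller-manager --profiling=false --terminated-pod-gc-threshold=10 --use-service-account-credentials=true'
get_flag_value terminated-pod-gc-threshold "$LINE"
```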

1.3.2 Ensure that the --profiling argument is set to false (Automated)

*Result:* PASS

Audit:

journalctl -m -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'profiling'

Expected Result: '--profiling' is equal to 'false'

Returned Value:
Sep 11 17:22:08 server-0 k3s[2234]: time="2025-09-11T17:22:08Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --terminated-pod-gc-threshold=10 --use-service-account-credentials=true"
Remediation:

By default, K3s sets the --profiling argument to false. If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines like below.

kube-controller-manager-arg:
  - "profiling=true"

1.3.3 Ensure that the --use-service-account-credentials argument is set to true (Automated)

*Result:* PASS

Audit:

journalctl -m -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'use-service-account-credentials'

Expected Result: '--use-service-account-credentials' is not equal to 'false'

Returned Value:
Sep 11 17:22:08 server-0 k3s[2234]: time="2025-09-11T17:22:08Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --terminated-pod-gc-threshold=10 --use-service-account-credentials=true"
Remediation:

By default, K3s sets the --use-service-account-credentials argument to true. If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines like below.

kube-controller-manager-arg:
  - "use-service-account-credentials=false"

1.3.4 Ensure that the --service-account-private-key-file argument is set as appropriate (Automated)

*Result:* PASS

Audit:

journalctl -m -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'service-account-private-key-file'

Expected Result: '--service-account-private-key-file' is present

Returned Value:
Sep 11 17:22:08 server-0 k3s[2234]: time="2025-09-11T17:22:08Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --terminated-pod-gc-threshold=10 --use-service-account-credentials=true"
Remediation:

By default, K3s automatically provides the service account private key file. It is generated and located at /var/lib/rancher/k3s/server/tls/service.current.key. If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines like below.

kube-controller-manager-arg:
  - "service-account-private-key-file=<path>"

1.3.5 Ensure that the --root-ca-file argument is set as appropriate (Automated)

*Result:* PASS

Audit:

journalctl -m -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'root-ca-file'

Expected Result: '--root-ca-file' is present

Returned Value:
Sep 11 17:22:08 server-0 k3s[2234]: time="2025-09-11T17:22:08Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --terminated-pod-gc-threshold=10 --use-service-account-credentials=true"
Remediation:

By default, K3s automatically provides the root CA file. It is generated and located at /var/lib/rancher/k3s/server/tls/server-ca.crt. If for some reason you need to provide your own CA certificate, look into using the k3s certificate command line tool. If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines like below.

kube-controller-manager-arg:
  - "root-ca-file=<path>"

1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)

*Result:* PASS

Audit:

journalctl -m -u k3s | grep 'Running kube-controller-manager' | tail -n1

Expected Result: '--feature-gates' is present OR '--feature-gates' is not present

Returned Value:
Sep 11 17:22:08 server-0 k3s[2234]: time="2025-09-11T17:22:08Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --terminated-pod-gc-threshold=10 --use-service-account-credentials=true"
修正:

默认情况下,K3s 不会设置 RotateKubeletServerCertificate 功能开关,该开关在 Kubernetes 中默认即为 true。 如果此检查失败,请编辑 K3s 配置文件 /etc/rancher/k3s/config.yaml,去除以下类似的行。

kube-controller-manager-arg:
  - "feature-gates=RotateKubeletServerCertificate=false"

1.3.7 确保 --bind-address 参数设置为 127.0.0.1(自动化)

*结果:*PASS

Audit:

journalctl -m -u k3s | grep 'Running kube-controller-manager' | tail -n1

预期结果: '--bind-address' 等于 '127.0.0.1' 或者 '--bind-address' 不存在

返回值:
Sep 11 17:22:08 server-0 k3s[2234]: time="2025-09-11T17:22:08Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --terminated-pod-gc-threshold=10 --use-service-account-credentials=true"
修正:

默认情况下,K3s 将 --bind-address 参数设置为 127.0.0.1。如果此检查失败,请编辑 K3s 配置文件 /etc/rancher/k3s/config.yaml,并去除以下类似的行。

kube-controller-manager-arg:
  - "bind-address=<IP>"

1.4 调度程序

1.4.1 确保 --profiling 参数设置为 false(自动化)

*结果:*PASS

Audit:

journalctl -m -u k3s | grep 'Running kube-scheduler' | tail -n1 | grep 'profiling'

预期结果: '--profiling' 等于 'false'

返回值:
Sep 11 17:22:08 server-0 k3s[2234]: time="2025-09-11T17:22:08Z" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --profiling=false --secure-port=10259"
修正:

默认情况下,K3s 将 --profiling 参数设置为 false。 如果此检查失败,请编辑 K3s 配置文件 /etc/rancher/k3s/config.yaml,并去除以下类似的行。

kube-scheduler-arg:
  - "profiling=true"

1.4.2 确保 --bind-address 参数设置为 127.0.0.1(自动化)

*结果:*PASS

Audit:

journalctl -m -u k3s | grep 'Running kube-scheduler' | tail -n1 | grep 'bind-address'

预期结果: '--bind-address' 等于 '127.0.0.1' 或者 '--bind-address' 不存在

返回值:
Sep 11 17:22:08 server-0 k3s[2234]: time="2025-09-11T17:22:08Z" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --profiling=false --secure-port=10259"
修正:

默认情况下,K3s 将 --bind-address 参数设置为 127.0.0.1。如果此检查失败,请编辑 K3s 配置文件 /etc/rancher/k3s/config.yaml,并去除以下类似的行。

kube-scheduler-arg:
  - "bind-address=<IP>"

2 Etcd 节点配置

2.1 确保 --cert-file 和 --key-file 参数设置为适当值(手动)

*结果:*PASS

Audit:

预期结果: '.client-transport-security.cert-file' 等于 '/var/lib/rancher/k3s/server/tls/etcd/server-client.crt' 且 '.client-transport-security.key-file' 等于 '/var/lib/rancher/k3s/server/tls/etcd/server-client.key'

返回值:
advertise-client-urls: https://10.10.10.100:2379
client-transport-security:
  cert-file: /var/lib/rancher/k3s/server/tls/etcd/server-client.crt
  client-cert-auth: true
  key-file: /var/lib/rancher/k3s/server/tls/etcd/server-client.key
  trusted-ca-file: /var/lib/rancher/k3s/server/tls/etcd/server-ca.crt
data-dir: /var/lib/rancher/k3s/server/db/etcd
election-timeout: 5000
experimental-initial-corrupt-check: true
experimental-watch-progress-notify-interval: 5000000000
heartbeat-interval: 500
initial-advertise-peer-urls: https://10.10.10.100:2380
initial-cluster: server-0-08c675b0=https://10.10.10.100:2380
initial-cluster-state: new
listen-client-http-urls: https://127.0.0.1:2382
listen-client-urls: https://127.0.0.1:2379,https://10.10.10.100:2379
listen-metrics-urls: http://127.0.0.1:2381
listen-peer-urls: https://127.0.0.1:2380,https://10.10.10.100:2380
log-outputs:
- stderr
logger: zap
name: server-0-08c675b0
peer-transport-security:
  cert-file: /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.crt
  client-cert-auth: true
  key-file: /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.key
  trusted-ca-file: /var/lib/rancher/k3s/server/tls/etcd/peer-ca.crt
snapshot-count: 10000
修正:

如果使用 sqlite 或外部数据库,etcd 检查不适用。 当使用嵌入式 etcd 时,K3s 会为 etcd 生成证书和密钥文件,位于 /var/lib/rancher/k3s/server/tls/etcd/。 如果此检查失败,请确保配置文件 /var/lib/rancher/k3s/server/db/etcd/config 未被修改为使用自定义证书和密钥文件。
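etcd 的配置是一个 YAML 文件。下面是一个示意脚本,演示在没有 yq 等工具时,如何用 awk 粗略读取 client-transport-security 小节中的 cert-file。为避免依赖真实集群,脚本先写入一个与本文"返回值"同构的示例配置文件(真实路径为 /var/lib/rancher/k3s/server/db/etcd/config):

```shell
# 写入一个示例 etcd 配置(示意数据,仅用于演示)
cat > /tmp/etcd-config-sample.yaml <<'EOF'
client-transport-security:
  cert-file: /var/lib/rancher/k3s/server/tls/etcd/server-client.crt
  client-cert-auth: true
  key-file: /var/lib/rancher/k3s/server/tls/etcd/server-client.key
peer-transport-security:
  cert-file: /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.crt
EOF

# 进入 client-transport-security 小节后,打印第一个 cert-file 的值
awk '/^client-transport-security:/{s=1; next}
     /^[^ ]/{s=0}
     s && $1=="cert-file:"{print $2; exit}' /tmp/etcd-config-sample.yaml
```

同样的思路也适用于 2.2-2.7 中对 client-cert-auth、auto-tls、trusted-ca-file 等键的核对。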

2.2 确保 --client-cert-auth 参数设置为 true(手动)

*结果:*PASS

Audit:

预期结果: '.client-transport-security.client-cert-auth' 等于 'true'

返回值:
advertise-client-urls: https://10.10.10.100:2379
client-transport-security:
  cert-file: /var/lib/rancher/k3s/server/tls/etcd/server-client.crt
  client-cert-auth: true
  key-file: /var/lib/rancher/k3s/server/tls/etcd/server-client.key
  trusted-ca-file: /var/lib/rancher/k3s/server/tls/etcd/server-ca.crt
data-dir: /var/lib/rancher/k3s/server/db/etcd
election-timeout: 5000
experimental-initial-corrupt-check: true
experimental-watch-progress-notify-interval: 5000000000
heartbeat-interval: 500
initial-advertise-peer-urls: https://10.10.10.100:2380
initial-cluster: server-0-08c675b0=https://10.10.10.100:2380
initial-cluster-state: new
listen-client-http-urls: https://127.0.0.1:2382
listen-client-urls: https://127.0.0.1:2379,https://10.10.10.100:2379
listen-metrics-urls: http://127.0.0.1:2381
listen-peer-urls: https://127.0.0.1:2380,https://10.10.10.100:2380
log-outputs:
- stderr
logger: zap
name: server-0-08c675b0
peer-transport-security:
  cert-file: /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.crt
  client-cert-auth: true
  key-file: /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.key
  trusted-ca-file: /var/lib/rancher/k3s/server/tls/etcd/peer-ca.crt
snapshot-count: 10000
修正:

如果使用 sqlite 或外部数据库,etcd 检查不适用。 当使用嵌入式 etcd 时,K3s 将 --client-cert-auth 参数设置为 true。 如果此检查失败,请确保配置文件 /var/lib/rancher/k3s/server/db/etcd/config 未被修改为禁用客户端证书认证。

2.3 确保 --auto-tls 参数未设置为 true(手动)

*结果:*PASS

Audit:

预期结果: '.client-transport-security.auto-tls' 存在 或 '.client-transport-security.auto-tls' 不存在

返回值:
advertise-client-urls: https://10.10.10.100:2379
client-transport-security:
  cert-file: /var/lib/rancher/k3s/server/tls/etcd/server-client.crt
  client-cert-auth: true
  key-file: /var/lib/rancher/k3s/server/tls/etcd/server-client.key
  trusted-ca-file: /var/lib/rancher/k3s/server/tls/etcd/server-ca.crt
data-dir: /var/lib/rancher/k3s/server/db/etcd
election-timeout: 5000
experimental-initial-corrupt-check: true
experimental-watch-progress-notify-interval: 5000000000
heartbeat-interval: 500
initial-advertise-peer-urls: https://10.10.10.100:2380
initial-cluster: server-0-08c675b0=https://10.10.10.100:2380
initial-cluster-state: new
listen-client-http-urls: https://127.0.0.1:2382
listen-client-urls: https://127.0.0.1:2379,https://10.10.10.100:2379
listen-metrics-urls: http://127.0.0.1:2381
listen-peer-urls: https://127.0.0.1:2380,https://10.10.10.100:2380
log-outputs:
- stderr
logger: zap
name: server-0-08c675b0
peer-transport-security:
  cert-file: /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.crt
  client-cert-auth: true
  key-file: /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.key
  trusted-ca-file: /var/lib/rancher/k3s/server/tls/etcd/peer-ca.crt
snapshot-count: 10000
修正:

如果使用 sqlite 或外部数据库,etcd 检查不适用。 当使用嵌入式 etcd 时,K3s 不会设置 --auto-tls 参数。 如果此检查失败,请编辑主节点上的 etcd 配置文件 /var/lib/rancher/k3s/server/db/etcd/config,去除 --auto-tls 参数或将其设置为 false。

client-transport-security:
  auto-tls: false

2.4 确保 --peer-cert-file 和 --peer-key-file 参数设置为适当值(手动)

*结果:*PASS

Audit:

预期结果: '.peer-transport-security.cert-file' 等于 '/var/lib/rancher/k3s/server/tls/etcd/peer-server-client.crt' 并且 '.peer-transport-security.key-file' 等于 '/var/lib/rancher/k3s/server/tls/etcd/peer-server-client.key'

返回值:
advertise-client-urls: https://10.10.10.100:2379
client-transport-security:
  cert-file: /var/lib/rancher/k3s/server/tls/etcd/server-client.crt
  client-cert-auth: true
  key-file: /var/lib/rancher/k3s/server/tls/etcd/server-client.key
  trusted-ca-file: /var/lib/rancher/k3s/server/tls/etcd/server-ca.crt
data-dir: /var/lib/rancher/k3s/server/db/etcd
election-timeout: 5000
experimental-initial-corrupt-check: true
experimental-watch-progress-notify-interval: 5000000000
heartbeat-interval: 500
initial-advertise-peer-urls: https://10.10.10.100:2380
initial-cluster: server-0-08c675b0=https://10.10.10.100:2380
initial-cluster-state: new
listen-client-http-urls: https://127.0.0.1:2382
listen-client-urls: https://127.0.0.1:2379,https://10.10.10.100:2379
listen-metrics-urls: http://127.0.0.1:2381
listen-peer-urls: https://127.0.0.1:2380,https://10.10.10.100:2380
log-outputs:
- stderr
logger: zap
name: server-0-08c675b0
peer-transport-security:
  cert-file: /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.crt
  client-cert-auth: true
  key-file: /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.key
  trusted-ca-file: /var/lib/rancher/k3s/server/tls/etcd/peer-ca.crt
snapshot-count: 10000
修正:

如果使用 sqlite 或外部数据库,etcd 检查不适用。 当使用嵌入式 etcd 时,K3s 会为 etcd 生成对等证书和密钥文件,位于 /var/lib/rancher/k3s/server/tls/etcd/。 如果此检查失败,请确保配置文件 /var/lib/rancher/k3s/server/db/etcd/config 未被修改为使用自定义对等证书和密钥文件。

2.5 确保 --peer-client-cert-auth 参数设置为 true(手动)

*结果:*PASS

Audit:

预期结果: '.peer-transport-security.client-cert-auth' 等于 'true'

返回值:
advertise-client-urls: https://10.10.10.100:2379
client-transport-security:
  cert-file: /var/lib/rancher/k3s/server/tls/etcd/server-client.crt
  client-cert-auth: true
  key-file: /var/lib/rancher/k3s/server/tls/etcd/server-client.key
  trusted-ca-file: /var/lib/rancher/k3s/server/tls/etcd/server-ca.crt
data-dir: /var/lib/rancher/k3s/server/db/etcd
election-timeout: 5000
experimental-initial-corrupt-check: true
experimental-watch-progress-notify-interval: 5000000000
heartbeat-interval: 500
initial-advertise-peer-urls: https://10.10.10.100:2380
initial-cluster: server-0-08c675b0=https://10.10.10.100:2380
initial-cluster-state: new
listen-client-http-urls: https://127.0.0.1:2382
listen-client-urls: https://127.0.0.1:2379,https://10.10.10.100:2379
listen-metrics-urls: http://127.0.0.1:2381
listen-peer-urls: https://127.0.0.1:2380,https://10.10.10.100:2380
log-outputs:
- stderr
logger: zap
name: server-0-08c675b0
peer-transport-security:
  cert-file: /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.crt
  client-cert-auth: true
  key-file: /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.key
  trusted-ca-file: /var/lib/rancher/k3s/server/tls/etcd/peer-ca.crt
snapshot-count: 10000
修正:

如果使用 sqlite 或外部数据库,etcd 检查不适用。 当使用嵌入式 etcd 时,K3s 将 --peer-client-cert-auth 参数设置为 true。 如果此检查失败,请确保配置文件 /var/lib/rancher/k3s/server/db/etcd/config 未被修改为禁用对等客户端证书认证。

2.6 确保 --peer-auto-tls 参数未设置为 true(手动)

*结果:*PASS

Audit:

预期结果: '.peer-transport-security.auto-tls' 存在 或 '.peer-transport-security.auto-tls' 不存在

返回值:
advertise-client-urls: https://10.10.10.100:2379
client-transport-security:
  cert-file: /var/lib/rancher/k3s/server/tls/etcd/server-client.crt
  client-cert-auth: true
  key-file: /var/lib/rancher/k3s/server/tls/etcd/server-client.key
  trusted-ca-file: /var/lib/rancher/k3s/server/tls/etcd/server-ca.crt
data-dir: /var/lib/rancher/k3s/server/db/etcd
election-timeout: 5000
experimental-initial-corrupt-check: true
experimental-watch-progress-notify-interval: 5000000000
heartbeat-interval: 500
initial-advertise-peer-urls: https://10.10.10.100:2380
initial-cluster: server-0-08c675b0=https://10.10.10.100:2380
initial-cluster-state: new
listen-client-http-urls: https://127.0.0.1:2382
listen-client-urls: https://127.0.0.1:2379,https://10.10.10.100:2379
listen-metrics-urls: http://127.0.0.1:2381
listen-peer-urls: https://127.0.0.1:2380,https://10.10.10.100:2380
log-outputs:
- stderr
logger: zap
name: server-0-08c675b0
peer-transport-security:
  cert-file: /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.crt
  client-cert-auth: true
  key-file: /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.key
  trusted-ca-file: /var/lib/rancher/k3s/server/tls/etcd/peer-ca.crt
snapshot-count: 10000
修正:

如果使用 sqlite 或外部数据库,etcd 检查不适用。 当使用嵌入式 etcd 时,K3s 不会设置 --peer-auto-tls 参数。 如果此检查失败,请编辑主节点上的 etcd 配置文件 /var/lib/rancher/k3s/server/db/etcd/config,去除 --peer-auto-tls 参数或将其设置为 false。

peer-transport-security:
  auto-tls: false

2.7 确保 etcd 使用唯一的证书颁发机构(手动)

*结果:*PASS

Audit:

预期结果: '.peer-transport-security.trusted-ca-file' 等于 '/var/lib/rancher/k3s/server/tls/etcd/peer-ca.crt'

返回值:
advertise-client-urls: https://10.10.10.100:2379
client-transport-security:
  cert-file: /var/lib/rancher/k3s/server/tls/etcd/server-client.crt
  client-cert-auth: true
  key-file: /var/lib/rancher/k3s/server/tls/etcd/server-client.key
  trusted-ca-file: /var/lib/rancher/k3s/server/tls/etcd/server-ca.crt
data-dir: /var/lib/rancher/k3s/server/db/etcd
election-timeout: 5000
experimental-initial-corrupt-check: true
experimental-watch-progress-notify-interval: 5000000000
heartbeat-interval: 500
initial-advertise-peer-urls: https://10.10.10.100:2380
initial-cluster: server-0-08c675b0=https://10.10.10.100:2380
initial-cluster-state: new
listen-client-http-urls: https://127.0.0.1:2382
listen-client-urls: https://127.0.0.1:2379,https://10.10.10.100:2379
listen-metrics-urls: http://127.0.0.1:2381
listen-peer-urls: https://127.0.0.1:2380,https://10.10.10.100:2380
log-outputs:
- stderr
logger: zap
name: server-0-08c675b0
peer-transport-security:
  cert-file: /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.crt
  client-cert-auth: true
  key-file: /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.key
  trusted-ca-file: /var/lib/rancher/k3s/server/tls/etcd/peer-ca.crt
snapshot-count: 10000
修正:

如果使用 sqlite 或外部数据库,etcd 检查不适用。 当使用嵌入式 etcd 时,K3s 会为 etcd 生成唯一的证书颁发机构,位于 /var/lib/rancher/k3s/server/tls/etcd/peer-ca.crt。 如果此检查失败,请确保配置文件 /var/lib/rancher/k3s/server/db/etcd/config 未被修改为使用共享的证书颁发机构。

4.1 工作节点配置文件

4.1.1 确保 kubelet 服务文件权限设置为 600 或更严格(自动化)

*结果:*不适用

基本原理:

kubelet 嵌入在 k3s 进程中。没有 kubelet 服务文件,所有配置在运行时作为参数传递。

4.1.2 确保 kubelet 服务文件的所有权设置为 root:root(自动化)

*结果:*不适用

基本原理:

kubelet 嵌入在 k3s 进程中。没有 kubelet 服务文件,所有配置在运行时作为参数传递。

4.1.3 如果代理 kubeconfig 文件存在,确保权限设置为 600 或更严格(自动化)

*结果:*PASS

Audit:

/bin/sh -c 'if test -e /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig; then stat -c permissions=%a /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig; fi'

预期结果: 权限为 600,预期为 600 或更严格

返回值:
permissions=600
修正:

在每个工作节点上运行以下命令(根据您系统上的文件位置)。 例如,chmod 600 /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig
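上述 chmod 的效果可以用与审计相同的 stat 命令快速验证。下面是一个自包含的演示,在临时文件上操作,不会影响真实的 kubeconfig(stat 的 -c 选项假定为 GNU coreutils 版本):

```shell
# 创建一个临时文件模拟 kubeconfig,并演示收紧权限
f=$(mktemp)
chmod 644 "$f"
stat -c permissions=%a "$f"   # permissions=644:不符合基准
chmod 600 "$f"
stat -c permissions=%a "$f"   # permissions=600:符合 600 或更严格
rm -f "$f"
```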

4.1.4 如果代理 kubeconfig 文件存在,请确保所有权设置为 root:root(自动化)

*结果:*PASS

Audit:

/bin/sh -c 'if test -e /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig; then stat -c %U:%G /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig; fi'

预期结果: 'root:root' 存在

返回值:
root:root
修正:

在每个工作节点上运行以下命令(根据您系统上的文件位置)。 例如,chown root:root /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig

4.1.5 确保 --kubeconfig kubelet.conf 文件的权限设置为 600 或更严格(自动化)

*结果:*PASS

Audit:

/bin/sh -c 'if test -e /var/lib/rancher/k3s/agent/kubelet.kubeconfig; then stat -c permissions=%a /var/lib/rancher/k3s/agent/kubelet.kubeconfig; fi'

预期结果: 权限为 600,预期为 600 或更严格

返回值:
permissions=600
修正:

在每个工作节点上运行以下命令(根据您系统上的文件位置)。 例如,chmod 600 /var/lib/rancher/k3s/agent/kubelet.kubeconfig

4.1.6 确保 --kubeconfig kubelet.conf 文件的所有权设置为 root:root(自动化)

*结果:*PASS

Audit:

stat -c %U:%G /var/lib/rancher/k3s/agent/kubelet.kubeconfig

预期结果: 'root:root' 存在

返回值:
root:root
修正:

在每个工作节点上运行以下命令(根据您系统上的文件位置)。 例如,chown root:root /var/lib/rancher/k3s/agent/kubelet.kubeconfig

4.1.7 确保证书颁发机构文件权限设置为 600 或更严格(自动化)

*结果:*PASS

Audit:

stat -c permissions=%a /var/lib/rancher/k3s/agent/client-ca.crt

预期结果: 权限为 600,预期为 600 或更严格

返回值:
permissions=600
修正:

运行以下命令以修改 --client-ca-file 的文件权限。例如,chmod 600 /var/lib/rancher/k3s/agent/client-ca.crt

4.1.8 确保客户端证书颁发机构文件所有权设置为 root:root(自动化)

*结果:*PASS

Audit:

stat -c %U:%G /var/lib/rancher/k3s/agent/client-ca.crt

预期结果: 'root:root' 等于 'root:root'

返回值:
root:root
修正:

运行以下命令以修改 --client-ca-file 的所有权。chown root:root /var/lib/rancher/k3s/agent/client-ca.crt

4.1.9 确保 kubelet --config 配置文件的权限设置为 600 或更严格(自动化)

*结果:*不适用

基本原理:

kubelet 嵌入在 k3s 进程中。没有 kubelet 配置文件,所有配置作为参数在运行时传递。

4.1.10 确保 kubelet --config 配置文件所有权设置为 root:root(自动化)

*结果:*不适用

基本原理:

kubelet 嵌入在 k3s 进程中。没有 kubelet 配置文件,所有配置在运行时作为参数传递。

4.2 Kubelet

4.2.1 确保 --anonymous-auth 参数设置为 false(自动化)

*结果:*PASS

Audit:

/bin/sh -c 'if test $(journalctl -m -u k3s | grep  "Running kube-apiserver" | wc -l) -gt 0; then journalctl -m -u k3s | grep  "Running kube-apiserver" | tail -n1 | grep "anonymous-auth" | grep -v grep; else echo "--anonymous-auth=false"; fi'

预期结果: '--anonymous-auth' 等于 'false'

返回值:
Sep 11 17:22:08 server-0 k3s[2234]: time="2025-09-11T17:22:08Z" level=info msg="Running kube-apiserver --admission-control-config-file=/var/lib/rancher/k3s/server/psa.yaml --advertise-address=10.10.10.100 --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --encryption-provider-config-automatic-reload=true --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
修正:

默认情况下,K3s 将 --anonymous-auth 设置为 false。如果您将其设置为其他值,您应该将其重新设置为 false。如果使用 K3s 配置文件 /etc/rancher/k3s/config.yaml,请删除类似以下的行。

kubelet-arg:
  - "anonymous-auth=true"

如果使用命令行,请编辑 K3s 服务文件并删除以下参数。

--kubelet-arg="anonymous-auth=true"

然后根据您的系统重启 k3s 服务。例如:

systemctl daemon-reload
systemctl restart k3s.service
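修改配置前,可以先用 grep 检查 config.yaml 中是否残留了需要删除的 kubelet-arg 行。下面的示意脚本使用一个临时的示例配置文件(真实路径为 /etc/rancher/k3s/config.yaml):

```shell
# 示例配置:包含一条应当删除的 kubelet-arg(示意数据)
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
kubelet-arg:
  - "anonymous-auth=true"
EOF

# 若匹配到 anonymous-auth=true,则提示需要删除该行
if grep -q 'anonymous-auth=true' "$cfg"; then
  echo "found: remove anonymous-auth=true from kubelet-arg"
fi
rm -f "$cfg"
```

同样的检查方式也适用于后续控制项中 authorization-mode、read-only-port 等不应出现的配置行。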

4.2.2 确保 --authorization-mode 参数未设置为 AlwaysAllow(自动化)

*结果:*PASS

Audit:

/bin/sh -c 'if test $(journalctl -m -u k3s | grep  "Running kube-apiserver" | wc -l) -gt 0; then journalctl -m -u k3s | grep  "Running kube-apiserver" | tail -n1 | grep "authorization-mode"; else echo "--authorization-mode=Webhook"; fi'

预期结果: '--authorization-mode' 不包含 'AlwaysAllow'

返回值:
Sep 11 17:22:08 server-0 k3s[2234]: time="2025-09-11T17:22:08Z" level=info msg="Running kube-apiserver --admission-control-config-file=/var/lib/rancher/k3s/server/psa.yaml --advertise-address=10.10.10.100 --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --encryption-provider-config-automatic-reload=true --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
修正:

默认情况下,K3s 不会将 --authorization-mode 设置为 AlwaysAllow。 如果使用 K3s 配置文件 /etc/rancher/k3s/config.yaml,请删除类似以下的行。

kubelet-arg:
  - "authorization-mode=AlwaysAllow"

如果使用命令行,请编辑 K3s 服务文件并移除以下参数。

--kubelet-arg="authorization-mode=AlwaysAllow"

然后根据您的系统重启 k3s 服务。例如:

systemctl daemon-reload
systemctl restart k3s.service

4.2.3 确保 --client-ca-file 参数设置正确(自动化)

*结果:*PASS

Audit:

/bin/sh -c 'if test $(journalctl -m -u k3s | grep  "Running kube-apiserver" | wc -l) -gt 0; then journalctl -m -u k3s | grep  "Running kube-apiserver" | tail -n1 | grep "client-ca-file"; else echo "--client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt"; fi'

预期结果: '--client-ca-file' 存在

返回值:
Sep 11 17:22:08 server-0 k3s[2234]: time="2025-09-11T17:22:08Z" level=info msg="Running kube-apiserver --admission-control-config-file=/var/lib/rancher/k3s/server/psa.yaml --advertise-address=10.10.10.100 --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --encryption-provider-config-automatic-reload=true --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
修正:

默认情况下,K3s 会自动为 Kubelet 提供客户端 CA 证书。 该文件生成并位于 /var/lib/rancher/k3s/agent/client-ca.crt。

4.2.4 验证 --read-only-port 参数是否设置为 0(自动化)

*结果:*PASS

Audit:

journalctl -m -u k3s -u k3s-agent | grep 'Running kubelet' | tail -n1

预期结果: '--read-only-port' 等于 '0' 或者 '--read-only-port' 不存在

返回值:
Sep 11 17:22:10 server-0 k3s[2234]: time="2025-09-11T17:22:10Z" level=info msg="Running kubelet --address=0.0.0.0 --allowed-unsafe-sysctls=net.ipv4.ip_forward,net.ipv6.conf.all.forwarding --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --event-qps=0 --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --feature-gates=CloudDualStackNodeIPs=true --healthz-bind-address=127.0.0.1 --hostname-override=server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --make-iptables-util-chains=true --node-ip=10.10.10.100 --node-labels= --pod-infra-container-image=rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --streaming-connection-idle-timeout=5m --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
修正:

默认情况下,K3s 将 --read-only-port 设置为 0。如果您将其设置为其他值,您应该将其改回 0。如果使用 K3s 配置文件 /etc/rancher/k3s/config.yaml,请去除类似以下的行。

kubelet-arg:
  - "read-only-port=XXXX"

如果使用命令行,请编辑 K3s 服务文件并去除以下参数。

--kubelet-arg="read-only-port=XXXX"

然后根据您的系统重启 k3s 服务。例如:

systemctl daemon-reload
systemctl restart k3s.service

4.2.5 确保 --streaming-connection-idle-timeout 参数未设置为 0(手动)

*结果:*PASS

Audit:

journalctl -m -u k3s -u k3s-agent | grep 'Running kubelet' | tail -n1

预期结果: '--streaming-connection-idle-timeout' 不等于 '0' 或者 '--streaming-connection-idle-timeout' 不存在

返回值:
Sep 11 17:22:10 server-0 k3s[2234]: time="2025-09-11T17:22:10Z" level=info msg="Running kubelet --address=0.0.0.0 --allowed-unsafe-sysctls=net.ipv4.ip_forward,net.ipv6.conf.all.forwarding --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --event-qps=0 --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --feature-gates=CloudDualStackNodeIPs=true --healthz-bind-address=127.0.0.1 --hostname-override=server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --make-iptables-util-chains=true --node-ip=10.10.10.100 --node-labels= --pod-infra-container-image=rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --streaming-connection-idle-timeout=5m --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
Remediation:

If using the K3s config file /etc/rancher/k3s/config.yaml, set the following parameter to an appropriate value.

kubelet-arg:
  - "streaming-connection-idle-timeout=5m"

If using the command line, run K3s with --kubelet-arg="streaming-connection-idle-timeout=5m". Based on your system, restart the k3s service. For example, systemctl restart k3s.service.

4.2.6 Ensure that the --make-iptables-util-chains argument is set to true (Automated)

*Result:* PASS

Audit:

journalctl -m -u k3s -u k3s-agent | grep 'Running kubelet' | tail -n1

Expected Result: '--make-iptables-util-chains' is equal to 'true' OR '--make-iptables-util-chains' is not present

Returned Value:
Sep 11 17:22:10 server-0 k3s[2234]: time="2025-09-11T17:22:10Z" level=info msg="Running kubelet --address=0.0.0.0 --allowed-unsafe-sysctls=net.ipv4.ip_forward,net.ipv6.conf.all.forwarding --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --event-qps=0 --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --feature-gates=CloudDualStackNodeIPs=true --healthz-bind-address=127.0.0.1 --hostname-override=server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --make-iptables-util-chains=true --node-ip=10.10.10.100 --node-labels= --pod-infra-container-image=rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --streaming-connection-idle-timeout=5m --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
Remediation:

If using the K3s config file /etc/rancher/k3s/config.yaml, set the following parameter.

kubelet-arg:
  - "make-iptables-util-chains=true"

If using the command line, run K3s with --kubelet-arg="make-iptables-util-chains=true". Based on your system, restart the k3s service. For example, systemctl restart k3s.service.

4.2.7 Ensure that the --hostname-override argument is not set (Automated)

*Result:* Not Applicable

Rationale:

By default, K3s does set the --hostname-override argument. Per CIS guidance, this is done to comply with cloud providers that require this flag so that the hostname matches the node name.

4.2.8 Ensure that the eventRecordQPS argument is set to a level which ensures appropriate event capture (Manual)

*Result:* PASS

Audit:

journalctl -m -u k3s -u k3s-agent | grep 'Running kubelet' | tail -n1

Expected Result: '--event-qps' is greater than or equal to 0 OR '--event-qps' is not present

Returned Value:
Sep 11 17:22:10 server-0 k3s[2234]: time="2025-09-11T17:22:10Z" level=info msg="Running kubelet --address=0.0.0.0 --allowed-unsafe-sysctls=net.ipv4.ip_forward,net.ipv6.conf.all.forwarding --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --event-qps=0 --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --feature-gates=CloudDualStackNodeIPs=true --healthz-bind-address=127.0.0.1 --hostname-override=server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --make-iptables-util-chains=true --node-ip=10.10.10.100 --node-labels= --pod-infra-container-image=rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --streaming-connection-idle-timeout=5m --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
Remediation:

By default, K3s sets event-qps to 0. Should you wish to change this, and if using the K3s config file /etc/rancher/k3s/config.yaml, set the following parameter to an appropriate value.

kubelet-arg:
  - "event-qps=<value>"

If using the command line, run K3s with --kubelet-arg="event-qps=<value>". Based on your system, restart the k3s service. For example, systemctl restart k3s.service.

4.2.9 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)

*Result:* PASS

Audit:

journalctl -m -u k3s -u k3s-agent | grep 'Running kubelet' | tail -n1

Expected Result: '--tls-cert-file' is present AND '--tls-private-key-file' is present

Returned Value:
Sep 11 17:22:10 server-0 k3s[2234]: time="2025-09-11T17:22:10Z" level=info msg="Running kubelet --address=0.0.0.0 --allowed-unsafe-sysctls=net.ipv4.ip_forward,net.ipv6.conf.all.forwarding --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --event-qps=0 --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --feature-gates=CloudDualStackNodeIPs=true --healthz-bind-address=127.0.0.1 --hostname-override=server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --make-iptables-util-chains=true --node-ip=10.10.10.100 --node-labels= --pod-infra-container-image=rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --streaming-connection-idle-timeout=5m --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
Remediation:

By default, K3s automatically provides the TLS certificate and private key for the Kubelet. They are generated and located at /var/lib/rancher/k3s/agent/serving-kubelet.crt and /var/lib/rancher/k3s/agent/serving-kubelet.key. If for some reason you need to provide your own certificate and key, set the following parameters in the K3s config file /etc/rancher/k3s/config.yaml.

kubelet-arg:
  - "tls-cert-file=<path/to/tls-cert-file>"
  - "tls-private-key-file=<path/to/tls-private-key-file>"

4.2.10 Ensure that the --rotate-certificates argument is not set to false (Automated)

*Result:* PASS

Audit:

journalctl -m -u k3s -u k3s-agent | grep 'Running kubelet' | tail -n1

Expected Result: '--rotate-certificates' is present OR '--rotate-certificates' is not present

Returned Value:
Sep 11 17:22:10 server-0 k3s[2234]: time="2025-09-11T17:22:10Z" level=info msg="Running kubelet --address=0.0.0.0 --allowed-unsafe-sysctls=net.ipv4.ip_forward,net.ipv6.conf.all.forwarding --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --event-qps=0 --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --feature-gates=CloudDualStackNodeIPs=true --healthz-bind-address=127.0.0.1 --hostname-override=server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --make-iptables-util-chains=true --node-ip=10.10.10.100 --node-labels= --pod-infra-container-image=rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --streaming-connection-idle-timeout=5m --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
Remediation:

By default, K3s does not set the --rotate-certificates argument. If you have set this flag to false, you should either set it to true or remove the flag entirely. If using the K3s config file /etc/rancher/k3s/config.yaml, remove any rotate-certificates parameter. If using the command line, remove the K3s flag --kubelet-arg="rotate-certificates". Based on your system, restart the k3s service. For example, systemctl restart k3s.service.

4.2.11 Verify that the RotateKubeletServerCertificate argument is set to true (Automated)

*Result:* PASS

Audit:

journalctl -m -u k3s -u k3s-agent | grep 'Running kubelet' | tail -n1

Expected Result: 'RotateKubeletServerCertificate' is present OR 'RotateKubeletServerCertificate' is not present

Returned Value:
Sep 11 17:22:10 server-0 k3s[2234]: time="2025-09-11T17:22:10Z" level=info msg="Running kubelet --address=0.0.0.0 --allowed-unsafe-sysctls=net.ipv4.ip_forward,net.ipv6.conf.all.forwarding --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --event-qps=0 --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --feature-gates=CloudDualStackNodeIPs=true --healthz-bind-address=127.0.0.1 --hostname-override=server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --make-iptables-util-chains=true --node-ip=10.10.10.100 --node-labels= --pod-infra-container-image=rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --streaming-connection-idle-timeout=5m --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
Remediation:

By default, K3s does not set the RotateKubeletServerCertificate feature gate. If you have enabled this feature gate, you should remove it. If using the K3s config file /etc/rancher/k3s/config.yaml, remove any feature-gate=RotateKubeletServerCertificate parameter. If using the command line, remove the K3s flag --kubelet-arg="feature-gate=RotateKubeletServerCertificate". Based on your system, restart the k3s service. For example, systemctl restart k3s.service.

4.2.12 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Manual)

*Result:* PASS

Audit:

journalctl -m -u k3s -u k3s-agent | grep 'Running kubelet' | tail -n1

Expected Result: '--tls-cipher-suites' contains valid elements from 'TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256'

Returned Value:
Sep 11 17:22:10 server-0 k3s[2234]: time="2025-09-11T17:22:10Z" level=info msg="Running kubelet --address=0.0.0.0 --allowed-unsafe-sysctls=net.ipv4.ip_forward,net.ipv6.conf.all.forwarding --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --event-qps=0 --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --feature-gates=CloudDualStackNodeIPs=true --healthz-bind-address=127.0.0.1 --hostname-override=server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --make-iptables-util-chains=true --node-ip=10.10.10.100 --node-labels= --pod-infra-container-image=rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --streaming-connection-idle-timeout=5m --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
Remediation:

If using a K3s config file /etc/rancher/k3s/config.yaml, edit the file to set TLSCipherSuites to

kubelet-arg:
  - "tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305"

or to a subset of these values. If using the command line, add the K3s flag --kubelet-arg="tls-cipher-suites=<same values as above>". Then, based on your system, restart the k3s service. For example, systemctl restart k3s.service.

4.2.13 Ensure that a limit is set on pod PIDs (Manual)

*Result:* WARN

Remediation: Decide on an appropriate level for this parameter and set it. If using a K3s config file /etc/rancher/k3s/config.yaml, edit the file to set podPidsLimit as follows.

kubelet-arg:
  - "pod-max-pids=<value>"

4.3 kube-proxy

4.3.1 Ensure that the kube-proxy metrics service is bound to localhost (Automated)

*Result:* PASS

Audit:

journalctl -m -u k3s -u k3s-agent | grep 'Running kube-proxy' | tail -n1

Expected Result: '--metrics-bind-address' is present OR '--metrics-bind-address' is not present

Returned Value:
Sep 11 17:22:10 server-0 k3s[2234]: time="2025-09-11T17:22:10Z" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --conntrack-max-per-core=0 --conntrack-tcp-timeout-close-wait=0s --conntrack-tcp-timeout-established=0s --healthz-bind-address=127.0.0.1 --hostname-override=server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables"
Remediation:

Modify or remove any values which bind the metrics service to a non-localhost address. The default value is 127.0.0.1:10249.
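
As a minimal sketch, the metrics bind address can be pinned explicitly through the K3s config file /etc/rancher/k3s/config.yaml using the kube-proxy-arg key; 127.0.0.1:10249 shown below is the kube-proxy default:

```yaml
# /etc/rancher/k3s/config.yaml
# Keep the kube-proxy metrics endpoint on the loopback interface only.
kube-proxy-arg:
  - "metrics-bind-address=127.0.0.1:10249"
```
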

5.1 RBAC and Service Accounts

5.1.1 Ensure that the cluster-admin role is only used where required (Automated)

*Result:* PASS

Audit:

kubectl get clusterrolebindings -o=custom-columns=ROLE:.roleRef.name,NAME:.metadata.name,SUBJECT:.subjects[*].name --no-headers |  grep cluster-admin

Expected Result: 'cluster-admin' contains valid elements from 'cluster-admin, helm-kube-system-traefik, helm-kube-system-traefik-crd'

Returned Value:
cluster-admin                                          cluster-admin                                          system:masters
cluster-admin                                          helm-kube-system-traefik                               helm-traefik
cluster-admin                                          helm-kube-system-traefik-crd                           helm-traefik-crd
修正:

识别所有与 cluster-admin 角色相关的 clusterrolebindings。检查它们是否被使用,以及它们是否需要此角色,或者是否可以使用权限更少的角色。K3s 对 helm-kube-system-traefik 和 helm-kube-system-traefik-crd clusterrolebindings 给予例外,因为这些是将 traefik 安装到 kube-system 名称空间以进行常规操作所必需的。 在可能的情况下,首先将用户绑定到权限较低的角色,然后移除与 cluster-admin 角色的 clusterrolebinding:

kubectl delete clusterrolebinding [name]
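
As a sketch of the lower-privileged alternative, a subject can often be bound to the built-in edit ClusterRole within a single namespace instead of holding cluster-admin; the binding name, namespace, and user below are hypothetical:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-team-edit        # hypothetical binding name
  namespace: app-team        # hypothetical namespace
subjects:
  - kind: User
    name: jane@example.com   # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                 # built-in role; grants no cluster-wide admin rights
  apiGroup: rbac.authorization.k8s.io
```
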

5.1.2 Minimize access to secrets (Automated)

*Result:* WARN

Remediation: Where possible, remove get, list and watch access to Secret objects in the cluster.
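
A minimal sketch of a namespaced Role that grants read access to common workload objects while deliberately omitting the secrets resource; the role and namespace names are hypothetical:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-reader           # hypothetical role name
  namespace: app-team        # hypothetical namespace
rules:
  # Read access to workloads and config; "secrets" is not listed,
  # so get/list/watch on Secret objects is not granted.
  - apiGroups: [""]
    resources: ["pods", "configmaps", "services"]
    verbs: ["get", "list", "watch"]
```
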

5.1.3 Minimize wildcard use in Roles and ClusterRoles (Automated)

*Result:* PASS

Audit:

# Check Roles
kubectl get roles --all-namespaces -o custom-columns=ROLE_NAMESPACE:.metadata.namespace,ROLE_NAME:.metadata.name --no-headers | while read -r role_namespace role_name
do
  role_rules=$(kubectl get role -n "${role_namespace}" "${role_name}" -o=json | jq -c '.rules')
  if echo "${role_rules}" | grep -q "\[\"\*\"\]"; then
    printf "**role_name: %-50s  role_namespace: %-25s role_rules: %s is_compliant: false\n" "${role_name}" "${role_namespace}" "${role_rules}"
  else
    printf "**role_name: %-50s role_namespace: %-25s is_compliant: true\n" "${role_name}" "${role_namespace}"
  fi;
done

cr_whitelist="cluster-admin k3s-cloud-controller-manager local-path-provisioner-role"
cr_whitelist="$cr_whitelist system:kube-controller-manager system:kubelet-api-admin system:controller:namespace-controller"
cr_whitelist="$cr_whitelist system:controller:disruption-controller system:controller:generic-garbage-collector"
cr_whitelist="$cr_whitelist system:controller:horizontal-pod-autoscaler system:controller:resourcequota-controller"
# Check ClusterRoles
kubectl get clusterroles -o custom-columns=CLUSTERROLE_NAME:.metadata.name --no-headers | while read -r clusterrole_name
do
  clusterrole_rules=$(kubectl get clusterrole "${clusterrole_name}" -o=json | jq -c '.rules')
  if echo "${cr_whitelist}" | grep -q "${clusterrole_name}"; then
    printf "**clusterrole_name: %-50s is_whitelist: true  is_compliant: true\n" "${clusterrole_name}"
  elif echo "${clusterrole_rules}" | grep -q "\[\"\*\"\]"; then
    echo "**clusterrole_name: ${clusterrole_name} clusterrole_rules: ${clusterrole_rules} is_compliant: false"
  else
    printf "**clusterrole_name: %-50s is_whitelist: false is_compliant: true\n" "${clusterrole_name}"
  fi;
done

Expected Result: 'is_compliant' is equal to 'true'

Returned Value:
**role_name: system:controller:bootstrap-signer                 role_namespace: kube-public               is_compliant: true
**role_name: extension-apiserver-authentication-reader          role_namespace: kube-system               is_compliant: true
**role_name: system::leader-locking-kube-controller-manager     role_namespace: kube-system               is_compliant: true
**role_name: system::leader-locking-kube-scheduler              role_namespace: kube-system               is_compliant: true
**role_name: system:controller:bootstrap-signer                 role_namespace: kube-system               is_compliant: true
**role_name: system:controller:cloud-provider                   role_namespace: kube-system               is_compliant: true
**role_name: system:controller:token-cleaner                    role_namespace: kube-system               is_compliant: true
**clusterrole_name: admin                                              is_whitelist: true  is_compliant: true
**clusterrole_name: cluster-admin                                      is_whitelist: true  is_compliant: true
**clusterrole_name: clustercidrs-node                                  is_whitelist: false is_compliant: true
**clusterrole_name: edit                                               is_whitelist: false is_compliant: true
**clusterrole_name: k3s-cloud-controller-manager                       is_whitelist: true  is_compliant: true
**clusterrole_name: local-path-provisioner-role                        is_whitelist: true  is_compliant: true
**clusterrole_name: system:aggregate-to-admin                          is_whitelist: false is_compliant: true
**clusterrole_name: system:aggregate-to-edit                           is_whitelist: false is_compliant: true
**clusterrole_name: system:aggregate-to-view                           is_whitelist: false is_compliant: true
**clusterrole_name: system:aggregated-metrics-reader                   is_whitelist: false is_compliant: true
**clusterrole_name: system:auth-delegator                              is_whitelist: false is_compliant: true
**clusterrole_name: system:basic-user                                  is_whitelist: false is_compliant: true
**clusterrole_name: system:certificates.k8s.io:certificatesigningrequests:nodeclient is_whitelist: false is_compliant: true
**clusterrole_name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient is_whitelist: false is_compliant: true
**clusterrole_name: system:certificates.k8s.io:kube-apiserver-client-approver is_whitelist: false is_compliant: true
**clusterrole_name: system:certificates.k8s.io:kube-apiserver-client-kubelet-approver is_whitelist: false is_compliant: true
**clusterrole_name: system:certificates.k8s.io:kubelet-serving-approver is_whitelist: false is_compliant: true
**clusterrole_name: system:certificates.k8s.io:legacy-unknown-approver is_whitelist: false is_compliant: true
**clusterrole_name: system:controller:attachdetach-controller          is_whitelist: false is_compliant: true
**clusterrole_name: system:controller:certificate-controller           is_whitelist: false is_compliant: true
**clusterrole_name: system:controller:clusterrole-aggregation-controller is_whitelist: false is_compliant: true
**clusterrole_name: system:controller:cronjob-controller               is_whitelist: false is_compliant: true
**clusterrole_name: system:controller:daemon-set-controller            is_whitelist: false is_compliant: true
**clusterrole_name: system:controller:deployment-controller            is_whitelist: false is_compliant: true
**clusterrole_name: system:controller:disruption-controller            is_whitelist: true  is_compliant: true
**clusterrole_name: system:controller:endpoint-controller              is_whitelist: false is_compliant: true
**clusterrole_name: system:controller:endpointslice-controller         is_whitelist: false is_compliant: true
**clusterrole_name: system:controller:endpointslicemirroring-controller is_whitelist: false is_compliant: true
**clusterrole_name: system:controller:ephemeral-volume-controller      is_whitelist: false is_compliant: true
**clusterrole_name: system:controller:expand-controller                is_whitelist: false is_compliant: true
**clusterrole_name: system:controller:generic-garbage-collector        is_whitelist: true  is_compliant: true
**clusterrole_name: system:controller:horizontal-pod-autoscaler        is_whitelist: true  is_compliant: true
**clusterrole_name: system:controller:job-controller                   is_whitelist: false is_compliant: true
**clusterrole_name: system:controller:namespace-controller             is_whitelist: true  is_compliant: true
**clusterrole_name: system:controller:node-controller                  is_whitelist: false is_compliant: true
**clusterrole_name: system:controller:persistent-volume-binder         is_whitelist: false is_compliant: true
**clusterrole_name: system:controller:pod-garbage-collector            is_whitelist: false is_compliant: true
**clusterrole_name: system:controller:pv-protection-controller         is_whitelist: false is_compliant: true
**clusterrole_name: system:controller:pvc-protection-controller        is_whitelist: false is_compliant: true
**clusterrole_name: system:controller:replicaset-controller            is_whitelist: false is_compliant: true
**clusterrole_name: system:controller:replication-controller           is_whitelist: false is_compliant: true
**clusterrole_name: system:controller:resourcequota-controller         is_whitelist: true  is_compliant: true
**clusterrole_name: system:controller:root-ca-cert-publisher           is_whitelist: false is_compliant: true
**clusterrole_name: system:controller:route-controller                 is_whitelist: false is_compliant: true
**clusterrole_name: system:controller:service-account-controller       is_whitelist: false is_compliant: true
**clusterrole_name: system:controller:service-controller               is_whitelist: false is_compliant: true
**clusterrole_name: system:controller:statefulset-controller           is_whitelist: false is_compliant: true
**clusterrole_name: system:controller:ttl-after-finished-controller    is_whitelist: false is_compliant: true
**clusterrole_name: system:controller:ttl-controller                   is_whitelist: false is_compliant: true
**clusterrole_name: system:coredns                                     is_whitelist: false is_compliant: true
**clusterrole_name: system:discovery                                   is_whitelist: false is_compliant: true
**clusterrole_name: system:heapster                                    is_whitelist: false is_compliant: true
**clusterrole_name: system:k3s-controller                              is_whitelist: false is_compliant: true
**clusterrole_name: system:kube-aggregator                             is_whitelist: false is_compliant: true
**clusterrole_name: system:kube-controller-manager                     is_whitelist: true  is_compliant: true
**clusterrole_name: system:kube-dns                                    is_whitelist: false is_compliant: true
**clusterrole_name: system:kube-scheduler                              is_whitelist: false is_compliant: true
**clusterrole_name: system:kubelet-api-admin                           is_whitelist: true  is_compliant: true
**clusterrole_name: system:metrics-server                              is_whitelist: false is_compliant: true
**clusterrole_name: system:monitoring                                  is_whitelist: false is_compliant: true
**clusterrole_name: system:node                                        is_whitelist: false is_compliant: true
**clusterrole_name: system:node-bootstrapper                           is_whitelist: false is_compliant: true
**clusterrole_name: system:node-problem-detector                       is_whitelist: false is_compliant: true
**clusterrole_name: system:node-proxier                                is_whitelist: false is_compliant: true
**clusterrole_name: system:persistent-volume-provisioner               is_whitelist: false is_compliant: true
**clusterrole_name: system:public-info-viewer                          is_whitelist: false is_compliant: true
**clusterrole_name: system:service-account-issuer-discovery            is_whitelist: false is_compliant: true
**clusterrole_name: system:volume-scheduler                            is_whitelist: false is_compliant: true
**clusterrole_name: traefik-kube-system                                is_whitelist: false is_compliant: true
**clusterrole_name: view                                               is_whitelist: false is_compliant: true
Remediation:

Where possible, replace any use of wildcards in clusterroles and roles with specific objects or actions. K3s gives an exception to the following cluster roles, which are required for normal operation:

  • k3s-cloud-controller-manager, local-path-provisioner-role, cluster-admin

  • system:kube-controller-manager, system:kubelet-api-admin, system:controller:namespace-controller

  • system:controller:disruption-controller, system:controller:generic-garbage-collector

  • system:controller:horizontal-pod-autoscaler, system:controller:resourcequota-controller

5.1.4 Minimize access to create pods (Automated)

*Result:* WARN

Remediation: Where possible, remove create access to pod objects in the cluster.

5.1.5 Ensure that default service accounts are not actively used (Automated)

*Result:* PASS

Audit:

kubectl get serviceaccounts --all-namespaces --field-selector metadata.name=default \
-o custom-columns=N:.metadata.namespace,SA:.metadata.name,ASA:.automountServiceAccountToken --no-headers \
| while read -r namespace serviceaccount automountserviceaccounttoken
do
  if [ "${automountserviceaccounttoken}" = "<none>" ]; then
    automountserviceaccounttoken="notset"
  fi
  if [ "${namespace}" != "kube-system" ] && [ "${automountserviceaccounttoken}" != "false" ]; then
    printf "**namespace: %-20s service_account: %-10s automountServiceAccountToken: %-6s is_compliant: false\n" "${namespace}" "${serviceaccount}" "${automountserviceaccounttoken}"
  else
    printf "**namespace: %-20s service_account: %-10s automountServiceAccountToken: %-6s is_compliant: true\n" "${namespace}" "${serviceaccount}" "${automountserviceaccounttoken}"
  fi
done

Expected Result: 'is_compliant' is equal to 'true'

Returned Value:
**namespace: default              service_account: default    automountServiceAccountToken: false  is_compliant: true
**namespace: kube-node-lease      service_account: default    automountServiceAccountToken: false  is_compliant: true
**namespace: kube-public          service_account: default    automountServiceAccountToken: false  is_compliant: true
**namespace: kube-system          service_account: default    automountServiceAccountToken: notset is_compliant: true
Remediation:

Create explicit service accounts wherever a Kubernetes workload requires specific access to the Kubernetes API server. K3s gives an exception to the default service account in the kube-system namespace. Modify the configuration of each default service account to include the value automountServiceAccountToken: false, or use kubectl:

kubectl patch serviceaccount --namespace <NAMESPACE> default --patch '{"automountServiceAccountToken": false}'

5.1.6 Ensure that Service Account Tokens are only mounted where necessary (Automated)

*Result:* PASS

Audit:

kubectl get pods --all-namespaces -o custom-columns=POD_NAMESPACE:.metadata.namespace,POD_NAME:.metadata.name,POD_SERVICE_ACCOUNT:.spec.serviceAccount,POD_IS_AUTOMOUNTSERVICEACCOUNTTOKEN:.spec.automountServiceAccountToken --no-headers | while read -r pod_namespace pod_name pod_service_account pod_is_automountserviceaccounttoken
do
  # Retrieve automountServiceAccountToken's value for ServiceAccount and Pod, set to notset if null or <none>.
  svacc_is_automountserviceaccounttoken=$(kubectl get serviceaccount -n "${pod_namespace}" "${pod_service_account}" -o json | jq -r '.automountServiceAccountToken' | sed -e 's/<none>/notset/g' -e 's/null/notset/g')
  pod_is_automountserviceaccounttoken=$(echo "${pod_is_automountserviceaccounttoken}" | sed -e 's/<none>/notset/g' -e 's/null/notset/g')
  if [ "${svacc_is_automountserviceaccounttoken}" = "false" ] && ( [ "${pod_is_automountserviceaccounttoken}" = "false" ] || [ "${pod_is_automountserviceaccounttoken}" = "notset" ] ); then
    is_compliant="true"
  elif [ "${svacc_is_automountserviceaccounttoken}" = "true" ] && [ "${pod_is_automountserviceaccounttoken}" = "false" ]; then
    is_compliant="true"
  else
    is_compliant="false"
  fi
  echo "**namespace: ${pod_namespace} pod_name: ${pod_name} service_account: ${pod_service_account} pod_is_automountserviceaccounttoken: ${pod_is_automountserviceaccounttoken} svacc_is_automountServiceAccountToken: ${svacc_is_automountserviceaccounttoken} is_compliant: ${is_compliant}"
done

Expected Result: 'is_compliant' is equal to 'true' OR 'service_account' contains valid elements from 'coredns, helm-traefik, helm-traefik-crd, traefik, metrics-server, svclb, local-path-provisioner-service-account'

Returned Value:
**namespace: kube-system pod_name: coredns-559656f558-b89kj service_account: coredns pod_is_automountserviceaccounttoken: notset svacc_is_automountServiceAccountToken: notset is_compliant: false
**namespace: kube-system pod_name: helm-install-traefik-crd-7fvrx service_account: helm-traefik-crd pod_is_automountserviceaccounttoken: notset svacc_is_automountServiceAccountToken: true is_compliant: false
**namespace: kube-system pod_name: helm-install-traefik-rttbs service_account: helm-traefik pod_is_automountserviceaccounttoken: notset svacc_is_automountServiceAccountToken: true is_compliant: false
**namespace: kube-system pod_name: local-path-provisioner-7677785564-6kh8q service_account: local-path-provisioner-service-account pod_is_automountserviceaccounttoken: notset svacc_is_automountServiceAccountToken: notset is_compliant: false
**namespace: kube-system pod_name: metrics-server-7cbbc464f4-q8xn9 service_account: metrics-server pod_is_automountserviceaccounttoken: notset svacc_is_automountServiceAccountToken: notset is_compliant: false
**namespace: kube-system pod_name: svclb-traefik-19f40894-dr4mq service_account: svclb pod_is_automountserviceaccounttoken: false svacc_is_automountServiceAccountToken: notset is_compliant: false
**namespace: kube-system pod_name: traefik-6c7b69cd74-v86cg service_account: traefik pod_is_automountserviceaccounttoken: notset svacc_is_automountServiceAccountToken: notset is_compliant: false
Remediation:

Modify the definitions of ServiceAccounts and Pods which do not need to mount service account tokens to disable this with automountServiceAccountToken: false. If both the ServiceAccount and the Pod's .spec specify a value for automountServiceAccountToken, the Pod's spec takes precedence. A Pod's is_compliant is true when either:

  • the ServiceAccount's automountServiceAccountToken is false AND the Pod's automountServiceAccountToken is false or not set, or

  • the ServiceAccount's automountServiceAccountToken is true or not set AND the Pod's automountServiceAccountToken is false.

K3s gives an exception to the following service accounts, which are required for normal operation:

  • coredns, helm-traefik, helm-traefik-crd, traefik, metrics-server, svclb, local-path-provisioner-service-account
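
A sketch of opting a ServiceAccount out of token mounting by default, with a Pod-level override for the rare workload that genuinely needs the token; all names and the image are hypothetical:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-sa               # hypothetical service account
  namespace: app-team        # hypothetical namespace
automountServiceAccountToken: false   # default: no token mounted
---
apiVersion: v1
kind: Pod
metadata:
  name: api-client           # hypothetical pod that needs API access
  namespace: app-team
spec:
  serviceAccountName: app-sa
  automountServiceAccountToken: true  # Pod spec takes precedence over the SA
  containers:
    - name: client
      image: registry.example.com/client:latest  # hypothetical image
```
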

5.1.7 Avoid use of system:masters group (Manual)

*Result:* WARN

Remediation: Remove the system:masters group from all users in the cluster.

5.1.8 Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual)

*Result:* WARN

Remediation: Where possible, remove the impersonate, bind and escalate rights from subjects.

5.1.9 Minimize access to create persistent volumes (Manual)

*Result:* WARN

Remediation: Where possible, remove create access to PersistentVolume objects in the cluster.

5.1.10 Minimize access to the proxy sub-resource of nodes (Manual)

*Result:* WARN

Remediation: Where possible, remove access to the proxy sub-resource of node objects.

5.1.11 Minimize access to the approval sub-resource of certificatesigningrequests objects (Manual)

*Result:* WARN

Remediation: Where possible, remove access to the approval sub-resource of certificatesigningrequest objects.

5.1.12 Minimize access to webhook configuration objects (Manual)

*Result:* WARN

Remediation: Where possible, remove access to the validatingwebhookconfigurations or mutatingwebhookconfigurations objects.

5.1.13 Minimize access to the service account token creation (Manual)

*Result:* WARN

Remediation: Where possible, remove access to the token sub-resource of serviceaccount objects.

5.2 Pod Security Standards

5.2.1 Ensure that the cluster has at least one active policy control mechanism in place (Manual)

*Result:* WARN

Remediation: Ensure that either Pod Security Admission or an external policy control system is in place for every namespace which contains user workloads.
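As a minimal sketch (the namespace name is hypothetical), Pod Security Admission can be activated by labeling each user-workload namespace with an enforcement level:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-app                                       # hypothetical namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
    pod-security.kubernetes.io/warn: restricted
```

Enforcing the restricted profile also addresses several of the 5.2.x controls below, including privileged containers, host namespaces, privilege escalation, and added capabilities.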

5.2.2 Minimize the admission of privileged containers (Manual)

*Result:* PASS

Audit:

kubectl get pods --all-namespaces -o custom-columns=POD_NAME:.metadata.name,POD_NAMESPACE:.metadata.namespace --no-headers | while read -r pod_name pod_namespace
do
  # Retrieve container(s) for each Pod.
  kubectl get pod "${pod_name}" --namespace "${pod_namespace}" -o json | jq -c '.spec.containers[]' | while read -r container
  do
    # Retrieve container's name.
    container_name=$(echo ${container} | jq -r '.name')
    # Retrieve container's .securityContext.privileged value.
    container_privileged=$(echo ${container} | jq -r '.securityContext.privileged' | sed -e 's/null/notset/g')
    if [ "${container_privileged}" = "false" ] || [ "${container_privileged}" = "notset" ] ; then
      echo "***pod_name: ${pod_name} container_name: ${container_name} pod_namespace: ${pod_namespace} is_container_privileged: ${container_privileged} is_compliant: true"
    else
      echo "***pod_name: ${pod_name} container_name: ${container_name} pod_namespace: ${pod_namespace} is_container_privileged: ${container_privileged} is_compliant: false"
    fi
  done
done

Expected Result: 'is_compliant' is equal to 'true'

Returned Value:
***pod_name: coredns-559656f558-b89kj container_name: coredns pod_namespace: kube-system is_container_privileged: notset is_compliant: true
***pod_name: helm-install-traefik-crd-7fvrx container_name: helm pod_namespace: kube-system is_container_privileged: notset is_compliant: true
***pod_name: helm-install-traefik-rttbs container_name: helm pod_namespace: kube-system is_container_privileged: notset is_compliant: true
***pod_name: local-path-provisioner-7677785564-6kh8q container_name: local-path-provisioner pod_namespace: kube-system is_container_privileged: notset is_compliant: true
***pod_name: metrics-server-7cbbc464f4-q8xn9 container_name: metrics-server pod_namespace: kube-system is_container_privileged: notset is_compliant: true
***pod_name: svclb-traefik-19f40894-dr4mq container_name: lb-tcp-80 pod_namespace: kube-system is_container_privileged: notset is_compliant: true
***pod_name: svclb-traefik-19f40894-dr4mq container_name: lb-tcp-443 pod_namespace: kube-system is_container_privileged: notset is_compliant: true
***pod_name: traefik-6c7b69cd74-v86cg container_name: traefik pod_namespace: kube-system is_container_privileged: notset is_compliant: true
Remediation:

Add policies to each namespace in the cluster which has user workloads, to restrict the admission of privileged containers. Audit: the audit lists the containers of all Pods to retrieve their .securityContext.privileged value. Condition: is_compliant is false if a container's .securityContext.privileged is set to true. Default: by default, there are no restrictions on the creation of privileged containers.

5.2.3 Minimize the admission of containers wishing to share the host process ID namespace (Manual)

*Result:* PASS

Audit:

kubectl get pods --all-namespaces -o custom-columns=POD_NAME:.metadata.name,POD_NAMESPACE:.metadata.namespace --no-headers | while read -r pod_name pod_namespace
do
  # Retrieve spec.hostPID for each pod.
  pod_hostpid=$(kubectl get pod "${pod_name}" --namespace "${pod_namespace}" -o jsonpath='{.spec.hostPID}' 2>/dev/null)
  if [ -z "${pod_hostpid}" ]; then
    pod_hostpid="false"
    echo "***pod_name: ${pod_name} pod_namespace: ${pod_namespace} is_pod_hostpid: ${pod_hostpid} is_compliant: true"
  else
    echo "***pod_name: ${pod_name} pod_namespace: ${pod_namespace} is_pod_hostpid: ${pod_hostpid} is_compliant: false"
  fi
done

Expected Result: 'is_compliant' is equal to 'true'

Returned Value:
***pod_name: coredns-559656f558-b89kj pod_namespace: kube-system is_pod_hostpid: false is_compliant: true
***pod_name: helm-install-traefik-crd-7fvrx pod_namespace: kube-system is_pod_hostpid: false is_compliant: true
***pod_name: helm-install-traefik-rttbs pod_namespace: kube-system is_pod_hostpid: false is_compliant: true
***pod_name: local-path-provisioner-7677785564-6kh8q pod_namespace: kube-system is_pod_hostpid: false is_compliant: true
***pod_name: metrics-server-7cbbc464f4-q8xn9 pod_namespace: kube-system is_pod_hostpid: false is_compliant: true
***pod_name: svclb-traefik-19f40894-dr4mq pod_namespace: kube-system is_pod_hostpid: false is_compliant: true
***pod_name: traefik-6c7b69cd74-v86cg pod_namespace: kube-system is_pod_hostpid: false is_compliant: true
Remediation:

Add policies to each namespace in the cluster which has user workloads, to restrict the admission of hostPID containers. Audit: the audit retrieves each Pod's spec.hostPID. Condition: is_compliant is false if a Pod's spec.hostPID is set to true. Default: by default, there are no restrictions on the creation of hostPID containers.

5.2.4 Minimize the admission of containers wishing to share the host IPC namespace (Manual)

*Result:* PASS

Audit:

kubectl get pods --all-namespaces -o custom-columns=POD_NAME:.metadata.name,POD_NAMESPACE:.metadata.namespace --no-headers | while read -r pod_name pod_namespace
do
  # Retrieve spec.hostIPC for each pod.
  pod_hostipc=$(kubectl get pod "${pod_name}" --namespace "${pod_namespace}" -o jsonpath='{.spec.hostIPC}' 2>/dev/null)
  if [ -z "${pod_hostipc}" ]; then
    pod_hostipc="false"
    echo "***pod_name: ${pod_name} pod_namespace: ${pod_namespace} is_pod_hostipc: ${pod_hostipc} is_compliant: true"
  else
    echo "***pod_name: ${pod_name} pod_namespace: ${pod_namespace} is_pod_hostipc: ${pod_hostipc} is_compliant: false"
  fi
done

Expected Result: 'is_compliant' is equal to 'true'

Returned Value:
***pod_name: coredns-559656f558-b89kj pod_namespace: kube-system is_pod_hostipc: false is_compliant: true
***pod_name: helm-install-traefik-crd-7fvrx pod_namespace: kube-system is_pod_hostipc: false is_compliant: true
***pod_name: helm-install-traefik-rttbs pod_namespace: kube-system is_pod_hostipc: false is_compliant: true
***pod_name: local-path-provisioner-7677785564-6kh8q pod_namespace: kube-system is_pod_hostipc: false is_compliant: true
***pod_name: metrics-server-7cbbc464f4-q8xn9 pod_namespace: kube-system is_pod_hostipc: false is_compliant: true
***pod_name: svclb-traefik-19f40894-dr4mq pod_namespace: kube-system is_pod_hostipc: false is_compliant: true
***pod_name: traefik-6c7b69cd74-v86cg pod_namespace: kube-system is_pod_hostipc: false is_compliant: true
Remediation:

Add policies to each namespace in the cluster which has user workloads, to restrict the admission of hostIPC containers. Audit: the audit retrieves each Pod's spec.hostIPC. Condition: is_compliant is false if a Pod's spec.hostIPC is set to true. Default: by default, there are no restrictions on the creation of hostIPC containers.

5.2.5 Minimize the admission of containers wishing to share the host network namespace (Manual)

*Result:* PASS

Audit:

kubectl get pods --all-namespaces -o custom-columns=POD_NAME:.metadata.name,POD_NAMESPACE:.metadata.namespace --no-headers | while read -r pod_name pod_namespace
do
  # Retrieve spec.hostNetwork for each pod.
  pod_hostnetwork=$(kubectl get pod "${pod_name}" --namespace "${pod_namespace}" -o jsonpath='{.spec.hostNetwork}' 2>/dev/null)
  if [ -z "${pod_hostnetwork}" ]; then
    pod_hostnetwork="false"
    echo "***pod_name: ${pod_name} pod_namespace: ${pod_namespace} is_pod_hostnetwork: ${pod_hostnetwork} is_compliant: true"
  else
    echo "***pod_name: ${pod_name} pod_namespace: ${pod_namespace} is_pod_hostnetwork: ${pod_hostnetwork} is_compliant: false"
  fi
done

Expected Result: 'is_compliant' is equal to 'true'

Returned Value:
***pod_name: coredns-559656f558-b89kj pod_namespace: kube-system is_pod_hostnetwork: false is_compliant: true
***pod_name: helm-install-traefik-crd-7fvrx pod_namespace: kube-system is_pod_hostnetwork: false is_compliant: true
***pod_name: helm-install-traefik-rttbs pod_namespace: kube-system is_pod_hostnetwork: false is_compliant: true
***pod_name: local-path-provisioner-7677785564-6kh8q pod_namespace: kube-system is_pod_hostnetwork: false is_compliant: true
***pod_name: metrics-server-7cbbc464f4-q8xn9 pod_namespace: kube-system is_pod_hostnetwork: false is_compliant: true
***pod_name: svclb-traefik-19f40894-dr4mq pod_namespace: kube-system is_pod_hostnetwork: false is_compliant: true
***pod_name: traefik-6c7b69cd74-v86cg pod_namespace: kube-system is_pod_hostnetwork: false is_compliant: true
Remediation:

Add policies to each namespace in the cluster which has user workloads, to restrict the admission of hostNetwork containers. Audit: the audit retrieves each Pod's spec.hostNetwork. Condition: is_compliant is false if a Pod's spec.hostNetwork is set to true. Default: by default, there are no restrictions on the creation of hostNetwork containers.

5.2.6 Minimize the admission of containers with allowPrivilegeEscalation (Manual)

*Result:* PASS

Audit:

kubectl get pods --all-namespaces -o custom-columns=POD_NAME:.metadata.name,POD_NAMESPACE:.metadata.namespace --no-headers | while read -r pod_name pod_namespace
do
  # Retrieve container(s) for each Pod.
  kubectl get pod "${pod_name}" --namespace "${pod_namespace}" -o json | jq -c '.spec.containers[]' | while read -r container
  do
    # Retrieve container's name
    container_name=$(echo ${container} | jq -r '.name')
    # Retrieve container's .securityContext.allowPrivilegeEscalation
    container_allowprivesc=$(echo ${container} | jq -r '.securityContext.allowPrivilegeEscalation' | sed -e 's/null/notset/g')
    if [ "${container_allowprivesc}" = "false" ] || [ "${container_allowprivesc}" = "notset" ]; then
      echo "***pod_name: ${pod_name} container_name: ${container_name} pod_namespace: ${pod_namespace} is_container_allowprivesc: ${container_allowprivesc} is_compliant: true"
    else
      echo "***pod_name: ${pod_name} container_name: ${container_name} pod_namespace: ${pod_namespace} is_container_allowprivesc: ${container_allowprivesc} is_compliant: false"
    fi
  done
done

Expected Result: 'is_compliant' is equal to 'true'

Returned Value:
***pod_name: coredns-559656f558-b89kj container_name: coredns pod_namespace: kube-system is_container_allowprivesc: false is_compliant: true
***pod_name: helm-install-traefik-crd-7fvrx container_name: helm pod_namespace: kube-system is_container_allowprivesc: false is_compliant: true
***pod_name: helm-install-traefik-rttbs container_name: helm pod_namespace: kube-system is_container_allowprivesc: false is_compliant: true
***pod_name: local-path-provisioner-7677785564-6kh8q container_name: local-path-provisioner pod_namespace: kube-system is_container_allowprivesc: notset is_compliant: true
***pod_name: metrics-server-7cbbc464f4-q8xn9 container_name: metrics-server pod_namespace: kube-system is_container_allowprivesc: false is_compliant: true
***pod_name: svclb-traefik-19f40894-dr4mq container_name: lb-tcp-80 pod_namespace: kube-system is_container_allowprivesc: notset is_compliant: true
***pod_name: svclb-traefik-19f40894-dr4mq container_name: lb-tcp-443 pod_namespace: kube-system is_container_allowprivesc: notset is_compliant: true
***pod_name: traefik-6c7b69cd74-v86cg container_name: traefik pod_namespace: kube-system is_container_allowprivesc: false is_compliant: true
Remediation:

Add policies to each namespace in the cluster which has user workloads, to restrict the admission of containers with .securityContext.allowPrivilegeEscalation set to true. Audit: the audit retrieves each Pod's containers' .securityContext.allowPrivilegeEscalation. Condition: is_compliant is false if a container's .securityContext.allowPrivilegeEscalation is set to true. Default: if not set, privilege escalation is allowed (it defaults to true). However, under a PSP/PSA restricted profile, privilege escalation is explicitly disallowed unless configured otherwise.

5.2.7 Minimize the admission of root containers (Manual)

*Result:* WARN

Remediation: Create a policy for each namespace in the cluster, ensuring that either MustRunAsNonRoot or MustRunAs with a range of UIDs not including 0, is set.
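A minimal Pod-spec sketch of the non-root requirement (the UID is hypothetical and must be one the image actually supports):

```yaml
# Pod spec fragment: refuse to run the container processes as root.
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000      # any non-zero UID supported by the image
```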

5.2.8 Minimize the admission of containers with the NET_RAW capability (Manual)

*Result:* WARN

Remediation: Add policies to each namespace in the cluster which has user workloads, to restrict the admission of containers with the NET_RAW capability.
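At the workload level, the capability can also be dropped explicitly; a minimal container-spec sketch (container name and image are placeholders):

```yaml
# Container spec fragment: remove the NET_RAW capability.
containers:
  - name: app            # hypothetical name
    image: nginx         # placeholder image
    securityContext:
      capabilities:
        drop:
          - NET_RAW
```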

5.2.9 Minimize the admission of containers with added capabilities (Manual)

*Result:* PASS

Audit:

kubectl get pods --all-namespaces -o custom-columns=POD_NAME:.metadata.name,POD_NAMESPACE:.metadata.namespace --no-headers | while read -r pod_name pod_namespace
do
  # Retrieve container(s) for each Pod.
  kubectl get pod "${pod_name}" --namespace "${pod_namespace}" -o json | jq -c '.spec.containers[]' | while read -r container
  do
    # Retrieve container's name
    container_name=$(echo ${container} | jq -r '.name')
    # Retrieve container's added capabilities
    container_caps_add=$(echo ${container} | jq -r '.securityContext.capabilities.add' | sed -e 's/null/notset/g')
    # Set is_compliant to true by default.
    is_compliant=true
    is_whitelist=false
    caps_list=""

    # Check if pod is in whitelist
    if echo "${pod_name}" | grep -q -E "^(coredns|svclb-traefik)"; then
      is_whitelist=true
      is_compliant=true
    elif [ "${container_caps_add}" != "notset" ]; then
      # Loop through all caps and append caps_list, then set is_compliant to false.
      for cap in $(echo "${container_caps_add}" | jq -r '.[]'); do
        caps_list="${caps_list}${cap},"
        is_compliant=false
      done
      # Remove trailing comma for the last list member.
      caps_list=${caps_list%,}
    fi
    # Remove newlines from final output.
    container_caps_add=$(echo "${container_caps_add}" | tr -d '\n')
    if [ "${is_whitelist}" = true ]; then
      printf "***pod_name: %-30s container_name: %-30s pod_namespace: %-20s is_whitelist: %-5s is_compliant: true\n" "${pod_name}" "${container_name}" "${pod_namespace}" "${is_whitelist}"
    elif [ "${is_compliant}" = true ]; then
      printf "***pod_name: %-30s container_name: %-30s pod_namespace: %-20s container_caps_add: %-15s is_compliant: true\n" "${pod_name}" "${container_name}" "${pod_namespace}" "${container_caps_add}"
    else
      printf "***pod_name: %-30s container_name: %-30s pod_namespace: %-20s container_caps_add: %-15s is_compliant: false\n" "${pod_name}" "${container_name}" "${pod_namespace}" "${caps_list}"
    fi
  done
done

Expected Result: 'is_compliant' is equal to 'true'

Returned Value:
***pod_name: coredns-559656f558-b89kj       container_name: coredns                        pod_namespace: kube-system          is_whitelist: true  is_compliant: true
***pod_name: helm-install-traefik-crd-7fvrx container_name: helm                           pod_namespace: kube-system          container_caps_add: notset          is_compliant: true
***pod_name: helm-install-traefik-rttbs     container_name: helm                           pod_namespace: kube-system          container_caps_add: notset          is_compliant: true
***pod_name: local-path-provisioner-7677785564-6kh8q container_name: local-path-provisioner         pod_namespace: kube-system          container_caps_add: notset          is_compliant: true
***pod_name: metrics-server-7cbbc464f4-q8xn9 container_name: metrics-server                 pod_namespace: kube-system          container_caps_add: notset          is_compliant: true
***pod_name: svclb-traefik-19f40894-dr4mq   container_name: lb-tcp-80                      pod_namespace: kube-system          is_whitelist: true  is_compliant: true
***pod_name: svclb-traefik-19f40894-dr4mq   container_name: lb-tcp-443                     pod_namespace: kube-system          is_whitelist: true  is_compliant: true
***pod_name: traefik-6c7b69cd74-v86cg       container_name: traefik                        pod_namespace: kube-system          container_caps_add: notset          is_compliant: true
Remediation:

Ensure that allowedCapabilities is not present in policies for the cluster unless it is set to an empty array. Audit: the audit retrieves the added capabilities of each Pod's containers. Condition: is_compliant is false if a given container has added capabilities. Default: containers run with a default set of capabilities as assigned by the container runtime. K3s makes exceptions for the following Pods, which are required for regular operation:

  • coredns, svclb-traefik

5.2.10 Minimize the admission of containers with capabilities assigned (Manual)

*Result:* WARN

Remediation: Review the use of capabilities in applications running on your cluster. Where a namespace contains applications which do not require any Linux capabilities to operate, consider adding a PSP which forbids the admission of containers which do not drop all capabilities.
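For workloads that need no Linux capabilities, a minimal container-spec sketch (name and image are placeholders) drops them all:

```yaml
# Container spec fragment: drop every capability; add back only what
# the application demonstrably needs (here: nothing).
containers:
  - name: app            # hypothetical name
    image: nginx         # placeholder image
    securityContext:
      capabilities:
        drop:
          - ALL
```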

5.2.11 Minimize the admission of Windows HostProcess containers (Manual)

*Result:* WARN

Remediation: Add policies to each namespace in the cluster which has user workloads, to restrict the admission of containers that have .securityContext.windowsOptions.hostProcess set to true.

5.2.12 Minimize the admission of HostPath volumes (Manual)

*Result:* WARN

Remediation: Add policies to each namespace in the cluster which has user workloads, to restrict the admission of containers with hostPath volumes.

5.2.13 Minimize the admission of containers which use HostPorts (Manual)

*Result:* WARN

Remediation: Add policies to each namespace in the cluster which has user workloads, to restrict the admission of containers which use hostPort sections.

5.3 Network Policies and CNI

5.3.1 Ensure that the CNI in use supports NetworkPolicies (Manual)

*Result:* WARN

Remediation: If the CNI plugin in use does not support network policies, consideration should be given to making use of a different plugin, or finding an alternate mechanism for restricting traffic in the Kubernetes cluster.

5.3.2 Ensure that all Namespaces have NetworkPolicies defined (Manual)

*Result:* WARN

Remediation: Follow the documentation and create NetworkPolicy objects as you need them.
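A common starting point is a default-deny policy per namespace, which workload-specific allow rules then punch holes in; a minimal sketch (the policy and namespace names are hypothetical):

```yaml
# Deny all ingress and egress traffic for every Pod in the namespace
# until more specific NetworkPolicies allow it.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all   # hypothetical name
  namespace: my-app        # hypothetical namespace
spec:
  podSelector: {}          # selects all Pods in the namespace
  policyTypes:
    - Ingress
    - Egress
```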

5.4 Secrets Management

5.4.1 Prefer using Secrets as files over Secrets as environment variables (Manual)

*Result:* WARN

Remediation: If possible, rewrite application code to read Secrets from mounted secret files, rather than from environment variables.
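A minimal sketch of mounting a Secret as files instead of exposing it through environment variables (the Pod, volume, and Secret names are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app                  # hypothetical Pod
spec:
  containers:
    - name: app
      image: nginx           # placeholder image
      volumeMounts:
        - name: creds
          mountPath: /etc/creds   # the app reads files here
          readOnly: true
  volumes:
    - name: creds
      secret:
        secretName: app-secret    # hypothetical Secret
```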

5.4.2 Consider external secret storage (Manual)

*Result:* WARN

Remediation: Refer to the secrets management options offered by your cloud provider or a third-party secrets management solution.

5.5 Extensible Admission Control

5.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)

*Result:* WARN

Remediation: Follow the Kubernetes documentation and set up image provenance.

5.7 General Policies

5.7.1 Create administrative boundaries between resources using namespaces (Manual)

*Result:* WARN

Remediation: Follow the documentation and create namespaces for objects in your deployment as you need them.

5.7.2 Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual)

*Result:* WARN

Remediation: Use `securityContext` to enable the docker/default seccomp profile in your Pod definitions. An example is as follows:
securityContext:
  seccompProfile:
    type: RuntimeDefault

5.7.3 Apply SecurityContext to your Pods and Containers (Manual)

*Result:* WARN

Remediation: Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker Containers.

5.7.4 The default namespace should not be used (Manual)

*Result:* WARN

Remediation: Ensure that namespaces are created to allow for appropriate segregation of Kubernetes resources, and that all new resources are created in a specific namespace.
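A minimal sketch of the practice (the namespace, Pod, and image names are hypothetical): create a dedicated namespace and set it explicitly on every resource.

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a          # hypothetical namespace
---
apiVersion: v1
kind: Pod
metadata:
  name: app
  namespace: team-a     # always set an explicit namespace
spec:
  containers:
    - name: app
      image: nginx      # placeholder image
```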