SUSE Multi-Linux Manager 5.2 Beta 2 Proxy Deployment on Kubernetes

1. Proxy on Kubernetes changes

There are several changes in how SUSE Multi-Linux Manager proxies running on Kubernetes are installed:

  • mgrpxy no longer handles proxies on Kubernetes; helm and the proxy-helm chart must be used instead.

  • The TLS certificates have to be provided in Secrets rather than in the configuration tarball. This allows cloud-native TLS certificate management for the proxies.

  • The proxy queries the server when the container starts to verify that the versions are compatible.

  • The needed persistent volume claims have been reduced to the Squid cache only.

  • The SUSE Multi-Linux Manager proxy is supported when running on an RKE2 cluster; K3s is no longer supported.

2. Prerequisites

Installing the Kubernetes cluster and configuring it is out of the scope of this document.

The cluster is assumed to be ready for use, with a user having rights on a namespace dedicated to SUSE Multi-Linux Manager.

Create the Role and RoleBinding if they do not already exist. The minimum rights required to deploy the proxy-helm chart are defined as:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: example-resource-manager
  namespace: $NAMESPACE
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log", "services", "secrets", "configmaps", "persistentvolumeclaims"]
  verbs: ["*"]
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["*"]
- apiGroups: ["networking.k8s.io"]
  resources: ["ingresses"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: example-resource-manager-binding
  namespace: $NAMESPACE
subjects:
- kind: User
  name: $USERNAME
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: example-resource-manager
  apiGroup: rbac.authorization.k8s.io
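As a sketch of how the $NAMESPACE and $USERNAME placeholders in the manifest above might be resolved, they can be substituted with sed before applying. The file names and the values uyuni-proxy and proxy-admin used here are examples, not defaults:

```shell
# Resolve the placeholders from the RBAC manifest with sed before
# applying it. Only a fragment of the manifest is shown here;
# "uyuni-proxy" and "proxy-admin" are example values.
cat > rbac-fragment.yaml <<'EOF'
metadata:
  name: example-resource-manager
  namespace: $NAMESPACE
subjects:
- kind: User
  name: $USERNAME
EOF
sed -e 's/$NAMESPACE/uyuni-proxy/g' -e 's/$USERNAME/proxy-admin/g' \
    rbac-fragment.yaml > rbac-resolved.yaml
# The full, resolved manifest would then be applied with:
#   kubectl apply -f rbac-resolved.yaml
```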

This guide assumes the reader knows how to work with Kubernetes: the concepts will not be explained here as they are extensively documented in the official Kubernetes documentation.

The SUSE Multi-Linux Manager administrator needs to deploy the proxy-helm Helm chart. However, installing this chart requires preparing:

  • the TLS certificate chain for the proxy,

  • a ConfigMap for the proxy root CA certificate,

  • a persistent volume for the claim the chart will create, or a storage class creating it automatically,

  • load balancers or other mechanisms to expose the Salt, SSH and TFTP ports.

Run the following command to read the full details on how to use the proxy Helm chart:

helm show readme --version 5.2.0-beta2 \
    oci://registry.suse.com/suse/multi-linux-manager/5.2/proxy-helm

2.1. Credentials

A secret with the SCC credentials needs to be defined in order to pull the images from registry.suse.com. Refer to https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ for instructions on preparing the secret. To use it, set the registrySecret proxy-helm chart value to the name of the secret containing those credentials.
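The secret referenced by registrySecret is a standard kubernetes.io/dockerconfigjson pull secret. The following is a sketch of its shape; the name scc-credentials and the base64 payload are placeholders, and in practice it is simpler to create it with kubectl create secret docker-registry as described in the linked documentation:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: scc-credentials
  namespace: $NAMESPACE
type: kubernetes.io/dockerconfigjson
data:
  # base64-encoded Docker config.json holding the registry.suse.com
  # credentials (the value below is a placeholder)
  .dockerconfigjson: eyJhdXRocyI6IHsgLi4uIH19
```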

2.2. TLS setup

The proxy-cert TLS secret is expected. It contains the TLS certificate and key for the Ingress rule and needs to have the public FQDN as Subject Alternative Name.

This secret can be created using the kubectl create secret tls -n $NAMESPACE command. The certificate file passed to this command needs to start with the server certificate, followed by the chain of intermediary CA certificates, if any. The root CA is not needed in this secret, as it is expected in a ConfigMap.

The root CA certificate of proxy-cert is expected in a ConfigMap named uyuni-ca, stored under the ca.crt key. It can be created with a command like kubectl create cm -n $NAMESPACE uyuni-ca --from-file=ca.crt=/path/to/uyuni-ca.crt.
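As an illustration of this setup with a throwaway self-signed chain (proxy.example.com and all file names here are placeholders; in production, use certificates issued by your real CA):

```shell
# Generate a throwaway root CA and a proxy certificate carrying the
# public FQDN as Subject Alternative Name. "proxy.example.com" is a
# placeholder -- substitute your real proxy FQDN.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout ca.key -out ca.crt -subj "/CN=Example Root CA"
openssl req -newkey rsa:2048 -nodes \
    -keyout proxy.key -out proxy.csr -subj "/CN=proxy.example.com"
printf 'subjectAltName=DNS:proxy.example.com\n' > san.ext
openssl x509 -req -in proxy.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -days 365 -out proxy.crt -extfile san.ext
# The secret and ConfigMap can then be created from these files:
#   kubectl create secret tls proxy-cert -n $NAMESPACE \
#       --cert=proxy.crt --key=proxy.key
#   kubectl create cm -n $NAMESPACE uyuni-ca --from-file=ca.crt=ca.crt
```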

2.3. Storage

The proxy chart defines a volume as a Persistent Volume Claim (PVC).

  • The creation of the underlying PV is the responsibility of the cluster administrators.

  • The PVC uses the ReadWriteOnce access mode.

The created PVC can be tuned using Helm chart values. The following values are available:

  • size: to set the requested size of the PVC.

  • storageClass: can be used to select the storage class to use for the PVC.

  • extraLabels: can be used to add custom labels to the PVC.

  • annotations: can be used to set custom annotations on the PVC.

  • volumeName: can be used to hard code which volume the PVC should be bound to.

  • selector: is the YAML fragment of the PVC selector to use to find the PV to bind to.
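Put together, these values could look like the following sketch. The sizes, class and label names are examples, and the exact nesting of these keys under the chart's values is an assumption; check the proxy-helm README for the authoritative structure:

```yaml
# Example PVC tuning values -- all names and sizes are placeholders.
size: 20Gi
storageClass: longhorn
extraLabels:
  app: uyuni-proxy
annotations:
  example.com/backup: "enabled"
selector:
  matchLabels:
    usage: squid-cache
```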

Refer to https://kubernetes.io/docs/concepts/storage/persistent-volumes/ for more information on persistent volumes and their claims.

Refer to the proxy-helm README for the list of persistent volume claims which will be created and will need to be bound to persistent volumes.

While default sizes are provided, it is highly recommended to adjust them based on the distributions you plan to synchronize.

For more information on storage requirements, see General Requirements.

2.4. Exposing ports

SUSE Multi-Linux Manager proxy requires some TCP and UDP ports to be routed to its services. Refer to the proxy-helm README for the list of ports to be exposed.

RKE2 ships with nginx as the default ingress controller. However, as nginx is deprecated and soon to be unsupported, the proxy-helm chart defaults to using Traefik as the ingress controller. Using the nginx ingress controller might work but will not be documented; use it at your own risk.

The proxy-helm chart supports Gateway API version 1.4. Since this requires experimental CRDs which are not shipped with RKE2 1.35, it is not recommended for production use.

There are multiple ways to expose the ports, but this documentation only covers how to configure RKE2's Traefik for this. This is not a task for the SUSE Multi-Linux Manager administrator, but for the Kubernetes cluster administrator, as it requires configuration to be set on the cluster nodes.

To set up Traefik to expose and route the needed ports, create a /var/lib/rancher/rke2/server/manifests/uyuni-traefik.yaml file on each node with the following content. Note that Traefik takes a few seconds to be reinstalled after the file is saved.

apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-traefik
  namespace: kube-system
spec:
  valuesContent: |-
    ports:
      ssh:
        port: 8022
        expose:
          default: true
        exposedPort: 8022
        protocol: TCP
        hostPort: 8022
      salt-publish:
        port: 4505
        expose:
          default: true
        exposedPort: 4505
        protocol: TCP
        hostPort: 4505
        containerPort: 4505
      salt-request:
        port: 4506
        expose:
          default: true
        exposedPort: 4506
        protocol: TCP
        hostPort: 4506
        containerPort: 4506

If Traefik is used as the Ingress controller, the user needs access to additional resources. Add the following to the rules of the previously defined role:

- apiGroups: ["traefik.io", "traefik.containo.us"]
  resources: ["ingressroutetcps"]
  verbs: ["*"]

If Gateway API is used instead, add the following to the rules of the previously defined role:

- apiGroups: ["gateway.networking.k8s.io"]
  resources: ["gateways", "httproutes", "tcproutes"]
  verbs: ["*"]

TFTP is complex to expose from a Kubernetes pod due to the nature of the protocol: the TFTP server receives requests on port 69, but negotiates another random port to continue. This port also needs to stay the same through the whole session for the server to recognize the client as being the same. This means that there are only two possible ways to use the TFTP server:

  • using a load balancer compatible with TFTP,

  • using the host network for the TFTP pod. This can be achieved by setting the tftp.hostNetwork Helm chart value to true.
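For the second option, the corresponding Helm values fragment is simply:

```yaml
# Run the TFTP pod on the host network so the randomly negotiated
# port reaches the client directly.
tftp:
  hostNetwork: true
```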

3. Configuration generation

Before deploying the SUSE Multi-Linux Manager proxy, a configuration archive needs to be generated.

3.1. Generating the Proxy Configuration with the Web UI

Procedure: Generating the Proxy Container Configuration with the Web UI
  1. In the Web UI, navigate to Systems > Proxy Configuration and fill in the required data:

  2. In the Proxy FQDN field, type the fully qualified domain name of the proxy.

  3. In the Parent FQDN field, type the fully qualified domain name of the SUSE Multi-Linux Manager Server or of another SUSE Multi-Linux Manager Proxy.

  4. In the Proxy SSH port field, type the SSH port on which the SSH service of the SUSE Multi-Linux Manager Proxy listens. Keeping the default value 8022 is recommended.

  5. In the Max Squid cache size [MB] field, type the maximum allowed size for the Squid cache. It is recommended to use at most 80% of the storage available to the containers.

    2 GB represents the default proxy Squid cache size. This size needs to be adjusted to your environment.

    In the SSL certificate selection list, choose whether a new server certificate should be generated for the SUSE Multi-Linux Manager Proxy or an existing one should be used. Generated certificates can be considered SUSE Multi-Linux Manager built-in (self-signed) certificates. If the SUSE Multi-Linux Manager Server runs on Kubernetes, the generated certificate option is not available and is replaced with no SSL certificate, as certificates are managed outside the containers.

    Then, depending on the choice made, provide either the path to the signing CA certificate used to generate the new certificate, or the paths to the existing certificate to be used as the proxy certificate and to its key.

    The CA certificates generated on the server are stored in the /var/lib/containers/storage/volumes/root/_data/ssl-build directory.

    For more information about existing or custom certificates and the concepts of corporate and intermediate certificates, see Importing SSL Certificates.

  6. Click Generate to register the new proxy FQDN in the SUSE Multi-Linux Manager Server and generate the configuration archive (config.tar.gz) containing details for the container host.

  7. After a few moments, the file is presented for download. Save this file locally.

3.2. Generating the Proxy Configuration with spacecmd and Self-Signed Certificates

You can generate a proxy configuration using spacecmd. This is only possible if the SUSE Multi-Linux Manager Server runs on Podman and has a self-signed root CA certificate.

Procedure: Generating the Proxy Configuration with spacecmd and Self-Signed Certificates
  1. Connect to your container host via SSH.

  2. Execute the following command, replacing the Server and Proxy FQDN:

    mgrctl exec -ti 'spacecmd proxy_container_config_generate_cert -- dev-pxy.example.com dev-srv.example.com 2048 email@example.com -o /tmp/config.tar.gz'
  3. Copy the generated configuration from the server container:

    mgrctl cp server:/tmp/config.tar.gz

3.3. Generating the Proxy Configuration with spacecmd and Custom Certificates

A proxy configuration can be generated for custom certificates, rather than the default self-signed ones, using spacecmd.

Procedure: Generating the Proxy Configuration with spacecmd and Custom Certificates
  1. Connect to your server container host via SSH.

  2. Execute the following commands, replacing the Server and Proxy FQDN:

    for f in ca.crt proxy.crt proxy.key; do
      mgrctl cp $f server:/tmp/$f
    done
    mgrctl exec -ti 'spacecmd proxy_container_config -- -p 8022 pxy.example.com srv.example.com 2048 email@example.com /tmp/ca.crt /tmp/proxy.crt /tmp/proxy.key -o /tmp/config.tar.gz'
  3. If your setup uses an intermediate CA, also copy its certificate and include it in the command with the -i option, which can be provided multiple times if needed:

    mgrctl cp intermediateCA.pem server:/tmp/intermediateCA.pem
    mgrctl exec -ti 'spacecmd proxy_container_config -- -p 8022 -i /tmp/intermediateCA.pem pxy.example.com srv.example.com 2048 email@example.com /tmp/ca.crt /tmp/proxy.crt /tmp/proxy.key -o /tmp/config.tar.gz'
  4. Copy the generated configuration from the server container:

    mgrctl cp server:/tmp/config.tar.gz

3.4. Generate Proxy Configuration With spacecmd and no Certificate

You can generate a Proxy configuration using spacecmd with no TLS certificates. This is needed for SUSE Multi-Linux Manager running on Kubernetes as the certificates are handled outside of the containers.

Procedure: Generating Proxy Configuration with spacecmd and no Certificate
  1. Connect to your server container host via SSH.

  2. Execute the following command, replacing the Server and Proxy FQDN:

    mgrctl exec -ti 'spacecmd proxy_container_config_nossl -- -p 8022 pxy.example.com srv.example.com 2048 email@example.com -o /tmp/config.tar.gz'
  3. Copy the generated configuration from the server container:

    mgrctl cp server:/tmp/config.tar.gz

4. Deploying the SUSE Multi-Linux Manager Proxy Helm Chart

Copy and extract the generated configuration tar.gz file, then install it using helm:

helm install smlm-proxy \
    oci://registry.suse.com/suse/multi-linux-manager/5.2/proxy-helm \
    -n $NAMESPACE \
    --description "Proxy installation" \
    --set "registrySecret=the-scc-secret" \
    --set-file global.config=path/to/config.yaml \
    --set-file global.ssh=path/to/ssh.yaml \
    --set-file global.httpd=path/to/httpd.yaml

When setting multiple values, using a YAML values file is recommended instead of passing several --set parameters. Refer to the helm command help for more details.
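For example, scalar values such as registrySecret from the install command above could be collected in a values file (the name proxy-values.yaml is arbitrary). Note that file-based values such as global.config still need --set-file, since a values file cannot embed external file contents:

```yaml
# proxy-values.yaml -- example values file; "the-scc-secret" is the
# placeholder secret name from the install command above
registrySecret: the-scc-secret
```

The file is then passed to helm install with -f proxy-values.yaml in place of the corresponding --set flag.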

5. Example helm charts

Some Helm charts using the proxy-helm chart can be found in the Manager-5.2 branch of the uyuni-charts git repository. They showcase how the TLS certificate can be generated using cert-manager and trust-manager. These examples may assume Kubernetes cluster administrator permissions.

These examples are not supported; they are provided for documentation purposes only.