SUSE Multi-Linux Manager 5.2 Beta 2 Proxy Deployment on Kubernetes
1. Proxy on Kubernetes changes
There were multiple changes in how to install SUSE Multi-Linux Manager proxies running on Kubernetes:
- mgrpxy no longer handles proxies on Kubernetes; helm and the proxy-helm chart need to be used instead.
- The TLS certificates have to be in secrets rather than in the configuration tarball. This allows cloud-native TLS certificate management for the proxies.
- At container start, the proxy queries the server to verify that the versions are compatible.
- The needed persistent volume claims have been reduced to the squid cache only.
- The SUSE Multi-Linux Manager proxy is supported when running on an RKE2 cluster; K3s is no longer supported.
2. Prerequisites
Installing the Kubernetes cluster and configuring it is out of the scope of this document.
The cluster is assumed to be ready to be used with a user having rights on a namespace dedicated to SUSE Multi-Linux Manager.
Create Role and RoleBinding if they do not exist already. The minimum rights required to deploy proxy-helm are defined as:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: example-resource-manager
  namespace: $NAMESPACE
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log", "services", "secrets", "configmaps", "persistentvolumeclaims"]
    verbs: ["*"]
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["*"]
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: example-resource-manager-binding
  namespace: $NAMESPACE
subjects:
  - kind: User
    name: $USERNAME
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: example-resource-manager
  apiGroup: rbac.authorization.k8s.io
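As a sketch (file and variable names are assumptions), the manifest above can be rendered with concrete values for $NAMESPACE and $USERNAME before a cluster administrator applies it:

```shell
#!/bin/sh
# Hypothetical values; adjust to your cluster and user.
NAMESPACE=uyuni
USERNAME=smlm-proxy-user

# Minimal stand-in for the Role/RoleBinding manifest above, kept short here:
cat > rbac.yaml <<'EOF'
kind: Role
metadata:
  namespace: $NAMESPACE
subjects:
  - kind: User
    name: $USERNAME
EOF

# Substitute the placeholders, then apply the result with kubectl:
sed -e "s/\$NAMESPACE/$NAMESPACE/g" -e "s/\$USERNAME/$USERNAME/g" \
    rbac.yaml > rbac-rendered.yaml
# kubectl apply -f rbac-rendered.yaml
```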
|
This guide assumes the reader knows how to work with Kubernetes: the concepts will not be explained here as they are extensively documented in the official Kubernetes documentation. |
The SUSE Multi-Linux Manager administrator needs to deploy the proxy-helm Helm chart. However, this chart requires the following to be prepared:
- a TLS certificate chain for the proxy,
- a ConfigMap for the proxy root CA certificate,
- a persistent volume for the claim the chart will create, or a storage class creating it automatically,
- load balancers or other mechanisms to expose the Salt, SSH and TFTP ports.
Run the following command to read the full details on how to use the proxy Helm chart:
helm show readme --version 5.2.0-beta2 \
oci://registry.suse.com/suse/multi-linux-manager/5.2/proxy-helm
2.1. Credentials
A secret with the SCC credentials needs to be defined in order to pull the images from registry.suse.com. Refer to https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ for the instructions to prepare the secret. Set the registrySecret proxy-helm chart value to the name of the secret containing those credentials to use it.
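As an illustration of what such a secret holds (all names and credentials below are placeholders), the secret wraps a .dockerconfigjson payload with the SCC credentials for registry.suse.com; on a cluster, the commented kubectl one-liner is the preferred way to create it:

```shell
#!/bin/sh
# Placeholder SCC credentials; replace with your own.
SCC_USER=SCC_myuser
SCC_PASS=mypassword

# The auth field is the base64 encoding of "user:password".
AUTH=$(printf '%s:%s' "$SCC_USER" "$SCC_PASS" | base64)
printf '{"auths":{"registry.suse.com":{"auth":"%s"}}}' "$AUTH" > dockerconfig.json

# Equivalent on a cluster (hypothetical secret name):
# kubectl create secret docker-registry registry-credentials -n $NAMESPACE \
#   --docker-server=registry.suse.com \
#   --docker-username="$SCC_USER" --docker-password="$SCC_PASS"
cat dockerconfig.json
```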
2.2. TLS setup
The proxy-cert TLS secret is expected. It contains the TLS certificate and key for the Ingress rule and needs to have the public FQDN as a Subject Alternative Name.
This secret can be created using the kubectl create secret tls -n $NAMESPACE command. The certificate file passed to this command needs to start with the server certificate, followed by the chain of intermediate CA certificates if any. The root CA is not needed in this secret as it is expected in a ConfigMap.
The Root CA certificate of proxy-cert is expected in a ConfigMap named uyuni-ca stored in the ca.crt key. It can be created with a command like kubectl create cm -n $NAMESPACE uyuni-ca --from-file=ca.crt=/path/to/uyuni-ca.crt.
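The required certificate ordering can be sketched as follows; the PEM contents are dummy stand-ins so the ordering stays visible, and the kubectl commands are commented out since they need real certificates and a cluster:

```shell
#!/bin/sh
# Dummy stand-ins for the real PEM files:
printf '%s\n' '-----SERVER CERT-----'     > proxy.crt
printf '%s\n' '-----INTERMEDIATE CA-----' > intermediate.crt

# The chain file must start with the server certificate,
# followed by any intermediate CA certificates:
cat proxy.crt intermediate.crt > proxy-chain.crt

# With real files, create the secret and the root CA ConfigMap:
# kubectl create secret tls proxy-cert -n $NAMESPACE \
#   --cert=proxy-chain.crt --key=proxy.key
# kubectl create cm -n $NAMESPACE uyuni-ca --from-file=ca.crt=/path/to/uyuni-ca.crt
head -n 1 proxy-chain.crt   # the server certificate comes first
```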
2.3. Storage
The proxy chart defines a volume as a Persistent Volume Claim (PVC).
The created PVC can be tuned via Helm chart values:
- size: sets the requested size of the PVC.
- storageClass: selects the storage class to use for the PVC.
- extraLabels: adds custom labels to the PVC.
- annotations: sets custom annotations on the PVC.
- volumeName: hard-codes which volume the PVC should be bound to.
- selector: the YAML fragment of the PVC selector used to find the PV to bind to.
Refer to https://kubernetes.io/docs/concepts/storage/persistent-volumes/ for more information on persistent volumes and their claims.
Refer to the proxy-helm README for the list of persistent volume claims which will be created and will need to be bound to persistent volumes.
|
While default sizes are provided, it is highly recommended to change them based on the distributions you plan to synchronize. For more information on storage requirements, see General Requirements. |
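For illustration, a values fragment tuning the PVC might look as follows. All values below are examples rather than defaults, and the exact nesting of these keys is defined in the proxy-helm README:

```yaml
# Hypothetical PVC tuning values (example values, not defaults):
size: 100Gi
storageClass: longhorn
extraLabels:
  app: smlm-proxy
annotations:
  example.org/backup: "enabled"
```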
2.4. Exposing ports
SUSE Multi-Linux Manager proxy requires some TCP and UDP ports to be routed to its services. Refer to the proxy-helm README for the list of ports to be exposed.
|
RKE2 ships with nginx as the default ingress controller. However, as this is deprecated and soon to be unsupported, this documentation uses Traefik instead. |
There are multiple ways to expose the ports, but this documentation only covers configuring RKE2's Traefik. This is not a task for the SUSE Multi-Linux Manager administrator but for the Kubernetes cluster administrator, as it requires configuration on the cluster nodes.
To configure Traefik to expose and route the needed ports, create a /var/lib/rancher/rke2/server/manifests/uyuni-traefik.yaml file on each node with the following content. Note that Traefik takes a few seconds to be reinstalled after the file is saved.
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-traefik
  namespace: kube-system
spec:
  valuesContent: |-
    ports:
      ssh:
        port: 8022
        expose:
          default: true
        exposedPort: 8022
        protocol: TCP
        hostPort: 8022
      salt-publish:
        port: 4505
        expose:
          default: true
        exposedPort: 4505
        protocol: TCP
        hostPort: 4505
        containerPort: 4505
      salt-request:
        port: 4506
        expose:
          default: true
        exposedPort: 4506
        protocol: TCP
        hostPort: 4506
        containerPort: 4506
If Traefik is used as the Ingress controller, the user needs access to additional resources. Add the following to the rules of the previously defined role:
- apiGroups: ["traefik.io", "traefik.containo.us"]
resources: ["ingressroutetcps"]
verbs: ["*"]
If Gateway API is used instead, add the following to the rules of the previously defined role:
- apiGroups: ["gateway.networking.k8s.io"]
resources: ["gateways", "httproutes", "tcproutes"]
verbs: ["*"]
TFTP is complex to expose from a Kubernetes pod due to the nature of the protocol: the TFTP server receives requests on port 69, but negotiates another random port to continue. This port also needs to stay the same through the whole session for the server to recognize the client as being the same. This means that there are only two possible ways to use the TFTP server:
- using a load balancer compatible with TFTP,
- using the host network for the TFTP pod. This can be achieved by setting the tftp.hostNetwork helm chart value to true.
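As a minimal illustration, the host-network option can be enabled through a Helm values file; only the chart value named above is shown, with the nesting suggested by its dotted name:

```yaml
# Enable host networking for the TFTP pod (tftp.hostNetwork chart value):
tftp:
  hostNetwork: true
```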
3. Configuration generation
Before deploying the SUSE Multi-Linux Manager proxy, a configuration archive needs to be generated.
3.1. Generating the Proxy Configuration Using the Web UI
In the Web UI, navigate to the proxy configuration page and enter the required data.
In the Proxy FQDN field, enter the fully qualified domain name of the proxy.
In the Parent FQDN field, enter the fully qualified domain name of the SUSE Multi-Linux Manager Server or of another SUSE Multi-Linux Manager Proxy.
In the Proxy SSH port field, enter the SSH port on which the SSH service of the SUSE Multi-Linux Manager Proxy listens. It is recommended to keep the default port, 8022.
In the Max Squid cache size [MB] field, enter the maximum allowed size for the Squid cache. It is recommended to use at most 80% of the storage available to the containers.
2 GB is the default proxy Squid cache size. Adjust it to fit your environment.
In the SSL certificate selection list, choose whether a new server certificate should be generated for the SUSE Multi-Linux Manager Proxy or an existing one should be used. Generated certificates can be considered SUSE Multi-Linux Manager built-in (self-signed) certificates. If the SUSE Multi-Linux Manager server runs on Kubernetes, the generated certificate option is not available and is replaced with no SSL certificate, as the certificates are managed outside the containers. Depending on your choice, enter the path to the CA certificate used to sign new certificates, or the paths to an existing certificate and its key to use as the proxy certificate.
The CA certificates generated on the server are stored in the /var/lib/containers/storage/volumes/root/_data/ssl-build directory. For more information about existing or custom certificates and the concepts of corporate and intermediate certificates, see Importing SSL Certificates.
Click Generate to register the new proxy FQDN on the SUSE Multi-Linux Manager server and to generate a configuration archive (config.tar.gz) containing the details for the container host. After a few moments, the file is available for download. Save it locally.
3.2. Generating the Proxy Configuration With spacecmd and a Self-Signed Certificate
You can generate a proxy configuration using spacecmd. This is only possible if the SUSE Multi-Linux Manager server runs on Podman and has a self-signed root CA certificate.
Connect to the container host using SSH.
Run the following command, replacing the server and proxy FQDNs:
mgrctl exec -ti 'spacecmd proxy_container_config_generate_cert -- dev-pxy.example.com dev-srv.example.com 2048 email@example.com -o /tmp/config.tar.gz'
Copy the generated configuration from the server container:
mgrctl cp server:/tmp/config.tar.gz .
3.3. spacecmd 및 사용자 정의 인증서를 사용하여 프록시 구성 생성
spacecmd를 사용하여 기본 자체 서명 인증서가 아닌 사용자 정의 인증서에 대해 프록시 구성을 생성할 수 있습니다.
서버 컨테이너 호스트에 SSH로 연결합니다.
Execute the following commands, replacing the Server and Proxy FQDN:
for f in ca.crt proxy.crt proxy.key; do mgrctl cp $f server:/tmp/$f done mgrctl exec -ti 'spacecmd proxy_container_config -- -p 8022 pxy.example.com srv.example.com 2048 email@example.com /tmp/ca.crt /tmp/proxy.crt /tmp/proxy.key -o /tmp/config.tar.gz'설정에서 중간 CA를 사용하는 경우, 이를 복사하여
-i옵션과 함께 명령에 포함시킵니다(필요한 경우 여러 번 제공 가능).mgrctl cp intermediateCA.pem server:/tmp/intermediateCA.pem mgrctl exec -ti 'spacecmd proxy_container_config -- -p 8022 -i /tmp/intermediateCA.pem pxy.example.com srv.example.com 2048 email@example.com /tmp/ca.crt /tmp/proxy.crt /tmp/proxy.key -o /tmp/config.tar.gz'서버 컨테이너에서 생성된 구성을 복사합니다.
mgrctl cp server:/tmp/config.tar.gz .
3.4. Generate Proxy Configuration With spacecmd and no Certificate
You can generate a Proxy configuration using spacecmd with no TLS certificates. This is needed for SUSE Multi-Linux Manager running on Kubernetes as the certificates are handled outside of the containers.
Connect to the server container host using SSH.
Execute the following command, replacing the server and proxy FQDNs:
mgrctl exec -ti 'spacecmd proxy_container_config_nossl -- -p 8022 pxy.example.com srv.example.com 2048 email@example.com -o /tmp/config.tar.gz'
Copy the generated configuration from the server container:
mgrctl cp server:/tmp/config.tar.gz .
4. Deploying the SUSE Multi-Linux Manager Proxy Helm Chart
Copy and extract the generated configuration tar.gz file and then install using helm:
helm install smlm-proxy \
oci://registry.suse.com/suse/multi-linux-manager/5.2/proxy-helm \
-n $NAMESPACE \
--description "Proxy installation" \
--set "registrySecret=the-scc-secret" \
--set-file global.config=path/to/config.yaml \
--set-file global.ssh=path/to/ssh.yaml \
--set-file global.httpd=path/to/httpd.yaml
When setting multiple values, using a YAML values file is recommended instead of passing several --set parameters. Refer to the helm command help for more details.
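As a sketch of that recommendation (the file name is an assumption), the scalar --set values above can be moved into a values file passed with -f, while --set-file remains appropriate for passing file contents:

```shell
#!/bin/sh
# Illustrative only: collect scalar values in a values file.
cat > custom-values.yaml <<'EOF'
registrySecret: the-scc-secret
EOF

# The install then becomes (commented out; requires a cluster and the chart):
# helm install smlm-proxy \
#   oci://registry.suse.com/suse/multi-linux-manager/5.2/proxy-helm \
#   -n $NAMESPACE -f custom-values.yaml \
#   --set-file global.config=path/to/config.yaml \
#   --set-file global.ssh=path/to/ssh.yaml \
#   --set-file global.httpd=path/to/httpd.yaml
```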
5. Example helm charts
Some example charts using the proxy-helm chart can be found in the Manager-5.2 branch of the uyuni-charts git repository. They showcase how the TLS certificates can be generated using cert-manager and trust-manager. These examples may assume Kubernetes cluster administrator permissions.
|
These examples are not supported; they are provided for documentation purposes only. |