:::note
This is unreleased documentation for Admission Controller 1.36-dev.
:::
# Controlling host capabilities for namespaced policies
SUSE Security Admission Controller upholds the following security promise:
"If you can deploy namespaced policies, you can do so without obtaining raised privileges."
This how-to explains how cluster operators can enforce that promise by:

- Restricting which host capabilities PolicyServers expose to namespaced policies.
- Controlling which PolicyServer a namespaced policy runs on, using the `ns-policyserver-mapper` policy.
:::warning
The per-PolicyServer restriction on host capabilities is part of SUSE Security Admission Controller 1.36. Earlier versions may be vulnerable to reconnaissance or information disclosure; see our Threat model.
:::
## Background
AdmissionPolicy and AdmissionPolicyGroup are namespaced resources.
Because they can be made deployable to non-privileged users, they do not have
a `spec.contextAwareResources` field and cannot fetch information from the
Kubernetes API.

However, namespaced policies can still exercise other host capabilities provided by policy-server:

- Querying OCI registries (verifying signatures, fetching manifests)
- Performing Kubernetes `SubjectAccessReview` checks (`can_i`)
- DNS lookups
- Certificate trust verification
These capabilities could be abused for reconnaissance or information disclosure
(see our Threat model). The
spec.namespacedPoliciesCapabilities field on PolicyServer lets cluster
operators gate exactly which of these capabilities are available to namespaced
policies.
:::info Host capability call reference
See the full list on our capabilities call reference documentation page.
:::
## Configuring `spec.namespacedPoliciesCapabilities`
Add the namespacedPoliciesCapabilities field to any PolicyServer to control
which host capabilities its namespaced policies may use.
The field accepts an array of strings. Wildcard patterns follow the same conventions as Kubernetes Dynamic Admission Controller match rules:
| Value | Meaning |
|---|---|
| (unset) | Allow all host capabilities (default, backwards-compatible) |
| `["*"]` | Explicitly allow all host capabilities |
| `[]` | Deny all host capabilities to namespaced policies |
| `["oci/*"]` | Allow all OCI capabilities, all versions |
| `["oci/v2/*"]` | Allow all OCI v2 capabilities only |
| `["oci/v1/verify", "net/v1/dns_lookup_host"]` | Allow only those two specific calls |

:::note
Cluster-wide policies (`ClusterAdmissionPolicy` and `ClusterAdmissionPolicyGroup`) are not affected by this field.
:::
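For instance, to let namespaced policies query OCI registries and resolve DNS names but nothing else, a PolicyServer could carry the following spec fragment (a sketch using capability names that appear elsewhere on this page):

```yaml
spec:
  # Only OCI calls (any version) and DNS lookups are allowed;
  # every other host capability call from namespaced policies is blocked.
  namespacedPoliciesCapabilities:
    - "oci/*"
    - "net/v1/dns_lookup_host"
```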
### Example: deny all host capabilities
```yaml
apiVersion: policies.kubewarden.io/v1
kind: PolicyServer
metadata:
  name: for-namespaced-policies
spec:
  image: ghcr.io/kubewarden/policy-server:latest
  replicas: 1
  namespacedPoliciesCapabilities: []
```
Any namespaced policy scheduled on this PolicyServer will have all host capability calls blocked.
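A namespaced policy pinned to this PolicyServer might look like the following sketch; the module URL, version tag, and rules are illustrative, not taken from this page:

```yaml
apiVersion: policies.kubewarden.io/v1
kind: AdmissionPolicy
metadata:
  name: no-privileged-pods
  namespace: my-team-ns
spec:
  # Scheduled on the locked-down PolicyServer defined above.
  policyServer: for-namespaced-policies
  module: registry://ghcr.io/kubewarden/policies/pod-privileged:v0.3.0  # hypothetical version tag
  mode: protect
  rules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      resources: ["pods"]
      operations: ["CREATE"]
```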
### Configuring the default PolicyServer via Helm
The kubewarden-defaults Helm chart exposes
.Values.policyServer.namespacedPoliciesCapabilities to configure the
default PolicyServer. When unset, all host capabilities are allowed
(backwards-compatible default).
To lock down the default PolicyServer so namespaced policies have no host
capability access:
```shell
helm upgrade --wait -n kubewarden kubewarden-defaults kubewarden/kubewarden-defaults \
  --set 'policyServer.namespacedPoliciesCapabilities={}'
```
To allow only OCI and DNS capabilities:
```shell
helm upgrade --wait -n kubewarden kubewarden-defaults kubewarden/kubewarden-defaults \
  --set 'policyServer.namespacedPoliciesCapabilities={oci/*,net/v1/dns_lookup_host}'
```
### Configuring custom PolicyServers
Cluster operators can configure their custom PolicyServers by setting
spec.namespacedPoliciesCapabilities. If not provided, PolicyServers by
default allow all capability calls.
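As a sketch (the names below are illustrative), a custom PolicyServer that grants namespaced policies only DNS lookups would be declared as:

```yaml
apiVersion: policies.kubewarden.io/v1
kind: PolicyServer
metadata:
  name: dns-only-policy-server
spec:
  image: ghcr.io/kubewarden/policy-server:latest
  replicas: 1
  # Omitting namespacedPoliciesCapabilities entirely would keep the
  # backwards-compatible allow-all default.
  namespacedPoliciesCapabilities:
    - "net/v1/dns_lookup_host"
```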
## Controlling on which PolicyServer namespaced policies run
Even with a well-configured PolicyServer, low-privileged users could explicitly
set spec.policyServer in their AdmissionPolicy to a different PolicyServer
that exposes broader capabilities. The ns-policyserver-mapper policy prevents
this.
### The ns-policyserver-mapper policy
The ns-policyserver-mapper policy is a mutating ClusterAdmissionPolicy that
intercepts CREATE and UPDATE requests for AdmissionPolicy and
AdmissionPolicyGroup resources. It:

- Reads the `admission.kubewarden.io/policy-server` label from the `Namespace` in which the policy is being deployed.
- Mutates the policy’s `spec.policyServer` field to the value of that label, overriding whatever the user specified.
- Rejects the request if the Namespace does not have the label, preventing policies from being silently scheduled on an unintended PolicyServer.
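To illustrate the mutation with hypothetical names: if the target Namespace is labeled `admission.kubewarden.io/policy-server=for-namespaced-policies`, a request that tries to pick another PolicyServer is rewritten before being persisted:

```yaml
# AdmissionPolicy spec as submitted by a low-privileged user:
spec:
  policyServer: some-privileged-server
---
# AdmissionPolicy spec as stored, after ns-policyserver-mapper
# overrides the field with the Namespace label value:
spec:
  policyServer: for-namespaced-policies
```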
### Labeling Namespaces
Label each Namespace with the name of the PolicyServer that should handle its namespaced policies:
```shell
kubectl label namespace my-team-ns admission.kubewarden.io/policy-server=for-namespaced-policies
```
Or declare it in the Namespace manifest:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-team-ns
  labels:
    admission.kubewarden.io/policy-server: for-namespaced-policies
```
### Deploying ns-policyserver-mapper
Deploy the policy as a ClusterAdmissionPolicy:
```yaml
apiVersion: policies.kubewarden.io/v1
kind: ClusterAdmissionPolicy
metadata:
  name: ns-to-policyserver
spec:
  module: registry://ghcr.io/kubewarden/policies/ns-policyserver-mapper:v0.1.0
  mode: protect
  mutating: true
  rules:
    - apiGroups: ["policies.kubewarden.io"]
      apiVersions: ["v1"]
      resources: ["admissionpolicies", "admissionpolicygroups"]
      operations:
        - CREATE
        - UPDATE
  contextAwareResources:
    - apiVersion: v1
      kind: Namespace
```
## Complete example: secure self-service namespace setup
The following walkthrough shows how to configure a fully locked-down, self-service setup where a team can deploy their own namespaced policies without elevated privileges.
### 1. Create a dedicated PolicyServer with restricted capabilities
```yaml
apiVersion: policies.kubewarden.io/v1
kind: PolicyServer
metadata:
  name: for-namespaced-policies
  namespace: kubewarden
spec:
  image: ghcr.io/kubewarden/policy-server:v1.35.0
  namespacedPoliciesCapabilities: []
```
### 2. Deploy the ns-policyserver-mapper policy
```yaml
apiVersion: policies.kubewarden.io/v1
kind: ClusterAdmissionPolicy
metadata:
  name: ns-to-policyserver
spec:
  module: registry://ghcr.io/kubewarden/policies/ns-policyserver-mapper:v0.1.0
  mode: protect
  mutating: true
  rules:
    - apiGroups: ["policies.kubewarden.io"]
      apiVersions: ["v1"]
      resources: ["admissionpolicies", "admissionpolicygroups"]
      operations:
        - CREATE
        - UPDATE
  contextAwareResources:
    - apiVersion: v1
      kind: Namespace
```
Wait for the policy to become active:
```shell
kubectl wait --for=condition=PolicyActive clusteradmissionpolicy/ns-to-policyserver
```
### 3. Label team Namespaces
```shell
kubectl label namespace team-alpha admission.kubewarden.io/policy-server=for-namespaced-policies
kubectl label namespace team-beta admission.kubewarden.io/policy-server=for-namespaced-policies
```
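From this point on, members of team-alpha can deploy namespaced policies without elevated privileges: whatever `spec.policyServer` they request, the mapper pins the policy to `for-namespaced-policies`, where all host capability calls are blocked. A sketch with an illustrative module:

```yaml
apiVersion: policies.kubewarden.io/v1
kind: AdmissionPolicy
metadata:
  name: require-safe-labels
  namespace: team-alpha
spec:
  # Any value set here is overwritten by ns-policyserver-mapper.
  policyServer: for-namespaced-policies
  module: registry://ghcr.io/kubewarden/policies/safe-labels:v0.1.9  # hypothetical version tag
  mode: protect
  rules:
    - apiGroups: ["apps"]
      apiVersions: ["v1"]
      resources: ["deployments"]
      operations: ["CREATE", "UPDATE"]
```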
## Upgrade considerations
When upgrading from a SUSE Security Admission Controller version that predates this feature
(v1.34 or earlier):

- Existing namespaced policies continue to work without any change. The default behavior when `spec.namespacedPoliciesCapabilities` is unset is to allow all host capabilities, matching the previous behavior.
- To begin restricting capabilities, set `spec.namespacedPoliciesCapabilities` on each PolicyServer and deploy `ns-policyserver-mapper`.
- If you set `spec.namespacedPoliciesCapabilities: []` on an existing PolicyServer, any namespaced policies that rely on host capabilities will have those calls blocked. Review your policies before applying this change.
- Namespaced policies already deployed retain their scheduling until they are updated. The `ns-policyserver-mapper` policy only applies on `CREATE` and `UPDATE` operations. To migrate existing policies, trigger an update for each one after labeling their Namespace.
## Authoring and auditing a policy with host capability calls
### Authoring
Policy authors can self-report the host capabilities that a policy uses in the policy metadata (see the metadata documentation page). For example:

```yaml
[...]
hostCapabilities:
  - kubernetes/list_resources_by_namespace
  - kubernetes/list_resources_all
  - kubernetes/get_resource
[...]
```
In addition, when the policy author annotates the policy Wasm module,
`kwctl annotate` performs a heuristic scan of the Wasm binary’s data for known
host-capability strings and compares what it detects against the
`hostCapabilities` list declared in `metadata.yml`. Any mismatch is reported as
a warning on stderr:

- Used but undeclared: host capabilities found in the binary but absent from the metadata declaration.
- Declared but not detected: host capabilities listed in the metadata but not found in the binary.
Here is an example output when the metadata declares `oci/v1/verify` but the
binary actually uses `kubernetes/get_resource` and
`kubernetes/list_resources_by_namespace`:

```shell
kwctl annotate -m metadata.yml -o annotated-policy.wasm policy.wasm
WARN host capabilities used by the policy but not declared in metadata capabilities={"kubernetes/get_resource", "kubernetes/list_resources_by_namespace"}
WARN host capabilities declared in metadata but not detected in the policy capabilities={"oci/v1/verify"}
```
### Auditing
:::warning
Both the self-reporting from the policy author and the heuristic detection performed by `kwctl annotate` are best-effort mechanisms. Use this information as one signal alongside other trust indicators (image signing, source code review, policy provenance, and so on), not as authoritative proof of what capabilities a policy exercises.
:::
## Running a policy with host capability calls
`kwctl run` and `kwctl bench` now have a flag
`--allowed-host-capabilities`. This flag sets the host capabilities the policy
is allowed to use and can be repeated many times. For example:

```shell
kwctl run \
  --allowed-host-capabilities 'oci/*' \
  --allowed-host-capabilities 'kubernetes/get_resource'
```
kwctl will emit errors if the policy has not been granted enough permissions
for the needed capabilities.
This allows Policy Users to assess their policies out-of-cluster.