apply
The apply command is used to perform a dry run on one or more policies with a given set of input resources. This can be useful to determine a policy’s effectiveness prior to committing to a cluster. In the case of mutate policies, the apply command can show the mutated resource as an output. The input resources can either be resource manifests (one or multiple) or can be taken from a running Kubernetes cluster. The apply command supports files from URLs both as policies and resources.
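For example, a policy can be applied straight from a URL (a hypothetical URL is shown; any reachable manifest works):

kyverno apply https://example.com/policies/require-labels.yaml --resource /path/to/resource.yaml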
Apply to a resource:
kyverno apply /path/to/policy.yaml --resource /path/to/resource.yaml
Apply a policy to all matching resources in a cluster based on the current kubectl context:
kyverno apply /path/to/policy.yaml --cluster
The resources can also be passed from stdin:
kustomize build nginx/overlays/envs/prod/ | kyverno apply /path/to/policy.yaml --resource -
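Similarly, any tool that renders manifests to stdout can be piped in; a sketch assuming a local Helm chart directory:

helm template ./mychart | kyverno apply /path/to/policy.yaml --resource -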
Apply all cluster policies in the current cluster to all matching resources in a cluster based on the current kubectl context:
kubectl get clusterpolicies -o yaml | kyverno apply - --cluster
Apply multiple policies to multiple resources:
kyverno apply /path/to/policy1.yaml /path/to/folderFullOfPolicies --resource /path/to/resource1.yaml --resource /path/to/resource2.yaml --cluster
Apply a policy to a resource with a policy exception:
kyverno apply /path/to/policy.yaml --resource /path/to/resource.yaml --exception /path/to/exception.yaml
Apply multiple policies to multiple resources with exceptions:
kyverno apply /path/to/policy1.yaml /path/to/folderFullOfPolicies --resource /path/to/resource1.yaml --resource /path/to/resource2.yaml --exception /path/to/exception1.yaml --exception /path/to/exception2.yaml
Apply multiple policies to multiple resources where exceptions are evaluated from the provided resources:
kyverno apply /path/to/policy1.yaml /path/to/folderFullOfPolicies --resource /path/to/resource1.yaml --resource /path/to/resource2.yaml --exceptions-with-resources
Apply a mutation policy to a specific resource:
kyverno apply /path/to/policy.yaml --resource /path/to/resource.yaml

applying 1 policy to 1 resource...

mutate policy <policy_name> applied to <resource_name>:
<final mutated resource output>
Save the mutated resource to a file:
kyverno apply /path/to/policy.yaml --resource /path/to/resource.yaml -o newresource.yaml
Save the mutated resource to a directory:
kyverno apply /path/to/policy.yaml --resource /path/to/resource.yaml -o foo/
Run a policy with a mutate existing rule on a group of target resources:
kyverno apply /path/to/policy.yaml --resource /path/to/resource.yaml --target-resource /path/to/target1.yaml --target-resource /path/to/target2.yaml

Applying 1 policy rule(s) to 1 resource(s)...

mutate policy <policy-name> applied to <trigger-name>:
<trigger-resource>
---
patched targets:

<patched-target1>

---

<patched-target2>

---

pass: 2, fail: 0, warn: 0, error: 0, skip: 0
Run a policy with a mutate existing rule on target resources from a directory:
kyverno apply /path/to/policy.yaml --resource /path/to/resource.yaml --target-resources /path/to/targets/

Applying 1 policy rule(s) to 1 resource(s)...

mutate policy <policy-name> applied to <trigger-name>:
<trigger-resource>
---
patched targets:

<patched-targets>

pass: 5, fail: 0, warn: 0, error: 0, skip: 0
Apply a policy containing variables using the --set or -s flag to pass in the values. Variables that begin with {{request.object}} normally do not need to be specified as these will be read from the resource.
kyverno apply /path/to/policy.yaml --resource /path/to/resource.yaml --set <variable1>=<value1>,<variable2>=<value2>
Use -f or --values-file for applying multiple policies to multiple resources while passing a file containing variables and their values. Variables specified can be of various types, including AdmissionReview fields, ConfigMap context data, API call context data, and Global Context Entries.
Use -u or --userinfo for applying policies while passing an optional user_info.yaml file which contains necessary admission request data made during the request.
Note
When passing ConfigMap array data into the values file, the data must be formatted as JSON as outlined here.
kyverno apply /path/to/policy1.yaml /path/to/policy2.yaml --resource /path/to/resource1.yaml --resource /path/to/resource2.yaml -f /path/to/value.yaml --userinfo /path/to/user_info.yaml
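As a sketch (hypothetical ConfigMap and key names), array data sourced from a ConfigMap would appear in the values file as a JSON-encoded string:

policies:
  - name: <policy name>
    rules:
      - name: <rule name>
        values:
          mycm.data.environments: '["dev", "test", "prod"]'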
Format of value.yaml with all possible fields:
apiVersion: cli.kyverno.io/v1alpha1
kind: Values
metadata:
  name: values
policies:
  - name: <policy1 name>
    rules:
      - name: <rule1 name>
        values:
          <context variable1 in policy1 rule1>: <value>
          <context variable2 in policy1 rule1>: <value>
      - name: <rule2 name>
        values:
          <context variable1 in policy1 rule2>: <value>
          <context variable2 in policy1 rule2>: <value>
    resources:
      - name: <resource1 name>
        values:
          <variable1 in policy1>: <value>
          <variable2 in policy1>: <value>
      - name: <resource2 name>
        values:
          <variable1 in policy1>: <value>
          <variable2 in policy1>: <value>
namespaceSelector:
- name: <namespace1 name>
  labels:
    <label key>: <label value>
- name: <namespace2 name>
  labels:
    <label key>: <label value>
Format of user_info.yaml:
apiVersion: cli.kyverno.io/v1alpha1
kind: UserInfo
metadata:
  name: user-info
clusterRoles:
- admin
userInfo:
  username: molybdenum@somecorp.com
Example:
Policy manifest (add_network_policy.yaml):
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-networkpolicy
spec:
  background: false
  rules:
  - name: default-deny-ingress
    match:
      any:
      - resources:
          kinds:
          - Namespace
        clusterRoles:
        - cluster-admin
    generate:
      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      name: default-deny-ingress
      namespace: "{{request.object.metadata.name}}"
      synchronize: true
      data:
        spec:
          # select all pods in the namespace
          podSelector: {}
          policyTypes:
          - Ingress
Resource manifest (required_default_network_policy.yaml):
kind: Namespace
apiVersion: v1
metadata:
  name: devtest
Apply a policy to a resource using the --set or -s flag to pass a variable directly:
kyverno apply /path/to/add_network_policy.yaml --resource /path/to/required_default_network_policy.yaml -s request.object.metadata.name=devtest
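Because this policy also matches on the cluster-admin cluster role, the admission user information can be simulated as well by additionally passing a user_info.yaml that lists cluster-admin under clusterRoles (a sketch; not required for the -s example above):

kyverno apply /path/to/add_network_policy.yaml --resource /path/to/required_default_network_policy.yaml -s request.object.metadata.name=devtest -u /path/to/user_info.yaml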
Apply a policy to a resource using the --values-file or -f flag:
YAML file containing variables (value.yaml):
apiVersion: cli.kyverno.io/v1alpha1
kind: Values
metadata:
  name: values
policies:
  - name: add-networkpolicy
    resources:
      - name: devtest
        values:
          request.namespace: devtest

kyverno apply /path/to/add_network_policy.yaml --resource /path/to/required_default_network_policy.yaml -f /path/to/value.yaml
On applying the above policy to the mentioned resources, the following output will be generated:
Applying 1 policy to 1 resource...
(Total number of result count may vary as the policy is mutated by Kyverno. To check the mutated policy please try with log level 5)

pass: 1, fail: 0, warn: 0, error: 0, skip: 0
The summary count is based on the number of rules applied to the number of resources. For example, one policy containing two rules applied to three matching resources can produce up to six results.
Value files also support global values, which can be passed to all resources the policy is being applied to.
Format of value.yaml:
apiVersion: cli.kyverno.io/v1alpha1
kind: Values
metadata:
  name: values
policies:
  - name: <policy1 name>
    resources:
      - name: <resource1 name>
        values:
          <variable1 in policy1>: <value>
          <variable2 in policy1>: <value>
      - name: <resource2 name>
        values:
          <variable1 in policy1>: <value>
          <variable2 in policy1>: <value>
  - name: <policy2 name>
    resources:
      - name: <resource1 name>
        values:
          <variable1 in policy2>: <value>
          <variable2 in policy2>: <value>
      - name: <resource2 name>
        values:
          <variable1 in policy2>: <value>
          <variable2 in policy2>: <value>
globalValues:
  <global variable1>: <value>
  <global variable2>: <value>
If a resource-specific value and a global value have the same variable name, the resource value takes precedence over the global value. See the Pod test-global-prod in the following example.
Example:
Policy manifest (add_dev_pod.yaml):
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: cm-globalval-example
spec:
  background: false
  rules:
  - name: validate-mode
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      failureAction: Enforce
      message: "The value {{ request.mode }} for val1 is not equal to 'dev'."
      deny:
        conditions:
          any:
          - key: "{{ request.mode }}"
            operator: NotEquals
            value: dev
Resource manifest (dev_prod_pod.yaml):
apiVersion: v1
kind: Pod
metadata:
  name: test-global-prod
spec:
  containers:
  - name: nginx
    image: nginx:latest
---
apiVersion: v1
kind: Pod
metadata:
  name: test-global-dev
spec:
  containers:
  - name: nginx
    image: nginx:1.12
YAML file containing variables (value.yaml):
apiVersion: cli.kyverno.io/v1alpha1
kind: Values
metadata:
  name: values
policies:
  - name: cm-globalval-example
    resources:
      - name: test-global-prod
        values:
          request.mode: prod
globalValues:
  request.mode: dev

kyverno apply /path/to/add_dev_pod.yaml --resource /path/to/dev_prod_pod.yaml -f /path/to/value.yaml
The Pod test-global-dev passes the validation, and test-global-prod fails.
Apply a policy with the Namespace selector:
Use --values-file or -f for passing a file containing Namespace details. See here to learn more about Namespace selectors.
kyverno apply /path/to/policy1.yaml /path/to/policy2.yaml --resource /path/to/resource1.yaml --resource /path/to/resource2.yaml -f /path/to/value.yaml
Format of value.yaml:
apiVersion: cli.kyverno.io/v1alpha1
kind: Values
metadata:
  name: values
namespaceSelector:
  - name: <namespace1 name>
    labels:
      <namespace label key>: <namespace label value>
  - name: <namespace2 name>
    labels:
      <namespace label key>: <namespace label value>
Example:
Policy manifest (enforce-pod-name.yaml):
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: enforce-pod-name
spec:
  background: true
  rules:
  - name: validate-name
    match:
      any:
      - resources:
          kinds:
          - Pod
        namespaceSelector:
          matchExpressions:
          - key: foo.com/managed-state
            operator: In
            values:
            - managed
    validate:
      failureAction: Audit
      message: "The Pod must end with -nginx"
      pattern:
        metadata:
          name: "*-nginx"
Resource manifest (nginx.yaml):
kind: Pod
apiVersion: v1
metadata:
  name: test-nginx
  namespace: test1
spec:
  containers:
  - name: nginx
    image: nginx:latest
Namespace manifest (namespace.yaml):
apiVersion: v1
kind: Namespace
metadata:
  name: test1
  labels:
    foo.com/managed-state: managed
YAML file containing variables (value.yaml):
apiVersion: cli.kyverno.io/v1alpha1
kind: Values
metadata:
  name: values
namespaceSelector:
  - name: test1
    labels:
      foo.com/managed-state: managed
To test the above policy, use the following command:
kyverno apply /path/to/enforce-pod-name.yaml --resource /path/to/nginx.yaml -f /path/to/value.yaml
Apply a policy which uses a context variable to a resource:
Use --values-file or -f for passing a file containing the context variable.
kyverno apply /path/to/policy1.yaml --resource /path/to/resource1.yaml -f /path/to/value.yaml
policy1.yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: cm-variable-example
  annotations:
    pod-policies.kyverno.io/autogen-controllers: DaemonSet,Deployment,StatefulSet
spec:
  background: false
  rules:
  - name: example-configmap-lookup
    context:
    - name: dictionary
      configMap:
        name: mycmap
        namespace: default
    match:
      any:
      - resources:
          kinds:
          - Pod
    mutate:
      patchStrategicMerge:
        metadata:
          labels:
            my-environment-name: "{{dictionary.data.env}}"
resource1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-config-test
spec:
  containers:
  - image: nginx:latest
    name: test-nginx
value.yaml
apiVersion: cli.kyverno.io/v1alpha1
kind: Values
metadata:
  name: values
policies:
  - name: cm-variable-example
    rules:
      - name: example-configmap-lookup
        values:
          dictionary.data.env: dev1
You can also inject global context entries using variables. Here’s an example of a Values file that injects a global context entry:
apiVersion: cli.kyverno.io/v1alpha1
kind: Values
metadata:
  name: values
globalValues:
  request.operation: CREATE
policies:
  - name: gctx
    rules:
      - name: main-deployment-exists
        values:
          deploymentCount: 1
In this example, request.operation is set as a global value, and deploymentCount is set for a specific rule in the gctx policy.
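Such a Values file is passed with -f like any other; a sketch with hypothetical file names:

kyverno apply /path/to/gctx-policy.yaml --resource /path/to/resource.yaml -f /path/to/values.yaml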
Policies that have their failureAction set to Audit can be set to produce a warning instead of a failure using the --audit-warn flag. This will also cause a zero exit code if no enforcing policies failed.
kyverno apply /path/to/policy.yaml --resource /path/to/resource.yaml --audit-warn
Additionally, you can use the --warn-exit-code flag with the apply command to control the exit code when warnings are reported. This is useful in CI/CD systems when used with the --audit-warn flag to treat Audit policies as warnings. When no failures or errors are found, but warnings are encountered, the CLI will exit with the defined exit code.
kyverno apply disallow-latest-tag.yaml --resource=echo-test.yaml --audit-warn --warn-exit-code 3
echo $?
3
You can also use --warn-exit-code in combination with the --warn-no-pass flag to make the CLI exit with the warning code if no objects were found that satisfy a policy. This may be useful during the initial development of a policy or if you want to make sure that an object exists in the Kubernetes manifest.
kyverno apply disallow-latest-tag.yaml --resource=empty.yaml --warn-exit-code 3 --warn-no-pass
echo $?
3
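In a CI job, these exit codes can be branched on directly; a minimal sketch assuming hypothetical paths:

kyverno apply policies/disallow-latest-tag.yaml --resource manifests/deployment.yaml --audit-warn --warn-exit-code 3
status=$?
# exit code 3 (set via --warn-exit-code) means warnings only; any other non-zero code means failures
if [ "$status" -eq 3 ]; then
  echo "Audit warnings found; review before merging."
elif [ "$status" -ne 0 ]; then
  echo "Policy failures found."
  exit "$status"
fi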
Policy Report
Policy reports provide information about policy execution and violations. Use --policy-report with the apply command to generate a policy report for validate policies. mutate and generate policies do not trigger policy reports.
Policy reports can also be generated for a live cluster. While generating a policy report for a live cluster, the -r flag, which declares a resource, is assumed to be globally unique; it does not support naming the resource type (e.g., Pod/foo when the cluster contains resources of different types with the same name). To generate a policy report for a live cluster, use --cluster with --policy-report.
kyverno apply policy.yaml --cluster --policy-report
The above example applies policy.yaml to all resources in the cluster.
Below are the combinations of inputs that can be used to generate a policy report with the Kyverno CLI.

| Policy | Resource | Cluster | Namespace | Interpretation |
|---|---|---|---|---|
| policy.yaml | -r resource.yaml | false | | Apply policy from policy.yaml to the resources specified in resource.yaml |
| policy.yaml | -r resourceName | true | | Apply policy from policy.yaml to the resource with a given name in the cluster |
| policy.yaml | | true | | Apply policy from policy.yaml to all the resources in the cluster |
| policy.yaml | -r resourceName | true | -n=namespaceName | Apply policy from policy.yaml to the resource with a given name in a specific Namespace |
| policy.yaml | | true | -n=namespaceName | Apply policy from policy.yaml to all the resources in a specific Namespace |
Example:
Consider the following policy and resources:
policy.yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-pod-requests-limits
spec:
  rules:
  - name: validate-resources
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      failureAction: Audit
      message: "CPU and memory resource requests and limits are required"
      pattern:
        spec:
          containers:
          - resources:
              requests:
                memory: "?*"
                cpu: "?*"
              limits:
                memory: "?*"
resource1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx1
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
resource2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx2
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
Case 1: Apply a policy manifest to multiple resource manifests
kyverno apply policy.yaml -r resource1.yaml -r resource2.yaml --policy-report
Case 2: Apply a policy manifest to multiple resources in the cluster
Create the resources by first applying manifests resource1.yaml and resource2.yaml.
kyverno apply policy.yaml -r nginx1 -r nginx2 --cluster --policy-report
Case 3: Apply a policy manifest to all resources in the cluster
kyverno apply policy.yaml --cluster --policy-report
Given the contents of policy.yaml shown earlier, this will produce a report validating against all Pods in the cluster.
Case 4: Apply a policy manifest to multiple resources by name within a specific Namespace
kyverno apply policy.yaml -r nginx1 -r nginx2 --cluster --policy-report -n default
Case 5: Apply a policy manifest to all resources within the default Namespace
kyverno apply policy.yaml --cluster --policy-report -n default
Given the contents of policy.yaml shown earlier, this will produce a report validating all Pods within the default Namespace.
On applying policy.yaml to the mentioned resources, the following report will be generated:
apiVersion: wgpolicyk8s.io/v1alpha1
kind: ClusterPolicyReport
metadata:
  name: clusterpolicyreport
results:
- message: Validation rule 'validate-resources' succeeded.
  policy: require-pod-requests-limits
  resources:
  - apiVersion: v1
    kind: Pod
    name: nginx1
    namespace: default
  rule: validate-resources
  scored: true
  status: pass
- message: 'Validation error: CPU and memory resource requests and limits are required; Validation rule validate-resources failed at path /spec/containers/0/resources/limits/'
  policy: require-pod-requests-limits
  resources:
  - apiVersion: v1
    kind: Pod
    name: nginx2
    namespace: default
  rule: validate-resources
  scored: true
  status: fail
summary:
  error: 0
  fail: 1
  pass: 1
  skip: 0
  warn: 0
Applying Policy Exceptions
Policy Exceptions can be applied alongside policies by using the -e or --exceptions flag to pass the Policy Exception manifest.
kyverno apply /path/to/policy.yaml --resource /path/to/resource.yaml --exception /path/to/exception.yaml
Example:
Applying a policy to a resource with a policy exception.
Policy manifest (policy.yaml):
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: max-containers
spec:
  background: false
  rules:
  - name: max-two-containers
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      failureAction: Enforce
      message: "A maximum of 2 containers are allowed inside a Pod."
      deny:
        conditions:
          any:
          - key: "{{request.object.spec.containers[] | length(@)}}"
            operator: GreaterThan
            value: 2
Policy Exception manifest (exception.yaml):
apiVersion: kyverno.io/v2
kind: PolicyException
metadata:
  name: container-exception
spec:
  exceptions:
  - policyName: max-containers
    ruleNames:
    - max-two-containers
    - autogen-max-two-containers
  match:
    any:
    - resources:
        kinds:
        - Pod
        - Deployment
  conditions:
    any:
    - key: "{{ request.object.metadata.labels.color || '' }}"
      operator: Equals
      value: blue
Resource manifest (resource.yaml):
A Deployment matching the characteristics defined in the PolicyException, shown below, will be allowed creation even though it technically violates the rule’s definition.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: three-containers-deployment
  labels:
    app: my-app
    color: blue
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
        color: blue
    spec:
      containers:
      - name: nginx-container
        image: nginx:latest
        ports:
        - containerPort: 80
      - name: redis-container
        image: redis:latest
        ports:
        - containerPort: 6379
      - name: busybox-container
        image: busybox:latest
        command: ["/bin/sh", "-c", "while true; do echo 'Hello from BusyBox'; sleep 10; done"]
Apply the above policy to the resource with the exception:
kyverno apply /path/to/policy.yaml --resource /path/to/resource.yaml --exception /path/to/exception.yaml
The following output will be generated:
Applying 3 policy rule(s) to 1 resource(s) with 1 exception(s)...

pass: 0, fail: 0, warn: 0, error: 0, skip: 1
Kubernetes Native Policies
The kyverno apply command can be used to apply native Kubernetes policies and their corresponding bindings to resources, allowing you to test them locally without a cluster.
ValidatingAdmissionPolicy
With the apply command, Kubernetes ValidatingAdmissionPolicies can be applied to resources as follows:
Policy manifest (check-deployment-replicas.yaml):
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: check-deployments-replicas
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
    - apiGroups: ["apps"]
      apiVersions: ["v1"]
      operations: ["CREATE", "UPDATE"]
      resources: ["deployments"]
  validations:
  - expression: "object.spec.replicas <= 3"
    message: "Replicas must be less than or equal 3"
Resource manifest (deployment.yaml):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-pass
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-pass
  template:
    metadata:
      labels:
        app: nginx-pass
    spec:
      containers:
      - name: nginx-server
        image: nginx
Apply the ValidatingAdmissionPolicy to the resource:
kyverno apply /path/to/check-deployment-replicas.yaml --resource /path/to/deployment.yaml
The following output will be generated:
Applying 1 policy rule(s) to 1 resource(s)...

pass: 1, fail: 0, warn: 0, error: 0, skip: 0
The below example applies a ValidatingAdmissionPolicyBinding along with the policy to all resources in the cluster.
Policy manifest (check-deployment-replicas.yaml):
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: "check-deployment-replicas"
spec:
  matchConstraints:
    resourceRules:
    - apiGroups:
      - apps
      apiVersions:
      - v1
      operations:
      - CREATE
      - UPDATE
      resources:
      - deployments
  validations:
  - expression: object.spec.replicas <= 5
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: "check-deployment-replicas-binding"
spec:
  policyName: "check-deployment-replicas"
  validationActions: [Deny]
  matchResources:
    namespaceSelector:
      matchLabels:
        environment: staging
The above policy verifies that the number of Deployment replicas is not greater than 5 and is limited to a Namespace labeled environment: staging.
Create a Namespace with the label environment: staging:
kubectl create ns staging
kubectl label ns staging environment=staging
Create two Deployments, one of them in the staging Namespace, which violates the policy.
kubectl create deployment nginx-1 --image=nginx --replicas=6 -n staging
kubectl create deployment nginx-2 --image=nginx --replicas=6
Get all Deployments from the cluster:
kubectl get deployments -A

NAMESPACE            NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
default              nginx-2                  6/6     6            6           7m26s
kube-system          coredns                  2/2     2            2           13m
local-path-storage   local-path-provisioner   1/1     1            1           13m
staging              nginx-1                  6/6     6            6           7m44s
Apply the ValidatingAdmissionPolicy with its binding to all resources in the cluster:
kyverno apply /path/to/check-deployment-replicas.yaml --cluster --policy-report
The following output will be generated:
Applying 1 policy rule(s) to 4 resource(s)...
----------------------------------------------------------------------
POLICY REPORT:
----------------------------------------------------------------------
apiVersion: wgpolicyk8s.io/v1alpha2
kind: ClusterPolicyReport
metadata:
  creationTimestamp: null
  name: merged
results:
- message: 'failed expression: object.spec.replicas <= 5'
  policy: check-deployment-replicas
  resources:
  - apiVersion: apps/v1
    kind: Deployment
    name: nginx-1
    namespace: staging
    uid: a95d1594-44a7-4c8a-9225-04ac34cb9494
  result: fail
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1707394871
summary:
  error: 0
  fail: 1
  pass: 0
  skip: 0
  warn: 0
As expected, the policy is only applied to nginx-1 as it matches both the policy definition and its binding.
MutatingAdmissionPolicy
Similarly, you can test a MutatingAdmissionPolicy to preview the changes it would make to a resource. The CLI will output the final, mutated resource.
Example 1: Basic Mutation
For instance, you can test a MutatingAdmissionPolicy that adds a label to a ConfigMap.
Policy manifest (add-label-to-configmap.yaml):
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: MutatingAdmissionPolicy
metadata:
  name: "add-label-to-configmap"
spec:
  matchConstraints:
    resourceRules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      operations: ["CREATE"]
      resources: ["configmaps"]
  failurePolicy: Fail
  reinvocationPolicy: Never
  mutations:
  - patchType: "ApplyConfiguration"
    applyConfiguration:
      expression: >
        object.metadata.?labels["lfx-mentorship"].hasValue() ?
        Object{} :
        Object{ metadata: Object.metadata{ labels: {"lfx-mentorship": "kyverno"}}}
Resource manifest (configmap.yaml):
apiVersion: v1
kind: ConfigMap
metadata:
  name: game-demo
  labels:
    app: game
data:
  player_initial_lives: "3"
Now, apply the MutatingAdmissionPolicy to the ConfigMap resource:
kyverno apply /path/to/add-label-to-configmap.yaml --resource /path/to/configmap.yaml
The output will show the mutated ConfigMap with the added label:
Applying 1 policy rule(s) to 1 resource(s)...

policy add-label-to-configmap applied to default/ConfigMap/game-demo:
apiVersion: v1
data:
  player_initial_lives: "3"
kind: ConfigMap
metadata:
  labels:
    app: game
    lfx-mentorship: kyverno
  name: game-demo
  namespace: default
---

Mutation has been applied successfully.
pass: 1, fail: 0, warn: 0, error: 0, skip: 0
The output displays the ConfigMap with the new lfx-mentorship: kyverno label, confirming the mutation was applied correctly.
Example 2: Mutation with a Binding and Namespace Selector
You can also test policies that include a MutatingAdmissionPolicyBinding to control where the policy is applied. This example makes use of a namespace selector to apply the policy only to ConfigMaps in a specific namespace.
To do this, you must provide a values.yaml file to simulate the labels of the Namespaces your resources belong to.
Policy manifest (add-label-to-configmap.yaml):
This file defines a policy to add a label and a binding that restricts it to Namespaces labeled environment: staging or environment: production.
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: MutatingAdmissionPolicy
metadata:
  name: "add-label-to-configmap"
spec:
  matchConstraints:
    resourceRules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      operations: ["CREATE"]
      resources: ["configmaps"]
  failurePolicy: Fail
  reinvocationPolicy: Never
  mutations:
  - patchType: "ApplyConfiguration"
    applyConfiguration:
      expression: >
        object.metadata.?labels["lfx-mentorship"].hasValue() ?
        Object{} :
        Object{ metadata: Object.metadata{ labels: {"lfx-mentorship": "kyverno"}}}
---
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: MutatingAdmissionPolicyBinding
metadata:
  name: "add-label-to-configmap-binding"
spec:
  policyName: "add-label-to-configmap"
  matchResources:
    namespaceSelector:
      matchExpressions:
      - key: environment
        operator: In
        values:
        - staging
        - production
Resource manifest (configmaps.yaml):
This file contains three ConfigMap resources in different Namespaces. Only the ones in staging and production should be mutated.
apiVersion: v1
kind: ConfigMap
metadata:
  name: matched-cm-1
  namespace: staging
  labels:
    color: red
data:
  player_initial_lives: "3"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: matched-cm-2
  namespace: production
  labels:
    color: red
data:
  player_initial_lives: "3"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: unmatched-cm
  namespace: testing
  labels:
    color: blue
data:
  player_initial_lives: "3"
Values file (values.yaml):
This file provides the necessary context. It tells the Kyverno CLI what labels are associated with the staging, production, and testing Namespaces so it can correctly evaluate the namespaceSelector in the binding.
apiVersion: cli.kyverno.io/v1alpha1
kind: Values
metadata:
  name: values
namespaceSelector:
- labels:
    environment: staging
  name: staging
- labels:
    environment: production
  name: production
- labels:
    environment: testing
  name: testing
Now, apply the MutatingAdmissionPolicy and its binding to the ConfigMaps:
kyverno apply /path/to/add-label-to-configmap.yaml --resource /path/to/configmaps.yaml -f /path/to/values.yaml
The output will show the mutated ConfigMaps in the staging and production Namespaces, while the one in the testing Namespace remains unchanged:
Applying 1 policy rule(s) to 3 resource(s)...

policy add-label-to-configmap applied to staging/ConfigMap/matched-cm-1:
apiVersion: v1
data:
  player_initial_lives: "3"
kind: ConfigMap
metadata:
  labels:
    color: red
    lfx-mentorship: kyverno
  name: matched-cm-1
  namespace: staging
---

Mutation has been applied successfully.
policy add-label-to-configmap applied to production/ConfigMap/matched-cm-2:
apiVersion: v1
data:
  player_initial_lives: "3"
kind: ConfigMap
metadata:
  labels:
    color: red
    lfx-mentorship: kyverno
  name: matched-cm-2
  namespace: production
---

Mutation has been applied successfully.
pass: 2, fail: 0, warn: 0, error: 0, skip: 0
Applying ValidatingPolicies
In this example, we will apply a ValidatingPolicy against two Deployment manifests: one that complies with the policy and one that violates it.
First, we define a ValidatingPolicy that ensures any Deployment has no more than two replicas.
Policy manifest (check-deployment-replicas.yaml):
apiVersion: policies.kyverno.io/v1alpha1
kind: ValidatingPolicy
metadata:
  name: check-deployment-replicas
spec:
  matchConstraints:
    resourceRules:
    - apiGroups: ["apps"]
      apiVersions: ["v1"]
      operations: ["CREATE", "UPDATE"]
      resources: ["deployments"]
  validations:
  - expression: "object.spec.replicas <= 2"
    message: "Deployment replicas must be less than or equal to 2"
Next, we have two Deployment manifests. The good-deployment is compliant with 2 replicas, while the bad-deployment is non-compliant with 3 replicas.
Resource manifest (deployments.yaml):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: good-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bad-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
Now, we use the kyverno apply command to test the policy against both resources.
kyverno apply /path/to/check-deployment-replicas.yaml --resource /path/to/deployments.yaml --policy-report
The following output will be generated:
apiVersion: openreports.io/v1alpha1
kind: ClusterReport
metadata:
  creationTimestamp: null
  name: merged
results:
- message: Deployment replicas must be less than or equal 2
  policy: check-deployment-replicas
  properties:
    process: background scan
  resources:
  - apiVersion: apps/v1
    kind: Deployment
    name: bad-deployment
    namespace: default
  result: fail
  scored: true
  source: KyvernoValidatingPolicy
  timestamp:
    nanos: 0
    seconds: 1752755472
- message: success
  policy: check-deployment-replicas
  properties:
    process: background scan
  resources:
  - apiVersion: apps/v1
    kind: Deployment
    name: good-deployment
    namespace: default
  result: pass
  scored: true
  source: KyvernoValidatingPolicy
  timestamp:
    nanos: 0
    seconds: 1752755472
source: ""
summary:
  error: 0
  fail: 1
  pass: 1
  skip: 0
  warn: 0
In addition to testing local YAML files, you can use the kyverno apply command to validate policies against resources that are already running in a Kubernetes cluster. Instead of specifying resource files with the --resource flag, you can use the --cluster flag.
For example, to test the check-deployment-replicas policy against all Deployment resources in your currently active cluster, you would run:
kyverno apply /path/to/check-deployment-replicas.yaml --cluster --policy-report
Many advanced policies need to look up the state of other resources in the cluster using Kyverno’s custom CEL functions like resource.Get(). When testing such policies locally with the kyverno apply command, the CLI cannot connect to the cluster to retrieve the required resources, so you have to provide these resources as input via the --context-path flag.
This flag allows you to specify the resources that the policy will reference. The CLI will then use these resources to evaluate the policy.
This example demonstrates how to test a policy that validates an incoming Pod by checking its name against a value stored in a ConfigMap.
First, we define a ValidatingPolicy that uses resource.Get() to fetch a ConfigMap named policy-cm. The policy then validates that the incoming Pod’s name matches the name key in the ConfigMap’s data.
Policy manifest (check-pod-name-from-configmap.yaml):
apiVersion: policies.kyverno.io/v1alpha1
kind: ValidatingPolicy
metadata:
  name: check-pod-name-from-configmap
spec:
  matchConstraints:
    resourceRules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      operations: ["CREATE", "UPDATE"]
      resources: ["pods"]
  variables:
  # This variable uses a Kyverno CEL function to get a ConfigMap from the cluster.
  - name: cm
    expression: >-
      resource.Get("v1", "configmaps", object.metadata.namespace, "policy-cm")
  validations:
  # This rule validates that the Pod's name matches the 'name' key in the ConfigMap's data.
  - expression: >-
      object.metadata.name == variables.cm.data.name
Next, we define two Pod manifests: good-pod, which should pass the validation, and bad-pod, which should fail.
Resource manifest (pods.yaml):
apiVersion: v1
kind: Pod
metadata:
  name: good-pod
spec:
  containers:
  - name: nginx
    image: nginx
---
apiVersion: v1
kind: Pod
metadata:
  name: bad-pod
spec:
  containers:
  - name: nginx
    image: nginx
Because the CLI cannot connect to a cluster to fetch the policy-cm ConfigMap, we must provide it in a context file. This file contains a mock ConfigMap that resource.Get() will use during local evaluation.
Context file (context.yaml):
apiVersion: cli.kyverno.io/v1alpha1
kind: Context
metadata:
  name: context
spec:
  # The resources defined here will be available to functions like resource.Get()
  resources:
  - apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: default
      name: policy-cm
    data:
      # According to this, the valid pod name is 'good-pod'.
      name: good-pod
Now, we can run the kyverno apply command, providing the policy, the resources, and the context file. We also use the -p (or --policy-report) flag to generate a ClusterReport detailing the results.
kyverno apply /path/to/check-pod-name-from-configmap.yaml --resource /path/to/pods.yaml --context-file /path/to/context.yaml -p
The following output will be generated:
apiVersion: openreports.io/v1alpha1
kind: ClusterReport
metadata:
  creationTimestamp: null
  name: merged
results:
- message: success
  policy: check-pod-name-from-configmap
  properties:
    process: background scan
  resources:
  - apiVersion: v1
    kind: Pod
    name: good-pod
    namespace: default
  result: pass
  scored: true
  source: KyvernoValidatingPolicy
  timestamp:
    nanos: 0
    seconds: 1752756617
- policy: check-pod-name-from-configmap
  properties:
    process: background scan
  resources:
  - apiVersion: v1
    kind: Pod
    name: bad-pod
    namespace: default
  result: fail
  scored: true
  source: KyvernoValidatingPolicy
  timestamp:
    nanos: 0
    seconds: 1752756617
source: ""
summary:
  error: 0
  fail: 1
  pass: 1
  skip: 0
  warn: 0
- The good-pod resource resulted in a pass because its name matches the value in the ConfigMap provided by the context file.
- The bad-pod resource resulted in a fail because its name does not match, and the report includes the validation error message from the policy.
When using the --cluster flag, the CLI connects to your active Kubernetes cluster, so a local context file is not needed. The resource.Get() function will fetch the live ConfigMap directly from the cluster, so you must ensure the ConfigMap and the Pod resources exist in your cluster before running the command.
kyverno apply /path/to/check-pod-name-from-configmap.yaml --cluster --policy-report
When applying a ValidatingPolicy with a PolicyException, you can use the --exception flag to specify the exception manifest. The CLI will then apply the policy and the exception together.
In this example, we will test a policy that disallows hostPath volumes, but we will use a PolicyException to create an exemption for a specific Pod.
Policy manifest (disallow-host-path.yaml):
apiVersion: policies.kyverno.io/v1alpha1
kind: ValidatingPolicy
metadata:
  name: disallow-host-path
spec:
  matchConstraints:
    resourceRules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      operations: ["CREATE", "UPDATE"]
      resources: ["pods"]
  validations:
  - expression: "!has(object.spec.volumes) || object.spec.volumes.all(volume, !has(volume.hostPath))"
    message: "HostPath volumes are forbidden. The field spec.volumes[*].hostPath must be unset."
Next, we define a Pod that clearly violates this policy by mounting a hostPath volume. Without an exception, this Pod would be blocked.
Resource manifest (pod-with-hostpath.yaml):
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-hostpath
spec:
  containers:
  - name: nginx
    image: nginx
  volumes:
  - name: udev
    hostPath:
      path: /etc/udev
Now, we create a PolicyException to exempt our specific Pod from this policy. The exception matches the Pod by name and references the disallow-host-path policy.
Policy Exception manifest (exception.yaml):
apiVersion: policies.kyverno.io/v1alpha1
kind: PolicyException
metadata:
  name: exempt-hostpath-pod
spec:
  policyRefs:
  - name: disallow-host-path
    kind: ValidatingPolicy
  matchConditions:
  - name: "skip-pod-by-name"
    expression: "object.metadata.name == 'pod-with-hostpath'"
Now, we use the kyverno apply command, providing the policy, the resource, and the exception using the --exception flag. We will also use -p to generate a detailed report.
kyverno apply disallow-host-path.yaml --resource pod-with-hostpath.yaml --exception exception.yaml -p
The following output will be generated:
apiVersion: openreports.io/v1alpha1
kind: ClusterReport
metadata:
  creationTimestamp: null
  name: merged
results:
- message: 'rule is skipped due to policy exception: exempt-hostpath-pod'
  policy: disallow-host-path
  properties:
    exceptions: exempt-hostpath-pod
    process: background scan
  resources:
  - apiVersion: v1
    kind: Pod
    name: pod-with-hostpath
    namespace: default
  result: skip
  rule: exception
  scored: true
  source: KyvernoValidatingPolicy
  timestamp:
    nanos: 0
    seconds: 1752759828
source: ""
summary:
  error: 0
  fail: 0
  pass: 0
  skip: 1
  warn: 0
The output confirms that the PolicyException worked as intended:
- result: skip: The policy rule was not enforced on the resource.
- properties.exceptions: exempt-hostpath-pod: The report explicitly names the PolicyException responsible for the skip.
- summary.skip: 1: The final count reflects that one rule was skipped.