apply

The apply command is used to perform a dry run on one or more policies with a given set of input resources. This can be useful to determine a policy's effectiveness prior to committing it to a cluster. In the case of mutate policies, the apply command can show the mutated resource as an output. The input resources can either be resource manifests (one or multiple) or can be taken from a running Kubernetes cluster. The apply command supports files from URLs for both policies and resources.

Apply to a resource:

kyverno apply /path/to/policy.yaml --resource /path/to/resource.yaml
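
Policies and resources can also be fetched from URLs. A minimal sketch, assuming both files are hosted at reachable HTTP(S) locations (the URLs below are illustrative only):

kyverno apply https://example.com/policy.yaml --resource https://example.com/resource.yaml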

Apply a policy to all matching resources in a cluster based on the current kubectl context:

kyverno apply /path/to/policy.yaml --cluster

The resources can also be passed from stdin:

kustomize build nginx/overlays/envs/prod/ | kyverno apply /path/to/policy.yaml --resource -

Apply all ClusterPolicies in the current cluster to all matching resources, based on the current kubectl context:

kubectl get clusterpolicies -o yaml | kyverno apply - --cluster

Apply multiple policies to multiple resources:

kyverno apply /path/to/policy1.yaml /path/to/folderFullOfPolicies --resource /path/to/resource1.yaml --resource /path/to/resource2.yaml --cluster

Apply a policy to a resource with a policy exception:

kyverno apply /path/to/policy.yaml --resource /path/to/resource.yaml --exception /path/to/exception.yaml

Apply multiple policies to multiple resources with exceptions:

kyverno apply /path/to/policy1.yaml /path/to/folderFullOfPolicies --resource /path/to/resource1.yaml --resource /path/to/resource2.yaml --exception /path/to/exception1.yaml --exception /path/to/exception2.yaml

Apply a mutation policy to a specific resource:

kyverno apply /path/to/policy.yaml --resource /path/to/resource.yaml

applying 1 policy to 1 resource...

mutate policy <policy_name> applied to <resource_name>:
<final mutated resource output>

Save the mutated resource to a file:

kyverno apply /path/to/policy.yaml --resource /path/to/resource.yaml -o newresource.yaml

Save the mutated resource to a directory:

kyverno apply /path/to/policy.yaml --resource /path/to/resource.yaml -o foo/

Apply a policy containing variables using the --set or -s flag to pass in the values. Variables that begin with {{request.object}} normally do not need to be specified as these will be read from the resource.

kyverno apply /path/to/policy.yaml --resource /path/to/resource.yaml --set <variable1>=<value1>,<variable2>=<value2>

Use -f or --values-file to apply multiple policies to multiple resources while passing a file containing variables and their values. Variables can be of various types, including AdmissionReview fields, ConfigMap context data, and API call context data.

Use -u or --userinfo to pass an optional user_info.yaml file containing user information from the admission request (for example, cluster roles and username).

kyverno apply /path/to/policy1.yaml /path/to/policy2.yaml --resource /path/to/resource1.yaml --resource /path/to/resource2.yaml -f /path/to/value.yaml --userinfo /path/to/user_info.yaml

Format of value.yaml with all possible fields:

apiVersion: cli.kyverno.io/v1alpha1
kind: Values
metadata:
  name: values
policies:
  - name: <policy1 name>
    rules:
    - name: <rule1 name>
      values:
        <context variable1 in policy1 rule1>: <value>
        <context variable2 in policy1 rule1>: <value>
    - name: <rule2 name>
      values:
        <context variable1 in policy1 rule2>: <value>
        <context variable2 in policy1 rule2>: <value>
    resources:
    - name: <resource1 name>
      values:
        <variable1 in policy1>: <value>
        <variable2 in policy1>: <value>
    - name: <resource2 name>
      values:
        <variable1 in policy1>: <value>
        <variable2 in policy1>: <value>
namespaceSelector:
- name: <namespace1 name>
  labels:
    <label key>: <label value>
- name: <namespace2 name>
  labels:
    <label key>: <label value>

Format of user_info.yaml:

apiVersion: cli.kyverno.io/v1alpha1
kind: UserInfo
metadata:
  name: user-info
clusterRoles:
- admin
userInfo:
  username: molybdenum@somecorp.com

Example:

Policy manifest (add_network_policy.yaml):

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-networkpolicy
spec:
  background: false
  rules:
  - name: default-deny-ingress
    match:
      any:
      - resources:
          kinds:
          - Namespace
        clusterRoles:
        - cluster-admin
    generate:
      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      name: default-deny-ingress
      namespace: "{{request.object.metadata.name}}"
      synchronize: true
      data:
        spec:
          # select all pods in the namespace
          podSelector: {}
          policyTypes:
          - Ingress

Resource manifest (required_default_network_policy.yaml):

kind: Namespace
apiVersion: v1
metadata:
  name: devtest

Apply a policy to a resource using the --set or -s flag to pass a variable directly:

kyverno apply /path/to/add_network_policy.yaml --resource /path/to/required_default_network_policy.yaml -s request.object.metadata.name=devtest

Apply a policy to a resource using the --values-file or -f flag:

YAML file containing variables (value.yaml):

apiVersion: cli.kyverno.io/v1alpha1
kind: Values
metadata:
  name: values
policies:
  - name: add-networkpolicy
    resources:
      - name: devtest
        values:
          request.namespace: devtest

kyverno apply /path/to/add_network_policy.yaml --resource /path/to/required_default_network_policy.yaml -f /path/to/value.yaml

On applying the above policy to the mentioned resources, the following output will be generated:

Applying 1 policy to 1 resource...
(Total number of result count may vary as the policy is mutated by Kyverno. To check the mutated policy please try with log level 5)

pass: 1, fail: 0, warn: 0, error: 0, skip: 0

The summary count is based on the number of rules applied across the number of resources; for example, a policy with two rules applied to three resources can produce up to six results.

Value files also support global values, which can be passed to all resources the policy is being applied to.

Format of value.yaml:

apiVersion: cli.kyverno.io/v1alpha1
kind: Values
metadata:
  name: values
policies:
  - name: <policy1 name>
    resources:
      - name: <resource1 name>
        values:
          <variable1 in policy1>: <value>
          <variable2 in policy1>: <value>
      - name: <resource2 name>
        values:
          <variable1 in policy1>: <value>
          <variable2 in policy1>: <value>
  - name: <policy2 name>
    resources:
      - name: <resource1 name>
        values:
          <variable1 in policy2>: <value>
          <variable2 in policy2>: <value>
      - name: <resource2 name>
        values:
          <variable1 in policy2>: <value>
          <variable2 in policy2>: <value>
globalValues:
  <global variable1>: <value>
  <global variable2>: <value>

If a resource-specific value and a global value have the same variable name, the resource value takes precedence over the global value. See the Pod test-global-prod in the following example.

Example:

Policy manifest (add_dev_pod.yaml):

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: cm-globalval-example
spec:
  validationFailureAction: Enforce
  background: false
  rules:
    - name: validate-mode
      match:
        any:
        - resources:
            kinds:
              - Pod
      validate:
        message: "The value {{ request.mode }} for val1 is not equal to 'dev'."
        deny:
          conditions:
            any:
              - key: "{{ request.mode }}"
                operator: NotEquals
                value: dev

Resource manifest (dev_prod_pod.yaml):

apiVersion: v1
kind: Pod
metadata:
  name: test-global-prod
spec:
  containers:
    - name: nginx
      image: nginx:latest
---
apiVersion: v1
kind: Pod
metadata:
  name: test-global-dev
spec:
  containers:
    - name: nginx
      image: nginx:1.12

YAML file containing variables (value.yaml):

apiVersion: cli.kyverno.io/v1alpha1
kind: Values
metadata:
  name: values
policies:
  - name: cm-globalval-example
    resources:
      - name: test-global-prod
        values:
          request.mode: prod
globalValues:
  request.mode: dev

kyverno apply /path/to/add_dev_pod.yaml --resource /path/to/dev_prod_pod.yaml -f /path/to/value.yaml

The Pod test-global-dev passes the validation, and test-global-prod fails.

Apply a policy with the Namespace selector:

Use --values-file or -f to pass a file containing the Namespace details. See the Kyverno documentation on Namespace selectors for more information.

kyverno apply /path/to/policy1.yaml /path/to/policy2.yaml --resource /path/to/resource1.yaml --resource /path/to/resource2.yaml -f /path/to/value.yaml

Format of value.yaml:

apiVersion: cli.kyverno.io/v1alpha1
kind: Values
metadata:
  name: values
namespaceSelector:
  - name: <namespace1 name>
    labels:
      <namespace label key>: <namespace label value>
  - name: <namespace2 name>
    labels:
      <namespace label key>: <namespace label value>

Example:

Policy manifest (enforce-pod-name.yaml):

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: enforce-pod-name
spec:
  validationFailureAction: Audit
  background: true
  rules:
    - name: validate-name
      match:
        any:
        - resources:
            kinds:
              - Pod
            namespaceSelector:
              matchExpressions:
              - key: foo.com/managed-state
                operator: In
                values:
                - managed
      validate:
        message: "The Pod must end with -nginx"
        pattern:
          metadata:
            name: "*-nginx"

Resource manifest (nginx.yaml):

kind: Pod
apiVersion: v1
metadata:
  name: test-nginx
  namespace: test1
spec:
  containers:
  - name: nginx
    image: nginx:latest

Namespace manifest (namespace.yaml):

apiVersion: v1
kind: Namespace
metadata:
  name: test1
  labels:
    foo.com/managed-state: managed

YAML file containing variables (value.yaml):

apiVersion: cli.kyverno.io/v1alpha1
kind: Values
metadata:
  name: values
namespaceSelector:
  - name: test1
    labels:
      foo.com/managed-state: managed

To test the above policy, use the following command:

kyverno apply /path/to/enforce-pod-name.yaml --resource /path/to/nginx.yaml -f /path/to/value.yaml

Apply a policy that uses a context variable to a resource:

Use --values-file or -f for passing a file containing the context variable.

kyverno apply /path/to/policy1.yaml --resource /path/to/resource1.yaml -f /path/to/value.yaml

policy1.yaml

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: cm-variable-example
  annotations:
    pod-policies.kyverno.io/autogen-controllers: DaemonSet,Deployment,StatefulSet
spec:
  validationFailureAction: Enforce
  background: false
  rules:
    - name: example-configmap-lookup
      context:
      - name: dictionary
        configMap:
          name: mycmap
          namespace: default
      match:
        any:
        - resources:
            kinds:
            - Pod
      mutate:
        patchStrategicMerge:
          metadata:
            labels:
              my-environment-name: "{{dictionary.data.env}}"

resource1.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx-config-test
spec:
  containers:
  - image: nginx:latest
    name: test-nginx

value.yaml

apiVersion: cli.kyverno.io/v1alpha1
kind: Values
metadata:
  name: values
policies:
  - name: cm-variable-example
    rules:
      - name: example-configmap-lookup
        values:
          dictionary.data.env: dev1

Policies that have their validationFailureAction set to Audit can be set to produce a warning instead of a failure using the --audit-warn flag. This will also cause an exit code of zero if no enforcing policies failed.

kyverno apply /path/to/policy.yaml --resource /path/to/resource.yaml --audit-warn

Additionally, you can use the --warn-exit-code flag with the apply command to control the exit code when warnings are reported. This is useful in CI/CD systems when used with the --audit-warn flag to treat Audit policies as warnings. When no failures or errors are found, but warnings are encountered, the CLI will exit with the defined exit code.

kyverno apply disallow-latest-tag.yaml --resource=echo-test.yaml --audit-warn --warn-exit-code 3
echo $?
3
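
In a CI/CD script, the chosen warning exit code can then be branched on explicitly. A minimal shell sketch (the file names and the exit code value are illustrative):

kyverno apply disallow-latest-tag.yaml --resource=echo-test.yaml --audit-warn --warn-exit-code 3
status=$?
# exit code 3 means only warnings were reported; any other non-zero code means failures or errors
if [ "$status" -eq 3 ]; then
  echo "policy warnings were reported"
elif [ "$status" -ne 0 ]; then
  echo "policy failures or errors were reported"
  exit "$status"
fi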

You can also use --warn-exit-code in combination with the --warn-no-pass flag to make the CLI exit with the warning code if no objects were found that satisfy a policy. This may be useful during the initial development of a policy, or if you want to make sure the manifest actually contains objects that the policy matches.

kyverno apply disallow-latest-tag.yaml --resource=empty.yaml --warn-exit-code 3 --warn-no-pass
echo $?
3

Policy Report

Policy reports provide information about policy execution and violations. Use --policy-report with the apply command to generate a policy report for validate policies. Mutate and generate policies do not trigger policy reports.

Policy reports can also be generated for a live cluster. When generating a policy report for a live cluster, the -r flag, which names a resource, assumes the name is globally unique; it does not support prefixing the resource type (for example, Pod/foo when the cluster contains resources of different kinds with the same name). To generate a policy report for a live cluster, use --cluster together with --policy-report.

kyverno apply policy.yaml --cluster --policy-report

The above example applies policy.yaml to all resources in the cluster.

Below are the combinations of inputs that can be used to generate a policy report with the Kyverno CLI.

| Policy | Resource | Cluster | Namespace | Interpretation |
|---|---|---|---|---|
| policy.yaml | -r resource.yaml | false | | Apply policy from policy.yaml to the resources specified in resource.yaml |
| policy.yaml | -r resourceName | true | | Apply policy from policy.yaml to the resource with a given name in the cluster |
| policy.yaml | | true | | Apply policy from policy.yaml to all the resources in the cluster |
| policy.yaml | -r resourceName | true | -n=namespaceName | Apply policy from policy.yaml to the resource with a given name in a specific Namespace |
| policy.yaml | | true | -n=namespaceName | Apply policy from policy.yaml to all the resources in a specific Namespace |

Example:

Consider the following policy and resources:

policy.yaml

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-pod-requests-limits
spec:
  validationFailureAction: Audit
  rules:
  - name: validate-resources
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "CPU and memory resource requests and limits are required"
      pattern:
        spec:
          containers:
          - resources:
              requests:
                memory: "?*"
                cpu: "?*"
              limits:
                memory: "?*"

resource1.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx1
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"

resource2.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx2
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent

Case 1: Apply a policy manifest to multiple resource manifests

kyverno apply policy.yaml -r resource1.yaml -r resource2.yaml --policy-report

Case 2: Apply a policy manifest to multiple resources in the cluster

First create the resources in the cluster by applying the manifests resource1.yaml and resource2.yaml.

kyverno apply policy.yaml -r nginx1 -r nginx2 --cluster --policy-report

Case 3: Apply a policy manifest to all resources in the cluster

kyverno apply policy.yaml --cluster --policy-report

Given the contents of policy.yaml shown earlier, this will produce a report validating against all Pods in the cluster.

Case 4: Apply a policy manifest to multiple resources by name within a specific Namespace

kyverno apply policy.yaml -r nginx1 -r nginx2 --cluster --policy-report -n default

Case 5: Apply a policy manifest to all resources within the default Namespace

kyverno apply policy.yaml --cluster --policy-report -n default

Given the contents of policy.yaml shown earlier, this will produce a report validating all Pods within the default Namespace.

Applying policy.yaml to the resources above generates the following report:

apiVersion: wgpolicyk8s.io/v1alpha1
kind: ClusterPolicyReport
metadata:
  name: clusterpolicyreport
results:
- message: Validation rule 'validate-resources' succeeded.
  policy: require-pod-requests-limits
  resources:
  - apiVersion: v1
    kind: Pod
    name: nginx1
    namespace: default
  rule: validate-resources
  scored: true
  status: pass
- message: 'Validation error: CPU and memory resource requests and limits are required; Validation rule validate-resources failed at path /spec/containers/0/resources/limits/'
  policy: require-pod-requests-limits
  resources:
  - apiVersion: v1
    kind: Pod
    name: nginx2
    namespace: default
  rule: validate-resources
  scored: true
  status: fail
summary:
  error: 0
  fail: 1
  pass: 1
  skip: 0
  warn: 0

Applying Policy Exceptions

Policy Exceptions can be applied alongside policies by using the -e or --exceptions flag to pass the Policy Exception manifest.

kyverno apply /path/to/policy.yaml --resource /path/to/resource.yaml --exception /path/to/exception.yaml

Example:

Applying a policy to a resource with a policy exception.

Policy manifest (policy.yaml):

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: max-containers
spec:
  validationFailureAction: Enforce
  background: false
  rules:
  - name: max-two-containers
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "A maximum of 2 containers are allowed inside a Pod."
      deny:
        conditions:
          any:
          - key: "{{request.object.spec.containers[] | length(@)}}"
            operator: GreaterThan
            value: 2

Policy Exception manifest (exception.yaml):

apiVersion: kyverno.io/v2beta1
kind: PolicyException
metadata:
  name: container-exception
spec:
  exceptions:
  - policyName: max-containers
    ruleNames:
    - max-two-containers
    - autogen-max-two-containers
  match:
    any:
    - resources:
        kinds:
        - Pod
        - Deployment
  conditions:
    any:
    - key: "{{ request.object.metadata.labels.color || '' }}"
      operator: Equals
      value: blue

Resource manifest (resource.yaml):

A Deployment matching the characteristics defined in the PolicyException, shown below, will be allowed even though it technically violates the rule's definition.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: three-containers-deployment
  labels:
    app: my-app
    color: blue
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
        color: blue
    spec:
      containers:
        - name: nginx-container
          image: nginx:latest
          ports:
            - containerPort: 80
        - name: redis-container
          image: redis:latest
          ports:
            - containerPort: 6379
        - name: busybox-container
          image: busybox:latest
          command: ["/bin/sh", "-c", "while true; do echo 'Hello from BusyBox'; sleep 10; done"]

Apply the above policy to the resource with the exception:

kyverno apply /path/to/policy.yaml --resource /path/to/resource.yaml --exception /path/to/exception.yaml

The following output will be generated:

Applying 3 policy rule(s) to 1 resource(s) with 1 exception(s)...

pass: 0, fail: 0, warn: 0, error: 0, skip: 1

Applying ValidatingAdmissionPolicies

With the apply command, Kubernetes ValidatingAdmissionPolicies can be applied to resources as follows:

Policy manifest (check-deployment-replicas.yaml):

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: check-deployments-replicas
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
    - apiGroups:   ["apps"]
      apiVersions: ["v1"]
      operations:  ["CREATE", "UPDATE"]
      resources:   ["deployments"]
  validations:
    - expression: "object.spec.replicas <= 3"
      message: "Replicas must be less than or equal 3"

Resource manifest (deployment.yaml):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-pass
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-pass
  template:
    metadata:
      labels:
        app: nginx-pass
    spec:
      containers:
      - name: nginx-server
        image: nginx

Apply the ValidatingAdmissionPolicy to the resource:

kyverno apply /path/to/check-deployment-replicas.yaml --resource /path/to/deployment.yaml

The following output will be generated:

Applying 1 policy rule(s) to 1 resource(s)...

pass: 1, fail: 0, warn: 0, error: 0, skip: 0

The below example applies a ValidatingAdmissionPolicyBinding along with the policy to all resources in the cluster.

Policy manifest (check-deployment-replicas.yaml):

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: "check-deployment-replicas"
spec:
  matchConstraints:
    resourceRules:
    - apiGroups:
      - apps
      apiVersions:
      - v1
      operations:
      - CREATE
      - UPDATE
      resources:
      - deployments
  validations:
  - expression: object.spec.replicas <= 5
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: "check-deployment-replicas-binding"
spec:
  policyName: "check-deployment-replicas"
  validationActions: [Deny]
  matchResources:
    namespaceSelector:
      matchLabels:
        environment: staging

The above policy verifies that the number of Deployment replicas does not exceed 5, and the binding limits enforcement to Namespaces labeled environment: staging.

Create a Namespace with the label environment: staging:

kubectl create ns staging
kubectl label ns staging environment=staging

Create two Deployments, each with 6 replicas; only the one in the staging Namespace is subject to the policy binding.

kubectl create deployment nginx-1 --image=nginx --replicas=6 -n staging
kubectl create deployment nginx-2 --image=nginx --replicas=6

Get all Deployments from the cluster:

kubectl get deployments -A

NAMESPACE            NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
default              nginx-2                  6/6     6            6           7m26s
kube-system          coredns                  2/2     2            2           13m
local-path-storage   local-path-provisioner   1/1     1            1           13m
staging              nginx-1                  6/6     6            6           7m44s

Apply the ValidatingAdmissionPolicy with its binding to all resources in the cluster:

kyverno apply /path/to/check-deployment-replicas.yaml --cluster --policy-report

The following output will be generated:

Applying 1 policy rule(s) to 4 resource(s)...
----------------------------------------------------------------------
POLICY REPORT:
----------------------------------------------------------------------
apiVersion: wgpolicyk8s.io/v1alpha2
kind: ClusterPolicyReport
metadata:
  creationTimestamp: null
  name: merged
results:
- message: 'failed expression: object.spec.replicas <= 5'
  policy: check-deployment-replicas
  resources:
  - apiVersion: apps/v1
    kind: Deployment
    name: nginx-1
    namespace: staging
    uid: a95d1594-44a7-4c8a-9225-04ac34cb9494
  result: fail
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1707394871
summary:
  error: 0
  fail: 1
  pass: 0
  skip: 0
  warn: 0

As expected, the policy is only applied to nginx-1 as it matches both the policy definition and its binding.

