Platform Notes

Special considerations for certain Kubernetes platforms.

Platform-Specific Notes

Depending on the application used to install and manage Kyverno, or the Kubernetes platform on which the cluster is built, there are some specific considerations to be aware of. These notes assume the Helm chart is the installation artifact used.

Notes for ArgoCD users

When deploying the Kyverno Helm chart with ArgoCD, you will need to enable Replace in the syncOptions. You may also want to ignore differences in the aggregated ClusterRoles which Kyverno uses by default. Aggregated ClusterRoles are built by aggregating other ClusterRoles in the cluster and are dynamic by nature, so desired and observed states cannot match.

You can do so by following the instructions in the ArgoCD documentation.

ArgoCD uses Helm only for templating but applies the results with kubectl. Unfortunately, kubectl adds metadata (the last-applied-configuration annotation) that exceeds the size limit allowed by Kubernetes. Using Replace overcomes this limitation. Another option is to use server-side apply, supported in ArgoCD v2.5+.

Below is an example of an ArgoCD Application manifest that should work with the Kyverno Helm chart:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: kyverno
  namespace: argocd
spec:
  destination:
    namespace: kyverno
    server: https://kubernetes.default.svc
  project: default
  source:
    chart: kyverno
    repoURL: https://kyverno.github.io/kyverno
    targetRevision: <my.target.version>
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
      - Replace=true
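If you prefer the server-side apply route mentioned above (ArgoCD v2.5+) instead of Replace, the syncPolicy of the Application could be adjusted along the lines of the following sketch; verify the behavior against the ArgoCD documentation for your version:

  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
      # apply manifests server-side instead of replacing them
      - ServerSideApply=true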

For considerations when using Argo CD along with Kyverno mutate policies, see the documentation here.

Argo CD users may also want to have Kyverno add labels to its webhooks via the webhookLabels key in the Kyverno ConfigMap, which is helpful when viewing the Kyverno application in Argo CD.
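These labels can be supplied through the Helm chart's config.webhookLabels value, which is rendered into the Kyverno ConfigMap. The sketch below uses a hypothetical label purely for illustration; verify the exact value name and format against the values.yaml of your chart version:

# values.yaml (sketch; check your chart version for the exact key and format)
config:
  webhookLabels:
    # hypothetical label, used only to illustrate the shape of the value
    app.kubernetes.io/managed-by: argocd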

Ownership Clashes

ArgoCD automatically sets the app.kubernetes.io/instance label and uses it to determine which resources form the app. The Kyverno Helm chart also sets this label for the same purposes. In order to resolve this conflict, configure ArgoCD to use a different tracking mechanism as described in the ArgoCD documentation.
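For example, ArgoCD can be switched to annotation-based resource tracking through its argocd-cm ConfigMap. The following is a minimal sketch; consult the ArgoCD documentation for the tracking methods available in your version:

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd
data:
  # track resources by annotation instead of the app.kubernetes.io/instance label
  application.resourceTrackingMethod: annotation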

Notes for OpenShift Users

Red Hat OpenShift contains a feature called Security Context Constraints (SCC) which enforces certain security controls in a profile-driven manner. An OpenShift cluster contains several of these out of the box, with OpenShift 4.11 preferring restricted-v2 by default. The Kyverno Helm chart defines its own values for the Pod's securityContext object which, although it conforms to the upstream Pod Security Standards' restricted profile, may be incompatible with your defined Security Context Constraints. Deploying the Kyverno Helm chart as-is on an OpenShift environment may result in an error similar to "unable to validate against any security context constraint". To get past this, deploy the Kyverno Helm chart with the required securityContext flags/fields set to a value of null; OpenShift will apply the defined SCC upon deployment. If on OpenShift 4.11+, the restricted-v2 profile is known to allow successful deployment of the chart without modifying the Helm chart installation process.
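A minimal sketch of such an override is shown below. The value paths are assumptions based on recent chart layouts and must be checked against the values.yaml of the chart version you are installing:

# values.yaml (sketch; value paths assumed, verify against your chart version)
admissionController:
  container:
    # defer to the OpenShift SCC instead of the chart's defaults;
    # repeat for the other controllers as needed
    securityContext: null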

Notes for EKS Users

For EKS clusters built with the VPC CNI plug-in, if you wish to opt for the operability strategy as defined in the Security vs Operability section, you should exclude the kube-system Namespace from webhooks during the installation of Kyverno, as this is the Namespace where the plug-in runs. If all the cluster Nodes are "deleted" (for example, the cluster has only one node group and it is scaled to zero), which also impacts where the Kyverno replicas run, and kube-system is not excluded while at least one policy in Fail mode matches on Pods, the VPC CNI plug-in's DaemonSet Pods may not be able to come online to finish the Node bootstrapping process. If this situation occurs, Kyverno will be unable to service webhook requests because the underlying cluster network cannot return to a healthy state. As of Kyverno 1.12, kube-system is excluded by default in webhooks.
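If you need to configure the exclusion explicitly (for example, on releases prior to 1.12), a sketch using the chart's config.webhooks value is shown below. The key name and structure vary across Kyverno and chart versions, so check the documentation and values.yaml for yours:

# values.yaml (sketch; the webhooks format differs between Kyverno versions)
config:
  webhooks:
    namespaceSelector:
      matchExpressions:
        # skip webhook calls for resources in kube-system
        - key: kubernetes.io/metadata.name
          operator: NotIn
          values:
            - kube-system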

Notes for AKS Users

AKS uses an Admission Enforcer to control the webhooks in an AKS cluster and will remove those that may impact system Namespaces. Since Kyverno registers as a webhook, this Admission Enforcer may remove Kyverno's webhook, causing the two to fight over webhook reconciliation. See this Microsoft Azure FAQ for further information. When deploying Kyverno on an AKS cluster, set the Helm option config.webhookAnnotations to include the annotation that disables the Admission Enforcer; Kyverno will configure its webhooks with this annotation to prevent their removal by AKS. The annotation that should be used is "admissions.enforcer/disabled": true. See the chart README for more information. As of Kyverno 1.12, this annotation is already set for you.
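A minimal sketch of the corresponding Helm values follows; on Kyverno 1.12+ this is already the chart default, so treat it as illustrative and confirm the key against your chart's values.yaml:

# values.yaml (sketch; set by default as of Kyverno 1.12)
config:
  webhookAnnotations:
    # ask the AKS Admission Enforcer to leave Kyverno's webhooks alone
    admissions.enforcer/disabled: "true"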
