OpenTelemetry
OpenTelemetry Setup
Setting up OpenTelemetry requires configuring a few YAML files. The required configurations are listed below.
Install Cert-Manager
Install Cert-Manager by following the documentation.
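One common installation method is to apply the release manifest directly, as in the following sketch (the version here is an assumption; check the Cert-Manager releases page for the current one):

```sh
# Installs Cert-Manager from its release manifest; v1.8.0 is assumed here.
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.0/cert-manager.yaml
```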
Config file for OpenTelemetry Collector
Create a configmap.yaml file in the kyverno Namespace with the following content:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: collector-config
  namespace: kyverno
data:
  collector.yaml: |
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: ":8000"
    processors:
      batch:
        send_batch_size: 10000
        timeout: 5s
    extensions:
      health_check: {}
    exporters:
      jaeger:
        endpoint: "jaeger-collector.observability.svc.cluster.local:14250"
        tls:
          insecure: true
      prometheus:
        endpoint: ":9090"
      logging:
        loglevel: debug
    service:
      extensions: [health_check]
      pipelines:
        traces:
          receivers: [otlp]
          processors: []
          exporters: [jaeger, logging]
        metrics:
          receivers: [otlp]
          processors: [batch]
          exporters: [prometheus, logging]
```

- Here the Prometheus exporter endpoint is set to 9090, which means Prometheus will be able to scrape this service on that port to collect metrics (a sketch of a matching scrape job follows this list).
- Similarly, the Jaeger exporter references a Jaeger collector at its default gRPC port, 14250.
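For reference, if you manage your own Prometheus configuration rather than using the ready-made one described later in this guide, a minimal scrape job for this endpoint might look like the following sketch (the target name assumes the collector Service defined below):

```yaml
# Hypothetical scrape job for the collector's Prometheus exporter endpoint.
scrape_configs:
  - job_name: otel-collector
    static_configs:
      - targets: ["opentelemetrycollector.kyverno.svc.cluster.local:9090"]
```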
The Collector Deployment
Create a deployment.yaml file in the kyverno Namespace with the following content:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: opentelemetrycollector
  namespace: kyverno
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: opentelemetrycollector
  template:
    metadata:
      labels:
        app.kubernetes.io/name: opentelemetrycollector
    spec:
      containers:
      - name: otelcol
        args:
        - --config=/conf/collector.yaml
        image: otel/opentelemetry-collector:0.50.0
        volumeMounts:
        - name: collector-config
          mountPath: /conf
      volumes:
      - configMap:
          name: collector-config
          items:
          - key: collector.yaml
            path: collector.yaml
        name: collector-config
```

This references the collector configuration defined in the configmap.yaml above. Here we are using a Deployment with just a single replica; ideally, a DaemonSet is preferred. Check the OpenTelemetry documentation for more deployment strategies (a DaemonSet sketch follows).
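If you opt for the DaemonSet approach, a minimal sketch reusing the same container spec and ConfigMap could look like this (a DaemonSet has no replicas field, since it schedules one Pod per node):

```yaml
# Sketch of the DaemonSet alternative; same Pod template as the Deployment above.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: opentelemetrycollector
  namespace: kyverno
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: opentelemetrycollector
  template:
    metadata:
      labels:
        app.kubernetes.io/name: opentelemetrycollector
    spec:
      containers:
      - name: otelcol
        args:
        - --config=/conf/collector.yaml
        image: otel/opentelemetry-collector:0.50.0
        volumeMounts:
        - name: collector-config
          mountPath: /conf
      volumes:
      - configMap:
          name: collector-config
          items:
          - key: collector.yaml
            path: collector.yaml
        name: collector-config
```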
The Collector Service
Finally, create a service.yaml file in the kyverno Namespace with the following content:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: opentelemetrycollector
  namespace: kyverno
spec:
  ports:
  - name: otlp-grpc
    port: 8000
    protocol: TCP
    targetPort: 8000
  - name: metrics
    port: 9090
    protocol: TCP
    targetPort: 9090
  selector:
    app.kubernetes.io/name: opentelemetrycollector
  type: ClusterIP
```

This defines a Service for the discovery of the collector Deployment.
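With all three manifests created, you can apply and verify them as follows (file names as used above):

```sh
# Apply the ConfigMap, Deployment, and Service, then confirm the collector Pod is running.
kubectl apply -f configmap.yaml -f deployment.yaml -f service.yaml
kubectl -n kyverno get pods -l app.kubernetes.io/name=opentelemetrycollector
```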
Setting up Kyverno and passing required flags
See the installation instructions for Kyverno. Depending on the method used, the following flags must be passed.
- Pass the flag `metricsPort` to define the OpenTelemetry Collector endpoint for collecting metrics.
- Pass the flag `otelConfig=grpc` to export the metrics and traces to an OpenTelemetry collector on the metrics port endpoint (see the sketch after this list).
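With a Helm-based install, these could be supplied as container arguments. The following is a sketch only: the `extraArgs` value name and its format are assumptions and vary between Kyverno chart versions, so adjust them to your chart.

```sh
# Hypothetical Helm invocation; the extraArgs path/format is an assumption.
helm upgrade --install kyverno kyverno/kyverno -n kyverno --create-namespace \
  --set "extraArgs={--otelConfig=grpc,--metricsPort=8000}"
```

Port 8000 here matches the OTLP gRPC receiver endpoint defined in the collector ConfigMap above.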
Setting up a secure connection between Kyverno and the collector
Kyverno also supports setting up a secure connection with the OpenTelemetry exporter using TLS on the server side (on the collector). This requires you to create a certificate-key pair for the OpenTelemetry collector from a private CA and then save the CA certificate as a Secret in your Kyverno Namespace with a key named ca.pem.
Assuming you already have the server.pem and server-key.pem files along with the ca.pem file (you can generate these using a tool such as OpenSSL or cfssl), your OpenTelemetry configmap.yaml and deployment.yaml files will change accordingly, as shown below.
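The Secrets referenced by the updated Deployment can be created from those files, as in this sketch (the Secret names match the volume definitions below; adjust the file paths as needed):

```sh
# Server certificate/key pair for the collector's OTLP receiver.
kubectl create secret generic otel-collector-secrets -n kyverno \
  --from-file=server.pem --from-file=server-key.pem
# CA certificate, stored under the key name ca.pem as Kyverno expects.
kubectl create secret generic root-ca -n kyverno --from-file=ca.pem
```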
configmap.yaml
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: collector-config
  namespace: kyverno
data:
  collector.yaml: |
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: ":8000"
            tls:
              cert_file: /etc/ssl/certs/server/server.pem
              key_file: /etc/ssl/certs/server/server-key.pem
              ca_file: /etc/ssl/certs/ca/ca.pem
    processors:
      batch:
        send_batch_size: 10000
        timeout: 5s
    extensions:
      health_check: {}
    exporters:
      jaeger:
        endpoint: "jaeger-collector.observability.svc.cluster.local:14250"
        tls:
          insecure: true
      prometheus:
        endpoint: ":9090"
      logging:
        loglevel: debug
    service:
      extensions: [health_check]
      pipelines:
        traces:
          receivers: [otlp]
          processors: []
          exporters: [jaeger, logging]
        metrics:
          receivers: [otlp]
          processors: [batch]
          exporters: [prometheus, logging]
```

deployment.yaml
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: opentelemetrycollector
  namespace: kyverno
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: opentelemetrycollector
  template:
    metadata:
      labels:
        app.kubernetes.io/name: opentelemetrycollector
    spec:
      containers:
      - name: otelcol
        args:
        - --config=/conf/collector.yaml
        image: otel/opentelemetry-collector:0.50.0
        volumeMounts:
        - name: collector-config
          mountPath: /conf
        - name: otel-collector-secrets
          mountPath: /etc/ssl/certs/server
        - name: root-ca
          mountPath: /etc/ssl/certs/ca
      volumes:
      - configMap:
          name: collector-config
          items:
          - key: collector.yaml
            path: collector.yaml
        name: collector-config
      - secret:
          secretName: otel-collector-secrets
        name: otel-collector-secrets
      - secret:
          secretName: root-ca
        name: root-ca
```

This will ensure that the OpenTelemetry collector can only accept encrypted data on the receiver endpoint.
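You can then re-apply the manifests and check the collector logs to confirm it started cleanly with the TLS settings:

```sh
kubectl apply -f configmap.yaml -f deployment.yaml
kubectl -n kyverno logs deploy/opentelemetrycollector
```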
Pass the flag `transportCreds`, set to the name of the Secret containing the ca.pem file (an empty string means an insecure connection will be used).
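Extending the earlier hypothetical Helm sketch with this flag (again, the `extraArgs` format is an assumption; adjust it to your chart version):

```sh
# root-ca is the Secret holding ca.pem, created earlier.
helm upgrade kyverno kyverno/kyverno -n kyverno \
  --set "extraArgs={--otelConfig=grpc,--metricsPort=8000,--transportCreds=root-ca}"
```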
Setting up Prometheus
- For the metrics backend, you can install Prometheus on your cluster. For a general example, we have a ready-made configuration for you. Install Prometheus by running:
```sh
kubectl apply -k github.com/kyverno/grafana-dashboard/examples/prometheus
```

- Port-forward the Prometheus service to view the metrics on localhost:
```sh
kubectl port-forward svc/prometheus-server 9090:9090 -n kyverno
```
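The Prometheus UI should then be reachable at http://localhost:9090 in your browser, assuming the port-forward above is running.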
Setting up Jaeger
The traces are pushed to the Jaeger backend on port 14250. To install Jaeger:
First install the Jaeger Operator. Replace the version as needed.
```sh
kubectl create namespace observability
kubectl create -n observability -f https://github.com/jaegertracing/jaeger-operator/releases/download/v1.33.0/jaeger-operator.yaml
kubectl wait --for=condition=Available deployment --timeout=2m -n observability --all
```

Create a Jaeger resource configuration as shown below.
jaeger.yaml
```yaml
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: jaeger
  namespace: observability
```

Install the Jaeger backend:
```sh
kubectl create -f jaeger.yaml
```
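Optionally, confirm that the Jaeger instance was created (the jaegers custom resource is installed by the Operator):

```sh
kubectl get jaegers -n observability
```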
Port-forward the Jaeger Service on 16686 to view the traces:

```sh
kubectl port-forward svc/jaeger-query 16686:16686 -n observability
```
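The Kyverno traces should then be visible in the Jaeger UI at http://localhost:16686, assuming the port-forward above is running.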