Kubernetes

Using OPA Gatekeeper for your Kubernetes Cluster Policies

Damian Igbe
Feb. 19, 2022, 7:29 p.m.


Introduction

Open Policy Agent (OPA) is a CNCF project for implementing policies and governance across an enterprise infrastructure deployment. OPA defines a policy and governance specification that is usable across different open-source projects like Kubernetes, Terraform, Ansible, etc. You can create different policies for various aspects of your infrastructure based on what is important to your project. For example, if you don't want to allow privileged pods in your Kubernetes cluster, if you require that pods are properly labeled before being deployed, or if you don't want your users to pull images from a certain repository, you don't just agree on it with the team, you use OPA to enforce it. You do this by first creating the policies and then applying them to your Kubernetes cluster.

While OPA is a general-purpose policy engine, Gatekeeper is specifically for Kubernetes, implemented from the OPA specification. You have the option to implement your policies either in native OPA or in Gatekeeper. In this blog, we will use Gatekeeper for our demo.

In Kubernetes, pod policies were mainly handled by PSP (Pod Security Policies), but PSP has been deprecated since Kubernetes v1.21 and was completely removed in v1.25. This means that OPA Gatekeeper occupies a very important role in securing production Kubernetes workloads. Kyverno is a similar project that provides policy enforcement for Kubernetes clusters.

There are two aspects to using Gatekeeper:

  • The ConstraintTemplate and
  • The Constraint.

  

The ConstraintTemplate

The ConstraintTemplate specifies the policy in a generic sense, while the Constraint is the actual policy enforcing the constraint template. Writing a ConstraintTemplate requires that you understand OPA's policy language, called Rego. However, you don't have to commit to learning Rego right away, because you can always reuse the templates created and maintained by the community, especially if you are just starting out. It makes sense to first search and see if you can find what you are looking for before committing to writing a new policy. You can look for templates in this git repository. There you will also find the most common PSP templates that have been converted into the Gatekeeper format.

OPA Gatekeeper Repository

The templates below are from the getting-started repo adapted from the above link. You can get the templates here:

Getting started with Gatekeeper

We are using a ConstraintTemplate whose kind is K8sRequiredLabels. This policy ensures that every namespace created has the required label. If a namespace does not have that label, it will not be created.

apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        # Schema for the `parameters` field
        openAPIV3Schema:
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels

        violation[{"msg": msg, "details": {"missing_labels": missing}}] {
          provided := {label | input.review.object.metadata.labels[label]}
          required := {label | label := input.parameters.labels[_]}
          missing := required - provided
          count(missing) > 0
          msg := sprintf("you must provide labels: %v", [missing])
        }
 

The ConstraintTemplate above has several sections:

  1. The crd section, consisting of the names and validation blocks, and
  2. The targets section, consisting of:
    • target,
    • rego,
    • libs (not shown here),
    • and the violation rules.

I will focus only on the validation section, since this is the part that links the ConstraintTemplate to the Constraint. In the Constraint shown further below, the parameters are written to conform to the properties section of this schema. No matter how complicated a ConstraintTemplate may be, the only things required to link it to a Constraint are usually the CRD's kind and the properties of its parameters schema.

You will need the other sections if you want to write your own ConstraintTemplate, but remember that you can always reuse the constraint templates from the above link.

The validation section is again shown below:

      names:
        kind: K8sRequiredLabels
      validation:
        # Schema for the `parameters` field
        openAPIV3Schema:
          properties:
            labels:
              type: array
              items:
                type: string

The schema always starts with openAPIV3Schema, then properties, then the parameter names (here, labels). Under labels, you specify the type of the values to expect, in this case an array of strings.

The Constraint

Here is the Constraint that implements the above ConstraintTemplate. The linkage between the two is the kind (K8sRequiredLabels) and the parameters, which must conform to the properties schema.

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-gk
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels: ["gatekeeper"]  
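
The match section determines which objects the constraint applies to. Gatekeeper's match schema also supports narrowing the scope further, for example with excludedNamespaces. The sketch below is a variant of the same constraint that exempts cluster-managed namespaces (the namespace names are examples; adjust them to your cluster):

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-gk
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
    # Exempt system namespaces from the label requirement (example names)
    excludedNamespaces: ["kube-system", "gatekeeper-system"]
  parameters:
    labels: ["gatekeeper"]
```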


Testing The Constraint


Now that we have both the constraint template and the constraint defined, we can create both objects in our Kubernetes cluster. Once the policy is enforced, every namespace must carry the required label, as shown below:

apiVersion: v1
kind: Namespace
metadata:
  name: good-ns
  labels:
    gatekeeper: "true"
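
Conversely, a namespace that lacks the required label should be rejected at admission. Here is a hedged sketch (the name bad-ns is just an example); applying it should be denied with a message built from the sprintf in the template's Rego, something like "you must provide labels: {"gatekeeper"}":

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: bad-ns   # example name; no "gatekeeper" label, so admission should deny it
```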

 

The Dry-run Mode

Introducing new policies to an existing cluster can have adverse effects, for example by blocking existing workloads.

With Gatekeeper, dry-run mode enables you to test the effectiveness of your policies without making actual changes to the cluster. You can first test your policies before enforcement: policy violations are logged and identified without interfering with admission. The idea is to first test the policy and, if all is good, then enforce it. Let us test the k8spspprivilegedcontainer template in the following steps.

Step 1: Apply the Gatekeeper config that replicates cluster data for the audit and dry-run functionality. This is an important step that lets Gatekeeper see what is happening in the cluster.

kubectl create -f- <<EOF
apiVersion: config.gatekeeper.sh/v1alpha1
kind: Config
metadata:
  name: config
  namespace: "gatekeeper-system"
spec:
  sync:
    syncOnly:
      - group: ""
        version: "v1"
        kind: "Namespace"
      - group: ""
        version: "v1"
        kind: "Pod"
EOF


Step 2: With no constraints applied, let's run a workload with elevated privileges. This should be allowed since no policy restricts it yet.

kubectl create -f- <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    securityContext:
      privileged: true
EOF

 

Step 3: Load the k8spspprivilegedcontainer constraint template. On its own, the template does nothing; it becomes active once a Constraint references it.

kubectl create -f- <<EOF
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8spspprivilegedcontainer
spec:
  crd:
    spec:
      names:
        kind: K8sPSPPrivilegedContainer
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8spspprivileged

        violation[{"msg": msg, "details": {}}] {
            c := input_containers[_]
            c.securityContext.privileged
            msg := sprintf("Privileged container is not allowed: %v, securityContext: %v", [c.name, c.securityContext])
        }
        input_containers[c] {
            c := input.review.object.spec.containers[_]
        }
        input_containers[c] {
            c := input.review.object.spec.initContainers[_]
        }
EOF
 

Step 4: Now let's create a new constraint from this constraint template. We will set the enforcementAction to dryrun so that it will not be enforced.

kubectl create -f- <<EOF
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPPrivilegedContainer
metadata:
  name: psp-privileged-container
spec:
  enforcementAction: dryrun
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
EOF

 

Step 5: With Gatekeeper synchronizing running object data and passively checking for violations, we can confirm whether any violations were found by checking the status of the constraint. Run the following command; its output is shown after it.

kubectl get k8spspprivilegedcontainer.constraints.gatekeeper.sh/psp-privileged-container -o yaml


apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPPrivilegedContainer
metadata:
  ...
  name: psp-privileged-container
  ...
spec:
  enforcementAction: dryrun
  match:
    kinds:
    - apiGroups:
      - ""
      kinds:
      - Pod
status:
  auditTimestamp: "2019-12-15T22:19:54Z"
  byPod:
  - enforced: true
    id: gatekeeper-controller-manager-0
  violations:
  - enforcementAction: dryrun
    kind: Pod
    message: 'Privileged container is not allowed: nginx, securityContext: {"privileged":
      true}'
    name: nginx
    namespace: default

 

Step 6: Let's run another privileged Pod to confirm that the policy does not interfere with deployments:

kubectl create -f- <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: privpod
  labels:
    app: privpod
spec:
  containers:
  - name: nginx
    image: nginx
    securityContext:
      privileged: true
EOF


This new Pod will be successfully deployed, meaning that the policy was only in dry-run mode. To test further, we can remove the dry-run mode and verify that the policy is now enforced as expected.
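
To switch the constraint from auditing to enforcing, change enforcementAction from dryrun to deny (deny is also Gatekeeper's default when the field is omitted) and re-apply the constraint. A sketch of the updated manifest:

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPPrivilegedContainer
metadata:
  name: psp-privileged-container
spec:
  enforcementAction: deny   # previously dryrun; deny rejects violating pods at admission
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
```

After applying this with kubectl apply, creating a privileged pod like the ones above should be rejected at admission time.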

Step 7: It is good practice to delete policies when you no longer need them, to prevent them from interfering with resource creation in your cluster. To clean up the resources created in this section, run the following commands:

kubectl delete k8spspprivilegedcontainer.constraints.gatekeeper.sh/psp-privileged-container
kubectl delete constrainttemplate k8spspprivilegedcontainer
kubectl delete pod/nginx
kubectl delete pod/privpod


Conclusion

Here I have introduced you to OPA Gatekeeper and shown you how to work with a ConstraintTemplate and a Constraint. In a production environment, be careful when introducing new policies: use Gatekeeper's dry-run feature, which runs the policies without enforcing them. In the video, I will show you how to install Gatekeeper and then proceed to create the policies shown above.

 
