In this tutorial, we will learn how to set up OPA in EKS using 7 easy steps. In Kubernetes, PodSecurityPolicy (PSP) was used to restrict the pods defined in the cluster from being disruptive to worker nodes or the wider cluster. PSP was deprecated in Kubernetes 1.21 and removed in version 1.25. Hence, OPA is used as an admission controller, as an alternative to PSP, to enforce semantic validation of objects during create, update and delete operations.
What is OPA?
OPA (Open Policy Agent) is an open-source policy enforcement tool. OPA is written in the Go programming language and uses a declarative policy language called Rego to define policies, which it then evaluates in its decision-making process (a minimal Rego example is shown after the list below). It provides a uniform framework for enforcing and controlling policies across the various components of a cloud-native solution. OPA implements security policies as code. OPA can be used for several purposes, for example:
- It can be deployed as a Kubernetes admission controller to validate API requests.
- It can authorize REST API endpoints.
- It can integrate custom authorization logic into applications.
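To give a feel for Rego before we use it inside Gatekeeper, below is a minimal, self-contained policy sketch. The package name, the allowed_users list and the input field "user" are made-up examples for illustration only; this snippet is not part of the EKS setup that follows.

# Illustrative Rego policy: allow a request only when input.user is in the allowed list.
package example.authz

default allow = false

# allow becomes true when the incoming user matches any entry in allowed_users
allow {
    input.user == allowed_users[_]
}

allowed_users = ["alice", "bob"]

If you have the opa binary installed locally, a policy like this can be evaluated against a sample input with opa eval (for example: opa eval -d policy.rego -i input.json "data.example.authz.allow").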
How to Setup OPA in EKS Using 7 Easy Steps
Prerequisites
- An existing EKS cluster.
- Kubernetes version must be 1.20 or above (a quick way to check is shown below).
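You can confirm the cluster version before proceeding; the server version reported by kubectl, or the VERSION column shown for the worker nodes, should be 1.20 or higher.

[nasauser@linuxnasa~]$ kubectl version
[nasauser@linuxnasa~]$ kubectl get nodes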
Step-1: Install OPA gatekeeper
In this step, download any stable version of Gatekeeper from the official GitHub repository. In this tutorial, stable version v3.11.0 is used. Use the below command to download it.
[nasauser@linuxnasa~]$ wget https://github.com/open-policy-agent/gatekeeper/archive/refs/tags/v3.11.0.tar.gz
--2023-03-31 06:19:52--  https://github.com/open-policy-agent/gatekeeper/archive/refs/tags/v3.11.0.tar.gz
Resolving github.com (github.com)... 20.207.73.82
Connecting to github.com (github.com)|20.207.73.82|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://codeload.github.com/open-policy-agent/gatekeeper/tar.gz/refs/tags/v3.11.0 [following]
--2023-03-31 06:19:53--  https://codeload.github.com/open-policy-agent/gatekeeper/tar.gz/refs/tags/v3.11.0
Resolving codeload.github.com (codeload.github.com)... 20.207.73.88
Connecting to codeload.github.com (codeload.github.com)|20.207.73.88|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [application/x-gzip]
Saving to: ‘v3.11.0.tar.gz’

v3.11.0.tar.gz              [ <=> ]  11.99M  4.92MB/s    in 2.4s

2023-03-31 06:19:56 (4.92 MB/s) - ‘v3.11.0.tar.gz’ saved [12576290]
The tar.gz file gets downloaded to the current working directory.
[nasauser@linuxnasa~]$ ls
v3.11.0.tar.gz
Extract the tarball using the below command.
[nasauser@linuxnasa~]$ tar -xzf v3.11.0.tar.gz
gatekeeper-3.11.0/
Switch to the path where the Gatekeeper YAML file is present.
[nasauser@linuxnasa~]$ cd gatekeeper-3.11.0/deploy/
Deploy Gatekeeper using the gatekeeper.yaml file in the current working directory. It will create a new namespace called gatekeeper-system along with other resources, as shown in the output below.
[nasauser@linuxnasa deploy]$ kubectl apply -f gatekeeper.yaml
namespace/gatekeeper-system created
resourcequota/gatekeeper-critical-pods created
customresourcedefinition.apiextensions.k8s.io/assign.mutations.gatekeeper.sh created
customresourcedefinition.apiextensions.k8s.io/assignmetadata.mutations.gatekeeper.sh created
customresourcedefinition.apiextensions.k8s.io/configs.config.gatekeeper.sh created
customresourcedefinition.apiextensions.k8s.io/constraintpodstatuses.status.gatekeeper.sh created
customresourcedefinition.apiextensions.k8s.io/constrainttemplatepodstatuses.status.gatekeeper.sh created
customresourcedefinition.apiextensions.k8s.io/constrainttemplates.templates.gatekeeper.sh created
customresourcedefinition.apiextensions.k8s.io/expansiontemplate.expansion.gatekeeper.sh created
customresourcedefinition.apiextensions.k8s.io/modifyset.mutations.gatekeeper.sh created
customresourcedefinition.apiextensions.k8s.io/mutatorpodstatuses.status.gatekeeper.sh created
customresourcedefinition.apiextensions.k8s.io/providers.externaldata.gatekeeper.sh created
serviceaccount/gatekeeper-admin created
role.rbac.authorization.k8s.io/gatekeeper-manager-role created
clusterrole.rbac.authorization.k8s.io/gatekeeper-manager-role created
rolebinding.rbac.authorization.k8s.io/gatekeeper-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/gatekeeper-manager-rolebinding created
secret/gatekeeper-webhook-server-cert created
service/gatekeeper-webhook-service created
deployment.apps/gatekeeper-audit created
deployment.apps/gatekeeper-controller-manager created
poddisruptionbudget.policy/gatekeeper-controller-manager created
mutatingwebhookconfiguration.admissionregistration.k8s.io/gatekeeper-mutating-webhook-configuration created
validatingwebhookconfiguration.admissionregistration.k8s.io/gatekeeper-validating-webhook-configuration created
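NOTE: Instead of downloading the release tarball, the same manifest can also be applied directly from the release tag. The URL below is assumed from the repository layout used above; adjust it if the file has moved.

[nasauser@linuxnasa~]$ kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/v3.11.0/deploy/gatekeeper.yaml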
Verify the deployed Gatekeeper pods using the below command.
[nasauser@linuxnasa deploy]$ kubectl get pod -n gatekeeper-system
NAME                                            READY   STATUS    RESTARTS        AGE
gatekeeper-audit-c455cbf-8fwt6                  1/1     Running   1 (3m43s ago)   3m54s
gatekeeper-controller-manager-b9fb9c948-76mgs   1/1     Running   0               3m54s
gatekeeper-controller-manager-b9fb9c948-8gqb9   1/1     Running   0               3m54s
gatekeeper-controller-manager-b9fb9c948-dbn62   1/1     Running   0               3m54s
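Besides the pods, you can also confirm that the validating webhook which intercepts API requests was registered; the webhook name below comes from the kubectl apply output above.

[nasauser@linuxnasa deploy]$ kubectl get validatingwebhookconfiguration gatekeeper-validating-webhook-configuration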
Step-2: Create constraint template
In this step, create a ConstraintTemplate file called constraint-template.yaml that defines the schema and the Rego code which will be executed when the constraints are evaluated. Use the below commands.
First, switch back to the home directory.
[nasauser@linuxnasa deploy]$ cd -
/home/nasauser
[nasauser@linuxnasa~]$
Create the below constraint template, which defines the schema and a Rego policy that ensures privileged containers are not allowed in Pods.
[nasauser@linuxnasa~]$ vi constraint-template.yaml
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8spspprivilegedcontainer
spec:
  crd:
    spec:
      names:
        kind: K8sPSPPrivilegedContainer
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8spspprivileged

        violation[{"msg": msg, "details": {}}] {
          c := input_containers[_]
          c.securityContext.privileged
          msg := sprintf("Privileged Pods not allowed: %v, securityContext: %v", [c.name, c.securityContext])
        }

        input_containers[c] {
          c := input.review.object.spec.containers[_]
        }

        input_containers[c] {
          c := input.review.object.spec.initContainers[_]
        }
Deploy the constraint template using the below command.
[nasauser@linuxnasa~]$ kubectl apply -f constraint-template.yaml
constrainttemplate.templates.gatekeeper.sh/k8spspprivilegedcontainer created
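Optionally, confirm that the template is registered; it is stored as a ConstraintTemplate resource backed by the CRD installed in Step-1.

[nasauser@linuxnasa~]$ kubectl get constrainttemplates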
Step-3: Create a Constraint
In this step, create a constraint which will be evaluated against the constraint template created in the previous step. The constraint defines the Kubernetes resources to which the constraint template applies, in this case Pods.
[nasauser@linuxnasa~]$ vi constraint.yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPPrivilegedContainer
metadata:
  name: privileged-pod-constraint
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
Deploy the constraint using the below command.
[nasauser@linuxnasa~]$ kubectl apply -f constraint.yaml
k8spspprivilegedcontainer.constraints.gatekeeper.sh/privileged-pod-constraint created
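Optionally, verify the constraint object. Gatekeeper generates a new resource type from the template, so the resource name below is simply the lowercased kind K8sPSPPrivilegedContainer.

[nasauser@linuxnasa~]$ kubectl get k8spspprivilegedcontainer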
Step-4: Create privileged Pod
In this step, we will create a privileged pod which violates the policy created in Step-2.
[nasauser@linuxnasa~]$ vi privileg-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-privileged
  labels:
    app: nginx-privileged
spec:
  containers:
  - name: nginx
    image: nginx
    securityContext:
      privileged: true
First, create a new namespace using the below command.
[nasauser@linuxnasa~]$ kubectl create ns test-constraint
namespace/test-constraint created
Deploy the pod in the test-constraint namespace using the below command.
[nasauser@linuxnasa~]$ kubectl apply -f privileg-pod.yaml -n test-constraint
Error from server (Forbidden): error when creating "privileg-pod.yaml": admission webhook "validation.gatekeeper.sh" denied the request: [privileged-pod-container] Privileged Pods not allowed: nginx, securityContext: {"privileged": true}
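You can also exercise the policy without actually attempting to create the pod: assuming the Gatekeeper webhook allows dry-run requests (the default in recent releases), a server-side dry run should go through admission and return the same denial.

[nasauser@linuxnasa~]$ kubectl apply -f privileg-pod.yaml -n test-constraint --dry-run=server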
Step-5: Check logs
In this step, you can check the audit controller logs using the below command.
[nasauser@linuxnasa~]$ kubectl logs -l control-plane=audit-controller -n gatekeeper-system
{"level":"info","ts":1680425183.4519703,"logger":"controller","msg":"audit results for constraint","process":"audit","audit_id":"2023-04-02T08:46:21Z","event_type":"constraint_audited","constraint_group":"constraints.gatekeeper.sh","constraint_api_version":"v1beta1","constraint_kind":"K8sPSPPrivilegedContainer","constraint_name":"psp-privileged-container","constraint_namespace":"","constraint_action":"deny","constraint_status":"enforced","constraint_violations":"4"}
{"level":"info","ts":1680425183.452024,"logger":"controller","msg":"closing the previous audit reporting thread","process":"audit","audit_id":"2023-04-02T08:46:21Z"}
{"level":"info","ts":1680425183.4520361,"logger":"controller","msg":"auditing is complete","process":"audit","audit_id":"2023-04-02T08:46:21Z","event_type":"audit_finished"}
{"level":"info","ts":1680425183.4520588,"logger":"controller","msg":"constraint","process":"audit","audit_id":"2023-04-02T08:46:21Z","resource kind":"K8sPSPPrivilegedContainer"}
{"level":"info","ts":1680425183.4589453,"logger":"controller","msg":"constraint","process":"audit","audit_id":"2023-04-02T08:46:21Z","count of constraints":1}
{"level":"info","ts":1680425183.4591208,"logger":"controller","msg":"starting update constraints loop","process":"audit","audit_id":"2023-04-02T08:46:21Z","constraints to update":"map[{constraints.gatekeeper.sh K8sPSPPrivilegedContainer v1beta1 privileged-pod-container}:{}]"}
{"level":"info","ts":1680425183.462803,"logger":"controller","msg":"updating constraint status","process":"audit","audit_id":"2023-04-02T08:46:21Z","constraintName":"privileged-pod-container"}
{"level":"info","ts":1680425183.462878,"logger":"controller","msg":"constraint status update","process":"audit","audit_id":"2023-04-02T08:46:21Z","object":{"apiVersion":"constraints.gatekeeper.sh/v1beta1","kind":"K8sPSPPrivilegedContainer","name":"privileged-pod-container"}}
{"level":"info","ts":1680425183.4708369,"logger":"controller","msg":"handling constraint update","process":"constraint_controller","instance":{"apiVersion":"constraints.gatekeeper.sh/v1beta1","kind":"K8sPSPPrivilegedContainer","name":"privileged-pod-container"}}
{"level":"info","ts":1680425183.4713397,"logger":"controller","msg":"updated constraint status violations","process":"audit","audit_id":"2023-04-02T08:46:21Z","constraintName":"privileged-pod-container","count":4}
You can check the controller-manager logs using the below command.
[nasauser@linuxnasa~]$ kubectl logs -l control-plane=controller-manager -n gatekeeper-system
{"level":"info","ts":1680424823.5010352,"logger":"controller","msg":"handling constraint update","process":"constraint_controller","instance":{"apiVersion":"constraints.gatekeeper.sh/v1beta1","kind":"K8sPSPPrivilegedContainer","name":"privileged-pod-container"}}
{"level":"info","ts":1680424883.5227737,"logger":"controller","msg":"handling constraint update","process":"constraint_controller","instance":{"apiVersion":"constraints.gatekeeper.sh/v1beta1","kind":"K8sPSPPrivilegedContainer","name":"privileged-pod-container"}}
{"level":"info","ts":1680425183.470918,"logger":"controller","msg":"handling constraint update","process":"constraint_controller","instance":{"apiVersion":"constraints.gatekeeper.sh/v1beta1","kind":"K8sPSPPrivilegedContainer","name":"privileged-pod-container"}}
{"level":"info","ts":1680425243.5977407,"logger":"controller","msg":"handling constraint update","process":"constraint_controller","instance":{"apiVersion":"constraints.gatekeeper.sh/v1beta1","kind":"K8sPSPPrivilegedContainer","name":"privileged-pod-container"}}
{"level":"info","ts":1680425303.4950802,"logger":"controller","msg":"handling constraint update","process":"constraint_controller","instance":{"apiVersion":"constraints.gatekeeper.sh/v1beta1","kind":"K8sPSPPrivilegedContainer","name":"privileged-pod-container"}}
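The audit results are also written back to the constraint itself: its status section carries the violation count and details reported in the logs above. You can inspect the constraint created in Step-3 with the command below.

[nasauser@linuxnasa~]$ kubectl get k8spspprivilegedcontainer privileged-pod-constraint -o yaml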
Step-6: Deploy unprivileged pod
In this step, we will change the pod manifest to make it unprivileged, so that it meets the policy expectation and does not violate the policy.
Create a new manifest file and set the privileged parameter to false, as shown below.
[nasauser@linuxnasa~]$ vi unprivileged-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-unprivileged
  labels:
    app: nginx-unprivileged
spec:
  containers:
  - name: nginx
    image: nginx
    securityContext:
      privileged: false
Deploy the pod using the below command.
[nasauser@linuxnasa~]$ kubectl apply -f unprivileged-pod.yaml -n test-constraint
pod/nginx-unprivileged created
Step-7: Verify pod
In this step, verify that the pod has been created using the below command.
[nasauser@linuxnasa~]$ kubectl get pod -n test-constraint
NAME                 READY   STATUS    RESTARTS   AGE
nginx-unprivileged   1/1     Running   0          46s
Conclusion
In this tutorial, we learnt what OPA is and how we can use OPA as an admission controller in EKS. You can create your own constraint templates and test them by defining custom constraints. There are many other use cases of OPA which can be explored further.