KubeArmor and Cilium on Minikube Cluster

Overview

This user journey guides you through installing KubeArmor and Cilium on a Minikube cluster and verifying their compatibility by applying policies to Kubernetes workloads.

Step 1: Clone the KubeArmor Repo

git clone https://github.com/kubearmor/KubeArmor.git

Step 2: Install VirtualBox

cd KubeArmor/contribution/minikube
./install_virtualbox.sh

Note: Once VirtualBox is installed, reboot the system.

sudo reboot
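
After the reboot, you can confirm that VirtualBox is available with a quick version check (this assumes the VBoxManage binary is on your PATH):

VBoxManage --version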

Step 3: Install Minikube

cd KubeArmor/contribution/minikube
./install_minikube.sh

./start_minikube.sh
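
Once the script finishes, a quick sanity check confirms that the Minikube node is up and that kubectl can reach the cluster (standard minikube and kubectl commands, suggested here for convenience):

minikube status
kubectl get nodes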

Step 4: Karmor Install

Install Karmor CLI:

curl -sfL https://raw.githubusercontent.com/kubearmor/kubearmor-client/main/install.sh | sudo sh -s -- -b /usr/local/bin

karmor install

Verify the karmor installation:

karmor version

kubectl get pods -n kube-system | grep kubearmor
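
As an extra check, recent karmor releases include a probe command that reports whether the cluster and its nodes support KubeArmor enforcement:

karmor probe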

Step 5: KubeArmor Policy

1. Create a sample Ubuntu deployment

kubectl apply -f ubuntu.yaml
kubectl get pods --show-labels
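
The contents of ubuntu.yaml are not shown here; if the file is not available in your working directory, the following is a minimal equivalent (a sketch, assuming a single-replica Deployment named ubuntu-deployment whose pods carry the app: ubuntu label expected by the policy in the next step):

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ubuntu-deployment
  labels:
    app: ubuntu
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ubuntu
  template:
    metadata:
      labels:
        app: ubuntu
    spec:
      containers:
      - name: ubuntu
        image: ubuntu:latest
        command: ["sleep", "infinity"]
EOF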

2. Create the following policy file

*use the label of your deployment

cat ksp-block-stig-rhel-v-230335.yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: ksp-block-stig-rhel-v-230335
  namespace: default # Change your namespace
spec:
  tags: ["STIG","RHEL"]
  message: "Alert! Access to /home/test.txt is blocked"
  selector:
    matchLabels:
      app: ubuntu # Change your matchLabels
  file:
    severity: 5
    matchPaths:
    - path: /home/test.txt
    action: Block

3. Apply the policy

kubectl apply -f ksp-block-stig-rhel-v-230335.yaml
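
You can confirm that the policy object was created in the target namespace:

kubectl get kubearmorpolicy -n default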

4. Violate the policy

*use the pod name from kubectl get pods

kubectl exec -it ubuntu-deployment-746964c6c6-j67jv -- bash
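
Inside the container shell, try to access the protected path; with the Block policy in place, the operation should be denied with a permission error (the exact message depends on the enforcer in use):

cat /home/test.txt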

Step 6: Getting Alerts/Telemetry from KubeArmor

1. Port-forward the KubeArmor service to monitor the logs (run this in a separate terminal)

kubectl port-forward -n kube-system svc/kubearmor 32767:32767

2. Verify the policy violation logs

karmor logs

Step 7: Cilium Installation

Install Cilium CLI:

curl -L --remote-name-all https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-amd64.tar.gz{,.sha256sum}

sha256sum --check cilium-linux-amd64.tar.gz.sha256sum

sudo tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin
rm cilium-linux-amd64.tar.gz{,.sha256sum}
cilium install

kubectl get pods -n kube-system | grep cilium

cilium status --wait

Enable Cilium Hubble:

cilium hubble enable

Verify Cilium Hubble:

kubectl get pods -n kube-system | grep hubble

Install Hubble CLI Client:

export HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
curl -L --remote-name-all https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-linux-amd64.tar.gz{,.sha256sum}
sha256sum --check hubble-linux-amd64.tar.gz.sha256sum

sudo tar xzvfC hubble-linux-amd64.tar.gz /usr/local/bin
rm hubble-linux-amd64.tar.gz{,.sha256sum}
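
A quick version check confirms that the Hubble CLI is installed (assuming /usr/local/bin is on your PATH):

hubble version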

Step 8: Getting Alerts/Telemetry from Cilium

Enable port-forwarding for Cilium Hubble relay:

cilium hubble port-forward&
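
With the port-forward running in the background, hubble status should report that Hubble Relay is reachable:

hubble status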

Step 9: Cilium Policy

1. Create the tiefighter and deathstar deployments

cat tiefighter-deathstar-app.yaml
apiVersion: v1
kind: Service
metadata:
 name: deathstar
 labels:
   app.kubernetes.io/name: deathstar
spec:
 type: ClusterIP
 ports:
 - port: 80
 selector:
   org: empire
   class: deathstar
---
apiVersion: apps/v1
kind: Deployment
metadata:
 name: deathstar
 labels:
   app.kubernetes.io/name: deathstar
spec:
 replicas: 2
 selector:
   matchLabels:
     org: empire
     class: deathstar
 template:
   metadata:
     labels:
       org: empire
       class: deathstar
       app.kubernetes.io/name: deathstar
   spec:
     containers:
     - name: deathstar
       image: docker.io/cilium/starwars
---
apiVersion: v1
kind: Pod
metadata:
 name: tiefighter
 labels:
   org: empire
   class: tiefighter
   app.kubernetes.io/name: tiefighter
spec:
 containers:
 - name: spaceship
   image: docker.io/tgraf/netperf
---
apiVersion: v1
kind: Pod
metadata:
 name: xwing
 labels:
   app.kubernetes.io/name: xwing
   org: alliance
   class: xwing
spec:
 containers:
 - name: spaceship
   image: docker.io/tgraf/netperf

kubectl apply -f tiefighter-deathstar-app.yaml

kubectl get pods --show-labels
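
Before exercising the policy, make sure the demo pods are Running; kubectl wait is a convenient way to block until they are ready (the timeout value here is arbitrary):

kubectl wait --for=condition=Ready pod -l class=deathstar --timeout=120s
kubectl wait --for=condition=Ready pod tiefighter xwing --timeout=120s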

2. Apply the following policy

cat sample-cilium-egress-policy.yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
 name: "rule1-egress"
spec:
 description: "L7 policy to restrict access to specific HTTP call"
 endpointSelector:
   matchLabels:
     class: tiefighter
 egress:
 - toPorts:
   - ports:
     - port: "80"
       protocol: TCP
     rules:
       http:
       - method: "POST"
         path: "/v1/request-landing"

kubectl apply -f sample-cilium-egress-policy.yaml
kubectl get CiliumNetworkPolicy

3. Violate the policy

kubectl get svc

*replace 10.106.29.11 with the ClusterIP of the deathstar service reported by kubectl get svc

kubectl exec -n default tiefighter -- curl -s -XPOST 10.106.29.11/v1/request-landing
kubectl exec -n default tiefighter -- curl -s -XPOST 10.106.29.11/v1/test

The first request matches the allowed POST /v1/request-landing rule and succeeds; the call to /v1/test is denied by the L7 egress policy.
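
Alternatively, the same requests can target the service by its cluster DNS name, which avoids looking up the ClusterIP (assuming the default cluster.local domain):

kubectl exec -n default tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
kubectl exec -n default tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/test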

4. Verify the Cilium violation logs

hubble observe --pod tiefighter --protocol http 
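
To focus on denied traffic only, hubble observe can also filter on the flow verdict:

hubble observe --pod tiefighter --verdict DROPPED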
